10 Considerations for a Cloud Procurement
March 2017

This version has been archived. For the most recent version of this paper, see: https://docs.aws.amazon.com/whitepapers/latest/considerations-for-cloud-procurement/considerations-for-cloud-procurement.html

© 2017, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
• Purpose
• Ten Procurement Considerations
  1. Understand Why Cloud Computing is Different
  2. Plan Early To Extract the Full Benefit of the Cloud
  3. Avoid Overly Prescriptive Requirements
  4. Separate Cloud Infrastructure (Unmanaged Services) from Managed Services
  5. Incorporate a Utility Pricing Model
  6. Leverage Third-Party Accreditations for Security, Privacy, and Auditing
  7. Understand That Security is a Shared Responsibility
  8. Design and Implement Cloud Data Governance
  9. Specify Commercial Item Terms
  10. Define Cloud Evaluation Criteria
• Conclusion

Purpose
Amazon Web Services (AWS) offers scalable,
cost-efficient cloud services that public sector customers can use to meet mandates, reduce costs, drive efficiencies, and accelerate innovation. The procurement of an infrastructure as a service (IaaS) cloud is unlike traditional technology purchasing. Traditional public sector procurement and contracting approaches that are designed to purchase products such as hardware and related software can be inconsistent with cloud services (like IaaS). A failure to modernize contracting and procurement approaches can reduce the pool of competitors and inhibit a customer's ability to adopt and leverage cloud technology.

Ten Procurement Considerations
Cloud procurement presents an opportunity to reevaluate existing procurement strategies so you can create a flexible acquisition process that enables your public sector organization to extract the full benefits of the cloud. The following procurement considerations are key components that can form the basis of a broader public sector cloud procurement strategy.

1. Understand Why Cloud Computing is Different
Hyperscale Cloud Service Providers (CSPs) offer commercial cloud services at massive scale and in the same way to all customers. Customers tap into standardized commercial services on demand. They pay only for what they use. The standardized commercial delivery model of cloud computing is fundamentally different from the traditional model for on-premises IT purchases (which has a high degree of customization and might not be a commercial item). Understanding this difference can help you structure a more effective procurement model. IaaS cloud services eliminate the customer's need to own physical assets. There is an ongoing shift away from physical asset ownership toward on-demand, utility-style infrastructure services. Public sector entities should understand how standardized, utility-style services are budgeted for, procured, and used, and then build a cloud procurement strategy that is intentionally different from traditional IT, one designed to harness the benefits of the cloud delivery model.

2. Plan Early To Extract the Full Benefit of the Cloud
A key element of a successful cloud strategy is the involvement of all key stakeholders (procurement, legal, budget/finance, security, IT, and business leadership) at an early stage. This involvement ensures that the stakeholders can understand how cloud adoption will influence existing practices. It provides an opportunity to reset expectations for IT budgeting, risk management, security controls, and compliance. Promoting a culture of innovation and educating staff on the benefits of the cloud and how to use cloud technology helps those with institutional knowledge understand the cloud. It also helps to accelerate buy-in during the cloud adoption journey.

3. Avoid Overly Prescriptive Requirements
Public sector stakeholders involved in cloud procurements should ask the right questions in order to solicit the best solutions. In a cloud model, physical assets are not purchased, so traditional data center procurement requirements are no longer relevant. Continuing to recycle data center questions will inevitably lead to data center solutions, which might result in CSPs being unable to bid or, worse, lead to poorly designed contracts that hinder public sector customers from leveraging the capabilities and benefits of the cloud. Successful cloud procurement strategies focus on application-level, performance-based requirements that prioritize workloads and outcomes, rather than dictating the underlying methods, infrastructure, or hardware used to achieve performance requirements. Customers can leverage a CSP's established best practices for data center operations, because the CSP has the depth of expertise and experience in offering secure, hyperscale IaaS cloud services. It is not necessary to dictate customized specifications for equipment, operations, and procedures (e.g., racks, server types, and distances between data
centers). By leveraging commercial cloud industry standards and best practices (including industry-recognized accreditations and certifications), customers avoid placing unnecessary restrictions on the services they can use and ensure access to innovative and cost-effective cloud solutions.

4. Separate Cloud Infrastructure (Unmanaged Services) from Managed Services
There is a difference between procuring cloud infrastructure (IaaS) and procuring labor to utilize cloud infrastructure, or managed services such as Software as a Service (SaaS). Successful cloud procurements separate cloud infrastructure from "hands on keyboard" services and labor or other managed services purchases. Cloud infrastructure and services such as labor for planning, developing, executing, and maintaining cloud migrations and workloads can be provided by CSP partners (or other third parties) as one comprehensive solution. However, cloud infrastructure should be regarded as a separate "service" with distinct roles and responsibilities, service level agreements (SLAs), and terms and conditions.

5. Incorporate a Utility Pricing Model
To realize the benefits of cloud computing, you need to think beyond the commonly accepted approach of fixed-price contracting. To contract for the cloud in a manner that accounts for fluctuating demand, you need a contract that lets you pay for services as they are consumed. CSP pricing should be:
• Offered using a pay-as-you-go utility model, where at the end of each month customers simply pay for their usage
• Allowed the flexibility to fluctuate based on market pricing, so that customers can take advantage of the dynamic and competitive nature of cloud pricing
Allowing CSPs to offer pay-as-you-go pricing or flexible pay-per-use pricing gives customers the opportunity to evaluate what the cost of their usage will be, instead of having to guess their future needs and over-procure. CSPs should provide publicly
available, up-to-date pricing and tools that allow customers to evaluate their pricing, such as the AWS Simple Monthly Calculator: http://aws.amazon.com/calculator. Additionally, CSPs should provide customers with the tools to generate detailed and customizable billing reports to meet business and compliance needs. CSPs should also provide features that enable customers to analyze cloud usage and spending, so that customers can build in alerts to notify them when they approach their usage thresholds and projected/budgeted spend. Such alerts enable organizations to determine whether to reduce usage to avoid overages or to prepare additional funding to cover costs that exceed their projected budget.

6. Leverage Third-Party Accreditations for Security, Privacy, and Auditing
Leveraging industry best practices regarding security, privacy, and auditing provides assurance that effective physical and logical security controls are in place. This prevents overly burdensome processes and duplicative approval workflows that are often unjustified by real risk and compliance needs. There are many security frameworks, best practices, audit standards, and standardized controls that cloud solicitations can cite, such as the following:
• Federal Risk and Authorization Management Program (FedRAMP)
• Service Organization Controls (SOC) 1/American Institute of Certified Public Accountants (AICPA) AT 801 (formerly Statement on Standards for Attestation Engagements [SSAE] No. 16)/International Standard on Assurance Engagements (ISAE) 3402 (formerly Statement on Auditing Standards [SAS] No. 70), SOC 2, SOC 3
• Payment Card Industry Data Security Standard (PCI DSS)
• International Organization for Standardization (ISO) 27001, ISO 27017, ISO 27018, ISO 9001
• Department of Defense (DoD) Security Requirements Guide (SRG)
• Federal Information Security Management Act (FISMA)
• International Traffic in Arms Regulations (ITAR)
• Family Educational
Rights and Privacy Act (FERPA)
• Information Security Registered Assessors Program (IRAP) (Australia)
• IT-Grundschutz (Germany)
• Federal Information Processing Standard (FIPS) 140-2

7. Understand That Security is a Shared Responsibility
As cloud computing customers are building systems on a cloud infrastructure, security and compliance responsibilities are shared between service providers and cloud consumers. In an IaaS model, customers control both how they architect and secure their applications and the data they put on the infrastructure. CSPs are responsible for providing services through a highly secure and controlled infrastructure, and for providing a wide array of additional security features. The respective responsibilities of the CSP and the customer depend on the cloud deployment model that is used, whether IaaS, SaaS, or Platform as a Service (PaaS). Customers should clearly understand their security responsibilities in each cloud model.

8. Design and Implement Cloud Data Governance
Organizations should retain full control and ownership over their data and have the ability to choose the geographic locations in which to store their data, with CSP identity and access controls available to restrict access to customer infrastructure and data. Customers should clearly understand their responsibilities regarding how they store, manage, protect, and encrypt their data. A major benefit of cloud computing as compared to traditional IT infrastructure is that customers have the flexibility to avoid traditional vendor lock-in. Cloud customers are not buying physical assets, and CSPs provide the ability to move up and down the IT stack as needed, with greater portability and interoperability than the old IT paradigm. Public sector entities should require that CSPs: 1) provide access to cloud portability tools and services that enable customers to move data on and off their cloud infrastructure as needed, and
2) have no required minimum commitments or required long-term contracts.

9. Specify Commercial Item Terms
Cloud computing should be purchased as a commercial item, and organizations should consider which terms and conditions are appropriate (and not appropriate) in this context. A commercial item is recognized as an item that is of a type that has been sold, leased, licensed, or otherwise offered for sale to the general public, and that generally performs the same for all users/customers, both commercial and government. IaaS CSP terms and conditions are designed to reflect how a cloud services model functions (i.e., physical assets are not being purchased, and CSPs operate at massive scale to offer standardized commercial services). It is critical that a CSP's terms and conditions are incorporated and utilized to the fullest extent.

10. Define Cloud Evaluation Criteria
Cloud evaluation criteria should focus on system performance requirements. Select the appropriate CSP from an established resource pool to take advantage of the cloud's elasticity, cost efficiencies, and rapid scalability. This approach ensures that you get the best cloud services to meet your needs, the best value in these services, and the ability to take advantage of market-driven innovation. The National Institute of Standards and Technology (NIST) definitions of cloud benefits are an excellent starting point to use for determining cloud evaluation criteria: http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-146.pdf

Conclusion
Thousands of public sector customers use AWS to quickly launch services using an efficient, cloud-centric procurement process. Keeping these ten steps in mind will help organizations deliver even greater citizen-, student-, and mission-focused outcomes.


5 Ways the Cloud Can Drive Economic Development
August 2018

This paper has
been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2018, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents Amazon Web Services' ("AWS") current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
• Introduction
• Sharing More Data and Information
• Increasing Productivity
• Preparing Citizens for the Workforce & Building Skills
• Driving Local Development
• Allocating Resources More Effectively
• Key Takeaway
• Contributors

Abstract
Government agencies often look to promote new technology for cost savings and efficiency, but it does not stop there. The second- and third-tier effects of technology can be long lasting for citizens, businesses, and economies. When public institutions adopt the cloud, they experience an internal transformation. Inside an organization, cloud usage drives greater accessibility of data and information sharing, increases worker productivity, and improves resource allocation. The external benefit of the cloud is recognized through a government's ability to put reclaimed time and resources toward serving citizens. This includes provisioning public services such as occupational skills training, quicker and more
effective service delivery, a pathway to a more productive workforce, and ultimately a boost to local development. This whitepaper examines the enterprise-level benefits of the cloud as well as the residual impact on economic development. The US Economic Development Administration defines economic development as "[creating] the conditions for economic growth and improved quality of life by expanding the capacity of individuals, firms, and communities to maximize the use of their talents and skills to support innovation, lower transaction costs, and responsibly produce and trade valuable goods and services." We explore this concept through the lens of the cloud.

Introduction
Technology empowers governments to improve how and when they reach citizens. It improves the quality and accessibility of public services, ultimately creating a more productive environment where citizens can thrive. Leveraging the cloud is one way governments can accelerate this shift, with benefits occurring first inside the institution.

Sharing More Data and Information
One enterprise-level benefit of the cloud is its emphasis on data and information sharing. The cloud's data sharing tools encourage staff to store information in a central location, adding visibility inside the workplace. A more collaborative environment can lead to increased communication and idea sharing among agencies and teams that might otherwise operate in silos. This is true for federal, regional, and local governments, as well as for businesses and entrepreneurs. The result is near real-time access to critical information across an array of industries. Examples include data on job creation by location and level, retention statistics, and payroll by industry classification (North American Industry Classification System codes in the US), in addition to information on health services, trade and commerce, weather patterns, and more.

Data and IoT solutions
can help address development challenges

Nexleaf Analytics is one organization harnessing the power of data to tackle global development issues. From climate change to public health and food insecurity, its mission is to preserve human life and protect the planet through sensor technologies and data analytics, and by advocating for data-driven solutions. The organization developed the Internet of Things (IoT) platforms ColdTrace and StoveTrace to help governments ensure the potency of life-saving vaccines at the 'last mile' and to facilitate the adoption of cleaner cookstoves, respectively. "Data is at the core of creating sustainable change. By getting meaningful, real-time data flowing from the bottom up, people have the tools and insights they need to take responsive actions," according to Martin Lukac, Nexleaf's CTO and co-founder. Nexleaf's solution, powered by Amazon Web Services, Inc. (AWS), aggregates crucial data that can lead to responsive interventions. By collaborating with governments and NGOs in 10 countries across Asia and Africa, the organization ensures its solutions adhere to local country laws and preferences, and identifies the right tools and analytics to benefit constituents. Engaging people on the ground empowers a data-driven approach to improving the efficiency of their systems, advocating for better resources, and tapping into potential avenues for economic and social development.

Data drives community collaboration and innovation
The cloud encourages partnerships and collaboration within communities. It can lead local governments to facilitate relationships with small and medium-sized enterprises (SMEs), which, according to an Organisation for Economic Co-operation and Development (OECD) report, "account for over 95% of firms and 60%-70% of employment and generate a large share of new jobs in OECD economies." In Boston, Massachusetts, the Mayor's Office of New Urban Mechanics
took an innovative approach to problem-solving through crowdsourcing. Teaming with a technology firm, the government sought creative ideas from across Boston to help improve Street Bump, its app for collecting roadside maintenance data and planning long-term investments for the city. The use of big data and community engagement helped the agency find a creative solution to a public issue. Street Bump's website now reports that tens of thousands of bumps have been detected through the app. The public-private partnership brought automation and speed to an otherwise manual city improvement process, and also gave local startups a platform to voice and implement innovative ideas that otherwise may not have been discovered. Newport, Wales is another example of a city optimizing public data, in this case to assess environmental conditions. It began using IoT sensors to collect data such as pollution levels, augmenting earlier processes of collecting air samples in glass vials across 85 different locations. Together with Pinacl Solutions and Davra Networks, Newport is working toward a solution for improving air quality, flood control, and waste management, gleaning timely insights from sensor data via solutions hosted on AWS. The effort aimed to boost citizens' safety and quality of life as part of a vision to improve Newport's economy. The Humanitarian OpenStreetMap Team (HOT) is yet another global organization applying the principles of open source and open data sharing to humanitarian response and economic development. Known for its ability to rapidly coordinate volunteers to map sites impacted by disaster, HOT relies on a collaboration with DigitalGlobe, Inc. for critical satellite imagery data, accessible through its Open Data Program and imagery license. If not for this partnership, HOT would not exist as it is today, according to HOT's Director of Technology, Cristiano Giovando. Additionally, through the AWS
Public Datasets Program, anyone can analyze data and build complementary services using a broad range of compute and data analytics tools. The cloud combines fragmented data from a variety of sources, improving users' access and enabling more time for analysis. This can facilitate innovation and the possibility of new discoveries.

Increasing Productivity
Consistent reliability and a lack of physical infrastructure can drive productivity gains inside and outside of a cloud-using organization. Workforce productivity can improve up to 50% following a large-scale AWS migration, according to AWS migration experts. In addition, AWS's more than 90 solutions offer organizations faster access to services they would otherwise have to build and maintain themselves. Government organizations around the world, including a road and traffic agency in Belgium and Italy's public finance regulator, have realized increased productivity from the cloud, both for the benefit of their operations and their citizens.

Productivity gains help institutions better deliver on their mission
The Agentschap Wegen & Verkeer (AWV) deploys new maintenance capabilities up to eight times faster thanks to the automation of services and databases through the AWS Cloud, according to Bert Weyne, planning & coordination lead at AWV. The agency manages 6,970 kilometers of roads and 7,668 kilometers of cycle lanes in Belgium, with its team of 250 road inspectors having a direct impact on citizen safety. In the event of a pothole, for example, the team uses an app to log information about the issue and prioritize repairs. "When we were running on in-house servers, our road inspectors complained about the app's reliability. At times they were unable to access the app and would have to use paper and pen instead. It was embarrassing," says Weyne. In addition to better performance, Weyne's team has used the cloud to reduce costs, speed
development, and cut infrastructure management time. He adds, "… by using managed services we've slashed system admin time by 67 percent, which has improved our agility. We can now develop and test features three times faster." The cloud has also enabled Italy's auditing and oversight authority for public accounts and budgets to operate more effectively as a remote team. Prior to working with AWS, Corte dei conti (Cdc) felt constrained by physical IT infrastructure. "We wanted to change the way our 3,000-plus employees worked, enabling them to access applications from anywhere, on any device. But we had to ensure that this flexibility for staff didn't jeopardize the safety of data," said Cdc's IT officer, Leandro Gelasi. This was attainable through a hybrid architecture migration approach and through collaboration with AWS Advanced Consulting Partner XPeppers Srl. "As a result, [employees are] much more productive. Decisions get made faster, and the whole system works better. It's a brilliant result for our entire organization," said Gelasi. As Gelasi and his team prove their ability to fulfill duties securely from any location, it may lend an opportunity to employ more workers in small towns and rural locations.

Preparing Citizens for the Workforce & Building Skills
Skill-development and education programs offer meaningful contributions to economic development. In line with the United Nations' 2030 Sustainable Development Goals, which include training and skill building for youth, cloud technology enables the scaling of educational content and innovative teaching formats to reach learners wherever they are. Quality, inclusive, and relevant education is a key factor in breaking cycles of poverty and reducing gender inequalities worldwide. By expanding learning beyond the confines of a physical classroom, technology helps increase access to courses and levels the playing field for learners of
diverse geographical and socio-economic backgrounds. For schools and educators, the cloud offers not only cost savings and agility, but also the opportunity to develop breakthroughs in educational models and student engagement.

Reaching diverse job seekers wherever they are
Digital Divide Data (DDD) is a nonprofit social enterprise that uses AWS to support regional workforce development. Its goal is to create sustainable tech jobs for youth through Impact Sourcing, a model that provides economically marginalized youth with training and jobs in next-generation technologies such as cloud computing, machine learning, cybersecurity, and data analytics. In collaboration with Intel, AWS worked with DDD to launch the first-of-its-kind AWS Cloud Academy in Kenya to train, certify, and employ underserved youth in cloud computing as a stepping stone to more advanced IT careers. The program's first cohort included 30 high school graduates from Kibera, Nairobi, with the second cohort comprised of 70% women. The social enterprise plans to train five cohorts annually, graduating 150-200 cloud engineers per year, all of whom have the option to work for DDD as cloud computing engineers or to pursue cloud opportunities in the growing local tech sector. In terms of workforce benefits, DDD and AWS graduates earn five times more than their peers. While informal workers in Kenya earn an average of $116 USD per month, AWS graduates earn an average of $575 USD per month. The combination of training and work experience propels DDD graduates to earn higher income, gain economic security, and ultimately create better futures for themselves and their families. In the US, the Louisiana Department of Public Safety and Corrections manages nine state correctional facilities that house 19,000 adult prisoners. The state-run agency offers educational and vocational programs with the goal of helping inmates earn degrees, gain job
training, secure employment, and avoid re-incarceration. The agency sought to implement a new IT environment that would support a better and more reliable online learning solution. It also needed effective system security to prevent inmates from accessing the internet, amid concerns about victims' safety and other criminal activity. After opting for Amazon WorkSpaces, a managed, secure desktop computing service on AWS, the agency, along with partner ATLO Software, succeeded in launching educational training labs at four Louisiana correctional facilities. With the addition of an Amazon Virtual Private Cloud, they were operating on a secure network. Thanks to onsite labs, inmates now have better access to vocational training, have the opportunity to earn college credits or degrees, and can potentially participate in the labor market.

Driving Local Development

Retaining Local Talent
Retaining local talent can be a challenge for cities. Moreover, a concentration of intellectual capital and innovative businesses and startups can be a strong indicator of economic development. Cloud technology can help give new businesses a boost in their forecasting, demand generation, and innovation when bringing their products or services to market. AWS accelerates this process through AWS Activate, a program designed to provide startups with resources and credits to get started with the cloud; through access to tools like Amazon Lightsail, which provides technology like virtual private servers to enterprises of all sizes for the cost of a cup of coffee; and by encouraging public-private partnerships and small business linkages, namely through the strength of the AWS Partner Network (APN). Additionally, AWS CloudStart was formed to encourage the growth of SMEs and economic development organizations by providing resources to educate, train, and help these entities embrace the cost-effectiveness of the AWS Cloud. "As small
businesses leverage a broader portfolio of digital solutions, they can see an increase in agility while simultaneously lowering costs and reducing time to innovation," according to Zandile Keebine, founder of participating organization GirlCode, a nonprofit that aims to empower girls through technology. In the US, Kansas City, Missouri is one example of a city that is successfully using smart technology to attract talent to an emerging business center. Along the two-mile corridor of the Kansas City Streetcar, a $15 million public-private partnership supports the deployment of 328 Wi-Fi access points and 178 smart streetlights that can detect traffic patterns and open parking spaces. It has also funded 25 video kiosks, pavement sensors, video cameras, and other devices, all connected by the city's nearly ubiquitous fiber-optic data network. The successful use of smart city technology has been a key component in bringing people back to Kansas City's core. "Ten years ago, we had fewer than 5,000 people living downtown," said Bob Bennett, Kansas City's chief innovation officer. "We have seen a 520 percent growth in the number of residents in downtown and a 400 percent growth in development investment. I believe our smart city project has played a prominent role in getting people excited about living here."

Entrepreneurship and public-private partnerships
Cloud technology provides governments with the means to educate and train citizens, boosting workforce participation and eligibility. Driving local entrepreneurship is an important outgrowth of this investment. "A vibrant entrepreneurial sector is essential to small firm development," according to the OECD. It adds that regions with "pockets of high entrepreneurial activity" and public-private partnerships can lead to more job opportunities and innovation. A municipality in Sweden is feeling the effects of a strategic partnership aimed at helping small
businesses adapt and thrive. Consultant CAG Malardalen, in Västerås, Sweden, uses the cloud to help constituents make more data-driven decisions, deploy resources more efficiently, and help shape the economic conditions essential for attracting new economic activity. "[We are] striving to bring the region the latest in cloud technology. Our ambition is to always deliver the most relevant IT solutions to our customers. Through working with AWS CloudStart, our customers benefit from the foundational knowledge we have gathered, and we are already seeing a lot of new possibilities for us as a service provider across Sweden," says Tomas Täuber, CEO of CAG Malardalen.

Allocating Resources More Effectively
Cloud technology allows governments to rethink critical processes. It builds new efficiencies across procurement, security, compliance, and data protection. Additionally, the cost-effectiveness of the cloud enables agencies to redirect resources toward advancing their mission, freeing up capacity to create more innovative public services. Increased access to new and better citizen services ushers in a higher standard of living, offering the potential to draw new inhabitants to a city or region. The cloud can act as a catalyst for this type of development, driving organizations toward increased operational efficiencies and enabling a greater focus on the mission. In the Middle East, the Kingdom of Bahrain underwent a shift in how it procures resources in its plan to digitize its economy.

Using the cloud to efficiently deliver services to constituents
The Kingdom of Bahrain Information & eGovernment Authority (iGA) is accountable for moving all of its government services online. It is responsible for information and communications technology (ICT) governance and procurement for the entire Bahraini government. The iGA launched a cloud-first policy to support its economic development plans. Bahrain's adoption
of a cloud-first policy boosted efficiency across the public sector and trimmed IT expenditures by up to 90% in 2017, according to the Economic Development Board annual report. "Through adopting a cloud-first policy, we have helped reduce the government procurement process for new technology from months to less than two weeks," said Mohammed Ali Al Qaed, CEO of Bahrain iGA. With cloud-based technology as the focus for public ICT procurement, the Bahraini government can exercise minimal upfront investment by paying only for the services it needs. With tools for cost allocation and service provisioning, the AWS Cloud offers built-in resource discipline, enabling governments to shift their focus toward advancing development goals.

Key Takeaway

Technology-driven innovation is one way public institutions can drive economic development. With the right technology, governments, nonprofits, economic development organizations, and other entities can improve their internal operations, become more productive, and ultimately focus more acutely on serving citizens. This can create conditions in which citizens enjoy improved quality of life and where businesses flourish. As organizations increasingly embrace cloud-based solutions, long-lasting effects can be realized in the form of community-wide collaboration, partnerships with local businesses, and increased innovation. This can help these institutions wield greater influence on economic development.

Contributors

The following individuals and organizations contributed to this document:
• Carina Veksler, Public Sector Solutions, AWS Public Sector
• Randi Larson, Public Sector Content, AWS Public Sector
• John Brennan, International Expansion, AWS Public Sector
• Mike Grella, Economic Development, AWS Public Policy

A Platform for Computing at the Mobile Edge: Joint Solution with HPE, Saguna, and AWS

February 2018

This paper has been
archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Introduction 1
The Business Case for Multi-Access Edge Computing 1
MEC Addresses the Need for Localized Cloud Services 2
MEC Leverages the Capabilities Inherent in Mobile Networks 2
MEC Provides a Standards-Based Solution that Enables an Ecosystem of Edge Applications 2
Mobile Edge Solution Overview 4
Example Reference Architectures for Edge Applications 6
Smart City Surveillance 7
AR/VR Edge Applications 10
Connected Vehicle (V2X) 13
Conclusion 15
Contributors 15
Appendix 15
Infrastructure Layer 16
Application Enablement Layer 22

Abstract

This whitepaper is written for communication service providers with network infrastructure, as well as for application developers and technology suppliers who are exploring applications that can benefit from edge computing. In this paper, we establish the value of a standards-based computing platform at the mobile network edge, describe use cases that are well suited for this platform, and present a reference architecture based on the
solutions offered by AWS, Saguna, and HPE. A subset of use cases are reviewed in detail to illustrate how the reference architecture can be adapted as a platform to serve use-case-specific requirements.

Amazon Web Services – A Platform for Computing at the Mobile Edge, Page 1

Introduction

Imagine a world where cars can alert drivers about dangerous road conditions to help them take action to avoid collision, and where devices can help fleets of cars drive autonomously and predict traffic patterns. Consider a new Industrial Revolution where Internet of Things (IoT) devices or sensors report data collected in real time from large and small machines, allowing for intelligent automation and orchestration in industries such as manufacturing, agriculture, healthcare, and logistics. Envision city and public services that provide intelligent parking, congestion management, pollution detection and mitigation, emergency response, and security. While this is happening, internet users access bandwidth of 10 times the current maximums and latencies at 1/100th of current averages, using a seamless combination of mobile, Wi-Fi, and fixed access. Fifth-generation mobile network (5G) applications are enabling these scenarios by providing 10 times the current bandwidth maximum and latencies at 1/100th of current averages. This new generation of applications is fueling technological developments and creating new business opportunities for mobile operators. One such technological and business development, which is key to enabling many new-generation applications, is "edge computing." Edge computing addresses the latency requirements of specialized 5G applications, helps manage the potentially exorbitant access cost and network load due to fast-growing data demand, and supports data localization where necessary. By providing a cloud-enabled platform for edge computing, mobile operators are well positioned to take a leading role in the 5G ecosystem while opening up completely new business cases and revenue streams. This whitepaper presents
a solution that allows you to leverage the infrastructure of your existing mobile networks and establish a platform to enable new revenue-generating applications and 5G use cases.

The Business Case for Multi-Access Edge Computing

Multi-Access Edge Computing (MEC) is a cloud-based IT service environment at the edge infrastructure of networks that serves multiple channels of telecommunications access, for example, mobile wide area networks, Wi-Fi or LTE-based local area networks, and wireline. In this section, we discuss the many benefits of a MEC platform that sits at the edge of the cellular mobile network.

MEC Addresses the Need for Localized Cloud Services

The agility, scalability, elasticity, and cost efficiencies of cloud computing have made it the platform of choice for application development and delivery. IoT applications need local cloud services that operate close to connected devices to improve the economics of telemetry data processing, to minimize latency for time-critical applications, and to ensure that sensitive information is protected locally.

MEC Leverages the Capabilities Inherent in Mobile Networks

Mobile networks have expanded to the point where they offer coverage in most countries around the world. These networks combine wireless access, broadband capacity, and security.

MEC Provides a Standards-Based Solution that Enables an Ecosystem of Edge Applications

MEC transforms mobile communication networks into distributed cloud computing platforms that operate at the mobile access network. Strategically located in proximity to end users and connected devices, MEC enables mobile operators to open their networks to new, differentiated services while providing application developers and content providers access to Edge Cloud benefits. The ETSI MEC Industry Specification Group (ISG) has defined the first set of standardized APIs and services for MEC. The standard is supported by a wide
range of industry participants, including leading mobile operators and industry vendors. Both HPE and Saguna are active members in the ETSI ISG. In the following sections, we outline the key benefits provided by MEC.

Extremely Low Latency

Traditional internet-based cloud environments have physical limitations that prohibit you from hosting applications that require extremely low latency. Alternatively, MEC provides a low-latency cloud computing environment for edge applications by operating close to end users and connected IoT devices.

Broadband Delivery

Video content is typically delivered using TCP streams. When network latency is compounded by congestion, users experience annoying delays due to the drop in bitrate. The MEC environment provides low latency and minimal jitter, which creates a broadband highway for streaming at high bitrates.

Economical and Scalable

In massive IoT use cases, many devices such as sensors or cameras send vast amounts of data upstream, which current backhaul networks cannot support. MEC provides a cloud computing environment at the network edge where IoT data can be aggregated and processed locally, thus significantly reducing upstream data. MEC infrastructure can scale as you grow, by expanding local capacity or by deploying additional edge clouds in new locations.

Privacy and Security

By deploying the MEC Edge Cloud locally, you can ensure that your private data stays on premises. However, unlike server-based on-premises installations, MEC is a fully automated edge cloud environment with centralized management.

Role of MEC in 5G

MEC enables ultra-low-latency use cases specified as part of the 5G network goals. MEC also enables fast delivery of data and the connection of billions of devices, while allowing for cost economization related to transporting enormous volumes of data from user devices and IoT over the backhaul network.
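The backhaul savings described above can be sketched in a few lines of code: an edge node collapses each window of raw IoT readings into one small summary record before anything crosses the backhaul. This is an illustrative sketch only; the window size, reading values, and reduction figures are invented for the example and are not part of any MEC specification.

```python
from statistics import mean

def aggregate_at_edge(readings, window=60):
    """Collapse each window of raw sensor readings into one summary record.

    Instead of forwarding every reading upstream, the edge node sends one
    count/mean/min/max summary per window, shrinking backhaul traffic.
    """
    summaries = []
    for start in range(0, len(readings), window):
        window_vals = readings[start:start + window]
        summaries.append({
            "count": len(window_vals),
            "mean": round(mean(window_vals), 2),
            "min": min(window_vals),
            "max": max(window_vals),
        })
    return summaries

# 3,600 raw readings (one per second for an hour) become 60 summary records.
raw = [20.0 + (i % 10) * 0.1 for i in range(3600)]
summaries = aggregate_at_edge(raw, window=60)
reduction = 1 - len(summaries) / len(raw)  # fraction of records kept off the backhaul
```

With these toy numbers, upstream record volume drops by more than 95%, which is the economic effect the "Economical and Scalable" benefit refers to.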
It is important to note that MEC is currently deployed in 4G networks. By deploying this standards-based technology in existing networks, communication service providers can benefit from MEC today while creating an evolutionary path to their next-generation 5G network.

Mobile Edge Solution Overview

Saguna has developed a MEC virtualized radio access network (vRAN) solution that runs on Hewlett Packard Enterprise (HPE) edge infrastructure. This solution lets application developers create mobile edge applications using AWS services, while allowing mobile operators to effectively deploy MEC and operate edge applications within their mobile network.

Figure 1: End-to-end MEC solution architecture

The proposed mobile edge solution consists of three main layers, as illustrated in Figure 1:
• Edge Infrastructure Layer – Based on the powerful x86 compute platform, this layer provides compute, storage, and networking resources at edge locations. It supports a wide range of deployment options, from RAN base station sites to backhaul aggregation sites and regional branch offices.
• MEC Layer – This layer lets you place an application within a mobile access network and provides a number of services, including mobile traffic breakout and steering, registration and certification services for applications deployed at the edge, and radio network information services. It also provides optional integration points with mobile core network services, such as charging and lawful intercept.
• Application Enablement Layer – This layer provides tools and frameworks to build, deploy, and maintain edge-assisted applications. This layer allows you to place certain application modules locally at the edge (e.g., latency-critical or bandwidth-hungry components) while keeping other application functions in the cloud.
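The edge-versus-cloud split that the Application Enablement Layer supports can be illustrated with a toy placement rule: latency-critical or bandwidth-hungry modules land at the edge, everything else stays in the cloud. The thresholds and module names below are hypothetical round numbers chosen for the example, not values from the Saguna or HPE products.

```python
# Illustrative placement rule; thresholds are hypothetical, not vendor defaults.
EDGE_LATENCY_THRESHOLD_MS = 50       # tighter budgets than this go to the edge
EDGE_BANDWIDTH_THRESHOLD_MBPS = 100  # bandwidth-hungry ingest stays local

def place_component(max_latency_ms, ingest_mbps):
    """Return 'edge' for latency-critical or bandwidth-hungry modules, else 'cloud'."""
    if max_latency_ms < EDGE_LATENCY_THRESHOLD_MS:
        return "edge"
    if ingest_mbps > EDGE_BANDWIDTH_THRESHOLD_MBPS:
        return "edge"
    return "cloud"

# Hypothetical modules of one application, split across the two tiers:
app_modules = {
    "ar_render": place_component(max_latency_ms=10, ingest_mbps=30),
    "video_inference": place_component(max_latency_ms=200, ingest_mbps=400),
    "model_training": place_component(max_latency_ms=5000, ingest_mbps=5),
}
```

The point of the sketch is the shape of the decision, not the numbers: each module declares its requirements, and the layer decides where it runs.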
The flexible design inherent in the MEC solution architecture allows you to scale the edge component to fit the needs of concrete use cases. You can deploy the edge component at the deepest edge of the mobile network (e.g., colocated with eNodeB equipment at a RAN site), which lets you deploy low-latency and bandwidth-demanding application components in close proximity to end devices. You can also deploy an edge component at any traffic aggregation point between a base station and the mobile core, which allows you to serve traffic from multiple base stations.

The proposed mobile edge platform provides a variety of tools to build, deploy, and manage edge-assisted applications, such as:
• Development libraries and frameworks spanning edge-to-cloud, including function-as-a-service at the edge and cloud, AI frameworks for creating and training models in the cloud, seamless deployment and inference at the edge, and communication brokerage between edge application services and the cloud. These development libraries and frameworks expose well-defined APIs and have been widely adopted in the developer community, shortening the learning curve and accelerating time-to-market for edge-assisted applications and use cases.
• Tools to automate deployment and life cycle management of edge application components throughout massively distributed edge infrastructure.
• Infrastructure services, such as virtual infrastructure services at the edge, traffic steering policies at the edge, DNS services, radio awareness services, and integration of the edge platform into the overall network function virtualization (NFV) framework of the mobile operator.
• Diverse compute resources fitted to the particular needs of edge applications, such as CPU, GPU for acceleration of graphics-intensive or AI workloads, FPGA accelerators, cryptographic and data compression accelerators, etc.
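As a rough illustration of the deployment-automation idea above, the sketch below rolls a component out across edge sites one at a time and halts at the first failed health check, so a bad build does not reach every site. The site names and the `deploy`/`healthy` hooks are invented for the example; a real platform would supply its own orchestration interfaces.

```python
def rolling_deploy(sites, deploy, healthy):
    """Deploy a component site by site, stopping at the first unhealthy site.

    `deploy` and `healthy` are operator-supplied callables standing in for
    whatever hooks a real edge-orchestration system exposes.
    """
    done, failed = [], None
    for site in sites:
        deploy(site)
        if healthy(site):
            done.append(site)
        else:
            failed = site
            break  # halt the rollout instead of degrading more sites
    return done, failed

# Simulated rollout across three hypothetical sites, one of which fails:
deployed = []
result = rolling_deploy(
    ["ran-site-1", "ran-site-2", "hub-3"],
    deploy=deployed.append,
    healthy=lambda s: s != "ran-site-2",
)
```

The rollout stops after the second site fails its check, leaving the third site untouched.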
This unique combination of functionalities lets you quickly develop edge applications, deploy and manage edge infrastructure and applications at scale, and achieve a fast time-to-market with edge-enabled use cases.

Example Reference Architectures for Edge Applications

A mobile edge platform enables new application behaviors. By adding the ability to run certain components and application logic at the mobile network edge, in close proximity to the user devices/clients, the mobile edge platform allows you to re-engineer the functional split between client and application servers and enables a new generation of application experiences. The following list provides examples of possible mobile edge computing applications in the industrial, automotive, public, and consumer domains:
• Industrial
  o Next-generation augmented reality (AR) wearables (e.g., smart glasses)
  o IoT for automation, predictive maintenance
  o Asset tracking
• Automotive
  o Driverless cars
  o Connected vehicle-to-vehicle or vehicle-to-infrastructure (V2X)
• Smart Cities
  o Surveillance cameras
  o Smart parking
  o Emergency response management
• Consumer Enhanced Mobile Broadband
  o Next-generation Augmented Reality/Virtual Reality (AR/VR) and video analytics
  o Social media, high-bandwidth media sharing
  o Live event streaming
  o Gaming

In the following sections, we provide examples of how the mobile edge solution can be implemented for smart city surveillance, AR/VR edge applications, and Connected V2X.

Smart City Surveillance

Cities can take advantage of IoT technologies to increase the safety, security, and overall quality of life for residents, and keep operational costs down. For example, video recognition technology enables real-time situational analysis (also called "video as a sensor"), which allows you to detect a variety of objects from a video feed (e.g., people, vehicles, personal items), recognize the overall situation (e.g., a traffic jam, fight, trespassing, and abandoned objects), and classify recognized objects (e.g., faces, license plates).
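A toy version of the "video as a sensor" logic described above might map per-frame object detections to a situation label. The object types, confidence threshold, and labels below are invented for illustration; a real system would draw them from a trained recognition model rather than hard-coded rules.

```python
def classify_situation(detections):
    """Map one frame's detections to a situation label.

    `detections` is a list of (object_type, confidence) tuples, as might be
    emitted by an upstream recognition model; names here are hypothetical.
    """
    confident = [obj for obj, conf in detections if conf >= 0.8]
    if "abandoned_object" in confident:
        return "security_alert"
    if confident.count("vehicle") >= 20:  # many vehicles in frame
        return "traffic_jam"
    return "normal"

# Three simulated frames:
quiet = classify_situation([("person", 0.9), ("vehicle", 0.85)])
jam = classify_situation([("vehicle", 0.9)] * 25)
alert = classify_situation([("abandoned_object", 0.95), ("person", 0.7)])
```

The rule ordering matters: a security-relevant detection outranks a congestion label, which in turn outranks "normal."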
The mobile edge solution enables new abilities in building robust and cost-efficient smart city surveillance systems:
• Efficient video processing at the edge – Computer vision systems in general require high-quality video input (especially for extracting advanced attributes) and hardware acceleration of inference models. The mobile edge solution lets you host a computing environment at the network edge. This lets you offload backhaul networks and cloud connectivity from bandwidth-hungry, high-resolution video feeds, and allows low-latency actions based on recognition results (e.g., opening gates for recognized vehicles or people, controlling traffic with adaptive traffic lights). The mobile edge platform provides industry-standard GPU resources to accelerate video recognition and any other artificial intelligence (AI) models deployed at the edge.
• Flexible access network – End-to-end smart city surveillance systems might leverage different means to generate video input, such as existing fixed surveillance cameras, mobile wearable cameras (e.g., for law enforcement services or first responders), and drone-mounted mobile surveillance. The diversity of endpoints generating video input requires a high degree of flexibility from the access network – leveraging fixed video networks and mobile cellular networks with native mobility support for wearable or unmanned aerial vehicle (UAV)-mounted cameras. Additionally, automated drone-mounted systems require low-latency access to control the flight of the drone, which might require end-to-end latencies on a millisecond scale. The mobile edge platform provides a means to use robust, low-latency cellular access with native mobility support for the latter cases, and incorporates existing fixed video networks.
• Flexible video recognition models – Robust video recognition AI models usually require extensive training on sample sets of objects and events, as well as periodic tuning (or development of models for extracting some new
attributes). These compute-intensive tasks use highly scalable, lower-cost compute cloud resources. However, seamless deployment of the trained models to the edge for execution, and managing the life cycle of the deployed models, is a complex operational task. The mobile edge platform provides a seamless development and operational experience, starting from creating, training, and tuning an AI model in the cloud, to deploying it at edge locations and managing the life cycle of the deployed models.

The following diagram shows an example architecture of a smart city surveillance edge application:

Figure 2: Edge-assisted smart city surveillance application

A smart city surveillance solution has three main domains:
• Field domain – Diverse ecosystem of video-producing devices, e.g., body-worn cameras from first responder units, drones, fixed video surveillance systems, and wireless fixed cameras. Video feeds are ingested into the mobile edge platform via cellular connectivity and use existing video networks.
• Edge sites – Located in close proximity to the video-generating devices, and host latency-sensitive services (e.g., UAV flight control, local alerts processing), bandwidth-hungry, compute-intensive applications (edge inference), and gateway functionalities for video infrastructure control (camera management). Video services extract target attributes from the video streams and share metadata with local alerting services and cloud services. Video services at the edge can also produce a low-resolution video proxy or sampling videos, transferring only the videos of interest to the cloud.
• Cloud domain – Hosts centralized, non-latency-critical functions such as device and service management functions, AAA and policies, and command and control center functions, as well as compute-intensive, non-latency-critical tasks of AI model training.
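The "videos of interest" behavior of the edge sites above can be sketched as a simple uplink plan: metadata always goes to the cloud, while the large video clip goes up only when a local alert fired. The clip IDs and alert names are made up for the example; a real deployment would plug this decision into its own alerting service.

```python
def edge_uplink_plan(clips):
    """Decide what the edge site ships to the cloud for each analyzed clip.

    `clips` is a list of (clip_id, alerts) pairs. Metadata is always
    forwarded; full video is sent only when a local alert fired, which keeps
    bandwidth-hungry traffic off the backhaul.
    """
    plan = []
    for clip_id, alerts in clips:
        plan.append({
            "clip": clip_id,
            "send_metadata": True,       # small, always forwarded
            "send_video": bool(alerts),  # large, only for clips of interest
            "alerts": alerts,
        })
    return plan

# Two simulated clips from a hypothetical camera:
plan = edge_uplink_plan([
    ("cam7-0900", []),
    ("cam7-0901", ["trespassing"]),
])
```

The quiet clip contributes only metadata upstream; the flagged clip is uploaded in full for review in the cloud domain.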
You can augment a MEC smart city surveillance application with machine learning (ML) and inference models via:
• Model training (for surveillance patterns of interest, e.g., facial recognition, person counts, dwell time analysis, heat maps, activity detection) using deep learning AMIs on the AWS Cloud
• Deployment of trained models to the MEC platform's application container using AWS Greengrass and Amazon SageMaker
• Application of inference logic (e.g., alerts or alarms based on select pattern detection) using AWS Greengrass ML inference

Figure 3: Detailed view of solution for smart city surveillance application

This design approach, based on the mobile edge platform, is a cost-efficient way of building and operating a smart city surveillance system, with edge processing for bandwidth-hungry and latency-sensitive services.

AR/VR Edge Applications

AR/VR is one of the use cases that benefits most from a mobile edge platform. AR/VR edge applications can benefit from the mobile edge platform in the following ways:
• Next-generation AR wearables
Current immersive AR experiences require heavy processing on the client side (e.g., calculating head and eye position and motion information from tracking sensors, rendering high-quality 3D graphics for the AR experience, and running video recognition models). The requirement to run heavy computations on AR devices (e.g., head-mounted displays, smart glasses, smartphones) has influenced the characteristics of these devices: cost, size, weight, battery life, and overall aesthetic appeal.

Figure 4: Next-generation AR devices

You can avoid bulkiness, cost, weight, ergonomic, and aesthetic limitations on the devices by offloading the heaviest computational tasks from the devices to a remote server or cloud. However, a truly immersive AR experience requires keeping coherence between AR content and the surrounding physical world with an end-to-end latency below 10 ms, which is unachievable by offloading to a traditional centralized cloud.
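The 10 ms budget argument can be made concrete with toy numbers: an offload target is viable only if transport plus processing fits the AR latency budget. The millisecond figures below are hypothetical placeholders, not measurements; only the 10 ms budget comes from the text.

```python
# Hypothetical round-trip and processing times (ms); real values depend on
# the network and hardware. Only the 10 ms budget is taken from the paper.
RTT_MS = {"on_device": 0.0, "edge": 4.0, "cloud": 40.0}
PROCESS_MS = {"on_device": 12.0, "edge": 3.0, "cloud": 2.0}

def viable_targets(budget_ms=10.0):
    """Return offload targets whose transport + processing fit the AR budget."""
    return [
        target
        for target in RTT_MS
        if RTT_MS[target] + PROCESS_MS[target] <= budget_ms
    ]
```

With these placeholder numbers, the constrained wearable is too slow to render locally and the centralized cloud's round trip alone blows the budget, so only the edge target fits under 10 ms — which is the structural argument for MEC-hosted AR processing.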
The mobile edge platform provides compute power at the network edge, which allows you to offload latency-critical functions from the AR device to the network, and enables the next generation of lightweight, compact devices with longer battery life and native mobility.
• Mission-critical operations
AR experiences have been valuable in workforce enablement applications, with remote collaboration applications, AR-assisted maintenance in the industrial space, etc. In many cases, those AR experiences have become an important part of mission-critical operations, for example, AR-assisted maintenance of equipment in hazardous conditions (e.g., oil extraction sites, refineries, and mines) and in AR-assisted healthcare. Those use cases require high reliability from the AR application, even when global connectivity from the client to the server side is degraded or broken. The mobile edge platform provides the capability to re-engineer an AR application in a way that the solution can operate offline, with critical components deployed both locally, in close proximity to devices, and globally, in the cloud, as a fallback option.
• Localized data processing
In many cases, AR devices combine data from different local sources (e.g., adding live sensor readings from a local piece of equipment to an AR maintenance application). In many cases, ingesting data into the cloud requires high bandwidth and is governed by data security or privacy frameworks. A true AR experience requires localized data processing and ingest. The mobile edge platform allows you to ingest data from any local source into the AR application, as well as execute commands from the AR application to the local data sources (e.g., perform equipment maintenance tasks).

The following diagram shows an example architecture for an AR edge application.

Figure 5: Edge-assisted AR application

The edge-assisted
AR application has three main domains:
• Ultra-thin client (e.g., head-mounted display) – Generates sensor readings of head and eye position, location, and other relevant data, such as a live video feed from embedded cameras.
• Edge services – Part of the AR backend hosted in close proximity to the client, on the network side. These services execute latency-critical functions (computing positioning and tracking from AR sensor readings, AR graphics rendering), bandwidth-hungry functions (e.g., computer vision models for video recognition), and local data processing (processing of IoT sensor readings from localized equipment).
• Cloud services – Part of the AR backend hosted in a traditional centralized cloud. These services execute functions centralized in nature (e.g., authentication and policies, command and control center, and AR model repository), resource-hungry, non-latency-critical functions (computer vision model training), and horizontal, cross-enterprise functions (e.g., data lakes, integration points with other enterprise systems, etc.).

This design approach allows clients to offload heavy computations, which makes client devices cost-efficient, lightweight, and battery-efficient. This design also allows local data to be ingested from external sources and control actions to be sent to local systems, enables offline operation, saves costs of WAN connectivity, and secures compliance with potential data localization guidelines. By working as an integrated part of the mobile network, this use case natively supports global mobility and telco-grade reliability and security.

Connected Vehicle (V2X)

Connectivity between vehicles, pedestrians, roadside infrastructure, and other elements in the environment is enabling a tectonic shift in transportation. The full promise of V2X solutions can only be realized with a new generation of mobile edge applications:
• Transportation safety – V2X promises the ability to coordinate actions between vehicles sharing the
road (this ability is sometimes called "Cooperative Cruise Control"). Information exchange between connected vehicles about intentions to change speed or trajectory can significantly improve the safety and robustness of automated or autonomous driving through cooperative maneuvering. However, due to the very dynamic nature of car traffic, these decisions must be made in near real time (with end-to-end latencies on a millisecond time scale). The massively distributed nature of road infrastructure, near-real-time decision making, and the requirements for high-speed mobility make the mobile edge platform perfect for hosting the distributed logic of cooperative driving.
• Transportation efficiency – Cooperative driving promises not only increased safety on the road, but also a significant boost in transportation efficiency. With coordinated vehicle maneuvers, the overall capacity of road infrastructure can increase without significant investment in road reconstruction. The promise of higher transportation efficiency is further supported by vehicle-to-infrastructure solutions. Vehicles can communicate with roadside equipment for speed guidance, to coordinate traffic light changes, and to reserve parking lots. While some information requires only short-range communication (e.g., from a vehicle to a roadside unit), the coordinated actions of a distributed infrastructure (e.g., coordinating traffic light changes between multiple intersections) require the mobile edge platform to host the logic.
• Transportation experience – With autonomous driving technologies, car infotainment systems are becoming more widespread. The mobile edge platform enables the unique possibility of massively distributed content caching with high localization and context awareness, as well as the ability to enable location- and context-based interactions with vehicle passengers (e.g., guidance about local attractions for travelers, time- and
location-limited promotions from local vendors, etc.).

The following diagram shows an example architecture of a V2X edge application.

Figure 6: Edge-assisted connected vehicle (V2X) application

The V2X solution has three main domains:
• Field domain – Vehicles that generate data about intended driving maneuvers (e.g., braking, lane changes, turns, acceleration) and receive notifications from surrounding vehicles; road infrastructure that includes all sensors and actuators relevant to the driving experience (e.g., wind and temperature sensors, street lighting, and connected traffic lights that are controlled via gateway devices such as a Road Side Unit).
• Edge sites – Located in close proximity to the road (e.g., respective RAN eNodeB sites) and host latency-sensitive or highly localized V2X application services. Examples of those services include processing and relaying driving maneuver notifications for vehicle coordination, processing local sensor readings from road infrastructure, dynamic generation of control commands to road infrastructure (e.g., coordinated traffic lights across several intersections), and caching highly localized infotainment content.
• Cloud domain – Hosts centralized and non-latency-critical functions, such as AAA and policy control, historical data collection and processing, command and control center functions, and the centralized infotainment content origin.

With this design approach, you can realize low latency and a coordinated exchange of data and control commands between vehicles and surrounding infrastructure. This provides a highly specific context for every interaction.

Conclusion

Many technological and market developments are converging to create an opportunity for new applications that take advantage of modern mobile networks and the edge access infrastructure. This paper emphasizes the need for an application enablement ecosystem approach and presents a platform to serve
multiple edge use cases.

Contributors

The following individuals and organizations contributed to this document:
• Shoma Chakravarty, WW Technical Leader, Telecom, Amazon Web Services
• Tim Mattison, Partner Solutions Architect, Amazon Web Services
• Alex Reznik, Enterprise Solution Architect and ETSI MEC Chair, HPE
• Rodion Naurzalin, Lead Architect, Edge Solutions, HPE
• Tally Netzer, Marketing Leader, Saguna
• Danny Frydman, CTO, Saguna

Appendix

This Appendix gives a more detailed overview of the functional components of the proposed mobile edge platform solution, as well as the technical characteristics of each component. Figure 7 illustrates a functional diagram of the mobile edge platform:

Figure 7: Mobile edge platform functional diagram

Infrastructure Layer

The physical infrastructure for a MEC node is based on an edge-optimized, converged HPE Edgeline EL4000 platform (Figure 8).

Figure 8: HPE Edgeline EL4000 chassis and four m710x cartridges

The end-to-end MEC solution gives you the ability to place workloads within any segment of your mobile access network, for example, at a RAN site, backhaul aggregation hub, or C-RAN hub. The HPE Edgeline EL4000 has been optimized for the MEC solution as follows:

Compute Density

The Edgeline EL4000 hosts up to four hot-swap SoC cartridges in a 1U chassis, providing up to 64 Xeon D cores with optimized price/core and watt/core characteristics. That design provides 2x–3x higher compute density compared to a typical traditional data center platform, while keeping power consumption low. These characteristics allow an operator to place a MEC node based on the Edgeline EL4000 at the deepest edge of the access network, down to a RAN site, where space and power constraints make other general-purpose compute platforms inefficient.

Workload-Specific Compute

The diversity of MEC use cases requires that
the underlying infrastructure be able to provide different types of compute resources. The Edgeline EL4000 platform provides diverse compute and hardware acceleration capabilities, which allows you to co-locate workloads with different compute needs:
• x86 processors that serve general workloads. Typical workload examples include a Virtual Network Function, a virtualized edge application enablement platform, and applications that provide fast control actions at the edge for low-latency use cases.
• Built-in GPU that accelerates graphics processing. Typical workload examples are video transcoding at the edge for MEC-assisted content distribution, and 3D graphics rendering at the edge for AR/VR streaming applications.
• Plug-in dedicated GPU cards that accelerate deep learning algorithms. Enabled by a strategic partnership with NVIDIA, the Edgeline platform can be used for deep learning hardware acceleration at the edge. Typical workload examples include video analytics and computer vision at the edge, and ML inference at the edge for anomaly detection and predictive maintenance.
• Built-in acceleration of cryptographic operations with QuickAssist Technology (e.g., accelerating cryptographic or data compression workloads).
• Support for up to four PCIe extension slots in a single chassis, which provides options for specialized plug-in units such as dedicated FPGA boards, neuromorphic chips, etc. Such specialized hardware acceleration is being evaluated for many network function workloads (such as RAN baseband processing) and applications (efficient deep learning inference).

Physical and Operational Characteristics

A MEC node should be ready to operate at physical sites that are traditionally used for hosting telco purpose-built appliances optimized for the physical site environment (e.g., radio base station equipment at RAN sites, access routers at traffic hubs, etc.). The operational environment of the
MEC node sites may be very different from the traditional data center, with limited physical space for equipment hosting, consumer-grade climate control, and limited physical accessibility. The Edgeline EL4000 is optimized to operate in such environments, with operational characteristics comparable to telco purpose-built appliances:

Parameter | RAN Baseband Appliance | Typical Data Center Platform | Edgeline EL4000
Operating Temperature (°C) | 0 to +50 | +10 to +35 | 0 to +55
Non-Destructive Shock Tolerance (G) | 30 | 2 | 30
Expected Mean Time Between Failures (MTBF) (years) | 30-35 | 10-15 | >35

On top of these enhanced operational characteristics, the Edgeline EL4000 exposes the open iLO interface for the management of a highly distributed infrastructure of MEC nodes. The iLO interface is compliant with the Redfish industry standard and exposes infrastructure management functions via a simple RESTful service.

Saguna OpenRAN Components Overview

The MEC platform layer is based on the Saguna OpenRAN solution and consists of the following functions:
• Saguna vEdge function, located within a MEC node
• Saguna vGate function (optional), located at the core network site
• Saguna OMA function (optional), located within a MEC node or at the aggregation point of several MEC nodes

Saguna vEdge resides in the MEC node and enables services and applications to operate inside the mobile RAN by providing MEC services such as registration and certification, a Traffic Offload Function (TOF), real-time Radio Network Information Services (RNIS), and optional DNS services. The virtualized software node is deployed in the RAN, on a server at a RAN site or at an aggregation point of mobile backhaul traffic. It may serve single or multiple eNodeB base stations and small cells. It can easily be extended to support Wi-Fi and other communications standards in heterogeneous network (HetNet) deployments. Saguna vEdge taps the S1 interface (GTP-U and S1-AP protocols) and
steers the traffic to the appropriate local or remote endpoint based on configured policies. Saguna vEdge implements local LTE traffic steering in a number of modes (inline steering, breakout, tap). It has a communication link that connects it to the optional Saguna vGate node using Saguna’s OPTP (Open RAN Transport Protocol). It exposes open REST APIs for managing the platform and providing platform services to MEC-assisted applications.

Saguna vGate is an optional component that resides in the core network. It is responsible for preserving core functionality for RAN-generated traffic: lawful interception (LI), charging, and policy control. Saguna vGate also enables mobility support for sessions generated by a MEC-assisted application. Operating in a virtual machine, Saguna vGate sits adjacent to the evolved packet core (EPC). It has a communication link that connects it to the Saguna vEdge nodes using Saguna’s OPTP (Open RAN Transport Protocol), and mobile network integrations for LI and charging functions.

Saguna OMA (Open Management and Automation) is an optional subsystem that resides in the MEC node or at the aggregation point of several MEC nodes. It provides a management layer for the MEC nodes and integrates into the cloud Network Function Virtualization (NFV) environment, which includes the NFV Orchestrator, the Virtual Infrastructure Manager (VIM), and Operations Support Systems (OSS). Saguna OMA provides two management modules:

• Virtualized Network Function Manager (VNFM) – Provides lifecycle management and monitoring for the MEC platform (Saguna vEdge) and MEC-assisted applications. This is a standard layer of management required within NFV environments. It resides at the edge to manage the local MEC environment.
• Mobile Edge Platform Manager (MEPM) – Provides an additional layer of management required for operating and prioritizing MEC applications. It is responsible for managing
the rules and requirements presented by each MEC application and for resolving conflicts between different MEC-assisted applications.

The Saguna OMA node operates on a virtual machine and manages onboarded MEC-assisted applications via its workflow engine, using Saguna and third-party plugins. The Saguna OMA is managed via a REST API.

Saguna OpenRAN Services

As a MEC platform layer, Saguna OpenRAN provides the following services:

Mobile Network Integration Services
• Mobility, with Internal Handover support for mobility events between cells connected to the same MEC node, and External Handover support between two or more MEC nodes and between cells connected to a MEC node and unconnected cells.
• Lawful Interception (LI) for RAN-generated data. It supports X1 (Admin), X2 (IRI), and X3 (CC) interfaces and is pre-integrated with Utimaco and Verint LI systems.
• Charging support, using CDR generation for application-based charging (based on 3GPP TDF-CDR) and charging triggering based on time, session, and data. Supported charging methods are file-based (ASN.1) and GTP’.
• Management: the vEdge REST API for MEC services discovery and registration. MEPM and VNFM let you efficiently operate a MEC solution and integrate it into your existing NFV environment.

Edge Services
• Registration for MEC-assisted applications. The MEC Registration service provides dynamic registration and certification of MEC applications, and registration to other MEC services provided by the MEC platform, setting the MEC application type.
• The Traffic Offload Function routes specific traffic flows to the relevant applications, as configured by the user. The TOF also handles tunneling protocols such as GPRS Tunneling Protocol (GTP) for Long Term Evolution (LTE) networks and standard A10/A11 interfaces for 3GPP2 CDMA networks, and it handles plain IP traffic for Wi-Fi/DSL networks.
• DNS provides a DNS caching service, by storing recent DNS addresses
locally to accelerate the mobile internet, and DNS server functionality, preconfiguring specific DNS responses for specific domains. This lets the User Equipment (UE) connect to a local application for specific TCP sessions.
• Radio Network Information Service, provided per cell and per Radio Access Bearer (RAB). The service is vendor-independent and can support eNodeBs from multiple RAN vendors simultaneously. It supports standard ETSI queries (for example, cell info) and a notification mechanism (for example, RAB establishment events). Additional information, based on a Saguna proprietary model, provides real-time feedback on cell congestion level and available RAB throughput using statistical analysis.
• Instant Messaging with Short Message Service (SMS), provided as a REST API request. It offers smart messaging capabilities, for example sending an SMS to UEs in a specific area (such as a sports stadium), or sending an SMS to a UE when it enters or exits a specific area (such as a shop).

Mobile Edge Applications
• The throughput guidance application uses the internal RNIS algorithm to deliver throughput guidance for specific IP addresses on the server side, or according to the domain names of the servers. The application can be configured with the period of such throughput guidance updates per target.
• The DDoS mitigation application monitors traffic originating from connected devices for specific DDoS attacks on different layers (IP layer: ICMP flooding, IP scanning, ping of death; TCP/UDP layer: TCP SYN attacks, UDP message flooding; application layer). Devices that are detected as generating DDoS traffic are reported to network management, and traffic from these devices can be locally stopped, or the device can be remotely disabled by the network core.

Application Enablement Layer

The Application Enablement layer consists of AWS Greengrass hosted on the MEC node side. AWS Greengrass is designed to support IoT solutions that connect different
types of devices with the cloud and with each other. It also runs local functions and parts of applications at the network edge. Devices that run Linux and support ARM or x86 architectures can host the AWS Greengrass Core. The AWS Greengrass Core enables the local execution of AWS Lambda code, messaging, data caching, and security.

Devices running the AWS Greengrass Core act as a hub that can communicate with other devices that have the AWS IoT Device SDK installed, such as microcontroller-based devices or large appliances. These AWS Greengrass Core devices and the AWS IoT Device SDK-enabled devices can be configured to communicate with one another in a Greengrass Group. If the AWS Greengrass Core device loses its connection to the cloud, devices in the Greengrass Group can continue to communicate with each other over the local network. A Greengrass Group represents a localized assembly of devices; for example, it may represent one floor of a building, one truck, or one home.

AWS Greengrass builds on AWS IoT and AWS Lambda, and it can also access other AWS services. It is built for offline operation and greatly simplifies the implementation of local processing. Code running in the field can collect, filter, and aggregate freshly collected data and then push it up to the cloud for long-term storage and further aggregation. Further, code running in the field can also take action very quickly, even in cases where connectivity to the cloud is temporarily unavailable.

AWS Greengrass has two constituent parts: the AWS Greengrass Core and the IoT Device SDK. Both of these components run on on-premises hardware out in the field. The AWS Greengrass Core is designed to run on devices that have at least 128 MB of memory and an x86 or ARM CPU running at 1 GHz or better, and it can take advantage of additional resources if available. It runs Lambda functions locally, interacts with the AWS Cloud, manages security and authentication, and communicates with the other devices under its purview.
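As a purely illustrative sketch of this local execution model, the following shows a Lambda-style handler of the kind a Greengrass Core could run at the edge. The event shape, device name, and temperature threshold are invented for this example, and the local republish call is shown only as a comment so the sketch stays self-contained.

```python
# Illustrative sketch (not from the whitepaper): a Lambda-style handler of the
# kind AWS Greengrass runs locally on the Core device. It aggregates a batch of
# sensor readings and raises an alert entirely at the edge, so it keeps working
# even when the uplink to the cloud is down. The event shape and the 80 °C
# threshold are hypothetical.
TEMP_ALERT_C = 80.0  # hypothetical alert threshold

def function_handler(event, context):
    """Handle one locally routed MQTT message containing sensor readings."""
    readings = event.get("readings", [])
    avg = sum(readings) / len(readings) if readings else None
    result = {
        "device": event.get("device"),
        "avg_temp_c": avg,
        "alert": avg is not None and avg > TEMP_ALERT_C,
    }
    # In a real deployment the result would be republished on a local topic via
    # the Greengrass SDK, e.g. client.publish(topic=..., payload=...);
    # omitted here to keep the sketch self-contained.
    return result

print(function_handler({"device": "press-7", "readings": [78.0, 84.0]}, None))
```

Because the aggregation and threshold check run on the Core itself, the decision survives a cloud outage; only long-term storage waits for connectivity to return.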
The IoT Device SDK is used to build the applications on devices connected to the AWS Greengrass Core device (generally via a LAN or other local connection). These applications capture data from sensors, subscribe to MQTT topics, and use AWS IoT device shadows to store and retrieve state information.

AWS Greengrass features include:
• Local support for AWS Lambda – AWS Greengrass includes support for AWS Lambda and AWS IoT device shadows. With AWS Greengrass, you can run AWS Lambda functions right on the device to execute code quickly.
• Local support for AWS IoT device shadows – AWS Greengrass also includes the functionality of AWS IoT device shadows. The device shadow caches the state of your device, like a virtual version or “shadow,” and tracks the device’s current versus desired state.
• Local messaging and protocol adapters – AWS Greengrass enables messaging between devices on a local network, so they can communicate with each other even when there is no connection to AWS. With AWS Greengrass, devices can process messages and deliver them to other devices or to AWS IoT based on business rules that the user defines. Devices that communicate via the popular industrial protocol OPC UA are supported by the AWS Greengrass protocol adapter framework and the out-of-the-box OPC UA protocol module. Additionally, AWS Greengrass provides a protocol adapter framework to implement support for custom, legacy, and proprietary protocols.
• Local resource access – AWS Lambda functions deployed on an AWS Greengrass Core can access local resources that are attached to the device. This allows you to use serial ports; USB peripherals such as add-on security devices, sensors, and actuators; on-board GPUs; or the local file system to quickly access and process local data.
• Local machine learning inference – Allows you to locally run an ML model that’s built and trained in the cloud. With the hardware acceleration available in the MEC infrastructure layer,
this feature provides a powerful mechanism for solving machine learning tasks at the local edge, for example discovering patterns in data, building computer vision systems, and running anomaly detection and predictive maintenance algorithms.

AWS Greengrass has a growing list of features. Current features are shown in Figure 9.

Figure 9: AWS Greengrass features

AWS Greengrass on the MEC node acts as a pivot point. It integrates the MEC platform with the AWS IoT solution and other AWS services, providing a powerful application enablement environment for developing, deploying, and managing MEC-assisted applications at scale.

The figure below illustrates the current portfolio of AWS services that enable a seamless IoT pipeline: from endpoints connecting via Amazon FreeRTOS or the IoT SDK, through MQTT or OPC UA, to edge gateways that host AWS Greengrass and Lambda functions providing data processing capabilities at the edge, up to the cloud-hosted AWS IoT Core, AWS IoT Device Management, AWS IoT Device Defender, and AWS IoT Analytics services, as well as enterprise applications.

Figure 10: AWS services that enable a seamless IoT pipeline

1 In a telecommunications network, the backhaul portion of the network comprises the intermediate links between the core network, or backbone network, and the small subnetworks at the ""edge""",General,consultant,Best Practices
A_Practical_Guide_to_Cloud_Migration_Migrating_Services_to_AWS,Archived A Practical Guide to Cloud Migration: Migrating Services to AWS, December 2015. This paper has been archived. For the latest technical content see: https://docs.aws.amazon.com/prescriptive-guidance/latest/mrp-solution/mrp-solution.pdf

ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 2 of 13

© 2015 Amazon Web Services Inc or its affiliates All rights reserved

Notices

This document is
provided for informational purposes only. It represents AWS’s current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Abstract 3
Introduction 4
AWS Cloud Adoption Framework 4
Manageable Areas of Focus 4
Successful Migrations 5
Breaking Down the Economics 6
Understand On-Premises Costs 6
Migration Cost Considerations 8
Migration Options 10
Conclusion 12
Further Reading 13
Contributors 13

Abstract

To achieve the full benefits of moving applications to the Amazon Web Services (AWS) platform, it is critical to design a cloud migration model that delivers optimal cost efficiency. This includes establishing a compelling business case, acquiring new skills within the IT organization, implementing new business processes, and defining the application migration methodology to transform your business model from a traditional on-premises computing platform to a cloud infrastructure.

Introduction

Cloud-based computing introduces a radical shift in how technology is obtained, used, and managed, as well as how organizations budget and pay for technology services. With the AWS cloud platform, project teams can easily configure the
virtual network using their AWS account to launch new computing environments in a matter of minutes. Organizations can optimize spending with the ability to quickly reconfigure the computing environment to adapt to changing business requirements. Capacity can be automatically scaled, up or down, to meet fluctuating usage patterns. Services can be temporarily taken offline or shut down permanently as business demands dictate. In addition, with pay-per-use billing, AWS services become an operational expense rather than a capital expense.

AWS Cloud Adoption Framework

Each organization will experience a unique cloud adoption journey, but all benefit from a structured framework that guides them through the process of transforming their people, processes, and technology. The AWS Cloud Adoption Framework (AWS CAF) offers structure to help organizations develop an efficient and effective plan for their cloud adoption journey. Guidance and best practices prescribed within the framework can help you build a comprehensive approach to cloud computing across your organization, throughout your IT lifecycle.

Manageable Areas of Focus

The AWS CAF breaks down the complicated planning process into manageable areas of focus. Perspectives represent top-level areas of focus spanning people, process, and technology. Components identify specific aspects within each Perspective that require attention, while Activities provide prescriptive guidance to help build actionable plans. The AWS Cloud Adoption Framework is flexible and adaptable, allowing organizations to use Perspectives, Components, and Activities as building blocks for their unique journey.

Business Perspective: Focuses on identifying, measuring, and creating business value using technology services. The Components and Activities within the Business Perspective can help you develop a business case for cloud, align business and technology strategy, and support
stakeholder engagement.

Platform Perspective: Focuses on describing the structure and relationship of technology elements and services in complex IT environments. Components and Activities within the Perspective can help you develop conceptual and functional models of your IT environment.

Maturity Perspective: Focuses on defining the target state of an organization's capabilities, measuring maturity, and optimizing resources. Components within the Maturity Perspective can help assess the organization's maturity level, develop a heat map to prioritize initiatives, and sequence initiatives to develop the roadmap for execution.

People Perspective: Focuses on the organizational capacity, capability, and change management functions required to implement change throughout the organization. Components and Activities in the Perspective assist with defining capability and skill requirements, assessing the current organizational state, acquiring necessary skills, and organizational realignment.

Process Perspective: Focuses on managing portfolios, programs, and projects to deliver the expected business outcomes on time and within budget, while keeping risks at acceptable levels.

Operations Perspective: Focuses on enabling the ongoing operation of IT environments. Components and Activities guide operating procedures, service management, change management, and recovery.

Security Perspective: Focuses on helping organizations achieve risk management and compliance goals, with guidance enabling rigorous methods to describe the structure of security and compliance processes, systems, and personnel. Components and Activities assist with assessment, control selection, and compliance validation, with DevSecOps principles and automation.

Successful Migrations

The path to the cloud is a journey to business results. AWS has helped hundreds of customers achieve their business goals at every stage of their journey. While every organization’s path will be unique, there are common patterns, approaches, and best practices that can be implemented to
streamline the process:

1. Define your approach to cloud computing, from business case to strategy to change management to technology.
2. Build a solid foundation for your enterprise workloads on AWS by assessing and validating your application portfolio and integrating your unique IT environment with solutions based on AWS cloud services.
3. Design and optimize your business applications to be cloud-aware, taking direct advantage of the benefits of AWS services.
4. Meet your internal and external compliance requirements by developing and implementing automated security policies and controls based on proven, validated designs.

Early planning, communication, and buy-in are essential. Understanding the forcing function (time, cost, availability, etc.) is key, and it will be different for each organization. When defining the migration model, organizations must have a clear strategy, map out a realistic project timeline, and limit the number of variables and dependencies for transitioning on-premises applications to the cloud. Throughout the project, build momentum with key constituents through regular meetings and reporting to review the progress and status of the migration project, to keep people enthused while also setting realistic expectations about the availability timeframe.

Breaking Down the Economics

Understand On-Premises Costs

Having a clear understanding of your current costs is an important first step of your journey. This provides the baseline for defining the migration model that delivers optimal cost efficiency. On-premises data centers have costs associated with the servers, storage, networking, power, cooling, physical space, and IT labor required to support applications and services running in the production environment. Although many of these costs will be eliminated or reduced after applications and infrastructure are moved to the AWS platform, knowing your current run rate
will help determine which applications are good candidates to move to AWS, which applications need to be rewritten to benefit from cloud efficiencies, and which applications should be retired.

The following questions should be evaluated when calculating the cost of on-premises computing:

Understanding Costs: To build a migration model for optimal efficiency, it is important to accurately understand the current costs of running on-premises applications, as well as the interim costs incurred during the transition.

“Georgetown’s modernization strategy is not just about upgrading old systems; it is about changing the way we do business, building new partnerships with the community, and working to embrace innovation. Cloud has been an important component of this. Although we thought the primary driver would be cost savings, we have found that agility, innovation, and the opportunity to change paths is where the true value of the cloud has impacted our environment.

“Traditional IT models with heavy customization and sunk costs in capital infrastructures, where 90% of spend is just to keep the trains running, do not give you the opportunity to keep up and grow.” – Beth Ann Bergsmark, Interim Deputy CIO and AVP, Chief Enterprise Architect, Georgetown University

• Labor: How much do you spend on maintaining your environment (broken disks, patching hosts, servers going offline, etc.)?
• Network: How much bandwidth do you need? What is your bandwidth peak-to-average ratio? What are you assuming for network gear? What if you need to scale beyond a single rack?
• Capacity: What is the cost of over-provisioning for peak capacity? How do you plan for capacity? How much buffer capacity are you planning on carrying? If small, what is your plan if you need to add more? What if you need less capacity? What is your plan to be able to scale down costs? How many servers have you added in the past year?
Anticipating next year?
• Availability/Power: Do you have a disaster recovery (DR) facility? What was your power utility bill for your data center(s) last year? Have you budgeted for both average and peak power requirements? Do you have separate costs for cooling/HVAC? Are you accounting for 2N power? If not, what happens when you have a power issue in your rack?
• Servers: What is your average server utilization? How much do you over-provision for peak load? What is the cost of over-provisioning?
• Space: Will you run out of data center space? When is your lease up?

Migration Cost Considerations

To achieve the maximum benefits of adopting the AWS cloud platform, new work practices that drive efficiency and agility will need to be implemented:
• IT staff will need to acquire new skills.
• New business processes will need to be defined.
• Existing business processes will need to be modified.

Migration Bubble

AWS uses the term “migration bubble” to describe the time and cost of moving applications and infrastructure from on-premises data centers to the AWS platform. Although the cloud can provide significant savings, costs may increase as you move into the migration bubble. It is important to plan the migration to coincide with hardware retirement, license and maintenance expiration, and other opportunities to reduce cost. The savings and cost avoidance associated with a full, all-in migration to AWS will allow you to fund the migration bubble and even shorten its duration by applying more resources when appropriate.

Figure 1: Migration Bubble

Level of Effort

The cost of migration has many levers that can be pulled to speed up or slow down the process, including labor, process, tooling, consulting, and technology. Each of these has a corresponding cost associated with it, based on the level of effort required to move the application to the AWS platform.
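To make the migration-bubble economics concrete, here is a back-of-the-envelope sketch of how cumulative run-rate savings repay the one-time bubble cost. All dollar figures are invented placeholders, not AWS pricing, and the linear-savings model is a simplifying assumption.

```python
# Rough sketch of the migration-bubble arithmetic described above: compare the
# annual on-premises run rate with the projected AWS run rate, and compute how
# many months of savings it takes to repay the one-time bubble cost (duplicate
# environments, training, consulting, tooling, lease penalties). All numbers
# below are hypothetical.

def payback_months(onprem_annual: float, aws_annual: float,
                   bubble_cost: float) -> float:
    """Months until cumulative run-rate savings repay the migration bubble."""
    monthly_savings = (onprem_annual - aws_annual) / 12
    if monthly_savings <= 0:
        raise ValueError("no run-rate savings; payback must come from elsewhere")
    return bubble_cost / monthly_savings

# Example: $1.2M/yr on-premises, $0.8M/yr on AWS, $200K one-time bubble cost.
print(round(payback_months(1_200_000, 800_000, 200_000), 1))
```

A real TCO exercise would also model the overlap period in which both environments run at once, which is exactly why the whitepaper recommends timing migration to hardware retirement and license expiration.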
Figure 1 depicts the migration bubble, whose cost components include planning and assessment, duplicate environments, staff training, migration consulting, third-party tooling, lease penalties, and operation and optimization.

To calculate a realistic total cost of ownership (TCO), you need to understand what these costs are and plan for them. Cost considerations include items such as:

• Labor: During the transition, existing staff will need to continue to maintain the production environment, learn new skills, and decommission the old infrastructure once the migration is complete. Additional labor costs in the migration bubble include staff time to plan and assess project scope and to build the project plan to migrate applications and infrastructure, and retaining consulting partners with the expertise to streamline the migration of applications and infrastructure as well as to train staff in new skills. Due to the general lack of cloud experience in most organizations, it is often necessary to bring in outside consulting support to help guide the process.
• Process: Penalty fees associated with the early termination of contracts (facilities, software licenses, etc.) may be incurred once applications or infrastructure are decommissioned, along with the cost of tooling to automate the migration of data and virtual machines from on-premises to AWS.
• Technology: Duplicate environments will be required to keep production applications and infrastructure available while transitioning to the AWS platform. Cost considerations include the cost to maintain the production environment during migration, the cost of AWS platform components to run new cloud-based applications, and the licensing of automated migration tools to accelerate the migration process.

“I wanted to move to a model where we can deliver more to our citizens and reduce the cost of delivering those services to them. I wanted
a product line that has the ability to scale and grow with my department. AWS was an easy fit for us and the way we do business.” – Chris Chiancone, CIO, City of McKinney

City of McKinney, Texas, Turns to AWS to Deliver More Advanced Services for Less Money

The City of McKinney, Texas, about 15 miles north of Dallas and home to 155,000 people, was ranked the No. 1 Best Place to Live in 2014 by Money Magazine. The city’s IT department is going all in on AWS and uses the platform to run a wide range of services and applications, such as its land management and records management systems. By using AWS, the city’s IT department can focus on delivering new and better services for its fast-growing population and city employees, instead of spending resources buying and maintaining IT infrastructure. The City of McKinney chose AWS for the ability to scale and grow with the needs of the city’s IT department. AWS provides an easy fit for the way the city does business. Without having to own the infrastructure, the City of McKinney has the ability to use cloud resources to address business needs. By moving from a CapEx to an OpEx model, the city can now return funds to critical city projects.

Migration Options

Once you understand the current costs of an on-premises production system, the next step is to identify applications that will benefit from cloud cost and efficiencies. Applications are either critical or strategic. If they do not fit into either category, they should be taken off the priority list. Instead, categorize these as legacy applications and determine if they need to be replaced or, in some cases, eliminated. Figure 2 illustrates decision points that should be considered in

“A university is really a small city, with departments running about 1,000 diverse small services across the university. We made the decision to go down the cloud journey and have been working with AWS for the past 4
years. In building our business case, we wanted the ability to give our customers flexible IT services that were cost neutral.

“We embraced a cloud-first strategy, with all new services built in the cloud. In parallel, we are migrating legacy services to the AWS platform, with the goal of moving 80% of these applications by the end of 2017.” – Mike Chapple, PhD, Senior Director, IT Services Delivery, University of Notre Dame

selecting applications to move to the AWS platform, focusing on the “6 Rs”: retire, retain, re-host, re-platform, re-purchase, and re-factor.

Figure 2: Migration Options (decision flow: discover and assess the application portfolio; determine the migration path for each application – decommission, lift and shift with minimal change, re-platform, refactor/re-architect for AWS, or move to a vendor SaaS/PaaS if available; then plan migration and sequencing, migrate applications and data, perform UAT testing and sign-off, and operate)

Applications that deliver increased ROI through reduced operating costs, or that deliver increased business results, should be at the top of the priority list. Then you can determine the best migration path for each workload to optimize cost in the migration process.

Conclusion

Many organizations are extending or moving their business applications to AWS to simplify infrastructure management, deploy more quickly, provide greater availability, increase agility, allow for faster innovation, and lower cost. Having
a clear understanding of your existing infrastructure costs, the components of your migration bubble and their corresponding costs, and the projected savings will help you calculate payback time and projected ROI.

With a long history of enabling enterprises to successfully adopt cloud computing, Amazon Web Services delivers a mature set of services specifically designed for the unique security, compliance, privacy, and governance requirements of large organizations. With a technology platform that is both broad and deep, Professional Services and Support organizations, robust training programs, and an ecosystem tens of thousands strong, AWS can help you move faster and do more. With AWS you can:
• Take advantage of more services, storage options, and security controls than any other cloud platform.
• Deliver on stringent standards with the broadest set of certifications, accreditations, and controls in the industry.
• Get deep assistance from our global, cloud-focused enterprise professional services, support, and training teams.

Further Reading

For additional help, please consult the following sources:
• The AWS Cloud Adoption Framework: http://d0.awsstatic.com/whitepapers/aws_cloud_adoption_framework.pdf

Contributors

The following individuals and organizations contributed to this document:
• Blake Chism, Practice Manager, AWS Public Sector Sales Var
• Carina Veksler, Public Sector Solutions, AWS Public Sector Sales Var,General,consultant,Best Practices
Active_Directory_Domain_Services_on_AWS,This version has been archived For the latest version of this document visit: https://docs.aws.amazon.com/whitepapers/latest/active-directory-domain-services/active-directory-domain-services.html

Active Directory Domain Services on AWS: Design and Planning Guide
November 20, 2020
activedirectorydomainservices/activedirectory domainserviceshtmlNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlContents Importance of Active Directory in the cloud 1 Terminology and definitions 1 Shared responsibility model 3 Direct ory services options in AWS 4 AD Connector 4 AWS Managed Microsoft Active Directory 5 Active Directory on EC2 7 Comparison of Active Directory Services on AWS 7 Core infrastructure design on AWS for Windows Workloads and Directory Services 9 Planning AWS accounts and Organization 9 Network design considerations for AWS Managed Microsoft AD 9 Design consideration for AWS Managed Micro soft Active Directory 12 Single account AWS Region and VPC 12 Multiple accounts and VPCs in one AWS Region 13 Multiple AWS Regions deploymen t 14 Enable Multi Factor Authentication for AWS Managed Microsoft AD 16 Active Directory permissions delegation 17 Design considerations for running Active Directory on EC2 instances 18 Single Region deployment 18 Multi region/global deployment of self managed AD 20 Designing Active Directory sites and services topology 21 
Security considerations 22
Trust relationships with on-premises Active Directory 22
Multi-factor authentication 24
AWS account security 24
Domain controller security 24
Other considerations 25
Conclusion 26
Contributors 26
Further Reading 27
Document Revisions 27

Abstract

Cloud is now the center of most enterprise IT strategies. Many enterprises find that a well-planned move to the cloud results in an immediate business payoff. Active Directory is a foundation of the IT infrastructure for many large enterprises. This whitepaper covers best practices for designing Active Directory Domain Services (AD DS) architecture in Amazon Web Services (AWS), including AWS Managed Microsoft AD, Active Directory on Amazon Elastic Compute Cloud (Amazon EC2) instances, and hybrid scenarios.

Importance of Active Directory in the cloud

Microsoft Active Directory was introduced in 1999 and became the de facto standard technology for centralized management of Microsoft Windows computers and user authentication. Active Directory serves as a distributed, hierarchical data store for information about corporate IT infrastructure, including Domain Name System (DNS) zones and records, devices and users, user credentials, and access rights based on group membership. Currently, 95% of enterprises use Active Directory for authentication. Successful adoption of cloud technology requires considering
existing IT infrastructure and applications deployed on premises. A reliable and secure Active Directory architecture is a critical IT infrastructure foundation for companies running Windows workloads.

Terminology and definitions

AWS Managed Microsoft Active Directory: AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, is Microsoft Windows Server Active Directory Domain Services (AD DS) deployed and managed by AWS for you. The service runs on actual Windows Server for the highest possible fidelity and provides the most complete implementation of AD DS functionality of cloud-managed AD DS services available today.

Active Directory Connector (AD Connector) is a directory gateway (proxy) that redirects directory requests from AWS applications and services to an existing Microsoft Active Directory without caching any information in the cloud. It does not require any trusts or synchronization of user accounts.

Active Directory Trust: A trust relationship (also called a trust) is a logical relationship established between domains to allow authentication and authorization to shared resources. The authentication process verifies the identity of the user. The authorization process determines what the user is permitted to do on a computer system or network.

Active Directory Sites and Services: In Active Directory, a site represents a physical or logical entity that is defined on the domain controller. Each site is associated with an Active Directory domain. Each site also has IP definitions for what IP addresses and ranges belong to that site. Domain controllers use site information to inform Active Directory clients about domain controllers present within the closest site to the client.

Amazon Virtual
Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including the selection of your own private IP address ranges, creation of subnets, and configuration of route tables and network gateways. You can also create a hardware Virtual Private Network (VPN) connection between your corporate data center and your VPC to leverage the AWS Cloud as an extension of your corporate data center.

AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your data center, office, or colocation environment.

AWS Single Sign-On (AWS SSO) is a cloud SSO service that makes it easy to centrally manage SSO access to multiple AWS accounts and business applications. With AWS SSO, you can easily manage SSO access and user permissions to all of your accounts in AWS Organizations centrally.

AWS Transit Gateway is a service that enables customers to connect their VPCs and their on-premises networks to a single gateway.

Domain controller (DC) – an Active Directory server that responds to authentication requests and stores a replica of the Active Directory database.

Flexible Single Master Operation (FSMO) roles: In Active Directory, some critical updates are performed by a designated domain controller with a specific role and then replicated to all other DCs. Active Directory uses roles that are assigned to DCs for these special tasks. Refer to the Microsoft documentation website for more information on FSMO roles.

Global Catalog: A global catalog server is a domain controller that stores partial copies of all Active Directory objects in the forest. It stores a complete copy of all objects in the directory of your domain and a partial copy of all objects of all other forest domains.
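To make a few of the AWS terms above concrete (Amazon VPC, subnets, and AWS Managed Microsoft AD): once the network exists, provisioning a managed directory is essentially a single API call. The sketch below is illustrative only and is not part of the original whitepaper; it assembles the request for the CreateMicrosoftAD operation as exposed by boto3's `ds` client. Every identifier and the password placeholder are hypothetical, and the API call itself (which requires AWS credentials) is left commented out.

```python
# Illustrative sketch only: assembling a CreateMicrosoftAD request.
# All identifiers below are hypothetical placeholders.

def microsoft_ad_params(name, vpc_id, subnet_ids, edition="Standard"):
    """Build the parameter set for ds.create_microsoft_ad.

    AWS Managed Microsoft AD deploys one domain controller per subnet
    and requires at least two subnets in different Availability Zones.
    """
    if len(subnet_ids) < 2:
        raise ValueError("Provide at least two subnets in separate AZs")
    return {
        "Name": name,                    # fully qualified domain name
        "Password": "<Admin-password>",  # for the delegated Admin account
        "Edition": edition,              # "Standard" or "Enterprise"
        "VpcSettings": {"VpcId": vpc_id, "SubnetIds": list(subnet_ids)},
    }

params = microsoft_ad_params("corp.example.com", "vpc-0abc123",
                             ["subnet-aaa", "subnet-bbb"])
# A real deployment would then call:
#   import boto3
#   boto3.client("ds").create_microsoft_ad(**params)
print(params["Edition"], len(params["VpcSettings"]["SubnetIds"]))
```

The `Edition` value corresponds to the Standard/Enterprise storage tiers discussed later in this document.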
Read-Only Domain Controller (RODC): Read-only domain controllers (RODCs) hold a copy of the AD DS database and respond to authentication requests, but applications or other servers cannot write to them. RODCs are typically deployed in locations where physical security cannot be provided.

VPC Peering: A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network.

Shared responsibility model

When operating in the AWS Cloud, security and compliance is a shared responsibility between AWS and the customer: AWS is responsible for security “of” the cloud, whereas customers are responsible for security “in” the cloud.

Figure 1: Shared Responsibility Model when operating in the AWS Cloud

AWS is responsible for securing its software, hardware, and the facilities where AWS services are located, including securing its computing, storage, networking, and database services. In addition, AWS is responsible for the security configuration of AWS managed services like Amazon DynamoDB, Amazon Relational Database Service (Amazon RDS), Amazon Redshift, Amazon EMR, Amazon WorkSpaces, and so on.

Customers are responsible for implementing appropriate access control policies using AWS Identity and Access Management (IAM), configuring AWS security groups (firewall) to prevent unauthorized access to ports, and enabling AWS CloudTrail. Customers are also responsible for enforcing appropriate data loss prevention policies to ensure compliance with internal and external policies, as well as detecting and remediating threats arising from stolen account credentials or malicious or accidental misuse of AWS.

If you decide to run your own Active Directory on Amazon EC2 instances, you have full administrative control of the operating system and the Active Directory environment. You can set up custom configurations and create a complex hybrid deployment topology. However, you must operate and support it in the same manner as you do with on-premises Active Directory.

If you use AWS Managed Microsoft AD, AWS provides instance deployment in one or multiple Regions, operational management of your directory, and monitoring, backup, patching, and recovery services. You configure the service and perform administrative management of users, groups, computers, and policies.

AWS Managed Microsoft AD has been audited and approved for use in deployments that require Federal Risk and Authorization Management Program (FedRAMP), Payment Card Industry Data Security Standard (PCI DSS), US Health Insurance Portability and Accountability Act (HIPAA), or Service Organization Control (SOC) compliance. When used with compliance requirements, it is your responsibility to configure the directory password policies and ensure that the entire application and infrastructure deployment meets your compliance requirements. For more information, see Manage Compliance for AWS Managed Microsoft AD.

Directory services options in AWS

AWS provides a comprehensive set of services and tools for deploying Microsoft Windows workloads on its reliable and secure cloud infrastructure. AWS Active Directory Connector (AD Connector) and AWS Managed Microsoft AD are fully managed services that allow you to connect AWS applications to an existing Active Directory or host a new Active Directory in the cloud. Together with the ability to deploy self-managed Active Directory on Amazon
EC2 instances, these services cover all cloud and hybrid scenarios for enterprise identity services.

AD Connector

AD Connector can be used in the following scenarios:

• Sign in to AWS applications such as Amazon Chime, Amazon WorkDocs, Amazon WorkMail, or Amazon WorkSpaces using corporate credentials. (See the list of compatible applications on the AWS Documentation site.)
• Enable access to the AWS Management Console with AD credentials. For large enterprises, AWS recommends using AWS Single Sign-On.
• Enable multi-factor authentication by integrating with your existing RADIUS-based MFA infrastructure.
• Join Windows EC2 instances to your on-premises Active Directory.

Note: Amazon RDS for SQL Server and Amazon FSx for Windows File Server are not compatible with AD Connector. Amazon RDS for SQL Server is compatible with AWS Managed Microsoft AD only; Amazon FSx for Windows File Server can be deployed with AWS Managed Microsoft AD or self-managed Active Directory.

AWS Managed Microsoft Active Directory

AWS Directory Service lets you run Microsoft Active Directory as a managed service. By default, each AWS Managed Microsoft AD has a minimum of two domain controllers, each deployed in a separate Availability Zone (AZ) for resiliency and fault tolerance. All domain controllers are exclusively yours, with nothing shared with any other AWS customer. AWS provides operational management to monitor, update, back up, and recover domain controller instances. You administer users, groups, computers, and group policies using standard Active Directory tools from a Windows computer joined to the AWS Managed Microsoft AD domain. AWS Managed Microsoft AD preserves the Windows single sign-on (SSO) experience for users who access AD DS integrated applications in a
hybrid IT environment. With AD DS trust support, your users can sign in once on premises and access Windows workloads running on-premises and in the cloud. You can optionally expand the scale of the directory by adding domain controllers, thereby enabling you to distribute requests to meet your performance requirements. You can also share the directory with any account and VPC.

Multi-Region replication can be used to automatically replicate your AWS Managed Microsoft AD directory data across multiple Regions, so you can improve performance for users and applications in dispersed geographic locations. AWS Managed Microsoft AD uses native AD replication to replicate your directory’s data securely to the new Region. Multi-Region replication is only supported for the Enterprise Edition of AWS Managed Microsoft AD.

AWS Managed Microsoft AD enables you to forward all domain controllers’ Windows Security event logs to Amazon CloudWatch, giving you the ability to monitor your use of the directory and any administrative intervention performed in the course of AWS operating the service. It is also approved for applications in the AWS Cloud that are subject to compliance with the US Health Insurance Portability and Accountability Act (HIPAA), Payment Card Industry Data Security Standard (PCI DSS), Federal Risk and Authorization Management Program (FedRAMP), or Service Organization Control (SOC) when you enable compliance for your directory.

You can also tailor security with features that enable you to manage password policies and enable secure LDAP communications through Secure Sockets Layer (SSL)/Transport Layer Security (TLS). You can also enable multi-factor authentication (MFA) for AWS Managed Microsoft AD. This authentication provides an additional
layer of security when users access AWS applications from the internet, such as Amazon WorkSpaces or Amazon QuickSight.

AWS Managed Microsoft AD enables you to extend your schema and perform LDAP write operations. These features, combined with advanced security features such as Kerberos constrained delegation and Group Managed Service Accounts, provide the greatest degree of compatibility for Active Directory-aware applications like Microsoft SharePoint, Microsoft SQL Server Always On Availability Groups, and many .NET applications. Because Active Directory is an LDAP directory, you can also use AWS Managed Microsoft AD for Linux Secure Shell (SSH) authentication and other LDAP-enabled applications. The full list of supported AWS applications is available on the AWS Documentation site.

AWS Managed Microsoft AD runs actual Windows Server 2012 R2 Active Directory Domain Services and operates at the 2012 R2 functional level. AWS Managed Microsoft AD is available in two editions: Standard and Enterprise. These editions have different storage capacity; Enterprise Edition also has multi-Region features.

Edition | Storage capacity | Approximate number of objects that can be stored* | Approximate number of users in domain*
Standard | 1 GB | ~30,000 | Up to ~5,000 users
Enterprise | 17 GB | ~500,000 | Over 5,000 users

* The number of objects varies based on the type of objects, schema extensions, number of attributes, and data stored in attributes.

Note: AWS Domain Administrators have full administrative access to all domains hosted on AWS. See your agreement with AWS and the AWS Data Privacy FAQ for more information about how AWS handles content that you store on AWS systems, including directory information. You do not have Domain or Enterprise Admin permissions and rely on
delegated groups for administration.

AWS Managed Microsoft AD can be used for the following scenarios: managing access to the AWS Management Console and cloud services, joining EC2 Windows instances to Active Directory, deploying Amazon RDS databases with Windows authentication, using FSx for Windows File Server, and signing in to productivity tools like Amazon Chime and Amazon WorkSpaces. For more information on this solution, see Design consideration for AWS Managed Microsoft Active Directory in this document.

Active Directory on EC2

If you prefer to extend your Active Directory to AWS and manage it yourself for flexibility or other reasons, you have the option of running Active Directory on EC2. For more information, see Design considerations for running Active Directory on EC2 instances in this document.

Comparison of Active Directory Services on AWS

The following table compares the features and functions of the various directory services options available on AWS. Many features are not applicable directly to AWS AD Connector because it acts only as a proxy to the existing Active Directory domain.

Function | AWS AD Connector | AWS Managed Microsoft AD | Active Directory on EC2
Managed service | yes | yes | no
Multi-Region deployment | n/a | yes (Enterprise Edition) | yes
Share directory with multiple accounts | no | yes | no
Supported by AWS applications (Amazon Chime, Amazon WorkSpaces, AWS Single Sign-On, etc.) | yes | yes | yes (through federation or AD Connector)
Supported by RDS (SQL Server, Oracle, MySQL, PostgreSQL, and MariaDB) | n/a | yes | no
Supported by FSx for Windows File Server | n/a | yes | yes
Creating users and groups | yes | yes | yes
Joining computers to the domain | yes | yes | yes
Create trusts
with existing Active Directory domains and forests | n/a | yes | yes
Seamless domain join for Windows and Linux EC2 instances | yes | yes | yes (with AWS AD Connector)
Schema extensions | n/a | yes | yes
Add domain controllers | n/a | yes | yes
Group Managed Service Accounts | n/a | yes | depends on the Windows Server version
Kerberos constrained delegation | n/a | yes | yes
Support Microsoft Enterprise CA | n/a | yes | yes
Multi-Factor Authentication | yes (through RADIUS) | yes (through RADIUS) | yes (with AD Connector)
Group policy | n/a | yes | yes
Active Directory Recycle Bin | n/a | yes | yes
PowerShell support | n/a | yes | yes

Core infrastructure design on AWS for Windows Workloads and Directory Services

Planning AWS accounts and Organization

AWS Organizations helps you centrally manage your AWS accounts, identity services, and access policies for your workloads on AWS. Whether you are a growing startup or a large enterprise, Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. For more information, refer to the AWS Organizations User Guide.

With AWS Organizations, you can centrally define critical resources and make them available to accounts across your organization. For example, you can authenticate against your central identity store and enable applications deployed in other accounts to access it. If your users need to manage AWS services and access AWS applications with their Active Directory credentials, we recommend integrating your identity service with the management account in AWS Organizations:

• Deploy AWS Managed AD in the management account with a trust to your on-premises Active Directory to allow users from any trusted domain to access AWS applications. Share AWS Managed AD to other accounts
across your organization.
• Deploy AWS Single Sign-On in the management account to centrally manage access to multiple AWS accounts and business applications, and provide users with single sign-on access to all their assigned accounts and applications from one place. AWS SSO also includes built-in integrations to many business applications, such as Salesforce, Box, and Microsoft Office 365.

Network design considerations for AWS Managed Microsoft AD

Network design for Microsoft workloads and directory services consists of network connectivity and DNS name resolution. To plan the network topology for your organization, refer to the whitepaper Building a Scalable and Secure Multi-VPC AWS Network Infrastructure and consider the following recommendations:

• Plan your IP networks for Microsoft workloads without overlapping address spaces. Microsoft does not recommend using Active Directory over NAT.
• Place directory services into a centralized VPC that is reachable from any other VPC with workloads depending on Active Directory.
• By default, instances that you launch into a VPC cannot communicate with your on-premises network. To extend your existing AD DS into the AWS Cloud, you must connect your on-premises network to the VPC in one of two ways: by using Virtual Private Network (VPN) tunnels or by using AWS Direct Connect. To connect multiple VPCs in AWS, you can use VPC peering or AWS Transit Gateway.

Network port requirements and security groups

Active Directory requires certain network ports to be open to allow traffic for LDAP, AD DS replication, user authentication, Windows Time services, Distributed File System (DFS), and more. When you deploy Active Directory on EC2 instances using the AWS Quick Start, or deploy AWS Managed Microsoft AD,
it automatically creates a new security group with all required port rules. If you manually deploy your Active Directory, you need to create a security group and configure rules for all required network protocols. For a complete list of ports, see Active Directory and Active Directory Domain Services Port Requirements in the Microsoft TechNet Library.

DNS name resolution

Active Directory relies heavily on DNS services and hosts its own DNS services on domain controllers. To establish seamless name resolution in all your VPCs and your on-premises network, create a Route 53 Resolver, deploy inbound/outbound endpoints in your VPC, and configure conditional forwarders to all of your Active Directory domains (including AWS Managed AD and on-premises Active Directory) in the Route 53 Resolver. Share centralized Route 53 Resolver endpoints across all VPCs in your organization. Create conditional forwarders on your on-premises DNS servers for all Route 53 DNS zones and DNS zones on AWS Managed AD, and point them to Route 53 Resolver endpoints.

Figure 2: Route 53 Resolver configuration for a hybrid network

Here are design considerations for DNS resolution:

• Make all Active Directory DNS domains resolvable for all clients, because clients use them to locate Active Directory services and register their DNS names using dynamic updates.
• Try to keep DNS name resolution local to the AWS Region to reduce latency.
• Use the Amazon DNS server (.2 resolver) as a forwarder for all other DNS domains that are not authoritative on your DNS servers on Active Directory domain controllers. This setup allows your DCs to recursively resolve records in Amazon Route 53 private zones and use Route 53 Resolver conditional forwarders.
• Use Route 53 Resolver
endpoints to create a DNS resolution hub and manage DNS traffic by creating conditional forwarders.

For more information on designing a DNS name resolution strategy in a hybrid scenario, see the Amazon Route 53 Resolver for Hybrid Clouds blog post.

Note: The Amazon EC2 instance limits the number of packets that can be sent to the Amazon-provided DNS server to a maximum of 1024 packets per second per network interface. This limit cannot be increased. If you run into this performance limit, you must set up conditional forwarding for Amazon Route 53 private zones to use the Amazon DNS server (.2 resolver) and use root hints for internet name resolution. This setup reduces the chances of you exceeding the 1024-packet limit on the AWS DNS resolver.

Design consideration for AWS Managed Microsoft Active Directory

Active Directory depends on the network and accounts design, so before you select the right Active Directory topology, you must choose your network and organizational design. Although there is no one-size-fits-all answer for how many AWS accounts a particular customer should have, most companies create more than one AWS account, as multiple accounts provide the highest level of resource and billing isolation, in the following cases:

• The business requires strong fiscal and budgetary billing isolation between specific workloads, business units, or cost centers.
• The business requires administrative isolation between workloads.
• The business requires a particular workload to operate within specific AWS service limits and not impact the limits of another workload.
• The business’s workloads depend on specific instance reservations to support high availability (HA) or disaster recovery (DR) capacity requirements.

Single account, AWS Region, and
VPC

The simplest case is when you need to deploy a new solution in the cloud from scratch. You can deploy AWS Managed Microsoft AD in minutes and use it for most of the services and applications that require Active Directory. This solution is ideal for scenarios with no additional requirements for logical isolation between application tiers or administrators.

Figure 3: Managed Active Directory architecture deployed by Quick Start

Multiple accounts and VPCs in one AWS Region

Large organizations use multiple AWS accounts for administrative delegation and billing purposes. You can share a single AWS Managed Microsoft AD with multiple AWS accounts within one AWS Region. This capability makes it easier and more cost effective for you to manage directory-aware workloads from a single directory across accounts and VPCs. This option also allows you to seamlessly join your Amazon EC2 Windows instances to AWS Managed Microsoft AD.

Figure 4: Sharing a single AWS Managed Microsoft AD with another account

AWS recommends that you create a separate account for identity services like Active Directory and only allow a very limited group of administrators to have access to this account. Generally, you should treat Active Directory in the cloud in the same manner as on-premises Active Directory: just as you would limit access to a physical data center, make sure to limit administrative access to the AWS account. Create additional AWS accounts as necessary in your organization and share the AWS
Managed Microsoft AD with them. After you have shared the service and configured routing, these users can use Active Directory to join EC2 Windows instances, but you maintain control of all administrative tasks.

Deploy AWS Managed AD in the management account of your AWS Organization. This allows you to use Managed AD for authentication with AWS Identity and Access Management (IAM) to access the AWS Management Console and other AWS applications using your Active Directory credentials.

Multiple AWS Regions deployment

AWS Managed Microsoft AD Enterprise Edition supports multi-Region deployment. You can use automated multi-Region replication in all Regions where AWS Managed Microsoft AD is available. AWS services such as Amazon RDS for SQL Server and Amazon FSx connect to the local instances of the global directory. This allows your users to sign in once to AD-aware applications running in AWS, as well as AWS services like Amazon RDS for SQL Server, in any AWS Region, using credentials from AWS Managed Microsoft AD or a trusted AD domain or forest. Refer to the AWS Directory Service documentation for the current list of AWS services supporting the multi-Region replication feature.

With multi-Region replication in AWS Managed Microsoft AD, AD-aware applications such as SharePoint and SQL Server Always On, and AWS services such as Amazon RDS for SQL Server and Amazon FSx for Windows File Server, use the directory locally for high performance and are multi-Region for high resiliency. The following list comprises additional benefits of multi-Region replication:

• It enables you to deploy a single AWS Managed Microsoft AD instance globally and quickly, eliminating the heavy lifting of self-managing a global AD infrastructure.
• Optimal performance for workloads
deployed in multiple Regions.
• Multi-Region resiliency: AWS Managed Microsoft AD handles automated software updates, monitoring, recovery, and the security of the underlying AD infrastructure across all Regions.
• Disaster recovery: In the event that all domain controllers in one Region are down, AWS Managed Microsoft AD recovers the domain controllers and replicates the directory data automatically. Meanwhile, domain controllers in other Regions are up and running.

To deploy AWS Managed Microsoft AD across multiple Regions, you must create it in a primary Region and after that add one or more replicated Regions. Consider the following factors for your Active Directory design:

• When you deploy a new Region, AWS Managed Microsoft AD creates two domain controllers in the selected VPC in the new Region. You can add more domain controllers later for scalability.
• AWS Managed Microsoft AD uses a backend network for replication and communications between domain controllers.
• AWS Managed Microsoft AD creates a new Active Directory site and gives it the same name as the Region, for example, us-east-1. You can also rename this later using the Active Directory Sites & Services tool.
• AWS Managed AD is configured to use change notifications for inter-site replication to eliminate replication delays.

After you add your new Region, you can do any of the following tasks:

• Add more domain controllers to the new Region for horizontal scalability.
• Share your directory with more AWS accounts per Region. Directory sharing configurations are not replicated from the primary Region, and you may have different sharing configurations in different Regions based on your security requirements.
• Enable log forwarding to retrieve your directory’s security logs using
Amazon CloudWatch Logs from the new Region. When you enable log forwarding, you must provide a log group name in each Region where you replicated your directory • Enable Amazon Simple Notification Service (Amazon SNS) monitoring for the new Region to track your directory's health status per Region.

Enable multi-factor authentication for AWS Managed Microsoft AD

You can enable multi-factor authentication (MFA) for your AWS Managed Microsoft AD to increase security when your users specify their Active Directory credentials to access supported Amazon enterprise applications. When you enable MFA, your users enter their user name and password (first factor) and then enter an authentication code (second factor) that they obtain from your virtual or hardware MFA solution. Together, these factors provide additional security by preventing access to your Amazon enterprise applications unless users supply valid user credentials and a valid MFA code. To enable MFA, you must have an MFA solution that is a Remote Authentication Dial-In User Service (RADIUS) server, or you must have an MFA plugin to a RADIUS server already implemented in your on-premises infrastructure. Your MFA solution should implement one-time passcodes (OTP) that users obtain from a hardware device or from software running on a device (such as a mobile phone).

Figure 6: Using AWS Managed Microsoft Active Directory with MFA for access to Amazon WorkSpaces

A more detailed description of this solution is available on the AWS Security Blog.

Active Directory permissions delegation

When you use AWS Managed Microsoft AD, AWS assumes responsibility for some of the service-level tasks so that you can focus on other business-critical tasks. The following service-level tasks are automatically performed by AWS: • Taking snapshots of the Directory Service and providing the ability to recover data • Creating trusts by administrator request • Extending the Active Directory schema by administrator request • Managing Active Directory forest configuration • Managing, monitoring, and updating domain controllers • Managing and monitoring the DNS service for Active Directory • Managing and monitoring Active Directory replication • Managing Active Directory sites and network configuration. With AWS Managed Microsoft AD, you may also delegate administrative permissions to groups in your organization. These permissions include managing user accounts, joining computers to the domain, managing group policies and password policies, and managing DNS, DHCP, DFS, RAS, CA, and other services. The full list of permissions that can be delegated is described in the AWS Directory Service Administration Guide. Work with all teams that use Active Directory services in your organization and create a list of all the permissions that must be delegated. Plan security groups for different administrative roles and use AWS Managed Microsoft AD delegated groups to assign permissions. Check the AWS Directory Service Administration Guide to make sure that it is possible to delegate all of the required permissions.

Design considerations for running Active Directory on EC2 instances

If you cannot use AWS Managed Microsoft AD and you have Windows workloads that you want to deploy on AWS, you can still run Active Directory on EC2 instances in AWS. Depending on the number of Regions where you are deploying your solution, your Active Directory design may differ slightly. The following section provides a deployment guide and recommendations on how you can
deploy Active Directory on EC2 instances in AWS.

Single Region deployment

This deployment scenario is applicable if you are operating in a single Region or you do not need Active Directory to be in more than a single Region. The deployment options or architecture patterns are not significantly different whether you are operating in a single VPC or multiple VPCs. If you are using multiple VPCs, you must ensure that network connectivity between the VPCs is available through VPC peering, VPN, or AWS Transit Gateway. The following diagrams depict how Active Directory can be deployed in a single Region in a single VPC or multiple VPCs.

Figure 7: Deploying Active Directory on EC2 instances in a single Region with a single VPC

Figure 8: Deploying Active Directory on EC2 instances in a single Region with multiple VPCs

Consider the following points when deploying Active Directory in this architecture: • We recommend deploying at least two domain controllers (DCs) in a Region. These domain controllers should be placed in different AZs for availability reasons • DCs and other non-internet-facing servers should be placed in private subnets • If you require additional DCs for performance reasons, you can add more DCs to existing AZs or deploy to another available AZ • Configure the VPCs in a Region as a single Active Directory site and define Active Directory subnets accordingly. This configuration ensures that all of your clients correctly select the closest available DC • If you have multiple VPCs, you can
centralize the Active Directory services in one of the existing VPCs or create a shared services VPC to centralize the domain controllers • You must ensure that you have highly available network connectivity between VPCs, such as VPC peering. If you are connecting the VPCs using VPNs or other methods, ensure that connectivity is highly available • If you want to use your self-managed Active Directory credentials to access AWS services or third-party services, you can integrate your self-managed AD with AWS IAM and AWS Single Sign-On using AWS AD Connector, or AWS Managed AD through a trust relationship. In these cases, AD Connector or AWS Managed AD must be deployed in the management account of your organization.

Multi-Region/global deployment of self-managed AD

If you are operating in more than one Region and require Active Directory to be available in these Regions, use the multi-Region/global deployment scenario. Within each of the Regions, use the guidelines for single Region deployment, as all of the single Region best practices still apply. The following diagram depicts how Active Directory can be deployed in multiple Regions. In this example, we show Active Directory deployed in three Regions that are interconnected using cross-Region VPC peering. In addition, these Regions are also connected to the corporate network using AWS Direct Connect and VPN.

Figure 9: Deploying Active Directory on EC2 instances in multiple Regions with multiple VPCs

Consider the following recommendations when deploying Active Directory in this architecture: • Deploy at least two domain controllers in each Region. These domain controllers should be placed in different AZs for availability reasons • Configure the VPCs in a Region as a single Active Directory site and define Active Directory subnets accordingly. This configuration ensures that all of your clients correctly select the closest available domain controller • Ensure that robust inter-Region connectivity exists between all of the Regions. Within AWS, you can leverage cross-Region VPC peering to achieve highly available private connectivity between the Regions. You can also leverage the Transit VPC solution to interconnect multiple Regions.

Designing Active Directory sites and services topology

It's important to define Active Directory sites and subnets correctly to prevent clients from using domain controllers that are located far away, which would increase latency. See How Domain Controllers are Located in Windows. Follow these best practices for configuring sites and services: • Configure one Active Directory site per AWS Region. If you are operating in multiple AWS Regions, we recommend configuring one Active Directory site for each of these Regions • Define the entire VPC as a subnet and assign it to the Active Directory site defined for this Region • If you have multiple VPCs in the same Region, define each of these VPCs as a separate subnet and assign it to the single Active Directory site set up for this Region. This setup allows you to use the domain controllers in that Region to service all clients in that Region • If you have enabled IPv6 in your Amazon VPC, create the necessary IPv6 subnet definitions and assign them to this Active Directory site • Define all IP address ranges. If clients exist in undefined IP address ranges, the clients might not be associated with the correct Active Directory site • If you have reliable, high-speed connectivity between all of the sites, you can use a single site
link for all of your AD sites and maintain a single replication configuration • Use consistent site names in all AD forests connected by trusts.

Security considerations

Trust relationships with on-premises Active Directory

Whether you are deploying Active Directory on EC2 instances or using AWS Managed Microsoft AD, these are the three common deployment patterns seen on AWS: 1. Deploy a standalone forest/domain on AWS with no trust: In this model, you set up a new forest and domain on AWS, which is separate from the Active Directory that is running on premises. In this deployment, both accounts (user credentials, service accounts) and resources (computer objects) reside in the Active Directory running on AWS, and most or all of the member servers run on AWS in single or multiple Regions. For this deployment, there is no network connectivity requirement between on premises and AWS for the purposes of Active Directory, as nothing is shared between the two Active Directory forests. 2. Deploy a new forest/domain on AWS with a one-way trust: If you are planning on leveraging credentials from an on-premises Active Directory on AWS member servers, you must establish at least a one-way trust to the Active Directory running on AWS. In this model, the AWS domain becomes the resource domain, where computer objects are located, and the on-premises domain becomes the account domain. Note: You must have robust connectivity between your data center and AWS. A connectivity issue can break authentication and make the whole solution inaccessible to users. Consider extending your Active Directory domains to AWS to eliminate the dependency on connectivity with on-premises infrastructure, or deploy a multi-path AWS Direct Connect or VPN
connection. 3. Extend your existing domain to AWS: In this model, you extend your existing Active Directory deployment from on premises to AWS, which means adding additional domain controllers (running on Amazon EC2) to your existing domain and placing them in multiple AZs within your Amazon VPC. If you are operating in multiple Regions, add domain controllers in each of these Regions. This deployment is easy and flexible, and provides the following advantages: o You are not required to set up additional trusts o DCs in AWS handle both accounts and resources o It is more resilient to network connectivity issues o You can seamlessly set up and use the AWS Cloud in a hybrid scenario with the least impact to your applications (Note that network connectivity is required between your data center and AWS for the initial and ongoing replication of data between the domain controllers.) When you use cross-forest trust relationships in Active Directory, you need to use consistent Active Directory site names in both forests for optimal performance. Refer to the article Domain Locator Across a Forest Trust for more information. See How Domain and Forest Trusts Work on the Microsoft Documentation website for more information.

Multi-factor authentication

Multi-factor authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when users sign in to the AWS Management Console, they are prompted for their user name and password (the first factor, what they "know") and then prompted for an authentication response from their AWS MFA device (the second factor, what they "have"). Taken together, these multiple factors provide increased security for your AWS account settings and resources
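The authentication response from a virtual MFA device is typically a time-based one-time password (TOTP, standardized in RFC 6238 on top of RFC 4226 HOTP). The following minimal Python sketch shows how such a second factor is derived from a shared secret and the current time; it is an illustration of the standard algorithm for readers unfamiliar with OTP mechanics, not AWS's or any RADIUS vendor's implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: int = None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", for_time // step)  # 8-byte big-endian time-step counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: 8-digit code at Unix time 59 for the ASCII secret below
print(totp(b"12345678901234567890", for_time=59, digits=8))  # "94287082"
```

The authentication server (for example, the RADIUS endpoint mentioned earlier) validates the code a user submits by performing the same computation for the current time window, which is why both sides only need to share the secret and a reasonably synchronized clock.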
We recommend enabling MFA on all of your privileged accounts, regardless of whether you are using IAM or federating through SSO.

AWS account security

Since you are running your domain controllers on Amazon EC2, securing your AWS account is an important part of securing your Active Directory domain. Follow these recommendations to make sure that your AWS account is secure: • Enable MFA on, and then lock away, your AWS root user credentials • Use IAM groups to manage permissions if you are using IAM users • Grant least privilege to all your users within AWS • Enable MFA for all privileged users • Use EC2 roles for applications that run on EC2 instances • Do not share access keys • Rotate credentials regularly • Turn on and analyze log files in AWS CloudTrail, VPC Flow Logs, and Amazon S3 bucket logs • Turn on encryption for data at rest and in transit where necessary.

Domain controller security

Domain controllers provide the physical storage for the AD DS database, in addition to providing the services and data that allow enterprises to effectively manage their servers, workstations, users, and applications. If privileged access to a domain controller is obtained by a malicious user, that user can modify, corrupt, or destroy the AD DS database and, by extension, all of the systems and accounts that are managed by Active Directory. Make sure your domain controllers are secure to avoid compromising your Active Directory data. The following points are some of the best practices for securing domain controllers running on AWS: • Secure the AWS account where the domain controllers are running by following least privilege and role-based access control • Ensure that unauthorized users don't have access in your AWS account to create or access Amazon Elastic Block Store
(Amazon EBS) snapshots, launch or terminate EC2 instances, or create or copy EBS volumes • Ensure that you are deploying your domain controllers in a private subnet without internet access. Ensure that the subnets where domain controllers are running don't have a route to a NAT gateway or another device that would provide outbound internet access • Keep the security patches on your domain controllers up to date. We recommend that you first test the security patches in a non-production environment • Restrict the ports and protocols that are allowed into the domain controllers by using security groups. Allow remote management, such as Remote Desktop Protocol (RDP), only from trusted networks • Leverage the Amazon EBS encryption feature to encrypt the root and additional volumes of your domain controllers, and use AWS Key Management Service (AWS KMS) for key management • Follow Microsoft's recommended security configuration baselines and Best Practices for Securing Active Directory.

Other considerations

FSMO Roles

You can follow the same recommendations that you would follow for your on-premises deployment to determine FSMO role placement on DCs. See also the best practices from Microsoft. In the case of AWS Managed Microsoft AD, all domain controllers and FSMO role assignments are managed by AWS and do not require you to manage or change them.

Global Catalog

Unless you have slow connections or an extremely large Active Directory database, we recommend adding the global catalog role to all of your domain controllers in multi-domain forests (except the domain controller with the Infrastructure Master role). If you are hosting Microsoft Exchange in the AWS Cloud, at least one global catalog server is required in a site with Exchange servers. For more information about the global catalog, see
Microsoft documentation. Since there is only one domain in the forest for AWS Managed Microsoft AD, all domain controllers are configured as global catalog servers and have full information about all objects.

Read-Only Domain Controllers (RODC)

It's possible to deploy RODCs on AWS if you are running Active Directory on EC2 instances and require them, and there are no special considerations for doing so. AWS Managed Microsoft AD does not support RODCs. All of the domain controllers that are deployed as part of AWS Managed Microsoft AD are writable domain controllers.

Conclusion

AWS provides several options for deploying and managing Active Directory Domain Services in cloud and hybrid environments. You can leverage AWS Managed Microsoft AD if you no longer want to focus on heavy lifting like managing the availability of the domain controllers, patching, backups, and so on. Or, you can run Active Directory on EC2 instances if you need full administrative control over your Active Directory. In this whitepaper, we have discussed these two main approaches to deploying Active Directory on AWS and have provided guidance and considerations for each design. Depending on your deployment pattern, scale requirements, and SLA, you may select one of these options to support your Windows workloads on AWS.

Contributors

Contributors to this document include: • Vladimir Provorov, Senior Solutions Architect, Amazon Web Services • Vinod Madabushi, Enterprise Solutions Architect, Amazon Web Services

Further Reading

For additional information, see: • AWS Whitepapers • AWS Directory Service • Microsoft Workloads on AWS • Active Directory Domain Services on the AWS Cloud: Quick Start Reference Deployment • AWS Documentation

Document Revisions
Date Description
November 2020 AWS Managed Microsoft AD multi-Region feature update
August 2020 Numerous updates throughout
December 2018 First publication,General,consultant,Best Practices Amazon_Aurora_Migration_Handbook,"This paper has been archived. For the latest Amazon Aurora migration content, refer to: https://d1.awsstatic.com/whitepapers/RDS/Migrating your databases to Amazon Aurora.pdf

Amazon Web Services Amazon Aurora Migration Handbook

Amazon Aurora Migration Handbook July 2020

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only; (b) represents current AWS product offerings and practices, which are subject to change without notice; and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. © 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 5 Database Migration Considerations 6 Migration Phases 7 Features and Compatibility 7 Performance 8 Cost 9 Availability and Durability 9 Planning and Testing a Database Migration 11 Homogeneous Migrations 11 Summary of Available Migration Methods 12 Migrating
Large Databases to Amazon Aurora 15 Partition and Shard Consolidation on Amazon Aurora 16 MySQL and MySQL-Compatible Migration Options at a Glance 17 Migrating from Amazon RDS for MySQL 18 Migrating from MySQL-Compatible Databases 23 Heterogeneous Migrations 26 Schema Migration 27 Data Migration 28 Example Migration Scenarios 28 Self-Managed Homogeneous Migrations 28 Multi-Threaded Migration Using mydumper and myloader 39 Heterogeneous Migrations 45 Testing and Cutover 46 Migration Testing 46 Cutover 47 Troubleshooting 49 Troubleshooting MySQL-Specific Issues 49 Conclusion 54 Contributors 55 Further Reading 56 Document Revisions 56

Abstract

This paper outlines the best practices for planning, executing, and troubleshooting database migrations from MySQL-compatible and non-MySQL-compatible database products to Amazon Aurora. It also teaches Amazon Aurora database administrators how to diagnose and troubleshoot common migration and replication errors.

Introduction

For decades, traditional relational databases have been the primary choice for data storage and persistence. These database systems continue to rely on monolithic architectures and were not designed to take advantage of cloud infrastructure. These monolithic architectures present many challenges, particularly in areas such as cost, flexibility, and availability. To address these challenges, AWS redesigned the relational database for the cloud infrastructure and introduced Amazon Aurora. Amazon Aurora is a MySQL-compatible relational database engine that combines the
speed, availability, and security of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Aurora provides up to five times better performance than MySQL and performance comparable to high-end commercial databases. Amazon Aurora is priced at one-tenth the cost of commercial engines. Amazon Aurora is available through the Amazon Relational Database Service (Amazon RDS) platform. Like other Amazon RDS databases, Aurora is a fully managed database service. With the Amazon RDS platform, most database management tasks, such as hardware provisioning, software patching, setup, configuration, monitoring, and backup, are completely automated. Amazon Aurora is built for mission-critical workloads and is highly available by default. An Aurora database cluster spans multiple Availability Zones (AZs) in a Region, providing out-of-the-box durability and fault tolerance for your data across physical data centers. An Availability Zone is composed of one or more highly available data centers operated by Amazon. AZs are isolated from each other and are connected through low-latency links. Each segment of your database volume is replicated six times across these AZs. Aurora cluster volumes automatically grow as the amount of data in your database increases, with no performance or availability impact, so there is no need to estimate and provision large amounts of database storage ahead of time. An Aurora cluster volume can grow to a maximum size of 64 terabytes (TB). You are only charged for the space that you use in an Aurora cluster volume. Aurora's automated backup capability supports point-in-time recovery of your data, enabling you to restore your database to any second during your retention period, up to the last five minutes. Automated
backups are stored in Amazon Simple Storage Service (Amazon S3), which is designed for 99.999999999% durability. Amazon Aurora backups are automatic, incremental, and continuous, and have no impact on database performance. For applications that need read-only replicas, you can create up to 15 Aurora Replicas per Aurora database with very low replica lag. These replicas share the same underlying storage as the source instance, lowering costs and avoiding the need to perform writes at the replica nodes. Amazon Aurora is highly secure and allows you to encrypt your databases using keys that you create and control through AWS Key Management Service (AWS KMS). On a database instance running with Amazon Aurora encryption, data stored at rest in the underlying storage is encrypted, as are the automated backups, snapshots, and replicas in the same cluster. Amazon Aurora uses SSL (AES-256) to secure data in transit. For a complete list of Aurora features, see Amazon Aurora. Given the rich feature set and cost-effectiveness of Amazon Aurora, it is increasingly viewed as the go-to database for mission-critical applications.

Database Migration Considerations

A database represents a critical component in the architecture of most applications. Migrating the database to a new platform is a significant event in an application's lifecycle and may have an impact on application functionality, performance, and reliability. You should take a few important considerations into account before embarking on your first migration project to Amazon Aurora. Migrations are among the most time-consuming and critical tasks handled by database administrators. Although the task has become easier with the advent of managed migration services such as AWS Database Migration Service, large-scale database
migrations still require adequate planning and execution to meet strict compatibility and performance requirements.

Migration Phases

Because database migrations tend to be complex, we advocate taking a phased, iterative approach.

Figure 1: Migration phases

This paper examines the following major contributors to the success of every database migration project: • Factors that justify the migration to Amazon Aurora, such as compatibility, performance, cost, and high availability and durability • Best practices for choosing the optimal migration method • Best practices for planning and executing a migration • Migration troubleshooting hints. This section discusses important considerations that apply to most database migration projects. For an extended discussion of related topics, see the Amazon Web Services (AWS) whitepaper Migrating Your Databases to Amazon Aurora.

Features and Compatibility

Although most applications can be architected to work with many relational database engines, you should make sure that your application works with Amazon Aurora. Amazon Aurora is designed to be wire-compatible with MySQL 5.5, 5.6, 5.7, and 8.0. Therefore, most of the code, applications, drivers, and tools that are used today with MySQL databases can be used with Aurora with little or no change. However, certain MySQL features, like the MyISAM storage engine, are not available with Amazon Aurora. Also, due to the managed nature of the Aurora service, SSH access to database nodes is restricted, which may affect your ability to install third-party tools or plugins on the database host. For more details, see Aurora on Amazon RDS in the Amazon Relational Database Service (Amazon RDS) User Guide.

Performance

Performance is often the key motivation behind database migrations. However, deploying your
database on Amazon Aurora can be beneficial even if your applications don't have performance issues. For example, Amazon Aurora scalability features can greatly reduce the amount of engineering effort that is required to prepare your database platform for future traffic growth. You should include benchmarks and performance evaluations in every migration project; for this reason, many successful database migration projects start with performance evaluations of the new database platform. Although the RDS Aurora Performance Assessment Benchmarking paper gives you a decent idea of overall database performance, these benchmarks do not emulate the data access patterns of your applications. For more useful results, test the database performance for time-sensitive workloads by running your queries (or a subset of your queries) on the new platform directly. Consider these strategies: • If your current database is MySQL, migrate to Amazon Aurora with downtime and performance test your database with a test or staging version of your application, or by replaying the production workload • If you are on a non-MySQL-compliant engine, you can selectively copy the busiest tables to Amazon Aurora and test your queries for those tables. This gives you a good starting point. Of course, testing after the complete data migration will provide a full picture of the real-world performance of your application on the new platform. Amazon Aurora delivers performance comparable with commercial engines and a significant improvement over MySQL performance. It does this by tightly integrating the database engine with an SSD-based virtualized storage layer designed for database workloads. This reduces writes to the storage system, minimizes lock contention, and eliminates delays created by database process
threads Our tests with SysBench on r38xlarge instances show that Amazon Aurora delivers over 585000 reads per second and 107000 writes per second five times higher than MySQL running the same benchmark on the same hardware One area where Amazon Aurora significantly improves upon traditional MySQL is highly concurrent workloads In order to maximize your workload’s throughput on Amazon Aurora we recommend architecting your applications to driv e a large number of concurrent queries Cost Amazon Aurora provides consistent high performance together with the security availability and reliability of a commercial database at one tenth the cost Owning and running databases come with associated cost s Before planning a database migration an analysis of the total cost of ownership (TCO ) of the new database platform is imperative Migration to a new database platform should ideally lower the total cost of ownership while providing your applications with similar or better features If you are running an open source database engine (MySQL Postgres) your costs are largely related to hardware server management and database management activities However if you are running a commercial database engine (Oracle SQL Server DB2 etc) a significant portion of your cost is database licensing Amazon Aurora can even be more cost efficient than open source databases because its high scalability helps you reduce the number of database instances that are required to handle the same workload For more details see the Amazon RDS for Aurora Pricing page Availability and Durability High availability and disaster recovery are important considerations for databases Your application may already have very strict recovery time objective (RTO) and recovery point objective (RPO) requirements Amazon Aurora can help you meet or exceed your availability goals by having the following components: This paper has been archived For the latest Ama zon Aurora Migration content refer to: 
1. Read Replicas: Increase read throughput to support high-volume application requests by creating up to 15 Aurora Replicas. Amazon Aurora Replicas share the same underlying storage as the source instance, lowering costs and avoiding the need to perform writes at the replica nodes. This frees up more processing power to serve read requests and reduces the replica lag time, often down to single-digit milliseconds. Aurora provides a reader endpoint so the application can connect without having to keep track of replicas as they are added and removed. Aurora also supports auto scaling, where it automatically adds and removes replicas in response to changes in performance metrics that you specify. Aurora supports cross-Region read replicas. Cross-Region replicas provide fast local reads to your users, and each Region can have an additional 15 Aurora Replicas to further scale local reads.

2. Global Database: You can choose between Global Database, which provides the best replication performance, and traditional binlog-based replication. You can also set up your own binlog replication with external MySQL databases. Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS Regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each Region, and provides disaster recovery from Region-wide outages.

3. Multi-AZ: Aurora stores copies of the data in a DB cluster across multiple Availability Zones in a single AWS Region, regardless of whether the instances in the DB cluster span multiple Availability Zones. For more information on Aurora, see Managing an Amazon Aurora DB Cluster. When data is written to the primary DB instance, Aurora synchronously replicates the data across Availability Zones to six storage nodes associated with your cluster volume. Doing so provides data redundancy, eliminates I/O freezes, and minimizes latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance and help protect your databases against failure and Availability Zone disruption.

For more information about durability and availability features in Amazon Aurora, see Aurora on Amazon RDS in the Amazon RDS User Guide.

Planning and Testing a Database Migration

After you determine that Amazon Aurora is the right fit for your application, the next step is to decide on a migration approach and create a database migration plan. Here are the suggested high-level steps:

1. Review the available migration techniques described in this document and choose one that satisfies your requirements.
2. Prepare a migration plan in the form of a step-by-step checklist. A checklist ensures that all migration steps are executed in the correct order and that the migration process flow can be controlled (e.g., suspended or resumed) without the risk of important steps being missed.
3. Prepare a shadow checklist with rollback procedures. Ideally, you should be able to roll the migration back to a known consistent state from any point in the migration checklist.
4. Use the checklist to perform a test migration, and take note of the time required to complete each step. If any missing steps are identified, add them to the checklist. If any issues are identified during the test migration, address them and rerun the test migration.
5. Test all rollback procedures. If any rollback procedure has not been tested successfully, assume that it will not work.
6. After you complete the test migration and become fully comfortable with the migration plan,
execute the migration.

Homogeneous Migrations

Amazon Aurora was designed as a drop-in replacement for MySQL 5.6. It offers a wide range of options for homogeneous migrations (i.e., migrations from MySQL and MySQL-compatible databases).

Summary of Available Migration Methods

This section lists common migration sources and the migration methods available to them, in order of preference. Detailed descriptions, step-by-step instructions, and tips for advanced migration scenarios are available in subsequent sections. A widely adopted method is to build an Aurora Read Replica that is asynchronously replicated from the source master (an Amazon RDS or self-managed MySQL database).

Figure 1: Common migration sources and migration methods for Amazon Aurora

Amazon RDS Snapshot Migration

Compatible sources:
• Amazon RDS for MySQL 5.6
• Amazon RDS for MySQL 5.1 and 5.5 (after upgrading to RDS for MySQL 5.6)

Feature highlights:
• Managed, point-and-click service available through the AWS Management Console
• Best migration speed and ease of use of all migration methods
• Can be used with binary log replication for near-zero migration downtime

For details, see Migrating Data from a MySQL DB Instance to an Amazon Aurora DB Cluster in the Amazon RDS User Guide.

Percona XtraBackup

Compatible sources and limitations:
• On-premises or self-managed MySQL 5.6 databases (including MySQL on Amazon EC2) can be migrated with near-zero downtime
• You can't restore into an existing RDS instance using this method
• The total size is limited to 6 TB
• User accounts, functions, and stored procedures are not imported automatically

Feature highlights:
• Managed backup ingestion from Percona XtraBackup files stored in an Amazon Simple Storage Service (Amazon S3) bucket
• High performance
• Can be used with binary log replication for near-zero migration downtime

For details, see Migrating Data from MySQL by using an Amazon S3 Bucket in the Amazon RDS User Guide.

Self-Managed Export/Import

Compatible sources:
• MySQL and MySQL-compatible databases such as MySQL, MariaDB, or Percona Server, including managed servers such as Amazon RDS for MySQL or MariaDB
• Non-MySQL-compatible databases

DMS Migration

Compatible sources:
• MySQL-compatible and non-MySQL-compatible databases

Feature highlights:
• Supports heterogeneous and homogeneous migrations
• Managed, point-and-click data migration service available through the AWS Management Console
• Schemas must be migrated separately
• Supports CDC replication for near-zero migration downtime

For details, see What Is AWS Database Migration Service?
in the AWS DMS User Guide.

For a heterogeneous migration, where you are migrating from a database engine other than MySQL to a MySQL database, AWS DMS is almost always the best migration tool to use. But for a homogeneous migration, where you are migrating from a MySQL database to a MySQL database, native tools can be more effective.

Using Any MySQL-Compatible Database as a Source for AWS DMS

Before you begin to work with a MySQL database as a source for AWS DMS, make sure that you meet the following prerequisites. These prerequisites apply to either self-managed or Amazon-managed sources.

You must have an account for AWS DMS that has the Replication Admin role. The role needs the following privileges:
• REPLICATION CLIENT: This privilege is required for change data capture (CDC) tasks only. In other words, full-load-only tasks don't require this privilege.
• REPLICATION SLAVE: This privilege is required for change data capture (CDC) tasks only. In other words, full-load-only tasks don't require this privilege.
• SUPER: This privilege is required only in MySQL versions before 5.6.6.

DMS highlights for non-MySQL-compatible sources:
• Requires manual schema conversion from the source database format into a MySQL-compatible format
• Data migration can be performed manually using a universal data format such as comma-separated values (CSV)
• Change data capture (CDC) replication might be possible with third-party tools for near-zero migration downtime

Migrating Large Databases to Amazon Aurora

Migration of large datasets presents unique challenges in every database migration project. Many successful large database migration projects use a combination of the following strategies:

• Migration with continuous replication: Large databases typically have extended downtime requirements while moving data from source to target. To reduce the downtime, you can first load baseline data from source to target and then enable replication (using MySQL native tools, AWS DMS, or third-party tools) for changes to catch up.
• Copy static tables first: If your database relies on large static tables with reference data, you may migrate these large tables to the target database before migrating your active dataset. You can leverage AWS DMS to copy tables selectively, or export and import these tables manually.
• Multiphase migration: Migration of a large database with thousands of tables can be broken down into multiple phases. For example, you may move a set of tables with no cross-join queries every weekend until the source database is fully migrated to the target database. Note that to achieve this, you need to make changes in your application to connect to two databases simultaneously while your dataset is on two distinct nodes. Although this is not a common migration pattern, it is an option nonetheless.
• Database cleanup: Many large databases contain data and tables that remain unused. In many cases, developers and DBAs keep backup copies of tables in the same database, or they simply forget to drop unused tables. Whatever the reason, a database migration project provides an opportunity to clean up the existing database before the migration. If some tables are not being used, you might either drop them or archive them to another database. You might also delete old data from large tables, or archive that data to flat files.

Partition and Shard Consolidation on Amazon Aurora

If you are running multiple shards or functional partitions of your database to achieve high performance, you have an opportunity to consolidate these
partitions or shards on a single Aurora database. A single Amazon Aurora instance can scale up to 64 TB, supports thousands of tables, and supports a significantly higher number of reads and writes than a standard MySQL database. Consolidating these partitions on a single Aurora instance not only reduces the total cost of ownership and simplifies database management, but also significantly improves the performance of cross-partition queries.

• Functional partitions: Functional partitioning means dedicating different nodes to different tasks. For example, in an e-commerce application, you might have one database node serving product catalog data and another database node capturing and processing orders. As a result, these partitions usually have distinct, non-overlapping schemas.
  o Consolidation strategy: Migrate each functional partition as a distinct schema to your target Aurora instance. If your source database is MySQL-compatible, use native MySQL tools to migrate the schema, and then use AWS DMS to migrate the data, either one time or continuously using replication. If your source database is non-MySQL-compatible, use the AWS Schema Conversion Tool to migrate the schemas to Aurora, and use AWS DMS for a one-time load or continuous replication.
• Data shards: If you have the same schema with distinct sets of data across multiple nodes, you are leveraging database sharding. For example, a high-traffic blogging service may shard user activity and data across multiple database shards while keeping the same table schema.
  o Consolidation strategy: Since all shards share the same database schema, you only need to create the target schema once. If you are using a MySQL-compatible database, use native tools to migrate the database schema to Aurora. If you are using a non-MySQL database, use the AWS Schema Conversion Tool to migrate the database schema to Aurora. Once the database schema has been migrated, it is best to stop writes to the database shards and use native tools or an AWS DMS one-time data load to migrate an individual shard to Aurora. If writes to the application cannot be stopped for an extended period, you might still use AWS DMS with replication, but only after proper planning and testing.

MySQL and MySQL-Compatible Migration Options at a Glance

Source database type: Amazon RDS MySQL
Migration with downtime:
• Option 1: RDS snapshot migration
• Option 2: Manual migration using native tools*
• Option 3: Schema migration using native tools and data load using AWS DMS
Near-zero downtime migration:
• Option 1: Migration using native tools + binlog replication
• Option 2: RDS snapshot migration + binlog replication
• Option 3: Schema migration using native tools + AWS DMS for data movement

Source database type: MySQL on Amazon EC2 or on-premises
Migration with downtime:
• Option 1: Schema migration with native tools + AWS DMS for data load
Near-zero downtime migration:
• Option 1: Schema migration using native tools + AWS DMS to move data

Source database type: Oracle/SQL Server
Migration with downtime:
• Option 1: AWS Schema Conversion Tool + AWS DMS (recommended)
• Option 2: Manual or third-party tool for schema conversion + manual or third-party data load in target
Near-zero downtime migration:
• Option 1: AWS Schema Conversion Tool + AWS DMS (recommended)
• Option 2: Manual or third-party tool for schema conversion

Migrating from Amazon RDS for MySQL

If you are migrating from an RDS MySQL 5.6 database (DB) instance, the recommended approach is to use the snapshot migration feature. Snapshot migration is a fully managed, point-and-click feature that is available through the AWS Management Console. You can use it to migrate an RDS MySQL 5.6 DB instance snapshot into a new Aurora DB cluster. It is the
fastest and easiest to use of all the migration methods described in this document. For more information about the snapshot migration feature, see Migrating Data to an Amazon Aurora DB Cluster in the Amazon RDS User Guide.

This section provides ideas for projects that use the snapshot migration feature. The list-style layout in our example instructions can help you prepare your own migration checklist.

Estimating Space Requirements for Snapshot Migration

When you migrate a snapshot of a MySQL DB instance to an Aurora DB cluster, Aurora uses an Amazon Elastic Block Store (Amazon EBS) volume to format the data from the snapshot before migrating it. There are some cases where additional space is needed to format the data for migration. The two features that can potentially cause space issues during migration are MyISAM tables and the ROW_FORMAT=COMPRESSED option. If you are not using either of these features in your source database, you can skip this section, because you should not have space issues.

During migration, MyISAM tables are converted to InnoDB, and any compressed tables are uncompressed. Consequently, there must be adequate room for the additional copies of any such tables. The size of the migration volume is based on the allocated size of the source MySQL database that the snapshot was made from. Therefore, if you have MyISAM or compressed tables that make up a small percentage of the overall database size, and there is available space in the original database, then the migration should succeed without encountering any space issues. However, if the original database would not have enough room to store a copy of the converted MyISAM tables as well as another (uncompressed) copy of the compressed tables, then the migration volume will not be big enough. In this situation, you would need to modify the source Amazon RDS MySQL database to increase the database size allocation to make room for the additional copies of these tables, take a new snapshot of the database, and then migrate the new snapshot.

When migrating data into your DB cluster, observe the following guidelines and limitations:
• Although Amazon Aurora supports up to 64 TB of storage, the process of migrating a snapshot into an Aurora DB cluster is limited by the size of the Amazon EBS volume of the snapshot, and is therefore limited to a maximum size of 6 TB. Non-MyISAM tables in the source database can be up to 6 TB in size. However, due to the additional space requirements during conversion, make sure that none of the MyISAM and compressed tables being migrated from your MySQL DB instance exceed 3 TB in size. For more information, see Migrating Data from an Amazon RDS MySQL DB Instance to an Amazon Aurora MySQL DB Cluster.

You might want to modify your database schema (convert MyISAM tables to InnoDB and remove ROW_FORMAT=COMPRESSED) prior to migrating it into Amazon Aurora. This can be helpful in the following cases:
• You want to speed up the migration process
• You are unsure of how much space you need to provision
• You have attempted to migrate your data and the migration has failed due to a lack of provisioned space

Make sure that you are not making these changes in your production Amazon RDS MySQL database, but rather on a database instance that was restored from your production snapshot. For more details, see Reducing the Amount of Space Required to Migrate Data into Amazon Aurora in the Amazon RDS User Guide.

The naming conventions used in this section are as follows:
• Source RDS DB instance refers to the RDS MySQL 5.6 DB instance that you are migrating from.
• Target Aurora DB cluster refers to the Aurora DB cluster that you are migrating to.
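The space guidelines above can be turned into a quick pre-migration check. The following is a minimal sketch, not part of any AWS tooling: the function name and its inputs (allocated size, used size, and per-table sizes for MyISAM and compressed tables) are hypothetical, and the checks simply restate the 6 TB volume limit, the 3 TB per-table guideline, and the need for free room for converted table copies.

```python
# Hypothetical pre-check for Aurora snapshot-migration space limits, based on
# the guidelines above: the migration volume is capped at 6 TB, and MyISAM or
# ROW_FORMAT=COMPRESSED tables (which are copied during conversion) should
# each stay under 3 TB.

TB = 1024 ** 4  # bytes per tebibyte

def check_snapshot_migration(allocated_bytes, used_bytes,
                             myisam_table_bytes, compressed_table_bytes):
    """Return a list of warnings; an empty list means no obvious space issue."""
    warnings = []
    if allocated_bytes > 6 * TB:
        warnings.append("allocated storage exceeds the 6 TB migration volume limit")
    for kind, sizes in (("MyISAM", myisam_table_bytes),
                        ("compressed", compressed_table_bytes)):
        if any(size > 3 * TB for size in sizes):
            warnings.append(f"a {kind} table exceeds the 3 TB guideline")
    # Converted MyISAM tables and uncompressed copies of compressed tables
    # need free room inside the allocated volume.
    extra_copies = sum(myisam_table_bytes) + sum(compressed_table_bytes)
    if used_bytes + extra_copies > allocated_bytes:
        warnings.append("not enough free space for converted table copies; "
                        "increase the source allocation and take a new snapshot")
    return warnings

# Example: a 2 TB allocation with 1 TB used and a 512 GB MyISAM table is fine.
print(check_snapshot_migration(2 * TB, 1 * TB, [TB // 2], []))  # → []
```

A check like this only approximates the sizing rules described in this section; a test migration from a restored snapshot remains the authoritative way to confirm that the migration volume is large enough.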
Migrating with Downtime

When migration downtime is acceptable, you can use the following high-level procedure to migrate an RDS MySQL 5.6 DB instance to Amazon Aurora:

1. Stop all write activity against the source RDS DB instance. Database downtime begins here.
2. Take a snapshot of the source RDS DB instance.
3. Wait until the snapshot shows as Available in the AWS Management Console.
4. Use the AWS Management Console to migrate the snapshot to a new Aurora DB cluster. For instructions, see Migrating Data to an Amazon Aurora DB Cluster in the Amazon RDS User Guide.
5. Wait until the snapshot migration finishes and the target Aurora DB cluster enters the Available state. The time to migrate a snapshot primarily depends on the size of the database. You can determine it ahead of the production migration by running a test migration.
6. Configure applications to connect to the newly created target Aurora DB cluster instead of the source RDS DB instance.
7. Resume write activity against the target Aurora DB cluster. Database downtime ends here.

Migrating with Near-Zero Downtime

If prolonged migration downtime is not acceptable, you can perform a near-zero downtime migration through a combination of snapshot migration and binary log replication. Perform the high-level procedure as follows:

1. On the source RDS DB instance, ensure that automated backups are enabled.
2. Create a Read Replica of the source RDS DB instance.
3. After you create the Read Replica, manually stop replication and obtain the binary log coordinates.
4. Take a snapshot of the Read Replica.
5. Use the AWS Management Console to migrate the Read Replica snapshot to a new Aurora DB cluster.
6. Wait until the snapshot migration finishes and the target Aurora DB cluster enters the Available state.
7. On the target Aurora DB cluster, configure binary log replication from the source RDS DB instance, using the binary log coordinates that you obtained in step 3.
8. Wait for the replication to catch up, that is, for the replication lag to reach zero.
9. Begin cutover by stopping all write activity against the source RDS DB instance. Application downtime begins here.
10. Verify that there is no outstanding replication lag, and then configure applications to connect to the newly created target Aurora DB cluster instead of the source RDS DB instance.
11. Complete cutover by resuming write activity. Application downtime ends here.
12. Terminate replication between the source RDS DB instance and the target Aurora DB cluster.

For a detailed description of this procedure, see Replication Between Aurora and MySQL or Between Aurora and Another Aurora DB Cluster in the Amazon RDS User Guide.

If you don't want to set up replication manually, you can also create an Aurora Read Replica from a source RDS MySQL 5.6 DB instance by using the RDS Management Console. The RDS automation does the following:

1. Creates a snapshot of the source RDS DB instance.
2. Migrates the snapshot to a new Aurora DB cluster.
3. Establishes binary log replication between the source RDS DB instance and the target Aurora DB cluster.

After replication is established, you can complete the cutover steps as described previously.

Migrating from Amazon RDS for MySQL Engine Versions Other than 5.6

Direct snapshot migration is only supported for RDS MySQL 5.6 DB instance snapshots. You can migrate RDS MySQL DB instances that are running other engine versions by using the following procedures.

RDS for
MySQL 5.1 and 5.5

Follow these steps to migrate RDS MySQL 5.1 or 5.5 DB instances to Amazon Aurora:

1. Upgrade the RDS MySQL 5.1 or 5.5 DB instance to MySQL 5.6:
• You can upgrade RDS MySQL 5.5 DB instances directly to MySQL 5.6.
• You must upgrade RDS MySQL 5.1 DB instances to MySQL 5.5 first, and then to MySQL 5.6.
2. After you upgrade the instance to MySQL 5.6, test your applications against the upgraded database and address any compatibility or performance concerns.
3. After your application passes the compatibility and performance tests against MySQL 5.6, migrate the RDS MySQL 5.6 DB instance to Amazon Aurora. Depending on your requirements, choose the Migrating with Downtime or Migrating with Near-Zero Downtime procedure described earlier.

For more information about upgrading RDS MySQL engine versions, see Upgrading the MySQL DB Engine in the Amazon RDS User Guide.

RDS for MySQL 5.7

For migrations from RDS MySQL 5.7 DB instances, the snapshot migration approach is not supported, because the database engine version can't be downgraded to MySQL 5.6. In this case, we recommend the manual dump-and-import procedure for migrating MySQL-compatible databases described later in this whitepaper. Such a procedure may be slower than snapshot migration, but you can still perform it with near-zero downtime by using binary log replication.

Migrating from MySQL-Compatible Databases

Moving to Amazon Aurora is still a relatively simple process if you are migrating from an RDS MariaDB instance, an RDS MySQL 5.7 DB instance, or a self-managed MySQL-compatible database such as MySQL, MariaDB, or Percona Server running on Amazon Elastic Compute Cloud (Amazon EC2) or on premises. There are many techniques you can use to migrate your MySQL-compatible database workload to Amazon Aurora. This section describes various migration options to help you choose the most optimal solution for your use case.

Percona XtraBackup

Amazon Aurora supports migration from Percona XtraBackup files that are stored in an Amazon S3 bucket. Migrating from binary backup files can be significantly faster than migrating from logical schema and data dumps using tools like mysqldump. Logical imports work by executing SQL commands to re-create the schema and data from your source database, which involves considerable processing overhead. By comparison, you can use a more efficient binary ingestion method to ingest Percona XtraBackup files. This migration method is compatible with source servers using MySQL versions 5.5 and 5.6.

Migrating from Percona XtraBackup files involves three steps:
1. Use the innobackupex tool to create a backup of the source database.
2. Upload the backup files to an Amazon S3 bucket.
3. Restore the backup files through the AWS Management Console.

For details and step-by-step instructions, see Migrating Data from MySQL by using an Amazon S3 Bucket in the Amazon RDS User Guide.

Self-Managed Export/Import

You can use a variety of export/import tools to migrate your data and schema to Amazon Aurora. These tools can be described as "MySQL native" because they are either part of a MySQL project or were designed specifically for MySQL-compatible databases. Examples of native migration tools include the following:

1. MySQL utilities such as mysqldump, mysqlimport, and the mysql command-line client
2. Third-party utilities such as mydumper and myloader (for details, see the mydumper project page)
3. Built-in MySQL commands such as SELECT INTO OUTFILE and LOAD DATA INFILE

Native tools are a great option for power users or database administrators who want to maintain full control over the migration process.
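The method preferences described in this handbook can be condensed into a small selection helper. The sketch below is illustrative only: the function name, parameters, and return strings are hypothetical, not an AWS API. It encodes the guidance given here: snapshot migration for RDS MySQL 5.6 (after upgrading 5.1/5.5), dump and import for RDS MySQL 5.7, Percona XtraBackup for self-managed MySQL 5.5/5.6, native export/import for other MySQL-compatible sources, and AWS SCT plus AWS DMS for non-MySQL engines.

```python
# Illustrative helper encoding this handbook's migration-method preferences.
# The function and its labels are hypothetical, not part of any AWS tool.

def recommend_migration_method(engine, version="", managed_by_rds=False):
    """Return the preferred Aurora migration method for a source database."""
    if engine == "mysql" and managed_by_rds:
        if version.startswith("5.6"):
            return "RDS snapshot migration"        # fastest, point-and-click
        if version[:3] in ("5.1", "5.5"):
            return "upgrade to MySQL 5.6, then RDS snapshot migration"
        return "self-managed dump and import"      # e.g., RDS MySQL 5.7
    if engine == "mysql" and version[:3] in ("5.5", "5.6"):
        return "Percona XtraBackup via Amazon S3"  # fast binary ingestion
    if engine in ("mysql", "mariadb", "percona"):
        return "self-managed export/import (e.g., mysqldump)"
    return "AWS Schema Conversion Tool + AWS DMS"  # heterogeneous migration

print(recommend_migration_method("mysql", "5.6.34", managed_by_rds=True))
# → RDS snapshot migration
```

A helper like this captures only the default ordering of options; requirements such as advanced data transformations or continuous replication can still make AWS DMS the better choice, as discussed in the following sections.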
Self-managed migrations involve more steps and are typically slower than RDS snapshot or Percona XtraBackup migrations, but they offer the best compatibility and flexibility. For an in-depth discussion of the best practices for self-managed migrations, see the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora. You can execute a self-managed migration with downtime (without replication) or with near-zero downtime (with binary log replication).

Self-Managed Migration with Downtime

The high-level procedure for migrating to Amazon Aurora from a MySQL-compatible database is as follows:

1. Stop all write activity against the source database. Application downtime begins here.
2. Perform a schema and data dump from the source database.
3. Import the dump into the target Aurora DB cluster.
4. Configure applications to connect to the newly created target Aurora DB cluster instead of the source database.
5. Resume write activity. Application downtime ends here.

Self-Managed Migration with Near-Zero Downtime

The following is the high-level procedure for a near-zero downtime migration into Amazon Aurora from a MySQL-compatible database:

1. On the source database, enable binary logging, and ensure that binary log files are retained for at least the amount of time required to complete the remaining migration steps.
2. Perform a schema and data export from the source database. Make sure that the export metadata contains the binary log coordinates required to establish replication at a later time.
3. Import the dump into the target Aurora DB cluster.
4. On the target Aurora DB cluster, configure binary log replication from the source database, using the binary log coordinates that you obtained in step 2.
5. Wait for the replication to catch up, that is, for the replication lag to reach zero.
6. Stop all write activity against the source database instance. Application downtime begins here.
7. Double-check that there is no outstanding replication lag. Then configure applications to connect to the newly created target Aurora DB cluster instead of the source database.
8. Resume write activity. Application downtime ends here.
9. Terminate replication between the source database and the target Aurora DB cluster.

For an in-depth discussion of performance best practices for self-managed migrations, see the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.

AWS Database Migration Service

AWS Database Migration Service (AWS DMS) is a managed database migration service that is available through the AWS Management Console. It can perform a range of tasks, from simple migrations with downtime to near-zero downtime migrations using CDC replication.

AWS Database Migration Service may be the preferred option if your source database can't be migrated using the methods described previously, such as RDS MySQL 5.6 DB snapshot migration, Percona XtraBackup migration, or native export/import tools. AWS Database Migration Service might also be advantageous if your migration project requires advanced data transformations, such as the following:

• Remapping schema or table names
• Advanced data filtering
• Migrating and replicating multiple database servers into a single Aurora DB cluster

Compared to the migration methods described previously, AWS DMS carries certain limitations:

• It does not migrate secondary schema objects such as indexes,
foreign key definitions triggers or stored procedures Such objects must be migrated or created manually prior to data migration • The DMS CDC replication uses plain SQL statements from binlog to apply data changes in the target database Therefore it might be slower and more resource intensive than the native master/slave binary log replication in MySQL For step bystep instructions on how to migrate your database using AWS DMS see the AWS whitepaper Migrating Your Databases to Amazon Aurora Heterogeneous Migrations If you a re migrating a non MySQL compatible database to Amazon Aurora several options can help you complete the project quickly and easily A heterogeneous migration project can be split into two phases: 1 Schema migration to review and convert the source schema objects (eg tables procedures and triggers) into a MySQL compatible representation 2 Data migration to populate the newly created schema with data contained in the source database Optionally you can use a CDC replication for near zero downtime migratio n This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 27 Schema Migration You must convert database objects such as tables views functions and stored procedures to a MySQL 56 compatible format before you can use them with Amazon Aurora This section describes two main options for converting schema objects Whichever migration method you choose always make sure that the converted objects are not only compatible with Aurora but also follow MySQL’s best practices for schema design AWS Schema Conversion Tool The AWS Schema Conversion Tool (AWS SCT) can great ly reduce the engineering effort associated with migrations from Oracle Microsoft SQL Server Sybase DB2 Azure SQL Database Terradata Greenplum Vertica Cassandra and PostgreSQL etc AWS SCT can automatically convert the source 
database schema and a majority of the custom code, including views, stored procedures, and functions, to a format compatible with Amazon Aurora. Any code that can't be automatically converted is clearly marked so that it can be processed manually. For more information, see the AWS Schema Conversion Tool User Guide.

For step-by-step instructions on how to convert a non-MySQL-compatible schema using the AWS Schema Conversion Tool, see the AWS whitepaper Migrating Your Databases to Amazon Aurora.

Manual Schema Migration

If your source database is not among the databases supported by AWS SCT, you can either manually rewrite your database object definitions or use available third-party tools to migrate the schema to a format compatible with Amazon Aurora.

Many applications use data access layers that abstract schema design from business application code. In such cases, you can consider redesigning your schema objects specifically for Amazon Aurora and adapting the data access layer to the new schema. This might require a greater upfront engineering effort, but it allows the new schema to incorporate all the best practices for performance and scalability.

Data Migration

After the database objects are successfully converted and migrated to Amazon Aurora, it's time to migrate the data itself. The task of moving data from a non-MySQL-compatible database to Amazon Aurora is best done using AWS DMS. AWS DMS supports initial data migration as well as CDC replication. After the migration task starts, AWS DMS manages all the complexities of the process, including data type transformations, compression, and parallel data transfer. The CDC functionality automatically replicates any changes that are made to the source database during the migration process. For more information, see the
AWS Database Migration Service User Guide.

For step-by-step instructions on how to migrate data from a non-MySQL-compatible database into an Amazon Aurora cluster using AWS DMS, see the AWS whitepaper Migrating Your Databases to Amazon Aurora.

Example Migration Scenarios

There are several approaches for performing both self-managed homogeneous migrations and heterogeneous migrations.

Self-Managed Homogeneous Migrations

This section provides examples of migration scenarios from self-managed MySQL-compatible databases to Amazon Aurora. For an in-depth discussion of homogeneous migration best practices, see the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.

Note: If you are migrating from an Amazon RDS MySQL DB instance, you can use the RDS snapshot migration feature instead of doing a self-managed migration. See the Migrating from Amazon RDS for MySQL section for more details.

Migrating Using Percona XtraBackup

One option for migrating data from MySQL to Amazon Aurora is to use the Percona XtraBackup utility. For more information about using the Percona XtraBackup utility, see Migrating Data from an External MySQL Database in the Amazon RDS User Guide.

Approach

This scenario uses the Percona XtraBackup utility to take a binary backup of the source MySQL database. The backup files are then uploaded to an Amazon S3 bucket and restored into a new Amazon Aurora DB cluster.

When to Use

You can adopt this approach for small- to large-scale migrations when the following conditions are met:
• The source database is a MySQL 5.5 or 5.6 database.
• You have administrative, system-level access to the source database.
• You are migrating database servers in a 1-to-1 fashion: one source MySQL server becomes one new Aurora DB cluster.
When to Consider
Other Options

This approach is not currently supported in the following scenarios:
• Migrating into existing Aurora DB clusters
• Migrating multiple source MySQL servers into a single Aurora DB cluster

Examples

For a step-by-step example, see Migrating Data from an External MySQL Database in the Amazon RDS User Guide.

One-Step Migration Using mysqldump

Another migration option uses the mysqldump utility to migrate data from MySQL to Amazon Aurora.

Approach

This scenario uses the mysqldump utility to export schema and data definitions from the source server and import them into the target Aurora DB cluster in a single step, without creating any intermediate dump files.

When to Use

You can adopt this approach for many small-scale migrations when the following conditions are met:
• The data set is very small (up to 1–2 GB).
• The network connection between the source and target databases is fast and stable.
• Migration performance is not critically important, and the cost of retrying the migration is very low.
• There is no need to do any intermediate schema or data transformations.

When to Consider Other Options

This approach might not be an optimal choice if any of the following conditions are true:
• You are migrating from an RDS MySQL DB instance or a self-managed MySQL 5.5 or 5.6 database. In that case, you might get better results with snapshot migration or Percona XtraBackup, respectively. For more details, see the Migrating from Amazon RDS for MySQL and Percona XtraBackup sections.
• It is impossible to establish a network connection from a single client instance to the source and target databases due to network architecture or security considerations.
• The network connection between the source and target databases is unstable or very slow.
• The data set is larger
than 10 GB.
• Migration performance is critically important.
• An intermediate dump file is required in order to perform schema or data manipulations before you can import the schema/data.

Notes

For the sake of simplicity, this scenario assumes the following:
1. Migration commands are executed from a client instance running a Linux operating system.
2. The source server is a self-managed MySQL database (e.g., running on Amazon EC2 or on-premises) that is configured to allow connections from the client instance.
3. The target Aurora DB cluster already exists and is configured to allow connections from the client instance. If you don't yet have an Aurora DB cluster, review the step-by-step cluster launch instructions in the Amazon RDS User Guide.
4. Export from the source database is performed using a privileged, super-user MySQL account. For simplicity, this scenario assumes that the user holds all permissions available in MySQL.
5. Import into Amazon Aurora is performed using the Aurora master user account, that is, the account whose name and password were specified during the cluster launch process.

Examples

The following command, when filled with the source and target server and user information, migrates data and all objects in the named schema(s) between the source and target servers:

mysqldump --host=<source_server> \
--user=<source_user> \
--password=<source_user_password> \
--databases <schema(s)> \
--single-transaction \
--compress \
| mysql --host=<cluster_endpoint> \
--user=<aurora_master_user> \
--password=<aurora_master_user_password>

Descriptions of the options and option values for the mysqldump command are as follows:
• <source_server>: DNS name or IP address of the source server
• <source_user>: MySQL
user account name on the source server
• <source_user_password>: MySQL user account password on the source server
• <schema(s)>: One or more schema names
• <cluster_endpoint>: Cluster DNS endpoint of the target Aurora cluster
• <aurora_master_user>: Aurora master user name
• <aurora_master_user_password>: Aurora master user password
• --single-transaction: Enforces a consistent dump from the source database. Can be skipped if the source database is not receiving any write traffic.
• --compress: Enables network data compression.
See the mysqldump documentation for more details.

Example:

mysqldump --host=source-mysql.example.com \
--user=mysql_admin_user \
--password=mysql_user_password \
--databases schema1 \
--single-transaction \
--compress \
| mysql --host=aurora-cluster-xxxxx.amazonaws.com \
--user=aurora_master_user \
--password=aurora_user_password

Note: This migration approach requires application downtime while the dump and import are in progress. You can avoid application downtime by extending the scenario with MySQL binary log replication. See the Self-Managed Migration with Near-Zero Downtime section for more details.

Flat-File Migration Using Files in CSV Format

This scenario demonstrates a schema and data migration using flat-file dumps, that is, dumps that do not encapsulate data in SQL statements. Many database administrators prefer to use flat files over SQL-format files for the following reasons:
• Lack of SQL encapsulation results in smaller dump files and reduces processing overhead during import.
• Flat-file dumps are easier to process using OS-level tools; they are also easier to manage (e.g., split or combine).
• Flat-file formats are compatible with a wide range of database engines, both SQL and NoSQL.

Approach

The scenario uses a hybrid migration approach:
• Use the mysqldump utility to create a schema-only dump in SQL format. The dump describes the structure of
schema objects (e.g., tables, views, and functions) but does not contain data.
• Use SELECT … INTO OUTFILE SQL commands to create data-only dumps in CSV format. The dumps are created in a one-file-per-table fashion and contain table data only (no schema definitions).
The import phase can be executed in two ways:
• Traditional approach: Transfer all dump files to an Amazon EC2 instance located in the same AWS Region and Availability Zone as the target Aurora DB cluster. After transferring the dump files, you can import them into Amazon Aurora using the mysql command-line client for the SQL-format schema dumps and LOAD DATA LOCAL INFILE SQL commands for the flat-file data dumps. This is the approach that is demonstrated later in this section.
• Alternative approach: Transfer the SQL-format schema dumps to an Amazon EC2 client instance and import them using the mysql command-line client. You can transfer the flat-file data dumps to an Amazon S3 bucket and then import them into Amazon Aurora using LOAD DATA FROM S3 SQL commands. For more information, including an example of loading data from Amazon S3, see Migrating Data from MySQL by Using an Amazon S3 Bucket in the Amazon RDS User Guide.

When to Use

You can adopt this approach for most migration projects where performance and flexibility are important:
• You can dump small data sets and import them one table at a time. You can also run multiple SELECT … INTO OUTFILE and LOAD DATA INFILE operations in parallel for best performance.
• Data that is stored in flat-file dumps is not encapsulated in database-specific SQL statements. Therefore, it can be handled and processed easily by the systems participating in the data exchange.

When to Consider Other Options

You might choose not to use this approach if any of the
following conditions are true:
• You are migrating from an RDS MySQL DB instance or a self-managed MySQL 5.6 database. In that case, you might get better results with snapshot migration or Percona XtraBackup, respectively. See the Migrating from Amazon RDS for MySQL and Percona XtraBackup sections for more details.
• The data set is very small and does not require a high-performance migration approach.
• You want the migration process to be as simple as possible, and you don't require any of the performance and flexibility benefits listed earlier.

Notes

To simplify the demonstration, this scenario assumes the following:
1. Migration commands are executed from client instances running a Linux operating system:
  o Client instance A is located in the source server's network.
  o Client instance B is located in the same Amazon VPC, Availability Zone, and subnet as the target Aurora DB cluster.
2. The source server is a self-managed MySQL database (e.g., running on Amazon EC2 or on-premises) configured to allow connections from client instance A.
3. The target Aurora DB cluster already exists and is configured to allow connections from client instance B. If you don't have an Aurora DB cluster yet, review the step-by-step cluster launch instructions in the Amazon RDS User Guide.
4. Communication is allowed between both client instances.
5. Export from the source database is performed using a privileged, super-user MySQL account. For simplicity, this scenario assumes that the user holds all permissions available in MySQL.
6. Import into Amazon Aurora is performed using the master user account, that is, the account whose name and password were specified during the cluster launch process.
Note that this migration approach requires application downtime while the dump and import are in
progress. You can avoid application downtime by extending the scenario with MySQL binary log replication. See the Self-Managed Migration with Near-Zero Downtime section for more details.

Examples

In this scenario, you migrate a MySQL schema named myschema. The first step of the migration is to create a schema-only dump of all objects:

mysqldump --host=<source_server> \
--user=<source_user> \
--password=<source_user_password> \
--databases <schema(s)> \
--single-transaction \
--no-data > myschema_dump.sql

Descriptions of the options and option values for the mysqldump command are as follows:
• <source_server>: DNS name or IP address of the source server
• <source_user>: MySQL user account name on the source server
• <source_user_password>: MySQL user account password on the source server
• <schema(s)>: One or more schema names
• --single-transaction: Enforces a consistent dump from the source database. Can be skipped if the source database is not receiving any write traffic.
• --no-data: Creates a schema-only dump without row data.
For more details, see mysqldump in the MySQL 5.6 Reference Manual.

Example:

admin@clientA:~$ mysqldump --host=11.22.33.44 --user=root \
--password=pAssw0rd --databases myschema \
--single-transaction --no-data > myschema_dump_schema_only.sql

After you complete the schema-only dump, you can obtain data dumps for each table. After logging in to the source MySQL server, use the SELECT … INTO OUTFILE statement to dump each table's data into a separate CSV file:

admin@clientA:~$ mysql --host=11.22.33.44 --user=root --password=pAssw0rd

mysql> show tables from myschema;
+--------------------+
| Tables_in_myschema |
+--------------------+
| t1                 |
| t2                 |
| t3                 |
| t4                 |
+--------------------+
4 rows in set (0.00 sec)

mysql> SELECT * INTO OUTFILE '/home/admin/dump/myschema_dump_t1.csv'
    -> FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    -> LINES TERMINATED BY '\n'
    -> FROM myschema.t1;
Query OK, 4194304 rows affected (2.35 sec)

(repeat for all remaining tables)

For more information about SELECT … INTO statement syntax, see SELECT INTO Syntax in the MySQL 5.6 Reference Manual.

After you complete all dump operations, the /home/admin/dump directory contains five files: one schema-only dump and four data dumps, one per table.

admin@clientA:~/dump$ ls -sh1
total 685M
 40K myschema_dump_schema_only.sql
172M myschema_dump_t1.csv
172M myschema_dump_t2.csv
172M myschema_dump_t3.csv
172M myschema_dump_t4.csv

Next, you compress and transfer the files to client instance B, located in the same AWS Region and Availability Zone as the target Aurora DB cluster. You can use any file transfer method available to you (e.g., FTP or Amazon S3). This example uses SCP with SSH private key authentication:

admin@clientA:~/dump$ gzip myschema_dump_*.csv
admin@clientA:~/dump$ scp -i ssh-key.pem myschema_dump_* \
<user>@<client_instance_B>:/home/ec2-user/

After transferring all the files, you can decompress them and import the schema and data. Import the schema dump first, because all relevant tables must exist before any data can be inserted into them:

admin@clientB:~/dump$ gunzip myschema_dump_*.csv.gz
admin@clientB:~$ mysql --host=<cluster_endpoint> --user=master \
--password=pAssw0rd < myschema_dump_schema_only.sql

With the schema objects created, the next step is to connect to the Aurora DB cluster endpoint and import the data files. Note the following:
• The mysql client invocation includes a --local-infile parameter, which is required to enable support for LOAD DATA LOCAL
INFILE commands.
• Before importing data from the dump files, use a SET command to disable foreign key constraint checks for the duration of the database session. Disabling foreign key checks not only improves import performance, but it also lets you import the data files in arbitrary order.

admin@clientB:~$ mysql --local-infile --host=<cluster_endpoint> \
--user=master --password=pAssw0rd

mysql> SET foreign_key_checks = 0;
Query OK, 0 rows affected (0.00 sec)

mysql> LOAD DATA LOCAL INFILE '/home/ec2-user/myschema_dump_t1.csv'
    -> INTO TABLE myschema.t1
    -> FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    -> LINES TERMINATED BY '\n';
Query OK, 4194304 rows affected (1 min 26.6 sec)
Records: 4194304 Deleted: 0 Skipped: 0 Warnings: 0

(repeat for all remaining CSV files)

mysql> SET foreign_key_checks = 1;
Query OK, 0 rows affected (0.00 sec)

That's it! You have imported the schema and data dumps into the Aurora DB cluster. You can find more tips and best practices for self-managed migrations in the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.

Multi-Threaded Migration Using mydumper and myloader

mydumper and myloader are popular open-source MySQL export/import tools designed to address performance issues associated with the legacy mysqldump program. They operate on SQL-format dumps and offer advanced features such as the following:
• Dumping and loading data using multiple parallel threads
• Creating dump files in a file-per-table fashion
• Creating chunked dumps in a multiple-files-per-table fashion
• Dumping data and metadata into separate files for easier parsing and management
• Configurable transaction size during import
• Ability to schedule dumps at regular intervals
For more details, see the MySQL Data Dumper project page.

Approach

The scenario uses the mydumper and myloader tools to perform a multi-threaded schema and data migration without the need to manually invoke any SQL commands or design custom migration scripts.
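Both tools take a thread-count setting (the -t parameter shown later in this scenario). As a rough sketch of how you might choose values, the following hypothetical helper (not part of the original scenario) applies the rule of thumb quoted under Useful Tips in this section: one dump thread per server CPU core, and one import thread per two CPU cores.

```python
# Hypothetical helper (not part of the original scenario): derive
# mydumper/myloader "-t" thread counts from a client's CPU core count,
# following the rule of thumb of one dump thread per CPU core and one
# import thread per two CPU cores.
def suggested_threads(cpu_cores: int) -> dict:
    if cpu_cores < 1:
        raise ValueError("cpu_cores must be >= 1")
    return {
        "dump_threads": cpu_cores,                 # mydumper -t
        "import_threads": max(1, cpu_cores // 2),  # myloader -t
    }

# A 4-core client instance would dump with 4 threads and import with 2.
print(suggested_threads(4))
```

Treat the result as a starting point only; the right concurrency level depends on your workload and should be validated with test migrations.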
The migration is performed in two steps:
1. Use the mydumper tool to create a schema and data dump, using multiple parallel threads.
2. Use the myloader tool to process the dump files and import them into an Aurora DB cluster, also in multi-threaded fashion.
Note that mydumper and myloader might not be readily available in the package repository of your Linux/Unix distribution. For your convenience, the scenario also shows how to build the tools from source code.

When to Use

You can adopt this approach in most migration projects:
• The utilities are easy to use and enable database users to perform multi-threaded dumps and imports without the need to develop custom migration scripts.
• Both tools are highly flexible and have reasonable configuration defaults. You can adjust the default configuration to satisfy the requirements of both small- and large-scale migrations.

When to Consider Other Options

You might decide not to use this approach if any of the following conditions are true:
• You are migrating from an RDS MySQL DB instance or a self-managed MySQL 5.5 or 5.6 database. In that case, you might get better results with snapshot migration or Percona XtraBackup, respectively. See the Migrating from Amazon RDS for MySQL and Percona XtraBackup sections for more details.
• You can't use third-party software because of operating system limitations.
• Your data transformation processes require intermediate dump files in a flat-file format and not an SQL format.

Notes

To simplify the demonstration, this scenario assumes the following:
1. You execute the migration commands from client instances running a Linux operating system:
  a. Client instance A is located in the source server's network.
  b. Client instance B is located in the same Amazon VPC, Availability Zone, and subnet as the target Aurora cluster.
2. The source server is a self-managed MySQL database (e.g., running on Amazon EC2 or on-premises) configured to allow connections from client instance A.
3. The target Aurora DB cluster already exists and is configured to allow connections from client instance B. If you don't have an Aurora DB cluster yet, review the step-by-step cluster launch instructions in the Amazon RDS User Guide.
4. Communication is allowed between both client instances.
5. You perform the export from the source database using a privileged, super-user MySQL account. For simplicity, the example assumes that the user holds all permissions available in MySQL.
6. You perform the import into Amazon Aurora using the master user account, that is, the account whose name and password were specified during the cluster launch process.
7. The Amazon Linux 2016.03.3 operating system is used to demonstrate the configuration and compilation steps for mydumper and myloader.
Note: This migration approach requires application downtime while the dump and import are in progress. You can avoid application downtime by extending the scenario with MySQL binary log replication. See the Self-Managed Migration with Near-Zero Downtime section for more details.

Examples (Preparing Tools)

The first step is to obtain and build the mydumper and myloader tools. See the MySQL Data Dumper project page for up-to-date download links, and ensure that the tools are prepared on both client instances.

The utilities depend on several packages that you should install first:

[ec2-user@clientA ~]$ sudo yum install glib2-devel mysql56 \
mysql56-devel zlib-devel pcre-devel openssl-devel g++ gcc-c++ cmake

The next steps involve creating a directory to hold the program sources and then fetching and unpacking the source archive:

[ec2-user@clientA ~]$ mkdir mydumper
[ec2-user@clientA ~]$ cd mydumper/
[ec2-user@clientA mydumper]$ wget https://launchpad.net/mydumper/0.9/0.9.1/+download/mydumper-0.9.1.tar.gz
2016-06-29 21:39:03 (153 KB/s) - 'mydumper-0.9.1.tar.gz' saved [44463/44463]
[ec2-user@clientA mydumper]$ tar zxf mydumper-0.9.1.tar.gz
[ec2-user@clientA mydumper]$ cd mydumper-0.9.1

Next, you build the binary executables:

[ec2-user@clientA mydumper-0.9.1]$ cmake .
(…)
[ec2-user@clientA mydumper-0.9.1]$ make
Scanning dependencies of target mydumper
[ 25%] Building C object CMakeFiles/mydumper.dir/mydumper.c.o
[ 50%] Building C object CMakeFiles/mydumper.dir/server_detect.c.o
[ 75%] Building C object CMakeFiles/mydumper.dir/g_unix_signal.c.o
Linking C executable mydumper
[ 75%] Built target mydumper
Scanning dependencies of target myloader
[100%] Building C object CMakeFiles/myloader.dir/myloader.c.o
Linking C executable myloader
[100%] Built target myloader

Optionally, you can move the binaries to a location defined in the operating system $PATH so that they can be executed more conveniently:

[ec2-user@clientA mydumper-0.9.1]$ sudo mv mydumper /usr/local/bin/mydumper
[ec2-user@clientA mydumper-0.9.1]$ sudo mv myloader /usr/local/bin/myloader

As a final step, confirm that both utilities are available in the system:

[ec2-user@clientA ~]$ mydumper -V
mydumper 0.9.1, built against MySQL 5.6.31
[ec2-user@clientA ~]$ myloader -V
myloader 0.9.1, built against MySQL 5.6.31

Examples (Migration)

After completing the preparation steps, you can perform the migration. The mydumper command uses the following basic syntax:

mydumper -h <source_server> -u <source_user> \
-p <source_user_password> -B <schema> \
-t <thread_count> -o <dump_directory>

Descriptions of the parameter values are as follows:
• <source_server>: DNS name or IP address of the source server
• <source_user>: MySQL user account name on the source server
• <source_user_password>: MySQL user account password on the source server
• <schema>: Name of the schema to dump
• <thread_count>: Number of parallel threads used to dump the data
• <dump_directory>: Name of the directory where dump files should be placed
Note: mydumper is a highly customizable data dumping tool. For a complete list of supported parameters and their default values, use the built-in help: mydumper --help

The example dump is executed as follows:

[ec2-user@clientA ~]$ mydumper -h 11.22.33.44 -u root \
-p pAssw0rd -B myschema -t 4 -o myschema_dump/

The operation results in the following files being created in the dump directory:

[ec2-user@clientA ~]$ ls -sh1 myschema_dump/
total 733M
 40K metadata
 40K myschema-schema-create.sql
 40K myschema.t1-schema.sql
184M myschema.t1.sql
 40K myschema.t2-schema.sql
184M myschema.t2.sql
 40K myschema.t3-schema.sql
184M myschema.t3.sql
 40K myschema.t4-schema.sql
184M myschema.t4.sql

The directory contains a collection of metadata files in addition to the schema and data dumps. You don't have to manipulate these files directly; it's enough that the directory structure is understood by the myloader tool. Compress the entire directory and transfer it to client instance B:

[ec2-user@clientA ~]$ tar czf myschema_dump.tar.gz myschema_dump
[ec2-user@clientA ~]$ scp -i ssh-key.pem myschema_dump.tar.gz \
<user>@<client_instance_B>:/home/ec2-user/

When the transfer is complete, connect to client instance B and verify that the myloader utility is available:

[ec2-user@clientB ~]$ myloader -V
myloader 0.9.1, built against MySQL 5.6.31

Now you can unpack the dump and import it. The syntax used for the myloader command is very similar to what you already used for mydumper. The only difference is the -d (source directory) parameter replacing the -o (target directory) parameter:

[ec2-user@clientB ~]$ tar zxf myschema_dump.tar.gz
[ec2-user@clientB ~]$ myloader -h <cluster_endpoint> \
-u master -p pAssw0rd -B myschema -t 4 -d myschema_dump/

Useful Tips

• The concurrency level (thread count) does not have to be the same for export and import operations. A good rule of thumb is to use one thread per server CPU core (for dumps) and one thread per two CPU cores (for imports).
• The schema and data dumps produced by mydumper use an SQL format and are compatible with MySQL 5.6. Although you will typically use the pair of mydumper and myloader tools together for best results, technically you can import the dump files produced by mydumper by using any other MySQL-compatible client tool.
You can find more tips and best practices for self-managed migrations in the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.

Heterogeneous Migrations

For detailed step-by-step instructions on how to migrate schema and data from a non-MySQL-compatible database into an Aurora DB cluster using AWS SCT and AWS DMS, see the AWS whitepaper Migrating Your Databases to Amazon Aurora. Before running the migration, we suggest that you review Proof of Concept with Amazon Aurora to understand the volume of your data and to build a test environment that is representative of your production environment.
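Earlier in this paper, remapping schema or table names was listed among the transformations AWS DMS can perform. As an illustrative sketch, a DMS table-mapping document for such a rename can be generated as follows. The schema names ("HR", "myschema") are placeholders, not values from this paper; consult the AWS DMS documentation for the authoritative rule format.

```python
import json

# Illustrative sketch of an AWS DMS table-mapping document that selects
# all tables in a source schema and renames the schema during the
# migration. "HR" and "myschema" are placeholder names.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-source-schema",
            "object-locator": {"schema-name": "HR", "table-name": "%"},
            "rule-action": "include",
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "rename-source-schema",
            "rule-target": "schema",
            "object-locator": {"schema-name": "HR"},
            "rule-action": "rename",
            "value": "myschema",
        },
    ]
}

# The resulting JSON string is what you would supply as the task's
# table-mapping settings.
print(json.dumps(table_mappings, indent=2))
```

You can attach a document like this when creating the replication task, alongside the source and target endpoints.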
Testing and Cutover

Once the schema and data have been successfully migrated from the source database to Amazon Aurora, you are now ready to perform end-to-end testing of your migration process. The testing approach should be refined after each test migration, and the final migration plan should include a test plan that ensures adequate testing of the migrated database.

Migration Testing

Test Category: Basic acceptance tests
Purpose: These pre-cutover tests should be automatically executed upon completion of the data migration process. Their primary purpose is to verify whether the data migration was successful. Following are some common outputs from these tests:
• Total number of items processed
• Total number of items imported
• Total number of items skipped
• Total number of warnings
• Total number of errors
If any of the totals reported by the tests deviate from the expected values, then the migration was not successful, and the issues need to be resolved before moving to the next step in the process or the next round of testing.

Test Category: Functional tests
Purpose: These post-cutover tests exercise the functionality of the application(s) using Aurora for data storage. They include a combination of automated and manual tests. The primary purpose of the functional tests is to identify problems in the application caused by the migration of the data to Aurora.

Test Category: Nonfunctional tests
Purpose: These post-cutover tests assess the nonfunctional characteristics of the application, such as performance under varying levels of load.

Test Category: User acceptance tests
Purpose: These post-cutover tests should be executed by the end users of the application once the final data migration and cutover are complete. The purpose of these tests is for the end users to decide if the application is sufficiently usable to meet its primary function in the organization.
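The totals listed under basic acceptance tests lend themselves to an automated comparison. The following is a minimal sketch; the report keys are hypothetical, so adapt them to whatever your migration tooling actually reports.

```python
# Sketch of a basic acceptance test: compare the totals reported by the
# data migration process against expected values. The report keys
# ("processed", "imported", ...) are hypothetical placeholders.
def acceptance_check(report: dict, expected_items: int) -> list:
    problems = []
    if report.get("errors", 0) != 0:
        problems.append("migration reported errors")
    if report.get("warnings", 0) != 0:
        problems.append("migration reported warnings")
    if report.get("processed", 0) != report.get("imported", 0) + report.get("skipped", 0):
        problems.append("processed != imported + skipped")
    if report.get("imported", 0) != expected_items:
        problems.append("imported count differs from expected")
    return problems

report = {"processed": 100, "imported": 100, "skipped": 0,
          "warnings": 0, "errors": 0}
print(acceptance_check(report, expected_items=100))  # prints [] on success
```

An empty result lets the migration proceed to the next step; any reported problem should block the cutover until it is resolved.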
Cutover

Once you have completed the final migration and testing, it is time to point your application to the Amazon Aurora database. This phase of migration is known as cutover. If the planning and testing phases have been executed properly, cutover should not lead to unexpected issues.

Pre-cutover Actions

• Choose a cutover window: Identify a block of time when you can accomplish cutover to the new database with minimum disruption to the business. Normally you would select a low-activity period for the database (typically nights and/or weekends).
• Make sure changes are caught up: If a near-zero downtime migration approach was used to replicate database changes from the source to the target database, make sure that all database changes are caught up and your target database is not significantly lagging behind the source database.
• Prepare scripts to make the application configuration changes: In order to accomplish the cutover, you need to modify database connection details in your application configuration files. Large and complex applications may require updates to connection details in multiple places. Make sure you have the necessary scripts ready to update the connection configuration quickly and reliably.
• Stop the application: Stop the application processes on the source database and put the source database in read-only mode so that no further writes can be made to the source database. If the source database changes aren't fully caught up with the target database, wait for some time while these changes are fully propagated to the target database.
• Execute pre-cutover tests: Run automated pre-cutover tests to make sure that the data migration was successful.
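A configuration-change script of the kind described above can be as simple as a targeted find-and-replace over the connection settings. The following is a minimal sketch; the db_host key and single-file layout are hypothetical, and real applications may keep connection details in several places, all of which must be updated.

```python
import pathlib
import re
import tempfile

# Sketch of a cutover helper that rewrites the database host in an
# application configuration file. The "db_host" key and the file layout
# are hypothetical placeholders.
def point_to_aurora(config_path: str, aurora_endpoint: str) -> None:
    path = pathlib.Path(config_path)
    text = path.read_text()
    # Replace only the value of the db_host setting, keeping the rest.
    path.write_text(re.sub(r"(?m)^db_host=.*$",
                           f"db_host={aurora_endpoint}", text))

# Demonstration with a temporary file standing in for the real config.
with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write("db_host=old-mysql.example.com\ndb_user=app\n")
    cfg = f.name

point_to_aurora(cfg, "aurora-cluster-xxxxx.amazonaws.com")
print(pathlib.Path(cfg).read_text())
```

Keeping the change scripted, rather than editing files by hand, makes the cutover fast, repeatable, and easy to roll back.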
data migration was successful.

Cutover

• Execute cutover: If the pre-cutover checks were completed successfully, you can now point your application to Amazon Aurora. Execute the scripts created in the pre-cutover phase to change the application configuration to point to the new Aurora database.
• Start your application: At this point, you may start your application. If you have the ability to stop users from accessing the application while it is running, exercise that option until you have executed your post-cutover checks.

Post-cutover Checks

• Execute post-cutover tests: Execute predefined automated or manual test cases to make sure your application works as expected with the new database. It's a good strategy to test the read-only functionality of the database first, before executing tests that write to the database.
• Enable user access and closely monitor: If your test cases were executed successfully, you may give users access to the application to complete the migration process. Both the application and the database should be closely monitored at this time.

Troubleshooting

The following sections provide examples of common issues and error messages to help you troubleshoot heterogeneous DMS migrations.

Troubleshooting MySQL-Specific Issues

The following issues are specific to using AWS DMS with MySQL databases.

Topics
• CDC Task Failing for Amazon RDS DB Instance Endpoint Because Binary Logging Is Disabled
• Connections to a Target MySQL Instance Are Disconnected During a Task
• Adding Autocommit to a MySQL-compatible Endpoint
• Disabling Foreign Keys on a Target MySQL-compatible Endpoint
• Characters Replaced with Question Mark
• "Bad event" Log Entries
• Change Data Capture with MySQL 5.5
• Increasing Binary Log Retention for Amazon RDS DB Instances
•
Log Message: Some changes from the source database had no impact when applied to the target database
• Error: Identifier too long
• Error: Unsupported Character Set Causes Field Data Conversion to Fail
• Error: Codepage 1252 to UTF8 [120112] A field data conversion failed

CDC Task Failing for Amazon RDS DB Instance Endpoint Because Binary Logging Is Disabled

This issue occurs with Amazon RDS DB instances when automated backups are disabled. Enable automatic backups by setting the backup retention period to a non-zero value.

Connections to a Target MySQL Instance Are Disconnected During a Task

If you have a task with LOBs that is getting disconnected from a MySQL target, with the following type of errors in the task log, you might need to adjust some of your task settings:

[TARGET_LOAD ]E: RetCode: SQL_ERROR SqlState: 08S01 NativeError: 2013 Message: [MySQL][ODBC 5.3(w) Driver][mysqld-5.7.16-log]Lost connection to MySQL server during query [122502] ODBC general error

To solve the issue where a task is being disconnected from a MySQL target, do the following:
• Check that your database variable max_allowed_packet is set large enough to hold your largest LOB.
• Check that the following variables are set to a large timeout value. We suggest you use a value of at least 5 minutes for each of these variables:
o net_read_timeout
o net_write_timeout
o wait_timeout
o interactive_timeout

Adding Autocommit to a MySQL-compatible Endpoint

To add autocommit to a target MySQL-compatible endpoint, use the following procedure:
1.
Sign in to the AWS Management Console and select DMS.
2. Select Endpoints.
3. Select the MySQL-compatible target endpoint that you want to add autocommit to.
4. Select Modify.
5. Select Advanced, and then add the following code to the Extra connection attributes text box:

Initstmt=SET AUTOCOMMIT=1

6. Choose Modify.

Disabling Foreign Keys on a Target MySQL-compatible Endpoint

You can disable foreign key checks on MySQL by adding the following to the Extra connection attributes, in the Advanced section of the target MySQL, Amazon Aurora with MySQL compatibility, or MariaDB endpoint. To disable foreign keys on a target MySQL-compatible endpoint, use the following procedure:
1. Sign in to the AWS Management Console and select DMS.
2. Select Endpoints.
3. Select the MySQL, Aurora MySQL, or MariaDB target endpoint for which you want to disable foreign keys.
4. Select Modify.
5. Select Advanced, and then add the following code to the Extra connection attributes text box:

Initstmt=SET FOREIGN_KEY_CHECKS=0

6. Choose Modify.

Characters Replaced with Question Mark

The most common cause of this issue is that the source endpoint characters have been encoded in a character set that AWS DMS doesn't support. For example, AWS DMS engine versions prior to version 3.1.1 don't support the UTF8MB4 character set.

"Bad event" Log Entries

Bad event entries in the migration logs usually indicate that an unsupported DDL operation was attempted on the source database endpoint. Unsupported DDL operations cause an event that the replication instance cannot skip, so a bad event is logged. To fix this issue, restart the task from the beginning, which will reload the tables and start capturing changes at a point after the unsupported DDL operation was issued.

Change Data Capture with MySQL 5.5

AWS
DMS change data capture (CDC) for Amazon RDS MySQL-compatible databases requires full-image, row-based binary logging, which is not supported in MySQL version 5.5 or lower. To use AWS DMS CDC, you must upgrade your Amazon RDS DB instance to MySQL version 5.6.

Increasing Binary Log Retention for Amazon RDS DB Instances

AWS DMS requires the retention of binary log files for change data capture. To increase log retention on an Amazon RDS DB instance, use the mysql.rds_set_configuration stored procedure. The following example increases the binary log retention to 24 hours:

call mysql.rds_set_configuration('binlog retention hours', 24);

Log Message: Some changes from the source database had no impact when applied to the target database

When AWS DMS updates a MySQL database column's value to its existing value, a message of zero rows affected is returned from MySQL. This behavior is unlike other database engines, such as Oracle and SQL Server, which perform an update of one row even when the replacing value is the same as the current one.

Error: Identifier too long

The following error occurs when an identifier is too long:

TARGET_LOAD E: RetCode: SQL_ERROR SqlState: HY000 NativeError: 1059 Message: [MySQL][ODBC 5.3(w) Driver][mysqld-5.6.10]Identifier name '' is too long [122502] ODBC general error (ar_odbc_stmt.c: 4054)

When AWS DMS is set to create the tables and primary keys in the target database, it currently does not use the same names for the primary keys that were used in the source database. Instead, AWS DMS creates the primary key name based on the table name. When the table name is long, the auto-generated identifier can be longer than the allowed limits for MySQL. To solve this issue, currently, pre-create the tables and primary keys in the target database and use a
task with the task setting Target table preparation mode set to Do nothing or Truncate to populate the target tables.

Error: Unsupported Character Set Causes Field Data Conversion to Fail

The following error occurs when an unsupported character set causes a field data conversion to fail:

[SOURCE_CAPTURE ]E: Column '' uses an unsupported character set [120112] A field data conversion failed (mysql_endpoint_capture.c: 2154)

This error often occurs because of tables or databases using UTF8MB4 encoding. AWS DMS engine versions prior to 3.1.1 don't support the UTF8MB4 character set. In addition, check your database's parameters related to connections. The following command can be used to see these parameters:

SHOW VARIABLES LIKE '%char%';

Error: Codepage 1252 to UTF8 [120112] A field data conversion failed

The following error can occur during a migration if you have non-codepage-1252 characters in the source MySQL database:

[SOURCE_CAPTURE ]E: Error converting column 'column_xyz' in table 'table_xyz' with codepage 1252 to UTF8 [120112] A field data conversion failed (mysql_endpoint_capture.c: 2248)

As a workaround, you can use the CharsetMapping extra connection attribute with your source MySQL endpoint to specify character set mapping. You might need to restart the AWS DMS migration task from the beginning if you add this extra connection attribute. For example, the following extra connection attributes could be used for a MySQL source endpoint where the source character set is utf8 or latin1 (65001 is the UTF8 code page identifier):

CharsetMapping=utf8,65001
CharsetMapping=latin1,65001

Conclusion

Amazon Aurora is a high-performance, highly available, enterprise-grade database built for the cloud. Leveraging Amazon Aurora can result in better performance
and greater availability than other open-source databases, and lower costs than most commercial-grade databases. This paper proposes strategies for identifying the best method to migrate databases to Amazon Aurora and details the procedures for planning and executing those migrations. In particular, AWS Database Migration Service (AWS DMS) and the AWS Schema Conversion Tool are the recommended tools for heterogeneous migration scenarios. These powerful tools can greatly reduce the cost and complexity of database migrations.

Multiple factors contribute to a successful database migration:
• The choice of the database product
• A migration approach (e.g., methods, tools) that meets performance and uptime requirements
• Well-defined migration procedures that enable database administrators to prepare, test, and complete all migration steps with confidence
• The ability to identify, diagnose, and deal with issues with little or no interruption to the migration process

We hope that the guidance provided in this document will help you introduce meaningful improvements in all of these areas, and that it will ultimately contribute to creating a better overall experience for your database migrations into Amazon Aurora.

Contributors

Contributors to this document include:
• Bala Mugunthan, Sr. Partner Solution Architect, Amazon Web Services
• Ashar Abbas, Database Specialty Architect
• Sijie Han, SA Manager, Amazon Web Services
• Szymon Komendera, Database Engineer, Amazon Web Services

Further Reading

For additional information, see:
• Aurora on
Amazon RDS User Guide
• Migrating Your Databases to Amazon Aurora (AWS whitepaper)
• Best Practices for Migrating MySQL Databases to Amazon Aurora (AWS whitepaper)

Document Revisions

• July 2020 – Added information for large database migrations on Amazon Aurora; functional partition and data shard consolidation strategies are discussed in the homogeneous migration sections. Multi-threaded migration using the mydumper and myloader open-source tools is introduced. Basic acceptance testing, functional tests, nonfunctional tests, and user acceptance tests are explained in the testing phase, and pre-cutover and post-cutover phase scenarios are further explained.
• September 2019 – First publication",General,consultant,Best Practices

Amazon_Aurora_MySQL_Database_Administrators_Handbook_Connection_Management,This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/amazonauroramysqldbadminhandbook/amazonauroramysqldbadminhandbook.html

Amazon Aurora MySQL Database Administrator's Handbook – Connection Management

First Published January 2018
Updated October 20, 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any
agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
DNS endpoints
Connection handling in Aurora MySQL and MySQL
Common misconceptions
Best practices
Using smart drivers
DNS caching
Connection management and pooling
Connection scaling
Transaction management and autocommit
Connection handshakes
Load balancing with the reader endpoint
Designing for fault tolerance and quick recovery
Server configuration
Conclusion
Contributors
Further reading
Document revisions

Abstract

This paper outlines the best practices for managing database connections, setting server connection parameters, and configuring client programs, drivers, and connectors. It's a recommended read for Amazon Aurora MySQL Database Administrators (DBAs) and application developers.

Amazon Web Services – Amazon Aurora MySQL Database Administrator's Handbook

Introduction

Amazon Aurora MySQL (Aurora MySQL) is a managed relational database engine, wire-compatible with MySQL 5.6 and 5.7. Most of the drivers, connectors, and tools that you currently use with MySQL can be used with Aurora MySQL with little or no change.

Aurora MySQL database (DB) clusters provide advanced features such as:

• One primary instance that supports read/write operations, and up to 15 Aurora Replicas that support read-only operations. Each of the Replicas
can be automatically promoted to the primary role if the current primary instance fails.
• A cluster endpoint that automatically follows the primary instance in case of failover.
• A reader endpoint that includes all Aurora Replicas and is automatically updated when Aurora Replicas are added or removed.
• The ability to create custom DNS endpoints containing a user-configured group of database instances within a single cluster.
• Internal server connection pooling and thread multiplexing for improved scalability.
• Near-instantaneous database restarts and crash recovery.
• Access to near-real-time cluster metadata that enables application developers to build smart drivers, connecting directly to individual instances based on their read/write or read-only role.

Client-side components (applications, drivers, connectors, and proxies) that use a suboptimal configuration might not be able to react to recovery actions and DB cluster topology changes, or the reaction might be delayed. This can contribute to unexpected downtime and performance issues. To prevent that, and to make the most of Aurora MySQL features, AWS encourages Database Administrators (DBAs) and application developers to implement the best practices outlined in this whitepaper.

DNS endpoints

An Aurora DB cluster consists of one or more instances and a cluster volume that manages the data for those instances. There are two types of instances:

• Primary instance – Supports read and write statements. Currently, there can be one primary instance per DB cluster.
• Aurora Replica – Supports read-only statements. A DB cluster can have up to 15 Aurora Replicas. The Aurora Replicas can be used for read scaling and are automatically used as failover targets in case of a
primary instance failure.

Amazon Aurora supports the following types of Domain Name System (DNS) endpoints:

• Cluster endpoint – Connects you to the primary instance and automatically follows the primary instance in case of failover, that is, when the current primary instance is demoted and one of the Aurora Replicas is promoted in its place.
• Reader endpoint – Includes all Aurora Replicas in the DB cluster under a single DNS CNAME. You can use the reader endpoint to implement DNS round-robin load balancing for read-only connections.
• Instance endpoint – Each instance in the DB cluster has its own individual endpoint. You can use this endpoint to connect directly to a specific instance.
• Custom endpoints – User-defined DNS endpoints containing a selected group of instances from a given cluster.

For more information, refer to the Overview of Amazon Aurora page.

Connection handling in Aurora MySQL and MySQL

MySQL Community Edition manages connections in a one-thread-per-connection fashion. This means that each individual user connection receives a dedicated operating system thread in the mysqld process. Issues with this type of connection handling include:

• Relatively high memory use when there is a large number of user connections, even if the connections are completely idle
• Higher internal server contention and context-switching overhead when working with thousands of user connections

Aurora MySQL supports a thread pool approach that addresses these issues. You can characterize the thread pool approach as follows:

• It uses thread multiplexing, where a number of worker threads can switch between user sessions (connections). A worker thread is not fixed or dedicated to a single user session. Whenever a
connection is not active (for example, is idle, waiting for user input, waiting for I/O, and so on), the worker thread can switch to another connection and do useful work. You can think of worker threads as CPU cores in a multi-core system. Even though you only have a few cores, you can easily run hundreds of programs simultaneously because they're not all active at the same time. This highly efficient approach means that Aurora MySQL can handle thousands of concurrent clients with just a handful of worker threads.
• The thread pool automatically scales itself. The Aurora MySQL database process continuously monitors its thread pool state and launches new workers or destroys existing ones as needed. This is transparent to the user and doesn't need any manual configuration.

Server thread pooling reduces the server-side cost of maintaining connections. However, it doesn't eliminate the cost of setting up these connections in the first place. Opening and closing connections isn't as simple as sending a single TCP packet. For busy workloads with short-lived connections (for example, key-value or online transaction processing (OLTP)), consider using an application-side connection pool.

The following is a network packet trace for a MySQL connection handshake taking place between a client and a MySQL-compatible server located in the same Availability Zone:

04:23:29.547316 IP client.32918 > server.mysql: tcp 0
04:23:29.547478 IP server.mysql > client.32918: tcp 0
04:23:29.547496 IP client.32918 > server.mysql: tcp 0
04:23:29.547823 IP server.mysql > client.32918: tcp 78
04:23:29.547839 IP client.32918 > server.mysql: tcp 0
04:23:29.547865 IP client.32918 > server.mysql: tcp 191
04:23:29.547993 IP server.mysql > client.32918: tcp 0
04:23:29.548047 IP server.mysql > client.32918: tcp 11
04:23:29.548091 IP client.32918 > server.mysql: tcp 37
04:23:29.548361 IP server.mysql > client.32918: tcp 99
04:23:29.587272 IP client.32918 > server.mysql: tcp 0
This is a packet trace for closing the connection:

04:23:37.117523 IP client.32918 > server.mysql: tcp 13
04:23:37.117818 IP server.mysql > client.32918: tcp 56
04:23:37.117842 IP client.32918 > server.mysql: tcp 0

As you can see, even the simple act of opening and closing a single connection involves an exchange of several network packets. The connection overhead becomes more pronounced when you consider SQL statements issued by drivers as part of connection setup (for example, SET variable_name = value commands used to set session-level configuration). Server-side thread pooling doesn't eliminate this type of overhead.

Common misconceptions

The following are common misconceptions about database connection management:

• If the server uses connection pooling, you don't need a pool on the application side. As explained previously, this isn't true for workloads where connections are opened and torn down very frequently and clients run relatively few statements per connection. You might not need a connection pool if your connections are long-lived, meaning that connection activity time is much longer than the time required to open and close the connection. You can run a packet trace with tcpdump and see how many packets you need to open or close connections versus how many packets you need to run your queries within those connections. Even if the connections are long-lived, you can still benefit from using a connection pool to protect the database against connection surges, that is, large bursts of new connection attempts.
• Idle connections don't use memory. This isn't true, because the operating system and the database process both allocate an in-memory descriptor for each user connection. What is typically true is that Aurora MySQL uses less memory than MySQL Community Edition to maintain the same
number of connections. However, memory usage for idle connections is still not zero, even with Aurora MySQL. The general best practice is to avoid opening significantly more connections than you need.
• Downtime depends entirely on database stability and database features. This isn't true, because the application design and configuration play an important role in determining how fast user traffic can recover following a database event. For more details, refer to the Best practices section of this whitepaper.

Best practices

The following are best practices for managing database connections and configuring connection drivers and pools.

Using smart drivers

The cluster and reader endpoints abstract the role changes (primary instance promotion and demotion) and topology changes (addition and removal of instances) occurring in the DB cluster. However, DNS updates are not instantaneous. In addition, they can sometimes contribute to a slightly longer delay between the time a database event occurs and the time it's noticed and handled by the application.

Aurora MySQL exposes near-real-time metadata about DB instances in the INFORMATION_SCHEMA.REPLICA_HOST_STATUS table. Here is an example of a query against the metadata table:

mysql> select server_id,
    if(session_id = 'MASTER_SESSION_ID', 'writer', 'reader') as role,
    replica_lag_in_milliseconds
    from information_schema.replica_host_status;
+-------------------+--------+-----------------------------+
| server_id         | role   | replica_lag_in_milliseconds |
+-------------------+--------+-----------------------------+
| aurora-node-usw2a | writer | 0                           |
| aurora-node-usw2b | reader | 19.253999710083008          |
+-------------------+--------+-----------------------------+
2 rows in set (0.00 sec)

Notice that the table contains cluster-wide metadata. You can query the table on any instance in the DB cluster. For the purpose of this whitepaper, a smart driver is
a database driver or connector with the ability to read DB cluster topology from the metadata table. It can route new connections to individual instance endpoints without relying on high-level cluster endpoints. A smart driver is also typically capable of load balancing read-only connections across the available Aurora Replicas in a round-robin fashion.

The MariaDB Connector/J is an example of a third-party Java Database Connectivity (JDBC) smart driver with native support for Aurora MySQL DB clusters. Application developers can draw inspiration from the MariaDB driver to build drivers and connectors for languages other than Java. Refer to the MariaDB Connector/J page for details.

The AWS JDBC Driver for MySQL (preview) is a client driver designed for the high availability of Aurora MySQL. The AWS JDBC Driver for MySQL is drop-in compatible with the MySQL Connector/J driver and takes full advantage of the failover capabilities of Aurora MySQL. The driver maintains a cache of the DB cluster topology and each DB instance's role, either primary DB instance or Aurora Replica. It uses this topology to bypass the delays caused by DNS resolution, so that a connection to the new primary DB instance is established as fast as possible. Refer to the AWS JDBC Driver for MySQL GitHub repository for details.

If you're using a smart driver, the recommendations listed in the following sections still apply. A smart driver can automate and abstract certain layers of database connectivity. However, it doesn't automatically configure itself with optimal settings or automatically make the application resilient to failures. For example, when using a smart driver, you still need to
ensure that the connection validation and recycling functions are configured correctly, there's no excessive DNS caching in the underlying system and network layers, transactions are managed correctly, and so on.

It's a good idea to evaluate the use of smart drivers in your setup. Note that if a third-party driver contains Aurora MySQL-specific functionality, it doesn't mean that it has been officially tested, validated, or certified by AWS. Also note that, due to their advanced built-in features and higher overall complexity, smart drivers are likely to receive updates and bug fixes more frequently than traditional (bare-bones) drivers. You should regularly review the driver's release notes and use the latest available version whenever possible.

DNS caching

Unless you use a smart database driver, you depend on DNS record updates and DNS propagation for failovers, instance scaling, and load balancing across Aurora Replicas. Currently, Aurora DNS zones use a short Time-To-Live (TTL) of five seconds. Ensure that your network and client configurations don't further increase the DNS cache TTL. Remember that DNS caching can occur anywhere, from your network layer, through the operating system, to the application container. For example, Java virtual machines (JVMs) are notorious for caching DNS indefinitely unless configured otherwise.

Here are some examples of issues that can occur if you don't follow DNS caching best practices:

• After a new primary instance is promoted during a failover, applications continue to send write traffic to the old instance. Data-modifying statements will fail because that instance is no longer the primary instance.
• After a DB instance is scaled up or down, applications are unable to connect to it.
Due to DNS caching, applications continue to use the old IP address of that instance, which is no longer valid.
• Aurora Replicas can experience unequal utilization, for example, one DB instance receiving significantly more traffic than the others.

Connection management and pooling

Always close database connections explicitly instead of relying on the development framework or language destructors to do it. There are situations, especially in container-based or code-as-a-service scenarios, when the underlying code container isn't immediately destroyed after the code completes. In such cases, you might experience database connection leaks, where connections are left open and continue to hold resources (for example, memory and locks).

If you can't rely on client applications (or interactive clients) to close idle connections, use the server's wait_timeout and interactive_timeout parameters to configure an idle connection timeout. The default timeout value is fairly high, at 28800 seconds (8 hours). You should tune it down to a value that's acceptable in your environment. Refer to the MySQL Reference Manual for details.

Consider using connection pooling to protect the database against connection surges. Also consider connection pooling if the application opens large numbers of connections (for example, thousands or more per second) and the connections are short-lived, that is, the time required for connection setup and teardown is significant compared to the total connection lifetime. If your development framework or language doesn't support connection pooling, you can use a connection proxy instead. Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (Amazon RDS) that
makes applications more scalable, more resilient to database failures, and more secure. ProxySQL, MaxScale, and ScaleArc are examples of third-party proxies compatible with the MySQL protocol. Refer to the Connection scaling section of this document for more notes on connection pools versus proxies.

By using Amazon RDS Proxy, you can allow your applications to pool and share database connections to improve their ability to scale. Amazon RDS Proxy makes applications more resilient to database failures by automatically connecting to a standby DB instance while preserving application connections.

AWS recommends the following for configuring connection pools and proxies:

• Check and validate connection health when the connection is borrowed from the pool. The validation query can be as simple as SELECT 1. However, in Amazon Aurora, you can also use connection checks that return a different value depending on whether the instance is a primary instance (read/write) or an Aurora Replica (read-only). For example, you can use the @@innodb_read_only variable to determine the instance role. If the variable value is TRUE, you're on an Aurora Replica.
• Check and validate connections periodically, even when they're not borrowed. This helps detect and clean up broken or unhealthy connections before an application thread attempts to use them.
• Don't let connections remain in the pool indefinitely. Recycle connections by closing and reopening them periodically (for example, every 15 minutes), which frees the resources associated with these connections. It also helps prevent dangerous situations, such as runaway queries or zombie connections that clients have abandoned. This recommendation applies to all connections, not just idle ones.
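The validate-on-borrow and time-based recycling recommendations above can be sketched in a few lines. The following Python sketch is illustrative only: RecyclingPool, FakeConn, and the validate callback are hypothetical stand-ins, not an AWS or driver API. In a real application, the validation callback would run a query such as SELECT 1 (or check @@innodb_read_only to confirm the instance role), and mature pool implementations expose equivalent settings out of the box.

```python
import time

class RecyclingPool:
    """Minimal sketch of a client-side pool that validates connections when
    they are borrowed and recycles them after max_lifetime seconds.
    Hypothetical API for illustration; not a production implementation."""

    def __init__(self, conn_factory, validate, max_lifetime=900.0):
        self.conn_factory = conn_factory   # opens a new connection
        self.validate = validate           # health check, e.g. SELECT 1 on a real driver
        self.max_lifetime = max_lifetime   # recycle interval, e.g. 15 minutes
        self.idle = []                     # stack of (connection, opened_at) pairs

    def borrow(self, now=None):
        """Return a validated (connection, opened_at) pair."""
        now = time.monotonic() if now is None else now
        while self.idle:
            conn, opened_at = self.idle.pop()
            # Recycle connections past their maximum lifetime, and discard
            # any that fail the health check, instead of handing them out.
            if now - opened_at > self.max_lifetime or not self.validate(conn):
                conn.close()
                continue
            return conn, opened_at
        return self.conn_factory(), now    # pool empty: open a fresh connection

# Demonstration with stand-in connection objects (no real database needed).
class FakeConn:
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

pool = RecyclingPool(FakeConn, validate=lambda c: not c.closed, max_lifetime=900.0)
c1, t1 = pool.borrow(now=0.0)
pool.idle.append((c1, t1))          # return the connection to the pool
c2, t2 = pool.borrow(now=100.0)     # young and healthy: the same connection is reused
pool.idle.append((c2, t2))
c3, t3 = pool.borrow(now=1000.0)    # past max_lifetime: c1 is closed, a new one is opened
```

The periodic background validation recommended above would, in practice, run the same health check over the idle list on a timer rather than only at borrow time.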
Connection scaling
The most common technique for scaling web service capacity is to add or remove application servers (instances) in response to changes in user traffic. Each application server can use a database connection pool. This approach causes the total number of database connections to grow proportionally with the number of application instances. For example, 20 application servers configured with 200 database connections each would require a total of 4,000 database connections. If the application pool scales up to 200 instances (for example, during peak hours), the total connection count will reach 40,000. Under a typical web application workload, most of these connections are likely idle. In extreme cases, this can limit database scalability: idle connections do take server resources, and you're opening significantly more of them than you need. Also, the total number of connections is not easy to control, because it's not something you configure directly but rather depends on the number of application servers. You have two options in this situation:
• Tune the connection pools on application instances. Reduce the number of connections in the pool to the acceptable minimum. This can be a stop-gap measure, but it might not be a long-term solution as your application server fleet continues to grow.
• Introduce a connection proxy between the database and the application. On one side, the proxy connects to the database with a fixed number of connections. On the other side, the proxy accepts application connections and can provide additional features such as query caching, connection buffering, query rewriting/routing, and load balancing.
Connection proxies
• Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon RDS that makes applications more scalable, more resilient to database failures, and more secure. Amazon RDS Proxy reduces the memory and CPU overhead for connection management on the database.
• Using Amazon RDS Proxy, you can handle unpredictable surges in
database traffic that otherwise might cause issues due to oversubscribing connections or creating new connections at a fast rate. To protect the database against oversubscription, you can control the number of database connections that are created.
• Each RDS Proxy performs connection pooling for the writer instance of its associated Amazon RDS or Aurora database. Connection pooling is an optimization that reduces the overhead associated with opening and closing connections, and with keeping many connections open simultaneously. This overhead includes the memory needed to handle each new connection. It also involves CPU overhead to close each connection and open a new one, such as Transport Layer Security/Secure Sockets Layer (TLS/SSL) handshaking, authentication, negotiating capabilities, and so on. Connection pooling simplifies your application logic. You don't need to write application code to minimize the number of simultaneous open connections. Connection pooling also cuts down on the amount of time a user must wait to establish a connection to the database.
• To perform load balancing for read-intensive workloads, you can create a read-only endpoint for RDS Proxy. That endpoint passes connections to the reader endpoint of the cluster. That way, your proxy connections can take advantage of Aurora read scalability.
• ProxySQL, MaxScale, and ScaleArc are examples of third-party proxies compatible with the MySQL protocol. For even greater scalability and availability, you can use multiple proxy instances behind a single DNS endpoint.
Transaction management and autocommit
With autocommit enabled, each SQL statement runs within its own transaction. When the statement ends, the transaction ends as well. Between statements,
the client connection is not in a transaction. If you need a transaction to remain open for more than one statement, you explicitly begin the transaction, run the statements, and then commit or roll back the transaction. With autocommit disabled, the connection is always in a transaction. You can commit or roll back the current transaction, at which point the server immediately opens a new one. Refer to the MySQL Reference Manual for details. Running with autocommit disabled is not recommended, because it encourages long-running transactions where they're not needed. Open transactions block a server's internal garbage collection mechanisms, which are essential to maintaining optimal performance. In extreme cases, garbage collection backlog leads to excessive storage consumption, elevated CPU utilization, and query slowness.
Recommendations:
• Always run with autocommit mode enabled. Set the autocommit parameter to 1 on the database side (which is the default) and on the application side (which might not be the default).
• Always double-check the autocommit settings on the application side. For example, Python drivers such as MySQLdb and PyMySQL disable autocommit by default.
• Manage transactions explicitly by using BEGIN/START TRANSACTION and COMMIT/ROLLBACK statements. You should start transactions when you need them and commit as soon as the transactional work is done.
Note that these recommendations are not specific to Aurora MySQL. They apply to MySQL and other databases that use the InnoDB storage engine. Long transactions and garbage collection backlog are easy to monitor:
• You can obtain the metadata of currently running transactions from the INFORMATION_SCHEMA.INNODB_TRX table. The TRX_STARTED column contains the
transaction start time, and you can use it to calculate transaction age. A transaction is worth investigating if it has been running for several minutes or more. Refer to the MySQL Reference Manual for details about the table.
• You can read the size of the garbage collection backlog from InnoDB's trx_rseg_history_len counter in the INFORMATION_SCHEMA.INNODB_METRICS table. Refer to the MySQL Reference Manual for details about the table. The larger the counter value is, the more severe the impact might be in terms of query performance, CPU usage, and storage consumption. Values in the range of tens of thousands indicate that the garbage collection is somewhat delayed. Values in the range of millions or tens of millions might be dangerous and should be investigated.
Note – In Amazon Aurora, all DB instances use the same storage volume, which means that the garbage collection is cluster-wide and not specific to each instance. Consequently, a runaway transaction on one instance can impact all instances. Therefore, you should monitor long transactions on all DB instances.
Connection handshakes
A lot of work can happen behind the scenes when an application connector or a graphical user interface (GUI) tool opens a new database session. Drivers and client tools commonly run a series of statements to set up session configuration (for example, SET SESSION variable = value). This increases the cost of creating new connections and delays when your application can start issuing queries. The cost of connection handshakes becomes even more important if your applications are very sensitive to latency. OLTP or key-value workloads that expect single-digit millisecond latency can be visibly impacted if each connection is expensive to
open. For example, if the driver runs six statements to set up a connection, and each statement takes just one millisecond to run, your application will be delayed by six milliseconds before it issues its first query.
Recommendations:
• Use the Aurora MySQL Advanced Audit, the General Query Log, or network-level packet traces (for example, with tcpdump) to obtain a record of statements run during a connection handshake. Whether or not you're experiencing connection or latency issues, you should be familiar with the internal operations of your database driver.
• For each handshake statement, you should be able to explain its purpose and describe its impact on queries you'll subsequently run on that connection.
• Each handshake statement requires at least one network round trip and will contribute to higher overall session latency. If the number of handshake statements appears to be significant relative to the number of statements doing actual work, determine if you can disable any of the handshake statements. Consider using connection pooling to reduce the number of connection handshakes.
Load balancing with the reader endpoint
Because the reader endpoint contains all Aurora Replicas, it can provide DNS-based round-robin load balancing for new connections. Every time you resolve the reader endpoint, you'll get an instance IP that you can connect to, chosen in round-robin fashion. DNS load balancing works at the connection level (not the individual query level). You must keep resolving the endpoint without caching DNS to get a different instance IP on each resolution. If you only resolve the endpoint once and then keep the connection in your pool, every query on that connection goes to the same instance. If you
cache DNS, you receive the same instance IP each time you resolve the endpoint. You can use Amazon RDS Proxy to create additional read-only endpoints for an Aurora cluster. These endpoints perform the same kind of load balancing as the Aurora reader endpoint. Applications can reconnect more quickly to the proxy endpoints than to the Aurora reader endpoint if reader instances become unavailable. If you don't follow best practices, these are examples of issues that can occur:
• Unequal use of Aurora Replicas; for example, one of the Aurora Replicas is receiving most or all of the traffic while the other Aurora Replicas sit idle.
• After you add or scale an Aurora Replica, it doesn't receive traffic, or it begins to receive traffic after an unexpectedly long delay.
• After you remove an Aurora Replica, applications continue to send traffic to that instance.
For more information, refer to the DNS endpoints and DNS caching sections of this document.
Designing for fault tolerance and quick recovery
In large-scale database operations, you're statistically more likely to experience issues such as connection interruptions or hardware failures. You must also take operational actions more frequently, such as scaling, adding, or removing DB instances and performing software upgrades. The only scalable way of addressing this challenge is to assume that issues and changes will occur, and design your applications accordingly.
Examples:
• If Aurora MySQL detects that the primary instance has failed, it can promote a new primary instance and fail over to it, which typically happens within 30 seconds. Your application should be designed to recognize the change quickly and without manual intervention.
• If you create additional Aurora Replicas in an Aurora DB cluster, your application should automatically recognize the new Aurora Replicas and send traffic to them.
• If you remove instances from a DB cluster, your application should not try to connect to them.
This version has been archived. For the latest version
of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/amazon-aurora-mysql-db-admin-handbook/amazon-aurora-mysql-db-admin-handbook.html
Test your applications extensively, and prepare a list of assumptions about how the application should react to database events. Then experimentally validate the assumptions. If you don't follow best practices, database events (for example, failovers, scaling, and software upgrades) might result in longer-than-expected downtime. For example, you might notice that a failover took 30 seconds (per the DB cluster's event notifications), but the application remained down for much longer.
Server configuration
There are two major server configuration variables worth mentioning in the context of this whitepaper: max_connections and max_connect_errors.
Configuration variable max_connections
The configuration variable max_connections limits the number of database connections per Aurora DB instance. The best practice is to set it slightly higher than the maximum number of connections you expect to open on each instance. If you also enabled performance_schema, be extra careful with the setting. The Performance Schema memory structures are sized automatically based on server configuration variables, including max_connections. The higher you set the variable, the more memory Performance Schema uses. In extreme cases, this can lead to out-of-memory issues on smaller instance types.
Note for T2 and T3 instance families: Using Performance Schema on T2 and T3 Aurora DB instances with less than 8 GB of memory isn't recommended. To reduce the risk of out-of-memory issues on T2 and T3 instances:
• Don't enable Performance Schema.
• If you must use Performance Schema, leave max_connections at the default value.
• Disable Performance Schema if you plan to increase max_connections to a value significantly greater than the default value.
Refer to the MySQL Reference Manual for details about the
max_connections variable.
Configuration variable max_connect_errors
The configuration variable max_connect_errors determines how many successive interrupted connection requests are permitted from a given client host. If the client host exceeds the number of successive failed connection attempts, the server blocks it. Further connection attempts from that client yield an error:
Host 'host_name' is blocked because of many connection errors. Unblock with 'mysqladmin flush-hosts'
A common (but incorrect) practice is to set the parameter to a very high value to avoid client connectivity issues. This practice isn't recommended because it:
• Allows application owners to tolerate connection problems rather than identify and resolve the underlying cause. Connection issues can impact your application health, so they should be resolved rather than ignored.
• Can hide real threats, for example, someone actively trying to break into the server.
If you experience "host is blocked" errors, increasing the value of the max_connect_errors variable isn't the correct response. Instead, investigate the server's diagnostic counters in the aborted_connects status variable and the host_cache table. Then use the information to identify and fix clients that run into connection issues. Also note that this parameter has no effect if skip_name_resolve is set to 1 (the default).
Refer to the MySQL Reference Manual for details on the following:
• max_connect_errors variable
• "Host is blocked" error
• aborted_connects status variable
• host_cache table
This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/amazon-aurora-mysql-db-admin-handbook/
amazon-aurora-mysql-db-admin-handbook.html
Conclusion
Understanding and implementing connection management best practices is critical to achieving scalability, reducing downtime, and ensuring smooth integration between the application and database layers. You can apply most of the recommendations provided in this whitepaper with little to no engineering effort. The guidance provided in this whitepaper should help you introduce improvements in your current and future application deployments using Aurora MySQL DB clusters.
Contributors
Contributors to this document include:
• Szymon Komendera, Database Engineer, Amazon Aurora
• Samuel Selvan, Database Specialist Solutions Architect, Amazon Web Services
Further reading
For additional information, refer to:
• Aurora on Amazon RDS User Guide
• Communication Errors and Aborted Connections in the MySQL Reference Manual
Document revisions
• October 20, 2021 – Minor content updates to follow the new style guide, and hyperlink updates.
• July 2021 – Minor content updates to the following topics: Smart Drivers, Connection Management and Pooling, and Connection Scaling.
• March 2019 – Minor content updates to the following topics: Introduction, DNS Endpoints, and Server Configuration.
• January 2018 – First publication,General,consultant,Best Practices
Amazon_EC2_Reserved_Instances_and_Other_Reservation_Models,"Amazon EC2 Reserved Instances and Other AWS Reservation Models: AWS Whitepaper
Copyright © Amazon Web Services, Inc. and/or its affiliates. All rights reserved.
Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon.
Table of Contents
• Abstract
• Introduction
• Amazon EC2 Reserved Instances
  • Reserved Instances payment options
  • Standard vs Convertible offering classes
  • Regional and zonal Reserved Instances
  • Differences between regional and zonal Reserved Instances
  • Limitations for instance size flexibility
  • Maximizing Utilization with Size Flexibility in Regional Reserved Instances
  • Normalization factor for dedicated EC2 instances
  • Normalization factor for bare metal instances
• Savings Plans
• Reservation models for other AWS services
  • Amazon RDS reserved DB instances
  • Amazon ElastiCache reserved nodes
  • Amazon Elasticsearch Service Reserved Instances
  • Amazon Redshift reserved nodes
  • Amazon DynamoDB reservations
• Reserved Instances billing
  • Usage billing
  • Consolidated billing
  • Reserved Instances: Capacity reservations
  • Blended rates
  • How discounts are applied
• Maximizing the value of reservations
  • Measure success
  • Maximize discounts by standardizing instance type
• Reservation management techniques
  • Reserved Instance Marketplace
  • AWS Cost Explorer
  • AWS Cost and Usage Report
  • Reserved Instances on your cost and usage report
  • AWS Trusted Advisor
• Conclusion
• Contributors
• Document revisions
• Notices
Abstract
Amazon EC2 Reserved Instances and Other AWS Reservation Models
Publication date: March 29, 2021 (Document revisions)
Abstract
This document is part of a series of AWS whitepapers designed to
support your cloud journey, and discusses Amazon EC2 Reserved Instances and reservation models for other AWS services. Its aim is to empower you to maximize the value of your investments, improve forecasting accuracy and cost predictability, create a culture of ownership and cost transparency, and continuously measure your optimization status.
Introduction
The cloud is well suited for variable workloads and rapid deployment, yet many cloud-based workloads follow a more predictable pattern. For such applications, your organization can achieve significant cost savings by using Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instances. Amazon EC2 Reserved Instances enable your organization to commit to usage parameters at the time of purchase to achieve a lower hourly rate. Reservation models are also available for Amazon Relational Database Service (Amazon RDS), Amazon ElastiCache, Amazon Elasticsearch Service (Amazon ES), Amazon Redshift, and Amazon DynamoDB. This whitepaper discusses Amazon EC2 Reserved Instances and the reservation models for these other AWS services.
Amazon EC2 Reserved Instances
When you purchase Reserved Instances, you make a one-year or three-year commitment and receive a billing discount of up to 72 percent in return. When used for the appropriate workloads, Reserved Instances can save you a lot of money. Note that a Reserved Instance is not an instance dedicated to your organization. It is a billing discount applied to the use of On-Demand Instances in your account. These On-Demand Instances must match certain attributes of the Reserved Instances you purchased to benefit from the billing discount. You pay for the entire term of a Reserved Instance regardless of actual usage, so your cost savings are closely tied to use. Therefore, it is important to plan and monitor your usage to
make the most of your investment. When you purchase a Reserved Instance in a specific Availability Zone, it provides a capacity reservation. This improves the likelihood that the compute capacity you need is available in a specific Availability Zone when you need it. A Reserved Instance purchased for an AWS Region does not provide a capacity reservation.
Reserved Instances payment options
You can purchase Reserved Instances through the AWS Management Console. The following payment options are available for most Reserved Instances:
• No Upfront – No upfront payment is required. You are billed a discounted hourly rate for every hour within the term, regardless of whether the Reserved Instance is being used. No Upfront Reserved Instances are based on a contractual obligation to pay monthly for the entire term of the reservation. A successful billing history is required before you can purchase No Upfront Reserved Instances.
• Partial Upfront – A portion of the cost must be paid up front, and the remaining hours in the term are billed at a discounted hourly rate, regardless of whether you're using the Reserved Instance.
• All Upfront – Full payment is made at the start of the term, with no other costs or additional hourly charges incurred for the remainder of the term, regardless of hours used.
Reserved Instances with a higher upfront payment provide greater discounts. You can also find Reserved Instances offered by third-party sellers at lower prices and shorter terms on the Reserved Instance Marketplace. As you purchase more Reserved Instances, volume discounts begin to apply that let you save even more. For more information, see Amazon EC2 Reserved Instance Pricing.
Standard vs Convertible offering classes
When you purchase a Reserved Instance, you can choose between a Standard or Convertible offering class.
Table 1 – Comparison of Standard and Convertible Reserved Instances
• Term: Standard Reserved Instance – one-year to three-year term. Convertible Reserved Instance – one-year to three-year term.
• Modification/exchange: Standard Reserved Instance – enables you to modify the Availability Zone, scope, networking type, and instance size (within the same instance type) of your Reserved Instance; for more information, see Modifying Reserved Instances. Convertible Reserved Instance – enables you to exchange one or more Convertible Reserved Instances for another Convertible Reserved Instance with a different configuration, including instance family, operating system, and tenancy; there are no limits to how many times you perform an exchange, as long as the target Convertible Reserved Instance is of an equal or higher value than the Convertible Reserved Instances that you are exchanging; for more information, see Exchanging Convertible Reserved Instances.
• Marketplace: Standard Reserved Instance – can be sold in the Reserved Instance Marketplace. Convertible Reserved Instance – cannot be sold in the Reserved Instance Marketplace.
Standard Reserved Instances typically provide the highest discount levels. One-year Standard Reserved Instances provide a similar discount to three-year Convertible Reserved Instances. If you want to purchase capacity reservations, see On-Demand Capacity Reservations. Convertible Reserved Instances are useful when:
• Purchasing Reserved Instances in the payer account instead of a subaccount. You can more easily modify Convertible Reserved Instances to meet changing needs across your organization.
• Workloads are likely to change. In this case, a Convertible Reserved Instance enables you to adapt as needs evolve while still obtaining discounts and capacity reservations.
• You want to hedge against possible future price drops.
• You can't or don't want to ask teams to do capacity planning or forecasting.
• You expect compute usage to remain at the committed amount over the commitment period.
Regional and zonal Reserved Instances
When you purchase a Reserved Instance, you determine the scope of the Reserved Instance. The scope is either regional or zonal.
• Regional: When you purchase
a Reserved Instance for a Region, it's referred to as a regional Reserved Instance.
• Zonal: When you purchase a Reserved Instance for a specific Availability Zone, it's referred to as a zonal Reserved Instance.
Differences between regional and zonal Reserved Instances
The following table highlights some key differences between regional Reserved Instances and zonal Reserved Instances:
Table 2 – Comparison of regional and zonal Reserved Instances
• Availability Zone flexibility: Regional – the Reserved Instance discount applies to instance usage in any Availability Zone in the specified Region. Zonal – no Availability Zone flexibility; the Reserved Instance discount applies to instance usage in the specified Availability Zone only.
• Capacity reservation: Regional – no capacity reservation; a regional Reserved Instance does not provide a capacity reservation. Zonal – a zonal Reserved Instance provides a capacity reservation in the specified Availability Zone.
• Instance size flexibility: Regional – the Reserved Instance discount applies to instance usage within the instance family, regardless of size; only supported on Amazon Linux/Unix Reserved Instances with default tenancy (for more information, see Instance size flexibility determined by normalization factor). Zonal – no instance size flexibility; the Reserved Instance discount applies to instance usage for the specified instance type and size only.
Limitations for instance size flexibility
Instance size flexibility does not apply to the following Reserved Instances:
• Reserved Instances that are purchased for a specific Availability Zone (zonal Reserved Instances)
• Reserved Instances with dedicated tenancy
• Reserved Instances for Windows Server, Windows Server with SQL Standard, Windows Server with SQL Server Enterprise, Windows Server with SQL Server Web, RHEL, and SUSE Linux Enterprise Server
• Reserved Instances for G4 instances
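The scope, tenancy, platform, and family limitations above can be condensed into a small eligibility check. This is a hypothetical helper for illustration only: the string values used for scope, tenancy, platforms, and family codes are assumptions made for the sketch, not official AWS API identifiers.

```python
# Platforms listed above as excluded from instance size flexibility
# (illustrative labels, not official AWS platform strings).
SIZE_INFLEXIBLE_PLATFORMS = {
    "Windows Server",
    "Windows Server with SQL Standard",
    "Windows Server with SQL Server Enterprise",
    "Windows Server with SQL Server Web",
    "RHEL",
    "SUSE Linux Enterprise Server",
}

def is_size_flexible(scope, tenancy, platform, family):
    """Return True when a Reserved Instance qualifies for instance size
    flexibility: regional scope, default (shared) tenancy, a Linux/Unix
    platform, and not a G4 instance family."""
    if scope != "regional":      # zonal RIs have no size flexibility
        return False
    if tenancy != "default":     # dedicated tenancy is excluded
        return False
    if platform in SIZE_INFLEXIBLE_PLATFORMS:
        return False
    if family.startswith("g4"):  # G4 instances are excluded
        return False
    return True
```

For example, a regional, default-tenancy Linux m5 Reserved Instance passes the check, while the same purchase scoped to a single Availability Zone does not.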
Maximizing Utilization with Size Flexibility in Regional Reserved Instances
For additional flexibility, all regional Linux Reserved Instances with shared tenancy apply to all sizes of instances within an instance family and an AWS Region, even if you are using them across multiple accounts via Consolidated Billing. The only attributes that must be matched are the instance type (for example, m4), tenancy (must be default), and platform (must be Linux). All new and existing Reserved Instances are sized according to a normalization factor, based on instance size, as follows.
Table 3 – Regional Reserved Instance sizes and normalization factors
• nano: 0.25
• micro: 0.5
• small: 1
• medium: 2
• large: 4
• xlarge: 8
• 2xlarge: 16
• 4xlarge: 32
• 8xlarge: 64
• 9xlarge: 72
• 10xlarge: 80
• 12xlarge: 96
• 16xlarge: 128
• 24xlarge: 192
• 32xlarge: 256
For example, if you have a Reserved Instance for a c4.8xlarge, it applies to any usage of a Linux c4 instance with shared tenancy in the AWS Region, such as:
• One c4.8xlarge instance
• Two c4.4xlarge instances
• Four c4.2xlarge instances
• Sixteen c4.large instances
It also includes combinations of instances. For example, a t2.medium instance has a normalization factor of 2. If you purchase a t2.medium default tenancy Amazon Linux/Unix Reserved Instance in the US East (N. Virginia) Region, and you have two running t2.small instances in your account in that Region, the billing benefit is applied in full to both instances.
Figure 1 – Two t2.medium instances running in a Region
Or, if you have one t2.large instance running in your account in the US East (N. Virginia) Region, the billing benefit is applied to 50% of the usage of the instance.
Figure 2 – One t2.large
instance running in a Region
The normalization factor is also applied when modifying Reserved Instances.
Normalization factor for dedicated EC2 instances
For size-inflexible RIs, the normalization factor is always 1. The normalization factor doesn't apply to EC2 instances that do not have size flexibility. The sole purpose of the normalization factor is to provide an ability to match various EC2 instances to each other within a family, so that you can exchange one type for another type. We do not support this use case for EC2 instances without size flexibility; hence, the normalization factor is not used, and to keep our data model uniform across different EC2 use cases, we assign it an equivalent value of 1.
Normalization factor for bare metal instances
Instance size flexibility also applies to bare metal instances within the instance family. If you have regional Amazon Linux/Unix Reserved Instances with shared tenancy on bare metal instances, you can benefit from the Reserved Instance savings within the same instance family. The opposite is also true: if you have regional Amazon Linux/Unix Reserved Instances with shared tenancy on instances in the same family as a bare metal instance, you can benefit from the Reserved Instance savings on the bare metal instance. A bare metal instance is the same size as the largest instance within the same instance family. For example, an i3.metal is the same size as an i3.16xlarge, so they have the same normalization factor. The metal instance sizes do not have a single normalization factor; they vary based on the specific instance family. For the most up-to-date list, see Amazon EC2 Instance Types.
Table 4 – Bare metal instance sizes and normalization factors
• a1.metal: 32
• c5.metal: 192
• c5d.metal: 192
• c5n.metal: 144
• c6g.metal: 128
• c6gd.metal: 128
• g4dn.metal: 192
• i3.metal: 128
• i3en.metal: 192
• m5.metal: 192
• m5d.metal: 192
• m5dn.metal: 192
• m5n.metal: 192
• m5zn.metal: 96
• m6g.metal: 128
• m6gd.metal: 128
• r5.metal: 192
• r5b.metal: 192
• r5d.metal: 192
• r5dn.metal: 192
• r5n.metal: 192
• r6g.metal: 128
• r6gd.metal: 128
• x2gd.metal: 128
• z1d.metal: 96
For example, an i3.metal instance has a normalization factor of 128. If you purchase an i3.metal default tenancy Amazon Linux/Unix Reserved Instance in the US East (N. Virginia) Region, the billing benefit can apply as follows:
• If you have one running i3.16xlarge in your account in that Region, the billing benefit is applied in full to the i3.16xlarge instance (i3.16xlarge normalization factor = 128).
• Or, if you have two running i3.8xlarge instances in your account in that Region, the billing benefit is applied in full to both i3.8xlarge instances (i3.8xlarge normalization factor = 64).
• Or, if you have four running i3.4xlarge instances in your account in that Region, the billing benefit is applied in full to all four i3.4xlarge instances (i3.4xlarge normalization factor = 32).
The opposite is also true. For example, if you purchase two i3.8xlarge default tenancy Amazon Linux/Unix Reserved Instances in the US East (N. Virginia) Region, and you have one running i3.metal instance in that Region, the billing benefit is applied in full to the i3.metal instance.
Savings Plans
Savings Plans is another flexible pricing model that provides savings of up to 72% on your AWS compute usage. This pricing model offers lower prices on Amazon EC2 instance usage, regardless of instance family, size, OS, tenancy, or AWS Region, and also applies to AWS Fargate and AWS Lambda usage. Savings Plans offer significant savings over On-Demand Instances, just like EC2 Reserved Instances, in exchange for a commitment to use a specific amount of compute power (measured in $/hour) for a one- or three-year period. You can sign up for Savings Plans for a one- or three-year term and easily manage your plans by taking advantage of
recommendations, performance reporting, and budget alerts in AWS Cost Explorer.

AWS offers two types of Savings Plans:
•Compute Savings Plans provide the most flexibility and help to reduce your costs by up to 66% (just like Convertible RIs). These plans automatically apply to EC2 instance usage regardless of instance family, size, AZ, Region, operating system, or tenancy, and also apply to Fargate and Lambda usage. For example, with Compute Savings Plans you can change from C4 to M5 instances, shift a workload from EU (Ireland) to EU (London), or move a workload from Amazon EC2 to Fargate or Lambda at any time, and automatically continue to pay the Savings Plans price.
•EC2 Instance Savings Plans provide the lowest prices, offering savings up to 72% (just like Standard RIs), in exchange for commitment to usage of individual instance families in a Region (for example, M5 usage in N. Virginia). This automatically reduces your cost on the selected instance family in that Region regardless of AZ, size, operating system, or tenancy. EC2 Instance Savings Plans give you the flexibility to change your usage between instances within a family in that Region. For example, you can move from c5.xlarge running Windows to c5.2xlarge running Linux and automatically benefit from the Savings Plans prices.

Note that Savings Plans does not provide a capacity reservation. You can, however, reserve capacity with On-Demand Capacity Reservations and pay lower prices on them with Savings Plans. You can continue purchasing RIs to maintain compatibility with your existing cost management processes, and your RIs will work alongside Savings Plans to reduce your overall bill. However, as your RIs expire, we encourage you to sign up for Savings Plans, as they offer the same savings as RIs but with additional flexibility.

Reservation models for other AWS services
In addition to Amazon EC2, reservation models are available
for Amazon RDS, Amazon ElastiCache, Amazon ES, Amazon Redshift, and Amazon DynamoDB.

Topics:
•Amazon RDS reserved DB instances
•Amazon ElastiCache reserved nodes
•Amazon Elasticsearch Service Reserved Instances
•Amazon Redshift reserved nodes
•Amazon DynamoDB reservations

Amazon RDS reserved DB instances
Similar to Amazon EC2 Reserved Instances, there are three payment options for Amazon RDS reserved DB instances: No Upfront, Partial Upfront, and All Upfront. All reserved DB instance types are available for Aurora, MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server database engines. Size-flexible reserved DB instances are available for Amazon Aurora, MariaDB, MySQL, PostgreSQL, and the "Bring Your Own License" (BYOL) edition of the Oracle database engine. For more information about Amazon RDS reserved DB instances, see the following:
•Amazon RDS Reserved Instances
•Working with Reserved DB Instances
•Amazon DynamoDB Pricing

Amazon ElastiCache reserved nodes
Amazon ElastiCache reserved nodes give you the option to make a low one-time payment for each cache node you want to reserve. In turn, you receive a significant discount on the hourly charge for that node. Amazon ElastiCache provides three reserved cache node types (Light Utilization, Medium Utilization, and Heavy Utilization) that enable you to balance the amount you pay up front with your effective hourly price. Based on your application workload and the amount of time you plan to run them, Amazon ElastiCache reserved nodes might provide substantial savings over running on-demand nodes. Reserved cache nodes are available for both Redis and Memcached. For more information, see Amazon ElastiCache Reserved Nodes.

Amazon Elasticsearch Service Reserved Instances
Amazon Elasticsearch Service (Amazon ES) Reserved Instances (RIs) offer significant discounts compared to standard On-Demand Instances. The instances themselves are identical; RIs are just a billing discount
applied to On-Demand Instances in your account. For long-lived applications with predictable usage, RIs can provide considerable savings over time. Amazon ES RIs require one- or three-year terms and have three payment options that affect the discount rate. For more information, see Amazon Elasticsearch Service Reserved Instances.

Amazon Redshift reserved nodes
In AWS, the charges that you accrue for using Amazon Redshift are based on compute nodes. Each compute node is billed at an hourly rate. The hourly rate varies depending on factors such as AWS Region, node type, and whether the node receives on-demand node pricing or reserved node pricing. If you intend to keep an Amazon Redshift cluster running continuously for a prolonged period, you should consider purchasing reserved-node offerings. These offerings provide significant savings over on-demand pricing. However, they require you to reserve compute nodes and commit to paying for those nodes for either a one-year or a three-year duration. For more information about Amazon Redshift reserved node pricing, see Reserved Instance Pricing and Purchasing Amazon Redshift Reserved Nodes.

Amazon DynamoDB reservations
If you can predict your need for Amazon DynamoDB read-and-write throughput, reserved capacity offers significant savings over the normal price of DynamoDB provisioned throughput capacity. You pay a one-time upfront fee and commit to paying for a minimum usage level at specific hourly rates for the duration of the reserved capacity term. Any throughput you provision in excess of your reserved capacity is billed at standard rates for provisioned throughput. Provisioned capacity mode might be best if you:
•Have predictable application traffic
•Run applications whose traffic is consistent or ramps gradually
•Can forecast capacity requirements to control costs
For more information, see Pricing for Provisioned Capacity.
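As an illustration of the reserved-capacity charging rule above (you pay for the reserved minimum whether or not you use it, and any provisioned throughput beyond the reservation bills at standard rates), here is a minimal sketch. The function name and the rates in the example are hypothetical placeholders, not actual DynamoDB prices.

```python
def reserved_capacity_charge(provisioned_units, reserved_units,
                             reserved_rate, standard_rate, hours):
    """Illustrative charge model for reserved capacity (not AWS billing code).

    You pay for the full reserved minimum at the discounted hourly rate,
    whether or not you use it; provisioned throughput above the
    reservation is billed at the standard rate.
    """
    excess = max(0, provisioned_units - reserved_units)
    return hours * (reserved_units * reserved_rate + excess * standard_rate)

# 150 provisioned units against a 100-unit reservation, for one hour,
# with hypothetical rates of $0.01 (reserved) and $0.05 (standard):
# 100 * 0.01 + 50 * 0.05 = 3.50
charge = reserved_capacity_charge(150, 100, 0.01, 0.05, hours=1)
```

Note that if you provision less than the reserved minimum, the charge does not drop below the reserved commitment, reflecting the "commit to paying for a minimum usage level" wording above.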
Reserved Instances billing
All Reserved Instances provide you with a discount compared to On-Demand Instance pricing. With Reserved Instances, you pay for the entire term regardless of actual use. You can choose to pay for your Reserved Instance upfront, partially upfront, or monthly, depending on the payment option specified for the Reserved Instance. When Reserved Instances expire, you are charged On-Demand Instance rates.

You can queue a Reserved Instance for purchase up to three years in advance. This can help you ensure that you have uninterrupted coverage. For more information, see Queuing your purchase.

You can set up a billing alert to warn you when your bill exceeds a threshold that you define. For more information, see Monitoring Charges with Alerts and Notifications.

Usage billing
Except for DynamoDB reservations, which are billed based on throughput, reservations are billed for every clock-hour during the term you select, regardless of whether an instance is running or not. A clock-hour is defined against the standard 24-hour clock that runs from midnight to midnight (for example, 1:00:00 to 1:59:59 is one clock-hour).

A Reserved Instance billing benefit can be applied to a running instance on a per-second basis. Per-second billing is available for instances using an open-source Linux distribution, such as Amazon Linux and Ubuntu. Per-hour billing is used for commercial Linux distributions, such as Red Hat Enterprise Linux and SUSE Linux Enterprise Server.

A Reserved Instance billing benefit can apply to a maximum of 3600 seconds (one hour) of instance usage per clock-hour. You can run multiple instances concurrently, but can only receive the benefit of the Reserved Instance discount for a total of 3600 seconds per clock-hour. Instance usage that exceeds 3600 seconds in a clock-hour is billed at the On-Demand Instance rate. For example, if you purchase one m4.xlarge Reserved Instance and run four m4.xlarge instances concurrently for one hour, one
instance is charged at one hour of Reserved Instance usage and the other three instances are charged at three hours of On-Demand Instance usage. However, if you purchase one m4.xlarge Reserved Instance and run four m4.xlarge instances for 15 minutes (900 seconds) each within the same hour, the total running time for the instances is one hour, which results in one hour of Reserved Instance usage and 0 hours of On-Demand Instance usage.

Figure 3 – Running four instances for 15 minutes each in the same hour

If multiple eligible instances are running concurrently, the Reserved Instance billing benefit is applied to all the instances at the same time, up to a maximum of 3600 seconds in a clock-hour. Thereafter, the On-Demand Instance rates apply.

Figure 4 – Running four instances concurrently over the hour

You can find out about the charges and fees to your account by viewing the AWS Billing and Cost Management console. You can also examine your utilization and coverage, and receive reservation purchase recommendations, via AWS Cost Explorer. You can dive deeper into your reservations and Reserved Instance discount allocation via the AWS Cost and Usage Report. For more information on Reserved Instance usage billing, see Usage Billing.

Consolidated billing
AWS Organizations is an account management service that lets you consolidate multiple AWS accounts into an organization that you create and centrally manage. AWS Organizations includes consolidated billing and account management capabilities that enable you to better meet the budgetary, security, and compliance needs of your business. For more information, see What Is AWS Organizations?
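Returning to usage billing for a moment, the 3600-second-per-clock-hour rule illustrated in Figures 3 and 4 can be sketched as a short calculation. This is an illustration of the rule with a hypothetical helper function, not AWS's actual billing logic.

```python
def split_clock_hour_usage(seconds_per_instance, ri_count=1):
    """Split one clock-hour of matching instance usage into RI-covered
    and On-Demand seconds.

    Each Reserved Instance covers at most 3600 seconds of usage per
    clock-hour, applied across all concurrently running matching
    instances; the remainder bills at On-Demand rates.
    """
    total = sum(seconds_per_instance)
    ri_seconds = min(total, 3600 * ri_count)
    return ri_seconds, total - ri_seconds

# Four instances running the full hour against one RI:
# 3600 s billed as RI usage, 10800 s (three hours) as On-Demand.
full_hour = split_clock_hour_usage([3600] * 4)

# Four instances running 15 minutes (900 s) each: fully covered.
quarter_hours = split_clock_hour_usage([900] * 4)
```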
For more information on consolidated bills and how they are calculated, see Understanding Consolidated Bills.

The pricing benefits of Reserved Instances are shared when the purchasing account is billed under a consolidated billing payer account. The instance usage across all member accounts is aggregated in the payer account every month. This is useful for companies that have different functional teams or groups; the normal Reserved Instance logic is then applied to calculate the bill.

Reserved Instances: Capacity reservations
AWS also offers discounted hourly rates in exchange for an upfront fee and term contract. Services such as Amazon EC2 and Amazon RDS use this approach to sell reserved capacity for hourly use of Reserved Instances. For more information, see Reserved Instances in the Amazon EC2 User Guide for Linux Instances and Working with Reserved DB Instances in the Amazon Relational Database Service User Guide.

Blended rates
When you reserve capacity with Reserved Instances, your hourly usage is calculated at a discounted rate for instances of the same usage type in the same Availability Zone (AZ). When you launch additional instances of the same instance type in the same Availability Zone and exceed the number of instances in your reservation, AWS averages the rates of the Reserved Instances and the On-Demand Instances to give you a blended rate. A line item for the blended rate of that instance is displayed on the bill of any member account that is running an instance that matches the specifications of a reservation in the organization.

The payer account of an organization can turn off Reserved Instance sharing for member accounts in that organization via the AWS Billing Preferences. This means that Reserved Instances are not shared between that member account and other member accounts. Each estimated bill is computed using the most recent set of preferences. For information on
how to configure sharing, see Turning Off Reserved Instance Sharing.

How discounts are applied
The application of Amazon EC2 Reserved Instances is based on instance attributes, including the following:
•Instance type – Instance types comprise varying combinations of CPU, memory, storage, and networking capacity (for example, m4.xlarge). This gives you the flexibility to choose the appropriate mix of resources for your applications, such as compute-optimized, storage-optimized, and so on. Each instance type includes one or more instance sizes, enabling you to scale your resources to the requirements of your target workload.
•Platform – You can purchase Reserved Instances for Amazon EC2 instances running Linux, Unix, SUSE Linux, Red Hat Enterprise Linux, Windows Server, and Microsoft SQL Server platforms.
•Tenancy – Reserved Instances can be default tenancy or dedicated tenancy.
•Regional or zonal – See Regional and zonal Reserved Instances.

If you purchase a Reserved Instance and you already have a running instance that matches the attributes of the Reserved Instance, the billing benefit is immediately applied. You don't have to restart your instances. If you do not have an eligible running instance, launch an instance and ensure that you match the same criteria that you specified for your Reserved Instance. For more information, see Using Your Reserved Instances.

Maximizing the value of reservations
This section discusses how you can maximize the value of your reservations.

Topics:
•Measure success
•Maximize discounts by standardizing instance type
•Reservation management techniques
•Reserved Instance Marketplace
•AWS Cost Explorer
•AWS Cost and Usage Report
•AWS Trusted Advisor

Measure success
Making the most of reservations means measuring your reservation coverage (the portion of instances enjoying reservation discount benefits) and reservation
utilization (the degree to which purchased Reserved Instances are used). Establish a standardized review cadence in which you focus on the following questions:
•Do you need to modify any of your existing reservations to increase utilization?
•Are any currently utilized reservations expiring?
•Do you need to purchase any reservations to increase your coverage?
A standardized review cadence ensures that issues are surfaced and addressed in a timely manner. As your RIs expire, we encourage you to sign up for Savings Plans, as they offer the same savings as RIs but with additional flexibility.

Maximize discounts by standardizing instance type
By standardizing the instance types that your organization uses, you can ensure that deployments match the characteristics of your reservations to maximize your discounts. Standardization maximizes utilization and minimizes the level of effort associated with management of reservations. Three services that can help you standardize your instances are:
•AWS Config – Enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS Config continuously monitors and records your AWS resource configurations and lets you automate the evaluation of recorded configurations against desired configurations.
•AWS Service Catalog – Lets you create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine (VM) images, servers, software, and databases to complete multi-tier application architectures.
•AWS Compute Optimizer – Recommends optimal AWS compute resources for your workloads to reduce costs and improve performance by using machine learning algorithms to analyze historical utilization metrics. Compute Optimizer focuses on the configuration and resource utilization of your workload to identify dozens of defining characteristics, such as whether a workload is CPU-intensive, exhibits a daily pattern, or accesses local storage frequently. The service processes these
characteristics and identifies the hardware resource headroom required by the workload. It also infers how the workload would have performed on various hardware platforms (for example, Amazon EC2 instance types) and offers recommendations.

Reservation management techniques
You can manage reservations either by using a central IT operations or management team, or by using a specific team or business unit. The following table summarizes the different reservation management techniques.

Table 5 – Comparison of different reservation management techniques

Central reservation management:
•Maximizes reservation coverage by covering aggregate usage across a business
•Simplifies overall reservation management, especially when combining central management and Convertible Reserved Instances
•Reduces the requirement for an individual team to understand reservations

Team/business unit reservation management:
•Increases likelihood of high reservation utilization (for example, using already-purchased reservations) because a single team should understand its capacity commitment of RIs
•Reduces interfacing or planning between the business unit and the central team
•Streamlines decisions about purchases, purchase process, and reservation account location

Reserved Instance Marketplace
The Reserved Instance Marketplace supports the sale of third-party and AWS customers' unused Standard Reserved Instances, which vary in term lengths and pricing options. For example, you might want to sell Reserved Instances after moving instances to a new AWS Region, changing to a new instance type, ending projects before the term expiration, when your business needs change, or if you have unneeded capacity. If you want to sell your unused Reserved Instances on the Reserved Instance Marketplace, you must meet certain eligibility criteria. For more information, see Reserved Instance Marketplace.

AWS Cost Explorer
AWS Cost
Explorer lets you visualize, understand, and manage your AWS costs and usage over time. You can analyze your cost and usage data at a high level (for example, total costs and usage across all accounts in your organization) or for highly specific requests (for example, m2.2xlarge costs within account Y that are tagged project: secretProject).

You can dive deeper into your reservations using the Reserved Instance utilization and coverage reports. Using these reports, you can set custom Reserved Instance utilization and coverage targets and visualize progress toward your goals. From there, you can refine the underlying data using the available filtering dimensions (for example, account, instance type, scope, and more). AWS Cost Explorer provides the following prebuilt reports:
•EC2 RI Utilization % offers relevant data to identify and act on opportunities to increase your Reserved Instance usage efficiency. It's calculated by dividing Reserved Instance hours used by the total Reserved Instance purchased hours.
•EC2 RI Coverage % shows how much of your overall instance usage is covered by Reserved Instances. This lets you make informed decisions about when to purchase or modify a Reserved Instance to ensure maximum coverage. It's calculated by dividing Reserved Instance hours used by the total EC2 On-Demand and Reserved Instance hours.
Also, AWS Cost Explorer provides Reserved Instance purchase recommendations for zonal and size-flexible Reserved Instances to help payer accounts achieve greater cost efficiencies. For more information, see AWS Cost Explorer.

AWS Cost and Usage Report
The AWS Cost and Usage Report contains the most comprehensive set of data about your AWS costs and usage, including additional information regarding AWS services, pricing, and reservations. By using the AWS Cost and Usage Report, you can gain a wealth of reservation-related insights about the Amazon Resource Name
(ARN) for a reservation, the number of reservations, the number of units per reservation, and more. It can help you do the following:
•Calculate savings – Each hourly line item of usage contains the discounted rate that was charged, in addition to the public On-Demand Instance rate for that usage type at that time. You can quantify your savings by calculating the difference between the public On-Demand Instance rates and the rates you were charged.
•Track the allocation of Reserved Instance discounts – Each line item of usage that receives a discount contains information about where the discount came from. This makes it easier to trace which instances are benefiting from specific reservations.
These reports update up to three times per day.

Reserved Instances on your cost and usage report
The Fee line item is added to your bill when you purchase an All Upfront or Partial Upfront Reserved Instance, as shown.

Figure 5 – Fee line item from AWS Cost and Usage Report

The RI Fee line item describes the recurring monthly charges that are associated with Partial Upfront and No Upfront Reserved Instances. The RI Fee is calculated by multiplying your discounted hourly rate by the number of hours in the month, as shown.

Figure 6 – RI Fee line item from AWS Cost and Usage Report

The Discounted Usage line item describes the instance usage that received a matching Reserved Instance discount benefit. It's added to your bill when you have usage that matches one of your Reserved Instances, as shown.

Figure 7 – Discounted Usage line item from AWS Cost and Usage Report

AWS Trusted Advisor
AWS Trusted Advisor is an online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment. AWS Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices. To help you maximize utilization of Reserved Instances, AWS
Trusted Advisor checks your Amazon EC2 computing-consumption history and calculates an optimal number of Partial Upfront Reserved Instances. Recommendations are based on the previous calendar month's hour-by-hour usage, aggregated across all consolidated billing accounts. Note that Trusted Advisor does not provide size-flexible Reserved Instance recommendations. For more information about how the recommendation is calculated, see "Reserved Instance Optimization Check Questions" in the Trusted Advisor FAQs.

Conclusion
Effectively planned and managed reservations can help you achieve significant discounts for AWS workloads that run on a predictable schedule. It's important to analyze your current AWS usage to select the right reservation attributes from the start, and to devise a longer-term strategy for monitoring and managing your Reserved Instances. Using tools such as AWS Compute Optimizer, the AWS Cost and Usage Report, and the Reserved Instance utilization and coverage reports in AWS Cost Explorer, you can examine your overall usage and discover opportunities for greater cost efficiencies.

Contributors
Contributors to this document include:
•Pritam Pal, Senior Specialist Solution Architect, EC2 Spot, Amazon Web Services

Document revisions
To be notified about updates to this whitepaper, subscribe to the RSS feed.
•Minor update, March 29, 2021 – Updated bare metal instance types and normalization factors. Removed link to Scheduled Instances.
•Whitepaper updated, August 31, 2020 – Updated Reserved Instances billing information and normalization factors. Savings Plans section added.
•Initial publication, March 1, 2018 – Whitepaper published.
Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.


Amazon Elastic File System: Choosing Between the Different Throughput & Performance Modes
July 2018

This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document, and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The
responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Introduction
Performance Modes
General Purpose
Max I/O
Selecting the right performance mode
Throughput Modes
Bursting Throughput
Provisioned Throughput
Selecting the right throughput mode
Conclusion
Contributors
Further Reading
Document Revisions

Abstract
Storage types can generally be divided into three different categories: block, file, and object. Each storage type has made its way into the enterprise, and a large majority of data resides on file storage. Network shared file systems have become a critical storage platform for businesses of any size. These systems are accessed by a single client or by multiple clients (tens, hundreds, or thousands) concurrently, so they can access and use a common data set. Amazon Elastic File System (Amazon EFS) satisfies these demands and gives customers the flexibility to choose different performance and throughput modes that best suit their needs. This paper outlines the best practices for running network shared file systems on the AWS cloud platform and offers guidance to select the right Amazon EFS performance and throughput modes for your workload.

Introduction
Amazon Elastic File System (Amazon EFS) provides simple, scalable, elastic file storage for use with AWS Cloud services and on-premises resources. The file systems you create using Amazon EFS are elastic, growing and shrinking
automatically as you add and remove data. They can grow to petabytes in size, distributing data across an unconstrained number of storage servers in multiple Availability Zones. Amazon EFS supports Network File System version 4 (NFSv4.0 and 4.1), provides POSIX file system semantics, and guarantees open-after-close semantics. Amazon EFS is a regional service built on a foundation of high availability and high durability, and is designed to satisfy the performance and throughput demands of a wide spectrum of use cases and workloads, including web serving and content management, enterprise applications, media and entertainment processing workflows, home directories, database backups, developer tools, container storage, and big data analytics. EFS file systems provide customizable performance and throughput options so you can tune your file system to match the needs of your application.

Performance Modes
Amazon EFS offers two performance modes: General Purpose and Max I/O. You can select one when creating your file system. There is no price difference between the modes, so your file system is billed and metered the same. The performance mode can't be changed after the file system has been created.

General Purpose
General Purpose is the default performance mode and is recommended for the majority of use cases and workloads. It is the most commonly used performance mode and is ideal for latency-sensitive applications like web serving, content management systems, and general file serving. These file systems experience the lowest latency per file system operation and can achieve this for random or sequential I/O patterns. There is a limit of 7,000 file system operations per second, aggregated across all clients, for General Purpose performance mode file systems.

Max I/O
File systems created in Max I/O performance mode can scale to higher levels of aggregate throughput and operations per second when compared to General Purpose file systems. These file systems are designed for highly parallelized
applications like big data analytics, video transcoding and processing, and genomic analytics, which can scale out to tens, hundreds, or thousands of Amazon EC2 instances. Max I/O file systems do not have a 7,000 file system operations per second limit, but latency per file system operation is slightly higher when compared to General Purpose performance mode file systems.

Selecting the right performance mode
We recommend creating the file system in the default General Purpose performance mode and testing your workload for a period of time to test its performance. We provide eight Amazon CloudWatch metrics per file system to help you understand how your workload is driving the file system. One of these metrics, PercentIOLimit, is specific to General Purpose performance mode file systems and indicates, as a percent, how close you are to the 7,000 file system operations per second limit. If the PercentIOLimit value returned is at or near 100 percent for a significant amount of time during your test (see Figure 1), we recommend you use a Max I/O performance mode file system. To move to a different performance mode, you migrate the data to a different file system that was created in the other performance mode. You can use Amazon EFS File Sync to migrate the data. For more information on Amazon EFS File Sync, refer to the Amazon EFS File Sync section of the Amazon EFS User Guide.

Figure 1

There are some workloads that need to scale out to the higher I/O levels provided by Max I/O performance mode but are also latency sensitive and require the lower latency provided by General Purpose performance mode. In situations like this, and if the workload and applications support it, we recommend creating multiple General Purpose performance mode file systems and spreading the application workload across all these file systems. This would allow you to create a logical file system and shard data across multiple EFS file systems. Each file system would be mounted as a subdirectory, and the application can access these subdirectories in parallel (see Figure 2). This allows latency-sensitive workloads to scale to higher levels of file system operations per second, aggregated across multiple file systems, and at the same time take advantage of the lower latencies offered by General Purpose performance mode file systems.

Figure 2

Throughput Modes
The throughput mode of the file system helps determine the overall throughput a file system is able to achieve. You can select the throughput mode at any time (subject to daily limits). Changing the throughput mode is a non-disruptive operation and can be run while clients continuously access the file system. You can choose between two throughput modes: Bursting or Provisioned. There are price and throughput level differences between the two modes, so understanding each one, their differences, and when to select one throughput mode over the other is valuable.

Bursting Throughput
Bursting Throughput is the default mode and is recommended for a majority of use cases and workloads. Throughput scales as your file system grows, and you are billed only for the amount of data stored on the file system, in GB-Month. Because file-based workloads are typically spiky, driving high levels of throughput for short periods of time and low levels of throughput the rest of the time, file systems using Bursting Throughput mode allow for high throughput levels for a limited period of time. All Bursting Throughput mode file systems, regardless of size, can burst up to 100
MiB/s. Throughput also scales as the file system grows, at the bursting throughput rate of 100 MiB/s per TiB of data stored, subject to regional default file system throughput limits. These bursting throughput numbers can be achieved when the file system has a positive burst credit balance. You can monitor and alert on your file system's burst credit balance using the BurstCreditBalance file system metric in Amazon CloudWatch. File systems earn burst credits at the baseline throughput rate of 50 MiB/s per TiB of data stored and can accumulate burst credits up to a maximum of 2.1 TiB per TiB of data stored. This allows larger file systems to accumulate and store more burst credits, which allows them to burst for longer periods of time. If the file system's burst credit balance is ever depleted, the permitted throughput becomes the baseline throughput. Permitted throughput is the maximum amount of throughput a file system is allowed, and this value is available as an Amazon CloudWatch metric.

Provisioned Throughput

Provisioned Throughput is available for applications that require a higher throughput-to-storage ratio than what Bursting Throughput mode allows. In this mode, you can provision the file system's throughput independent of the amount of data stored in the file system. This allows you to optimize your file system's throughput to match your application's needs, and your application can drive up to the provisioned throughput continuously. This concept of provisioned performance is similar to features offered by other AWS services, like provisioned IOPS for Amazon Elastic Block Store io1 (PIOPS) volumes and provisioned throughput with read and write capacity units for Amazon DynamoDB. As with these
services, you are billed separately for the performance or throughput you provision and for the storage you use (that is, two billing dimensions). When file systems are running in Provisioned Throughput mode, you are billed for the storage you use in GB-Month and for the throughput provisioned in MiB/s-Month. The storage charge for both Bursting and Provisioned Throughput modes includes the baseline throughput of the file system in the price of storage. This means the price of storage includes 1 MiB/s of throughput per 20 GiB of data stored, so you are billed only for the throughput you provision above this baseline. For more information on pricing, see the Amazon EFS pricing page.3

You can increase Provisioned Throughput as often as you need. You can decrease Provisioned Throughput or switch throughput modes as long as it's been more than 24 hours since the last decrease or throughput mode change. File systems continuously earn burst credits, up to the maximum burst credit balance allowed for the file system. The maximum burst credit balance is 2.1 TiB for file systems smaller than 1 TiB, or 2.1 TiB per TiB stored for file systems larger than 1 TiB. File systems running in Provisioned Throughput mode still earn burst credits. They earn at the higher of the two rates: either the Provisioned Throughput rate or the baseline Bursting Throughput rate of 50 MiB/s per TiB of storage.

You could find yourself in the situation where your file system is running in Provisioned Throughput mode and, over time, grows so large that its provisioned throughput is less than the baseline throughput it would be entitled to had the file system been in Bursting Throughput mode. In a case like this, you are entitled to the higher throughput of the two modes, including the burst throughput of Bursting Throughput mode, and you will not be billed for throughput above the storage price. For example, you set the provisioned throughput of your 1 TiB file system to 200 MiB/s. Over time the file system grows to 5 TiB. A file
system in Bursting Throughput mode would be entitled to a baseline throughput of 50 MiB/s per TiB of data stored and a burst throughput of 100 MiB/s per TiB of data stored. Though your file system is still running in Provisioned Throughput mode, it's entitled to a baseline throughput of 250 MiB/s and a burst throughput of 500 MiB/s, and it will only incur a storage charge for a 5 TiB file system. For information on maximum provisioned throughput limits, refer to the Amazon EFS Limits section of the Amazon EFS User Guide.4

Selecting the right throughput mode

We recommend running file systems in Bursting Throughput mode because it offers a simple and scalable experience that provides the right ratio of throughput to storage capacity for most workloads. There are times when a file system needs a higher throughput-to-storage-capacity ratio than what Bursting Throughput mode offers. Knowing the throughput demands of your application and monitoring key indicators are two important ways of determining when you'll need these higher levels of throughput. We recommend using Amazon CloudWatch to monitor how your file system is performing. One of these metrics, BurstCreditBalance, is a key performance indicator that will help determine whether your file system is better suited for Provisioned Throughput mode. If this value is zero or steadily decreasing over a period of normal operations (see figure 3), your file system is consuming more burst credits than it is earning. This means your workload requires a throughput-to-storage-capacity ratio greater than what Bursting Throughput mode allows. If this occurs, we recommend provisioning throughput for your file system. This can be done by modifying the file system to change the throughput mode
using the AWS Management Console, AWS CLIs, AWS SDKs, or EFS APIs. When choosing to run in Provisioned Throughput mode, you must also indicate the amount of throughput you want to provision for your file system. To help determine how much throughput to provision, we recommend monitoring another key performance indicator available from Amazon CloudWatch: TotalIOBytes. This metric gives you throughput in terms of the total number of bytes (data read, data write, and metadata) for each file system operation during a selected period. To calculate the average throughput in MiB/s for a period, convert the Sum statistic to MiB (Sum of TotalIOBytes ÷ 1,048,576) and divide by the number of seconds in the period. Use Metric Math expressions in Amazon CloudWatch to make it even easier to see throughput in MiB/s. For more information on using Metric Math, see Using Metric Math with Amazon EFS in the Amazon EFS User Guide.5 Calculate this during the same period when your BurstCreditBalance metric was continuously decreasing. This will give you the average throughput you were achieving during this period, and it is a good starting point when choosing the amount of throughput to provision.

[Figure 3]

If your file system is running in Provisioned Throughput mode and you experience no performance issues while your BurstCreditBalance continuously increases for long periods of normal operations, consider decreasing the amount of provisioned throughput to reduce costs. To help determine how much throughput to provision, we again recommend monitoring the Amazon CloudWatch metric TotalIOBytes. Calculate this during the same period when your BurstCreditBalance metric was continuously increasing. This will give you the average throughput you were achieving during this
period and is a good starting point when choosing the amount of throughput to provision. Remember, you can increase the amount of provisioned throughput as often as you need, but you can only decrease the amount of provisioned throughput or switch throughput modes once it's been more than 24 hours since the last decrease or throughput mode change.

If you're planning on migrating large amounts of data into your file system, you may also want to consider switching to Provisioned Throughput mode and provisioning throughput beyond your allotted burst capability to accelerate loading the data. Following the migration, you may decide to lower the amount of provisioned throughput or switch to Bursting Throughput mode for normal operations. Monitor the average total throughput of the file system using the TotalIOBytes metric in Amazon CloudWatch. Use Metric Math expressions in Amazon CloudWatch to make it even easier to see throughput in MiB/s. Compare the average throughput you're driving the file system to the PermittedThroughput metric. If the calculated average throughput you're driving is less than the permitted throughput, consider making a throughput change to lower costs. If the calculated average throughput during normal operations is at or below the baseline throughput-to-storage-capacity ratio of Bursting Throughput mode (50 MiB/s per TiB of data stored), consider switching to Bursting Throughput mode. If the calculated average throughput during normal operations is above this ratio, consider lowering the amount of provisioned throughput to a level between your current provisioned throughput and the calculated average throughput during normal operations. Remember, you can switch throughput modes or decrease the amount of provisioned throughput only once it's been more than 24 hours since the last decrease or throughput mode change.

Conclusion

Amazon EFS gives you the flexibility to choose different performance and throughput
modes to customize your file system to meet the needs of a wide spectrum of workloads. Knowing the performance and throughput demands of your application and monitoring key performance indicators will help you select the right performance and throughput mode to satisfy your file system's needs.

Contributors

The following individuals and organizations contributed to this document:

• Darryl S. Osborne, solutions architect, Amazon File Services

Further Reading

For additional information, see the following:

• Amazon EFS User Guide6

Document Revisions

July 2018 – First publication

1 https://aws.amazon.com/efs/
2 https://docs.aws.amazon.com/efs/latest/ug/get-started-file-sync.html
3 https://aws.amazon.com/efs/pricing/
4 https://docs.aws.amazon.com/efs/latest/ug/limits.html
5 https://docs.aws.amazon.com/efs/latest/ug/monitoring-metric-math.html
6 https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html

Amazon Virtual Private Cloud Connectivity Options

January 2018

© 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and
liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

• Introduction
• Network-to-Amazon VPC Connectivity Options: AWS Managed VPN, AWS Direct Connect, AWS Direct Connect + VPN, AWS VPN CloudHub, Software VPN, Transit VPC
• Amazon VPC-to-Amazon VPC Connectivity Options: VPC Peering, Software VPN, Software-to-AWS Managed VPN, AWS Managed VPN, AWS Direct Connect, AWS PrivateLink
• Internal User-to-Amazon VPC Connectivity Options: Software Remote Access VPN
• Conclusion
• Appendix A: High-Level HA Architecture for Software VPN Instances
• VPN Monitoring
• Contributors
• Document Revisions

Abstract

Amazon Virtual Private Cloud (Amazon VPC) lets customers provision a private, isolated section of the Amazon Web Services (AWS) Cloud where they can launch AWS resources in a virtual network using customer-defined IP address ranges. Amazon VPC provides customers with several options for connecting their AWS virtual networks with other remote networks. This document describes several common network connectivity options available to our customers. These include connectivity options for integrating remote customer networks with Amazon VPC and connecting multiple Amazon VPCs into a contiguous virtual network.

This whitepaper is intended for corporate network architects and engineers or Amazon VPC administrators who would like to review the available connectivity options. It provides an overview of the various options to facilitate network connectivity discussions, as well as pointers to additional documentation and resources with more detailed information or examples.

Introduction

Amazon VPC provides multiple network connectivity options for you to leverage, depending on your current network designs and requirements. These connectivity options include leveraging either the
internet or an AWS Direct Connect connection as the network backbone, and terminating the connection into either AWS- or user-managed network endpoints. Additionally, with AWS you can choose how network routing is delivered between Amazon VPC and your networks, leveraging either AWS- or user-managed network equipment and routes.

This whitepaper considers the following options, with an overview and a high-level comparison of each:

User Network-to-Amazon VPC Connectivity Options

• AWS Managed VPN – Describes establishing a VPN connection from your network equipment on a remote network to AWS-managed network equipment attached to your Amazon VPC.
• AWS Direct Connect – Describes establishing a private, logical connection from your remote network to Amazon VPC, leveraging AWS Direct Connect.
• AWS Direct Connect + VPN – Describes establishing a private, encrypted connection from your remote network to Amazon VPC, leveraging AWS Direct Connect.
• AWS VPN CloudHub – Describes establishing a hub-and-spoke model for connecting remote branch offices.
• Software VPN – Describes establishing a VPN connection from your equipment on a remote network to a user-managed software VPN appliance running inside an Amazon VPC.
• Transit VPC – Describes establishing a global transit network on AWS, using Software VPN in conjunction with AWS Managed VPN.

Amazon VPC-to-Amazon VPC Connectivity Options

• VPC Peering – Describes the AWS-recommended approach for connecting multiple Amazon VPCs within and across regions using the Amazon VPC peering feature.
• Software VPN – Describes connecting multiple Amazon VPCs using VPN connections established between user-managed software VPN appliances running inside of each Amazon VPC.
• Software-to-AWS Managed VPN – Describes connecting multiple Amazon VPCs with a VPN connection established between a user-managed software VPN appliance in one Amazon VPC and AWS-managed network equipment attached to the other
Amazon VPC.
• AWS Managed VPN – Describes connecting multiple Amazon VPCs leveraging multiple VPN connections between your remote network and each of your Amazon VPCs.
• AWS Direct Connect – Describes connecting multiple Amazon VPCs leveraging logical connections on customer-managed AWS Direct Connect routers.
• AWS PrivateLink – Describes connecting multiple Amazon VPCs leveraging VPC interface endpoints and VPC endpoint services.

Internal User-to-Amazon VPC Connectivity Options

• Software Remote Access VPN – In addition to customer network-to-Amazon VPC connectivity options for connecting remote users to VPC resources, this section describes leveraging a remote access solution for providing end-user VPN access into an Amazon VPC.

Network-to-Amazon VPC Connectivity Options

This section provides design patterns for you to connect remote networks with your Amazon VPC environment. These options are useful for integrating AWS resources with your existing on-site services (for example, monitoring, authentication, security, data, or other systems) by extending your internal networks into the AWS Cloud. This network extension also allows your internal users to seamlessly connect to resources hosted on AWS, just like any other internally facing resource.

VPC connectivity to remote customer networks is best achieved when using non-overlapping IP ranges for each network being connected. For example, if you'd like to connect one or more VPCs to your home network, make sure they are configured with unique Classless Inter-Domain Routing (CIDR) ranges. We advise allocating a single, contiguous, non-overlapping CIDR block to be used by each VPC. For additional information about Amazon VPC routing and constraints, see the Amazon VPC Frequently Asked Questions.1

The following comparison lists each option's use case, advantages, and limitations:

AWS Managed VPN
Use case: AWS-managed IPsec VPN connection over the internet.
Advantages: Reuse existing VPN equipment and processes. Reuse existing internet connections.
The AWS-managed endpoint includes multi-data-center redundancy and automated failover. Supports static routes or dynamic Border Gateway Protocol (BGP) peering and routing policies.
Limitations: Network latency, variability, and availability are dependent on internet conditions. The customer-managed endpoint is responsible for implementing redundancy and failover (if required). The customer device must support single-hop BGP (when leveraging BGP for dynamic routing).

AWS Direct Connect
Use case: Dedicated network connection over private lines.
Advantages: More predictable network performance. Reduced bandwidth costs. 1 or 10 Gbps provisioned connections. Supports BGP peering and routing policies.
Limitations: May require additional telecom and hosting provider relationships, or new network circuits to be provisioned.

AWS Direct Connect + VPN
Use case: IPsec VPN connection over private lines.
Advantages: Same as the previous option, with the addition of a secure IPsec VPN connection.
Limitations: Same as the previous option, with a little additional VPN complexity.

AWS VPN CloudHub
Use case: Connect remote branch offices in a hub-and-spoke model for primary or backup connectivity.
Advantages: Reuse existing internet connections and AWS VPN connections (for example, use AWS VPN CloudHub as backup connectivity to a third-party MPLS network). The AWS-managed virtual private gateway includes multi-data-center redundancy and automated failover. Supports BGP for exchanging routes and routing priorities (for example, prefer MPLS connections over backup AWS VPN connections).
Limitations: Network latency, variability, and availability are dependent on the internet. User-managed branch office endpoints are responsible for implementing redundancy and failover (if required).

Software VPN
Use case: Software appliance-based VPN connection over the internet.
Advantages: Supports a wider array of VPN vendors, products, and protocols. Fully customer-managed solution.
Limitations: Customer is responsible for implementing HA (high availability) solutions for all VPN endpoints (if
required).

Transit VPC
Use case: Software appliance-based VPN connection with a hub VPC; AWS-managed IPsec VPN connections for spoke VPC connectivity.
Advantages: Same as the previous option, with the addition of AWS-managed VPN connections between hub and spoke VPCs.
Limitations: Same as the previous option.

AWS Managed VPN

Amazon VPC provides the option of creating an IPsec VPN connection between remote customer networks and their Amazon VPC over the internet, as shown in the following figure. Consider taking this approach when you want to take advantage of an AWS-managed VPN endpoint that includes automated multi-data-center redundancy and failover built into the AWS side of the VPN connection. Although not shown, the Amazon virtual private gateway represents two distinct VPN endpoints, physically located in separate data centers, to increase the availability of your VPN connection.

[Figure: AWS managed VPN]

The virtual private gateway also supports, and encourages, multiple user gateway connections so you can implement redundancy and failover on your side of the VPN connection, as shown in the following figure. Both dynamic and static routing options are provided to give you flexibility in your routing configuration. Dynamic routing uses BGP peering to exchange routing information between AWS and these remote endpoints. With dynamic routing, you can also specify routing priorities, policies, and weights (metrics) in your BGP advertisements and influence the network path between your networks and AWS. It is important to note that when you use BGP, both the IPsec and the BGP connections must be terminated on the same user gateway device, so it must be capable of terminating both IPsec and BGP connections.

[Figure: Redundant AWS managed VPN connections]

Additional Resources

• Adding a Virtual Private Gateway to Your VPC2
• Customer Gateway device minimum requirements3
• Customer Gateway devices known to work with Amazon VPC4
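The guidance above about assigning each connected network a unique, non-overlapping CIDR range can be checked programmatically before any VPN connection or route is configured. The following is a minimal sketch using Python's standard ipaddress module; the CIDR values are hypothetical examples, not recommendations:

```python
import ipaddress
from itertools import combinations

def find_overlaps(cidrs):
    """Return pairs of CIDR blocks whose address ranges overlap."""
    networks = [ipaddress.ip_network(c) for c in cidrs]
    return [
        (str(a), str(b))
        for a, b in combinations(networks, 2)
        if a.overlaps(b)
    ]

# Hypothetical VPC and on-premises ranges: the /17 sits inside the first /16,
# so these two networks cannot be connected without renumbering.
ranges = ["10.0.0.0/16", "10.1.0.0/16", "10.0.128.0/17", "192.168.0.0/24"]
print(find_overlaps(ranges))  # [('10.0.0.0/16', '10.0.128.0/17')]
```

Running a check like this across every VPC and remote network CIDR before provisioning helps avoid the renumbering work that overlapping ranges would otherwise force later.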
AWS Direct Connect

AWS Direct Connect makes it easy to establish a dedicated connection from an on-premises network to Amazon VPC. Using AWS Direct Connect, you can establish private connectivity between AWS and your data center, office, or colocation environment. This private connection can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections.

AWS Direct Connect lets you establish 1 Gbps or 10 Gbps dedicated network connections (or multiple connections) between AWS networks and one of the AWS Direct Connect locations. It uses industry-standard VLANs to access Amazon Elastic Compute Cloud (Amazon EC2) instances running within an Amazon VPC using private IP addresses. You can choose from an ecosystem of WAN service providers for integrating your AWS Direct Connect endpoint in an AWS Direct Connect location with your remote networks. The following figure illustrates this pattern. You can also work with your provider to create a sub-1G connection, or use a link aggregation group (LAG) to aggregate multiple 1-gigabit or 10-gigabit connections at a single AWS Direct Connect endpoint, allowing you to treat them as a single, managed connection.

[Figure: AWS Direct Connect]

AWS Direct Connect allows you to connect your AWS Direct Connect connection to one or more VPCs in your account that are located in the same or different regions. You can use a Direct Connect gateway to achieve this. A Direct Connect gateway is a globally available resource. You can create the Direct Connect gateway in any public region and access it from all other public regions. This feature also allows you to connect to any of the participating VPCs from any Direct Connect location, further reducing your costs for using AWS services on a cross-region basis. The following figure illustrates this pattern.

[Figure: AWS Direct Connect Gateway]
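When sizing a connection, a back-of-the-envelope transfer-time estimate can help decide between a single 1 Gbps connection, a 10 Gbps connection, or a LAG of several connections. The sketch below is illustrative arithmetic only, not an AWS tool; it assumes an ideal, fully utilized link and ignores protocol overhead:

```python
def transfer_hours(data_gib: float, link_gbps: float, links: int = 1) -> float:
    """Hours to move data_gib gibibytes over `links` aggregated connections
    of link_gbps gigabits per second each (ideal, fully utilized)."""
    bits = data_gib * 1024**3 * 8              # GiB -> bits
    seconds = bits / (link_gbps * 1e9 * links)  # link rate is decimal gigabits
    return seconds / 3600

# Moving 10 TiB over a single 1 Gbps connection vs. a 4 x 10 Gbps LAG.
print(round(transfer_hours(10 * 1024, 1), 1))      # 24.4 (hours)
print(round(transfer_hours(10 * 1024, 10, 4), 2))  # 0.61 (hours)
```

Real-world figures will be lower than the ideal rate, but even this rough model makes the order-of-magnitude difference between the connection sizes clear.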
Additional Resources

• AWS Direct Connect product page5
• AWS Direct Connect locations6
• AWS Direct Connect FAQs
• AWS Direct Connect LAGs
• AWS Direct Connect Gateways7
• Getting Started with AWS Direct Connect8

AWS Direct Connect + VPN

With AWS Direct Connect + VPN, you can combine one or more AWS Direct Connect dedicated network connections with the Amazon VPC VPN. This combination provides an IPsec-encrypted private connection that also reduces network costs, increases bandwidth throughput, and provides a more consistent network experience than internet-based VPN connections. You can use AWS Direct Connect to establish a dedicated network connection from your network and create a logical connection to public AWS resources, such as an Amazon virtual private gateway IPsec endpoint. This solution combines the AWS-managed benefits of the VPN solution with the low-latency, increased-bandwidth, more consistent benefits of the AWS Direct Connect solution, and an end-to-end secure IPsec connection. The following figure illustrates this option.

[Figure: AWS Direct Connect and VPN]

Additional Resources

• AWS Direct Connect product page9
• AWS Direct Connect FAQs10
• Adding a Virtual Private Gateway to Your VPC11

AWS VPN CloudHub

Building on the AWS Managed VPN and AWS Direct Connect options described previously, you can securely communicate from one site to another using the AWS VPN CloudHub. The AWS VPN CloudHub operates on a simple hub-and-spoke model that you can use with or without a VPC. Use this design if you have multiple branch offices and existing internet connections and would like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices. The
following figure depicts the AWS VPN CloudHub architecture, with blue dashed lines indicating network traffic between remote sites being routed over their AWS VPN connections.

[Figure: AWS VPN CloudHub]

AWS VPN CloudHub leverages an Amazon VPC virtual private gateway with multiple gateways, each using unique BGP autonomous system numbers (ASNs). Your gateways advertise the appropriate routes (BGP prefixes) over their VPN connections. These routing advertisements are received and readvertised to each BGP peer so that each site can send data to and receive data from the other sites. The remote network prefixes for each spoke must have unique ASNs, and the sites must not have overlapping IP ranges. Each site can also send data to and receive data from the VPC as if it were using a standard VPN connection. This option can be combined with AWS Direct Connect or other VPN options (for example, multiple gateways per site for redundancy, or backbone routing that you provide), depending on your requirements.

Additional Resources

• AWS VPN CloudHub12
• Amazon VPC VPN Guide13
• Customer Gateway device minimum requirements14
• Customer Gateway devices known to work with Amazon VPC15
• AWS Direct Connect product page16

Software VPN

Amazon VPC offers you the flexibility to fully manage both sides of your Amazon VPC connectivity by creating a VPN connection between your remote network and a software VPN appliance running in your Amazon VPC network. This option is recommended if you must manage both ends of the VPN connection, either for compliance purposes or to leverage gateway devices that are not currently supported by Amazon VPC's VPN solution. The following figure shows this option.

[Figure: Software VPN]

You can choose from an ecosystem of
multiple partners and open source communities that have produced software VPN appliances that run on Amazon EC2. These include products from well-known security companies like Check Point, Astaro, OpenVPN Technologies, and Microsoft, as well as popular open source tools like OpenVPN, Openswan, and IPsec-Tools. Along with this choice comes the responsibility for you to manage the software appliance, including configuration, patches, and upgrades. Note that this design introduces a potential single point of failure into the network design, because the software VPN appliance runs on a single Amazon EC2 instance. For additional information, see Appendix A: High-Level HA Architecture for Software VPN Instances.

Additional Resources

• VPN Appliances from the AWS Marketplace17
• Tech Brief: Connecting Cisco ASA to VPC EC2 Instance (IPsec)18
• Tech Brief: Connecting Multiple VPCs with EC2 Instances (IPsec)19
• Tech Brief: Connecting Multiple VPCs with EC2 Instances (SSL)20

Transit VPC

Building on the Software VPN design mentioned above, you can create a global transit network on AWS. A transit VPC is a common strategy for connecting multiple, geographically disperse VPCs and remote networks in order to create a global network transit center. A transit VPC simplifies network management and minimizes the number of connections required to connect multiple VPCs and remote networks. The following figure illustrates this design.

[Figure: Software VPN and Transit VPC]

Along with providing direct network routing between VPCs and on-premises networks, this design also enables the transit VPC to implement more complex routing rules, such as network address translation between overlapping network ranges, or to add additional network-level packet
filtering or inspection. The transit VPC design can be used to support important use cases like private networking, shared connectivity, and cross-account AWS usage.

Additional Resources

• Tech Brief: Global Transit Network
• Solution: Transit VPC

Amazon VPC-to-Amazon VPC Connectivity Options

Use these design patterns when you want to integrate multiple Amazon VPCs into a larger virtual network. This is useful if you require multiple VPCs due to security, billing, presence in multiple regions, or internal charge-back requirements, and want to more easily integrate AWS resources between Amazon VPCs. You can also combine these patterns with the Network-to-Amazon VPC Connectivity Options to create a corporate network that spans remote networks and multiple VPCs.

Connectivity between VPCs is best achieved when using non-overlapping IP ranges for each VPC being connected. For example, if you'd like to connect multiple VPCs, make sure each VPC is configured with unique Classless Inter-Domain Routing (CIDR) ranges. We advise you to allocate a single, contiguous, non-overlapping CIDR block to be used by each VPC. For additional information about Amazon VPC routing and constraints, see the Amazon VPC Frequently Asked Questions.21

The following comparison lists each option's use case, advantages, and limitations:

VPC Peering
Use case: AWS-provided network connectivity between two VPCs.
Advantages: Leverages AWS networking infrastructure. Does not rely on VPN instances or a separate piece of physical hardware. No single point of failure. No bandwidth bottleneck.
Limitations: VPC peering does not support transitive peering relationships.

Software VPN
Use case: Software appliance-based VPN connections between VPCs.
Advantages: Leverages AWS networking equipment in region and internet pipes between regions. Supports a wider array of VPN vendors, products, and protocols. Managed entirely by you.
Limitations: You are responsible for implementing HA solutions for all VPN endpoints (if required). VPN instances could become a network bottleneck.

Software-to-AWS
Managed VPN
Use case: Software appliance-to-VPN connection between VPCs.
Advantages: Leverages AWS networking equipment in region and internet pipes between regions. The AWS-managed endpoint includes multi-data-center redundancy and automated failover.
Limitations: You are responsible for implementing HA solutions for the software appliance VPN endpoints (if required). VPN instances could become a network bottleneck.

AWS Managed VPN
Use case: VPC-to-VPC routing managed by you over IPsec VPN connections, using your equipment and the internet.
Advantages: Reuse existing Amazon VPC VPN connections. The AWS-managed endpoint includes multi-data-center redundancy and automated failover. Supports static routes and dynamic BGP peering and routing policies.
Limitations: Network latency, variability, and availability depend on internet conditions. The endpoint you manage is responsible for implementing redundancy and failover (if required).

AWS Direct Connect
Use case: VPC-to-VPC routing managed by you, using your equipment in an AWS Direct Connect location and private lines.
Advantages: Consistent network performance. Reduced bandwidth costs. 1 or 10 Gbps provisioned connections. Supports static routes and BGP peering and routing policies.
Limitations: May require additional telecom and hosting provider relationships.

AWS PrivateLink
Use case: AWS-provided network connectivity between two VPCs using interface endpoints.
Advantages: Leverages AWS networking infrastructure. No single point of failure.
Limitations: VPC endpoint services are only available in the AWS region in which they are created.

VPC Peering

A VPC peering connection is a networking connection between two VPCs that enables routing using each VPC's private IP addresses, as if they were in the same network. This is the AWS-recommended method for connecting VPCs. VPC peering connections can be created between your own VPCs or with a VPC in another AWS account. VPC peering also supports inter-region peering. Traffic using inter-region VPC Peering always stays on the global AWS backbone and never
traverses the public internet, thereby reducing threat vectors such as common exploits and DDoS attacks.

Figure: VPC-to-VPC peering

AWS uses the existing infrastructure of a VPC to create VPC peering connections. These connections are neither a gateway nor a VPN connection, and do not rely on a separate piece of physical hardware. Therefore, they do not introduce a potential single point of failure or network bandwidth bottleneck between VPCs. Additionally, VPC routing tables, security groups, and network access control lists can be leveraged to control which subnets or instances are able to utilize the VPC peering connection.

A VPC peering connection can help you facilitate the transfer of data between VPCs. You can use them to connect VPCs when you have more than one AWS account, to connect a management or shared services VPC to application- or customer-specific VPCs, or to connect seamlessly with a partner's VPC. For more examples of scenarios in which you can use a VPC peering connection, see the Amazon VPC Peering Guide [22].

Additional Resources
• Amazon VPC User Guide [23]
• Amazon VPC Peering Guide [24]

Software VPN

Amazon VPC provides network routing flexibility. This includes the ability to create secure VPN tunnels between two or more software VPN appliances to connect multiple VPCs into a larger virtual private network, so that instances in each VPC can seamlessly connect to each other using private IP addresses. This option is recommended when you want to connect VPCs across multiple AWS Regions and manage both ends of the VPN connection using your preferred VPN software provider. This option uses an internet gateway attached to each VPC to facilitate communication between the software VPN appliances.

Figure: Inter-region VPC-to-VPC routing

You can choose from an ecosystem of multiple partners and open source communities that have produced software VPN appliances that run
on Amazon EC2. These include products from well-known security companies like Check Point, Sophos, OpenVPN Technologies, and Microsoft, as well as popular open source tools like OpenVPN, Openswan, and IPsec-Tools. Along with this choice comes the responsibility for you to manage the software appliance, including configuration, patches, and upgrades.

Note that this design introduces a potential single point of failure into the network design, as the software VPN appliance runs on a single Amazon EC2 instance. For additional information, see Appendix A: High-Level HA Architecture for Software VPN Instances.

Additional Resources
• VPN Appliances from the AWS Marketplace [25]
• Tech Brief: Connecting Multiple VPCs with EC2 Instances (IPsec) [26]
• Tech Brief: Connecting Multiple VPCs with EC2 Instances (SSL) [27]

Software-to-AWS Managed VPN

Amazon VPC provides the flexibility to combine the AWS managed VPN and software VPN options to connect multiple VPCs. With this design, you can create secure VPN tunnels between a software VPN appliance and a virtual private gateway to connect multiple VPCs into a larger virtual private network, allowing instances in each VPC to seamlessly connect to each other using private IP addresses. This option is recommended when you want to connect VPCs across multiple AWS Regions and would like to take advantage of the AWS managed VPN endpoint, including the automated multi-data-center redundancy and failover built into the virtual private gateway side of the VPN connection. This option uses a virtual private gateway in one Amazon VPC and a combination of an internet gateway and software VPN appliance in another Amazon VPC, as shown in the following figure.

Figure: Intra-region VPC-to-VPC routing

Note that this design introduces a potential single point of failure into the network design, as the software VPN appliance runs on a single Amazon EC2 instance. For
additional information, see Appendix A: High-Level HA Architecture for Software VPN Instances.

Additional Resources
• Tech Brief: Connecting Multiple VPCs with Sophos Security Gateway [28]
• Configuring Windows Server 2008 R2 as a Customer Gateway for Amazon Virtual Private Cloud [29]

AWS Managed VPN

Amazon VPC provides the option of creating an IPsec VPN to connect your remote networks with your Amazon VPCs over the internet. You can take advantage of multiple VPN connections to route traffic between your Amazon VPCs, as shown in the following figure.

Figure: Routing traffic between VPCs

We recommend this approach when you want to take advantage of AWS managed VPN endpoints, including the automated multi-data-center redundancy and failover built into the AWS side of each VPN connection. Although not shown, the Amazon virtual private gateway represents two distinct VPN endpoints, physically located in separate data centers, to increase the availability of each VPN connection. The Amazon virtual private gateway also supports multiple customer gateway connections (as described in the Customer Network-to-Amazon VPC Options and AWS Managed VPN sections, and shown in the figure Redundant AWS managed VPN connections), allowing you to implement redundancy and failover on your side of the VPN connection.

This solution can also leverage BGP peering to exchange routing information between AWS and these remote endpoints. You can specify routing priorities, policies, and weights (metrics) in your BGP advertisements to influence the network path traffic will take to and from your networks and AWS. This approach is suboptimal from a routing perspective, since the traffic must traverse the internet to get to and from your network, but it gives you a lot of flexibility for controlling and managing routing on your local and remote networks, and the potential ability to reuse VPN connections.
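The path-influencing behavior described above (priorities, weights, AS-path prepending) can be illustrated with a small, generic sketch of a reduced BGP decision process. This is not AWS or router code; the attribute ordering shown (highest local preference, then shortest AS path, then lowest MED) is a simplified subset of the full BGP best-path algorithm, and the tunnel names and ASNs are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Route:
    """A simplified BGP path advertisement for one prefix."""
    next_hop: str
    local_pref: int = 100                              # higher wins
    as_path: List[int] = field(default_factory=list)   # shorter wins
    med: int = 0                                       # lower wins

def best_path(routes: List[Route]) -> Route:
    """Pick the preferred path using a reduced BGP decision process:
    highest local preference, then shortest AS path, then lowest MED."""
    return min(routes, key=lambda r: (-r.local_pref, len(r.as_path), r.med))

# Two VPN tunnels to the same destination: steer traffic onto tunnel-1
# by advertising a longer (prepended) AS path on the backup tunnel.
primary = Route(next_hop="tunnel-1", as_path=[65000])
backup = Route(next_hop="tunnel-2", as_path=[65000, 65000, 65000])  # prepended
print(best_path([primary, backup]).next_hop)  # tunnel-1
```

Raising the local preference on the backup route would override the AS-path comparison entirely, which mirrors how routing policies on your side of the connection take precedence over advertised metrics.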
Additional Resources
• Amazon VPC User Guide [30]
• Customer gateway device minimum requirements [31]
• Customer gateway devices known to work with Amazon VPC [32]
• Tech Brief: Connecting a Single Router to Multiple VPCs [33]

AWS Direct Connect

AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to your Amazon VPC, or among Amazon VPCs. This option can potentially reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than the other VPC-to-VPC connectivity options.

You can divide a physical AWS Direct Connect connection into multiple logical connections, one for each VPC. You can then use these logical connections for routing traffic between VPCs, as shown in the following figure. In addition to intra-region routing, you can connect AWS Direct Connect locations in other regions using your existing WAN providers, and leverage AWS Direct Connect to route traffic between regions over your WAN backbone network.

Figure: Intra-region VPC-to-VPC routing with AWS Direct Connect

We recommend this approach if you're already an AWS Direct Connect customer or would like to take advantage of AWS Direct Connect's reduced network costs, increased bandwidth throughput, and more consistent network experience. AWS Direct Connect can provide very efficient routing, since traffic can take advantage of 1 Gbps or 10 Gbps fiber connections physically attached to the AWS network in each region. Additionally, this service gives you the most flexibility for controlling and managing routing on your local and remote networks, as well as the potential ability to reuse AWS Direct Connect connections.

Additional Resources
• AWS Direct Connect product page [34]
• AWS Direct Connect locations [35]
• AWS Direct Connect FAQs [36]
• Get Started with AWS Direct Connect [37]
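All of the VPC-to-VPC options above assume that each connected VPC uses a non-overlapping CIDR block, as recommended earlier in this section. A quick way to validate an address plan before provisioning, using only Python's standard `ipaddress` module (the CIDR values here are illustrative examples, not a recommendation):

```python
import ipaddress
from itertools import combinations

def overlapping_pairs(cidrs):
    """Return pairs of CIDR blocks that overlap, and therefore cannot be
    cleanly routed between over peering, VPN, or Direct Connect."""
    nets = {c: ipaddress.ip_network(c) for c in cidrs}
    return [(a, b) for a, b in combinations(cidrs, 2)
            if nets[a].overlaps(nets[b])]

# Hypothetical address plan: the /17 falls inside the first /16.
vpc_cidrs = ["10.0.0.0/16", "10.1.0.0/16", "10.0.128.0/17"]
print(overlapping_pairs(vpc_cidrs))  # [('10.0.0.0/16', '10.0.128.0/17')]
```

An empty result means the plan satisfies the non-overlapping requirement; any reported pair would need to be renumbered before connecting those two VPCs.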
AWS PrivateLink

An interface VPC endpoint (AWS PrivateLink) enables you to connect to services powered by AWS PrivateLink. These services include some AWS services, services hosted by other AWS accounts (referred to as endpoint services), and supported AWS Marketplace partner services. The interface endpoints are created directly inside your VPC, using elastic network interfaces and IP addresses in your VPC's subnets. The service is now in your VPC, enabling connectivity to AWS services or AWS PrivateLink-powered services via private IP addresses. That means that VPC security groups can be used to manage access to the endpoints. Also, interface endpoints can be accessed from your premises via AWS Direct Connect. In the following diagram, the account owner of VPC B is a service provider, and the account owner of VPC A is a service consumer.

Figure: VPC-to-VPC routing with AWS PrivateLink

We recommend this approach if you want to use services offered by another VPC securely over a private connection. You can create an interface endpoint to keep all traffic within the AWS network.

Additional Resources
• Interface VPC Endpoints
• VPC Endpoint Services

Internal User-to-Amazon VPC Connectivity Options

Internal user access to Amazon VPC resources is typically accomplished either through your network-to-Amazon VPC options or through the use of software remote-access VPNs to connect internal users to VPC resources. With the former option, you can reuse your existing on-premises and remote-access solutions for managing end-user access while still providing a seamless experience connecting to AWS-hosted resources. Describing on-premises internal and remote-access solutions in any more detail than what has been described in Customer Network-to-Amazon VPC Options is beyond the scope of this document. With software remote-access VPN, you can leverage low-cost, elastic, and secure AWS services to
implement remote-access solutions, while also providing a seamless experience connecting to AWS-hosted resources. In addition, you can combine software remote-access VPNs with your network-to-Amazon VPC options to provide remote access to internal networks, if desired. This option is typically preferred by smaller companies with less extensive remote networks, or by companies that have not already built and deployed remote-access solutions for their employees. The following table outlines the advantages and limitations of these options.

Option: Network-to-Amazon VPC Connectivity Options
Use case: Virtual extension of your data center into AWS.
Advantages: Leverages existing end-user internal and remote-access policies and technologies.
Limitations: Requires existing end-user internal and remote-access implementations.

Option: Software Remote-Access VPN
Use case: Cloud-based remote-access solution to Amazon VPC and/or internal networks.
Advantages: Leverages low-cost, elastic, and secure web services provided by AWS for implementing a remote-access solution.
Limitations: Could be redundant if internal and remote-access implementations already exist.

Software Remote-Access VPN

You can choose from an ecosystem of multiple partners and open source communities that have produced remote-access solutions that run on Amazon EC2. These include products from well-known security companies like Check Point, Sophos, OpenVPN Technologies, and Microsoft. The following figure shows a simple remote-access solution leveraging an internal remote-user database.

Figure: Remote access solution

Remote-access solutions range in complexity, support multiple client authentication options (including multi-factor authentication), and can be integrated with either Amazon VPC-hosted or remotely hosted identity and access management solutions (leveraging one of the network-to-Amazon VPC options), like Microsoft Active Directory or other LDAP/multi-factor authentication solutions. The following figure shows this combination, allowing the
remote-access server to leverage internal access management solutions, if desired.

Figure: Combination remote access solution

As with the software VPN options, the customer is responsible for managing the remote-access software, including user management, configuration, patches, and upgrades. Additionally, consider that this design introduces a potential single point of failure into the network design, as the remote-access server runs on a single Amazon EC2 instance. For additional information, see Appendix A: High-Level HA Architecture for Software VPN Instances.

Additional Resources
• VPN Appliances from the AWS Marketplace [38]
• OpenVPN Access Server Quick Start Guide [39]

Conclusion

AWS provides a number of efficient, secure connectivity options to help you get the most out of AWS when integrating your remote networks with Amazon VPC. The options provided in this whitepaper highlight several of the connectivity options and patterns that customers have used to successfully integrate their remote networks or multiple Amazon VPC networks. You can use the information provided here to determine the most appropriate mechanism for connecting the infrastructure required to run your business, regardless of where it is physically located or hosted.

Appendix A: High-Level HA Architecture for Software VPN Instances

Creating a fully resilient VPC connection for software VPN instances requires the setup and configuration of multiple VPN instances, plus a monitoring instance to monitor the health of the VPN connections.

Figure: High-level HA design

We recommend configuring your VPC route tables to leverage all VPN instances simultaneously, by directing traffic from all of the subnets in one Availability Zone through its respective VPN instance in the same Availability Zone. Each VPN instance then provides VPN connectivity for
instances that share the same Availability Zone.

VPN Monitoring

To monitor a software-based VPN appliance, you can create a VPN monitor. The VPN monitor is a custom instance on which you run the VPN monitoring scripts. This instance is intended to run and monitor the state of the VPN connections and VPN instances. If a VPN instance or connection goes down, the monitor needs to stop, terminate, or restart the VPN instance, while also rerouting traffic from the affected subnets to the working VPN instance until both connections are functional again. Since customer requirements vary, AWS does not currently provide prescriptive guidance for setting up this monitoring instance. However, an example script for enabling HA between NAT instances could be used as a starting point for creating an HA solution for software VPN instances. We recommend that you think through the necessary business logic to provide notification, or to attempt to automatically repair network connectivity, in the event of a VPN connection failure.

Additionally, you can monitor the AWS managed VPN tunnels using Amazon CloudWatch metrics, which collects data points from the VPN service into readable, near-real-time metrics. Each VPN connection collects and publishes a variety of tunnel metrics to Amazon CloudWatch. These metrics allow you to monitor tunnel health and activity, and to create automated actions.

Contributors

The following individuals contributed to this document:
• Garvit Singh, Solutions Builder, AWS Solution Architecture
• Steve Morad, Senior Manager, Solution Builders, AWS Solution Architecture
• Sohaib Tahir, Solutions Architect, AWS Solution Architecture

Document Revisions

January 2018 – Updated information throughout. Focus on the following designs/features: transit VPC, Direct Connect gateway, and PrivateLink.
July 2014 – First publication.

Notes

1. http://aws.amazon.com/vpc/faqs/
2.
http://docs.amazonwebservices.com/AmazonVPC/latest/UserGuide/VPC_VPN.html
3. https://docs.aws.amazon.com/vpc/latest/adminguide/Introduction.html#CGRequirements
4. https://docs.aws.amazon.com/vpc/latest/adminguide/Introduction.html#DevicesTested
5. http://aws.amazon.com/directconnect/
6. http://aws.amazon.com/directconnect/#details
7. http://aws.amazon.com/directconnect/faqs/
8. http://docs.amazonwebservices.com/DirectConnect/latest/GettingStartedGuide/Welcome.html
9. http://aws.amazon.com/directconnect/
10. http://aws.amazon.com/directconnect/faqs/
11. http://docs.amazonwebservices.com/AmazonVPC/latest/UserGuide/VPC_VPN.html
12. http://docs.amazonwebservices.com/AmazonVPC/latest/UserGuide/VPN_CloudHub.html
13. http://docs.amazonwebservices.com/AmazonVPC/latest/UserGuide/VPC_VPN.html
14. http://aws.amazon.com/vpc/faqs/#C8
15. http://aws.amazon.com/vpc/faqs/#C9
16. http://aws.amazon.com/directconnect/
17. https://aws.amazon.com/marketplace/search/results/ref%3Dbrs_navgno_search_box?searchTerms=vpn
18. http://aws.amazon.com/articles/8800869755706543
19. http://aws.amazon.com/articles/5472675506466066 – Although these guides specifically address connecting multiple Amazon VPCs, they are easily adaptable to support this network configuration by substituting one of the VPCs with an on-premises VPN device connecting to an IPsec or SSL software VPN appliance running in an Amazon VPC.
20. https://aws.amazon.com/articles/0639686206802544
21. http://aws.amazon.com/vpc/faqs/
22. http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/
23. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html
24. http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/
25. https://aws.amazon.com/marketplace/search/results/ref%3Dbrs_navgno_search_box?searchTerms=vpn
26. http://aws.amazon.com/articles/5472675506466066
27. http://aws.amazon.com/articles/0639686206802544
28. http://aws.amazon.com/articles/1909971399457482
29.
http://docs.amazonwebservices.com/AmazonVPC/latest/UserGuide/CustomerGateway-Windows.html
30. http://docs.amazonwebservices.com/AmazonVPC/latest/UserGuide/VPC_VPN.html
31. https://docs.aws.amazon.com/vpc/latest/adminguide/Introduction.html#CGRequirements
32. https://docs.aws.amazon.com/vpc/latest/adminguide/Introduction.html#DevicesTested
33. http://aws.amazon.com/vpc/faqs/#C9
34. http://aws.amazon.com/directconnect/
35. http://aws.amazon.com/directconnect/#details
36. http://aws.amazon.com/directconnect/faqs/
37. http://docs.amazonwebservices.com/DirectConnect/latest/GettingStartedGuide/Welcome.html
38. https://aws.amazon.com/marketplace/search/results/ref%3Dbrs_navgno_search_box?searchTerms=vpn
39. http://docs.openvpn.net/how-to-tutorialsguides/virtual-platforms/amazon-ec2-appliance-ami-quick-start-guide/

General, consultant, Best Practices

Amdocs_Optima_Digital_Customer_Management_and_Commerce_Platform_in_the_AWS_Cloud

This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/amdocs-digital-brand-experience-platform/amdocs-digital-brand-experience-platform.html

Amdocs Digital Brand Experience Platform in AWS Cloud

First Published February 2018
Updated November 18, 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only; (b) represents current AWS product offerings and practices, which are subject to change without notice; and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or
conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction 1
BSS applications are mission-critical workloads 2
Amdocs BSS portfolio 3
Amdocs Digital Brand Experience Suite overview 3
Functional capabilities 4
Functional architecture 8
Data management 11
Digital Brand Experience Suite deployment architecture 13
Technical architecture 13
Digital Brand Experience Suite SaaS model 19
AWS Well-Architected Framework 21
Conclusion 24
Contributors 24
Further reading 25
Document revisions 25

Abstract

Amdocs Digital Brand Experience Suite is a digital customer management and commerce platform designed to rapidly and securely monetize any product or service. Serving innovative communications operators, utilities, and other subscription-based service providers, Digital Brand Experience Suite's open platform has been available on-premises, but is now also available on the AWS Cloud. This whitepaper provides an architectural overview of how the Digital Brand Experience Suite business support systems (BSS) solution operates on the AWS Cloud. The document is written for executives, architects, and development teams that want to deploy a business support solution for their consumer or enterprise business on the AWS Cloud.
Amazon Web Services – Amdocs Digital Brand Experience Platform in AWS Cloud Page 1

Introduction

Amdocs provides the Amdocs Digital Brand Experience Suite: a digital customer management, commerce, and monetization software as a service (SaaS) solution designed specifically for the needs of digital brands and other small service providers who need to provide a digital experience to their customers while being agile and innovative, with rapid time to market. The Amdocs solution helps these communications service providers (CSPs) to focus on their business by simplifying their business support through prebuilt packages of business and technical processes spanning the full customer lifecycle: care, commerce, ordering, and monetization.

Provided as a service, the solution is ready to support simple models with minimal time to market, including integrations to key external partners and an extensive set of application programming interfaces (APIs). More complex business models can be configured in the system, and integrations within bespoke ecosystems are supported through the open API architecture.

The enterprise market in particular involves unique challenges that require an industry-proven solution. Service providers focusing on the enterprise and small and medium-sized enterprise (SME) business segments can deliver a significant increase in revenue and market share. However, when trying to execute an enterprise business strategy, many operators find they lack the required capability to support the continuous demand for their corporate services. They find that their BSS platforms lack business flexibility and operational efficiency, and are not cost-effective. Key challenges include underperforming systems, the high cost of managing legacy operations, and maintaining regulatory compliance. Many companies need to adopt a pan-regional architecture to onboard additional
countries, Regions, customer verticals, and products. This situation demands a significant change in both revenue and customer management systems, as well as in the IT environment.

This whitepaper provides an overview of the Amdocs Digital Brand Experience platform and a reference architecture for deploying Amdocs on AWS. This whitepaper also discusses the benefits of running the platform on AWS and various use cases. By running Amdocs Digital Brand Experience on the AWS Cloud, and especially delivered as SaaS, the Amdocs platform can deliver significant required improvements to the operations and capabilities of customers in every industry, while enabling future growth and expansion to new domains. Customers can also benefit from the compliance and security credentials of the AWS Cloud, instead of incurring an ongoing cost of audits related to storing customer data.

BSS applications are mission-critical workloads

BSS are the backbone of a service provider's customer-facing strategy. BSS encompasses the spectrum from marketing, shopping, ordering, charging, taxation, invoicing, payments, collection, and dunning, to, ultimately, financial reporting. There are four primary domains: product management, order management, revenue management, and customer management.

Product management

Product management supports the sellable entities, or catalog, of a provider. From conception to sale to revenue recognition, this is the toolset for managing services, products, pricing, discounts, and many other attributes of the product lifecycle.

Order management

Order management is an extension of the sales process and encompasses four areas: order decomposition, order orchestration, order fallout, and order status management. Ordering may be synchronous,
where service is enabled in real time, or the actual service delivery may take days, with complex installation processes. It is incumbent on the BSS to accurately and efficiently process orders, avoiding fallouts, while providing status both to the service provider and the customer.

Revenue management

Revenue management focuses on the financial aspects of the business, both from the customer and the service provider perspective. It includes pricing, charging, and discounting, which feed into the invoicing process and taxing. The invoice in turn feeds the accounts receivable processes (payment, collection, and dunning) and becomes the foundation for revenue recognition reporting (general ledger). Billing for consumer, enterprise, and wholesale services, as well as prepaid and postpaid models, is supported in the system. Revenue management also includes fraud management and revenue assurance.

Customer management

The relationship of the service provider to their customers is of critical importance. From the initial contact, through self-care and mobile applications, shopping online, and customer care, it is important to provide multi-channel exposure of a single customer view. Complex customer models are supported through robust mechanisms of customer groups. Enterprises are modeled through a combination of accounts,
providers in more than 85 countries Amdocs’ product lines encompass digital customer experience monetization network and service automation and mor e supporting more than 17 billion digital customer journeys every day Amdocs C ES21 is a 5G native integrated BSS operations support system (OSS ) suite It is a cloud native open and modular suite that supports many of the world’s top CSPs on their dig ital and 5G journeys The Amdocs Digital Brand Experience Suite is a SaaS solution that’s specifically built for the needs of digital brands and other small service providers It is a pre integrated suite with an extensive set of built in processes and con figuration templates to simplify commerce care ordering and monetization and empowering business users through “shift left” to a truly digital experience for the BSS itself As SaaS it provides unparalleled time to market and scalability while benefi tting from Amdocs ’ robust operations and a “pay as you grow” business model Amdocs Digital Brand Experience Suite overview Amdocs Digital Brand Experience Suite provides flexibility while implementing a high level of complexity It enables customers to capitalize on digital era opportunities by growing customer’s business with an open system that seamlessly interacts with ancillary app lication s It offers the freedom to address a div erse set of product and service markets as well as a range of end customer types Encompassing a set of established and progressive BSS products Amdocs Digital Brand Experience Suite represents proven functionality under a preconfigured industry standar d integration layer This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAmazon Web Services Amdocs Digital Brand Experience Platform in AWS Cloud 4 Configurability smart interoperability and consistent experience • Swift onboarding of the service 
provider onto the platform – With the SaaS solution, onboarding can be done immediately. Complex business models and dedicated instances of Digital Brand Experience Suite for larger service providers take slightly longer.
• Time-to-market for new products, services, and bundles occurs in minutes instead of months.
• Simple, table-driven configuration doesn't require coding – The data model is highly flexible, without requiring software changes.
• Support for multiple lines of business – Within a single instance or tenant, Amdocs Digital Brand Experience Suite supports any number of lines of business (mobile, fixed line, broadband, cable, finance, and utilities) and uses a flexible catalog to offer converged services to a sophisticated market.

Flexible deployment

• Multi-tenancy capabilities allow for a "define once, utilize many" strategy, as different tenants are hosted on a single hardware and software platform that is operated in one location. CSPs can deploy Amdocs Digital Brand Experience Suite on AWS as a service or as a dedicated instance.

Support options

• Amdocs offers support for subscription, usage-based, and "billing as a service" models, over multiple networks and protocols of any kind, and across borders. In addition, Amdocs supports any service, product, and payment method, as well as multiple currencies and languages.

Open and secure integration model

• More than 500 open-standard, partner-friendly, pre-integrated microservices use RESTful service methods.
• Security and compliance are provided by both the AWS Cloud and the Digital Brand Experience Suite architecture.

Functional capabilities

The Digital Brand Experience Suite comes with the following capabilities:

Digital channels

• Responsive, multi-modal web presentation layer – Multimodal user interfaces provide users with different ways of interacting with applications. This has advantages in providing interaction solutions with additional robustness across environments.
• Bespoke native mobile application – The goal of bespoke software or mobile apps is to create operational efficiency, reduce cost, improve retention, and drive up revenue.
• Self-care – A web interface enables customers to use the self-service capability.
• Customer service representative (CSR) interfaces – The customer service interface includes tools and information for supporting the system admin users, customers, and transactions.

Business process foundation

• Identity management – Authentication, roles, user management, and single sign-on.
• Security, usage throttling, service level agreements (SLAs) – Authorization, metrics, and SLA enforcement around exposed northbound APIs.
• Microservice-based REST APIs – An API framework to deliver business services through a standardized REST API model.
• Configurable service logic – Orchestration of underlying APIs to deliver business-oriented functions, enhanced flexibility, and extensibility.
• Data mapping – Management of the Digital Brand Experience Suite data model and virtualization of external third-party applications.
• Commerce catalog – Rules matching products and services to customers. Rules can be based on account, segment, hierarchy, geography, equipment, serviceability, or any number of other factors and defined business processes, serving both B2B and B2C customers. With optional intelligence capabilities, the rules can be extended to support marketing campaigns such as Next Best Offer/Next Best Action (NBO/NBA).
• Shopping cart – Product browsing and search, cart item management (including product options and features), and pricing.
• Quotation service – A view into what a bill would look like for a given order, including prices, discounts, and taxation.
• Messaging – Asynchronous message queuing technology with persistence, used for internal event notification and synchronization, and for routing to the relevant professional (system administrator, CSR, and so on).

Customer management layer capabilities
• Customer management – Definition of customer profiles, customer interactions, and customer hierarchies, supporting simple to extremely complex B2B hierarchies and B2C scenarios.
• Case management – A customer interaction mechanism that can initiate actions in the system and queue up issues for service provider personnel. Configurable rules determine actions and routing for a particular case.
• Inventory – Manages serialized logical inventory for association to billing products. Inventory can be categorized by type or line, with corequisite rules defined in the catalog.
• Resource management – Manages the dynamic lifecycle policy for all resources.

Revenue management
• Billing rules – Configurable management of rules related to the billing operation. This is the foundation for how charges are derived from a combination of price and customer service attributes.
• Event and order fulfillment – A workflow-driven process to provision and activate billing orders in the system. This involves instantiation of the relevant products to their respective customer databases.
• Usage and file processing – Integrity checks on the input event usage files before passing them to rating.
• Rating engine – An offline and online rating engine. The rating engine can use multiple factors related to the subscriber, account, and service to calculate the price for the usage.
  o Offline rating engine – File-based offline rating, typically for postpaid subscribers.
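As a rough illustration of the kind of calculation a rating engine performs, the sketch below rates a usage total against tiered prices. The tier boundaries, prices, and units are invented; a production rating engine is configuration-driven and considers many more subscriber, account, and service factors (time of day, roaming, bundles, and so on).

```python
# Hypothetical tiered usage rating, for illustration only.
TIERS = [               # (upper bound in MB, price per MB)
    (1000, 0.010),      # first 1 GB
    (5000, 0.005),      # next 4 GB
    (float("inf"), 0.002),
]

def rate_usage(total_mb: float) -> float:
    """Return the charge for a usage total, walking the tiers in order."""
    charge, lower = 0.0, 0.0
    for upper, price in TIERS:
        if total_mb <= lower:
            break
        # Charge only the portion of usage that falls inside this tier.
        charge += (min(total_mb, upper) - lower) * price
        lower = upper
    return round(charge, 2)

print(rate_usage(1500))  # 1000*0.010 + 500*0.005 = 12.5
```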
  o Online rating engine – Real-time rating and promotional calculations based on network events.
• Rated usage management – Persistence and indexing of billed, unbilled, and non-billable usage and usage details.
• Bill preparer – The billing processor (BIP) identifies accounts within a particular bill cycle, gathers data for bill processing, calculates billable charges, and generates processed information for bill formatting.
• Bill-time discount – Calculates bill-time discounts based on total usage for the period, total charges, and applicable discount tiers.
• Bill-time taxation – Calculates appropriate taxes given the geography, account information, and installed tax packages.
• Invoice generator (IGEN) – Combines the processed bill information from the BIP with invoice formats from the invoice designer to produce formatted bills. The IGEN supports conditional logic in the templates and multi-language presentation formats.
• Accounts receivable (AR) balance management – Applies bill charges to an account's AR balances. Thresholds defined against the balance may trigger notifications and/or lifecycle state changes.
• Payments – Requests for payment, payment history, and payment profiles.
• Adjustments and refunds – Allow for charges to be disputed, adjusted, or fully refunded. A manager approval mechanism with workflow ensures that all adjustments have been reviewed and authorized.
• Journal (general ledger) feeds – A reporting function that maps all financially significant activities in the system to operator-defined general ledger codes. Journaling generates feed files on a regular basis, with the charges organized based on the specified codes and categories. These files are then imported into the operator's accounting systems.
• Collections – A workflow-driven process through which past-due bills launch various external notification and collection activities, ultimately leading to debt resolution or write-off. Interfaces are provided to restore account state upon successful collection action.
• Recharge – Balance allotments and related promotions launched by recharge actions.
• Balance management – Full lifecycle of cyclical authorization balances, updated in real time.
• Online promotions – Real-time bonus awards and discounts applied immediately to balances.
• Notifications – Threshold-based external notifications (for example, invoked in response to a low balance).

Order management
• Order management – Processing of ordered services and their elements prior to order fulfillment. Typically initiated at the end of the shopping experience, this can include editing or cancelling pending orders, or forcing pending orders to immediately activate workflow-driven processes configured to meet business needs.
• Order fulfillment – A workflow-driven process to provision and activate orders in the system. Configurable milestones define the workflow model for each service and may involve many steps en route to service activation on third-party systems.
• Provisioning – Runs the provisioning processes of all ordered services on various networks, including Home Location Registers, unified communication platforms, electrical grids, media servers, Home Subscriber Servers, and others.
• Network protocol integration – Supports authentication, authorization, and accounting functionality for all types of online and offline charging, as well as major network protocols. Formats are provided for common event record types. Interfaces to the online charging system (OCS) support all the protocols involved in voice and data charging, especially 5G.

Functional architecture
The Digital Brand Experience Suite architecture includes three layers: user experience, integration, and application. The following diagram illustrates the high-level architecture.

Digital Brand Experience Suite functional architecture

This whitepaper focuses primarily on the integration and application layers, because these features are deployed in AWS. While the UI applications are downloaded from AWS, the actual UI runtime occurs client side. The APIs of the integration layer support the Digital Brand Experience Suite user interfaces (UIs) as well as other third-party client integrations. These APIs expose the capabilities of the application layer and orchestrate the different applications to form higher-level business services. Integration layer capabilities are marked in the green box, and application layer capabilities are marked in the blue box. Additional detailed capabilities can be reviewed in the following diagram.

Digital Brand Experience Suite functional capabilities

Note that the OCS domain in the preceding diagram depicts a reference implementation; integration with an OCS (as well as the specific OCS used) is an optional aspect of the Digital Brand Experience Suite solution.

Integration layer capabilities
• Throttling and SLAs – Metrics and SLA reporting around the exposed northbound APIs.
• Identity management – Centralized authentication and authorization.
• Business logic and integration – Service-oriented APIs and their supporting capabilities.
• Commerce catalog – Definition and management of products related to the shopping experience. Includes eligibility aspects, references to marketing collateral, bundling constructions, and so forth.
• Commerce engine – Technical APIs to manage shopping carts and catalog browsing.
• Extensible business logic – Business rules that extend the core logic of the APIs. This also includes business process management to model flow-based scenarios such as case handling and post-checkout approval.
• Dynamic data storage – Persistence for objects that are required for Digital Brand Experience Suite capabilities but are not part of the existing and native application models. This includes things like consents, contacts, metadata for order-supporting documentation, and assigned and applied product instances.

Application layer capabilities
• Billing catalog – Definition and management of products related to the billing operation. Products and their elements include rate plans, discount plans, recurring and non-recurring charges, and associated configuration. The product lifecycle allows for advance sales windows, sunsetting, and so forth. For other billing application capabilities, refer to the Revenue management section of this document.

Data management
The following diagram shows the main entities managed by Digital Brand Experience Suite, with the functional domains that are primarily responsible for each.

Digital Brand Experience Suite functional domains
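The per-method request throttling listed under the integration layer capabilities above (rules based on requests per second for each exposed API method) can be pictured as a token bucket. The sketch below is a generic stand-in, not the platform's actual APIMAN-based implementation; the rates, burst sizes, and method names are invented.

```python
# Generic token-bucket throttle, illustrating per-method API rate limiting.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate, self.capacity = rate_per_sec, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        # Refill tokens for the time elapsed since the last call, up to capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per exposed method, e.g. a 5 req/s limit with a burst of 2.
limits = {"GET /accounts": TokenBucket(rate_per_sec=5, burst=2)}
bucket = limits["GET /accounts"]
print([bucket.allow() for _ in range(3)])  # burst of 2 allowed, third rejected
```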
Benefits of deploying Digital Brand Experience Suite on AWS
With the increase of the subscriber base and the high demands of 5G, cost reduction becomes an essential factor in building a successful business model. CSPs that run Digital Brand Experience Suite on AWS pay only for the resources they use. With the pay-as-you-go model, customers can also spin up, experiment with, and iterate BSS environments (testing, development, and so forth) and pay based on consumption. An on-premises environment usually provides a limited set of environments to work with; provisioning additional environments can take a long time or might not be possible. With AWS, CSPs can create virtually as many new environments in minutes as required.

In addition, CSPs can create a logical separation between projects, environments, and loosely coupled applications, thereby enabling each of their teams to work independently with the resources they need. Teams can subsequently converge in a common integration environment when they are ready. At the conclusion of a project, customers can shut down the environment and cease payment.

Customers often over-size on-premises environments for the initial phases of a project, but subsequently cannot cope with growth in later phases. With AWS, customers can scale their compute resources up or down at any time. Customers pay only for the individual services they need, for as long as they use them. In addition, customers can change instance sizes in minutes through the AWS Management Console, the AWS API, or the AWS Command Line Interface (AWS CLI).

Because of the exponential growth of data worldwide, and specifically in the telecom world, designing and deploying backup solutions has become more complicated. With AWS, customers have multiple options to set up a disaster recovery strategy, depending on the recovery point objective (RPO) and recovery time objective (RTO), using the expansive AWS Global Cloud Infrastructure.

The Amdocs Digital Brand Experience Suite platform offers rich product and service management capabilities, which can be integrated with AWS Cloud analytics services for use cases such as subscriber, customer, and usage analytics. Digital Brand Experience Suite capabilities can also be empowered by machine learning and artificial intelligence capabilities through AWS services.

Digital Brand Experience Suite deployment architecture
Although there are multiple options for deploying the Digital Brand Experience Suite into an AWS environment, the diagrams in this section primarily focus on deploying into a multi-tenant SaaS architecture. Where possible, common aspects of the architecture for non-SaaS deployments are highlighted.

Technical architecture
Common deployment architecture
The following diagram depicts the main resources deployed for the Digital Brand Experience Suite. The application uses the same AWS services regardless of the nature of the cloud deployment (for example, SaaS vs. non-SaaS).

Digital Brand Experience Suite common cloud resources detail

The Digital Brand Experience Suite uses an Amazon Virtual Private Cloud (VPC) that is divided into three subnets, which organize the access, compute, and storage resources needed for the Digital Brand Experience Suite. All of these subnets are private; access is handled by a demilitarized zone (DMZ), such as the inbound services VPC of the SaaS offering.

Customers subnet
The customers subnet provides access and load balancing capabilities into the VPC. This is the entry point from the DMZ (for example, the inbound services VPC through AWS PrivateLink for the customers interface). As such, access here is focused on the services that end users need for their Digital Brand Experience.

BSS subnet
The BSS subnet holds the primary computing resources. These comprise different Auto Scaling groups managed by Amazon Elastic Kubernetes Service (Amazon EKS):
• Business Access Layer (BAL) nodes – Used for API access, path-based routing, metrics, and throttling to support the Digital Brand Experience Suite APIs. These capabilities are provided by the APIMAN package. These nodes support inherent SLAs and enable customers to set throttling rules based on the number of requests per second for each method in the APIs.
• Enterprise Service Bus (ESB) nodes – Implement the Digital Brand Experience Suite SaaS APIs, which are organized into microservices based on functional areas (for example, account management, shopping cart, and invoicing). These APIs and their integration logic translate between the high-level, service-oriented requests received by the Digital Brand Experience Suite APIs and the low-level technical APIs needed to fulfill the requests across the various Digital Brand Experience Suite resources.
• Bill Processing (BP) batch nodes – Run the billing applications, which perform bill calculation, invoice generation, collections, and journal processing. These applications are task-based, meaning that they are initiated on a schedule and on a particular set of input data. For example, bill processing for cycle 15 will run on the determined day (for example, the fifteenth day of the month) for the subset of accounts that have selected the fifteenth day as their bill cycle date. By using native auto scaling, BP batch nodes dynamically scale Amazon Elastic Compute Cloud (Amazon EC2) instances based on configurable parameters (such as the number of customers, services, and products), which is one of the major benefits of running the application on AWS. With AWS Auto Scaling, BP batch applications always have the right resources at the right time.
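The task-based BP batch selection described above (running a bill cycle on its scheduled day for the matching subset of accounts) can be sketched as follows. The account records and field names are invented for this illustration; the real billing processor works against the customer databases.

```python
# Illustrative sketch: on a given run date, select the accounts whose bill
# cycle day matches that day. Data shapes here are hypothetical.
from datetime import date

accounts = [
    {"id": "A-1", "bill_cycle_day": 15},
    {"id": "A-2", "bill_cycle_day": 1},
    {"id": "A-3", "bill_cycle_day": 15},
]

def accounts_in_cycle(run_date: date, accounts: list) -> list:
    """Return the account IDs scheduled for billing on this run date."""
    return [a["id"] for a in accounts if a["bill_cycle_day"] == run_date.day]

print(accounts_in_cycle(date(2021, 11, 15), accounts))  # ['A-1', 'A-3']
```

Because each such run operates on an independent, predictable slice of accounts, the workload parallelizes naturally, which is what makes the auto scaling of BP batch nodes by configurable parameters effective.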
• BSS nodes – Host the low-level service APIs, which expose the billing capabilities to the integration layer; for example, fetching the invoice details from processed bills or inquiring about a particular collections scenario.
• Business Integration Layer (BIL) nodes – Contain applications to support the middleware: the shopping cart application; Red Hat Decision Manager (RHDM), which is used to extend the BIL API business logic; and Red Hat Process Automation Manager (RHPAM), which is used for case handling and post-cart processing (for example, credit review).

Usage of each of these node groups depends highly on the traffic profiles of the specific operator; as a result, deploying these node groups into separate Auto Scaling groups allows for greater platform efficiency by scaling each node group accordingly.

AWS Fargate is used for BP batch, which comprises scheduled and task-based applications like the billing processor and invoice generator. Rather than port these applications, Fargate is used to containerize them while maintaining their established technology stack. An Amazon Elastic File System (Amazon EFS) instance is deployed within this subnet and is used by the various processes of the billing application (for example, usage files, which are shared between the different usage file rating processes).

As part of the overall migration of the Digital Brand Experience Suite solution to be more AWS native, several processes have already moved to serverless computing resources. For example, the payment gateway and web UI backend are implemented through AWS Lambda functions for event-based handling. Serverless computing on AWS, such as AWS Lambda, includes automatic scaling, built-in high availability, and a pay-for-value billing model. AWS Lambda is an event-driven compute service that enables customers to run code in response to events from over 200 natively integrated AWS and SaaS sources, all without managing any servers.

Internal Amdocs operations and support users access the BSS subnet from the management VPC through PrivateLink for the Amdocs interfaces. PrivateLink provides private connectivity between VPCs, AWS services, and a customer's on-premises networks without exposing traffic to the public internet.

Database subnet
The database subnet holds the resources for the Digital Brand Experience Suite persistence layer (such as multiple database technologies) that are used across the Digital Brand Experience Suite SaaS solution. The BIL database and BSS database use Amazon Aurora databases for commerce (shopping cart) and billing, respectively. Database resources are only accessible from the BSS subnet. Not only does this secure the actual persisted data, but it decouples the storage technology from the external services and hides storage details, like database schemas, from the end users. This allows the solution to evolve over time, introducing and updating storage technology while minimizing the impact on the rest of the solution and its users.

External services integration
Interface VPC endpoints are used to securely access various AWS services, such as Amazon CloudWatch, Amazon Simple Storage Service (Amazon S3), Amazon Elastic Container Registry (Amazon ECR), and AWS Systems Manager. VPC endpoints allow communication between instances and databases in customer VPCs and management services such as CloudWatch and Systems Manager, without imposing availability risks or bandwidth constraints on network traffic.

High availability
The following diagram depicts how Digital Brand Experience Suite can be deployed in a multiple Availability Zone configuration to promote high availability.
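The Lambda-based, event-driven payment gateway mentioned above can be pictured as a simple handler. The sketch below is a hypothetical stand-in: the event shape, field names, and logic are invented, and, consistent with the security section later in this document, only a payment token (never raw card data) reaches the function.

```python
# Hypothetical AWS Lambda-style handler illustrating the event-driven payment
# gateway pattern. The event shape and business logic are invented.
import json

def handler(event, context):
    """Process a payment request event and return an API-style response."""
    body = json.loads(event["body"])
    # Only a tokenized card reference is handled, never raw card data.
    token, amount = body["payment_token"], body["amount"]
    if amount <= 0:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid amount"})}
    # A real handler would call the payment provider here using the token.
    return {"statusCode": 200,
            "body": json.dumps({"status": "accepted", "token": token})}

# Local invocation with a fabricated event (no AWS needed to exercise the logic):
event = {"body": json.dumps({"payment_token": "tok_123", "amount": 42.0})}
print(handler(event, None)["statusCode"])  # 200
```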
Digital Brand Experience Suite high availability in AWS

The Digital Brand Experience Suite architecture on AWS is highly available. The solution is built across a minimum of two Availability Zones. All Availability Zones in an AWS Region are interconnected with high-bandwidth, low-latency networking. Availability Zones are physically separated by a meaningful distance, although all are within 100 km (60 miles) of each other. If one of the Availability Zones becomes unavailable, the application stays available because the architecture is highly available in all layers: databases use a multi-AZ setup, and Kubernetes spreads the pods in a deployment across nodes and multiple Availability Zones, so the impact of an Availability Zone failure is mitigated.

The Digital Brand Experience Suite architecture on AWS supports Cluster Autoscaling as well as Horizontal Pod Autoscaling, and it adjusts the size of the Amazon EKS cluster by adding or removing worker nodes in multiple Availability Zones. In addition, application components are stateless and based on containers, with Elastic Load Balancing that has native awareness of failure boundaries like Availability Zones, keeping applications available across a Region without requiring Global Server Load Balancing.

Scalability
The solution is fully scalable using Auto Scaling groups of various container types. This allows for more fine-grained scalability as the various compute needs change over time. Auto Scaling groups can be configured with different scaling models, scaling up or down based on events, system measurements, or a preset schedule.

The Digital Brand Experience Suite architecture uses Amazon Aurora, a MySQL- and PostgreSQL-compatible relational database built for the cloud. Amazon Aurora scales in many ways, including storage, instance, and read scaling. The application also uses Couchbase on Amazon EC2, set up in a way that makes it scalable.

Security
Access management
Access follows role-based access control through AWS Identity and Access Management (IAM). The solution has defined roles based on who needs access to what. As a best practice, customers should assign permissions at the IAM group or role level to access applications in the specific VPCs, and never grant privileges beyond the minimum required for a user or group to fulfill their job requirements. The list of roles and groups changes with each project.

Secure data at rest
Data at rest is encrypted at the storage volume level (using AWS built-in capabilities) as well as at the database level (on configurable PII fields). The Digital Brand Experience Suite architecture uses AWS Key Management Service (AWS KMS) to create and control the encryption keys, making it easy for customers to create and manage cryptographic keys and control their use across a wide range of AWS services and applications. Encryption is applied by solution components and AWS services; decryption is applied by each data consumer.

Secure data in transit
Web UI access is encrypted with SSL encryption (HTTPS), as is access to the solution's API layer. Additionally, the encryption keys are stored in AWS KMS, and the system credentials are securely stored in AWS Secrets Manager. Automated clearing house and credit card data are tokenized by the purchaser's payment gateway system, and the solution stores the credit card token only.
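The tokenization pattern just described can be sketched in miniature: raw card data stays with the payment gateway, and the billing solution persists only an opaque token. The in-memory "vault" below is invented purely for illustration; real tokenization is performed by the purchaser's payment gateway system, not by the solution.

```python
# Toy illustration of payment-data tokenization. All names are hypothetical.
import secrets

class GatewayVault:
    """Stand-in for the payment gateway's tokenization service."""
    def __init__(self):
        self._vault = {}

    def tokenize(self, card_number: str) -> str:
        # The raw card number never leaves the gateway's vault.
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = card_number
        return token

gateway = GatewayVault()
token = gateway.tokenize("4111111111111111")

# The billing solution persists only the token, never the card number.
customer_record = {"account": "A-1", "payment_token": token}
print(customer_record["payment_token"].startswith("tok_"))  # True
```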
Digital Brand Experience Suite SaaS model
The following diagram provides a high-level network layout view identifying the three major VPCs configured.

Digital Brand Experience Suite SaaS overall view

This diagram also addresses the two primary means of accessing the solution: end customer and user access through the inbound services VPC, and Amdocs operations access through the management VPC. Both methods can then access the common resources in the Digital Brand Experience Suite SaaS VPC.

End customer and user access is secured by AWS Shield Advanced, which provides managed distributed denial of service (DDoS) protection, and AWS WAF (a web application firewall), which protects the application from common web exploits. In addition, Amazon CloudFront is deployed in front of the Amazon S3 buckets used to host the web UI application client for download. This improves initial application download performance by placing the application closer to the user.

This layout is tailored to SaaS offerings because it provides two main access channels: individual tenant and global operations. Non-SaaS cloud offerings employ a different network architecture.

Inbound services VPC (SaaS offering)
The following diagram provides more detail on the inbound services VPC.

Digital Brand Experience Suite SaaS inbound services VPC detail

The public DMZ subnet is the approachable point for all users; it primarily provides authentication services so that further secured services can be accessed. To protect the solution from malicious attacks such as DDoS, AWS WAF and AWS Shield are deployed.

Management VPC (SaaS offering)
The following diagram provides more detail on the management VPC.

Digital Brand Experience Suite SaaS management VPC detail

The resources within the private management subnet provide access to Digital Brand Experience Suite SaaS for the operations engineers. Microsoft Windows instances on Amazon EC2 run as bastion instances in the private management VPC. Operations engineers can use the Remote Desktop Protocol to administer and access the compute resources inside the VPC remotely. PrivateLink is also used to connect services across accounts and VPCs without exposing the traffic to the public internet.

AWS Well-Architected Framework
The AWS Well-Architected Framework helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. The AWS Well-Architected Framework is based on five pillars:
• Operational excellence
• Security
• Reliability
• Performance efficiency
• Cost optimization

AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures and implement designs that can scale over time. The AWS Well-Architected Framework helped Amdocs adopt best practices and achieve an optimized architecture for the Digital Brand Experience Suite on AWS. The following is an overview of the five pillars of the AWS Well-Architected Framework, with reference to the Digital Brand Experience Suite architecture on AWS.

Operational excellence
This pillar focuses on the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. The Digital Brand Experience Suite architecture on AWS supports developing and running workloads effectively. The application gains insight into operational aspects by using CloudWatch to collect metrics, send alarms, monitor Amazon Aurora metrics, and use CloudWatch Container Insights from an Amazon EKS cluster. The application uses AWS Lambda to respond to operational events, automate changes, and continuously manage and improve processes to deliver business value. Customers can find prescriptive guidance on implementation in the Operational Excellence Pillar whitepaper.

Security
This pillar focuses on the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. The Digital Brand Experience Suite architecture on AWS takes advantage of inherent prevention features such as:
• Amazon VPCs, to logically isolate environments per customer requirements
• Subnets, to logically isolate multiple layers in a VPC and control the communication between them
• Network access control lists and security groups, to control incoming and outgoing traffic

Digital Brand Experience Suite uses AWS KMS for security of data at rest, SSL encryption for data in transit, Secrets Manager for system credential management, and role-based access control through IAM for access management. Customers can find prescriptive guidance on implementation in the Security Pillar whitepaper.

Reliability
This pillar focuses on the ability of a system to recover from infrastructure or service failures, to dynamically acquire computing resources to meet demand, and to mitigate disruptions such as misconfigurations or transient network issues. Digital Brand Experience Suite quickly recovers from database failure by using Amazon Aurora, which spans multiple Availability Zones in an AWS Region; each Availability Zone contains a copy of the cluster volume data. This means that the database cluster can tolerate the failure of an Availability Zone without any loss of data. Digital Brand Experience Suite on AWS supports Cluster Autoscaling as well as Horizontal Pod Autoscaling, handling the scalability and reliability of the application. Changes are made through automation using AWS CloudFormation. The architecture of Digital Brand Experience Suite on AWS encompasses the ability to perform its intended function correctly and consistently when it's expected to, including the ability to operate and test the workload through its total lifecycle.
Customers can find prescriptive guidance on implementation in the Reliability Pillar whitepaper.

Performance efficiency
This pillar deals with the ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve. The architecture of Digital Brand Experience Suite on AWS ensures efficient usage of compute, storage, and database resources to meet system requirements and to maintain them as demand changes and technologies evolve. Customers can find prescriptive guidance on implementation in the Performance Efficiency Pillar whitepaper.

Cost optimization
This pillar deals with the ability to avoid or eliminate unneeded cost or suboptimal resources. Digital Brand Experience Suite on AWS uses Amazon Aurora PostgreSQL, which considerably reduces database costs. Amazon Aurora PostgreSQL is three times faster than standard PostgreSQL databases and provides the security, availability, and reliability of commercial databases at one-tenth the cost. Additionally, Digital Brand Experience Suite on AWS supports Cluster Autoscaling as well as Horizontal Pod Autoscaling, contributing to considerable cost reduction. The architecture of Digital Brand Experience Suite on AWS has the ability to run systems to deliver business value at the lowest price point. Customers can find prescriptive guidance on implementation in the Cost Optimization Pillar whitepaper.

Conclusion
Amdocs Digital Brand Experience Suite is a pre-integrated, complete digital customer management and commerce platform designed to rapidly and securely monetize any product or service. The richness of Amdocs Digital Brand Experience Suite's capabilities and flexibility, built on a
strong BSS engine enabled by modern digital open-source components such as JBoss Fuse, REST APIs, React, Node.js, and other advanced technologies, enables customers to enjoy the superior performance of a well-proven solution. Amdocs Digital Brand Experience Suite combines the effectiveness of a lean architecture with future readiness to give customers the ability to step into the digital economy. By deploying Amdocs Digital Brand Experience Suite in the AWS Cloud, customers can increase deployment velocity, reduce infrastructure cost significantly, and integrate with IoT, analytics, and machine learning services. Customers can further use the compliance benefits of the AWS Cloud for sensitive customer data. AWS is the cost-effective, secure, scalable, high-performing, and flexible option for deploying Amdocs Digital Brand Experience Suite BSS.

Contributors
Contributors to this document include:
•David Sell, Lead Software Architect, Amdocs Digital Brand Experience, Amdocs
•Shahar Dumai, Head of Marketing for Amdocs Digital Brand Experience, Amdocs
•Efrat Nir-Berger, Sr. Partner Solutions Architect, OSS/BSS, Amazon Web Services
•Visu Sontam, Sr. Partner Solutions Architect, OSS/BSS, Amazon Web Services
•Mounir Chennana, Solutions Architect, Amazon Web Services

Further reading
For additional information, see:
•5G Network Evolution with AWS whitepaper
•Continuous Integration and Continuous Delivery for 5G Networks on AWS whitepaper
•Next-Generation Mobile Private Networks Powered by AWS whitepaper
•AWS Well-Architected Framework whitepaper
•Next-Generation OSS with AWS whitepaper

Document revisions
November 18, 2021 – Updated for technical accuracy
February 2018 – First publication,General,consultant,Best Practices
An_Introduction_to_High_Performance_Computing_on_AWS,High Performance Computing (HPC) has been key to solving the most complex problems in every industry and to changing the way we work and live. From weather modeling to genome mapping to the search for extraterrestrial intelligence, HPC is helping to push the boundaries of what's possible with advanced computing technologies. Once confined to government labs, large enterprises, and select academic organizations, today it is found across a wide range of industries. In this paper we will discuss how cloud services put the world's most advanced computing capabilities within reach for more organizations, helping them to innovate faster and gain a competitive edge. We will discuss the advantages of running HPC workloads on Amazon Web Services (AWS) with Intel® Xeon® technology compared to traditional on-premises architectures. We will also illustrate these benefits in actual deployments across a variety of industries.

High Performance Computing on AWS Redefines What is Possible
In 2017, the market for cloud HPC solutions grew by 44% compared to 2016.

https://aws.amazon.com/hpc

HPC FUNDAMENTALS
Although HPC applications share some common building blocks, they are not all similar. HPC applications are often based on complex algorithms that rely on high-performing infrastructure for efficient execution. These applications need hardware that includes high performance processors, memory, and communication subsystems. For many applications and workloads, the performance of compute elements must be complemented by comparably high performance storage and networking elements. Some may demand high levels of parallel processing but not necessarily fast storage or a high performance interconnect. Other applications are interconnect-sensitive, requiring low-latency and high-throughput networking. Similarly, there are many I/O-sensitive applications that, without a very fast I/O subsystem, will run slowly because of storage bottlenecks. And still other applications,
such as game streaming, video encoding, and 3D application streaming, need performance acceleration using GPUs.

Today many large enterprises and research institutions procure and maintain their own HPC infrastructure. This HPC infrastructure is shared across many applications and groups within the organization to maximize utilization of a significant capital investment. Cloud-based services have opened up a new frontier for HPC. Moving HPC workloads to the cloud can provide near-instant access to virtually unlimited computing resources for a wider community of users and can support completely new types of applications. Today organizations of all sizes are looking to the cloud to support their most advanced computing applications. For smaller enterprises, cloud is a great starting point, enabling fast, agile deployment without the need for heavy capital expenditure. For large enterprises, cloud provides an easier way to tailor HPC infrastructure to changing business needs and to gain access to the latest technologies without having to worry about upfront investments in new infrastructure or ongoing operational expenses. When compared to traditional on-premises HPC infrastructures, cloud offers significant advantages in terms of scalability, flexibility, and cost.

ON-PREMISES HPC HAS ITS LIMITS
Today, on-premises HPC infrastructure handles most of the HPC workloads that enterprises and research institutions employ. Most HPC system administrators maintain and operate this infrastructure at varying levels of utilization. However, business is always competitive, so efficiency needs to be coupled with the flexibility and opportunity to innovate continuously. Some of the challenges with on-premises HPC are well known. These include long procurement cycles, high initial capital investment, and the need for mid-cycle technology refreshes. For most organizations, planning for and procuring an HPC system is a long and arduous process that involves detailed capacity forecasting and system evaluation cycles.
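The distinction drawn above between highly parallel, interconnect-sensitive, and I/O-sensitive workloads is usually reasoned about with Amdahl's law. The following illustrative sketch (a standard model, not taken from this paper) shows why a tightly coupled code with even a small serial or communication fraction stops benefiting from extra cores long before an embarrassingly parallel one does:

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Upper bound on speedup when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Perfectly parallel work scales with core count; a 5% serial fraction
# caps speedup near 20x no matter how many cores are provisioned.
for frac in (0.0, 0.05, 0.25):
    print(f"serial fraction {frac:.0%}:",
          [round(amdahl_speedup(frac, n), 1) for n in (16, 256, 4096)])
```

This is one reason sizing an on-premises cluster for a mixed workload portfolio is so hard: the core count that is ideal for one application class is wasted on another.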
Often the significant upfront capital investment required is a limiting factor for the amount of capacity that can be procured. Maintaining the infrastructure over its lifecycle is an expensive proposition as well. Previously, a technology refresh every three years was enough to stay current with compute technology and the incremental demands from HPC workloads. However, to take advantage of the faster pace of innovation, HPC customers need to refresh their infrastructure more often than before. And it is worth the effort: IDC reports that for every $1 spent on HPC, businesses see $463 in incremental revenues and $44 in incremental profit, so delaying incremental investments in HPC (and thus delaying the innovations it brings) has large downstream effects on the business.

Stifled Innovation: Often the constraints of on-premises infrastructure mean that use cases or applications that exceeded the capabilities of the hardware were simply not considered. When engineers and researchers are forced to limit their imagination to what can be tried out with limited access to infrastructure, the opportunity to think outside the box and tinker with new ideas gets lost.

Reduced Productivity: On-premises systems often have long queues and wait times that decrease productivity. They are managed to maximize utilization, often resulting in very intricate scheduling policies for jobs. However, even if a job requires only a couple of hours to run, it may be stuck in a prioritized queue for weeks or months, decreasing overall productivity and limiting innovation. In contrast, with virtually unlimited capacity, the cloud can free users to get the same job done much faster, without having to stand in line behind others who are just as eager to make progress.

Limited Scalability and Flexibility: HPC workloads and their demands are constantly changing, and legacy HPC architectures cannot always keep pace with evolving requirements. For example, infrastructure elements like GPUs,
containers, and serverless technologies are not readily available in an on-premises environment. Integrating new OS or container capabilities, or even upgrading libraries and applications, is a major system-wide undertaking. And when an on-premises HPC system is designed for a specific application or workload, it's difficult and expensive to take on new HPC applications, or to forecast and scale for future (frequently unknown) requirements.

Lost Opportunities: On-premises HPC can sometimes limit an organization's opportunities to take full advantage of the latest technologies. For example, as organizations adopt leading-edge technologies like artificial intelligence/machine learning (AI/ML) and visualization, the complexity and volume of data is pushing on-premises infrastructure to its limits. Furthermore, most AI/ML algorithms are cloud-native. These algorithms will deliver superior performance on large data sets when running in the cloud, especially with workloads that involve transient data that does not need to be stored long term. There are other limitations of on-premises HPC infrastructure that are less visible and so are often overlooked, leading to misplaced optimization efforts.

CLOUD IS A BETTER WAY TO HPC
To move beyond the limits of on-premises HPC, many organizations are leveraging cloud services to support their most advanced computing applications. Flexible and agile, the cloud offers strong advantages compared to traditional on-premises HPC approaches. HPC on AWS with Intel® Xeon® processors delivers significant leaps in compute performance, memory capacity and bandwidth, and I/O scalability. The highly customizable computing platform and robust partner community enable your staff to imagine new approaches so they can fail forward faster, delivering more answers to more questions without the need for costly on-premises upgrades. In short, AWS frees you to rethink your approach to every HPC and big data analysis initiative and invites
your team to ask questions and seek answers as often as possible.

Innovate Faster with a Highly Scalable Infrastructure
Moving HPC workloads to the cloud can bring down barriers to innovation by opening up access to virtually unlimited capacity and scale. And one of the best features of working in a cloud environment is that when you solve a problem, it stays solved. You're not revisiting it every time you do a major system-wide software upgrade or a biannual hardware refresh. Limits on scale and capacity with on-premises infrastructure usually made organizations reluctant to consider new use cases or applications that exceeded their capabilities. Running HPC in the cloud enables asking the business-critical questions they couldn't address before, and that means a fresh look at project ideas that were shelved due to infrastructure constraints. Migrating HPC applications to AWS eliminates the need for tradeoffs between experimentation and production. AWS and Intel bring the most cost-effective, scalable solutions to run the most computationally intensive applications on demand. Now research, development, and analytics teams can test every theory and process every data set without straining on-premises systems or stalling other critical work streams. Flexible configuration and virtually unlimited scalability allow engineers to grow and shrink the infrastructure as workloads dictate, not the other way around. Additionally, with easy access to a broad range of cloud-based services and a trusted partner network, researchers and engineers can quickly adopt tested and verified HPC applications so that they can innovate faster without having to reinvent what already exists.

Increase Collaboration with Secure Access to Clusters Worldwide
Running HPC workloads on the cloud enables a new way for globally distributed teams to collaborate securely. With globally accessible shared data, engineers and researchers can work together or in parallel to get results faster. For example, the use of the cloud
for collaboration and visualization allows a remote design team to view and interact with a simulation model in near real time without the need to duplicate and proliferate sensitive design data. Using the cloud as a collaboration platform also makes it easier to ensure compliance with ever-changing industry regulations. The AWS Cloud is compliant with the latest revisions of GDPR, HIPAA, FISMA, FedRAMP, PCI, ISO 27001, SOC 1, and other regulations. Encryption and granular permission features guard sensitive data without interfering with the ability to share data across approved users, and detailed audit trails for virtually every API call or cloud orchestration action mean environments can be designed to address specific governance needs and submit to continuous monitoring and surveillance. With a broad global presence and the wide availability of Intel® Xeon® technology-powered Amazon EC2 instances, HPC on AWS enables engineers and researchers to share and collaborate efficiently with team members across the globe without compromising on security.

Optimize Cost with Flexible Resource Selection
Running HPC in the cloud enables organizations to select and deploy an optimal set of services for their unique applications and to pay only for what they use. Individuals and teams can rapidly scale up or scale down resources as needed, commissioning or decommissioning HPC clusters in minutes instead of days or weeks. With HPC in the cloud, scientists, researchers, and commercial HPC users can gain rapid access to the resources they need without a burdensome procurement process. Running HPC in the cloud also minimizes the need for job queues. Traditional HPC systems require researchers and analysts to submit their projects to open source or commercial cluster and job management tools, which can be time consuming and vulnerable to submission errors. Moving HPC workloads to the cloud can help increase productivity by matching the infrastructure configuration to the job. With
on-premises infrastructure, engineers were constrained to running their jobs on the available configuration. With HPC in the cloud, every job (or set of related jobs) can run on its own on-demand cluster customized for its specific requirements. The result is more efficient HPC spending and fewer wasted resources. AWS HPC solutions remove the traditional challenges associated with on-premises clusters: fixed infrastructure capacity, technology obsolescence, and high capital expenditures. AWS gives you access to virtually unlimited HPC capacity built from the latest technologies. You can quickly migrate to newer, more powerful Intel® Xeon® processor-based EC2 instances as soon as they are made available on AWS. This removes the risk of on-premises CPU clusters becoming obsolete or poorly utilized as your needs change over time. As a result, your teams can trust that their workloads are running optimally at every stage.

Data Management & Data Transfer
Running HPC applications in the cloud starts with moving the required data into the cloud. AWS Snowball and AWS Snowmobile are data transport solutions that use devices designed to be secure to transfer large amounts of data into and out of the AWS Cloud. Using Snowball addresses common challenges with large-scale data transfers, including high network costs, long transfer times, and security concerns. AWS DataSync is a data transfer service that makes it easy for you to automate moving data between on-premises storage and Amazon S3 or Amazon Elastic File System (Amazon EFS). DataSync automatically handles many of the tasks related to data transfers that can slow down migrations or burden your IT operations, including running your own instances, handling encryption, managing scripts, network optimization, and data integrity validation. AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your
data center, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

AWS AND INTEL® DELIVER A COMPLETE HPC SOLUTION
AWS HPC solutions with Intel® Xeon® technology-powered compute instances put the full power of HPC in reach for organizations of every size and industry. AWS provides a comprehensive set of components required to power today's most advanced HPC applications, giving you the ability to choose the most appropriate mix of resources for your specific workload. Key products and services that make up the HPC on AWS solution include:

Compute
The AWS HPC solution lets you choose from a variety of compute instance types that can be configured to suit your needs, including the latest Intel® Xeon® processor-powered CPU instances, GPU-based instances, and field programmable gate array (FPGA)-powered instances. The latest Intel-powered Amazon EC2 instances include the C5n, C5d, and z1d instances. C5n instances feature the Intel Xeon Platinum 8000 series (Skylake-SP) processor with a sustained all-core Turbo CPU clock speed of up to 3.5 GHz. C5n instances provide up to 100 Gbps of network bandwidth and up to 14 Gbps of dedicated bandwidth to Amazon EBS. C5n instances also feature a 33% higher memory footprint compared to C5 instances. For workloads that require access to high-speed, ultra-low latency local storage, AWS offers C5d instances equipped with local NVMe-based SSDs. Amazon EC2 z1d instances offer both high compute capacity and a high memory footprint. High-frequency z1d instances deliver a sustained all-core frequency of up to 4.0 GHz, the fastest of any cloud instance. For HPC codes that can benefit from GPU acceleration, Amazon EC2 P3dn instances feature 100 Gbps network bandwidth (up to 4x the bandwidth of previous P3 instances), local NVMe storage, the latest NVIDIA V100
Tensor Core GPUs with 32 GB of GPU memory, NVIDIA NVLink for faster GPU-to-GPU communication, and AWS-custom Intel® Xeon® Scalable (Skylake) processors running at 3.1 GHz sustained all-core Turbo. AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it's easy to set up application scaling for multiple resources across multiple services in minutes.

Networking
Amazon EC2 instances support enhanced networking, which allows EC2 instances to achieve higher bandwidth and lower inter-instance latency compared to traditional virtualization methods. Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables you to run HPC applications requiring high levels of inter-node communication at scale on AWS. Its custom-built operating system (OS) bypass hardware interface enhances the performance of inter-instance communications, which is critical to scaling HPC applications. AWS also offers placement groups for tightly coupled HPC applications that require low-latency networking. Amazon Virtual Private Cloud (VPC) provides IP connectivity between compute instances and storage components.

Storage
Storage options and storage costs are critical factors when considering an HPC solution. AWS offers flexible object, block, or file storage for your transient and permanent storage requirements. Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2. Provisioned IOPS allows you to allocate storage volumes of the size you need and to attach these virtual volumes to your EC2 instances. Amazon Simple Storage Service (S3) is designed to store and access any type of data over the Internet and can be used to store HPC input and output data long term, without ever having to do a data migration project again. Amazon FSx for Lustre is a high performance file storage service designed for demanding HPC workloads and can be used
on Amazon EC2 in the AWS Cloud. Amazon FSx for Lustre works natively with Amazon S3, making it easy for you to process cloud data sets with high performance file systems. When linked to an S3 bucket, an FSx for Lustre file system transparently presents S3 objects as files and allows you to write results back to S3. You can also use FSx for Lustre as a standalone high-performance file system to burst your workloads from on-premises to the cloud. By copying on-premises data to an FSx for Lustre file system, you can make that data available for fast processing by compute instances running on AWS. Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 instances in the AWS Cloud.

Automation and Orchestration
Automating the job submission process and scheduling submitted jobs according to predetermined policies and priorities are essential for efficient use of the underlying HPC infrastructure. AWS Batch lets you run hundreds to thousands of batch computing jobs by dynamically provisioning the right type and quantity of compute resources based on the job requirements. AWS ParallelCluster is a fully supported and maintained open source cluster management tool that makes it easy for scientists, researchers, and IT administrators to deploy and manage High Performance Computing (HPC) clusters in the AWS Cloud. NICE EnginFrame is a web portal designed to provide efficient access to HPC-enabled infrastructure using a standard browser. EnginFrame provides you a user-friendly HPC job submission, job control, and job monitoring environment.

Operations & Management
Monitoring the infrastructure and avoiding cost overruns are two of the most important capabilities that help HPC system administrators efficiently manage an organization's HPC needs. Amazon CloudWatch is a monitoring and management service built for developers, system operators, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and
actionable insights to monitor your applications, understand and respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.

Visualization Tools
The ability to visualize the results of engineering simulations without having to move massive amounts of data to and from the cloud is an important aspect of the HPC stack. Remote visualization helps accelerate turnaround times for engineering design significantly. NICE Desktop Cloud Visualization enables you to remotely access 2D/3D interactive applications over a standard network. In addition, Amazon AppStream 2.0 is a fully managed application streaming service that can securely deliver application sessions to a browser on any computer or workstation.

Security and Compliance
Security management and regulatory compliance are other important aspects of running HPC in the cloud. AWS offers multiple security-related services and quick-launch templates to simplify the process of creating an HPC cluster and implementing best practices in data security and regulatory compliance. The AWS infrastructure puts strong safeguards in place to help protect customer privacy. All data is stored in highly secure AWS data centers. AWS Identity and Access Management (IAM) provides a robust solution for managing users, roles, and groups that have rights to access specific data sources. Organizations can issue users and systems individual identities and credentials, or provision them with temporary access credentials using the AWS Security Token Service (AWS STS). AWS manages dozens of compliance programs in its infrastructure, which means that segments of your compliance have already been completed. AWS infrastructure is compliant with many relevant industry regulations, such as HIPAA, FISMA, FedRAMP, PCI, ISO 27001, SOC 1, and others.
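The least-privilege access pattern described above can be sketched as an IAM policy document. This is a minimal illustration, not the whitepaper's template: the bucket name is a hypothetical placeholder, and a real deployment would attach such a policy to the specific roles or groups that need it.

```python
import json

def read_only_bucket_policy(bucket: str) -> dict:
    """Build an IAM policy document granting read-only access to one S3 bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # Allow listing the bucket's contents...
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [f"arn:aws:s3:::{bucket}"],
            },
            {   # ...and reading its objects, but nothing else.
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            },
        ],
    }

# Hypothetical results bucket for an HPC workload.
policy = read_only_bucket_policy("hpc-results-example")
print(json.dumps(policy, indent=2))
```

Combined with temporary credentials from AWS STS, a scoped policy like this lets analysts read simulation output without gaining write or delete rights over the data source.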
Flexible Pricing and Business Models
With AWS, capacity planning worries become a thing of the past. AWS offers on-demand pricing for short-term projects, contract pricing for long-term predictable needs, and Spot pricing for experimental work or research groups with tight budgets. AWS customers enjoy the flexibility to choose from any combination of pay-as-you-go options, procuring only the capacity they need for the duration it's needed, and AWS Trusted Advisor will alert you to any cost-saving actions you can take to minimize your bill. This simplified, flexible pricing structure and approach allows research institutions to break free from the time- and budget-constraining, CapEx-intensive data center model. With HPC on AWS, organizations can flexibly tune and scale their infrastructure as workloads dictate, instead of the other way around.

AWS Partners and Marketplace
For organizations looking to build highly specific solutions, AWS Marketplace is an online store for applications and services that build on top of AWS. AWS partner solutions and AWS Marketplace let organizations immediately take advantage of partners' built-in optimizations and best practices, leveraging what they've learned from building complex services on AWS. A variety of open source HPC applications are also available on the AWS Marketplace.

HPC ON AWS DELIVERS ADVANTAGES FOR A RANGE OF HPC WORKLOADS
The AWS Cloud provides a broad range of scalable, flexible infrastructure solutions that organizations can select to match their workloads and tasks. This gives HPC users the ability to choose the most appropriate mix of resources for their specific applications. Let us take a brief look at the advantages that HPC on AWS delivers for these workload types.

Tightly Coupled HPC: A typical tightly coupled HPC application often spans large numbers of CPU cores in order to accomplish demanding computational workloads. To study the aerodynamics of a new commercial jet
liner design, engineers often run computational fluid dynamics simulations using thousands of CPU cores. Global climate modeling applications are also executed at a similar scale. The AWS Cloud provides scalable computing resources to execute such applications, which can be deployed on the cloud at any scale. Organizations can set a maximum number of cores per job depending on the application requirements, aligning it to criteria like model size, frequency of jobs, cost per computation, and urgency of job completion. A significant benefit of running such workloads on AWS is the ability to scale out to experiment with more tunable parameters. For example, an engineer performing electromagnetic simulations can run larger numbers of parametric sweeps in a Design of Experiments (DoE) study using very large numbers of Amazon EC2 On-Demand instances, using AWS Auto Scaling to launch independent and parallel simulation jobs. Such DoE jobs would often not be possible because of the hardware limits of on-premises infrastructure. A further benefit for such an engineer is to use Amazon Simple Storage Service (S3), NICE DCV, and other AWS solutions like AI/ML services to aggregate, analyze, and visualize the results as part of a workflow pipeline, any element of which can be spun up (or down) independently to meet needs. Amazon EC2 features that help with applications in this category also include EC2 placement groups and enhanced networking for reduced node-to-node latencies and consistent network performance.

Loosely Coupled Grid Computing: The cloud provides support for a variety of loosely coupled grid computing applications that are designed for fault tolerance, enabling individual nodes to be added or removed during the course of job execution. This category of applications includes Monte Carlo simulations for financial risk analysis, materials science studies for proteomics, and more. A typical job distributes independent computational workloads across large numbers of CPU cores or
nodes in a grid without high demand for a high performance node-to-node interconnect or for high-performance storage. The cloud lets organizations deliver the fault tolerance these applications require and choose the instance types they need for the specific compute tasks they plan to execute. Such applications are ideally suited to Amazon EC2 Spot Instances, which are EC2 instances that opportunistically take advantage of Amazon EC2's spare computing capacity. Coupled with Amazon EC2 Auto Scaling, jobs can be scaled up when excess spare capacity makes Spot Instances cheaper than normal. AWS Batch brings all these capabilities together in a single batch-oriented service that is easy to use, container-focused for maximum portability, and integrated with a range of commercial and open source workflow engines to make job orchestration easy.

High Volume Data Analytics and Interpretation: When grid and cluster HPC workloads handle large amounts of data, their applications require fast, reliable access to many types of data storage. AWS services and features that help HPC users optimize for data-intensive computing include Amazon S3, Amazon Elastic Block Store (EBS), and Amazon EC2 instance types that are optimized for high I/O performance (including those configured with solid-state drive (SSD) storage). Solutions also exist for creating high performance virtual network attached storage (NAS) and network file systems (NFS) in the cloud, allowing applications running in Amazon EC2 to access high performance, scalable, cloud-based shared storage resources. Example applications in this category include genomics, high-resolution image processing, and seismic data processing.

Visualization: Using the cloud for collaboration and visualization makes it much easier for members of global organizations to share their digital data instantly from any part of the world. For example, it lets subcontractors or remote design teams view and interact with a simulation model in near real
time from any location. They can securely collaborate on data from anywhere without the need to duplicate and share it. AWS services that enable these types of workloads include graphics-optimized instances, remote visualization services like NICE DCV, and managed services like Amazon WorkSpaces and Amazon AppStream 2.0.

Accelerated Computing: There are many HPC workloads that can benefit from offloading computation-intensive tasks to specialized hardware coprocessors such as GPUs or FPGAs. Many tightly coupled and visualization workloads are apt for accelerated computing. AWS HPC solutions offer the flexibility to choose from many available CPU-, GPU-, or FPGA-based instances to deploy optimized infrastructure that meets the needs of specific applications.

Machine Learning and Artificial Intelligence: Machine learning requires a broad set of computing resource options, ranging from GPUs for compute-intensive deep learning and FPGAs for specialized hardware acceleration to high-memory instances for inference. With HPC on AWS, organizations can select instance types and services to fit their machine learning needs. They can choose from a variety of CPU, GPU, FPGA, memory, storage, and networking options and tailor instances to their specific requirements, whether they are training models or running inference on trained models. AWS uses the latest Intel® Xeon® Scalable CPUs, which are optimized for machine learning and AI workloads at scale. The Intel® Xeon® Scalable processors incorporated in AWS EC2 C5 instances, along with optimized deep learning functions in the Intel MKL-DNN library, provide sufficient compute for deep learning training workloads (in addition to inference, classical machine learning, and other AI algorithms). In addition, CPU- and GPU-optimized frameworks such as TensorFlow, MXNet, and PyTorch are available in Amazon Machine Image (AMI) format for customers to deploy their AI workloads on optimized software and hardware stacks. Recent advances in distributed algorithms have also enabled the
use of hundreds of servers to reduce the time to train from weeks to minutes. Data scientists can get excellent deep learning training performance using Amazon EC2 and further reduce the time to train by using multiple CPU nodes, scaling near linearly to hundreds of nodes.

Life Sciences and Healthcare
Running HPC workloads on AWS lets healthcare and life sciences professionals easily and securely scale genomic analysis and precision medicine applications. For AWS users, the scalability is built in, bolstered by an ecosystem of partners for tools and datasets designed for sensitive data and workloads. They can efficiently and dynamically store and compute their data, collaborate with peers, and integrate findings into clinical practice, all while conforming with security and compliance requirements. For example, Bristol-Myers Squibb (BMS), a global biopharmaceutical company, used AWS to build a secure self-provisioning portal for hosting research. The solution lets scientists run clinical trial simulations on demand and enables BMS to set up rules that keep compute costs low. Compute-intensive clinical trial simulations that previously took 60 hours finish in only 1.2 hours on the AWS Cloud. Running simulations 98% faster has led to more efficient, less costly clinical trials, and better conditions for patients.

DRIVING INNOVATION ACROSS INDUSTRIES
Every industry tackles a different set of challenges. AWS HPC solutions, available with the power of the latest Intel technologies, help companies of all sizes in nearly every industry achieve their HPC results with flexible configuration options that simplify operations, save money, and get results to market faster. These workloads span traditional HPC applications like genomics, life sciences research, financial risk analysis, computer-aided design, and seismic imaging, as well as emerging applications like machine learning, deep learning, and autonomous vehicles.

“The time and money savings are obvious, but probably what is most important
factor is we are using fewer subjects in these trials, we are optimizing dosage levels, we have higher drug tolerance and safety, and at the end of the day, for these kids, it’s fewer blood samples.” – Sr. Solutions Specialist, Bristol-Myers Squibb

Financial Services
Insurers and capital markets have long been using grid computing to power actuarial calculations, determine capital requirements, model risk scenarios, price products, and handle other key tasks. Taking these compute-intensive workloads out of the data center and moving them to AWS helps them boost speed, scale better, and save money. For example, MAPFRE, the largest insurance company in Spain, needed fast, flexible environments in which to develop sales management and insurance policy applications. The firm was looking for a cost-effective technology platform that could deliver rapid analysis and enable quick deployment of development environments at remote installation sites. Its on-premises infrastructure simply could not support these needs. The company turned to AWS for high performance computing, risk analysis of customer data, and to create test and development environments for its commercial application.

“The on-premises hardware investment for three years cost approximately €1.5 million, whereas the AWS infrastructure cost the company €180,000 for the same period, a savings of 88 percent.” – MAPFRE

KEEPING PACE WITH CHANGING FINANCIAL REGULATIONS
AWS customers in financial services are preparing for new Fundamental Review of the Trading Book (FRTB) regulations that will come into effect between 2019 and 2021. As part of the proposed regulations, these financial services institutions will need to perform compute-intensive “value at risk” calculations in the four hours after trading ends in New York and begins in Tokyo. The periodic nature of the calculation, along with the amount of processing power and storage needed to run it within four hours, made it a great fit for an environment where a vast amount of
cost-effective compute power is available on an on-demand basis. To help its financial services customers meet these new regulations, AWS worked with TIBCO (a market-leading on-premises infrastructure platform for grid and elastic computing) to run a proof-of-concept grid in the AWS Cloud. The grid grew to 61,299 Spot Instances with 1.3 million vCPUs and cost approximately $30,000 an hour to run. This proof of concept is a strong example of the potential for AWS to deliver a vast amount of cost-effective compute power on an on-demand basis.

Design and Engineering
Using simulations on AWS HPC infrastructure lets manufacturers and designers reduce costs by replacing expensive development of physical models with virtual ones during product development. The result? Improved product quality, shorter time to market, and reduced product development costs. TLG Aerospace in Seattle, Washington put these capabilities to work to perform aerodynamic simulations on aircraft and predict the pressure and temperature surrounding airframes. Its existing cloud provider was expensive and could not scale to handle more performance-intensive applications. TLG turned to Amazon EC2 Spot Instances, which provide a way to use unused EC2 computing capacity at a discounted price. The solution dramatically decreased simulation costs and can scale easily to take on new jobs as needed.

Energy and Geo Sciences
Reducing runtimes for compute-intensive applications like seismic analysis and reservoir simulation is just one of the many ways the energy and geosciences industry has been using HPC applications in the cloud. By moving HPC applications to the cloud, organizations reduce job submission time, track runtime, and efficiently manage the large datasets associated with daily workloads. For example, using AWS on-demand computing resources, Zenotech, a simulation service provider, can power simulations that help energy companies support advanced reservoir models.

“We saw a 75% reduction in the cost per CFD
simulation as soon as we started using Amazon EC2 Spot Instances. We are able to pass those savings along to our customers, and be more competitive.” – TLG Aerospace

Using the resources available within a typical small company, it would take several years to complete a sophisticated reservoir simulation. Zenotech completed it at a computing cost for AWS resources of only $750 over a 12-day period.

Media and Entertainment
The movie and entertainment industries are shifting content production and post-production to cloud-based HPC to take advantage of highly scalable, elastic, and secure cloud services to accelerate content production and reduce capital infrastructure investment. Content production and post-production companies are leveraging the cloud to accelerate and streamline production, editing, and rendering workloads with highly scalable cloud computing and storage. One design and visual effects (VFX) company, Fin Design + Effects, needed the ability to access vast amounts of compute capacity when big deadlines came around. Its on-premises render servers had a finite capacity and were difficult and expensive to scale. Fin started by using AWS Direct Connect to scale its rendering capabilities, establishing a dedicated gigabit network connection from the Fin data center to AWS. Fin is also taking advantage of Amazon EC2 Spot Instances. Fin now has the agility to add compute resources on the fly to meet last-minute project demands.

AI/ML and Autonomous Vehicles
The AI revolution, which started with the rapid increase in accuracy brought by deep learning methods, has the potential to revolutionize a variety of industries. Autonomous driving is a particularly popular use case for AI/ML. Developing and deploying autonomous vehicles requires the ability to collect, store, and manage massive amounts of data; high performance computing capacity; and advanced deep learning frameworks, along with the capability to do real-time processing of local rules and events in the
vehicle. AWS’s virtually unlimited storage and compute capacity and support for popular deep learning frameworks help accelerate algorithm training and testing and drive faster time to market.

“We are reducing our operational costs by 50 percent by using Amazon EC2 Spot Instances.” – Fin Design

SUMMARY AND RECOMMENDATION
Technology continues to change rapidly, and it’s clear that HPC has a critical role to play in enabling organizations to innovate faster and adopt other leading-edge technologies like AI/ML and IoT. AWS puts the advanced capabilities of High Performance Computing in reach for more people and organizations while simplifying processes like management, deployment, and scaling. Accessible, flexible, and cost effective, it frees the creativity of an organization’s engineers, analysts, and researchers from the limitations of on-premises infrastructures. Unlike traditional on-premises HPC systems, AWS offers virtually unlimited capacity to scale out HPC infrastructure. It also provides the flexibility for organizations to adapt their HPC infrastructure to changing business priorities. With flexible deployment and pricing models, it lets organizations of all sizes and industries take advantage of the most advanced computing capabilities available. HPC on AWS lets you take a fresh approach to innovation to solve the world’s most complex problems. Learn more about running your HPC workloads on AWS at http://aws.amazon.com/hpc

i. “HPC Market Update, ISC18,” Intersect360 Research, 2018.

This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/overview-aws-cloud-data-migration-services/overview-aws-cloud-data-migration-services.html

An Overview of
AWS Cloud Data Migration Services

Published May 1, 2016
Updated June 13, 2021

This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
Cloud Data Migration Challenges
Security and Data Sensitivity
Cloud Data Migration Tools
Time and Performance
Choosing a Migration Method
Self-managed Migration Methods
AWS Managed Migration Tools
Cloud Data Migration Use Cases
Use Case 1: One-Time Massive Data Migration
Use Case 2: Continuous On-premises Data Migration
Use Case 3: Continuous Streaming Data Ingestion
Conclusion
Contributors
Further Reading
Document Revisions

Abstract
One of the most challenging steps required to deploy an application infrastructure in the cloud is moving data into and out of the cloud. Amazon Web Services (AWS) provides multiple services for moving data, and each solution
offers various levels of speed, security, cost, and performance. This whitepaper outlines the different AWS services that can help seamlessly transfer data to and from the AWS Cloud.

Introduction
As you plan your data migration strategy, you will need to determine the best approach to use based on the specifics of your environment. There are many different ways to lift and shift data to the cloud, such as one-time large batches, constant device streams, intermittent updates, or even hybrid data storage combining the AWS Cloud and on-premises data stores. These methods can be used individually or together to help streamline the realities of cloud data migration projects.

Cloud Data Migration Challenges
When planning a data migration, you need to determine how much data is being moved and the bandwidth available for the transfer of data. This will determine how long the transfer will take. AWS offers several methods to transfer data into your account, including the AWS Snow Family of storage devices, AWS Direct Connect, and AWS Site-to-Site VPN over your existing internet connection. The network bandwidth that is consumed for data migration will not be available for your organization’s typical application traffic. In addition, your organization might be concerned with moving sensitive business information from your internal network to a secure AWS environment. Determining the security level for your organization helps you select the appropriate AWS services for your data migration.

Security and Data Sensitivity
When customers migrate data, ensuring the security of data
both in transit and at rest is critical. AWS takes security very seriously and builds security features into all data migration services. Every service uses AWS Identity and Access Management (IAM) to control programmatic and AWS Console access to resources. The following table lists these features.

Table 1 – AWS Services Security Features

AWS Direct Connect
• Provides a dedicated physical connection with no data transfer over the internet.
• Integrates with AWS CloudTrail to capture API calls made by or on behalf of a customer account.

AWS Snow Family
• Integrates with the AWS Key Management Service (AWS KMS) to encrypt data at rest that is stored on AWS Snowcone, Snowball, or Snowmobile.
• Uses an industry-standard Trusted Platform Module (TPM) that has a dedicated processor designed to detect any unauthorized modifications to the hardware, firmware, or software to physically secure the AWS Snowcone or Snowball device.

AWS Transfer Family
• SFTP uses SSH, while FTPS uses TLS, to transfer data through a secure and encrypted channel.
• AWS Transfer Family is PCI DSS and GDPR compliant and HIPAA eligible. The service is also SOC 1, 2, and 3 compliant. Learn more about services in scope grouped by compliance programs.
• The service supports three modes of authentication: Service Managed, where you store user identities within the service; Microsoft Active Directory; and Custom (BYO), which enables you to integrate an identity provider of your choice. Service Managed authentication is supported for server endpoints that are enabled for SFTP only.
• You can use Amazon CloudWatch to monitor your end users’ activity and use AWS CloudTrail to access a record of all S3 API operations invoked by your server to service
your end users’ data requests.

AWS DataSync
• All data transferred between the source and destination is encrypted via Transport Layer Security (TLS), which replaced Secure Sockets Layer (SSL). Data is never persisted in AWS DataSync itself. The service supports using default encryption for S3 buckets, Amazon EFS file system encryption of data at rest, and Amazon FSx for Windows File Server encryption at rest and in transit.
• When copying data to or from your premises, there is no need to set up a VPN/tunnel or allow inbound connections. Your AWS DataSync agent can be configured to route through a firewall using standard network ports.
• Your AWS DataSync agent connects to DataSync service endpoints within your chosen AWS Region. You can choose to have the agent connect to public internet-facing endpoints, Federal Information Processing Standards (FIPS) validated endpoints, or endpoints within one of your VPCs.

AWS Storage Gateway
• Encrypts all data in transit to and from AWS by using SSL/TLS.
• All data in AWS Storage Gateway is encrypted at rest using AES-256, while data transfers are encrypted with AES-128-GCM or AES-128-CCM.
• Authentication between your gateway and iSCSI initiators can be secured by using Challenge-Handshake Authentication Protocol (CHAP).

Amazon S3 Transfer Acceleration
• Access to Amazon S3 can be restricted by granting other AWS accounts and users permission to perform the resource operations by writing an access policy.
• Encrypt data at rest by performing server-side
encryption using Amazon S3 Managed Keys (SSE-S3), AWS Key Management Service (KMS) Managed Keys (SSE-KMS), or Customer-Provided Keys (SSE-C); or by performing client-side encryption using an AWS KMS–managed customer master key (CMK) or a client-side master key.
• Data in transit can be secured by using SSL/TLS or client-side encryption.
• Enable Multi-Factor Authentication (MFA) Delete for an Amazon S3 bucket.

Amazon Kinesis Data Firehose
• Data in transit can be secured by using SSL/TLS.
• If you send data to your delivery stream using PutRecord or PutRecordBatch, or if you send the data using AWS IoT, Amazon CloudWatch Logs, or CloudWatch Events, you can turn on server-side encryption by using the StartDeliveryStreamEncryption operation.
• You can also enable SSE when you create the delivery stream.

Cloud Data Migration Tools
This section discusses managed and self-managed migration tools, with a brief description of how each solution works. You can select AWS managed or self-managed migration methods and make your choice based on your specific use case.

Time and Performance
When you migrate data from your on-premises storage to AWS storage services, you want to take the least amount of time to move data over your internet connection, with minimal disruption to the existing systems. To calculate the number of days required to migrate a given amount of data, you can use the following formula:

Number of Days = (Total Bytes × 8 bits per byte) / (CIRCUIT gigabits per second × NETWORK_UTILIZATION percent × 3600 seconds per hour × AVAILABLE_HOURS per day)

For example, if you have a Gigabit Ethernet connection (1 Gbps) to the internet and 100 TB of data to move to AWS, theoretically the minimum time it would take over the network connection at 80 percent utilization is approximately 28 days:
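The transfer-time formula above can be sketched as a small Python helper (a hypothetical illustration, not part of any AWS tooling; it assumes 1 TB = 10^12 bytes, as in the worked example):

```python
def migration_days(terabytes, circuit_gbps, utilization, hours_per_day):
    """Estimate the number of days needed to migrate data over a network link.

    terabytes:     data volume to move (1 TB = 10**12 bytes here)
    circuit_gbps:  link speed in gigabits per second
    utilization:   fraction of the link usable for migration (0.0 - 1.0)
    hours_per_day: daily transfer window, in hours
    """
    total_bits = terabytes * 10**12 * 8  # convert TB to bits
    bits_per_day = circuit_gbps * 10**9 * utilization * 3600 * hours_per_day
    return total_bits / bits_per_day

# Whitepaper example: 100 TB over a 1 Gbps link at 80% utilization,
# transferring 10 hours per day.
print(round(migration_days(100, 1, 0.80, 10), 2))  # 27.78
```

Running the helper with the whitepaper's numbers reproduces the roughly 28-day estimate shown in the calculation that follows.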
(100,000,000,000,000 bytes × 8 bits per byte) / (1,000,000,000 bps × 80 percent × 3,600 seconds per hour × 10 hours per day) = 27.77 days

If this amount of time is not practical for you, there are many ways to reduce migration time for large amounts of data. You can use AWS managed migration tools that automate data transfers and optimize your internet connection to the AWS Cloud. Alternatively, you may develop or purchase your own tools and create your own transfer processes that utilize the native HTTP interfaces to Amazon Simple Storage Service (Amazon S3). For moving small amounts of data from your on-site location to the AWS Cloud, you may use ad hoc methods that get the job done quickly with minimal use of the automation methods discussed in the AWS migration tools section. For the best results, we suggest the following:

Table 2 – Recommended migration methods

Connection & Data Scale | Method | Duration
Less than 10 Mbps & Less than 100 GB | Self-managed | ~3 days
Less than 10 Mbps & Between 100 GB – 1 TB | AWS Managed | ~30 days
Less than 10 Mbps & Greater than 1 TB | AWS Snow Family | ~weeks
Less than 1 Gbps & Between 100 GB – 1 TB | Self-managed | ~days
Less than 1 Gbps & Greater than 1 TB | AWS Managed / Snow Family | ~weeks

Choosing a Migration Method
There are several factors to consider when choosing the appropriate migration method and tool. As discussed in the previous section, the time allocated to perform data transfers, the volume of data, and network speeds influence the decision between different data migration methods. You should also consider, for each data store, server, or application stack, the number of repetitive steps required to transfer data from source to target. Then evaluate the variance of these steps as they are repeated. In
other words, are there unique requirements per data store that require non-trivial changes to the data migration procedures? Then evaluate the level of existing investments in custom tooling and automation in your organization. You will need to determine whether it is more worthwhile to use existing self-managed tooling and automation or to sunset them in favor of managed services and tools. You can use the following decision tree as a framework to choose a suitable migration method and tool:

Figure 1 – Migration Method Decision Tree

Self-managed Migration Methods
Small, one-time data transfers on limited-bandwidth connections may be accomplished using these very simple tools.

Amazon S3 AWS Command Line Interface
For migrating small amounts of data, you can use the Amazon S3 AWS Command Line Interface to write commands that move data into an Amazon S3 bucket. You can upload objects up to 5 GB in size in a single operation. If your object is greater than 5 GB, you can use multipart upload. Multipart uploading is a three-step process: you initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts. Once complete, you can access the object just as you would any other object in your bucket.

Amazon Glacier AWS Command Line Interface
For migrating small amounts of data, you can write commands using the Amazon Glacier AWS Command Line Interface to move data into Amazon Glacier. In a single operation, you can
upload archives from 1 byte up to 4 GB in size. However, for archives greater than 100 MB in size, we recommend using multipart upload. Using the multipart upload API, you can upload large archives of up to about 40,000 GB (10,000 × 4 GB).

Storage Partner Solutions
Multiple Storage Partner solutions work seamlessly to access storage across on-premises and AWS Cloud environments. Partner hardware and software solutions can help customers do tasks such as backup, create primary file storage/cloud NAS, archive, perform disaster recovery, and transfer files.

AWS Managed Migration Tools
AWS has designed several sophisticated services to help with cloud data migration.

AWS Direct Connect
AWS Direct Connect lets you establish a dedicated network connection between your corporate network and an AWS Direct Connect location. Using this connection, you can create virtual interfaces directly to AWS services, bypassing internet service providers (ISPs) in your network path to your target AWS Region. By setting up private connectivity over AWS Direct Connect, you could reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than with internet-based connections. Using AWS Direct Connect, you can easily establish a dedicated network connection from your premises to AWS at speeds starting at 50 Mbps and up to 100 Gbps. You can use the connection to access Amazon Virtual Private Cloud (Amazon VPC) as well as AWS public services such as Amazon S3. AWS Direct Connect in itself is not a data transfer service. Rather, it provides a high-bandwidth connection that can be used to transfer data between your corporate network and AWS with more consistent performance and without ever having the data routed over the internet. Encryption
methods may be applied to secure data transfers over AWS Direct Connect, such as AWS Site-to-Site VPN. AWS APN Partners can help you set up a new connection between an AWS Direct Connect location and your corporate data center, office, or colocation facility. Additionally, many of our partners offer AWS Direct Connect Bundles that provide a set of advanced hybrid architectures that can reduce complexity and provide peak performance. You can extend your on-premises networking, security, storage, and compute technologies to the AWS Cloud using managed hybrid architecture, compliance infrastructure, managed security, and converged infrastructure. With 108 Direct Connect locations worldwide and more than 50 Direct Connect delivery partners, you can establish links between your on-premises network and AWS Direct Connect locations. With AWS Direct Connect, you only pay for what you use, and there is no minimum fee associated with using the service. AWS Direct Connect has two pricing components: the port-hour rate (based on port speed) and data transfer out (per GB per month). Additionally, if you are using an APN partner to facilitate an AWS Direct Connect connection, contact the partner to discuss any fees they may charge. For information about pricing, see AWS Direct Connect Pricing.

AWS Snow Family
The AWS Snow Family accelerates moving large amounts of data into and out of AWS using AWS-managed hardware and software. The Snow Family, comprising AWS Snowcone, AWS Snowball, and AWS Snowmobile, consists of physical devices with different form factors and capacities. They are purpose-built for efficient data storage and transfer and have built-in compute capabilities. The AWS Snowcone device is a lightweight, handheld storage device that accommodates field environments where access to power may be limited and Wi-Fi is necessary to make the connection. An AWS Snowball Edge device is rugged enough to withstand a 70 G shock, and at 49.7 pounds (22.54 kg), it is light enough for one person to
carry. It is entirely self-contained, with 110–240 VAC power, ships with country-specific power cables, and has an E Ink display and control panel on the front. Each AWS Snowball Edge appliance is weather-resistant and serves as its own shipping container. With AWS Snowball, you have the choice of two devices as of the date of this writing: Snowball Edge Compute Optimized, with more computing capabilities, suited for higher-performance workloads; or Snowball Edge Storage Optimized, with more storage, suited for large-scale data migrations and capacity-oriented workloads.

Snowball Edge Compute Optimized provides powerful computing resources for use cases such as machine learning, full motion video analysis, analytics, and local computing stacks. These capabilities include 52 vCPUs, 208 GiB of memory, and an optional NVIDIA Tesla V100 GPU. For storage, the device provides 42 TB of usable HDD capacity for S3-compatible object storage or EBS-compatible block volumes, as well as 7.68 TB of usable NVMe SSD capacity for EBS-compatible block volumes. Snowball Edge Compute Optimized devices run Amazon EC2 sbe-c and sbe-g instances, which are equivalent to C5, M5a, G3, and P3 instances.

Snowball Edge Storage Optimized devices are well suited for large-scale data migrations and recurring transfer workflows, as well as local computing with higher capacity needs. Snowball Edge Storage Optimized provides 80 TB of HDD capacity for block volumes and Amazon S3-compatible object storage, and 1 TB of SATA SSD for block volumes. For computing resources, the device provides 40 vCPUs and 80 GiB of memory to support Amazon EC2 sbe1 instances (equivalent to C5).

AWS transfers your data directly onto the Snowball Edge device using on-premises high-speed connections, ships the device to AWS facilities, and
transfers data off of AWS Snowball Edge devices using Amazon’s high-speed internal network. The data transfer process bypasses the corporate internet connection and mitigates the requirement for AWS Direct Connect services. For datasets of significant size, AWS Snowball is often faster than transferring data via the internet and more cost-effective than upgrading your data center’s internet connection. AWS Snowball supports importing data into and exporting data from Amazon S3 buckets. From there, the data can be copied or moved to other AWS services such as Amazon Elastic Block Store (Amazon EBS), Amazon Elastic File System (Amazon EFS), Amazon FSx File Gateway, and Amazon Glacier.

AWS Snowball is ideal for securely transferring large amounts of data, up to many petabytes, in and out of the AWS Cloud. This approach is especially effective in cases where you don’t want to make expensive upgrades to your network infrastructure; if you frequently experience large backlogs of data; if you are in a physically isolated environment; or if you are in an area where high-speed internet connections are not available or are cost-prohibitive. In general, if loading your data over the internet would take a week or more, you should consider using the AWS Snow Family. Common use cases include cloud migration, disaster recovery, data center decommission, and content distribution. When you decommission a data center, many steps are involved to make sure valuable data is not lost, and the AWS Snow Family can help ensure data is securely and cost-effectively transferred to AWS. In a content distribution scenario, you might use Snowball Edge devices if you regularly receive or need to share large amounts of data with clients, customers, or business partners. Snowball appliances can be sent directly
from AWS to client or customer locations. If you need to move massive amounts of data, AWS Snowmobile is an exabyte-scale data transfer service. Each Snowmobile is a 45-foot-long ruggedized shipping container, hauled by a trailer truck, with up to 100 PB of data storage capacity. Snowmobile also handles all of the logistics: AWS personnel transport and configure the Snowmobile, and they will work with your team to connect a temporary high-speed network switch to your local network. The local high-speed network facilitates rapid transfer of data from within your data center to the Snowmobile. Once you’ve loaded all your data, the Snowmobile drives back to AWS, where the data is imported into Amazon S3.

Moving data at this massive scale requires additional preparation, precautions, and security. Snowmobile uses GPS tracking, round-the-clock video surveillance, and dedicated security personnel, and offers an optional security escort vehicle while your data is in transit to AWS. Management of and access to the shipping container and the data stored within is limited to AWS personnel using hardware-secure access control methods.

The AWS Snow Family might not be the ideal solution if your data can be transferred over the internet in less than one week or if your applications cannot tolerate the offline transfer time. With the AWS Snow Family, as with most other AWS services, you pay only for what you use. Snowball has three pricing components: a service fee (per job), extra-day charges as required, and data transfer out. The first 5 days of onsite Snowcone usage and the first 10 days of onsite Snowball usage are included in the service fee. For the destination storage, standard Amazon S3 storage pricing applies. For pricing information, see AWS Snowball pricing. Snowmobile pricing is based on the amount of data stored on the truck per month. For more information about AWS Regions and availability, see AWS Regional Services.
Amazon Web Services – An Overview of AWS Cloud Data Migration Services

AWS Storage Gateway

AWS Storage Gateway makes backing up to the cloud extremely simple. It connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization's on-premises IT environment and the AWS storage infrastructure. The service enables you to securely store data in the AWS Cloud for scalable and cost-effective storage.

AWS Storage Gateway supports three types of storage interfaces used in on-premises environments: file, volume, and tape. It uses industry-standard network storage protocols, such as Network File System (NFS) and Server Message Block (SMB), that work with your existing applications. The S3 File Gateway function stores data in Amazon S3, providing low-latency performance by maintaining an on-premises cache of frequently accessed data while securely storing all of your data encrypted in Amazon S3. Once data is stored in Amazon S3, it can be archived in Amazon S3 Glacier. For disaster recovery scenarios, AWS Storage Gateway together with Amazon Elastic Compute Cloud (Amazon EC2) can serve as a cloud-hosted solution that mirrors your entire production environment.

You can download the AWS Storage Gateway software appliance as a virtual machine (VM) image that you install on a host in your data center, or as an EC2 instance. After you've installed your gateway and associated it with your AWS account through the AWS activation process, you can use the AWS Management Console to create gateway-cached volumes, gateway-stored volumes, or a gateway–virtual tape library (VTL), each of which can be mounted as an iSCSI device by your on-premises applications.

Volume Gateway supports iSCSI connections that enable storing of volume data in Amazon S3. With caching enabled, you can use Amazon S3 to hold your complete set of data while caching some portion of it locally for
on-premises, frequently accessed data. Gateway-cached volumes minimize the need to scale your on-premises storage infrastructure while still providing your applications with low-latency access to frequently accessed data. You can create storage volumes up to 32 TiB in size and mount them as iSCSI devices from your on-premises application servers. Each gateway configured for gateway-cached volumes can support up to 32 volumes and total volume storage of 1,024 TiB per gateway. Data written to these volumes is stored in Amazon S3, with only a cache of recently written and recently read data stored locally on your on-premises storage hardware.

Gateway-stored volumes store your locally sourced data in cache while asynchronously backing up data to AWS. These volumes provide your on-premises applications with low-latency access to their entire datasets, while providing durable off-site backups. You can create storage volumes up to 16 TiB in size and mount them as iSCSI devices from your on-premises application servers. Each gateway configured for gateway-stored volumes can support up to 32 volumes with a total volume storage of 512 TiB. Data written to your gateway-stored volumes is stored on your on-premises storage hardware and asynchronously backed up to Amazon S3 in the form of Amazon EBS snapshots.

A gateway-VTL allows you to perform offline data archiving by presenting your existing backup application with an iSCSI-based VTL consisting of a virtual media changer and virtual tape drives. You can create virtual tapes in your VTL by using the AWS Management Console, and you can size each virtual tape from 100 GiB to 5 TiB. A VTL can hold up to 1,500 virtual tapes, with a maximum aggregate capacity of 1 PiB.
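The per-gateway volume limits quoted above (cached: up to 32 volumes of 32 TiB each, 1,024 TiB total; stored: up to 32 volumes of 16 TiB each, 512 TiB total) lend themselves to a simple pre-flight check before you plan a deployment. A minimal sketch; the function and table names are our own, not part of any AWS API:

```python
# Per-gateway limits as stated in this section.
LIMITS = {
    "cached": {"max_volume_tib": 32, "max_volumes": 32, "max_total_tib": 1024},
    "stored": {"max_volume_tib": 16, "max_volumes": 32, "max_total_tib": 512},
}

def volumes_fit(gateway_type: str, volume_sizes_tib: list) -> bool:
    """True if the requested volumes fit on a single gateway of this type:
    volume count, per-volume size, and aggregate size must all be within
    the limits table above."""
    limits = LIMITS[gateway_type]
    return (len(volume_sizes_tib) <= limits["max_volumes"]
            and all(v <= limits["max_volume_tib"] for v in volume_sizes_tib)
            and sum(volume_sizes_tib) <= limits["max_total_tib"])

print(volumes_fit("cached", [32] * 32))  # True: exactly the 1,024 TiB cap
print(volumes_fit("stored", [16] * 32))  # True: exactly the 512 TiB cap
print(volumes_fit("stored", [32]))       # False: stored volumes cap at 16 TiB
```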
After the virtual tapes are created, your backup application can discover them using its standard media inventory procedure. Once created, tapes are available for immediate access and are stored in Amazon S3. Virtual tapes you need to access frequently should be stored in a VTL. Data that you don't need to retrieve frequently can be archived to your virtual tape shelf (VTS), which is stored in Amazon Glacier, further reducing your storage costs.

Organizations are using AWS Storage Gateway to support a number of use cases, including corporate file sharing, enabling existing on-premises backup applications to store primary backups on Amazon S3, disaster recovery, and mirroring data to cloud-based compute resources and then later archiving the data to Amazon Glacier.

With AWS Storage Gateway, you pay only for what you use. AWS Storage Gateway has the following pricing components: gateway usage (per gateway appliance per month) and data transfer out (per GB per month). Depending on the type of gateway appliance you use, there are also snapshot storage usage (per GB per month) and volume storage usage (per GB per month) for gateway-cached and gateway-stored volumes, and virtual tape shelf storage (per GB per month), virtual tape library storage (per GB per month), and retrieval from virtual tape shelf (per GB) for the gateway-VTL. For information about pricing, see AWS Storage Gateway pricing.

Amazon S3 Transfer Acceleration (S3 TA)

Amazon S3 Transfer Acceleration (S3 TA) enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. Transfer Acceleration leverages Amazon CloudFront's globally distributed AWS edge locations. As data arrives at an AWS edge location, data is routed to your Amazon S3 bucket over an optimized network path. Transfer Acceleration helps you fully
utilize your bandwidth, minimize the effect of distance on throughput, and ensure consistently fast data transfer to Amazon S3 regardless of your client's location. Acceleration primarily depends on your available bandwidth, the distance between the source and destination, and packet loss rates on the network path. Generally, you will see more acceleration when the source is farther from the destination, when there is more available bandwidth, and/or when the object size is bigger. You can use the online speed comparison tool to preview the performance benefit of uploading data from your location to Amazon S3 buckets in different AWS Regions using Transfer Acceleration.

Organizations use Transfer Acceleration on a bucket for a variety of reasons. For example, they have customers that upload to a centralized bucket from all over the world, they regularly transfer gigabytes to terabytes of data across continents, or they underutilize the available bandwidth over the Internet when uploading to Amazon S3. The best part about using Transfer Acceleration on a bucket is that the feature can be enabled by a single click in the Amazon S3 console; this makes the accelerate endpoint available to use in place of the regular Amazon S3 endpoint.

With Transfer Acceleration, you pay only for what you use and for transferring data over the accelerated endpoint. Transfer Acceleration has the following pricing components: data transfer in (per GB), data transfer out (per GB), and data transfer between Amazon S3 and another AWS Region (per GB). Transfer Acceleration pricing is in addition to data transfer (per GB per month) pricing for Amazon S3. For information about pricing, see Amazon S3 pricing.

Amazon Kinesis Data Firehose

Amazon Kinesis Data Firehose is the easiest way to load streaming data into AWS. The service can capture and automatically load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, or Splunk. Amazon Kinesis Data Firehose
is a fully managed service, making it easier to capture and load massive volumes of streaming data from hundreds of thousands of sources. The service can automatically scale to match the throughput of your data and requires no ongoing administration. Additionally, Amazon Kinesis Data Firehose can batch, compress, transform, and encrypt data before loading it. This process minimizes the amount of storage used at the destination and increases security.

You can use Data Firehose by creating a delivery stream and sending data to it. The streaming data originators are called data producers. A producer can be as simple as a PutRecord() or PutRecordBatch() API call, or you can build your producers using Kinesis Agent. You can send a record (before base64 encoding) as large as 1,000 KiB. Additionally, Firehose buffers incoming streaming data to a certain size, called the buffer size (1 MiB to 128 MiB), or for a certain period of time, called the buffer interval (60 to 900 seconds), before delivering it to destinations.

With Amazon Kinesis Data Firehose, you pay only for the volume of data you transmit through the service. Amazon Kinesis Data Firehose has a single pricing component: data ingested (per GiB), which is calculated as the number of data records you send to the service, times the size of each record rounded up to the nearest 5 KiB. There may be charges associated with PUT requests and storage on Amazon S3 and Amazon Redshift, and Amazon Elasticsearch instance hours, based on the destination you select for loading data. For information about pricing, see Amazon Kinesis Data Firehose pricing.

AWS Transfer Family

If you are looking to modernize your file transfer workflows for business processes that are heavily dependent on FTP, SFTP, and FTPS, the AWS Transfer Family service
provides fully managed file transfers in and out of Amazon S3 buckets and Amazon EFS shares. The AWS Transfer Family uses a highly available, multi-AZ architecture that automatically scales to add capacity based on your file transfer demand. This means no more FTP, SFTP, and FTPS servers to manage.

The AWS Transfer Family allows the authentication of users through multiple methods, including self-managed AWS Directory Service, on-premises Active Directory systems through AWS Managed Microsoft AD connectors, or custom identity providers. Custom identity providers may be configured through Amazon API Gateway, enabling custom configurations. DNS entries used by existing users, partners, and applications are maintained using Route 53 for minimal disruption and seamless migration. With your data residing in Amazon S3 or Amazon EFS, you can use other AWS services for analytics and data processing workflows.

There are many use cases that require a standards-based file transfer protocol like FTP, SFTP, or FTPS. AWS Transfer Family is a good fit for secure file sharing between an organization and third parties. Examples of data shared between organizations are large files such as audio/video media files, technical documents, research data, and EDI data such as purchase orders and invoices. Another use case is providing a central location where users can download and securely access your data globally. A third use case is facilitating data ingestion for a data lake: organizations and third parties can FTP, SFTP, or FTPS research, analytics, or business data into an Amazon S3 bucket, which can then be further processed and analyzed.

With the AWS Transfer Family, you only pay for the protocols you have enabled for access to your endpoint and the amount of data transferred over each of the protocols.
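The Transfer Family billing dimensions (an hourly fee for each enabled protocol endpoint plus a per-GB charge for data moved over it) can be sketched as a toy cost model. The rates below are placeholders for illustration only, not AWS prices; consult the AWS Transfer Family pricing page for real figures:

```python
# Placeholder rates -- NOT actual AWS prices.
HOURLY_RATE_PER_PROTOCOL = 0.30   # assumed USD per endpoint-hour, per protocol
PER_GB_RATE = 0.04                # assumed USD per GB uploaded/downloaded

def monthly_estimate(protocols_enabled: int, gb_transferred: float,
                     hours: int = 730) -> float:
    """Endpoint-hours for each enabled protocol, plus per-GB transfer,
    over a month of `hours` (730 by default)."""
    endpoint_cost = protocols_enabled * HOURLY_RATE_PER_PROTOCOL * hours
    transfer_cost = gb_transferred * PER_GB_RATE
    return round(endpoint_cost + transfer_cost, 2)

# One SFTP-only endpoint moving 500 GB in a 730-hour month:
print(monthly_estimate(1, 500))  # 219.0 endpoint + 20.0 transfer = 239.0
```

Note how the endpoint-hour term dominates for light workloads, which is why deleting an idle endpoint (rather than leaving it enabled) matters for cost.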
There are no upfront costs and no resources to manage yourself. You select the protocols, identity provider, and endpoint configuration to enable transfers over the chosen protocols. You are billed on an hourly basis for each of the protocols enabled to access your endpoint, until the time you delete it. You are also billed based on the amount of data (in gigabytes) uploaded and downloaded over each of the protocols. For more details on pricing per Region, see AWS Transfer Family pricing.

Third-Party Connectors

Many of the most popular third-party backup software packages, such as CommVault Simpana and Veritas NetBackup, include Amazon S3 connectors. This allows the backup software to point directly to the cloud as a target while still keeping the backup job catalog complete. Existing backup jobs can simply be rerouted to an Amazon S3 target bucket, and the incremental daily changes are passed over the Internet. Lifecycle management policies can move data from Amazon S3 into lower-cost storage tiers for archival status or deletion. Eventually, and invisibly, local tape and disk copies can be aged out of circulation, and tape and tape automation costs can be entirely removed. These connectors can be used alone, or with a gateway provided by AWS Storage Gateway, to back up to the cloud without affecting or re-architecting existing on-premises processes. Backup administrators will appreciate the integration into their daily console activities, and cloud architects will appreciate the behind-the-scenes job migration into Amazon S3.

Cloud Data Migration Use Cases

Use Case 1: One-Time Massive Data Migration

Figure 2: One-time massive data migration

In use case 1, a customer goes through the process of decommissioning a data center and moving the
entire workload to the cloud. First, all the current corporate data needs to be migrated. To complete this migration, AWS Snowball appliances are used to move the data from the customer's existing data center to an Amazon S3 bucket in the AWS Cloud.

1. The customer creates a new data transfer job in the AWS Snowball Management Console by providing the following information:
a. Choose Import into Amazon S3 to start creating the import job.
b. Enter the shipping address of the corporate data center and the shipping speed (one or two day).
c. Enter job details, such as the name of the job, the destination AWS Region, the destination Amazon S3 bucket to receive the imported data, and the Snowball Edge device type.
d. Enter security settings, indicating the IAM role Snowball assumes to import the data and the AWS KMS master key used to encrypt the data within Snowball.
e. Set Amazon Simple Notification Service (SNS) notification options and provide a list of comma-separated email addresses to receive email notifications for this job. Choose which job status values trigger notifications.
f. Download AWS OpsHub for Snow Family to manage your devices and their local AWS services. With AWS OpsHub you can unlock and configure single or clustered devices, transfer files, and launch and manage instances running on Snow Family devices.
2. After the job is created, AWS ships the Snowball appliances to the customer data center. In this example, the customer is importing 200 TB of data into Amazon S3, so they will need to create three import jobs of 80 TB Snowball Edge Storage Optimized capacity.
3. After receiving the Snowball appliance, the customer performs the following tasks:
a. The customer connects the powered-off appliance to their internal network and uses the supplied power cables to connect to a power outlet.
b. After the
Snowball is ready, the customer uses the E Ink display to choose the network settings and assign an IP address to the appliance.
4. The customer transfers the data to the Snowball appliance using the following steps:
a. Download the credentials, consisting of a manifest file and an unlock code, for the specific Snowball job from the AWS Snow Family Management Console.
b. Install the Snowball client on an on-premises machine to manage the flow of data from the on-premises data source to the Snowball.
c. Access the Snowball client using the terminal or command prompt on the workstation and type the following command:
snowballEdge unlock-device --endpoint https://[Snowball IP address] --manifest-file [Path/to/manifest/file] --unlock-code [29-character unlock code]
d. Begin transferring data onto the Snowball using the following tools:
i. Version 1.16.14 or earlier of the AWS CLI, using the s3 cp or s3 sync commands. Detailed installation and command syntax are found here.
ii. AWS OpsHub, which was installed in step 1f. Detailed commands and instructions on managing S3 storage can be found here.
5. After the data transfer is complete, disconnect the Snowball from your network and seal the Snowball. After being properly sealed, the return shipping label appears on the E Ink display. Arrange UPS pickup of the appliance for shipment back to AWS.
6. UPS automatically reports back a tracking number for the job to the AWS Snowball Management Console. The customer can access that tracking number, and a link to the UPS tracking website, by viewing the job's status details in the console.
7. After the appliance is received at the AWS Region, the job status changes from In transit to AWS to At AWS. On average, it takes a day for data import into Amazon S3 to begin. When the import starts, the status of the job changes to
Importing. From this point on, it takes an average of two business days for your import to reach Completed status. You can track status changes through the AWS Snowball Management Console or by Amazon SNS notifications.

Use Case 2: Continuous On-Premises Data Migration

Figure 3: Ongoing data migration from an on-premises storage solution

In use case 2, a customer has a hybrid cloud deployment, with data being used by both an on-premises environment and systems deployed in AWS. Additionally, the customer wants a dedicated connection to AWS that provides consistent network performance. As part of the ongoing data migration, AWS Direct Connect acts as the backbone, providing a dedicated connection that bypasses the Internet to connect to the AWS Cloud. Additionally, the customer deploys AWS Storage Gateway with gateway-cached volumes in the data center, which sends data to an Amazon S3 bucket in their target AWS Region. The following steps describe how to build this solution:

1. The customer creates an AWS Direct Connect connection between their corporate data center and the AWS Cloud.
a. To set up the connection using the Connection Wizard ordering type, the customer provides the following information using the AWS Direct Connect console:
i. Choose a resiliency level:
1. Maximum Resiliency (for critical workloads): You can achieve maximum resiliency for critical workloads by using separate connections that terminate on separate devices in more than one location. This topology provides resiliency against device, connectivity, and complete location failures.
2.
High Resiliency (for critical workloads): You can achieve high resiliency for critical workloads by using two independent connections to multiple locations. This topology provides resiliency against connectivity failures caused by a fiber cut or a device failure. It also helps prevent a complete location failure.
3. Development and Test (non-critical or test/dev workloads): You can achieve development and test resiliency for non-critical workloads by using separate connections that terminate on separate devices in one location. This topology provides resiliency against device failure, but does not provide resiliency against location failure.
ii. Enter connection settings:
1. Bandwidth: choose from 1 Gbps to 100 Gbps.
2. First location: the first physical location for your first Direct Connect connection.
3. First location service provider.
4. Second location: the second physical location for your second Direct Connect connection.
5. Second location service provider.
iii. Review and create menu: confirm your selections and choose Create.
b. After the customer creates a connection using the AWS Direct Connect console, AWS will send an email within 72 hours. The email will include a Letter of Authorization and Connecting Facility Assignment (LOA-CFA). After receiving the LOA-CFA, the customer forwards it to their network provider so they can order a cross connect for the customer. The customer is not able to order a cross connect for themselves in the AWS Direct Connect location if they do not already have equipment there; the network provider will have to do this for the customer.
c. After the physical connection is set up, the customer creates the virtual interfaces within AWS Direct Connect to connect to AWS public services, such as Amazon S3.
d. After creating virtual
interfaces, the customer runs the AWS Direct Connect failover test to make sure that traffic routes to alternate online virtual interfaces.
2. After the AWS Direct Connect connection is set up, the customer creates an Amazon S3 bucket into which the on-premises data can be backed up.
3. The customer deploys the AWS Storage Gateway in their existing data center using the following steps:
a. Deploy a new gateway using the AWS Storage Gateway console.
b. Select Volume Gateway, Cached volumes, for the type of gateway.
c. Download the gateway virtual machine (VM) image and deploy it on the on-premises virtualization environment.
d. Provision two local disks to be attached to the VM.
e. After the gateway VM is powered on, record the IP address of the machine, and then enter the IP address in the AWS Storage Gateway console to activate the gateway.
4. After the gateway is activated, the customer can configure the volume gateway in the AWS Storage Gateway console:
a. Configure the local storage by selecting one of the two local disks attached to the storage gateway VM to be used as the upload buffer and cache storage.
b. Create volumes on the Amazon S3 bucket.
5. The customer connects the Amazon S3 gateway volume as an iSCSI connection, through the storage gateway IP address, on a client machine.
6. After setup is completed and the customer applications write data to the storage volumes in AWS, the gateway at first stores the data on the on-premises disks (referred to as cache storage) before uploading the data to Amazon S3. The cache storage acts as the on-premises durable store for data that is waiting to upload to Amazon S3 from the upload buffer. The cache storage also lets the gateway store the customer application's recently accessed data on-premises for low-latency access. If an application
requests data, the gateway first checks the cache storage for the data before checking Amazon S3. To prepare for upload to Amazon S3, the gateway also stores incoming data in a staging area, referred to as an upload buffer. Storage Gateway uploads this buffer data over an encrypted Secure Sockets Layer (SSL) connection to AWS, where it is stored encrypted in Amazon S3.

Use Case 3: Continuous Streaming Data Ingestion

Figure 4: Continuous streaming data ingestion

In use case 3, the customer wants to ingest a social media feed continuously into Amazon S3. As part of the continuous data migration, the customer uses Amazon Kinesis Data Firehose to ingest data without having to provision a dedicated set of servers.

1. The customer creates an Amazon Kinesis Data Firehose delivery stream using the following steps in the Amazon Kinesis Data Firehose console:
a. Choose the delivery stream name.
b. Choose the Amazon S3 bucket, and choose the IAM role that grants Firehose access to the Amazon S3 bucket.
c. Firehose buffers incoming records before delivering the data to Amazon S3. The customer chooses a buffer size (1–128 MB) or a buffer interval (60–900 seconds). Whichever condition is satisfied first triggers the data delivery to Amazon S3.
d. The customer chooses from three compression formats (GZIP, ZIP, or SNAPPY), or no data compression.
e. The customer chooses whether or not to encrypt the data with a key from the list of AWS Key Management Service (AWS KMS) keys that they own.
2. The customer sends the streaming data to the Amazon Kinesis Data Firehose delivery stream by writing appropriate code using the AWS SDK.
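The buffering rule in step 1c (deliver on whichever threshold is hit first, the buffer size or the buffer interval) can be sketched as a single predicate. The function and parameter names below are our own illustration, not part of the Firehose API:

```python
def should_flush(buffered_mb: float, elapsed_s: float,
                 buffer_size_mb: float = 5, buffer_interval_s: float = 300) -> bool:
    """Mimic Firehose delivery triggering: flush the buffer to the
    destination as soon as EITHER the configured size threshold OR the
    configured time threshold is reached, whichever happens first."""
    return buffered_mb >= buffer_size_mb or elapsed_s >= buffer_interval_s

print(should_flush(5.2, 40))   # True: the size threshold was hit first
print(should_flush(1.0, 300))  # True: the interval expired first
print(should_flush(1.0, 120))  # False: neither condition has been met yet
```

The practical consequence is a latency/efficiency trade-off: a small buffer size or short interval delivers data sooner but produces more, smaller objects in Amazon S3.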
Conclusion

This whitepaper walked you through different AWS managed and self-managed storage migration options. Additionally, the paper covered different use cases, showing how multiple storage services can be used together to solve different migration needs.

Contributors

Contributors to this document include:
• Shruti Worlikar, Solutions Architect, Amazon Web Services
• Kevin Fernandez, Sr. Solutions Architect, Amazon Web Services
• Scott Wainner, Sr. Solutions Architect, Amazon Web Services

Further Reading

For additional information, see:
• AWS Direct Connect
• AWS Snow Family
• AWS Storage Gateway
• Amazon Kinesis Data Firehose
• Storage Partner Solutions

Document revisions

Date | Description
July 13, 2021 | Repaired broken links. Updated time/performance characteristics. Added decision tree. Added AWS Transfer Family. Updated with new AWS Snow Family services. Updated procedures in use cases.
May 2016 | First publication,General,consultant,Best Practices
An_Overview_of_the_AWS_Cloud_Adoption_Framework,AWS Cloud Adoption Framework (CAF) 3.0 Translated Whitepapers

Language | AWS Whitepaper Link
Arabic (عربي) | Whitepaper Link
Brazilian Portuguese (Português) | Whitepaper Link
Chinese Simplified (中文 (简体)) | Whitepaper Link
Chinese Traditional (中文 (繁體)) | Whitepaper Link
English | Whitepaper Link
Finnish (Suomalainen) | Whitepaper Link
French Canadian (Français Canadien) | Whitepaper Link
French France (Français) | Whitepaper Link
German (Deutsch) | Whitepaper Link
Hebrew (עברית) | Whitepaper Link
Indonesian (Bahasa Indonesia) | Whitepaper Link
Italian (Italiano) | Whitepaper Link
Japanese (日本語) | Whitepaper Link
Korean (한국어) | Whitepaper Link
Russian (Ρусский) | Whitepaper Link
Spanish (Español) | Whitepaper Link
Swedish (Svenska) | Whitepaper Link
Thai (ไทย) | Whitepaper Link
Turkish (Türkçe) | Whitepaper Link
Vietnamese (Tiếng Việt) | Whitepaper Link,General,consultant,Best Practices
Architecting_for_Genomic_Data_Security_and_Compliance_in_AWS,Archived

Architecting for Genomic Data Security and Compliance in AWS
Working with Controlled-Access Datasets from dbGaP, GWAS, and other Individual-Level Genomic Research Repositories
Angel Pizarro, Chris Whalley
December 2014

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Amazon Web Services – Architecting for Genomic Data Security and Compliance in AWS, December 2014

Table of Contents
Overview
Scope
Considerations for Genomic Data Privacy and Security in Human Research
AWS Approach to Shared Security Responsibilities
Architecting for Compliance with dbGaP Security Best Practices in AWS
Deployment Model
Data Location
Physical Server Access
Portable Storage Media
User Accounts, Passwords, and Access Control Lists
Internet Networking and Data Transfers
Data Encryption
File Systems and Storage Volumes
Operating Systems and Applications
Auditing, Logging, and Monitoring
Authorizing Access to Data
Cleaning Up Data and Retaining Results
Conclusion

Overview

Researchers who plan to work with genomic sequence data on Amazon Web Services (AWS) often have questions about security and compliance; specifically, about how to meet guidelines and best practices set by government and grant-funding agencies such as the National Institutes of Health. In this whitepaper, we review the current set of guidelines and discuss which services from AWS you can use to meet particular requirements, and how to go about evaluating those services.

Scope

This whitepaper focuses on common issues raised by Amazon Web Services (AWS) customers about security best practices for human
genomic data and controlled-access datasets, such as those from National Institutes of Health (NIH) repositories like the Database of Genotypes and Phenotypes (dbGaP) and genome-wide association studies (GWAS). Our intention is to provide you with helpful guidance that you can use to address common privacy and security requirements. However, we caution you not to rely on this whitepaper as legal advice for your specific use of AWS. We strongly encourage you to obtain appropriate compliance advice about your specific data privacy and security requirements, as well as applicable laws relevant to your human research projects and datasets.

Considerations for Genomic Data Privacy and Security in Human Research

Research involving individual-level genotype and phenotype data and de-identified controlled-access datasets continues to increase. The data has grown so fast in volume and utility that the availability of adequate data processing, storage, and security technologies has become a critical constraint on genomic research. The global research community is recognizing the practical benefits of the AWS cloud, and scientific investigators, institutional signing officials, IT directors, ethics committees, and data access committees must answer privacy and security questions as they evaluate the use of AWS in connection with individual-level genomic data and other controlled-access datasets. Some common questions include: Are data protected on secure servers? Where are data located? How is access to data controlled? Are data protections appropriate for the Data Use Certification?
These considerations are not new, and they are not cloud-specific. Whether data reside in an investigator's lab, an institutional network, an agency-hosted data repository, or within the AWS cloud, the essential considerations for human genomic data are the same: you must correctly implement data protection and security controls in the system, by first defining the system requirements and then architecting the system security controls to meet those requirements, particularly the shared responsibilities among the parties who use and maintain the system.

AWS Approach to Shared Security Responsibilities

AWS delivers a robust web services platform with features that enable research teams around the world to create and control their own private area in the AWS cloud, so they can quickly build, install, and use their data analysis applications and data stores without having to purchase or maintain the necessary hardware and facilities. As a researcher, you can create your private AWS environment yourself using a self-service signup process that establishes a unique AWS account ID, creates a root user account and account ID, and provides you with access to the AWS Management Console and Application Programming Interfaces (APIs), allowing control and management of the private AWS environment.

Because AWS does not access or manage your private AWS environment or the data in it, you retain responsibility and accountability for the configuration and security controls you implement in your AWS account. This customer accountability for your private AWS environment is fundamental to understanding the respective roles of AWS and our customers in the context of data protections and security practices for human genomic data. Figure 1 depicts the AWS Shared Responsibility Model.

Figure 1: Shared Responsibility Model

In order to deliver and maintain the features available within every customer's
private AWS environment, AWS works vigorously to enhance the security features of the platform and to ensure that feature delivery operations are secure and of high quality. AWS defines quality and security as confidentiality, integrity, and availability of our services, and AWS seeks to provide researchers with visibility into, and assurance of, our quality and security practices in four important ways.

First, AWS infrastructure is designed and managed in alignment with a set of internationally recognized security and quality accreditations, standards, and best practices, including industry standards such as ISO 27001, ISO 9001, and AT 801 and 101 (formerly SSAE 16), as well as government standards such as NIST FISMA and FedRAMP. Independent third parties perform accreditation assessments of AWS. These third parties are auditing experts in cloud computing environments, and each brings a unique perspective from their compliance backgrounds in a wide range of industries, including healthcare, life sciences, financial services, government and defense, and others. Because each accreditation carries a unique audit schedule, including continuous monitoring, AWS security and quality controls are constantly audited and improved for the benefit of all AWS customers, including those with dbGaP, HIPAA, and other health data protection requirements.

Second, AWS provides transparency by making these ISO, SOC, FedRAMP, and other compliance reports available to customers upon request. Customers can use these reports to evaluate AWS for their particular needs. You can request AWS compliance reports at https://aws.amazon.com/compliance/contact, and you can find more information on AWS compliance certifications, customer case studies, and alignment with best practices and standards at the AWS compliance website: http://aws.amazon.com/compliance/

Third, as a controlled US subsidiary of Amazon.com, Inc., Amazon Web Services, Inc.
participates in the Safe Harbor program developed by the US Department of Commerce, the European Union, and Switzerland. Amazon.com and its controlled US subsidiaries have certified that they adhere to the Safe Harbor Privacy Principles agreed upon by the US, the EU, and Switzerland, respectively. You can view the Safe Harbor certification for Amazon.com and its controlled US subsidiaries on the US Department of Commerce's Safe Harbor website. The Safe Harbor Principles require Amazon and its controlled US subsidiaries to take reasonable precautions to protect the personal information that our customers give us in order to create their account. This certification is an illustration of our dedication to security, privacy, and customer trust.

Lastly, AWS respects the right of our customers to have a choice in their use of the AWS platform. The AWS Account Management Console and Customer Agreement are designed to ensure that every customer can stop using the AWS platform and export all their data at any time and for any reason. This not only helps customers maintain control of their private AWS environment from creation to deletion, but it also ensures that AWS must continuously work to earn and keep the trust of our customers.

Architecting for Compliance with dbGaP Security Best Practices in AWS

A primary principle of the dbGaP security best practices is that researchers should download data to a secure computer or server, and not to unsecured network drives or servers.1 The remainder of the dbGaP security best practices can be broken into a set of three IT security control domains that you must address to ensure that you meet the primary principle:

1 http://www.ncbi.nlm.nih.gov/projects/gap/pdf/dbgap_2b_security_procedures.pdf

 Physical Security refers to both physical access to resources, whether they are located in a data center or in your desk drawer, and to
remote administrative access to the underlying computational resources.
 Electronic Security refers to the configuration and use of networks, servers, operating systems, and application-level resources that hold and analyze dbGaP data.
 Data Access Security refers to managing user authentication and authorization of access to the data, tracking and managing copies of the data, and having policies and processes in place to manage the data lifecycle.

Within each of these control domains are a number of control areas, which are summarized in Table 1.

Table 1: Summary of dbGaP Security Best Practices
 Physical Security: Deployment Model; Data Location; Physical Server Access; Portable Storage Media
 Electronic Security: User Accounts, Passwords, and Access Control Lists; Internet Networking and Data Transfers; Data Encryption; File Systems and Storage Volumes; Operating Systems and Applications; Auditing, Logging, and Monitoring
 Data Access Security: Authorizing Access to Data; Cleaning Up Data and Retaining Results

The remainder of this paper focuses on the control areas involved in architecting for security and compliance in AWS.

Deployment Model

A basic architectural consideration for dbGaP compliance in AWS is determining whether the system will run entirely on AWS or as a hybrid deployment with a mix of AWS and non-AWS resources. This paper focuses on the control areas for the AWS resources. If you are architecting for hybrid deployments, you must also account for your non-AWS resources, such as the local workstations you might use to download data to and from your AWS environment, any institutional or external networks you connect to your AWS environment, or any third-party applications you purchase and install in your AWS environment.

Data Location

The AWS cloud is a globally available platform in which you can choose the geographic region in which your data is located. AWS data centers are built in clusters in various global regions. AWS calls these data center clusters
Availability Zones (AZs). As of December 2014, AWS maintains 28 AZs organized into 11 regions globally. As an AWS customer, you can choose to use one region, all regions, or any combination of regions using built-in features available within the AWS Management Console. AWS Regions and Availability Zones ensure that if you have location-specific requirements or regional data privacy policies, you can establish and maintain your private AWS environment in the appropriate location. You can choose to replicate and back up content in more than one region, but you can be assured that AWS does not move customer data outside the region(s) you configure.

Physical Server Access

Unlike traditional laboratory or institutional server systems, where researchers install and control their applications and data directly on a specific physical server, the applications and data in a private AWS account are decoupled from any specific physical server. This decoupling occurs through the built-in features of the AWS Foundation Services layer (see Figure 1: Shared Responsibility Model) and is a key attribute that differentiates the AWS cloud from traditional server systems, or even traditional server virtualization. Practically, this means that every resource (virtual servers, firewalls, databases, genomic data, etc.) within your private AWS environment is reduced to a single set of software files that are orchestrated by the Foundation Services layer across multiple physical servers. Even if a physical server fails, your private AWS resources and data maintain confidentiality, integrity, and availability. This attribute of the AWS cloud also adds a significant measure of security, because even if someone were to gain access to a single physical server, they would not have access to all the files needed to recreate the genomic data within your private AWS account. AWS owns and operates its physical
servers and network hardware in highly secure, state-of-the-art data centers that are included in the scope of independent third-party security assessments of AWS for ISO 27001, Service Organization Controls 2 (SOC 2), NIST's federal information system security standards, and other security accreditations. Physical access to AWS data centers and hardware is based on the least-privilege principle, and access is authorized only for essential personnel who have experience in cloud computing operating environments and who are required to maintain the physical environment. When individuals are authorized to access a data center, they are not given logical access to the servers within the data center. When anyone with data center access no longer has a legitimate need for it, access is immediately revoked, even if they remain an employee of Amazon or Amazon Web Services. Physical entry into AWS data centers is controlled at the building perimeter and ingress points by professional security staff who use video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to enter data center floors, and all physical access to AWS data centers is logged, monitored, and audited routinely.

Portable Storage Media

The decision to run entirely on AWS or in a hybrid deployment model has an impact on your system security plans for portable storage media. Whenever data are downloaded to a portable device, such as a laptop or smartphone, the data should be encrypted and hard-copy printouts controlled. When genomic data are stored or processed in AWS, customers can encrypt their data, but there is no portable storage media to consider, because all AWS customer data resides on controlled storage media covered under AWS's accredited security practices. When controlled storage media reach the end of their useful
life, AWS procedures include a decommissioning and media sanitization process that is designed to prevent customer data from being exposed to unauthorized individuals. AWS uses the techniques detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual") or NIST 800-88 ("Guidelines for Media Sanitization") to destroy data as part of the decommissioning process. All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices. For more information, see Overview of Security Processes.2

User Accounts, Passwords, and Access Control Lists

Managing user access under dbGaP requirements relies on a principle of least privilege to ensure that individuals and/or processes are granted only the rights and permissions needed to perform their assigned tasks and functions, but no more.3 When you use AWS, there are two types of user accounts that you must address:

 Accounts with direct access to AWS resources, and
 Accounts at the operating system or application level.

Managing user accounts with direct access to AWS resources is centralized in a service called AWS Identity and Access Management (IAM). After you establish your root AWS account using the self-service signup process, you can use IAM to create and manage additional users and groups within your private AWS environment. In adherence to the least-privilege principle, new users and groups have no permissions by default until you associate them with an IAM policy. IAM policies allow access to AWS resources and support fine-grained permissions, allowing operation-specific access to AWS resources. For example, you can define an IAM policy that restricts an Amazon S3 bucket to read-only access by specific IAM users coming from specific IP addresses. In addition to the users you define within your private AWS environment, you can define IAM roles to grant temporary credentials for use by externally authenticated users or applications running on Amazon EC2 servers.
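The read-only, IP-restricted example above can be sketched as a standard IAM policy document, built here as a plain Python dictionary. The bucket name and CIDR range are hypothetical placeholders, and the trailing comment shows only one plausible way such a document might be attached with the boto3 SDK:

```python
import json

# Sketch: IAM policy granting read-only access to a single S3 bucket,
# and only from a specific IP range. The bucket name and CIDR range
# are hypothetical placeholders.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ReadOnlyFromLabNetwork",
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],  # read-only operations
        "Resource": [
            "arn:aws:s3:::example-dbgap-bucket",
            "arn:aws:s3:::example-dbgap-bucket/*",
        ],
        # The Allow is honored only for requests from this address range
        "Condition": {"IpAddress": {"aws:SourceIp": "192.0.2.0/24"}},
    }],
}

# With boto3, the document could be attached to a user, e.g.:
#   iam_client.put_user_policy(UserName="researcher",
#       PolicyName="dbgap-read-only",
#       PolicyDocument=json.dumps(read_only_policy))
```

Because new IAM users start with no permissions, this single Allow statement defines the entirety of what the user can do against the bucket.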
Within IAM, you can assign users individual credentials such as passwords or access keys. Multi-factor authentication (MFA) provides an extra level of user account security by prompting users to enter an additional authentication code each time they log in to AWS. dbGaP also requires that users not share their passwords, and recommends that researchers communicate a written password policy to any users with permissions to controlled-access data. Additionally, dbGaP recommends certain password complexity rules for file access. IAM provides robust features to manage password complexity, reuse, and reset rules.

How you manage user accounts at the operating system or application level depends largely on which operating systems and applications you choose. For example, applications developed specifically for the AWS cloud might leverage IAM users and groups, whereas you'll need to assess and plan the compatibility of third-party applications and operating systems with IAM on a case-by-case basis. You should always configure password-enabled screen savers on any local workstations that you use to access your private AWS environment, and configure virtual server instances within the AWS cloud environment with OS-level password-enabled screen savers to provide an additional layer of protection.

2 http://media.amazonwebservices.com/pdf/AWS_Security_Whitepaper.pdf
3 http://www.ncbi.nlm.nih.gov/projects/gap/pdf/dbgap_2b_security_procedures.pdf

More information on IAM is available in the IAM documentation and the IAM Best Practices guide, as well as on the Multi-Factor Authentication page.

Internet Networking and Data Transfers

The AWS cloud is a set of web services delivered over the Internet, but data within each customer's private AWS account is not exposed directly to the Internet unless you specifically configure your security features to allow it. This is a critical element of compliance with the dbGaP
security best practices, and the AWS cloud has a number of built-in features that prevent direct Internet exposure of genomic data.

Processing genomic data in AWS typically involves Amazon Elastic Compute Cloud (Amazon EC2). Amazon EC2 is a service you can use to create virtual server instances that run operating systems like Linux and Microsoft Windows. When you create new Amazon EC2 instances for downloading and processing genomic data, by default those instances are accessible only by authorized users within the private AWS account. The instances are not discoverable or directly accessible on the Internet unless you configure them otherwise. Additionally, genomic data within an Amazon EC2 instance resides in the operating system's file directory, which requires that you set OS-specific configurations before any data can be accessible outside of the instance. When you need clusters of Amazon EC2 instances to process large volumes of data, a Hadoop framework service called Amazon Elastic MapReduce (Amazon EMR) allows you to create multiple identical Amazon EC2 instances that follow the same basic rule of least privilege, unless you change the configuration otherwise.

Storing genomic data in AWS typically involves object stores and file systems like Amazon Simple Storage Service (Amazon S3) and Amazon Elastic Block Store (Amazon EBS), as well as database stores like Amazon Relational Database Service (Amazon RDS), Amazon Redshift, Amazon DynamoDB, and Amazon ElastiCache. Like Amazon EC2, all of these storage and database services default to least-privilege access and are not discoverable or directly accessible from the Internet unless you configure them to be so.

Individual compute instances and storage volumes are the basic building blocks that researchers use to architect and build genomic data processing systems in AWS. Individually, these building blocks are private by default, and networking them together within the AWS environment can provide additional layers of security and data protection.
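As a concrete illustration of these private-by-default building blocks, the shape of an EC2 RunInstances request that launches an instance into a private VPC subnet with no public IP address can be sketched as a plain dictionary. The AMI, subnet, and security group IDs are hypothetical placeholders, and the dictionary mirrors the parameter shape of the boto3 `run_instances` call rather than being tied to any SDK:

```python
# Sketch: request parameters for launching an EC2 instance into a
# private subnet so that it is not reachable from the Internet.
# The AMI, subnet, and security group IDs are placeholders.
run_instances_params = {
    "ImageId": "ami-EXAMPLE",           # hardened base image (placeholder)
    "InstanceType": "m3.xlarge",
    "MinCount": 1,
    "MaxCount": 1,
    "NetworkInterfaces": [{
        "DeviceIndex": 0,
        "SubnetId": "subnet-EXAMPLE",       # private VPC subnet (placeholder)
        "AssociatePublicIpAddress": False,  # no public IP assigned
        "Groups": ["sg-EXAMPLE"],           # least-privilege security group
    }],
}

# With boto3 this would be passed as:
#   ec2_client.run_instances(**run_instances_params)
```

An instance launched this way is reachable only through whatever connectivity you deliberately add, such as a VPN gateway or bastion host, which is the posture the following paragraphs build on.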
Using Amazon Virtual Private Cloud (Amazon VPC), you can create private, isolated networks within the AWS cloud where you retain complete control over the virtual network environment, including definition of the IP address range, creation of subnets, and configuration of network route tables and network gateways. Amazon VPC also offers stateless firewall capabilities through the use of Network Access Control Lists (NACLs) that control the source and destination network traffic endpoints and ports, giving you robust security controls that are independent of the computational resources launched within Amazon VPC subnets.

In addition to the stateless firewalling capabilities of Amazon VPC NACLs, Amazon EC2 instances and some services are launched within the context of AWS security groups. Security groups define network-level, stateful firewall rules to protect computational resources at the Amazon EC2 instance or service layer. Using security groups, you can lock down compute, storage, or application services to strict subsets of resources running within an Amazon VPC subnet, adhering to the principle of least privilege.

Figure 2: Protecting data from direct Internet access using Amazon VPC

In addition to networking and securing the virtual infrastructure within the AWS cloud, Amazon VPC provides several options for connecting to your AWS resources. The first and simplest option is providing secure public endpoints to access resources, such as SSH bastion servers. A second option is to create a secure Virtual Private Network (VPN) connection that uses Internet Protocol Security (IPsec) by defining a virtual private gateway into the Amazon VPC. You can use the connection to establish encrypted network connectivity over the Internet between an Amazon VPC and your institutional network. Lastly, research institutions can establish a dedicated, private network connection to AWS using AWS Direct Connect.
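The stateful security-group rules described above can be sketched as the request parameters for EC2's AuthorizeSecurityGroupIngress API, again as a plain dictionary. The group ID and institutional CIDR range are hypothetical placeholders:

```python
# Sketch: security-group ingress rule that allows SSH only from an
# institutional network range. The group ID and CIDR are placeholders.
ingress_params = {
    "GroupId": "sg-EXAMPLE",
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 22,   # SSH only
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "192.0.2.0/24"}],  # institutional network
    }],
}

# With boto3:
#   ec2_client.authorize_security_group_ingress(**ingress_params)
# Security groups are default-deny: any traffic not matched by an
# explicit ingress rule is dropped, enforcing least privilege.
```

Because the group starts with no inbound rules, this one rule is the complete inbound surface of every instance launched into the group.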
AWS Direct Connect lets you establish a dedicated, high-bandwidth (1 Gbps to 10 Gbps) network connection between your network and one of the AWS Direct Connect locations. Using industry-standard 802.1Q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces, allowing you to use the same connection to access public resources, such as objects stored in Amazon S3 using public IP address space, and private resources, such as Amazon EC2 instances running within an Amazon Virtual Private Cloud (Amazon VPC) using private IP space, while maintaining network separation between the public and private environments. You can reconfigure virtual interfaces at any time to meet your changing needs.

Figure 2 callouts: (1) dbGaP data in an Amazon S3 bucket, accessible only by the Amazon EC2 instance within the VPC security group. (2) Amazon EC2 instance hosting the Aspera Connect download software, running within a VPC security group. (3) Amazon VPC network configured with a private subnet, requiring an SSH client, VPN gateway, or other encrypted connection.

Using a combination of hosted and self-managed services, you can take advantage of secure, robust networking services within a VPC and secure connectivity with another trusted network. To learn more about the finer details, see our Amazon VPC whitepaper, the Amazon VPC documentation, and the Amazon VPC Connectivity Options whitepaper.

Data Encryption

Encrypting data in transit and at rest is one of the most common methods of securing controlled-access datasets. As an Internet-based service provider, AWS understands that many institutional IT security policies consider the Internet to be an insecure communications medium, and consequently AWS has invested considerable effort in the security and encryption features you need in order to use the
AWS cloud platform for highly sensitive data, including protected health information under HIPAA and controlled-access genomic datasets from the National Institutes of Health (NIH).

AWS uses encryption in three areas:

 Service management traffic
 Data within AWS services
 Hardware security modules

As an AWS customer, you use the AWS Management Console to manage and configure your private environment. Each time you use the AWS Management Console, an SSL/TLS4 connection is made between your web browser and the console endpoints. Service management traffic is encrypted, data integrity is authenticated, and the client browser authenticates the identity of the console service endpoint using an X.509 certificate. After this encrypted connection is established, all subsequent HTTP traffic, including data in transit over the Internet, is protected within the SSL/TLS session. Each AWS service is also enabled with application programming interfaces (APIs) that you can use to manage services either directly from applications or third-party tools, via Software Development Kits (SDKs), or via the AWS command line tools. AWS APIs are web services over HTTPS and protect commands within an SSL/TLS-encrypted session.

Within AWS, there are several options for encrypting genomic data, ranging from completely automated AWS encryption solutions (server-side) to manual client-side options. Your decision to use a particular encryption model may be based on a variety of factors, including the AWS service(s) being used, your institutional policies, your technical capability, specific requirements of the data use certification, and other factors. As you architect your systems for controlled-access datasets, it's important to identify each AWS service and encryption model you will use with the genomic data. There are three different models for how you and/or AWS provide the encryption method and work with the key management infrastructure (KMI), as illustrated in Figure 3.

4 Secure Sockets Layer (SSL)/Transport Layer
Security (TLS)

Figure 3: Encryption Models in AWS
 Model A (customer managed): The researcher manages the encryption method and the entire KMI.
 Model B (customer managed, AWS-assisted): The researcher manages the encryption method; AWS provides the storage component of the KMI, while the researcher provides the management layer of the KMI.
 Model C (AWS managed): AWS manages the encryption method and the entire KMI.
In each model the components are the same — an encryption method and a KMI consisting of key storage and key management — and the models differ only in which party operates each component.

In addition to the client-side and server-side encryption features built into many AWS services, another common way to protect keys in a KMI is to use a dedicated storage and data processing device that performs cryptographic operations using keys on the device. These devices, called hardware security modules (HSMs), typically provide tamper evidence or resistance to protect keys from unauthorized use. For researchers who choose to use AWS encryption capabilities for controlled-access datasets, the AWS CloudHSM service is another encryption option within your AWS environment, giving you use of HSMs that are designed and validated to government standards (NIST FIPS 140-2) for secure key management.

If you want to manage the keys that control encryption of data in Amazon S3 and Amazon EBS volumes, but don't want to manage the needed KMI resources either within or external to AWS, you can leverage the AWS Key Management Service (AWS KMS). AWS Key Management Service is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and it uses HSMs to protect the security of your keys. AWS Key Management Service is integrated with other AWS services, including Amazon EBS, Amazon S3, and Amazon Redshift. AWS Key Management Service is also integrated with AWS CloudTrail, discussed later, to provide you with logs of all key usage to help meet your regulatory and compliance needs. AWS KMS also allows you to implement key creation, rotation, and usage policies. AWS KMS is designed so that no one has access to your master keys.
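Under Model C, where AWS manages both the encryption method and the KMI, server-side encryption reduces to a flag on the storage request. A minimal sketch of the parameters for an S3 PutObject request using server-side encryption with an AWS KMS key follows; the bucket name, object key, and KMS key ARN are hypothetical placeholders:

```python
# Sketch: S3 PutObject request parameters using server-side encryption
# with an AWS KMS master key (encryption Model C). The bucket name,
# object key, and KMS key ARN are hypothetical placeholders.
put_object_params = {
    "Bucket": "example-dbgap-bucket",
    "Key": "genomes/sample-001.bam",
    "Body": b"...genomic data bytes...",
    "ServerSideEncryption": "aws:kms",  # AWS manages method and KMI
    "SSEKMSKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
}

# With boto3: s3_client.put_object(**put_object_params)
# Omitting SSEKMSKeyId would fall back to the account's default
# S3 master key managed by AWS KMS.
```

Using "AES256" instead of "aws:kms" for ServerSideEncryption selects S3-managed keys, a Model C variant without the per-key auditing that KMS provides through CloudTrail.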
The service is built on systems that are designed to protect your master keys with extensive hardening techniques, such as never storing plaintext master keys on disk, not persisting them in memory, and limiting which systems can connect to the device. All access to update software on the service is controlled by a multi-level approval process that is audited and reviewed by an independent group within Amazon.

As mentioned in the Internet Networking and Data Transfers section of this paper, you can protect data transfers to and from your AWS environment to an external network with a number of encryption-ready security features, such as VPN. For more information about encryption options within the AWS environment, see Securing Data at Rest with Encryption, as well as the AWS CloudHSM product details page. To learn more about how AWS KMS works, you can read the AWS Key Management Service whitepaper.5

File Systems and Storage Volumes

Analyzing and securing large datasets like whole genome sequences requires a variety of storage capabilities that allow you to make use of that data. Within your private AWS account, you can configure your storage services and security features to limit access to authorized users. Additionally, when research collaborators are authorized to access the data, you can configure your access controls to safely share data between your private AWS account and your collaborator's private AWS account.

When saving and securing data within your private AWS account, you have several options. Amazon Web Services offers two flexible and powerful storage options. The first is Amazon Simple Storage Service (Amazon S3), a highly scalable, web-based object store. Amazon S3 provides HTTP/HTTPS
REST endpoints to upload and download data objects in an Amazon S3 bucket. Individual Amazon S3 objects can range from 1 byte to 5 terabytes. Amazon S3 is designed for 99.99% availability and 99.999999999% object durability; thus, Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary data storage. The service redundantly stores data in multiple data centers within the Region you designate, and Amazon S3 calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data. Unlike traditional systems, which can require laborious data verification and manual repair, Amazon S3 performs regular, systematic data integrity checks and is built to be automatically self-healing.

Amazon S3 provides a base level of security whereby, by default, only bucket and object owners have access to the Amazon S3 resources they create. In addition, you can write security policies to further restrict access to Amazon S3 objects. For example, dbGaP recommendations call for all data to be encrypted while the data are in flight. With an Amazon S3 bucket policy, you can restrict an Amazon S3 bucket so that it only accepts requests using the secure HTTPS protocol, which fulfills this requirement. Amazon S3 bucket policies are best utilized to define broad permissions across sets of objects within a single bucket. The previous examples for restricting the allowed protocols or source IP ranges are indicative of best practices. For data that need more variable permissions based on who is trying to access the data, IAM user policies are more appropriate. As discussed previously, IAM enables organizations with multiple employees to create and manage multiple users under a single AWS account. With IAM user policies, you can grant these IAM users fine-grained control over your Amazon S3 bucket or the data objects contained within it.

Amazon S3 is a great tool for genomics analysis and is well suited for analytical applications that are purpose-built for the cloud.
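The HTTPS-only restriction described above can be sketched as a standard S3 bucket policy that denies any request arriving over plain HTTP. The bucket name is a hypothetical placeholder:

```python
import json

# Sketch: S3 bucket policy denying all requests that do not use HTTPS,
# satisfying the dbGaP in-flight encryption recommendation.
# The bucket name is a hypothetical placeholder.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-dbgap-bucket",
            "arn:aws:s3:::example-dbgap-bucket/*",
        ],
        # aws:SecureTransport evaluates to "false" for plain-HTTP requests
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

# With boto3: s3_client.put_bucket_policy(
#     Bucket="example-dbgap-bucket", Policy=json.dumps(bucket_policy))
```

An explicit Deny like this overrides any Allow granted elsewhere, so even a user with full S3 permissions cannot fetch objects over an unencrypted connection.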
However, many legacy genomic algorithms and applications cannot work directly with files stored in an HTTP-based object store like Amazon S3, but instead need a traditional file system. In contrast to the Amazon S3 object-based storage approach, Amazon Elastic Block Store (Amazon EBS) provides network-attached storage volumes that can be formatted with traditional file systems. This means that a legacy application running in an Amazon EC2 instance can access genomic data in an Amazon EBS volume as if that data were stored locally in the Amazon EC2 instance.

5 https://d0.awsstatic.com/whitepapers/KMS-Cryptographic-Details.pdf

Additionally, Amazon EBS offers whole-volume encryption without the need for you to build, maintain, and secure your own key management infrastructure. When you create an encrypted Amazon EBS volume and attach it to a supported instance type, data stored at rest on the volume, disk I/O, and snapshots created from the volume are all encrypted. The encryption occurs on the servers that host Amazon EC2 instances, providing encryption of data in transit from Amazon EC2 instances to Amazon EBS storage.

Amazon EBS encryption uses AWS Key Management Service (AWS KMS) Customer Master Keys (CMKs) when creating encrypted volumes and any snapshots created from your encrypted volumes. The first time you create an encrypted Amazon EBS volume in a region, a default CMK is created for you automatically. This key is used for Amazon EBS encryption unless you select a CMK that you created separately using AWS Key Management Service. Creating your own CMK gives you more flexibility, including the ability to create, rotate, and disable keys, define access controls, and audit the encryption keys used to protect your data. For more information, see the AWS Key Management Service Developer Guide.

There are three options for Amazon EBS volumes:

 Magnetic volumes are backed by magnetic
drives and are ideal for workloads where data are accessed infrequently and for scenarios where the lowest storage cost is important.
 General Purpose (SSD) volumes are backed by solid-state drives (SSDs) and are suitable for a broad range of workloads, including small to medium-sized databases, development and test environments, and boot volumes.
 Provisioned IOPS (SSD) volumes are also backed by SSDs and are designed for applications with I/O-intensive workloads, such as databases. Provisioned IOPS volumes offer storage with consistent, low-latency performance and support up to 30 IOPS per GB, which enables you to provision 4,000 IOPS on a volume as small as 134 GB. You can also achieve up to 128 MB/s of throughput per volume with as little as 500 provisioned IOPS. Additionally, you can stripe multiple volumes together to achieve up to 48,000 IOPS or 800 MB/s when attached to larger Amazon EC2 instances.

While general-purpose Amazon EBS volumes represent a great value in terms of performance and cost, and can support a diverse set of genomics applications, you should choose which Amazon EBS volume type to use based on the particular algorithm you're going to run. A benefit of scalable, on-demand infrastructure is that you can provision a diverse set of resources, each tuned to a particular workload.

For more information on the security features available in Amazon S3, see the Access Control and Using Data Encryption topics in the Amazon S3 Developer Guide. For an overview of security on AWS, including Amazon S3, see Amazon Web Services: Overview of Security Processes. For more information about Amazon EBS security features, see Amazon EBS Encryption and Amazon Elastic Block Store (Amazon EBS).

Operating Systems and Applications

Recipients of controlled-access data need their operating systems and applications to follow predefined configuration standards. Operating systems should align with standards such as
NIST 800-53, the dbGaP Security Best Practices Appendix A, or other regionally accepted criteria. Software should also be configured according to application-specific best practices, and OS and software patches should be kept up to date.

When you run operating systems and applications in AWS, you are responsible for configuring and maintaining your operating systems and applications, as well as the feature configurations in the associated AWS services, such as Amazon EC2 and Amazon S3. As a concrete example, imagine that a security vulnerability in a standard SSL/TLS shared library is discovered. In this scenario, AWS will review and remediate the vulnerability in the foundation services (see Figure 1), and you will review and remediate the operating systems and applications, as well as any service configuration updates needed for hybrid deployments. You must also take care to properly configure the OS and applications to restrict remote access to the instances and applications. Examples include locking down security groups to only allow SSH or RDP from certain IP ranges, ensuring strong password or other authentication policies, and restricting user administrative rights on OS and applications.

Auditing, Logging, and Monitoring

Researchers who manage controlled-access data are required to report any inadvertent data release, breach of data security, or other data management incidents contrary to the terms of data access, in accordance with the terms in the Data Use Certification. The dbGaP security recommendations recommend use of security auditing and intrusion detection software that regularly scans for and detects potential data intrusions. Within the AWS ecosystem, you have the option to use built-in monitoring tools, such as Amazon CloudWatch, as well as a rich partner ecosystem of security and monitoring software specifically built for AWS cloud services. The AWS Partner Network lists a variety of system integrators and software vendors that can help you meet security and
compliance requirements. For more information, see the AWS Life Science Partner webpage.6 Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, and set alarms. Amazon CloudWatch provides performance metrics at the individual resource level, such as Amazon EC2 instance CPU load and network I/O, and lets you set thresholds on these metrics to raise alarms when a threshold is crossed. For example, you can set an alarm to detect unusual spikes in network traffic from an Amazon EC2 instance that may be an indication of a compromised system. CloudWatch alarms can integrate with other AWS services to send alerts simultaneously to multiple destinations. Example methods and destinations might include a message queue in Amazon Simple Queue Service (Amazon SQS), which is continuously monitored by watchdog processes that automatically quarantine a system; a mobile text message to security and operations staff who need to react to immediate threats; or an email to security and compliance teams who audit the event and take action as needed. Within Amazon CloudWatch you can also define custom metrics and populate these with whatever information is useful, even outside of a security and compliance requirement. For instance, an Amazon CloudWatch metric can monitor the size of a data ingest queue to trigger the scaling up (or down) of computational resources that process data, to handle variable rates of data acquisition. 6 http://aws.amazon.com/partners/competencies/lifesciences/ AWS CloudTrail and AWS Config are two services that enable you to monitor and audit all of the operations against the AWS product APIs. AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information
includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. With AWS CloudTrail you can get a history of AWS API calls for your account, including API calls made via the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS CloudFormation). The AWS API call history produced by AWS CloudTrail enables security analysis, resource change tracking, and compliance auditing. AWS Config builds upon the functionality of AWS CloudTrail and provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. With AWS Config you can discover existing AWS resources, export a complete inventory of your AWS resources with all configuration details, and determine how a resource was configured at any point in time. These capabilities enable compliance auditing, security analysis, resource change tracking, and troubleshooting. Lastly, AWS has implemented various methods of external communication to support all customers in the event of security or operational issues that may impact them. Mechanisms are in place to allow the customer support team to be notified of operational and security issues that impact each customer's account. The AWS incident management team employs industry-standard diagnostic procedures to drive resolution during business-impacting events within the AWS cloud platform. The operational systems that support the platform are extensively instrumented to monitor key operational metrics, and alarms are configured to automatically notify operations and management personnel when early warning thresholds are crossed on those key metrics. Staff operators provide 24x7x365 coverage to detect incidents and to manage their impact and resolution. An on-call schedule is used so that personnel are always available to respond to operational issues.
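The CloudTrail record fields described above (caller identity, event time, source IP address, request parameters, and response elements) can be pulled out of a delivered log file with a few lines of code. The sketch below parses a minimal, hypothetical record shaped like CloudTrail's documented JSON format; field names such as `userIdentity` and `sourceIPAddress` follow the CloudTrail log schema, but the values are invented for illustration.

```python
import json

# A minimal, hypothetical CloudTrail-style record. Real log files hold a
# top-level {"Records": [...]} array containing many events like this one.
SAMPLE_LOG = json.dumps({
    "Records": [{
        "eventTime": "2014-12-01T12:00:00Z",
        "eventName": "RunInstances",
        "eventSource": "ec2.amazonaws.com",
        "sourceIPAddress": "203.0.113.10",
        "userIdentity": {"type": "IAMUser", "userName": "researcher-1"},
        "requestParameters": {"instanceType": "m3.large"},
        "responseElements": {"instancesSet": {"items": [{"instanceId": "i-abc123"}]}},
    }]
})

def summarize(log_text):
    """Return (caller, event time, source IP, API name) for each record."""
    records = json.loads(log_text)["Records"]
    return [
        (
            r.get("userIdentity", {}).get("userName", "unknown"),
            r["eventTime"],
            r["sourceIPAddress"],
            r["eventName"],
        )
        for r in records
    ]

for caller, when, ip, api in summarize(SAMPLE_LOG):
    print(f"{when} {caller} called {api} from {ip}")
```

A summary like this can feed a compliance report or a change-tracking audit trail for a controlled-access project.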
Authorizing Access to Data Researchers using AWS in connection with controlled-access datasets must only allow authorized users to access the data. Authorization is typically obtained either by approval from the Data Access Committee (DAC) or within the terms of the researcher's existing Data Use Certification (DUC). Once access is authorized, you can grant that access in one or more ways, depending on where the data reside and where the collaborator requiring access is located. The scenarios below cover the situations that typically arise: • Provide the collaborator access within an AWS account via an IAM user (see User Accounts, Passwords, and Access Control Lists) • Provide the collaborator access to their own AWS accounts (see File Systems, Storage Volumes, and Databases) • Open access to the AWS environment to an external network (see Internet, Networking, and Data Transfers) Cleaning Up Data and Retaining Results Controlled-access datasets for closed research projects should be deleted upon project close-out, and only encrypted copies of the minimum data needed to comply with institutional policies should be retained. In AWS, deletion and retention operations on data are under the complete control of a researcher. You might opt to replicate archived data to one or more AWS Regions for disaster recovery or high-availability purposes, but you are in complete control of that process. As it is for on-premises infrastructure, data provenance7 is the sole responsibility of the researcher. Through a combination of data encryption and other standard operating procedures, such as resource monitoring and security audits, you can comply with dbGaP security recommendations in AWS. With respect to AWS storage services, after Amazon S3 data objects or Amazon EBS volumes are deleted, removal of the mapping from the public name to the object starts immediately and is generally
processed across the distributed system within several seconds. After the mapping is removed, there is no remote access to the deleted object. The underlying storage area is then reclaimed for use by the system. Conclusion The AWS cloud platform provides a number of important benefits and advantages to genomic researchers and enables them to satisfy the NIH security best practices for controlled-access datasets. While AWS delivers these benefits and advantages through our services and features, researchers are still responsible for properly building, using, and maintaining the private AWS environment to help ensure the confidentiality, integrity, and availability of the controlled-access datasets they manage. Using the practices in this whitepaper, we encourage you to build a set of security policies and processes for your organization so you can deploy applications using controlled-access data quickly and securely. Notices © 2014 Amazon Web Services, Inc. or its affiliates. All rights reserved. This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions or assurances from AWS, its affiliates, suppliers or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. 7 The process of tracing and recording the origins of data and its movement between databases,General,consultant,Best Practices Architecting_for_HIPAASecurity_and_Compliance_on_AWS,"Architecting for HIPAA
Security and Compliance on Amazon Web Services AWS Whitepaper This version has been archived. For the latest version of this document, refer to https://docs.aws.amazon.com/whitepapers/latest/architecting-hipaa-security-and-compliance-on-aws/welcome.html Architecting for HIPAA Security and Compliance on Amazon Web Services: AWS Whitepaper Copyright © Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon. Table of Contents Abstract 1 Introduction 2 Encryption and protection of PHI in AWS 3 Alexa for Business 6 Amazon API Gateway 6 Amazon AppFlow 7 Amazon AppStream 2.0 7 Amazon Athena 7 Amazon Aurora 8 Amazon Aurora PostgreSQL 8 Amazon CloudFront 8 Lambda@Edge 8 Amazon CloudWatch 9 Amazon CloudWatch Events 9 Amazon CloudWatch Logs 9 Amazon Comprehend 9 Amazon Comprehend Medical 9 Amazon Connect 9 Amazon DocumentDB (with MongoDB compatibility) 10 Amazon DynamoDB 10 Amazon Elastic Block Store 10 Amazon EC2 11 Amazon Elastic Container Registry 11 Amazon ECS 11 Amazon EFS 12 Amazon EKS 12 Amazon ElastiCache for Redis 12 Encryption at Rest 13 Transport Encryption 13 Authentication 13 Applying ElastiCache Service Updates 14 Amazon OpenSearch Service 14 Amazon EMR 14 Amazon EventBridge 14 Amazon Forecast 15 Amazon FSx 15 Amazon GuardDuty 16 Amazon HealthLake 16 Amazon Inspector 16 Amazon Kinesis Data Analytics 16 Amazon Kinesis Data Firehose 17 Amazon Kinesis Streams 17 Amazon Kinesis Video Streams 17 Amazon
Lex 17 Amazon Managed Streaming for Apache Kafka (Amazon MSK) 18 Amazon MQ 18 Amazon Neptune 19 AWS Network Firewall 19 Amazon Pinpoint 19 Amazon Polly 20 Amazon Quantum Ledger Database (Amazon QLDB) 20 Amazon QuickSight 21 Amazon RDS for MariaDB 21 Amazon RDS for MySQL 21 Amazon RDS for Oracle 22 Amazon RDS for PostgreSQL 22 Amazon RDS for SQL Server 22 Encryption at Rest 23 Transport Encryption 23 Auditing 23 Amazon Redshift 23 Amazon Rekognition 23 Amazon Route 53 24 Amazon S3 Glacier 24 Amazon S3 Transfer Acceleration 24 Amazon SageMaker 24 Amazon SNS 25 Amazon Simple Email Service (Amazon SES) 25 Amazon SQS 25 Amazon S3 26 Amazon Simple Workflow Service 26 Amazon Textract 26 Amazon Transcribe 27 Amazon Translate 27 Amazon Virtual Private Cloud 27 Amazon WorkDocs 27 Amazon WorkSpaces 28 AWS App Mesh 28 AWS Auto Scaling 28 AWS Backup 29 AWS Batch 29 AWS Certificate Manager 30 AWS Cloud Map 30 AWS CloudFormation 30 AWS CloudHSM 30 AWS CloudTrail 30 AWS CodeBuild 31 AWS CodeDeploy 31 AWS CodeCommit 31 AWS CodePipeline 31 AWS Config 32 AWS Data Exchange 32 AWS Database Migration Service 32 AWS DataSync 33 AWS Directory Service 33 AWS Directory Service for Microsoft AD 33 Amazon Cloud Directory 33 AWS Elastic Beanstalk 33 AWS Fargate 34 AWS Firewall Manager 34 AWS Global Accelerator 34 AWS Glue 35 AWS Glue DataBrew 35 AWS IoT Core and AWS IoT Device Management 35 AWS IoT Greengrass 35 AWS Lambda 35 AWS Managed Services 36 AWS Mobile Hub 36 AWS OpsWorks for Chef Automate 36 AWS OpsWorks for Puppet Enterprise 36 AWS OpsWorks Stack 37 AWS Organizations 37 AWS RoboMaker 37 AWS SDK Metrics 37 AWS Secrets Manager 38 AWS Security Hub 38 AWS Server Migration Service 38 AWS Serverless Application Repository 39 AWS Service Catalog 39 AWS Shield 39 AWS Snowball 39 AWS Snowball Edge 40 AWS
Snowmobile 40 AWS Step Functions 40 AWS Storage Gateway 40 File Gateway 41 Volume Gateway 41 Tape Gateway 41 AWS Systems Manager 41 AWS Transfer for SFTP 41 AWS WAF – Web Application Firewall 42 AWS X-Ray 42 Elastic Load Balancing 42 FreeRTOS 42 Using AWS KMS for Encryption of PHI 43 VM Import/Export 43 Auditing backups and disaster recovery 44 Document revisions 45 Notices 48 Architecting for HIPAA Security and Compliance on Amazon Web Services Publication date: September 9, 2021 (Document revisions (p 45)) This paper briefly outlines how customers can use Amazon Web Services (AWS) to run sensitive workloads regulated under the US Health Insurance Portability and Accountability Act (HIPAA). We will focus on the HIPAA Privacy and Security Rules for protecting Protected Health Information (PHI), how to use AWS to encrypt data in transit and at rest, and how AWS features can be used to run workloads containing PHI. Introduction The Health Insurance Portability and Accountability Act of 1996 (HIPAA) applies to "covered entities" and "business associates." HIPAA was expanded in 2009 by the Health Information Technology for Economic and Clinical Health (HITECH) Act. HIPAA and HITECH establish a set of federal standards intended to protect the security and privacy of PHI. HIPAA and HITECH impose requirements related to the use and disclosure of protected health information (PHI), appropriate safeguards to protect PHI, individual rights, and administrative responsibilities. For more information on HIPAA and HITECH, go to the Health Information Privacy Home. Covered entities and their business associates can use the secure, scalable, low-cost IT components provided by Amazon Web Services (AWS) to architect applications in alignment with HIPAA and HITECH compliance requirements. AWS offers a
commercial-off-the-shelf infrastructure platform with industry-recognized certifications and audits such as ISO 27001, FedRAMP, and the Service Organization Control Reports (SOC 1, SOC 2, and SOC 3). AWS services and data centers have multiple layers of operational and physical security to help ensure the integrity and safety of customer data. With no minimum fees, no term-based contracts required, and pay-as-you-use pricing, AWS is a reliable and effective solution for growing healthcare industry applications. AWS enables covered entities and their business associates subject to HIPAA to securely process, store, and transmit PHI. Additionally, as of July 2013, AWS offers a standardized Business Associate Addendum (BAA) for such customers. Customers who execute an AWS BAA may use any AWS service in an account designated as a HIPAA Account, but they may only process, store, and transmit PHI using the HIPAA-eligible services defined in the AWS BAA. For a complete list of these services, see the HIPAA Eligible Services Reference page. AWS maintains a standards-based risk management program to ensure that the HIPAA-eligible services specifically support HIPAA administrative, technical, and physical safeguards. Using these services to store, process, and transmit PHI helps our customers and AWS to address the HIPAA requirements applicable to the AWS utility-based operating model. AWS's BAA requires customers to encrypt PHI stored in or transmitted using HIPAA-eligible services in accordance with guidance from the Secretary of Health and Human Services (HHS): Guidance to Render Unsecured Protected Health Information Unusable, Unreadable, or Indecipherable to Unauthorized Individuals ("Guidance"). Please refer to this site because it may be updated, and it may be made available on a successor (or related) site designated by HHS. AWS offers a comprehensive set of features and services to make key management and encryption of PHI easy to manage and simpler to audit, including the AWS Key Management Service (AWS KMS)
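Because a BAA permits PHI only in the HIPAA-eligible services it defines, teams often screen a planned architecture against that list before deployment. The sketch below shows the idea with a hypothetical, deliberately abbreviated allowlist; the authoritative list is the HIPAA Eligible Services Reference page, and the service names and plan are illustrative only.

```python
# Hypothetical, abbreviated allowlist for illustration only; consult the
# HIPAA Eligible Services Reference page for the authoritative list.
HIPAA_ELIGIBLE = {"Amazon S3", "Amazon EC2", "Amazon RDS", "AWS KMS", "AWS Lambda"}

def screen_phi_services(planned_services):
    """Return (sorted) planned services that must not carry PHI under the BAA."""
    return sorted(set(planned_services) - HIPAA_ELIGIBLE)

# Example: a workload plan mixing eligible and (hypothetically) ineligible services.
plan = ["Amazon S3", "Amazon EC2", "Amazon Mechanical Turk"]
print("Keep PHI out of:", screen_phi_services(plan))
```

A check like this is a coarse first gate; it does not replace verifying that each eligible service is actually configured consistent with the Guidance.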
Customers with HIPAA compliance requirements have a great deal of flexibility in how they meet encryption requirements for PHI. When determining how to implement encryption, customers can evaluate and take advantage of the encryption features native to the HIPAA-eligible services, or customers can satisfy the encryption requirements through other means consistent with the guidance from HHS. Encryption and protection of PHI in AWS The HIPAA Security Rule includes addressable implementation specifications for the encryption of PHI in transmission ("in transit") and in storage ("at rest"). Although this is an addressable implementation specification in HIPAA, AWS requires customers to encrypt PHI stored in or transmitted using HIPAA-eligible services in accordance with guidance from the Secretary of Health and Human Services (HHS): Guidance to Render Unsecured Protected Health Information Unusable, Unreadable, or Indecipherable to Unauthorized Individuals ("Guidance"). Please refer to this site because it may be updated, and it may be made available on a successor (or related) site designated by HHS. AWS offers a comprehensive set of features and services to make key management and encryption of PHI easy to manage and simpler to audit, including the AWS Key Management Service (AWS KMS). Customers with HIPAA compliance requirements have a great deal of flexibility in how they meet encryption requirements for PHI. When determining how to implement encryption, customers may evaluate and take advantage of the encryption features native to the HIPAA-eligible services, or they can satisfy the encryption requirements through other means consistent with the guidance from HHS. The following sections provide high-level details about using available encryption features in each of the HIPAA-eligible services, other patterns for encrypting PHI, and how AWS KMS can be used to encrypt the keys used for
encryption of PHI on AWS. Topics •Alexa for Business (p 6) •Amazon API Gateway (p 6) •Amazon AppFlow (p 7) •Amazon AppStream 2.0 (p 7) •Amazon Athena (p 7) •Amazon Aurora (p 8) •Amazon Aurora PostgreSQL (p 8) •Amazon CloudFront (p 8) •Amazon CloudWatch (p 9) •Amazon CloudWatch Events (p 9) •Amazon CloudWatch Logs (p 9) •Amazon Comprehend (p 9) •Amazon Comprehend Medical (p 9) •Amazon Connect (p 9) •Amazon DocumentDB (with MongoDB compatibility) (p 10) •Amazon DynamoDB (p 10) •Amazon Elastic Block Store (p 10) •Amazon Elastic Compute Cloud (p 11) •Amazon Elastic Container Registry (p 11) •Amazon Elastic Container Service (p 11) •Amazon Elastic File System (Amazon EFS) (p 12)
(Amazon S3) (p 26) •Amazon Simple Workflow Service (p 26) •Amazon Textract (p 26) •Amazon Transcribe (p 27) •Amazon Translate (p 27) •Amazon Virtual Private Cloud (p 27) •Amazon WorkDocs (p 27) •Amazon WorkSpaces (p 28) •AWS App Mesh (p 28) •AWS Auto Scaling (p 28) •AWS Backup (p 29) •AWS Batch (p 29) •AWS Certificate Manager (p 30) •AWS Cloud Map (p 30) •AWS CloudFormation (p 30) •AWS CloudHSM (p 30) •AWS CloudTrail (p 30) •AWS CodeBuild (p 31) •AWS CodeDeploy (p 31) •AWS CodeCommit (p 31) •AWS CodePipeline (p 31) •AWS Config (p 32) •AWS Data Exchange (p 32) •AWS Database Migration Service (p 32) •AWS DataSync (p 33) •AWS Directory Service (p 33) •AWS Elastic Beanstalk (p 33) •AWS Fargate (p 34) •AWS Firewall Manager (p 34) •AWS Global Accelerator (p 34) •AWS Glue (p 35) •AWS Glue DataBrew (p 35) •AWS IoT Core and AWS IoT Device Management (p 35) •AWS IoT Greengrass (p 35) •AWS Lambda (p 35) •AWS Managed Services (p 36) •AWS Mobile Hub (p 36) •AWS OpsWorks for Chef Automate (p 36) •AWS OpsWorks for Puppet Enterprise (p 36) •AWS OpsWorks Stack (p 37) •AWS Organizations (p 37) •AWS RoboMaker (p 37) •AWS SDK Metrics (p 37) •AWS Secrets Manager (p 38) •AWS Security Hub (p 38) •AWS Server Migration Service (p 38) •AWS Serverless Application Repository (p 39) •AWS Service Catalog (p 39) •AWS Shield (p 39) •AWS Snowball (p 39) •AWS Snowball Edge (p 40) •AWS Snowmobile (p 40) •AWS Step Functions (p 40) •AWS Storage Gateway (p 40) •AWS Systems Manager (p 41) •AWS Transfer for SFTP (p 41) •AWS WAF – Web Application Firewall (p 42) •AWS X-Ray (p 42) •Elastic Load Balancing (p 42) •FreeRTOS (p 42) •Using AWS KMS for Encryption of PHI (p 43) •VM Import/Export (p 43) Alexa for Business Alexa for Business makes it easy to configure, install, and manage fleets of Alexa-enabled
devices in the enterprise. Alexa for Business allows the enterprise to control which skills (Alexa apps) are available to its users and which corporate resources (email, calendar, directories, etc.) designated Alexa skills have access to. Through this access it extends Alexa's capabilities with new enterprise-specific skills, such as starting meetings and checking if conference rooms are booked. The Alexa for Business system consists of two components. The first is the Alexa for Business management console, an AWS service that configures and monitors the Alexa-enabled hardware and allows configuration of the system. It also provides the hooks so that designated Alexa skills can access corporate resources. The second is the Alexa system, which processes end-user queries and commands, takes action, and provides responses. The Alexa system is not an AWS service. The Alexa for Business management console does not process or store any PHI. Therefore, Alexa for Business can be used in conjunction with Alexa skills that do not process PHI, such as starting meetings, checking on conference rooms, or using any Alexa skill that also does not process PHI. If customers want to process PHI with Alexa and Alexa for Business, customers must use a HIPAA-eligible Alexa skill and sign a BAA with the Alexa organization. Customers can find out more about building HIPAA-eligible Alexa skills at Alexa Healthcare Skills. Amazon API Gateway Customers can use Amazon API Gateway to process and transmit protected health information (PHI). While Amazon API Gateway automatically uses HTTPS endpoints for encryption in flight, customers can also choose to encrypt payloads client-side. API Gateway passes all non-cached data through memory and does not write it to disk. Customers can use AWS Signature Version 4 for authorization with API Gateway. For more information, see the following: •Amazon API Gateway FAQs: Security and Authorization •Controlling and managing access to a REST API in API Gateway Customers can integrate with any service
that is connected to API Gateway, provided that, when PHI is involved, the service is configured consistent with the Guidance and BAA. For information on integrating API Gateway with backend services, see Set up REST API methods in API Gateway. Customers can use AWS CloudTrail and Amazon CloudWatch to enable logging that is consistent with their logging requirements. Ensure that any PHI sent through API Gateway (such as in headers, URLs, and request/response) is only captured by HIPAA-eligible services that have been configured to be consistent with the Guidance. For more information on logging with API Gateway, see How do I enable CloudWatch Logs for troubleshooting my API Gateway REST API or WebSocket API? Amazon AppFlow Amazon AppFlow is a fully managed integration service that enables customers to securely transfer data between Software-as-a-Service (SaaS) applications, such as Salesforce, Marketo, Slack, and ServiceNow, and AWS services, such as Amazon S3 and Amazon Redshift. AppFlow can run data flows at a frequency the customer chooses: on a schedule, in response to a business event, or on demand. Customers can also configure data transformation capabilities like filtering and validation to generate rich, ready-to-use data as part of the flow itself, without additional steps. Amazon AppFlow can be used to process and transfer data containing PHI. Encryption of data while in transit between AppFlow and the configured source/destination is provided by default using TLS 1.2 or later. Data stored at rest in S3 is automatically encrypted using an AWS KMS customer master key (CMK) that is specified by the customer. For PHI data transferred to non-S3 destinations, customers must ensure the at-rest storage for the chosen destination meets their security needs. AppFlow enables application monitoring by integrating with AWS CloudTrail to log API calls and Amazon EventBridge to emit flow execution
events. Amazon AppStream 2.0 Amazon AppStream 2.0 is a fully managed application streaming service. Customers own their data and must configure the necessary Windows applications in a manner that meets their regulatory requirements. Customers are able to configure persistent storage via Home Folders. Files and folders are encrypted in transit using Amazon S3's SSL endpoints. Files and folders are encrypted at rest using Amazon S3-managed encryption keys. For more information, see Enable and Administer Persistent Storage for Your AppStream 2.0 Users. If customers choose to use a third-party storage solution, they are responsible for ensuring the configuration of that solution is consistent with the Guidance. All public API communication with Amazon AppStream 2.0 is encrypted using TLS. For more information, please see the Amazon AppStream 2.0 Documentation. Amazon AppStream 2.0 is integrated with AWS CloudTrail, a service that logs API calls made by or on behalf of Amazon AppStream 2.0 in the customer's AWS account and delivers the log files to the specified Amazon S3 bucket. CloudTrail captures API calls made from the Amazon AppStream 2.0 console or from the Amazon AppStream 2.0 API. Customers can also use Amazon CloudWatch to log resource usage metrics. For more information, see Monitoring Amazon AppStream 2.0 Resources and Logging AppStream 2.0 API Calls with AWS CloudTrail. Amazon Athena Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL. Athena helps customers analyze unstructured, semi-structured, and structured data stored in Amazon S3. Examples include CSV, JSON, or columnar data formats such as Apache Parquet and Apache ORC. Customers can use Athena to run ad hoc queries using ANSI SQL, without the need to aggregate or load the data into Athena. Amazon Athena can be used to process data containing PHI. Encryption of data while in transit between Amazon Athena and S3 is provided by default using SSL/TLS
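For objects stored in S3 (whether Athena source data, staged query results, or AppStream home folders backed by S3), server-side encryption is requested per object through documented S3 request headers: `x-amz-server-side-encryption: AES256` for SSE-S3 and `aws:kms` for SSE-KMS. The small sketch below only builds those header dictionaries; the KMS key ARN shown is a placeholder, and a real upload would attach these headers to a signed S3 PUT request.

```python
def sse_headers(mode="SSE-S3", kms_key_id=None):
    """Build S3 request headers that ask for server-side encryption.

    Header names follow the S3 REST API; values here are illustrative.
    """
    if mode == "SSE-S3":
        # Amazon S3-managed keys (AES-256).
        return {"x-amz-server-side-encryption": "AES256"}
    if mode == "SSE-KMS":
        # AWS KMS-managed keys; omit the key ID to use the account's default key.
        headers = {"x-amz-server-side-encryption": "aws:kms"}
        if kms_key_id:
            headers["x-amz-server-side-encryption-aws-kms-key-id"] = kms_key_id
        return headers
    raise ValueError(f"unsupported mode: {mode}")

# Placeholder ARN for illustration only.
print(sse_headers("SSE-KMS", kms_key_id="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"))
```

Bucket policies can additionally reject uploads that omit these headers, so unencrypted PHI never lands in the bucket.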
Encryption of PHI while at rest on S3 should be performed according to the guidance provided in the S3 section. Encryption of query results from and within Amazon Athena, including staged results, should be enabled using server-side encryption with Amazon S3-managed keys (SSE-S3), AWS KMS-managed keys (SSE-KMS), or client-side encryption with AWS KMS-managed keys (CSE-KMS). Amazon Athena uses AWS CloudTrail to log all API calls. Amazon Aurora Amazon Aurora allows customers to encrypt Aurora database clusters and snapshots at rest using keys that they manage through AWS KMS. On a database instance running with Amazon Aurora encryption, data stored at rest in the underlying storage is encrypted, as are automated backups, read replicas, and snapshots. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon Aurora encryption satisfies their compliance and regulatory requirements. For more information on encryption at rest using Amazon Aurora, see Protecting data using encryption. Connections to DB clusters running Aurora MySQL must use transport encryption utilizing Secure Sockets Layer (SSL) or Transport Layer Security (TLS). For more information on implementing SSL/TLS, see Using SSL/TLS with Aurora MySQL DB clusters. Amazon Aurora PostgreSQL Amazon Aurora allows customers to encrypt Aurora database clusters and snapshots at rest using keys that they manage through AWS KMS. On a database instance running with Amazon Aurora encryption, data stored at rest in the underlying storage is encrypted, as are automated backups, read replicas, and snapshots. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon Aurora encryption satisfies their compliance and regulatory requirements. For more information on encryption at rest using Amazon Aurora, see Protecting data using encryption. Connections to DB clusters
running Aurora PostgreSQL must use transport encryption utilizing Secure Sockets Layer (SSL) or Transport Layer Security (TLS). For more information on implementing SSL/TLS, see Securing Aurora PostgreSQL data with SSL. Amazon CloudFront Amazon CloudFront is a global content delivery network (CDN) service that accelerates delivery of customer websites, APIs, video content, or other web assets. It integrates with other Amazon Web Services products to give developers and businesses an easy way to accelerate content to end users, with no minimum usage commitments. To ensure encryption of PHI while in transit with CloudFront, customers must configure CloudFront to use HTTPS end-to-end from the origin to the viewer. This includes traffic between CloudFront and the viewer, CloudFront redistributing from a custom origin, and CloudFront distributing from an Amazon S3 origin. Customers should also ensure that the data is encrypted at the origin, so it remains encrypted at rest while cached in CloudFront. If using Amazon S3 as an origin, customers can make use of S3 server-side encryption features. If customers distribute from a custom origin, they must ensure that the data is encrypted at the origin. Lambda@Edge Lambda@Edge is a compute service that allows for the execution of Lambda functions at AWS edge locations. Lambda@Edge can be used to customize content delivered through CloudFront. When using Lambda@Edge with PHI, customers should follow the Guidance for the use of CloudFront. All connections into and out of Lambda@Edge should be encrypted using HTTPS or SSL/TLS. Amazon CloudWatch Amazon CloudWatch is a monitoring service for AWS Cloud resources and the applications that customers run on AWS. Customers can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, and set alarms. Amazon CloudWatch itself does not produce, store, or transmit PHI. Customers can
monitor CloudWatch API calls with AWS CloudTrail. For more information, see Logging Amazon CloudWatch API Calls with AWS CloudTrail. For more details on configuration requirements, see the Amazon CloudWatch Logs section. Amazon CloudWatch Events Amazon CloudWatch Events delivers a near-real-time stream of system events that describe changes in AWS resources. Customers should ensure that PHI does not flow into CloudWatch Events, and that any AWS resource emitting a CloudWatch event that is storing, processing, or transmitting PHI is configured in accordance with the Guidance. Customers can configure Amazon CloudWatch Events to register as an AWS API call in CloudTrail. For more information, see Creating a CloudWatch Events Rule That Triggers on an AWS API Call Using AWS CloudTrail. Amazon CloudWatch Logs Customers can use Amazon CloudWatch Logs to monitor, store, and access their log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Amazon Route 53, and other sources. They can then retrieve the associated log data from CloudWatch Logs. Log data is encrypted while in transit and while at rest. As a result, it is not necessary to re-encrypt PHI emitted by any other service and delivered to CloudWatch Logs. Amazon Comprehend Amazon Comprehend uses natural language processing to extract insights about the content of documents. Amazon Comprehend processes any text file in UTF-8 format. It develops insights by recognizing the entities, key phrases, language, sentiments, and other common elements in a document. Amazon Comprehend can be used with data containing PHI. Amazon Comprehend does not retain or store any data, and all calls to the API are encrypted with SSL/TLS. Amazon Comprehend uses CloudTrail to log all API calls. Amazon Comprehend Medical For guidance, see the previous Amazon Comprehend (p 9) section. Amazon Connect Amazon Connect is a self-service, cloud-based contact center service that enables dynamic, personal, and natural customer engagement at any scale. Customers should
not include any PHI in any fields associated with managing users, security profiles, and contact flows within Amazon Connect.

Amazon Connect Customer Profiles, a feature of Amazon Connect, equips contact center agents with a more unified view of a customer's profile, with the most up-to-date information, to provide more personalized customer service. Customer Profiles is designed to automatically bring together customer information from multiple applications into a unified customer profile, delivering the profile directly to the agent as soon as the support call or interaction begins. Customers should refrain from naming domains or object keys with PHI data; the contents of domains and objects are encrypted and protected, but the key identifiers are not.

Amazon DocumentDB (with MongoDB compatibility)

Amazon DocumentDB (with MongoDB compatibility) (Amazon DocumentDB) offers encryption at rest during cluster creation via AWS KMS, which allows customers to encrypt databases using AWS-managed or customer-managed keys. On a database instance running with encryption enabled, data stored at rest is encrypted consistent with the Guidance in effect at the time of publication of this whitepaper, as are automated backups, read replicas, and snapshots. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon DocumentDB encryption satisfies their compliance and regulatory requirements. For more information on encryption at rest using Amazon DocumentDB, see Encrypting Amazon DocumentDB Data at Rest.

Connections to Amazon DocumentDB containing PHI must use endpoints that accept encrypted transport (HTTPS). By default, a newly created Amazon DocumentDB cluster only accepts secure connections using Transport Layer Security (TLS). For more information, see Encrypting Data in Transit. Amazon DocumentDB uses AWS CloudTrail to log all API
calls. For more information, see Logging and Monitoring in Amazon DocumentDB. For certain management features, Amazon DocumentDB uses operational technology that is shared with Amazon RDS; Amazon DocumentDB console, AWS CLI, and API calls are logged as calls made to the Amazon RDS API.

Amazon DynamoDB

Connections to Amazon DynamoDB containing PHI must use endpoints that accept encrypted transport (HTTPS). For a list of regional endpoints, see AWS service endpoints. Amazon DynamoDB offers DynamoDB encryption, which allows customers to encrypt databases using keys that customers manage through AWS KMS. On a database instance running with Amazon DynamoDB encryption, data stored at rest in the underlying storage is encrypted consistent with the Guidance in effect at the time of publication of this whitepaper, as are automated backups, read replicas, and snapshots. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon DynamoDB encryption satisfies their compliance and regulatory requirements. For more information on encryption at rest using Amazon DynamoDB, see DynamoDB Encryption at Rest.

Amazon Elastic Block Store

Amazon EBS encryption at rest is consistent with the Guidance that is in effect at the time of publication of this whitepaper. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon EBS encryption satisfies their compliance and regulatory requirements. With Amazon EBS encryption, a unique volume encryption key is generated for each EBS volume. Customers have the flexibility to choose which master key from the AWS Key Management Service is used to encrypt each volume key. For more information, see Amazon EBS encryption.

Amazon Elastic Compute Cloud

Amazon EC2 is a scalable, user-configurable compute service that supports multiple methods for encrypting data at rest. For example, customers
might elect to perform application- or field-level encryption of PHI as it is processed within an application or database platform hosted in an Amazon EC2 instance. Approaches range from encrypting data using standard libraries in an application framework such as Java or .NET, to leveraging Transparent Data Encryption features in Microsoft SQL Server or Oracle, to integrating other third-party and software as a service (SaaS)-based solutions into their applications. Customers can choose to integrate their applications running in Amazon EC2 with AWS KMS SDKs, simplifying the process of key management and storage. Customers can also implement encryption of data at rest using file-level or full disk encryption (FDE) by using third-party software from AWS Marketplace Partners or native file system encryption tools (such as dm-crypt or LUKS).

Network traffic containing PHI must be encrypted in transit. For traffic between external sources (such as the internet or a traditional IT environment) and Amazon EC2, customers should use open standard transport encryption mechanisms such as Transport Layer Security (TLS) or IPsec virtual private networks (VPNs), consistent with the Guidance. Internal to an Amazon Virtual Private Cloud (VPC), for data traveling between Amazon EC2 instances, network traffic containing PHI must also be encrypted; most applications support TLS or other protocols providing in-transit encryption that can be configured to be consistent with the Guidance. For applications and protocols that do not support encryption, sessions transmitting PHI can be sent through encrypted tunnels using IPsec or similar implementations between instances.

Amazon Elastic Container Registry

Amazon Elastic Container Registry (Amazon ECR) is integrated with Amazon Elastic Container Service (Amazon ECS) and allows customers to easily store, run, and manage container images for applications running on Amazon ECS. After customers specify the Amazon ECR repository in their task definition, Amazon ECS will
retrieve the appropriate images for their applications. No special steps are required to use Amazon ECR with container images that contain PHI. Container images are encrypted while in transit and stored encrypted at rest using Amazon S3 server-side encryption (SSE-S3).

Amazon Elastic Container Service

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container management service that supports Docker containers and allows customers to easily run applications on a managed cluster of Amazon EC2 instances. Amazon ECS eliminates the need for customers to install, operate, and scale their own cluster management infrastructure. With simple API calls, customers can launch and stop Docker-enabled applications, query the complete state of their cluster, and access many familiar features like security groups, Elastic Load Balancing, EBS volumes, and IAM roles. Customers can use Amazon ECS to schedule the placement of containers across their cluster based on their resource needs and availability requirements.

Using ECS with workloads that process PHI requires no additional configuration. ECS acts as an orchestration service that coordinates the launch of containers (images for which are stored in S3) on EC2, and it does not operate with or upon data within the workload being orchestrated. Consistent with HIPAA regulations and the AWS Business Associate Addendum, PHI should be encrypted in transit and at rest when accessed by containers launched with ECS. Various mechanisms for encrypting at rest are available with each AWS storage option (for example, S3, EBS, and KMS). Ensuring complete encryption of PHI sent between containers may also lead customers to deploy an overlay network (such as VNS3, Weave Net, or similar) in order to provide a redundant layer of encryption. Nevertheless, complete logging should also be enabled (for example, through CloudTrail) and all
container instance logs should be directed to CloudWatch.

Amazon Elastic File System (Amazon EFS)

Amazon Elastic File System (Amazon EFS) provides simple, scalable, elastic file storage for use with AWS Cloud services and on-premises resources. It is easy to use and offers a simple interface that allows customers to create and configure file systems quickly and easily. Amazon EFS is built to elastically scale on demand without disrupting applications, growing and shrinking automatically as customers add and remove files.

To satisfy the requirement that PHI be encrypted at rest, two paths are available on EFS. EFS supports encryption at rest when a new file system is created: during creation, the option "Enable encryption of data at rest" should be selected. Selecting this option ensures that all data placed on the EFS file system will be encrypted using AES-256 encryption and AWS KMS-managed keys. Customers may alternatively choose to encrypt data before it is placed on EFS, but they are then responsible for managing the encryption process and key management. PHI should not be used as all or part of any file name or folder name.

Encryption of PHI while in transit for Amazon EFS is provided by Transport Layer Security (TLS) between the EFS service and the instance mounting the file system. EFS offers a mount helper to facilitate connecting to a file system using TLS. By default, TLS is not used and must be enabled when mounting the file system with the EFS mount helper; ensure that the mount command contains the "-o tls" option to enable TLS encryption. Alternatively, customers who choose not to use the EFS mount helper can follow the instructions in the EFS documentation to configure their NFS clients to connect through a TLS tunnel.

Amazon Elastic Kubernetes Service (Amazon EKS)

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that makes it easy for customers to run Kubernetes on AWS without needing to stand up or maintain their own Kubernetes control plane. Kubernetes
is an open-source system for automating the deployment, scaling, and management of containerized applications. Using Amazon EKS with workloads that process PHI data requires no additional configuration. Amazon EKS operates as an orchestration service, coordinating the launch of containers (the images for which are stored in S3) on EC2, and does not directly operate with or upon data within the workload being orchestrated. Amazon EKS uses AWS CloudTrail to log all API calls.

Amazon ElastiCache for Redis

Amazon ElastiCache for Redis is a Redis-compatible in-memory data structure service that can be used as a data store or cache. In order to store PHI, customers must ensure that they are running the latest HIPAA-eligible ElastiCache for Redis engine version and current generation node types. Amazon ElastiCache for Redis supports storing PHI for the following node types and Redis engine versions:

• Node types: current generation only (for example, as of the time of publication of this whitepaper, M4, M5, R4, R5, T2, T3)
• ElastiCache for Redis engine version: 3.2.6, and 4.0.10 onwards

For more information about choosing current generation nodes, see Amazon ElastiCache pricing. For more information about choosing an ElastiCache for Redis engine, see What Is Amazon ElastiCache for Redis?
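As an illustration only (not an AWS API), the node-type and engine-version floor described above can be expressed as a small local check. The node families and version floors reflect this whitepaper's publication date and may have changed since:

```python
# Current-generation node families eligible for PHI, per this whitepaper's
# publication date (subject to change; verify against current AWS docs).
CURRENT_GEN_FAMILIES = {"m4", "m5", "r4", "r5", "t2", "t3"}

def parse_version(version: str) -> tuple:
    """Turn '4.0.10' into (4, 0, 10) for tuple comparison."""
    return tuple(int(part) for part in version.split("."))

def phi_eligible(node_type: str, engine_version: str) -> bool:
    # Node types look like "cache.m5.large"; the middle token is the family.
    family = node_type.split(".")[1]
    v = parse_version(engine_version)
    # Eligible engines per the list above: exactly 3.2.6, or 4.0.10 onwards.
    return family in CURRENT_GEN_FAMILIES and (v == (3, 2, 6) or v >= (4, 0, 10))
```

For example, `phi_eligible("cache.r5.large", "6.2")` passes, while a previous-generation node type such as `cache.m1.small` fails regardless of engine version.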
Customers must also ensure that the cluster and nodes within the cluster are configured to encrypt data at rest, enable transport encryption, and enable authentication of Redis commands. In addition, customers must ensure that their Redis clusters are updated with the latest 'Security' type service updates on or before the 'Recommended Apply by Date' (the date by which it is recommended the update be applied) at all times. For more information, see the sections below.

Topics
• Encryption at Rest
• Transport Encryption
• Authentication
• Applying ElastiCache Service Updates

Encryption at Rest

Amazon ElastiCache for Redis provides data encryption for its clusters to help protect data at rest. When customers enable encryption at rest for a cluster at the time of creation, Amazon ElastiCache for Redis encrypts data on disk and automated Redis backups. Customer data on disk is encrypted using hardware-accelerated Advanced Encryption Standard (AES)-256 symmetric keys. Redis backups are encrypted through Amazon S3-managed encryption keys (SSE-S3): an S3 bucket with server-side encryption enabled encrypts the data using hardware-accelerated AES-256 symmetric keys before saving it in the bucket. For more details on Amazon S3-managed encryption keys (SSE-S3), see Protecting Data Using Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3).

On an ElastiCache for Redis cluster (single- or multi-node) running with encryption, data stored at rest is encrypted consistent with the Guidance in effect at the time of publication of this whitepaper. This includes data on disk and automated backups in the S3 bucket. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon ElastiCache for Redis encryption satisfies their compliance and regulatory requirements. For more information about encryption at rest using Amazon ElastiCache for Redis, see What Is Amazon ElastiCache for Redis?
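Because encryption can only be enabled at creation time, it is worth assembling the request parameters deliberately. The following is a hedged sketch of boto3 `create_replication_group` parameters covering the at-rest, in-transit, and AUTH requirements discussed in this section; the group name, token value, and key alias are hypothetical placeholders:

```python
# Sketch only: request parameters for an encrypted ElastiCache for Redis
# replication group. Substitute your own identifiers before use.
params = {
    "ReplicationGroupId": "example-phi-cache",           # hypothetical name
    "ReplicationGroupDescription": "PHI-eligible Redis cluster",
    "Engine": "redis",
    "EngineVersion": "4.0.10",                           # a HIPAA-eligible version
    "CacheNodeType": "cache.r5.large",                   # current-generation node
    "AtRestEncryptionEnabled": True,    # encrypt data on disk and backups
    "TransitEncryptionEnabled": True,   # TLS between clients and nodes
    "AuthToken": "example-token-0123456789abcdef",       # Redis AUTH, 16-128 chars
    "KmsKeyId": "alias/example-key",                     # hypothetical KMS key alias
}

# With credentials configured, the call would be:
# import boto3
# boto3.client("elasticache").create_replication_group(**params)
```

Note that `AtRestEncryptionEnabled` and `TransitEncryptionEnabled` cannot be toggled on an existing replication group, which is why they appear in the creation request.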
Transport Encryption

Amazon ElastiCache for Redis uses TLS to encrypt data in transit. Connections to ElastiCache for Redis containing PHI must use transport encryption, and customers should evaluate the configuration for consistency with the Guidance. For more information, see CreateReplicationGroup. For more information on enabling transport encryption, see ElastiCache for Redis In-Transit Encryption (TLS).

Authentication

Amazon ElastiCache for Redis clusters (single- or multi-node) that contain PHI must provide a Redis AUTH token to enable authentication of Redis commands. Redis AUTH is available when both encryption at rest and encryption in transit are enabled. Customers should provide a strong token for Redis AUTH with the following constraints:

• Must be only printable ASCII characters
• Must be at least 16 characters and no more than 128 characters in length
• Cannot contain any of the following characters: '/', '"', or '@'

This token must be set from within the request parameter at the time of Redis replication group (single- or multi-node) creation, and can be updated later with a new value. AWS encrypts this token using AWS Key Management Service (AWS KMS). For more information on Redis AUTH, see ElastiCache for Redis In-Transit Encryption (TLS).

Applying ElastiCache Service Updates

Amazon ElastiCache for Redis clusters (single- or multi-node) that contain PHI must be updated with the latest 'Security' type service updates on or before the 'Recommended Apply by Date'. ElastiCache offers this as a self-service feature that customers can use to apply the updates anytime, on demand, and in real time. Each service update comes with a 'Severity' and 'Recommended Apply by Date' and is available only for the applicable Redis replication groups. The 'SLA Met' field in the service update feature will state whether the update was applied on or before the 'Recommended Apply by Date'.
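The Redis AUTH token constraints listed earlier in this section can be checked locally before cluster creation. This helper is an illustrative sketch, not part of any AWS SDK, and it reads "printable ASCII" as the visible range 0x21-0x7E:

```python
def valid_auth_token(token: str) -> bool:
    """Check a candidate Redis AUTH token against the documented constraints:
    printable ASCII only, 16-128 characters, and none of '/', '"', or '@'.
    """
    if not (16 <= len(token) <= 128):
        return False
    # "Printable ASCII" is taken here as the visible range 0x21-0x7E
    # (an assumption of this sketch); '/', '"', and '@' are disallowed.
    return all(0x21 <= ord(ch) <= 0x7E and ch not in '/"@' for ch in token)
```

A generated token should be rejected and regenerated if this check fails, rather than weakened to fit.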
If customers choose not to apply the updates to the applicable Redis replication groups by the 'Recommended Apply by Date', ElastiCache will not take any action to apply them. Customers can use the service updates history dashboard to review the application of updates to their Redis replication groups over time. For more information on how to use this feature, see Self-Service Updates in Amazon ElastiCache.

Amazon OpenSearch Service

Amazon OpenSearch Service (OpenSearch Service) enables customers to run a managed OpenSearch cluster in a dedicated Amazon Virtual Private Cloud (Amazon VPC). When using OpenSearch Service with PHI, customers should use OpenSearch 6.0 or later. Customers should ensure PHI is encrypted at rest and in transit within Amazon OpenSearch Service. Customers may use AWS KMS key encryption to encrypt data at rest in their OpenSearch Service domains, which is only available for OpenSearch 5.1 or later. For more information about how to encrypt data at rest, see Encryption of Data at Rest for Amazon OpenSearch Service.

Each OpenSearch Service domain runs in its own VPC. Customers should enable node-to-node encryption, which is available in OpenSearch 6.0 or later. If customers send data to OpenSearch Service over HTTPS, node-to-node encryption helps ensure that customer data remains encrypted as OpenSearch distributes (and redistributes) it throughout the cluster. If data arrives unencrypted over HTTP, OpenSearch Service encrypts the data only after it reaches the cluster; therefore, any PHI that enters an Amazon OpenSearch Service cluster should be sent over HTTPS. For more information, see Node-to-Node Encryption for Amazon OpenSearch Service. Logs from the OpenSearch Service configuration API can be captured in AWS CloudTrail. For more information, see Managing Amazon OpenSearch Service Domains.

Amazon EMR

Amazon EMR deploys and manages a cluster of Amazon EC2 instances in a customer's account. For information on encryption with Amazon EMR, see Encryption Options.

Amazon EventBridge
Amazon EventBridge (formerly Amazon CloudWatch Events) is a serverless event bus that enables you to create scalable event-driven applications. EventBridge delivers a stream of real-time data from event sources, such as Zendesk, Datadog, or PagerDuty, and routes that data to targets like AWS Lambda. By default, EventBridge encrypts data using 256-bit Advanced Encryption Standard (AES-256) under an AWS owned CMK, which helps secure customer data from unauthorized access. Customers should ensure that any AWS resource emitting an event that is storing, processing, or transmitting PHI is configured in accordance with best practices. Amazon EventBridge is integrated with AWS CloudTrail, and customers can view the most recent events in the CloudTrail console in Event history. For more information, see EventBridge Information in CloudTrail.

Amazon Forecast

Amazon Forecast is a fully managed service that uses machine learning to deliver highly accurate forecasts, based on the same machine learning forecasting technology used by Amazon.com. Every interaction customers have with Amazon Forecast is protected by encryption. Any content processed by Amazon Forecast is encrypted with customer keys through AWS Key Management Service and encrypted at rest in the AWS Region where customers are using the service.

Amazon Forecast is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or AWS service in Amazon Forecast. CloudTrail captures all API calls for Amazon Forecast as events, including calls from the Amazon Forecast console and code calls to the Amazon Forecast API operations. If customers create a trail, they can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for Amazon Forecast. For more information, see Logging Forecast API Calls with AWS CloudTrail. By default, the log files delivered
by CloudTrail to their bucket are encrypted by Amazon server-side encryption with Amazon S3-managed encryption keys (SSE-S3). To provide a security layer that is directly manageable, customers can instead use server-side encryption with AWS KMS-managed keys (SSE-KMS) for their CloudTrail log files. Enabling server-side encryption encrypts the log files, but not the digest files, with SSE-KMS; digest files are encrypted with Amazon S3-managed encryption keys (SSE-S3). Amazon Forecast imports and exports data to and from S3 buckets. When importing and exporting data from Amazon S3, customers should ensure S3 buckets are configured in a manner consistent with the Guidance. For more information, see Getting Started.

Amazon FSx

Amazon FSx is a fully managed service providing feature-rich and highly performant file systems. Amazon FSx for Windows File Server provides highly reliable and scalable file storage and is accessible over the Server Message Block (SMB) protocol. Amazon FSx for Lustre provides high-performance storage for compute workloads and is powered by Lustre, the world's most popular high-performance file system. Amazon FSx supports two forms of encryption for file systems: encryption of data in transit and encryption at rest. Amazon FSx for Windows File Server also supports logging of all API calls using AWS CloudTrail.

Encryption of data in transit is supported by Amazon FSx for Windows File Server on compute instances supporting SMB protocol 3.0 or newer, and by Amazon FSx for Lustre on Amazon EC2 instances that support encryption in transit. Alternatively, customers may encrypt data before storing it on Amazon FSx, but are then responsible for the encryption process and key management. Encryption of data at rest is automatically enabled when creating an Amazon FSx file system, using the AES-256 encryption algorithm and AWS KMS-managed keys. Data and metadata are automatically encrypted before being written to the file system and automatically decrypted before being presented to the application. PHI should not
be used in any file or folder name.

Amazon GuardDuty

Amazon GuardDuty is a managed threat detection service that continuously monitors for malicious or unauthorized behavior to help customers protect their AWS accounts and workloads. It monitors for activity such as unusual API calls or potentially unauthorized deployments that indicate a possible account compromise. Amazon GuardDuty also detects potentially compromised instances or reconnaissance by attackers. Amazon GuardDuty continuously monitors and analyzes the following data sources: VPC Flow Logs, AWS CloudTrail event logs, and DNS logs. It uses threat intelligence feeds, such as lists of malicious IPs and domains, and machine learning to identify unexpected and potentially unauthorized and malicious activity within an AWS environment. As such, Amazon GuardDuty should not encounter any PHI, as this data is not to be stored in any of the AWS-based data sources listed above.

Amazon HealthLake

Amazon HealthLake enables customers in the healthcare and life sciences industries to store, transform, query, and analyze health data at petabyte scale. Customers can use Amazon HealthLake to transmit, process, and store PHI. Amazon HealthLake encrypts data at rest in customers' data stores by default. All service data and metadata is encrypted with a service-owned KMS key. Per Fast Healthcare Interoperability Resources (FHIR) specifications, if a customer deletes a FHIR resource, it will only be hidden from retrieval and will be retained by the service for versioning. When customers use the StartFHIRImportJob API, Amazon HealthLake will enforce the requirement to export data to an encrypted Amazon S3 bucket. Amazon HealthLake also encrypts data in transit. It uses Transport Layer Security (TLS) 1.2 to encrypt data in transit through the public endpoint and through backend services. Clients must support TLS 1.0 or later, although AWS recommends TLS 1.2
or later. Clients must also support cipher suites with perfect forward secrecy (PFS), such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Additionally, requests must be signed by using an access key ID and a secret access key that is associated with an IAM principal. Alternatively, customers can use the AWS Security Token Service (AWS STS) to generate temporary security credentials to sign requests. Amazon HealthLake is integrated with AWS CloudTrail; CloudTrail captures all API calls to Amazon HealthLake as events, including calls made as a result of interaction with the AWS Management Console, the command-line interface (CLI), and programmatically using a software development kit (SDK).

Amazon Inspector

Amazon Inspector is an automated security assessment service for customers seeking to improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. Customers may run Amazon Inspector on EC2 instances that contain PHI. Amazon Inspector encrypts all data transmitted over the network, as well as all telemetry data stored at rest.

Amazon Kinesis Data Analytics

Amazon Kinesis Data Analytics enables customers to quickly author SQL code that continuously reads, processes, and stores data in near real time. Using standard SQL queries on the streaming data, customers can construct applications that transform and provide insights into their data. Kinesis Data Analytics supports inputs from Kinesis Data Streams and Kinesis Data Firehose delivery streams as sources for an analytics application. If the stream is encrypted, Kinesis Data Analytics accesses the data in the encrypted stream seamlessly, with no further
configuration needed. Kinesis Data Analytics does not store unencrypted data read from Kinesis Data Streams. For more information, see Configuring Application Input. Kinesis Data Analytics integrates with both AWS CloudTrail and Amazon CloudWatch Logs for application monitoring. For more information, see Monitoring Tools and Working with Amazon CloudWatch Logs.

Amazon Kinesis Data Firehose

When customers send data from their data producers to their Kinesis data stream, Amazon Kinesis Data Streams encrypts data using an AWS KMS key before storing it at rest. When the Kinesis Data Firehose delivery stream reads data from the Kinesis stream, Kinesis Data Streams first decrypts the data and then sends it to Kinesis Data Firehose. Kinesis Data Firehose buffers the data in memory based on the buffering hints specified by the customer, then delivers the data to the destinations without storing the unencrypted data at rest. For more information about encryption with Kinesis Data Firehose, see Data Protection in Amazon Kinesis Data Firehose. AWS provides various tools that customers can use to monitor Amazon Kinesis Data Firehose, including Amazon CloudWatch metrics, Amazon CloudWatch Logs, Kinesis Agent, and API logging and history. For more information, see Monitoring Amazon Kinesis Data Firehose.

Amazon Kinesis Streams

Amazon Kinesis Streams enables customers to build custom applications that process or analyze streaming data for specialized needs. The server-side encryption feature allows customers to encrypt data at rest. When server-side encryption is enabled, Kinesis Streams will use an AWS KMS key to encrypt the data before storing it on disks. For more information, see Data Protection in Amazon Kinesis Data Streams. Connections to Kinesis Streams containing PHI must use endpoints that accept encrypted transport (that is, HTTPS). For a list of regional endpoints, see AWS service endpoints.

Amazon Kinesis Video Streams

Amazon Kinesis Video Streams is a fully managed AWS service that customers can use to
stream live video from devices to the AWS Cloud, or build applications for real-time video processing or batch-oriented video analytics. Server-side encryption is a feature in Kinesis Video Streams that automatically encrypts data at rest by using an AWS KMS customer master key (CMK) that is specified by the customer. Data is encrypted before it is written to the Kinesis Video Streams stream storage layer, and it is decrypted after it is retrieved from storage. The Amazon Kinesis Video Streams SDK can be used to transmit streaming video data containing PHI. By default, the SDK uses TLS to encrypt frames and fragments generated by the hardware device on which it is installed. The SDK does not manage or affect data stored at rest. Amazon Kinesis Video Streams uses AWS CloudTrail to log all API calls.

Amazon Lex

Amazon Lex is an AWS service for building conversational interfaces for applications using voice and text. With Amazon Lex, the same conversational engine that powers Amazon Alexa is now available to any developer, enabling customers to build sophisticated, natural language chatbots into their new and existing applications. Amazon Lex provides the deep functionality and flexibility of natural language understanding (NLU) and automatic speech recognition (ASR), so customers can build highly engaging user experiences with lifelike conversational interactions and create new categories of products. Lex uses the HTTPS protocol to communicate both with clients and with other AWS services. Access to Lex is API-driven, and appropriate IAM least privilege can be enforced. For more information, see Data Protection in Amazon Lex.

Monitoring is important for maintaining the reliability, availability, and performance of customers' Amazon Lex chatbots. To track the health of Amazon Lex bots, use Amazon CloudWatch. With CloudWatch, customers can get metrics
for individual Amazon Lex operations, or for global Amazon Lex operations for their account. Customers can also set up CloudWatch alarms to be notified when one or more metrics exceed a threshold that customers define. For example, customers can monitor the number of requests made to a bot over a particular time period, view the latency of successful requests, or raise an alarm when errors exceed a threshold. Lex is also integrated with AWS CloudTrail to log Lex API calls. For more information, see Monitoring in Amazon Lex.

Amazon Managed Streaming for Apache Kafka (Amazon MSK)

Amazon MSK provides encryption features for data at rest and data in transit. For data at rest, an Amazon MSK cluster uses Amazon EBS server-side encryption and AWS KMS keys to encrypt storage volumes. For data in transit, Amazon MSK clusters have encryption enabled via TLS for inter-broker communication. The encryption configuration setting is enabled when a cluster is created, and by default, in-transit encryption is set to TLS for clusters created from the CLI or AWS Console. Additional configuration is required for clients to communicate with clusters using TLS encryption. Customers can change the default encryption setting by selecting the TLS/plaintext settings. For more information, see Amazon MSK Encryption.

Customers can monitor the performance of their clusters using the Amazon MSK console or the Amazon CloudWatch console, or they can access JMX and host metrics using Open Monitoring with Prometheus, an open-source monitoring solution. Tools that are designed to read from Prometheus exporters are compatible with Open Monitoring, such as Datadog, Lenses, New Relic, Sumo Logic, or a Prometheus server. For details on Open Monitoring, see the Amazon MSK Open Monitoring documentation. Note that the default version of Apache ZooKeeper bundled with Apache Kafka does not support encryption; however, communication between Apache ZooKeeper and Apache Kafka brokers is limited to broker,
topic, and partition state information. The only way data can be produced and consumed from an Amazon MSK cluster is over a private connection between the clients in a customer's VPC and the Amazon MSK cluster; Amazon MSK does not support public endpoints.

Amazon MQ

Amazon MQ is a managed message broker service for Apache ActiveMQ that makes it easy to set up and operate message brokers in the cloud. Amazon MQ works with existing applications and services without the need for a customer to manage, operate, or maintain their own messaging system. To provide encryption of PHI data while in transit, the following protocols with TLS enabled should be used to access brokers:

• AMQP
• MQTT
• MQTT over WebSocket
• OpenWire
• STOMP
• STOMP over WebSocket

Amazon MQ encrypts messages at rest and in transit using encryption keys that it manages and stores securely. Amazon MQ uses CloudTrail to log all API calls.

Amazon Neptune

Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine that is optimized for storing billions of relationships and querying the graph with milliseconds latency. Amazon Neptune supports the popular graph query languages Apache TinkerPop Gremlin and W3C's SPARQL. Data containing PHI can be retained in an encrypted instance of Amazon Neptune. An encrypted instance of Amazon Neptune can be specified only at the time of creation, by choosing 'Enable Encryption' from the Amazon Neptune console. All logs, backups, and snapshots are encrypted for an Amazon Neptune encrypted instance. Key management for encrypted instances of Amazon Neptune is provided through AWS KMS. Encryption of data in transit is provided through SSL/TLS. Amazon Neptune uses CloudTrail to log all API calls.

AWS Network Firewall
Network Firewall

AWS Network Firewall is a managed firewall service that makes it easy to deploy essential network protections for all of your Amazon Virtual Private Clouds (Amazon VPCs). The service automatically scales with network traffic volume to provide high-availability protections without the need to set up or maintain the underlying infrastructure. Both customer rules and access logs may contain end-user IP addresses, which are encrypted both at rest and in transit within the AWS architecture. Furthermore, AWS Network Firewall encrypts all data at rest and in transit between component AWS services (Amazon S3, Amazon DynamoDB, Amazon CloudWatch Logs, Amazon EBS). The service automatically encrypts data without requiring special configuration.

Amazon Pinpoint

Amazon Pinpoint offers developers a single API layer, CLI support, and client-side SDK support to extend application communication channels with users. The eligible channels include email, SMS text messaging, mobile push notifications, and custom channels. Amazon Pinpoint also provides an analytics system that tracks app user behavior and user engagement. With this service, developers can learn how each user prefers to engage and can personalize the user's experience to increase user satisfaction. Amazon Pinpoint also helps developers address multiple messaging use cases, such as direct or transactional messaging, targeted or campaign messaging, and event-based messaging. By integrating and enabling all end-user engagement channels via Amazon Pinpoint, developers can create a 360-degree view of user engagement across all customer touch points. Amazon Pinpoint stores user endpoint and event data so customers can create segments, send messages to recipients, and capture engagement data. Amazon Pinpoint encrypts data both at rest and in transit. For more information, see the Amazon Pinpoint FAQs. While Amazon Pinpoint encrypts all data at rest
and in transit, the final channel, such as SMS or email, may not be encrypted, and customers should configure any channel in a manner consistent with their requirements. Additionally, customers who need to send PHI through the SMS channel should use a dedicated short code (a 5–6 digit origination phone number) for the explicit purpose of sending PHI. For more information on how to request a short code, see Requesting Dedicated Short Codes for SMS Messaging with Amazon Pinpoint. Customers may also choose not to send PHI through the final channel and instead provide a mechanism to securely access PHI over HTTPS. API calls to Amazon Pinpoint can be captured using AWS CloudTrail. The captured calls include those from the Amazon Pinpoint console and code calls to Amazon Pinpoint API operations. If customers create a trail, they can enable continuous delivery of AWS CloudTrail events to an Amazon S3 bucket, including events for Amazon Pinpoint. If customers don't configure a trail, they can still view the most recent events by using Event history in the AWS CloudTrail console. Using the information collected by AWS CloudTrail, customers can determine that the request was made to Amazon Pinpoint, the IP address of the request, who made the request, when the request was made, and additional details. For more information, see Logging Amazon Pinpoint API Calls with AWS CloudTrail.

Amazon Polly

Amazon Polly is a cloud service that converts text into lifelike speech. Amazon Polly provides simple API operations that customers can easily integrate with existing applications. Amazon Polly uses the HTTPS protocol to communicate with clients. Access to Amazon Polly is API-driven, and appropriate IAM least privilege can be enforced. For more information, see Data Protection. Some examples of use cases that include PHI:

• A caregiver converts a text report containing PHI into synthesized speech so they can listen to the report while walking or performing other duties.
• A visually impaired patient is given medical
guidance and consumes the guidance in the form of synthesized speech.

The final delivery channel from Amazon Polly could result in playing audio with PHI in a public space, and precautions should be taken so that delivery takes this into consideration. The synthesized speech output can also be sent asynchronously to an Amazon S3 bucket with encryption enabled. When supported event activity occurs in Amazon Polly, that activity is recorded in an AWS CloudTrail event along with other AWS service events in Event history. For an ongoing record of events in a customer AWS account, including events for Amazon Polly, create a trail. A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. Using the information collected by CloudTrail, customers can determine the request that was made to Amazon Polly, the IP address from which the request was made, who made the request, when it was made, and additional details.

Amazon Quantum Ledger Database (Amazon QLDB)

Amazon QLDB is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log owned by a central trusted authority. Amazon QLDB tracks each and every application data change and maintains a complete and verifiable history of changes over time. Data containing PHI can be retained in a QLDB instance. By default, all Amazon QLDB data in transit and at rest is encrypted. Data in transit is encrypted using TLS, and data at rest is encrypted using AWS managed keys. For data protection purposes, we recommend that customers protect AWS account credentials and set up individual user accounts with AWS Identity and Access Management (IAM) so that each user is given only the permissions necessary to fulfill their job duties. For more information, see Data Protection in Amazon QLDB. Amazon QLDB is integrated with AWS CloudTrail, a service that provides a record of actions taken by
a user, role, or AWS service in QLDB. CloudTrail captures all control plane API calls for QLDB as events. The calls that are captured include calls from the QLDB console and code calls to the QLDB API operations. If customers create a trail, they can enable continuous delivery of CloudTrail events to an Amazon Simple Storage Service (Amazon S3) bucket, including events for QLDB. If customers don't configure a trail, they can still view the most recent events on the CloudTrail console in Event history. Using the information collected by CloudTrail, customers can determine the request that was made to QLDB, the IP address from which the request was made, who made the request, when it was made, and additional details.

Amazon QuickSight

Amazon QuickSight is a business analytics service that customers can use to build visualizations, perform ad hoc analysis, and quickly get business insights from their data. Amazon QuickSight discovers AWS data sources, enables organizations to scale to hundreds of thousands of users, and delivers responsive performance by using a robust in-memory engine (SPICE). Customers can only use the Enterprise edition of Amazon QuickSight to work with data containing PHI, as it provides support for encryption of data stored at rest in SPICE. Data encryption is performed using AWS managed keys.

Amazon RDS for MariaDB

Amazon RDS for MariaDB allows customers to encrypt MariaDB databases using keys that they manage through AWS KMS. On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted consistent with the Guidance in effect at the time of publication of this whitepaper, as are automated backups, read replicas, and snapshots. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon RDS for MariaDB encryption satisfies their compliance and regulatory requirements. For more information on encryption at rest using Amazon RDS, see Encrypting Amazon RDS Resources.
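The encryption-at-rest configuration described above can be sketched as the request parameters a customer might pass to the RDS CreateDBInstance API (for example, via boto3's `create_db_instance`). This is a minimal illustration only, not an authoritative configuration: the instance identifier, instance class, and KMS key ARN are hypothetical placeholders, and credentials handling is omitted.

```python
# Sketch: parameters for an Amazon RDS for MariaDB instance with
# encryption at rest enabled (MySQL/PostgreSQL are analogous).
# In practice: boto3.client("rds").create_db_instance(**params)

def encrypted_rds_params(db_id: str, kms_key_arn: str) -> dict:
    """Build CreateDBInstance parameters with storage encryption enabled."""
    return {
        "DBInstanceIdentifier": db_id,          # hypothetical identifier
        "Engine": "mariadb",
        "DBInstanceClass": "db.t3.medium",      # hypothetical size
        "AllocatedStorage": 100,                # GiB
        "MasterUsername": "admin",              # password handling omitted
        # Encrypts the underlying storage, and with it the automated
        # backups, read replicas, and snapshots, as described above.
        "StorageEncrypted": True,
        # Customer-managed AWS KMS key; omit to use the AWS-managed key.
        "KmsKeyId": kms_key_arn,
    }

params = encrypted_rds_params(
    "phi-mariadb",
    "arn:aws:kms:us-east-1:111122223333:key/example-key-id",  # placeholder
)
```

Because encryption can only be set at instance creation, an existing unencrypted instance must be migrated (for example, via an encrypted snapshot copy) rather than modified in place.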
Connections to RDS for MariaDB containing PHI must use transport encryption. For more information on enabling encrypted connections, see Using SSL/TLS to Encrypt a Connection to a DB Instance.

Amazon RDS for MySQL

Amazon RDS for MySQL allows customers to encrypt MySQL databases using keys that customers manage through AWS KMS. On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted consistent with the Guidance in effect at the time of publication of this whitepaper, as are automated backups, read replicas, and snapshots. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon RDS for MySQL encryption satisfies their compliance and regulatory requirements. For more information on encryption at rest using Amazon RDS, see Encrypting Amazon RDS Resources. Connections to RDS for MySQL containing PHI must use transport encryption. For more information on enabling encrypted connections, see Using SSL/TLS to Encrypt a Connection to a DB Instance.

Amazon RDS for Oracle

Customers have several options for encrypting PHI at rest using Amazon RDS for Oracle. Customers can encrypt Oracle databases using keys that they manage through AWS KMS. On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted consistent with the Guidance in effect at the time of publication of this whitepaper, as are automated backups, read replicas, and snapshots. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon RDS for Oracle encryption satisfies their compliance and regulatory requirements. For more information on encryption at rest using Amazon RDS, see Encrypting Amazon RDS Resources. Customers can also use Oracle Transparent Data Encryption (TDE), and they should evaluate the configuration for
consistency with the Guidance. Oracle TDE is a feature of the Oracle Advanced Security option available in Oracle Enterprise Edition. This feature automatically encrypts data before it is written to storage and automatically decrypts data when the data is read from storage. Customers can also use AWS CloudHSM to store Amazon RDS Oracle TDE keys. For more information, see the following:

• Amazon RDS for Oracle Transparent Data Encryption: Oracle Transparent Data Encryption
• Using AWS CloudHSM to store Amazon RDS Oracle TDE keys: What Is Amazon Relational Database Service (Amazon RDS)?

Connections to Amazon RDS for Oracle containing PHI must use transport encryption, and customers should evaluate the configuration for consistency with the Guidance. This is accomplished using Oracle Native Network Encryption, enabled in Amazon RDS for Oracle option groups. For detailed information, see Oracle Native Network Encryption.

Amazon RDS for PostgreSQL

Amazon RDS for PostgreSQL allows customers to encrypt PostgreSQL databases using keys that customers manage through AWS KMS. On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted consistent with the Guidance in effect at the time of publication of this whitepaper, as are automated backups, read replicas, and snapshots. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon RDS for PostgreSQL encryption satisfies their compliance and regulatory requirements. For more information on encryption at rest using Amazon RDS, see Encrypting Amazon RDS Resources. Connections to RDS for PostgreSQL containing PHI must use transport encryption. For more information on enabling encrypted connections, see Using SSL/TLS to Encrypt a Connection to a DB Instance.

Amazon RDS for SQL Server

RDS for SQL Server supports storing PHI for the following version and edition combinations:

• 2008 R2: Enterprise Edition only
• 2012, 2014, and 2016: Web, Standard, and Enterprise Editions
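The transport-encryption requirement stated above for the Amazon RDS engines (MariaDB, MySQL, PostgreSQL) can be illustrated on the client side. Below is a minimal sketch of a libpq-style connection string for RDS for PostgreSQL; the endpoint, database, user, and certificate-bundle path are hypothetical placeholders, and a PostgreSQL driver such as psycopg2 would consume the resulting DSN.

```python
# Sketch: a connection string that requires TLS for RDS for PostgreSQL
# and verifies the server certificate against the RDS CA bundle.

def tls_dsn(host: str, db: str, user: str, ca_bundle: str) -> str:
    """Build a libpq DSN that enforces an encrypted, verified connection."""
    return (
        f"host={host} port=5432 dbname={db} user={user} "
        # verify-full: encrypt the session AND verify both the server
        # certificate chain and that the hostname matches the certificate.
        f"sslmode=verify-full sslrootcert={ca_bundle}"
    )

dsn = tls_dsn(
    "phi-db.example.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
    "clinical",
    "app_user",
    "/opt/certs/rds-ca-bundle.pem",                # downloaded RDS CA bundle
)
```

`sslmode=require` would also encrypt the session, but `verify-full` additionally defends against server impersonation, which is generally preferable when PHI is in scope.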
Important: SQL Server Express edition is not supported and should never be used for the storage of PHI.

In order to store PHI, customers must ensure that the instance is configured to encrypt data at rest and to enable transport encryption and auditing, as detailed below.

Encryption at Rest

Customers can encrypt SQL Server databases using keys that they manage through AWS KMS. On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted consistent with the Guidance in effect at the time of publication of this whitepaper, as are automated backups and snapshots. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon RDS for SQL Server encryption satisfies their compliance and regulatory requirements. For more information about encryption at rest using Amazon RDS, see Encrypting Amazon RDS Resources. If customers use SQL Server Enterprise Edition, they can use SQL Server Transparent Data Encryption (TDE) as an alternative. This feature automatically encrypts data before it is written to storage and automatically decrypts data when the data is read from storage. For more information on RDS for SQL Server Transparent Data Encryption, see Support for Transparent Data Encryption in SQL Server.

Transport Encryption

Connections to Amazon RDS for SQL Server containing PHI must use transport encryption provided by SQL Server Forced SSL. Forced SSL is enabled from within the parameter group for Amazon RDS SQL Server. For more information on RDS for SQL Server Forced SSL, see Using SSL with a Microsoft SQL Server DB Instance.

Auditing

RDS for SQL Server instances that contain PHI must have auditing enabled. Auditing is enabled from within the parameter group for Amazon RDS SQL Server. For more information on RDS for SQL Server auditing, see Compliance Program Support for Microsoft SQL Server
DB Instances.

Amazon Redshift

Amazon Redshift provides database encryption for its clusters to help protect data at rest. When customers enable encryption for a cluster, Amazon Redshift encrypts all data, including backups, by using hardware-accelerated Advanced Encryption Standard (AES)-256 symmetric keys. Amazon Redshift uses a four-tier, key-based architecture for encryption. These keys consist of data encryption keys, a database key, a cluster key, and a master key. The cluster key encrypts the database key for the Amazon Redshift cluster. Customers can use either AWS KMS or an AWS CloudHSM (hardware security module) to manage the cluster key. Amazon Redshift encryption at rest is consistent with the Guidance that is in effect at the time of publication of this whitepaper. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon Redshift encryption satisfies their compliance and regulatory requirements. For more information, see Amazon Redshift database encryption. Connections to Amazon Redshift containing PHI must use transport encryption, and customers should evaluate the configuration for consistency with the Guidance. For more information, see Configuring security options for connections. Amazon Redshift Spectrum enables customers to run Amazon Redshift SQL queries against exabytes of data in Amazon S3. Redshift Spectrum is a feature of Amazon Redshift and thus is also in scope for the HIPAA BAA.

Amazon Rekognition

Amazon Rekognition makes it easy to add image and video analysis to customer applications. A customer only needs to provide an image or video to the Amazon Rekognition API, and the service can identify the objects, people, text, scenes, and activities, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial recognition. Amazon Rekognition is eligible to operate
with images or video containing PHI. Amazon Rekognition operates as a managed service and does not present any configurable options for the handling of data. Amazon Rekognition only uses, discloses, and maintains PHI as permitted by the terms of the AWS BAA. All data is encrypted at rest and in transit with Amazon Rekognition. Amazon Rekognition uses AWS CloudTrail to log all API calls.

Amazon Route 53

Amazon Route 53 is a managed DNS service that provides customers the ability to register domain names, route internet traffic to customer domain resources, and check the health of those resources. While Amazon Route 53 is a HIPAA Eligible Service, no PHI should be stored in any resource names or tags within Amazon Route 53, as there is no support for encrypting such data. Instead, Amazon Route 53 can be used to provide access to customer domain resources that transmit or store PHI, such as web servers running on Amazon EC2 or storage such as Amazon S3.

Amazon S3 Glacier

Amazon S3 Glacier automatically encrypts data at rest using AES 256-bit symmetric keys and supports secure transfer of customer data over secure protocols. Connections to Amazon S3 Glacier containing PHI must use endpoints that accept encrypted transport (HTTPS). For a list of regional endpoints, see AWS service endpoints. Do not use PHI in archive and vault names or metadata, because this data is not encrypted using Amazon S3 Glacier server-side encryption and is not generally encrypted in client-side encryption architectures.

Amazon S3 Transfer Acceleration

Amazon S3 Transfer Acceleration (S3TA) enables fast, easy, and secure transfers of files over long distances between a customer's client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations. As the data arrives at an edge location, it is routed to Amazon S3 over an optimized network path. Customers should ensure that any data containing PHI transferred using AWS S3TA is encrypted in transit and at rest. Refer to the
Guidance for Amazon S3 to understand the available encryption options.

Amazon SageMaker

Amazon SageMaker is a fully managed machine learning service. With Amazon SageMaker, data scientists and developers can quickly and easily build and train machine learning models and then directly deploy them into a production-ready hosted environment. It provides an integrated Jupyter authoring notebook instance for easy access to data sources for exploration and analysis. Amazon SageMaker also provides common machine learning algorithms that are optimized to run efficiently against extremely large data in a distributed environment. With native support for bring-your-own algorithms and frameworks, Amazon SageMaker offers flexible distributed training options that adjust to a customer's specific workflows. Amazon SageMaker is eligible to operate with data containing PHI. Encryption of data in transit is provided by SSL/TLS and is used when communicating both with the front-end interface of Amazon SageMaker (to the notebook) and whenever Amazon SageMaker interacts with any other AWS service (for example, pulling data from Amazon S3). To satisfy the requirement that PHI be encrypted at rest, encryption of data stored with the instance running models with Amazon SageMaker is enabled using AWS Key Management Service (KMS) when setting up the endpoint (DescribeEndpointConfig:KmsKeyID). Encryption of model training results (artifacts) is enabled using AWS KMS, and keys should be specified using the KmsKeyID in the OutputDataConfig description. If a KMS key ID isn't provided, the default Amazon S3 KMS key for the role's account will be used. Amazon SageMaker uses AWS CloudTrail to log all API calls.

Amazon Simple Notification Service (Amazon SNS)

Customers should understand the following key encryption requirement in order to use Amazon Simple Notification Service (Amazon SNS) with Protected Health Information
(PHI). Customers must use the HTTPS API endpoint that SNS provides in each AWS Region. The HTTPS endpoint leverages encrypted connections and protects the privacy and integrity of the data sent to AWS. For a list of all HTTPS API endpoints, see AWS service endpoints. Additionally, Amazon SNS uses CloudTrail, a service that captures API calls made by or on behalf of Amazon SNS in the customer's AWS account and delivers the log files to an Amazon S3 bucket that they specify. CloudTrail captures API calls made from the Amazon SNS console or from the Amazon SNS API. Using the information collected by CloudTrail, customers can determine what request was made to Amazon SNS, the source IP address from which the request was made, who made the request, and when it was made. For more information on logging SNS operations, see Logging Amazon SNS API calls using CloudTrail.

Amazon Simple Email Service (Amazon SES)

Amazon Simple Email Service (Amazon SES) is a flexible and highly scalable email sending and receiving service. It supports both S/MIME and PGP protocols to encrypt messages for full end-to-end encryption, and all communication with Amazon SES is secured using SSL (TLS 1.2). Customers have the option to store messages encrypted at rest by configuring Amazon SES to receive and encrypt messages before storing them in an Amazon S3 bucket. For more information about encrypting messages for storage, see How Amazon Simple Email Service (Amazon SES) uses AWS KMS. Messages are secured in transit to Amazon SES either through an HTTPS endpoint or an encrypted SMTP connection. For messages sent from Amazon SES to a receiver, Amazon SES will first attempt to make a secure connection to the receiving mail server, but if a secure connection cannot be established, it will send the message unencrypted. To require encryption for delivery to a receiver, customers must create a configuration set in Amazon SES and use the AWS CLI to set the TlsPolicy property to Require. For more information,
see Amazon SES and Security Protocols. Amazon SES integrates with AWS CloudTrail to monitor all API calls. Using the information collected by AWS CloudTrail, customers can determine that the request was made to Amazon SES, the IP address of the request, who made the request, when the request was made, and additional details. For more information, see Logging Amazon SES API Calls with AWS CloudTrail. Amazon SES also provides methods to monitor sending activity, such as sends, rejects, bounce rates, deliveries, opens, and clicks. For more information, see Monitoring Your Amazon SES Sending Activity.

Amazon Simple Queue Service (Amazon SQS)

Customers should understand the following key encryption requirements in order to use Amazon SQS with PHI:

• Communication with the Amazon SQS queue via the Query Request must be encrypted with HTTPS. For more information on making SQS requests, see Making Query API requests.
• Amazon SQS supports server-side encryption integrated with AWS KMS to protect data at rest. The addition of server-side encryption allows customers to transmit and receive sensitive data with the increased security of using encrypted queues. Amazon SQS server-side encryption uses the 256-bit Advanced Encryption Standard (AES-256 GCM algorithm) to encrypt the body of each message. The integration with AWS KMS allows customers to centrally manage the keys that protect Amazon SQS messages along with keys that protect their other AWS resources. AWS KMS logs every use of encryption keys to AWS CloudTrail to help meet regulatory and compliance needs. For more information, and to check Region availability of SSE for Amazon SQS, see Encryption at Rest.
• If server-side encryption is not used, the message payload itself must be encrypted before being sent to SQS. One way to encrypt the message payload is by using the Amazon SQS Extended Client along with the Amazon S3 encryption client.
For more information about using client-side encryption, see Encrypting Message Payloads Using the Amazon SQS Extended Client and the Amazon S3 Encryption Client.

Amazon SQS uses CloudTrail, a service that logs API calls made by or on behalf of Amazon SQS in a customer's AWS account and delivers the log files to the specified Amazon S3 bucket. CloudTrail captures API calls made from the Amazon SQS console or from the Amazon SQS API. Customers can use the information collected by CloudTrail to determine which requests are made to Amazon SQS, the source IP address from which the request is made, who made the request, when it is made, and so on. For more information about logging SQS operations, see Logging Amazon SQS API calls using AWS CloudTrail.

Amazon Simple Storage Service (Amazon S3)

Customers have several options for encryption of data at rest when using Amazon S3, including both server-side and client-side encryption and several methods of managing keys. For more information, see Protecting data using encryption. Connections to Amazon S3 containing PHI must use endpoints that accept encrypted transport (HTTPS). For a list of regional endpoints, see AWS service endpoints. Do not use PHI in bucket names, object names, or metadata, because this data is not encrypted using S3 server-side encryption and is not generally encrypted in client-side encryption architectures.

Amazon Simple Workflow Service

Amazon Simple Workflow Service (Amazon SWF) helps developers build, run, and scale background jobs that have parallel or sequential steps. Amazon SWF can be thought of as a fully managed state tracker and task coordinator in the cloud. The Amazon Simple Workflow Service is used to orchestrate workflows and is not able to store or transmit data. PHI should not be placed in metadata for Amazon SWF or within any task description. Amazon SWF uses AWS CloudTrail to log all API calls.

Amazon Textract

Amazon Textract uses machine learning technologies to automatically extract text and data from scanned
documents, going beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables. For example, customers can use Amazon Textract to automatically extract data and process forms with protected health information (PHI) without human intervention to fulfill medical claims. Amazon Textract can also be used to maintain compliance in document archives. For example, customers can use Amazon Textract to extract data from insurance claims or medical prescriptions and automatically recognize key-value pairs in those documents so that sensitive ones can be redacted. Amazon Textract supports server-side encryption (SSE-S3 and SSE-KMS) for input documents and TLS encryption for data in transit between the service and agent. Customers can use Amazon CloudWatch to track resource usage metrics and AWS CloudTrail to capture API calls to Amazon Textract.

Amazon Transcribe

Amazon Transcribe uses advanced machine learning technologies to recognize speech in audio files and transcribe them into text. For example, customers can use Amazon Transcribe to convert US English and Mexican Spanish audio to text and to create applications that incorporate the content of audio files. Amazon Transcribe can be used with data containing PHI. Amazon Transcribe does not retain or store any data, and all calls to the API are encrypted with SSL/TLS. Amazon Transcribe uses CloudTrail to log all API calls.

Amazon Translate

Amazon Translate uses advanced machine learning technologies to provide high-quality translation on demand. Customers can use Amazon Translate to translate unstructured text documents or to build applications that work in multiple languages. Documents containing PHI can be processed with Amazon Translate. No additional configuration is required when translating documents that contain PHI. Encryption of data while in transit is provided by SSL/TLS,
and no data remains at rest with Amazon Translate. Amazon Translate uses CloudTrail to log all API calls.

Amazon Virtual Private Cloud

Amazon Virtual Private Cloud (Amazon VPC) offers a set of network security features well aligned to architecting for HIPAA compliance. Features such as stateless network access control lists and dynamic reassignment of instances into stateful security groups afford flexibility in protecting instances from unauthorized network access. Amazon VPC also allows customers to extend their own network address space into AWS, as well as providing a number of ways to connect their data centers to AWS. VPC Flow Logs provide an audit trail of accepted and rejected connections to instances processing, transmitting, or storing PHI. For more information on Amazon VPC, see Amazon Virtual Private Cloud.

Amazon WorkDocs

Amazon WorkDocs is a fully managed, secure enterprise file storage and sharing service with strong administrative controls and feedback capabilities that improve user productivity. Amazon WorkDocs files are encrypted at rest using keys that customers manage through AWS Key Management Service (KMS). All data in transit is encrypted using SSL/TLS. AWS web and mobile applications and desktop sync clients transmit files directly to Amazon WorkDocs using SSL/TLS. Using the Amazon WorkDocs Management Console, WorkDocs administrators can view audit logs to track file and user activity by time, and can choose whether to allow users to share files with others outside their organization. Amazon WorkDocs is also integrated with CloudTrail (a service that captures API calls made by or on behalf of Amazon WorkDocs in the customer's AWS account) and delivers CloudTrail log files to an Amazon S3 bucket that customers specify. Multi-factor authentication (MFA) using a RADIUS server is available and can provide customers with an additional layer of security during the
authentication process. Users log in by entering their user name and password, followed by an OTP (one-time passcode) supplied by a hardware or software token. For more information, see:

• Amazon WorkDocs features
• Logging Amazon WorkDocs API calls using AWS CloudTrail

Customers should not store PHI in file names or directory names.

Amazon WorkSpaces

Amazon WorkSpaces is a fully managed, secure Desktop-as-a-Service (DaaS) solution that runs on AWS. With Amazon WorkSpaces, customers can easily provision virtual, cloud-based Microsoft Windows desktops for their users, providing them access to the documents, applications, and resources they need anywhere, anytime, from any supported device. Amazon WorkSpaces stores data in Amazon Elastic Block Store volumes. Customers can encrypt their WorkSpaces storage volumes using keys that they manage through AWS Key Management Service. When encryption is enabled on a WorkSpace, both the data stored at rest in the underlying storage and the automated backups (EBS snapshots) of the disk storage are encrypted consistent with the Guidance. Communication from the WorkSpaces clients to WorkSpaces is secured using SSL/TLS. For more information on encryption at rest using Amazon WorkSpaces, see Encrypted WorkSpaces.

AWS App Mesh

AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure, such as Amazon ECS, Amazon EKS, or Amazon EC2 services. App Mesh configures Envoy proxies to collect and transmit observability data to the monitoring services that you configure, giving you end-to-end visibility. It can route traffic based on configured routing and traffic policies to ensure high availability of your applications. Traffic between applications can be configured to use TLS. App Mesh can be used via the AWS SDK or the App Mesh controller for Kubernetes. While AWS App Mesh is a HIPAA Eligible Service, no PHI should be stored in any resource
names or attributes within AWS App Mesh, as there is no support for protecting such data. Instead, AWS App Mesh can be used to monitor, control, and secure customer domain resources that transmit or store PHI.

AWS Auto Scaling

AWS Auto Scaling enables customers to configure automatic scaling for the AWS resources that are part of a customer's application in a matter of minutes. Customers can use AWS Auto Scaling for a number of services that involve PHI, such as Amazon DynamoDB, Amazon ECS, Amazon RDS Aurora replicas, and Amazon EC2 instances in an Auto Scaling group. AWS Auto Scaling is an orchestration service that does not directly process, store, or transmit customer content; for that reason, customers can use this service with encrypted content. The AWS shared responsibility model applies to data protection in AWS Auto Scaling: AWS is responsible for the AWS network security procedures, whereas the customer is responsible for maintaining control over the customer's content that is hosted on this infrastructure. This content includes the security configuration and management tasks for the AWS services that customers use. For data protection purposes, we recommend that customers protect AWS account credentials and set up individual user accounts with AWS Identity and Access Management (IAM). That way, each user is given only the permissions necessary to fulfill their job duties. AWS strongly recommends that customers never put sensitive identifying information, such as customers' account numbers, into free-form fields such as a Name field. This includes when customers work with AWS Auto Scaling or other AWS services using the AWS Management Console, API, AWS CLI, or AWS SDKs. Any data that customers enter into AWS Auto Scaling or other services might get picked up for inclusion in diagnostic logs. When customers provide a URL to an external server, they should not include credentials information
in the URL to validate their request to that server. AWS also recommends that customers secure their data in the following ways:

• Use multi-factor authentication (MFA) with each account.
• Use SSL/TLS to communicate with AWS resources. AWS recommends TLS 1.2 or later.
• Set up API and user activity logging with AWS CloudTrail.
• Use AWS encryption solutions, along with all default security controls within AWS services.
• Use advanced managed security services such as Amazon Macie, which assists in discovering and securing personal data that is stored in Amazon S3.

AWS Backup

AWS Backup offers a centralized, fully managed, and policy-based service to protect customer data and ensure compliance across AWS services for business continuity purposes. With AWS Backup, customers can centrally configure data protection (backup) policies and monitor backup activity across customer AWS resources, including Amazon EBS volumes, Amazon Relational Database Service (Amazon RDS) databases (including Aurora clusters), Amazon DynamoDB tables, Amazon Elastic File System (Amazon EFS) file systems, Amazon FSx file systems, Amazon EC2 instances, and AWS Storage Gateway volumes.

AWS Backup encrypts customer data in transit and at rest. Backups from services with existing snapshot capabilities are encrypted using the source service's snapshot encryption methodology. For example, EBS snapshots are encrypted using the encryption key of the volume that the snapshot was created from. Backups from newer AWS services that introduce backup functionality built on AWS Backup, such as Amazon EFS, are encrypted in transit and at rest independently from the source services, giving customer backups an additional layer of protection. Encryption is configured at the backup vault level. The default vault is encrypted. When customers create a new vault, an encryption key must be selected.
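Because encryption is configured at the vault level, the key choice happens when the vault is created. A minimal sketch of the idea follows; the vault name and key ARN are illustrative placeholders, and with boto3 a dict like this could be passed to `backup.create_backup_vault()`:

```python
# Hypothetical sketch: selecting a customer-managed KMS key at vault
# creation time, per the guidance that encryption is configured at the
# backup vault level. All names/ARNs below are illustrative.
def backup_vault_params(vault_name, kms_key_arn):
    return {
        "BackupVaultName": vault_name,
        # Choose an explicit KMS key rather than relying on the default
        "EncryptionKeyArn": kms_key_arn,
    }

params = backup_vault_params(
    "phi-backup-vault",
    "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
)
```

Once the vault exists, its encryption key cannot be changed, which is why the selection belongs in the creation call.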
AWS Batch

AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (such as CPU- or memory-optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. AWS Batch plans, schedules, and executes batch computing workloads across the full range of AWS compute services and features.

Similar to the guidance for Amazon ECS, PHI should not be placed directly into the job definition, the job queue, or the tags for AWS Batch. Instead, jobs scheduled and executed with AWS Batch may operate on encrypted PHI. Any information returned by stages of a job to AWS Batch should also not contain any PHI. Whenever jobs being executed by AWS Batch must transmit or receive PHI, that connection should be encrypted using HTTPS or SSL/TLS.

AWS Certificate Manager

AWS Certificate Manager is a service that lets customers easily provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services and their internal connected resources. AWS Certificate Manager should not be used to store data containing PHI. AWS Certificate Manager uses CloudTrail to log all API calls.

AWS Cloud Map

AWS Cloud Map is a cloud resource discovery service. With AWS Cloud Map, customers can define custom names for application resources, such as Amazon ECS tasks, Amazon EC2 instances, Amazon S3 buckets, Amazon DynamoDB tables, Amazon SQS queues, or any other cloud resource. Customers can then use these custom names to discover the location and metadata of cloud resources from their applications using AWS SDK and authenticated API queries. While AWS Cloud Map is a HIPAA Eligible Service, no PHI should be stored in any resource names/attributes within AWS Cloud Map, as there is no support for protecting such data. Instead, AWS Cloud Map can be used to discover customer domain resources that transmit or store PHI.

AWS CloudFormation

AWS CloudFormation enables customers to
create and provision AWS infrastructure deployments predictably and repeatedly. It helps customers leverage AWS products such as Amazon EC2, Amazon Elastic Block Store, Amazon SNS, Elastic Load Balancing, and Auto Scaling to build highly reliable, highly scalable, cost-effective applications in the cloud, without worrying about creating and configuring the underlying AWS infrastructure. AWS CloudFormation enables customers to use a template file to create and delete a collection of resources together as a single unit (a stack).

AWS CloudFormation does not itself store, transmit, or process PHI. Instead, it is used to build and deploy architectures that use other AWS services that might store, transmit, and/or process PHI. Only HIPAA Eligible Services should be used with PHI. Please refer to the entries for those services in this whitepaper for guidance on the use of PHI with those services. AWS CloudFormation uses AWS CloudTrail to log all API calls.

AWS CloudHSM

AWS CloudHSM is a cloud-based hardware security module (HSM) that enables customers to easily generate and use their own encryption keys on the AWS Cloud. With CloudHSM, customers can manage their own encryption keys using FIPS 140-2 Level 3 validated HSMs. CloudHSM offers customers the flexibility to integrate with their applications using open standard APIs, such as PKCS#11, Java Cryptography Extensions (JCE), and Microsoft CryptoNG (CNG) libraries. CloudHSM is also standards-compliant and enables customers to export all of their keys to most other commercially available HSMs.

As AWS CloudHSM is a hardware appliance key management service, it is unable to store or transmit PHI. Customers should not store PHI in tags (metadata). No other special guidance is required.

AWS CloudTrail

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of AWS accounts. With CloudTrail, customers can log, continuously monitor, and retain account activity
related to actions across their AWS infrastructure. CloudTrail provides event history of their AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.

AWS CloudTrail is enabled for use with all AWS accounts and can be used for audit logging, as required by the AWS BAA. Specific trails should be created using the CloudTrail console or the AWS Command Line Interface. CloudTrail encrypts all traffic while in transit, and at rest when an encrypted trail is created. An encrypted trail should be created when the potential exists to log PHI. By default, an encrypted trail stores entries in Amazon S3 using server-side encryption with Amazon S3-managed keys (SSE-S3). If additional management of keys is desired, it can also be configured with AWS KMS-managed keys (SSE-KMS). As CloudTrail is the final destination for AWS log entries, and thus a critical component of any architecture that handles PHI, CloudTrail log file integrity validation should be enabled and the associated CloudTrail digest files should be periodically reviewed. Once enabled, a positive assertion that the log files have not been changed or altered can be established.

AWS CodeBuild

AWS CodeBuild is a fully managed build service in the cloud. AWS CodeBuild compiles source code, runs unit tests, and produces artifacts that are ready to deploy. AWS CodeBuild uses an AWS KMS customer master key (CMK) to encrypt build output artifacts. A CMK should be created and configured before building artifacts that contain PHI, secrets/passwords, master certificates, etc. AWS CodeBuild uses AWS CloudTrail to log all API calls.
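A minimal sketch of pointing a build project at a customer master key, so output artifacts are encrypted with it. The project name, repository URL, bucket, role, and key ARN are illustrative placeholders; with boto3, a dict like this could be passed to `codebuild.create_project()`:

```python
# Hypothetical sketch: a CodeBuild project configured with an explicit
# CMK for artifact encryption. All names/ARNs/URLs are illustrative.
def codebuild_project_params(name, artifact_bucket, cmk_arn):
    return {
        "name": name,
        "source": {"type": "CODECOMMIT", "location": "https://example-repo-url"},
        "artifacts": {"type": "S3", "location": artifact_bucket},
        "environment": {
            "type": "LINUX_CONTAINER",
            "image": "aws/codebuild/standard:5.0",
            "computeType": "BUILD_GENERAL1_SMALL",
        },
        "serviceRole": "arn:aws:iam::111122223333:role/example-codebuild-role",
        # Build output artifacts are encrypted with this key
        "encryptionKey": cmk_arn,
    }

project = codebuild_project_params(
    "example-build",
    "example-artifact-bucket",
    "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
)
```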
AWS CodeDeploy

AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services, including Amazon EC2, AWS Fargate, AWS Lambda, and on-premises servers. Customers use AWS CodeDeploy to rapidly release new features of containerized workloads, and it handles the complexity of updating applications. AWS CodeDeploy supports server-side encryption (SSE-S3) for deployment artifacts and TLS encryption for data in transit between the service and agent. Customers can use Amazon CloudWatch Events to track deployments and AWS CloudTrail to capture API calls to AWS CodeDeploy.

AWS CodeCommit

AWS CodeCommit is a secure, highly scalable, managed source control service that hosts private Git repositories. AWS CodeCommit eliminates the need for customers to manage their own source control system or worry about scaling its infrastructure. AWS CodeCommit encrypts all traffic and stored information while in transit and at rest. By default, when a repository is created within AWS CodeCommit, an AWS managed key is created with AWS KMS and is used only by that repository to encrypt all data stored at rest. AWS CodeCommit uses AWS CloudTrail to log all API calls.

AWS CodePipeline

AWS CodePipeline is a fully managed continuous delivery service that helps customers automate release pipelines for fast and reliable application and infrastructure updates. Pipelines that allow researchers to automatically process clinical trial data, lab results, and genomic data are a few examples of customer workflows. AWS CodePipeline supports server-side encryption (SSE-S3 and SSE-KMS) for code artifacts and TLS encryption for data in transit between the service and agent. Customers can use Amazon CloudWatch Events to track pipeline changes and AWS CloudTrail to capture API calls to AWS CodePipeline.

AWS Config

AWS Config provides a detailed view of the resources associated with a customer's AWS account, including how they are configured, how they are related to one another, and how the configurations and their relationships have changed over time.
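One common way to apply this in a PHI-handling architecture is to enable an AWS managed rule that flags unencrypted EBS volumes. A hedged sketch follows; `ENCRYPTED_VOLUMES` is an AWS managed rule identifier, while the rule name is an illustrative placeholder. With boto3, a dict like this could be passed to `config.put_config_rule(ConfigRule=...)`:

```python
# Hypothetical sketch: an AWS Config managed rule that evaluates whether
# EBS volumes are encrypted. The rule name is illustrative.
def encrypted_volumes_rule(rule_name):
    return {
        "ConfigRuleName": rule_name,
        "Source": {
            "Owner": "AWS",                         # AWS managed rule
            "SourceIdentifier": "ENCRYPTED_VOLUMES",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }

rule = encrypted_volumes_rule("phi-ebs-volumes-encrypted")
```

Rules like this let the configuration history described above double as continuous evidence that encryption-at-rest settings have not drifted.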
AWS Config cannot itself be used to store or transmit PHI. Instead, it can be leveraged to monitor and evaluate architectures built with other AWS services, including architectures that handle PHI, to help determine whether they remain compliant with their intended design goal. Architectures that handle PHI should only be built with HIPAA Eligible Services. AWS Config uses AWS CloudTrail to log all results.

AWS Data Exchange

AWS Data Exchange makes it easy to find, subscribe to, and use third-party data in the cloud. Once subscribed to a data product, customers can use the AWS Data Exchange API to load data directly into Amazon S3 and then analyze it with a wide variety of AWS analytics and machine learning services. For data providers, AWS Data Exchange makes it easy to reach the millions of AWS customers migrating to the cloud by removing the need to build and maintain infrastructure for data storage, delivery, billing, and entitling.

AWS Data Exchange always encrypts all data products stored in the service at rest, without requiring any additional configuration. This encryption is automatically done via a service-managed KMS key. AWS Data Exchange uses Transport Layer Security (TLS) and client-side encryption for encryption in transit. Communication with AWS Data Exchange is always done over HTTPS, so customer data is always encrypted in transit. This encryption is configured by default when customers use AWS Data Exchange. For more information, see Data Protection in AWS Data Exchange.

AWS Data Exchange is integrated with AWS CloudTrail. AWS CloudTrail captures all calls to AWS Data Exchange APIs as events, including calls from the AWS Data Exchange console and from code calls to the AWS Data Exchange API operations. Some actions customers can take are console-only actions; there is no corresponding API in the AWS SDK or AWS CLI. These are actions that rely on AWS Marketplace functionality, such as publishing or subscribing to a product. AWS Data Exchange provides CloudTrail logs for a subset of
these console-only actions. For more information, see Logging AWS Data Exchange API Calls with AWS CloudTrail.

Please note that all listings using AWS Data Exchange must adhere to AWS Data Exchange's Publishing Guidelines and the AWS Data Exchange FAQs for AWS Marketplace Providers, which restrict certain categories of data. For more information, see the AWS Data Exchange FAQs.

AWS Database Migration Service

AWS Database Migration Service (AWS DMS) helps customers migrate databases to AWS easily and securely. Customers can migrate their data to and from most widely used commercial and open-source databases, such as Oracle, MySQL, and PostgreSQL. The service supports homogeneous migrations, such as Oracle to Oracle, and also heterogeneous migrations between different database platforms, such as Oracle to PostgreSQL or MySQL to Oracle.

Databases running on-premises and being migrated to the cloud with AWS DMS can contain PHI data. AWS DMS encrypts data while in transit and when data is being staged for final migration into the target database on AWS. AWS DMS encrypts the storage used by a replication instance and the endpoint connection information. To encrypt the storage used by a replication instance, AWS DMS uses an AWS KMS key that is unique to the AWS account. Refer to the guidance for the appropriate target database to ensure that data remains encrypted once migration is complete. AWS DMS uses CloudTrail to log all API calls.

AWS DataSync

AWS DataSync is an online transfer service that simplifies, automates, and accelerates moving data between on-premises storage and AWS. Customers can use AWS DataSync to connect their data sources to either Amazon S3 or Amazon EFS. Customers should ensure that Amazon S3 and Amazon EFS are configured in a manner consistent with the Guidance. By default, customer data is encrypted in transit using TLS 1.2. For more information about encryption and AWS DataSync, see
AWS DataSync features. Customers can monitor DataSync activity using AWS CloudTrail. For more information on logging with CloudTrail, see Logging AWS DataSync API Calls with AWS CloudTrail.

AWS Directory Service

AWS Directory Service for Microsoft AD

AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD, enables directory-aware workloads and AWS resources to use managed Active Directory in the AWS Cloud. AWS Microsoft AD stores directory content (including content containing PHI) in encrypted Amazon Elastic Block Store volumes using encryption keys that AWS manages. For more information, see Amazon EBS Encryption. Data in transit to and from Active Directory clients is encrypted when it travels through Lightweight Directory Access Protocol (LDAP) over the customer's Amazon Virtual Private Cloud (VPC) network. If an Active Directory client resides in an on-premises network, the traffic travels to the customer's VPC by a virtual private network link or an AWS Direct Connect link.

Amazon Cloud Directory

Amazon Cloud Directory enables customers to build flexible cloud-native directories for organizing hierarchies of data along multiple dimensions. Customers also can create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries. For example, customers can create an organizational chart that can be navigated through separate hierarchies for reporting structure, location, and cost center. Amazon Cloud Directory automatically encrypts data at rest and in transit by using 256-bit encryption keys that are managed by the AWS Key Management Service (AWS KMS).

AWS Elastic Beanstalk

With AWS Elastic Beanstalk, customers can quickly deploy and manage applications in the AWS Cloud without having to learn about the infrastructure that runs those applications. Customers can simply upload code, and AWS Elastic
Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and automatic scaling to application health monitoring. At the same time, customers retain full control over the AWS resources powering their application and can access the underlying resources at any time.

AWS Elastic Beanstalk does not itself store, transmit, or process PHI. Instead, customers can use it to build and deploy architectures with other AWS services that might store, transmit, and/or process PHI. Customers should ensure that, when picking the services that are deployed by AWS Elastic Beanstalk, only HIPAA Eligible Services are used with PHI. See the entries for those services in this whitepaper for guidance on the use of PHI with those services. Customers should not include PHI in any free-form fields within AWS Elastic Beanstalk, such as the Name field. AWS Elastic Beanstalk uses AWS CloudTrail to log all API calls.

AWS Fargate

AWS Fargate is a technology that allows customers to run containers without having to manage servers or clusters. With AWS Fargate, customers no longer have to provision, configure, and scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale clusters, or optimize cluster packing. AWS Fargate removes the need for customers to interact with or think about servers or clusters. With Fargate, customers focus on designing and building their applications instead of managing the infrastructure that runs them.

Fargate does not require any additional configuration in order to work with workloads that process PHI. Customers can run container workloads on Fargate using container orchestration services like Amazon ECS. Fargate only manages the underlying infrastructure and does not operate with or upon data within the workload being orchestrated. In keeping with the requirements for HIPAA, PHI should still be encrypted whenever in transit or at rest when accessed by containers launched with Fargate.
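As one illustration of keeping data encrypted for a Fargate workload, an ECS task definition can mount an Amazon EFS volume with encryption in transit enabled. This is a hedged sketch, not a complete deployment; the family name, image, and file system ID are illustrative placeholders, and with boto3 the dict could be passed to `ecs.register_task_definition()`:

```python
# Hypothetical sketch: a Fargate task definition fragment whose EFS
# volume enforces TLS between the task and the file system. All
# names/IDs are illustrative.
def fargate_task_definition(family, efs_id):
    return {
        "family": family,
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",
        "cpu": "256",
        "memory": "512",
        "containerDefinitions": [
            {
                "name": "app",  # no PHI in names, tags, or other metadata
                "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/example-app",
                "mountPoints": [{"sourceVolume": "data", "containerPath": "/data"}],
            }
        ],
        "volumes": [
            {
                "name": "data",
                "efsVolumeConfiguration": {
                    "fileSystemId": efs_id,
                    "transitEncryption": "ENABLED",  # TLS to the file system
                },
            }
        ],
    }

td = fargate_task_definition("example-phi-worker", "fs-12345678")
```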
Various mechanisms for encrypting at rest are available with each AWS storage option described in this paper.

AWS Firewall Manager

AWS Firewall Manager is a security management service which allows customers to centrally configure and manage firewall rules across customer accounts and applications in AWS Organizations. As new applications are created, Firewall Manager makes it easy to bring new applications and resources into compliance by enforcing a common set of security rules. Customers now have a single service to build firewall rules, create security policies, and enforce them in a consistent, hierarchical manner across their entire infrastructure, from a central administrator account.

AWS Firewall Manager is an orchestration service that does not directly process, store, or transmit user data. The service does not encrypt customer content, but underlying services that AWS Firewall Manager uses, such as DynamoDB, encrypt user data.

AWS Global Accelerator

AWS Global Accelerator is a global load balancing service that improves the availability and latency of multi-region applications. To ensure that PHI remains encrypted in transit and at rest while using AWS Global Accelerator, architectures being load balanced by Global Accelerator should use an encrypted protocol, such as HTTPS or SSL/TLS. Refer to the guidance for Amazon EC2, Elastic Load Balancing, and other AWS services to better understand the available encryption options for backend resources. AWS Global Accelerator uses AWS CloudTrail to log all API calls.

AWS Glue

AWS Glue is a fully managed ETL (extract, transform, and load) service that makes it simple and cost-effective for customers to categorize their data, clean it, enrich it, and move it reliably between various data stores. In order to ensure the encryption of data containing PHI while in transit, AWS Glue should be configured to use JDBC connections to data stores with SSL/TLS. Additionally, to maintain
encryption at rest, the setting for server-side encryption (SSE-S3) should be passed as a parameter to ETL jobs run with AWS Glue. All data stored at rest within the Data Catalog of AWS Glue is encrypted using keys managed by AWS KMS when encryption is enabled upon creation of a Data Catalog object. AWS Glue uses CloudTrail to log all API calls.

AWS Glue DataBrew

AWS Glue DataBrew is a fully managed visual data preparation service that makes it easy for data analysts and data scientists to clean and normalize data to prepare it for analytics and machine learning. In order to ensure the encryption of data containing PHI while in transit, DataBrew should be configured to use JDBC connections to data stores with SSL/TLS. When connecting to JDBC data sources, DataBrew uses the settings on your AWS Glue connection, including the "Require SSL connection" option. Additionally, to maintain encryption while at rest in S3 buckets, the setting for server-side encryption (SSE-S3 or SSE-KMS) should be passed as a parameter to DataBrew jobs.

AWS IoT Core and AWS IoT Device Management

AWS IoT Core and AWS IoT Device Management provide secure, bidirectional communication between internet-connected devices, such as sensors, actuators, embedded microcontrollers, or smart appliances, and the AWS Cloud. AWS IoT Core and AWS IoT Device Management can now accommodate devices that transmit data containing PHI. All communication with AWS IoT Core and AWS IoT Device Management is encrypted using TLS. AWS IoT Core and AWS IoT Device Management use AWS CloudTrail to log all API calls.

AWS IoT Greengrass

AWS IoT Greengrass lets customers run local compute, messaging, data caching, sync, and ML inference capabilities for connected devices in a secure way. AWS IoT Greengrass uses X.509 certificates, managed subscriptions, AWS IoT policies, and IAM policies and roles to ensure that customer Greengrass applications are secure. AWS IoT Greengrass uses the AWS IoT transport security model to encrypt communication with the cloud
using TLS. In addition, AWS IoT Greengrass data is encrypted when at rest (in the cloud). For more information on Greengrass security, see Overview of AWS IoT Greengrass Security. Customers can log AWS IoT Greengrass API actions using AWS CloudTrail. For more information, see Logging AWS IoT Greengrass API Calls with AWS CloudTrail.

AWS Lambda

AWS Lambda lets customers run code without provisioning or managing servers on their own. AWS Lambda uses a compute fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances across multiple Availability Zones in a Region, which provides the high availability, security, performance, and scalability of the AWS infrastructure.

To ensure that PHI remains encrypted while using AWS Lambda, connections to external resources should use an encrypted protocol, such as HTTPS or SSL/TLS. For example, when Amazon S3 is accessed from a Lambda procedure, it should be addressed with https://bucket.s3-aws-region.amazonaws.com. If any PHI is placed at rest or idled within a running procedure, it should be encrypted client-side or server-side with keys obtained from AWS KMS or AWS CloudHSM. Follow the related guidance for Amazon API Gateway when triggering AWS Lambda functions through the service. When using events from other AWS services to trigger AWS Lambda functions, the event data should not contain (in and of itself) PHI. For example, when a Lambda procedure is triggered from an S3 event, such as the arrival of an object in S3, the object name that is relayed to Lambda should not have any PHI, although the object itself can contain such data.
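The S3-trigger pattern above can be sketched as a minimal handler: the event carries only the bucket and object names (which must not themselves contain PHI), and the object is then addressed over HTTPS. Names below are illustrative, and in a real function the object would be fetched with an AWS SDK call that signs the request rather than a bare URL:

```python
# Hypothetical sketch: a Lambda handler for an S3 event, keeping PHI out
# of the event itself and reaching S3 only over HTTPS. All names are
# illustrative placeholders.
def handler(event, context=None):
    record = event["Records"][0]["s3"]   # standard S3 event shape
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]        # must not itself contain PHI
    # Address S3 over HTTPS so any PHI in the object stays encrypted in transit
    url = f"https://{bucket}.s3-us-east-1.amazonaws.com/{key}"
    return url

event = {"Records": [{"s3": {"bucket": {"name": "example-bucket"},
                             "object": {"key": "records/obj-0001"}}}]}
```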
AWS Managed Services

AWS Managed Services provides ongoing management of AWS infrastructures. By implementing best practices to maintain a customer's infrastructure, AWS Managed Services helps to reduce operational overhead and risk. AWS Managed Services automates common activities, such as change requests, monitoring, patch management, security, and backup services, and provides full-lifecycle services to provision, run, and support infrastructures.

Customers can use AWS Managed Services to manage AWS workloads that operate with data containing PHI. Usage of AWS Managed Services does not alter the AWS services eligible for use with PHI. Tooling and automation provided by AWS Managed Services cannot be used for the storage or transmission of PHI.

AWS Mobile Hub

AWS Mobile Hub provides a set of tools that enable customers to quickly configure AWS services and integrate them into their mobile app. AWS Mobile Hub itself does not store or transmit PHI. Instead, it is used to administer and orchestrate mobile architectures built with other AWS services, including architectures that handle PHI. Architectures that handle PHI should only be built with HIPAA Eligible Services, and PHI should not be placed in metadata for AWS Mobile Hub. AWS Mobile Hub uses AWS CloudTrail to log all actions. For more information, see Logging AWS Mobile CLI API Calls with AWS CloudTrail.

AWS OpsWorks for Chef Automate

AWS OpsWorks for Chef Automate is a fully managed configuration management service that hosts Chef Automate, a set of automation tools from Chef for infrastructure and application management. The service itself does not contain, transmit, or handle any PHI or sensitive information, but customers should ensure that any resources configured by OpsWorks for Chef Automate are configured consistently with the Guidance. API calls are captured with AWS CloudTrail. For more information, see Logging AWS OpsWorks Stacks API Calls with AWS CloudTrail.

AWS OpsWorks for Puppet Enterprise

AWS OpsWorks for Puppet Enterprise is a fully managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet for infrastructure and application management. The service itself does not contain, transmit, or handle any PHI or sensitive information, but customers should ensure that any resource configured by
OpsWorks for Puppet Enterprise is configured consistently with the Guidance. API calls are captured with AWS CloudTrail. For more information, see Logging AWS OpsWorks Stacks API Calls with AWS CloudTrail.

AWS OpsWorks Stacks

AWS OpsWorks Stacks provides a simple and flexible way to create and manage stacks and applications. Customers can use AWS OpsWorks Stacks to deploy and monitor applications in their stacks. AWS OpsWorks Stacks encrypts all traffic while in transit. However, encrypted data bags (a Chef data storage mechanism) are not available, and any assets that must be stored securely, such as PHI, secrets/passwords, master certificates, etc., should be stored in an encrypted bucket in Amazon S3. AWS OpsWorks Stacks uses AWS CloudTrail to log all API calls.

AWS Organizations

AWS Organizations helps customers centrally manage and govern their environment as they grow and scale their AWS resources. Using AWS Organizations, they can programmatically create new AWS accounts and allocate resources, group accounts to organize their workflows, apply policies to accounts or groups for governance, and simplify billing by using a single payment method for all of their accounts. In addition, AWS Organizations is integrated with other AWS services, so customers can define central configurations, security mechanisms, audit requirements, and resource sharing across accounts in their organization. AWS Organizations is available to all AWS customers at no additional charge.

AWS Organizations is an orchestration service that does not directly process, store, or transmit user data. The service does not encrypt customer content, but underlying services that are launched within AWS Organizations do encrypt user data. AWS Organizations is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or AWS service in AWS Organizations.
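One example of the central governance described above is a service control policy (SCP) that prevents member accounts from turning off audit logging, which supports the CloudTrail guidance elsewhere in this paper. This is a hedged sketch; the policy name and description are illustrative, and with boto3 the dict could be passed to `organizations.create_policy()`:

```python
# Hypothetical sketch: an SCP denying actions that would stop or delete
# CloudTrail trails in member accounts. Names are illustrative.
import json

SCP_DENY_STOP_LOGGING = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"],
        "Resource": "*",
    }],
}

def scp_params(name):
    return {
        "Name": name,
        "Description": "Keep audit logging enabled in member accounts",
        "Type": "SERVICE_CONTROL_POLICY",
        "Content": json.dumps(SCP_DENY_STOP_LOGGING),
    }

policy = scp_params("example-deny-stop-logging")
```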
AWS RoboMaker

AWS RoboMaker enables customers to execute code in the cloud for application development and provides a robotics simulation service to accelerate application testing. AWS RoboMaker also provides a robotics fleet management service for remote application deployment, update, and management.

Network traffic containing PHI must encrypt data in transit. All management communication with the simulation server is over TLS, and customers should use open standard transport encryption mechanisms for connections to other AWS services. AWS RoboMaker also integrates with CloudTrail to log all API calls to a specific Amazon S3 bucket. AWS RoboMaker logs do not contain PHI, and the EBS volumes used by the simulation server are encrypted. When transferring data that may contain PHI to other services, such as Amazon S3, customers must follow the receiving service's guidance for storing PHI. For deployments to robots, customers must ensure that encryption of data in transit and at rest is consistent with their interpretation of the Guidance.

AWS SDK Metrics

Enterprise customers can use the AWS CloudWatch agent with AWS SDK Metrics for Enterprise Support (SDK Metrics) to collect metrics from AWS SDKs on their hosts and clients. These metrics are shared with AWS Enterprise Support. SDK Metrics can help customers collect relevant metrics and diagnostic data about their application's connections to AWS services without adding custom instrumentation to their code, and reduces the manual work necessary to share logs and data with AWS Support. Please note that SDK Metrics is only available to AWS customers with an Enterprise Support subscription.

Customers can use SDK Metrics with any application that directly calls AWS services and that was built using an AWS SDK that is one of the versions listed in the AWS SDK Metrics documentation. SDK Metrics monitors calls that are made by the AWS SDK and uses the CloudWatch
agent running in the same environment as a client application. The CloudWatch agent encrypts the data in transit from the local machine to delivery in the destination log group. The log group can be configured to be encrypted by following the directions at Encrypt Log Data in CloudWatch Logs Using AWS KMS.

AWS Secrets Manager

AWS Secrets Manager is an AWS service that makes it easier for customers to manage "secrets." Secrets can be database credentials, passwords, third-party API keys, and even arbitrary text. AWS Secrets Manager might be used to store PHI if such information is contained within "secrets." All secrets stored by AWS Secrets Manager are encrypted at rest using the AWS Key Management Service (AWS KMS). Users can select the AWS KMS key used when creating a new secret. If no key is selected, the default key for the account will be used. AWS Secrets Manager uses AWS CloudTrail to log all API calls.

AWS Security Hub

AWS Security Hub collects and consolidates findings from AWS security services enabled in a customer's environment, such as intrusion detection findings from Amazon GuardDuty, vulnerability scans from Amazon Inspector, Amazon S3 bucket policy findings from Amazon Macie, publicly accessible and cross-account resources from IAM Access Analyzer, and resources lacking WAF coverage from AWS Firewall Manager. AWS Security Hub also consolidates findings from integrated AWS Partner Network (APN) security solutions.

AWS Security Hub integrates with Amazon CloudWatch Events, enabling customers to create custom response and remediation workflows. Customers can easily send findings to SIEMs, chat tools, ticketing systems, Security Orchestration, Automation, and Response (SOAR) tools, and on-call management platforms. Response and remediation actions can be fully automated, or they can be triggered manually in the console. Customers can also use AWS Systems Manager Automation documents, AWS Step Functions, and AWS Lambda functions to build automated remediation workflows that can be initiated from AWS
Security Hub.

To ensure data protection, AWS Security Hub encrypts data at rest and data in transit between component services. Third-party auditors assess the security and compliance of AWS Security Hub as part of multiple AWS compliance programs. AWS Security Hub is part of AWS's SOC, ISO, PCI, and HIPAA compliance programs.

AWS Server Migration Service

AWS Server Migration Service (AWS SMS) automates the migration of on-premises VMware vSphere or Microsoft Hyper-V/SCVMM virtual machines to the AWS Cloud. AWS SMS incrementally replicates server VMs as cloud-hosted Amazon Machine Images (AMIs) ready for deployment on Amazon EC2. Servers running on-premises and being migrated to the cloud with AWS SMS can contain PHI data. AWS SMS encrypts data while in transit and when server VM images are being staged for final placement onto EC2. Refer to the guidance for EC2 and setting up encrypted storage volumes when migrating a server VM containing PHI with AWS SMS. AWS SMS uses CloudTrail to log all API calls.

AWS Serverless Application Repository

The AWS Serverless Application Repository (SAR) is a managed repository for serverless applications. It enables teams, organizations, and individual developers to store and share reusable applications, and easily assemble and deploy serverless architectures in powerful new ways. The applications are AWS CloudFormation templates, which contain definitions of the application infrastructure and compiled binaries of application AWS Lambda function code. Although it is possible for applications that are in the AWS Serverless Application Repository to process PHI, they would only do this after being deployed to a customer's account, and not as part of the SAR itself.

The AWS Serverless Application Repository encrypts files that customers upload, including deployment packages and layer archives. For data in transit, the AWS Serverless
Application Repository uses TLS to encrypt data between the service and the agent. AWS Serverless Application Repository is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, a role, or an AWS service in the AWS Serverless Application Repository.

AWS Service Catalog
AWS Service Catalog allows IT administrators to create, manage, and distribute portfolios of approved products to end users, who can then access the products they need in a personalized portal. AWS Service Catalog is used to catalog, share, and deploy self-service solutions on AWS and cannot be used to store, transmit, or process PHI. PHI should not be placed in any metadata for AWS Service Catalog items or within any item description. AWS Service Catalog uses AWS CloudTrail to log all API calls.

AWS Shield
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards web applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. AWS Shield cannot be used to store or transmit PHI, but it can be used to safeguard web applications that do operate with PHI. As such, no special configuration is needed when engaging AWS Shield. All AWS customers benefit from the automatic protections of AWS Shield Standard at no additional charge. AWS Shield Standard defends against the most common, frequently occurring network and transport layer DDoS attacks that target a website or application. For higher levels of protection against attacks targeting web applications running on Elastic Load Balancing (ELB), Amazon CloudFront, and Amazon Route 53 resources, customers can subscribe to AWS Shield Advanced.

AWS Snowball
With AWS Snowball (Snowball), customers can transfer hundreds of terabytes or petabytes of data between their on-premises data centers and Amazon Simple Storage Service (Amazon S3). PHI
stored in AWS Snowball must be encrypted at rest, consistent with the Guidance. When creating an import job, customers must specify the ARN for the AWS KMS master key to be used to protect data within the Snowball. In addition, during the creation of the import job, customers should choose a destination S3 bucket that meets the encryption standards set by the Guidance. While Snowball does not currently support server-side encryption with AWS KMS-managed keys (SSE-KMS) or server-side encryption with customer-provided keys (SSE-C), Snowball does support server-side encryption with Amazon S3-managed encryption keys (SSE-S3). For more information, see Protecting Data Using Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3). Alternatively, customers can use the encryption methodology of their choice to encrypt PHI before storing the data in AWS Snowball. Currently, customers may use the standard AWS Snowball appliance or AWS Snowmobile as part of our BAA.

AWS Snowball Edge
AWS Snowball Edge connects to existing customer applications and infrastructure using standard storage interfaces, streamlining the data transfer process and minimizing setup and integration. Snowball Edge devices can cluster together to form a local storage tier and process customer data on-site, helping customers ensure that their applications continue to run even when they are not able to access the cloud. To ensure that PHI remains encrypted while using Snowball Edge, customers should use an encrypted connection protocol, such as HTTPS or SSL/TLS, when using AWS Lambda procedures powered by AWS IoT Greengrass to transmit PHI to or from resources external to Snowball Edge. Additionally, PHI should be encrypted while stored on the local volumes of Snowball Edge, whether through local access or via NFS. Encryption is automatically applied to data placed into Snowball Edge using the Snowball Management
Console and API for bulk transport into S3. For more information on data transport into S3, see the related guidance in the AWS Snowball section.

AWS Snowmobile
AWS Snowmobile is operated by AWS as a managed service. As such, AWS will contact the customer to determine requirements for deployment, arrange for network connectivity, and provide assistance moving data. Data stored on Snowmobile is encrypted using the same guidance provided for AWS Snowball.

AWS Step Functions
AWS Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. AWS Step Functions is not able to store, transmit, or process PHI. PHI should not be placed within the metadata for AWS Step Functions or within any task or state machine definition. AWS Step Functions uses AWS CloudTrail to log all API calls.

AWS Storage Gateway
AWS Storage Gateway is a hybrid storage service that enables customers' on-premises applications to seamlessly use AWS Cloud storage. The gateway uses open standard storage protocols to connect existing storage applications and workflows to AWS Cloud storage services for minimal process disruption.

File Gateway
File gateway is a type of AWS Storage Gateway that supports a file interface into Amazon S3, adding to the existing block-based volume and VTL storage. File gateway uses HTTPS to communicate with S3 and stores all objects on S3 encrypted, using SSE-S3 by default or using client-side encryption with keys stored in AWS KMS. File metadata, such as file names, remains unencrypted and should not contain any PHI.

Volume Gateway
Volume gateway provides cloud-backed storage volumes that customers can mount as Internet Small Computer System Interface (iSCSI) devices from on-premises application servers. Customers should attach local disks as upload buffers and cache to the Volume Gateway VM in
accordance with their internal compliance and regulatory requirements. For PHI, it is recommended that these disks be capable of providing encryption at rest. Communication between the Volume Gateway VM and AWS is encrypted using TLS 1.2 to secure PHI in transport.

Tape Gateway
Tape gateway provides a virtual tape library (VTL) interface to third-party backup applications running on-premises. Customers should enable encryption for PHI within the third-party backup application when setting up a tape backup job. Communication between the Tape Gateway VM and AWS is encrypted using TLS 1.2 to secure PHI in transport. Customers using any of the Storage Gateway configurations with PHI should enable full logging. For more information, see What Is AWS Storage Gateway?

AWS Systems Manager
AWS Systems Manager is a unified interface that allows customers to centralize operational data and automate tasks across their AWS resources, shortening the time to detect and resolve operational problems in their infrastructure. Systems Manager provides a complete view of a customer's infrastructure performance and configuration, simplifies resource and application management, and makes it easy to operate and manage infrastructure at scale. When outputting data that may contain PHI to other services, such as Amazon S3, customers must follow the receiving service's guidance for storing PHI. Customers should not include PHI in metadata or identifiers, such as document names and parameter names.

AWS Transfer for SFTP
AWS Transfer for SFTP provides Secure File Transfer Protocol (SFTP) access to a customer's S3 resources. Customers are presented with a virtual server, which is accessed using the standard SFTP protocol at a regional service endpoint. From the point of view of the AWS customer and the SFTP client, the SFTP gateway looks like a standard, highly available SFTP server. Although the service itself does not store, process, or transmit PHI, the resources that the customer is accessing on Amazon S3
should be configured in a manner that is consistent with the Guidance. Customers can also use AWS CloudTrail to log API calls made to AWS Transfer for SFTP.

AWS WAF – Web Application Firewall
AWS WAF is a web application firewall that helps protect customer web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. Customers may place AWS WAF between their end users and web applications hosted on AWS that operate with or exchange PHI. As with the transmission of any PHI while on AWS, data containing PHI must be encrypted while in transit. Refer to the guidance for Amazon EC2 to better understand the available encryption options.

AWS X-Ray
AWS X-Ray is a service that collects data about requests that a customer's application serves and provides tools that can be used to view, filter, and gain insights into that data to identify issues and opportunities for optimization. For any traced request to a customer's application, they can see detailed information not only about the request and response, but also about calls that their application makes to downstream AWS resources, microservices, databases, and HTTP web APIs. AWS X-Ray should not be used to store or process PHI. Information transmitted to and from AWS X-Ray is encrypted by default. When using AWS X-Ray, do not place any PHI within segment annotations or segment metadata.

Elastic Load Balancing
Customers can use Elastic Load Balancing to terminate and process sessions containing PHI. Customers can choose either the Classic Load Balancer or the Application Load Balancer. Because all network traffic containing PHI must be encrypted in transit end-to-end, customers have the flexibility to implement two different architectures: Customers can terminate HTTPS, HTTP/2 over TLS (for Application), or SSL/TLS on Elastic Load Balancing by
creating a load balancer that uses an encrypted protocol for connections. This feature enables traffic encryption between the load balancer and the clients that initiate HTTPS, HTTP/2 over TLS, or SSL/TLS sessions, as well as for connections between the load balancer and customer backend instances. Sessions containing PHI must use encrypted frontend and backend listeners for transport encryption. Customers should evaluate their certificates and session negotiation policies and maintain them consistent with the Guidance. For more information, see HTTPS Listeners for Your Classic Load Balancer. Alternatively, customers can configure Elastic Load Balancing in basic TCP mode (for Classic) or over WebSockets (for Application) and pass through encrypted sessions to backend instances, where the encrypted session is terminated. In this architecture, customers manage their own certificates and TLS negotiation policies in applications running in their own instances. For more information, see Listeners for Your Classic Load Balancer. In both architectures, customers should implement a level of logging that they determine to be consistent with HIPAA and HITECH requirements.

FreeRTOS
FreeRTOS is an operating system for microcontrollers that makes small, low-power edge devices easy to program, deploy, secure, connect, and manage. FreeRTOS is based on the FreeRTOS kernel, a popular open source operating system for microcontrollers, and extends it with software libraries that make it easy to securely connect small, low-power devices to AWS Cloud services like AWS IoT Core, or to more powerful edge devices running AWS IoT Greengrass. Data containing PHI can be encrypted in transit and at rest when using a qualified device running FreeRTOS. FreeRTOS provides two libraries for platform security: TLS and PKCS#11. The TLS API should be used to encrypt and authenticate all network traffic that contains PHI.
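Whichever component terminates TLS, the transport-encryption floor described in the sections above (TLS 1.2 for the Storage Gateway transports, encrypted listeners for ELB) can also be enforced in application code. As a minimal sketch using Python's standard library, a client or backend service can pin the minimum protocol version on its TLS context:

```python
import ssl

# Build a client-side TLS context that refuses anything older than TLS 1.2,
# matching the transport-encryption floor discussed above.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() leaves certificate and hostname validation on;
# both matter when PHI is in transit.
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
```

A socket wrapped with this context will fail the handshake against any endpoint that only offers TLS 1.1 or older, rather than silently downgrading.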
PKCS#11 provides a standard interface for software cryptographic operations and should be used to encrypt any PHI stored on a qualified device running FreeRTOS.

Using AWS KMS for Encryption of PHI
Master keys in AWS KMS can be used to encrypt and decrypt the data encryption keys that encrypt PHI in a customer's applications or in AWS services that use AWS KMS. AWS KMS can be used in conjunction with a HIPAA account, but PHI may only be processed, stored, or transmitted in HIPAA Eligible Services. AWS KMS is normally used to generate and manage keys for applications running in other HIPAA Eligible Services. For example, an application processing PHI in Amazon EC2 could use the GenerateDataKey API call to generate data encryption keys for encrypting and decrypting PHI in the application. The data encryption keys would be protected by a customer's master keys stored in AWS KMS, creating a highly auditable key hierarchy, as API calls to AWS KMS are logged in AWS CloudTrail. PHI should not be stored in the tags (metadata) for any keys stored in AWS KMS.

VM Import/Export
VM Import/Export enables customers to easily import virtual machine images from an existing environment to Amazon EC2 instances and export them back to their on-premises environment. This offering allows customers to leverage existing investments in the virtual machines that they have built to meet their IT security, configuration management, and compliance requirements by bringing those virtual machines into Amazon EC2 as ready-to-use instances. Customers can also export imported instances back to their on-premises virtualization infrastructure, allowing them to deploy workloads across their IT infrastructure. VM Import/Export is available at no additional charge beyond standard usage charges for Amazon EC2 and Amazon S3. To import images, customers can use the AWS CLI or other developer tools to import a virtual machine (VM) image from their VMware environment.
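The envelope pattern described in the AWS KMS section above (a data key encrypts the PHI, and a master key wraps the data key) can be illustrated offline. The sketch below is a deliberately simplified toy that uses one-time pads in place of AWS KMS and AES-GCM; it demonstrates only the key hierarchy, not production cryptography:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """One-time-pad XOR; key must be as long as data (toy stand-in for AES-GCM)."""
    return bytes(a ^ b for a, b in zip(data, key))

phi = b"patient: Jane Doe, dx: J45.909"  # hypothetical PHI record

# KMS would hold the master key; here it is just random bytes of matching length.
master_key = secrets.token_bytes(len(phi))

# Stand-in for kms.generate_data_key(): a fresh data key per object.
data_key = secrets.token_bytes(len(phi))

ciphertext = xor(phi, data_key)          # data key encrypts the PHI
wrapped_key = xor(data_key, master_key)  # master key wraps the data key

# Only ciphertext + wrapped_key are stored; the plaintext data key is discarded.
recovered_key = xor(wrapped_key, master_key)   # stand-in for kms.decrypt()
assert xor(ciphertext, recovered_key) == phi   # round trip succeeds
```

The point of the hierarchy is that the master key never touches the PHI directly, and every unwrap of a data key corresponds to an auditable KMS call logged in CloudTrail.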
If customers use the VMware vSphere virtualization platform, they can also use the AWS Management Portal for vCenter to import their VMs. As part of the import process, VM Import converts the customer VM into an Amazon EC2 AMI, which the customer can use to run Amazon EC2 instances. Once a VM has been imported, customers can take advantage of Amazon's elasticity, scalability, and monitoring via offerings like Auto Scaling, Elastic Load Balancing, and CloudWatch to support their imported images. Customers can export previously imported Amazon EC2 instances using the Amazon EC2 API tools. Simply specify the target instance, virtual machine file format, and a destination Amazon S3 bucket, and VM Import/Export will automatically export the instance to the Amazon S3 bucket, with encryption options to secure the transmission and storage of the VM images. Customers can then download and launch the exported VM within their on-premises virtualization infrastructure. Customers can import Windows and Linux VMs that use VMware ESX or Workstation, Microsoft Hyper-V, and Citrix Xen virtualization formats, and they can export previously imported Amazon EC2 instances to VMware ESX, Microsoft Hyper-V, or Citrix Xen formats. For a full list of supported operating systems, versions, and formats, see VM Import/Export Requirements. AWS plans to add support for additional operating systems, versions, and formats in the future.

Auditing, backups, and disaster recovery
HIPAA's Security Rule has detailed requirements related to in-depth auditing capabilities, data backup procedures, and disaster recovery mechanisms. AWS services contain many features that help customers address these requirements. For example, customers should consider establishing auditing capabilities that allow security analysts to examine detailed activity logs or reports to see who had access, the IP address of entry, what data was accessed, and so on. This data should be tracked, logged, and stored in a
central location for extended periods of time, in case of an audit. Using Amazon EC2, customers can run activity log files and audits down to the packet layer on their virtual servers, just as they do on traditional hardware. They also can track any IP traffic that reaches their virtual server instance. A customer's administrators can back up the log files into Amazon S3 for long-term, reliable storage. HIPAA also has detailed requirements related to maintaining a contingency plan to protect data in case of an emergency: covered entities must create and maintain retrievable exact copies of electronic PHI. To implement a data backup plan on AWS, Amazon EBS offers persistent storage for Amazon EC2 virtual server instances. These volumes can be exposed as standard block devices, and they offer off-instance storage that persists independently from the life of an instance. To align with HIPAA guidelines, customers can create point-in-time snapshots of Amazon EBS volumes that are stored automatically in Amazon S3 and are replicated across multiple Availability Zones, which are distinct locations engineered to be insulated from failures in other Availability Zones. These snapshots can be accessed at any time and can protect data for long-term durability. Amazon S3 also provides a highly available solution for data storage and automated backups. By simply loading a file or image into Amazon S3, multiple redundant copies are automatically created and stored in separate data centers. These files can be accessed at any time, from anywhere (based on permissions), and are stored until intentionally deleted. Moreover, AWS inherently offers a variety of disaster recovery mechanisms. Disaster recovery, the process of protecting an organization's data and IT infrastructure in times of disaster, involves maintaining highly available systems, keeping both the data and the system replicated off-site, and enabling continuous access to both. With Amazon EC2, administrators can start server instances very quickly and can use an Elastic IP address (a static IP address for the cloud computing environment) for graceful failover from one machine to another.
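A snapshot-based backup plan like the one described above usually pairs snapshot creation with a retention policy. The helper below is illustrative only: the snapshot IDs and the seven-day window are assumptions, and a real implementation would obtain snapshots from the EC2 DescribeSnapshots API and delete aged ones with DeleteSnapshot:

```python
from datetime import datetime, timedelta

def snapshots_to_prune(snapshots, now, retention_days=7):
    """Return IDs of snapshots older than the retention window.

    `snapshots` is a list of (snapshot_id, start_time) tuples, mirroring the
    fields a DescribeSnapshots call would return.
    """
    cutoff = now - timedelta(days=retention_days)
    return [sid for sid, started in snapshots if started < cutoff]

now = datetime(2021, 9, 9)
snaps = [
    ("snap-aaa", datetime(2021, 9, 8)),   # 1 day old: keep
    ("snap-bbb", datetime(2021, 8, 25)),  # 15 days old: prune
]
assert snapshots_to_prune(snaps, now) == ["snap-bbb"]
```

The retention window itself should be chosen to satisfy the organization's documented contingency plan, not the default shown here.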
Amazon EC2 also offers Availability Zones. Administrators can launch Amazon EC2 instances in multiple Availability Zones to create geographically diverse, fault-tolerant systems that are highly resilient in the event of network failures, natural disasters, and most other probable sources of downtime. Using Amazon S3, a customer's data is replicated and automatically stored in separate data centers to provide reliable data storage designed for 99.99% availability. For more information on disaster recovery, see the AWS Disaster Recovery whitepaper.

Document revisions
To be notified about updates to this whitepaper, subscribe to the RSS feed.
Added information about AWS Network Firewall (September 9, 2021).
Updated information about Amazon Connect Customer Profiles (August 26, 2021).
Added sections Amazon AppFlow and AWS Glue DataBrew (July 22, 2021).
Updated navigation and organization (April 26, 2021).
Added the following sections: AWS CodeDeploy, AWS CodePipeline, Amazon Aurora, Aurora PostgreSQL, Amazon Textract, Amazon Polly, Amazon FSx, AWS Auto Scaling, AWS Backup, AWS Elastic Beanstalk, AWS Firewall Manager, AWS Organizations, AWS Security Hub, AWS Serverless Application Repository, VM Import/Export, Amazon HealthLake, Amazon EventBridge; updated Amazon Aurora section (March 31, 2021).
Added section on AWS App Mesh and updated AWS Systems Manager content (August 25, 2020).
Added sections Amazon AppStream 2.0, AWS SDK Metrics, AWS Data Exchange, Amazon MSK, Amazon Pinpoint, Amazon Lex, Amazon SES, Amazon Forecast, Amazon Quantum Ledger
Database (QLDB), AWS Cloud Map (May 7, 2020).
Added sections on Amazon CloudWatch, Amazon CloudWatch Events, Amazon Kinesis Data Firehose, Amazon Kinesis Data Analytics, Amazon OpenSearch Service, Amazon DocumentDB (with MongoDB compatibility), AWS Mobile Hub, AWS IoT Greengrass, AWS OpsWorks for Chef Automate, AWS OpsWorks for Puppet Enterprise, AWS Transfer for SFTP, AWS DataSync, AWS Global Accelerator, Amazon Comprehend Medical, AWS RoboMaker, and Alexa for Business (January 1, 2020).
Added sections on Amazon Comprehend, Amazon Transcribe, Amazon Translate, and AWS Certificate Manager (January 1, 2019).
Added sections on Amazon Athena, Amazon EKS, AWS IoT Core and AWS IoT Device Management, Amazon FreeRTOS, Amazon GuardDuty, Amazon Neptune, AWS Server Migration Service, AWS Database Migration Service, Amazon MQ, and AWS Glue (November 1, 2018).
Added sections on Amazon Elastic File System (EFS), Amazon Kinesis Video Streams, Amazon Rekognition, Amazon SageMaker, Amazon Simple Workflow, AWS Secrets Manager, AWS Service Catalog, and AWS Step Functions (June 1, 2018).
Added sections on AWS CloudFormation, AWS X-Ray, AWS CloudTrail, AWS CodeBuild, AWS CodeCommit, AWS Config, and AWS OpsWorks Stacks (April 1, 2018).
Added section on AWS Fargate (January 1, 2018).
Updates made prior to 2018:
Added sections on Amazon EC2 Container Registry, Amazon Macie, Amazon QuickSight, and AWS Managed Services (November 2017).
Added sections on Amazon ElastiCache for Redis and Amazon CloudWatch (November 2017).
Added sections on Amazon SNS, Amazon Route 53, AWS Storage Gateway, AWS Snowmobile, and AWS CloudHSM; updated section on AWS Key Management Service (October 2017).
Added sections
on Amazon Connect, Amazon Kinesis Streams, Amazon RDS MariaDB, Amazon RDS SQL Server, AWS Batch, AWS Lambda, AWS Snowball Edge, and the Lambda@Edge feature of Amazon CloudFront (September 2017).
Added sections on Amazon EC2 Systems Manager and Amazon Inspector (August 2017).
Added sections on Amazon WorkSpaces, Amazon WorkDocs, AWS Directory Service, and Amazon ECS (July 2017).
Added sections on Amazon CloudFront, AWS WAF, AWS Shield, and Amazon S3 Transfer Acceleration (June 2017).
Removed requirement for Dedicated Instances or Dedicated Hosts for processing PHI in EC2 and EMR (May 2017).
Updated list of services to point to the AWS Services in Scope by Compliance Program page; added description for Amazon API Gateway (March 2017).
Updated to newest template (January 2017).
First publication (October 2016).

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Automating Elasticity (March 2018)
This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: aws.amazon.com/whitepapers

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is
for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents: Introduction; Monitoring AWS Service Usage and Costs; Tagging Resources; Automating Elasticity; Automating Time-Based Elasticity; Automating Volume-Based Elasticity; Conclusion

Abstract
This is the sixth in a series of whitepapers designed to support your cloud journey. This paper seeks to empower you to maximize value from your investments, improve forecasting accuracy and cost predictability, create a culture of ownership and cost transparency, and continuously measure your optimization status. It discusses how you can automate elasticity to get the most value out of your AWS resources and optimize costs.

Introduction
In the traditional data center model of IT, once infrastructure is deployed, it typically runs whether it is needed or not, and all the capacity is paid for regardless of how much it gets used. In the cloud, resources are elastic, meaning they can instantly grow or shrink to match the requirements of a specific application. Elasticity allows you to match the supply of resources, which cost money, to demand. Because cloud resources are paid for based on usage, matching needs to utilization is critical for cost optimization. Demand includes both external usage, such as the number of customers who visit a website over a given period, and
internal usage, such as an application team using development and test environments. There are two basic types of elasticity: time-based and volume-based. Time-based elasticity means turning off resources when they are not being used, such as a development environment that is needed only during business hours. Volume-based elasticity means matching scale to the intensity of demand, whether that's compute cores, storage size, or throughput. By combining monitoring, tagging, and automation, you can get the most value out of your AWS resources and optimize costs.

Monitoring AWS Service Usage and Costs
There are a couple of tools that you can use to monitor your service usage and costs to identify opportunities to use elasticity. The Cost Optimization Monitor can help you generate reports that provide insight into service usage and costs as you deploy and operate cloud architecture. These include detailed billing reports, which you can access in the AWS Billing and Cost Management console. The reports provide estimated costs that you can break down in different ways (by period, account, resource, or custom resource tags) to help monitor and forecast monthly charges. You can analyze this information to optimize your infrastructure and maximize your return on investment using elasticity. Cost Explorer is another free tool that you can use to view your costs and find ways to take advantage of elasticity. You can view data for up to the last 13 months, forecast how much you are likely to spend over the next 3 months, and get recommendations on which Reserved Instances to purchase. You can also use Cost Explorer to see patterns in how much you spend on AWS resources over time, identify areas that need further inquiry, and see trends that can help you understand your costs. In addition, you can specify time ranges for the data, as well as view time data by day or by month.
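Breaking estimated costs down by custom resource tags, as the billing reports above allow, is essentially a group-by over line items. The sketch below assumes a simplified record format (service, tag value, USD cost) rather than the actual detailed billing report schema:

```python
from collections import defaultdict

# Hypothetical line items: (service, value of a cost-allocation tag, USD cost).
line_items = [
    ("AmazonEC2", "team-web", 412.50),
    ("AmazonEC2", "team-data", 980.00),
    ("AmazonS3", "team-web", 35.10),
]

# Roll costs up by tag value to assign spend to each team.
costs_by_tag = defaultdict(float)
for service, tag, cost in line_items:
    costs_by_tag[tag] += cost

assert round(costs_by_tag["team-web"], 2) == 447.60
```

The same roll-up, done over real billing data, is what turns tags into cost accountability: each team sees exactly the spend its resources generate.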
Tagging Resources
Tagging resources gives you visibility and control over cloud IT costs, down to seconds and pennies, by team and application. Tagging lets you assign custom metadata to instances, images, and other resources. For example, you can categorize resources by owner, purpose, or environment, which helps you organize them and assign cost accountability. When resources are accurately tagged, automation tools can identify the key characteristics of those resources needed to manage elasticity. For example, many customers run automated start/stop scripts that turn off development environments during non-business hours to reduce costs. In this scenario, Amazon Elastic Compute Cloud (Amazon EC2) instance tags provide a simple way to identify development instances that should keep running.

Automating Elasticity
With AWS, you can automate both volume-based and time-based elasticity, which can provide significant savings. For example, companies that shut down EC2 instances outside of a 10-hour workday can save 70% compared to running those instances 24 hours a day. Automation becomes increasingly important as environments grow larger and more complex, where manually searching for elasticity savings becomes impractical. Automation is powerful, but you need to use it carefully. It is important to minimize risk by giving people and systems only the minimum level of access required to perform necessary tasks. Additionally, you should anticipate exceptions to automation plans and consider different schedules and usage scenarios. A one-size-fits-all approach is seldom realistic, even within the same department. Choose a flexible and customizable approach to accommodate your needs.

Automating Time-Based Elasticity
Most non-production instances can and should be stopped when they are not being used. Although it is possible to manually shut down unused instances, this is impractical at larger scales. Let's consider a few ways to automate time-based elasticity.
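The 70% figure quoted above follows directly from the hours involved: a 10-hour weekday schedule runs 50 of the week's 168 hours. A quick check:

```python
# Hours per week an instance runs on a 10-hour, weekday-only schedule.
scheduled_hours = 10 * 5   # 50
always_on_hours = 24 * 7   # 168

# Fraction of always-on cost avoided by the schedule.
savings = 1 - scheduled_hours / always_on_hours
assert round(savings * 100) == 70  # roughly 70% vs. running 24x7
```

The same arithmetic applied to any schedule (hours per day times days per week, divided by 168) gives the savings ceiling for time-based elasticity before automation overhead.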
AWS Instance Scheduler
The AWS Instance Scheduler is a simple solution that allows you to create automatic start and stop schedules for your EC2 instances. The solution is deployed using an AWS CloudFormation template, which launches and configures the components necessary to automatically start and stop EC2 instances in all AWS Regions of your account. During initial deployment, you simply define the AWS Instance Scheduler's default start and stop parameters and the interval you want it to run. These values are stored in Amazon DynamoDB and can be overridden or modified as necessary. A custom resource tag identifies instances that should receive AWS Instance Scheduler actions. The solution's recurring AWS Lambda function automatically starts and stops appropriately tagged EC2 instances. You can review the solution's custom Amazon CloudWatch metric to see a history of AWS Instance Scheduler actions.

Amazon EC2 API tools
You can stop and terminate instances programmatically using the Amazon EC2 APIs, specifically the StopInstances and TerminateInstances actions. These APIs let you build your own schedules and automation tools. When you stop an instance, the root device and any other devices attached to the instance persist. When you terminate an instance, the root device and any other devices attached during the instance launch are automatically deleted. For more information about the differences between rebooting, stopping, and terminating instances, see Instance Lifecycle in the Amazon EC2 User Guide.

AWS Lambda
AWS Lambda serverless functions are another tool that you can use to shut down instances when they are not being used. You can configure a Lambda function to start and stop instances when triggered by Amazon CloudWatch Events, such as a specific time or a utilization threshold. For more information, read the related AWS Knowledge Center topic.
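The tag-driven start/stop pattern behind the Instance Scheduler and the Lambda approach above boils down to a small decision function. This sketch is illustrative only: the Schedule tag name and its HH:MM-HH:MM value format are assumptions, and a real function would follow the decision with StopInstances or StartInstances calls:

```python
def desired_state(tags: dict, hour: int) -> str:
    """Return 'running' or 'stopped' for an instance, given its tags and the hour.

    The Schedule tag value is assumed to look like '08:00-18:00' (24-hour clock).
    Untagged instances are left running, so automation never touches them.
    """
    schedule = tags.get("Schedule")
    if schedule is None:
        return "running"
    start, stop = (int(part.split(":")[0]) for part in schedule.split("-"))
    return "running" if start <= hour < stop else "stopped"

assert desired_state({"Schedule": "08:00-18:00"}, 12) == "running"
assert desired_state({"Schedule": "08:00-18:00"}, 22) == "stopped"
assert desired_state({}, 3) == "running"
```

Running such a function on a recurring schedule, against the tags returned for each instance, reproduces the core of the start/stop automation described above.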
AWS Data Pipeline
AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. It can be used to stop and start Amazon EC2 instances by running AWS Command Line Interface (AWS CLI) commands on a set schedule. AWS Data Pipeline runs as an AWS Identity and Access Management (IAM) role, which eliminates key management requirements.

Amazon CloudWatch
Amazon CloudWatch is a monitoring service for AWS Cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics and log files, set alarms, and automatically react to changes in your AWS resources. You can use Amazon CloudWatch alarms to automatically stop or terminate EC2 instances that have gone unused or underutilized for too long. You can stop your instance if it has an Amazon Elastic Block Store (Amazon EBS) volume as its root device. A stopped instance retains its instance ID and can be restarted; a terminated instance is deleted. For more information on the difference between stopping and terminating instances, see Stop and Start Your Instance in the Amazon EC2 User Guide. For example, you can create a group of alarms that first sends an email notification to developers whose instance has been underutilized for 8 hours, and then terminates that instance if its utilization has not improved after 24 hours. For instructions on using this method, see the Amazon CloudWatch User Guide.

Automating Volume-Based Elasticity
By taking advantage of volume-based elasticity, you can scale resources to match capacity. The best tool for accomplishing this task is Amazon EC2 Auto Scaling, which you can use to optimize performance by automatically increasing the number of EC2 instances during demand spikes and decreasing capacity during lulls to reduce costs. Amazon EC2 Auto Scaling is well suited both for applications that have stable demand patterns and for ones that experience hourly, daily, or weekly variability in usage.
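The scale-out/scale-in behavior described above can be approximated with target-tracking arithmetic: the fleet grows or shrinks so that the measured metric lands near a target value. A simplified sketch follows (in a real deployment the target, bounds, and cooldowns come from an Auto Scaling policy, not hand-rolled code):

```python
import math

def desired_capacity(current: int, metric: float, target: float,
                     minimum: int = 1, maximum: int = 20) -> int:
    """Scale the instance count so average utilization approaches `target`."""
    proposed = math.ceil(current * metric / target)
    # Clamp to the fleet's configured bounds.
    return max(minimum, min(maximum, proposed))

# Demand spike: 4 instances at 90% CPU against a 60% target -> scale out to 6.
assert desired_capacity(4, 90.0, 60.0) == 6
# Lull: 4 instances at 15% CPU -> scale in to 1, reducing cost.
assert desired_capacity(4, 15.0, 60.0) == 1
```

The clamp at the end is what keeps automated scaling safe: no matter how extreme the metric, capacity never leaves the range the operator chose.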
AWS Auto Scaling to automatically scale resources for other AWS services, including:

• Amazon Elastic Container Service (Amazon ECS) – You can configure your Amazon ECS service to use AWS Auto Scaling to adjust its desired count up or down in response to CloudWatch alarms. For more information, read the documentation.
• Amazon EC2 Spot Fleets – A Spot Fleet can either launch instances (scale out) or terminate instances (scale in), within the range that you choose, in response to one or more scaling policies. For more information, read the documentation.
• Amazon EMR clusters – Auto Scaling in Amazon EMR allows you to programmatically scale out and scale in core and task nodes in a cluster, based on rules that you specify in a scaling policy. For more information, read the documentation.
• Amazon AppStream 2.0 stacks and fleets – You can define scaling policies that adjust the size of your fleet automatically, based on a variety of utilization metrics, and optimize the number of running instances to match user demand. You can also choose to turn off automatic scaling and make the fleet run at a fixed size. For more information, read the documentation.
• Amazon DynamoDB – You can dynamically adjust provisioned throughput capacity in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic without throttling. When the workload decreases, AWS Auto Scaling decreases the throughput so that you don't pay for unused provisioned capacity. For more information, read the documentation. You can also read our blog post, Auto Scaling for Amazon DynamoDB.

Conclusion

The elasticity of cloud services is a powerful way to optimize costs. By combining tagging, monitoring, and automation, your organization can match its spending to its needs and put resources where they provide the most value. For more information about elasticity and
other cost management topics, see the AWS Billing and Cost Management documentation.

Automation tools can help minimize some of the management and administrative tasks associated with an IT deployment. Similar to the benefits from application services, an automated or DevOps approach to your AWS infrastructure will provide scalability and elasticity with minimal manual intervention. This also provides a level of control over your AWS environment and the associated spending. For example, when engineers or developers are allowed to provision AWS resources only through an established process and use tools that can be managed and audited (for example, a provisioning portal such as AWS Service Catalog), you can avoid the expense and waste that results from simply turning on (and, most often, leaving on) standalone resources.

Contributors

The following individuals and organizations contributed to this document:
• Amilcar Alfaro, Sr. Product Marketing Manager, AWS
• Erin Carlson, Marketing Manager, AWS
• Keith Jarrett, WW BD Lead – Cost Optimization, AWS Business Development

Document History
March 2020 – Minor revisions
March 2018 – First publication

Automating Governance: A Managed Service Approach to Security and Compliance on AWS

August 2015

THIS PAPER HAS BEEN ARCHIVED. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2015 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use
of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Abstract
Introduction
Shared Responsibility Environment
Compliance Requirements
Compliance and Governance
Challenges in Architecting for Governance
Implementing a Managed Services Organization
Standardizing Architecture for Compliance
Architectural Baselines
The Shared Services VPC
Automating for Compliance
Automating Compliance for EC2 Instances
Development & Management
Deployment
Automating for Governance: High-Level Steps
Step 1: Define Common Use Cases
Step 2: Create and Document Reference Architectures
Step 3: Validate and Document Architecture Compliance
Step 4: Build Automated Solutions Based on Architecture
Step 5: Develop an Accreditation and Approval Process
Conclusion
Contributors
Notes

Abstract

This whitepaper is intended for existing and potential Amazon Web Services (AWS) customers who are implementing security controls for applications running on AWS. It provides guidelines for developing and implementing a managed service approach to deploying applications in AWS. The guidelines described provide enterprise customers with greater control over their applications while accelerating the process of deploying, authorizing, and monitoring these applications. This paper is targeted at IT decision makers and security personnel, and assumes familiarity with basic
networking, operating system, data encryption, and operational control security practices.

Introduction

Governance encompasses an organization's mission, long-term goals, responsibilities, and decision making. Gartner describes governance as "the processes that ensure the effective and efficient use of IT in enabling an organization to achieve its goals." [1] An effective governance strategy defines both the frameworks for achieving goals and the decision makers who create them:

• Frameworks – The policies, principles, and guidelines that drive consistent IT decision making
• Decision makers – The entities or individuals who are responsible and accountable for IT decisions

Well-developed frameworks ultimately can yield an efficient, secure, and compliant technology environment. This paper describes how to develop and automate these frameworks by introducing the following concepts and practices:

• A managed service organization (MSO) that is part of a centralized cloud governance model
• Roles and responsibilities of the MSO on the customer side of the AWS shared responsibility model
• Shared services and the use of Amazon Virtual Private Cloud (Amazon VPC) within AWS
• Architectural baselines for establishing minimum configuration requirements for applications being deployed in AWS
• Automation methods that can facilitate application deployment and simplify compliance accreditation

Shared Responsibility Environment

Moving IT infrastructure to services in AWS creates a model of shared responsibility between the customer and AWS. This shared model helps relieve the operational burden on the customer, because AWS operates, manages, and controls the IT components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate. The customer assumes
responsibility for, and management of, the guest operating system (including responsibility for updates and security patches) and other associated application software, as well as the configuration of the AWS-provided security group firewall. Customers must carefully consider the services they choose, because their responsibilities vary depending on the services they use, the integration of those services into their IT environment, and applicable laws and regulations.

Figure 1: The AWS Shared Responsibility Model

This customer/AWS shared responsibility model also extends to IT controls. Just as AWS and its customers share the responsibility for operating the IT environment, they also share the management, operation, and verification of IT controls. AWS can help relieve the customer of the burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that might previously have been managed by the customer. Customers can shift the management of certain IT controls to AWS, which results in a (new) distributed control environment. Customers can then use the AWS control and compliance documentation to perform their control evaluation and verification procedures, as required under the applicable compliance standard.

Compliance Requirements

The infrastructure and services provided by AWS are approved to operate under several compliance standards and industry certifications. These certifications cover only the AWS side of the shared responsibility model; customers retain the responsibility for certifying and accrediting workloads that are deployed on top of the AWS-provided services that they run. The following common compliance standards have unique requirements that customers must consider:

• NIST SP 800-53 [2] – Published by the National Institute of Standards and Technology (NIST), NIST SP 800-53 is a catalog of security controls with which most US federal
agencies must comply, and which is widely used within private-sector enterprises. Provides a risk management framework that adheres to the Federal Information Processing Standard (FIPS).
• FedRAMP [3] – A US government program for ensuring standards in security assessment, authorization, and continuous monitoring. FedRAMP follows the NIST 800-53 security control standards.
• DoD Cloud Security Model (CSM) [4] – Standards for cloud computing issued by the US Defense Information Systems Agency (DISA) and documented in the Department of Defense (DoD) Security Requirements Guide (SRG). Provides an authorization process for DoD workload owners, who have unique architectural requirements depending on impact level.
• HIPAA [5] – The Health Insurance Portability and Accountability Act (HIPAA) contains strict security and compliance standards for organizations processing or storing Protected Health Information (PHI).
• ISO 27001 [6] – ISO 27001 is a widely adopted global security standard that outlines the requirements for information security management systems. It provides a systematic approach to managing company and customer information that is based on periodic risk assessments.
• PCI DSS [7] – The Payment Card Industry (PCI) Data Security Standards (DSS) are strict security standards for preventing fraud and protecting cardholder data for merchants that process credit card payments.

Evaluating systems in the cloud can be a challenge unless there are architectural standards that align with compliance requirements. These architectural standards are especially critical for customers who must prove their systems meet strict compliance standards before they are permitted to go into production.

Compliance and Governance

AWS customers are required to continue to maintain adequate governance over the entire IT control environment, regardless of whether it is deployed in a traditional data center or in the cloud. Leading
governance practices include:

• Understanding required compliance objectives and requirements (from relevant sources)
• Establishing a control environment that meets those objectives and requirements
• Understanding the validation required, based on the organization's risk tolerance
• Verifying the operational effectiveness of the control environment

Deployment in the AWS cloud gives organizations options to apply various types of controls and verification methods. Workload owners can follow these basic steps to ensure strong governance and compliance:

1. Review information from AWS and other sources to understand the entire IT environment.
2. Document all compliance requirements.
3. Design and implement control objectives to meet the organization's compliance requirements.
4. Identify and document controls owned by outside parties.
5. Verify that all control objectives are met and all key controls are designed and operating effectively.

Approaching compliance governance in this manner will help customers gain a better understanding of their control environment and help clearly define the verification activities that must be performed. For more information on governance in the cloud, see Security at Scale: Governance in AWS. [8]

Challenges in Architecting for Governance

AWS provides a high level of flexibility in how customers can design architectures for their applications in the cloud. AWS has documented best practices in the whitepapers, user guides, API references, and other resources that describe how to design for elasticity, availability, and security. But these resources alone do not prevent bad design and improper configuration. Architectural decisions that impact security can put customer data or personal information at risk and create liability. Consider the following challenges:

• Building a single workload with different architecture choices that is still compliant
• The need to individually
assess each of these unique architectures
• The high level of flexibility leaves room for error, and serious mistakes can be resolved only by redeployment of the application
• Security analysts may not understand the differences between the many architectural decisions

Learning Curve

By deploying applications in AWS, workload owners and developers have a much greater level of control over, and access to, resources beyond the operating system and software. However, the number of decisions required when building an architecture can be overwhelming for those new to AWS. Some of these architectural decisions include how to address:

• Amazon VPC structure and network controls
• AWS Identity and Access Management (IAM) configuration, policies, and permissions; Amazon Simple Storage Service (Amazon S3) bucket policies
• Storage and database options
• Load balancing
• Monitoring options, alerts, and tagging
• Aggregation, analysis, and storage considerations for logging produced by a workload or AWS service

Implementing a Managed Services Organization

To implement governance, AWS customers have begun establishing centralized teams within their organizations that facilitate the migration of legacy applications and the development of new applications. Such a team can be called a provisioning team, a center of excellence, a broker, or, most commonly, the managed service organization (MSO), which is the term we use. Customers use an MSO to establish repeatable processes and templates for deploying applications to AWS while maintaining organizational control over their enterprise's applications. When the MSO function is outsourced, it is generally referred to as a managed service partner (MSP). Many MSPs are validated by AWS under our Managed Service Program. [9]

Understanding the enterprise's cloud governance model is key to determining the provisioning strategy for accounts, Amazon VPCs, and applications, and for deciding how to
automate these processes. Large enterprises generally manage cloud operations centrally at some level. It is important to find the optimal balance between central management and decentralized control. [10]

In a centralized governance model, an MSO provides the minimum requirements for workload owners who are deploying applications in the cloud:

• Guardrails for security, data protection, and disaster recovery
• Shared services for security, continuous monitoring, connectivity, and authentication
• Auditing the deployments of workload owners to ensure adherence to security and compliance standards

For most large enterprises, there are typically two sets of cloud governance roles involved in the deployment of applications:

• MSO – As previously mentioned, a component of centralized cloud governance; responsibilities can include account provisioning, establishment of connectivity and Amazon VPC networking, security auditing, hosting of shared services, and billing and cost management
• Workload owners – Those who are directly responsible for the deployment, development, and maintenance of applications; a workload owner can be a cost center or a department, and may include system administrators, developers, and others directly responsible for one or more applications

Enterprise customers establish an MSO when there are common functions that can be centralized to ensure that applications are deployed in a secure and compliant fashion. The MSO can also accelerate the rate of migration through reuse of approved configurations, which minimizes development and approval time while ensuring compliance through the automated implementation of organizational security requirements.

Figure 2: Shared Responsibility Between the CSP, the MSO, and the Workload Owner

Adding an MSO allows the authorization documentation of the workload
owner to be scoped down to only the configuration and installation of software specific to a particular application, because the workload owner inherits a significant portion of the security control implementation from AWS and the organization's MSO. Establishing an MSO requires some up-front work, but this investment provides enhanced control over applications, increased speed to deployment, decreased time to authorization, and overall enhancement of the enterprise's security posture.

Common Activities of the MSO

MSOs implemented by AWS customers often perform the following activities:

• Account provisioning. After reviewing the workload owner's use case, the MSO establishes the initial account, connects it to the appropriate account for consolidated billing, and configures basic security functionality prior to granting access to the workload owner.
• Security oversight. Centralized account provisioning allows the MSO to implement features that enable security personnel to monitor the application as it is deployed and managed; the MSO might perform activities such as establishing an auditor group with cross-account access and linking the application VPC to a shared services VPC that is controlled by the MSO.
• Amazon VPC configuration. Deploying the VPC and its subnets, including configuring security groups and network ACLs. To maintain tighter control over the application VPCs, the MSO may retain control of VPC configuration and require the workload owner to request desired changes to network security.
• IAM configuration. Creating user groups and assigning rights, including creation of groups for internal auditors, an IAM superuser, and application administrative groups segregated by functionality (e.g., database and Unix administrators).
• Development and approval of templates. Creating pre-approved AWS CloudFormation templates for common use cases. Using templates allows workload owners to inherit the
security implementation of the approved template, thereby limiting their authorization documentation to the features that are unique to their application. Templates can be reused to shorten the time required to approve and deploy new applications.
• AMI creation and management. Creating a library of common, approved Amazon Machine Images (AMIs) for the organization, allowing centralized management and updating of machine images. Creating common templates allows the MSO to enforce the use of approved AMIs.
• Development of a shared services VPC. A shared services VPC allows the MSO to receive continuous monitoring feeds from the organization's application VPCs and to provide common shared services that are required for their organization. This often includes a shared access management platform, logging endpoints, and the aggregation of configuration information.

Standardizing Architecture for Compliance

The solution to the challenge of implementing security controls for applications running on AWS is to build standardized, automated, and repeatable architectures that can be deployed for common use cases. Automation can help customers easily meet the foundational requirements for building a secure application in the AWS cloud while providing a level of uniformity that follows proven best practices.

Architectural Baselines

To determine the best method for standardizing and automating architecture in AWS, establish baseline requirements up front. These are the minimum common requirements to which most (or all) workloads must adhere. An enterprise's baseline requirements normally follow preexisting compliance controls, regulatory guidelines, security standards, and best practices. Typically, standard architectures are established by a central department or group of individuals who are also involved in the monitoring, auditing, and evaluation of the systems being deployed, based upon their baseline compliance and
operational requirements. Standard architectures can be shared among multiple applications and use cases within an organization. This provides efficiency and uniformity, and reduces the time and effort spent in designing architectures for new applications on AWS. In an organization with a centralized cloud model, these standard architectures are deployed during the account provisioning or application onboarding process.

Access Control/IAM Configuration

IAM is central to securely controlling access to AWS resources. Administrators can create users, groups, and roles with specific access policies to control which actions users and applications can perform through the AWS Management Console or AWS API. Federation allows IAM roles to be mapped to permissions from central directory services. The enterprise should determine how to implement the following IAM controls:

• Standard users, groups, or both that will exist in every account
• Cross-account roles or federated roles
• Roles for EC2 instances and application access to the AWS API
• Roles requiring access to S3 buckets and other shared resources
• Security requirements such as password policies and multi-factor authentication (MFA)

Networking/VPC Configuration

Network boundaries and components are critical to deploying a secure architecture in the cloud. An Amazon VPC is a logically isolated section of the AWS cloud that can be configured to enforce these network boundaries. An AWS account can have one or more Amazon VPCs. Subnets are logical groupings of IP address space within an Amazon VPC, and exist within a single Availability Zone (AZ). A VPC strategy depends on the requirements of a common use case: Amazon VPCs can be designated based on application lifecycle (production, development) or on role (management, shared services). A well-documented Amazon VPC strategy will also take into account:

• The number of Amazon VPCs per AWS account
• The subnet
structure within an Amazon VPC: the number of subnets and the routing capabilities of each subnet
• High-availability requirements: Amazon VPC subnets across Availability Zones (AZs)
• Connectivity options: internet gateways, virtual private gateways, and routing

AWS provides the components necessary for controlling the network boundaries of an application in an Amazon VPC. The following examples of Amazon VPC networking controls can be utilized in AWS:

• VPC routing tables – Implementation: control which VPC subnets may communicate directly with the internet. Protection provided: segmentation and a broad reduction of attack surface area per subnet.
• VPC network access control lists (NACLs) – Implementation: subnet level; all traffic allowed by default; stateless filtering designed and implemented across one or more VPC subnets. Protection provided: blacklist protection for ports and protocols with security concerns, such as TFTP and NetBIOS.
• VPC security groups – Implementation: hypervisor level; all inbound connections denied by default; stateful filtering designed for one or more instances. Protection provided: whitelist abilities for ingress and egress traffic, opening only the services and protocols required by the instance and its applications.
• Host-based protection – Implementation: customer-selected software to provide intrusion detection and prevention, firewall, and/or logging capabilities. Protection provided: depending on the product implemented, scalable protection and detection capabilities and security-behavior visibility across your virtual fleet.

Because VPC networking configuration is critical to ensuring the confidentiality, integrity, and availability of an application, enterprises should define standards that adhere to security and AWS best practices. MSOs should follow these standards, or, in the case of decentralized deployment, workload owners should have a blueprint to follow when building a VPC structure.

Resource Tagging

Almost all AWS resources allow the addition of user-defined tags. These tags are metadata and are irrelevant to the functionality of the resource, but they are critical for cost management and access control. When multiple groups of users or multiple workload owners exist within the same AWS account, restricting access to resources based on tagging is important. Regardless of account structure, tag-based IAM policies can be used to place extra security restrictions on critical resources. The following example of an IAM policy specifies a condition that restricts an IAM user to changing the state of EC2 instances that have the resource tag "project = 12345":

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:StopInstances",
        "ec2:RebootInstances",
        "ec2:TerminateInstances"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/project": "12345"
        }
      },
      "Resource": [
        "arn:aws:ec2:your_region:your_account_ID:instance/*"
      ],
      "Effect": "Allow"
    }
  ]
}

AWS recommends the following to effectively use resource tagging:

• Establish tagging baselines that define common keys and expected values across all accounts
• Implement tag enforcement through both auditing and automation methods
• Use automated deployment with AWS CloudFormation to automatically tag resources

AMI Configuration

Organizations commonly ensure security and compliance by centrally providing workload owners with prebuilt Amazon Machine Images (AMIs). These "golden" AMIs can be preconfigured with host-based security software and hardened based on predetermined security guidelines. Workload owners and developers can then use the AMIs as starting images on which to install their own software and configuration, knowing the images are already compliant. Note that managing centrally distributed AMIs can be an involved task for any central team. Do not customize software and configuration that are likely to change frequently in an AMI; instead, configure them by using Amazon Elastic Compute Cloud (Amazon EC2) user data scripts or automation tools such as Chef, Puppet, or AWS OpsWorks.

Figure 3: Differences Between Fully Configured and Base AMIs

Figure 3 shows how preconfigured AMIs can be used, through automation and policy, as the standard to control which new EC2 instances are deployed by workload owners. Building AMIs can be partially automated by using tools such as Aminator and Packer. [11]

Continuous Monitoring

Continuous monitoring is the proactive approach of identifying risk and compliance issues by accurately tracking and monitoring system activity. Certain compliance standards, such as NIST SP 800-53, require continuous monitoring to meet specific security controls. AWS includes several services and native capabilities that can facilitate a continuous monitoring solution in the cloud.

AWS CloudTrail

AWS CloudTrail is a service that logs API activity within an AWS account and delivers these logs to an Amazon Simple Storage Service (Amazon S3) bucket. This data can be analyzed with third-party tools such as Splunk, Alert Logic, or CloudCheckr. [12] As a security standard, CloudTrail should be enabled on all accounts and should log to a bucket that is accessible by security tools and applications.

Amazon CloudWatch Alarms

Amazon CloudWatch alarms notify users and applications when events related to AWS resources occur. For example, the failure of an instance can trigger an alarm that sends an Amazon Simple Notification Service (Amazon SNS) notification by email to a group of users. You can create common alarms for metrics and events within an account that must be monitored.

Centralized Logging

In AWS, application logs can be centralized for analysis by security tools. This can be simplified by using Amazon CloudWatch Logs. CloudWatch Logs provides an agent, which can be configured
to send application log data directly to CloudWatch. Metric filters can then be used to track certain events and activity at the OS and application levels.

Notifications

Amazon SNS can be used to send email or SMS-based notifications to administrative and security staff. Within an AWS account, you can create Amazon SNS topics to which applications and AWS CloudFormation deployments can publish. These push notifications can automatically be sent to individuals or groups within the organization who need to be notified of Amazon CloudWatch alarms, resource deployments, or other activity published by applications to Amazon SNS.

AWS Config

AWS Config is a service that provides you with an AWS resource inventory, a configuration history, and configuration change notifications, all of which enable security and governance. [13] AWS Config allows detailed tracking and notification whenever a resource in an AWS account is created, modified, or deleted.

The Shared Services VPC

Our enterprise customers have found that establishing a single Amazon VPC that contains the security applications required for monitoring their applications simplifies centralized control of infrastructure and provides easier access to common features such as Network Time Protocol (NTP) servers, directory services, and certificate management repositories.

Figure 4: A Sample Shared-Services Amazon VPC Approach for DoD Customers

Figure 4 provides an example of a shared services VPC approach used by a DoD MSO that establishes two VPCs for use by all of its applications. In the first VPC, the MSO established a VPC dedicated to providing a web application firewall that screens all traffic for known attack patterns and creates a single point for monitoring web traffic, yet does not create a single point of failure, due to its ability to scale with traffic. In the second VPC, the MSO hosts a variety of common services, including Active Directory servers, DNS
servers, NTP servers, Host-Based Security System (HBSS) ePolicy Orchestrator (ePO) rollup servers, and a master Assured Compliance Assessment Solution (ACAS) Security Center server. Each organization must determine the common services that it must host in its AWS environment to support the needs of workload owners.

Automating for Compliance

Any customer can create prebuilt and customizable reference architectures with the tools AWS provides, although doing so requires a level of effort and expertise.

Automation Methods

AWS CloudFormation is the core of AWS infrastructure automation. The service allows you to automatically deploy complete architectures by using prebuilt JSON-formatted template files. The set of resources created by an AWS CloudFormation template is referred to as a "stack."

Modular Design for Compliance Automation

When building enterprise-wide AWS CloudFormation templates to automate compliance, we recommend that you use a modular design. Use separate stacks based on the commonality of configuration among applications. This can automate and enforce the baseline standards for security and compliance described in the previous sections. Figure 5 shows how a customer can develop and maintain AWS CloudFormation templates using a modular design. A single workload would use one template from each of these stacks, nested in a single template, to deploy and configure an entire application.

Figure 5: AWS CloudFormation Stacks

Stack 1 – Stack 1 is the primary security template applied to each account; it deploys common IAM users, roles, groups, and associated policies.

Stack 2 – Generally, there will be a template for each common use case to deploy the associated VPC architecture; this can take into account connectivity options such as VPC peering, NAT instances, and internet and virtual private gateways.

Stack 3 – There
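The nesting of one template per stack layer can be sketched as a parent template built in Python. This is a minimal sketch under stated assumptions: the S3 template URLs and logical resource names are illustrative, while `AWS::CloudFormation::Stack` is the real resource type for nesting and `DependsOn` is the documented attribute for ordering.

```python
import json

# Sketch: a parent CloudFormation template that nests one child template
# per layer (IAM, VPC, application), mirroring the modular stack design.
# The S3 template URLs passed in are illustrative assumptions.

def parent_template(iam_url, vpc_url, app_url):
    """Return a CloudFormation template (as a dict) that deploys the
    IAM, VPC, and application stacks in dependency order."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "IamBase": {
                "Type": "AWS::CloudFormation::Stack",
                "Properties": {"TemplateURL": iam_url},
            },
            "VpcArchitecture": {
                "Type": "AWS::CloudFormation::Stack",
                "DependsOn": "IamBase",  # deploy after the IAM stack
                "Properties": {"TemplateURL": vpc_url},
            },
            "Application": {
                "Type": "AWS::CloudFormation::Stack",
                "DependsOn": "VpcArchitecture",  # deploy last
                "Properties": {"TemplateURL": app_url},
            },
        },
    }

doc = json.dumps(parent_template(
    "https://s3.amazonaws.com/example-mso-templates/iam-base.json",
    "https://s3.amazonaws.com/example-mso-templates/vpc-arch1.json",
    "https://s3.amazonaws.com/example-mso-templates/app2.json"), indent=2)
```

Serializing the dict with `json.dumps` yields a JSON template file of the kind the paper describes, which a workload owner could deploy as a single "package" stack.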
is a template for each common configuration of an application architecture. These templates contain application-related components that are common among multiple applications but distinct among use cases, such as Elastic Load Balancing load balancers, SSL configuration, common security groups, and common S3 buckets.

Stack 4 – There is a template for each specific application; it deploys the associated EC2 instances, Auto Scaling groups, and other instance-level resources. In this stack, instances can be bootstrapped with required user data, and other resources, such as application-specific security groups, can be created.

Use Case Packages

Building templates in this manner allows you to reuse configurations. For specific use cases and application types, you can use "packages" that consist of multiple templates nested within a single main template to deploy an entire architecture, as shown in Figure 6.

Figure 6: Example Package That Includes IAM Base Configuration, VPC Architecture 1, Application Architecture 2, and APP2 Template

An organization with a decentralized cloud governance model can use this automation structure to establish "blueprint" architectures and allow workload owners full control of deployment at all levels. In contrast, an organization with a centralized cloud team that is responsible for provisioning might allow workload owners to provision only the application-level components of the architecture, while retaining responsibility for initial account provisioning, IAM controls, and Amazon VPC configuration.

To successfully build templates to automate compliance:

• Keep templates modular; use nested stacks when possible
• Use parameters as much as necessary to ensure flexibility
• Use the DependsOn attribute and wait conditions to prevent dependency issues when resources are deployed
• Develop a version control process to maintain template packages
• Allow for command line interface (CLI)-based or AWS Service Catalog-based deployment
• Use a parameters file
• Use IAM policies to restrict the ability of users to delete AWS CloudFormation stacks

Automating Compliance for EC2 Instances

There are four tools for automating the configuration of EC2 instances at the operating system and application levels to meet compliance requirements.

Custom AMIs

AWS allows you to create customized AMIs that can be built and hardened for use by workload owners, who can then install further software and applications. Building a compliant AMI may require you to take into account the following:

• Software packages and updates
• Password policies
• SSH keys
• File system permissions/ownership
• File system encryption
• User/group configuration
• Access control settings
• Continuous monitoring tools
• Firewall rules
• Running services

User Data Scripts

You can employ user data to bootstrap EC2 instances to install packages and perform configuration on launch. Use user data to directly manipulate instance configuration with any of the following tools:

• Cloud-init directives – Specify configuration parameters in user data, which cloud-init can use to directly modify configuration. An example of a directive is "packages", which can install a list of specific packages on the instance.
• Shell scripts – Include Bash or PowerShell scripts directly in user data to run on instance launch. There is a 16 KB raw data limit on user data, which limits this option.
• External scripts – A user data script can pull down a larger shell script from an S3 bucket URL or any other location and run that script to further configure the instance.

Configuration Management Software

Configuration management solutions allow continuous management of instance configuration. This can automate consistency among instances and make managing changes easier.
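The user data bootstrapping options above can be sketched in a few lines. The following is an illustrative composition, not taken from this whitepaper, of a bootstrap script that enforces the 16 KB raw limit before base64-encoding the script for the EC2 API; the package names are hypothetical.

```python
import base64

USER_DATA_LIMIT = 16 * 1024  # EC2 rejects raw user data larger than 16 KB

def build_user_data(packages):
    """Compose a bootstrap shell script for EC2 user data.

    Raises an error if the script exceeds the raw limit, in which case
    the "external script" approach (pulling a larger script from S3)
    is the better option.
    """
    lines = ["#!/bin/bash", "yum update -y"]
    lines += ["yum install -y %s" % p for p in packages]
    script = "\n".join(lines) + "\n"
    if len(script.encode("utf-8")) > USER_DATA_LIMIT:
        raise ValueError("user data exceeds the 16 KB raw limit; "
                         "pull a larger script from S3 instead")
    # The EC2 API expects user data to be base64-encoded.
    return base64.b64encode(script.encode("utf-8")).decode("ascii")
```

Passed as the user data of an EC2 instance in an application-level (Stack 4) template, a script like this runs once at first boot.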
Examples of such solutions include:

• Chef
• Puppet
• Ansible
• SaltStack
• AWS OpsWorks

By using these configuration management solutions, you can build scripts and packages to secure an operating system. These hardening operations can include modifying user access or file system permissions, disabling services, making firewall changes, and many other operations used to secure a system and reduce its attack surface. The following example of a Chef recipe implements a password age policy:

template '/etc/login.defs' do
  source 'login.defs.erb'
  mode 0444
  owner 'root'
  group 'root'
  variables(
    password_max_age: node['auth']['pw_max_age'],
    password_min_age: node['auth']['pw_min_age']
  )
end

You can design packages of configuration scripts, for example Puppet modules or Chef cookbooks, based on specific compliance requirements and apply them to instances that must meet those requirements.

Containers

Containerization with applications such as Docker14 or Amazon EC2 Container Service (Amazon ECS)15 allows one or more applications to run independently on a single instance within an isolated user space.

Figure 7: Containerization

From a compliance perspective, containers can be prebuilt with a standardized and hardened configuration based on the operating system and application.

Development & Management

Using a modular approach and a common structure for templates simplifies updates and enforces uniform development by those responsible for creating new use case packages. We recommend using the following elements when developing and managing AWS CloudFormation template packages that are architected for compliance.

Outputs

The Outputs section of a template can include custom information and can be used to retrieve the IDs of generated resources when nested stacks are used. It can also be used to provide general information that
can be viewed from the AWS CloudFormation console or from the CLI/API describe-stacks call. The Outputs sections of template files should include, at minimum, the following reference information:

• Use case/application type
• Compliance type
• Date created
• Maintained by

Parameters

AWS CloudFormation parameters16 are fields that allow users to pass data to the template upon launch. Use parameters whenever possible. You can design an entire set of AWS CloudFormation templates for a common use case by using highly customized parameters. For example, most tiered web applications share a similar architecture. For this type of use case, you can develop a complete four-stack template package so that multiple web-based applications can easily be deployed with the same template files, with the user specifying parameters for AMIs and other application-specific resources.

Conditions

AWS CloudFormation allows the use of Conditions17, which must be true for resources to be created. When used in combination with parameters, conditions enable you to design templates that make reference architectures flexible and based on application requirements. For example, a condition can be used to launch an EC2-based database instead of an Amazon Relational Database Service (Amazon RDS) instance, based on input parameters specified by the user, as shown in the following snippet:

"CreateDBInstance": {
  "Fn::Not": [
    { "Fn::Equals": [ { "Ref": "DatabaseAmi" }, "none" ] }
  ]
}

Custom Resources

AWS CloudFormation allows you to create custom resources18, which can be used to integrate with external processes or third-party providers. Custom resources can also be designed to invoke AWS Lambda functions, which can provide levels of automation not available with AWS CloudFormation alone.

Figure 8: Custom Resources

Infrastructure as Code

AWS CloudFormation templates and associated scripts, documents, and parameter files can be managed just as any application code would be. We recommend that you use version control repositories such as Git or Subversion (SVN) to track changes and allow multiple users to efficiently push updates. Capabilities such as version control, testing, and rapid deployment are possible with AWS CloudFormation templates just as with any source code. A full Continuous Integration/Continuous Deployment (CI/CD) solution can be implemented using additional tools such as Jenkins19.

Figure 9: Example of CI/CD in AWS Using AWS CloudFormation

You can store prebuilt use case packages in either a source code repository or in an S3 bucket. This allows provisioning teams and workload owners to easily pull down the latest versions of these files.

Deployment

To ensure a secure, reliable, and efficient deployment of prebuilt template packages, you should consider implementing several operational practices, as described in the following sections.

AWS CLI

Although you can use the AWS CloudFormation console to deploy templates from a web-based interface, there are clear advantages to using the AWS CLI and other automated methods, especially if the templates require input for many parameters. The AWS CLI is automatically installed on the Amazon Linux AMI. You can use the AWS CLI to deploy automated architectures with a single command from an EC2 Linux instance. Including a parameters file simplifies inputting template parameters by eliminating the need to manually enter data for each field.

You can use an additional script as a wrapper to simplify the CLI command or, alternatively, directly call the AWS CloudFormation API to create the stack. Launch EC2 instances into a predefined IAM role that allows access only to the AWS CloudFormation API. To provide "least privilege" within the AWS CloudFormation service, use additional restrictions. To
launch a template from the AWS CLI:

1. Create an IAM role that allows an EC2 instance to access the AWS CloudFormation API.
2. Launch an EC2 instance into the IAM role in a VPC (preferably a shared services VPC).
3. Copy or download the template package to the EC2 instance.
4. Run the AWS CLI create-stack command to launch the template stack:

aws cloudformation create-stack --stack-name myStack \
  --template-body file:///template.json \
  --parameters file:///parameters_file.json \
  --capabilities CAPABILITY_IAM

Security

The security of AWS CloudFormation template packages should always be considered, especially by customers who must adhere to strict compliance requirements. Source code repositories should be secured to allow write access only to those responsible for updating packages. In addition, user names, passwords, and access keys should never be included in user data when automating deployment of EC2 instances, because user data is unencrypted plain text.

It is critical to understand that deleting an AWS CloudFormation stack deletes all underlying resources, effectively destroying all data stored in EC2. To mitigate the risk of accidental resource deletion, use the following safeguards.

IAM permissions20

Restrict the ability to delete AWS CloudFormation stacks to only the users, groups, and roles that require that ability. You can write IAM policies that deny the users and groups to which those policies are applied the ability to delete any stack. The following is an example of an IAM policy that denies the DeleteStack and UpdateStack API calls:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": [
      "cloudformation:DeleteStack",
      "cloudformation:UpdateStack"
    ],
    "Resource": "*"
  }]
}

Deletion Policy21

Resources such as S3 buckets and EC2 and RDS instances support the AWS CloudFormation DeletionPolicy attribute. Use this attribute to require that resources be retained upon stack deletion, or that a snapshot be created (if snapshots are supported). The following is an example of a deletion policy on an S3 bucket AWS CloudFormation resource:

"myS3Bucket" : {
  "Type" : "AWS::S3::Bucket",
  "DeletionPolicy" : "Retain"
}

Auditing

Automating architecture deployment in AWS can help simplify the process of auditing and accrediting deployed applications. Having a base configuration for components such as IAM and VPC controls ensures that workload owners are deploying architectures based on compliance standards. Security personnel at the customer's MSO can "sign off" on reusable template packages that are based on customer security standards and compliance requirements as compliant. The security accreditation and auditing process can make use of automation with the following AWS capabilities:

• Tagging – AWS resources can be queried for common tags. Tags can be applied at the stack level to all resources that support tagging.
• Template validation – A scripted validation of the configuration can be tested against the AWS CloudFormation template files prior to deployment.
• SNS notification – A nested stack in a template can be configured to send notifications about stack events to an Amazon SNS topic. These Amazon SNS topics can be used to alert individuals, groups, or applications that a specific template has been deployed in the account.
• Testing deployed resources – Through the AWS API, scripted tests can be conducted to validate that deployed architectures meet security requirements. For example, tests can be run to detect whether any security group has open access to certain ports, or whether there is an internet gateway in a VPC that should not have one.
• ISV solutions – Third-party solutions for analyzing deployed architectures are available from AWS Partners. Security control validation can also be implemented through solutions such as Telos' Xacta risk management solution.
AWS Service Catalog Integration

AWS Service Catalog allows IT administrators to create and manage approved catalogs of resources, which are called products. IT administrators create portfolios of one or more products, which they can then distribute to AWS end users and workload owners. End users can access products through a personalized portal.22

Product – Products can be created to provide specific types of applications or to address specific use cases; alternatively, they can be used to deploy base resources, such as IAM and VPC configuration, that other resources, such as EC2 instances, can utilize. Template package deployment can be further automated and simplified by making the template package an AWS Service Catalog product.

Portfolios – A portfolio consists of one or more products. Portfolios can include products for different types of use cases and can be organized by compliance type.

Permissions – End users and workload owners who are IAM users, or members of IAM groups or roles, can be given permission to use specific portfolios based on the level of access they need and what they need to deploy.

Constraints – Constraints are a granular control, applied at a portfolio or product level, that restricts the ways that AWS resources can be deployed. Constraints can be used to allow templates to deploy resources at a higher level of access than a workload owner has through IAM policies.

Tags – Tags can be used to control access to resources or for cost allocation. Tags are enforced at the portfolio or product level.

AWS Service Catalog allows sharing of portfolios that are created in a common shared services AWS account. This allows central management of, and access to, deployable reference architectures.

Central Management of AWS Service Catalog

Customers with centralized governance models can fully control and manage the AWS Service Catalog products that workload owners have access to.
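The portfolio/product/permission relationships described above can be modeled conceptually in a few lines. This is a sketch of the access model only, not the AWS Service Catalog API; all names and the constraint format are hypothetical.

```python
class Portfolio:
    """Conceptual model of an AWS Service Catalog portfolio:
    a set of products plus the principals granted access to them."""

    def __init__(self, name):
        self.name = name
        self.products = {}       # product name -> launch constraints
        self.principals = set()  # IAM users/groups/roles granted access

    def add_product(self, product, constraints=None):
        self.products[product] = constraints or {}

    def grant(self, principal):
        self.principals.add(principal)

    def can_launch(self, principal, product):
        """A workload owner can launch a product only if granted the
        portfolio that contains it; constraints then bound the launch."""
        return principal in self.principals and product in self.products


# Hypothetical compliance-scoped portfolio.
pf = Portfolio("pci-web-apps")
pf.add_product("four-stack-web-app", {"allowed_regions": ["us-east-1"]})
pf.grant("workload-owner-group")

print(pf.can_launch("workload-owner-group", "four-stack-web-app"))  # True
print(pf.can_launch("intern", "four-stack-web-app"))                # False
```

In the centralized model described next, a cloud team would own the portfolio definitions while workload owners hold only launch access.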
Figure 10: Using AWS Service Catalog Constraints

Automating for Governance: High-Level Steps

Automating a compliant, secure, and reliable architecture that adheres to an organization's governance model involves several basic steps. This section presents a high-level overview.

Prerequisites

Before beginning to develop automated reference architectures based on compliance requirements, your organization must define the following:

• Cloud strategy and roadmap
• Governance model
• Cloud tasks, roles, and responsibilities
• VPC and account creation strategy
• Security standards and compliance requirements

Automating for compliance will often be part of a larger IT transformation initiative. Many architectural requirements relate directly to existing governance and security-related decisions.

Step 1: Define Common Use Cases

Customers must first determine the standard use cases of their workloads. Many applications deployed on AWS support a common use case. These use cases share identical or similar base architectures for VPC design, IAM configuration, and other architectural components. The following are examples of a few common use cases:

• Web applications – Web applications normally consist of multiple tiers (proxy/web, application, and database) for hosting web-based applications accessed by end users. These applications can be designed for scalability and elasticity when properly architected in AWS. Different VPC configurations are required depending on whether the application is intended to be internal facing or accessible to users on the public Internet.
• Enterprise applications – Enterprise applications are almost always commercial off-the-shelf (COTS) products that are used widely within an organization in critical-to-business functions. Examples include Microsoft SharePoint, Active Directory, PeopleSoft, and Oracle
E-Business Suite. Often, each enterprise application addresses a specific use case with an architecture that is standardized.
• Data analytics – Applications that analyze large data sets have architectures that require the deployment of common data analytics applications and use AWS big data services such as Amazon Redshift, Amazon Elastic MapReduce (Amazon EMR), Amazon Kinesis, and Amazon DynamoDB.

Step 2: Create and Document Reference Architectures

A well-designed reference architecture provides clear documentation on how resources will be used within AWS. Reference architectures should be created in Visio, PowerPoint, or another platform from which they can be distributed.

Figure 11: Example Reference Architecture in PowerPoint

Step 3: Validate and Document Architecture Compliance

Accurately documenting how the reference architecture satisfies compliance requirements can reduce the amount of effort required for a workload owner to ensure that the architecture being deployed meets compliance requirements. Compliance documentation may include:

• A security controls implementation matrix (SCTM)
• A system security plan (SSP)
• A concept of operations (ConOps)

Organizations that must follow specific compliance controls should determine which resources, components, and configurations meet the requirements of each control. Including this documentation in a packaged deployment reduces the need to repeat the same compliance analysis for a proposed architecture.

Figure 12: Example of a Security Controls Implementation Matrix Provided by the Cloud Security Alliance

Step 4: Build Automated Solutions Based on Architecture

There are many ways to automate infrastructure creation with AWS services and features. Most commonly, AWS CloudFormation templates are used to automate deployment and configuration of AWS
resources. Create template packages using the design guidelines provided in "Automating for Compliance" earlier in this whitepaper. When building templates, determine which configurations are common among various types of applications and use cases. Properly maintain and update templates when necessary.

Step 5: Develop an Accreditation and Approval Process

Existing processes and methods for evaluating systems against compliance requirements may not apply, or may need to be changed, for applications in the cloud. When automating compliance for an entire enterprise, involve security teams early on so they can provide input and gain a deeper understanding of how applications will be deployed in AWS. The accreditation and approval plan for automated deployments should consider all of the following:

• The compliance standards that the organization must follow
• The current approval process for applications and infrastructure
• The existing security requirements related to networking, continuous monitoring, access control, and auditing
• The current (and proposed) tools for security analysis, scanning, and monitoring
• The hardening requirements, if any, for deployed operating systems, and the need for prehardened custom images
• The processes and methods used to validate both architecture templates and deployed configurations

Conclusion

Developing an automated solution for governance and compliance can reduce the cost, time, and effort needed to deploy applications in AWS while minimizing risk and simplifying architecture design. When this approach is packaged into a reusable solution, it can decrease the level of effort needed to produce compliance-related documentation and allow time normally spent evaluating compliant architectures to be used to drive the organization's goals and mission.

Contributors

The following
individuals and organizations contributed to this document:

• Mike Dixon, Consultant, AWS Public Sector Sales
• Lou Vecchioni, Senior Consultant, AWS ProServ
• Brett Miller, Senior Consultant, AWS ProServ
• Josh Weatherly, Practice Manager, AWS ProServ
• Andrew McDermott, Senior Compliance Architect, AWS Security

Notes

1. http://www.gartner.com/it-glossary/it-governance/
2. http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf
3. http://d0.awsstatic.com/whitepapers/compliance/aws-architecture-and-security-recommendations-for-fedramp-compliance.pdf
4. http://iase.disa.mil/cloud_security/Documents/u_cloud_computing_srg_v1r1_final.pdf
5. http://aws.amazon.com/compliance/hipaa-compliance/
6. http://www.27000.org/iso27001.htm
7. http://aws.amazon.com/compliance/pci-dss-level-1-faqs/
8. http://media.amazonwebservices.com/AWS_Security_at_Scale_Governance_in_AWS.pdf
9. http://aws.amazon.com/partners/managed-service/
10. https://media.amazonwebservices.com/AWS_Security_at_Scale_Governance_in_AWS.pdf
11. https://github.com/Netflix/aminator; https://www.packer.io/intro/index.html
12. http://aws.amazon.com/cloudtrail/partners/
13. http://aws.amazon.com/config/
14. https://www.docker.com/
15. http://aws.amazon.com/ecs/
16. http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
17. http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
18. http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cfn-customresource.html
19. https://wiki.jenkins-ci.org/display/JENKINS/AWS+Cloudformation+Plugin
20. http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html
21. http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
22. http://aws.amazon.com/servicecatalog/

AWS Answers to Key Compliance Questions

January 2017

This paper has been archived. For the latest technical content about AWS compliance, see https://aws.amazon.com/compliance/faq/

© 2017, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Key Compliance Questions and Answers
Further Reading
Document Revisions

Abstract

This document addresses common cloud computing compliance questions as they relate to AWS. The answers may be of interest when evaluating and operating in a cloud computing environment and may assist in AWS customers' control management efforts.

Key Compliance Questions and Answers

Control Ownership: Who owns which controls for cloud-deployed infrastructure?
For the portion deployed into AWS, AWS controls the physical components of that technology. The customer owns and controls everything else, including control over connection points and transmissions. To help customers better understand what controls we have in place and how effectively they are operating, we publish a SOC 1 Type II report with controls defined around EC2, S3, and VPC, as well as detailed physical security and environmental controls. These controls are defined at a high level of specificity that should meet most customer needs. AWS customers that have signed a nondisclosure agreement with AWS may request a copy of the SOC 1 Type II report.

Auditing IT: How can auditing of the cloud provider be accomplished?

Auditing for most layers and controls above the physical controls remains the responsibility of the customer. The definition of AWS-defined logical and physical controls is documented in the SOC 1 Type II report, and the report is available for review by audit and compliance teams. AWS ISO 27001 and other certifications are also available for auditors to review.

Sarbanes-Oxley compliance: How is SOX compliance achieved if in-scope systems are deployed in the cloud provider environment?

If a customer processes financial information in the AWS cloud, the customer's auditors may determine that some AWS systems come into scope for Sarbanes-Oxley (SOX) requirements. The customer's auditors must make their own determination regarding SOX applicability. Because most of the logical access controls are managed by the customer, the customer is best positioned to determine if its control activities meet relevant standards. If the SOX auditors request specifics regarding AWS' physical controls, they can reference the AWS SOC 1 Type II report, which details the controls that AWS provides.

HIPAA compliance: Is it possible to meet HIPAA compliance requirements while deployed in the cloud provider environment?
HIPAA requirements apply to and are controlled by the AWS customer. The AWS platform allows for the deployment of solutions that meet industry-specific certification requirements such as HIPAA. Customers can use AWS services to maintain a security level that is equivalent to or greater than the level required to protect electronic health records. Customers have built healthcare applications compliant with HIPAA's Security and Privacy Rules on AWS. AWS provides additional information about HIPAA compliance on its web site, including a whitepaper on this topic.

GLBA compliance: Is it possible to meet GLBA certification requirements while deployed in the cloud provider environment?

Most GLBA requirements are controlled by the AWS customer. AWS provides means for customers to protect data, manage permissions, and build GLBA-compliant applications on AWS infrastructure. If customers require specific assurance that physical security controls are operating effectively, they can reference the AWS SOC 1 Type II report as relevant.

Federal regulation compliance: Is it possible for a US Government agency to be compliant with security and privacy regulations while deployed in the cloud provider environment?

US Federal agencies can be compliant under a number of compliance standards, including the Federal Information Security Management Act (FISMA) of 2002, the Federal Risk and Authorization Management Program (FedRAMP), the Federal Information Processing Standard (FIPS) Publication 140-2, and the International Traffic in Arms Regulations (ITAR). Compliance with other laws and statutes may also be accommodated, depending on the requirements set forth in the applicable legislation.

Data location: Where does customer data reside?
AWS customers designate in which physical region their data and their servers will be located. Data replication for S3 data objects is done within the regional cluster in which the data is stored and is not replicated to data center clusters in other regions. AWS will not move customers' content from the selected regions without notifying the customer, unless required to comply with the law or requests of governmental entities. For a complete list of regions, see aws.amazon.com/about-aws/global-infrastructure.

E-Discovery: Does the cloud provider meet the customer's needs to meet electronic discovery procedures and requirements?

AWS provides infrastructure, and customers manage everything else, including the operating system, the network configuration, and the installed applications. Customers are responsible for responding appropriately to legal procedures involving the identification, collection, processing, analysis, and production of electronic documents they store or process using AWS. Upon request, AWS may work with customers who require AWS' assistance in legal proceedings.

Data center tours: Are data center tours by customers allowed by the cloud provider?
No. Because our data centers host multiple customers, AWS does not allow data center tours by customers, as this would expose a wide range of customers to physical access by a third party. To meet this customer need, an independent and competent auditor validates the presence and operation of controls as part of our SOC 1 Type II report. This broadly accepted third-party validation provides customers with an independent perspective on the effectiveness of the controls in place. AWS customers that have signed a nondisclosure agreement with AWS may request a copy of the SOC 1 Type II report. Independent review of data center physical security is also part of the ISO 27001 audit, the PCI assessment, the ITAR audit, and the FedRAMP testing programs.

Third-party access: Are third parties allowed access to the cloud provider data centers?

AWS strictly controls access to data centers, even for internal employees. Third parties are not provided access to AWS data centers except when explicitly approved by the appropriate AWS data center manager per the AWS access policy. See the SOC 1 Type II report for specific controls related to physical access, data center access authorization, and other related controls.

Privileged actions: Are privileged actions monitored and controlled?

Controls in place limit access to systems and data and provide that access to systems or data is restricted and monitored. In addition, customer data and server instances are logically isolated from other customers by default. Privileged user access control is reviewed by an independent auditor during the AWS SOC 1, ISO 27001, PCI, ITAR, and FedRAMP audits.

Insider access: Does the cloud provider address the threat of inappropriate insider access to customer data and applications?
AWS information: AWS provides specific SOC 1 controls to address the threat of inappropriate insider access, and the public certification and compliance initiatives covered in this document address insider access. All certifications and third-party attestations evaluate preventative and detective logical access controls. In addition, periodic risk assessments focus on how insider access is controlled and monitored.

Multi-tenancy
Question: Is customer segregation implemented securely?
AWS information: The AWS environment is a virtualized, multi-tenant environment. AWS has implemented security management processes, PCI controls, and other security controls designed to isolate each customer from other customers. AWS systems are designed to prevent customers from accessing physical hosts or instances not assigned to them by filtering through the virtualization software. This architecture has been validated by an independent PCI Qualified Security Assessor (QSA) and was found to be in compliance with all requirements of PCI DSS version 3.1, published in April 2015.

Note: AWS also has single-tenancy options. Dedicated Instances are Amazon EC2 instances launched within your Amazon Virtual Private Cloud (Amazon VPC) that run on hardware dedicated to a single customer. Dedicated Instances let you take full advantage of the benefits of Amazon VPC and the AWS cloud while isolating your Amazon EC2 compute instances at the hardware level.

Hypervisor vulnerabilities
Question: Has the cloud provider addressed known hypervisor vulnerabilities?
AWS information: Amazon EC2 currently utilizes a highly customized version of the Xen hypervisor. The hypervisor is regularly assessed for new and existing vulnerabilities and attack vectors by internal and external penetration teams, and is well suited for maintaining strong isolation between guest virtual machines. The security of the AWS Xen hypervisor is regularly evaluated by independent auditors during assessments and audits. See the AWS security whitepaper for more information on the Xen hypervisor and instance isolation.

Vulnerability management
Question: Are systems patched appropriately?
AWS information: AWS is responsible for patching systems supporting the delivery of service to customers, such as the hypervisor and networking services. This is done as required per AWS policy and in accordance with ISO 27001, NIST, and PCI requirements. Customers control their own guest operating systems, software, and applications, and are therefore responsible for patching their own systems.

Encryption
Question: Do the provided services support encryption?
AWS information: Yes. AWS allows customers to use their own encryption mechanisms for nearly all the services, including S3, EBS, SimpleDB, and EC2. IPsec tunnels to VPC are also encrypted. Amazon S3 also offers server-side encryption as an option for customers. Customers may also use third-party encryption technologies. Refer to the AWS security whitepaper for more information.

Data ownership
Question: What are the cloud provider's rights over customer data?
AWS information: AWS customers retain control and ownership of their data. AWS errs on the side of protecting customer privacy and is vigilant in determining which law enforcement requests we must comply with. AWS does not hesitate to challenge orders from law enforcement if we think the orders lack a solid basis.

Data isolation
Question: Does the cloud provider adequately isolate customer data?
AWS information: All data stored by AWS on behalf of customers has strong tenant isolation security and control capabilities. Amazon S3 provides advanced data access controls. Please see the AWS security whitepaper for more information about the security of specific data services.

Composite services
Question: Does the cloud provider layer its service with other providers' cloud services?
AWS information: AWS does not leverage any third-party cloud providers to deliver AWS services to customers.

Physical and environmental controls
Question: Are the physical and environmental controls operated by the cloud provider specified?
AWS information: Yes. These are specifically outlined in the SOC 1 Type II report. In addition, other certifications AWS supports, such as ISO 27001 and FedRAMP, require best-practice physical and environmental controls.

Client-side protection
Question: Does the cloud provider allow customers to secure and manage access from clients, such as PCs and mobile devices?
AWS information: Yes. AWS allows customers to manage client and mobile applications to their own requirements.

Server security
Question: Does the cloud provider allow customers to secure their virtual servers?
AWS information: Yes. AWS allows customers to implement their own security architecture. See the AWS security whitepaper for more details on server and network security.

Identity and access management
Question: Does the service include IAM capabilities?
AWS information: AWS has a suite of identity and access management offerings, allowing customers to manage user identities, assign security credentials, organize users in groups, and manage user permissions in a centralized way. Please see the AWS website for more information.

Scheduled maintenance outages
Question: Does the provider specify when systems will be brought down for maintenance?
AWS information: AWS does not require systems to be brought offline to perform regular maintenance and system patching. AWS's own maintenance and system patching generally do not impact customers. Maintenance of instances themselves is controlled by the customer.

Capability to scale
Question: Does the provider allow customers to scale beyond the original agreement?
AWS information: The AWS cloud is distributed, highly secure, and resilient, giving customers massive scale potential. Customers may scale up or down, paying for only what they use.

Service availability
Question: Does the provider commit to a high level of availability?
AWS information: AWS does commit to high levels of availability in its service level agreements (SLAs). For example, Amazon EC2 commits to an annual uptime percentage of at least 99.95% during the service year, and Amazon S3 commits to a monthly uptime percentage of at least 99.9%. Service credits are provided in the case that these availability metrics are not met.

Distributed Denial of Service (DDoS) attacks
Question: How does the provider protect their service against DDoS attacks?
AWS information: The AWS network provides significant protection against traditional network security issues, and the customer can implement further protection. See the AWS Security Whitepaper for more information on this topic, including a discussion of DDoS attacks.

Data portability
Question: Can the data stored with a service provider be exported by customer request?
AWS information: AWS allows customers to move data as needed on and off AWS storage. The AWS Import/Export service for S3 accelerates moving large amounts of data into and out of AWS using portable storage devices for transport.

Service provider business continuity
Question: Does the service provider operate a business continuity program?
AWS information: AWS does operate a business continuity program. Detailed information is provided in the AWS Security Whitepaper.

Customer business continuity
Question: Does the service provider allow customers to implement a business continuity plan?
AWS information: AWS provides customers with the capability to implement a robust continuity plan, including the utilization of frequent server instance backups, data redundancy, replication, and multi-region/Availability Zone deployment architectures.

Data durability
Question: Does the service specify data durability?
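The availability figures quoted above (an annual uptime commitment of at least 99.95% for Amazon EC2 and a monthly commitment of at least 99.9% for Amazon S3) translate into concrete downtime budgets. The following back-of-the-envelope sketch is illustrative only and is not part of the whitepaper; the 365-day year and 30-day month are simplifying assumptions.

```python
def allowed_downtime_minutes(uptime_pct: float, period_minutes: float) -> float:
    """Maximum downtime permitted by an uptime SLA over a given period."""
    return (1 - uptime_pct / 100) * period_minutes

# Amazon EC2 SLA: at least 99.95% uptime over a service year (365 days assumed)
ec2_budget = allowed_downtime_minutes(99.95, 365 * 24 * 60)

# Amazon S3 SLA: at least 99.9% uptime over a month (30 days assumed)
s3_budget = allowed_downtime_minutes(99.9, 30 * 24 * 60)

print(f"EC2: up to {ec2_budget:.1f} minutes of downtime per year")   # ~262.8
print(f"S3: up to {s3_budget:.1f} minutes of downtime per month")    # ~43.2
```

In other words, the commitments permit roughly four and a half hours of EC2 downtime per year and about 43 minutes of S3 downtime per month before service credits apply.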
AWS information: Amazon S3 provides a highly durable storage infrastructure. Objects are redundantly stored on multiple devices across multiple facilities in an Amazon S3 Region. Once stored, Amazon S3 maintains the durability of objects by quickly detecting and repairing any lost redundancy. Amazon S3 also regularly verifies the integrity of data stored using checksums; if corruption is detected, it is repaired using redundant data. Data stored in S3 is designed to provide 99.999999999% durability and 99.99% availability of objects over a given year.

Backups
Question: Does the service provide backups to tapes?
AWS information: AWS allows customers to perform their own backups to tapes using their own tape backup service provider; however, a tape backup is not a service provided by AWS. The Amazon S3 service is designed to drive the likelihood of data loss to near zero percent, and the durability equivalent of multi-site copies of data objects is achieved through data storage redundancy. For information on data durability and redundancy, please refer to the AWS website.

Price increases
Question: Will the service provider raise prices unexpectedly?
AWS information: AWS has a history of frequently reducing prices as the cost to provide these services reduces over time. AWS has reduced prices consistently over the past several years.

Sustainability
Question: Does the service provider company have long-term sustainability potential?
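The 99.999999999% ("eleven nines") durability design target cited above can be made concrete with a short expected-loss estimate. This is an illustrative sketch, not an AWS-published model; it assumes each object is lost independently with annual probability 1 - durability.

```python
DURABILITY = 0.99999999999  # S3 design target: 11 nines, per object per year

def expected_losses_per_year(num_objects: int) -> float:
    """Expected number of objects lost per year, assuming independent loss
    with probability (1 - DURABILITY) per object."""
    return num_objects * (1 - DURABILITY)

# For 10,000,000 stored objects, the expectation is about 0.0001 objects
# per year -- on average, a single lost object once every ~10,000 years.
expected = expected_losses_per_year(10_000_000)
print(f"Expected losses: {expected:.6f} objects/year")
```

At that scale, storage-layer loss is a negligible risk compared with accidental deletion or application error, which is one reason the Backups entry above still discusses customer-managed backup options.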
AWS information: AWS is a leading cloud provider, and cloud computing is a long-term business strategy of Amazon.com. AWS has very high long-term sustainability potential.

Further Reading
For additional information, see the following sources:
• AWS Risk and Compliance Overview
• AWS Certification Programs, Reports, and Third-Party Attestations
• CSA Consensus Assessments Initiative Questionnaire

Document Revisions
January 2017: Migrated to new template
January 2016: First publication

AWS Best Practices for DDoS Resiliency
First Published June 2015
Updated September 21, 2021

This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/aws-best-practices-ddos-resiliency/aws-best-practices-ddos-resiliency.html

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
Denial of Service Attacks
Infrastructure Layer Attacks
Application Layer Attacks
Mitigation Techniques
Best Practices for DDoS Mitigation
Attack Surface Reduction
Obfuscating AWS Resources (BP1, BP4, BP5)
Operational Techniques
Visibility
Support
Conclusion
Contributors
Further Reading
Document revisions

Abstract
It's important to protect your business from the impact of Distributed Denial of Service (DDoS) attacks, as well as other cyberattacks. Keeping customer trust in your service by maintaining the availability and responsiveness of your application is a high priority. You also want to avoid unnecessary direct costs when your infrastructure must scale in response to an attack. Amazon Web Services (AWS) is committed to providing you with the tools, best practices, and services to defend against bad actors on the internet. Using the right services from AWS helps ensure high availability, security, and resiliency. In this whitepaper, AWS provides you with prescriptive DDoS guidance to improve the resiliency of applications running on AWS. This includes a DDoS-resilient reference architecture that can be used as a guide to help protect application availability. This whitepaper also describes different attack types, such as infrastructure layer attacks and application layer attacks, and explains which best practices are most effective to manage each attack type. In addition, the services and features that fit into a DDoS mitigation strategy are outlined, and how each one can be used to help protect your applications is explained. This paper is intended for IT decision makers and security engineers who are familiar with the basic concepts of networking, security, and AWS. Each
section has links to AWS documentation that provides more detail on the best practice or capability.

Introduction

Denial of Service Attacks
A Denial of Service (DoS) attack is a deliberate attempt to make a website or application unavailable to users, such as by flooding it with network traffic. Attackers use a variety of techniques that consume large amounts of network bandwidth or tie up other system resources, disrupting access for legitimate users. In its simplest form, a lone attacker uses a single source to carry out a DoS attack against a target, as shown in the following image.

Diagram of a DoS Attack

In a DDoS attack, an attacker uses multiple sources to orchestrate an attack against a target. These sources can include distributed groups of malware-infected computers, routers, IoT devices, and other endpoints. The following diagram shows a network of compromised hosts participating in the attack, generating a flood of packets or requests to overwhelm the target.

Diagram of a DDoS Attack

There are seven layers in the Open Systems Interconnection (OSI) model, and they are described in the Open Systems Interconnection (OSI) Model table. DDoS attacks are most common at layers three, four, six, and seven. Layer three and four attacks correspond to the Network and Transport layers of the OSI model. Within this paper, AWS refers to these collectively as infrastructure layer attacks. Layer six and seven attacks correspond to the Presentation and Application layers of the OSI model. AWS will
address these together as application layer attacks. Examples of these attack types are discussed in the following sections.

Open Systems Interconnection (OSI) Model

# | Layer        | Unit     | Description                                | Vector Examples
7 | Application  | Data     | Network process to application             | HTTP floods, DNS query floods
6 | Presentation | Data     | Data representation and encryption         | TLS abuse
5 | Session      | Data     | Interhost communication                    | N/A
4 | Transport    | Segments | End-to-end connections and reliability     | SYN floods
3 | Network      | Packets  | Path determination and logical addressing  | UDP reflection attacks
2 | Data Link    | Frames   | Physical addressing                        | N/A
1 | Physical     | Bits     | Media, signal, and binary transmission     | N/A

Infrastructure Layer Attacks
The most common DDoS attacks, User Datagram Protocol (UDP) reflection attacks and synchronize (SYN) floods, are infrastructure layer attacks. An attacker can use either of these methods to generate large volumes of traffic that can inundate the capacity of a network or tie up resources on systems such as servers, firewalls, intrusion prevention systems (IPS), or load balancers. While these attacks can be easy to identify, to mitigate them effectively you must have a network or systems that scale up capacity more rapidly than the inbound traffic flood. This extra capacity is necessary to either filter out or absorb the attack traffic, freeing up the system and application to respond to legitimate customer traffic.

UDP Reflection Attacks
User Datagram Protocol (UDP) reflection attacks exploit the fact that UDP is a stateless protocol. Attackers can craft a valid UDP request packet listing the attack target's IP address as the UDP source IP address. The attacker has now falsified, or spoofed, the UDP request packet's source IP. The UDP
packet containing the spoofed source IP is then sent by the attacker to an intermediate server. The server is tricked into sending its UDP response packets to the targeted victim IP rather than back to the attacker's IP address. The intermediate server is used because it generates a response that is several times larger than the request packet, effectively amplifying the amount of attack traffic sent to the target IP address. The amplification factor is the ratio of response size to request size, and it varies depending on which protocol the attacker uses: DNS, NTP, SSDP, CLDAP, Memcached, CharGen, or QOTD. For example, the amplification factor for DNS can be 28 to 54 times the original number of bytes. So if an attacker sends a request payload of 64 bytes to a DNS server, they can generate over 3,400 bytes of unwanted traffic to an attack target. UDP reflection attacks account for a larger volume of traffic in comparison to other attacks. The UDP Reflection Attack figure illustrates the reflection tactic and amplification effect.

UDP Reflection Attack

SYN Flood Attacks
When a user connects to a Transmission Control Protocol (TCP) service, such as a web server, their client sends a SYN (synchronization) packet. The server returns a SYN-ACK packet in acknowledgement, and finally the client responds with an acknowledgement (ACK) packet, which completes the expected three-way handshake. The following image illustrates this typical handshake.

SYN 3-way Handshake

In a SYN flood attack, a malicious
client sends a large number of SYN packets, but never sends the final ACK packets to complete the handshakes. The server is left waiting for a response to the half-open TCP connections, and eventually runs out of capacity to accept new TCP connections. This can prevent new users from connecting to the server. The attack is trying to tie up available server connections so that resources are not available for legitimate connections. While SYN floods can reach up to hundreds of Gbps, the purpose of the attack is not to increase SYN traffic volume.

Application Layer Attacks
An attacker may target the application itself by using a layer 7, or application layer, attack. In these attacks, similar to SYN flood infrastructure attacks, the attacker attempts to overload specific functions of an application to make the application unavailable or unresponsive to legitimate users. Sometimes this can be achieved with very low request volumes that generate only a small volume of network traffic. This can make the attack difficult to detect and mitigate. Examples of application layer attacks include HTTP floods, cache-busting attacks, and WordPress XML-RPC floods.

In an HTTP flood attack, an attacker sends HTTP requests that appear to be from a valid user of the web application. Some HTTP floods target a specific resource, while more complex HTTP floods attempt to emulate human interaction with the application. This can increase the difficulty of using common mitigation techniques like request rate limiting.

Cache-busting attacks are a type of HTTP flood that use variations in the query string to circumvent content delivery network (CDN) caching. Instead of being able to return cached results, the CDN must contact the origin server for every page request, and
these origin fetches cause additional strain on the application web server.

With a WordPress XML-RPC flood attack, also known as a WordPress pingback flood, an attacker targets a website hosted on the WordPress content management software. The attacker misuses the XML-RPC API function to generate a flood of HTTP requests. The pingback feature allows a website hosted on WordPress (Site A) to notify a different WordPress site (Site B), through a link that Site A has created to Site B. Site B then attempts to fetch Site A to verify the existence of the link. In a pingback flood, the attacker misuses this capability to cause Site B to attack Site A. This type of attack has a clear signature: WordPress is typically present in the User-Agent of the HTTP request header.

There are other forms of malicious traffic that can impact an application's availability. Scraper bots automate attempts to access a web application to steal content or record competitive information, such as pricing. Brute force and credential stuffing attacks are programmed efforts to gain unauthorized access to secure areas of an application. These are not strictly DDoS attacks, but their automated nature can look similar to a DDoS attack, and they can be mitigated by implementing some of the same best practices covered in this paper.

Application layer attacks can also target Domain Name System (DNS) services. The most common of these attacks is a DNS query flood, in which an attacker uses many well-formed DNS queries to exhaust the resources of a DNS server. These attacks can also include a cache-busting component, where the attacker randomizes the subdomain string to bypass the local DNS cache of any given resolver. As a result, the resolver can't take advantage of cached domain queries and must instead repeatedly contact the authoritative DNS server, which amplifies the attack.

If a web application is delivered over Transport Layer Security (TLS), an attacker can also choose to attack the TLS negotiation process. TLS is
computationally expensive, so an attacker can reduce a server's availability by making it process unreadable data (unintelligible ciphertext) as though it were a legitimate handshake. In a variation of this attack, an attacker completes the TLS handshake but perpetually renegotiates the encryption method. An attacker can alternatively attempt to exhaust server resources by opening and closing many TLS sessions.

Mitigation Techniques
Some forms of DDoS mitigation are included automatically with AWS services. DDoS resilience can be improved further by using an AWS architecture with the specific services covered in the following sections, and by implementing additional best practices for each part of the network flow between users and your application.

All AWS customers can benefit from the automatic protections of AWS Shield Standard at no additional charge. AWS Shield Standard defends against the most common and frequently occurring network and transport layer DDoS attacks that target your website or applications. This protection is always on, pre-configured, and static, and provides no reporting or analytics. It is offered on all AWS services and in every AWS Region. In AWS Regions, DDoS attacks are detected, and the Shield Standard system automatically baselines traffic, identifies anomalies, and, as necessary, creates mitigations. You can use AWS Shield Standard as part of a DDoS-resilient architecture to protect both web and non-web applications.

You can also utilize AWS services that operate from edge locations, such as Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53, to build comprehensive availability protection against all known infrastructure layer attacks. These services are part of the AWS Global Edge
Network and can improve the DDoS resiliency of your application when serving any type of application traffic from edge locations distributed around the world. You can run your application in any AWS Region and use these services to protect your application availability and optimize the performance of your application for legitimate end users.

Benefits of using CloudFront, AWS Global Accelerator, and Amazon Route 53 include:
• Access to internet and DDoS mitigation capacity across the AWS Global Edge Network. This is useful in mitigating larger volumetric attacks, which can reach terabit scale.
• AWS Shield DDoS mitigation systems are integrated with AWS edge services, reducing time-to-mitigate from minutes to sub-second.
• Stateless SYN flood mitigation techniques proxy and verify incoming connections before passing them to the protected service. This ensures that only valid connections reach your application, while protecting your legitimate end users against false-positive drops.
• Automatic traffic engineering systems that disperse or isolate the impact of large volumetric DDoS attacks. All of these services isolate attacks at the source, before they reach your origin, which means less impact on systems protected by these services.
• Application layer defense, when combined with AWS WAF, that does not require changing your current application architecture (for example, in an AWS Region or on-premises data center).

There is no charge for inbound data transfer on AWS, and you do not pay for DDoS attack traffic that is mitigated by AWS Shield. The following architecture diagram includes AWS Global Edge Network services.

DDoS-resilient reference architecture

This architecture includes several AWS services that can help you improve your web
application's resiliency against DDoS attacks. The Summary of Best Practices table provides a summary of these services and the capabilities that they can provide. AWS has tagged each service with a best practice indicator (BP1, BP2, and so on) for easier reference within this document. For example, an upcoming section discusses the capabilities provided by CloudFront and Global Accelerator and includes the best practice indicator BP1.

Summary of Best Practices
The first three columns are AWS edge services; the last three are AWS Region services.

Capability | Amazon CloudFront (BP1) with AWS WAF (BP2) | AWS Global Accelerator (BP1) | Amazon Route 53 (BP3) | Elastic Load Balancing (BP6) with AWS WAF (BP2) | Security groups and network ACLs in Amazon VPC (BP5) | Amazon EC2 Auto Scaling (BP7)
Layer 3 (for example, UDP reflection) attack mitigation | ✔ | ✔ | ✔ | ✔ | ✔ | ✔
Layer 4 (for example, SYN flood) attack mitigation | ✔ | ✔ | ✔ | ✔ | |
Layer 6 (for example, TLS) attack mitigation | ✔ | ✔ | ✔ | ✔ | |
Reduce attack surface | ✔ | ✔ | ✔ | ✔ | ✔ |
Scale to absorb application layer traffic | ✔ | ✔ | ✔ | ✔ | ✔ | ✔
Layer 7 (application layer) attack mitigation | ✔ | ✔(*) | ✔ | ✔ | ✔(*) | ✔(*)
Geographic isolation and dispersion of excess traffic and larger DDoS attacks | ✔ | ✔ | ✔ | | |

* If used with AWS WAF with AWS Application Load Balancer.

Another way to improve your readiness to respond to and mitigate DDoS attacks is by subscribing to AWS Shield Advanced. Subscribers receive:
• Tailored detection based on the specific traffic patterns of your application
• Protection against Layer 7 DDoS attacks, including AWS WAF at no additional cost
• Access to 24x7 specialized support from the AWS Shield Response Team (SRT)
• Centralized management of security policies through AWS Firewall Manager
• Cost protection to safeguard against scaling charges resulting from DDoS-related usage spikes

This optional DDoS mitigation service helps protect applications hosted in any AWS Region. The service is available globally for CloudFront, Amazon Route 53, and Global Accelerator. Using AWS Shield Advanced with Elastic IP addresses allows you to protect Network Load Balancers (NLBs) or Amazon EC2 instances.

Benefits of using AWS Shield Advanced include:
• Access to the AWS SRT for assistance with mitigating DDoS attacks that impact application availability
• DDoS attack visibility by using the AWS Management Console, API, and Amazon CloudWatch metrics and alarms
• Access to the history of all DDoS events from the past 13 months
• Access to AWS web application firewall (WAF) at no additional cost for the mitigation of application layer DDoS attacks (when used with CloudFront or Application Load Balancer)
• Automatic baselining of web traffic attributes when used with AWS WAF
• Access to AWS Firewall Manager at no additional cost for automated policy enforcement
• Sensitive detection thresholds that route traffic into the DDoS mitigation system earlier and can improve time-to-mitigate attacks against Amazon EC2 or Network Load Balancer when used with an Elastic IP address
• Cost protection that enables you to request a limited refund of scaling-related costs that result from a DDoS attack
• Enhanced service level agreement that is specific to AWS Shield Advanced customers
• Proactive engagement from the AWS SRT when a Shield event is detected
• Protection groups that enable you to bundle resources,
providing a self-service way to customize the scope of detection and mitigation for your application by treating multiple resources as a single unit. Resource grouping improves the accuracy of detection, minimizes false positives, eases automatic protection of newly created resources, and accelerates the time to mitigate attacks against many resources that comprise a single application. For information about protection groups, see Shield Advanced protection groups.

For a complete list of AWS Shield Advanced features, and for more information about AWS Shield, refer to How AWS Shield works.

Best Practices for DDoS Mitigation
In the following sections, each of the recommended best practices for DDoS mitigation is described in more depth. For a quick and easy-to-implement guide on building a DDoS mitigation layer for static or dynamic web applications, see How to Help Protect Dynamic Web Applications Against DDoS Attacks.

Infrastructure Layer Defense (BP1, BP3, BP6, BP7)
In a traditional data center environment, you can mitigate infrastructure layer DDoS attacks by using techniques such as overprovisioning capacity, deploying DDoS mitigation systems, or scrubbing traffic with the help of DDoS mitigation services. On AWS, DDoS mitigation capabilities are automatically provided, but you can optimize your application's DDoS resilience by making architecture choices that best leverage those capabilities and also allow you to scale for excess traffic. Key considerations to help mitigate volumetric DDoS attacks include ensuring that enough transit capacity and diversity are available, and protecting AWS resources like Amazon EC2 instances against attack traffic.

Some Amazon EC2 instance types support features that can more easily handle large volumes of traffic, for example, up to 100 Gbps network bandwidth interfaces and enhanced networking. This helps prevent interface congestion for traffic that has reached the Amazon EC2 instance. Instances that support enhanced networking provide higher I/O
performance, higher bandwidth, and lower CPU utilization compared to traditional implementations. This improves the ability of the instance to handle large volumes of traffic and ultimately makes it highly resilient against packets-per-second (pps) load.

To allow this high level of resilience, AWS recommends using Amazon EC2 Dedicated Instances, or EC2 instances with higher networking throughput that have an "n" suffix and support for enhanced networking with up to 100 Gbps of network bandwidth (for example, c6gn.16xlarge and c5n.18xlarge), or metal instances (such as c5n.metal). For more information about Amazon EC2 instances that support 100 Gigabit network interfaces and enhanced networking, see Amazon EC2 Instance Types.

The module required for enhanced networking, with the required enaSupport attribute set, is included with Amazon Linux 2 and the latest versions of the Amazon Linux AMI. Therefore, if you launch an instance with an HVM version of Amazon Linux on a supported instance type, enhanced networking is already enabled for your instance. For more information, see Test whether enhanced networking is enabled. For more information about how to enable enhanced networking, see Enhanced networking on Linux.

Amazon EC2 with Auto Scaling (BP7)

Another way to mitigate both infrastructure and application layer attacks is to operate at scale. If you have web applications, you can use load balancers to distribute traffic to a number of Amazon EC2 instances that are overprovisioned or configured to automatically scale. These instances can handle sudden traffic surges that occur for any reason, including a flash crowd or an application layer DDoS attack. You can set Amazon CloudWatch alarms to initiate Auto Scaling to automatically scale the size of your Amazon EC2 fleet in response to events that you define, such as CPU, memory, network I/O, and even custom metrics. This approach protects application availability when there is an unexpected increase in request volume.
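The alarm-driven scaling described above can be sketched as a scaling policy attached to an Auto Scaling group. This is a minimal illustration shown as a target-tracking policy (which manages the underlying CloudWatch alarms for you), not the paper's reference implementation; the group name `web-asg` and the 50% CPU target are assumed values.

```python
# Sketch: build the parameters for a CPU target-tracking scaling policy so the
# fleet grows automatically under load. The group name and target are assumptions.
import json

def build_cpu_tracking_policy(asg_name: str, target_cpu: float) -> dict:
    """Build put_scaling_policy parameters for a CPU target-tracking policy."""
    return {
        "AutoScalingGroupName": asg_name,
        "PolicyName": "cpu-target-tracking",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": target_cpu,
        },
    }

params = build_cpu_tracking_policy("web-asg", 50.0)
print(json.dumps(params, indent=2))

# Applying the policy requires AWS credentials, for example:
#   import boto3
#   boto3.client("autoscaling").put_scaling_policy(**params)
```

A lower target value leaves more headroom for traffic surges at the cost of running more instances in steady state.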
When using CloudFront, Application Load Balancer, Classic Load Balancer, or Network Load Balancer with your application, TLS negotiation is handled by the distribution (CloudFront) or by the load balancer. These features help protect your instances from being impacted by TLS-based attacks by scaling to handle legitimate requests and TLS abuse attacks. For more information about using Amazon CloudWatch to invoke Auto Scaling, see Monitoring CloudWatch metrics for your Auto Scaling groups and instances.

Amazon EC2 provides resizable compute capacity so that you can quickly scale up or down as requirements change. You can scale horizontally by automatically adding instances to your application (by scaling the size of your Auto Scaling group), and you can scale vertically by using larger EC2 instance types.

Elastic Load Balancing (BP6)

Large DDoS attacks can overwhelm the capacity of a single Amazon EC2 instance. With Elastic Load Balancing (ELB), you can reduce the risk of overloading your application by distributing traffic across many backend instances. Elastic Load Balancing can scale automatically, allowing you to manage larger volumes when you have unanticipated extra traffic, for example, due to flash crowds or DDoS attacks. For applications built within an Amazon VPC, there are three types of Elastic Load Balancing to consider, depending on your application type: Application Load Balancer (ALB), Classic Load Balancer (CLB), and Network Load Balancer. For web applications, you can use the Application Load Balancer to route
traffic based on content and accept only well-formed web requests. Application Load Balancer blocks many common DDoS attacks, such as SYN floods or UDP reflection attacks, protecting your application from the attack. Application Load Balancer automatically scales to absorb the additional traffic when these types of attacks are detected. Scaling activities due to infrastructure layer attacks are transparent for AWS customers and do not affect your bill. For more information about protecting web applications with Application Load Balancer, see Getting started with Application Load Balancers.

For TCP-based applications, you can use Network Load Balancer to route traffic to targets (for example, Amazon EC2 instances) at ultra-low latency. One key consideration with Network Load Balancer is that any traffic that reaches the load balancer on a valid listener will be routed to your targets, not absorbed. You can use AWS Shield Advanced to configure DDoS protection for Elastic IP addresses. When an Elastic IP address is assigned per Availability Zone to the Network Load Balancer, AWS Shield Advanced will apply the relevant DDoS protections to the Network Load Balancer traffic. For more information about protecting TCP applications with Network Load Balancer, see Getting started with Network Load Balancers.

Leverage AWS Edge Locations for Scale (BP1, BP3)

Access to highly scaled, diverse internet connections can significantly increase your ability to optimize latency and throughput to users, absorb DDoS attacks, and isolate faults while minimizing the impact on your application's availability. AWS edge locations provide an additional layer of network infrastructure that provides these benefits to any application that uses CloudFront, Global Accelerator, or Amazon Route 53. With these services, you can comprehensively protect, at the edge, your applications running in AWS Regions.

Web Application Delivery at the Edge (BP1)

CloudFront is a service that can be used to deliver your entire website, including static, dynamic, streaming, and interactive content. Persistent connections and variable time-to-live (TTL) settings can be used to offload traffic from your origin, even if you are not serving cacheable content. Use of these CloudFront features reduces the number of requests and TCP connections back to your origin, helping protect your web application from HTTP floods. CloudFront only accepts well-formed connections, which helps prevent many common DDoS attacks, such as SYN floods and UDP reflection attacks, from reaching your origin. DDoS attacks are also geographically isolated close to the source, which prevents the traffic from impacting other locations. These capabilities can greatly improve your ability to continue serving traffic to users during large DDoS attacks.

You can use CloudFront to protect an origin on AWS or elsewhere on the internet. If you're using Amazon S3 to serve static content on the internet, AWS recommends you use CloudFront to protect your bucket. You can use an origin access identity (OAI) to ensure that users only access your objects by using CloudFront URLs. For more information about OAI, see Restricting access to Amazon S3 content by using an origin access identity (OAI). For more information about protecting and optimizing the performance of web applications with CloudFront, see Getting started with Amazon CloudFront.

Protect network traffic further from your origin using AWS Global Accelerator (BP1)

Global Accelerator is a networking service that improves the availability and performance of users' traffic by up to 60%. This is accomplished by ingressing traffic at the edge location closest to your users and routing it over the AWS global network infrastructure to your application, whether it runs in a
single Region or in multiple AWS Regions. Global Accelerator routes TCP and UDP traffic to the optimal endpoint, based on performance, in the closest AWS Region to the user. If there is an application failure, Global Accelerator provides failover to the next-best endpoint within 30 seconds. Global Accelerator uses the vast capacity of the AWS global network and integrations with AWS Shield, such as a stateless SYN proxy capability that challenges new connection attempts and only serves legitimate end users, to protect applications.

You can implement a DDoS-resilient architecture that provides many of the same benefits as the Web Application Delivery at the Edge best practices, even if your application uses protocols not supported by CloudFront or you are operating a web application that requires global static IP addresses. For example, you may require IP addresses that your end users can add to the allow list in their firewalls and that are not used by any other AWS customers. In these scenarios, you can use Global Accelerator to protect web applications running on Application Load Balancer and, in conjunction with AWS WAF, to also detect and mitigate web application layer request floods. For more information about protecting and optimizing the performance of network traffic using Global Accelerator, see Getting started with AWS Global Accelerator.

Domain Name Resolution at the Edge (BP3)

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) service that can be used to direct traffic to your web application. It includes advanced features like Traffic Flow, Health Checks and Monitoring, Latency Based Routing, and Geo DNS. These advanced features allow you to control how the service responds to DNS requests to improve the performance
of your web application and to avoid site outages.

Amazon Route 53 uses techniques like shuffle sharding and anycast striping that can help users access your application even if the DNS service is targeted by a DDoS attack. With shuffle sharding, each name server in your delegation set corresponds to a unique set of edge locations and internet paths. This provides greater fault tolerance and minimizes overlap between customers. If one name server in the delegation set is unavailable, users can retry and receive a response from another name server at a different edge location. Anycast striping allows each DNS request to be served by the most optimal location, dispersing the network load and reducing DNS latency. This provides a faster response for users. Additionally, Amazon Route 53 can detect anomalies in the source and volume of DNS queries and prioritize requests from users that are known to be reliable. For more information about using Amazon Route 53 to route users to your application, see Getting Started with Amazon Route 53.

Application Layer Defense (BP1, BP2)

Many of the techniques discussed so far in this paper are effective at mitigating the impact that infrastructure layer DDoS attacks have on your application's availability. To also defend against application layer attacks, you need to implement an architecture that allows you to specifically detect, scale to absorb, and block malicious requests. This is an important consideration because network-based DDoS mitigation systems are generally ineffective at mitigating complex application layer attacks.

Detect and Filter Malicious Web Requests (BP1, BP2)

When your application runs on AWS, you can leverage both CloudFront and AWS WAF to help defend against application layer DDoS
attacks. CloudFront allows you to cache static content and serve it from AWS edge locations, which can help reduce the load on your origin. It can also help reduce server load by preventing non-web traffic from reaching your origin. Additionally, CloudFront can automatically close connections from slow-reading or slow-writing attackers (for example, Slowloris).

By using AWS WAF, you can configure web access control lists (web ACLs) on your CloudFront distributions or Application Load Balancers to filter and block requests based on request signatures. Each web ACL consists of rules that you can configure to string match or regex match one or more request attributes, such as the Uniform Resource Identifier (URI), query string, HTTP method, or header key. In addition, by using AWS WAF's rate-based rules, you can automatically block the IP addresses of bad actors when requests matching a rule exceed a threshold that you define. Requests from offending client IP addresses will receive 403 Forbidden error responses and will remain blocked until request rates drop below the threshold. This is useful for mitigating HTTP flood attacks that are disguised as regular web traffic.

To block attacks based on IP address reputation, you can create rules using IP match conditions or use Managed Rules for AWS WAF offered by sellers in the AWS Marketplace. AWS WAF directly offers AWS Managed Rules as a managed service, where you can choose IP reputation rule groups. The Amazon IP reputation list rule group contains rules that are based on Amazon internal threat intelligence. This is useful if you would like to block IP addresses typically associated with bots or other threats. The Anonymous IP list rule group contains rules to block requests from services that allow the obfuscation of viewer identity. These include requests from VPNs, proxies, Tor nodes, and cloud platforms (including AWS). Both AWS WAF and CloudFront also enable you to set geo restrictions to block or allow requests from selected countries. This can help block attacks originating from geographic locations where you do not expect to serve users.

To help identify malicious requests, review your web server logs or use AWS WAF's logging and Sampled Requests features. By enabling AWS WAF logging, you get detailed information about the traffic analyzed by the web ACL. AWS WAF supports log filtering, allowing you to specify which web requests are logged and which requests are discarded from the log after the inspection. Information recorded in the logs includes the time that AWS WAF received the request from your AWS resource, detailed information about the request, and the matching action for each rule. Sampled Requests provide details about requests within the past three hours that matched one of your AWS WAF rules. You can use this information to identify potentially malicious traffic signatures and create a new rule to deny those requests. If you see a number of requests with a random query string, make sure to allow only the query string parameters that are relevant to cache for your application. This technique is helpful in mitigating a cache-busting attack against your origin.

If you are subscribed to AWS Shield Advanced, you can engage the AWS Shield Response Team (SRT) to help you create rules to mitigate an attack that is hurting your application's availability. You can grant the AWS SRT limited access to your account's AWS Shield Advanced and AWS WAF APIs. The AWS SRT accesses these APIs to place mitigations on your account only with your explicit authorization. For more information, see the Support section of this document.

You can use AWS Firewall Manager to centrally configure and manage security rules, such as AWS Shield Advanced protections and AWS WAF rules, across your organization.
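The rate-based blocking described above can be sketched as a WAFv2 rule definition. This is a minimal sketch under stated assumptions: the rule name and the 2,000-request limit are illustrative, and the web ACL call that would carry the rule is only indicated in comments.

```python
# Sketch: a WAFv2 rate-based rule that blocks client IPs exceeding a request-rate
# threshold. The name, priority, and limit are illustrative assumptions.
import json

def build_rate_based_rule(name: str, limit: int, priority: int) -> dict:
    """Build a rate-based rule entry for a WAFv2 web ACL's Rules list."""
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {
            "RateBasedStatement": {
                # Requests per 5-minute window, aggregated per source IP.
                "Limit": limit,
                "AggregateKeyType": "IP",
            }
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }

rule = build_rate_based_rule("http-flood-guard", 2000, 1)
print(json.dumps(rule, indent=2))

# The rule would go in the Rules list of a wafv2 create_web_acl or
# update_web_acl call (requires AWS credentials):
#   import boto3
#   boto3.client("wafv2").create_web_acl(..., Rules=[rule], ...)
```

Offending IPs are blocked only while their request rate stays above the limit, which matches the 403-until-rates-drop behavior described above.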
Your AWS Organizations management account can designate an administrator account, which is authorized to create Firewall Manager policies. These policies allow you to define criteria, such as resource type and tags, which determine where rules are applied. This is useful when you have multiple accounts and want to standardize your protection.

For more information about:
• AWS Managed Rules for AWS WAF, see AWS Managed Rules for AWS WAF
• Using geo restriction to limit access to your CloudFront distribution, see Restricting the geographic distribution of your content
• Using AWS WAF, see:
  o Getting started with AWS WAF
  o Logging web ACL traffic information
  o Viewing a sample of web requests
• Configuring rate-based rules, see Protect Web Sites & Services Using Rate-Based Rules for AWS WAF
• Managing the deployment of AWS WAF rules across your AWS resources with AWS Firewall Manager, see:
  o Getting started with AWS Firewall Manager AWS WAF policies
  o Getting started with AWS Firewall Manager AWS Shield Advanced policies

Attack Surface Reduction

Another important consideration when architecting an AWS solution is to limit the opportunities an attacker has to target your application. This concept is known as attack surface reduction. Resources that are not exposed to the internet are more difficult to attack, which limits the options an attacker has to target your application's availability. For example, if you do not expect users to directly interact with certain resources, make sure that those resources are not accessible from the internet. Similarly, do not accept traffic from users or external applications on ports or protocols that aren't necessary for communication. In the following section, AWS
provides best practices to guide you in reducing your attack surface and limiting your application's internet exposure.

Obfuscating AWS Resources (BP1, BP4, BP5)

Typically, users can quickly and easily use an application without requiring that AWS resources be fully exposed to the internet. For example, when you have Amazon EC2 instances behind an Elastic Load Balancing load balancer, the instances themselves might not need to be publicly accessible. Instead, you could provide users with access to the load balancer on certain TCP ports and allow only the load balancer to communicate with the instances. You can set this up by configuring security groups and network access control lists (network ACLs) within your Amazon Virtual Private Cloud (Amazon VPC). Amazon VPC allows you to provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define.

Security groups and network ACLs are similar in that they allow you to control access to AWS resources within your VPC. But security groups allow you to control inbound and outbound traffic at the instance level, while network ACLs offer similar capabilities at the VPC subnet level. There is no additional charge for using security groups or network ACLs.

Security Groups and Network Access Control Lists (Network ACLs) (BP5)

You can choose whether to specify security groups when you launch an instance, or associate the instance with a security group at a later time. All internet traffic to a security group is implicitly denied unless you create an allow rule to permit the traffic. For example, if you have a web application that uses an Elastic Load Balancing load balancer and multiple Amazon EC2 instances, you might decide to create one
security group for the load balancer (the Elastic Load Balancing security group) and one for the instances (the web application server security group). You can then create an allow rule to permit internet traffic to the Elastic Load Balancing security group and another rule to permit traffic from the Elastic Load Balancing security group to the web application server security group. This ensures that internet traffic can't directly communicate with your Amazon EC2 instances, which makes it more difficult for an attacker to learn about and impact your application.

When you create network ACLs, you can specify both allow and deny rules. This is useful if you want to explicitly deny certain types of traffic to your application. For example, you can define IP addresses (as CIDR ranges), protocols, and destination ports that are denied access to the entire subnet. If your application is used only for TCP traffic, you can create a rule to deny all UDP traffic, or vice versa. This option is useful when responding to DDoS attacks because it lets you create your own rules to mitigate the attack when you know the source IPs or another signature.

If you are subscribed to AWS Shield Advanced, you can register Elastic IP addresses as Protected Resources. DDoS attacks against Elastic IP addresses that have been registered as Protected Resources are detected more quickly, which can result in a faster time to mitigate. When an attack is detected, the DDoS mitigation systems read the network ACL that corresponds to the targeted Elastic IP address and enforce it at the AWS network border. This significantly reduces your risk of impact from a number of infrastructure layer DDoS attacks.

For more information about configuring security groups and network ACLs to optimize for DDoS resiliency, see How to Help Prepare for DDoS Attacks by Reducing Your Attack Surface.
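The two-security-group layering described above can be sketched as the ingress rules involved. The security group IDs and ports below are hypothetical; each dict is an `IpPermissions` entry for an EC2 `authorize_security_group_ingress` call.

```python
# Sketch: two-tier security group rules. Group IDs and ports are assumptions.
def public_https_rule() -> dict:
    """Allow internet HTTPS traffic to the load balancer security group."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }

def elb_to_web_rule(elb_sg_id: str) -> dict:
    """Allow traffic to the web server security group only from the ELB group."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        # Referencing the ELB security group instead of a CIDR range means the
        # instances never accept traffic directly from the internet.
        "UserIdGroupPairs": [{"GroupId": elb_sg_id}],
    }

# Applying the rules requires AWS credentials; the IDs are hypothetical:
#   import boto3
#   ec2 = boto3.client("ec2")
#   ec2.authorize_security_group_ingress(
#       GroupId="sg-0aaaelbexample", IpPermissions=[public_https_rule()])
#   ec2.authorize_security_group_ingress(
#       GroupId="sg-0bbbwebexample",
#       IpPermissions=[elb_to_web_rule("sg-0aaaelbexample")])
```

Because security groups are deny-by-default for inbound traffic, no explicit deny rule is needed for the instances.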
For more information about using AWS Shield Advanced with Elastic IP addresses as Protected Resources, see the steps to Subscribe to AWS Shield Advanced.

Protecting Your Origin (BP1, BP5)

If you are using CloudFront with an origin that is inside of your VPC, you may want to ensure that only your CloudFront distribution can forward requests to your origin. With edge-to-origin request headers, you can add or override the value of existing request headers when CloudFront forwards requests to your origin. You can use origin custom headers (for example, an X-Shared-Secret header) to help validate that the requests made to your origin were sent from CloudFront. For more information about protecting your origin with origin custom headers, see Adding custom headers to origin requests and Restricting access to Application Load Balancers. For a guide on implementing a sample solution to automatically rotate the value of origin custom headers for the origin access restriction, see How to enhance Amazon CloudFront origin security with AWS WAF and AWS Secrets Manager.

Alternatively, you can use an AWS Lambda function to automatically update your security group rules to allow only CloudFront traffic. This improves your origin's security by helping to ensure that malicious users cannot bypass CloudFront and AWS WAF when accessing your web application. For more information about how to protect your origin by automatically updating your security groups, see How to Automatically Update Your Security Groups for Amazon CloudFront and AWS WAF by Using AWS Lambda.

Protecting API Endpoints (BP4)

Typically, when you must expose an API to the public, there is a risk that the API frontend could be targeted by a DDoS attack. To help reduce the risk, you can use Amazon API Gateway as an entryway to applications running on Amazon EC2, AWS Lambda, or elsewhere. By using Amazon API Gateway, you don't need
your own servers for the API frontend, and you can obfuscate other components of your application. By making it harder to detect your application's components, you can help prevent those AWS resources from being targeted by a DDoS attack.

When you use Amazon API Gateway, you can choose from two types of API endpoints. The first is the default option: edge-optimized API endpoints that are accessed through a CloudFront distribution. The distribution is created and managed by API Gateway, however, so you don't have control over it. The second option is to use a regional API endpoint that is accessed from the same AWS Region in which your REST API is deployed. AWS recommends that you use the second type of endpoint and associate it with your own CloudFront distribution. This gives you control over the CloudFront distribution and the ability to use AWS WAF for application layer protection. This mode provides you with access to scaled DDoS mitigation capacity across the AWS global edge network.

When using CloudFront and AWS WAF with Amazon API Gateway, configure the following options:
• Configure the cache behavior for your distributions to forward all headers to the API Gateway regional endpoint. By doing this, CloudFront will treat the content as dynamic and skip caching the content.
• Protect your API Gateway against direct access by configuring the distribution to include the origin custom header x-api-key, setting the API key value in API Gateway.
• Protect the backend from excess traffic by configuring standard or burst rate limits for each method in your REST APIs.

For more information about creating APIs with Amazon API Gateway, see Amazon API Gateway Getting Started.

Operational Techniques

The mitigation techniques in this paper help
you architect applications that are inherently resilient against DDoS attacks. In many cases, it's also useful to know when a DDoS attack is targeting your application so you can take mitigation steps. This section discusses best practices for gaining visibility into abnormal behavior, alerting and automation, managing protection at scale, and engaging AWS for additional support.

Visibility

When a key operational metric deviates substantially from the expected value, an attacker may be attempting to target your application's availability. Familiarity with the normal behavior of your application means you can take action more quickly when you detect an anomaly. Amazon CloudWatch can help by monitoring applications that you run on AWS. For example, you can collect and track metrics, collect and monitor log files, set alarms, and automatically respond to changes in your AWS resources. If you follow the DDoS-resilient reference architecture when architecting your application, common infrastructure layer attacks will be blocked before reaching your application.

If you are subscribed to AWS Shield Advanced, you have access to a number of CloudWatch metrics that can indicate that your application is being targeted. For example, you can configure alarms to notify you when there is a DDoS attack in progress, so you can check your application's health and decide whether to engage the AWS SRT. You can configure the DDoSDetected metric to tell you if an attack has been detected. If you want to be alerted based on the attack volume, you can also use the DDoSAttackBitsPerSecond, DDoSAttackPacketsPerSecond, or DDoSAttackRequestsPerSecond metrics. You can monitor these metrics by integrating Amazon CloudWatch with your own tools or by using tools provided by third parties, such as Slack or PagerDuty.
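The DDoSDetected alarm described above can be sketched as CloudWatch alarm parameters. The alarm name, SNS topic ARN, and protected-resource ARN are assumptions; the Shield Advanced metric namespace and dimension are reproduced as commonly documented, but verify them against your account's metrics.

```python
# Sketch: put_metric_alarm parameters for Shield Advanced's DDoSDetected metric.
# The ARNs and alarm name are illustrative assumptions.
import json

def build_ddos_alarm(resource_arn: str, topic_arn: str) -> dict:
    """Build CloudWatch put_metric_alarm parameters for DDoSDetected."""
    return {
        "AlarmName": "shield-ddos-detected",
        "Namespace": "AWS/DDoSProtection",
        "MetricName": "DDoSDetected",
        "Dimensions": [{"Name": "ResourceArn", "Value": resource_arn}],
        # DDoSDetected reports a nonzero value while an event is in progress.
        "Statistic": "Maximum",
        "Period": 60,
        "EvaluationPeriods": 1,
        "Threshold": 0,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],
    }

params = build_ddos_alarm(
    "arn:aws:cloudfront::123456789012:distribution/EXAMPLE",
    "arn:aws:sns:us-east-1:123456789012:ddos-alerts",
)
print(json.dumps(params, indent=2))

# Creating the alarm requires AWS credentials:
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(**params)
```

Wiring the alarm action to an SNS topic is one way to reach tools such as Slack or PagerDuty.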
An application layer attack can elevate many Amazon CloudWatch metrics. If you're using AWS WAF, you can use CloudWatch to monitor, and set alarms on, increases in requests that you've set in AWS WAF to be allowed, counted, or blocked. This allows you to receive a notification if the level of traffic exceeds what your application can handle. You can also use CloudFront, Amazon Route 53, Application Load Balancer, Network Load Balancer, Amazon EC2, and Auto Scaling metrics that are tracked in CloudWatch to detect changes that can indicate a DDoS attack. The Recommended Amazon CloudWatch Metrics table below lists descriptions of Amazon CloudWatch metrics that are commonly used to detect and react to DDoS attacks.

Recommended Amazon CloudWatch Metrics

AWS Shield Advanced
• DDoSDetected – Indicates a DDoS event for a specific Amazon Resource Name (ARN).
• DDoSAttackBitsPerSecond – The number of bytes observed during a DDoS event for a specific ARN. This metric is only available for layer 3/4 DDoS events.
• DDoSAttackPacketsPerSecond – The number of packets observed during a DDoS event for a specific ARN. This metric is only available for layer 3/4 DDoS events.
• DDoSAttackRequestsPerSecond – The number of requests observed during a DDoS event for a specific ARN. This metric is only available for layer 7 DDoS events and is only reported for the most significant layer 7 events.

AWS WAF
• AllowedRequests – The number of allowed web requests.
• BlockedRequests – The number of blocked web requests.
• CountedRequests – The number of counted web requests.
• PassedRequests – The number of passed requests. This is only used for requests that go through a rule group evaluation without matching any of the rule group rules.

CloudFront
• Requests – The number of HTTP/S requests.
• TotalErrorRate – The percentage of all requests for which the HTTP status code is 4xx or 5xx.

Amazon Route 53
• HealthCheckStatus – The status of the health check endpoint.

ALB
• ActiveConnectionCount – The total number of concurrent TCP connections that are active from clients to the load balancer and from the load balancer to targets.
• ConsumedLCUs – The number of load balancer capacity units (LCU) used by your load balancer.
• HTTPCode_ELB_4XX_Count, HTTPCode_ELB_5XX_Count – The number of HTTP 4xx or 5xx client error codes generated by the load balancer.
• NewConnectionCount – The total number of new TCP connections established from clients to the load balancer and from the load balancer to targets.
• ProcessedBytes – The total number of bytes processed by the load balancer.
• RejectedConnectionCount – The number of connections rejected because the load balancer reached its maximum number of connections.
• RequestCount – The number of requests that were processed.
• TargetConnectionErrorCount – The number of connections that were not successfully established between the load balancer and the target.
• TargetResponseTime – The time elapsed, in seconds, after the request leaves the load balancer until a response from the target is received.
• UnHealthyHostCount – The number of targets that are considered unhealthy.

NLB
• ActiveFlowCount – The total number of concurrent TCP flows (or connections) from clients to targets.
• ConsumedLCUs – The number of load balancer capacity units (LCU) used by your load balancer.
• NewFlowCount – The total number of new TCP flows (or connections) established from clients to targets in the time period.
• ProcessedBytes – The total number of bytes processed by the load balancer, including TCP/IP headers.

Global Accelerator
• NewFlowCount – The total number of new TCP and UDP flows (or connections) established from clients to endpoints in the time period.
• ProcessedBytesIn – The total number of incoming bytes processed by the accelerator, including TCP/IP headers.

Auto Scaling
• GroupMaxSize – The maximum size of the Auto Scaling group.

Amazon EC2
• CPUUtilization – The percentage of allocated EC2 compute units that are currently in use.
• NetworkIn – The number of bytes received by the instance on all network interfaces.

For more information about using Amazon CloudWatch to detect DDoS attacks on your application, see Getting Started with Amazon CloudWatch. To explore an example of a dashboard built using some of the metrics from the preceding table, see A custom baseline monitoring system.

AWS includes several additional metrics and alarms to notify you about an attack and to help you monitor your application's resources. The AWS Shield console or API provides a per-account event summary and details about attacks that have been detected. In addition, the global threat environment dashboard provides summary information about all DDoS attacks that have been detected by AWS.

Figure 1: Global threat environment dashboard

This information may be useful to better understand DDoS threats across a larger population of applications, in addition to attack trends, and for comparing with attacks that you may have observed. If you are subscribed to AWS Shield Advanced, the service dashboard displays additional detection and mitigation metrics and network traffic details for events detected on protected resources.

AWS Shield evaluates traffic to your protected resource along multiple dimensions. When an anomaly is detected, AWS Shield creates an event and reports the traffic dimension where the anomaly was observed. With a placed mitigation, this protects your resource from receiving excess traffic and traffic that matches a known DDoS event signature. Detection metrics are based on sampled network flows, or on AWS WAF logs when a web ACL is associated with the protected resource. Mitigation metrics are based on traffic that's observed by Shield's DDoS mitigation systems. Mitigation metrics are a more precise measurement of the traffic into your resource.

The network top contributors metric provides insight into where traffic is coming from during a detected event. You can view the highest-volume contributors and sort by aspects such as protocol, source port, and TCP flags. The top contributors metric includes metrics for all traffic observed on the resource along various dimensions. It provides additional metric dimensions you can use to understand network traffic that's sent to your resource during an event. The service dashboard also includes details about the actions automatically taken to mitigate DDoS attacks. This information makes it easier to investigate anomalies, explore dimensions of the traffic, and better understand the actions taken by AWS Shield Advanced to protect your availability.

Another tool that can help you gain visibility into traffic that is targeting your application is VPC Flow Logs. On a traditional network, you might use network flow logs to troubleshoot connectivity and security issues and to make sure that
network access rules are working as expected. By using VPC Flow Logs, you can capture information about the IP traffic that is going to and from network interfaces in your VPC. Each flow log record includes the following: source and destination IP addresses, source and destination ports, protocol, and the number of packets and bytes transferred during the capture window. You can use this information to help identify anomalies in network traffic and to identify a specific attack vector. For example, most UDP reflection attacks have specific source ports, such as source port 53 for DNS reflection. This is a clear attack signature that you can identify in the flow log record. In response, you might choose to block the specific source port at the instance level, or create a network ACL rule to block the entire protocol if your application doesn't require it. For more information about using VPC Flow Logs to identify network anomalies and DDoS attack vectors, see VPC Flow Logs and VPC Flow Logs – Log and View Network Traffic Flows.

Visibility and protection management across multiple accounts

In scenarios where you operate across multiple AWS accounts and have multiple components to protect, using techniques that enable you to operate at scale and reduce operational overhead increases your mitigation capabilities. When managing AWS Shield Advanced protected resources in multiple accounts, you can set up centralized monitoring by using AWS Firewall Manager and AWS Security Hub. With Firewall Manager, you can create a security policy that enforces DDoS protection compliance across all your accounts. You can use these two services together to manage your protected resources across multiple accounts and centralize the monitoring of those resources.
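The flow-log inspection described earlier (spotting a UDP reflection signature such as source port 53) can be sketched in a few lines. This is a minimal illustration, not an AWS tool: it assumes flow log records in the default VPC Flow Logs format (version, account-id, interface-id, srcaddr, dstaddr, srcport, dstport, protocol, ...), and the set of reflection ports and the threshold are illustrative choices, not recommendations.

```python
# Sketch: scan VPC Flow Log records (default format) for a UDP reflection
# signature, e.g. source port 53 (DNS). Field positions follow the default
# flow log format; ports and threshold below are illustrative assumptions.
from collections import Counter

UDP = "17"  # IP protocol number for UDP, as it appears in flow log records
REFLECTION_PORTS = {"53", "123", "1900"}  # DNS, NTP, SSDP (common reflectors)

def suspicious_sources(flow_log_lines, min_records=1):
    """Count records per source IP that match a UDP reflection signature."""
    hits = Counter()
    for line in flow_log_lines:
        f = line.split()
        if len(f) < 14 or f[0] == "version":  # skip header/malformed rows
            continue
        srcaddr, srcport, protocol = f[3], f[5], f[7]
        if protocol == UDP and srcport in REFLECTION_PORTS:
            hits[srcaddr] += 1
    return {ip: n for ip, n in hits.items() if n >= min_records}

# Two sample records: a UDP response from source port 53, and normal HTTPS.
records = [
    "2 123456789010 eni-abc123 198.51.100.7 10.0.0.5 53 33488 17 12 9000 1620000000 1620000060 ACCEPT OK",
    "2 123456789010 eni-abc123 203.0.113.9 10.0.0.5 44320 443 6 8 3500 1620000000 1620000060 ACCEPT OK",
]
print(suspicious_sources(records))  # → {'198.51.100.7': 1}
```

The flagged source IPs could then feed the responses the text describes, such as blocking the source port at the instance level or adding a network ACL rule.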
Security Hub automatically integrates with Firewall Manager, allowing AWS Shield Advanced customers to view security findings in a single dashboard alongside other high-priority security alerts and compliance statuses. For instance, when AWS Shield Advanced detects anomalous traffic destined for a protected resource in any AWS account within the scope, this finding will be visible in the Security Hub console. If configured, Firewall Manager can automatically bring the resource into compliance by creating it as an AWS Shield Advanced protected resource, and then update Security Hub when the resource is in a compliant state.

Figure 2: Monitoring AWS Shield protected resources with Firewall Manager and Security Hub (architecture diagram)

For more information about central monitoring of AWS Shield protected resources, see Set up centralized monitoring for DDoS events and auto remediate noncompliant resources.

Support

If you experience an attack, you can also benefit from support from AWS in assessing the threat and reviewing the architecture of your application, or you might want to request other assistance. It is important to create a response plan for DDoS attacks before an actual event. The best practices outlined in this paper are intended to be proactive measures that you implement before you launch an application, but DDoS attacks against your application might still occur. Review the options in this section to determine the support resources that are best suited for your scenario.

Your account team can evaluate your use case and application and assist with specific questions or challenges that you have. If you're running production workloads on AWS, consider subscribing to Business Support, which provides you with 24/7 access to Cloud Support Engineers who can assist with DDoS attack issues. If you're running mission-critical workloads, consider Enterprise Support, which provides the ability to open critical cases and receive the fastest response from a Senior Cloud Support Engineer.

If you are subscribed to AWS Shield Advanced and are also subscribed to either Business Support or Enterprise Support, you can configure AWS Shield proactive engagement. It allows you to configure health checks, associate them with your resources, and provide 24/7 operations contact information. When AWS Shield detects signs of DDoS and your application health checks are showing signs of degradation, AWS SRT will proactively reach out to you. This is our recommended engagement model because it allows for the quickest AWS SRT response times and empowers AWS SRT to begin troubleshooting even before contact has been established with you.

The proactive engagement feature requires you to configure an Amazon Route 53 health check that accurately measures the health of your application and is associated with the resource protected by AWS Shield Advanced. Once a Route 53 health check is associated in the AWS Shield console, the AWS Shield Advanced detection system uses the health check status as an indicator of your application's health. AWS Shield Advanced's health-based detection feature will ensure that you are notified, and that mitigations are placed more quickly, when your application is unhealthy. AWS SRT will contact you to troubleshoot whether the unhealthy application is being targeted by a DDoS attack, and place additional mitigations as needed.

Completing configuration of proactive engagement includes adding contact details in the AWS Shield console; AWS SRT will use this information to contact you. You can configure up to 10 contacts and provide additional notes if you have any specific contact requirements or preferences. Proactive engagement contacts should hold a 24/7 role, such as a security operations center, or be an individual who is immediately available. You can enable proactive engagement for all resources, or for select key production resources where response time is critical; this is accomplished by assigning health checks only to these resources. You can also escalate to AWS SRT by creating an AWS Support case using the AWS Support console or Support API if you have a DDoS-related event that affects your application's availability.

Conclusion

The best practices outlined in this paper can help you build a DDoS-resilient architecture that protects your application's availability by preventing many common infrastructure and application layer DDoS attacks. The extent to which you follow these best practices when you architect your application will influence the type, vector, and volume of DDoS attacks that you can mitigate. You can incorporate resiliency without subscribing to a DDoS mitigation service. By choosing to subscribe to AWS Shield Advanced, you gain additional support, visibility, mitigation, and cost protection features that further protect an already resilient application architecture.

Contributors

The following individuals and organizations contributed to this document:
• Jeffrey Lyon, AWS Perimeter Protection
• Rodrigo Ferroni, AWS Security Specialist TAM
• Dmitriy Novikov, AWS Solutions Architect
• Achraf Souk, AWS Solutions Architect
• Yoshihisa Nakatani, AWS Solutions Architect
Further Reading

For additional information, see:
• Best Practices for DDoS Mitigation on AWS
• Guidelines for Implementing AWS WAF
• SID324 – re:Invent 2017: Automating DDoS Response in the Cloud
• CTD304 – re:Invent 2017: Dow Jones & Wall Street Journal's Journey to Manage Traffic Spikes While Mitigating DDoS & Application Layer Threats
• CTD310 – re:Invent 2017: Living on the Edge, It's Safer Than You Think! Building Strong with Amazon CloudFront, AWS Shield, and AWS WAF
• SEC407 – re:Invent 2019: A defense-in-depth approach to building web applications
• SEC321 – re:Invent 2020: Get ahead of the curve with DDoS Response Team escalations
• William Hill: High performance DDoS Protection with AWS

Document revisions

September 21, 2021: Updated to include latest recommendations and features. AWS Global Accelerator added as part of comprehensive protection at the edge; AWS Firewall Manager added for centralized monitoring of DDoS events and auto-remediation of noncompliant resources.
December 2019: Updated to clarify cache busting in the Detect and Filter Malicious Web Requests (BP1, BP2) section and Elastic Load Balancing and Application Load Balancer usage in the Scale to Absorb (BP6) section. Updated diagrams and Table 2; marked "Choice of Region" as BP8. Updated BP7 section with more details.
December 2018: Updated to include AWS WAF logging as a best practice.
June 2018: Updated to include AWS Shield and AWS WAF features, AWS Firewall Manager, and related best practices.
June 2016: Added prescriptive architecture guidance and updated to include AWS WAF.
June 2015: Whitepaper published.

AWS Best Practices for Oracle PeopleSoft

December 2017

This paper has been archived.
For the latest technical guidance, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers/

© 2017 Amazon Web Services, Inc. and DLZP Group. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

• Benefits of Running Oracle PeopleSoft on AWS
• Key Benefits of AWS over On-Premises
• Key Benefits of AWS over SaaS
• Amazon Web Services Concepts
• Regions and Availability Zones
• Amazon Elastic Compute Cloud
• Amazon Relational Database Service
• Elastic Load Balancing
• Amazon Elastic Block Store
• Amazon Machine Image
• Amazon Simple Storage Service
• Amazon Route 53
• Amazon Virtual Private Cloud
• AWS Direct Connect
• AWS CloudFormation
• Oracle PeopleSoft and Database Licensing on AWS
• Oracle PeopleSoft and Database License Portability
• Amazon RDS for Oracle Licensing Models
• Best Practices for Deploying Oracle PeopleSoft on AWS
• Traffic Distribution and Load Balancing
• Use Multiple Availability Zones for High Availability
• Scalability
• Standby Instances
• Amazon VPC Deployment and Connectivity Options
• Disaster Recovery and Cross-Region Deployment
• Disaster Recovery on AWS with Production On-Premises
• AWS Security and Compliance
• The AWS Security Model
• AWS Identity and Access Management
• Monitoring and Logging
• Network Security and Amazon Virtual Private Cloud
• Data Encryption
• Migration Scenarios and Best Practices
• Migrate Existing Oracle PeopleSoft Environments to AWS
• Oracle PeopleSoft Upgrade
• Performance Testing
• Oracle PeopleSoft Test and Development Environments on AWS
• Disaster Recovery on AWS
• Training Environments
• Monitoring and Infrastructure
• Conclusion
• Contributors
• References

Abstract

This whitepaper covers areas that should be considered when moving Oracle PeopleSoft applications to Amazon Web Services (AWS). It helps you understand how to leverage AWS for all PeopleSoft applications, including PeopleSoft Human Capital Management (HCM), Financials and Supply Chain Management (FSCM), Interactive Hub (IAH), and Customer Relationship Management (CRM).

Benefits of Running Oracle PeopleSoft on AWS

Migrating Oracle PeopleSoft applications to AWS can be simplified by leveraging a standardized architecture footprint. It is important to understand that this is not just a conversion from physical hardware to a virtualized environment. In this section, we discuss key benefits of running PeopleSoft applications on AWS compared to various on-premises and Software-as-a-Service (SaaS) environments, whether virtualized or not.

Key Benefits of AWS over On-Premises

There are several key benefits to running PeopleSoft applications on AWS compared to on-premises environments:

• Eliminate Long Procurement Cycles: In the traditional deployment model, responding to increases in capacity, whether it be disk, CPU, or memory, can cause delays and challenges for your infrastructure team. The following diagram provides an overview of a typical client IT procurement cycle. Each step is time sensitive and requires large capital outlays and multiple approvals. This process must be repeated for each
change/increase in infrastructure, which can compound costs and cause significant delays. With AWS, resources are available as needed, within minutes of your requesting them.

Figure: IT Procurement Cycle – 01 Capacity Planning, 02 Capital Allocation, 03 Provisioning, 04 Maintenance, 05 Hardware Refresh

• Moore's Law: With an on-premises environment, you end up owning hardware that depreciates in value every year. You cannot simply add and remove computing capacity on demand; you're generally locked into the price and capacity of the hardware that you have acquired, as well as the resulting hardware support costs. With AWS, you can change the underlying infrastructure as new capabilities and configurations become available.
• Right Size Anytime: Often you end up oversizing your on-premises environments to anticipate potential capacity needs or to address development and quality assurance (QA) needs early in the project cycle. With AWS, you can adjust capacity to match your current needs with ease. Since you pay only for the services you use, you save money during all phases of the software deployment cycle.
• Resiliency: On-premises environments require an extensive set of hardware, software, and network monitoring tools. Failures must be handled on a case-by-case basis: you must procure and replace failed equipment and correct software and configuration issues, and key components of PeopleSoft must be replicated and managed. With AWS, you can leverage Elastic Load Balancing (ELB), Auto Recovery for Amazon Elastic Compute Cloud (Amazon EC2), and Multi-Availability Zone (AZ) capabilities to build a highly tolerant and resilient system with the highest service level agreement (SLA) available.
• Disaster Recovery: Traditional disaster recovery (DR) solutions require immense upfront expenditures and are not easily scalable. AWS offers built-in disaster
recovery solutions to execute your business data continuity plans at lower comparative costs, which allows you to benefit from an on-demand model while always having the optimal amount of data redundancy.
• Incidental Data Center Costs: With an on-premises environment, you typically pay hardware support costs, virtualization licensing and support, data center operational costs, and more. All of these costs can be eliminated or reduced by leveraging AWS.
• Testing: Even though testing is recommended prior to any PeopleSoft application or environment change, few perform any significant testing after the initial application launch, due to the expense and the unavailability of the required environment. With AWS, you can easily and quickly create and use a test environment, thus eliminating the risk of discovering functional, performance, or security issues in production. Again, you are charged only for the hours the test environment is used.
• Hardware: All hardware platforms have end-of-life (EOL) dates, at which point the hardware is no longer supported and you are forced to replace it or face enormous maintenance costs. With AWS, you can simply upgrade platform instances to new AWS instance types with a single click, at no cost for the upgrade.
• High Availability: High availability for critical applications is a major factor in corporate decisions to choose the AWS Cloud. With AWS, you can achieve 99.95% uptime by placing your data and your applications in multiple Availability Zones (locations). Your critical data synchronously replicates to standby instances and recovers automatically. This automation allows AWS to achieve better performance than the average SLA of other data centers. With additional investment and infrastructure design, uptime could approach 99.99%.
• Unlimited Environments: On-premises environments are rigid and take too long to provision. For example, if a performance issue is found in production, it takes time to provision a test environment with an
identical configuration to the production environment. On AWS, you can create the test environment and clone your production database quickly and easily.

Key Benefits of AWS over SaaS

There are several key benefits to deploying PeopleSoft applications on AWS compared to using a SaaS solution:

• Lower Total Cost of Ownership (TCO): If you already use PeopleSoft, you do not have to purchase new licenses or take the risks associated with reimplementing your applications; you can just move your existing implementation to AWS. If you are a new customer, the TCO may still be lower when taking monthly SaaS fees into account.
• Security: On AWS, PeopleSoft can be deployed in a virtual private cloud (VPC) created using the Amazon Virtual Private Cloud service. Your VPC can be connected to your on-premises data centers using AWS Direct Connect, bypassing the public internet. Using AWS Direct Connect, you can assign private IP addresses to your PeopleSoft instances as if they were on your internal network. By contrast, SaaS must be accessed over the public internet, making it less secure and requiring a bigger integration effort.
• Unlimited Usage: SaaS applications have governor/platform limits to accommodate their underlying multitenant architecture. Governor limits restrict everything from the number of API calls and transaction times to data sets and file sizes. With the AWS Cloud, you can provision and use as much capacity as needed and pay only for what you use.
• Elastic Compute Capacity: SaaS products typically use a multitenant architecture that ties you to a specific instance and the limits of that instance. With AWS, you can provision as much or as little compute capacity as you need.
• Application Features and Functions: PeopleSoft lets you manage everything from Financials to Human Capital Management and the Interactive Hub within the associated application pillar. Many SaaS solutions require multiple
applications that you must purchase and integrate, even if they come from the same vendor. It is easy to overlook the cost of integration in the application buying decision. Running PeopleSoft, with its rich integrated functionality, on AWS avoids this cost.

Amazon Web Services Concepts

Understanding the various AWS services and how they can be leveraged will allow you to deploy secure and scalable Oracle PeopleSoft applications, whether your organization has 10 users or 100,000 users.

Regions and Availability Zones

An AWS Region is a physical location in the world. Each Region is a separate geographic area, isolated from the other Regions. Regions provide you the ability to place resources, such as Amazon Elastic Compute Cloud (Amazon EC2) instances and data, in multiple locations. Resources aren't replicated across Regions unless you do so specifically. An AWS account provides multiple Regions so that you can launch your application in locations that meet your requirements. For example, you might want to launch your application in Europe to be closer to your European customers or to meet legal requirements.

Each Region has multiple isolated locations known as Availability Zones. Each Availability Zone runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable. Common points of failure, such as generators and cooling equipment, are not shared across Availability Zones. Because Availability Zones are physically separate, even extremely uncommon disasters such as fires, tornados, or flooding would only affect a single Availability Zone. Each Availability Zone is isolated, but the Availability Zones in a Region are connected through low-latency links. The following diagram illustrates the relationship between Regions and Availability Zones.

Figure 1: Relationship between AWS Regions
and Availability Zones

The following figure shows the Regions and the number of Availability Zones in each Region provided by an AWS account at the time of this publication. For the most current list of Regions and Availability Zones, see https://aws.amazon.com/about-aws/global-infrastructure/. Note that you can't describe or access additional Regions from the AWS GovCloud (US) Region or the China (Beijing) Region.

Figure 2: Map of AWS Regions and Availability Zones

Amazon Elastic Compute Cloud

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud, billed by the hour. You can run virtual machines with various compute and memory capacities. You have a choice of operating systems, including different versions of Windows Server and Linux.

Amazon Relational Database Service

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, allowing you to focus on your applications and business. For PeopleSoft, both Microsoft SQL Server and Oracle databases are available.

Elastic Load Balancing

Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple EC2 instances in the cloud. It enables you to achieve greater levels of fault tolerance in your applications, seamlessly providing the required amount of load balancing capacity needed to distribute application traffic. ELB can be used for load balancing web server traffic.

Amazon Elastic Block Store

Amazon Elastic Block Store (Amazon EBS) provides persistent block-level storage volumes for use with EC2 instances in the AWS Cloud. Each EBS volume is automatically replicated within its Availability Zone to protect you
from component failure, offering high availability and durability. EBS volumes offer the consistent, low-latency performance needed to run your workloads.

Amazon Machine Image

An Amazon Machine Image (AMI) is simply a packaged-up environment that includes all the necessary bits to set up and boot your EC2 instance. AMIs are your unit of deployment. Amazon EC2 uses Amazon EBS and Amazon S3 to provide reliable, scalable storage of your AMIs so that they can be booted on request.

Amazon Simple Storage Service

Amazon Simple Storage Service (Amazon S3) provides developers and IT teams with secure, durable, highly scalable object storage. Amazon S3 is easy to use, with a simple web services interface to store and retrieve any amount of data from anywhere on the web. With Amazon S3, you pay only for the storage you actually use; there is no minimum fee and no setup cost.

Amazon Route 53

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to internet applications.

Amazon Virtual Private Cloud

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own private IP address range, creation of subnets, and configuration of route tables and network gateways. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to EC2 instances in each subnet. Additionally, you can create a hardware Virtual Private Network (VPN) connection between your corporate data center and your VPC, and leverage the AWS Cloud as an extension of your corporate data center.

AWS Direct Connect

AWS
Direct Connect is a network service that provides an alternative to using the internet to utilize AWS cloud services. Using AWS Direct Connect, you can establish private, dedicated network connectivity between AWS and your data center, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections.

AWS CloudFormation

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can leverage AWS CloudFormation to quickly provision your PeopleSoft environments, as well as to quickly create and update your infrastructure. You can create your own CloudFormation templates to describe the AWS PeopleSoft resources and any associated dependencies or runtime parameters required to run them. You don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work; AWS CloudFormation takes care of this for you. After your resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your traditional software. You can deploy and update a template and its associated collection of resources (called a stack) by using the AWS Management Console, AWS Command Line Interface, or APIs. AWS CloudFormation is available at no additional charge, and you pay only for the AWS resources needed to run your applications.

Oracle PeopleSoft and Database Licensing on AWS

Oracle PeopleSoft and Database License Portability

Most Oracle software licenses are fully portable to AWS, including Enterprise License Agreement (ELA), Unlimited License Agreement (ULA), Business Process Outsourcing (BPO), and Oracle
Partner Network (OPN). You can use your existing Oracle PeopleSoft and Oracle Database licenses on AWS just like you would use them on premises; however, you should read your Oracle contract for specific information, and consult with a knowledgeable Oracle licensing expert when in doubt.

Amazon RDS for Oracle Licensing Models

You can run Amazon RDS for Oracle under two different licensing models: "License Included" (LI) and "Bring Your Own License" (BYOL). In the LI model, you do not need to separately purchase Oracle licenses, as the Oracle Database software has been licensed by AWS. If you already own Oracle Database licenses, you generally can use the BYOL model to run Oracle databases on Amazon RDS. The BYOL model is designed for customers who prefer to use existing Oracle Database licenses or purchase new licenses directly from Oracle.

Best Practices for Deploying Oracle PeopleSoft on AWS

The following architecture diagram illustrates how PeopleSoft Pure Internet Architecture (PIA) can be deployed on AWS. You can deploy your PeopleSoft web, application, and process scheduler servers and the PeopleSoft database across multiple Availability Zones for high availability of your application.

Figure 3: Sample PeopleSoft Pure Internet Architecture deployment on AWS

Traffic Distribution and Load Balancing

Use Amazon Route 53 DNS to direct users to PeopleSoft hosted on AWS. Use Elastic Load Balancing to distribute incoming traffic across your web servers deployed in multiple Availability Zones. The load balancer serves as a single point of contact for clients, which enables you to increase the availability of your application. You can add and remove PeopleSoft web server instances from your load balancer as your needs change, without disrupting the overall flow of information. Elastic Load Balancing ensures that only healthy
web server instances receive traffic. If a web server instance fails, Elastic Load Balancing automatically reroutes the traffic to the remaining running web server instances. If a failed web server instance is restored, Elastic Load Balancing restores the traffic to that instance.

The PeopleSoft web servers will load balance the requests among the PeopleSoft application servers; if a PeopleSoft application server fails, the requests are routed to another available PeopleSoft application server. PeopleSoft application server load balancing and failover can be configured in the PeopleSoft configuration.properties file. Refer to the PeopleSoft documentation for more information on configuring PeopleSoft application server load balancing and failover.

Use Multiple Availability Zones for High Availability

Each Availability Zone is isolated from other Availability Zones and runs on its own physically distinct, independent infrastructure. The likelihood of two Availability Zones experiencing a failure at the same time is relatively small, and you can spread your PeopleSoft web, application, and process scheduler servers across multiple Availability Zones to ensure high availability of your application. In the unlikely event of failure of one Availability Zone, user requests are routed by Elastic Load Balancing to the web server instances in the second Availability Zone, and the PeopleSoft web servers will fail over their requests to PeopleSoft application server instances in the second Availability Zone. This ensures that your application continues to remain available in the unlikely event of an Availability Zone failure.

In addition to the PeopleSoft web and application servers, the PeopleSoft database on Amazon RDS can be deployed in a Multi-AZ configuration. Multi-AZ deployments provide enhanced availability and durability for Amazon RDS DB instances, making them a natural fit for production database
workloads. When you provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a "standby" instance in a different Availability Zone. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the "standby" instance. Since the endpoint for your DB instance remains the same after a failover, your application can resume database operations as soon as the failover is complete, without the need for manual administrative intervention. See Configuring Amazon RDS as an Oracle PeopleSoft Database to learn how to set up Amazon RDS for Oracle as the database backend of your PeopleSoft application.

You can use the Amazon EC2 Auto Recovery feature to recover failed PeopleSoft web, application, and process scheduler server instances in case of failure of the underlying host. When using Amazon EC2 Auto Recovery, several system status checks monitor the instance and the other components that need to be running in order for your instance to function as expected. Among other things, the system status checks look for loss of network connectivity, loss of system power, software issues on the physical host, and hardware issues on the physical host. If a system status check of the underlying hardware fails, the instance will be rebooted (on new hardware if necessary) but will retain its instance ID, IP address, Elastic IP addresses, EBS volume attachments, and other configuration details.

Scalability

When using AWS, you can scale your application easily due to the elastic nature of the cloud. You can scale up the PeopleSoft web, application, and process scheduler servers simply by changing the instance type to a larger instance type. For example, you can start with an r4.large instance with 2 vCPUs and 15 GiB RAM and scale up all the way to an x1.32xlarge instance with 128 vCPUs and 1,952 GiB RAM.
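To make the scale-up path concrete, the sketch below picks the smallest instance type that meets a vCPU/memory target from a small hardcoded subset of the R4 and X1 families. The r4.large and x1.32xlarge figures are the ones quoted above; the intermediate sizes and the selection helper are illustrative only, not an AWS API:

```python
# Illustrative subset of instance sizes (vCPUs, memory in GiB).
# r4.large and x1.32xlarge match the figures in the text; the
# intermediate entries are examples for the sake of the sketch.
INSTANCE_TYPES = {
    "r4.large":    (2,   15),
    "r4.2xlarge":  (8,   61),
    "r4.8xlarge":  (32,  244),
    "x1.32xlarge": (128, 1952),
}

def smallest_fit(min_vcpus, min_mem_gib):
    """Return the smallest listed type meeting both targets."""
    candidates = [
        (vcpus, mem, name)
        for name, (vcpus, mem) in INSTANCE_TYPES.items()
        if vcpus >= min_vcpus and mem >= min_mem_gib
    ]
    if not candidates:
        raise ValueError("no listed instance type is large enough")
    # Tuples sort by vCPUs first, then memory, so min() is the smallest fit.
    return min(candidates)[2]

print(smallest_fit(2, 15))      # r4.large
print(smallest_fit(64, 1000))   # x1.32xlarge
```

In practice the resize itself is a console or API operation on the stopped instance; this sketch only illustrates the capacity-planning step.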
After selecting a new instance type, only a restart is required for the changes to take effect. Typically the resizing operation is completed in a few minutes; the EBS volumes remain attached to the instances, and no data migration is required.

For your PeopleSoft database deployed on Amazon RDS, you can scale the compute and storage independently. You can scale up the compute simply by changing the DB instance class to a larger DB instance class. This modification typically takes only a few minutes, and the database will be temporarily unavailable during this period. You can increase the storage capacity and IOPS provisioned for your database without any impact on database availability.

You can scale out the web and application tier by adding and configuring more instances when required. You can launch a new EC2 instance in a few minutes. However, additional work is required to configure the new web and application tier instance. Although it might be possible to automate the scaling out of the web and application tier using scripting, this requires an additional technical investment. A simpler alternative might be to use standby instances, as explained in the next section.

Standby Instances

To meet extra capacity requirements, additional instances of PeopleSoft web and application servers can be preinstalled and configured on EC2 instances. These standby instances can be shut down until extra capacity is required. Charges are not incurred while EC2 instances are shut down; only EBS storage charges are incurred. At the time of this publication, EBS General Purpose volumes are priced at $0.10 per GB per month in the US East (Ohio) Region. Therefore, for an EC2 instance with 120 GB of hard disk drive (HDD) space, the storage charge is only $12 per month. These preinstalled standby instances give you the flexibility to meet additional capacity needs as and when required.
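The standby-instance economics above reduce to a one-line calculation. The $0.10 per GB-month rate is the figure quoted in the text for General Purpose volumes in US East (Ohio) at the time of publication; verify current pricing before relying on it:

```python
def monthly_standby_storage_cost(gb, price_per_gb_month=0.10):
    """Monthly EBS storage cost for a stopped standby instance.

    Stopped EC2 instances accrue no compute charges; only the attached
    EBS storage is billed. Rate taken from the text; pricing changes.
    """
    return round(gb * price_per_gb_month, 2)

print(monthly_standby_storage_cost(120))  # 12.0, matching the $12/month figure above
```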
Amazon VPC Deployment and Connectivity Options

Amazon VPC provides you with several options for connecting your AWS virtual networks with other remote networks securely. If your users primarily access the PeopleSoft application from an office or on premises, you can use a hardware IPsec VPN connection or AWS Direct Connect to connect the on-premises network and Amazon VPC. If they access the application from outside the office (for example, a sales rep or customer accesses it from the field or from home), you can use a software appliance-based VPN connection over the internet. Refer to the Amazon Virtual Private Cloud Connectivity Options whitepaper for detailed information.

Disaster Recovery and Cross-Region Deployment

Even though a single-Region architecture with a multi-Availability Zone deployment might suffice for most use cases, some customers might want to consider a multi-Region deployment for disaster recovery (DR), depending on business requirements. For example, there might be a business policy that mandates that the disaster recovery site be located a certain distance away from the primary site.

Cross-Region deployments for DR should be designed and validated for specific use cases based on customer uptime needs and budget. The following diagram depicts a typical PeopleSoft deployment across Regions that addresses both high availability and DR requirements. The users are directed to the PeopleSoft application in the primary Region using Amazon Route 53. In case the primary Region is unavailable due to a disaster, failover is initiated and the users are redirected to the PeopleSoft application deployed in the DR Region. The primary database is deployed on Amazon RDS for Oracle in a Multi-AZ configuration. AWS Database Migration Service (AWS DMS) in continuous data replication mode is used to replicate the data from the RDS instance in the primary Region to another RDS instance in the DR Region.
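The Route 53 behavior described above amounts to answering queries with the primary endpoint while its health check passes and with the DR endpoint otherwise. A minimal sketch, with hypothetical hostnames (real Route 53 failover uses health checks attached to record sets, not application code):

```python
def resolve(primary_healthy,
            primary="peoplesoft.primary.example.com",
            secondary="peoplesoft.dr.example.com"):
    """Failover routing in the spirit of Route 53: serve the primary
    endpoint while it is healthy, otherwise fail over to the DR endpoint.
    Hostnames are hypothetical placeholders."""
    return primary if primary_healthy else secondary

print(resolve(True))   # normal operation: users reach the primary Region
print(resolve(False))  # disaster: users are redirected to the DR Region
```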
Note that AWS DMS can replicate only the data, not the database schema changes. The database schema changes in the RDS DB instance in the primary Region should be applied separately to the RDS DB instance in the DR Region. This could be done while patching or updating the PeopleSoft application in the DR Region.

Figure 4: Sample PeopleSoft cross-Region deployment on AWS

Deploying the Database on Amazon EC2 Instances

While Amazon RDS is the recommended option for deploying the PeopleSoft database, there could be some scenarios where Amazon RDS might not be suitable. For example, Amazon RDS might not be suitable if the database size is larger than 6 TB, which is the current limit for Amazon RDS for Oracle. In such scenarios, you can install the PeopleSoft database on Oracle on EC2 instances and configure Oracle Data Guard replication for high availability and DR, as shown in the following figure.

Figure 5: Sample multi-Region deployment with Oracle on Amazon EC2

In this DR scenario, the database is deployed on Oracle running on EC2 instances. Oracle Data Guard replication is configured between the primary database and two standby databases. One of the two standby databases is "local" (for synchronous replication) in another Availability Zone in the primary Region. The other is a "remote" standby database (for asynchronous replication) in the DR Region. In case of failure of the primary database, the "local" standby database is promoted as the primary database, and the PeopleSoft application will connect to it. In the extremely unlikely event of a Region failure or unavailability, the "remote" standby database is promoted as the primary database, and users are redirected to the PeopleSoft application in the DR Region using Route 53.

For more details on deploying Oracle Database with Data Guard replication on AWS, see the Oracle Database on AWS Quick Start.
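The promotion order described above, primary first, then the synchronous local standby, then the asynchronous remote standby, can be sketched as a simple decision function. This is an illustration of the failover sequence only, not Oracle Data Guard broker logic:

```python
def serving_database(primary_ok, local_standby_ok):
    """Which database serves the PeopleSoft application (simplified).

    Order mirrors the text: primary, then the synchronous 'local'
    standby in another AZ, then the asynchronous 'remote' standby
    in the DR Region.
    """
    if primary_ok:
        return "primary"
    if local_standby_ok:
        return "local-standby"   # synchronous copy: no committed data loss
    return "remote-standby"      # asynchronous copy: possible small loss

print(serving_database(True, True))    # primary
print(serving_database(False, True))   # local-standby (AZ failure)
print(serving_database(False, False))  # remote-standby (Region failure)
```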
Refer to the AWS whitepaper Disaster Recovery on AWS to learn more about using AWS for disaster recovery.

Disaster Recovery on AWS with Production On-Premises

You can use AWS to deploy DR environments for PeopleSoft applications running on premises. In this scenario, the production environment remains on premises, but the DR environment is deployed on AWS. If the production environment fails, a failover is initiated and users of your application are redirected to the PeopleSoft application deployed on AWS. The process is fairly simple and involves the following major steps:

1. Set up connectivity between the on-premises data center and AWS using VPN or AWS Direct Connect.

2. Install PeopleSoft web, application, and process scheduler servers on AWS.

3. Install the secondary database on AWS and configure Oracle Data Guard replication between the on-premises production database and the secondary database on AWS. Alternatively, instead of Oracle Data Guard, you could use the AWS Database Migration Service (AWS DMS) in continuous data replication mode for replicating the on-premises production database to the secondary database on AWS. AWS DMS can replicate only the data, not the database schema changes. The database schema changes in the on-premises production database should be applied separately to the secondary database on AWS. This could be done while patching or updating the PeopleSoft application on AWS.

4. If the on-premises production environment fails, initiate a failover and redirect your users to the PeopleSoft application on AWS.

AWS Security and Compliance

The AWS Cloud security infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today. Security on AWS is very similar to security in your on-premises data center, but without the costs and complexities
involved in protecting facilities and hardware. AWS provides a secure global infrastructure, plus a range of features that you can use to help secure your systems and data in the cloud. To learn more about AWS security, visit the AWS Security Center.

AWS Compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud. AWS engages with external certifying bodies and independent auditors to provide customers with extensive information regarding the policies, processes, and controls established and operated by AWS. To learn more about AWS Compliance, visit the AWS Compliance Center.

The AWS Security Model

The AWS infrastructure has been architected to provide an extremely scalable, highly reliable platform that enables you to deploy applications and data quickly and securely. Security in the cloud is slightly different than security in your on-premises data centers. When you move computer systems and data to the cloud, security responsibilities become shared between you and your cloud service provider. In this case, AWS is responsible for securing the underlying infrastructure that supports the cloud, and you are responsible for securing the workloads that you deploy in AWS. This shared security responsibility model can reduce your operational burden in many ways, and it gives you the flexibility you need to implement the most applicable security controls for your business functions in the AWS environment.

Figure 6: The AWS shared responsibility model

It's recommended that you take advantage of the various security features AWS offers when deploying PeopleSoft applications on AWS. Some of them are listed in the following discussion.

AWS Identity and Access Management

With AWS Identity and Access Management (IAM), you can centrally manage users, security credentials such as passwords and access keys, and permissions policies that control which AWS
services and resources users can access. IAM supports multi-factor authentication (MFA) for privileged accounts, including options for hardware-based authenticators, and supports integration and federation with corporate directories to reduce administrative overhead and improve the end user experience.

Monitoring and Logging

AWS CloudTrail is a service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. This provides deep visibility into API calls, including who, what, when, and from where calls were made. The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.

Network Security and Amazon Virtual Private Cloud

You create one or more subnets within each VPC. Each instance launched in your VPC is connected to one subnet. Traditional layer 2 security attacks, including MAC spoofing and ARP spoofing, are blocked. You can configure network ACLs, which are stateless traffic filters that apply to all inbound or outbound traffic from a subnet within your VPC. These ACLs can contain ordered rules to allow or deny traffic based on IP protocol, by service port, as well as by source/destination IP address. Security groups are a complete firewall solution, enabling filtering on both ingress and egress traffic from an instance. Traffic can be restricted by any IP protocol, by service port, as well as by source/destination IP address (individual IP or Classless Inter-Domain Routing (CIDR) block).

Data Encryption

AWS offers you the ability to add a layer of security to your data at rest in the cloud by providing scalable and efficient encryption features. Data encryption capabilities are available in AWS storage and database services such
as Amazon EBS, Amazon S3, Amazon Glacier, Amazon RDS for Oracle, Amazon RDS for SQL Server, and Amazon Redshift. Flexible key management options allow you to choose whether to have AWS manage the encryption keys using the AWS Key Management Service (AWS KMS) or to maintain complete control over your keys. Dedicated hardware-based cryptographic key storage options (AWS CloudHSM) are available to help you satisfy compliance requirements. For more information, see the following AWS whitepapers: Introduction to AWS Security and AWS Security Best Practices.

Migration Scenarios and Best Practices

Each customer could potentially have a different migration scenario. This section covers some of the most common scenarios.

Migrate Existing Oracle PeopleSoft Environments to AWS

This is most suitable if you are on a recent release of PeopleSoft. You should design your AWS deployment based on the best practices in this whitepaper.

Oracle PeopleSoft Upgrade

You can leverage AWS as the upgrade environment to keep the PeopleSoft upgrade costs to a minimum. In the end, you have the option to leverage this new environment for test and development only, or you can choose to migrate your entire PeopleSoft environment to AWS. Either way it's a win for you, as the overall TCO can be reduced.

Performance Testing

AWS enables you to test your PeopleSoft applications during initial deployments or upgrades with minimal cost, because you are only charged for the resources you use when the tests run. This enables more consistent, repeatable testing for PeopleSoft upgrades and updates, which can be budgeted on a predictable basis depending upon your normal need cycles.

Oracle PeopleSoft Test and Development Environments on AWS

The flexibility and pay-as-you-go nature of AWS makes it compelling for setting up test and development environments, whether to try out AWS prior to a migration or just for additional test and development environments
if the migration of the production environment is not imminent.

Disaster Recovery on AWS

You may want to set up a disaster recovery (DR) environment for your existing PeopleSoft applications on AWS, even if your production environment is still on premises. This can be done at a much lower cost than setting up a traditional DR environment.

Training Environments

By leveraging the ability to replicate the production environment, you can quickly provision a training environment for short-term use and train your employees using the most current version of production. After training has been completed, these instances can be terminated to save money.

Monitoring and Infrastructure

After migrating your PeopleSoft application to AWS, you can continue to use the monitoring tools you are familiar with for monitoring your PeopleSoft application. You can use PeopleSoft Performance Monitor to monitor the performance of your PeopleSoft environment. You can collect real-time resource utilization metrics from your web servers, application servers, and PeopleSoft Process Scheduler servers, as well as key metrics on PeopleTools runtime execution, such as SQL statements and PeopleCode events. Optionally, you can use Oracle Enterprise Manager for monitoring your PeopleSoft environment by installing the PeopleSoft Enterprise Environment Management Plug-in.

You can also use Amazon CloudWatch to monitor AWS Cloud resources and the applications you run on AWS. Amazon CloudWatch enables you to monitor your AWS resources in near real time, including Amazon EC2 instances, Amazon EBS volumes, ELB load balancers, and Amazon RDS DB instances. Metrics such as CPU utilization, latency, and request counts are provided automatically for these AWS resources. You can also supply your own logs or custom application and system metrics, such as memory usage, transaction volumes, or error rates, and Amazon CloudWatch will monitor these as well.
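As a sketch of supplying a custom application metric such as an error rate, the code below computes the value and shapes it like a CloudWatch datapoint. The namespace and metric name are hypothetical, and the actual publish call (which requires boto3 and AWS credentials) is shown commented out:

```python
from datetime import datetime, timezone

def error_rate(requests, errors):
    """Custom application metric: percentage of failed requests."""
    return 0.0 if requests == 0 else round(100.0 * errors / requests, 2)

def datapoint(value):
    # Shape the value as a CloudWatch datapoint. "PeopleSoft/App" and
    # "ErrorRate" are hypothetical names chosen for this illustration;
    # the structure follows the PutMetricData request shape.
    return {
        "Namespace": "PeopleSoft/App",
        "MetricData": [{
            "MetricName": "ErrorRate",
            "Timestamp": datetime.now(timezone.utc),
            "Value": value,
            "Unit": "Percent",
        }],
    }

dp = datapoint(error_rate(2000, 25))
# Publishing would require boto3 and AWS credentials, for example:
# boto3.client("cloudwatch").put_metric_data(**dp)
```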
You can use the Enhanced Monitoring feature of Amazon RDS to monitor your PeopleSoft database. Enhanced Monitoring gives you access to over 50 metrics, including CPU, memory, file system, and disk I/O. You can also view the processes running on the DB instance and their related metrics, including percentage of CPU usage and memory usage.

Conclusion

By deploying PeopleSoft applications on the AWS Cloud, you can reduce costs and simultaneously enable capabilities that might not be possible or cost effective if you deployed your application in an on-premises data center. The following benefits of deploying PeopleSoft applications on AWS were discussed:

• Low cost – Resources are billed by the hour and only for the duration they are used.
• Capex to Opex – Change from Capex to Opex to eliminate the need for large capital outlays.
• High availability – Achieve high availability of 99.95% or more by deploying PeopleSoft in a Multi-AZ configuration.
• Flexibility – Add compute capacity elastically to cope with demand.
• Testing – Add test environments and use them for short durations.

Contributors

The following individuals and organizations contributed to this document:

• Ashok Sundaram, Solutions Architect, Amazon Web Services
• David Brunet, VP Research and Development, DLZP Group
• Yoav Eilat, Amazon Web Services

References

• Test Drive PeopleSoft Running on EC2 and RDS: http://www.dlzpgroup.com/testdrive.html
• Amazon EC2 Documentation: https://aws.amazon.com/documentation/ec2/
• Amazon RDS Documentation: https://aws.amazon.com/documentation/rds/
• Amazon CloudWatch: https://aws.amazon.com/cloudwatch/
• AWS Cost Estimator: http://calculator.s3.amazonaws.com/index.html
• AWS Trusted Advisor: https://aws.amazon.com/premiumsupport/trustedadvisor/
• Oracle Cloud Licensing: http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf
• Amazon VPC Connectivity Options: https://media.amazonwebservices.com/AWS_Amazon_VPC_Connectivity_Options.pdf
• AWS Security: http://d0.awsstatic.com/whitepapers/Security/Intro_to_AWS_Security.pdf
• AWS Security Best Practices: http://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf
• Disaster Recovery on AWS: http://d36cz9buwru1tt.cloudfront.net/AWS_Disaster_Recovery.pdf
• AWS Support: https://aws.amazon.com/premiumsupport/
• DLZP Group: http://www.dlzpgroup.com/

AWS Certifications, Programs, Reports, and Third-Party Attestations

March 2017

This paper has been archived. For the latest information, see AWS Services in Scope by Compliance Program.

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

CJIS
CSA
Cyber Essentials Plus
DoD SRG Levels 2 and 4
FedRAMP
FERPA
FIPS 140-2
FISMA and DIACAP
GxP
HIPAA
IRAP
ISO 9001
ISO 27001
ISO 27017
ISO 27018
ITAR
MPAA
MTCS Tier 3 Certification
NIST
PCI DSS Level 1
SOC 1/ISAE 3402
SOC 2
SOC 3
Further Reading
Document Revisions
Abstract

AWS engages with external certifying bodies and independent auditors to provide customers with considerable information regarding the policies, processes, and controls established and operated by AWS.

Amazon Web Services – Certifications, Programs, Reports, and Third-Party Attestations

CJIS

AWS complies with the FBI's Criminal Justice Information Services (CJIS) standard. We sign CJIS security agreements with our customers, including allowing or performing any required employee background checks according to the CJIS Security Policy. Law enforcement customers (and partners who manage CJI) are taking advantage of AWS services to improve the security and protection of CJI data, using the advanced security services and features of AWS such as activity logging (AWS CloudTrail), encryption of data in motion and at rest (Amazon S3's server-side encryption with the option to bring your own key), comprehensive key management and protection (AWS Key Management Service and CloudHSM), and integrated permission management (IAM federated identity management, multi-factor authentication). AWS has created a Criminal Justice Information Services (CJIS) Workbook in a security plan template format aligned to the CJIS Policy Areas. Additionally, a CJIS whitepaper has been developed to help guide customers in their journey to cloud adoption. Visit the CJIS hub page at https://aws.amazon.com/compliance/cjis/

CSA

In 2011, the Cloud Security Alliance (CSA) launched STAR, an initiative to encourage transparency of security practices within cloud providers. The CSA Security, Trust & Assurance Registry (STAR) is a free, publicly accessible registry that documents the security controls provided by various cloud computing offerings, thereby helping users assess the security of cloud providers they currently use or are considering contracting with. AWS is a CSA STAR registrant and has completed the Cloud Security Alliance (CSA) Consensus Assessments Initiative Questionnaire (CAIQ). This
CAIQ, published by the CSA, provides a way to reference and document what security controls exist in AWS's Infrastructure as a Service offerings. The CAIQ provides 298 questions a cloud consumer and cloud auditor may wish to ask of a cloud provider. See the CSA Consensus Assessments Initiative Questionnaire.

Cyber Essentials Plus

Cyber Essentials Plus is a UK Government-backed, industry-supported certification scheme introduced in the UK to help organizations demonstrate operational security against common cyber attacks. It demonstrates the baseline controls AWS implements to mitigate the risk from common Internet-based threats, within the context of the UK Government's "10 Steps to Cyber Security". It is backed by industry, including the Federation of Small Businesses, the Confederation of British Industry, and a number of insurance organizations that offer incentives for businesses holding this certification. Cyber Essentials sets out the necessary technical controls; the related assurance framework shows how the independent assurance process works for Cyber Essentials Plus certification, through an annual external assessment conducted by an accredited assessor. Due to the regional nature of the certification, the certification scope is limited to the EU (Ireland) Region.

DoD SRG Levels 2 and 4

The Department of Defense (DoD) Cloud Security Model (SRG) provides a formalized assessment and authorization process for cloud service providers (CSPs) to gain a DoD Provisional Authorization, which can subsequently be leveraged by DoD customers. A Provisional Authorization under the SRG provides a reusable certification that attests to our compliance with DoD standards, reducing the time necessary for a DoD mission owner to assess and authorize one of their systems for operation on AWS. AWS currently holds Provisional Authorizations at Levels 2 and 4 of the SRG. Additional information on the
security control baselines defined for Levels 2, 4, 5, and 6 can be found at http://iase.disa.mil/cloud_security/Pages/index.aspx. Visit the DoD hub page at https://aws.amazon.com/compliance/dod/

FedRAMP

AWS is a Federal Risk and Authorization Management Program (FedRAMP) compliant cloud service provider. AWS has completed the testing performed by a FedRAMP-accredited Third Party Assessment Organization (3PAO) and has been granted two Agency Authority to Operate (ATOs) by the US Department of Health and Human Services (HHS) after demonstrating compliance with FedRAMP requirements at the Moderate impact level. All US government agencies can leverage the AWS Agency ATO packages stored in the FedRAMP repository to evaluate AWS for their applications and workloads, provide authorizations to use AWS, and transition workloads into the AWS environment. The two FedRAMP Agency ATOs encompass all US Regions (the AWS GovCloud (US) Region and the AWS US East/West Regions). For a complete list of the services that are in the accreditation boundary for the Regions stated above, see the AWS Services in Scope by Compliance Program page (https://aws.amazon.com/compliance/services-in-scope/). For more information on AWS FedRAMP compliance, see the AWS FedRAMP FAQs at https://aws.amazon.com/compliance/fedramp/

FERPA

The Family Educational Rights and Privacy Act (FERPA) (20 USC § 1232g; 34 CFR Part 99) is a Federal law that protects the privacy of student education records. The law applies to all schools that receive funds under an applicable program of the US Department of Education. FERPA gives parents certain rights with respect to their children's education records. These rights transfer to the student when he or she reaches the age of 18 or attends a school beyond the high school level. Students to whom the rights have transferred are "eligible students". AWS enables covered
entities and their business associates subject to FERPA to leverage the secure AWS environment to process, maintain, and store protected education information. AWS also offers a FERPA-focused whitepaper for customers interested in learning more about how they can leverage AWS for the processing and storage of educational data. The FERPA Compliance on AWS whitepaper outlines how companies can use AWS to process systems that facilitate FERPA compliance.

FIPS 140-2

The Federal Information Processing Standard (FIPS) Publication 140-2 is a US government security standard that specifies the security requirements for cryptographic modules protecting sensitive information. To support customers with FIPS 140-2 requirements, SSL terminations in AWS GovCloud (US) operate using FIPS 140-2 validated hardware. AWS works with AWS GovCloud (US) customers to provide the information they need to help manage compliance when using the AWS GovCloud (US) environment.

FISMA and DIACAP

AWS enables US government agencies to achieve and sustain compliance with the Federal Information Security Management Act (FISMA). The AWS infrastructure has been evaluated by independent assessors for a variety of government systems as part of their system owners' approval process. Numerous Federal Civilian and Department of Defense (DoD) organizations have successfully achieved security authorizations for systems hosted on AWS in accordance with the Risk Management Framework (RMF) process defined in NIST 800-37 and the DoD Information Assurance Certification and Accreditation Process (DIACAP).

GxP

GxP is an acronym that refers to the regulations and guidelines applicable to life sciences organizations that make food and medical products, such as drugs, medical devices, and medical software applications. The overall intent of GxP requirements is to ensure that food and medical products are safe for consumers and to ensure the
integrity of data used to make product-related safety decisions.

AWS offers a GxP whitepaper, which details a comprehensive approach for using AWS for GxP systems. This whitepaper provides guidance for using AWS products in the context of GxP, and the content has been developed in conjunction with AWS pharmaceutical and medical device customers, as well as software partners, who are currently using AWS products in their validated GxP systems. For more information on GxP on AWS, contact AWS Sales and Business Development. For additional information, see our GxP Compliance FAQs at https://aws.amazon.com/compliance/gxp-part-11-annex-11/

HIPAA

AWS enables covered entities and their business associates subject to the US Health Insurance Portability and Accountability Act (HIPAA) to leverage the secure AWS environment to process, maintain, and store protected health information, and AWS will be signing business associate agreements with such customers. AWS also offers a HIPAA-focused whitepaper for customers interested in learning more about how they can leverage AWS for the processing and storage of health information. The Architecting for HIPAA Security and Compliance on Amazon Web Services whitepaper outlines how companies can use AWS to process systems that facilitate HIPAA and Health Information Technology for Economic and Clinical Health (HITECH) compliance. Customers who execute an AWS BAA may use any AWS service in an account designated as a HIPAA Account, but they may only process, store, and transmit PHI using the HIPAA Eligible Services defined in the AWS BAA. For a complete list of these services, see the HIPAA Eligible Services Reference page (https://aws.amazon.com/compliance/hipaa-eligible-services-reference/). AWS maintains a standards-based risk management program to ensure that the HIPAA Eligible Services specifically support the administrative, technical, and
physical safeguards required under HIPAA. Using these services to store, process, and transmit PHI allows our customers and AWS to address the HIPAA requirements applicable to the AWS utility-based operating model. For additional information, see our HIPAA Compliance FAQs and Architecting for HIPAA Security and Compliance on Amazon Web Services.

IRAP

The Information Security Registered Assessors Program (IRAP) enables Australian government customers to validate that appropriate controls are in place and determine the appropriate responsibility model for addressing the needs of the Australian Signals Directorate (ASD) Information Security Manual (ISM). Amazon Web Services has completed an independent assessment that has determined all applicable ISM controls are in place relating to the processing, storage, and transmission of Unclassified (DLM) data for the AWS Sydney Region. For more information, see the IRAP Compliance FAQs at https://aws.amazon.com/compliance/irap/ and AWS alignment with the Australian Signals Directorate (ASD) Cloud Computing Security Considerations.

ISO 9001

AWS has achieved ISO 9001 certification. AWS's ISO 9001 certification directly supports customers who develop, migrate, and operate their quality-controlled IT systems in the AWS Cloud. Customers can leverage AWS's compliance reports as evidence for their own ISO 9001 programs and industry-specific quality programs, such as GxP in life sciences, ISO 13485 in medical devices, AS9100 in aerospace, and ISO/TS 16949 in automotive. AWS customers who don't have quality system requirements will still benefit from the additional assurance and transparency that an ISO 9001 certification provides. The ISO 9001 certification covers the quality management system over a specified scope of AWS services and Regions of operations. For a complete list of services, see the AWS Services in Scope by Compliance Program page
(https://aws.amazon.com/compliance/services-in-scope/). ISO 9001:2008 is a global standard for managing the quality of products and services. The 9001 standard outlines a quality management system based on eight principles defined by the International Organization for Standardization (ISO) Technical Committee for Quality Management and Quality Assurance. They include:

• Customer focus
• Leadership
• Involvement of people
• Process approach
• System approach to management
• Continual improvement
• Factual approach to decision making
• Mutually beneficial supplier relationships

The AWS ISO 9001 certification can be downloaded at https://d0.awsstatic.com/certifications/iso_9001_certification.pdf. AWS provides additional information and frequently asked questions about its ISO 9001 certification at https://aws.amazon.com/compliance/iso-9001-faqs/.

ISO 27001

AWS has achieved ISO 27001 certification of our Information Security Management System (ISMS), covering AWS infrastructure, data centers, and services. For a complete list of services, see the AWS Services in Scope by Compliance Program page (https://aws.amazon.com/compliance/services-in-scope/). ISO 27001/27002 is a widely adopted global security standard that sets out requirements and best practices for a systematic approach to managing company and customer information, based on periodic risk assessments appropriate to ever-changing threat scenarios. In order to achieve the certification, a company must show it has a systematic and ongoing approach to managing information security risks that affect the confidentiality, integrity, and availability of company and customer information. This certification reinforces Amazon’s commitment to providing significant information regarding our security controls and practices. The AWS ISO 27001 certification can be downloaded at
https://d0.awsstatic.com/certifications/iso_27001_global_certification.pdf. AWS provides additional information and frequently asked questions about its ISO 27001 certification at https://aws.amazon.com/compliance/iso-27001-faqs/.

ISO 27017

ISO 27017 is the newest code of practice released by the International Organization for Standardization (ISO). It provides implementation guidance on information security controls that specifically relate to cloud services. AWS has achieved ISO 27017 certification of our Information Security Management System (ISMS), covering AWS infrastructure, data centers, and services. For a complete list of services, see the AWS Services in Scope by Compliance Program page (https://aws.amazon.com/compliance/services-in-scope/). The AWS ISO 27017 certification can be downloaded at https://d0.awsstatic.com/certifications/iso_27017_certification.pdf. AWS provides additional information and frequently asked questions about its ISO 27017 certification at https://aws.amazon.com/compliance/iso-27017-faqs/.

ISO 27018

ISO 27018 is the first international code of practice that focuses on protection of personal data in the cloud. It is based on ISO information security standard 27002 and provides implementation guidance on ISO 27002 controls applicable to public cloud Personally Identifiable Information (PII). It also provides a set of additional controls, and associated guidance, intended to address public cloud PII protection requirements not addressed by the existing ISO 27002 control set. AWS has achieved ISO 27018 certification of our Information Security Management System (ISMS), covering AWS infrastructure, data centers, and services. For a complete list of services, see the AWS Services in Scope by Compliance Program page (https://aws.amazon.com/compliance/services-in-scope/).
The AWS ISO 27018 certification can be downloaded at https://d0.awsstatic.com/certifications/iso_27018_certification.pdf. AWS provides additional information and frequently asked questions about its ISO 27018 certification at https://aws.amazon.com/compliance/iso-27018-faqs/.

ITAR

The AWS GovCloud (US) Region supports US International Traffic in Arms Regulations (ITAR) compliance. As part of managing a comprehensive ITAR compliance program, companies subject to ITAR export regulations must control unintended exports by restricting access to protected data to US persons and restricting the physical location of that data to the US. AWS GovCloud (US) provides an environment physically located in the US, where access by AWS personnel is limited to US persons, thereby allowing qualified companies to transmit, process, and store protected articles and data subject to ITAR restrictions. The AWS GovCloud (US) environment has been audited by an independent third party to validate that the proper controls are in place to support customer export compliance programs for this requirement.

MPAA

The Motion Picture Association of America (MPAA) has established a set of best practices for securely storing, processing, and delivering protected media and content (http://www.fightfilmtheft.org/facility-security-program.html). Media companies use these best practices as a way to assess the risk and security of their content and infrastructure. AWS has demonstrated alignment with the MPAA best practices, and the AWS infrastructure is compliant with all applicable MPAA infrastructure controls. While the MPAA does not offer a “certification,” media industry customers can use the AWS MPAA documentation to augment their risk assessment and evaluation of MPAA-type content on AWS. See the AWS Compliance MPAA hub page for additional details at https://aws.amazon.com/compliance/mpaa/.

MTCS Tier 3 Certification

The Multi-Tier
Cloud Security (MTCS) is an operational Singapore security management standard (SPRING SS 584:2013) based on ISO 27001/02 Information Security Management System (ISMS) standards. The certification assessment requires us to:

• Systematically evaluate our information security risks, taking into account the impact of company threats and vulnerabilities
• Design and implement a comprehensive suite of information security controls and other forms of risk management to address company and architecture security risks
• Adopt an overarching management process to ensure that the information security controls meet our information security needs on an ongoing basis

View the MTCS hub page at https://aws.amazon.com/compliance/aws-multitiered-cloud-security-standard-certification/

NIST

In June 2015, the National Institute of Standards and Technology (NIST) released guidelines 800-171, “Final Guidelines for Protecting Sensitive Government Information Held by Contractors.” This guidance is applicable to the protection of Controlled Unclassified Information (CUI) on nonfederal systems. AWS is already compliant with these guidelines, and customers can effectively comply with NIST 800-171 immediately. NIST 800-171 outlines a subset of the NIST 800-53 requirements, a guideline under which AWS has already been audited under the FedRAMP program. The FedRAMP Moderate security control baseline is more rigorous than the recommended requirements established in Chapter 3 of 800-171, and it includes a significant number of security controls above and beyond those required of FISMA Moderate systems that protect CUI data. A detailed mapping is available in NIST Special Publication 800-171, starting on page D2 (which is page 37 in the PDF).

PCI DSS Level 1

AWS is Level 1 compliant under the Payment Card Industry (PCI) Data Security Standard (DSS). Customers can run applications on our PCI-compliant
technology infrastructure for storing, processing, and transmitting credit card information in the cloud. In February 2013, the PCI Security Standards Council released the PCI DSS Cloud Computing Guidelines. These guidelines provide customers who are managing a cardholder data environment with considerations for maintaining PCI DSS controls in the cloud. AWS has incorporated the PCI DSS Cloud Computing Guidelines into the AWS PCI Compliance Package for customers. The AWS PCI Compliance Package includes the AWS PCI Attestation of Compliance (AoC), which shows that AWS has been successfully validated against standards applicable to a Level 1 service provider under PCI DSS Version 3.1, and the AWS PCI Responsibility Summary, which explains how compliance responsibilities are shared between AWS and our customers in the cloud. For a complete list of services in scope for PCI DSS Level 1, see the AWS Services in Scope by Compliance Program page (https://aws.amazon.com/compliance/services-in-scope/). For more information, see https://aws.amazon.com/compliance/pci-dss-level-1-faqs/.

SOC 1 / ISAE 3402

Amazon Web Services publishes a Service Organization Controls 1 (SOC 1) Type II report. The audit for this report is conducted in accordance with American Institute of Certified Public Accountants (AICPA) AT 801 (formerly SSAE 16) and the International Standards for Assurance Engagements No. 3402 (ISAE 3402). This dual-standard report is intended to meet a broad range of financial auditing requirements for US and international auditing bodies. The SOC 1 report audit attests that AWS’ control objectives are appropriately designed and that the individual controls defined to safeguard customer data are operating effectively. This report replaces the Statement on Auditing Standards No. 70 (SAS 70) Type II Audit report.

The AWS SOC 1 control objectives are provided here. The report itself identifies the
control activities that support each of these objectives and the independent auditor’s results of their testing procedures for each control.

Objective Area: Objective Description

Security Organization: Controls provide reasonable assurance that information security policies have been implemented and communicated throughout the organization.

Employee User Access: Controls provide reasonable assurance that procedures have been established so that Amazon employee user accounts are added, modified, and deleted in a timely manner and reviewed on a periodic basis.

Logical Security: Controls provide reasonable assurance that policies and mechanisms are in place to appropriately restrict unauthorized internal and external access to data, and that customer data is appropriately segregated from other customers.

Secure Data Handling: Controls provide reasonable assurance that data handling between the customer’s point of initiation and an AWS storage location is secured and mapped accurately.

Physical Security and Environmental Protection: Controls provide reasonable assurance that physical access to data centers is restricted to authorized personnel, and that mechanisms are in place to minimize the effect of a malfunction or physical disaster on data center facilities.

Change Management: Controls provide reasonable assurance that changes (including emergency/non-routine and configuration changes) to existing IT resources are logged, authorized, tested, approved, and documented.

Data Integrity, Availability and Redundancy: Controls provide reasonable assurance that data integrity is maintained through all phases, including transmission, storage, and processing.

Incident Handling: Controls provide reasonable assurance that system incidents are recorded, analyzed, and resolved.

The SOC 1 reports are designed to focus on controls at a service organization that are likely to be relevant to
an audit of a user entity’s financial statements. As AWS’ customer base is broad, and the use of AWS services is equally broad, the applicability of controls to customer financial statements varies by customer. Therefore, the AWS SOC 1 report is designed to cover specific key controls likely to be required during a financial audit, as well as a broad range of IT general controls, to accommodate a wide range of usage and audit scenarios. This allows customers to leverage the AWS infrastructure to store and process critical data, including data that is integral to the financial reporting process. AWS periodically reassesses the selection of these controls to consider customer feedback and usage of this important audit report. AWS’ commitment to the SOC 1 report is ongoing, and AWS will continue the process of periodic audits. For the current scope of the SOC 1 report, see the AWS Services in Scope by Compliance Program page (https://aws.amazon.com/compliance/services-in-scope/).

SOC 2

In addition to the SOC 1 report, AWS publishes a Service Organization Controls 2 (SOC 2) Type II report. Similar to the SOC 1 in its evaluation of controls, the SOC 2 report is an attestation report that expands the evaluation of controls to the criteria set forth by the American Institute of Certified Public Accountants (AICPA) Trust Services Principles. These principles define leading-practice controls relevant to security, availability, processing integrity, confidentiality, and privacy applicable to service organizations such as AWS. The AWS SOC 2 is an evaluation of the design and operating effectiveness of controls that meet the criteria for the security and availability principles set forth in the AICPA’s Trust Services Principles criteria. This report provides additional transparency into AWS security and availability based on a predefined industry standard of leading practices, and it further demonstrates AWS’ commitment to protecting customer data. The SOC 2 report scope covers the same services covered in the SOC 1 report. See the SOC 1 description above for the in-scope services.

SOC 3

AWS publishes a Service Organization Controls 3 (SOC 3) report. The SOC 3 report is a publicly available summary of the AWS SOC 2 report. The report includes the external auditor’s opinion of the operation of controls (based on the AICPA’s Security Trust Principles included in the SOC 2 report), the assertion from AWS management regarding the effectiveness of controls, and an overview of AWS infrastructure and services. The AWS SOC 3 report includes all AWS data centers worldwide that support in-scope services. This is a great resource for customers to validate that AWS has obtained external auditor assurance without going through the process of requesting a SOC 2 report. The SOC 3 report scope covers the same services covered in the SOC 1 report. See the SOC 1 description above for the in-scope services. View the AWS SOC 3 report here.

Further Reading

For additional information, see the following sources:

• AWS Risk and Compliance Overview
• AWS Answers to Key Compliance Questions
• CSA Consensus Assessments Initiative Questionnaire

Document Revisions

Date: Description
March 2017: Updated in-scope services
January 2017: Migrated to new template
January 2016: First publication

AWS_Cloud_Adoption_Framework_Security_Perspective

Archived: AWS Cloud Adoption Framework Security Perspective, June 2016

This paper has been archived. For the latest content about the AWS Cloud Adoption Framework, see the AWS Cloud Adoption Framework page: https://aws.amazon.com/professional-services/CAF

© 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided
for informational purposes only. It represents AWS’s current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Abstract
Introduction
Security Benefits of AWS
  Designed for Security
  Highly Automated
  Highly Available
  Highly Accredited
Directive Component
  Considerations
Preventive Component
  Considerations
Detective Component
  Considerations
Responsive Component
  Considerations
Taking the Journey – Defining a Strategy
  Considerations
Taking the Journey – Delivering a Program
  The Core Five
  Augmenting the Core
  Example Sprint Series
  Considerations
Taking the Journey – Develop Robust Security Operations
Conclusion
Appendix A: Tracking Progress Across the AWS CAF Security Perspective
  Key Security Enablers
  Security Epics Progress Model
CAF Taxonomy and Terms
Notes

Abstract

The Amazon Web Services (AWS) Cloud Adoption Framework1 (CAF) provides guidance for coordinating the different parts of organizations migrating to cloud computing. The CAF guidance is broken into a number of areas of focus relevant to implementing cloud-based IT systems. These focus areas are called perspectives, and each
perspective is further separated into components. There is a whitepaper for each of the seven CAF perspectives. This whitepaper covers the Security Perspective, which focuses on incorporating guidance and process for your existing security controls specific to AWS usage in your environment.

Introduction

Security at AWS is job zero. All AWS customers benefit from a data center and network architecture built to satisfy the requirements of the most security-sensitive organizations. AWS and its partners offer hundreds of tools and features to help you meet your security objectives around visibility, auditability, controllability, and agility. This means that you can have the security you need, but without the capital outlay and with much lower operational overhead than in an on-premises environment.

Figure 1: AWS CAF Security Perspective

The goal of the Security Perspective is to help you structure your selection and implementation of controls that are right for your organization. As Figure 1 illustrates, the components of the Security Perspective organize the principles that will help drive the transformation of your organization’s security culture. For each component, this whitepaper discusses specific actions you can take and the means of measuring progress:

• Directive controls establish the governance, risk, and compliance models the environment will operate within.
• Preventive controls protect your workloads and mitigate threats and vulnerabilities.
• Detective controls provide full visibility and transparency over the operation of your deployments in AWS.
• Responsive controls drive remediation of potential deviations from your security baselines.

Security in the cloud is familiar. The increase in agility, and the ability to perform actions faster, at a larger scale, and at a lower cost, does not invalidate well-established principles of information security. After covering the four Security Perspective components,
this whitepaper shows you the steps you can take on your journey to the cloud to ensure that your environment maintains a strong security footing:

• Define a strategy for security in the cloud. When you start your journey, look at your organizational business objectives, your approach to risk management, and the level of opportunity presented by the cloud.
• Deliver a security program for development and implementation of security, privacy, compliance, and risk management capabilities. The scope can initially appear vast, so it is important to create a structure that allows your organization to holistically address security in the cloud. The implementation should allow for iterative development so that capabilities mature as programs develop. This allows the security component to be a catalyst for the rest of the organization’s cloud adoption efforts.
• Develop robust security operations capabilities that continuously mature and improve. The security journey continues over time. We recommend that you intertwine operational rigor with the building of new capabilities so that constant iteration can bring continuous improvement.

Security Benefits of AWS

Cloud security at AWS is the highest priority. As an AWS customer, you will benefit from a data center and network architecture built to meet the requirements of the most security-sensitive organizations. An advantage of the AWS Cloud is that it allows customers to scale and innovate while maintaining a secure environment. Customers pay only for the services they use, meaning that you can have the security you need, but without the upfront expenses and at a lower cost than in an on-premises environment. This section discusses some of the security benefits of the AWS platform.

Designed for Security

The AWS Cloud infrastructure is operated in AWS data centers and is designed to satisfy the requirements of our most security-sensitive customers. The AWS infrastructure
has been designed to provide high availability while putting strong safeguards in place for customer privacy. All data is stored in highly secure AWS data centers. Network firewalls built into Amazon VPC, and web application firewall capabilities in AWS WAF, let you create private networks and control access to your instances and applications. When you deploy systems in the AWS Cloud, AWS helps by sharing the security responsibilities with you. AWS engineers the underlying infrastructure using secure design principles, and customers can implement their own security architecture for workloads deployed in AWS.

Highly Automated

At AWS, we purpose-build security tools and tailor them for our unique environment, size, and global requirements. Building security tools from the ground up allows AWS to automate many of the routine tasks security experts normally spend time on. This means AWS security experts can spend more time focusing on measures to increase the security of your AWS Cloud environment. Customers also automate security engineering and operations functions using a comprehensive set of APIs and tools. Identity management, network security, and data protection and monitoring capabilities can be fully automated and delivered using popular software development methods you already have in place. Customers also take an automated approach to responding to security issues: when you automate using AWS services, rather than having people monitoring your security position and reacting to an event, your system can monitor, review, and initiate a response.

Highly Available

AWS builds its data centers in multiple geographic Regions. Within each Region, multiple Availability Zones exist to provide resiliency. AWS designs data centers with excess bandwidth so that if a major disruption occurs, there is sufficient capacity to load-balance traffic and route it to the remaining sites, minimizing the impact on our customers.
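The routing-around-failure behavior described above can be sketched as plain routing logic. This is a hypothetical, self-contained illustration of the idea, not AWS tooling: the site names, health flags, and `route_requests` helper are made up for this example.

```python
# Hypothetical sketch of the failover idea described above: spread requests
# across sites (e.g., Availability Zones) and route around any site that is
# reported unhealthy. Site names and health data are illustrative only.

def route_requests(requests, site_health):
    """Assign each request to a healthy site, round-robin."""
    healthy = [site for site, ok in sorted(site_health.items()) if ok]
    if not healthy:
        raise RuntimeError("no healthy sites available")
    return {req: healthy[i % len(healthy)] for i, req in enumerate(requests)}

# One site is down; traffic is rebalanced across the remaining two.
health = {"site-a": True, "site-b": False, "site-c": True}
assignments = route_requests(["req1", "req2", "req3", "req4"], health)
```

In a real deployment this decision is delegated to a managed load balancer with health checks rather than implemented in application code; the sketch only shows why excess capacity at the remaining sites matters when one site drops out.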
Customers also leverage this Multi-Region, Multi-AZ strategy to build highly resilient applications at a disruptively low cost, to easily replicate and back up data, and to deploy global security controls consistently across their business.

Highly Accredited

AWS environments are continuously audited, with certifications from accreditation bodies across the globe. This means that segments of your compliance work have already been completed. For more information about the security regulations and standards with which AWS complies, see the AWS Cloud Compliance2 web page. To help you meet specific government, industry, and company security standards and regulations, AWS provides certification reports that describe how the AWS Cloud infrastructure meets the requirements of an extensive list of global security standards. You can obtain available compliance reports by contacting your AWS account representative. Customers inherit many controls operated by AWS into their own compliance and certification programs, lowering the cost of maintaining and running security assurance efforts, in addition to actually maintaining the controls themselves. With a strong foundation in place, you are free to optimize the security of your workloads for agility, resilience, and scale.

The rest of this whitepaper introduces each of the components of the Security Perspective. You can use these components to explore the security goals you need to be successful on your journey to the cloud.

Directive Component

The Directive component of the AWS Security Perspective provides guidance on planning your security approach as you migrate to AWS. The key to effective planning is to define the guidance you will provide to the people implementing and operating your security environment. The information needs to provide enough direction to determine the controls needed and how they should be operated. Initial areas to consider include:

• Account governance —
Direct the organization to create a process and procedures for managing AWS accounts. Areas to define include how account inventories will be collected and maintained, which agreements and amendments are in place, and what criteria to use for deciding when to create an AWS account. Develop a process to create accounts in a consistent manner, ensuring that all initial settings are appropriate and that clear ownership is established.

• Account ownership and contact information — Establish an appropriate governance model for the AWS accounts used across your organization, and plan how contact information is maintained for each account. Consider creating AWS accounts tied to email distribution lists rather than to an individual’s email address. This allows a group of people to monitor and respond to information from AWS about your account activity. Additionally, this provides resilience when internal personnel change, and it provides a means of assigning security accountability. List your security team as a security point of contact to speed time-sensitive communications.

• Control framework — Establish or apply an industry-standard control framework, and determine if you need modifications or additions in order to incorporate AWS services at expected security levels. Perform a compliance mapping exercise to determine how compliance requirements and security controls will reflect AWS service usage.

• Control ownership — Review the AWS Shared Responsibility Model3 information on the AWS website to determine if control ownership modifications should be made. Review and update your responsibility assignment matrix (RACI chart) to include ownership of controls operating in the AWS environment.

• Data classification — Review current data classifications and determine how those classifications will be managed in the AWS environment and what controls will be appropriate.

• Change and asset management — Determine how change management and
asset management are to be performed in AWS. Create a means to determine what assets exist, what the systems are used for, and how the systems will be managed securely. This can be integrated with an existing configuration management database (CMDB). Consider creating a practice for naming and tagging that allows identification and management to occur to the security level required. You can use this approach to define and track the metadata that enables identification and control.

• Data locality — Review the criteria for where your data can reside to determine what controls will be needed to manage the configuration and usage of AWS services across Regions. AWS customers choose the AWS Region(s) where their content will be hosted. This allows customers with specific geographic requirements to establish environments in locations they choose. Customers can replicate and back up content in more than one Region, but AWS does not move customer content outside of the customer’s chosen Region(s).

• Least privilege access — Establish an organizational security culture built on the principle of least privilege and strong authentication. Implement protocols to protect access to sensitive credential and key material associated with every AWS account. Set expectations on how authority will be delegated down through software engineers, operations staff, and other job functions involved in cloud adoption.

• Security operations playbook and runbooks — Define your security patterns to create durable guardrails the organization can reference over time. Implement the plays through automation as runbooks; document human-in-the-loop interventions as appropriate.

Considerations

• Do create a tailored AWS shared responsibility model for your ecosystem.
• Do use strong authentication as part of a protection scheme for all actors in your account.
• Do promote a culture of security ownership for application teams.
• Do extend your
data classification model to include services in AWS.
• Do integrate developer, operations, and security team objectives and job functions.
• Do consider creating a strategy for naming and tracking accounts used to manage services in AWS.
• Do centralize phone and email distribution lists so that teams can be monitored.

Preventive Component

The Preventive component of the AWS Security Perspective provides guidance for implementing security infrastructure with AWS and within your organization. The key to implementing the right set of controls is enabling your security teams to gain the confidence and capability they need to build the automation and deployment skills necessary to protect the enterprise in the agile, scalable environment that is AWS. Use the Directive component to determine the controls and guidance that you will need, and then use the Preventive component to determine how you will operate those controls effectively. AWS regularly provides guidance on best practices for AWS service utilization and workload deployment patterns, which can be used as control implementation references; see the AWS Security Center blog and the most recent AWS Summit and re:Invent conference Security Track videos. Consider the following areas to determine what changes (if any) you need to make to your current security architectures and practices. This will help you with a smooth and planned AWS adoption strategy.

• Identity and access — Integrate the use of AWS into the workforce lifecycle of the organization, as well as into the sources of authentication and authorization. Create fine-grained policies and roles associated with appropriate users and groups. Create guardrails that permit important changes through automation only, and that prevent unwanted changes or roll them back automatically. These steps will reduce human access to production systems and data.

• Infrastructure protection — Implement a security baseline,
including trust boundaries, system security configuration and maintenance (e.g., hardening and patching), and other appropriate policy enforcement points (e.g., security groups, AWS WAF, Amazon API Gateway) to meet the needs that you identified using the Directive component.

• Data protection — Utilize appropriate safeguards to protect data in transit and at rest. Safeguards include fine-grained access controls to objects, creating and controlling the encryption keys used to encrypt your data, selecting appropriate encryption or tokenization methods, integrity validation, and appropriate retention of data.

Considerations

• Do treat security as code, allowing you to deploy and validate security infrastructure in a manner that gives you the scale and agility to protect the organization.
• Do create guardrails and sensible defaults, and offer templates and best practices as code.
• Do build security services that the organization can leverage for highly repetitive or particularly sensitive security functions.
• Do define actors and then storyboard their experience interacting with AWS services.
• Do use the AWS Trusted Advisor tool to continually assess your AWS security posture, and consider an AWS Well-Architected review.
• Do establish a minimal viable security baseline and continually iterate to raise the bar for the workloads you’re protecting.

Detective Component

The Detective component of the AWS CAF Security Perspective provides guidance for gaining visibility into your organization’s security posture. A wealth of data and information can be gathered by using services like AWS CloudTrail, service-specific logs, and API/CLI return values. Ingesting these information sources into a scalable platform for managing and monitoring logs, event management, testing, and inventory/audit will give you the transparency and operational agility you need
to feel confident in the security of your operations.
- Logging and monitoring — AWS provides native logging, as well as services you can leverage for near-real-time visibility into occurrences in the AWS environment. You can integrate these tools with your existing logging and monitoring solutions. Integrate the output of logging and monitoring sources deeply into the workflow of the IT organization for end-to-end resolution of security-related activity.
- Security testing — Test the AWS environment to ensure that defined security standards are met. By testing whether your systems respond as expected when certain events occur, you will be better prepared for actual events. Examples of security testing include vulnerability scanning, penetration testing, and error injection to prove that standards are being met. The goal is to determine whether your controls respond as expected.
- Asset inventory — Knowing which workloads you have deployed and operational allows you to monitor and ensure that the environment is operating at the security governance levels expected and demanded by your security standards.
- Change detection — Relying on a secure baseline of preventive controls also requires knowing when those controls change. Implement measures to determine drift between the secure configuration and the current state.

Considerations
- Do determine what logging information for your AWS environment you want to capture, monitor, and analyze.
- Do determine how your existing security operations center (SOC) business capability will integrate AWS security monitoring and management into existing practices.
- Do continually conduct vulnerability scans and penetration tests, in accordance with AWS procedures for doing so.

Responsive Component
The Responsive component of the AWS CAF Security Perspective
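The change-detection guidance above reduces to comparing a secure baseline against observed state and alerting on any difference. This can be sketched in plain Python with no AWS dependency; the configuration field names below are illustrative assumptions, not a real AWS schema:

```python
def detect_drift(baseline: dict, current: dict) -> dict:
    """Compare a secure-baseline configuration against observed state.

    Reports changed, missing, and unexpected settings so a monitoring
    pipeline can alert on any deviation from the baseline.
    """
    changed = {k: (baseline[k], current[k])
               for k in baseline.keys() & current.keys()
               if baseline[k] != current[k]}
    missing = sorted(baseline.keys() - current.keys())     # removed controls
    unexpected = sorted(current.keys() - baseline.keys())  # unreviewed additions
    return {"changed": changed, "missing": missing, "unexpected": unexpected,
            "in_sync": not (changed or missing or unexpected)}

# Illustrative security-group-style settings (hypothetical field names).
baseline = {"ssh_cidr": "10.0.0.0/8", "encryption": "aes256", "logging": True}
current = {"ssh_cidr": "0.0.0.0/0", "logging": True, "public_read": True}

report = detect_drift(baseline, current)
```

In practice the `current` dict would be populated from a service such as AWS Config, and a nonempty report would feed the alerting workflow described above.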
provides guidance for the responsive portion of your organization's security posture. By incorporating your AWS environment into your existing security posture, and then preparing for and simulating actions that require response, you will be better prepared to respond to incidents as they occur. With automated incident response and recovery, and the ability to mitigate portions of disaster recovery, it is possible to shift the primary focus of the security team from response to performing forensics and root cause analysis. Some things to consider as part of adapting your security posture include the following:
- Incident response — During an incident, containing the event and returning to a known good state are important elements of a response plan. For instance, automating aspects of those functions using AWS Config rules and AWS Lambda responder scripts gives you the ability to scale your response at Internet speeds. Review current incident response processes and determine if and how automated response and recovery will become operational and managed for AWS assets. The security operations center's functions should be tightly integrated with the AWS APIs to be as responsive as possible; this provides the security monitoring and management function for AWS Cloud adoption.
- Security incident response simulations — By simulating events, you can validate that the controls and processes you have put in place react as expected. Using this approach, you can determine whether you are able to recover from and respond to incidents effectively when they occur.
- Forensics — In most cases, your existing forensics tools will work in the AWS environment. Forensic teams benefit from the automated deployment of tools across Regions and the ability to collect large volumes of data quickly, with low friction, using the same robust, scalable services their business-critical applications are built on, such as Amazon Simple Storage Service (S3), Amazon Elastic Block Store (EBS), Amazon Kinesis, Amazon DynamoDB, Amazon
Relational Database Service (RDS), Amazon Redshift, and Amazon Elastic Compute Cloud (EC2).

Considerations
- Do update your incident response processes to recognize the AWS environment.
- Do leverage services in AWS to forensically ready your deployments through automation and feature selection.
- Do automate response for robustness and scale.
- Do use services in AWS for data collection and analysis in support of an investigation.
- Do validate your incident response capability through security incident response simulations.

Taking the Journey – Defining a Strategy
Review your current security strategy to determine whether portions of it would benefit from change as part of a cloud adoption initiative. Map your AWS cloud adoption strategy against the level of risk your business is willing to accept, your approach to meeting regulatory and compliance objectives, and your definitions of what needs to be protected and how it will be protected. Table 1 provides an example of a security strategy that articulates a set of principles (shown here as principle: example actions), which are then mapped to specific initiatives and work streams.

- Infrastructure as code: Skill up the security team in code and automation; move to DevSecOps.
- Design guardrails, not gates: Architect to drive toward good behavior.
- Use the cloud to protect the cloud: Build, operate, and manage security tools in the cloud.
- Stay current; run secure: Consume new security features; patch and replace frequently.
- Reduce reliance on persistent access: Establish a role catalog; automate KMI via a secrets service.
- Total visibility: Aggregate AWS logs and metadata with OS and application logs.
- Deep insights: Implement a security data warehouse with BI and analytics.
- Scalable incident response (IR): Update IR and forensics standard operating procedures (SOPs) for the shared responsibility framework.
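The "automate response" consideration and the scalable-IR principle above are commonly implemented as an AWS Lambda responder triggered by an AWS Config rule evaluation. The sketch below isolates just the decision logic as a plain handler over a simplified event dict; the field names are illustrative, not the real AWS Config event schema, and a real responder would call AWS APIs to act:

```python
def responder_handler(event: dict, context=None) -> dict:
    """Choose a containment action for a resource flagged by a rule.

    Returns the chosen action instead of calling AWS APIs, so the
    containment logic is testable without an AWS account.
    """
    detail = event["detail"]  # simplified stand-in for a Config event payload
    if detail["compliance"] == "COMPLIANT":
        return {"action": "none", "resource": detail["resourceId"]}
    # Contain first, then restore a known good state, per the response plan.
    plan = ["isolate", "restore_known_good", "notify_soc"]
    return {"action": plan[0], "follow_up": plan[1:],
            "resource": detail["resourceId"]}

sample = {"detail": {"resourceId": "sg-123", "compliance": "NON_COMPLIANT"}}
result = responder_handler(sample)
```

Wiring such a handler to an AWS Config rule or Amazon CloudWatch Events lets the same logic run automatically at the scale described above.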
- Self-healing: Automate correction and restoration to a known good state.

Table 1: Example Security Strategy

As your strategy evolves, you will want to begin iterating on your third-party assurance frameworks and organizational security requirements, and incorporating them into a risk management framework that will guide your journey to AWS. It is often effective to evolve your compliance mapping as you gain a better understanding of the needs of your workloads in the cloud and of the security capabilities provided by AWS.

Another key element of your strategy is mapping out the shared responsibility model specific to your ecosystem. In addition to the macro relationship you share with AWS, you'll want to explore internal organizational shared responsibilities, as well as those you impart upon your partners. Companies can break their shared responsibility model into three major areas: a control framework; a responsible, accountable, consulted, informed (RACI) model; and a risk register. The control framework describes how the security aspects of the business are expected to work and what controls will be put in place to manage risk. You can use the RACI to identify and assign a person with responsibility for each control in the framework. Finally, use a risk register to capture controls without proper ownership. Prioritize the residual risks that have been identified, aligning their treatment with new work streams and initiatives put in place to resolve them. As you map these shared responsibilities, you can expect to find new opportunities to automate operations and improve workflow between critical actors in your security, compliance, and risk management community. Figure 2 shows an example extended shared responsibility model.

Figure 2: Example Shared Responsibility Model

Considerations
- Do create a tailored strategy that addresses
your organizational approach to implementing security in the cloud.
- Do promote automation as an underlying theme throughout your strategy.
- Do clearly articulate your approach to cloud first.
- Do promote agility and flexibility by defining guardrails.
- Do treat strategy as a short exercise that defines your organization's approach to information security in the cloud.
- Do iterate quickly while laying down what the strategy is. Your aim is to have a set of guiding principles that will drive the core of the effort forward; strategy is not the end in itself. Move quickly, and be willing to adapt and evolve.
- Do define strategic principles that impart the culture you want in security and that inform the design decisions you'll make, rather than a strategy that implies specific solutions.

Taking the Journey – Delivering a Program
With a strategy in place, it is now time to put it into practice and initiate the implementation that will transform your security organization and secure the cloud journey. While you have a wide choice of options and features, your implementation should not be a protracted effort. The process of designing and implementing how different capabilities will work together represents an opportunity to quickly gain familiarity and learn how to iterate your designs to best meet your requirements. Learn from actual implementation early, then adapt and evolve using small changes as you learn.

To help you with your implementation, you can use the CAF Security Epics (see Figure 3). The Security Epics consist of groups of user stories (use cases and abuse cases) that you can work on during sprints. Each of these epics has multiple iterations, addressing increasingly complex requirements and layering in robustness. Although we advise the use of agile, the epics can also be treated as general work streams or topics that help in prioritizing and structuring delivery using any other
framework.

Figure 3: AWS CAF Security Epics

A proposed structure consists of the following 10 security epics (Figure 4) to guide your implementation.

Figure 4: AWS Ten Security Epics

The Core Five
The following five epics are the core control and capability categories that you should consider early on, because they are fundamental to getting your journey started.
- IAM — AWS Identity and Access Management (IAM) forms the backbone of your AWS deployment. In the cloud, you must establish an account and be granted privileges before you can provision or orchestrate resources. Typical automation stories may include entitlement mapping/grants/audit, secret material management, enforcing separation of duties and least-privilege access, just-in-time privilege management, and reducing reliance on long-term credentials.
- Logging and monitoring — AWS services provide a wealth of logging data to help you monitor your interactions with the platform and the performance of AWS services based upon your configuration choices, as well as the ability to ingest OS and application logs to create a common frame of reference. Typical automation stories may include log aggregation; thresholds, alarming, and alerting; enrichment; a search platform; visualization; stakeholder access; and workflow and ticketing to initiate a closed-loop organizational response.
- Infrastructure security — When you treat infrastructure as code, security infrastructure becomes a first-tier workload that must also be deployed as code. This approach affords you the opportunity to programmatically configure AWS services and deploy security infrastructure from AWS Marketplace partners or solutions of your own design. Typical automation stories may include creating custom templates to configure AWS services to meet your requirements, implementing security architecture patterns and security
operations plays as code, crafting custom security solutions from AWS services, using patch management strategies like blue/green deployments, reducing the exposed attack surface, and validating the efficacy of deployments.
- Data protection — Safeguarding important data is a critical piece of building and operating information systems, and AWS provides services and features that give you robust options to protect your data throughout its lifecycle. Typical automation stories may include making workload placement decisions; implementing a tagging schema; constructing mechanisms to protect data in motion, such as VPN and TLS/SSL connections (including AWS Certificate Manager); constructing mechanisms to protect data at rest through encryption at appropriate tiers in your infrastructure, using AWS Key Management Service (AWS KMS) implementation/integration, deploying AWS CloudHSM, and creating tokenization schemes; and implementing and operating AWS Marketplace Partner solutions.
- Incident response — Automating aspects of your incident management process improves reliability, increases the speed of your response, and often creates an environment that is easier to assess in after-action reviews. Typical automation stories may include using AWS Lambda function "responders" that react to specific changes in the environment, orchestrating Auto Scaling events, isolating suspect system components, deploying just-in-time investigative tools, and creating workflow and ticketing to terminate and learn from a closed-loop organizational response.

Augmenting the Core
These five epics represent themes that will drive continued operational excellence through availability, automation, and audit. You'll want to judiciously integrate them into each sprint. When additional focus is required, you may consider treating them as their own epics.
- Resilience — High availability, continuity of operations, robustness and resilience, and disaster recovery are often reasons for cloud deployments with AWS. Typical automation
stories may include using multi-AZ and multi-Region deployments, changing the available attack surface, scaling and shifting the allocation of resources to absorb attacks, safeguarding exposed resources, and deliberately inducing resource failure to validate continuity of system operations.
- Compliance validation — Incorporating compliance end to end into your security program prevents compliance from being reduced to a checkbox exercise or an overlay that occurs post-deployment. This epic provides the platform that consolidates and rationalizes the compliance artifacts generated through the other epics. Typical automation stories may include creating security unit tests mapped to compliance requirements, designing services and workloads to support compliance evidence collection, creating compliance notification and visualization pipelines from evidentiary features, monitoring continuously, and creating compliance-tooling-oriented DevSecOps teams.
- Secure CI/CD (DevSecOps) — Having confidence in your software supply chain, through the use of trusted and validated continuous integration and continuous deployment tool chains, is a targeted way to mature security operations practices as you migrate to the cloud. Typical automation stories may include hardening and patching the tool chain, least-privilege access to the tool chain, logging and monitoring of the production process, security integration/deployment visualization, and code integrity checking.
- Configuration and vulnerability analysis — Configuration and vulnerability analysis gain significant benefit from the scale, agility, and automation afforded by AWS. Typical automation stories may include enabling AWS Config and creating custom AWS Config rules, using Amazon CloudWatch Events and AWS Lambda to react to change detection, implementing Amazon Inspector, selecting and deploying continuous monitoring solutions from the AWS Marketplace, deploying triggered scans,
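A custom AWS Config rule ultimately boils down to a function that maps a resource's configuration to COMPLIANT or NON_COMPLIANT. The sketch below shows that evaluation shape in plain Python over a simplified inventory; the `encrypted` field and resource IDs are illustrative assumptions, not the AWS Config item schema:

```python
def evaluate_encryption_rule(configuration: dict) -> str:
    """Illustrative rule: storage resources must have encryption enabled.

    A missing or false setting is treated as noncompliant.
    """
    return "COMPLIANT" if configuration.get("encrypted") is True else "NON_COMPLIANT"

def evaluate_account(resources: dict) -> dict:
    """Run the rule across an inventory and summarize the findings."""
    results = {rid: evaluate_encryption_rule(cfg) for rid, cfg in resources.items()}
    return {
        "evaluations": results,
        "noncompliant": sorted(r for r, v in results.items() if v == "NON_COMPLIANT"),
    }

# Hypothetical volume inventory.
inventory = {
    "vol-1": {"encrypted": True},
    "vol-2": {"encrypted": False},
    "vol-3": {},  # setting absent entirely
}
summary = evaluate_account(inventory)
```

In a deployed rule, the same function body would run inside a Lambda evaluator invoked by AWS Config, and the `noncompliant` list would drive notifications or triggered scans.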
and embedding assessment tools into the CI/CD tool chains.
- Security big data and predictive analytics — Security operations benefit from big data services and solutions, just like any other aspect of the business. Leveraging big data gives you deeper insights in a more timely fashion, enhancing your agility and your ability to iterate on your security posture at scale. Typical automation stories may include creating security data lakes, developing analytics pipelines, creating visualizations to drive security decision making, and establishing feedback mechanisms for autonomic response.

After this structure is defined, an implementation plan can be crafted. Capabilities change over time, and opportunities for improvement will be continually identified. As a reminder, the themes or capability categories above can be treated as epics in an agile methodology, which contain a range of user stories, including both use cases and abuse cases. Multiple sprints will lead to increased maturity while retaining the flexibility to adapt to business pace and demand.

Example Sprint Series
Consider organizing a sample set of six two-week sprints (a group of epics driven over a twelve-week calendar quarter), including a short prep period, in the following way. Your approach will depend on resource availability, priority, and the level of maturity desired in each capability as you move toward your minimally viable production capability (MVP).
- Sprint 0 — Security cartography: compliance mapping, policy mapping, initial threat model review, establish a risk registry; build a backlog of use and abuse cases; plan the security epics
- Sprint 1 — IAM; logging and monitoring
- Sprint 2 — IAM; logging and monitoring; infrastructure protection
- Sprint 3 — IAM; logging and monitoring; infrastructure protection
- Sprint 4 — IAM; logging and monitoring;
infrastructure protection; data protection
- Sprint 5 — Data protection; automating security operations; incident response planning/tooling; resilience
- Sprint 6 — Automating security operations; incident response; resilience

A key element of compliance validation is incorporating the validation into each sprint through security and compliance unit test cases, and then undergoing the promotion-to-production process. When explicit compliance validation capability is required, sprints can be established to focus specifically on those user stories. Over time, iteration can be leveraged to achieve continuous validation and the implementation of auto-correction of deviations where appropriate.

The overall approach aims to clearly define what an MVP or baseline is, which then maps to the first sprint in each area. In the initial stages the end goal can be less defined, but a clear roadmap of initial sprints is created. Timing, experience, and iteration will let you refine and adjust the end state to be just right for your organization. In reality, the final state may continuously shift, but ultimately the process leads to continuous improvement at a faster pace. This approach can be more effective, and more cost-efficient, than a big-bang approach based on long timelines and high capital outlays.

Diving a little deeper, the first sprint for IAM can consist of defining the account structure and implementing the core set of best practices. A second sprint can implement federation. A third sprint can expand account management to cater for multiple accounts, and so on. IAM user stories that may span one or more of these initial sprints could include the following:

"As an access administrator, I want to create an initial set of users for managing privileged access and federation identity provider trust relationships."

"As an access administrator, I want to map users in my existing corporate
directory to functional roles or sets of access entitlements on the AWS platform."

"As an access administrator, I want to enforce multi-factor authentication on all interaction with the AWS console by interactive users."

In this example, the following logging and monitoring user stories may span one or more initial sprints:

"As a security operations analyst, I want to receive platform-level logging for all AWS Regions and AWS accounts."

"As a security operations analyst, I want all platform-level logs delivered to one shared location from all AWS Regions and accounts."

"As a security operations analyst, I want to receive alerts for any operation that attaches IAM policies to users, groups, or roles."

You can build capability in parallel or serial fashion, and maintain flexibility by including security capability user stories in the overall product backlog. You can also split the user stories out into a security-focused DevOps team. These are decisions you can periodically revisit, allowing you to tailor your delivery to the needs of the organization over time.

Considerations
- Do review your existing control framework to determine how AWS services will be operated to meet your required security standards.
- Do define actors and then storyboard their experience interacting with AWS services.
- Do define what the first sprint is and what the initial high-level, longer-term goal will be.
- Do establish a minimally viable security baseline and continually iterate to raise the bar for the workloads and data you're protecting.

Taking the Journey – Develop Robust Security Operations
In an environment where infrastructure is code, security must also be treated as code. The Security Operations component provides a means to communicate and operationalize the fundamental tenets of security as code:
- Use the cloud to protect the cloud.
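The MFA user story above is typically met with an IAM policy that denies actions when MFA is absent, and building that policy document as code keeps it reviewable and testable. The sketch below assembles the documented `aws:MultiFactorAuthPresent` condition key into a policy dict; the exempt-action list is an illustrative parameter, and this is a sketch rather than an official AWS template:

```python
import json

def mfa_guard_policy(allowed_without_mfa=("iam:ChangePassword",)) -> dict:
    """Build a deny-unless-MFA IAM policy document as code.

    Denies every action except those in allowed_without_mfa whenever
    the request was not authenticated with MFA.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyAllExceptListedIfNoMFA",
            "Effect": "Deny",
            "NotAction": list(allowed_without_mfa),
            "Resource": "*",
            # BoolIfExists also catches requests where the key is absent.
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        }],
    }

policy_json = json.dumps(mfa_guard_policy(), indent=2)
```

Generating the document in code also makes it a natural security unit test target, in line with the compliance-validation epic.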
- Security infrastructure should be cloud-aware.
- Expose security features as services, using the API.
- Automate everything, so that your security and compliance can scale.

To make this governance model practical, lines of business often organize as DevOps teams to build and deploy infrastructure and business software. You can extend the core tenets of the governance model by integrating security into your DevOps culture or practice, which is sometimes called DevSecOps. Build a team around the following principles:
- The security team embraces DevOps culture and behaviors.
- Developers contribute openly to the code used to automate security operations.
- The security operations team is empowered to participate in the testing and automation of application code.
- The team takes pride in how fast and frequently they deploy. Deploying more frequently, with smaller changes, reduces operational risk and shows rapid progress against the security strategy.

Integrated development, security, and operations teams have three shared key missions:
- Harden the continuous integration/continuous deployment tool chain.
- Enable and promote the development of resilient software as it traverses the tool chain.
- Deploy all security infrastructure and software through the tool chain.

Determining the changes (if any) to current security practices will help you plan a smooth AWS adoption strategy.

Conclusion
As you embark on your AWS adoption journey, you will want to update your security posture to include the AWS portion of your environment. This Security Perspective whitepaper prescriptively guides you on an approach for taking advantage of the benefits that operating on AWS has for your security posture. Much more security information is available on the AWS website, where security features are described in detail and more prescriptive guidance is provided for common implementations. There is also a comprehensive list of
security-focused content4 that should be reviewed by various members of your security team as you prepare for AWS adoption initiatives.

Appendix A: Tracking Progress Across the AWS CAF Security Perspective
You can use the key security enablers and the security epics progress model discussed in this appendix to measure the progress and maturity of your implementation of the AWS CAF Security Perspective. The enablers and the progress model can be used for project planning purposes, to evaluate the robustness of implementations, or simply as a means to drive conversation about the road ahead.

Key Security Enablers
Key security enablers are milestones that help you stay on track. We use a scoring model that consists of three values: Unaddressed, Engaged, and Completed.
- Cloud Security Strategy [Unaddressed, Engaged, Completed]
- Stakeholder Communication Plan [Unaddressed, Engaged, Completed]
- Security Cartography [Unaddressed, Engaged, Completed]
- Document Shared Responsibility Model [Unaddressed, Engaged, Completed]
- Security Operations Playbook & Runbooks [Unaddressed, Engaged, Completed]
- Security Epics Plan [Unaddressed, Engaged, Completed]
- Security Incident Response Simulation [Unaddressed, Engaged, Completed]

Security Epics Progress Model
The security epics progress model helps you evaluate your progress in implementing the 10 Security Epics described in this paper. We use a scoring model of 0 (zero) through 3 to measure robustness: 0 = Not addressed; 1 = Addressed in architecture and plans; 2 = Minimal viable implementation; 3 = Enterprise-ready production implementation. We provide examples for the Identity and Access Management and the Logging and Monitoring epics so you can see how this progression works.

Core 5 Security
Epic examples by score:
- Identity and Access Management — 0: No relationship between on-premises and AWS identities. 1: An approach is defined for workforce lifecycle identity management; the IAM architecture is documented; job functions are mapped to IAM policy needs. 2: IAM is implemented as defined in the architecture; IAM policies are implemented that map to some job functions; the IAM implementation is validated. 3: Automation of IAM lifecycle workflows.
- Logging and Monitoring — 0: No utilization of AWS-provided logging and monitoring solutions. 1: An approach is defined for log aggregation, monitoring, and integration into security event management processes. 2: Platform-level and service-level logging is enabled and centralized. 3: Events with security implications are deeply integrated into security workflow and incident management processes and systems.
- Infrastructure Security
- Data Protection
- Incident Management

Augmenting the Core 5
The same 0 through 3 scoring applies to the remaining epics:
- Resilience
- DevSecOps
- Compliance Validation
- Configuration & Vulnerability Management
- Security Big Data

CAF Taxonomy and Terms
The Cloud Adoption Framework (CAF) is the framework AWS created to capture guidance and best practices from previous customer engagements. An AWS CAF perspective represents an area of focus relevant to implementing cloud-based IT systems in organizations. For example, the Security Perspective provides guidance and process for evaluating and enhancing your existing security controls as you move to the AWS environment. Each CAF Perspective is made up of components and activities. A component is a subarea of a perspective that represents a specific aspect that needs attention. This whitepaper explores
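The 0 through 3 progress model above is easy to operationalize as a small scoring helper that flags which epics to focus on next. The epic names and scores below are an illustrative snapshot, not prescribed values:

```python
# Scoring scale from the progress model.
LEVELS = {0: "Not addressed",
          1: "Addressed in architecture and plans",
          2: "Minimal viable implementation",
          3: "Enterprise ready production implementation"}

def summarize_progress(scores: dict) -> dict:
    """Summarize epic scores: the lowest-scoring epics are the next focus."""
    if any(s not in LEVELS for s in scores.values()):
        raise ValueError("scores must be integers 0-3")
    floor = min(scores.values())
    return {
        "floor": floor,
        "floor_label": LEVELS[floor],
        "focus_next": sorted(e for e, s in scores.items() if s == floor),
        "complete": sorted(e for e, s in scores.items() if s == 3),
    }

# Hypothetical snapshot of the Core 5 epics.
snapshot = {"IAM": 2, "Logging and Monitoring": 3, "Infrastructure Security": 1,
            "Data Protection": 1, "Incident Management": 0}
progress = summarize_progress(snapshot)
```

A tracker like this can feed sprint planning, since the lowest-scoring epics map naturally to the next sprint's user stories.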
the components of the Security Perspective. An activity provides more prescriptive guidance for creating actionable plans that the organization can use to move to the cloud and to operate cloud-based solutions on an ongoing basis. For example, Directive is one component of the Security Perspective, and tailoring an AWS shared responsibility model for your ecosystem may be an activity within that component. When combined, the Cloud Adoption Framework (CAF) and the Cloud Adoption Methodology (CAM) can be used as guidance during your journey to the AWS Cloud.

Notes
1. https://d0.awsstatic.com/whitepapers/aws_cloud_adoption_framework.pdf
2. https://aws.amazon.com/compliance/
3. https://aws.amazon.com/compliance/shared-responsibility-model/
4. https://aws.amazon.com/security/security-resources/

AWS Cloud Transformation Maturity Model
September 2017

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
Project Stage
  Challenges and Barriers
  Transformation Activities
  Outcomes and Maturity
Foundation Stage
  Challenges and Barriers
  Transformation Activities
  Outcomes and Maturity
Migration Stage
  Challenges and Barriers
  Transformation Activities
  Outcomes and Maturity
Optimization Stage
  Challenges and Barriers
  Transformation Activities
  Outcomes and Maturity
Conclusion
Contributors
Document Revisions

Abstract
The AWS Cloud Transformation Maturity Model (CTMM) maps the maturity of an IT organization's process, people, and technology capabilities as they move through the four stages of the journey to the AWS Cloud: project, foundation, migration, and optimization. The objective of the CTMM is to help enterprise IT organizations understand the significant challenges they might face as they adopt AWS, learn best practices and activities to handle those challenges, and recognize the signs of maturity, or expected outcomes, to gauge their maturity and readiness at every stage. This whitepaper guides organizations to measure their readiness for the AWS Cloud, build an effective cloud transformation strategy, and drive an effective execution plan.

Introduction
The Amazon Web Services (AWS) Cloud Transformation Maturity Model (CTMM) is a tool enterprise customers can use to assess the maturity of their cloud adoption through four key stages: project, foundation, migration, and optimization. Each stage brings an organization's people, processes, and technologies closer to realizing its vision of IT-as-a-Service (ITaaS). To fully benefit from the AWS Cloud, the whole organization has to transform and adopt the cloud, not just the IT division. Figure 1 shows the key AWS CTMM activities and when they occur during the four stages of cloud transformation.

Figure 1: AWS Cloud Transformation Maturity Model – stages, milestones, and timeline

The four stages of cloud transformation are described in detail in this
paper. Table 1 provides a maturity matrix of the challenges, key transformation activities, and outcomes at each stage of the AWS CTMM.

Table 1: AWS Cloud Transformation Maturity Matrix

Project stage:
- Challenge: Limited knowledge of AWS services. Activity: Raise the level of AWS awareness via education and training. Outcome: Organizational knowledge and support.
- Challenge: Limited executive support for new IT investment. Activity: Seek case studies of proven return on investment (ROI) and participate in AWS executive briefings. Outcome: Executive support and appropriate funding.
- Challenge: Unable to purchase required services. Activity: Use current services or create a new contract; educate procurement and legal staff about new purchasing paradigms when procuring cloud services and tools.1 Outcome: Ability to purchase all required services.
- Challenge: Limited confidence in cloud service capabilities. Activity: Execute one or more pilot/POC projects. Outcome: Increased confidence and fewer concerns.
- Challenge: No clear ownership or direction. Activity: Conduct a kickoff and discovery workshop. Outcome: IT ownership with a clear strategy and direction.

Foundation stage:
- Challenge: Assigning the required resources to effectively drive the transformation. Activity: Conduct a People Model Workshop and establish a CCoE. Outcome: Dedicated resources to define policies and architecture.
- Challenge: Lack of a detailed organizational transformation plan. Activity: Conduct a Governance Model Workshop and a Migration Jumpstart. Outcome: A detailed plan for all aspects of the transformation (people, process, and technology).
- Challenge: Limited knowledge of security and compliance paradigms and requirements in the cloud. Activity: Conduct an AWS Security, Risk, and Compliance Workshop. Outcome: Best-practice security policies, architecture, and procedures.
- Challenge: Cost and budget management requirements and concerns. Activity: Conduct an AWS Cost Model Workshop. Outcome: A detailed TCO for the proposed operating environment.

Migration stage:
- Challenge: Developing an effective and efficient migration strategy. Activity: Conduct an Application Portfolio Assessment
Jumpstart A migration strategy with a clear line of sight from current to target state environment ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 3 Maturity Stage Customer Challenges Transformation Activities Outcomes/Milestones of Maturity Implementing an effective and efficient migration process Select and implement best migration environment A cost efficient and effective application migration process Managing environment efficiently and effectively Select and implement best management environment A cost efficient and effective portfolio management with robust governance and security Migrating all targeted applications ( AllIn ) successfully Migrate workloads using AWS/Partner implementation tools and services Allin – organization achieving significant benefits Optimization Optimizing cost management Leverage AWS tools and features to continuous ly improv e operational costs (eg consol idated billing Reserved Instances discounts ) Focused and robust processes in place to continuous ly seek ways to optimize costs Optimizing service management Utilize latest AWS tools to continuously improve service management methods/processes Fully optimized service management and increased customer satisfaction Optimizing application management services Utilize AWS best practices and tools (eg DevOps CI/CD) to continuously improve application management methods/tools Rigorous emphasis on optimized application management services Optimizing enterprise services Continuously seek ways to aggregate and improve shared services Optimized enterprise services and customer satisfaction Project Stage The project stage begins the transformation journey for your organization Organizations in this stage usually have limited knowledge of c loud services and their potential costs and benefits and typically they don’t have a centralized cloud adoption strategy Getting through this initial stage is crucial to the ultimate success for your organization’s journey to 
the cloud. The outcomes realized and lessons learned here lay a strong foundation for broader cloud adoption at all organizational levels.

Challenges and Barriers

Your organization needs to overcome the following key challenges and barriers during this stage of the transformation:

• Limited knowledge and training – IT staff and their internal customers are accustomed to the older model and related processes of acquiring and consuming IT. Significant investment in training is required for IT staff and other business units to adopt the cloud model.
• Executive support and funding – IT leaders have traditionally framed IT infrastructure investments as a necessary evil to gain funding approval for significant infrastructure upgrades. As a result, executives are often skeptical of and resistant to any new funding. In addition, executives constantly hear complaints from IT customers (that is, the other business units) about rising costs, poor service delivery, and failed or failing project implementations.
• Purchasing public cloud services – IT leaders face the challenge of establishing new contracts, or leveraging existing contracts with specific terms and conditions, to purchase cloud services. A significant obstacle can be the lack of awareness among procurement and legal staff about purchasing paradigms for cloud services. In addition, IT leaders have to ensure that new contracts meet the competitive bidding laws of their jurisdiction, which can be a long and complex process.
• Limited confidence in cloud service models – Cloud service infrastructure provisioning and management operating models are significantly different from the traditional on-premises operating model. Your IT group might require hands-on experience before it is ready to support the transformation effort. If your IT group resists change or isn't enthusiastic about moving to the cloud model, your transformation initiative could be significantly undermined.
• IT ownership and direction – IT leaders have many leadership challenges, including shadow IT, where other business units set up their own IT operations. IT leaders have to gain control of central IT ownership and communicate a clear transformation roadmap to all organization stakeholders.

Transformation Activities

To overcome the challenges and barriers in the project stage and mature to the foundation stage, your organization must complete the following transformation activities:

• Contact an AWS account manager – An AWS account manager is a key resource and a single point of contact who can connect you with AWS Partners and professional services to address all of your AWS needs. To get in touch with an AWS account manager, go to Contact Us [2].
• Raise the level of AWS awareness – There are many AWS events [3] and education and training resources for your organization's stakeholders, including:
o AWS Business Essentials – This training helps your IT business leaders and professionals understand the benefits of cloud computing from a strategic business value perspective. For more information, see the AWS Business Essentials website [4].
o Online videos and hands-on labs – AWS offers a series of free, on-demand instructional videos and labs to help you learn about AWS in minutes [5]. In addition, qwikLABS provides hands-on practice with popular AWS Cloud services and real-world scenarios [6]. To learn more about AWS services and features from AWS engineers and solutions architects, and to hear customer perspectives, visit the AWS YouTube channel [7].
o AWS Technical Essentials – This training provides an overview of AWS services and solutions to your technical users, giving them the information they need to make informed decisions about IT solutions for your organization. For more information, see the AWS Technical Essentials website [8].
o AWS whitepapers – The comprehensive online
collection of AWS whitepapers covers a broad range of technical topics, including best practices for solving business problems, architectures, security, compliance, and cloud economics [9].
o AWS training – AWS offers an array of instructor-led technical training to help your teams develop the skills to design, deploy, and operate infrastructure and applications in the AWS Cloud. Visit AWS Training and Certification for more information [10].

Table 2: AWS-recommended educational resources for roles in your organization
• IT leadership team – AWS Business Essentials; online videos and labs; AWS whitepapers
• IT staff – AWS Business Essentials; online videos and labs; AWS Technical Essentials; AWS whitepapers; AWS Training and Certification
• IT customers – AWS Business Essentials; online videos and labs; AWS whitepapers

• Secure executive support and funding – AWS offers cost and value modeling workshops to provide you with estimated costs and strategic value so you can perform a cost-benefit analysis as a basis for securing executive support and funding. In addition, numerous case studies [11] and whitepapers demonstrate proven cost savings and agility benefits for customers of all sizes in virtually every market segment.
• Consider purchasing options – You can buy AWS Cloud services [12] in the following ways:
o Direct purchase from AWS – Start using AWS services within minutes by opening an account online in accordance with the AWS terms and conditions.
o Indirect purchase from an AWS Partner – Acquire AWS via Partner contract vehicles that serve the needs of federal, state, and local governments as well as the education sector. For more information, see the AWS whitepaper 10 Considerations for a Cloud Procurement [13], the AWS Public Sector Contract Center [14], or send an email to aws-wwps-contract-mgmt@amazon.com.
• Execute a pilot or proof-of-concept (POC) project – Most customers leverage one or more pilot or POC projects to test AWS implementation on representative workloads. AWS supports such initiatives by providing accelerator services, such as an AWS Migration Jumpstart, that provide end-to-end knowledge transfer of an actual workload migration. In addition, for customers working with an AWS Partner, the AWS POC Program is another avenue to get funding for POC projects executed via eligible AWS Partners. For more information, see the Partner Funding webpage [15].
• Conduct an IT Transformation Workshop – This workshop enables rapid cloud adoption by showing you how to replace uncertainty with a vision and strategy for deriving value from AWS. The workshop is an interactive educational experience where you can clearly identify business drivers, objectives, and blockers. This helps you build a cloud adoption roadmap to guide you through the next steps in your journey to the cloud.

Outcomes and Maturity

Use the following key outcomes to measure your organization's maturity and readiness to proceed to the foundation stage:

• Effective use of AWS resources – The AWS account manager works with your organization to coordinate the appropriate AWS professional services, onsite presentations and meetings, onsite training, web service accounts, and support.
• Knowledgeable and trained organization – Your IT leadership team is familiar with AWS, its costs and benefits, and transformation best practices. Key IT staff members have some hands-on experience with AWS services, and IT customers have basic knowledge of AWS features and capabilities.
• Executive support and funding – Your IT leadership team has presented a sound business case for funding the cloud transformation initiative to your organization's executive leadership. This business case typically includes a cost-benefit analysis, customer reference examples, and risk management assessments.
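The cost-benefit analysis at the heart of such a business case often reduces to comparing multi-year on-premises TCO against projected cloud spend. The following is a minimal sketch of that comparison; all dollar figures are hypothetical placeholders, not AWS pricing:

```python
def three_year_tco(annual_costs):
    """Sum a three-year stream of annual costs."""
    return sum(annual_costs)

def savings_summary(on_prem_annual, cloud_annual):
    """Compare three-year on-premises TCO against projected cloud spend.

    Returns (on_prem_total, cloud_total, savings_pct).
    """
    on_prem = three_year_tco(on_prem_annual)
    cloud = three_year_tco(cloud_annual)
    savings_pct = round(100 * (on_prem - cloud) / on_prem, 1)
    return on_prem, cloud, savings_pct

# Hypothetical figures: year 1 on premises includes a hardware refresh;
# cloud spend declines as optimization work takes hold.
on_prem, cloud, pct = savings_summary(
    on_prem_annual=[500_000, 350_000, 350_000],
    cloud_annual=[300_000, 280_000, 260_000],
)
print(f"On-premises: ${on_prem:,}  Cloud: ${cloud:,}  Savings: {pct}%")
```

A real business case would add the strategic-value side (agility, time to market) that the cost and value modeling workshops cover; this sketch captures only the tactical cost comparison.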
• Ability to purchase AWS and AWS professional services – Your IT team has worked with the AWS account manager to identify an existing contract vehicle via an AWS Partner or to put a new contract in place [16].
• IT staff confidence and true buy-in – The POC was executed successfully and addressed the concerns of your key IT staff, whose complete support is crucial to effectively transforming the organization.
• Central IT ownership and a clear transformation roadmap – Centralized ownership of the cloud initiative has emerged, and all of your stakeholders participated in an IT Transformation Workshop. The IT leaders have a clear vision, and a transformation roadmap has been communicated to key stakeholders across the organization. The roadmap provides direction on establishing preliminary AWS governance policies that mitigate the risks of business units moving ahead.

Foundation Stage

The foundation stage is characterized by the customer's intent to move forward with migration to AWS, with executive sponsorship, some experience with AWS services, and partially trained staff. During this stage the customer's environment is assessed, all contractual agreements are put in place, and a plan is created for the migration. The migration plan details the business case, in-scope workloads, the approach to migration, the resources required, and the timeframe.

Challenges and Barriers

Your organization must overcome the following key challenges and barriers during this stage:

• Assigning transformation support resources – Effective execution in this stage requires a significant amount of time from key IT staff who are knowledgeable and trusted to provide input into decisions concerning architecture, security, and governance. This can be challenging because IT organizations are constantly inundated with competing priorities related to managing the current environment. The situation is further compounded by the limited number of key infrastructure, security, and service management staff.
• Providing leadership through a transformation plan – IT leaders are challenged with the daunting task of developing a transformation plan that addresses all aspects of organizational change, including business governance, architecture, service delivery, operations, roles and responsibilities, and training.
• Integrating security and compliance policies – IT organizations are challenged with integrating AWS into the existing security and control framework that supports their current IT environment. They are also challenged with configuring AWS to be in compliance with regulatory requirements.
• Managing cost and budget – IT organizations are challenged to develop a budget aligned with the OpEx model of utility computing, measurable benefit goals, and an effective cost management process.

Transformation Activities

We recommend the following transformation activities to achieve the necessary outcomes before moving to the migration stage:

• Establish a Cloud Center of Excellence (CCoE) – AWS recommends strong governance practices using a CCoE. We recommend that you staff the CCoE gradually with a dedicated team that has the following core responsibilities:
o Defining central policies and strategy
o Providing support and knowledge transfer to business units using hybrid cloud solutions
o Creating and provisioning AWS accounts for workload/program owners
o Providing a central point of access control and security standards
o Creating and managing common use-case architectures (blueprints)
The use of a CCoE lowers implementation and migration risk across the organization and serves as a conduit for sharing best practices, broadening the impact of cloud transformation throughout the organization.
• Develop security and compliance architecture – AWS Professional Services helps your organization achieve its risk management and compliance goals. Prescriptive guidance enables you to
adopt rigorous methods for implementing security and compliance processes for systems and personnel.
• Develop a value management plan – Developing a robust value management model is a key activity that includes tactical benefits (cost management, prioritization of IT spending, and a system of allocating costs) and strategic value from the cloud (agility, time to market, ITaaS, innovation). When you have a plan, you can focus on and prioritize initiatives (see Figure 2). For example, with AWS you can view specific IT operating costs and system performance data. AWS also enables allocation of costs to specific business groups or specific applications in near real time.

Figure 2: Strategic and tactical values of AWS adoption identified

Outcomes and Maturity

Use the following key outcomes to measure your organization's readiness to move to the migration stage:

• CCoE for cloud governance – The central CCoE provides the following benefits:
o Standardization of strategy and vision – Centralization allows a single point of cloud strategy that is aligned with the larger business requirements of the wider organization.
o Centralized expertise – A central cloud team can be trained quickly in specialized cloud technologies while individual business areas are still getting up to speed.
o Standardization of technical processes and procedures – A central team owns responsibility for standard processes, procedures, and blueprints, which can include the use of automation and other methods to simplify and standardize deployments by application owners.
o Bias for action – A central cloud team has a vested interest in making sure that the cloud computing model is successful, whereas decentralized business units might be less effective if they don't realize a direct benefit.
• Clear transformation roadmap – A transformation roadmap establishes a plan, identifies resources, and provides details about migration activities. The roadmap is used to define the ordering and dependencies of your initiatives to achieve the goals set by the CCoE steering committee or program management.
• Best-practice security and compliance architecture – A highly scalable, best-practice architecture design is created that supports all policy and regulatory compliance requirements.
• Strong value management plan – A value management plan determines and describes how you quantify value and identifies the areas where project teams should focus.

Migration Stage

The migration stage is where your organization matures overall, with the governance, technical, and operational foundation in place to effectively and efficiently migrate targeted applications. During this stage the building blocks of the migration and operational tools are implemented, and the mass migration of in-scope workloads is completed. Significant risks exist at this stage, such as project delays, budget overruns, and application failures. If the appropriate migration strategies, tools, and methods are not implemented, there is also a risk that customer confidence and support will diminish.

Challenges and Barriers

Your organization must overcome the following key challenges and barriers during this stage:

• Developing an effective and efficient migration strategy – Your organization is challenged to implement a strategy that minimizes the risk of project failures and maximizes ROI. Many ambitious IT projects fail because they are based on inappropriate strategies and plans. It's critical to classify, sequence, and choose an appropriate migration disposition for your targeted application workloads to ensure the success of the overall implementation plan.
• Implementing a robust migration process – Your organization is challenged to implement a migration execution process that minimizes cost
and is repeatable and sustainable. The selection and implementation of proven migration tools and methods is a key factor in your organization's ability to minimize the risks associated with migrating targeted application workloads.
• Setting up a cloud environment – Your organization is challenged to implement a cloud environment that is controlled, sustainable, and reliable, and that enables improved agility. This challenge includes leveraging existing tools and processes as well as developing new ones.
• Going all-in – Your organization is challenged to implement processes that enable the effective and efficient migration of all application workloads onto AWS, on time and within budget. As with all projects, the risk is that technical failures, unsustainable processes, and performance failures could create significant project delays and unplanned costs.

Transformation Activities

We recommend the following transformation activities to achieve the outcomes in this stage and mature to the optimization stage:

• Conduct a portfolio assessment – Your organization must go through a portfolio rationalization exercise to determine which applications to migrate, replace, or, in some cases, eliminate. Figure 3 illustrates decision points to consider in determining the strategy for moving each application to the AWS Cloud, focusing on the 6 Rs: retire, retain, rehost, replatform, repurchase, and refactor.

Figure 3: Application migration dispositions and paths identified from migration strategy

Table 3 describes the transformation impact of the 6 Rs in order of their execution complexity.

Table 3: Cloud migration strategies and corresponding levels of complexity for execution
• Refactoring (complexity: high) – Rearchitecting and recoding require investment in new capabilities, delivery of complex programs and projects, and potentially significant business disruption. Optimization for the cloud should be realized.
• Replatforming (complexity: high) – Amortization of transformation costs is maximized over larger migrations. Opportunities to address significant infrastructure upgrades can be realized, which has a positive impact on compliance, regulatory, and obsolescence drivers. Opportunities to optimize in the cloud should be realized.
• Repurchasing (complexity: medium) – A replacement through either procurement or upgrade. Disposal, commissioning, and decommissioning costs may be significant.
• Rehosting (complexity: medium) – Typically referred to as "lift and shift" or forklifting. Automated and scripted migrations are highly effective.
• Retiring (complexity: low) – Decommission and archive data as necessary.
• Retaining (complexity: low) – The do-nothing option. Legacy costs remain, and obsolescence costs typically increase over time.

• Implement a migration environment – In addition to the migration strategy, your organization must develop a migration process for each application workload. These processes include application migration tools, data migration tools, validation methods, and roles and responsibilities. In addition to other criteria, such as business criticality and architecture, each application is classified by migration method and process. For example, Figure 3 shows how you can migrate applications using AWS VM Import/Export or third-party migration tools, or by manually moving the code and data.
• Implement a best management environment – Your organization must develop and implement an effective cloud governance and operating model that addresses your organization's needs from the standpoint of access, security, compliance, and automation.
• Migrate targeted workloads – AWS recommends using the principles of agile methodology to effectively execute and manage the migration of workloads from end to end. This requires that your organization plan, schedule, and execute migrations in repeatable sprints, incorporating lessons learned after every sprint. Each
migration sprint should go through an appropriate acceptance test and change control process.

Outcomes and Maturity

Use the following key outcomes to measure your organization's maturity in this stage and assess its readiness to progress to the optimization stage:

• All-in with AWS – The organization has declared that AWS is its primary cloud host for both legacy and new applications. This is a strategic, long-term direction from executive leadership to stop managing data centers and migrate all targeted application workloads to AWS.
• IT as a Service (ITaaS) – Your organization is realizing the core benefits of cloud adoption: measurable cost savings, agility, and innovation. Your organization is now effectively providing IaaS-based services as part of an ITaaS delivery organization.

Optimization Stage

The optimization stage is the fourth stage in the transformation maturity model. To reach this stage, your organization has successfully migrated all targeted application workloads (that is, it is all-in on AWS) and is efficiently managing the AWS environment and service delivery process. This phase is an ongoing loop, not a destination. The objective of this phase is to optimize existing processes by lowering costs, improving service, and extending AWS value deeper into your organization. The focus on continuous service improvement enables you to realize the true value of utility computing, where you constantly seek optimization and the addition of newer AWS services to drive cost and performance efficiencies.

Challenges and Barriers

Your organization must overcome the following key challenges and barriers during this phase of the transformation journey:

• Optimize costs – Reducing and optimizing costs are not new challenges in the IT world. With AWS, your organization can finally realize those benefits. AWS and third-party providers frequently release new features and services, including various discounting and consumption-based models, that you can evaluate for efficacy within your organization. For example, by evaluating application and database licensing fees, which are often overlooked, your organization can realize significant cost-reduction opportunities available with a cloud-based pay-as-you-go model.
• Optimize operation services – Your organization will be challenged to continuously improve the service delivery model for provisioning, change control, and managing the environment. AWS and third-party providers frequently release new features (e.g., automation templates) and services that you can investigate to improve the automation and repeatability of tasks.
• Optimize application services – Your organization will be challenged to continuously improve the application services you use to build and enhance applications. AWS and third-party providers frequently release new features and services that your organization can evaluate to further optimize application services.
• Optimize enterprise services – Organizations are constantly challenged to seek Software-as-a-Service (SaaS) offerings, as opposed to hosted solutions, to continuously improve enterprise application services. AWS and third-party providers innovate at a rapid pace, adding services and features (e.g., managed databases, virtual desktops, email, and document management) that can simplify your enterprise services.

Transformation Activities

Your organization should complete the following transformation activities to achieve the outcomes it needs to continuously maximize maturity and value:

• Implement a continuous cost optimization process – Either designated resources on a CCoE or a group of centralized staff from IT finance must be trained to support an ongoing process, using AWS or third-party cost management tools to assess costs and optimize savings.
• Implement a continuous
operation management optimization process – Your organization should evaluate ongoing advancements in AWS services, as well as third-party tools, to pursue continuous improvement of operation management and service delivery processes.
• Implement a continuous application service optimization process – Your organization should evaluate ongoing advancements in AWS services and features, including third-party offerings, to seek continuous improvement of the application service process. Your organization might not use the AWS fully managed application service solutions to migrate existing applications, but these services provide significant value in new application development. AWS application service offerings include the following:
o Amazon API Gateway – A fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
o Amazon AppStream 2.0 – Enables you to stream your existing Windows applications from the cloud, reaching more users on more devices, without code modifications.
o Amazon Elasticsearch Service (Amazon ES) – A fully managed service that makes it easy to deploy, operate, and scale Amazon ES for log analytics, full-text search, application monitoring, and more.
o Amazon Elastic Transcoder – Media transcoding in the cloud. This service is designed to be a highly scalable, easy-to-use, and cost-effective way for developers and businesses to convert (that is, transcode) media files from their source format into the formats required by consumer playback devices such as smartphones, tablets, and PCs.
• Implement a continuous enterprise service optimization process – AWS continually innovates and launches additional enterprise applications that your organization should consider implementing to achieve ease of use and enterprise-grade security without the burden of managing maintenance overhead. For example, AWS enterprise services applications include:
o Amazon WorkSpaces – A managed desktop cloud computing service.
o Amazon WorkDocs – A fully managed, secure enterprise storage and sharing service with strong administrative controls and feedback capabilities that improve user productivity.
o Amazon WorkMail – A secure, managed business email and calendar service with support for existing desktop and mobile email clients.

Outcomes and Maturity

Use the following transformation outcomes to measure whether your organization's maturity is optimized and continuously maximizing value:

• Optimized cost savings – Your organization has an ongoing process and a team focused on continually reviewing AWS usage across your organization and identifying cost-reduction opportunities.
• Optimized operations management process – Your organization has an ongoing process in place to routinely review AWS and third-party management tools to identify ways to improve the efficiency and effectiveness of the current operations management process.
• Optimized application development process – Your organization has an ongoing process in place to evaluate AWS and third-party management tools to identify ways to improve the efficiency and effectiveness of the application architecture and development process.
• Optimized enterprise services – Your organization has an ongoing process in place to regularly review AWS and third-party enterprise service offerings to improve the delivery, security, and management of services offered throughout the organization.

Conclusion

Every customer's cloud journey is unique. However, the challenges, corresponding actions, and outcomes achieved are similar. The AWS Cloud Transformation Maturity Model provides you with a way to identify and anticipate challenges early, become familiar with mitigation strategies based on AWS best practices and guidance, and successfully drive value from cloud transformation. AWS and its
thousands of partners have leveraged this model to accelerate customer adoption of AWS Cloud services by compressing the time spent in each stage of the cloud transformation. Even in situations where customers pursue certain activities in parallel across multiple stages, or are at varying levels of maturity in different parts of the organization due to their size and IT organizational structure, the guidance provided in this paper can help you significantly reduce the risk and uncertainty in your organization's cloud transformation initiative.

Contributors

The following individuals and organizations contributed to this document:
• Blake Chism, Global Practice Development, AWS Public Sector
• Sanjay Asnani, Partner Strategy Consultant, AWS Public Sector
• Brian Anderson, Practice Manager SLG, AWS Public Sector

Document Revisions
• September 2017 – Updated content
• September 2016 – First publication

1. https://d0.awsstatic.com/whitepapers/10-considerations-for-a-cloud-procurement.pdf
2. https://aws.amazon.com/contact-us/
3. https://aws.amazon.com/about-aws/events/
4. https://aws.amazon.com/training/course-descriptions/business-essentials/
5. https://aws.amazon.com/training/intro_series/
6. https://qwiklabs.com/
7. https://www.youtube.com/user/AmazonWebServices
8. https://aws.amazon.com/training/course-descriptions/essentials/
9. https://aws.amazon.com/whitepapers/
10. https://aws.amazon.com/training/
11. https://aws.amazon.com/solutions/case-studies/
12. https://aws.amazon.com/how-to-buy/
13. https://d0.awsstatic.com/whitepapers/10-considerations-for-a-cloud-procurement.pdf
14. https://aws.amazon.com/contract-center/
15. https://aws.amazon.com/partners/funding-benefits/
16. https://aws.amazon.com/contract-center/

Notes,General,consultant,Best Practices
AWS_Database_Migration_Service_Best_Practices,AWS Database Migration Service Best Practices

August 2016

This paper has been archived. For the latest technical content about this subject, see
the AWS Whitepapers & Guides page: http://awsamazoncom/whitepapers ArchivedAmazon Web Services – AWS Database Migration Service Best Practices August 2016 Page 2 of 17 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – AWS Database Migration Service Best Practices August 2016 Page 3 of 17 Contents Abstract 4 Introduction 4 Provisioning a Replication Server 6 Instance Class 6 Storage 6 Multi AZ 7 Source Endpoint 7 Target Endpoint 7 Task 8 Migration Type 8 Start Task on Create 8 Target Table Prep Mode 8 LOB Controls 9 Enable Logging 10 Monitoring Your Tasks 10 Host Metrics 10 Replication Task Metrics 10 Table Metrics 10 Performance Expectations 11 Increasing Performance 11 Load Multiple Tables in Parallel 11 Remove Bottlenecks on the Target 11 Use Multiple Tasks 11 Improving LOB Performance 12 Optimizing Change Processing 12 Reducing Load on Your Source System 12 Frequently Asked Questions 13 What are the main reasons for performing a database migration? 13 ArchivedAmazon Web Services – AWS Database Migration Service Best Practices August 2016 Page 4 of 17 What steps does a typical migration project include? 
How Much Load Will the Migration Process Add to My Source Database? How Long Does a Typical Database Migration Take? I'm Changing Engines – How Can I Migrate My Complete Schema? Why Doesn't AWS DMS Migrate My Entire Schema? Who Can Help Me with My Database Migration Project? What Are the Main Reasons to Switch Database Engines? How Can I Migrate from Unsupported Database Engine Versions? When Should I NOT Use DMS? When Should I Use a Native Replication Mechanism Instead of DMS and the AWS Schema Conversion Tool? What Is the Maximum Size of Database That DMS Can Handle? What if I Want to Migrate from Classic to VPC? Conclusion Contributors Abstract Today, as many companies move database workloads to Amazon Web Services (AWS), they are often also interested in changing their primary database engine. Most current methods for migrating databases to the cloud or switching engines require an extended outage. The AWS Database Migration Service (AWS DMS) helps organizations to migrate database workloads to AWS or change database engines while minimizing any associated downtime. This paper outlines best practices for using AWS DMS. Introduction AWS Database Migration Service allows you to migrate data from a source database to a target database. During a migration, the service tracks changes being made on the source database so that they can be applied to the target database to eventually keep the two databases in sync. Although the source and target databases can be of the same engine type, they don't need to be. The possible types of migrations are: 1. Homogeneous migrations (migrations between the same engine types) 2. Heterogeneous migrations (migrations between different engine types) At a high level, when using AWS DMS a user provisions a replication server, defines source and target endpoints, and creates a task to migrate data between the source and
target databases. A typical task consists of three major phases: the full load, the application of cached changes, and ongoing replication. During the full load, data is loaded from tables on the source database to tables on the target database, eight tables at a time (the default). While the full load is in progress, changes made to the tables that are being loaded are cached on the replication server; these are the cached changes. It's important to know that the capturing of changes for a given table doesn't begin until the full load for that table starts; in other words, the start of change capture for each individual table will be different. After the full load for a given table is complete, you can begin to apply the cached changes for that table immediately. When all tables are loaded, you begin to collect changes as transactions for the ongoing replication phase. After all cached changes are applied, your tables are transactionally consistent, and you move to the ongoing replication phase, applying changes as transactions. Upon initial entry into the ongoing replication phase, there will be a backlog of transactions causing some lag between the source and target databases. After working through this backlog, the system will eventually reach a steady state. At this point, when you're ready, you can: • Shut down your applications • Allow any remaining transactions to be applied to the target • Restart your applications, pointing at the new target database AWS DMS will create the target schema objects that are needed to perform the migration. However, AWS DMS takes a minimalist approach and creates only those objects required to efficiently migrate the data. In other words, AWS DMS will create tables, primary keys, and in some cases unique indexes. It will not create secondary indexes, non-primary-key constraints, data defaults, or other objects that are not required to efficiently migrate the data from the source system. In most cases, when performing a migration, you will also want to migrate
most or all of the source schema. If you are performing a homogeneous migration, you can accomplish this by using your engine's native tools to perform a no-data export/import of the schema. If your migration is heterogeneous, you can use the AWS Schema Conversion Tool (AWS SCT) to generate a complete target schema for you. Note: Any inter-table dependencies, such as foreign key constraints, must be disabled during the "full load" and "cached change application" phases of AWS DMS processing. Also, if performance is an issue, it will be beneficial to remove or disable secondary indexes during the migration process. Provisioning a Replication Server AWS DMS is a managed service that runs on an Amazon Elastic Compute Cloud (Amazon EC2) instance. The service connects to the source database, reads the source data, formats the data for consumption by the target database, and loads the data into the target database. Most of this processing happens in memory; however, large transactions may require some buffering on disk. Cached transactions and log files are also written to disk. The following sections describe what you should consider when selecting your replication server. Instance Class Some of the smaller instance classes are sufficient for testing the service or for small migrations. If your migration involves a large number of tables, or if you intend to run multiple concurrent replication tasks, you should consider using one of the larger instances, because the service consumes a fair amount of memory and CPU. Note: T2 type instances are designed to provide moderate baseline performance and the capability to burst to significantly higher performance as required by your workload. They are intended for workloads that don't use the full CPU often or consistently, but that occasionally need to burst. T2 instances are well suited for general-purpose workloads such as web servers, developer
environments, and small databases. If you're troubleshooting a slow migration and using a T2 instance type, look at the CPU Utilization host metric to see if you're bursting over the baseline for that instance type. Storage Depending on the instance class, your replication server will come with either 50 GB or 100 GB of data storage. This storage is used for log files and any cached changes that are collected during the load. If your source system is busy or takes large transactions, or if you're running multiple tasks on the replication server, you might need to increase this amount of storage. However, the default amount is usually sufficient. Note: All storage volumes in AWS DMS are GP2, or General Purpose SSDs. GP2 volumes come with a base performance of three I/O Operations Per Second (IOPS), with the ability to burst up to 3,000 IOPS on a credit basis. As a rule of thumb, check the ReadIOPS and WriteIOPS metrics for the replication instance and be sure the sum of these values does not cross the base performance for that volume. Multi-AZ Selecting a Multi-AZ instance can protect your migration from storage failures. Most migrations are transient and not intended to run for long periods of time. If you're using AWS DMS for ongoing replication purposes, selecting a Multi-AZ instance can improve your availability should a storage issue occur. Source Endpoint The change capture process used when replicating ongoing changes collects changes from the database logs by using the database engine's native API; no client-side install is required. Each engine has specific configuration requirements for exposing this change stream to a given user account (for details, see the AWS DMS documentation). Most engines require some additional configuration to make the change data consumable in a meaningful way, without data loss, for the capture process. (For example, Oracle requires the
addition of supplemental logging, and MySQL requires row-level binary logging.) Note: When capturing changes from an Amazon Relational Database Service (Amazon RDS) source, ensure backups are enabled and the source is configured to retain change logs for a sufficiently long time (usually 24 hours). Target Endpoint Whenever possible, AWS DMS attempts to create the target schema for you, including underlying tables and primary keys. However, sometimes this isn't possible. For example, when the target is Oracle, AWS DMS doesn't create the target schema, for security reasons. In MySQL, you have the option, through extra connection parameters, to have AWS DMS migrate objects to the specified database or to have AWS DMS create each database for you as it finds the database on the source. Note: For the purposes of this paper, in Oracle a user and a schema are synonymous. In MySQL, schema is synonymous with database. Both SQL Server and PostgreSQL have a concept of both database and schema; in this paper, we're referring to the schema. Task The following section highlights common and important options to consider when creating a task. Migration Type • Migrate existing data. If you can afford an outage that's long enough to copy your existing data, this is a good option to choose. This option simply migrates the data from your source system to your target, creating tables as needed. • Migrate existing data and replicate ongoing changes. This option performs a full data load while capturing changes on the source. After the full load is complete, captured changes are applied to the target. Eventually, the application of changes will reach a steady state. At that point, you can shut down your applications, let the remaining changes flow through to the target, and restart your applications to point at the target. • Replicate data changes only. In some situations, it may be more efficient to copy the existing data by using a method
outside of AWS DMS. For example, in a homogeneous migration, using native export/import tools can be more efficient at loading the bulk data. When this is the case, you can use AWS DMS to replicate changes as of the point in time at which you started your bulk load, to bring and keep your source and target systems in sync. When replicating data changes only, you need to specify a time from which AWS DMS will begin to read changes from the database change logs. It's important to keep these logs available on the server for a period of time to ensure AWS DMS has access to these changes. This is typically achieved by keeping the logs available for 24 hours (or longer) during the migration process. Start Task on Create By default, AWS DMS will start your task as soon as you create it. In some situations, it's helpful to postpone the start of the task. For example, using the AWS Command Line Interface (AWS CLI), you may have a process that creates a task and a different process that starts the task based on some triggering event. Target Table Prep Mode Target table prep mode tells AWS DMS what to do with tables that already exist. If a table that is a member of a migration doesn't yet exist on the target, AWS DMS will create the table. By default, AWS DMS will drop and recreate any existing tables on the target in preparation for a full load or a reload. If you're precreating your schema, set your target table prep mode to truncate, causing AWS DMS to truncate existing tables prior to load or reload. When the table prep mode is set to do nothing, any data that exists in the target tables is left as is. This can be useful when consolidating data from multiple systems into a single table using multiple tasks. AWS DMS performs these steps when it creates a target table: • The source database column data type is converted into an intermediate AWS DMS data type • The AWS DMS data type is converted into
the target data type. This data type conversion is performed for both heterogeneous and homogeneous migrations. In a homogeneous migration, this data type conversion may lead to target data types not matching source data types exactly. For example, in some situations it's necessary to triple the size of varchar columns to account for multibyte characters. We recommend going through the AWS DMS documentation on source and target data types to see if all the data types you use are supported. If the resultant data types aren't to your liking when you're using AWS DMS to create your objects, you can precreate those objects on the target database. If you do precreate some or all of your target objects, be sure to choose the truncate or do nothing options for target table preparation mode. LOB Controls Due to their unknown and sometimes large size, large objects (LOBs) require more processing and resources than standard objects. To help with tuning migrations of systems that contain LOBs, AWS DMS offers the following options: • Don't include LOB columns. When this option is selected, tables that include LOB columns are migrated in full; however, any columns containing LOBs will be omitted. • Full LOB mode. When you select full LOB mode, AWS DMS assumes no information regarding the size of the LOB data. LOBs are migrated in full, in successive pieces whose size is determined by the LOB chunk size. Changing the LOB chunk size affects the memory consumption of AWS DMS; a large LOB chunk size requires more memory and processing. Memory is consumed per LOB, per row. If you have a table containing three LOBs and are moving data 1,000 rows at a time, an LOB chunk size of 32 KB will require 3 * 32 * 1,000 = 96,000 KB of memory for processing. Ideally, the LOB chunk size should be set to allow AWS DMS to retrieve the majority of LOBs in as few chunks as possible. For example, if 90 percent of your LOBs are less than 32 KB, then setting the LOB chunk size to 32 KB would be reasonable, assuming you have the memory to
accommodate the setting. • Limited LOB mode. When limited LOB mode is selected, any LOBs that are larger than max LOB size are truncated to max LOB size, and a warning is issued to the log file. Using limited LOB mode is almost always more efficient and faster than full LOB mode. You can usually query your data dictionary to determine the size of the largest LOB in a table, setting max LOB size to something slightly larger than this (don't forget to account for multibyte characters). If you have a table in which most LOBs are small, with a few large outliers, it may be a good idea to move the large LOBs into their own table and use two tasks to consolidate the tables on the target. LOB columns are transferred only if the source table has a primary key or a unique index. Transfer of data containing LOBs is a two-step process: 1. The containing row on the target is created without the LOB data. 2. The table is updated with the LOB data. The process was designed this way to accommodate the methods source database engines use to manage LOBs and changes to LOB data. Enable Logging It's always a good idea to enable logging, because many informational and warning messages are written to the logs. However, be advised that you'll incur a small charge, as the logs are made accessible by using Amazon CloudWatch. Find appropriate entries in the logs by looking for lines that start with the following: • Lines starting with "E:" – Errors • Lines starting with "W:" – Warnings • Lines starting with "I:" – Informational messages You can use grep (on UNIX-based systems) or search (in Windows-based text editors) to find exactly what you're looking for in a huge task log. Monitoring Your Tasks There are several options for monitoring your tasks using the AWS DMS console. Host Metrics You can find host metrics on your replication instance's monitoring tab. Here you can monitor whether your
replication instance is sized appropriately. Replication Task Metrics Metrics for replication tasks, including incoming and committed changes and latency between the replication host and the source/target databases, can be found on the task monitoring tab for each particular task. Table Metrics Individual table metrics can be found under the table statistics tab for each individual task. These metrics include the number of rows loaded during the full load; the number of inserts, updates, and deletes since the task started; and the number of DDL operations since the task started. Performance Expectations A number of factors will affect the performance of your migration: resource availability on the source, available network throughput, resource capacity of the replication server, ability of the target to ingest changes, type and distribution of source data, number of objects to be migrated, and so on. In our tests, we have been able to migrate a terabyte of data in approximately 12–13 hours (under "ideal" conditions). Our tests were performed using source databases running on EC2 and in Amazon RDS, with target databases in RDS. Our source databases contained a representative amount of relatively evenly distributed data, with a few large tables containing up to 250 GB of data. Increasing Performance The performance of your migration will be limited by one or more bottlenecks you encounter along the way. The following are a few things you can do to increase performance. Load Multiple Tables in Parallel By default, AWS DMS loads eight tables at a time. You may see some performance improvement by increasing this slightly when you're using a very large replication server; however, at some point, increasing this parallelism will reduce performance. If your replication server is smaller, you should reduce this number. Remove Bottlenecks on the Target During the migration, try to remove any
processes that would compete for write resources on your target database. This includes disabling unnecessary triggers, validation, secondary indexes, and so on. When migrating to an RDS database, it's a good idea to disable backups and Multi-AZ on the target until you're ready to cut over. Similarly, when migrating to non-RDS systems, disabling any logging on the target until cutover is usually a good idea. Use Multiple Tasks Sometimes using multiple tasks for a single migration can improve performance. If you have sets of tables that don't participate in common transactions, it may be possible to divide your migration into multiple tasks. Note: Transactional consistency is maintained within a task. Therefore, it's important that tables in separate tasks don't participate in common transactions. Additionally, each task will independently read the transaction stream, so be careful not to put too much stress on the source system. For very large systems, or systems with many LOBs, you may also consider using multiple replication servers, each containing one or more tasks. A review of the host statistics of your replication server can help you determine whether this might be a good option. Improving LOB Performance Pay attention to the LOB parameters. Whenever possible, use limited LOB mode. If you have a table that consists of a few large LOBs and mostly smaller LOBs, consider breaking up the table into a table that contains the large LOBs and a table that contains the small LOBs prior to the migration. You can then use a task in limited LOB mode to migrate the table containing small LOBs, and a task in full LOB mode to migrate the table containing large LOBs. Important: In LOB processing, LOBs are migrated using a two-step process: first, the containing row is created without the LOB, and then the row is updated with the LOB data. Therefore, even if the LOB column is NOT NULLABLE on the
source, it must be nullable on the target during the migration. Optimizing Change Processing By default, AWS DMS processes changes in a transactional mode, which preserves transactional integrity. If you can afford temporary lapses in transactional integrity, you can turn on batch optimized apply. Batch optimized apply groups transactions and applies them in batches for efficiency purposes. Note: Using batch optimized apply will almost certainly violate referential integrity constraints. Therefore, you should disable them during the migration process and re-enable them as part of the cutover process. Reducing Load on Your Source System During a migration, AWS DMS performs a full table scan of each source table being processed (usually in parallel). Additionally, each task periodically queries the source for change information. To perform change processing, you may be required to increase the amount of data written to your database's change log. If you find you are overburdening your source database, you can reduce the number of tasks, or tables per task, of your migration. If you prefer not to add load to your source, consider performing the migration from a read copy of your source system. Note: Using a read copy will increase the replication lag. Frequently Asked Questions What Are the Main Reasons for Performing a Database Migration? Would you like to move your database from a commercial engine to an open source alternative? Perhaps you want to move your on-premises database into the AWS Cloud. Would you like to divide your database into functional pieces? Maybe you'd like to move some of your data from RDS into Amazon Redshift. These and other similar scenarios can be considered "database migrations." What Steps Does a Typical Migration Project Include?
This, of course, depends on the reason for and type of migration you choose to perform. At a minimum, you'll want to do the following. Perform an Assessment In an assessment, you determine the basic framework of your migration and discover things in your environment that you'll need to change to make a migration successful. The following are some questions to ask: • Which objects do I want to migrate? • Are my data types compatible with those covered by AWS DMS? • Does my source system have the necessary capacity, and is it configured to support a migration? • What is my target, and how should I configure it to get the required or desired capacity? Prototype Your Migration Configuration This is typically an iterative process. It's a good idea to use a small test migration consisting of a couple of tables to verify you've got things properly configured. Once you've verified your configuration, test the migration with any objects you suspect could be difficult. These can include LOB objects, character set conversions, complex data types, and so on. When you've worked out any kinks related to complexity, test your largest tables to see what sort of throughput you can achieve for them. Design Your Migration Concurrently with the prototyping stage, you should determine exactly how you intend to migrate your application. The steps can vary dramatically depending on the type of migration you're performing. Testing Your End-to-End Migration After you have completed your prototyping, it's a good idea to test a complete migration. Are all objects accounted for? Does the migration fit within expected time limits? Are there any errors or warnings in the log files that are a concern? Perform Your Migration After you're satisfied that you've got a comprehensive migration plan and have tested your migration end to end, it's time to perform your migration!
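One sizing question worth answering during the assessment and prototyping steps above is how much replication-server memory full LOB mode will consume, using the rule from the LOB Controls section: memory is consumed per LOB column, per row in a commit batch, one chunk at a time. A minimal sketch of that arithmetic (the function name and the second example's values are illustrative, not part of AWS DMS):

```python
def lob_memory_kb(lob_columns: int, rows_per_batch: int, chunk_size_kb: int) -> int:
    """Estimate memory (KB) needed to buffer LOB chunks in full LOB mode:
    one chunk of chunk_size_kb per LOB column, per row in the batch."""
    return lob_columns * rows_per_batch * chunk_size_kb

# The worked example from the LOB Controls section: a table with three LOB
# columns, moved 1,000 rows at a time, with a 32 KB LOB chunk size.
print(lob_memory_kb(3, 1000, 32))   # → 96000 (KB)

# A hypothetical wider batch: one LOB column, 10,000 rows, 64 KB chunks.
print(lob_memory_kb(1, 10000, 64))  # → 640000 (KB)
```

Running this estimate for your largest LOB tables before the full test migration helps confirm whether the replication instance class chosen in the assessment has enough memory headroom.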
How Much Load Will the Migration Process Add to My Source Database? This is a complex question with no specific answer. The load on a source database depends upon several things. During a migration, AWS DMS performs a full table scan of the source table for each table processed in parallel. Additionally, each task periodically queries the source for change information. To perform change processing, you may be required to increase the amount of data written to your database's change log. If your tasks contain a change data capture (CDC) component, the size, location, and retention of log files can have an impact on the load. How Long Does a Typical Database Migration Take? The following items determine the length of your migration: total amount of data being migrated, amount and size of LOB data, size of the largest tables, total number of objects being migrated, secondary indexes created on the target before the migration, resources available on the source system, resources available on the target system, resources available on the replication server, network throughput, and so on. Clearly, there is no one formula that will predict how long your migration will take. The best way to gauge how long your particular migration will take is to test it. I'm Changing Engines – How Can I Migrate My Complete Schema? As previously stated, AWS DMS will only create those objects needed to perform an optimized migration of your data. You can use the free AWS Schema Conversion Tool (AWS SCT) to convert an entire schema from one database engine to another. The AWS SCT can be used with AWS DMS to facilitate the migration of your entire system. Why Doesn't AWS DMS Migrate My Entire Schema?
All database engines supported by AWS DMS have native tools that you can use to export and import your schema in a homogeneous environment. Amazon has developed the AWS SCT to facilitate the migration of your schema in a heterogeneous environment. AWS DMS is intended to be used with one of these methods to perform a complete migration of your database. Who Can Help Me with My Database Migration Project? Most of Amazon's customers should be able to complete a database migration project by themselves. However, if your project is challenging or you are short on resources, one of our migration partners should be able to help you out. For details, please visit https://aws.amazon.com/partners. What Are the Main Reasons to Switch Database Engines? There are two main reasons we see people switching engines: • Modernization. The customer wants to use a modern framework or platform for their application portfolio, and these platforms are available only on more modern SQL or NoSQL database engines. • License fees. The customer wants to migrate to an open source engine to reduce license fees. How Can I Migrate from Unsupported Database Engine Versions?
Amazon has tried to make AWS DMS compatible with as many supported database versions as possible. However, some database versions don't support the necessary features required by AWS DMS, especially with respect to change capture and apply. Currently, to fully migrate from an unsupported database engine, you must first upgrade your database to a supported engine. Alternatively, you may be able to perform a complete migration from an "unsupported" version if you don't need the change capture and apply capabilities of DMS. If you are performing a homogeneous migration, one of the following methods might work for you: • MySQL: Importing and Exporting Data From a MySQL DB Instance • Oracle: Importing Data Into Oracle on Amazon RDS • SQL Server: Importing and Exporting SQL Server Databases • PostgreSQL: Importing Data into PostgreSQL on Amazon RDS When Should I NOT Use DMS? Most databases offer a native method for migrating between servers or platforms. Sometimes using a simple backup and restore, or export/import, is the most efficient way to migrate your data into AWS. If you're considering a homogeneous migration, you should first assess whether a suitable native option exists. In some situations, you might choose to use the native tools to perform the bulk load and use DMS to capture and apply changes that occur during the bulk load. For example, when migrating between different flavors of MySQL or Amazon Aurora, creating and promoting a read replica is most likely your best option. See Importing and Exporting Data From a MySQL DB Instance. When Should I Use a Native Replication Mechanism Instead of DMS and the AWS Schema Conversion Tool?
This is very much related to the previous question. If you can successfully set up a replica of your primary database in your target environment by using native tools more easily than you can with DMS, you should consider using that native method for migrating your system. Some examples include: • Read replicas – MySQL • Standby databases – Oracle, PostgreSQL • AlwaysOn availability groups – SQL Server Note: AlwaysOn is not supported in RDS. What Is the Maximum Size of Database That DMS Can Handle? This depends on your environment, the distribution of data, and how busy your source system is. The best way to determine whether your particular system is a candidate for DMS is to test it out. Start slowly to get the configuration worked out, add some complex objects, and finally attempt a full load as a test. As a ballpark maximum figure: under mostly ideal conditions (EC2 to RDS, cross-region), over the course of a weekend (approximately 33 hours), we were able to migrate five terabytes of relatively evenly distributed data, including four large (250 GB) tables, a huge (1 TB) table, 1,000 small to moderately sized tables, three tables that contained LOBs varying between 25 GB and 75 GB, and 10,000 very small tables. What if I Want to Migrate from Classic to VPC?
DMS can be used to help minimize database-related outages when moving a database from outside a VPC into a VPC. The following are the basic strategies for migrating into a VPC: • Generic EC2-Classic to VPC migration guide: Migrating from a Linux Instance in EC2-Classic to a Linux Instance in a VPC • Specific procedures for RDS: Moving a DB Instance Not in a VPC into a VPC Conclusion This paper outlined best practices for using AWS DMS to migrate data from a source database to a target database, and offered answers to several frequently asked questions about migrations. As companies move database workloads to AWS, they are often also interested in changing their primary database engine. Most current methods for migrating databases to the cloud or switching engines require an extended outage. AWS DMS helps to migrate database workloads to AWS or change database engines while minimizing any associated downtime. Contributors The following individuals and organizations contributed to this document: • Ed Murray, Senior Database Engineer, Amazon RDS/AWS DMS • Arun Thiagarajan, Cloud Support Engineer, AWS Premium Support AWS Governance at Scale November 2018 This paper has been archived. For the latest technical guidance about the AWS Cloud, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers © 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved. Notices This document is provided for informational purposes only. It represents Amazon Web Services (AWS) current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations,
contractual commitments, conditions or assurances from AWS, its affiliates, suppliers or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. Contents Abstract Introduction Traditional Approaches to Manage Scale Governance at Scale Governance at Scale Focal Points Deciding on Your Solution Conclusion Appendix A: Example Use Case Appendix B: Governance at Scale Capability Checklist Account Management Budget Management Security and Compliance Automation Abstract Customers need to structure their governance to grow and scale as they grow the number of AWS accounts. AWS proposes a new approach to meet these challenges. Governance at Scale addresses AWS account management, cost control, and security and compliance through automation, organized by a centralized management toolset. Governance at Scale aligns the organization hierarchy with the AWS multi-account structure for complete management through an intuitive interface. There are three areas of focus for governance at scale, with techniques for addressing them using a toolset for a typical organizational hierarchy. This whitepaper includes an example use case and evaluation and selection criteria for developing or procuring a toolset to instantiate governance at scale. Introduction As operational footprints scale on AWS, a common theme across companies is the need to maintain control over cloud resource usage, visibility, and policy enforcement. The ability to rapidly provision instances introduces the potential risk of overspending and misconfigurations. When strong governance and enforcement are not in place, it can cause security concerns. Companies must address oversight challenges so risks are known and can
be minimized. Identified stakeholders are responsible for budget alignment, governance, compliance, business objectives, and technical direction across an entire company. To meet these needs, AWS has developed this governance at scale guidance to help identify and instantiate best practices. Governance at Scale can help companies establish centrally managed budgets for cloud resources, oversight of cloud implementations, and a dashboard of the company's cloud health. Cloud health is based on near real-time compliance with governance policies and enforcement mechanisms. To enable this, the policies and mechanisms are separated into three governance at scale focal points:
• Account Management – Automate account provisioning and maintain good security when hundreds of users and business units are requesting cloud-based resources.
• Budget & Cost Management – Enforce and monitor budgets across many accounts, workloads, and users.
• Security & Compliance Automation – Manage security risk and compliance at a scale and pace that ensures the organization maintains compliance while minimizing impact to the business.
Traditional Approaches to Manage Scale
Companies employ three basic approaches to manage large operations on AWS: provisioning multiple AWS accounts, controlling budgets, and addressing security risk and compliance. Each of these approaches has the following limitations:
• Traditional IT management processes: A central group controls access through approval chains and manual or partially automated setup processes for accounts and resources. This approach is difficult to scale because it relies on people and processes that lack automated workflows for help desk tickets and handoffs between staff with different roles.
• Unrestricted, decentralized access to AWS across multiple disassociated accounts: This approach can cause resource sprawl that leadership cannot see. While usage can scale, visibility and accountability are sacrificed. The lack of
visibility within a self-service cloud model introduces compliance and financial risks that most companies cannot tolerate.
• Using a cloud broker: A broker enables visibility and accountability, but may limit which AWS services are available to developers and applications, or require additional technology augmentation for organizations that require native access to AWS services.
Companies with large-scale cloud adoption attempt to work around these limitations by using a combination of technologies to address agility and governance goals. Companies may use a specific account management application, a specific cost enforcement system, or multiple toolsets for security and compliance. These separate technologies introduce additional layers of complexity and interoperability challenges.
Governance at Scale
AWS Governance at Scale helps you monitor and control the costs, accounts, and compliance standards associated with operating large enterprises on AWS. This guidance is derived from best practices at AWS and from customers who have successfully operated at scale. The components are designed to be flexible so that both technical users and project teams can self-serve on AWS while leadership maintains control over spending decisions and automated policy enforcement. Companies can implement governance at scale practices by developing their own solution, investing in a commercial solution aligned to the framework, or engaging AWS Professional Services for custom options. Mechanisms that align to governance at scale focus on control and reporting of budget, security, and compliance, and on enforcing AWS access across all stakeholder teams. A core element is a centralized interface that provides hierarchical structure while preserving native access to the AWS API, the AWS Management Console, and the AWS SDK/CLI. AWS guidance to achieve governance at scale is designed to conform with a company's existing structure and business processes. The following diagram shows a typical government or corporate
company. Each layer can have different technical, financial reporting, and security requirements. Different departments and teams can have different success criteria, goals, and technical skill sets.
Figure 1: Sample organizational structure
An interface and subsystem that meets the governance at scale criteria allows leaders to allocate funding, assign budgets, and monitor near real-time resource consumption. Each level within a company can institute policies or adjust company and project budgets based on mission priorities and usage patterns. Companies can propagate these policies down through the organization. The interface provides the mechanisms for authorized staff to create new projects, request new AWS accounts, request access to existing accounts, restrict access to AWS resources, and obtain near real-time metrics on project budget consumption. This hierarchy, combined with security automation, provides reliable near real-time reporting for each level of leadership and staff. The granular and transparent nature of the workflows and data assures leadership that cloud operations across the enterprise are visible and constrained appropriately by the implemented governance policies.
Governance at Scale Focal Points
Governance at Scale implements three focal points: Account Management, Budget and Cost Management, and Security and Compliance Automation.
Account Management
AWS guidance to achieve governance at scale streamlines account management across multiple AWS accounts and workloads within a company through centralization, standardization, and automation of account maintenance tasks. This is done through policy automation, identity federation, and account automation. For example, instead of requiring a central group to manually manage the company's master billing account, a self-service model with workflow automation is employed. It enables authorized staff to link
multiple accounts to one or more master billing accounts and attach appropriate, automatically enforced governance policies.
Figure 2: Automation can create and manage accounts at scale
Policy Automation
AWS guidance to achieve governance at scale automates the application of company policies, deploying accounts with standard specifications to ensure consistency across AWS accounts and resources. The policy engine is flexible enough to accommodate and enforce different types of security policies, such as AWS Identity and Access Management (IAM) policies, AWS CloudFormation templates, or custom scripts.
Identity Federation
AWS governance solutions employ AWS Single Sign-On (SSO) through federated identity integration with external authentication providers, such as OpenID or Active Directory, to centralize AWS account management and simplify user access to AWS accounts. When SSO is used with AWS CloudTrail, user activity can be tracked across multiple AWS accounts.
Account Automation
Services such as AWS Organizations, AWS CloudFormation, and AWS Service Catalog automate AWS account provisioning and network architecture baselining. These services replace manual processes and facilitate the use of predefined, standardized system deployment templates. Users can create new AWS accounts for projects through self-service and leverage the AWS Management Console and APIs without the assistance of provisioning experts. Project or AWS account owners within a company use a centralized interface to manage access to resources within their assigned area and to configure cross-account access to AWS resources. The automation of account management removes impediments such as ticketing and additional out-of-band manual processes from the account provisioning process. This accelerates developers' access to the AWS resources they need.
Budget and Cost Management
Automated methods define and enforce fiscal policies to achieve governance at scale. Budget planning and enforcement
practices allow leaders and staff to allocate and manage budgets for multiple AWS accounts and define enforcement actions. Automation ensures spending is actively monitored and controlled in near real time. These mechanisms allow leaders to make proactive, well-informed decisions around budgetary controls and allocations across their company. When budgets are aligned with projects and AWS accounts, automation ensures budgets are maintained in real time and accounts can't exceed an approved budget.¹ Companies are able to meet fiscal requirements such as the Federal Antideficiency Act for US Government agencies. Shared service providers or AWS resellers can implement governance at scale to provide chargeback capabilities across a diverse company.
Budget Planning
It is important to align the company's budget management process to an automated workflow. The workflow should be flexible so that different types of funding sources, such as investments, appropriations, and contract line items (CLINs), are managed as the funding is allocated across the company. Financial owners should define the timeframe for the funding source, set enforcement actions if budget limits are exceeded, and track utilization over time. For example, if AWS provides a customer a $10,000 credit, the financial owner has the ability to subdivide the funding amount across the company. Automation will manage each allocation individually while providing awareness and real-time financial dashboards to decision makers over the lifetime of the funding source.
¹ For an example use case where budget enforcement is automated with a governance at scale solution, see Appendix A.
Budget Enforcement
Enforcement of budget constraints is a key component of governance at scale. Each layer of the company defines spending limits within accounts and projects, monitors account spending in near real time, and triggers warning notifications or enforcement actions. Automated actions include:
• Restricting the use of AWS resources to those that cost less than a specified price
• Throttling new resource provisioning
• Shutting down, terminating, or deprovisioning AWS resources after archiving configurations and data for future use
The following diagram illustrates how this could work. Red numbers indicate that the current or projected AWS spend rate exceeds the budget allocated to the project. Green numbers indicate that the current AWS spend rate is within budget. When viewed on a governance dashboard, a decision maker has near real-time awareness of usage and spending across the entire company.
Figure 3: Budgets are allocated and enforced through the company
Security and Compliance Automation
Governance at scale security and compliance practices employ automation to enforce security requirements and help streamline activities across the company's AWS accounts. These practices are made up of the following items:
Identity & Access Automation
AWS guidance to achieve governance at scale is to offer AWS Identity and Access Management (IAM) capabilities through a central portal. Users can access the portal with an approved authentication scheme, such as Microsoft Active Directory or Lightweight Directory Access Protocol. The system grants access based on the roles defined by the company. Once authorized, the system enforces a strict policy of least privilege by providing access only to resources authorized by the appropriate authorities. The portal allows users and workload owners to request and approve access to projects, AWS accounts, and centralized resources by managing company-defined IAM policies applied at every level. For example, if a Chief Information Security Officer (CISO) wants to allow the company to access a new AWS service that was previously not allowed, the developer can edit the IAM policy at the root OU level and the system will implement the change across all cloud accounts.
Security Automation
Maintaining a secure
posture when operating at scale requires automating security tasks and compliance assessments. Manual or semi-manual processes cannot easily scale with business growth. With automation, AWS services or Amazon Virtual Private Cloud (Amazon VPC) baseline configurations can be provisioned using standardized AWS configurations or AWS CloudFormation templates. These templates align with the company's security and compliance requirements and have been evaluated and approved by the company's risk decision makers. The provisioning process interfaces with the company's Governance, Risk, and Compliance (GRC) tools or systems of record.² These templates generate security documentation and implementation details for newly provisioned baseline architectures, and they shorten the overall time required for a system or project to be assessed and approved for operations. Well-implemented security automation is responsive to security incidents. This includes processes to respond to policy violations by revoking IAM user access, preventing new resource allocation, terminating resources, or isolating existing cloud resources for forensic analysis. Automation can be accomplished by collecting and storing AWS logging data in centralized data lakes and performing analytics, or by basing responses on the output of other analytics tools.
² Partner solutions include Telos Xacta 360 and RSA Archer.
Policy Enforcement
AWS guidance to achieve governance at scale helps you achieve policy enforcement on AWS Regions, AWS services, and resource configurations. Enforcement is based on stakeholder roles and responsibilities and is in accordance with compliance regulations (e.g., HIPAA, FedRAMP, PCI DSS). At each level of the hierarchy, the company can specify which AWS services, features, and resources are approved for use on a per-department, per-user, or per-project basis. This ensures self-service requests can't provision unapproved items, as illustrated in the following
diagram.
Figure 4: Security and compliance guardrails flow down through the hierarchy. Circles indicate third-party security requirements: FedRAMP, HIPAA, and PCI.
Deciding on Your Solution
Designing a system to achieve governance at scale addresses key issues for companies around account management, cost enforcement, and security and compliance. Companies can build a governance at scale solution themselves, or they can build one in partnership with AWS Professional Services or an AWS Partner.³
³ Partner offerings include Cloudtamer.io, Turbot, and Dome9 Security.
Decision Factor 1: Determine need. Does the company's AWS footprint exceed, or will it exceed, the number of AWS accounts and resources that can be managed using a manual process? For example, do you review account billing details, use spreadsheets for tracking, or use the AWS Management Console to create and manage all accounts? If the answer is yes, then a governance at scale solution is needed.
Decision Factor 2: Is it feasible to build versus buy? In order to build a custom solution, your company should be able to answer yes to the following questions:
• Does your company have a robust AWS resource tagging or account management methodology for budget control and enforcement?
• Does your company have an existing governance model with business processes that can be automated?
• Does your company have the resources to build and maintain an enterprise software solution for managing governance at scale across your company? This includes engineers and developers with an advanced understanding of the AWS Cloud APIs, security features, and services, and sufficient staff to maintain the enterprise solution over time.
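As a rough sketch of the first question above — whether tagging discipline is strong enough to drive budget control — a custom governance solution might gate chargeback on a set of required cost-allocation tags. The tag keys below (CostCenter, Project, Owner) are illustrative assumptions, not a standard defined by this paper:

```python
# Minimal sketch of a tag-compliance check that a custom governance
# solution might run before admitting a resource into a budget rollup.
# The required tag keys are illustrative assumptions only.
REQUIRED_TAGS = {"CostCenter", "Project", "Owner"}

def missing_tags(resource_tags: dict) -> set:
    """Return the required tag keys that are absent or empty on a resource."""
    return {key for key in REQUIRED_TAGS
            if not resource_tags.get(key, "").strip()}

def is_budget_trackable(resource_tags: dict) -> bool:
    """A resource can be charged back only if all required tags are set."""
    return not missing_tags(resource_tags)
```

A solution failing this kind of check across most of its resource inventory is a signal that the build option is premature and a commercial toolset (Decision Factor 3) may be the better fit.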
To determine if your company can develop a solution that meets all of the governance at scale requirements, see Appendix B.
Decision Factor 3: Criteria selection for buying a commercial solution. A commercial solution may include one or more products and/or professional services assistance with integration and building key components. If you decide to purchase a third-party solution to achieve governance at scale, see Appendix B to determine if partner products or professional services meet all of your requirements.
What does a governance at scale solution look like to an organizational stakeholder? The following diagram illustrates a finalized governance at scale implementation dashboard overlaying cost and compliance indicators in the company.
Figure 5: Example company cloud environment
Decision makers at each layer of the hierarchy are provided real-time data and metrics that are tailored to their company role and/or business units:
• Executive – Executives can assign budgets and security policies to any segment of the company. Data is collected from all segments and is presented in a summary view that includes overall compliance status and financial health.
• Senior Leadership – Senior leaders can view the financial health of their respective sub-organizations. They are responsible for assigning budgets to their respective employees and applying additional security policies as needed.
• Upper Management – Management monitors budgets, grants personnel access to projects, and assigns focused security policies. This is achieved by assigning specific budget and security policies to business units and to teams responsible for applications.
• Employee – Employees interact directly with cloud accounts and have operational awareness of current spend versus the assigned budget. They can request access to other projects and exceptions to security and financial policies as appropriate.
Conclusion
Governance at Scale is a new concept for automating cloud governance that can help your company retire manual processes in account management, budget enforcement, and security and compliance. By automating these common challenges, the company can scale without inhibiting agility, speed, and innovation, while providing decision makers with the visibility, control, and governance necessary to protect sensitive data and systems. Carefully consider which solution you choose for your company. The decision to build or buy a solution can have critical implications for your AWS migration strategy. Discuss the potential impact with your AWS Solutions Architect and/or Professional Services consultant; they can help ensure your solution meets your specific requirements. The use case example in Appendix A offers one way to formalize implementation. This example shows the challenges companies face and the effect a governance at scale implementation can have. Appendix B provides a list of the key capabilities for each governance at scale focal point. The Governance at Scale framework provides a compass and map to help companies build or buy solutions that can help them scale with confidence by replacing human-based governance processes with automation that is familiar and easy to use for all stakeholders.
Appendix A: Example Use Case
Example use case for implementing governance at scale to manage AWS accounts within a company: The ACME organization has outgrown its manual, spreadsheet-based governance process. The company is large and profitable ($1B yearly revenue), but has diverse business units that require autonomy and flexibility. It has a small governance team and a limited budget for a custom homegrown solution. Because of these organizational and financial constraints, ACME decided to purchase a solution from an AWS partner.⁴ Once the solution is deployed and configured
to align with the company's specific processes and requirements, the solution is available for developers and decision makers to centrally manage their cloud resources. The workflow below describes how a new developer would access and manage their resources within a governance at scale solution.
John is a developer joining a team that designs application environments for deployment in the AWS Cloud. Therefore, he needs an AWS development environment that allows him to manipulate infrastructure components using code without affecting other developers or systems. Each developer within the team is approved for an individual monthly billing budget for the use of AWS. A governance at scale implementation and workflow for this scenario is:
1. John navigates to a portal to submit a request for an AWS account for developers. He chooses from a list of standard corporate AWS account types and then specifies that he needs a monthly billing budget of $5,000.
2. His request triggers a notification that is sent to his manager. His manager uses the portal to confirm or change the monthly billing budget that John specified and selects any preapproved/assessed system boundary that John's environment is allowed to operate within.
3. An automated process creates a new AWS account for John and uses AWS CloudFormation to build a baseline architecture and apply predefined IAM policies and AWS service configurations within John's new AWS account.
o IAM policies define which services and resources John is allowed to access and which AWS service API calls he is allowed to perform. See https://aws.amazon.com/iam for details.
⁴ Partner offerings include Cloudtamer.io, Turbot, and Dome9 Security.
o AWS service configurations include services such as an Amazon Virtual Private Cloud (Amazon VPC) architecture that includes predefined AWS security groups to be assigned to Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Simple
Storage Service (Amazon S3) buckets provisioned with predefined access control policies, and network connectivity to access functional and security-enabling shared services. Examples: code repositories, patch repositories, security scanning tools, antimalware services, authentication services, time synchronization services, directory services, and backup and recovery services.
4. An automated process interfaces with the company's governance, risk, and compliance (GRC) tool to link John's AWS account with the preapproved/assessed system boundary. This allows the GRC tool to access the account for the system inventory and monitor for compliance violations as part of automated IT auditing and continuous monitoring.
5. An automated process begins tracking the AWS services and resources that John provisions to record the spending rate within John's AWS account.
6. As the monthly spend limit is approached, an automated series of notifications is sent to John so he can act to ensure that he does not overspend his budget. It is escalated to his management if he fails to react appropriately. Additionally, a series of automated, predefined budget enforcement actions takes place, including preventing new AWS resources from being provisioned and shutting down or deprovisioning AWS resources.
Appendix B: Governance at Scale Capability Checklist
There are several Amazon Partner Network (APN) solutions that you can use to meet your company's governance at scale requirements. We encourage companies to evaluate each solution and decide based on their specific requirements. AWS Professional Services and Solutions Architects can assist in your evaluation process. If you want to discuss partner products, reach out to your AWS sales team or send an email to compliance-accelerator@amazon.com.
For each capability in the checklists below, record whether a solution fully implements it (yes/no), partially implements it (yes/no), and any comments.
Account Management
Capability:
• Programmatically provision and delete AWS accounts using AWS APIs to ensure uniformity
• Allow external IAM accounts to enable and
disable users
• Provide single sign-on to the AWS Management Console for AWS account users to manage cloud resources
• Integrate with external IAM providers such as Active Directory
• Support MFA token management
• Associate AWS accounts with one or more master billing accounts
• Associate users with IAM policies to control access
• Support a multi-level organizational hierarchy
• Support use of Enterprise Accelerators to apply baseline configurations to accounts
• Provide a self-service workflow that allows users to join projects
• Provide a self-service workflow that allows users to create new projects
• Provide a self-service workflow that allows users to connect one or more accounts
• Control access to custom Amazon Machine Images (AMIs)
• Allow user access to the AWS API, AWS Management Console, and SDKs
(For each capability, record: fully implements (yes/no), partially implements (yes/no), comments.)
Budget Management
Capability:
• Manage funding sources used to pay for AWS usage
• Allocate funding sources to individuals and AWS accounts based on organizational hierarchy
• Set monthly and yearly budgets for AWS accounts
• View current spending accrual of AWS accounts
• Aggregate spending of AWS accounts based on organization structure and purpose
• Apply cost restrictions to AWS accounts (for example, force use of Reserved Instances, restrict Amazon EC2 instance usage to instances less than $x/hr, etc.)
• Set rules to define enforcement actions (including notification, limiting creation of new cloud resources, archiving cloud resources, and termination of cloud resources) when financial thresholds are reached for each AWS account
• Send alerts to financial stakeholders when predefined limits and thresholds are met
(For each capability, record: fully implements (yes/no), partially implements (yes/no), comments.)
Security and Compliance Automation
Capability:
• Programmatically apply access control policies to restrict user access to AWS services that do not meet
regulatory compliance standards (such as HIPAA, FedRAMP, and PCI DSS)
• Programmatically apply access control policies to restrict user access to AWS Regions that do not meet regulatory compliance standards (for example, HIPAA, FedRAMP, and PCI DSS)
• Programmatically apply access control policies to restrict user access to AWS resource configurations that do not meet regulatory compliance standards (for example, HIPAA, FedRAMP, and PCI DSS)
• Support a multi-level organizational hierarchy to apply and inherit access control policies
• Collect and store logs for all AWS accounts, resources, and API actions
• Programmatically verify that cloud resources are configured in alignment with best practices, organizational policies, and regulatory compliance standards
• Programmatically generate Authorization to Operate (ATO) artifacts, including system security plans (SSPs), based on current cloud resources within AWS accounts
• Schedule continuous monitoring tasks (for example, vulnerability scans within and across AWS accounts) to determine whether the system is compliant
• Set rules to define enforcement actions (including notification, limiting creation of new cloud resources, and isolation of cloud resources) when compliance violation thresholds are reached for each AWS account
(For each capability, record: fully implements (yes/no), partially implements (yes/no), comments.)
Contributors
The following individuals and organizations contributed to this document:
• Doug Vanderpool, Principal Consultant, Advisory, AWS Professional Services
• Brett Miller, Technical Program Manager, WWPS Security and Compliance Business Acceleration Team
• Lou Vecchioni, Senior Consultant, AWS Professional Services
• Colin Desa, Head, Envision Engineering Center
• Tim Anderson, Program Manager, WWPS Security and Compliance Business Acceleration Team
• Nathan Case, Senior Consultant, AWS Professional Services
Resources
• AWS Whitepapers
• AWS Documentation
• AWS Compliance Quick Starts
Document Revisions
Date – Change
May 2017 – First DRAFT version
August 2017 – DRAFT version 2.0
November 2017 – DRAFT version 2.1
July 2018 – DRAFT version 2.2
November 2018 – DRAFT version 2.3,General,consultant,Best Practices AWS_Key_Management_Service_Best_Practices,"Archived AWS Key Management Service Best Practices, AWS Whitepaper. For the latest technical content, refer to: https://docs.aws.amazon.com/kms/latest/developerguide/best-practices.html
AWS Key Management Service Best Practices: AWS Whitepaper. Copyright © Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon.
Table of Contents
Abstract 1
Introduction 2
Identity and Access Management 3
AWS KMS and IAM Policies 3
Key Policies 3
Least Privilege / Separation of Duties 4
Cross-Account Sharing of Keys 5
CMK Grants 5
Encryption Context 5
Multi-Factor Authentication 6
Detective Controls 8
CMK Auditing 8
CMK Use Validation 8
Key Tags 8
Infrastructure Security 9
Customer Master Keys 9
AWS-managed and Customer-managed CMKs 9
Key Creation and Management 10
Key Aliases 10
Using AWS KMS at Scale 11
Data Protection 12
Common AWS KMS Use Cases 12
Encrypting PCI Data Using AWS KMS 12
Secret Management Using AWS KMS and Amazon S3 12
Encrypting Lambda Environment Variables 12
Encrypting Data within Systems Manager Parameter Store 12
Enforcing Data at Rest Encryption within AWS Services 13
Data at Rest Encryption with Amazon S3 13
Data at Rest Encryption with Amazon EBS 14
Data at Rest Encryption with Amazon RDS 14
Incident Response 15
Security Automation of AWS KMS 15
Deleting and Disabling CMKs 15
Conclusion 16
Contributors 17
Document Revisions 18
Notices 19
Abstract
AWS Key Management Service Best Practices. Publication date: April 1, 2017 (Document Revisions (p. 18))
AWS Key Management Service (AWS KMS) is a managed service that allows you to concentrate on the cryptographic needs of your applications while Amazon Web Services (AWS) manages availability, physical security, logical access control, and maintenance of the underlying infrastructure. Further, AWS KMS allows you to audit usage of your keys by providing logs of all API calls made on them, to help you meet compliance and regulatory requirements. Customers want to know how to effectively implement AWS KMS in their environment. This whitepaper discusses how to use AWS KMS for each capability described in the AWS Cloud Adoption Framework (CAF) Security Perspective whitepaper, including the differences between the different types of customer master keys, using AWS KMS key policies to ensure least privilege, auditing the use of the keys, and listing some use cases that work to protect sensitive information within AWS.
Introduction
AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. AWS KMS uses Hardware Security Modules (HSMs) to protect the security of your keys. You can use AWS KMS to protect your data in AWS services and in your applications. The AWS Key Management Service Cryptographic Details whitepaper describes the design and controls implemented within the service to ensure the security and privacy of your data. The AWS Cloud Adoption Framework (CAF) whitepaper provides guidance for coordinating the different parts of organizations that are moving to cloud computing. The AWS CAF guidance is broken into areas
of focus that are relevant to implementing cloud-based IT systems, which we refer to as perspectives. The CAF Security Perspective whitepaper organizes the principles that will help drive the transformation of your organization's security through five core capabilities: Identity and Access Management, Detective Controls, Infrastructure Security, Data Protection, and Incident Response. For each capability in the CAF Security Perspective, this whitepaper provides details on how your organization should use AWS KMS to protect sensitive information across a number of different use cases, and the means of measuring progress:
• Identity and Access Management: Enables you to create multiple access control mechanisms and manage the permissions for each.
• Detective Controls: Provides you the capability for native logging and visibility into the service.
• Infrastructure Security: Provides you with the capability to shape your security controls to fit your requirements.
• Data Protection: Provides you with the capability for maintaining visibility and control over data.
• Incident Response: Provides you with the capability to respond, manage, reduce harm, and restore operations during and after an incident.
Identity and Access Management
The Identity and Access Management capability provides guidance on determining the controls for access management within AWS KMS to secure your infrastructure according to established best practices and internal policies.
AWS KMS and IAM Policies
You can use AWS Identity and Access Management (IAM) policies in combination with key policies to control access to your customer master keys (CMKs) in AWS KMS. This section discusses using IAM in the context of AWS KMS. It doesn't provide detailed information about the IAM service. For complete IAM documentation, see the AWS IAM User Guide. Policies attached to IAM identities (that is, users, groups, and roles) are called identity-based
policies (or IAM policies). Policies attached to resources outside of IAM are called resource-based policies. In AWS KMS, you must attach resource-based policies to your customer master keys (CMKs). These are called key policies. All KMS CMKs have a key policy, and you must use it to control access to a CMK. IAM policies by themselves are not sufficient to allow access to a CMK, although you can use them in combination with a CMK key policy. To do so, ensure that the CMK key policy includes the policy statement that enables IAM policies.

By using an identity-based IAM policy, you can enforce least privilege by granting granular access to KMS API calls within an AWS account. Remember, IAM policies are based on a policy of default-denied unless you explicitly grant permission to a principal to perform an action.

Key Policies

Key policies are the primary way to control access to CMKs in AWS KMS. Each CMK has a key policy attached to it that defines permissions on the use and management of the key. The default policy enables any principals you define, as well as enables the root user in the account to add IAM policies that reference the key. We recommend that you edit the default CMK policy to align with your organization's best practices for least privilege.

To access an encrypted resource, the principal needs to have permissions to use the resource, as well as to use the encryption key that protects the resource. If the principal does not have the necessary permissions for either of those actions, the request to use the encrypted resource will be denied.

It's also possible to constrain a CMK so that it can only be used by specific AWS services, through the use of the kms:ViaService conditional statement within the CMK key policy. For more information, see the AWS KMS Developer Guide.

To create and use an encrypted Amazon Elastic Block Store (EBS) volume, you need permissions to use Amazon EBS. The key policy associated with the CMK would need to include something similar to the following:

{
  "Sid": "Allow for use of this Key",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:role/UserRole"
  },
  "Action": [
    "kms:GenerateDataKeyWithoutPlaintext",
    "kms:Decrypt"
  ],
  "Resource": "*"
},
{
  "Sid": "Allow for EC2 Use",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:role/UserRole"
  },
  "Action": [
    "kms:CreateGrant",
    "kms:ListGrants",
    "kms:RevokeGrant"
  ],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "kms:ViaService": "ec2.us-west-2.amazonaws.com"
    }
  }
}

In this CMK policy, the first statement provides a specified IAM principal the ability to generate a data key and decrypt that data key from the CMK when necessary. These two APIs are necessary to encrypt the EBS volume while it's attached to an Amazon Elastic Compute Cloud (EC2) instance.

The second statement in this policy provides the specified IAM principal the ability to create, list, and revoke grants for Amazon EC2. Grants are used to delegate a subset of permissions to AWS services or other principals so that they can use your keys on your behalf. In this case, the condition policy explicitly ensures that only Amazon EC2 can use the grants. Amazon EC2 will use them to reattach an encrypted EBS volume back to an instance if the volume gets detached due to a planned or unplanned outage. These events will be recorded within AWS CloudTrail, when and if they do occur, for your auditing.

When developing a CMK policy, you should keep in mind how policy statements are evaluated within AWS. This means that if you have enabled IAM to help control access to a CMK, when AWS evaluates whether a permitted action is to be allowed or denied, the CMK policy is joined with the IAM policy. Additionally, you should ensure that the use and management of a key is restricted to the parties that are necessary.
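The way the joined statements are evaluated can be illustrated with a short sketch (this is not an AWS library, just a model of the documented logic: an explicit Deny always wins, otherwise any Allow grants access, and the default is an implicit deny):

```python
# Illustrative model of AWS policy evaluation for KMS: statements from the
# CMK key policy and the IAM policy are pooled; explicit Deny beats Allow,
# and the absence of any matching Allow is an implicit deny.

def is_allowed(action, principal, statements):
    """Evaluate combined key-policy and IAM-policy statements."""
    matching = [
        s for s in statements
        if action in s["Action"] and principal in s["Principal"]
    ]
    if any(s["Effect"] == "Deny" for s in matching):
        return False   # explicit deny takes precedence
    if any(s["Effect"] == "Allow" for s in matching):
        return True    # at least one allow and no deny
    return False       # implicit (default) deny

# Hypothetical pooled statements for role/UserRole:
statements = [
    {"Effect": "Allow", "Principal": ["role/UserRole"],
     "Action": ["kms:Decrypt", "kms:GenerateDataKeyWithoutPlaintext"]},
    {"Effect": "Deny", "Principal": ["role/UserRole"],
     "Action": ["kms:ScheduleKeyDeletion"]},
]

print(is_allowed("kms:Decrypt", "role/UserRole", statements))             # True
print(is_allowed("kms:ScheduleKeyDeletion", "role/UserRole", statements)) # False
print(is_allowed("kms:Encrypt", "role/UserRole", statements))             # False (no statement matches)
```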
Least Privilege / Separation of Duties

Key policies specify a resource, action, effect, principal, and conditions to grant access to CMKs. Key policies allow you to push more granular permissions to CMKs to enforce least privilege. For example, an application might make a KMS API call to encrypt data, but there is no use case for that same application to decrypt data. In that use case, a key policy could grant access to the kms:Encrypt action but not kms:Decrypt, and reduce the possibility for exposure. Additionally, AWS allows you to separate the usage permissions from the administration permissions associated with the key. This means that an individual may have the ability to manipulate the key policy, but might not have the necessary permissions to use the key for cryptographic functions.

Given that your CMKs are being used to protect your sensitive information, you should work to ensure that the corresponding key policies follow a model of least privilege. This includes ensuring that you do NOT include kms:* permissions in an IAM policy. This policy would grant the principal both administrative and usage permissions on all CMKs to which the principal has access. Similarly, including kms:* permissions for the principals within your key policy gives them both administrative and usage permissions on the CMK.

It's important to remember that explicit deny policies take precedence over implicit deny policies. When you use NotPrincipal in the same policy statement as "Effect": "Deny", the permissions specified in the policy statement are explicitly denied to all principals except for the ones specified. A top-level KMS policy can explicitly deny access to virtually all KMS operations except for the roles that actually need them. This technique helps prevent unauthorized users from granting themselves KMS access.
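The kms:* anti-pattern above is easy to catch mechanically. A hypothetical lint (not an AWS tool) that flags overly broad Allow statements in a policy document might look like:

```python
# Hypothetical least-privilege lint: flag Allow statements that grant
# "kms:*" (or "*"), which combine administrative and usage permissions.

def overly_broad_kms_statements(policy):
    """Return the Sids of statements granting blanket KMS access."""
    flagged = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if stmt.get("Effect") == "Allow" and any(a in ("kms:*", "*") for a in actions):
            flagged.append(stmt.get("Sid", "<no Sid>"))
    return flagged

# Example policy with one overly broad and one scoped statement:
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "TooBroad", "Effect": "Allow", "Action": "kms:*", "Resource": "*"},
        {"Sid": "Scoped", "Effect": "Allow", "Action": ["kms:Encrypt"], "Resource": "*"},
    ],
}
print(overly_broad_kms_statements(policy))  # ['TooBroad']
```

A check like this can run in a CI pipeline before policies are deployed, so that blanket grants never reach production.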
Cross Account Sharing of Keys

Delegation of permissions to a CMK within AWS KMS can occur when you include the root principal of a trusted account within the CMK key policy. The trusted account then has the ability to further delegate these permissions to IAM users and roles within their own account, using IAM policies. While this approach may simplify the management of the key policy, it also relies on the trusted accounts to ensure that the delegated permissions are correctly managed. The other approach would be to explicitly manage permissions for all authorized users using only the KMS key policy, which in turn could make the key policy complex and less manageable. Regardless of the approach you take, the specific trust should be broken out on a per-key basis to ensure that you adhere to the least privilege model.

CMK Grants

Key policy changes follow the same permissions model used for policy editing elsewhere in AWS. That is, users either have permission to change the key policy or they do not. Users with the PutKeyPolicy permission for a CMK can completely replace the key policy for a CMK with a different key policy of their choice.

You can use key policies to allow other principals to access a CMK, but key policies work best for relatively static assignments of permissions. To enable more granular permissions management, you can use grants. Grants are useful when you want to define scoped-down, temporary permissions for other principals to use your CMK on your behalf, in the absence of a direct API call from you.

It's important to be aware of the grants-per-key and grants-for-a-principal-per-key limits when you design applications that use grants to control access to keys. Ensure that the retiring principal retires a grant after it's used, to avoid hitting these limits.

Encryption Context

In addition to limiting permission to the AWS KMS APIs, AWS KMS also gives you the ability to add an additional layer of authentication for your KMS API calls, utilizing encryption context. The encryption context is a key-value pair of additional data that you want associated with AWS KMS-protected information. This is then incorporated into the
additional authenticated data (AAD) of the authenticated encryption in AWS KMS-encrypted ciphertexts. If you submit the encryption context value in the encryption operation, you are required to pass it in the corresponding decryption operation. You can use the encryption context inside your policies to enforce tighter controls for your encrypted resources. Because the encryption context is logged in CloudTrail, you can get more insight into the usage of your keys from an audit perspective. Be aware that the encryption context is not encrypted and will be visible within CloudTrail logs. The encryption context should not be considered sensitive information and should not require secrecy.

AWS services that use AWS KMS use encryption context to limit the scope of keys. For example, Amazon EBS sends the volume ID as the encryption context when encrypting/decrypting a volume, and when you take a snapshot, the snapshot ID is used as the context. If Amazon EBS did not use this encryption context, an EC2 instance would be able to decrypt any EBS volume under that specific CMK.

An encryption context can also be used for custom applications that you develop, and acts as an additional layer of control by ensuring that decrypt calls will succeed only if the encryption context matches what was passed in the encrypt call. If the encryption context for a specific application does not change, you can include that context within the AWS KMS key policy as a conditional statement. For example, if you have an application that requires the ability to encrypt and decrypt data, you can create a key policy on the CMK that ensures that it provides expected values. The following policy checks that the application name "ExampleApp" and its current version "1024" are the values that are passed to AWS KMS during the encrypt and decrypt calls. If different values are passed, the call will be denied and the decrypt or
encrypt action will not be performed.

{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:role/RoleForExampleApp"
  },
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt"
  ],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "kms:EncryptionContext:AppName": "ExampleApp",
      "kms:EncryptionContext:Version": "1024"
    }
  }
}

This use of encryption context will help to further ensure that only authorized parties and/or applications can access and use the CMKs. Now the party will need to have IAM permissions to AWS KMS, a CMK policy that allows them to use the key in the requested fashion, and, finally, know the expected encryption context values.

Multi-Factor Authentication

To provide an additional layer of security over specific actions, you can implement an additional layer of protection using multi-factor authentication (MFA) on critical KMS API calls. Some of those calls are PutKeyPolicy, ScheduleKeyDeletion, DeleteAlias, and DeleteImportedKeyMaterial. This can be accomplished through a conditional statement within the key policy that checks for when, or if, an MFA device was used as part of authentication. If someone attempts to perform one of the critical AWS KMS actions, the following CMK policy will validate that their MFA was authenticated within the last 300 seconds, or 5 minutes, before performing the action.

{
  "Sid": "MFACriticalKMSEvents",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:user/ExampleUser"
  },
  "Action": [
    "kms:DeleteAlias",
    "kms:DeleteImportedKeyMaterial",
    "kms:PutKeyPolicy",
    "kms:ScheduleKeyDeletion"
  ],
  "Resource": "*",
  "Condition": {
    "NumericLessThan": {
      "aws:MultiFactorAuthAge": "300"
    }
  }
}
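The NumericLessThan condition on aws:MultiFactorAuthAge can be read as a simple predicate over the request context; the following sketch (not AWS code) models it:

```python
# Model of the aws:MultiFactorAuthAge condition: the request context carries
# the seconds elapsed since MFA authentication, and the condition passes only
# while that age is below the configured threshold (300 seconds above).
import time

def mfa_condition_met(mfa_authenticated_at, now=None, max_age_seconds=300):
    """Return True if MFA was used within the last max_age_seconds."""
    now = time.time() if now is None else now
    if mfa_authenticated_at is None:   # session never authenticated with MFA
        return False
    return (now - mfa_authenticated_at) < max_age_seconds

now = 1_000_000.0
print(mfa_condition_met(now - 120, now))  # True  -- authenticated 2 minutes ago
print(mfa_condition_met(now - 600, now))  # False -- older than 5 minutes
print(mfa_condition_met(None, now))       # False -- MFA never used
```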
Detective Controls

The Detective Controls capability ensures that you properly configure AWS KMS to log the information you need to gain greater visibility into your environment.

CMK Auditing

AWS KMS is integrated with CloudTrail. To audit the usage of your keys in AWS KMS, you should enable CloudTrail logging in your AWS account. This ensures that all KMS API calls made on keys in your AWS account are automatically logged in files that are then delivered to an Amazon Simple Storage Service (S3) bucket that you specify. Using the information collected by CloudTrail, you can determine what request was made, the source IP address from which the request was made, who made the request, when it was made, and so on.

AWS KMS integrates natively with many other AWS services to make monitoring easy. You can use these AWS services, or your existing security tool suite, to monitor your CloudTrail logs for specific actions on your KMS key, such as ScheduleKeyDeletion, PutKeyPolicy, DeleteAlias, DisableKey, and DeleteImportedKeyMaterial. Furthermore, AWS KMS emits Amazon CloudWatch Events when your CMK is rotated or deleted, and when imported key material in your CMK expires.

CMK Use Validation

In addition to capturing audit data associated with key management and use, you should ensure that the data you are reviewing aligns with your established best practices and policies. One method is to continuously monitor and verify the CloudTrail logs as they come in. Another method is to use AWS Config rules. By using AWS Config rules, you can ensure that the configurations of many of the AWS services are set up appropriately. For example, with EBS volumes, you can use the AWS Config rule ENCRYPTED_VOLUMES to validate that attached EBS volumes are encrypted.

Key Tags

A CMK can have a tag applied to it for a variety of purposes. The most common use is to correlate a specific CMK back to a business category (such as a cost center, application name, or owner). The tags can then be used to verify that the correct CMK is being used for a given action. For example, in CloudTrail logs, for a given KMS action, you can verify that the CMK being used belongs to the same business category as the resource that it's being used on.
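The monitoring described under CMK Auditing reduces to a filter over CloudTrail records. A sketch (with simplified record shapes; real CloudTrail events carry more fields) might look like:

```python
# Illustrative monitor over CloudTrail records: surface the sensitive
# key-management actions named above so they can be alerted on.
# Record shapes are simplified relative to real CloudTrail events.

SENSITIVE_KMS_ACTIONS = {
    "ScheduleKeyDeletion", "PutKeyPolicy", "DeleteAlias",
    "DisableKey", "DeleteImportedKeyMaterial",
}

def sensitive_kms_events(records):
    """Return (action, caller) pairs for sensitive KMS calls."""
    return [
        (r["eventName"], r["userIdentity"])
        for r in records
        if r.get("eventSource") == "kms.amazonaws.com"
        and r.get("eventName") in SENSITIVE_KMS_ACTIONS
    ]

records = [
    {"eventSource": "kms.amazonaws.com", "eventName": "Decrypt",
     "userIdentity": "role/AppRole"},
    {"eventSource": "kms.amazonaws.com", "eventName": "ScheduleKeyDeletion",
     "userIdentity": "user/Admin"},
    {"eventSource": "s3.amazonaws.com", "eventName": "PutObject",
     "userIdentity": "role/AppRole"},
]
print(sensitive_kms_events(records))  # [('ScheduleKeyDeletion', 'user/Admin')]
```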
Previously, this might have required a lookup within a resource catalog, but now this external lookup is not required, because of tagging within AWS KMS as well as many of the other AWS services.

Infrastructure Security

The Infrastructure Security capability provides you with best practices on how to configure AWS KMS to ensure that you have an agile implementation that can scale with your business while protecting your sensitive information.

Topics
• Customer Master Keys
• Using AWS KMS at Scale

Customer Master Keys

Within AWS KMS, your key hierarchy starts with a CMK. A CMK can be used to directly encrypt data blocks up to 4 KB, or it can be used to secure data keys, which protect underlying data of any size.

AWS-managed and Customer-managed CMKs

CMKs can be broken down into two general types: AWS-managed and customer-managed. An AWS-managed CMK is created when you choose to enable server-side encryption of an AWS resource under the AWS-managed CMK for that service for the first time (e.g., SSE-KMS). The AWS-managed CMK is unique to your AWS account and the Region in which it's used. An AWS-managed CMK can only be used to protect resources within the specific AWS service for which it's created, and it does not provide the level of granular control that a customer-managed CMK provides. For more control, a best practice is to use a customer-managed CMK in all supported AWS services and in your applications. A customer-managed CMK is created at your request and should be configured based upon your explicit use case.

The following chart summarizes the key differences and similarities between AWS-managed CMKs and customer-managed CMKs.

                          AWS-managed CMK                 Customer-managed CMK
Creation                  AWS generated, on               Customer generated
                          customer's behalf
Rotation                  Once every three years,         Once a year automatically
                          automatically                   through opt-in, or
                                                          on-demand manually
Deletion                  Can't be deleted                Can be deleted
Scope of use              Limited to a specific           Controlled via KMS/IAM
                          AWS service                     policy
Key Access Policy         AWS managed                     Customer managed
User Access Management    IAM policy                      IAM policy

For customer-managed CMKs, you have two options for creating the underlying key material. When you choose to create a CMK using AWS KMS, you can let KMS create the cryptographic material for you, or you can choose to import your own key material. Both of these options provide you with the same level of control and auditing for the use of the CMK within your environment. The ability to import your own cryptographic material allows you to do the following:

• Prove that you generated the key material using your approved source that meets your randomness requirements.
• Use key material from your own infrastructure with AWS services, and use AWS KMS to manage the lifecycle of that key material within AWS.
• Gain the ability to set an expiration time for the key material in AWS and manually delete it, but also make it available again in the future.
• Own the original copy of the key material, and keep it outside of AWS for additional durability and disaster recovery during the complete lifecycle of the key material.

The decision to use imported key material or KMS-generated key material would depend on your organization's policies and compliance requirements.

Key Creation and Management

Since AWS makes creating and managing keys easy through the use of AWS KMS, we recommend that you have a plan for how to use the service to best control the blast radius around individual keys. Previously, you may have used the same key across different geographic regions, environments, or even applications. With AWS KMS, you should define data classification levels and have at least one CMK per level. For example, you could define a CMK for data classified as "Confidential," and so on. This ensures that authorized users
only have permissions for the key material that they require to complete their job.

You should also decide how you want to manage usage of AWS KMS. Creating KMS keys within each account that requires the ability to encrypt and decrypt sensitive data works best for most customers, but another option is to share the CMKs from a few centralized accounts. Maintaining the CMKs in the same account as the majority of the infrastructure using them helps users provision and run AWS services that use those keys. AWS services don't allow for cross-account searching unless the principal doing the searching has explicit List* permissions on resources owned by the external account, and this can only be accomplished via the CLI or SDK, not through service console-based searches. Additionally, by keeping the keys in the local accounts, it might be easier to delegate permissions to individuals who know the IAM principals that require access to the specific CMKs. If you were sharing the keys via a centralized model, the AWS KMS administrators would need to know the full Amazon Resource Name (ARN) for all users of the CMKs to ensure least privilege. Otherwise, the administrators might provide overly permissive permissions on the keys.

Your organization should also consider the frequency of rotation for CMKs. Many organizations rotate CMKs yearly. For customer-managed CMKs with KMS-generated key material, this is easy to enforce: you simply have to opt in to a yearly rotation schedule for your CMK. When the CMK is due for rotation, a new backing key is created and marked as the active key for all new requests to protect information. The old backing key remains available for use to decrypt any existing ciphertext values that were encrypted using this key. To rotate CMKs more frequently, you can also call UpdateAlias to point an alias to a new CMK, as described in the next section. The UpdateAlias method works for both customer-managed CMKs and CMKs with imported key material. AWS has found that the frequency of key rotation is highly dependent upon laws, regulations, and corporate policies.
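The one-CMK-per-classification practice lends itself to a small lookup in application code. A hypothetical helper (the alias names are illustrative, not prescribed by AWS) that resolves the key to use from a data classification, failing closed on unknown levels:

```python
# Hypothetical classification-to-CMK mapping for the one-key-per-level
# practice; the alias names are examples, not AWS-defined values.

CMK_BY_CLASSIFICATION = {
    "confidential": "alias/confidential-data",
    "internal": "alias/internal-data",
    "public": "alias/public-data",
}

def cmk_alias_for(classification):
    """Resolve the CMK alias for a data classification; fail closed."""
    try:
        return CMK_BY_CLASSIFICATION[classification.lower()]
    except KeyError:
        raise ValueError(f"no CMK defined for classification {classification!r}")

print(cmk_alias_for("Confidential"))  # alias/confidential-data
```

Failing closed (raising rather than falling back to a default key) keeps data from silently landing under the wrong key when a new classification level is introduced.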
Key Aliases

A key alias allows you to abstract key users away from the underlying Region-specific key ID and key ARN. Authorized individuals can create a key alias that allows their applications to use a specific CMK independent of the Region or rotation schedule. Thus, multi-Region applications can use the same key alias to refer to KMS keys in multiple Regions, without worrying about the key ID or the key ARN. You can also trigger manual rotation of a CMK by pointing a given key alias to a different CMK. Similar to how Domain Name Services (DNS) allows the abstraction of IP addresses, a key alias does the same for the key ID. When you are creating a key alias, we recommend that you determine a naming scheme that can be applied across your accounts, such as alias/.

It should be noted that CMK aliases can't be used within policies. This is because the mapping of aliases to keys can be manipulated outside the policy, which would allow for an escalation of privilege. Therefore, key IDs must be used in KMS key policies, IAM policies, and KMS grants.

Using AWS KMS at Scale

As noted earlier, a best practice is to use at least one CMK for a particular class of data. This will help you define policies that scope down permissions to the key, and hence the data, to authorized users. You may choose to further distribute your data across multiple CMKs to provide stronger security controls within a given data classification.

AWS recommends using envelope encryption to scale your KMS implementation. Envelope encryption is the practice of encrypting plaintext data with a unique data key, and then encrypting the data key with a key encryption key (KEK). Within AWS KMS, the CMK is the KEK. You can encrypt your message with the data key and then encrypt the data key with the CMK. Then the encrypted data key can be stored along with the encrypted message.
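The envelope structure can be illustrated with a deliberately toy sketch. The XOR keystream below is NOT real cryptography and must never protect real data; it exists only to show the data-key / key-encryption-key relationship. In practice, use AWS KMS with the AWS Encryption SDK:

```python
# TOY illustration of envelope encryption. The XOR "cipher" here is insecure
# by design; it only demonstrates the structure: a unique data key encrypts
# the message, and the KEK (the CMK's role in AWS KMS) encrypts the data key.
import hashlib
import os

def _xor(data, key):
    # Derive a keystream from the key and XOR it with the data (involutive).
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def envelope_encrypt(plaintext, kek):
    data_key = os.urandom(32)               # unique per-message data key
    ciphertext = _xor(plaintext, data_key)  # encrypt message with data key
    wrapped_key = _xor(data_key, kek)       # encrypt data key with the KEK
    return wrapped_key, ciphertext          # store the pair together

def envelope_decrypt(wrapped_key, ciphertext, kek):
    data_key = _xor(wrapped_key, kek)       # unwrap the data key first
    return _xor(ciphertext, data_key)       # then decrypt the message

kek = os.urandom(32)
wrapped, ct = envelope_encrypt(b"sensitive message", kek)
print(envelope_decrypt(wrapped, ct, kek))   # b'sensitive message'
```

Because only the small wrapped key ever needs the KEK, moving data between Regions means re-wrapping 32 bytes per object rather than re-encrypting the payload.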
encrypted message You can cache the plaintext version of the data key for repeated use reducing the number of requests to AWS KMS Additionally envelope encryption can help to design your application for disaster recovery You can move your encrypted data asis between Regions and only have to reencrypt the data keys with the Regionspecific CMKs The AWS Cryptographic team has released an AWS Encryption SDK that makes it easier to use AWS KMS in an efficient manner This SDK transparently implements the lowlevel details for using AWS KMS It also provides developers options for protecting their data keys after use to ensure that the performance of their application isn’t significantly affected by encrypting your sensitive data 11ArchivedAWS Key Management Service Best Practices AWS Whitepaper Common AWS KMS Use Cases Data Protection The Data Protection capability addresses some of the common AWS use cases for using AWS KMS within your organization to protect your sensitive information Common AWS KMS Use Cases Encrypting PCI Data Using AWS KMS Since security and quality controls in AWS KMS have been validated and certified to meet the requirements of PCI DSS Level 1 certification you can directly encrypt Primary Account Number (PAN) data with an AWS KMS CMK The use of a CMK to directly encrypt data removes some of the burden of managing encryption libraries Additionally a CMK can’t be exported from AWS KMS which alleviates the concern about the encryption key being stored in an insecure manner As all KMS requests are logged in CloudTrail use of the CMK can be audited by reviewing the CloudTrail logs It’s important to be aware of the requests per second limit when designing applications that use the CMK directly to protect Payment Card Industry (PCI) data Secret Management Using AWS KMS and Amazon S3 Although AWS KMS primarily provides key management functions you can leverage AWS KMS and Amazon S3 to build your own secret management solution Create a new Amazon s3 bucket 
to hold your secrets. Deploy a bucket policy onto the bucket to limit access to only authorized individuals and services. The secrets stored in the bucket utilize a predefined prefix per file to allow for granular control of access to the secrets. Each secret, when placed in the S3 bucket, is encrypted using a specific customer-managed KMS key. Furthermore, due to the highly sensitive nature of the information being stored within this bucket, S3 access logging or CloudTrail Data Events are enabled for audit purposes. Then, when a user or service requires access to the secret, they assume an identity within AWS that has permissions to use both the object in the S3 bucket and the KMS key. An application that runs in an EC2 instance uses an instance role that has the necessary permissions.

Encrypting Lambda Environment Variables

By default, when you create or update Lambda functions that use environment variables, those variables are encrypted using AWS KMS. When your Lambda function is invoked, those values are decrypted and made available to the Lambda code. You have the option to use the default KMS key for Lambda, or to specify a specific CMK of your choice. To further protect your environment variables, you should select the "Enable encryption helpers" checkbox. By selecting this option, your environment variables will also be individually encrypted using a CMK of your choice, and your Lambda function will then have to specifically decrypt each encrypted environment variable that is needed.

Encrypting Data within Systems Manager Parameter Store

Amazon EC2 Systems Manager is a collection of capabilities that can help you automate management tasks at scale. To efficiently store and reference sensitive configuration data such as passwords, license keys, and certificates, the Parameter Store lets you protect sensitive information within secure string parameters. A secure
string is any sensitive data that needs to be stored and referenced in a secure manner. If you have data that you don't want users to alter or reference in clear text, such as domain-join passwords or license keys, then specify those values using the Secure String data type. You should use secure strings in the following circumstances:

• You want to use data/parameters across AWS services without exposing the values as clear text in commands, functions, agent logs, or CloudTrail logs.
• You want to control who has access to sensitive data.
• You want to be able to audit when sensitive data is accessed, using CloudTrail.
• You want AWS-level encryption for your sensitive data, and you want to bring your own encryption keys to manage access.

By selecting this option when you create your parameter, Systems Manager encrypts that value when it's passed into a command, and decrypts it when processing it on the managed instance. The encryption is handled by AWS KMS, using either a default KMS key for Systems Manager, or a specific CMK that you specify per parameter.

Enforcing Data at Rest Encryption within AWS Services

Your organization might require the encryption of all data that meets a specific classification. Depending on the specific service, you can enforce data encryption policies through preventative or detective controls. For some services, like Amazon S3, a policy can prevent storing unencrypted data. For other services, the most efficient mechanism is to monitor the creation of storage resources and check whether encryption is enabled appropriately. In the event that unencrypted storage is created, you have a number of possible responses, ranging from deleting the storage resource to notifying an administrator.

Data at Rest Encryption with Amazon S3

Using Amazon S3, it's possible to deploy an S3 bucket policy that ensures that all objects being uploaded are encrypted. The policy looks like the following:

{
  "Version": "2012-10-17",
  "Id": "PutObjPolicy",
  "Statement": [{
    "Sid": "DenyUnEncryptedObjectUploads",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::YourBucket/*",
    "Condition": {
      "StringNotEquals": {
        "s3:x-amz-server-side-encryption": "aws:kms"
      }
    }
  }]
}

Note that this doesn't cause objects already in the bucket to be encrypted. This policy denies attempts to add new objects to the bucket unless those objects are encrypted. Objects already in the bucket before this policy is applied will remain either encrypted or unencrypted, based on how they were first uploaded.

Data at Rest Encryption with Amazon EBS

You can create Amazon Machine Images (AMIs) that make use of encrypted EBS boot volumes, and use the AMIs to launch EC2 instances. The stored data is encrypted, as is the data transfer path between the EBS volume and the EC2 instance. The data is decrypted on the hypervisor of that instance on an as-needed basis, then stored only in memory. This feature aids your security, compliance, and auditing efforts by allowing you to verify that all of the data you store on the EBS volume is encrypted, whether it's stored on a boot volume or on a data volume. Further, because this feature makes use of AWS KMS, you can track and audit all uses of the encryption keys.

There are two methods to ensure that EBS volumes are always encrypted. You can verify through an IAM policy that the encryption flag in the CreateVolume context is set to "true"; if the flag is not "true," the IAM policy can prevent an individual from creating the EBS volume. The other method is to monitor the creation of EBS volumes: if a new EBS volume is created, CloudTrail will log an event, and a Lambda function can be triggered by the CloudTrail event to check whether the EBS volume is encrypted, and which KMS key was used for the encryption. An AWS Lambda function can respond to the creation of an unencrypted volume in several different ways.
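The dispatch logic of such a responder is straightforward; the following sketch uses hypothetical action names and a local policy switch (a real AWS Lambda function would invoke EC2 and SNS APIs through an SDK such as boto3):

```python
# Sketch of the decision logic for a responder to unencrypted-EBS-volume
# events. Action names and the "policy" values are hypothetical; the real
# function would call EC2/SNS APIs via an SDK rather than return strings.

def respond_to_volume(volume, policy="notify"):
    """Pick a remediation for a volume-creation event."""
    if volume.get("encrypted"):
        return "ok"                            # nothing to do
    if policy == "reencrypt":
        return "copy-encrypted-and-swap"       # encrypted copy, swap, delete original
    if policy == "terminate":
        return "terminate-instance"            # some customers delete the instance
    if policy == "quarantine":
        return "apply-restrictive-security-group"
    return "notify-administrators"             # default: alert via e.g. Amazon SNS

print(respond_to_volume({"encrypted": True}))                 # ok
print(respond_to_volume({"encrypted": False}, "quarantine"))  # apply-restrictive-security-group
print(respond_to_volume({"encrypted": False}))                # notify-administrators
```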
The function could call the CopyImage API with the encrypted option to create a new encrypted version of the EBS volume, then attach it to the instance and delete the old version. Some customers choose to automatically delete the EC2 instance that has the unencrypted volume. Others choose to automatically quarantine the instance by applying security groups that prevent most inbound connections. It's also easy to write a Lambda function that posts to an Amazon Simple Notification Service (SNS) topic that alerts administrators to do a manual investigation and intervention. Note that most enforcement responses can, and should, be accomplished programmatically, without human intervention.

Data at Rest Encryption with Amazon RDS

Amazon Relational Database Service (RDS) builds on Amazon EBS encryption to provide full disk encryption for database volumes. When you create an encrypted database instance with Amazon RDS, Amazon RDS creates an encrypted EBS volume on your behalf to store the database. Data stored at rest on the volume, database snapshots, automated backups, and read replicas are all encrypted under the KMS CMK that you specified when you created the database instance.

Similar to Amazon EBS, you can set up an AWS Lambda function to monitor for the creation of new RDS instances via the CreateDBInstance API call in CloudTrail. Within the CreateDBInstance event, ensure that the KmsKeyId parameter is set to the expected CMK.

Incident Response

The Incident Response capability focuses on your organization's capability to remediate incidents that may involve AWS KMS.

Security Automation of AWS KMS

During your monitoring of your CMKs, if a specific action is detected, an AWS Lambda function could be configured to disable the CMK, or to perform any other incident response actions dictated by your local security policies. Without human intervention, a potential exposure could be cut off in minutes by leveraging the automation tools inside AWS.
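Such an automated response amounts to mapping a detected event to an action under a local policy. A sketch (the event shape is simplified, the key ID is a placeholder, and a real responder would call kms:DisableKey through an SDK):

```python
# Sketch: map a detected CloudTrail event to an incident-response action
# under a local security policy. The real responder would call
# kms:DisableKey via an SDK; here the action is just returned.

SUSPICIOUS_ACTIONS = {"PutKeyPolicy", "ScheduleKeyDeletion"}

def response_for(event, authorized_principals):
    """Decide whether a detected event warrants disabling the CMK."""
    if event["eventName"] not in SUSPICIOUS_ACTIONS:
        return None
    if event["userIdentity"] in authorized_principals:
        return None                         # expected administrative activity
    return ("DisableKey", event["keyId"])   # cut off the potential exposure

event = {"eventName": "PutKeyPolicy", "userIdentity": "user/Intruder",
         "keyId": "example-key-id"}
print(response_for(event, {"role/KeyAdmin"}))  # ('DisableKey', 'example-key-id')
```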
Deleting and Disabling CMKs

While deleting CMKs is possible, it has significant ramifications for an organization. You should first consider whether it's sufficient to set the CMK state to disabled on keys that you no longer intend to use. This will prevent all future use of the CMK. The CMK is still available, however, and can be re-enabled in the future if it's needed. Disabled keys are still stored by AWS KMS, so they continue to incur recurring storage charges. You should strongly consider disabling keys instead of deleting them until you are confident in their encrypted data management.

Deleting a key must be very carefully thought out. Data can't be decrypted if the corresponding CMK has been deleted. Moreover, once a CMK is deleted, it's gone forever: AWS has no means to recover a deleted CMK once it's finally deleted. Just as with other critical operations in AWS, you should apply a policy that requires MFA for CMK deletion.

To help ensure that a CMK is not deleted by mistake, KMS enforces a minimum waiting period of seven days before the CMK is actually deleted. You can choose to increase this waiting period, up to a maximum value of 30 days. During the waiting period, the CMK is still stored in KMS, in a "Pending Deletion" state, and it can't be used for encrypt or decrypt operations. Any attempt to use a key that is in the "Pending Deletion" state for encryption or decryption will be logged to CloudTrail. You can set an Amazon CloudWatch alarm for these events in your CloudTrail logs, which gives you a chance to cancel the deletion process if needed. Until the waiting period has expired, the CMK can be recovered from the "Pending Deletion" state and restored to either the disabled or enabled state.

Finally, it should also be noted that if you are using a CMK with imported key material, you can delete the imported key material immediately. This is different from deleting a CMK directly in several ways.
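The lifecycle states involved in these two flows (disable, pending deletion with its 7-to-30-day window, and pending import after removing imported material) can be captured in a toy state machine; this is a sketch for reasoning about the transitions, not the KMS API:

```python
# Toy model of the CMK lifecycle states discussed above: Enabled, Disabled,
# PendingDeletion (with the KMS-enforced 7-30 day waiting period), and the
# cancel/restore path. A sketch for reasoning, not the KMS API.

class ToyCMK:
    def __init__(self):
        self.state = "Enabled"

    def disable(self):
        self.state = "Disabled"       # reversible; key material is retained

    def enable(self):
        if self.state == "PendingDeletion":
            raise RuntimeError("cancel the deletion first")
        self.state = "Enabled"

    def schedule_deletion(self, days=30):
        if not 7 <= days <= 30:       # KMS enforces these waiting-period bounds
            raise ValueError("waiting period must be between 7 and 30 days")
        self.state = "PendingDeletion"
        return days

    def cancel_deletion(self):
        # During the waiting period, the key can be recovered to Disabled.
        if self.state == "PendingDeletion":
            self.state = "Disabled"

key = ToyCMK()
key.schedule_deletion(7)
key.cancel_deletion()
print(key.state)  # Disabled
```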
When the key material is deleted, the CMK is immediately unusable; there is no waiting period. To enable use of the CMK again, you must reimport the same key material. Deleting key material affects the CMK right away, but data encryption keys that are actively in use by AWS services are not immediately affected. For example, say a CMK using your imported material is used to encrypt an object placed in an S3 bucket using SSE-KMS. Right before you upload the object into the S3 bucket, you place the imported material into your CMK. After the object is uploaded, you can delete your key material from that CMK. The object continues to sit in the S3 bucket in an encrypted state, but no one can access it until the same key material is reimported into the CMK. This flow requires precise automation for importing and deleting key material from a CMK, but it can provide an additional level of control within an environment.

Conclusion

AWS KMS provides your organization with a fully managed service to centrally control your encryption keys. Its native integration with other AWS services makes it easier for AWS KMS to encrypt the data that you store and process. By taking the time to properly architect and implement AWS KMS, you can ensure that your encryption keys are secure and available for applications and their authorized users. Additionally, you can show your auditors detailed logs associated with your key usage.

Contributors

The following individuals and organizations contributed to this document:

• Matthew Bretan, Senior Security Consultant, AWS Professional Services
• Sree Pisharody, Senior Product Manager – Technical, AWS Cryptography
• Ken Beer, Senior Manager, Software Development, AWS Cryptography
• Brian Wagner,
Security Consultant, AWS Professional Services
• Eugene Yu, Managing Consultant, AWS Professional Services
• Michael StOnge, Global Cloud Security Architect, AWS Professional Services
• Balaji Palanisamy, Senior Consultant, AWS Professional Services
• Jonathan Rault, Senior Consultant, AWS Professional Services
• Reef Dsouza, Consultant, AWS Professional Services
• Paco Hope, Principal Consultant, AWS Professional Services

Document Revisions

To be notified about updates to this whitepaper, subscribe to the RSS feed.

Initial publication: First published April 1, 2017.

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

AWS Key Management Service Cryptographic Details

August 2018

This paper has been archived. For the latest technical content about AWS KMS Cryptographic Details, see https://docs.aws.amazon.com/kms/latest/cryptographic-details/intro.html

© 2018 Amazon Web Services, Inc. or its affiliates. All
rights reserved.

Notices

This document is provided for informational purposes only. It represents the current AWS product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document. Any use of AWS products or services is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Abstract
Introduction
Design Goals
Background
  Cryptographic Primitives
  Basic Concepts
  Customer's Key Hierarchy
Use Cases
  Amazon EBS Volume Encryption
  Client-side Encryption
Customer Master Keys
  Imported Master Keys
  Enable and Disable Key
  Key Deletion
  Rotate Customer Master Key
Customer Data Operations
  Generating Data Keys
  Encrypt
  Decrypt
  Re-Encrypting an Encrypted Object
Domains and the Domain State
  Domain Keys
  Exported Domain Tokens
  Managing Domain State
Internal Communication Security
  HSM Security Boundary
  Quorum-Signed Commands
  Authenticated Sessions
Durability Protection
References
Appendix: Abbreviations and Keys
Contributors
Document Revisions

Abstract

AWS Key Management Service (AWS KMS) provides cryptographic keys and operations secured by FIPS 140-2 [1] certified hardware security modules (HSMs), scaled for the cloud. AWS KMS keys and functionality are used by multiple AWS Cloud services
and you can use them to protect data in your applications. This whitepaper provides details on the cryptographic operations that are executed within AWS when you use AWS KMS.

Introduction

AWS KMS provides a web interface to generate and manage cryptographic keys and operates as a cryptographic service provider for protecting data. AWS KMS offers traditional key management services integrated with AWS services to provide a consistent view of customers' keys across AWS, with centralized management and auditing. This whitepaper provides a detailed description of the cryptographic operations of AWS KMS to assist you in evaluating the features offered by the service.

AWS KMS includes a web interface through the AWS Management Console, a command line interface, and RESTful API operations to request cryptographic operations of a distributed fleet of FIPS 140-2 validated hardware security modules (HSMs) [1]. The AWS Key Management Service HSM is a multichip standalone hardware cryptographic appliance designed to provide dedicated cryptographic functions to meet the security and scalability requirements of AWS KMS. You can establish your own HSM-based cryptographic hierarchy under keys that you manage as customer master keys (CMKs). These keys are made available only on the HSMs, and only for the necessary cycles needed to process your cryptographic request. You can create multiple CMKs, each represented by its key ID. You can define access controls on who can manage and/or use CMKs by creating a policy that is attached to the key. This allows you to define application-specific uses for your keys for each API operation.

Figure 1: AWS KMS architecture

AWS KMS is a tiered service consisting of web-facing KMS hosts and a tier of HSMs. The grouping of these tiered hosts forms the AWS KMS stack. All requests to AWS KMS must be made over the Transport Layer Security protocol (TLS) and terminate on an AWS KMS host. AWS
KMS hosts only allow TLS with a ciphersuite that provides perfect forward secrecy [2]. The AWS KMS hosts use protocols and procedures defined within this whitepaper to fulfill those requests through the HSMs. AWS KMS authenticates and authorizes your requests using the same credential and policy mechanisms that are available for all other AWS API operations, including AWS Identity and Access Management (IAM).

Design Goals

AWS KMS is designed to meet the following requirements.

Durability: The durability of cryptographic keys is designed to equal that of the highest-durability services in AWS. A single cryptographic key can encrypt large volumes of customer data accumulated over a long time period. However, data encrypted under a key becomes irretrievable if the key is lost.

Quorum-based access: Multiple Amazon employees with role-specific access are required to perform administrative actions on the HSMs. There is no mechanism to export plaintext CMKs. The confidentiality of your cryptographic keys is crucial.

Access control: Use of keys is protected by access control policies defined and managed by you.

Low latency and high throughput: AWS KMS provides cryptographic operations at latency and throughput levels suitable for use by other services in AWS.

Regional independence: AWS provides regional independence for customer data. Key usage is isolated within an AWS Region.

Secure source of random numbers: Because strong cryptography depends on truly unpredictable random number generation, AWS provides a high-quality and validated source of random numbers.

Audit: AWS records the use of cryptographic keys in AWS CloudTrail logs. You can use AWS CloudTrail logs to inspect use of your cryptographic keys, including use of keys by AWS services on your behalf.

To achieve these goals, the AWS KMS system includes a set of KMS operators and service host operators (collectively, "operators") that
administer "domains." A domain is a regionally defined set of AWS KMS servers, HSMs, and operators. Each KMS operator has a hardware token that contains a private and public key pair used to authenticate its actions. The HSMs have an additional private and public key pair to establish encryption keys that protect HSM state synchronization. This whitepaper illustrates how AWS KMS protects your keys and other data that you want to encrypt. Throughout this document, encryption keys or data you want to encrypt are referred to as "secrets" or "secret material."

Background

This section describes the cryptographic primitives and where they are used. In addition, it introduces the basic elements of AWS KMS.

Cryptographic Primitives

AWS KMS uses configurable cryptographic algorithms so that the system can quickly migrate from one approved algorithm or mode to another. The initial default set of cryptographic algorithms has been selected from Federal Information Processing Standard (FIPS-approved) algorithms for their security properties and performance.

Entropy and Random Number Generation

AWS KMS key generation is performed on the KMS HSMs. The HSMs implement a hybrid random number generator that uses the NIST SP 800-90A Deterministic Random Bit Generator (DRBG) CTR_DRBG using AES-256 [3]. It is seeded with a nondeterministic random bit generator with 384 bits of entropy and updated with additional entropy to provide prediction resistance on every call for cryptographic material.

Encryption

All symmetric key encrypt commands used within HSMs use the Advanced Encryption Standard (AES) [4] in Galois Counter Mode (GCM) [5] using 256-bit keys. The analogous calls to decrypt use the inverse function. AES-GCM is an authenticated encryption scheme. In addition to encrypting plaintext to produce ciphertext, it computes an authentication tag over the ciphertext and any additional data
over which authentication is required (additionally authenticated data, or AAD). The authentication tag helps ensure that the data is from the purported source and that the ciphertext and AAD have not been modified. Frequently, AWS omits the inclusion of the AAD in our descriptions, especially when referring to the encryption of data keys. It is implied by surrounding text in these cases that the structure to be encrypted is partitioned between the plaintext to be encrypted and the cleartext AAD to be protected.

AWS KMS provides an option for you to import CMK key material instead of relying on the service to generate the key. This imported key material can be encrypted using RSAES-PKCS1-v1_5 or RSAES-OAEP [6] to protect the key during transport to the KMS HSM. The RSA key pairs are generated on KMS HSMs. The imported key material is decrypted on a KMS HSM and re-encrypted under AES-GCM before being stored by the service.

Key Derivation Functions

A key derivation function is used to derive additional keys from an initial secret or key. AWS KMS uses a key derivation function (KDF) to derive per-call keys for every encryption under a CMK. All KDF operations use the KDF in counter mode [7] using HMAC [8] with SHA-256 [9]. The 256-bit derived key is used with AES-GCM to encrypt or decrypt customer data and keys.

Digital Signatures

All service entities have an elliptic curve digital signature algorithm (ECDSA) key pair. They perform ECDSA as defined in Use of Elliptic Curve Cryptography (ECC) Algorithms in Cryptographic Message Syntax (CMS) [10] and X9.62-2005: Public Key Cryptography for the Financial Services Industry: The Elliptic Curve Digital Signature Algorithm (ECDSA) [11]. The entities use the secure hash algorithm defined in Federal Information Processing Standards Publication FIPS PUB 180-4 [9], known as SHA-384. The keys are generated on the curve secp384r1 (NIST P-384) [12].
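As an illustration of the counter-mode KDF construction described above, the following sketch derives a 256-bit key with HMAC-SHA256. The exact input framing AWS KMS feeds into its KDF is not public, so the layout below is an assumption based on the NIST SP 800-108 counter-mode example, not KMS internals:

```python
import hashlib
import hmac

def derive_key(key: bytes, label: bytes, context: bytes, length: int = 32) -> bytes:
    """Counter-mode KDF (NIST SP 800-108 style) using HMAC-SHA256.

    The framing (counter || label || 0x00 || context || output length in
    bits) follows the SP 800-108 illustration, not AWS KMS internals.
    """
    output = b""
    length_bits = (length * 8).to_bytes(4, "big")
    counter = 1
    while len(output) < length:
        msg = counter.to_bytes(4, "big") + label + b"\x00" + context + length_bits
        output += hmac.new(key, msg, hashlib.sha256).digest()
        counter += 1
    return output[:length]
```

Each call with a fresh context value (for example, a per-request nonce) yields an independent 256-bit key, which is the property that lets a single HBK produce per-call AES-GCM keys.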
Digital signatures are used to authenticate commands and communications between AWS KMS entities. A key pair is denoted as (d, Q), the signing operation as Sig = Sign(d, msg), and the verify operation as Verify(Q, msg, Sig). The verify operation returns an indication of success or failure. It is frequently convenient to represent an entity by its public key, Q. In these cases, the identifying information, such as an identifier or a role, is assumed to accompany the public key.

Key Establishment

AWS KMS uses two different key establishment methods. The first is defined as C(1, 2, ECC DH) in Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography (Revision 2) [13]. This scheme has an initiator with a static signing key. The initiator generates and signs an ephemeral elliptic curve Diffie-Hellman (ECDH) key intended for a recipient with a static ECDH agreement key. This method uses one ephemeral key and two static keys with ECDH; that is the derivation of the label C(1, 2, ECC DH). This method is sometimes called one-pass ECDH.

The second key establishment method is C(2, 2, ECC DH) [13]. In this scheme, both parties have a static signing key, and they generate, sign, and exchange an ephemeral ECDH key. This method uses two static keys and two ephemeral keys with ECDH; that is the derivation of the label C(2, 2, ECC DH). This method is sometimes called ECDH ephemeral, or ECDHE. All ECDH keys are generated on the curve secp384r1 (NIST P-384) [12].

Envelope Encryption

A basic construction used within many cryptographic systems is envelope encryption. Envelope encryption uses two or more cryptographic keys to secure a message. Typically, one key is derived from a longer-term static key, k, and another key is a per-message key, msgKey, which is generated to encrypt the message. The envelope is formed by encrypting the message, ciphertext = Encrypt(msgKey, message); encrypting the message key with the long-term static key, encKey = Encrypt(k, msgKey); and packaging the two values (encKey, ciphertext) into a single structure, or envelope-encrypted message.
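The construction above can be sketched in a few lines. The toy stream cipher below (a SHA-256-derived XOR keystream) exists only to keep the example dependency-free; it is not secure and merely stands in for the AES-256-GCM encryption AWS KMS actually uses:

```python
import hashlib
import secrets

def _toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream. Symmetric (the same call
    decrypts) and for structural illustration only; NOT real encryption."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def envelope_encrypt(k: bytes, message: bytes):
    # Generate a fresh per-message key, encrypt the message under it,
    # then wrap the message key under the long-term static key k.
    msg_key = secrets.token_bytes(32)
    ciphertext = _toy_encrypt(msg_key, message)
    enc_key = _toy_encrypt(k, msg_key)
    return enc_key, ciphertext  # the "envelope": (encKey, ciphertext)

def envelope_open(k: bytes, enc_key: bytes, ciphertext: bytes) -> bytes:
    # Unwrap the message key first, then decrypt the message.
    msg_key = _toy_encrypt(k, enc_key)
    return _toy_encrypt(msg_key, ciphertext)
```

The design point is that only the small msgKey is ever wrapped under k; the bulk message is encrypted once under the per-message key.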
The recipient with access to k can open the enveloped message by first decrypting the encrypted key and then decrypting the message. AWS KMS provides the ability to manage these longer-term static keys and automate the process of envelope encryption of your data.

AWS KMS uses envelope encryption internally to secure confidential material between service endpoints. In addition to the encryption capabilities provided within the KMS service, the AWS Encryption SDK [14] provides client-side envelope encryption libraries. You can use these libraries to protect your data and the encryption keys used to encrypt that data.

Basic Concepts

This section introduces some basic AWS KMS concepts that are elaborated on throughout this whitepaper.

Customer master key (CMK): A logical key that represents the top of your key hierarchy. A CMK is given an Amazon Resource Name (ARN) that includes a unique key identifier, or key ID.

Alias: A user-friendly name, or alias, can be associated with a CMK. The alias can be used interchangeably with the key ID in many of the AWS KMS API operations.

Permissions: A policy attached to a CMK that defines permissions on the key. The default policy allows any principals that you define, as well as allowing the AWS account root user to add IAM policies that reference the key.

Grants: Grants are intended to allow delegated use of CMKs when the duration of usage is not known at the outset. One use of grants is to define scoped-down permissions for an AWS service: the service uses your key to do asynchronous work on your behalf on encrypted data, in the absence of a direct signed API call from you.

Data keys: Cryptographic keys generated on HSMs under a CMK. AWS KMS allows authorized entities to obtain data keys protected by a
CMK. They can be returned both as plaintext (unencrypted) data keys and as encrypted data keys.

Ciphertexts: Encrypted output of AWS KMS is referred to as customer ciphertext, or just ciphertext when there is no confusion. Ciphertext contains encrypted data with additional information that identifies the CMK to use in the decryption process.

Encryption context: A key-value pair map of additional information associated with AWS KMS-protected information. AWS KMS uses authenticated encryption to protect data keys; the encryption context is incorporated into the AAD of the authenticated encryption in AWS KMS-encrypted ciphertexts. This context information is optional and is not returned when requesting a key (or an encryption operation). But if used, this context value is required to successfully complete a decryption operation. An intended use of the encryption context is to provide additional authenticated information that can be used to enforce policies and be included in the AWS CloudTrail logs. For example, a key-value pair of {"key name":"satellite uplink key"} could be used to name the data key. Subsequently, whenever the key is used, an AWS CloudTrail entry is made that includes "key name": "satellite uplink key". This additional information can provide useful context to understand why a given master key was used.

Customer's Key Hierarchy

Your key hierarchy starts with a top-level logical key, a CMK. A CMK represents a container for top-level key material and is uniquely defined within the AWS service namespace with an ARN. The ARN includes a uniquely generated key identifier, a CMK key ID. A CMK is created based on a user-initiated request through AWS KMS. Upon reception, AWS KMS requests the creation of an initial HSM backing key (HBK) to be placed into the CMK container. (All such HSM-resident-only keys are denoted in red in the figures.) The HBK is generated on an HSM in the domain and is designed never
to be exported from the HSM in plaintext. Instead, the HBK is exported encrypted under HSM-managed domain keys. These exported HBKs are referred to as exported key tokens (EKTs). The EKT is exported to highly durable, low-latency storage. You receive an ARN to the logical CMK. This represents the top of a key hierarchy, or cryptographic context, for you. You can create multiple CMKs within your account and set policies on your CMKs like any other AWS named resource.

Within the hierarchy of a specific CMK, the HBK can be thought of as a version of the CMK. When you want to rotate the CMK through AWS KMS, a new HBK is created and associated with the CMK as the active HBK for the CMK. The older HBKs are preserved and can be used to decrypt and verify previously protected data, but only the active cryptographic key can be used to protect new information.

Figure 2: CMK hierarchy

You can make requests through AWS KMS to use your CMKs to directly protect information or to request additional HSM-generated keys protected under your CMK. These keys are called customer data keys, or CDKs. CDKs can be returned encrypted as ciphertext (CT), in plaintext, or both. All objects encrypted under a CMK (either customer-supplied data or HSM-generated keys) can be decrypted only on an HSM, via a call through AWS KMS. The returned ciphertext, or the decrypted payload, is never stored within AWS KMS. The information is returned to you over your TLS connection to AWS KMS. This also applies to calls made by AWS services on your behalf.

We summarize the key hierarchy and the specific key properties in the following list (Key: Description; Lifecycle).

• Domain key: A 256-bit AES-GCM key, present only in the memory of an HSM, used to wrap versions of the CMKs (the HSM backing keys). Rotated daily.¹

• HSM backing key: A 256-bit symmetric key, present only in the memory of an HSM, used to
protect customer data and keys. Stored encrypted under domain keys. Rotated yearly² (optional configuration).

• Data encryption key: A 256-bit AES-GCM key, present only in the memory of an HSM, used to encrypt customer data and keys. Derived from an HBK for each encryption. Used once per encrypt and regenerated on decrypt.

• Customer data key: A user-defined key, exported from the HSM in plaintext and ciphertext. Encrypted under an HSM backing key and returned to authorized users over a TLS channel. Rotation and use controlled by the application.

¹ AWS KMS may from time to time relax domain key rotation to at most weekly to account for domain administration and configuration tasks.
² Default service master keys created and managed by AWS KMS on your behalf are automatically rotated every 3 years.

Use Cases

This whitepaper presents two use cases. The first demonstrates how AWS KMS performs server-side encryption with CMKs on an Amazon Elastic Block Store (Amazon EBS) volume. The second is a client-side application that demonstrates how you can use envelope encryption to protect content with AWS KMS.

Amazon EBS Volume Encryption

Amazon EBS offers a volume encryption capability. Each volume is encrypted using AES-256-XTS [15]. This requires two 256-bit volume keys, which you can think of as one 512-bit volume key. The volume key is encrypted under a CMK in your account. For Amazon EBS to encrypt a volume for you, it must have access to generate a volume key (VK) under a CMK in the account. You do this by providing a grant for Amazon EBS to the CMK to create data keys and to encrypt and decrypt these volume keys. Now Amazon EBS uses AWS KMS with a CMK to generate AWS KMS-encrypted volume keys.

Figure 3: Amazon EBS volume encryption with AWS KMS keys

Encrypting data being written to an Amazon EBS volume involves five steps:

1. Amazon EBS obtains an encrypted volume key under a CMK through AWS KMS over a TLS session and stores the encrypted key
with the volume metadata.

2. When the Amazon EBS volume is mounted, the encrypted volume key is retrieved.

3. A call to AWS KMS over TLS is made to decrypt the encrypted volume key. AWS KMS identifies the CMK and makes an internal request to an HSM in the fleet to decrypt the encrypted volume key. AWS KMS then returns the volume key to the Amazon Elastic Compute Cloud (Amazon EC2) host that contains your instance, over the TLS session.

4. The volume key is used to encrypt and decrypt all data going to and from the attached Amazon EBS volume.

5. Amazon EBS retains the encrypted volume key for later use, in case the volume key in memory is no longer available.

Client-side Encryption

The AWS Encryption SDK [14] includes an API operation for performing envelope encryption using a CMK from AWS KMS. For complete recommendations and usage details, see the related documentation [14].

Client applications can use the AWS Encryption SDK to perform envelope encryption using AWS KMS:

// Instantiate the SDK
final AwsCrypto crypto = new AwsCrypto();
// Set up the KmsMasterKeyProvider backed by the default credentials
final KmsMasterKeyProvider prov = new KmsMasterKeyProvider(keyId);
// Do the encryption
final byte[] ciphertext = crypto.encryptData(prov, message);

The client application can execute the following steps:

1. A request is made under a CMK for a new data key. An encrypted data key and a plaintext version of the data key are returned.

2. Within the AWS Encryption SDK, the plaintext data key is used to encrypt the message. The plaintext data key is then deleted from memory.

3. The encrypted data key and encrypted message are combined into a single ciphertext byte array.

Figure 4: AWS Encryption SDK envelope encryption

The envelope-encrypted message can be decrypted using the decrypt functionality to obtain the
originally encrypted message:

final AwsCrypto crypto = new AwsCrypto();
final KmsMasterKeyProvider prov = new KmsMasterKeyProvider(keyId);
// Decrypt the data
final CryptoResult res = crypto.decryptData(prov, ciphertext);
// We need to check the master key to ensure that the
// assumed key was used
if (!res.getMasterKeyIds().get(0).equals(keyId)) {
    throw new IllegalStateException("Wrong key id!");
}
byte[] plaintext = res.getResult();

1. The AWS Encryption SDK parses the envelope-encrypted message to obtain the encrypted data key and make a request to AWS KMS to decrypt the data key.

2. The AWS Encryption SDK receives the plaintext data key from AWS KMS.

3. The data key is then used to decrypt the message, returning the initial plaintext.

Figure 5: AWS Encryption SDK envelope decryption

Customer Master Keys

A CMK is a logical key that may refer to one or more HBKs. It is generated as a result of a call to the CreateKey API operation. The following is the CreateKey request syntax:

{
  "Description": "string",
  "KeyUsage": "string",
  "Origin": "string",
  "Policy": "string"
}

The request accepts the following data in JSON format.

Description (optional): Description of the key. We recommend that you choose a description that helps you decide whether the key is appropriate for a task.

KeyUsage (optional): Specifies the intended use of the key. Currently this defaults to "ENCRYPT/DECRYPT", since only symmetric encryption and decryption are supported.

Origin (optional): The source of the CMK's key material. The default is "AWS_KMS". In addition to the default value "AWS_KMS", the value "EXTERNAL" may be used to create a CMK without key material so that you can import key material from your existing key management infrastructure. The use of EXTERNAL is covered in the following section on Imported Master Keys.

Policy (optional): Policy to attach to the key. If the policy is omitted, the key is created with the default policy (below) that enables IAM users with AWS KMS permissions, as well as the root account, to manage it. For details on the policy, see https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html
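As a sketch of assembling this request body client-side, the following helper applies the defaults and valid values listed above. The helper itself is illustrative, not part of any AWS SDK:

```python
import json

# Valid Origin values taken from the parameter descriptions above.
VALID_ORIGINS = {"AWS_KMS", "EXTERNAL"}

def build_create_key_request(description=None, key_usage="ENCRYPT/DECRYPT",
                             origin="AWS_KMS", policy=None) -> str:
    """Return a CreateKey request body as JSON, applying documented defaults."""
    if origin not in VALID_ORIGINS:
        raise ValueError(f"Origin must be one of {sorted(VALID_ORIGINS)}")
    body = {"KeyUsage": key_usage, "Origin": origin}
    if description is not None:
        body["Description"] = description
    if policy is not None:
        body["Policy"] = policy  # omitted -> service applies the default policy
    return json.dumps(body)
```

Note that omitting Policy, rather than sending an empty one, is what triggers the default key policy described above.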
The call returns a response containing an ARN with the key identifier, of the form arn:aws:kms:<region>:<account-id>:key/<key-id>. If the Origin is AWS_KMS, after the ARN is created a request to an HSM is made over an authenticated session to provision an HBK. The HBK is a 256-bit key that is associated with this CMK key ID. It can be generated only on an HSM and is designed never to be exported outside of the HSM boundary in cleartext. An HBK is generated on the HSM and encrypted under the current domain key, DK0. These encrypted HBKs are referred to as EKTs. Although the HSMs can be configured to use a variety of key-wrapping methods, the current implementation uses the authenticated encryption scheme known as AES-256 in Galois Counter Mode (GCM) [5]. As part of the authenticated encryption mode, some cleartext exported key token metadata can be protected. This is stylistically represented as EKT = Encrypt(DK0, HBK).

Two fundamental forms of protection are provided to your CMKs and the subsequent HBKs: authorization policies set on your CMKs, and the cryptographic protections on your associated HBKs. The remaining sections describe the cryptographic protections and the security of the management functions in AWS KMS.

In addition to the ARN, a user-friendly name can be associated with the CMK by creating an alias for the key. Once an alias has been associated with a CMK, the alias can be used in place of the ARN. Multiple levels of authorization surround the use of CMKs. AWS KMS enables separate authorization policies between the encrypted content and the CMK. For instance, an AWS KMS envelope-encrypted Amazon Simple Storage Service (Amazon S3) object inherits the policy on the Amazon S3 bucket. However, access to the
necessary encryption key is determined by the access policy on the CMK.

For the latest information about authentication and authorization policies for AWS KMS, see https://docs.aws.amazon.com/kms/latest/developerguide/control-access.html

Imported Master Keys

AWS KMS provides a mechanism for importing the cryptographic material used for an HBK. As described earlier in the section on Customer Master Keys, when the CreateKey command is used with Origin set to EXTERNAL, a logical CMK is created that contains no underlying HBK. The cryptographic material must be imported using the ImportKeyMaterial API call. This feature allows you to control the key creation and durability of the cryptographic material. If you use this feature, it is recommended that you take significant caution in the handling and durability of these keys in your environment. For complete details and recommendations for importing master keys, see https://docs.aws.amazon.com/kms/latest/developerguide/importing-keys.html

GetParametersForImport

Prior to importing the key material for an imported master key, you must obtain the necessary parameters to import the key. The following is the GetParametersForImport request syntax:

{
  "KeyId": "string",
  "WrappingAlgorithm": "string",
  "WrappingKeySpec": "string"
}

KeyId: A unique key identifier for a CMK. This value can be a globally unique identifier, an ARN, or an alias.

WrappingAlgorithm: The algorithm you use when you encrypt your key material. The valid values are "RSAES_OAEP_SHA256", "RSAES_OAEP_SHA1", or "RSAES_PKCS1_V1_5". AWS KMS recommends that you use RSAES_OAEP_SHA256. You may have to use another key-wrapping algorithm depending on what your key management infrastructure supports.

WrappingKeySpec: The type of wrapping key (public key) to return in the response. Only RSA 2048-bit public keys are
supported; the only valid value is "RSA_2048". This call results in a request from the AWS KMS host to an HSM to generate a new RSA 2048-bit key pair. This key pair is used to import an HBK for the specified CMK key ID. The private key is protected by, and accessible only to, an HSM member of the domain. A successful call results in the following return values:

{
  "ImportToken": blob,
  "KeyId": "string",
  "PublicKey": blob,
  "ValidTo": number
}

ImportToken: A token that contains metadata to ensure that your key material is imported correctly. Store this value and send it in a subsequent ImportKeyMaterial request.

KeyId: The CMK to use when you subsequently import the key material. This is the same CMK specified in the request.

PublicKey: The public key to use to encrypt your key material. The public key is encoded as specified in section A.1.1 of PKCS #1 [6], an ASN.1 DER encoding of the RSAPublicKey. It is the ASN.1 encoding of two integers as an ASN.1 sequence.

ValidTo: The time at which the import token and public key expire. These items are valid for 24 hours. If you do not use them for a subsequent ImportKeyMaterial request within 24 hours, you must retrieve new ones. The import token and public key from the same response must be used together.

ImportKeyMaterial

The ImportKeyMaterial request imports the necessary cryptographic material for the HBK. The cryptographic material must be a 256-bit symmetric key. It must be encrypted, using the algorithm specified in WrappingAlgorithm, under the returned public key from a recent GetParametersForImport request. ImportKeyMaterial takes the following arguments:

{
  "EncryptedKey": blob,
  "ExpirationModel": "string",
  "ImportToken": blob,
  "KeyId": "string",
  "ValidTo": number
}

EncryptedKey: The encrypted key material. Encrypt the key material with the algorithm that you specified in a previous GetParametersForImport request and the public key that you
received in the response to that request.

ExpirationModel: Specifies whether the key material expires. When this value is KEY_MATERIAL_EXPIRES, the ValidTo parameter must contain an expiration date. When this value is KEY_MATERIAL_DOES_NOT_EXPIRE, do not include the ValidTo parameter. The valid values are "KEY_MATERIAL_EXPIRES" and "KEY_MATERIAL_DOES_NOT_EXPIRE".

ImportToken: The import token you received in a previous GetParametersForImport response. Use the import token from the same response that contained the public key that you used to encrypt the key material.

KeyId: The CMK to import key material into. The CMK's Origin must be EXTERNAL.

Optional ValidTo: The time at which the imported key material expires. When the key material expires, AWS KMS deletes the key material and the CMK becomes unusable. You must omit this parameter when ExpirationModel is set to KEY_MATERIAL_DOES_NOT_EXPIRE; otherwise, it is required.

On success, the CMK is available for use within AWS KMS until the specified validity date. Once an imported CMK expires, the EKT is deleted from the service's storage layer.

Enable and Disable Key

The ability to enable or disable a CMK is separate from the key lifecycle. Disabling a key does not modify the actual state of the key; instead, it suspends the ability to use all HBKs that are tied to the CMK. These are simple commands that take just the CMK key ID.

Figure 6: AWS KMS CMK lifecycle³

Key Deletion

You can delete a CMK and all associated HBKs. This is an inherently destructive operation, and you should exercise caution when deleting keys from AWS KMS. AWS KMS enforces a minimum wait time of seven days when deleting CMKs. During the waiting period, the key is placed in a disabled state with a key state of Pending Deletion, and all calls to use the key for cryptographic operations will fail.

³ The lifecycle for an EXTERNAL CMK differs: it can be in the state of pending import, and key rotation is not currently available. Further, the EKT can be removed without requiring a waiting period by calling DeleteImportedKeyMaterial.

CMKs can be deleted using the ScheduleKeyDeletion API call. It takes the following arguments:

    {
        "KeyId": "string",
        "PendingWindowInDays": number
    }

KeyId: The unique identifier for the CMK to delete. To specify this value, use the unique key ID or the ARN of the CMK.

Optional PendingWindowInDays: The waiting period, specified in number of days. After the waiting period ends, AWS KMS deletes the CMK and all associated HBKs. If you include a value, it must be between 7 and 30, inclusive; if you do not include a value, it defaults to 30.

Rotate Customer Master Key

You can induce a rotation of your CMK. The current system allows you to opt in to a yearly rotation schedule for your CMK. When a CMK is rotated, a new HBK is created and marked as the active key for all new requests to protect information. The current active key is moved to the deactivated state and remains available to decrypt any existing ciphertext values that were encrypted under that version of the HBK. AWS KMS does not store any ciphertext values encrypted under a CMK; as a direct consequence, these ciphertext values require the deactivated HBK to decrypt. Older ciphertexts can be re-encrypted to the new HBK by calling the ReEncrypt API. You can set up key rotation with a simple API call or from the AWS Management Console.

Customer Data Operations

After you have established a CMK, it can be used to perform cryptographic operations. Whenever data is encrypted under a CMK, the resulting object is a customer ciphertext. The ciphertext contains two sections: an unencrypted
header (or cleartext) portion, protected by the authenticated encryption scheme as the additional authenticated data, and an encrypted portion. The cleartext portion includes the HBK identifier (HBKID). These two immutable fields of the ciphertext value help ensure that AWS KMS can decrypt the object in the future.

Generating Data Keys

A request can be made for a specific type of data key, or a random key of arbitrary length, through the GenerateDataKey API call. A simplified view of this API operation is provided here and in other examples; you can find a detailed description of the full API at https://docs.aws.amazon.com/kms/latest/APIReference/Welcome.html.

The following is the GenerateDataKey request syntax:

    {
        "EncryptionContext": {"string": "string"},
        "GrantTokens": ["string"],
        "KeyId": "string",
        "KeySpec": "string",
        "NumberOfBytes": "number"
    }

The request accepts the following data in JSON format:

Optional EncryptionContext: Name:value pairs that contain additional data to be authenticated during the encryption and decryption processes that use the key.

Optional GrantTokens: A list of grant tokens that represent grants providing permissions to generate or use a key. For more information on grants and grant tokens, see https://docs.aws.amazon.com/kms/latest/developerguide/control-access.html.

Optional KeySpec: A value that identifies the encryption algorithm and key size. Currently, this can be AES_128 or AES_256.

Optional NumberOfBytes: An integer that contains the number of bytes to generate.

AWS KMS, after authenticating the command, acquires the current active EKT pertaining to the CMK. It passes the EKT, along with your request and any encryption context, to an HSM over a protected session between the AWS KMS host and an HSM in the domain. The HSM does the following:

1. Generates the requested secret material and holds it in volatile memory.
2. Decrypts the EKT matching the key ID of the CMK defined in the request to obtain the active HBK = Decrypt(DKi, EKT).
3. Generates a random nonce N.
4. Derives a 256-bit AES-GCM data encryption key K from HBK and N.
5. Encrypts the secret material: ciphertext = Encrypt(K, context, secret).

The ciphertext value is returned to you and is not retained anywhere in the AWS infrastructure. Without possession of the ciphertext, the encryption context, and the authorization to use the CMK, the underlying secret cannot be returned. GenerateDataKey returns the plaintext secret material and the ciphertext over the secure channel between the HSM and the AWS KMS host; AWS KMS then sends them to you over the TLS session. The following is the response syntax:

    {
        "CiphertextBlob": "blob",
        "KeyId": "string",
        "Plaintext": "blob"
    }

The management of data keys is left to you as the application developer. They can be rotated at any frequency, and a data key can itself be re-encrypted to a different CMK, or to a rotated CMK, using the ReEncrypt API operation. Full details can be found at https://docs.aws.amazon.com/kms/latest/APIReference/Welcome.html.

Encrypt

A basic function of AWS KMS is to encrypt an object under a CMK. By design, AWS KMS provides low-latency cryptographic operations on HSMs, so there is a 4 KB limit on the amount of plaintext that can be encrypted in a direct call to the Encrypt function. The AWS Encryption SDK can be used to encrypt larger messages. AWS KMS, after authenticating the command, acquires the current active EKT pertaining to the CMK. It passes the EKT, along with the plaintext and encryption context you provide, to any available HSM in the region over an authenticated session between the AWS KMS host and an HSM in the domain. The HSM executes the following:

1. Decrypts the EKT to obtain the HBK = Decrypt(DKi, EKT).
2. Generates a random nonce
N.
3. Derives a 256-bit AES-GCM data encryption key K from HBK and N.
4. Encrypts the plaintext: ciphertext = Encrypt(K, context, plaintext).

The ciphertext value is returned to you, and neither the plaintext data nor the ciphertext is retained anywhere in the AWS infrastructure. Without possession of the ciphertext and the encryption context, and the authorization to use the CMK, the underlying plaintext cannot be returned.

Decrypt

A call to AWS KMS to decrypt a ciphertext value accepts an encrypted value (ciphertext) and an encryption context. AWS KMS authenticates the call using AWS Signature Version 4 signed requests [16] and extracts the HBKID for the wrapping key from the ciphertext. The HBKID is used to obtain the EKT required to decrypt the ciphertext, the key ID, and the policy for the key ID. The request is authorized based on the key policy, any grants that may be present, and any associated IAM policies that reference the key ID. The Decrypt function is analogous to the Encrypt function.

The following is the Decrypt request syntax:

    {
        "CiphertextBlob": "blob",
        "EncryptionContext": {"string": "string"},
        "GrantTokens": ["string"]
    }

The following are the request parameters:

CiphertextBlob: Ciphertext, including metadata.

Optional EncryptionContext: The encryption context. If this was specified in the Encrypt function, it must be specified here or the decryption operation fails. For more information, see https://docs.aws.amazon.com/kms/latest/developerguide/encrypt-context.html.

Optional GrantTokens: A list of grant tokens that represent grants providing permission to perform decryption.

The ciphertext and the EKT are sent, along with the encryption context, over an authenticated session to an HSM for decryption. The HSM executes the following:

1. Decrypts the EKT to obtain the HBK = Decrypt(DKi, EKT).
2. Extracts the nonce N from the ciphertext structure.
3. Regenerates the 256-bit AES-GCM data encryption key K from HBK and N.
4. Decrypts the ciphertext to obtain plaintext = Decrypt(K, context, ciphertext).

The resulting key ID and plaintext are returned to the AWS KMS host over the secure session, and then back to the calling customer application over a TLS connection. The following is the response syntax:

    {
        "KeyId": "string",
        "Plaintext": blob
    }

If the calling application wants to ensure the authenticity of the plaintext, it must verify that the key ID returned is the one expected.

ReEncrypting an Encrypted Object

An existing customer ciphertext encrypted under one CMK can be re-encrypted to another CMK through a ReEncrypt command. ReEncrypt encrypts data on the server side with a new CMK without exposing the plaintext on the client side; the data is first decrypted and then encrypted. The following is the request syntax:

    {
        "CiphertextBlob": "blob",
        "DestinationEncryptionContext": {"string": "string"},
        "DestinationKeyId": "string",
        "GrantTokens": ["string"],
        "SourceEncryptionContext": {"string": "string"}
    }

The request accepts the following data in JSON format:

CiphertextBlob: Ciphertext of the data to re-encrypt.

Optional DestinationEncryptionContext: Encryption context to be used when the data is re-encrypted.

DestinationKeyId: Key identifier of the key used to re-encrypt the data.

Optional GrantTokens: A list of grant tokens that represent grants providing permission to perform decryption.

Optional SourceEncryptionContext: Encryption context used to encrypt and decrypt the data specified in the CiphertextBlob parameter.

The process combines the decrypt and encrypt operations described previously: the customer ciphertext is decrypted under the initial HBK referenced by the ciphertext and then encrypted under the current HBK of the intended CMK.
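Both the Encrypt and Decrypt flows above derive the per-operation data encryption key K from the HBK and the nonce N, and the Keys appendix later in this paper states that K is produced with the NIST SP 800-108 KDF in counter mode using HMAC-SHA256. A minimal sketch of such a derivation follows; the label and the exact layout of the KDF input are illustrative assumptions, since this paper does not publish the precise inputs AWS KMS uses.

```python
import hashlib
import hmac
import os


def derive_data_key(hbk: bytes, nonce: bytes, length: int = 32) -> bytes:
    """Counter-mode KDF with HMAC-SHA256 (NIST SP 800-108 style).

    Derives a `length`-byte key from a backing key `hbk` and a nonce.
    The label and input layout below are hypothetical, for illustration.
    """
    label = b"aws-kms-data-key"  # hypothetical label, not the real KMS value
    out = b""
    counter = 1
    while len(out) < length:
        # K(i) = HMAC(HBK, [i]_4 || label || 0x00 || nonce || [L]_4)
        msg = (counter.to_bytes(4, "big") + label + b"\x00"
               + nonce + (length * 8).to_bytes(4, "big"))
        out += hmac.new(hbk, msg, hashlib.sha256).digest()
        counter += 1
    return out[:length]


# Encrypt derives K from (HBK, N); Decrypt regenerates the same K from
# the nonce stored in the ciphertext structure.
hbk = os.urandom(32)   # stand-in for the 256-bit HSM backing key
n = os.urandom(12)     # random nonce generated by the HSM
k = derive_data_key(hbk, n)
assert k == derive_data_key(hbk, n)               # deterministic re-derivation
assert k != derive_data_key(hbk, os.urandom(12))  # fresh nonce, fresh key
```

Because the derivation is deterministic in (HBK, N), the HSM never needs to store K: the Decrypt path simply re-runs the KDF with the nonce extracted from the ciphertext.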
When the CMKs used in this command are the same, the command moves the customer ciphertext from an old version of an HBK to the latest version of that HBK. The following is the response syntax:

    {
        "CiphertextBlob": blob,
        "KeyId": "string",
        "SourceKeyId": "string"
    }

If the calling application wants to ensure the authenticity of the underlying plaintext, it must verify that the SourceKeyId returned is the one expected.

Domains and the Domain State

A cooperative collection of trusted internal AWS KMS entities within an AWS Region is referred to as a domain. A domain includes a set of trusted entities, a set of rules, and a set of secret keys called domain keys. The domain keys are shared among HSMs that are members of the domain. A domain state consists of the following fields:

Name: A domain name to identify this domain.
Members: A list of HSMs that are members of the domain, including their public signing keys and public agreement keys.
Operators: A list of entities, their public signing keys, and a role (KMS operator or service host) that represents the operators of this service.
Rules: A list of quorum rules for each command that must be satisfied to execute that command on the HSM.
Domain keys: A list of domain keys (symmetric keys) currently in use within the domain.

The full domain state is available only on the HSM. The domain state is synchronized between HSM domain members as an exported domain token.

Domain Keys

All the HSMs in a domain share a set of domain keys, {DKr}. These keys are shared through a domain state export routine. The exported domain state can be imported into any HSM that is a member of the domain; how this is accomplished, and the additional contents of the domain state, are detailed in the following section on managing the domain state. The set of domain keys {DKr} always includes one active domain key and several deactivated domain keys. Domain keys are rotated daily to ensure compliance with Recommendation for Key Management, Part 1 [17]. During domain key rotation, all existing CMKs encrypted under the outgoing domain key are re-encrypted under the new active domain key. The active domain key is used to encrypt any new EKTs. Expired domain keys can be used only to decrypt previously encrypted EKTs, for a number of days equivalent to the number of recently rotated domain keys.

Exported Domain Tokens

There is a regular need to synchronize state between domain participants. This is accomplished by exporting the domain state whenever a change is made to the domain. The domain state is exported as an exported domain token, which contains the following fields:

Name: A domain name to identify this domain.
Members: A list of HSMs that are members of the domain, including their signing and agreement public keys.
Operators: A list of entities, their public signing keys, and a role that represents the operators of this service.
Rules: A list of quorum rules for each command that must be satisfied to execute that command on an HSM domain member.
Encrypted domain keys: Envelope-encrypted domain keys. The domain keys are encrypted by the signing member for each of the members listed above, enveloped to their public agreement keys.
Signature: A signature on the domain state produced by an HSM (necessarily a member of the domain) that exported the domain state.

The exported domain token forms the fundamental source of trust for entities operating within the domain.

Managing the Domain State

The domain state is managed through quorum-authenticated commands. These changes include modifying the list of trusted participants in the domain, modifying the quorum rules for executing HSM commands, and periodically rotating the domain keys. These commands are authenticated on a per-command basis, as opposed to authenticated session operations; see the API model depicted in Figure 7. An HSM in its initialized and
operational state contains a set of self-generated asymmetric identity keys: a signing key pair and a key-establishment key pair. Through a manual process, a KMS operator can establish an initial domain to be created on a first HSM in a region. This initial domain consists of a full domain state as defined in the Domains and the Domain State section. It is installed through a join command to each of the defined HSM members of the domain. After an HSM has joined an initial domain, it is bound to the rules defined in that domain. These rules govern the commands that use customer cryptographic keys or make changes to the host or domain state. The authenticated-session API operations that use your cryptographic keys were defined earlier.

Figure 9: Domain management

Figure 9 depicts how a domain state gets modified. It consists of four steps:

1. A quorum-based command is sent to an HSM to modify the domain.
2. A new domain state is generated and exported as a new exported domain token. The state on the HSM is not modified, meaning that the change is not yet enacted on the HSM.
3. A second command is sent to each of the HSMs in the newly exported domain token to update their domain state with the new domain token.
4. The HSMs listed in the new exported domain token can authenticate the command and the domain token, and can unpack the domain keys to update the domain state on all HSMs in the domain.

HSMs do not communicate directly with each other. Instead, a quorum of operators requests a change to the domain state that results in a new exported domain token, and a service host member of the domain distributes the new domain state to every HSM in the domain. The leaving and joining of a domain are done through the HSM management functions, and the modification of the domain state is done through the domain management functions.

HSM management commands:

Leave domain: Causes an HSM to leave a domain, deleting all remnants and keys of that domain from memory.
Join domain: Causes an HSM to join a new domain, or to update its current domain state to the new domain state, using the existing domain as the source of the initial set of rules to authenticate this message.

Domain management commands:

Create domain: Causes a new domain to be created on an HSM. Returns a first domain token that can be distributed to member HSMs of the domain.
Modify operators: Adds or removes operators from the list of authorized operators, and their roles in the domain.
Modify members: Adds or removes an HSM from the list of authorized HSMs in the domain.
Modify rules: Modifies the set of quorum rules required to execute commands on an HSM.
Rotate domain keys: Causes a new domain key to be created and marked as the active domain key. This moves the existing active key to a deactivated key and removes the oldest deactivated key from the domain state.

Internal Communication Security

Commands between the service hosts/KMS operators and the HSMs are secured through two mechanisms depicted in Figure 7: a quorum-signed request method, and an authenticated session using an HSM/service host protocol. The quorum-signed commands are designed so that no single operator can modify the critical security protections provided by the HSMs. The commands executed over the authenticated sessions help ensure that only authorized service operators can perform operations involving CMKs. All customer-bound secret information is secured across the AWS infrastructure.

HSM Security Boundary

The inner security boundary of AWS KMS is the HSM. The HSM has a limited web-based API and no other active physical interfaces in its operational state. An operational HSM is provisioned during initialization with the necessary cryptographic keys to establish its role in the domain. Sensitive cryptographic materials of the HSM are only
stored in volatile memory and are erased when the HSM moves out of the operational state, including intended or unintended shutdowns or resets. The HSM API operations are authenticated either as individual commands or over a mutually authenticated confidential session established by a service host.

Figure 7: HSM API operations

Quorum-Signed Commands

Quorum-signed commands are issued by operators to HSMs. This section describes how quorum-based commands are created, signed, and authenticated. The rules are fairly simple; for example, command Foo requires two members from role Bar to be authenticated. There are three steps in the creation and verification of a quorum-based command: the initial command creation, the submission to additional operators for signing, and the verification and execution.

For the purpose of introducing the concepts, assume that there is an authentic set of operators' public keys and roles {QOSs}, and a set of quorum rules QR = {Commandi, {Rule{i,t}}}, where each Rule is a set of roles and minimum numbers {Rolet, Nt}. For a command to satisfy a quorum rule, the command dataset must be signed by a set of operators listed in {QOSs} such that they meet one of the rules listed for that command. As mentioned earlier in this whitepaper, the set of quorum rules and operators is stored in the domain state and the exported domain token.

In practice, an initial signer signs the command: Sig1 = Sign(dOp1, Command). A second operator also signs the command: Sig2 = Sign(dOp2, Command). The doubly signed message is sent to an HSM for execution. The HSM performs the following:

1. For each signature, it extracts the signer's public key from the domain state and verifies the signature on the command.
2. It verifies that the set of signers satisfies a rule for the command.

Authenticated Sessions

Your key operations are executed between the externally facing AWS KMS hosts and the HSMs. These commands pertain to the creation and use of cryptographic keys and secure random number generation. The commands execute over a session-authenticated channel between the service hosts and the HSMs. In addition to authenticity, these sessions require confidentiality: commands executing over them include the return of cleartext data keys and decrypted messages intended for you. To ensure that these sessions cannot be subverted through man-in-the-middle attacks, sessions are authenticated.

The protocol performs a mutually authenticated ECDHE key agreement between the HSM and the service host. The exchange is initiated by the service host and completed by the HSM. The HSM also returns a session key (SK) encrypted by the negotiated key, together with an exported key token that contains the session key. The exported key token has a validity period, after which the service host must renegotiate a session key.

A service host is a member of the domain and has an identity signing key pair (dHOSi, QHOSi) and an authentic copy of the HSMs' identity public keys. It uses its identity signing keys to securely negotiate a session key that can be used between the service host and any HSM in the domain.

Figure 8: HSM/service host operator authenticated sessions

The process begins with the service host's recognition that it requires a session key to send and receive sensitive communications between itself and an HSM member of the domain:

1. The service host generates an ECDH ephemeral key pair (d1, Q1) and signs it with its identity key: Sig1 = Sign(dOS, Q1).
2. The HSM verifies the signature on the received public key using its current domain token
and creates an ECDH ephemeral key pair (d2, Q2). It then completes the ECDH key exchange according to Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography (Revised) [13] to form a negotiated 256-bit AES-GCM key. The HSM generates a fresh 256-bit AES-GCM session key. It encrypts the session key with the negotiated key to form the encrypted session key (ESK), and it also encrypts the session key under the domain key as an exported key token, EKT. Finally, it signs a return value with its identity key pair: Sig2 = Sign(dHSK, (Q2, ESK, EKT)).
3. The service host verifies the signature on the received keys using its current domain token. The service host then completes the ECDH key exchange according to [13] and decrypts the ESK to obtain the session key SK.

During the validity period in the EKT, the service host can use the negotiated session key SK to send envelope-encrypted commands to the HSM. Every service-host-initiated command over this authenticated session includes the EKT. The HSM responds using the same negotiated session key SK.

Durability Protection

Additional service durability is provided by the use of offline HSMs, multiple nonvolatile copies of exported domain tokens, and redundant storage of encrypted CMKs. The offline HSMs are members of the existing domains; with the exception of not being online and participating in the regular domain operations, they appear identically in the domain state to the existing HSM members. The durability design is intended to protect all CMKs in a region should AWS experience a wide-scale loss of either the online HSMs or the set of CMKs stored within our primary storage system. Imported master keys are not included under the durability protections afforded to other CMKs: in the event of a region-wide failure in AWS KMS, imported master keys may need to be re-imported.

The offline HSMs, and the credentials to access them, are stored in safes within monitored safe rooms in multiple independent geographical locations. Each safe requires at least one AWS security officer and one AWS KMS operator, from two independent teams in AWS, to obtain these materials. The use of these materials is governed by internal policy requiring a quorum of AWS KMS operators to be present.

References

[1] Amazon Web Services, "FIPS 140-2 Non-proprietary Security Policy, AWS Key Management Service HSM," version 1.01.01, 18 January 2018. https://csrc.nist.gov/CSRC/media/projects/cryptographic-module-validation-program/documents/security-policies/140sp3139.pdf

[2] Guidelines for the Selection, Configuration, and Use of Transport Layer Security (TLS) Implementations, NIST Special Publication 800-52 Revision 1, April 2014. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-52r1.pdf

[3] Recommendation for Random Number Generation Using Deterministic Random Bit Generators, NIST Special Publication 800-90A Revision 1, June 2015. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-90Ar1.pdf

[4] Federal Information Processing Standards Publication 197, Announcing the Advanced Encryption Standard (AES), November 2001. http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf

[5] Recommendation for Block Cipher Modes of Operation: Galois/Counter Mode (GCM) and GMAC, NIST Special Publication 800-38D, November 2007. http://csrc.nist.gov/publications/nistpubs/800-38D/SP-800-38D.pdf

[6] PKCS #1 v2.2: RSA Cryptography Standard, RSA Laboratories, October 2012.

[7] Recommendation for Key Derivation Using Pseudorandom Functions, NIST Special Publication 800-108, October 2009. Available from
https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-108.pdf

[8] Federal Information Processing Standards Publication 198-1, The Keyed-Hash Message Authentication Code (HMAC), July 2008. http://csrc.nist.gov/publications/fips/fips198-1/FIPS-198-1_final.pdf

[9] Federal Information Processing Standards Publication 180-4, Secure Hash Standard, August 2012. https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf

[10] Use of Elliptic Curve Cryptography (ECC) Algorithms in Cryptographic Message Syntax (CMS), D. Brown and S. Turner, Internet Engineering Task Force, July 2010. http://tools.ietf.org/html/rfc5753/

[11] X9.62-2005: Public Key Cryptography for the Financial Services Industry: The Elliptic Curve Digital Signature Algorithm (ECDSA), American National Standards Institute, 2005.

[12] SEC 2: Recommended Elliptic Curve Domain Parameters, Standards for Efficient Cryptography Group, Version 2.0, 27 January 2010. http://www.secg.org/sec2-v2.pdf

[13] Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography (Revised), NIST Special Publication 800-56A Revision 2, May 2013. http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-56Ar2.pdf

[14] Amazon Web Services, "What is the AWS Encryption SDK?" http://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html

[15] Recommendation for Block Cipher Modes of Operation: The XTS-AES Mode for Confidentiality on Storage Devices, NIST Special Publication 800-38E, January 2010. http://csrc.nist.gov/publications/nistpubs/800-38E/nist-sp-800-38E.pdf

[16] Amazon Web Services General Reference (Version 1.0), "Signing AWS API Requests." http://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html

[17] Recommendation for Key Management, Part 1: General, NIST Special Publication 800-57 Part 1, January 2016. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57pt1r4.pdf

Appendix: Abbreviations and Keys

This section lists abbreviations and keys referenced throughout the document.

Abbreviations

AES: Advanced Encryption Standard
CDK: customer data key
CMK: customer master key
CMKID: customer master key identifier
DK: domain key
ECDH: Elliptic Curve Diffie-Hellman
ECDHE: Elliptic Curve Diffie-Hellman Ephemeral
ECDSA: Elliptic Curve Digital Signature Algorithm
EKT: exported key token
ESK: encrypted session key
GCM: Galois/Counter Mode
HBK: HSM backing key
HBKID: HSM backing key identifier
HSM: hardware security module
RSA: Rivest, Shamir, and Adleman (cryptosystem)
secp384r1: Standards for Efficient Cryptography prime 384-bit random curve 1
SHA256: Secure Hash Algorithm with a digest length of 256 bits

Keys

HBK – HSM backing key: HSM backing keys are 256-bit master keys from which specific-use keys are derived.
DK – Domain key: A domain key is a 256-bit AES-GCM key. It is shared among all the members of a domain and is used to protect HSM backing key material and HSM/service host session keys.
DKEK – Domain key encryption key: A domain key encryption key is an AES-256-GCM key generated on a host and used for encrypting the current set of domain keys when synchronizing domain state across the HSM hosts.
(dHAK, QHAK) – HSM agreement key pair: Every initialized HSM has a locally generated Elliptic Curve Diffie-Hellman agreement key pair on the curve secp384r1 (NIST P-384).
(dE, QE) – Ephemeral agreement key pair: HSMs and service hosts generate ephemeral agreement keys. These are Elliptic Curve Diffie-Hellman keys on the curve secp384r1 (NIST P-384), generated in two use cases: to establish a host-to-host encryption key to transport domain key encryption keys in domain tokens, and to
establish HSM service host session keys to protect sensitive communications (dHSK QHSK ) HSM signature key pair: Every initiated HSM has a locally generated Elliptic Curve Digital Signature key pair on the curve secp384r1 (NIST P384) (dOS QOS ) Operator signature key pair: Both the service host operators and KMS operators have an identity signing key used to authenticate itself to other domain participants K Data encryption key : A 256 bit AES GCM key derived from an HBK using the NIST SP800 108 KDF in counter mode using HMAC with SHA256 SK Session key: A session key is created as a result of an authenticated Elliptic Curve Diffie Hellman key exchanged between a service host operator and an HSM The purpose of the exchange is to secur e communication between the service host and the member s of the domain ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 42 of 42 Contributors The following individu als and organizations contributed to this document: • Ken Beer General Manager KMS AWS Cryptography • Richard Moulds Principal Product Manager – KMS AWS Cryptography • Matthew Campagna Principal Security Engineer AWS C ryptography • Raj Copparapu Sr Prod uct Manager KMS AWS Cryptography Document Revisions For the most up to date version of this white paper please visit: https://d1awsstaticcom/whitepapers/KMS Cryptographic Detailspdf",General,consultant,Best Practices AWS_Migration_Whitepaper,Archived1 AWS Migration Whitepaper AWS Professional Services March 2018 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived 2 © 201 8 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent 
assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Introduction
Using the AWS Cloud Adoption Framework (AWS CAF) to Assess Migration Readiness
Impact of Culture on Cloud Migration
Business Drivers
Migration Strategies
"The 6 R's": 6 Application Migration Strategies
Which Migration Strategy is Right for Me?
Building a Business Case for Migration
People and Organization
Organizing Your Company's Cloud Teams
Creating a Cloud Center of Excellence
Migration Readiness and Planning
Assessing Migration Readiness
Application Discovery
Application Discovery Tools
Application Portfolio Analysis
Migration Planning
Technical Planning
The Virtual Private Cloud Environment
Migrating
First Migrations – Build Experience
Migration Execution
Application Migration Process
Team Models
Conclusion
Contributors
Resources
Additional Information
FAQ
Glossary

Abstract

Adopting Amazon Web Services presents many benefits, such as increased business agility, flexibility, and reduced costs. As an enterprise's cloud journey evolves from building and running cloud-native applications on AWS to mapping out the migration of an entire enterprise IT estate, certain challenges surface. Migrating at scale to AWS calls for a level of business transformation in order to fully realize the numerous benefits of operating in a cloud environment, including changes to tools, processes, and skillsets. The AWS approach is a
culmination of our experiences in helping large companies migrate to the cloud. From these experiences we have developed a set of methods and best practices to enable a successful move to AWS. Here we discuss the importance of driving organizational change and leadership, how to establish foundational readiness and plan for migrating at scale, and our iterative approach to migration execution.

Migrating to AWS requires an iterative approach, which begins with building and evolving your business case as you and your team learn and uncover more data over time through activities like application portfolio discovery and portfolio analysis. There are common migration strategies that will inform your business plan, and a recommended approach to organizing and evolving your cloud teams as confidence and capability increase. You will stand up a Cloud Center of Excellence (CCoE) to lead and drive change, evangelize your cloud migration initiative, establish cloud governance guardrails, and enable and prepare your organization to provide and consume new services. Our approach walks you through what it means to be ready to migrate at scale and how to establish a solid foundation to save time and prevent roadblocks down the road. We will cover our approach to migration execution, continuing on with building momentum and acceleration through a method of learn-and-iterate.

Introduction

Migrating your existing applications and IT assets to the Amazon Web Services (AWS) Cloud presents an opportunity to transform the way your organization does business. It can help you lower costs, become more agile, develop new skills more quickly, and deliver reliable, globally available services to your customers. Our goal is to help you to implement your cloud strategy successfully. AWS has identified key factors to successful IT transformation through our experience engaging and supporting enterprise customers. We have organized these into a set
of best practices for successful cloud migration. Customer scenarios range from migrating small single applications to migrating entire data centers with hundreds of applications. We provide an overview of the AWS migration methodology, which is built on iterative and continuous progress. We discuss the principles that drive our approach and the essential activities that are necessary for successful enterprise migrations.

Migrating to AWS is an iterative process that evolves as your organization develops new skills, processes, tools, and capabilities. The initial migrations help build experience and momentum that accelerate your later migration efforts. Establishing the right foundation is key to a successful migration. Our migration process balances the business and technical efforts needed to complete a cloud migration. We identify key business drivers for migration and present best strategies for planning and executing a cloud migration.

Once you understand why you are moving to the cloud, it is time to address how to get there. There are many challenges to completing a successful cloud migration. We have collected common customer questions from hundreds of cloud migration journeys and listed them here to illustrate common concerns as you embark on your cloud migration journey. The order and prioritization will vary based on your unique circumstances, but we believe the exercise of thinking through and prioritizing your organization's concerns upfront is beneficial:
- How do I build the right business case?
- How do I accurately assess my environment?
- How do I learn what I don't know about my enterprise network topology and application portfolio?
- How do I create a migration plan?
- How do I identify and evaluate the right partners to help me?
- How do I estimate the cost of a large transition like this?
- How long will the migration process take to complete?
- What tools will I need to complete the migration?
- How do I handle my legacy applications?
- How do I accelerate the migration effort to realize the business and technology benefits?

These questions and many more will be answered throughout this paper. We have included support and documentation such as the AWS Cloud Migration Portal.1 The best practices described in this paper will help you build a foundation for a successful migration, including building a solid business plan, defining appropriate processes, and identifying best-in-class migration tools and resources to complete the migration. Having this foundation will help you avoid the typical migration pitfalls that can lead to cost overruns and migration delays.

The Cloud Adoption Framework (AWS CAF)

AWS developed the AWS Cloud Adoption Framework (AWS CAF), which helps organizations understand how cloud adoption transforms the way they work. AWS CAF leverages our experiences assisting companies around the world with their cloud adoption journey. Assessing migration readiness across key business and technical areas, referred to as Perspectives, helps determine the most effective approach to an enterprise cloud migration effort. First, let's outline what we mean by perspective. AWS CAF is organized into six areas of focus, which span your entire organization. We describe these areas of focus as Perspectives: Business, People, Governance, Platform, Security, and Operations. For further reading, please see the AWS CAF Whitepaper.2

AWS CAF provides a mental model to establish areas of focus in determining readiness to migrate and creating a set of migration execution workstreams. As these are key areas of
the business impacted by cloud adoption, it's important that we create a migration plan which considers and incorporates the necessary requirements across each area.

Figure 1: AWS Cloud Adoption Framework People and Technology Perspectives

The following table presents a description of each Perspective and the common roles involved.

Table 1: AWS CAF perspectives

Business – Business support capabilities to optimize business value with cloud adoption. Common roles: Business Managers; Finance Managers; Budget Owners; Strategy Stakeholders.
People – People development, training, communications, and change management. Common roles: Human Resources; Staffing; People Managers.
Governance – Managing and measuring resulting business outcomes. Common roles: CIO; Program Managers; Project Managers; Enterprise Architects; Business Analysts; Portfolio Managers.
Platform – Develop, maintain, and optimize cloud platform solutions and services. Common roles: CTO; IT Managers; Solution Architects.
Security – Ensures that the workloads deployed or developed in the cloud align to the organization's security, control, resiliency, and compliance requirements. Common roles: CISO; IT Security Managers; IT Security Analysts; Head of Audit and Compliance.
Operations – Ensures system health and reliability through the move to the cloud and delivers an agile cloud computing operation. Common roles: IT Operations Managers; IT Support Managers.

Motivating Change

"Cultural issues are at the root of many failed business transformations, yet most organizations do not assign explicit responsibility for culture." – Gartner, 2016

Culture is critical to cloud migration. Cloud adoption can fail to reach maximum potential if companies do not consider the impact to culture, people, and processes in addition to the technology. On-premises infrastructure has been historically managed by people, and even with advancements in server virtualization, most companies have not been able to implement the levels of automation that the cloud can provide. The AWS platform provides customers instant access to infrastructure and application services through a pay-as-you-go pricing model. You
can automate the provisioning of AWS resources using AWS service APIs. As a result, roles and responsibilities within your organization will change as application teams take more control of their infrastructure and application services.

The impact of culture on cloud, and of cloud on culture, does not need to be a daunting or arduous proposition. Be aware and intentional about the cultural changes you are looking to drive, and manage the people side of change. Measure and track the cultural change just as you would the technology change. We recommend implementing an organizational change management (OCM) framework to help drive the desired changes throughout your organization.

Table 2: Organizational change management to accelerate your cloud transformation

The AWS OCM Framework guides you through mobilizing your people, aligning leadership, envisioning the future state of operating in the cloud, engaging your organization beyond the IT environment, enabling capacity, and making all of those changes stick for the long term. You can find additional information on this topic in the Resources section of this paper.

Business Drivers

The number one reason customers choose to move to the cloud is for the agility they gain. The AWS Cloud provides more than 90 services, including everything from compute, storage, and databases to continuous integration, data analytics, and artificial intelligence. You are able to move from idea to implementation in minutes, rather than the months it can take to provision services on premises. In addition to agility, other common reasons customers migrate to the cloud include increased productivity, data center consolidation or rationalization, and preparing for an acquisition, divestiture, or reduction in infrastructure sprawl. Some companies want to completely reimagine their business as part of a larger digital transformation program. And of course, organizations are always looking for ways to reduce
costs. Common drivers that apply when migrating to the cloud are:

Operational Costs – Operational costs are the costs of running your infrastructure. They include the unit price of infrastructure, matching supply and demand, investment risk for new applications, markets, and ventures, employing an elastic cost base, and building transparency into the IT operating model.

Workforce Productivity – Workforce productivity is how efficiently you are able to get your services to market. You can quickly provision AWS services, which increases your productivity by letting you focus on the things that make your business different, rather than spending time on the things that don't, like managing data centers. With over 90 services at your disposal, you eliminate the need to build and maintain these independently. We see workforce productivity improvements of 30%-50% following a large migration.

Cost Avoidance – Cost avoidance is setting up an environment that does not create unnecessary costs. Eliminating the need for hardware refresh and maintenance programs is a key contributor to cost avoidance. Customers tell us they are not interested in the cost and effort required to execute a big refresh cycle or data center renewal, and are accelerating their move to the cloud as a result.

Operational Resilience – Operational resilience is reducing your organization's risk profile and the cost of risk mitigation. As of June 2017, AWS has 16 Regions comprising 42 Availability Zones (AZs). With AWS you can deploy your applications in multiple Regions around the world, which improves your uptime and reduces your risk-related costs. After migrating to AWS, our customers have seen improvements in application performance, better security, and a reduction in high-severity incidents. For example, GE Oil & Gas saw a 98% reduction in P1/P0 incidents with improved application performance.

Business Agility – Business agility is the ability to react quickly
to changing market conditions. Migrating to the AWS Cloud helps increase your overall operational agility. You can expand into new markets, take products to market quickly, and acquire assets that offer a competitive advantage. You also have the flexibility to speed up divestiture or acquisition of lines of business. Operational speed, standardization, and flexibility develop when you use DevOps models, automation, monitoring, and auto-recovery or high-availability capabilities.

Migration Strategies

This is where you start to develop a migration strategy. Consider where your cloud journey fits into your organization's larger business strategy and find opportunities for alignment of vision. A well-aligned migration strategy, with a supporting business case and a well-thought-out migration plan, sets the proper groundwork for cloud adoption success.

One critical aspect of developing your migration strategy is to collect application portfolio data and rationalize it into what we refer to as the 6 R's: Rehost, Replatform, Refactor/Re-architect, Repurchase, Retire, and Retain. This is a method for categorizing what is in your environment, what the interdependencies are, the technical complexity to migrate, and how you'll go about migrating each application or set of applications. Using the "6 R" framework outlined below, group your applications into Rehost, Replatform, Refactor/Re-architect, Repurchase, Retire, and Retain. Using this knowledge, you will outline a migration plan for each of the applications in your portfolio. This plan will be iterated on and mature as you progress through the migration, build confidence, learn new capabilities, and better understand your existing estate.

The complexity of migrating existing applications varies depending on considerations such as architecture, existing licensing agreements, and business requirements. For example, migrating a virtualized, service-oriented architecture is at the low complexity
end of the spectrum. A monolithic mainframe is at the high complexity end of the spectrum. Typically, you want to begin with an application on the low complexity end of the spectrum to allow for a quick win, to build team confidence, and to provide a learning experience. You also want to choose an application that has business impact. These strategies will help build momentum.

"The 6 R's": 6 Application Migration Strategies

The 6 most common application migration strategies we see are:

1. Rehost (referred to as "lift and shift"). Move applications without changes. In large-scale legacy migrations, organizations are looking to move quickly to meet business objectives. The majority of these applications are rehosted. GE Oil & Gas found that, even without implementing any cloud optimizations, it could save roughly 30% of its costs by rehosting. Most rehosting can be automated with tools (e.g., AWS VM Import/Export). Some customers prefer to do this manually as they learn how to apply their legacy systems to the new cloud platform. Applications are easier to optimize/re-architect once they're already running in the cloud, partly because your organization will have developed the skills to do so, and partly because the hard part (migrating the application data and traffic) has already been done.

2. Replatform (referred to as "lift, tinker, and shift"). Make a few cloud optimizations to achieve a tangible benefit. You will not change the core architecture of the application. For example, reduce the amount of time you spend managing database instances by migrating to a database-as-a-service platform like Amazon Relational Database Service (Amazon RDS), or migrate your application to a fully managed platform like AWS Elastic Beanstalk. A large media company migrated hundreds of web servers that it ran on premises to AWS. In the process, it moved from WebLogic (a Java application container that requires an expensive license) to
Apache Tomcat, an open-source equivalent. By migrating to AWS, this media company saved millions of dollars in licensing costs and increased savings and agility.

3. Refactor/Re-architect. Reimagine how the application is architected and developed, using cloud-native features. This is driven by a strong business need to add features, scale, or performance that would otherwise be difficult to achieve in the application's existing environment. Are you looking to migrate from a monolithic architecture to a service-oriented (or serverless) architecture to boost agility or improve business continuity? This strategy tends to be the most expensive, but it can also be the most beneficial if you have a good product-market fit.

4. Repurchase. Move from perpetual licenses to a software-as-a-service model. For example, move from a customer relationship management (CRM) system to Salesforce.com, an HR system to Workday, or a content management system (CMS) to Drupal.

5. Retire. Remove applications that are no longer needed. Once you have completed discovery for your environment, ask who owns each application. As much as 10%-20% of an enterprise IT portfolio is no longer useful and can be turned off. These savings can boost your business case, direct your team's attention to the applications people use, and reduce the number of applications you have to secure.

6. Retain (referred to as "revisit"). Keep applications that are critical for the business but that require major refactoring before they can be migrated. You can revisit all applications that fall in this category at a later point in time.

Figure 2: Six most common application migration strategies

Which Migration Strategy is Right for Me?
Choosing the right migration strategy depends on your business drivers for cloud adoption, as well as time considerations, business and financial constraints, and resource requirements. Replatform if you are migrating for cost avoidance and to eliminate the need for a hardware refresh. Figure 3 shows that this strategy involves more effort than a Rehost strategy but less than a Refactor strategy. Rehost the majority of your platform and Refactor later if your data center contract will end in 12 months and you do not want to renew.

Figure 3: Comparison of cloud migration strategies

Consider a phased approach to migrating applications, prioritizing business functionality in the first phase rather than attempting to do it all in one step. In the next phase, optimize applications where the AWS platform can make a notable difference in cost, performance, productivity, or compliance. For example, if you are migrating an application that leverages an Oracle database and your strategy includes replacing Oracle with Aurora PostgreSQL, the best migration approach may be to migrate the application and stabilize it in the migration phase, then execute the database change effort in a subsequent phase. This approach controls risk during the migration phase and focuses on the migration business case and value proposition. There are common objectives that will improve application performance, resilience, and compliance across the portfolio that should be included in every migration. They should be packaged into the migration process for consistent execution.

Your migration strategy should guide your teams to move quickly and independently. Applying project management best practices that include clear budgets, timelines, and business outcomes supports this goal. Your strategy should address the following questions:
- Is there a time sensitivity to the business case or
business driver, for example a data center shutdown or contract expiration?
- Who will operate your AWS environment and your applications? Do you use an outsourced provider today? What operating model would you like to have long term?
- What standards are critical to impose on all applications that you migrate?
- What automation requirements will you impose on applications as a starting point for cloud operations flexibility and speed? Will these requirements be imposed on all applications or a defined subset? How will you impose these standards?

The following are examples:
- We will drive the migration timeline to retire specific facilities and use savings to fund the transformation to cloud computing. Time is very important, but we will consider any changes that can be done quickly and safely while creating immediate savings.
- We will insource core engineering functions that have been historically outsourced. We will look at technology platforms that remove operational barriers and allow us to scale this function.
- Business continuity is a critical driver for our migration. We will take the time during the migration to improve our position. Where application risk and costs are high, we will consider a phased approach: migrate first and optimize in subsequent phases. In these cases, the migration plan must include the second phase.
- For all custom development, we will move to a DevOps model. We will take the time to build the development and release processes, and educate development teams, in each application migration plan matching this pattern.

Understanding your application portfolio is an important step for determining your migration strategy and subsequent migration plan and business case. This strategy does not need to be elaborate, but addressing the questions above helps align the organization and test your operational norms.

Building a Business Case for Migration

IT leaders understand the value that AWS
brings to their organization, including cost savings, operational resilience, productivity, and speed of delivery. Building a clear and compelling migration business case provides your organization's leadership with a data-driven rationale to support the initiative. A migration business case has four categories: 1) run cost analysis, 2) cost of change, 3) labor productivity, and 4) business value. A business case for migration addresses the following questions:
- What is the future expected IT cost on AWS versus the existing (base) cost?
- What are the estimated migration investment costs?
- What is the expected ROI, and when will the project be cash flow positive?
- What are the business benefits beyond cost savings?
- How will using AWS improve your ability to respond to business changes?

The following table outlines each cost or value category.

Table 3: Business case cost/value categorization

Run Cost Analysis:
- Total Cost of Ownership (TCO) comparison of run costs on AWS post-migration vs. the current operating model
- Impact of AWS purchasing/pricing options (Reserved Instances, volume discounts)
- Impacts of AWS discounts (Enterprise Discount Program, service credits, e.g., Migration Acceleration Program incentives)

Cost of Change:
- Migration planning/consulting costs
- Compelling events (e.g., planned refresh, data center lease renewal, divestiture)
- Change management (e.g., training, establishment of a Cloud Center of Excellence, governance and operations model)
- Application migration cost estimate, parallel environments cost

Labor Productivity:
- Estimate of reduction in number of hours spent conducting legacy operational activities (requisitioning, racking, patching)
- Productivity gains from automation
- Developer productivity

Business Value:
- Agility (faster time to deploy, flexibility to scale up/scale down, mergers and acquisitions, global
expansion)
- Cost avoidance (e.g., server refresh, maintenance contracts)
- Risk mitigation (e.g., resilience for disaster recovery or performance)
- Decommissioned asset reductions

For an enterprise Oil & Gas customer, cost savings was a primary migration driver. This customer realized additional financial and overall business benefits through the course of migrating 300+ applications to AWS. For example, this customer was able to increase business agility and operational resilience, improve workforce productivity, and decrease operational costs. The data from each value category, shown in the following table, provides a compelling case for migration.

Table 4: A case for migration

Drafting Your Business Case

Your business case will go through several phases of evolution: directional, refined, and detailed. The directional business case uses an estimate for the number of servers and rough order of magnitude (ROM) assumptions around server utilization. The purpose is to gain early buy-in, allowing budgets to be assigned and resources applied. You can develop a refined business case when you have additional data about the scope of the migration and workloads. The initial discovery process refines the scope of your migration and business case. The detailed business case requires a deep discovery of the on-premises environment and server utilization. We recommend using an automated discovery tool for deep discovery. This is discussed later in the Application Discovery section.

Items to Consider

In building your business case, consider the following items:
- Right-size mapping provides estimates of the AWS services (compute, storage, etc.) required to run the existing applications and processes on AWS. It includes capacity views (as provisioned) and utilization views (based on actual use). This is a significant part of the value proposition, especially in overprovisioned virtualized data centers.
- Extend right-size mapping to consider
resources that are not required full time, for example turning off development and test servers when not in use, and reducing run costs.
- Identify early candidates for migration to establish migration processes and develop experience in the migration readiness and planning phase. This early analysis of the application discovery data will help you determine run rate cost, migration cost, resource requirements, and timelines for the migration.

AWS has a series of tools and processes that can help you develop your business case for a migration. The AWS Simple Monthly Calculator can provide directional business case inputs,3 while the AWS Total Cost of Ownership (TCO) calculators can provide a more refined business case.4 Additionally, AWS has tools that can help you estimate the cost of migration.

People and Organization

It is important to develop a critical mass of people with production AWS experience as you prepare for a large migration. Establish operational processes and form a Cloud Center of Excellence (CCoE) that's dedicated to mobilizing the appropriate resources. The CCoE will lead your company through organizational and business transformations over the course of the migration effort. A CCoE institutionalizes best practices, governance, standards, and automation, and drives change throughout the organization. When done well, a CCoE inspires a cultural shift to innovation and a change-is-normal mindset.

Organizing Your Company's Cloud Teams

An effective CCoE team evolves over time in size, makeup, function, and purpose. Long-term and short-term objectives, as well as key operating model decisions, will require adjustments to your team. In the early stages of cloud adoption, team development begins as a small, informal group connected by a shared interest: experimentation with cloud implementation. As the cloud initiative grows and the need for a more formalized structure increases, it becomes beneficial to establish
a CCoE dedicated to evangelizing the value of cloud. While the CCoE establishes best practices, methods, and governance for your evolving technology operations, additional small cloud teams form. These small teams migrate candidate applications and application groupings, commonly referred to as migration waves, to the cloud environment. The CCoE directs the operating parameters of the migration teams, and both the CCoE and migration teams provide feedback. Collectively, lessons are learned and documented, improving efficiency and confidence through hands-on experience.

Creating a Cloud Center of Excellence

The following are guiding principles for the creation of a CCoE:
- The CCoE structure will evolve and change as your organization transforms. Diverse, cross-functional representation is key.
- Treat the cloud as your product and the application team leaders as your customers. Drive enablement, not command and control.
- Build company culture into everything you do.
- Organizational change management is central to business transformation. Use intentional and targeted organizational change management to change company culture and norms.
- Embrace a change-as-normal mindset. Change of applications, IT systems, and business direction is expected.
- Operating model decisions will determine how people fill roles that achieve business outcomes.

Structure the CCoE to Prepare for Migration at Scale

Designing a CCoE to include people from across impacted business segments, with cross-functional skills and experiences, is important for successful migration at scale: you build subject matter expertise, achieve buy-in, earn trust across your organization, and establish effective guidelines that balance your business requirements. There is no single organizational structure that works for everyone. The following guidelines will help you design a CCoE that represents your company. A CCoE is comprised of two functional groups: the Cloud Business
Office (CBO) and Cloud Engineering (see Figure 4). The functions of each group will help you determine who to include in each group and in the larger CCoE.

The CBO owns making sure that the cloud services meet the needs of your internal customer business services. Business services, and the applications that support them, consume the cloud services provided by IT. IT should adopt a customer-centric model toward business application owners. This tenet represents a shift for most organizations. It is an important consideration when developing your cloud operating model, CCoE, and cloud team approach. The CBO owns functions such as organizational change management, stakeholder requirements, governance, and cost optimization. It develops user requirements and onboards new applications and users onto the cloud. It also handles vendor management, internal marketing, communications, and status updates to users. You will select IT leadership responsible for the cloud service vision, organizational change management, human resources, financial management, vendor management, and enterprise architecture. One individual may represent multiple functional areas, or multiple individuals may represent one functional area.

The Cloud Engineering group owns functions such as infrastructure automation, operational tools and processes, security tooling and controls, and migration landing zones. They optimize the speed at which a business unit can access cloud resources and optimize use patterns. The Cloud Engineering group focuses on performance, availability, and security. The following figure shows the functional groups that require representation within your company's CCoE.

Figure 4: Functional organization of a CCoE

Migration Readiness and Planning

Migration Readiness and Planning (MRP) is a method that consists of tools, processes, and best practices to prepare an enterprise for cloud migration. The MRP method aligns to the AWS Cloud Adoption
Framework and is execution driven. MRP describes a specific program that AWS Professional Services offers; however, we highlight the main topic areas and key concepts below.

Assessing Migration Readiness

The AWS Cloud Adoption Framework (AWS CAF) is a framework for analyzing your IT environment. Using this framework lets you determine your cloud migration readiness. Each perspective of the AWS CAF provides ways of looking at your environment through different lenses to make sure all areas of your business are addressed. Being ready for a large migration initiative requires preparation across several key areas.

Items to consider:

- Have you clearly defined the scope and the business case for the migration?
- Have you evaluated the environment and applications in scope through the lenses of the AWS CAF?
- Is your virtual private cloud (VPC) secure, and can it act as a landing zone for all applications in scope?
- Have your operations and employee skills been reviewed and updated to accommodate the change?
- Do you (or does a partner) have the experience necessary to move the tech stacks that are in scope?
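The readiness questions above can be turned into a simple self-assessment sketch. This is a hypothetical illustration only: the perspective names come from the AWS CAF, but the scoring scale, threshold, and example scores are invented for this sketch; a real Migration Readiness Assessment is an interactive, cross-group exercise rather than a script.

```python
# Hypothetical sketch: flagging readiness gaps across the six AWS CAF
# perspectives. Scores and the threshold are illustrative assumptions.

CAF_PERSPECTIVES = [
    "Business", "People", "Governance", "Platform", "Security", "Operations",
]

def readiness_gaps(scores, threshold=3):
    """Return the perspectives whose self-assessed score (1-5)
    falls below the threshold, i.e. the readiness gaps to fill."""
    return [p for p in CAF_PERSPECTIVES if scores.get(p, 0) < threshold]

# Example self-assessment (1 = not ready, 5 = fully ready) -- invented data.
scores = {
    "Business": 4, "People": 2, "Governance": 3,
    "Platform": 4, "Security": 3, "Operations": 2,
}

for gap in readiness_gaps(scores):
    print(f"Readiness gap: {gap}")  # prints People and Operations
```

A list like this is one way to decide where to focus remediation work before a large migration effort; the MRA heat map described below serves the same purpose with much richer inputs.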
AWS has developed a set of tools and processes to help you assess your organization's current migration readiness state in each of the AWS CAF perspectives. The Migration Readiness Assessment (MRA) process identifies readiness gaps and makes recommendations to fill those gaps in preparation for a large migration effort. The MRA is completed interactively, in a cross-group setting, involving key stakeholders and team members from across the IT organization to build a common view of the current state. You may have representatives from IT leadership, networking, operations, security, risk and compliance, application development, enterprise architecture, and your CCoE or CBO. The MRA output includes actions, next steps, and visuals such as a heat map (see Figure 5). The MRA is available through AWS or an AWS Migration Partner.

Figure 5: Migration Readiness Assessment heat map

Application Discovery

Application Discovery is the process of understanding your on-premises environment: determining what physical and virtual servers exist and what applications are running on those servers. You will need to take stock of your existing on-premises portfolio of applications, servers, and other resources to build your business case and plan your migration. You can categorize your organization's on-premises environment based on operating system mix, application patterns, and business scenarios. This categorization can be simple to start. For example, you may group applications based on an end-of-life operating system, or by applications dependent on a specific database or subsystem. Application Discovery will help you develop a strategic approach for each group of applications.

Application Discovery provides you with the required data for project planning and cost estimation. It includes data collection from multiple sources. A common source is an existing Configuration Management Database (CMDB). The CMDB helps with high-level analysis, but
often lacks fidelity. For example, performance and utilization data are needed to pair on-premises resources with the appropriate AWS resources (for example, matching Amazon EC2 instance types). Manually performing discovery can take weeks or months, so we recommend taking advantage of automated discovery tools. These tools can automate the discovery of all the applications and supporting infrastructure, including sizing, performance, utilization, and dependencies.

Items to consider:

- We recommend using an automated discovery tool.
- Your environment will change over time. Plan how to keep your data current by continuously running your automated discovery tool.
- It may be useful to do an initial application discovery during business case development to accurately reflect the scope.

Discovery Tools

Discovery tools are available in the AWS Marketplace under the Migration category. Additionally, AWS has built the Application Discovery Service (ADS). ADS discovers server inventories and performance characteristics through either an appliance connector for virtual servers or agents installed on physical or virtual hosts.

An application discovery tool can:

- Automatically discover the inventory of infrastructure and applications running in your data center, and maintain the inventory by continually monitoring your systems.
- Help determine how applications depend on each other or on underlying infrastructure.
- Inventory versions of operating systems and services for analysis and planning.
- Measure applications and processes running on hosts to determine performance baselines and optimization opportunities.
- Provide a means to categorize applications and servers and describe them in a way that's meaningful to the people who will be involved in the migration project.

You can use these tools to build a high-fidelity, real-time model of your
applications and their dependencies. This automates the time-consuming process of discovery, data collection, and analysis.

Items to consider:

- An automated discovery tool can save time and energy when bringing a CMDB up to date.
- Keeping the inventory up to date is key as the project progresses, and a tool helps make this less painful.
- Discovery tools on the market each have their own special purpose or capability, so analyzing this against your needs will help you select the right tool for your environment.

Application Portfolio Analysis

Application portfolio analysis takes the application discovery data and then begins grouping applications based on patterns in the portfolio. It identifies the order of migration and the migration strategy (that is, which of the 6 R's outlined on page 9 will be used) for migrating each pattern. The result of this analysis is a broad categorization of resources aligned by common traits. Special cases that need special handling may also be identified. Examples of this high-level analysis are:

- The majority of the servers are Windows-based, with a consistent standard OS version. Some of the servers might require an OS upgrade.
- Distribution of databases across multiple database platforms: 80% of the databases are Oracle and 20% are SQL Server.
- Grouping of applications and servers by business unit: 30% marketing and sales applications, 20% HR applications, 40% internal productivity applications, and 10% infrastructure management applications.
- Grouping of resources by type of environment: 50% production, 30% test, and 20% development.
- Scoring and prioritizing based on different factors: opportunity for cost saving, business criticality of the application, utilization of servers, and complexity of migration.
- Grouping based on the 6 R's: 30% of the portfolio could use a rehost pattern, 30% require some level of replatforming changes, 30% require application work (rearchitecture) to
migrate, and 10% can be retired.

The data-driven insights you get from the application discovery work become the foundation for migration planning as you move into the migration readiness phase of your project.

Migration Planning

The primary objective of the migration plan is to lead the overall migration effort. This includes managing the scope, schedule, resource plan, issues and risks, and coordination and communication to all stakeholders. Working on the plan early helps organize the project as multiple teams migrate multiple applications. The migration plan considers critical factors such as the migration order for workloads, when resources are needed, and how to track the progress of the migration. We recommend your team use agile delivery methodologies, project control best practices, a robust business communication plan, and a well-defined delivery approach.

Recommended migration plan activities include:

- Review project management methods, tools, and capabilities to assess any gaps.
- Define the project management methods and tools to be used during the migration.
- Define and create the Migration Project Charter and Communication Plan, including reporting and escalation procedures.
- Develop a project plan, a risk/mitigation log, and a roles and responsibilities matrix (e.g., RACI) to manage the risks that occur during the project and identify ownership for each resource involved.
- Procure and deploy project management tools to support the delivery of the project.
- Identify key resources and leads for each of the migration work streams defined in this section.
- Facilitate the coordination and activities outlined in the plan.
- Outline the resources, timelines, and cost to migrate the targeted environment to AWS.

Technical Planning

Planning a migration goes beyond cost, schedule, and scope. It includes taking the application portfolio analysis data and building an initial backlog of prioritized applications. Build the backlog by conducting
a deep analysis of your portfolio, gathering data on use patterns. A small team can lead this process, often drawn from the enterprise architecture team, which is part of your CCoE. The team analyzes and prioritizes the application portfolio and gathers information about the current architecture for each application. They develop the future architecture and capture workload details to execute a streamlined migration.

It is not important to get through every application before beginning execution of the plan. To be agile, do a deep analysis of the first two to three prioritized applications and then begin the migration. Continue deeper analyses of the next applications while the first applications are being migrated. An iterative process helps you avoid feeling overwhelmed by the scale of the project, and keeps the initial design plans from becoming dated before they are used.

Organize applications into migration patterns and into move groups to determine the number of migration teams, the cost, and the migration project timeline. Maintain a backlog of applications (about three 2-week sprints) for each migration team in the overall project plan. As you migrate, you gain technical and organizational expertise that you build back into your planning and execution processes. You will be able to take advantage of opportunities to optimize as you progress through your application portfolio. The iterative process allows the project to scale to support migration teams structured by pattern, business unit, geography, or other dimensions that align to your organization and project scope.

A high-fidelity model that provides accurate and current application and infrastructure data is critical for making performance and dependency decisions during your migration phase. Having a well-informed plan with good data is one of the key enablers for migrating at speed.

Items to consider:

- Application discovery and portfolio analysis data are important for categorization,
prioritization, and planning at this stage.
- An agile approach allows you to use this data for the migration before it becomes obsolete.
- Iteration helps migrations continue as the detailed plan evolves with new learnings.

The Virtual Private Cloud Environment

The VPC environment is an integrated collection of AWS accounts and configurations where your applications will run. It includes third-party solutions from the AWS Marketplace that address requirements not directly handled by AWS. You can implement the AWS CAF Security, Operations, and Platform Perspectives to migrate and operate in the cloud environment securely and efficiently. They are covered together in this section.

Security

Building security into your VPC architecture will save you time and will improve your company's security posture. Cloud security at AWS is the highest priority. AWS customers benefit from AWS Cloud data centers and network architectures that are built to meet the requirements of the most security-sensitive organizations. A compelling advantage of the AWS Cloud is that it allows you to scale and innovate while maintaining a secure environment.

The AWS CAF Security Perspective outlines a structured approach to help you build a foundation of security, risk, and compliance capabilities that will accelerate your readiness and planning for a migration project. To learn more about cloud security, see the AWS security whitepapers.5 The AWS CAF Security Perspective details how to build and control a secure VPC in the AWS Cloud. Figure 6 illustrates the AWS CAF Security Perspective capabilities.

Figure 6: AWS CAF Security Perspective Capabilities

The AWS CAF Security Perspective is composed of 10 themes:

- Five core security themes – Fundamental themes that manage risk, as well as progress by functions outside of information security: identity and access management, logging and monitoring, infrastructure security, data protection, and
incident response.
- Five augmenting security themes – Themes that drive continuous operational excellence through availability, automation, and audit: resilience, compliance validation, secure continuous integration/continuous deployment (CI/CD), configuration and vulnerability analysis, and security big data analytics.

By using the ten themes of the Security Perspective, you can quickly iterate and mature security capabilities on AWS while maintaining the flexibility to adapt to business pace and demand.

Items to consider:

- Read the AWS security whitepapers for information on security best practices.
- Engage with AWS to run security workshops to speed up your teams' understanding and implementation.
- Read the AWS Well-Architected Framework and the AWS Well-Architected Security Pillar whitepapers for information on how to architect a secure environment.6 7

Operations

The AWS CAF Operations Perspective describes the focus areas used to run, use, operate, and recover IT workloads. Your operations group defines how day-to-day, quarter-to-quarter, and year-to-year business is conducted. IT operations must align with and support the operations of your business. The Operations Perspective defines current operating procedures and identifies the process changes and training needed for successful cloud adoption.

Figure 7: AWS CAF Operations Perspective Capabilities

The Operations Perspective helps you examine how you currently operate and how you would like to operate in the future. Operational decisions relate to the specific applications being migrated. Determine the appropriate Cloud Operating Model (COM) for a particular application, or set of applications, when envisioning the future state. To learn more about cloud operations, see the AWS operations whitepapers8 and the AWS Well-Architected Operational Excellence Pillar whitepaper.9

There are different uses and users for applications across your business. Products and services will be consumed in different patterns across your organization; therefore, you will have multiple modes of operating in a cloud environment. When planning for your migration, you will first define the use cases and actors, and then determine how to deliver the solution. To build an organization that is capable of delivering and consuming cloud services, create a Cloud Services Organization. Cloud organizational constructs such as a CCoE, a CBO, and Cloud Shared Services teams all fall within this Cloud Services Organization. The last piece of the COM is the set of capabilities, such as ticketing, workflows, service catalogs, and pipelines, that are required to deliver and consume cloud services. These capabilities help the Cloud Services Organization function effectively.

Items to consider:

- Building a Cloud Center of Excellence early in the process centralizes best practices.
- Recognize that your organization will have multiple operating models (e.g., R&D applications are different from back-office applications).
- A managed service, such as AWS Managed Services,10 can reduce the time needed to solve operational problems in the early phases. It lets your team focus on improving the migrated applications.

Platform

The AWS CAF Platform Perspective includes principles and patterns for implementing new solutions on the cloud and migrating on-premises workloads to the cloud. IT architects use models to understand and communicate the design of IT systems and their relationships. The Platform Perspective capabilities help you describe the architecture of the target-state environment in detail.

Figure 8: AWS CAF Platform Perspective Capabilities

The Platform work stream provides you with proven implementation guidelines. You can repeatedly set up AWS environments that can scale as you deploy new workloads or migrate existing ones. You can establish key platform components
that support flexible baseline AWS environments. These environments can accommodate changing business requirements and workloads. Once in place, your platform can simplify and streamline the decision-making involved in configuring an AWS infrastructure. The following are key elements of the platform work stream:

AWS landing zone – Provides an initial structure and predefined configurations for AWS accounts, networks, identity and billing frameworks, and customer-selectable optional packages.

Account structure – Defines an initial multi-account structure, and preconfigured baseline security, that can be easily adopted into your organizational model.

Network structure – Provides baseline network configurations that support the most common patterns for network isolation, implements baseline network connectivity between AWS and on-premises networks, and provides user-configurable options for network access and administration.

Predefined identity and billing frameworks – Provide frameworks for cross-account user identity and access management (based on Microsoft Active Directory) and centralized cost management and reporting.

Predefined user-selectable packages – Provide a series of user-selectable packages to integrate AWS-related logs into popular reporting tools, integrate with the AWS Service Catalog, and automate infrastructure. These packages offer third-party tools to help you manage and monitor AWS usage and costs.

Items to consider:

- If your business is new to AWS, consider a managed service provider, such as AWS Managed Services, to build out and manage the platform.
- Identify account structures up front that allow for effective bill-back processes.
- You will have both on-premises and cloud servers working together, at least initially. Consider a hybrid cloud solution.

Migrating

First Migrations – Build Experience

MRP develops core operations, security, and platform capabilities to operate at scale. You will build
confidence and momentum for your migration project. Running applications in the new operating model and environment will help you mature these capabilities.

It is important to develop migration skills and experience early to help you make informed choices about your workload patterns. We recommend migrating three to five applications that are representative of the common migration patterns in the portfolio. One example is rehosting an application using existing server replication tools. Other examples are replatforming an application to have its database running on Amazon RDS, or migrating an application that has internet-facing requirements and validating the controls and services involved. Choose the applications before you start the MRP in order to develop an approach and schedule that accommodates your selections.

Working through these initial migrations builds confidence and experience. It informs the migration plan with the patterns and tool choices that fit your organization's needs, and it provides validation and testing of the operational and security processes.

Items to consider:

- Identify patterns (e.g., common architectures, technology stacks, etc.) in the portfolio to create a list of application groupings based on common patterns. This creates a common process for group migrations.
- Your first three to five applications should be representative of common patterns in your portfolio. They will determine the process for moving each pattern in the mass migration to follow.

Migration Execution

In the early migrations, you tested specific migration patterns and your CCoE gained experience. Now you will scale teams to support your initial wave of migrations. The core teams expand to form migration sprint teams that operate in parallel. This is useful for rehost and replatform patterns that can use automation and tooling to accelerate application migration. In the next section, we will cover
the migration factory process and expand on the agile team model.

Application Migration Process

Specific patterns with larger volumes, such as rehosting, offer the opportunity to define methods and tools for moving data and application components. However, every application in the execution phase of a migration follows the same six-step process: Discover, Design, Build, Integrate, Validate, and Cutover.

Discover

In the Discover stage, the application portfolio analysis and planning backlog are used to understand the current and future architectures. If needed, more data is collected about the application. There are two categories of information: Discover Business Information (DBI) and Discover Technical Information (DTI). Examples of DBI are application owner, roadmap, cutover plans, and operation runbooks. Examples of DTI are server statistics, connectivity, process information, and data flow. This information can be captured via tools and confirmed with the application owner. The data is then analyzed, and a migration plan for that application is confirmed with both the sprint team and the application owner. In the case of rehost patterns, this is done in groups that match the patterns. The portfolio discovery and planning process provides this information.

Design

In the Design stage, the target state is developed and documented. The target state includes the AWS architecture, the application architecture, and the supporting operational components and processes. A member of the sprint team and the engineering team uses the information collected during the Discover stage to design the application for the targeted AWS environment. This work depends on the migration pattern and includes an infrastructure architecture document that outlines what services to use. The document also includes information about data flow, foundational elements, monitoring design, and how the application will consume external resources.

Build

In the Build stage, the
migration design created during the Design stage is executed. The required people, tools, and reusable templates are identified and given to the migration teams. A migration team is selected based on the migration strategy chosen for the application. The team uses these predefined methods and tools to migrate to AWS, and asserts basic validations against the AWS-hosted application.

Integrate

In the Integrate stage, your migration team makes the external connections for the application. Your team works with external service providers and consumers of the application to make the connections or service calls to the application. The team then runs the application to demonstrate functionality and operation before the application is ready for the Validate stage.

Validate

In the Validate stage, each application goes through a series of specific tests (that is, build verification, functional, performance, disaster recovery, and business continuity tests) before being finalized and released for the Cutover stage. Your teams evaluate release management, verify rollout and rollback plans, and evaluate performance baselines. Rollback procedures are defined per application within a rollback playbook, which consists of an operations communication plan for users and defines integration, application, and performance impacts. You complete business acceptance criteria by running parallel testing for pre-migrated and migrated applications.

Cutover

In the Cutover stage, you execute the cutover plan that was agreed upon by the migration team and the application owners. Perform a user acceptance test at this stage to support a successful cutover. Use the rollback procedure outlined in the cutover plan if the migration is not successful.

Items to consider:

- Make sure the team is familiar with agile practices.
- An iterative approach maximizes immediate requirements gathering. You will not do up-front work that will be out of date by the time
you are ready to use it.
- The CCoE plays a key role in sharing best practices and lessons learned across the different migration teams.

Team Models

Core migration teams persist through the project as part of your new IT operating model. These teams each have their own areas of specialization.

Core Cloud Teams

The Core Cloud teams work across the migration teams. They act as a central hub for managing projects, sharing lessons learned, coordinating resources, and building common solutions. These teams include:

Cloud Business Office (Program Control) – Drives the program; manages resources and budgets; manages and reports risk; and drives communication and change management. Typically this team reports to the overall migration or cloud lead and becomes the program office for your migrations.

Cloud Engineering & Operations – Builds and validates the fundamental components that ensure development, test, and production environments are scalable, automated, maintained, and monitored. This team also prepares landing zones as needed for migrations.

Innovation – Develops repeatable solutions that will expedite migrations, in coordination with the platform engineering, migration, and transition teams. They work on larger or more complex technical issues for the migration teams.

Portfolio Discovery & Planning – Accelerates downstream activities by executing application discovery and optimizing application backlogs. They work to eliminate objections and minimize wasted effort.

Migration Factory Teams

In the scale-out phase of a migration project, multiple teams operate concurrently. Some support a large volume of migrations in the rehost and minor replatform patterns. These teams are referred to as migration factory teams. Your migration factory teams increase the speed of execution of your migration plan. Between 20% and 50% of an enterprise application portfolio consists of repeated patterns that can be optimized by a factory approach.
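The repeated-pattern share that makes a factory approach worthwhile can be estimated directly from portfolio analysis data. The sketch below is a hypothetical illustration: the application names, their 6 R's labels, and the choice of which patterns count as "factory" work (rehost and replatform) are invented assumptions, not output of any AWS tool.

```python
# Hypothetical sketch: estimating how much of an application portfolio
# fits repeatable "factory" patterns. All data here is invented.

from collections import Counter

# Each application tagged with its chosen 6 R's migration strategy.
portfolio = {
    "crm": "rehost", "intranet": "rehost", "hr-portal": "replatform",
    "billing": "refactor", "legacy-fax": "retire", "wiki": "rehost",
}

# Assumption: factory teams handle rehost and minor replatform work.
FACTORY_PATTERNS = {"rehost", "replatform"}

def factory_share(portfolio):
    """Fraction of the portfolio suited to migration factory teams."""
    counts = Counter(portfolio.values())
    eligible = sum(counts[p] for p in FACTORY_PATTERNS)
    return eligible / len(portfolio)

share = factory_share(portfolio)
print(f"{share:.0%} of the portfolio fits a factory approach")  # prints 67%
```

A share inside the 20–50% range cited above (or higher, as in this toy example) suggests that standing up dedicated rehost and replatform factory teams will pay off; a much lower share points toward release-cycle work by the application owners instead.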
This is an agile delivery model, and it is important to create a release management plan. Your plan should be based on current workloads and information generated during the MRP phase. You should optimize it continually for future migration waves and future migration teams. We recommend that you maintain a backlog of applications that supports three sprints for each team. This allows you to reprioritize applications if you encounter problems that affect the schedule.

Larger and more complex applications often follow the refactor/rearchitect pattern. They are generally conducted in planned release cycles by the application owner. The factory teams are self-sufficient and include five to six cross-functional roles, including operations, business analysts and owners, migration engineers, developers, and DevOps professionals. The following are examples of migration factory teams that are focused on specific migration patterns:

Rehost migration team – Migrates high-volume, low-complexity applications that don't require material change. This team leverages migration automation tools. This approach is integrated into patch-and-release management processes.

Replatform migration team – Designs and migrates applications that require a change of platform or a repeatable change in application architecture.

Refactor/rearchitect migration team – Designs and migrates complex or core business applications that have many dependencies. In most cases, development and technical operations teams support this business capability. The migration becomes a release cycle, or a few release cycles, within the plan for that team. There can be many of these in flight, and the role of the CBO is to track timing, risks, and issues until migration completion. This team owns the application migration process.

Items to consider:

- Perform a portfolio analysis to understand common patterns across all applications. This can help build repeatable work for
the factory teams to execute efficiently.
- Use a partner to help with resource constraints while your team supports regular business activities.
- AWS and the AWS Partner Network (APN) ecosystem can bring specialized resources for specific topics such as databases, application development, and migration tooling.

Conclusion

We have introduced both the preparation and execution steps required for large migrations to the cloud. Analyzing your current state, building a plan, and iterating the work breaks a large migration into manageable activities for efficient execution. Looking at a migration as an organizational change project empowers you to build buy-in and maintain communications through each stage of the process. Build a business case and refine the return on investment as the project progresses.

Use the AWS Cloud Adoption Framework to analyze your environment through the different Perspectives: Business, People, Governance, Platform, Security, and Operations. This gives you a complete view of which areas to improve before moving forward with a large migration effort. Use a migration factory construct and iterate the migration patterns to create an optimal move to the AWS Cloud.

Today, migrating to the cloud has moved from asking "why" to asking "when." Building an effective migration strategy and plan will change your response to "now!" Migration is just the beginning of what is possible. Once you have migrated an application, consider your migration experience as a capability that you can use for the optimization phases of that application. You will have a current architecture and a future design. You will implement, test, and validate changes. You will cut over and go live. You now have a new IT capability that can drive speed, agility, and business value for your organization and your company.

Contributors

The following individuals and organizations contributed to this document:

- AWS Professional Services, Global Migrations
Practice

Resources

- AWS Migration Competency and Partners: https://aws.amazon.com/partners/find
- AWS Whitepapers: https://aws.amazon.com/whitepapers
- AWS Migration Acceleration Program: https://aws.amazon.com/migration-acceleration-program/
- AWS Webinar: How to Manage Organizational Change and Cultural Impact During a Cloud Transformation: https://youtu.be/2WmDQG3vp0c

Additional Information

Articles by Stephen Orban, Head of Enterprise Strategy at AWS, on cloud migration:

- http://amzn.to/considering-mass-migration
- http://amzn.to/migration-process
- http://amzn.to/migration-strategies
- http://amzn.to/cloud-native-vs-lift-and-shift
- http://amzn.to/migrate-mainframe-to-cloud

FAQ

1. How do I build the right business case?

Your business case should be driven by your organizational KPIs and common drivers such as operational costs, workforce productivity, cost avoidance, operational resilience, and business agility.

2. How do I accurately assess my environment? How do I learn what I don't know about my enterprise network topology and application portfolio, and create a migration plan?

Consider the volume of resources used by each application, and automate the assessment process to confirm that it's done rapidly and accurately. Assessing your environment manually is a time-consuming process, and it exposes your organization to human error. Automating the process will help you gain insight into what you don't know, and it will help you more clearly understand and define these uncertainties so they can be factored into your migration strategy.

3. How do I identify and evaluate the right partners to help me?

Details on Partner offerings can be found at:

- AWS Migration Partner Solutions11
- Migration Solutions in AWS Marketplace12

4. How do I estimate the cost of a large transition like this?
The AWS Total Cost of Ownership Calculator can compare how much it costs to run your applications in an on-premises or colocation environment to what it costs on AWS.13
5. How long will the migration process take to complete?
Enterprise migrations that are completed within 18 months generate the greatest ROI. The duration of a migration depends on scope and resources.
6. How do I handle my legacy applications?
Consider taking an incremental approach to your migration by determining which of your legacy applications can be moved most easily. Move these applications to the cloud first. For legacy applications that require a more complicated approach, you can develop an effective plan for migration.
7. How do I accelerate the migration effort to realize the business and technology benefits more quickly?
Automate the migration process as much as possible. Using migration tools from AWS and APN Partners is the best way to accelerate the migration effort.

Glossary
• Application Portfolio – A collection of detailed information about each application of an organization, including the cost to build and maintain the application and its business value.
• AWS Cloud Adoption Framework (AWS CAF) – A structure for developing an efficient and effective plan for organizations to successfully move to the cloud.
• Cloud Center of Excellence (CCoE) – A diverse team of key members who play the primary role in establishing the migration timeline and in evangelizing about moving to the cloud.
• Landing Zone – The initial destination area established on AWS where the first applications operate from to ensure they have been migrated successfully.
• Migration Acceleration Program (MAP) – A program designed to provide consulting support and help enterprises who are migrating to the cloud realize the business benefits of moving to the cloud.
• Migration at Scale – The stage in the migration process when the majority of the portfolio is moved to
the cloud in waves, with more applications moved at a faster rate in each wave.
• Migration Method or Migration Process – Refers to Readiness, Mobilization, Migration at Scale, and Operate.
• Migration Readiness and Planning (MRP) – A preplanning service to prepare for migration, in which the resources, processes, and team members who will be engaged in carrying out a successful migration to AWS are identified. Part of the Readiness stage of the migration process.
• Migration Readiness Assessment (MRA) – A tool to determine level of commitment, competence, and capability.
• Mobilization – The stage in the migration process in which roles and responsibilities are assigned, an in-depth portfolio assessment is conducted, and a small number of select applications is migrated to the cloud.
• Operate – The stage in the migration process when most of the portfolio has been migrated to the cloud and is optimized for peak performance.
• Readiness – The initial stage in the migration process when the opportunity is evaluated, the business case is confirmed, and organizational alignment is achieved for migrating to the cloud.
• Stage – An individual topic of the migration process. Readiness, Mobilization, Migration at Scale, and Operate are all stages in the migration process.

Notes
1 https://aws.amazon.com/cloud-migration/
2 https://d0.awsstatic.com/whitepapers/aws_cloud_adoption_framework.pdf
3 https://calculator.s3.amazonaws.com/index.html
4 https://aws.amazon.com/tco-calculator/
5 https://aws.amazon.com/whitepapers/#security
6 https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf
7 https://d1.awsstatic.com/whitepapers/architecture/AWS-Security-Pillar.pdf
8 https://aws.amazon.com/whitepapers/#operations
9 https://d1.awsstatic.com/whitepapers/architecture/AWS-Operational-Excellence-Pillar.pdf
10 https://aws.amazon.com/managed-services/
11
https://aws.amazon.com/migration/partner-solutions/
12 https://aws.amazon.com/marketplace/search/results?searchTerms=migration&page=1&ref_=nav_search_box&x=0&y=0
13 https://awstcocalculator.com/

Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond
First published March 2019; updated April 2, 2021

This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/aws-operational-resilience/aws-operational-resilience.html

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only; (b) represents current AWS product offerings and practices, which are subject to change without notice; and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
What does operational resilience mean at AWS?
Operational resilience is a shared responsibility
How AWS maintains operational resilience and continuity of service
Incident management
Customers can achieve and test resiliency on AWS
Starting with first principles
From design principles to implementation
Assurance mechanisms
Independent third-party verification
Direct assurance for customers
Document revisions

Abstract
The purpose of this paper is to describe how Amazon Web Services (AWS) and our customers in the financial services industry achieve operational resilience using AWS services. The primary audience of this paper is organizations with an interest in how AWS and our financial services customers can operate services in the face of constant change, ranging from minor weather events to cyber issues.

Introduction
AWS provides information technology (IT) services and building blocks that all types of businesses, public authorities, universities, and individuals utilize to become more secure, innovative, and responsive to their own needs and the needs of their customers. AWS offers IT services in categories ranging from compute, storage, database, and networking to artificial intelligence and machine learning. AWS standardizes its services and makes them available to all customers, including financial institutions. Across the world, financial institutions have used AWS services to build their own applications for mobile banking,
regulatory reporting, and market analysis. AWS and the financial services industry share a common interest in maintaining operational resilience, that is, the ability to provide continuous service despite disruption. Continuity of service, especially for critical economic functions, is a key prerequisite for financial stability. AWS recognizes that financial institutions which use AWS services need to comply with sector-specific regulatory obligations and internal requirements regarding operational resilience. These obligations and requirements are found, inter alia, in IT guidelines1 and cyber resilience guidance.2 Financial institution customers are able to rely on AWS to provide resilient infrastructure and services while at the same time designing their applications in a manner that meets regulatory and compliance obligations. This dual approach to operational resilience is something that we call “shared responsibility.”

What does operational resilience mean at AWS?
Operational resilience is the ability to provide continuous service through people, processes, and technology that are aware of and adaptive to constant change. It is a real-time, execution-oriented norm embedded in the culture of AWS that is distinct from traditional approaches in Business Continuity, Disaster Recovery, and Crisis Management, which rely primarily on centralized, hierarchical programs focused on documentation development and maintenance.

Operational resilience is a shared responsibility
AWS is responsible for ensuring that the services used by our customers (the building blocks for their applications) are continuously available, as well as ensuring that we are prepared to handle a wide range of events that could affect our infrastructure. In this paper, we also explore customers’ responsibility for operational resilience: how customers can design, deploy, and test their applications on AWS to achieve the availability and resiliency they need, including for mission-critical applications that require almost no downtime. Those kinds of applications require that AWS infrastructure and services are available when customers need them, even upon the occurrence of a disruption. As discussed below, customers are able to use AWS’s services to design applications that meet this standard and provide a level of security and resilience that we consider is greater than what existing on-premises IT environments can offer. Finally, given the importance of operational resilience to our customers, this paper explores the variety of mechanisms AWS offers to customers to demonstrate assurance.3

How AWS maintains operational resilience and continuity of service
AWS builds to guard against outages and incidents, and accounts for them in the design of AWS services, so when disruptions do occur, their impact on customers and the continuity of services is as minimal as possible. To avoid single points of failure, AWS minimizes interconnectedness within our global infrastructure. AWS’s global infrastructure is geographically dispersed over five continents. It is composed of 20 geographic Regions, which are composed of 61 Availability Zones (AZs), which in turn are composed of data centers.4 The AZs, which are physically separated and independent from each other, are also built with highly redundant networking to withstand local disruptions. Regions are isolated from each other, meaning that a disruption in one Region does not result in contagion in other Regions. Compared to global financial institutions’ on-premises environments today, the locational diversity of AWS’s infrastructure greatly reduces geographic concentration risk. We are continuously adding new Regions and AZs, and you can view our most current global
infrastructure map here: https://aws.amazon.com/about-aws/global-infrastructure. At AWS, we employ compartmentalization throughout our infrastructure and services. We have multiple constructs that provide different levels of independent, redundant components. Starting at a high level, consider our AWS Regions. To minimize interconnectedness, AWS deploys a dedicated stack of infrastructure and services to each Region. Regions are autonomous and isolated from each other, even though we allow customers to replicate data and perform other operations across Regions. To allow these cross-Region capabilities, AWS takes enormous care to ensure that the dependencies and calling patterns between Regions are asynchronous and ring-fenced with safety mechanisms. For example, we have designed Amazon Simple Storage Service (Amazon S3) to allow customers to replicate data from one Region (for example, US-EAST-1) to another Region (e.g., US-WEST-1), but at the same time we have designed S3 to operate autonomously within each Region, so that an outage of S3 in US-EAST does not result in an S3 outage in US-WEST.5 The vast majority of services operate entirely within single Regions. The very few exceptions to this approach involve services that provide global delivery, such as Amazon Route 53 (an authoritative Domain Name System), whose data plane is designed for 100.000% availability. As discussed below, financial institutions and other customers can architect across both multiple Availability Zones and Regions. Availability Zones (AZs), which comprise a Region and are composed of multiple data centers, demonstrate further compartmentalization. Locating AZs within the same Region allows for data replication that provides
redundancy without a substantial impact on latency, an important benefit for financial institutions and other customers who need low latency to run applications. At the same time, we make sure that AZs are independent in order to ensure services remain available in the event of major incidents. AZs have independent physical infrastructure and are distant from each other to mitigate the effects of fires, floods, and other events. Many AWS services run autonomously within AZs; this means that if one AZ within a single Region loses power or connectivity, the other AZs in the Region are unaffected, or, in the case of a software error, the risk of that error propagating is limited. AZ independence allows AWS to build Regional services using multiple AZs that in turn provide high availability to, and resiliency for, our customers. In addition, AWS leverages another concept known as cell-based architecture. Cells are multiple instantiations of a service that are isolated from each other; these internal service structures are invisible to customers. In a cell-based architecture, resources and requests are partitioned into cells, which are capped in size. This design minimizes the chance that a disruption in one cell (for example, one subset of customers) would disrupt other cells. By reducing the blast radius of a given failure within a service based on cells, overall availability increases and continuity of service remains. A rough analogy is a set of watertight bulkheads on a ship: enough bulkheads, appropriately designed, can contain water in case the ship’s hull is breached and will allow the ship to remain afloat.

Incident management
Although the likelihood of such incidents is very low, AWS is prepared to
manage large-scale events that affect our infrastructure and services. AWS becomes aware of incidents or degradations in service based on continuous monitoring through metrics and alarms, high-severity tickets, customer reports, and the 24x7x365 service and technical support hotlines. In case of a significant event, an on-call engineer convenes a call with problem resolvers to analyze the event to determine if additional resolvers should be engaged. A call leader drives the group of resolvers to find the approximate root cause to mitigate the event. The relevant resolvers will perform the necessary actions to address the event. After addressing troubleshooting, repair procedures, and affected components, the call leader will assign follow-up documentation and actions and end the call engagement. The call leader will declare the recovery phase complete after the relevant fix activities have been addressed. The post-mortem and deep root cause analysis of the incident will be assigned to the relevant team. Post-mortems are convened after any significant operational issue, regardless of external impact, and Correction of Errors (COE) documents are composed so that the root cause is captured and preventative actions may be taken for the future. Implementation of the preventative measures is tracked during weekly operations meetings.

Customers can achieve and test resiliency on AWS
AWS believes that financial institutions should ensure that they, and the critical economic functions they perform, are resilient to disruption and failure, whatever the cause. Prolonged outages or outright failures could cause loss of trust and confidence in affected financial institutions, in addition to causing direct financial losses due to failing to meet obligations. AWS builds, and encourages its customers to build, for failure to occur at any time. Similarly, as the Bank of England recognizes, “We want firms to plan on the assumption that any part of their infrastructure could be impacted whatever the
reason.” In the design, building, and testing of their applications on AWS, customers are able to achieve their objectives for operational resilience. AWS offers the building blocks for any type of customer, from financial institutions to oil and gas companies to government agencies, to construct applications that can withstand large-scale events. In this section, we walk through how financial institution customers can build that type of resilient application on the AWS Cloud.

Starting with first principles
AWS field teams composed of technical managers, solution architects, and security experts help financial institution customers build their applications according to customers’ design goals, security objectives, and other internal and regulatory requirements. As reflected in our shared responsibility model, customers remain responsible for deciding how to protect their data and systems in the AWS Cloud, but we offer workbooks, guidance documents, and on-site consulting to assist in the process. Before deploying a mission-critical application, whether on the AWS Cloud or in another environment, significant financial institution customers will go through extensive development and testing. For a customer who begins building an application on AWS with high availability and resiliency in mind, we recommend that they begin by answering some fundamental questions,6 including but not limited to:
1. What problems are you trying to solve?
2. What specific aspects of the application require specific levels of availability?
3. What is the amount of cumulative downtime that this workload can realistically accumulate in one year?
4. What is the actual impact of unavailability?
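Question 3 above asks for a workload’s tolerable cumulative downtime per year. As an illustrative aside (not part of the original paper), an availability target can be translated into an annual downtime budget with simple arithmetic; the percentage targets below are example figures, not AWS commitments:

```python
# Illustrative sketch: convert an availability target (percent) into the
# annual downtime it permits. Example targets only, not AWS commitments.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year


def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of downtime per year allowed by an availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)


if __name__ == "__main__":
    for target in (99.9, 99.99, 99.999):
        budget = downtime_minutes_per_year(target)
        print(f"{target}% availability -> {budget:.1f} min/year of downtime")
```

At 99.99% availability, for example, the budget works out to roughly 52.6 minutes per year, which helps put the recovery time objectives discussed in this section into perspective.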
Financial institutions and market utilities perform both critical and non-critical types of functions in the financial services sector. From deposit taking to loan processing, trade execution to securities settlement, financial entities across the world perform services whose continuity and resiliency are necessary to ensure the public’s trust and confidence in the financial system. At the industry-wide level, for systemically important payment, clearing, settlement, and other types of applications, central banks and market regulators specify a discrete recovery time objective in the Principles for Financial Market Infrastructures (PFMI) standard: “The [business continuity] plan should incorporate the use of a secondary site and should be designed to ensure that critical information technology (IT) systems can resume operations within two hours following disruptive events. The plan should be designed to enable the FMI to complete settlement by the end of the day of the disruption, even in case of extreme circumstances.”7 Beyond the 2-hour RTO, financial regulatory agencies expect regulated entities to be able to meet RTOs and recovery point objectives (RPOs) according to the criticality of their applications, beginning with the “Tier 1 application” as the most critical. For example, regulated entities may classify their RTOs and RPOs in the following way:

Table 1: How regulated entities classify RTO and RPO
Resiliency requirement   | Tier 1 app   | Tier 2 app | Tier 3 app
Recovery Time Objective  | 2 Hours      | < 8 Hours  | 24 Hours
Recovery Point Objective | < 30 seconds | < 4 Hours  | 24 Hours

Although systemically important financial institutions may have upwards of 8,000 to 10,000 applications, they do not classify all
applications according to the same criticality. For example, disruptions in an application for processing mortgage loan requests are undesirable, but a financial institution operating such an application may decide that it can tolerate an 8-hour RTO. Other types of important, but not necessarily systemically important, workloads include post-trade market analysis and customer-facing chatbots. While the majority of financial entities’ applications are non-critical from a systemic perspective, disruption of some Tier 1 applications would jeopardize not only the safety and soundness of the affected financial institution, but also other financial services entities and possibly the broader economy. For example, a settlement application may be a Tier 1 application and have an associated RTO of 30 minutes and an RPO of < 30 seconds. Such applications are the heart of financial markets, and disruptions could cause operational, liquidity, and even credit risks to crystallize. For such applications, there is little to virtually no time for humans to make an active decision on how to recover from an outage or fail over to a backup data center. Recovery would need to be automatic and triggered based on metrics and alarms.8

AWS provides guidance to customers on best practices for building highly available, resilient applications, including through our Well-Architected Framework.9 For example, we recommend that the components comprising an application should be independent and isolated to provide redundancy. When changing components or configurations in an application, customers should make sure that they can roll back any changes to the application if it appears that the changes are not working. Monitoring and alarming should be used to track latency, error rates, and availability for each request, for all downstream dependencies, and for key operations. Data gathered through monitoring should allow for efficient diagnosis of problems.10 Best practices for distributed systems should be implemented to enable automated recovery. Recovery paths should be tested frequently, and most frequently for complex or critical recovery paths. For financial institutions, it can be difficult to practice these principles in traditional on-premises environments, many of which reflect decades of consolidation with other entities and ad hoc changes in their IT infrastructures. On the other hand, these principles are what drive the design of AWS’s global infrastructure and services, and they form the basis of our guidance to customers on how to achieve continuity of service.11 Financial institutions using AWS services can take advantage of AWS’s services to improve their resiliency, regardless of the state of their existing systems.

From design principles to implementation
Customers have to make many decisions: where to place their content, where to run their applications, and how to achieve higher levels of availability and resiliency. For example, a financial institution can choose to run its mobile banking application in a single AWS Region to take advantage of multiple AZs.

Figure 1: Example of Multi-AZ Design

Let’s take the example of a deployment across 2 AZs to illustrate how AZ independence provides resiliency. As shown in Figure 1, the customer deploys its mobile banking application so that its architecture is stable and consistent across AZs; for example, the workload in each AZ has sufficient capacity, as well as stable
infrastructure configurations and policies that keep both AZs up to date. Elastic Load Balancing routes traffic only to healthy instances, and data-layer replication allows for fast failover in case a database instance fails in one AZ, thus minimizing downtime for the financial institution’s mobile banking customers. Compared to AWS’s infrastructure and services, traditional on-premises environments present several obstacles for achieving operational resilience. For example, let’s assume a significant event shuts down a financial institution’s primary on-premises data center. The financial institution also has a secondary data center in addition to its primary data center. The capacity of the secondary data center is able to handle only a proportion of the overall workload that would otherwise operate at the primary data center (for example, 11,000 servers at the secondary center instead of 12,000 servers at the primary center; network capacity increased 300% at the primary center in the last 4 years, but only 250% at the secondary center), and errors in replication mean that the secondary center’s data has not been updated in 36 hours. Furthermore, macroeconomic factors have driven transaction volume higher at the primary data center by 15% over the past 6 months. As a result, the financial institution may find that its secondary data center cannot process current transaction volume within a given time period, per its internal and regulatory requirements. By using AWS services, the financial institution would have been able to increase its capacity at frequent intervals to support increasing transaction volumes, as well as track and manage changes to maintain all of its deployments with the same up-to-date capacity and architecture. In addition, customers can maintain additional “cold” infrastructure and backups on AWS that can activate if necessary, at much lower cost than procuring their own physical infrastructure. This is not a hypothetical issue: key regulatory requirements
highlight the need for regulated entities to account for capacity needs in adverse scenarios.12 On AWS, customers can also deploy workloads across AZs located in multiple Regions (Figure 2) to achieve both AZ redundancy and Region redundancy. Customers that have regulatory or other requirements to store data in multiple Regions, or to achieve even greater availability, can use a multi-Region design. In a multi-Region setup, the customer will need to perform additional engineering to minimize data loss and ensure consistent data between Regions. A routing component monitors the health of the customer’s application as well as its dependencies. This routing layer will also handle automatic failovers, changing the destination when a location is unhealthy and temporarily stopping data replication. Traffic will go only to healthy Regions. AWS improves operational resilience compared to traditional on-premises environments not only for failover but also for returning to full resiliency. For the financial institution with a secondary data center, it may have to perform data backup and restoration over several days. Many traditional environments do not feature bidirectional replication, resulting in current data at the backup site and “outdated” data in the primary site that makes fast failback difficult to achieve. On AWS, the financial institution is not “stuck” as it would be in a traditional environment; it can fail forward by quickly launching its workload in another location. The key point is that AWS’s global infrastructure and services offer financial institutions the capacity and performance to meet aggressive resiliency objectives. To achieve assurance about the resiliency of their applications, we
recommend that financial institution customers perform continuous performance, load, and failure testing; extensively use logging, metrics, and alarms; maintain runbooks for reporting and performance tracking; and validate their architecture through realistic, full-scale tests known as “game day” exercises. Per the regulatory requirements in their jurisdictions, financial institutions may provide evidence of such tests, runbooks, and exercises to their financial regulatory authorities.

Figure 2: Example of multi-Region design

Assurance mechanisms
We are prepared to deliver assurance about AWS’s approach to operational resilience and to help customers achieve assurance about the security and resiliency of their workloads. Financial institutions and other customers can gain assurance about the security and resiliency of their workloads on AWS through a variety of means, including: reports on AWS’s infrastructure and services prepared by independent third-party auditors; services and tools to monitor, assess, and test their AWS environments; and direct experience with AWS through our audit engagement offerings.

Independent third-party verification
With our standardized offering and millions of active customers across virtually every business segment and in the public sector, we provide assurance about our risk and control environment, including how we address operational resilience. AWS operates thousands of controls that meet the highest standards in the industry. To understand these controls and how we operate them, customers can access our System and Organization Controls (SOC) 2 Type II report, reflecting examination by our independent third-party auditor, which provides an overview of the AWS
Resiliency Program. Furthermore, an independent third-party auditor has validated AWS’s alignment with the ISO 27001 standard. The International Organization for Standardization (ISO) brings together experts to share knowledge and to develop and publish uniform international standards that support innovation and provide solutions to global challenges. In addition to ISO 27001, AWS also aligns with the ISO 27017 guidance on information security in the cloud and the ISO 27018 code of practice on protection of personal data in the cloud. The basis of these standards is the development and implementation of a rigorous security program. The Information Security Management System (ISMS) required under the ISO 27001 standard defines how AWS manages security in a holistic, comprehensive manner, and includes numerous control objectives (e.g., A.16 and A.17) relevant to operational resilience. With a non-disclosure agreement in place, customers can download these reports and others through AWS Artifact: more than 2,600 security controls, standards, and requirements in all. AWS can provide such reports upon request to regulatory agencies. AWS also aligns with the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF). Developed originally to apply to critical infrastructure entities, the foundational set of security disciplines in the CSF can apply to any organization in any sector and
party auditor reflects the suitability of AWS services to enhance the security and resiliency of fina ncial sector entities Direct assurance for customers Customers may also achieve continuous assurance about the resilience of their own workloads Through services and tools available from the AWS management console customers have unprecedented visibility monitoring and remediation capabilities to ensure the security and compliance of their own AWS environments Financial institution customers no longer have to rely on periodic snapshots or quarterly and annual assessments to validate their security and compliance Consider just a few examples of the many ways customers achieve direct assurance about the security and compliance of their AWS resources13 First customers can integrate their auditing controls into a notification and workflow system using AW S services For example in such a system a change in the state of a virtual server from pending to running would result in corrective action logging and as needed notify the appropriate personnel Customers can also integrate their notification and w orkflow system with a machine learning driven cybersecurity service offered by AWS that detects unusual API calls potentially unauthorized deployments and other malicious activity Second customers can also translate discrete regulatory requirements in to customizable managed rules and continuously track configuration changes among their resources; for example if a bank has a requirement that developers cannot launch unencrypted storage volumes the bank can predefine a rule for encryption that would flag the volume for non compliance and automatically remove the volume Finally and third another AWS service allows customers to automatically assess the security of their environment targeting their network file system and process activity and collecti ng a wide set of activity and configuration data This data includes details of communication with AWS services use of secure 
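The bank's encrypted-volume rule described above can be sketched as a small evaluation function of the kind a managed-rules service invokes per resource. The configuration-item shape, field names, and verdict strings below are illustrative assumptions, not a specific AWS API contract:

```python
# Sketch of a custom compliance rule for the unencrypted-volume example above.
# The configuration-item shape and function names are illustrative assumptions,
# not an actual AWS API contract.

def evaluate_volume(configuration_item: dict) -> str:
    """Return a compliance verdict for a single storage volume."""
    if configuration_item.get("resourceType") != "AWS::EC2::Volume":
        return "NOT_APPLICABLE"
    encrypted = configuration_item.get("configuration", {}).get("encrypted", False)
    return "COMPLIANT" if encrypted else "NON_COMPLIANT"

def remediation_needed(configuration_item: dict) -> bool:
    """Flag volumes that the bank's policy would remove automatically."""
    return evaluate_volume(configuration_item) == "NON_COMPLIANT"
```

In a real deployment, logic like this would be attached to a managed-rules service and a remediation workflow rather than run standalone.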
While these and other services correct for non-compliant configurations or security vulnerabilities, AWS also recommends that customers test their applications for operational resilience. Financial institution customers should test for transient failures of their applications' dependencies (including external dependencies), component failures, and degraded network communications. One major customer has developed open-source software that can be a basis for this type of testing. To address concerns that malicious actors may access critical functions or processes in customers' environments, customers can also conduct penetration testing of their AWS environments.14

Finally, AWS's efforts to provide transparency about our risk and control environment do not stop at our third-party audit reports or formal audit engagements. Our security and compliance personnel, security solution architects, engineers, and field teams engage daily with customers to address their questions and concerns. Such interaction may be a phone call with the financial institution's security team, an executive meeting with a customer's Chief Information Security Officer and Chief Information Officer, a briefing on AWS's premises — and countless other ways. Customers drive our overall infrastructure and service roadmap, and meeting and exceeding their security and resiliency needs is our number one objective.

Document revisions

April 2, 2021: Reviewed for technical accuracy
March 2019: First publication

Notes

1. US Federal Financial Institution Examination Council (FFIEC) IT Handbook; see https://ithandbook.ffiec.gov
2. Committee on Payments and Market Infrastructures and Board of the International Organization of Securities Commissions (CPMI-IOSCO), Guidance on cyber resilience for financial market infrastructures (June 2016); see https://www.bis.org/cpmi/publ/d146.pdf
3. This paper reflects only an overview of our ongoing efforts to ensure our customers can use AWS services safely. To complement our concept of shared responsibility, we are also dedicated to exceeding customer and regulatory expectations. To that end, AWS technical teams, security architects, and compliance experts assist financial institution customers in meeting regulatory and internal requirements, including by actively demonstrating their security and resiliency through continuous monitoring, remediation, and testing. AWS continuously engages with financial regulators around the world to explain how AWS's infrastructure and services enable all sizes and types of financial institutions — from fintech startups to stock exchanges — to improve their security and resiliency compared to on-premises environments. We always want to receive feedback from customers and their regulators about AWS's approach and their experience.
4. You can take a virtual tour of an AWS data center here: https://aws.amazon.com/compliance/data-center
5. As evidenced by the Amazon S3 service disruption of February 28, 2017, which occurred in the Northern Virginia (US-EAST-1) Region but not in other Regions. See "Summary of the Amazon S3 Service Disruption in the Northern Virginia (US-EAST-1) Region," https://aws.amazon.com/message/41926/
6. We recommend that customers review the AWS Cloud Adoption Framework to develop efficient and effective adoption plans. See also the Reliability Pillar of the AWS Well-Architected Framework.
7. Key Consideration 17.6 of the PFMI, available at https://www.bis.org/cpmi/publ/d101a.pdf
8. Customers can enable automatic recovery using a variety of AWS services, including Amazon CloudWatch metrics, Amazon CloudWatch Events, and AWS Lambda. See also the AWS re:Invent presentation "Disaster Recovery and Business Continuity for Financial Institutions" for additional information on applicable AWS services and example architecture: https://www.youtube.com/watch?v=XaxTwhP1UU
9. See https://aws.amazon.com/architecture/well-architected
10. A variety of AWS services support these practices; for examples, see pp. 26-28 at https://d0.awsstatic.com/whitepapers/architecture/AWS-Reliability-Pillar.pdf
11. For a comprehensive overview of our guidance to customers, see the "Reliability Pillar" whitepaper (September 2018) at https://d0.awsstatic.com/whitepapers/architecture/AWS-Reliability-Pillar.pdf
12. See, for example, US Securities and Exchange Commission (SEC) Regulation Systems Compliance and Integrity, 17 CFR §§ 240, 242 & 249; see also the adopting release: https://www.sec.gov/rules/final/2014/34-73639.pdf. See also the FFIEC Business Continuity Planning IT Examination Handbook (February 2015), available at https://ithandbook.ffiec.gov/media/274725/ffiec_itbooklet_businesscontinuityplanning.pdf
13. The AWS services discussed in this section include: Amazon CloudWatch Events, AWS Config, Amazon GuardDuty, AWS Config Rules, and Amazon Inspector.
14. For example, in the United Kingdom, the Bank of England has developed the CBEST framework for testing financial firms' cyber resilience. Accredited penetration test companies attempt to access critical assets within the target firm. An accredited threat intelligence company provides threat intelligence and guidance on how the penetration testers can attack the firm. Financial institution customers subject to the CBEST framework and planning to have a penetration test conducted on their AWS resources need to notify AWS by submitting a request (at https://aws.amazon.com/security/penetration-testing), because such activity is indistinguishable from prohibited security violations and network abuse.

Amazon Web Services: Overview of Security Processes
March 2020

This paper has been archived. For the latest technical content on Security and Compliance, see https://aws.amazon.com/architecture/security-identity-compliance/

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
Shared Security Responsibility Model
AWS Security Responsibilities
Customer Security Responsibilities
AWS Global Infrastructure Security
AWS Compliance Program
Physical and Environmental Security
Business Continuity Management
Network Security
AWS Access
Secure Design Principles
Change Management
AWS Account Security Features
Individual User Accounts
Secure HTTPS Access Points
Security Logs
AWS Trusted Advisor Security Checks
AWS Config Security Checks
AWS Service Specific Security
Compute Services
Networking Services
Storage Services
Database Services
Application Services
Analytics Services
Deployment and Management Services
Mobile Services
Applications
Document Revisions

Abstract

This document is intended to answer questions such as: How does AWS help me ensure that my data is secure? Specifically, this paper describes AWS physical and operational security processes for the network and server infrastructure under the management of AWS.

Introduction

Amazon Web Services (AWS) delivers a scalable cloud computing platform with high availability and dependability, providing the tools that enable customers to run a wide range of applications. Helping to protect the confidentiality, integrity, and availability of our customers' systems and data is of the utmost importance to AWS, as is maintaining customer trust and confidence.

Shared Security Responsibility Model

Before covering the details of how AWS secures its resources, it is important to understand how security in the cloud is slightly different than security in your on-premises data centers. When you move computer systems and data to the cloud, security responsibilities become shared between you and your cloud service provider. In this case, AWS is responsible for securing the underlying infrastructure that supports the cloud, and you're responsible for anything you put on the cloud or connect to the cloud. This shared security responsibility model can reduce your operational burden in many ways, and in some cases may even improve your default security posture without additional action on your part.

Figure 1: AWS shared security responsibility model

The amount of security configuration work you have to do varies depending on which services you select and how sensitive your data is. However, there are certain security features — such as individual user accounts and credentials, SSL/TLS for data transmissions, and user activity logging — that you should configure no matter which AWS service you use. For more information about these security features, see the AWS Account Security Features section.

AWS Security Responsibilities

Amazon Web Services is responsible for protecting the global infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure comprises the hardware, software, networking, and facilities that run AWS services. Protecting this infrastructure is the number one priority of AWS. Although you can't visit our data centers or offices to see this protection firsthand, we provide several reports from third-party auditors who have verified our compliance with a variety of computer security standards and regulations. For more information, visit AWS Compliance.

Note that in addition to protecting this global infrastructure, AWS is responsible for the security configuration of its products that are considered managed services. Examples of these types of services include Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon EMR, Amazon WorkSpaces, and several other services. These services provide the scalability and flexibility of cloud-based resources with the additional benefit of being managed. For these services, AWS handles basic security tasks like guest operating system (OS) and database patching, firewall configuration, and disaster recovery. For most of these managed services, all you have to do is configure logical access controls for the resources and protect your account credentials. A few of them may require additional tasks, such as setting up database user accounts, but overall the security configuration work is performed by the service.

Customer Security Responsibilities

With the AWS cloud, you can provision virtual servers, storage, databases, and desktops in minutes instead of weeks. You can also use cloud-based analytics and workflow tools to process your data as you need it, and then store it in your own data centers or in the cloud. The AWS services that you use determine how much configuration work you have to perform as part of your security responsibilities. AWS products that fall into the well-understood category of Infrastructure-as-a-Service (IaaS) — such as Amazon EC2, Amazon VPC, and Amazon S3 — are completely under your control and require you to perform all of the necessary security configuration and management tasks. For example, for EC2 instances you're responsible for management of the guest OS (including updates and security patches), any application software or utilities you install on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance. These are basically the same security tasks that you're used to performing no matter where your servers are located.

AWS managed services like Amazon RDS or Amazon Redshift provide all of the resources you need to perform a specific task — but without the configuration work that can come with them. With managed services, you don't have to worry about launching and maintaining instances, patching the guest OS or database, or replicating databases — AWS handles that for you. But as with all services, you should protect your AWS account credentials and set up individual user accounts with AWS Identity and Access Management (IAM) so that each of your users has their own credentials and you can implement segregation of duties. We also recommend using multi-factor authentication (MFA) with each account, requiring the use of SSL/TLS to communicate with your AWS resources, and setting up API/user activity logging with AWS CloudTrail. For more information about additional measures you can take, refer to the AWS Security Best Practices whitepaper and recommended reading on the AWS Security Learning webpage.
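One common way to operationalize the MFA recommendation above is an IAM policy that denies requests made without MFA. The sketch below assembles such a policy document; aws:MultiFactorAuthPresent is a standard IAM condition key, but treat the policy as a whole as an illustrative pattern rather than a vetted production policy:

```python
import json

# Sketch: an IAM policy document that denies all actions when the request
# was not authenticated with MFA. aws:MultiFactorAuthPresent is a standard
# IAM condition key; the overall policy is illustrative, not a vetted
# production policy.
def deny_without_mfa_policy() -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyAllWithoutMFA",
                "Effect": "Deny",
                "Action": "*",
                "Resource": "*",
                # BoolIfExists also denies when the key is absent entirely
                "Condition": {
                    "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
                },
            }
        ],
    }
    return json.dumps(policy, indent=2)
```

A document like this would typically be attached to an IAM group so that every member must present MFA before any other permission takes effect.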
AWS Global Infrastructure Security

AWS operates the global cloud infrastructure that you use to provision a variety of basic computing resources, such as processing and storage. The AWS global infrastructure includes the facilities, network, hardware, and operational software (e.g., host OS, virtualization software, etc.) that support the provisioning and use of these resources. The AWS global infrastructure is designed and managed according to security best practices as well as a variety of security compliance standards. As an AWS customer, you can be assured that you're building web architectures on top of some of the most secure computing infrastructure in the world.

AWS Compliance Program

AWS Compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud. As systems are built on top of AWS cloud infrastructure, compliance responsibilities are shared. By tying together governance-focused, audit-friendly service features with applicable compliance or audit standards, AWS Compliance enablers build on traditional programs, helping customers to establish and operate in an AWS security control environment. The IT infrastructure that AWS provides to its customers is designed and managed in alignment with security best practices and a variety of IT security standards, including:

• SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70)
• SOC 2
• SOC 3
• FISMA, DIACAP, and FedRAMP
• DOD CSM Levels 1-5
• PCI DSS Level 1
• ISO 9001 / ISO 27001 / ISO 27017 / ISO 27018
• ITAR
• FIPS 140-2
• MTCS Level 3
• HITRUST

In addition, the flexibility and control that the AWS platform provides allows customers to deploy solutions that meet several industry-specific standards, including:

• Criminal Justice Information Services (CJIS)
• Cloud Security Alliance (CSA)
• Family Educational Rights and Privacy Act (FERPA)
• Health Insurance Portability and Accountability Act (HIPAA)
• Motion Picture Association of America (MPAA)

AWS provides a wide range of information regarding its IT control environment to customers through white papers, reports, certifications, accreditations, and other third-party attestations. For more information, see AWS Compliance.

Physical and Environmental Security

AWS data centers are state of the art, utilizing innovative architectural and engineering approaches. Amazon has many years of experience in designing, constructing, and operating large-scale data centers. This experience has been applied to the AWS platform and infrastructure. AWS data centers are housed in facilities that are not branded as AWS facilities. Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. All visitors are required to present identification and are signed in and continually escorted by authorized staff.

AWS only provides data center access and information to employees and contractors who have a legitimate business need for such privileges. When an employee no longer has a business need for these privileges, his or her access is immediately revoked, even if they continue to be an employee of Amazon or Amazon Web Services. All physical access to data centers by AWS employees is logged and audited routinely.

Fire Detection and Suppression

Automatic fire detection and suppression equipment has been installed to reduce risk. The fire detection system utilizes smoke detection sensors in all data center environments, mechanical and electrical infrastructure spaces, chiller rooms, and generator equipment rooms. These areas are protected by either wet-pipe, double-interlocked pre-action, or gaseous sprinkler systems.

Power

The data center electrical power systems are designed to be fully redundant and maintainable without impact to operations, 24 hours a day, seven days a week. Uninterruptible Power Supply (UPS) units provide back-up power in the event of an electrical failure for critical and essential loads in the facility. Data centers use generators to provide back-up power for the entire facility.

Climate and Temperature

Climate control is required to maintain a constant operating temperature for servers and other hardware, which prevents overheating and reduces the possibility of service outages. Data centers are conditioned to maintain atmospheric conditions at optimal levels. Personnel and systems monitor and control temperature and humidity at appropriate levels.

Management

AWS monitors electrical, mechanical, and life support systems and equipment so that any issues are immediately identified. Preventative maintenance is performed to maintain the continued operability of equipment.

Storage Device Decommissioning

When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals. AWS uses the techniques detailed in NIST 800-88 ("Guidelines for Media Sanitization") as part of the decommissioning process.

Business Continuity Management

Amazon's infrastructure has a high level of availability and provides customers the features to deploy a resilient IT architecture. AWS has designed its systems to tolerate system or hardware failures with minimal customer impact. Data center Business Continuity Management at AWS is under the direction of the Amazon Infrastructure Group.

Availability

Data centers are built in clusters in various global regions. All data centers are online and serving customers; no data center is "cold." In case of failure, automated processes move customer data traffic away from the affected area. Core applications are deployed in an N+1 configuration, so that in the event of a data center failure, there is sufficient capacity to enable traffic to be load-balanced to the remaining sites.

AWS provides you with the flexibility to place instances and store data within multiple geographic regions as well as across multiple availability zones within each region. Each availability zone is designed as an independent failure zone. This means that availability zones are physically separated within a typical metropolitan region and are located in lower-risk flood plains (specific flood zone categorization varies by Region). In addition to discrete uninterruptible power supply (UPS) and onsite backup generation facilities, they are each fed via different grids from independent utilities to further reduce single points of failure. Availability zones are all redundantly connected to multiple tier-1 transit providers.

You should architect your AWS usage to take advantage of multiple regions and availability zones. Distributing applications across multiple availability zones provides the ability to remain resilient in the face of most failure modes, including natural disasters or system failures.

Incident Response

The Amazon Incident Management team employs industry-standard diagnostic procedures to drive resolution during business-impacting events. Staff operators provide 24x7x365 coverage to detect incidents and to manage the impact and resolution.

Company-Wide Executive Review

Amazon's Internal Audit group has recently reviewed the AWS services resiliency plans, which are also periodically reviewed by members of the Senior Executive management team and the Audit Committee of the Board of Directors.
Communication

AWS has implemented various methods of internal communication at a global level to help employees understand their individual roles and responsibilities and to communicate significant events in a timely manner. These methods include orientation and training programs for newly hired employees; regular management meetings for updates on business performance and other matters; and electronic means such as video conferencing, electronic mail messages, and the posting of information via the Amazon intranet.

AWS has also implemented various methods of external communication to support its customer base and the community. Mechanisms are in place to allow the customer support team to be notified of operational issues that impact the customer experience. A Service Health Dashboard is available and maintained by the customer support team to alert customers to any issues that may be of broad impact. The AWS Cloud Security Center is available to provide you with security and compliance details about AWS. You can also subscribe to AWS Support offerings that include direct communication with the customer support team and proactive alerts to any customer-impacting issues.

Network Security

The AWS network has been architected to permit you to select the level of security and resiliency appropriate for your workload. To enable you to build geographically dispersed, fault-tolerant web architectures with cloud resources, AWS has implemented a world-class network infrastructure that is carefully monitored and managed.

Secure Network Architecture

Network devices, including firewall and other boundary devices, are in place to monitor and control communications at the external boundary of the network and at key internal boundaries within the network. These boundary devices employ rule sets, access control lists (ACLs), and configurations to enforce the flow of information to specific information system services. ACLs, or traffic flow policies, are established on each managed interface, which manage and enforce the flow of traffic. ACL policies are approved by Amazon Information Security. These policies are automatically pushed using AWS's ACL-Manage tool to help ensure these managed interfaces enforce the most up-to-date ACLs.

Secure Access Points

AWS has strategically placed a limited number of access points to the cloud to allow for a more comprehensive monitoring of inbound and outbound communications and network traffic. These customer access points are called API endpoints, and they allow secure HTTP access (HTTPS), which allows you to establish a secure communication session with your storage or compute instances within AWS. To support customers with FIPS cryptographic requirements, the SSL-terminating load balancers in AWS GovCloud (US) are FIPS 140-2 compliant. In addition, AWS has implemented network devices that are dedicated to managing interfacing communications with Internet service providers (ISPs). AWS employs a redundant connection to more than one communication service at each Internet-facing edge of the AWS network. These connections each have dedicated network devices.

Transmission Protection

You can connect to an AWS access point via HTTP or HTTPS using Secure Sockets Layer (SSL), a cryptographic protocol that is designed to protect against eavesdropping, tampering, and message forgery. For customers who require additional layers of network security, AWS offers the Amazon Virtual Private Cloud (VPC), which provides a private subnet within the AWS cloud and the ability to use an IPsec Virtual Private Network (VPN) device to provide an encrypted tunnel between the Amazon VPC and your data center. For more information about VPC configuration options, see the Amazon Virtual Private Cloud (Amazon VPC) Security section.
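On the client side, the transmission-protection guidance above comes down to refusing plaintext and legacy protocol versions before connecting. The sketch below configures a TLS client context with Python's standard library; the endpoint-checking helper is an illustrative assumption, and modern clients negotiate TLS rather than the original SSL protocol:

```python
import ssl

# Sketch: a TLS client context for talking to HTTPS API endpoints, in the
# spirit of the transmission-protection guidance above. Illustrative only;
# this is not an AWS SDK API.
def make_tls_context() -> ssl.SSLContext:
    context = ssl.create_default_context()            # verifies server certificates
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL/TLS versions
    return context

def is_https(endpoint_url: str) -> bool:
    """Reject plaintext HTTP endpoints outright."""
    return endpoint_url.startswith("https://")
```

A context built this way can be passed to standard-library HTTP clients so every request to an API endpoint is certificate-verified and TLS 1.2 or newer.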
Amazon Corporate Segregation

Logically, the AWS Production network is segregated from the Amazon Corporate network by means of a complex set of network security/segregation devices. AWS developers and administrators on the corporate network who need to access AWS cloud components in order to maintain them must explicitly request access through the AWS ticketing system. All requests are reviewed and approved by the applicable service owner. Approved AWS personnel then connect to the AWS network through a bastion host that restricts access to network devices and other cloud components, logging all activity for security review. Access to bastion hosts requires SSH public-key authentication for all user accounts on the host. For more information on AWS developer and administrator logical access, see AWS Access below.

Fault Tolerant Design

Amazon's infrastructure has a high level of availability and provides you with the capability to deploy a resilient IT architecture. AWS has designed its systems to tolerate system or hardware failures with minimal customer impact. Data centers are built in clusters in various global regions. All data centers are online and serving customers; no data center is "cold." In case of failure, automated processes move customer data traffic away from the affected area. Core applications are deployed in an N+1 configuration, so that in the event of a data center failure, there is sufficient capacity to enable traffic to be load-balanced to the remaining sites.

AWS provides you with the flexibility to place instances and store data within multiple geographic regions as well as across multiple availability zones within each region. Each availability zone is designed as an independent failure zone. This means that availability zones are physically separated within a typical metropolitan region and are located in lower-risk flood plains (specific flood zone categorization varies by region). In addition to utilizing discrete uninterruptible power supply (UPS) and onsite backup generators, they are each fed via different grids from independent utilities to further reduce single points of failure. Availability zones are all redundantly connected to multiple tier-1 transit providers.

You should architect your AWS usage to take advantage of multiple regions and availability zones. Distributing applications across multiple availability zones provides the ability to remain resilient in the face of most failure scenarios, including natural disasters or system failures. However, you should be aware of location-dependent privacy and compliance requirements, such as the EU Data Privacy Directive. Data is not replicated between regions unless proactively done so by the customer, thus allowing customers with these types of data placement and privacy requirements the ability to establish compliant environments. It should be noted that all communication between regions is across public internet infrastructure; therefore, appropriate encryption methods should be used to protect sensitive data.

Data centers are built in clusters in various global regions, including: US East (Northern Virginia), US West (Oregon), US West (Northern California), AWS GovCloud (US) (Oregon), EU (Frankfurt), EU (Ireland), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), China (Beijing), and South America (Sao Paulo). For a complete list of AWS Regions, see the AWS Global Infrastructure page.

AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move workloads into the cloud by helping them meet certain regulatory and compliance requirements. The AWS GovCloud (US) framework allows US government agencies and their contractors to comply with US International Traffic in Arms Regulations (ITAR) regulations as well as the Federal Risk and Authorization Management Program (FedRAMP) requirements. AWS GovCloud (US) has received an Agency Authorization to Operate (ATO) from the US Department of Health and Human Services (HHS), utilizing a FedRAMP-accredited Third Party Assessment Organization (3PAO), for several AWS services. The AWS GovCloud (US) Region provides the same fault-tolerant design as other regions, with two Availability Zones. In addition, the AWS GovCloud (US) Region uses the Amazon Virtual Private Cloud (VPC) service by default to create an isolated portion of the AWS cloud and launch Amazon EC2 instances that have private (RFC 1918) addresses. For more information, see AWS GovCloud (US).

Network Monitoring and Protection

AWS uses a wide variety of automated monitoring systems to provide a high level of service performance and availability. AWS monitoring tools are designed to detect unusual or unauthorized activities and conditions at ingress and egress communication points. These tools monitor server and network usage, port scanning activities, application usage, and unauthorized intrusion attempts. The tools have the ability to set custom performance-metric thresholds for unusual activity.

Systems within AWS are extensively instrumented to monitor key operational metrics. Alarms are configured to automatically notify operations and management personnel when early-warning thresholds are crossed on key operational metrics. An on-call schedule is used so personnel are always available to respond to operational issues.
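The early-warning behavior described above can be sketched as a simple threshold-evaluation loop. The metric names and threshold values below are illustrative assumptions, not actual AWS alarm configurations:

```python
# Sketch of early-warning threshold evaluation, in the spirit of the
# monitoring described above. Metric names and thresholds are illustrative.

WARNING_THRESHOLDS = {
    "cpu_utilization_pct": 80.0,
    "port_scan_attempts_per_min": 50.0,
}

def crossed_thresholds(samples: dict) -> list:
    """Return the metrics whose latest sample crosses its early-warning threshold."""
    breached = []
    for metric, threshold in WARNING_THRESHOLDS.items():
        value = samples.get(metric)
        if value is not None and value >= threshold:
            breached.append(metric)
    return sorted(breached)

def should_page_on_call(samples: dict) -> bool:
    """Page an on-call operator whenever any threshold is crossed."""
    return bool(crossed_thresholds(samples))
```

In practice the notification step would feed a pager or workflow system rather than return a boolean, but the threshold logic is the same.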
any significant operational issue regardless of external impact and Cause of Error (COE) documents are drafted so the root cause is captured and preventative actions are taken in the future Implementation of the preventative measures is tracked during weekly operations meetings AWS Access The AWS Production network is segregated from the Amazon Corporate network and requires a separate set of credentials for logical access The Amazo n Corporate network relies on user IDs passwords and Kerberos wh ereas the AWS Production network requires SSH public key authentication through a bastion host AWS developers and administrators on the Amazon Corporate network who need to access AWS clou d components must explicitly request access through the AWS access management system All requests are reviewed and approved by the appropriate owner or manager Account Review and Audit Accounts are reviewed every 90 days; explicit re approval is required or access to the resource is automatically revoked Access is also automatically revoked when an employee’s record is terminated in Amazon’s Human Resources system Windows and UNIX accounts are disabled and Amazon’s permission management system removes the user from all systems Requests for changes in access are captured in the Amazon permissions management tool audit log When changes in an employee’s job function occur continued access must be explicitly approved to the resource or it will be automati cally revoked ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 12 Background Checks AWS has established formal policies and procedures to delineate the minimum standards for logical access to AWS platform and infrastructure hosts AWS conducts criminal background checks as permitted by law as part of pre employment screening practices for employees and commensurate with the employee’s position and level of access The policies also identify functional responsibilities for the administration of logical 
access and security Credentials Policy AWS Securi ty has established a credentials policy with required configurations and expiration intervals Passwords must be complex and are forced to be changed every 90 days Secure Design Principles The AWS development process follows secure software development be st practices which include formal design reviews by the AWS Security Team threat modeling and completion of a risk assessment Static code analysis tools are run as a part of the standard build process and all deployed software undergoes recurring pene tration testing performed by carefully selected industry experts Our security risk assessment reviews begin during the design phase and the engagement lasts through launch to ongoing operations Change Management Routine emergency and configuration chan ges to existing AWS infrastructure are authorized logged tested approved and documented in accordance with industry norms for similar systems Updates to the AWS infrastructure are done to minimize any impact on the customer and their use of the servic es AWS will communicate with customers either via email or through the AWS Service Health Dashboard when service use is likely to be adversely affected Software AWS applies a systematic approach to mana ging change so that changes to customer impacting services are thoroughly reviewed tested approved and well communicated The AWS change management process is designed to avoid unintended service disruptions and to maintain the integrity of service to t he customer Changes deployed into production environments are: ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 13 • Reviewed – Peer reviews of the technical aspects of a change are required • Tested – Changes being applied are tested to help ensure they will behave as expected and not adversely impact perfor mance • Approved – All changes must be authorized in order to provide appropriate oversight and understanding of business impact 
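The three gates above can be sketched as a simple state check. This is a hypothetical illustration of the review/test/approve pattern, not AWS's actual tooling; the `Change` record and its field names are invented for this example:

```python
from dataclasses import dataclass

@dataclass
class Change:
    """Hypothetical change record mirroring the three gates described above."""
    description: str
    peer_reviewed: bool = False   # Reviewed - peer review of technical aspects
    tests_passed: bool = False    # Tested - behaves as expected, no regressions
    approved_by: str = ""         # Approved - authorizer, for oversight

def may_deploy(change: Change) -> bool:
    """A change reaches production only if all three gates are satisfied."""
    return change.peer_reviewed and change.tests_passed and bool(change.approved_by)

change = Change("rotate TLS certificates")
assert not may_deploy(change)   # blocked: no gate satisfied yet

change.peer_reviewed = True
change.tests_passed = True
change.approved_by = "service-owner"
assert may_deploy(change)       # all three gates satisfied
```

The point of the sketch is that the gates are conjunctive: a change that skips any one of review, testing, or approval is blocked from deployment.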
Changes are typically pushed into production in a phased deployment, starting with lowest-impact areas. Deployments are tested on a single system and closely monitored so impacts can be evaluated. Service owners have a number of configurable metrics that measure the health of the service's upstream dependencies. These metrics are closely monitored, with thresholds and alarming in place. Rollback procedures are documented in the Change Management (CM) ticket. When possible, changes are scheduled during regular change windows. Emergency changes to production systems that require deviations from standard change management procedures are associated with an incident, and are logged and approved as appropriate.

Periodically, AWS performs self-audits of changes to key services to monitor quality, maintain high standards, and facilitate continuous improvement of the change management process. Any exceptions are analyzed to determine the root cause, and appropriate actions are taken to bring the change into compliance or roll back the change if necessary. Actions are then taken to address and remediate the process or people issue.

Infrastructure

Amazon's Corporate Applications team develops and manages software to automate IT processes for UNIX/Linux hosts in the areas of third-party software delivery, internally developed software, and configuration management. The Infrastructure team maintains and operates a UNIX/Linux configuration management framework to address hardware scalability, availability, auditing, and security management. By centrally managing hosts through the use of automated processes that manage change, Amazon is able to achieve its goals of high availability, repeatability, scalability, security, and disaster recovery. Systems and network engineers monitor the status of these automated tools on a continuous basis, reviewing reports to respond to hosts that fail to obtain or update their configuration and software.

Internally developed configuration management software is installed when new hardware is provisioned. These tools are run on all UNIX hosts to validate that they are configured, and that software is installed, in compliance with standards determined by the role assigned to the host. This configuration management software also helps to regularly update packages that are already installed on the host. Only approved personnel, enabled through the permissions service, may log in to the central configuration management servers.

AWS Account Security Features

AWS provides a variety of tools and features that you can use to keep your AWS Account and resources safe from unauthorized use. These include credentials for access control, HTTPS endpoints for encrypted data transmission, the creation of separate IAM user accounts, user activity logging for security monitoring, and Trusted Advisor security checks. You can take advantage of all of these security tools no matter which AWS services you select.

AWS Credentials

To help ensure that only authorized users and processes access your AWS Account and resources, AWS uses several types of credentials for authentication. These include passwords, cryptographic keys, digital signatures, and certificates. We also provide the option of requiring multi-factor authentication (MFA) to log into your AWS Account or IAM user accounts. The following table highlights the various AWS credentials and their uses.

Table 1: Credential types and uses

Passwords — Used for: AWS root account or IAM user account login to the AWS Management Console. A string of characters used to log into your AWS account or IAM account. AWS passwords must be a minimum of 6 characters and may be up to 128 characters.

Multi-Factor Authentication (MFA) — Used for: AWS root account or IAM user account login to the AWS Management Console. A six-digit single-use code that is required in addition to your password to log in to your AWS Account or IAM user account.
Access Keys — Used for: digitally signed requests to AWS APIs (using the AWS SDK, CLI, or REST/Query APIs). Includes an access key ID and a secret access key. You use access keys to digitally sign programmatic requests that you make to AWS.

Key Pairs — Used for: SSH login to EC2 instances and CloudFront signed URLs. A key pair is required to connect to an EC2 instance launched from a public AMI. The supported lengths are 1024, 2048, and 4096. If you connect using SSH while using the EC2 Instance Connect API, the supported lengths are 2048 and 4096. You can have a key pair generated automatically for you when you launch the instance, or you can upload your own.

X.509 Certificates — Used for: digitally signed SOAP requests to AWS APIs and SSL server certificates for HTTPS. X.509 certificates are only used to sign SOAP-based requests (currently used only with Amazon S3). You can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page.

You can download a Credential Report for your account at any time from the Security Credentials page. This report lists all of your account's users and the status of their credentials: whether they use a password, whether their password expires and must be changed regularly, the last time they changed their password, the last time they rotated their access keys, and whether they have MFA enabled.

For security reasons, if your credentials have been lost or forgotten, you cannot recover them or re-download them. However, you can create new credentials and then disable or delete the old set of credentials. In fact, AWS recommends that you change (rotate) your access keys and certificates on a regular basis. To help you do this without potential impact to your application's availability, AWS supports multiple concurrent access keys and certificates. With this feature, you can rotate keys and certificates into and out of operation on a regular basis, without any downtime to your application. This can help to mitigate risk from lost or compromised access keys or certificates. The AWS IAM API enables you to rotate the access keys of your AWS Account, as well as those of IAM user accounts.

Passwords

Passwords are required to access your AWS Account, individual IAM user accounts, AWS Discussion Forums, and the AWS Support Center. You specify the password when you first create the account, and you can change it at any time by going to the Security Credentials page. AWS passwords can be up to 128 characters long and contain special characters, so we encourage you to create a strong password that cannot be easily guessed. You can set a password policy for your IAM user accounts to ensure that strong passwords are used and that they are changed often. A password policy is a set of rules that define the type of password an IAM user can set. For more information about password policies, see Managing Passwords for IAM Users.

AWS Multi-Factor Authentication (MFA)

AWS Multi-Factor Authentication (MFA) is an additional layer of security for accessing AWS services. When you enable this optional feature, you must provide a six-digit single-use code, in addition to your standard user name and password credentials, before access is granted to your AWS Account settings or AWS services and resources. You get this single-use code from an authentication device that you keep in your physical possession. This is called multi-factor authentication because more than one authentication factor is checked before access is granted: a password (something you know) and the precise code from your authentication device (something you have). You can enable MFA devices for your AWS Account, as well as for the users you have created under your AWS Account with AWS IAM. In addition, you can add MFA protection for access across AWS Accounts, for when you want to allow a user you've created under one AWS Account to use an IAM role to access resources under another AWS Account. You can require the user to use MFA before assuming the role, as an additional layer of security.

AWS MFA supports the use of both hardware tokens and virtual MFA devices. Virtual MFA devices use the same protocols as the physical MFA devices, but can run on any mobile hardware device, including a smartphone. A virtual MFA device uses a software application that generates six-digit authentication codes that are compatible with the Time-Based One-Time Password (TOTP) standard, as described in RFC 6238. Most virtual MFA applications allow you to host more than one virtual MFA device, which makes them more convenient than hardware MFA devices. However, you should be aware that because a virtual MFA might be run on a less secure device, such as a smartphone, a virtual MFA might not provide the same level of security as a hardware MFA device.

You can also enforce MFA authentication for AWS service APIs, in order to provide an extra layer of protection over powerful or privileged actions such as terminating Amazon EC2 instances or reading sensitive data stored in Amazon S3. You do this by adding an MFA authentication requirement to an IAM access policy. You can attach these access policies to IAM users, IAM groups, or resources that support Access Control Lists (ACLs), like Amazon S3 buckets, SQS queues, and SNS topics.

It is easy to obtain hardware tokens from a participating third-party provider, or virtual MFA applications from an app store, and to set them up for use via the AWS website. More information is available at AWS Multi-Factor Authentication (MFA).

Access Keys

AWS requires that all API requests be signed; that is, they must include a digital signature that AWS can use to verify the identity of the requestor. You calculate the digital signature using a cryptographic hash
password. After you create your own AMIs, you can choose other mechanisms to securely log in to your new instances. You can have a key pair generated automatically for you when you launch the instance, or you can upload your own. Save the private key in a safe place on your system, and record the location where you saved it.

For Amazon CloudFront, you use key pairs to create signed URLs for private content, such as when you want to distribute restricted content that someone paid for. You create Amazon CloudFront key pairs by using the Security Credentials page. CloudFront key pairs can be created only by the root account, and cannot be created by IAM users.

X.509 Certificates

X.509 certificates are used to sign SOAP-based requests. X.509 certificates contain a public key and additional metadata (like an expiration date, which AWS verifies when you upload the certificate), and are associated with a private key. When you create a request, you create a digital signature with your private key, and then include that signature in the request, along with your certificate. AWS verifies that you're the sender by decrypting the signature with the public key that is in your certificate. AWS also verifies that the certificate you sent matches the certificate that you uploaded to AWS.

For your AWS Account, you can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page. For IAM users, you must create the X.509 certificate (signing certificate) by using third-party software. In contrast with root account credentials, AWS cannot create an X.509 certificate for IAM users. After you create the certificate, you attach it to an IAM user by using IAM.

In addition to SOAP requests, X.509 certificates are used as SSL/TLS server certificates for customers who want to use HTTPS to encrypt their transmissions. To use them for HTTPS, you can use an open-source tool like OpenSSL to create a unique private key. You'll need the private key to create the Certificate Signing Request (CSR) that you submit to a certificate authority (CA) to obtain the server certificate. You'll then use the AWS CLI to upload the certificate, private key, and certificate chain to IAM.

You'll also need an X.509 certificate to create a customized Linux AMI for EC2 instances. The certificate is only required to create an instance-backed AMI (as opposed to an EBS-backed AMI). You can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page.

Individual User Accounts

AWS provides a centralized mechanism called AWS Identity and Access Management (IAM) for creating and managing individual users within your AWS Account. A user can be any individual, system, or application that interacts with AWS resources, either programmatically or through the AWS Management Console or AWS Command Line Interface (CLI). Each user has a unique name within the AWS Account, and a unique set of security credentials not shared with other users. AWS IAM eliminates the need to share passwords or keys, and enables you to minimize the use of your AWS Account credentials. With IAM, you define policies that control which AWS services your users can access and what they can do with them. You can grant users only the minimum permissions they need to perform their jobs. See the AWS Identity and Access Management (AWS IAM) section for more information.

Secure HTTPS Access Points

For greater communication security when accessing AWS resources, you should use HTTPS instead of HTTP for data transmissions. HTTPS uses the SSL/TLS protocol, which uses public-key cryptography to prevent eavesdropping, tampering, and forgery. All AWS services provide secure customer access points (also called API endpoints) that allow you to establish secure HTTPS communication sessions. Several services also now offer more advanced cipher suites that use the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol. ECDHE allows SSL/TLS clients to provide Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored anywhere. This helps prevent the decoding of captured data by unauthorized third parties, even if the secret long-term key itself is compromised.

Security Logs

As important as credentials and encrypted endpoints are for preventing security problems, logs are just as crucial for understanding events after a problem has occurred. And to be effective as a security tool, a log must include not just a list of what happened and when, but also identify the source. To help you with your after-the-fact investigations and near-real-time intrusion detection, AWS CloudTrail provides a log of events within your account. For each event, you can see what service was accessed, what action was performed, and who made the request. CloudTrail captures API calls, as well as other things such as console sign-in events.

Once you have enabled CloudTrail, event logs are delivered about every 5 minutes. You can configure CloudTrail so that it aggregates log files from multiple regions and/or accounts into a single Amazon S3 bucket. By default, a single trail will record and deliver events in all current and future regions. In addition to S3, you can send events to CloudWatch Logs for custom metrics and alarming, or you can upload the logs to your favorite log management and analysis solutions to perform security analysis and detect user behavior patterns. For rapid response, you can create CloudWatch Events rules to take timely action on specific events. By default, log files are stored securely in Amazon S3, but you can also archive them to Amazon S3 Glacier to help meet audit and compliance requirements.

In addition to CloudTrail's user activity logs, you can use the Amazon CloudWatch Logs feature to collect and
monitor system, application, and custom log files from your EC2 instances and other sources in near real time. For example, you can monitor your web server's log files for invalid user messages to detect unauthorized login attempts to your guest OS.

AWS Trusted Advisor Security Checks

The AWS Trusted Advisor customer support service monitors not only cloud performance and resiliency, but also cloud security. Trusted Advisor inspects your AWS environment and makes recommendations when opportunities may exist to save money, improve system performance, or close security gaps. It provides alerts on several of the most common security misconfigurations that can occur, including leaving certain ports open that make you vulnerable to hacking and unauthorized access, neglecting to create IAM accounts for your internal users, allowing public access to Amazon S3 buckets, not turning on user activity logging (AWS CloudTrail), or not using MFA on your root AWS Account. You also have the option for a security contact at your organization to automatically receive a weekly email with an updated status of your Trusted Advisor security checks.

The AWS Trusted Advisor service provides four checks at no additional charge to all users, including three important security checks: specific ports unrestricted, IAM use, and MFA on root account. When you sign up for Business- or Enterprise-level AWS Support, you receive full access to all Trusted Advisor checks.

AWS Config Security Checks

AWS Config is a continuous monitoring and assessment service that records changes to the configuration of your AWS resources. You can view the current and historic configurations of a resource, and use this information to troubleshoot outages, conduct security attack analysis, and much more. You can view the configuration at any point in time, and use that information to re-configure your resources and bring them into a steady state during an outage situation.

Using AWS Config Rules, you can run continuous assessment checks on your resources to verify that they comply with your own security policies, industry best practices, and compliance regimes such as PCI/HIPAA. For example, AWS Config provides a managed AWS Config Rule to ensure that encryption is turned on for all EBS volumes in your account. You can also write a custom AWS Config Rule to essentially "codify" your own corporate security policies. AWS Config alerts you in real time when a resource is misconfigured, or when a resource violates a particular security policy.

AWS Service-Specific Security

Not only is security built into every layer of the AWS infrastructure, but also into each of the services available on that infrastructure. AWS services are architected to work efficiently and securely with all AWS networks and platforms. Each service provides extensive security features to enable you to protect sensitive data and applications.

Compute Services

Amazon Web Services provides a variety of cloud-based computing services, including a wide selection of compute instances that can scale up and down automatically to meet the needs of your application or enterprise.

Amazon Elastic Compute Cloud (Amazon EC2) Security

Amazon Elastic Compute Cloud (Amazon EC2) is a key component in Amazon's Infrastructure as a Service (IaaS), providing resizable computing capacity using server instances in AWS's data centers. Amazon EC2 is designed to make web-scale computing easier by enabling you to obtain and configure capacity with minimal friction. You create and launch instances, which are collections of platform hardware and software.

Multiple Levels of Security

Security within Amazon EC2 is provided on multiple levels: the operating system (OS) of the host platform, the virtual instance OS or guest OS, a firewall, and signed API calls. Each of these items builds on the capabilities of the others. The goal is to prevent data contained within Amazon EC2 from being intercepted by unauthorized systems or users, and to provide Amazon EC2 instances themselves that are as secure as possible without sacrificing the flexibility in configuration that customers demand.

Hypervisor

Amazon EC2 currently utilizes a highly customized version of the Xen hypervisor, taking advantage of paravirtualization (in the case of Linux guests). Because paravirtualized guests rely on the hypervisor to provide support for operations that normally require privileged access, the guest OS has no elevated access to the CPU. The CPU provides four separate privilege modes, 0–3, called rings. Ring 0 is the most privileged and Ring 3 the least. The host OS executes in Ring 0. However, rather than executing in Ring 0 as most operating systems do, the guest OS runs in the lesser-privileged Ring 1, and applications in the least-privileged Ring 3. This explicit virtualization of the physical resources leads to a clear separation between guest and hypervisor, resulting in additional security separation between the two.

Traditionally, hypervisors protect the physical hardware and BIOS; virtualize the CPU, storage, and networking; and provide a rich set of management capabilities. With the Nitro System, we are able to break apart those functions, offload them to dedicated hardware and software, and reduce costs by delivering all of the resources of a server to your instances. The Nitro Hypervisor provides consistent performance and increased compute and memory resources for EC2 virtualized instances by removing host system software components. It allows AWS to offer larger instance sizes (like c5.18xlarge) that provide practically all of the resources from the server to customers. Previously, C3 and C4 instances each eliminated software components by moving VPC and EBS functionality to hardware designed and built by AWS. This hardware
enables the Nitro Hypervisor to be very small and uninvolved in data-processing tasks for networking and storage. Nevertheless, as AWS expands its global cloud infrastructure, Amazon EC2's use of its Xen-based hypervisor will also continue to grow. Xen will remain a core component of EC2 instances for the foreseeable future.

Instance Isolation

Different instances running on the same physical machine are isolated from each other via the Xen hypervisor. Amazon is active in the Xen community, which provides awareness of the latest developments. In addition, the AWS firewall resides within the hypervisor layer, between the physical network interface and the instance's virtual interface. All packets must pass through this layer, thus an instance's neighbors have no more access to that instance than any other host on the Internet, and can be treated as if they are on separate physical hosts. The physical RAM is separated using similar mechanisms.

Customer instances have no access to raw disk devices, but instead are presented with virtualized disks. The AWS proprietary disk virtualization layer automatically resets every block of storage used by the customer, so that one customer's data is never unintentionally exposed to another. In addition, memory allocated to guests is scrubbed (set to zero) by the hypervisor when it is unallocated to a guest. The memory is not returned to the pool of free memory available for new allocations until the memory scrubbing is complete. AWS recommends that customers further protect their data using appropriate means. One common solution is to run an encrypted file system on top of the virtualized disk device.

Figure 2: Amazon EC2 multiple layers of security

Host Operating System: Administrators with a business need to access the management plane are required to use multi-factor authentication to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured, and hardened to protect the management plane of the cloud. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems can be revoked.

Guest Operating System: Virtual instances are completely controlled by you, the customer. You have full root access or administrative control over accounts, services, and applications. AWS does not have any access rights to your instances or the guest OS. AWS recommends a base set of security best practices, including disabling password-only access to your guests and utilizing some form of multi-factor authentication to gain access to your instances (or, at a minimum, certificate-based SSH Version 2 access). Additionally, you should employ a privilege escalation mechanism with logging on a per-user basis. For example, if the guest OS is Linux, after hardening your instance you should utilize certificate-based SSHv2 to access the virtual instance, disable remote root login, use command-line logging, and use 'sudo' for privilege escalation. You should generate your own key pairs in order to guarantee that they are unique, and not shared with other customers or with AWS.

AWS also supports the use of the Secure Shell (SSH) network protocol to enable you to log in securely to your UNIX/Linux EC2 instances. Authentication for SSH used with AWS is via a public/private key pair, to reduce the risk of unauthorized access to your instance. You can also connect remotely to your Windows instances using Remote Desktop Protocol (RDP), by utilizing an RDP certificate generated for your instance.

You also control the updating and patching of your guest OS, including security updates. Amazon-provided Windows- and Linux-based AMIs are updated regularly with the latest patches, so if you do not need to preserve data or customizations on your running Amazon AMI instances, you can simply relaunch new instances with the latest updated AMI. In addition, updates are provided for the Amazon Linux AMI via the Amazon Linux yum repositories.

Firewall: Amazon EC2 provides a complete firewall solution; this mandatory inbound firewall is configured in a default deny-all mode, and Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic. The traffic may be restricted by protocol, by service port, as well as by source IP address (individual IP or Classless Inter-Domain Routing (CIDR) block).

The firewall can be configured in groups, permitting different classes of instances to have different rules. Consider, for example, the case of a traditional three-tiered web application. The group for the web servers would have port 80 (HTTP) and/or port 443 (HTTPS) open to the Internet. The group for the application servers would have port 8000 (application specific) accessible only to the web server group. The group for the database servers would have port 3306 (MySQL) open only to the application server group. All three groups would permit administrative access on port 22 (SSH), but only from the customer's corporate network. Highly secure applications can be deployed using this expressive mechanism. See the following figure.

Figure 3: Amazon EC2 security group firewall

The firewall isn't controlled through the guest OS; rather, it requires your X.509 certificate and key to authorize changes, thus adding an extra layer of security. AWS supports the ability to grant granular access to different administrative functions on the instances and the firewall, therefore enabling you to implement additional security through separation of duties. The level of security afforded by the firewall is a function of which ports you open, and for what duration and purpose. The default state is to deny all incoming traffic, and you
should plan carefully what you will open when building and securing your applications. Well-informed traffic management and security design are still required on a per-instance basis. AWS further encourages you to apply additional per-instance filters with host-based firewalls such as IPtables or the Windows Firewall, and VPNs. This can restrict both inbound and outbound traffic.

API Access: API calls to launch and terminate instances, change firewall parameters, and perform other functions are all signed by your Amazon Secret Access Key, which could be either the AWS Account's Secret Access Key or the Secret Access Key of a user created with AWS IAM. Without access to your Secret Access Key, Amazon EC2 API calls cannot be made on your behalf. In addition, API calls can be encrypted with SSL to maintain confidentiality. Amazon recommends always using SSL-protected API endpoints.

Permissions: AWS IAM also enables you to further control what APIs a user has permissions to call.

Elastic Block Storage (Amazon EBS) Security

Amazon Elastic Block Storage (Amazon EBS) allows you to create storage volumes from 1 GB to 16 TB that can be mounted as devices by Amazon EC2 instances. Storage volumes behave like raw, unformatted block devices with user-supplied device names and a block device interface. You can create a file system on top of Amazon EBS volumes or use them in any other way you would use a block device (like a hard drive). Amazon EBS volume access is restricted to the AWS Account that created the volume and to the users under the AWS Account created with AWS IAM (if the user has been granted access to the EBS operations), thus denying all other AWS Accounts and users the permission to view or access the volume. Data stored in Amazon EBS volumes is redundantly stored in multiple physical locations as part of normal operation of those services and at no additional charge. However, Amazon EBS
replication is stored within the same availability zone, not across multiple zones; therefore, it is highly recommended that you conduct regular snapshots to Amazon S3 for long-term data durability. For customers who have architected complex transactional databases using EBS, it is recommended that backups to Amazon S3 be performed through the database management system so that distributed transactions and logs can be checkpointed. AWS does not perform backups of data that are maintained on virtual disks attached to running instances on Amazon EC2.

You can make Amazon EBS volume snapshots publicly available to other AWS Accounts to use as the basis for creating their own volumes. Sharing Amazon EBS volume snapshots does not provide other AWS Accounts with the permission to alter or delete the original snapshot, as that right is explicitly reserved for the AWS Account that created the volume. An EBS snapshot is a block-level view of an entire EBS volume. Note that data that is not visible through the file system on the volume, such as files that have been deleted, may be present in the EBS snapshot. If you want to create shared snapshots, you should do so carefully: if a volume has held sensitive data or has had files deleted from it, a new EBS volume should be created, the data to be contained in the shared snapshot should be copied to the new volume, and the snapshot created from the new volume.

Amazon EBS volumes are presented to you as raw, unformatted block devices that have been wiped prior to being made available for use. Wiping occurs immediately before reuse, so that you can be assured that the wipe process completed. If you have procedures requiring that all data be wiped via a specific method, such as those detailed in NIST 800-88 ("Guidelines for Media Sanitization"), you have the ability to do so on Amazon EBS. You should conduct a specialized wipe procedure prior to deleting the volume for
compliance with your established requirements.

Encryption of sensitive data is generally a good security practice, and AWS provides the ability to encrypt EBS volumes and their snapshots with AES-256. The encryption occurs on the servers that host the EC2 instances, providing encryption of data as it moves between EC2 instances and EBS storage. In order to do this efficiently and with low latency, the EBS encryption feature is only available on EC2's more powerful instance types (e.g., M3, C3, R3, G2).

Auto Scaling Security

Auto Scaling allows you to automatically scale your Amazon EC2 capacity up or down according to conditions you define, so that the number of Amazon EC2 instances you are using scales up seamlessly during demand spikes to maintain performance, and scales down automatically during demand lulls to minimize costs.

Like all AWS services, Auto Scaling requires that every request made to its control API be authenticated, so only authenticated users can access and manage Auto Scaling. Requests are signed with an HMAC-SHA1 signature calculated from the request and the user's private key. However, getting credentials out to new EC2 instances launched with Auto Scaling can be challenging for large or elastically scaling fleets. To simplify this process, you can use roles within IAM, so that any new instances launched with a role will be given credentials automatically. When you launch an EC2 instance with an IAM role, temporary AWS security credentials with permissions specified by the role are securely provisioned to the instance and are made available to your application via the Amazon EC2 Instance Metadata Service. The Metadata Service makes new temporary security credentials available prior to the expiration of the current active credentials, so that valid credentials are always available on the instance. In addition, the temporary security credentials are automatically rotated multiple times per day, providing enhanced security. You can further control access to
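The HMAC signing mentioned here can be illustrated with Python's standard hmac module. This is a sketch of the mechanism only: the canonical request string below is hypothetical, and AWS's actual signing process defines its own canonicalization (current SDKs use Signature Version 4).

```python
import base64
import hashlib
import hmac

def sign_request(secret_key: bytes, canonical_request: str) -> str:
    """Illustrative HMAC-SHA1 signature over a canonical request string.

    Shows the keyed-hash mechanism only; it is not AWS's exact
    canonicalization or signing algorithm.
    """
    digest = hmac.new(secret_key, canonical_request.encode("utf-8"),
                      hashlib.sha1).digest()
    # A SHA-1 digest is 20 bytes, so the base64 form is always 28 chars.
    return base64.b64encode(digest).decode("ascii")
```

The key property the text relies on is that without the secret key, a valid signature cannot be produced, so the service can reject unauthenticated requests.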
Auto Scaling by creating users under your AWS Account using AWS IAM, and controlling what Auto Scaling APIs these users have permission to call. For more information about using roles when launching instances, see Identity and Access Management for Amazon EC2.

Networking Services

Amazon Web Services provides a range of networking services that enable you to create a logically isolated network that you define, establish a private network connection to the AWS cloud, use a highly available and scalable DNS service, and deliver content to your end users with low latency at high data transfer speeds with a content delivery web service.

Elastic Load Balancing Security

Elastic Load Balancing is used to manage traffic on a fleet of Amazon EC2 instances, distributing traffic to instances across all availability zones within a region. Elastic Load Balancing has all the advantages of an on-premises load balancer, plus several security benefits:

• Takes over the encryption and decryption work from the Amazon EC2 instances and manages it centrally on the load balancer
• Offers clients a single point of contact, and can also serve as the first line of defense against attacks on your network
• When used in an Amazon VPC, supports creation and management of security groups associated with your Elastic Load Balancing to provide additional networking and security options
• Supports end-to-end traffic encryption using TLS (previously SSL) on those networks that use secure HTTP (HTTPS) connections. When TLS is used, the TLS server certificate used to terminate client connections can be managed centrally on the load balancer, rather than on every individual instance.

HTTPS/TLS uses a long-term secret key to generate a short-term session key to be used between the server and the browser to create the ciphered (encrypted) message. Elastic Load Balancing configures your load balancer with a pre-defined cipher set
that is used for TLS negotiation when a connection is established between a client and your load balancer. The pre-defined cipher set provides compatibility with a broad range of clients and uses strong cryptographic algorithms. However, some customers may have requirements for allowing only specific ciphers and protocols (such as PCI, SOX, etc.) from clients to ensure that standards are met. In these cases, Elastic Load Balancing provides options for selecting different configurations for TLS protocols and ciphers. You can choose to enable or disable the ciphers depending on your specific requirements.

To help ensure the use of newer and stronger cipher suites when establishing a secure connection, you can configure the load balancer to have the final say in the cipher suite selection during the client-server negotiation. When the Server Order Preference option is selected, the load balancer selects a cipher suite based on the server's prioritization of cipher suites rather than the client's. This gives you more control over the level of security that clients use to connect to your load balancer.

For even greater communication privacy, Elastic Load Balancing allows the use of Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored anywhere. This prevents the decoding of captured data, even if the secret long-term key itself is compromised.

Elastic Load Balancing allows you to identify the originating IP address of a client connecting to your servers, whether you're using HTTPS or TCP load balancing. Typically, client connection information, such as IP address and port, is lost when requests are proxied through a load balancer. This is because the load balancer sends requests to the server on behalf of the client, making your load balancer appear as though it is the requesting client. Having the originating client IP address is useful if you need more information about
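Server Order Preference, as described above, simply changes whose ordered list wins during cipher negotiation. A minimal Python sketch of that selection logic (the cipher suite names are illustrative):

```python
def negotiate_cipher(client_ciphers, server_ciphers, server_order_preference):
    """Pick the first mutually supported cipher suite.

    With server_order_preference=True the server's ordering is honored
    (the Server Order Preference behavior described in the text);
    otherwise the client's ordering wins.
    """
    preferred, other = (
        (server_ciphers, client_ciphers) if server_order_preference
        else (client_ciphers, server_ciphers)
    )
    for suite in preferred:
        if suite in other:
            return suite
    return None  # no common suite: the handshake would fail
```

With a client that lists a weak suite first, enabling server order preference is what lets the stronger mutually supported suite be chosen.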
visitors to your applications, in order to gather connection statistics, analyze traffic logs, or manage whitelists of IP addresses.

Elastic Load Balancing access logs contain information about each HTTP and TCP request processed by your load balancer. This includes the IP address and port of the requesting client, the backend IP address of the instance that processed the request, the size of the request and response, and the actual request line from the client (for example, GET http://www.example.com:80/HTTP/1.1). All requests sent to the load balancer are logged, including requests that never made it to backend instances.

Amazon Virtual Private Cloud (Amazon VPC) Security

Normally, each Amazon EC2 instance that you launch is randomly assigned a public IP address in the Amazon EC2 address space. Amazon VPC enables you to create an isolated portion of the AWS cloud and launch Amazon EC2 instances that have private (RFC 1918) addresses in the range of your choice (e.g., 10.0.0.0/16). You can define subnets within your VPC, grouping similar kinds of instances based on IP address range, and then set up routing and security to control the flow of traffic in and out of the instances and subnets.

AWS offers a variety of VPC architecture templates with configurations that provide varying levels of public access:

• VPC with a single public subnet only. Your instances run in a private, isolated section of the AWS cloud with direct access to the Internet. Network ACLs and security groups can be used to provide strict control over inbound and outbound network traffic to your instances.
• VPC with public and private subnets. In addition to containing a public subnet, this configuration adds a private subnet whose instances are not addressable from the Internet. Instances in the private subnet can establish outbound connections to the Internet via the public subnet using Network Address Translation (NAT).
• VPC with
public and private subnets and hardware VPN access. This configuration adds an IPsec VPN connection between your Amazon VPC and your data center, effectively extending your data center to the cloud while also providing direct access to the Internet for public subnet instances in your Amazon VPC. In this configuration, customers add a VPN appliance on their corporate data center side.
• VPC with private subnet only and hardware VPN access. Your instances run in a private, isolated section of the AWS cloud with a private subnet whose instances are not addressable from the Internet. You can connect this private subnet to your corporate data center via an IPsec VPN tunnel.

You can also connect two VPCs using a private IP address, which allows instances in the two VPCs to communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account within a single region.

Security features within Amazon VPC include security groups, network ACLs, routing tables, and external gateways. Each of these items is complementary to providing a secure, isolated network that can be extended through selective enabling of direct Internet access or private connectivity to another network. Amazon EC2 instances running within an Amazon VPC inherit all of the benefits described below related to the guest OS and protection against packet sniffing. Note, however, that you must create VPC security groups specifically for your Amazon VPC; any Amazon EC2 security groups you have created will not work inside your Amazon VPC. Also, Amazon VPC security groups have additional capabilities that Amazon EC2 security groups do not have, such as being able to change the security group after the instance is launched and being able to specify any protocol with a standard protocol number (as opposed to just TCP, UDP, or ICMP).

Each Amazon VPC is a distinct, isolated network within the cloud; network traffic within each Amazon VPC is
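As noted above, VPC address ranges are drawn from the private (RFC 1918) space. A quick way to sanity-check a proposed CIDR, sketched with Python's standard ipaddress module (the candidate CIDRs are examples, not a recommendation):

```python
import ipaddress

# The three private ranges defined by RFC 1918.
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(cidr: str) -> bool:
    """True if the proposed VPC CIDR falls entirely inside a private range."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(private) for private in RFC1918)
```

Note that 172.16.0.0/12 ends at 172.31.255.255, so a range such as 172.32.0.0/16 is public address space even though it looks similar.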
isolated from all other Amazon VPCs. At creation time, you select an IP address range for each Amazon VPC. You may create and attach an Internet gateway, virtual private gateway, or both to establish external connectivity, subject to the controls below.

API Access: Calls to create and delete Amazon VPCs; change routing, security group, and network ACL parameters; and perform other functions are all signed by your Amazon Secret Access Key, which could be either the AWS Account's Secret Access Key or the Secret Access Key of a user created with AWS IAM. Without access to your Secret Access Key, Amazon VPC API calls cannot be made on your behalf. In addition, API calls can be encrypted with SSL to maintain confidentiality. Amazon recommends always using SSL-protected API endpoints. AWS IAM also enables a customer to further control what APIs a newly created user has permissions to call.

Subnets and Route Tables: You create one or more subnets within each Amazon VPC; each instance launched in the Amazon VPC is connected to one subnet. Traditional Layer 2 security attacks, including MAC spoofing and ARP spoofing, are blocked. Each subnet in an Amazon VPC is associated with a routing table, and all network traffic leaving the subnet is processed by the routing table to determine the destination.

Firewall (Security Groups): Like Amazon EC2, Amazon VPC supports a complete firewall solution, enabling filtering on both ingress and egress traffic from an instance. The default group enables inbound communication from other members of the same group and outbound communication to any destination. Traffic can be restricted by any IP protocol, by service port, as well as source/destination IP address (individual IP or Classless Inter-Domain Routing (CIDR) block). The firewall isn't controlled through the guest OS; rather, it can be modified only through the invocation of Amazon VPC APIs. AWS supports the ability to grant
granular access to different administrative functions on the instances and the firewall, therefore enabling you to implement additional security through separation of duties. The level of security afforded by the firewall is a function of which ports you open and for what duration and purpose. Well-informed traffic management and security design are still required on a per-instance basis. AWS further encourages you to apply additional per-instance filters with host-based firewalls such as IPtables or the Windows Firewall.

Figure 4: Amazon VPC network architecture

Network Access Control Lists: To add a further layer of security within Amazon VPC, you can configure network ACLs. These are stateless traffic filters that apply to all traffic, inbound or outbound, from a subnet within Amazon VPC. These ACLs can contain ordered rules to allow or deny traffic based upon IP protocol, by service port, as well as source/destination IP address. Like security groups, network ACLs are managed through Amazon VPC APIs, adding an additional layer of protection and enabling additional security through separation of duties. The diagram below depicts how the security controls above interrelate to enable flexible network topologies while providing complete control over network traffic flows.

Figure 5: Flexible network topologies

Virtual Private Gateway: A virtual private gateway enables private connectivity between the Amazon VPC and another network. Network traffic within each virtual private gateway is isolated from network traffic within all other virtual private gateways. You can establish VPN connections to the virtual private gateway from gateway devices at your premises. Each connection is secured by a pre-shared key in conjunction with the IP address of the customer gateway device.

Internet Gateway: An Internet gateway may
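The ordered, first-match-wins evaluation of network ACL rules described above can be modeled in a few lines of Python. The rule numbers, ports, and CIDRs are hypothetical, and this sketch ignores port ranges and egress rules:

```python
import ipaddress

# Hypothetical ordered ACL: rules are evaluated in rule-number order,
# the first match wins, and an implicit final rule denies everything.
# Entries: (rule_number, action, protocol, port, source_cidr).
ACL = [
    (100, "allow", "tcp", 443, "0.0.0.0/0"),
    (200, "deny",  "tcp", 443, "198.51.100.0/24"),  # shadowed by rule 100
    (300, "allow", "tcp", 22,  "203.0.113.0/24"),
]

def evaluate(protocol, port, src_ip):
    """Return 'allow' or 'deny' for a single inbound packet."""
    for _, action, proto, rule_port, cidr in sorted(ACL):
        if (proto == protocol and rule_port == port
                and ipaddress.ip_address(src_ip) in ipaddress.ip_network(cidr)):
            return action
    return "deny"  # implicit default rule
```

Because rule 100 matches first, the deny at rule 200 never fires; ordering (not specificity) decides, which is why rule numbering matters when you author ACLs.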
be attached to an Amazon VPC to enable direct connectivity to Amazon S3, other AWS services, and the Internet. Each instance desiring this access must either have an Elastic IP associated with it or route traffic through a NAT instance. Additionally, network routes are configured (see above) to direct traffic to the Internet gateway. AWS provides reference NAT AMIs that you can extend to perform network logging, deep packet inspection, application-layer filtering, or other security controls. This access can only be modified through the invocation of Amazon VPC APIs. AWS supports the ability to grant granular access to different administrative functions on the instances and the Internet gateway, therefore enabling you to implement additional security through separation of duties. You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances.

Dedicated Instances: Within a VPC, you can launch Amazon EC2 instances that are physically isolated at the host hardware level (i.e., they will run on single-tenant hardware). An Amazon VPC can be created with 'dedicated' tenancy, so that all instances launched into the Amazon VPC use this feature. Alternatively, an Amazon VPC may be created with 'default' tenancy, but you can specify dedicated tenancy for particular instances launched into it.

Elastic Network Interfaces: Each Amazon EC2 instance has a default network interface that is assigned a private IP address on your Amazon VPC network. You can create and attach an additional network interface, known as an elastic network interface, to any Amazon EC2 instance in your Amazon VPC, for a total of two network interfaces per instance. Attaching more than one network interface to an instance is useful when you want to create a management network, use network and
security appliances in your Amazon VPC, or create dual-homed instances with workloads/roles on distinct subnets. A network interface's attributes, including the private IP address, elastic IP addresses, and MAC address, follow the network interface as it is attached or detached from an instance and reattached to another instance. For more information about Amazon VPC, see Amazon Virtual Private Cloud.

Additional Network Access Control with EC2-VPC

If you launch instances in a Region where you did not have instances before AWS launched the new EC2-VPC feature (also called Default VPC), all instances are automatically provisioned in a ready-to-use default VPC. You can choose to create additional VPCs, or you can create VPCs for instances in regions where you already had instances before we launched EC2-VPC.

If you create a VPC later, using regular VPC, you specify a CIDR block, create subnets, enter the routing and security for those subnets, and provision an Internet gateway or NAT instance if you want one of your subnets to be able to reach the Internet. When you launch EC2 instances into an EC2-VPC, most of this work is automatically performed for you. When you launch an instance into a default VPC using EC2-VPC, we do the following to set it up for you:

• Create a default subnet in each Availability Zone
• Create an internet gateway and connect it to your default VPC
• Create a main route table for your default VPC, with a rule that sends all traffic destined for the Internet to the Internet gateway
• Create a default security group and associate it with your default VPC
• Create a default network access control list (ACL) and associate it with your default VPC
• Associate the default DHCP options set for your AWS account with your default VPC

In addition to the default VPC having its own private IP range, EC2 instances launched in a default VPC can also receive a public IP. The following table
summarizes the differences between instances launched into EC2-Classic, instances launched into a default VPC, and instances launched into a nondefault (regular) VPC.

Table 2: Differences between different EC2 instances

Public IP address
  EC2-Classic: Your instance receives a public IP address by default.
  EC2-VPC (Default VPC): Your instance receives a public IP address by default, unless you specify otherwise during launch.
  Regular VPC: Your instance does not receive a public IP address by default, unless you specify otherwise during launch.

Private IP address
  EC2-Classic: Your instance receives a private IP address from the EC2-Classic range each time it's started.
  EC2-VPC (Default VPC): Your instance receives a static private IP address from the address range of your default VPC.
  Regular VPC: Your instance receives a static private IP address from the address range of your VPC.

Multiple private IP addresses
  EC2-Classic: We select a single IP address for your instance; multiple IP addresses are not supported.
  EC2-VPC (Default VPC): You can assign multiple private IP addresses to your instance.
  Regular VPC: You can assign multiple private IP addresses to your instance.

Elastic IP address
  EC2-Classic: An EIP is disassociated from your instance when you stop it.
  EC2-VPC (Default VPC): An EIP remains associated with your instance when you stop it.
  Regular VPC: An EIP remains associated with your instance when you stop it.

DNS hostnames
  EC2-Classic: DNS hostnames are enabled by default.
  EC2-VPC (Default VPC): DNS hostnames are enabled by default.
  Regular VPC: DNS hostnames are disabled by default.

Security group
  EC2-Classic: A security group can reference security groups that belong to other AWS accounts.
  EC2-VPC (Default VPC): A security group can reference security groups for your VPC only.
  Regular VPC: A security group can reference security groups for your VPC only.

Security group association
  EC2-Classic: You must terminate your instance to change its security group.
  EC2-VPC (Default VPC): You can change the security group of your running instance.
  Regular VPC: You can change the security group of your running instance.

Security group rules
  EC2-Classic: You can add rules for inbound traffic only.
  EC2-VPC (Default VPC): You can add rules for inbound and outbound traffic.
  Regular VPC: You can add rules for inbound and outbound traffic.

Tenancy
  EC2-Classic: Your instance runs on shared hardware; you cannot run an instance on single-tenant hardware.
  EC2-VPC (Default VPC): You can run your instance on shared hardware or single-tenant hardware.
  Regular VPC: You can run your instance on shared hardware or single-tenant hardware.

Note: Security groups for instances in EC2-Classic are slightly different than security groups for instances in EC2-VPC. For example, you can add rules for inbound traffic only for EC2-Classic, but you can add rules for both inbound and outbound traffic in EC2-VPC. In EC2-Classic, you can't change the security groups assigned to an instance after it's launched, but in EC2-VPC you can change security groups assigned to an instance after it's launched. In addition, you can't use the security groups that you've created for use with EC2-Classic with instances in your VPC. You must create security groups specifically for use with instances in your VPC. The rules you create for use with a security group for a VPC can't reference a security group for EC2-Classic, and vice versa.

Amazon Route 53 Security

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) service that answers DNS queries, translating domain names into IP addresses so computers can communicate with each other. Route 53 can be used to connect user requests to infrastructure running in AWS, such as an Amazon EC2 instance or an Amazon S3 bucket, or to infrastructure outside of AWS.

Amazon Route 53 lets you manage the IP addresses (records) listed for your domain names, and it answers requests (queries) to translate specific domain names into their corresponding IP addresses. Queries for your domain are automatically routed to a nearby DNS server using anycast in order to provide the lowest latency possible. Route 53 makes it possible for you to manage traffic globally through a variety of routing types, including Latency Based Routing (LBR), Geo DNS, and Weighted Round Robin (WRR), all of which
can be combined with DNS Failover in order to help create a variety of low-latency, fault-tolerant architectures. The failover algorithms implemented by Amazon Route 53 are designed not only to route traffic to endpoints that are healthy, but also to help avoid making disaster scenarios worse due to misconfigured health checks and applications, endpoint overloads, and partition failures.

Route 53 also offers Domain Name Registration: you can purchase and manage domain names such as example.com, and Route 53 will automatically configure default DNS settings for your domains. You can buy, manage, and transfer (both in and out) domains from a wide selection of generic and country-specific top-level domains (TLDs). During the registration process, you have the option to enable privacy protection for your domain. This option will hide most of your personal information from the public Whois database in order to help thwart scraping and spamming.

Amazon Route 53 is built using AWS's highly available and reliable infrastructure. The distributed nature of the AWS DNS servers helps ensure a consistent ability to route your end users to your application. Route 53 also helps ensure the availability of your website by providing health checks and DNS failover capabilities. You can easily configure Route 53 to check the health of your website on a regular basis (even secure web sites that are available only over SSL), and to switch to a backup site if the primary one is unresponsive.

Like all AWS services, Amazon Route 53 requires that every request made to its control API be authenticated, so only authenticated users can access and manage Route 53. API requests are signed with an HMAC-SHA1 or HMAC-SHA256 signature calculated from the request and the user's AWS Secret Access Key. Additionally, the Amazon Route 53 control API is only accessible via SSL-encrypted endpoints. It supports both IPv4 and IPv6 routing. You
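As a rough illustration of the Weighted Round Robin and health-check behavior described above, the following Python sketch picks a record by weight while skipping endpoints whose health check is failing. The record names and weights are hypothetical, and real Route 53 resolution involves much more (TTLs, anycast, routing policies):

```python
import random

def choose_record(records, healthy, rng=random):
    """Weighted selection over (name, weight) records, skipping endpoints
    whose health check is failing. Illustrative only."""
    candidates = [(name, w) for name, w in records
                  if healthy.get(name, False) and w > 0]
    if not candidates:
        return None  # nothing healthy to answer with
    names = [n for n, _ in candidates]
    weights = [w for _, w in candidates]
    return rng.choices(names, weights=weights, k=1)[0]
```

When the primary's health check fails, only the secondary remains a candidate, which is the essence of DNS failover layered on top of weighted routing.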
can control access to Amazon Route 53 DNS management functions by creating users under your AWS Account using AWS IAM, and controlling which Route 53 operations these users have permission to perform.

Amazon CloudFront Security

Amazon CloudFront gives customers an easy way to distribute content to end users with low latency and high data transfer speeds. It delivers dynamic, static, and streaming content using a global network of edge locations. Requests for customers' objects are automatically routed to the nearest edge location, so content is delivered with the best possible performance. Amazon CloudFront is optimized to work with other AWS services like Amazon S3, Amazon EC2, Elastic Load Balancing, and Amazon Route 53. It also works seamlessly with any non-AWS origin server that stores the original, definitive versions of your files.

Amazon CloudFront requires every request made to its control API be authenticated, so only authorized users can create, modify, or delete their own Amazon CloudFront distributions. Requests are signed with an HMAC-SHA1 signature calculated from the request and the user's private key. Additionally, the Amazon CloudFront control API is only accessible via SSL-enabled endpoints.

There is no guarantee of durability of data held in Amazon CloudFront edge locations. The service may from time to time remove objects from edge locations if those objects are not requested frequently. Durability is provided by Amazon S3, which works as the origin server for Amazon CloudFront, holding the original, definitive copies of objects delivered by Amazon CloudFront.

If you want control over who is able to download content from Amazon CloudFront, you can enable the service's private content feature. This feature has two components: the first controls how content is delivered from the Amazon CloudFront edge location to viewers on the Internet. The second controls how the Amazon CloudFront
edge locations access objects in Amazon S3. CloudFront also supports Geo Restriction, which restricts access to your content based on the geographic location of your viewers.

To control access to the original copies of your objects in Amazon S3, Amazon CloudFront allows you to create one or more "Origin Access Identities" and associate these with your distributions. When an Origin Access Identity is associated with an Amazon CloudFront distribution, the distribution will use that identity to retrieve objects from Amazon S3. You can then use Amazon S3's ACL feature, which limits access to that Origin Access Identity, so the original copy of the object is not publicly readable.

To control who is able to download objects from Amazon CloudFront edge locations, the service uses a signed URL verification system. To use this system, you first create a public-private key pair and upload the public key to your account via the AWS Management Console. Second, you configure your Amazon CloudFront distribution to indicate which accounts you would authorize to sign requests; you can indicate up to five AWS Accounts you trust to sign requests. Third, as you receive requests, you create policy documents indicating the conditions under which you want Amazon CloudFront to serve your content. These policy documents can specify the name of the object that is requested, the date and time of the request, and the source IP (or CIDR range) of the client making the request. You then calculate the SHA1 hash of your policy document and sign this using your private key. Finally, you include both the encoded policy document and the signature as query string parameters when you reference your objects. When Amazon CloudFront receives a request, it will decode the signature using your public key. Amazon CloudFront only serves requests that have a valid policy document and matching signature.

Note: Private content is an optional feature that must be enabled when you set up your CloudFront distribution. Content
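The signed-URL workflow above centers on a policy document that is then hashed and signed with your private key. The sketch below builds such a policy and applies base64 with URL-safe character substitutions; the distribution URL, expiry, and CIDR are hypothetical, and the RSA signing step itself is omitted since it requires your private key:

```python
import base64
import json

def make_custom_policy(url, expires_epoch, source_ip=None):
    """Build a policy document of the kind described in the text:
    the resource, an expiry, and an optional source CIDR condition.
    Signing the SHA-1 hash of this JSON with your private key is omitted."""
    condition = {"DateLessThan": {"AWS:EpochTime": expires_epoch}}
    if source_ip:
        condition["IpAddress"] = {"AWS:SourceIp": source_ip}
    return json.dumps(
        {"Statement": [{"Resource": url, "Condition": condition}]},
        separators=(",", ":"),  # compact form, no whitespace
    )

def url_safe_b64(data: bytes) -> str:
    """Base64 with the character substitutions CloudFront documents for
    query-string use: '+' -> '-', '=' -> '_', '/' -> '~'."""
    return base64.b64encode(data).decode("ascii").translate(
        str.maketrans("+=/", "-_~"))
```

The encoded policy and the signature over its hash then travel as query-string parameters, and the edge location verifies them with your public key before serving the object.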
delivered without this feature enabled will be publicly readable.

Amazon CloudFront provides the option to transfer content over an encrypted connection (HTTPS). By default, CloudFront accepts requests over both HTTP and HTTPS protocols. However, you can also configure CloudFront to require HTTPS for all requests, or have CloudFront redirect HTTP requests to HTTPS. You can even configure CloudFront distributions to allow HTTP for some objects but require HTTPS for other objects.

Figure 6: Amazon CloudFront encrypted transmission

You can configure one or more CloudFront origins to require that CloudFront fetch objects from your origin using the protocol that the viewer used to request the objects. For example, when you use this CloudFront setting and the viewer uses HTTPS to request an object from CloudFront, CloudFront also uses HTTPS to forward the request to your origin.

Amazon CloudFront uses the SSLv3 or TLSv1 protocols and a selection of cipher suites that includes the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol on connections to both viewers and the origin. ECDHE allows SSL/TLS clients to provide Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored anywhere. This helps prevent the decoding of captured data by unauthorized third parties, even if the secret long-term key itself is compromised.

Note: If you're using your own server as your origin, and you want to use HTTPS both between viewers and CloudFront and between CloudFront and your origin, you must install a valid SSL certificate on the HTTP server that is signed by a third-party certificate authority, for example VeriSign or DigiCert.

By default, you can deliver content to viewers over HTTPS by using your CloudFront distribution domain name in your URLs; for example, https://dxxxxx.cloudfront.net/image.jpg. If you want to deliver your content over HTTPS using your own domain name and your own SSL
certificate, you can use SNI Custom SSL or Dedicated IP Custom SSL.

With Server Name Indication (SNI) Custom SSL, CloudFront relies on the SNI extension of the TLS protocol, which is supported by most modern web browsers. However, some users may not be able to access your content because some older browsers do not support SNI. (For a list of supported browsers, visit the CloudFront FAQs.) With Dedicated IP Custom SSL, CloudFront dedicates IP addresses to your SSL certificate at each CloudFront edge location so that CloudFront can associate the incoming requests with the proper SSL certificate.

Amazon CloudFront access logs contain a comprehensive set of information about requests for content, including the object requested, the date and time of the request, the edge location serving the request, the client IP address, the referrer, and the user agent. To enable access logs, just specify the name of the Amazon S3 bucket in which to store the logs when you configure your Amazon CloudFront distribution.

AWS Direct Connect Security

With AWS Direct Connect, you can provision a direct link between your internal network and an AWS region using a high-throughput, dedicated connection. Doing this may help reduce your network costs, improve throughput, or provide a more consistent network experience. With this dedicated connection in place, you can then create virtual interfaces directly to the AWS Cloud (for example, to Amazon EC2 and Amazon S3) and Amazon VPC. With Direct Connect, you bypass internet service providers in your network path. You can procure rack space within the facility housing the AWS Direct Connect location and deploy your equipment nearby. Once deployed, you can connect this equipment to AWS Direct Connect using a cross connect. Each AWS Direct Connect location enables connectivity to the geographically nearest AWS region, as well as access to other US regions. For example, you can provision a single
connection to any AWS Direct Connect location in the US and use it to access public AWS services in all US Regions and AWS GovCloud (US). Using industry-standard 802.1Q VLANs, the dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources, such as objects stored in Amazon S3 using public IP address space, and private resources, such as Amazon EC2 instances running within an Amazon VPC using private IP space, while maintaining network separation between the public and private environments.

AWS Direct Connect requires the use of the Border Gateway Protocol (BGP) with an Autonomous System Number (ASN). To create a virtual interface, you use an MD5 cryptographic key for message authorization. MD5 creates a keyed hash using your secret key. You can have AWS automatically generate a BGP MD5 key, or you can provide your own.

Storage Services

Amazon Web Services provides low-cost data storage with high durability and availability. AWS offers storage choices for backup, archiving, and disaster recovery, as well as block and object storage.

Amazon Simple Storage Service (Amazon S3) Security

Amazon Simple Storage Service (Amazon S3) allows you to upload and retrieve data at any time, from anywhere on the web. Amazon S3 stores data as objects within buckets. An object can be any kind of file: a text file, a photo, a video, etc. When you add a file to Amazon S3, you have the option of including metadata with the file and setting permissions to control access to the file. For each bucket, you can control access to the bucket (who can create, delete, and list objects in the bucket), view access logs for the bucket and its objects, and choose the geographical region where Amazon S3 will store the bucket and its contents.

Data Access

Access to data stored in Amazon S3 is restricted by default; only bucket and object owners have access to
the Amazon S3 resources they create. (Note that a bucket/object owner is the AWS account owner, not the user who created the bucket/object.) There are multiple ways to control access to buckets and objects:

• Identity and Access Management (IAM) Policies. AWS IAM enables organizations with many employees to create and manage multiple users under a single AWS account. IAM policies are attached to the users, enabling centralized control of permissions for users under your AWS account to access buckets or objects. With IAM policies, you can only grant users within your own AWS account permission to access your Amazon S3 resources.

• Access Control Lists (ACLs). Within Amazon S3, you can use ACLs to give read or write access on buckets or objects to groups of users. With ACLs, you can only grant other AWS accounts (not specific users) access to your Amazon S3 resources.

• Bucket Policies. Bucket policies in Amazon S3 can be used to add or deny permissions across some or all of the objects within a single bucket. Policies can be attached to users, groups, or Amazon S3 buckets, enabling centralized management of permissions. With bucket policies, you can grant users within your AWS account or other AWS accounts access to your Amazon S3 resources.

Table 3: Types of access control

Type of Access Control    AWS Account-Level Control    User-Level Control
IAM Policies              No                           Yes
ACLs                      Yes                          No
Bucket Policies           Yes                          Yes

You can further restrict access to specific resources based on certain conditions. For example, you can restrict access based on request time (Date condition), whether the request was sent using SSL (Boolean condition), a requester's IP address (IP Address condition), or the requester's client application (String conditions). To identify these conditions, you use policy keys. For more information about action-specific policy keys available within Amazon S3, see the Amazon Simple Storage Service Developer
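A bucket policy combining the condition types described above might look like the following sketch; the bucket name, account ID, and IP range are hypothetical placeholders.

```python
import json

# Hypothetical bucket policy: allow another AWS account to read objects,
# but only over SSL (Boolean condition) and from one IP range (IP condition).
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPartnerReadOverSslFromOffice",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # placeholder account
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {
            "Bool": {"aws:SecureTransport": "true"},
            "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},
        },
    }],
}
policy_json = json.dumps(bucket_policy, indent=2)
```

This is the JSON document you would attach to the bucket; the `aws:SecureTransport` and `aws:SourceIp` condition keys correspond to the Boolean and IP Address conditions mentioned in the text.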
Guide.

Amazon S3 also gives developers the option to use query string authentication, which allows them to share Amazon S3 objects through URLs that are valid for a predefined period of time. Query string authentication is useful for giving HTTP or browser access to resources that would normally require authentication. The signature in the query string secures the request.

Data Transfer

For maximum security, you can securely upload/download data to Amazon S3 via the SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2, so that data is transferred securely both within AWS and to and from sources outside of AWS.

Data Storage

Amazon S3 provides multiple options for protecting data at rest. Customers who prefer to manage their own encryption can use a client encryption library, such as the Amazon S3 Encryption Client, to encrypt data before uploading to Amazon S3. Alternatively, you can use Amazon S3 Server Side Encryption (SSE) if you prefer to have Amazon S3 manage the encryption process for you. Data is encrypted with a key generated by AWS or with a key you supply, depending on your requirements. With Amazon S3 SSE, you can encrypt data on upload simply by adding an additional request header when writing the object. Decryption happens automatically when data is retrieved.

Note: Metadata, which you can include with your object, is not encrypted. Therefore, AWS recommends that customers not place sensitive information in Amazon S3 metadata.

Amazon S3 SSE uses one of the strongest block ciphers available: 256-bit Advanced Encryption Standard (AES-256). With Amazon S3 SSE, every protected object is encrypted with a unique encryption key. This object key itself is then encrypted with a regularly rotated master key. Amazon S3 SSE provides additional security by storing the encrypted data and encryption keys in different hosts. Amazon S3 SSE also
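The query string authentication described earlier in this section can be sketched with the legacy Signature Version 2 scheme shown below. This is a simplified illustration: the access key and secret are the standard placeholders from AWS documentation, and current SDKs presign URLs with the newer Signature Version 4 instead.

```python
import base64
import hashlib
import hmac
import urllib.parse

def presign_s3_url_v2(bucket: str, key: str, access_key: str,
                      secret_key: str, expires_epoch: int) -> str:
    """Sketch of a legacy (Signature Version 2) S3 presigned URL.
    The string to sign covers the HTTP verb, expiry, and resource path."""
    string_to_sign = f"GET\n\n\n{expires_epoch}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    query = urllib.parse.urlencode({
        "AWSAccessKeyId": access_key,
        "Expires": expires_epoch,
        "Signature": signature,
    })
    return f"https://{bucket}.s3.amazonaws.com/{key}?{query}"

# Placeholder credentials from AWS's public documentation examples:
url = presign_s3_url_v2("example-bucket", "image.jpg",
                        "AKIAIOSFODNN7EXAMPLE",
                        "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                        1_700_000_000)
```

Anyone holding this URL can fetch the object until the expiry time; after that, S3 rejects the request.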
makes it possible for you to enforce encryption requirements. For example, you can create and apply bucket policies that require that only encrypted data can be uploaded to your buckets.

For long-term storage, you can automatically archive the contents of your Amazon S3 buckets to AWS's archival service, Amazon S3 Glacier. You can have data transferred at specific intervals to Amazon S3 Glacier by creating lifecycle rules in Amazon S3 that describe which objects you want to be archived to Amazon S3 Glacier, and when. As part of your data management strategy, you can also specify how long Amazon S3 should wait after the objects are put into Amazon S3 to delete them.

When an object is deleted from Amazon S3, removal of the mapping from the public name to the object starts immediately, and is generally processed across the distributed system within several seconds. Once the mapping is removed, there is no remote access to the deleted object. The underlying storage area is then reclaimed for use by the system.

Data Durability and Reliability

Amazon S3 is designed to provide 99.999999999% durability and 99.99% availability of objects over a given year. Objects are redundantly stored on multiple devices across multiple facilities in an Amazon S3 region. To help provide durability, Amazon S3 PUT and COPY operations synchronously store customer data across multiple facilities before returning SUCCESS. Once stored, Amazon S3 helps maintain the durability of the objects by quickly detecting and repairing any lost redundancy. Amazon S3 also regularly verifies the integrity of data stored using checksums. If corruption is detected, it is repaired using redundant data. In addition, Amazon S3 calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.

Amazon S3 provides further protection via Versioning. You can use Versioning to preserve, retrieve, and
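The lifecycle rules described above take the shape of a configuration document like the following sketch; the rule ID and the "logs/" prefix are hypothetical, and this mirrors the structure accepted by S3's lifecycle configuration API.

```python
import json

# Hypothetical lifecycle configuration: archive objects under "logs/" to
# S3 Glacier after 90 days, then delete them after 365 days.
lifecycle_configuration = {
    "Rules": [{
        "ID": "ArchiveThenExpireLogs",
        "Filter": {"Prefix": "logs/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": 365},
    }]
}
lifecycle_json = json.dumps(lifecycle_configuration, indent=2)
```

The transition and expiration rules are independent: objects move to the archival storage class at day 90 and are removed entirely at day 365.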
restore every version of every object stored in an Amazon S3 bucket. With Versioning, you can easily recover from both unintended user actions and application failures. By default, requests will retrieve the most recently written version. Older versions of an object can be retrieved by specifying a version in the request. You can further protect versions using Amazon S3 Versioning's MFA Delete feature. Once enabled for an Amazon S3 bucket, each version deletion request must include the six-digit code and serial number from your multi-factor authentication device.

Access Logs

An Amazon S3 bucket can be configured to log access to the bucket and objects within it. The access log contains details about each access request, including request type, the requested resource, the requestor's IP, and the time and date of the request. When logging is enabled for a bucket, log records are periodically aggregated into log files and delivered to the specified Amazon S3 bucket.

Cross-Origin Resource Sharing (CORS)

AWS customers who use Amazon S3 to host static web pages or store objects used by other web pages can load content securely by configuring an Amazon S3 bucket to explicitly enable cross-origin requests. Modern browsers use the Same-Origin policy to block JavaScript or HTML5 from loading content from another site or domain, as a way to help ensure that malicious content is not loaded from a less reputable source (such as during cross-site scripting attacks). With the Cross-Origin Resource Sharing (CORS) policy enabled, assets such as web fonts and images stored in an Amazon S3 bucket can be safely referenced by external web pages, style sheets, and HTML5 applications.

Amazon S3 Glacier Security

Like Amazon S3, the Amazon S3 Glacier service provides low-cost, secure, and durable storage. But where Amazon S3 is designed for rapid retrieval, Amazon S3 Glacier is meant to be used as an archival service for data that is not accessed often, and for which retrieval times of several
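A CORS configuration of the kind described above might look like the following sketch; the allowed origin is a placeholder, and the structure mirrors what S3's CORS configuration API accepts.

```python
import json

# Hypothetical CORS configuration: allow GET requests from one web origin,
# so that pages served from that origin can load assets from the bucket.
cors_configuration = {
    "CORSRules": [{
        "AllowedOrigins": ["https://www.example.com"],  # placeholder origin
        "AllowedMethods": ["GET"],
        "AllowedHeaders": ["*"],
        "MaxAgeSeconds": 3000,  # how long browsers may cache the preflight response
    }]
}
cors_json = json.dumps(cors_configuration, indent=2)
```

Restricting `AllowedOrigins` to the specific sites that need the assets keeps the bucket from becoming a general cross-origin resource.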
hours are suitable. Amazon S3 Glacier stores files as archives within vaults. Archives can be any data, such as a photo, video, or document, and can contain one or several files. You can store an unlimited number of archives in a single vault, and can create up to 1,000 vaults per region. Each archive can contain up to 40 TB of data.

Data Upload

To transfer data into Amazon S3 Glacier vaults, you can upload an archive in a single upload operation or a multipart operation. In a single upload operation, you can upload archives up to 4 GB in size. However, customers can achieve better results using the Multipart Upload API to upload archives greater than 100 MB. Using the Multipart Upload API allows you to upload large archives, up to about 40,000 GB. The Multipart Upload API call is designed to improve the upload experience for larger archives; it enables the parts to be uploaded independently, in any order, and in parallel. If a multipart upload fails, you only need to upload the failed part again, not the entire archive.

When you upload data to Amazon S3 Glacier, you must compute and supply a tree hash. Amazon S3 Glacier checks the hash against the data to help ensure that it has not been altered en route. A tree hash is generated by computing a hash for each megabyte-sized segment of the data, and then combining the hashes in tree fashion to represent ever-growing adjacent segments of the data.

As an alternative to using the Multipart Upload feature, customers with very large uploads to Amazon S3 Glacier may consider using the AWS Snowball service instead to transfer the data. AWS Snowball facilitates moving large amounts of data into AWS using portable storage devices for transport. AWS transfers your data directly off of storage devices using Amazon's high-speed internal network, bypassing the Internet.

You can also set up Amazon S3 to transfer data at specific intervals to Amazon S3 Glacier. You can
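The tree-hash computation described above can be sketched as follows. Glacier's documented algorithm hashes each 1 MiB segment with SHA-256, then repeatedly hashes the concatenation of adjacent pairs until a single digest remains.

```python
import hashlib

MIB = 1024 * 1024  # Glacier hashes the payload in 1 MiB segments

def glacier_tree_hash(data: bytes) -> str:
    """Compute a SHA-256 tree hash: hash each 1 MiB chunk, then repeatedly
    hash the concatenation of adjacent pairs until one digest remains."""
    chunks = [data[i:i + MIB] for i in range(0, len(data), MIB)] or [b""]
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(hashlib.sha256(level[i] + level[i + 1]).digest())
            else:
                nxt.append(level[i])  # odd leftover node is promoted as-is
        level = nxt
    return level[0].hex()
```

For payloads of 1 MiB or less there is only one segment, so the tree hash equals the plain SHA-256 of the payload; for larger payloads the tree structure lets independently uploaded parts be verified and combined.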
create lifecycle rules in Amazon S3 that describe which objects you want to be archived to Amazon S3 Glacier, and when. You can also specify how long Amazon S3 should wait after the objects are put into Amazon S3 to delete them.

To achieve even greater security, you can securely upload/download data to Amazon S3 Glacier via the SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2, so that data is transferred securely both within AWS and to and from sources outside of AWS.

Data Retrieval

Retrieving archives from Amazon S3 Glacier requires the initiation of a retrieval job, which is generally completed in 3 to 5 hours. You can then access the data via HTTP GET requests. The data will remain available to you for 24 hours. You can retrieve an entire archive or several files from an archive. If you want to retrieve only a subset of an archive, you can use one retrieval request to specify the range of the archive that contains the files you are interested in, or you can initiate multiple retrieval requests, each with a range for one or more files. You can also limit the number of vault inventory items retrieved by filtering on an archive creation date range or by setting a maximum items limit. Whichever method you choose, when you retrieve portions of your archive, you can use the supplied checksum to help ensure the integrity of the files, provided that the range that is retrieved is aligned with the tree hash of the overall archive.

Data Storage

Amazon S3 Glacier automatically encrypts the data using AES-256 and stores it durably in an immutable form. Amazon S3 Glacier is designed to provide average annual durability of 99.999999999% for an archive. It stores each archive in multiple facilities and on multiple devices. Unlike traditional systems, which can require laborious data verification and manual repair, Amazon S3 Glacier performs regular,
systematic data integrity checks and is built to be automatically self-healing. When an object is deleted from Amazon S3 Glacier, removal of the mapping from the public name to the object starts immediately, and is generally processed across the distributed system within several seconds. Once the mapping is removed, there is no remote access to the deleted object. The underlying storage area is then reclaimed for use by the system.

Data Access

Only your account can access your data in Amazon S3 Glacier. To control access to your data in Amazon S3 Glacier, you can use AWS IAM to specify which users within your account have rights to operations on a given vault.

AWS Storage Gateway Security

The AWS Storage Gateway service connects your on-premises software appliance with cloud-based storage to provide seamless and secure integration between your IT environment and the AWS storage infrastructure. The service enables you to securely upload data to AWS's scalable, reliable, and secure Amazon S3 storage service for cost-effective backup and rapid disaster recovery.

AWS Storage Gateway transparently backs up data off-site to Amazon S3 in the form of Amazon EBS snapshots. Amazon S3 redundantly stores these snapshots on multiple devices across multiple facilities, detecting and repairing any lost redundancy. The Amazon EBS snapshot provides a point-in-time backup that can be restored on premises or used to instantiate new Amazon EBS volumes. Data is stored within a single region that you specify.

AWS Storage Gateway offers three options:

• Gateway-Stored Volumes (where the cloud is backup). In this option, your volume data is stored locally and then pushed to Amazon S3, where it is stored in redundant, encrypted form and made available in the form of Amazon Elastic Block Store (Amazon EBS) snapshots. When you use this model, the on-premises storage is primary, delivering low-latency access to your entire
dataset, and the cloud storage is the backup.

• Gateway-Cached Volumes (where the cloud is primary). In this option, your volume data is stored encrypted in Amazon S3, visible within your enterprise's network via an iSCSI interface. Recently accessed data is cached on premises for low-latency local access. When you use this model, the cloud storage is primary, but you get low-latency access to your active working set in the cached volumes on premises.

• Gateway-Virtual Tape Library (VTL). In this option, you can configure a Gateway-VTL with up to 10 virtual tape drives per gateway, 1 media changer, and up to 1,500 virtual tape cartridges. Each virtual tape drive responds to the SCSI command set, so your existing on-premises backup applications (either disk-to-tape or disk-to-disk-to-tape) will work without modification.

No matter which option you choose, data is asynchronously transferred from your on-premises storage hardware to AWS over SSL. The data is stored encrypted in Amazon S3 using Advanced Encryption Standard (AES) 256, a symmetric-key encryption standard using 256-bit encryption keys. The AWS Storage Gateway only uploads data that has changed, minimizing the amount of data sent over the Internet.

The AWS Storage Gateway runs as a virtual machine (VM) that you deploy on a host in your data center running VMware ESXi Hypervisor v4.1 or v5 or Microsoft Hyper-V (you download the VMware software during the setup process). You can also run within EC2 using a gateway AMI. During the installation and configuration process, you can create up to 12 stored volumes, 20 cached volumes, or 1,500 virtual tape cartridges per gateway. Once installed, each gateway will automatically download, install, and deploy updates and patches. This activity takes place during a maintenance window that you can set on a per-gateway basis.

The iSCSI protocol supports authentication between targets and initiators via CHAP
(Challenge-Handshake Authentication Protocol). CHAP provides protection against man-in-the-middle and playback attacks by periodically verifying the identity of an iSCSI initiator as authenticated to access a storage volume target. To set up CHAP, you must configure it in both the AWS Storage Gateway console and in the iSCSI initiator software you use to connect to the target.

After you deploy the AWS Storage Gateway VM, you must activate the gateway using the AWS Storage Gateway console. The activation process associates your gateway with your AWS account. Once you establish this connection, you can manage almost all aspects of your gateway from the console. In the activation process, you specify the IP address of your gateway, name your gateway, identify the AWS region in which you want your snapshot backups stored, and specify the gateway time zone.

AWS Snowball Security

AWS Snowball is a simple, secure method for physically transferring large amounts of data to Amazon S3, Amazon EBS, or Amazon S3 Glacier storage. This service is typically used by customers who have over 100 GB of data and/or slow connection speeds that would result in very slow transfer rates over the Internet. With AWS Snowball, you prepare a portable storage device that you ship to a secure AWS facility. AWS transfers the data directly off of the storage device using Amazon's high-speed internal network, thus bypassing the Internet. Conversely, data can also be exported from AWS to a portable storage device.

Like all other AWS services, the AWS Snowball service requires that you securely identify and authenticate your storage device. In this case, you will submit a job request to AWS that includes your Amazon S3 bucket, Amazon EBS region, AWS Access Key ID, and return shipping address. You then receive a unique identifier for the job, a digital signature for authenticating your device, and an AWS address to ship the storage device to. For Amazon S3, you place the signature file in the root directory of your device. For Amazon EBS, you
tape the signature barcode to the exterior of the device. The signature file is used only for authentication, and is not uploaded to Amazon S3 or Amazon EBS.

For transfers to Amazon S3, you specify the specific buckets to which the data should be loaded, and ensure that the account doing the loading has write permission for the buckets. You should also specify the access control list to be applied to each object loaded to Amazon S3.

For transfers to Amazon EBS, you specify the target region for the EBS import operation. If the storage device is less than or equal to the maximum volume size of 1 TB, its contents are loaded directly into an Amazon EBS snapshot. If the storage device's capacity exceeds 1 TB, a device image is stored within the specified S3 log bucket. You can then create a RAID of Amazon EBS volumes using software such as Logical Volume Manager, and copy the image from S3 to this new volume.

For added protection, you can encrypt the data on your device before you ship it to AWS. For Amazon S3 data, you can use a PIN-code device with hardware encryption or TrueCrypt software to encrypt your data before sending it to AWS. For EBS and Amazon S3 Glacier data, you can use any encryption method you choose, including a PIN-code device. AWS will decrypt your Amazon S3 data before importing, using the PIN code and/or TrueCrypt password you supply in your import manifest. AWS uses your PIN to access a PIN-code device, but does not decrypt software-encrypted data for import to Amazon EBS or Amazon S3 Glacier. The following table summarizes your encryption options for each type of import/export job.

Table 4: Encryption options for import/export jobs

Import to Amazon S3
• Source: Files on a device file system. Encrypt the data using a PIN-code device and/or TrueCrypt before shipping the device.
• Target: Objects in an existing Amazon S3 bucket.
• Result: AWS decrypts the data before performing the import. One object for each
file. AWS erases your device after every import job, prior to shipping.

Export from Amazon S3
• Source: Objects in one or more Amazon S3 buckets. Provide a PIN code and/or password that AWS will use to encrypt your data.
• Target: Files on your storage device.
• Result: AWS formats your device and copies your data to an encrypted file container on your device, one file for each object. AWS encrypts your data prior to shipping. Use a PIN-code device and/or TrueCrypt to decrypt the files.

Import to Amazon S3 Glacier
• Source: Entire device. Encrypt the data using the encryption method of your choice before shipping.
• Target: One archive in an existing Amazon S3 Glacier vault.
• Result: AWS does not decrypt your device. The device image is stored as a single archive. AWS erases your device after every import job, prior to shipping.

Import to Amazon EBS (Device Capacity < 1 TB)
• Source: Entire device. Encrypt the data using the encryption method of your choice before shipping.
• Target: One Amazon EBS snapshot.
• Result: AWS does not decrypt your device. The device image is stored as a single snapshot. If the device was encrypted, the image is encrypted. AWS erases your device after every import job, prior to shipping.

Import to Amazon EBS (Device Capacity > 1 TB)
• Source: Entire device. Encrypt the data using the encryption method of your choice before shipping.
• Target: Multiple objects in an existing Amazon S3 bucket.
• Result: AWS does not decrypt your device. The device image is chunked into a series of 1 TB snapshots stored as objects in the Amazon S3 bucket specified in the manifest file. If the device was encrypted, the image is encrypted. AWS erases your device after every import job, prior to shipping.

After the import is complete, AWS Snowball will erase the contents of your storage device to safeguard the data during
return shipment. AWS overwrites all writable blocks on the storage device with zeroes. You will need to repartition and format the device after the wipe. If AWS is unable to erase the data on the device, it will be scheduled for destruction, and our support team will contact you using the email address specified in the manifest file you ship with the device.

When shipping a device internationally, the customs option and certain required subfields are required in the manifest file sent to AWS. AWS Snowball uses these values to validate the inbound shipment and prepare the outbound customs paperwork. Two of these options are whether the data on the device is encrypted or not, and the encryption software's classification. When shipping encrypted data to or from the United States, the encryption software must be classified as 5D992 under the United States Export Administration Regulations.

Amazon Elastic File System Security

Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 instances in the AWS Cloud. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files. Amazon EFS file systems are distributed across an unconstrained number of storage servers, enabling file systems to grow elastically to petabyte scale and allowing massively parallel access from Amazon EC2 instances to your data.

Data Access

With Amazon EFS, you can create a file system, mount the file system on an Amazon EC2 instance, and then read and write data to and from your file system. You can mount an Amazon EFS file system on EC2 instances in your VPC through the Network File System versions 4.0 and 4.1 (NFSv4) protocol. To access your Amazon EFS file system in a VPC, you create one or more mount targets in the VPC. A mount target provides an IP address for an NFSv4 endpoint. You can then mount an Amazon EFS file system to this endpoint using its DNS name, which will resolve to the IP address of the EFS mount target in
the same Availability Zone as your EC2 instance. You can create one mount target in each Availability Zone in a region. If there are multiple subnets in an Availability Zone in your VPC, you create a mount target in one of the subnets, and all EC2 instances in that Availability Zone share that mount target. You can also mount an EFS file system on a host in an on-premises datacenter using AWS Direct Connect.

When using Amazon EFS, you specify Amazon EC2 security groups for your EC2 instances and security groups for the EFS mount targets associated with the file system. Security groups act as a firewall, and the rules you add define the traffic flow. You can authorize inbound/outbound access to your EFS file system by adding rules that allow your EC2 instance to connect to your Amazon EFS file system via the mount target using the NFS port.

After mounting the file system via the mount target, you use it like any other POSIX-compliant file system. Files and directories in an EFS file system support standard Unix-style read/write/execute permissions based on the user and group ID asserted by the mounting NFSv4.1 client. For information about NFS-level permissions and related considerations, see Working with Users, Groups, and Permissions at the Network File System (NFS) Level.

All Amazon EFS file systems are owned by an AWS account. You can use IAM policies to grant permissions to other users so that they can perform administrative operations on your file systems, including deleting a file system or modifying a mount target's security groups. For more information about EFS permissions, see Overview of Managing Access Permissions to Your Amazon EFS Resources.

Data Durability and Reliability

Amazon EFS is designed to be highly durable and highly available. All data and metadata is stored across multiple Availability Zones, and all service components are designed to be highly available. EFS provides strong
consistency by synchronously replicating data across Availability Zones, with read-after-write semantics for most file operations. Amazon EFS incorporates checksums for all metadata and data throughout the service. Using a file system checking process (FSCK), EFS continuously validates a file system's metadata and data integrity.

Data Sanitization

Amazon EFS is designed so that when you delete data from a file system, that data will never be served again. If your procedures require that all data be wiped via a specific method, such as those detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual") or NIST 800-88 ("Guidelines for Media Sanitization"), we recommend that you conduct a specialized wipe procedure prior to deleting the file system.

Database Services

Amazon Web Services provides a number of database solutions for developers and businesses, from managed relational and NoSQL database services to in-memory caching as a service and a petabyte-scale data warehouse service.

Amazon DynamoDB Security

Amazon DynamoDB is a managed NoSQL database service that provides fast and predictable performance with seamless scalability. Amazon DynamoDB enables you to offload the administrative burdens of operating and scaling distributed databases to AWS, so you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. You can create a database table that can store and retrieve any amount of data and serve any level of request traffic. DynamoDB automatically spreads the data and traffic for the table over a sufficient number of servers to handle the request capacity you specified and the amount of data stored, while maintaining consistent, fast performance.

All data items are stored on Solid State Drives (SSDs) and are automatically replicated across multiple Availability Zones in a region to provide built-in high
availability and data durability. You can set up automatic backups using a special template in AWS Data Pipeline that was created just for copying DynamoDB tables. You can choose full or incremental backups to a table in the same region or a different region. You can use the copy for disaster recovery (DR) in the event that an error in your code damages the original table, or to federate DynamoDB data across regions to support a multi-region application.

To control who can use the DynamoDB resources and API, you set up permissions in AWS IAM. In addition to controlling access at the resource level with IAM, you can also control access at the database level: you can create database-level permissions that allow or deny access to items (rows) and attributes (columns) based on the needs of your application. These database-level permissions are called fine-grained access controls, and you create them using an IAM policy that specifies under what circumstances a user or application can access a DynamoDB table. The IAM policy can restrict access to individual items in a table, access to the attributes in those items, or both at the same time.

Figure 7: Database-level permissions

You can optionally use web identity federation to control access by application users who are authenticated by Login with Amazon, Facebook, or Google. Web identity federation removes the need for creating individual IAM users; instead, users can sign in to an identity provider and then obtain temporary security credentials from AWS Security Token Service (AWS STS). AWS STS returns temporary AWS credentials to the application and allows it to access the specific DynamoDB table.

In addition to requiring database and user permissions, each request to the DynamoDB service must contain a valid HMAC-SHA256 signature, or the request is rejected. The AWS SDKs automatically sign your requests; however, if you want to write your own HTTP
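A fine-grained access control policy of the kind described above might look like the following sketch. The table name and account ID are hypothetical; the `dynamodb:LeadingKeys` condition limits a federated user to items whose partition key matches their own identity, and `dynamodb:Attributes` limits which columns they can read.

```python
import json

# Hypothetical fine-grained access policy: a federated user may only read
# items keyed by their own identity, and only two named attributes.
fine_grained_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/GameScores",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"],
                "dynamodb:Attributes": ["UserId", "TopScore"],
            },
            "StringEquals": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"},
        },
    }],
}
policy_json = json.dumps(fine_grained_policy, indent=2)
```

Requiring `SPECIFIC_ATTRIBUTES` prevents the application from issuing a broad `SELECT *`-style read that would bypass the attribute restriction.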
POST requests, you must provide the signature in the header of your request to Amazon DynamoDB. To calculate the signature, you must request temporary security credentials from the AWS Security Token Service, then use the temporary security credentials to sign your requests to Amazon DynamoDB. Amazon DynamoDB is accessible via TLS/SSL-encrypted endpoints.

Amazon Relational Database Service (Amazon RDS) Security

Amazon RDS allows you to quickly create a relational database (DB) instance and flexibly scale the associated compute resources and storage capacity to meet application demand. Amazon RDS manages the database instance on your behalf by performing backups, handling failover, and maintaining the database software. Currently, Amazon RDS is available for MySQL, Oracle, Microsoft SQL Server, and PostgreSQL database engines.

Amazon RDS has multiple features that enhance reliability for critical production databases, including DB security groups, permissions, SSL connections, automated backups, DB snapshots, and Multi-AZ deployments. DB instances can also be deployed in an Amazon VPC for additional network isolation.

Access Control

When you first create a DB Instance within Amazon RDS, you will create a master user account, which is used only within the context of Amazon RDS to control access to your DB Instance(s). The master user account is a native database user account that allows you to log on to your DB Instance with all database privileges. You can specify the master user name and password you want associated with each DB Instance when you create the DB Instance. Once you have created your DB Instance, you can connect to the database using the master user credentials. Subsequently, you can create additional user accounts so that you can restrict who can access your DB Instance.

You can control Amazon RDS DB Instance access via DB Security Groups, which are similar to Amazon EC2 Security Groups but not interchangeable. DB Security Groups act like a firewall controlling network access to your DB Instance. Database Security Groups default to a "deny all" access mode, and customers must specifically authorize network ingress. There are two ways of doing this: authorizing a network IP range, or authorizing an existing Amazon EC2 Security Group. DB Security Groups only allow access to the database server port (all others are blocked) and can be updated without restarting the Amazon RDS DB Instance, which allows a customer seamless control of their database access.

Using AWS IAM, you can further control access to your RDS DB instances. AWS IAM enables you to control which RDS operations each individual AWS IAM user has permission to call.

Network Isolation

For additional network access control, you can run your DB Instances in an Amazon VPC. Amazon VPC enables you to isolate your DB Instances by specifying the IP range you wish to use, and to connect to your existing IT infrastructure through an industry-standard encrypted IPsec VPN. Running Amazon RDS in a VPC enables you to have a DB instance within a private subnet. You can also set up a virtual private gateway that extends your corporate network into your VPC and allows access to the RDS DB instance in that VPC. Refer to the Amazon VPC User Guide for more details.

For Multi-AZ deployments, defining a subnet for all Availability Zones in a region will allow Amazon RDS to create a new standby in another Availability Zone should the need arise. You can create DB Subnet Groups, which are collections of subnets that you may want to designate for your RDS DB Instances in a VPC. Each DB Subnet Group should have at least one subnet for every Availability Zone in a given region. In this case, when you create a DB Instance in a VPC, you select a DB Subnet Group; Amazon RDS then uses that DB Subnet Group and your preferred Availability Zone to select a subnet and an IP address within that subnet. Amazon RDS creates and associates an Elastic Network Interface to your DB Instance with that IP address.

DB Instances deployed within an Amazon VPC can be accessed from the Internet, or from Amazon EC2 Instances outside the VPC, via VPN or bastion hosts that you can launch in your public subnet. To use a bastion host, you will need to set up a public subnet with an EC2 instance that acts as an SSH bastion. This public subnet must have an Internet gateway and routing rules that allow traffic to be directed via the SSH host, which must then forward requests to the private IP address of your Amazon RDS DB instance.

DB Security Groups can be used to help secure DB Instances within an Amazon VPC. In addition, network traffic entering and exiting each subnet can be allowed or denied via network ACLs. All network traffic entering or exiting your Amazon VPC via your IPsec VPN connection can be inspected by your on-premises security infrastructure, including network firewalls and intrusion detection systems.

Encryption

You can encrypt connections between your application and your DB Instance using SSL. For MySQL and SQL Server, RDS creates an SSL certificate and installs the certificate on the DB instance when the instance is provisioned. For MySQL, you launch the mysql client using the ssl_ca parameter to reference the public key in order to encrypt connections. For SQL Server, download the public key and import the certificate into your Windows operating system. Oracle RDS uses Oracle native network encryption with a DB instance; you simply add the native network encryption option to an option group and associate that option group with the DB instance. Once an encrypted connection is established, data transferred between the DB Instance and your application will be encrypted during transfer. You can also require your DB instance to only accept encrypted connections.

Amazon RDS supports Transparent Data Encryption (TDE) for SQL Server (SQL Server Enterprise Edition) and Oracle (part of the Oracle Advanced Security option available in Oracle Enterprise Edition). The TDE feature automatically encrypts data before it is written to storage and automatically decrypts data when it is read from storage.

Note: SSL support within Amazon RDS is for encrypting the connection between your application and your DB Instance; it should not be relied on for authenticating the DB Instance itself. While SSL offers security benefits, be aware that SSL encryption is a compute-intensive operation and will increase the latency of your database connection. To learn how SSL works with SQL Server, you can read more in the Amazon Relational Database Service User Guide.

Automated Backups and DB Snapshots

Amazon RDS provides two different methods for backing up and restoring your DB Instance(s): automated backups and database snapshots (DB Snapshots).

Turned on by default, the automated backup feature of Amazon RDS enables point-in-time recovery for your DB Instance. Amazon RDS will back up your database and transaction logs and store both for a user-specified retention period. This allows you to restore your DB Instance to any second during your retention period, up to the last 5 minutes. Your automatic backup retention period can be configured to up to 35 days.

During the backup window, storage I/O may be suspended while your data is being backed up. This I/O suspension typically lasts a few minutes. The I/O suspension is avoided with Multi-AZ DB deployments, since the backup is taken from the standby.

DB Snapshots are user-initiated backups of your DB Instance. These full database backups are stored by Amazon RDS until you explicitly delete them. You can copy DB snapshots of any size and move them between any of AWS's public regions, or copy the same snapshot to multiple regions simultaneously. You can then create a new DB Instance from a DB Snapshot whenever you desire.

DB Instance Replication

Amazon cloud computing resources are housed in highly available data center facilities in different regions of the world, and each region contains multiple distinct locations called Availability Zones. Each Availability Zone is engineered to be isolated from failures in other Availability Zones and to provide inexpensive, low-latency network connectivity to other Availability Zones in the same region.

To architect for high availability of your Oracle, PostgreSQL, or MySQL databases, you can run your RDS DB instance in several Availability Zones, an option called a Multi-AZ deployment. When you select this option, Amazon automatically provisions and maintains a synchronous standby replica of your DB instance in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to the standby replica. In the event of DB instance or Availability Zone failure, Amazon RDS will automatically fail over to the standby so that database operations can resume quickly without administrative intervention.

For customers who use MySQL and need to scale beyond the capacity constraints of a single DB Instance for read-heavy database workloads, Amazon RDS provides a Read Replica option. Once you create a read replica, database updates on the source DB instance are replicated to the read replica using MySQL's native asynchronous replication. You can create multiple read replicas for a given source DB instance and distribute your application's read traffic among them. Read replicas can be created with Multi-AZ deployments to gain read scaling benefits in addition to the enhanced database write availability and data durability provided by Multi-AZ deployments.

Automatic Software Patching

Amazon RDS will make sure that the relational database software powering your deployment stays up to date with the latest patches. When necessary, patches are applied
during a maintenance window that you can control. You can think of the Amazon RDS maintenance window as an opportunity to control when DB Instance modifications (such as scaling the DB Instance class) and software patching occur, in the event either are requested or required. If a maintenance event is scheduled for a given week, it will be initiated and completed at some point during the 30-minute maintenance window you identify.

The only maintenance events that require Amazon RDS to take your DB Instance offline are scale compute operations (which generally take only a few minutes from start to finish) or required software patching. Required patching is automatically scheduled only for patches that are security and durability related. Such patching occurs infrequently (typically once every few months) and should seldom require more than a fraction of your maintenance window. If you do not specify a preferred weekly maintenance window when creating your DB Instance, a 30-minute default value is assigned. If you wish to modify when maintenance is performed on your behalf, you can do so by modifying your DB Instance in the AWS Management Console or by using the ModifyDBInstance API. Each of your DB Instances can have different preferred maintenance windows, if you so choose.

Running your DB Instance as a Multi-AZ deployment can further reduce the impact of a maintenance event, as Amazon RDS will conduct maintenance via the following steps: 1) perform maintenance on the standby, 2) promote the standby to primary, and 3) perform maintenance on the old primary, which becomes the new standby.

When the Amazon RDS DB Instance deletion API (DeleteDBInstance) is run, the DB Instance is marked for deletion. Once the instance no longer indicates 'deleting' status, it has been removed. At this point the instance is no longer accessible and, unless a final snapshot copy was asked for, it cannot be restored and will not be listed by any of the tools or APIs.

Event Notification

You can receive notifications of a variety of important events that can occur on your RDS instance, such as whether the instance was shut down, a backup was started, a failover occurred, the security group was changed, or your storage space is low. The Amazon RDS service groups events into categories that you can subscribe to, so that you can be notified when an event in that category occurs. You can subscribe to an event category for a DB instance, DB snapshot, DB security group, or DB parameter group. RDS events are published via AWS SNS and sent to you as an email or text message. For more information about RDS notification event categories, refer to the Amazon Relational Database Service User Guide.

Amazon Redshift Security

Amazon Redshift is a petabyte-scale SQL data warehouse service that runs on highly optimized and managed AWS compute and storage resources. The service has been architected to not only scale up or down rapidly, but to significantly improve query speeds, even on extremely large datasets. To increase performance, Redshift uses techniques such as columnar storage, data compression, and zone maps to reduce the amount of I/O needed to perform queries. It also has a massively parallel processing (MPP) architecture, parallelizing and distributing SQL operations to take advantage of all available resources.

When you create a Redshift data warehouse, you provision a single-node or multi-node cluster, specifying the type and number of nodes that will make up the cluster. The node type determines the storage size, memory, and CPU of each node. Each multi-node cluster includes a leader node and two or more compute nodes. A leader node manages connections, parses queries, builds execution plans, and manages query execution in the compute nodes. The compute nodes store data, perform computations, and run queries as directed by the leader node. The leader node of each cluster is accessible through ODBC and JDBC endpoints using
standard PostgreSQL drivers. The compute nodes run on a separate, isolated network and are never accessed directly. After you provision a cluster, you can upload your dataset and perform data analysis queries by using common SQL-based tools and business intelligence applications.

Cluster Access

By default, clusters that you create are closed to everyone. Amazon Redshift enables you to configure firewall rules (security groups) to control network access to your data warehouse cluster. You can also run Redshift inside an Amazon VPC to isolate your data warehouse cluster in your own virtual network and connect it to your existing IT infrastructure using an industry-standard encrypted IPsec VPN.

The AWS account that creates the cluster has full access to the cluster. Within your AWS account, you can use AWS IAM to create user accounts and manage permissions for those accounts. By using IAM, you can grant different users permission to perform only the cluster operations that are necessary for their work.

Like all databases, you must grant permission in Redshift at the database level in addition to granting access at the resource level. Database users are named user accounts that can connect to a database and are authenticated when they log in to Amazon Redshift. In Redshift, you grant database user permissions on a per-cluster basis instead of on a per-table basis. However, a user can see data only in the table rows that were generated by his own activities; rows generated by other users are not visible to him.

The user who creates a database object is its owner. By default, only a superuser or the owner of an object can query, modify, or grant permissions on the object. For users to use an object, you must grant the necessary permissions to the user or the group that contains the user. And only the owner of an object can modify or delete it.

Data Backups

Amazon Redshift distributes your data across all compute nodes in a cluster. When you run a cluster with at least two compute nodes, data on each node will always be mirrored on disks on another node, reducing the risk of data loss. In addition, all data written to a node in your cluster is continuously backed up to Amazon S3 using snapshots. Redshift stores your snapshots for a user-defined period, which can be from one to thirty-five days. You can also take your own snapshots at any time; these snapshots leverage all existing system snapshots and are retained until you explicitly delete them.

Amazon Redshift continuously monitors the health of the cluster and automatically re-replicates data from failed drives and replaces nodes as necessary. All of this happens without any effort on your part, although you may see a slight performance degradation during the re-replication process.

You can use any system or user snapshot to restore your cluster using the AWS Management Console or the Amazon Redshift APIs. Your cluster is available as soon as the system metadata has been restored, and you can start running queries while user data is spooled down in the background.

Data Encryption

When creating a cluster, you can choose to encrypt it in order to provide additional protection for your data at rest. When you enable encryption in your cluster, Amazon Redshift stores all data in user-created tables in an encrypted format using hardware-accelerated AES-256 block encryption keys. This includes all data written to disk as well as any backups.

Amazon Redshift uses a four-tier, key-based architecture for encryption. These keys consist of data encryption keys, a database key, a cluster key, and a master key:

• Data encryption keys encrypt data blocks in the cluster. Each data block is assigned a randomly generated AES-256 key. These keys are encrypted by using the database key for the cluster.
• The database key encrypts data encryption keys in the cluster. The database key is a randomly generated AES-256 key. It is stored on disk in a separate network from the Amazon Redshift cluster and passed to the cluster across a secure channel.
• The cluster key encrypts the database key for the Amazon Redshift cluster. You can use either AWS or a hardware security module (HSM) to store the cluster key. HSMs provide direct control of key generation and management, and make key management separate and distinct from the application and the database.
• The master key encrypts the cluster key if it is stored in AWS. The master key encrypts the cluster-key-encrypted database key if the cluster key is stored in an HSM.

You can have Redshift rotate the encryption keys for your encrypted clusters at any time. As part of the rotation process, keys are also updated for all of the cluster's automatic and manual snapshots.

Note: Enabling encryption in your cluster will impact performance, even though it is hardware accelerated. Encryption also applies to backups. When restoring from an encrypted snapshot, the new cluster will be encrypted as well.

To encrypt your table load data files when you upload them to Amazon S3, you can use Amazon S3 server-side encryption. When you load the data from Amazon S3, the COPY command will decrypt the data as it loads the table.

Database Audit Logging

Amazon Redshift logs all SQL operations, including connection attempts, queries, and changes to your database. You can access these logs using SQL queries against system tables, or choose to have them downloaded to a secure Amazon S3 bucket. You can then use these audit logs to monitor your cluster for security and troubleshooting purposes.

Automatic Software Patching

Amazon Redshift manages all the work of setting up, operating, and scaling your data warehouse, including provisioning capacity, monitoring the cluster, and applying patches and upgrades to the Amazon Redshift engine. Patches are applied only during specified maintenance windows.

SSL Connections

To protect your data in transit within the AWS cloud, Amazon Redshift uses hardware-accelerated SSL to communicate with Amazon S3 or Amazon DynamoDB for COPY, UNLOAD, backup, and restore operations. You can encrypt the connection between your client and the cluster by specifying SSL in the parameter group associated with the cluster. To have your clients also authenticate the Redshift server, you can install the public key (.pem file) for the SSL certificate on your client and use the key to connect to your clusters.

Amazon Redshift offers the newer, stronger cipher suites that use the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol. ECDHE allows SSL clients to provide Perfect Forward Secrecy between the client and the Redshift cluster. Perfect Forward Secrecy uses session keys that are ephemeral and not stored anywhere, which prevents the decoding of captured data by unauthorized third parties, even if the secret long-term key itself is compromised. You do not need to configure anything in Amazon Redshift to enable ECDHE; if you connect from a SQL client tool that uses ECDHE to encrypt communication between the client and server, Amazon Redshift will use the provided cipher list to make the appropriate connection.

Amazon ElastiCache Security

Amazon ElastiCache is a web service that makes it easy to set up, manage, and scale distributed in-memory cache environments in the cloud. The service improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory caching system, instead of relying entirely on slower disk-based databases. It can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing, and Q&A portals) or compute-intensive workloads (such as a recommendation engine). Caching improves application
performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of I/O-intensive database queries or the results of computationally intensive calculations.

The Amazon ElastiCache service automates time-consuming management tasks for in-memory cache environments, such as patch management, failure detection, and recovery. It works in conjunction with other Amazon Web Services (such as Amazon EC2, Amazon CloudWatch, and Amazon SNS) to provide a secure, high-performance, managed in-memory cache. For example, an application running in Amazon EC2 can securely access an Amazon ElastiCache Cluster in the same region with very low latency.

Using the Amazon ElastiCache service, you create a Cache Cluster, which is a collection of one or more Cache Nodes, each running an instance of the Memcached service. A Cache Node is a fixed-size chunk of secure, network-attached RAM. Each Cache Node runs an instance of the Memcached service and has its own DNS name and port. Multiple types of Cache Nodes are supported, each with varying amounts of associated memory. A Cache Cluster can be set up with a specific number of Cache Nodes and a Cache Parameter Group that controls the properties for each Cache Node. All Cache Nodes within a Cache Cluster are designed to be of the same Node Type and have the same parameter and security group settings.

Amazon ElastiCache allows you to control access to your Cache Clusters using Cache Security Groups. A Cache Security Group acts like a firewall, controlling network access to your Cache Cluster. By default, network access to your Cache Clusters is turned off. If you want your applications to access your Cache Cluster, you must explicitly enable access from hosts in specific EC2 security groups. Once ingress rules are configured, the same rules apply to all Cache Clusters associated with that Cache Security Group.

To allow network access to your Cache Cluster, create a Cache Security Group and use the Authorize Cache Security Group Ingress API or CLI command to authorize the desired EC2 security group (which in turn specifies the EC2 instances allowed). IP-range-based access control is currently not enabled for Cache Clusters. All clients to a Cache Cluster must be within the EC2 network and authorized via Cache Security Groups.

ElastiCache for Redis provides backup and restore functionality, where you can create a snapshot of your entire Redis cluster as it exists at a specific point in time. You can schedule automatic, recurring daily snapshots, or you can create a manual snapshot at any time. For automatic snapshots, you specify a retention period; manual snapshots are retained until you delete them. The snapshots are stored in Amazon S3 with high durability and can be used for warm starts, backups, and archiving.

Application Services

Amazon Web Services offers a variety of managed services to use with your applications, including services that provide application streaming, queueing, push notification, email delivery, search, and transcoding.

Amazon CloudSearch Security

Amazon CloudSearch is a managed service in the cloud that makes it easy to set up, manage, and scale a search solution for your website. Amazon CloudSearch enables you to search large collections of data such as web pages, document files, forum posts, or product information. It enables you to quickly add search capabilities to your website without having to become a search expert or worry about hardware provisioning, setup, and maintenance. As your volume of data and traffic fluctuates, Amazon CloudSearch automatically scales to meet your needs.

An Amazon CloudSearch domain encapsulates a collection of data you want to search, the search instances that process your search requests, and a configuration that controls how your data is indexed and searched. You create a separate search domain for each collection of data you want to make searchable. For each domain, you configure indexing options that describe the fields you want to include in your index and how you want to use them, text options that define domain-specific stopwords, stems, and synonyms, rank expressions that you can use to customize how search results are ranked, and access policies that control access to the domain's document and search endpoints.

All Amazon CloudSearch configuration requests must be authenticated using standard AWS authentication. Amazon CloudSearch provides separate endpoints for accessing the configuration, search, and document services:

• The configuration service is accessed through a general endpoint: cloudsearch.us-east-1.amazonaws.com
• The document service endpoint is used to submit documents to the domain for indexing and is accessed through a domain-specific endpoint: http://doc-domainname-domainid.us-east-1.cloudsearch.amazonaws.com/
• The search endpoint is used to submit search requests to the domain and is accessed through a domain-specific endpoint: http://search-domainname-domainid.us-east-1.cloudsearch.amazonaws.com

Like all AWS services, Amazon CloudSearch requires that every request made to its control API be authenticated, so only authenticated users can access and manage your CloudSearch domain. API requests are signed with an HMAC-SHA1 or HMAC-SHA256 signature calculated from the request and the user's AWS Secret Access Key. Additionally, the Amazon CloudSearch control API is accessible via SSL-encrypted endpoints. You can control access to Amazon CloudSearch management functions by creating users under your AWS account using AWS IAM and controlling which CloudSearch operations these users have permission to perform.

Amazon Simple Queue Service (Amazon SQS) Security

Amazon SQS is a highly reliable, scalable message queuing service that enables asynchronous message-based communication between distributed components of an application. The components can be computers, Amazon EC2 instances, or a combination of both. With Amazon SQS, you can send any number of messages to an Amazon SQS queue at any time from any component. The messages can be retrieved from the same component or a different one, right away or at a later time (within 4 days). Messages are highly durable; each message is persistently stored in highly available, highly reliable queues. Multiple processes can read from and write to an Amazon SQS queue at the same time without interfering with each other.

Amazon SQS access is granted based on an AWS account or a user created with AWS IAM. Once authenticated, the AWS account has full access to all user operations. An AWS IAM user, however, only has access to the operations and queues for which they have been granted access via policy. By default, access to each individual queue is restricted to the AWS account that created it. However, you can allow other access to a queue, using either an SQS-generated policy or a policy you write.

Amazon SQS is accessible via SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2. Data stored within Amazon SQS is not encrypted by AWS; however, the user can encrypt data before it is uploaded to Amazon SQS, provided that the application utilizing the queue has a means to decrypt the message when retrieved. Encrypting messages before sending them to Amazon SQS helps protect against access to sensitive customer data by unauthorized persons, including AWS.

Amazon Simple Notification Service (Amazon SNS) Security

Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications.
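As noted in the Amazon SQS section above, access to a queue can be opened beyond the owning account with a policy you write. The following is a minimal sketch of such a queue policy, built here in Python so it can be inspected before use; the account ID and queue ARN are hypothetical placeholders, not values from this paper.

```python
import json

# Hypothetical identifiers, for illustration only.
queue_arn = "arn:aws:sqs:us-east-1:111122223333:example-queue"
external_account = "444455556666"

# A minimal SQS queue policy that lets one other AWS account send
# messages to this queue; all other cross-account access stays denied
# by default, as described in the text above.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowExternalSendMessage",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{external_account}:root"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
        }
    ],
}

print(json.dumps(policy, indent=2))
```

In practice, a document like this is attached to the queue as its Policy attribute (for example, via the SQS SetQueueAttributes API); the SQS-generated policies mentioned above produce an equivalent structure for common cases.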
Amazon SNS provides a simple web services interface that can be used to create topics that customers want to notify applications (or people) about subscribe clients to these topics publish messages and have these messages delivered over clien ts’ protocol of choice (ie HTTP/HTTPS email etc) Amazon SNS delivers notifications to clients using a “push” mechanism that eliminates the need to periodically check or “poll” for new information and updates Amazon SNS can be leveraged to build hig hly reliable event driven workflows and messaging applications without the need for complex middleware and application management The potential uses for Amazon SNS include monitoring applications workflow systems timesensitive information updates mob ile applications and many others Amazon SNS provides access control mechanisms so that topics and messages are secured against unauthorized access Topic owners can set policies for a topic that restrict who can publish or subscribe to a topic Additiona lly topic owners can encrypt transmission by specifying that the delivery mechanism must be HTTPS Amazon SNS access is granted based on an AWS Account or a user created with AWS IAM Once authenticated the AWS Account has full access to all user operations An AWS IAM user however only has access to the operations and topics for which they have been granted access via policy By default access to each individual topic is restricted to the AWS Account that created it However you can allow othe r access to SNS using either an SNS generated policy or a policy you write ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 69 Amazon Simple Workflow Service (Amazon SWF) Security The Amazon Simple Workflow Service ( Amazon SWF) makes it easy to build applications that coordinate work across distributed co mponents Using Amazon SWF you can structure the various processing steps in an application as “tasks” that drive work in distributed applications and Amazon SWF 
coordinates these tasks in a reliable and scalable manner Amazon SWF manages task execution dependencies scheduling and concurrency based on a developer’s application logic The service stores tasks dispatches them to application components tracks their progress and keeps their latest state Amazon SWF provides simple API calls that can be executed from code written in any language and run on your EC2 instances or any of your machines located anywhere in the world that can access the Internet Amazon SWF acts as a coordination hub with which your application hosts interact You create desir ed workflows with their associated tasks and any conditional logic you wish to apply and store them with Amazon SWF Amazon SWF access is granted based on an AWS Account or a user created with AWS IAM All actors that participate in the execution of a work flow— deciders activity workers workflow administrators —must be IAM users under the AWS Account that owns the Amazon SWF resources You cannot grant users associated with other AWS Accounts access to your Amazon SWF workflows An AWS IAM user however o nly has access to the workflows and resources for which they have been granted access via policy Amazon Simple Email Service (Amazon SES) Security Amazon Simple Email Service (SES) built on Amazon’s reliable and scalable infrastructure is a mail service that can both send and receive mail on behalf of your domain Amazon SES helps you maximize email deliverability and stay informed of the delivery status of your emails Amazon SES integrates with other AWS services making it easy to send emails from app lications being hosted on services such as Amazon EC2 Unfortunately with other email systems it's possible for a spammer to falsify an email header and spoof the originating email address so that it appears as though the email originated from a differen t source To mitigate these problems Amazon SES requires users to verify their email address or domain in order to confirm that they own 
it and to prevent others from using it. To verify a domain, Amazon SES requires the sender to publish a DNS record that Amazon SES supplies as proof of control over the domain. Amazon SES periodically reviews domain verification status and revokes verification in cases where it is no longer valid.

Amazon SES takes proactive steps to prevent questionable content from being sent, so that ISPs receive consistently high-quality email from our domains and therefore view Amazon SES as a trusted email origin. Below are some of the features that maximize deliverability and dependability for all of our senders:

• Amazon SES uses content-filtering technologies to help detect and block messages containing viruses or malware before they can be sent.
• Amazon SES maintains complaint feedback loops with major ISPs. Complaint feedback loops indicate which emails a recipient marked as spam. Amazon SES provides you access to these delivery metrics to help guide your sending strategy.
• Amazon SES uses a variety of techniques to measure the quality of each user's sending. These mechanisms help identify and disable attempts to use Amazon SES for unsolicited mail, and detect other sending patterns that would harm Amazon SES's reputation with ISPs, mailbox providers, and anti-spam services.
• Amazon SES supports authentication mechanisms such as Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM). When you authenticate an email, you provide evidence to ISPs that you own the domain. Amazon SES makes it easy for you to authenticate your emails. If you configure your account to use Easy DKIM, Amazon SES will DKIM-sign your emails on your behalf, so you can focus on other aspects of your email-sending strategy. To ensure optimal deliverability, we recommend that you authenticate your emails.

As with other AWS services, you use security credentials to verify who you are and whether you have permission to
interact with Amazon SES. For information about which credentials to use, see Using Credentials with Amazon SES. Amazon SES also integrates with AWS IAM, so that you can specify which Amazon SES API actions a user can perform.

If you choose to communicate with Amazon SES through its SMTP interface, you are required to encrypt your connection using TLS. Amazon SES supports two mechanisms for establishing a TLS-encrypted connection: STARTTLS and TLS Wrapper. If you choose to communicate with Amazon SES over HTTP, then all communication will be protected by TLS through Amazon SES's HTTPS endpoint. When delivering email to its final destination, Amazon SES encrypts the email content with opportunistic TLS, if supported by the receiver.

Amazon Elastic Transcoder Service Security

The Amazon Elastic Transcoder service simplifies and automates what is usually a complex process of converting media files from one format, size, or quality to another. The Elastic Transcoder service converts standard-definition (SD) or high-definition (HD) video files, as well as audio files. It reads input from an Amazon S3 bucket, transcodes it, and writes the resulting file to another Amazon S3 bucket. You can use the same bucket for input and output, and the buckets can be in any AWS Region.

The Elastic Transcoder accepts input files in a wide variety of web, consumer, and professional formats. Output file types include the MP3, MP4, OGG, TS, WebM, HLS (using MPEG-2 TS), and Smooth Streaming (using fmp4) container types, storing H.264 or VP8 video and AAC, MP3, or Vorbis audio.

You start with one or more input files, and create transcoding jobs in a type of workflow called a transcoding pipeline for each file. When you create the pipeline, you specify input and output buckets as well as an IAM role. Each job must reference a media conversion template called a transcoding preset, and will result in the generation of one or more output files.
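As a rough illustration of how a job ties a pipeline and a preset together, a CreateJob-style request body can be assembled as a plain dictionary; with boto3 such a dict would be passed to an `elastictranscoder` client's `create_job` call. The pipeline ID, S3 object keys, and preset ID below are hypothetical placeholders, not values from this document:

```python
def build_transcode_job(pipeline_id: str, input_key: str,
                        output_key: str, preset_id: str) -> dict:
    """Assemble a CreateJob-style request: the job references a pipeline
    (which supplies the input/output buckets and IAM role) and a preset
    (which supplies the conversion settings)."""
    return {
        "PipelineId": pipeline_id,      # pipeline defines buckets and role
        "Input": {"Key": input_key},    # object in the pipeline's input bucket
        "Output": {
            "Key": output_key,          # written to the pipeline's output bucket
            "PresetId": preset_id,      # transcoding preset to apply
        },
    }

# Hypothetical IDs for illustration only.
job = build_transcode_job("1111111111111-abcd11", "raw/talk.mov",
                          "web/talk.mp4", "1351620000001-000010")
```

Because the bucket names and IAM role live on the pipeline rather than on the job, the per-job request stays small: it only names the objects to read and write and the preset to apply.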
A preset tells the Elastic Transcoder what settings to use when processing a particular input file. You can specify many settings when you create a preset, including the sample rate, bit rate, resolution (output height and width), the number of reference and keyframes, a video bit rate, some thumbnail creation options, etc. A best effort is made to start jobs in the order in which they're submitted, but this is not a hard guarantee, and jobs typically finish out of order because they are worked on in parallel and vary in complexity. You can pause and resume any of your pipelines if necessary.

Elastic Transcoder supports the use of SNS notifications when it starts and finishes each job, and when it needs to tell you that it has detected an error or warning condition. The SNS notification parameters are associated with each pipeline. You can also use the List Jobs by Status function to find all of the jobs with a given status (e.g., "Completed"), or the Read Job function to retrieve detailed information about a particular job.

Like all other AWS services, Elastic Transcoder integrates with AWS Identity and Access Management (IAM), which allows you to control access to the service and to other AWS resources that Elastic Transcoder requires, including Amazon S3 buckets and Amazon SNS topics. By default, IAM users have no access to Elastic Transcoder or to the resources that it uses. If you want IAM users to be able to work with Elastic Transcoder, you must explicitly grant them permissions.

Amazon Elastic Transcoder requires every request made to its control API to be authenticated, so only authenticated processes or users can create, modify, or delete their own Elastic Transcoder pipelines and presets. Requests are signed with an HMAC-SHA256 signature calculated from the request and a key derived from the user's secret key. Additionally, the Amazon Elastic Transcoder API is only accessible via SSL-encrypted endpoints.
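The key-derivation step described here follows the AWS Signature Version 4 scheme, in which the long-term secret key is never used to sign a request directly; instead, a short-lived signing key is derived from it through a chain of HMAC-SHA256 operations over the date, Region, and service name. A minimal sketch, with made-up credential and request values:

```python
import hashlib
import hmac


def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive a Signature Version 4 signing key via chained HMAC-SHA256."""
    k_date = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()


# Hypothetical values: the derived key (not the secret key itself) signs the
# canonical request, so the long-term credential never leaves the client.
key = derive_signing_key("EXAMPLE-SECRET-KEY", "20170301", "us-east-1", "elastictranscoder")
signature = hmac.new(key, b"string-to-sign", hashlib.sha256).hexdigest()
```

Scoping the derived key to a specific date, Region, and service limits the damage if a signing key leaks: it cannot be replayed against another service or on another day, and the secret key itself is never transmitted.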
Durability is provided by Amazon S3, where media files are redundantly stored on multiple devices across multiple facilities in an Amazon S3 Region. For added protection against users accidentally deleting media files, you can use the Versioning feature in Amazon S3 to preserve, retrieve, and restore every version of every object stored in an Amazon S3 bucket. You can further protect versions using Amazon S3 Versioning's MFA Delete feature. Once enabled for an Amazon S3 bucket, each version-deletion request must include the six-digit code and serial number from your multi-factor authentication device.

Amazon AppStream 2.0 Security

The Amazon AppStream 2.0 service provides a framework for running streaming applications, particularly applications that require lightweight clients running on mobile devices. It enables you to store and run your application on powerful, parallel-processing GPUs in the cloud, and then stream input and output to any client device. This can be a pre-existing application that you modify to work with Amazon AppStream 2.0, or a new application that you design specifically to work with the service.

The Amazon AppStream 2.0 SDK simplifies the development of interactive streaming applications and client applications. The SDK provides APIs that connect your customers' devices directly to your application, capture and encode audio and video, stream content across the Internet in near real time, decode content on client devices, and return user input to the application. Because your application's processing occurs in the cloud, it can scale to handle extremely large computational loads.

Amazon AppStream 2.0 deploys streaming applications on Amazon EC2. When you add a streaming application through the AWS Management Console, the service creates the AMI required to host your application and makes your application available to streaming clients. The service scales your application as needed
within the capacity limits you have set to meet demand. Clients using the Amazon AppStream 2.0 SDK automatically connect to your streamed application.

In most cases, you'll want to ensure that the user running the client is authorized to use your application before letting them obtain a session ID. We recommend that you use some sort of entitlement service, which is a service that authenticates clients and authorizes their connection to your application. In this case, the entitlement service will also call the Amazon AppStream 2.0 REST API to create a new streaming session for the client. After the entitlement service creates a new session, it returns the session identifier to the authorized client as a single-use entitlement URL. The client then uses the entitlement URL to connect to the application. Your entitlement service can be hosted on an Amazon EC2 instance or on AWS Elastic Beanstalk.

Amazon AppStream 2.0 utilizes an AWS CloudFormation template that automates the process of deploying a GPU EC2 instance that has the AppStream 2.0 Windows Application and Windows Client SDK libraries installed; is configured for SSH, RDC, or VPN access; and has an Elastic IP address assigned to it. By using this template to deploy your standalone streaming server, all you need to do is upload your application to the server and run the command to launch it. You can then use the Amazon AppStream 2.0 Service Simulator tool to test your application in standalone mode before deploying it into production.

Amazon AppStream 2.0 also utilizes the STX Protocol to manage the streaming of your application from AWS to local devices. The Amazon AppStream 2.0 STX Protocol is a proprietary protocol used to stream high-quality application video over varying network conditions; it monitors network conditions and automatically adapts the video stream to provide a low-latency, high-resolution experience to your customers. It minimizes latency while syncing audio and video, as well as capturing input from your
customers to be sent back to the application running in AWS.

Analytics Services

Amazon Web Services provides cloud-based analytics services to help you process and analyze any volume of data, whether your need is for managed Hadoop clusters, real-time streaming data, petabyte-scale data warehousing, or orchestration.

Amazon EMR Security

Amazon EMR is a managed web service you can use to run Hadoop clusters that process vast amounts of data by distributing the work and data among several servers. It utilizes an enhanced version of the Apache Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3. You simply upload your input data and a data processing application into Amazon S3. Amazon EMR then launches the number of Amazon EC2 instances you specify. The service begins the job flow execution while pulling the input data from Amazon S3 into the launched Amazon EC2 instances. Once the job flow is finished, Amazon EMR transfers the output data to Amazon S3, where you can then retrieve it or use it as input in another job flow.

When launching job flows on your behalf, Amazon EMR sets up two Amazon EC2 security groups: one for the master nodes and another for the slaves. The master security group has a port open for communication with the service. It also has the SSH port open to allow you to SSH into the instances using the key specified at startup. The slaves start in a separate security group, which only allows interaction with the master instance. By default, both security groups are set up to not allow access from external sources, including Amazon EC2 instances belonging to other customers. Since these are security groups within your account, you can reconfigure them using the standard EC2 tools or dashboard. To protect customer input and output datasets, Amazon EMR transfers data to and from Amazon S3 using SSL.

Amazon EMR provides several ways to control access to the
resources of your cluster. You can use AWS IAM to create user accounts and roles, and configure permissions that control which AWS features those users and roles can access. When you launch a cluster, you can associate an Amazon EC2 key pair with the cluster, which you can then use when you connect to the cluster using SSH. You can also set permissions that allow users other than the default Hadoop user to submit jobs to your cluster.

By default, if an IAM user launches a cluster, that cluster is hidden from other IAM users on the AWS account. This filtering occurs on all Amazon EMR interfaces (the console, CLI, API, and SDKs) and helps prevent IAM users from accessing and inadvertently changing clusters created by other IAM users. It is useful for clusters that are intended to be viewed by only a single IAM user and the main AWS account. You also have the option to make a cluster visible and accessible to all IAM users under a single AWS account.

For an additional layer of protection, you can launch the EC2 instances of your EMR cluster into an Amazon VPC, which is like launching them into a private subnet. This allows you to control access to the entire subnetwork. You can also launch the cluster into a VPC and enable the cluster to access resources on your internal network using a VPN connection. You can encrypt the input data before you upload it to Amazon S3 using any common data encryption tool. If you do encrypt the data before it's uploaded, you then need to add a decryption step to the beginning of your job flow when Amazon EMR fetches the data from Amazon S3.

Amazon Kinesis Security

Amazon Kinesis is a managed service designed to handle real-time streaming of big data. It can accept any amount of data, from any number of sources, scaling up and down as needed. You can use Kinesis in situations that call for large-scale, real-time data ingestion and processing, such as server logs,
social media or market data feeds, and web clickstream data.

Applications read and write data records to Amazon Kinesis in streams. You can create any number of Kinesis streams to capture, store, and transport data. Amazon Kinesis automatically manages the infrastructure, storage, networking, and configuration needed to collect and process your data at the level of throughput your streaming applications need. You don't have to worry about provisioning, deployment, or ongoing maintenance of hardware, software, or other services to enable real-time capture and storage of large-scale data. Amazon Kinesis also synchronously replicates data across three facilities in an AWS Region, providing high availability and data durability.

In Amazon Kinesis, data records contain a sequence number, a partition key, and a data blob, which is an uninterpreted, immutable sequence of bytes. The Amazon Kinesis service does not inspect, interpret, or change the data in the blob in any way. Data records are accessible for only 24 hours from the time they are added to an Amazon Kinesis stream, and then they are automatically discarded.

Your application is a consumer of an Amazon Kinesis stream, and it typically runs on a fleet of Amazon EC2 instances. A Kinesis application uses the Amazon Kinesis Client Library to read from the Amazon Kinesis stream. The Kinesis Client Library takes care of a variety of details for you, including failover, recovery, and load balancing, allowing your application to focus on processing the data as it becomes available. After processing the record, your consumer code can pass it along to another Kinesis stream; write it to an Amazon S3 bucket, a Redshift data warehouse, or a DynamoDB table; or simply discard it. A connector library is available to help you integrate Kinesis with other AWS services (such as DynamoDB, Redshift, and Amazon S3), as well as third-party products like Apache Storm.

You can control
logical access to Kinesis resources and management functions by creating users under your AWS Account using AWS IAM, and controlling which Kinesis operations these users have permission to perform. To facilitate running your producer or consumer applications on an Amazon EC2 instance, you can configure that instance with an IAM role. That way, AWS credentials that reflect the permissions associated with the IAM role are made available to applications on the instance, which means you don't have to use your long-term AWS security credentials. Roles have the added benefit of providing temporary credentials that expire within a short timeframe, which adds an additional measure of protection. See the AWS Identity and Access Management User Guide for more information about IAM roles.

The Amazon Kinesis API is only accessible via an SSL-encrypted endpoint (kinesis.us-east-1.amazonaws.com) to help ensure secure transmission of your data to AWS. You must connect to that endpoint to access Kinesis, but you can then use the API to direct Kinesis to create a stream in any AWS Region.

AWS Data Pipeline Security

The AWS Data Pipeline service helps you process and move data between different data sources at specified intervals using data-driven workflows and built-in dependency checking. When you create a pipeline, you define data sources, preconditions, destinations, processing steps, and an operational schedule. Once you define and activate a pipeline, it will run automatically according to the schedule you specified.

With AWS Data Pipeline, you don't have to worry about checking resource availability, managing inter-task dependencies, retrying transient failures or timeouts in individual tasks, or creating a failure notification system. AWS Data Pipeline takes care of launching the AWS services and resources your pipeline needs to process your data (e.g., Amazon EC2 or EMR) and transferring the results to storage (e.g., Amazon S3, RDS, DynamoDB, or EMR).

When you use the console, AWS Data Pipeline creates the
necessary IAM roles and policies, including a trusted-entities list, for you. IAM roles determine what your pipeline can access and the actions it can perform. Additionally, when your pipeline creates a resource, such as an EC2 instance, IAM roles determine the EC2 instance's permitted resources and actions. When you create a pipeline, you specify one IAM role that governs your pipeline and another IAM role to govern your pipeline's resources (referred to as a "resource role"); the same role can serve both purposes. As part of the security best practice of least privilege, we recommend that you consider the minimum permissions necessary for your pipeline to perform work and define the IAM roles accordingly. Like most AWS services, AWS Data Pipeline also provides the option of secure (HTTPS) endpoints for access via SSL.

Deployment and Management Services

Amazon Web Services provides a variety of tools to help with the deployment and management of your applications. This includes services that allow you to create individual user accounts with credentials for access to AWS services. It also includes services for creating and updating stacks of AWS resources, deploying applications on those resources, and monitoring the health of those AWS resources. Other tools help you manage cryptographic keys using hardware security modules (HSMs) and log AWS API activity for security and compliance purposes.

AWS Identity and Access Management (IAM)

IAM allows you to create multiple users and manage the permissions for each of these users within your AWS Account. A user is an identity (within an AWS Account) with unique security credentials that can be used to access AWS services. IAM eliminates the need to share passwords or keys, and makes it easy to enable or disable a user's access as appropriate. IAM enables you to implement security best practices, such as least privilege, by granting unique credentials to
every user within your AWS Account and only granting permission to access the AWS services and resources required for the users to perform their jobs. IAM is secure by default; new users have no access to AWS until permissions are explicitly granted.

IAM is also integrated with the AWS Marketplace, so that you can control who in your organization can subscribe to the software and services offered in the Marketplace. Since subscribing to certain software in the Marketplace launches an EC2 instance to run the software, this is an important access control feature. Using IAM to control access to the AWS Marketplace also enables AWS Account owners to have fine-grained control over usage and software costs.

IAM enables you to minimize the use of your AWS Account credentials. Once you create IAM user accounts, all interactions with AWS services and resources should occur with IAM user security credentials.

Roles

An IAM role uses temporary security credentials to allow you to delegate access to users or services that normally don't have access to your AWS resources. A role is a set of permissions to access specific AWS resources, but these permissions are not tied to a specific IAM user or group. An authorized entity (e.g., a mobile user or an EC2 instance) assumes a role and receives temporary security credentials for authenticating to the resources defined in the role. Temporary security credentials provide enhanced security due to their short life span (the default expiration is 12 hours) and the fact that they cannot be reused after they expire. This can be particularly useful in providing limited, controlled access in certain situations:

• Federated (non-AWS) User Access. Federated users are users (or applications) who do not have AWS Accounts. With roles, you can give them access to your AWS resources for a limited amount of time. This is useful if you have non-AWS users that you can authenticate with an
external service, such as Microsoft Active Directory, LDAP, or Kerberos. The temporary AWS credentials used with the roles provide identity federation between AWS and your non-AWS users in your corporate identity and authorization system. If your organization supports SAML 2.0 (Security Assertion Markup Language 2.0), you can create trust between your organization, as an identity provider (IdP), and other organizations, as service providers. In AWS, you can configure AWS as the service provider and use SAML to provide your users with federated single sign-on (SSO) to the AWS Management Console, or to get federated access to call AWS APIs. Roles are also useful if you create a mobile or web-based application that accesses AWS resources. AWS resources require security credentials for programmatic requests; however, you shouldn't embed long-term security credentials in your application, because they are accessible to the application's users and can be difficult to rotate. Instead, you can let users sign in to your application using Login with Amazon, Facebook, or Google, and then use their authentication information to assume a role and get temporary security credentials.

• Cross-Account Access. For organizations that use multiple AWS Accounts to manage their resources, you can set up roles to provide users who have permissions in one account access to resources under another account. For organizations that have personnel who only rarely need access to resources under another account, using roles helps ensure that credentials are provided temporarily, and only as needed.

• Applications Running on EC2 Instances that Need to Access AWS Resources. If an application runs on an Amazon EC2 instance and needs to make requests for AWS resources, such as Amazon S3 buckets or a DynamoDB table, it must have security credentials. Using roles instead of creating individual IAM accounts for each application on each instance can
save significant time for customers who manage a large number of instances or an elastically scaling fleet using AWS Auto Scaling.

The temporary credentials include a security token, an Access Key ID, and a Secret Access Key. To give a user access to certain resources, you distribute the temporary security credentials to the user you are granting temporary access to. When the user makes calls to your resources, the user passes in the token and Access Key ID, and signs the request with the Secret Access Key. The token will not work with different access keys. How the user passes in the token depends on the API and version of the AWS product the user is making calls to. For more information about temporary security credentials, see the AWS Security Token Service API Reference.

The use of temporary credentials means additional protection for you, because you don't have to manage or distribute long-term credentials to temporary users. In addition, the temporary credentials get automatically loaded to the target instance, so you don't have to embed them somewhere unsafe like your code. Temporary credentials are automatically rotated or changed multiple times a day without any action on your part, and are stored securely by default. For more information about using IAM roles to auto-provision keys on EC2 instances, see the AWS Identity and Access Management documentation.

Amazon CloudWatch Security

Amazon CloudWatch is a web service that provides monitoring for AWS cloud resources, starting with Amazon EC2. It provides customers with visibility into resource utilization, operational performance, and overall demand patterns, including metrics such as CPU utilization, disk reads and writes, and network traffic. You can set up CloudWatch alarms to notify you if certain thresholds are crossed, or to take other automated actions, such as adding or removing EC2 instances if Auto Scaling is enabled.

CloudWatch captures
and summarizes utilization metrics natively for AWS resources, but you can also have other logs sent to CloudWatch to monitor. You can route your guest OS, application, and custom log files for the software installed on your EC2 instances to CloudWatch, where they will be stored in durable fashion for as long as you'd like. You can configure CloudWatch to monitor the incoming log entries for any desired symbols or messages and to surface the results as CloudWatch metrics. You could, for example, monitor your web server's log files for 404 errors to detect bad inbound links, or for invalid user messages to detect unauthorized login attempts to your guest OS.

Like all AWS services, Amazon CloudWatch requires that every request made to its control API be authenticated, so only authenticated users can access and manage CloudWatch. Requests are signed with an HMAC-SHA1 signature calculated from the request and the user's private key. Additionally, the Amazon CloudWatch control API is only accessible via SSL-encrypted endpoints. You can further control access to Amazon CloudWatch by creating users under your AWS Account using AWS IAM, and controlling what CloudWatch operations these users have permission to call.

AWS CloudHSM Security

The AWS CloudHSM service provides customers with dedicated access to a hardware security module (HSM) appliance designed to provide secure cryptographic key storage and operations within an intrusion-resistant, tamper-evident device. You can generate, store, and manage the cryptographic keys used for data encryption so that they are accessible only by you. AWS CloudHSM appliances are designed to securely store and process cryptographic key material for a wide variety of uses, such as database encryption, Digital Rights Management (DRM), Public Key Infrastructure (PKI), authentication and authorization, document signing, and transaction processing. They support some of the strongest cryptographic algorithms available, including AES, RSA, ECC, and many others.

The AWS
CloudHSM service is designed to be used with Amazon EC2 and VPC, providing the appliance with its own private IP within a private subnet. You can connect to CloudHSM appliances from your EC2 servers through SSL/TLS, which uses two-way digital certificate authentication and 256-bit SSL encryption to provide a secure communication channel. Selecting the CloudHSM service in the same region as your EC2 instance decreases network latency, which can improve your application performance. You can configure a client on your EC2 instance that allows your applications to use the APIs provided by the HSM, including PKCS#11, MS CAPI, and Java JCA/JCE (Java Cryptography Architecture/Java Cryptography Extensions).

Before you begin using an HSM, you must set up at least one partition on the appliance. A cryptographic partition is a logical and physical security boundary that restricts access to your keys, so only you control your keys and the operations performed by the HSM. AWS has administrative credentials to the appliance, but these credentials can only be used to manage the appliance, not the HSM partitions on the appliance. AWS uses these credentials to monitor and maintain the health and availability of the appliance. AWS cannot extract your keys, nor can AWS cause the appliance to perform any cryptographic operation using your keys.

The HSM appliance has both physical and logical tamper detection and response mechanisms that erase the cryptographic key material and generate event logs if tampering is detected. The HSM is designed to detect tampering if the physical barrier of the HSM appliance is breached. In addition, after three unsuccessful attempts to access an HSM partition with HSM Admin credentials, the HSM appliance erases its HSM partitions.

When your CloudHSM subscription ends and you have confirmed that the contents of the HSM are no longer needed, you must delete each partition and its contents, as
well as any logs. As part of the decommissioning process, AWS zeroizes the appliance, permanently erasing all key material.

AWS CloudTrail Security

AWS CloudTrail provides a log of user and system actions affecting AWS resources within your account. For each event recorded, you can see what service was accessed, what action was performed, any parameters for the action, and who made the request. For mutating actions, you can see the result of the action. Not only can you see which one of your users or services performed an action on an AWS service, but you can see whether it was as the AWS root account user or an IAM user, or whether it was with temporary security credentials for a role or federated user.

CloudTrail captures information about API calls to an AWS resource, whether that call was made from the AWS Management Console, the CLI, or an SDK. If the API request returned an error, CloudTrail provides the description of the error, including messages for authorization failures. It even captures AWS Management Console sign-in events, creating a log record every time an AWS Account owner, a federated user, or an IAM user simply signs in to the console.

Once you have enabled CloudTrail, event logs are delivered about every 5 minutes to the Amazon S3 bucket of your choice. The log files are organized by AWS Account ID, Region, service name, date, and time. You can configure CloudTrail so that it aggregates log files from multiple Regions and/or accounts into a single Amazon S3 bucket. By default, a single trail will record and deliver events in all current and future Regions. In addition to S3, you can send events to CloudWatch Logs for custom metrics and alarming, or you can upload the logs to your favorite log management and analysis solutions to perform security analysis and detect user behavior patterns. For rapid response, you can create CloudWatch Events rules to take immediate action on specific events.

By default,
log files are stored indefinitely. The log files are automatically encrypted using Amazon S3's Server Side Encryption and will remain in the bucket until you choose to delete or archive them. For even more security, you can use KMS to encrypt the log files using a key that you own. You can use Amazon S3 lifecycle configuration rules to automatically delete old log files, or archive them to Amazon S3 Glacier for additional longevity at significant savings. By enabling the optional log file validation, you can verify that logs have not been added, deleted, or tampered with.

Like every other AWS service, you can limit access to CloudTrail to only certain users. You can use IAM to control which AWS users can create, configure, or delete AWS CloudTrail trails, as well as which users can start and stop logging. You can control access to the log files by applying IAM or Amazon S3 bucket policies. You can also add an additional layer of security by enabling MFA Delete on your Amazon S3 bucket.

Mobile Services

AWS mobile services make it easier for you to build, ship, run, monitor, optimize, and scale cloud-powered applications for mobile devices. These services also help you authenticate users to your mobile application, synchronize data, and collect and analyze application usage.

Amazon Cognito

Amazon Cognito provides identity and sync services for mobile and web-based applications. It simplifies the task of authenticating users and storing, managing, and syncing their data across multiple devices, platforms, and applications. It provides temporary, limited-privilege credentials for both authenticated and unauthenticated users without your having to manage any backend infrastructure.

Amazon Cognito works with well-known identity providers like Google, Facebook, and Amazon to authenticate end users of your mobile and web applications. You can take advantage of the identification and authorization features provided by these
services instead of having to build and maintain your own Your application authenticates with one of these identity providers using the provider’s SDK Once the end user is authenticated with the provider an OAuth or OpenID Connect token returned from th e provider is passed by your application to Cognito which returns a new Amazon Cognito ID for the user and a set of temporary limited privilege AWS credentials To begin using Amazon Cognito you create an identity pool through the Amazon Cognito console The identity pool is a store of user identity information that is specific to your AWS account During the creation of the identity pool you will be asked to create a new IAM role or pic k an existing one for your end users An IAM role is a set of permissions to access specific AWS resources but these permissions are not tied to a specific IAM user or group An authorized entity (eg mobile user EC2 instance) assumes a role and receiv es temporary security credentials for authenticating to the AWS resources defined in the role Temporary security credentials provide enhanced security due to their short life span (the default expiration is 12 hours) and the fact that they cannot be reuse d after they expire The role you select has an impact on which AWS services your end users will be able to access with the temporary credentials By default Amazon Cognito creates a new role with limited permissions – end users only have access to the Amazon Cognito Sync service and Amazon Mobile Analytics If your application needs access to other AWS resources such as Amazon S3 or DynamoDB you can modify your roles directly from the IAM management console With Amazon Cognito there’s no need to create individual AWS accounts or even IAM accounts for every one of your web/mobile app’s end users who will need to access your AWS resources In conjunction with IAM roles mobile users can securely access AWS resources and application features and even save data to the AWS cloud without having to 
create an account or log in However if they choose to do this later Amazon Cognito merge s data and identification information Because Amazon Cognito stores data locally as well as in the service your ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 84 end users can continue to interact with their data even when they are offline Their offline data may be stale but anything they put into the dataset they can immediately retrieve whether they are online or not The client SDK manages a local SQLite store so that the application can work even when it is not connected The SQLite store functions as a cache and is the target of all read and write operations Cognito's sync facility compares the local version of the data to the cloud version and pushes up or pulls down deltas as needed Note that in order to sync data across devices your identity pool must support authenticated identities Unauthenticated identities are tied to the device so unless an end user authenticates no data can be synced across multiple devices With Amazon Cognito your application communicates directly with a supported public identity provider (Amazon Facebook or Google) to authenticate users Amazon Cognito does not receive or store user credentials —only the OAuth or OpenID Connect token received from the identity provider Once Amazon Cognito receives the token it returns a new Amazon Cognito ID for the user and a set of temporary limited privilege AWS credentials Each Amazon Cognito identity has access only to its own data in the sync store and this data is encrypted when stored In addition all identity data is transmitted over HTTPS The unique Amazon Cognito identifier on the device is stored in the appropriate secure location —on iOS for example the Amazon Cognito identifier is stored in the iOS keychain User data is cached in a local SQLite database within the application’s sandbox; if you require additional security you can encrypt this iden tity data in the 
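The role selection described above works because each identity pool role carries a trust policy that only Cognito-federated identities from that pool can assume. A minimal sketch in Python, building the policy document locally; the identity pool ID shown is a placeholder, not a real pool:

```python
import json

def cognito_trust_policy(identity_pool_id, authenticated=True):
    """Build an assume-role trust policy scoped to one Cognito identity pool."""
    amr = "authenticated" if authenticated else "unauthenticated"
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            # Web identity federation: Cognito vends the temporary credentials
            "Principal": {"Federated": "cognito-identity.amazonaws.com"},
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                # Restrict the role to tokens issued for this identity pool...
                "StringEquals": {
                    "cognito-identity.amazonaws.com:aud": identity_pool_id
                },
                # ...and to the authenticated (or unauthenticated) user class
                "ForAnyValue:StringLike": {
                    "cognito-identity.amazonaws.com:amr": amr
                },
            },
        }],
    }

# Placeholder pool ID for illustration only
policy = cognito_trust_policy("us-east-1:11111111-2222-3333-4444-555555555555")
print(json.dumps(policy, indent=2))
```

Attaching broader permissions policies to this role (for example, Amazon S3 or DynamoDB access) is what widens what end users can reach with the temporary credentials.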
Amazon Mobile Analytics
Amazon Mobile Analytics is a service for collecting, visualizing, and understanding mobile application usage data. It enables you to track customer behaviors, aggregate metrics, and identify meaningful patterns in your mobile applications. Amazon Mobile Analytics automatically calculates and updates usage metrics as the data is received from client devices running your app and displays the data in the console.

You can integrate Amazon Mobile Analytics with your application without requiring users of your app to be authenticated with an identity provider (like Google, Facebook, or Amazon). For these unauthenticated users, Mobile Analytics works with Amazon Cognito to provide temporary, limited-privilege credentials. To do this, you first create an identity pool in Amazon Cognito. The identity pool uses an IAM role, which is a set of permissions not tied to a specific IAM user or group but which allows an entity to access specific AWS resources. The entity assumes the role and receives temporary security credentials for authenticating to the AWS resources defined in the role. By default, Amazon Cognito creates a new role with limited permissions: end users only have access to the Amazon Cognito Sync service and Amazon Mobile Analytics. If your application needs access to other AWS resources, such as Amazon S3 or DynamoDB, you can modify your roles directly from the IAM management console. You can integrate the AWS Mobile SDK for Android or iOS into your application, or use the Amazon Mobile Analytics REST API to send events from any connected device or service and visualize the data in reports. The Amazon Mobile Analytics API is only accessible via an SSL-encrypted endpoint (https://mobileanalytics.us-east-1.amazonaws.com).

Applications
AWS applications are managed services that enable you to provide your users with
secure, centralized storage and work areas in the cloud.

Amazon WorkSpaces
Amazon WorkSpaces is a managed desktop service that allows you to quickly provision cloud-based desktops for your users. Simply choose a Windows 7 bundle that best meets the needs of your users and the number of WorkSpaces that you would like to launch. Once the WorkSpaces are ready, users receive an email informing them where they can download the relevant client and log in to their WorkSpace. They can then access their cloud-based desktops from a variety of endpoint devices, including PCs, laptops, and mobile devices. However, your organization's data is never sent to or stored on the end-user device, because Amazon WorkSpaces uses PC-over-IP (PCoIP), which provides an interactive video stream without transmitting actual data. The PCoIP protocol compresses, encrypts, and encodes the user's desktop computing experience and transmits 'pixels only' across any standard IP network to end-user devices.

In order to access their WorkSpace, users must sign in using a set of unique credentials or their regular Active Directory credentials. When you integrate Amazon WorkSpaces with your corporate Active Directory, each WorkSpace joins your Active Directory domain and can be managed just like any other desktop in your organization. This means that you can use Active Directory Group Policies to manage your users' WorkSpaces and specify configuration options that control the desktop. If you choose not to use Active Directory or another type of on-premises directory to manage your user WorkSpaces, you can create a private cloud directory within Amazon WorkSpaces that you can use for administration.

To provide an additional layer of security, you can also require the use of multi-factor authentication upon sign-in, in the form of a hardware or software token. Amazon WorkSpaces supports MFA using an on-premises Remote Authentication Dial-In User Service (RADIUS) server or any security provider that supports RADIUS authentication. It currently supports the PAP, CHAP, MS-CHAP1, and MS-CHAP2 protocols, along with RADIUS proxies.

Each WorkSpace resides on its own EC2 instance within a VPC. You can create WorkSpaces in a VPC you already own, or have the WorkSpaces service create one for you automatically using the WorkSpaces Quick Start option. When you use the Quick Start option, WorkSpaces not only creates the VPC but also performs several other provisioning and configuration tasks for you, such as creating an Internet Gateway for the VPC, setting up a directory within the VPC that is used to store user and WorkSpace information, creating a directory administrator account, creating the specified user accounts and adding them to the directory, and creating the WorkSpace instances. Alternatively, the VPC can be connected to an on-premises network using a secure VPN connection to allow access to an existing on-premises Active Directory and other intranet resources. You can add a security group that you create in your Amazon VPC to all the WorkSpaces that belong to your directory. This allows you to control network access from Amazon WorkSpaces in your VPC to other resources in your Amazon VPC and on-premises network.

Persistent storage for WorkSpaces is provided by Amazon EBS and is automatically backed up twice a day to Amazon S3. If WorkSpaces Sync is enabled on a WorkSpace, the folder a user chooses to sync is continuously backed up and stored in Amazon S3. You can also use WorkSpaces Sync on a Mac or PC to sync documents to or from your WorkSpace, so that you always have access to your data regardless of the desktop computer you are using.

Because it's a managed service, AWS takes care of several security and maintenance tasks like daily backups and patching. Updates are delivered automatically to your WorkSpaces during a weekly maintenance window, and you can control how patching is configured for a user's WorkSpace. By default, Windows Update is enabled and configured to install updates weekly, but you can customize these settings, configure Windows Update to perform updates at a time of your choosing, or use an alternative patch management approach.
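The security-group control described above can be sketched in code. The following is an illustrative Python snippet building boto3-style parameters for an ingress rule that admits only PCoIP traffic from a corporate network range; the group ID and CIDR are placeholders, and it assumes PCoIP's standard TCP/UDP port 4172:

```python
def pcoip_ingress_params(group_id, source_cidr):
    """Build kwargs for ec2.authorize_security_group_ingress (boto3-style),
    permitting PCoIP (TCP and UDP 4172) from a trusted CIDR only."""
    return {
        "GroupId": group_id,
        "IpPermissions": [
            {
                "IpProtocol": proto,
                "FromPort": 4172,   # PCoIP session/streaming port (assumed)
                "ToPort": 4172,
                "IpRanges": [{
                    "CidrIp": source_cidr,
                    "Description": "PCoIP from corporate network",
                }],
            }
            for proto in ("tcp", "udp")
        ],
    }

# Placeholder identifiers for illustration only
params = pcoip_ingress_params("sg-0123456789abcdef0", "10.0.0.0/8")
```

In a real deployment you would pass `params` to `boto3.client("ec2").authorize_security_group_ingress(**params)` against the security group attached to your WorkSpaces directory.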
You can use IAM to control who on your team can perform administrative functions, like creating or deleting WorkSpaces or setting up user directories. You can also set up a WorkSpace for directory administration, install your favorite Active Directory administration tools, and create organizational units and Group Policies in order to more easily apply Active Directory changes for all your WorkSpaces users.

Amazon WorkDocs
Amazon WorkDocs is a managed enterprise storage and sharing service with feedback capabilities for user collaboration. Users can store any type of file in a WorkDocs folder and allow others to view and download them. Commenting and annotation capabilities work on certain file types, such as MS Word, without requiring the application that was used to originally create the file. WorkDocs notifies contributors about review activities and deadlines via email, and performs versioning of files that you have synced using the WorkDocs Sync application.

User information is stored in an Active Directory-compatible network directory. You can either create a new directory in the cloud or connect Amazon WorkDocs to your on-premises directory. When you create a cloud directory using WorkDocs' quick start setup, it also creates a directory administrator account with the administrator email as the username. An email is sent to your administrator with instructions to complete registration. The administrator then uses this account to manage your directory. When you create a cloud directory using WorkDocs' quick
start setup, it also creates and configures a VPC for use with the directory. If you need more control over the directory configuration, you can choose the standard setup, which allows you to specify your own directory domain name as well as one of your existing VPCs to use with the directory. If you want to use one of your existing VPCs, the VPC must have an Internet gateway and at least two subnets, each in a different Availability Zone.

Using the Amazon WorkDocs Management Console, administrators can view audit logs to track file and user activity by time, IP address, and device, and choose whether to allow users to share files with others outside their organization. Users can then control who can access individual files and disable downloads of files they share.

All data in transit is encrypted using industry-standard SSL. The WorkDocs web and mobile applications and desktop sync clients transmit files directly to Amazon WorkDocs using SSL. WorkDocs users can also utilize multi-factor authentication (MFA) if their organization has deployed a RADIUS server. MFA uses the following factors: username, password, and methods supported by the RADIUS server. The protocols supported are PAP, CHAP, MS-CHAPv1, and MS-CHAPv2. You choose the AWS Region where each WorkDocs site's files are stored. Amazon WorkDocs is currently available in the US East (Virginia), US West (Oregon), and EU (Ireland) AWS Regions. All files, comments, and annotations stored in WorkDocs are automatically encrypted with AES-256 encryption.

Document Revisions
March 2020: Updated compliance certifications, hypervisor, AWS Snowball
February 2019: Added information about deleting objects in Amazon S3 Glacier
December 2018: Edit made to the Amazon Redshift Security topic
May 2017: Added section on AWS Config Security Checks
April 2017: Added section on Amazon Elastic File System
March 2017: Migrated into new
format
January 2017: Updated regions

AWS Response to CACP Information and Communication Technology Sub-Committee: Offsite Data Storage and Processing Best Practices
Amazon Web Services, May 2017

© 2017, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Introduction
CACP Requirements
Vendor Requirements
Information Security Requirements
Data Centre Security Requirements
Personnel Security Requirements
Access Control Requirements
Document Revisions

Introduction
This document provides information that Canadian police agencies can use to help determine how AWS services support their requirements, and how to integrate AWS into the existing control framework that supports their IT environment. For more information about compliance on AWS, see AWS Risk and Compliance Overview (https://d0.awsstatic.com/whitepapers/compliance/AWS_Risk_and_Compliance_Overview.pdf). The tables listed in CACP Requirements
below address the requirements listed in the Canadian Association of Chiefs of Police (CACP) Information and Communication Technology Sub-Committee's Offsite Data Storage and Processing Best Practices. Further supporting details on AWS's alignment with the CACP Sub-Committee's best practices can be requested subject to a non-disclosure agreement with AWS. Please contact your AWS account representative.

CACP Requirements
The following tables describe how AWS aligns with the CACP information storage requirements. Protected A and Protected B refer to security levels that the Canadian government has defined for sensitive government information and assets. Unauthorized access to Protected A information could lead to "injury to an individual, organization, or government." Unauthorized access to Protected B information could lead to "serious injury to an individual, organization, or government." Values in the Protected A and Protected B columns are set to the following possible states:
• M – Mandatory
• H – Highly Desirable
• D – Desirable

Vendor Requirements

Requirement: 24x7 managed tier 1 and tier 2 support (Protected A: M; Protected B: M; Reference: CJIS)
AWS Responsibility: AWS provides a variety of options for 24x7 tier 1 and tier 2 support at the Business Support level or better. For more information, see https://aws.amazon.com/premiumsupport/compare-plans/

Requirement: Uptime guarantee of a minimum of 99.9% (Protected A: H; Protected B: H; Reference: CACP ICT)
AWS Responsibility: Each AWS service provides details on availability SLAs. For instance, Amazon EC2 has an availability SLA of 99.95% (https://aws.amazon.com/ec2/sla) and Amazon S3 has an availability SLA of 99.99% (https://aws.amazon.com/s3/sla).

Requirement: Documented and proven configuration management processes (Protected A: M; Protected B: M; Reference: MITS)
AWS Responsibility: AWS maintains a documented and proven configuration management process that is performed during information system design, development, implementation, and operation.

Requirement: Documented and proven
change control processes that adhere to ITIL service management processes (Protected A: M; Protected B: M; Reference: MITS/CACP ICT)
AWS Responsibility: AWS maintains change control processes that support the scale and complexity of the business and have been independently assessed.

Requirement: Documented and proven incident response processes, including:
• Incident Identification
• Incident Response
• Incident Reporting
• Incident Recovery
• Post-Incident Analysis
(Protected A: M; Protected B: M; Reference: MITS)
AWS Responsibility: The AWS incident response program (detection, investigation, and response to incidents) has been developed in alignment with ISO 27001 standards.

Requirement: Provide a current SOC Level 2 compliance report (if financial data is used or stored) (Protected A: M; Protected B: M; Reference: CACP ICT)
AWS Responsibility: AWS provides access to its SOC 1 Type 2 and SOC 2 Type 2: Security & Availability reports subject to a non-disclosure agreement, while the SOC 3: Security & Availability report is publicly available. For more information, see https://aws.amazon.com/compliance/soc-faqs/

Requirement: Maintain current PCI compliance (if PCI data is used or stored) (Protected A: M; Protected B: M; Reference: CACP ICT)
AWS Responsibility: AWS maintains compliance with PCI DSS v3.2 as a Level 1 service provider. For more information, see https://aws.amazon.com/compliance/pci-dss-level-1-faqs/

Requirement: Maintain a current Cloud Controls Matrix (CCM) compliance report and provide it to the agency upon request (Protected A: H; Protected B: H; Reference: CACP ICT)
AWS Responsibility: AWS is listed on the CSA's STAR registrants page at https://cloudsecurityalliance.org/star-registrant/amazonaws/

Requirement: The Contractor must possess adequate disaster recovery and business continuity processes for a man-made or natural disaster. The Contractor must provide their business continuity and disaster recovery plan to the customer upon request. The plans must include, but are not limited to:
• How long it would take to recover from a disruption
• How long it will take to switch to a backup site
• The level of service and functionality provided by the backup site, and within what time frame the provider will recover the primary data and service
• A
report on how and how often the customer data is backed up
(Protected A: M; Protected B: M; Reference: RCMP)
AWS Responsibility: Customer resiliency is transformed with the use of the cloud. Businesses are using AWS to enable faster disaster recovery of critical IT systems, and AWS provides a whitepaper on using AWS for disaster recovery (https://aws.amazon.com/blogs/aws/new-whitepaper-use-aws-for-disaster-recovery/). Customer resiliency is then not tied to any underlying infrastructure impacts. AWS maintains internal operational continuity processes, including N+2 physical redundancy from generators to third-party service providers, at every data centre globally.

Requirement: Ability to determine where all agency information is at all times, including online data and backups (Protected A: D; Protected B: M; Reference: CACP ICT)
AWS Responsibility: When using AWS, customers have full control of the movement of their data, with the ability to choose the region in which their data is kept.

Requirement: Ensure any connections to the Internet, other external networks, or information systems occur through controlled interfaces (e.g., proxies, gateways, routers, firewalls, encrypted tunnels) (Protected A: H; Protected B: M; Reference: CJIS)
AWS Responsibility: AWS has a limited number of access points to the information system to allow for more comprehensive monitoring of inbound and outbound communications and network traffic. These customer access points are called API endpoints, which allow customers to establish a secure communication session with their storage or compute instances within AWS. Customers have the ability to deploy various tools and mechanisms to monitor traffic and activity, such as VPC configurations, EC2 Security Groups, and the AWS Web Application Firewall (WAF), as well as secure encrypted connections. For more information, see https://aws.amazon.com/security/

Requirement: Employ tools and techniques to monitor network events, detect attacks, and provide identification of unauthorized use 24x7 (Protected A: D; Protected B: M; Reference: CJIS)
AWS Responsibility: AWS customers benefit from AWS services and technologies built from the ground up to provide resilience in the face of DDoS attacks, including
services designed with an automatic response to DDoS to help minimize time to mitigate and reduce impact. The customer has broad latitude to implement similar capabilities within their environment to monitor system events, detect attacks, and provide identification of unauthorized use 24x7, including vulnerability scanning and penetration testing. For more information, see https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June2015.pdf

Requirement: Ensure the operational failure of the boundary protection mechanisms does not result in any unauthorized release of information outside of the information system boundary (i.e., the device shall "fail closed" vs. "fail open") (Protected A: D; Protected B: M; Reference: CJIS)
AWS Responsibility: AWS users have the ability to configure their services to operate in a number of ways compliant with fail-secure requirements.

Requirement: Allocate publicly accessible information system components (e.g., public web servers) to separate sub-networks with separate network interfaces (Protected A: D; Protected B: H; Reference: CACP ICT)
AWS Responsibility: AWS does not operate publicly accessible information system components, such as public web servers, from within the cloud infrastructure. All external interaction with the infrastructure is through a set of well-known, structured API endpoints. Internet-facing servers in the customer's account are entirely within their operational control. For more information, see https://aws.amazon.com/whitepapers/aws-security-best-practices/

Requirement: Data in transit is encrypted (Protected A: H; Protected B: M; Reference: MITS)
AWS Responsibility: AWS provides several means for encrypting data in transit. Encrypted IPSec tunnels can be created between a customer's endpoint and their VPC. For more information, see https://aws.amazon.com/vpc

Requirement: Data at rest (local or backups) is encrypted (Protected A: H; Protected B: M; Reference: MITS)
AWS Responsibility: AWS provides a variety of options for encryption of data at rest. For instance, with S3, customers can securely upload or download data to Amazon S3 via the SSL-encrypted endpoints using the HTTPS protocol. Amazon S3 can automatically encrypt customer data at rest and gives several
choices for key management. Alternatively, customers can use a client encryption library, such as the Amazon S3 Encryption Client, to encrypt data before uploading to Amazon S3. If desired, Amazon S3 can encrypt customer data at rest with server-side encryption (SSE); Amazon S3 will automatically encrypt customer data on write and decrypt it on retrieval. When Amazon S3 SSE encrypts data at rest, it uses Advanced Encryption Standard (AES) 256-bit symmetric keys.

There are three ways to manage the encryption keys with server-side encryption in Amazon S3:
• SSE with Amazon S3 Key Management (SSE-S3): Amazon S3 encrypts data at rest and manages the encryption keys
• SSE with Customer-Provided Keys (SSE-C): Amazon S3 encrypts data at rest using encryption keys that customers provide
• SSE with AWS KMS (SSE-KMS): Amazon S3 encrypts data at rest using keys that the customer manages in the AWS Key Management Service (KMS)
For more information, see:
• https://aws.amazon.com/s3/details/#security
• https://aws.amazon.com/kms/

Requirement: When encryption is employed, the cryptographic keys meet or exceed AES 256 (Protected A: H; Protected B: M; Reference: CACP ICT)
AWS Responsibility: AWS supports the use of AES 256.

Requirement: When encryption is employed, the cryptographic module used shall be certified to meet FIPS 140-2 standards (Protected A: D; Protected B: H; Reference: MITS)
AWS Responsibility: AWS GovCloud (US) provides endpoints in compliance with FIPS 140-2 requirements. Customers have the ability to deploy FIPS-compliant modules within their account, depending on their application's ability to support FIPS 140-2 cryptographic modules.

Requirement: Encryption keys are highly secured, protected, and available to the agency upon request (Protected A: M; Protected B: M; Reference: MITS)
AWS Responsibility: The use of AWS CloudHSM or AWS KMS provides options for customers to create and control their own encryption keys. For more information, see:
• https://aws.amazon.com/kms/
• https://aws.amazon.com/cloudhsm/

Requirement: Encryption keys are controlled and stored by the agency (Protected A: D; Protected B: H; Reference: CACP ICT)
AWS Responsibility: The use of AWS CloudHSM
or AWS KMS provides options for customers to create and control their own encryption keys. For more information, see:
• https://aws.amazon.com/kms/
• https://aws.amazon.com/cloudhsm/

Requirement: External access to the administrative or management functions must be over VPN only. This includes modems, FTP, or any protocol/port support provided by the equipment manufacturer. This access must be limited to users with two-factor authentication (Protected A: D; Protected B: H; Reference: NPISAB)
AWS Responsibility: Customers can connect to the management console to administer their environment over VPN and mandate the use of two-factor authentication per internal agency requirements. For more information, see https://aws.amazon.com/iam/details/mfa/ Administrative connections to the AWS infrastructure are performed using secure mechanisms.

Requirement: Agency data shall not be used by any service provider for any purposes. The service provider shall be prohibited from scanning data files for the purpose of data mining or advertising (Protected A: M; Protected B: M; Reference: CACP ICT)
AWS Responsibility: AWS does not access or use customer content for any purpose other than as legally required and for maintaining the AWS services and providing them to customers and their end users. AWS never uses customer content or derives information from it for marketing or advertising. For more information, see https://aws.amazon.com/compliance/data-privacy-faq/ The AWS Privacy Policy describes how AWS collects and uses information that customers provide in connection with the creation or administration of AWS accounts, which is referred to as "Account Information." For example, Account Information includes names, usernames, phone numbers, email addresses, and billing information associated with a customer's AWS account. The AWS Privacy Policy applies to customers' Account Information and does not apply to the content that customers store on AWS, including any personal information of customer end users. AWS will not disclose, move, access, or use customer content except as provided in the
customer's agreement with AWS. The customer agreement with AWS (https://aws.amazon.com/agreement/) and the AWS Data Protection FAQ contain more information about how we handle content you store on our systems.

Requirement: All firewalls meet the minimum standard of Evaluation Assurance Level (EAL) 4 (Protected A: H; Protected B: M; Reference: NPISAB)
AWS Responsibility: AWS provides multiple features and services to help customers protect data, including the AWS Web Application Firewall (WAF). There are also several vendors in the AWS Marketplace with similar security product offerings. For more information, see:
• https://aws.amazon.com/waf/
• https://aws.amazon.com/marketplace

Requirement: Ensure regular virus, malware, and penetration testing of their environment (Protected A: M; Protected B: M; Reference: NPISAB)
AWS Responsibility: AWS ensures regular virus, malware, and penetration testing of the infrastructure environment. Customers can also conduct their own penetration testing within their account. For more information, see https://aws.amazon.com/security/penetration-testing/

Requirement: Provide sufficient documentation of their virus, malware, and penetration testing results; upon request by the agency, the vendor will provide a current report (Protected A: H; Protected B: M; Reference: CACP ICT)
AWS Responsibility: AWS' program, processes, and procedures for managing antivirus/malicious software are in alignment with the ISO 27001 standard and are referenced in AWS SOC reports. AWS Security regularly engages independent security firms to perform external vulnerability threat assessments, and AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.

Requirement: Provide sufficient documentation of all patch management; upon request by the agency, the vendor will provide a current report (Protected A: H; Protected B: M; Reference: CACP ICT)
AWS Responsibility: Customers retain control of their own guest operating systems, software, and applications, and are responsible for performing vulnerability scans and patching of their own systems. Customers can request permission to conduct scans of their cloud infrastructure, as
long as they are limited to the customer's instances and do not violate the AWS Acceptable Use Policy. AWS regularly scans all Internet-facing service endpoint IP addresses for vulnerabilities, and AWS Security notifies the appropriate parties to remediate any identified vulnerabilities. AWS' own maintenance and system patching generally do not impact customers. For more information, see the AWS Security Whitepaper (available at https://aws.amazon.com/security/) and ISO 27001 standard Annex A, domain 12. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.

Requirement: Continual monitoring and logging of the following events:
• DDoS attacks
• Unauthorized changes to the system hardware, firmware, and software
• System performance anomalies
• Known attack signatures
(Protected A: D; Protected B: M; Reference: MITS)
AWS Responsibility: AWS employs a variety of tools and techniques to monitor network events and unauthorized use 24x7. AWS customers benefit from AWS services and technologies built from the ground up to provide resilience in the face of DDoS attacks, including services designed with an automatic response to DDoS to help minimize time to mitigate and reduce impact. The customer has broad latitude to implement similar capabilities within their environment to monitor system events, detect attacks, and provide identification of unauthorized use 24x7. For more information, see:
• https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June2015.pdf
• https://aws.amazon.com/security

Requirement: Ability to enable data retention policies as defined by the customer (Protected A: D; Protected B: H; Reference: CACP ICT)
AWS Responsibility: While AWS provides customers with the ability to delete their data, AWS customers retain control and ownership of their data and are responsible for managing data retention to their own requirements. AWS maintains data retention policies in accordance with several well-known international standards and regulations, such as SOC and PCI DSS, that are independently assessed and attested.
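The three server-side encryption choices described in the data-at-rest rows above map to distinct request arguments when uploading objects. A hedged Python sketch of the boto3-style `put_object` argument shapes; bucket names and the KMS key alias are placeholders, and boto3 itself base64-encodes an SSE-C key and computes its MD5 digest automatically:

```python
import os

def sse_args(mode, kms_key_id=None, customer_key=None):
    """Return the extra put_object arguments for each S3 SSE mode."""
    if mode == "SSE-S3":
        # Amazon S3 manages the AES-256 keys itself
        return {"ServerSideEncryption": "AES256"}
    if mode == "SSE-KMS":
        # Keys are managed in AWS KMS; the key ID/alias is supplied per request
        return {"ServerSideEncryption": "aws:kms",
                "SSEKMSKeyId": kms_key_id}
    if mode == "SSE-C":
        # The customer supplies a 256-bit key with every request;
        # boto3 handles the base64 encoding and MD5 checksum for you
        return {"SSECustomerAlgorithm": "AES256",
                "SSECustomerKey": customer_key}
    raise ValueError(f"unknown SSE mode: {mode}")

# Placeholder KMS key alias for illustration only
kms_demo = sse_args("SSE-KMS", kms_key_id="alias/hypothetical-app-key")
ssec_demo = sse_args("SSE-C", customer_key=os.urandom(32))
```

In practice these dictionaries would be merged into a call such as `s3.put_object(Bucket=..., Key=..., Body=..., **sse_args("SSE-S3"))`; with SSE-C, the same key must also be presented on every subsequent `get_object`.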
Amazon Web Services May 2017 Page 18 of 38 Information Security Requirements Requirement Protected A Protected B Reference AWS Responsibility Ability to determine where all agency information is at all times including online data and backups D M CACP ICT Customers have full control of the movement of their data when using AWS with the choice of the region in which their data is kept Ensure any connections to the Internet other external networks or information systems occur through controlled interfaces (eg proxies gateways routers firewalls encrypted tunnels) H M CJIS AWS has a limited number of access points to the information system to allow for a more comprehensive monitoring of inbound and outbound communications and network traffic These customer access points are called API endpoints which allow customers to establish a secure communication session with their storage or compute instances within AWS Customers have the ability to deploy various tools and mechan isms to monitor traffic and activity such as VPC configurations EC2 Security Groups the AWS Web Application Firewall (WAF) as well as secure encrypted connections For more information see https://awsa mazoncom/security/ Amazon Web Services May 2017 Page 19 of 38 Employ tools and techniques to monitor network events detect attacks and provide identification of unauthorized use 24x7 D M CJIS AWS customers benefit from AWS services and technologies built from the ground up to provide resilience in the face of DDoS attacks to include services designed with an automatic response to DDoS to help minimize time to mitigate and reduce impact The customer has broad latitude to implement similar capabilities within their customer environment to moni tor system events detect attacks and provide identification of unauthorized use 24x7 to include vulnerability scanning and penetration testing For more information see • https://d0awsstaticcom/whitepapers/DDoS_White_Paper_J une2015pdf • https://awsamazoncom/security • 
https://aws.amazon.com/security/penetration-testing/

Requirement: Ensure the operational failure of the boundary protection mechanisms does not result in any unauthorized release of information outside of the information system boundary (i.e., the device shall "fail closed" vs. "fail open") (Protected A: D; Protected B: M; Reference: CJIS)
AWS Responsibility: Users in AWS have the ability to configure their services to operate in a number of ways compliant with fail-secure requirements.

Requirement: Allocate publicly accessible information system components (e.g., public web servers) to separate subnetworks with separate network interfaces (Protected A: D; Protected B: H; Reference: CACP ICT)
AWS Responsibility: AWS does not operate publicly accessible information system components, such as public web servers, from within the cloud infrastructure. All external interaction with the infrastructure is through a set of well-known, structured API endpoints. Internet-facing servers in the customer's account are entirely within their operational control. For more information, see https://aws.amazon.com/whitepapers/aws-security-best-practices/

Requirement: Data in transit is encrypted (Protected A: H; Protected B: M; Reference: MITS)
AWS Responsibility: AWS provides several options for encrypting data in transit. Encrypted IPsec tunnels can be created between a customer's endpoint and their VPC. For more information, see https://aws.amazon.com/vpc

Requirement: Data at rest (local or backups) is encrypted (Protected A: H; Protected B: M; Reference: MITS)
AWS Responsibility: AWS provides a variety of options for encryption of data at rest. For example, with Amazon S3, customers can securely upload or download data via SSL-encrypted endpoints using the HTTPS protocol. Amazon S3 can automatically encrypt customer data at rest and offers several choices for key management. Alternatively, customers can use a client-side encryption library, such as the Amazon S3 Encryption Client, to encrypt data before uploading to Amazon S3. If desired, Amazon S3 can encrypt customer data at rest with server-side encryption (SSE); Amazon S3 will automatically encrypt customer data on write and decrypt it on retrieval. When Amazon S3 SSE encrypts data
at rest, it uses Advanced Encryption Standard (AES) 256-bit symmetric keys. There are three ways to manage the encryption keys with server-side encryption in Amazon S3:
• SSE with Amazon S3 key management (SSE-S3): Amazon S3 will encrypt data at rest and manage the encryption keys.
• SSE with customer-provided keys (SSE-C): Amazon S3 will encrypt data at rest using the encryption keys customers provide.
• SSE with AWS KMS (SSE-KMS): Amazon S3 will encrypt data at rest using keys that the customer manages in the AWS Key Management Service (KMS).
For more information, see:
• https://aws.amazon.com/s3/details/#security
• https://aws.amazon.com/kms

Requirement: When encryption is employed, the cryptographic keys meet or exceed AES-256 (Protected A: H; Protected B: M; Reference: CACP ICT)
AWS Responsibility: AWS supports the use of AES-256.

Requirement: When encryption is employed, the cryptographic module used shall be certified to meet FIPS 140-2 standards (Protected A: D; Protected B: H; Reference: MITS)
AWS Responsibility: AWS GovCloud (US) provides endpoints compliant with FIPS 140-2 requirements. Customers have the ability to deploy FIPS-compliant modules within their account, depending on their application's ability to support FIPS 140-2 cryptographic modules. For more information, see https://aws.amazon.com/federal/

Requirement: Encryption keys must be highly secured, protected, and available to the agency upon request (Protected A: M; Protected B: M; Reference: MITS)
AWS Responsibility: The use of AWS CloudHSM or AWS KMS provides options for customers to create and control their own encryption keys. For more information, see:
• https://aws.amazon.com/kms/
• https://aws.amazon.com/cloudhsm/

Requirement: Encryption keys are controlled and stored by the agency (Protected A: D; Protected B: H; Reference: CACP ICT)
AWS Responsibility: The use of AWS CloudHSM or AWS KMS provides options for customers to create and control their own encryption keys. For more information, see:
• https://aws.amazon.com/kms/
• https://aws.amazon.com/cloudhsm/

Requirement: External access to the administrative or management functions must be over VPN only. This includes modems, FTP,
or any protocol/port support provided by the equipment manufacturer. This access must be limited to users with two-factor authentication. (Protected A: D; Protected B: H; Reference: NPISAB)
AWS Responsibility: Customers can connect to the management console to administer their environment over VPN and mandate the use of two-factor authentication per internal agency requirements. For more information, see https://aws.amazon.com/iam/details/mfa/. Administrative connections to the AWS infrastructure are performed using secure mechanisms.

Requirement: Agency data shall not be used by any service provider for any purposes. The service provider shall be prohibited from scanning data files for the purpose of data mining or advertising. (Protected A: M; Protected B: M; Reference: CACP ICT)
AWS Responsibility: AWS does not access or use customer content for any purpose other than as legally required and for maintaining the AWS services and providing them to customers and their end users. AWS never uses customer content, or derives information from it, for marketing or advertising. For more information, see https://aws.amazon.com/compliance/data-privacy-faq/. The AWS Privacy Policy describes how AWS collects and uses information that customers provide in connection with the creation or administration of AWS accounts, which is referred to as "Account Information." For example, Account Information includes names, usernames, phone numbers, email addresses, and billing information associated with a customer's AWS account. The AWS Privacy Policy applies to customers' Account Information and does not apply to the content that customers store on AWS, including any personal information of customer end users. AWS will not disclose, move, access, or use customer content except as provided in the customer's agreement with AWS. The customer agreement with AWS (https://aws.amazon.com/agreement/) and the AWS Data Protection FAQ contain more information about how we handle content you store on our systems.

Requirement: All firewalls meet the minimum standard of Evaluation Assurance Level (EAL) 4 (Protected A: H; Protected B: M; Reference:
NPISAB)
AWS Responsibility: AWS provides multiple features and services to help customers protect data, including the AWS Web Application Firewall (WAF). There are also several vendors in the AWS Marketplace with similar security utility product offerings. For more information, see:
• https://aws.amazon.com/waf/
• https://aws.amazon.com/marketplace

Requirement: Ensure regular virus, malware, and penetration testing of their environment (Protected A: M; Protected B: M; Reference: NPISAB)
AWS Responsibility: AWS ensures regular virus, malware, and penetration testing of the infrastructure environment. Customers can also conduct their own penetration testing within their account. For more information, see https://aws.amazon.com/security/penetration-testing/

Requirement: Provide sufficient documentation of their virus, malware, and penetration testing results, and upon request by the agency the vendor will provide a current report (Protected A: H; Protected B: M; Reference: CACP ICT)
AWS Responsibility: Customers retain control of their own guest operating systems, software, and applications and are responsible for performing vulnerability scans and patching of their own systems. Customers can request permission to conduct scans of their cloud infrastructure as long as they are limited to the customer's instances and do not violate the AWS Acceptable Use Policy. AWS regularly scans all Internet-facing service endpoint IP addresses for vulnerabilities, and AWS Security notifies the appropriate parties to remediate any identified vulnerabilities. AWS's own maintenance and system patching generally do not impact customers. For more information, see the AWS Security Whitepaper (available at https://aws.amazon.com/security/) and ISO 27001 standard Annex A, domain 12. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.

Requirement: Provide sufficient documentation of all patch management, and upon request by the agency the vendor will provide a current report (Protected A: H; Protected B: M; Reference: CACP ICT)
AWS Responsibility: Customers retain control of their own guest operating systems, software, and applications and are responsible for
performing vulnerability scans and patching of their own systems. Customers can request permission to conduct scans of their cloud infrastructure as long as they are limited to the customer's instances and do not violate the AWS Acceptable Use Policy. AWS regularly scans all Internet-facing service endpoint IP addresses for vulnerabilities, and AWS Security notifies the appropriate parties to remediate any identified vulnerabilities. AWS's own maintenance and system patching generally do not impact customers. For more information, see the AWS Security Whitepaper (available at https://aws.amazon.com/security/) and ISO 27001 standard Annex A, domain 12. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.

Requirement: Continual monitoring and logging for the following events: (Protected A: D; Protected B: M; Reference: MITS)
• DDoS attacks
• Unauthorized changes to the system hardware, firmware, and software
• System performance anomalies
• Known attack signatures
AWS Responsibility: AWS employs a variety of tools and techniques to monitor network events and unauthorized use 24x7. AWS customers benefit from AWS services and technologies built from the ground up to provide resilience in the face of DDoS attacks, including services designed with an automatic response to DDoS to help minimize time to mitigate and reduce impact. The customer has broad latitude to implement similar capabilities within their customer environment to monitor system events, detect attacks, and provide identification of unauthorized use 24x7. For more information, see:
• https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June2015.pdf
• https://aws.amazon.com/security

Requirement: Ability to enable data retention policies as defined by the customer (Protected A: D; Protected B: H; Reference: CACP ICT)
AWS Responsibility: While AWS provides customers with the ability to delete their data, AWS customers retain control and ownership of their data and are responsible for managing data retention to their own requirements. AWS maintains data retention policies
in accordance with several well-known international standards and regulations, such as SOC and PCI DSS, that are independently assessed and attested.

Data Centre Security Requirements

Requirement | Protected A | Protected B | Reference | AWS Responsibility

Requirement: The data centre must be physically secured against the entry of unauthorized personnel (Protected A: H; Protected B: M; Reference: MITS)
AWS Responsibility: AWS strictly controls access to data centres, even for internal employees. Physical access to all AWS data centres housing IT infrastructure components is restricted to authorized data centre employees, vendors, and contractors who require access in order to execute their jobs. AWS data centres utilize trained security guards 24x7. Because our data centres host multiple customers, AWS does not allow data centre tours by customers, as this would expose a wide range of customers to physical access by a third party. To meet this customer need, an independent and competent auditor validates the presence and operation of controls as part of our SOC 1 Type II report. This broadly accepted third-party validation provides customers with an independent perspective on the effectiveness of the controls in place.

Requirement: Locked doors with access control systems that restrict entry to authorized parties only. All activity must be logged. (Protected A: H; Protected B: M; Reference: RCMP)
AWS Responsibility: Physical access to the AWS data centres is controlled by an access control system, and all activity is logged.

Requirement: Logs of personnel access privilege shall be kept for a minimum of one year and provided to the agency upon request (Protected A: D; Protected B: M; Reference: CACP ICT)
AWS Responsibility: Physical access logs are maintained for a minimum of one year. Access logs are provided to independent auditors in support of our formal compliance audits.

Requirement: Logs of personnel access changes shall be kept for a minimum of one year and provided to the agency upon request (Protected A: D; Protected B: M; Reference: CJIS)
AWS Responsibility: Physical access logs are maintained for a minimum of one year.

Requirement: Building must be constructed with walls that are difficult to breach
(Protected A: D; Protected B: M; Reference: RCMP)
AWS Responsibility: Buildings are constructed according to local building code (typically concrete).

Requirement: Two-factor authentication to enter the building containing the data centre (Protected A: D; Protected B: H; Reference: MITS)
AWS Responsibility: Access to AWS data centres requires a variety of two-factor authentication mechanisms.

Requirement: CCTV video displayed and recorded for all entry and exit paths and the building exterior (Protected A: D; Protected B: M; Reference: CACP ICT)
AWS Responsibility: CCTV systems are in use at every AWS data centre, with recorded video and 24x7 guard personnel at all main entry points to the building.

Requirement: Bags and packages will be examined upon entry (Protected A: D; Protected B: M; Reference: CJIS)
AWS Responsibility: AWS uses guard personnel at all main entry points 24x7, with bag searches in place.

Requirement: Authenticate visitors before authorizing escorted access to the data centre (Protected A: H; Protected B: M; Reference: CJIS)
AWS Responsibility: Physical access to all AWS data centres housing IT infrastructure components is restricted to authorized data centre employees, vendors, and contractors who require access in order to execute their jobs, and includes the escorting of visitors where applicable.

Requirement: All customer information must be logically (and/or physically) separated from all other customers' information. This separation must be tested by an unbiased third party or demonstrated by the data centre management. (Protected A: D; Protected B: H; Reference: CJIS)
AWS Responsibility: All customer information is logically separated by default through the use of the Amazon Virtual Private Cloud (VPC) service, a service that has been assessed by multiple third-party assessors. For more information, see https://aws.amazon.com/vpc/

Requirement: Ability to indicate and limit which data centres agency data will be stored in (Protected A: D; Protected B: M; Reference: CACP ICT)
AWS Responsibility: The location of customer data is determined by the customer at the region level. AWS does not access, use, or move customer content for any purpose other than as legally required and for maintaining the AWS services and providing them to customers and their end users.

Requirement: Agency information kept within a secure server room (SSR) that includes
the following: (Protected A: D; Protected B: H; Reference: RCMP)
• Vibration detection on walls
• Intrusion detection system inside the secure server room
• Two-person authentication to enter the secure server room
AWS Responsibility: AWS utilizes several layers of security to protect the server rooms within the data centre ("red zones"). AWS employs several physical security mechanisms, including intrusion detection systems and two-person authentication.

Requirement: Disposal of hard drives with agency information includes the following steps to meet Canadian Standard ITSG-06: (Protected A: D; Protected B: M; Reference: RCMP)
1. Disk encryption or overwriting
2. Grind or hammer mill into at least three pieces
AWS Responsibility: AWS uses multiple steps during the process of media decommissioning for both magnetic hard drives (HDDs) and solid state drives (SSDs). On site, HDDs are degaussed and then bent to an abrupt angle, and SSDs are logically overwritten before being punched. Both types of drives are ultimately shredded for recycling of materials. Customers have the ability to conduct a variety of sanitization methods themselves, including data deletion using relevant tools, or encrypting data and destroying the encryption key, rendering the data permanently unusable.

Personnel Security Requirements

Requirement | Protected A | Protected B | Reference | AWS Responsibility

Requirement: All system administrators and personnel with access to the facility must have an Enhanced Security Check completed by a substantive law enforcement agency. A Canadian federal security clearance of level Secret or higher may be substituted and considered equivalent. A US federal security clearance of level Secret or higher may be substituted and considered equivalent. (Protected A: H; Protected B: M; Reference: RCMP)
AWS Responsibility: All AWS employees must complete a comprehensive pre-employment background check. Several specific positions are also processed through a separate Trusted Position Check. Additionally, there are many employees that hold, or are otherwise processed for, a US national security clearance (TS/SCI) (reinvestigated every five years) and/or a Criminal
Justice Information Services (CJIS) fingerprint and records check.

Requirement: Personnel must have initial background checks at the time of first employment with the data centre owner. Security clearances must be maintained within the expiry period. All system administrators and personnel with access to the facility must have the background check repeated on a five-year cycle. (Protected A: H; Protected B: M; Reference: RCMP)
AWS Responsibility: All AWS employees must complete a comprehensive pre-employment background check. Several specific positions are also processed through a separate Trusted Position Check. Additionally, there are many employees that hold, or are otherwise processed for, a US national security clearance (TS/SCI) (reinvestigated every five years) and/or a Criminal Justice Information Services (CJIS) fingerprint and records check. Employees with physical access are not provisioned logical access.

Requirement: Upon termination of individual employment, access to the facility shall be terminated immediately. (Protected A: M; Protected B: M; Reference: RCMP)
AWS Responsibility: Upon termination, all employees' access to systems and facilities is revoked immediately.

Requirement: Must maintain a list of personnel who have been authorized system or physical access to the data centre and its systems, and upon request provide a current copy to the agency. (Protected A: H; Protected B: M; Reference: CJIS)
AWS Responsibility: AWS maintains a list of employees with physical access, as granted through the process to receive physical access. Logical access lists are retained as part of the LDAP permission group structure and do not constitute a consolidated list for distribution. All access management, both physical and logical, is independently audited by multiple third-party auditors for several formal compliance programs.

Requirement: The contractor must enforce separation of job duties, require commercially reasonable non-disclosure agreements, and limit staff knowledge of customer data to that which is absolutely needed to perform the work. (Protected A: H; Protected B: M; Reference: RCMP)
AWS Responsibility: AWS rigorously employs the principles of least privilege, separation of roles and responsibilities, and disclosure
of information on a need-to-know basis.

Access Control Requirements

Requirement | Protected A | Protected B | Reference | AWS Responsibility

Requirement: A password minimum length will be 8 characters, and passwords will have 3 of the 4 complexity requirements: (Protected A: H; Protected B: M; Reference: CJIS)
• Upper case
• Lower case
• Special characters
• Numeric characters
AWS Responsibility: Access to the AWS infrastructure requires multi-factor authentication, including password complexity requirements. Customers can implement this requirement within their account, which AWS does not manage on their behalf.

Requirement: The following password rules are implemented: (Protected A: H; Protected B: M; Reference: CJIS/NPISAB)
• A password reuse restriction will be used
• Password lifespans will be implemented, and the time is configurable by the agency (standard 90 days)
• Not be a dictionary word or proper name
• Not be the same as the user ID
• Not be identical to the previous 6 passwords
• Must be transmitted and stored in an encrypted state
• Not be displayed when entered
• Automatic storage and caching of passwords by applications must be disabled
AWS Responsibility: Access to the AWS infrastructure requires multi-factor authentication, including password complexity and protection requirements. Customers can implement this requirement within their account, which AWS does not manage on their behalf.

Requirement: User lockout after failed login attempts will be implemented, and the count is configurable by the agency (default 5). (Protected A: M; Protected B: M; Reference: CJIS/NPISAB)
AWS Responsibility: Customers can implement this requirement within their account, which AWS does not manage on their behalf.

Requirement: Password reset will leverage automated email personal identity verification questions. (Protected A: M; Protected B: M; Reference: CACP ICT)
AWS Responsibility: Customers can implement this requirement within their account, which AWS does not manage on their behalf.

Requirement: Policy must exist to ensure passwords are not emailed or given over the phone. (Protected A: M; Protected B: M; Reference: CACP ICT)
AWS Responsibility: Customers can implement this requirement within their account, which AWS does not
manage on their behalf.

Requirement: When using a Personal Identification Number (PIN) as a standard authenticator, the following rules are implemented: (Protected A: H; Protected B: M; Reference: CJIS)
• Must be a minimum of 6 digits
• Have no repeating digits (e.g., 112233)
• Have no sequential patterns (e.g., 12345)
• Expire within a maximum of 365 days (unless the PIN is a second factor)
• Not be identical to the previous 3 PINs
• Must be transmitted and stored in an encrypted state
• Not be displayed when entered
AWS Responsibility: Customers can implement this requirement within their account, which AWS does not manage on their behalf.

Requirement: System activity timer that will redirect the user to the login page after a specific time that is configurable by the agency (session lock; default 30 minutes). (Protected A: M; Protected B: M; Reference: CJIS)
AWS Responsibility: Customers can implement this requirement within their account, which AWS does not manage on their behalf.

Requirement: The information system shall display an agency-configurable system use notification message. (Protected A: D; Protected B: H; Reference: CJIS)
AWS Responsibility: Customers can implement this requirement within their account, which AWS does not manage on their behalf.

Requirement: Continual monitoring and logging for the following events: (Protected A: D; Protected B: M; Reference: MITS)
• Successful and unsuccessful login attempts
• Successful and unsuccessful attempts to view/modify/delete permissions, files, directories, or system resources
• Successful and unsuccessful attempts to change account passwords
• Successful and unsuccessful attempts to view/modify/delete audit logs
AWS Responsibility: AWS maintains logging and monitoring requirements in accordance with a variety of standards and requirements, including ISO 27001, SOC, PCI DSS, FedRAMP, the US Department of Defense Cloud Computing Security Requirements Guide (DoD CC SRG), CJIS, and others covering these requirements.

Requirement: Utilize strong identification and authentication leveraging Public Key Infrastructure (PKI). (Protected A: D; Protected B: H; Reference: CACP ICT)
AWS Responsibility: Customers can implement this requirement within their account, which AWS does not manage on their behalf.

Document Revisions
Date
Description
May 2017 — First publication

Amazon Web Services: Risk and Compliance

Copyright © Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon.

Table of Contents
Abstract
Introduction
Shared responsibility model
Evaluating and integrating AWS controls
AWS risk and compliance program
AWS business risk management
Operational and business management
Control environment and automation
Controls assessment and continuous monitoring
AWS certifications, programs, reports, and third-party attestations
Cloud Security Alliance
Customer cloud compliance governance
Conclusion
Contributors
Further reading
Document Revisions
Notices

Abstract

Publication date: March 11, 2021 (see Document Revisions)

AWS serves a variety of customers, including those in regulated industries. Through our shared responsibility model, we enable customers to manage risk effectively and efficiently in the IT environment, and we provide assurance of effective risk management through our compliance with established, widely recognized frameworks and programs. This paper outlines the mechanisms that AWS has implemented to manage risk on the AWS side of the Shared Responsibility Model and the tools that customers can leverage to gain assurance that these mechanisms are being implemented
effectively.

Introduction

AWS and its customers share control over the IT environment; therefore, security is a shared responsibility. When it comes to managing security and compliance in the AWS Cloud, each party has distinct responsibilities. A customer's responsibility depends on which services they are using. However, in general, customers are responsible for building their IT environment in a manner that aligns with their specific security and compliance requirements. This paper provides more details about each party's security responsibilities and the ways customers can benefit from the AWS Risk and Compliance Program.

Shared responsibility model

Security and compliance are shared responsibilities between AWS and the customer. Depending on the services deployed, this shared model can help relieve the customer's operational burden. This is because AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The customer assumes responsibility for and management of the guest operating system (including updates and security patches) and other associated application software, in addition to the configuration of the AWS-provided security group firewall. We recommend that customers carefully consider the services they choose, because their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations. It is possible for customers to enhance their security and/or meet their more stringent compliance requirements by leveraging technology such as host-based firewalls, host-based intrusion detection and prevention, encryption, and key management. The nature of this shared responsibility also provides the flexibility and customer control that permits customers to deploy solutions that meet industry-specific
certification requirements. This shared responsibility model also extends to IT controls. Just as the responsibility to operate the IT environment is shared between AWS and its customers, the management, operation, and verification of IT controls is also a shared responsibility. AWS can help customers by managing those controls associated with the physical infrastructure deployed in the AWS environment. Customers can then use the AWS control and compliance documentation available to them to perform their control evaluation and verification procedures, as required. For examples of how responsibility for certain controls is shared between AWS and its customers, see the AWS Shared Responsibility Model.

Evaluating and integrating AWS controls

AWS provides a wide range of information about its IT control environment to customers through technical papers, reports, certifications, and other third-party attestations. This documentation helps customers understand the controls in place relevant to the AWS services they use and how those controls have been validated. This information also helps customers account for and validate that controls in their extended IT environment are operating effectively. Traditionally, internal and/or external auditors validate the design and operational effectiveness of controls through process walkthroughs and evidence evaluation. This type of direct observation and verification by the customer, or the customer's external auditor, is generally performed to validate controls in traditional on-premises deployments. In the case where service providers are used (such as AWS), customers can request and evaluate third-party attestations and certifications. These attestations and certifications can help assure the customer of the design and operating effectiveness of control objectives and controls validated by a qualified, independent third party. As a result, although some controls might be managed by AWS, the control environment can still be
a unified framework where customers can account for and verify that controls are operating effectively, accelerating the compliance review process. Third-party attestations and certifications of AWS provide customers with visibility and independent validation of the control environment. Such attestations and certifications may help relieve customers of the requirement to perform certain validation work themselves for their IT environment in the AWS Cloud.

AWS risk and compliance program

AWS has integrated a risk and compliance program throughout the organization. This program aims to manage risk in all phases of service design and deployment and to continually improve and reassess the organization's risk-related activities. The components of the AWS integrated risk and compliance program are discussed in greater detail in the following sections.

AWS business risk management

AWS has a business risk management (BRM) program that partners with AWS business units to provide the AWS Board of Directors and AWS senior leadership a holistic view of key risks across AWS. The BRM program demonstrates independent risk oversight over AWS functions. Specifically, the BRM program does the following:
• Performs risk assessments and risk monitoring of key AWS functional areas
• Identifies and drives remediation of risks
• Maintains a register of known risks
To drive the remediation of risks, the BRM program reports the results of its efforts, and escalates where necessary, to directors and vice presidents across the business to inform business decision-making.

Operational and business management

AWS uses a combination of weekly, monthly, and quarterly meetings and reports to, among other things, ensure communication of risks across all components of the risk management process. In addition, AWS implements an escalation process to provide management visibility into high-priority risks across the organization. These efforts, taken together,
help ensure that risk is managed consistently with the complexity of the AWS business model. In addition, through a cascading responsibility structure, vice presidents (business owners) are responsible for the oversight of their business. To this end, AWS conducts weekly meetings to review operational metrics and identify key trends and risks before they impact the business. Executive and senior leadership play important roles in establishing the AWS tone and core values. Every employee is provided with the company's Code of Business Conduct and Ethics, and employees complete periodic training. Compliance audits are performed so that employees understand and follow established policies. The AWS organizational structure provides a framework for planning, executing, and controlling business operations. The organizational structure includes roles and responsibilities to provide for adequate staffing, efficiency of operations, and the segregation of duties. Management has also established appropriate lines of reporting for key personnel. The company's hiring verification processes include validation of education, previous employment, and, in some cases, background checks as permitted by law and regulation, commensurate with the employee's position and level of access to AWS facilities. The company follows a structured onboarding process to familiarize new employees with Amazon tools, processes, systems, policies, and procedures.

Control environment and automation

AWS implements security controls as a foundational element to manage risk across the organization. The AWS control environment is comprised of the standards, processes, and structures that provide the basis for implementing a minimum set of security requirements across AWS. While the processes and standards included as part of the AWS control environment stand on their own, AWS also leverages aspects of Amazon's overall control environment. Leveraged tools
include:
• Tools used across all Amazon businesses, such as the tool that manages separation of duties
• Certain Amazon-wide business functions, such as legal, human resources, and finance
In instances where AWS leverages Amazon's overall control environment, the standards and processes governing these mechanisms are tailored specifically for the AWS business. This means that the expectations for their use and application within the AWS control environment may differ from the expectations for their use and application within the overall Amazon environment. The AWS control environment ultimately acts as the foundation for the secure delivery of AWS service offerings.

Control automation is a way for AWS to reduce human intervention in certain recurring processes comprising the AWS control environment. It is key to effective information security control implementation and the associated management of risks. Control automation seeks to proactively minimize potential inconsistencies in process execution that might arise due to the flawed nature of humans conducting a repetitive process. Through control automation, potential process deviations are eliminated. This provides increased levels of assurance that a control will be applied as designed. Engineering teams at AWS across security functions are responsible for engineering the AWS control environment to support increased levels of control automation wherever possible. Examples of automated controls at AWS include:
• Governance and Oversight: Policy versioning and approval
• Personnel Management: Automated training delivery, rapid employee termination
• Development and Configuration Management: Code deployment pipelines, code scanning, code backup, integrated deployment testing
• Identity and Access Management: Automated segregation of duties, access reviews, permissions management
• Monitoring and Logging: Automated log collection and correlation, alarming
• Physical Security: Automated processes related to AWS data centers, including hardware
management, data center security training, access alarming, and physical access management
• Scanning and Patch Management: Automated vulnerability scanning, patch management and deployment

Controls assessment and continuous monitoring

AWS implements a variety of activities prior to and after service deployment to further reduce risk within the AWS environment. These activities integrate security and compliance requirements during the design and development of each AWS service and then validate that services are operating securely after they are moved into production (launched). Risk management and compliance activities include two pre-launch activities and two post-launch activities.

The pre-launch activities are:

• AWS Application Security risk management review, to validate that security risks have been identified and mitigated
• Architecture readiness review, to help customers ensure alignment with compliance regimes

At the time of its deployment, a service will have gone through rigorous assessments against detailed security requirements to meet the AWS high bar for security.

The post-launch activities are:

• AWS Application Security ongoing review, to help ensure the service security posture is maintained
• Ongoing vulnerability management scanning

These control assessments and continuous monitoring give regulated customers the ability to confidently build compliant solutions on AWS services. For a list of services in scope for various compliance programs, see the AWS Services in Scope webpage.

AWS certifications, programs, reports, and third-party attestations

AWS regularly undergoes independent third-party attestation audits to provide assurance that control activities are operating as intended. More specifically, AWS is audited against a variety of global and regional security frameworks, dependent on region and industry. AWS participates in over 50 different audit programs. The
results of these audits are documented by the assessing body and made available for all AWS customers through AWS Artifact. AWS Artifact is a no-cost, self-service portal for on-demand access to AWS compliance reports. When new reports are released, they are made available in AWS Artifact, allowing customers to continuously monitor the security and compliance of AWS with immediate access to new reports.

Depending on a country’s or industry’s local regulatory or contractual requirements, AWS may also undergo audits directly with customers or governmental auditors. These audits provide additional oversight of the AWS control environment to ensure that customers have the tools to help themselves operate confidently, compliantly, and in a risk-based manner using AWS services. For more detailed information about the AWS certification programs, reports, and third-party attestations, visit the AWS Compliance Program webpage. You can also visit the AWS Services in Scope webpage for service-specific information.

Cloud Security Alliance

AWS participates in the voluntary Cloud Security Alliance (CSA) Security, Trust & Assurance Registry (STAR) Self-Assessment to document its compliance with CSA-published best practices. The CSA is “the world’s leading organization dedicated to defining and raising awareness of best practices to help ensure a secure cloud computing environment.” The CSA Consensus Assessments Initiative Questionnaire (CAIQ) provides a set of questions the CSA anticipates a cloud customer and/or a cloud auditor would ask of a cloud provider. It provides a series of security control and process questions which can then be used for a wide range of efforts, including cloud provider selection and security evaluation. There are two resources available to customers that document the alignment of AWS to the CSA CAIQ: the first is the CSA CAIQ Whitepaper, and the second is a more detailed control mapping to our SOC 2 controls, which is available via AWS Artifact. For more information about the AWS
participation in the CSA CAIQ, see the AWS CSA site.

Customer cloud compliance governance

AWS customers are responsible for maintaining adequate governance over their entire IT control environment, regardless of how or where IT is deployed. Leading practices include:

• Understanding the required compliance objectives and requirements (from relevant sources)
• Establishing a control environment that meets those objectives and requirements
• Understanding the validation required, based on the organization’s risk tolerance
• Verifying the operating effectiveness of their control environment

Deployment in the AWS Cloud gives enterprises different options to apply various types of controls and various verification methods. Strong customer compliance and governance may include the following basic approach:

1. Review the AWS Shared Responsibility Model, AWS Security Documentation, AWS compliance reports, and other information available from AWS, together with other customer-specific documentation. Try to understand as much of the entire IT environment as possible, and then document all compliance requirements in a comprehensive cloud control framework.
2. Design and implement control objectives to meet the enterprise compliance requirements as laid out in the AWS Shared Responsibility Model.
3. Identify and document controls owned by outside parties.
4. Verify that all control objectives are met and all key controls are designed and operating effectively.

Approaching compliance governance in this manner will help customers gain a better understanding of their control environment and will help clearly delineate the verification activities to be performed.

Conclusion

Providing highly secure and resilient infrastructure and services to our customers is a top priority for AWS. Our commitment to our customers is focused on working to continuously earn customer trust and ensure that customers maintain confidence in
operating their workloads securely on AWS. To achieve this, AWS has integrated risk and compliance mechanisms that include:

• The implementation of a wide array of security controls and automated tools
• Continuous monitoring and assessment of security controls to help ensure AWS operational effectiveness and strict adherence to compliance regimes
• Independent risk assessment by the AWS Business Risk Management program
• Operational and business management mechanisms

In addition, AWS regularly undergoes independent third-party audits to provide assurance that the control activities are operating as intended. These audits, along with the many certifications AWS has obtained, provide an additional level of validation of the AWS control environment that benefits customers. Taken together with customer-managed security controls, these efforts allow AWS to securely innovate on behalf of customers and help customers improve their security posture when building on AWS.

Contributors

Contributors to this document include:

• Marta Taggart, Senior Program Manager, AWS Security
• Bradley Roach, Risk Manager, AWS Business Risk Management
• Patrick Woods, Senior Security Specialist, AWS Security

Further reading

AWS provides customers with information regarding its security and control environment by:

• Obtaining and maintaining industry certifications and independent third-party attestations, as listed on the AWS Compliance Program page
• Consistently publishing information about the AWS security and control practices in whitepapers and web content, such as the AWS Security Blog
• Providing in-depth descriptions of how AWS uses automation at scale to manage its service infrastructure in The Amazon Builders’ Library
• Enhancing transparency by providing compliance certificates, reports, and other documentation directly to AWS customers via the self-service portal known as AWS Artifact
• Providing AWS Compliance Resources and
consistently documenting and publishing answers to queries on the AWS Compliance FAQs webpage
• Customers can also follow the design principles in the AWS Well-Architected Framework for guidance on how to approach the above-the-line configuration of their workloads built on AWS

Document Revisions

To be notified about updates to this whitepaper, subscribe to the RSS feed.

• Minor updates: Reviewed for technical accuracy (March 10, 2021)
• Whitepaper updated: This version includes substantial changes, including removing the reference information about compliance programs and schemes, because this information is available on the AWS Compliance Programs and AWS Services in Scope by Compliance Program webpages. Additionally, we removed the section covering common compliance questions because that information is now available on the AWS Compliance FAQs webpage (November 1, 2020)
• Initial publication: Amazon Web Services: Risk and Compliance whitepaper published (May 1, 2011)

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

AWS Risk and Compliance
Overview

This paper has been archived. (January 2017) For the latest information on risk and compliance, see Amazon Web Services: Risk and Compliance.

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS’s current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

• Introduction
• Shared Responsibility Environment
• Strong Compliance Governance
• Evaluating and Integrating AWS Controls
• AWS IT Control Information
• AWS Global Regions
• AWS Risk and Compliance Program
• Risk Management
• Control Environment
• Information Security
• AWS Contact
• Further Reading
• Document Revisions

Abstract

This paper provides information to help customers integrate AWS into their existing control framework, including a basic approach for evaluating AWS controls.

Introduction

AWS and its customers share control over the IT environment. AWS’ part in this shared responsibility includes providing its services on a highly secure and controlled platform and providing a wide array of security features customers can use. The customers’ responsibility includes configuring their IT environments in a secure and controlled manner for their purposes. While customers
don’t communicate their use and configurations to AWS, AWS does communicate its security and control environment relevant to customers. AWS does this by doing the following:

• Obtaining industry certifications and independent third-party attestations described in this document
• Publishing information about the AWS security and control practices in whitepapers and web site content
• Providing certificates, reports, and other documentation directly to AWS customers under NDA (as required)

For a more detailed description of AWS security, please see the AWS Security Center. For a more detailed description of AWS compliance, please see the AWS Compliance page. Additionally, the AWS Overview of Security Processes whitepaper covers AWS’ general security controls and service-specific security.

Shared Responsibility Environment

Moving IT infrastructure to AWS services creates a model of shared responsibility between the customer and AWS. This shared model can help relieve the customer’s operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The customer assumes responsibility for and management of the guest operating system (including updates and security patches), other associated application software, as well as the configuration of the AWS-provided security group firewall. Customers should carefully consider the services they choose, as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations. It is possible for customers to enhance security and/or meet their more stringent compliance requirements by leveraging technology such as host-based firewalls, host-based intrusion detection/prevention, encryption, and key management. The nature of this shared responsibility also provides the flexibility
and customer control that permits the deployment of solutions that meet industry-specific certification requirements.

This customer/AWS shared responsibility model also extends to IT controls. Just as the responsibility to operate the IT environment is shared between AWS and its customers, so is the management, operation, and verification of IT controls. AWS can help relieve the customer burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that may previously have been managed by the customer. As every customer is deployed differently in AWS, customers can take advantage of shifting management of certain IT controls to AWS, which results in a (new) distributed control environment. Customers can then use the AWS control and compliance documentation available to them (described in AWS Certifications and Third-Party Attestations) to perform their control evaluation and verification procedures, as required.

Strong Compliance Governance

As always, AWS customers are required to continue to maintain adequate governance over the entire IT control environment, regardless of how IT is deployed. Leading practices include an understanding of required compliance objectives and requirements (from relevant sources), establishment of a control environment that meets those objectives and requirements, an understanding of the validation required based on the organization’s risk tolerance, and verification of the operating effectiveness of their control environment. Deployment in the AWS cloud gives enterprises different options to apply various types of controls and various verification methods. Strong customer compliance and governance might include the following basic approach:

1. Review information available from AWS, together with other information, to understand as much of the entire IT environment as possible, and then document all compliance requirements.
2. Design and implement control objectives to meet the enterprise compliance requirements.
3. Identify and document controls owned by outside parties.
4. Verify that all control objectives are met and all key controls are designed and operating effectively.

Approaching compliance governance in this manner will help companies gain a better understanding of their control environment and will help clearly delineate the verification activities to be performed.

Evaluating and Integrating AWS Controls

AWS provides a wide range of information regarding its IT control environment to customers through whitepapers, reports, certifications, and other third-party attestations. This documentation assists customers in understanding the controls in place relevant to the AWS services they use and how those controls have been validated. This information also assists customers in their efforts to account for, and to validate that, controls in their extended IT environment are operating effectively.

Traditionally, the design and operating effectiveness of control objectives and controls are validated by internal and/or external auditors via process walkthroughs and evidence evaluation. Direct observation/verification by the customer or the customer’s external auditor is generally performed to validate controls. In the case where service providers such as AWS are used, companies request and evaluate third-party attestations and certifications in order to gain reasonable assurance of the design and operating effectiveness of control objectives and controls. As a result, although the customer’s key controls may be managed by AWS, the control environment can still be a unified framework where all controls are accounted for and are verified as operating effectively. Third-party attestations and certifications of AWS can not only provide a higher level of validation of the control environment, but may relieve customers of the requirement to perform certain validation work themselves for their IT environment in
the AWS cloud.

AWS IT Control Information

AWS provides IT control information to customers in the following ways:

Specific control definition. AWS customers are able to identify key controls managed by AWS. Key controls are critical to the customer’s control environment and require an external attestation of the operating effectiveness of these key controls in order to comply with compliance requirements, such as the annual financial audit. For this purpose, AWS publishes a wide range of specific IT controls in its Service Organization Controls 1 (SOC 1) Type II report. The SOC 1 report, formerly the Statement on Auditing Standards (SAS) No. 70 Service Organizations report, is a widely recognized auditing standard developed by the American Institute of Certified Public Accountants (AICPA). The SOC 1 audit is an in-depth audit of both the design and operating effectiveness of AWS’ defined control objectives and control activities (which include control objectives and control activities over the part of the infrastructure AWS manages). “Type II” refers to the fact that each of the controls described in the report is not only evaluated for adequacy of design, but is also tested for operating effectiveness by the external auditor. Because of the independence and competence of AWS’ external auditor, controls identified in the report should provide customers with a high level of confidence in AWS’ control environment.

AWS’ controls can be considered designed and operating effectively for many compliance purposes, including Sarbanes-Oxley (SOX) Section 404 financial statement audits. Leveraging SOC 1 Type II reports is also generally permitted by other external certifying bodies (e.g., ISO 27001 auditors may request a SOC 1 Type II report in order to complete their evaluations for customers). Other specific control activities relate to AWS’ Payment Card Industry (PCI) and Federal Information Security Management Act
(FISMA) compliance. AWS is compliant with FISMA Moderate standards and with the PCI Data Security Standard. These PCI and FISMA standards are very prescriptive and require independent validation that AWS adheres to the published standard.

General control standard compliance. If an AWS customer requires a broad set of control objectives to be met, evaluation of AWS’ industry certifications may be performed. With the AWS ISO 27001 certification, AWS complies with a broad, comprehensive security standard and follows best practices in maintaining a secure environment. With the PCI Data Security Standard (PCI DSS), AWS complies with a set of controls important to companies that handle credit card information. With AWS’ compliance with the FISMA standards, AWS complies with a wide range of specific controls required by US government agencies. Compliance with these general standards provides customers with in-depth information on the comprehensive nature of the controls and security processes in place, and can be considered when managing compliance.

AWS Global Regions

Data centers are built in clusters in various global regions, including: US East (Northern Virginia), US West (Oregon), US West (Northern California), AWS GovCloud (US) (Oregon), EU (Frankfurt), EU (Ireland), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), China (Beijing), and South America (Sao Paulo). For a complete list of regions, see the AWS Global Infrastructure page.

AWS Risk and Compliance Program

AWS provides information about its risk and compliance program to enable customers to incorporate AWS controls into their governance framework. This information can assist customers in documenting a complete control and governance framework with AWS included as an important part of that framework.

Risk Management

AWS management has developed a strategic business plan which includes risk identification and the implementation of
controls to mitigate or manage risks. AWS management re-evaluates the strategic business plan at least biannually. This process requires management to identify risks within its areas of responsibility and to implement appropriate measures designed to address those risks.

In addition, the AWS control environment is subject to various internal and external risk assessments. AWS’ Compliance and Security teams have established an information security framework and policies based on the Control Objectives for Information and related Technology (COBIT) framework, and have effectively integrated the ISO 27001 certifiable framework based on ISO 27002 controls, the American Institute of Certified Public Accountants (AICPA) Trust Services Principles, the PCI DSS v3.1, and the National Institute of Standards and Technology (NIST) Publication 800-53 Rev 3 (Recommended Security Controls for Federal Information Systems).

AWS maintains the security policy, provides security training to employees, and performs application security reviews. These reviews assess the confidentiality, integrity, and availability of data, as well as conformance to the information security policy. AWS Security regularly scans all Internet-facing service endpoint IP addresses for vulnerabilities (these scans do not include customer instances). AWS Security notifies the appropriate parties to remediate any identified vulnerabilities. In addition, external vulnerability threat assessments are performed regularly by independent security firms. Findings and recommendations resulting from these assessments are categorized and delivered to AWS leadership. These scans are done to help ensure the health and viability of the underlying AWS infrastructure and are not meant to replace the customer’s own vulnerability scans required to meet their specific compliance requirements. Customers can request permission to conduct scans of their cloud infrastructure as long as they
are limited to the customer’s instances and do not violate the AWS Acceptable Use Policy. Advance approval for these types of scans can be initiated by submitting a request via the AWS Vulnerability / Penetration Testing Request Form.

Control Environment

AWS manages a comprehensive control environment that includes policies, processes, and control activities that leverage various aspects of Amazon’s overall control environment. This control environment is in place for the secure delivery of AWS’ service offerings. The collective control environment encompasses the people, processes, and technology necessary to establish and maintain an environment that supports the operating effectiveness of AWS’ control framework. AWS has integrated applicable cloud-specific controls identified by leading cloud computing industry bodies into the AWS control framework. AWS continues to monitor these industry groups for ideas on which leading practices can be implemented to better assist customers with managing their control environment.

The control environment at Amazon begins at the highest level of the Company. Executive and senior leadership play important roles in establishing the Company’s tone and core values. Every employee is provided with the Company’s Code of Business Conduct and Ethics and completes periodic training. Compliance audits are performed so that employees understand and follow the established policies. The AWS organizational structure provides a framework for planning, executing, and controlling business operations. The organizational structure assigns roles and responsibilities to provide for adequate staffing, efficiency of operations, and the segregation of duties. Management has also established authority and appropriate lines of reporting for key personnel. Included as part of the Company’s hiring verification processes are education, previous employment, and in some cases background checks, as permitted by
law and regulation, for employees commensurate with the employee’s position and level of access to AWS facilities. The Company follows a structured onboarding process to familiarize new employees with Amazon tools, processes, systems, policies, and procedures.

Information Security

AWS has implemented a formal information security program designed to protect the confidentiality, integrity, and availability of customers’ systems and data. AWS publishes a security whitepaper, available on the public website, that addresses how AWS can help customers secure their data.

AWS Contact

Customers can request the reports and certifications produced by our third-party auditors, or can request more information about AWS compliance, by contacting AWS Sales and Business Development. The representative will route customers to the proper team, depending on the nature of the inquiry. For additional information on AWS compliance, see the AWS Compliance site or send questions directly to awscompliance@amazon.com.

Further Reading

For additional information, see the following sources:

• CSA Consensus Assessments Initiative Questionnaire
• AWS Certifications, Programs, Reports, and Third-Party Attestations
• AWS Answers to Key Compliance Questions

Document Revisions

• January 2017: Migrated to new template
• January 2016: First publication

AWS Security Best Practices

August 2016

This paper has been archived. For the latest technical content on security and compliance, see https://aws.amazon.com/architecture/security-identity-compliance/

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or
assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

• Introduction
• Know the AWS Shared Responsibility Model
• Understanding the AWS Secure Global Infrastructure
• Sharing Security Responsibility for AWS Services
• Using the Trusted Advisor Tool
• Define and Categorize Assets on AWS
• Design Your ISMS to Protect Your Assets on AWS
• Manage AWS Accounts, IAM Users, Groups, and Roles
• Strategies for Using Multiple AWS Accounts
• Managing IAM Users
• Managing IAM Groups
• Managing AWS Credentials
• Understanding Delegation Using IAM Roles and Temporary Security Credentials
• Managing OS-level Access to Amazon EC2 Instances
• Secure Your Data
• Resource Access Authorization
• Storing and Managing Encryption Keys in the Cloud
• Protecting Data at Rest
• Decommission Data and Media Securely
• Protect Data in Transit
• Secure Your Operating Systems and Applications
• Creating Custom AMIs
• Bootstrapping
• Managing Patches
• Controlling Security for Public AMIs
• Protecting Your System from Malware
• Mitigating Compromise and Abuse
• Using Additional Application Security Practices
• Secure Your Infrastructure
• Using Amazon Virtual Private Cloud (VPC)
• Using Security Zoning and Network Segmentation
• Strengthening Network Security
• Securing Periphery Systems: User Repositories, DNS, NTP
• Building Threat Protection Layers
• Test Security
• Managing Metrics and Improvement
• Mitigating and Protecting Against DoS & DDoS Attacks
• Manage Security Monitoring, Alerting, Audit Trail, and Incident Response
• Using Change Management Logs
• Managing Logs for Critical Transactions
• Protecting Log Information
• Logging Faults
• Conclusion
• Contributors
• Further Reading
• Document Revisions

Abstract

This whitepaper is intended for existing and potential customers who are designing the security infrastructure and configuration for applications running in Amazon Web Services (AWS). It provides security best practices that will help you define your Information Security Management System (ISMS) and build a set of security policies and processes for your organization so you can protect your data and assets in the AWS Cloud. The whitepaper also provides an overview of different security topics, such as identifying, categorizing, and protecting your assets on AWS; managing access to AWS resources using accounts, users, and groups; and suggesting ways you can secure your data, your operating systems and applications, and your overall infrastructure in the cloud. The paper is targeted at IT decision makers and security personnel and assumes that you are familiar with basic security concepts in the areas of networking, operating systems, data encryption, and operational controls.

Introduction

Information security is of paramount importance to Amazon Web Services (AWS) customers. Security is a core functional requirement that protects mission-critical information from accidental or deliberate theft, leakage, integrity compromise, and deletion. Under the AWS shared responsibility model, AWS provides a global secure infrastructure and foundation compute, storage, networking, and database services, as well as higher-level services. AWS provides a range of security services and features that AWS customers can use to secure their assets. AWS customers are responsible for protecting the confidentiality, integrity, and availability of their data in the cloud, and for meeting specific business requirements for information protection. For more information
on AWS’s security features, please read the Overview of Security Processes whitepaper.

This whitepaper describes best practices that you can leverage to build and define an Information Security Management System (ISMS), that is, a collection of information security policies and processes for your organization’s assets on AWS. For more information about ISMSs, see ISO 27001 at https://www.iso.org/standard/54534.html. Although it is not required to build an ISMS to use AWS, we think that this structured approach for managing information security, built on basic building blocks of a widely adopted global security approach, will help you improve your organization’s overall security posture.

We address the following topics:

• How security responsibilities are shared between AWS and you, the customer
• How to define and categorize your assets
• How to manage user access to your data using privileged accounts and groups
• Best practices for securing your data, operating systems, and network
• How monitoring and alerting can help you achieve your security objectives

This whitepaper discusses security best practices in these areas at a high level. (It does not provide “how to” configuration guidance. For service-specific configuration guidance, see the AWS Security Documentation.)

Know the AWS Shared Responsibility Model

Amazon Web Services provides a secure global infrastructure and services in the cloud. You can build your systems using AWS as the foundation and architect an ISMS that takes advantage of AWS features. To design an ISMS in AWS, you must first be familiar with the AWS shared responsibility model, which requires AWS and customers to work together towards security objectives. AWS provides secure infrastructure and services, while you, the customer, are responsible for secure operating systems, platforms, and data. To ensure a secure global infrastructure, AWS configures infrastructure components and provides services and
features you can use to enhance security, such as the Identity and Access Management (IAM) service, which you can use to manage users and user permissions in a subset of AWS services. To ensure secure services, AWS offers a shared responsibility model for each of the different types of services that we offer:
• Infrastructure services
• Container services
• Abstracted services

The shared responsibility model for infrastructure services, such as Amazon Elastic Compute Cloud (Amazon EC2), for example, specifies that AWS manages the security of the following assets:
• Facilities
• Physical security of hardware
• Network infrastructure
• Virtualization infrastructure

Consider AWS the owner of these assets for the purposes of your ISMS asset definition. Leverage these AWS controls and include them in your ISMS. In this Amazon EC2 example, you as the customer are responsible for the security of the following assets:
• Amazon Machine Images (AMIs)
• Operating systems
• Applications
• Data in transit
• Data at rest
• Data stores
• Credentials
• Policies and configuration

Specific services further delineate how responsibilities are shared between you and AWS. For more information, see https://aws.amazon.com/compliance/shared-responsibility-model/.

Understanding the AWS Secure Global Infrastructure

The AWS secure global infrastructure and services are managed by AWS and provide a trustworthy foundation for enterprise systems and individual applications. AWS establishes high standards for information security within the cloud and has a comprehensive and holistic set of control objectives, ranging from physical security through software acquisition and development to employee lifecycle management and security organization. The AWS secure global infrastructure and services are subject to regular third-party compliance audits. See the Amazon Web Services Risk and Compliance whitepaper for more information.

Using the IAM Service

The
IAM service is one component of the AWS secure global infrastructure that we discuss in this paper. With IAM, you can centrally manage users, security credentials such as passwords and access keys, and permissions policies that control which AWS services and resources users can access.

When you sign up for AWS, you create an AWS account, for which you have a user name (your email address) and a password. The user name and password let you log into the AWS Management Console, where you can use a browser-based interface to manage AWS resources. You can also create access keys (which consist of an access key ID and a secret access key) to use when you make programmatic calls to AWS using the command line interface (CLI), the AWS SDKs, or API calls.

IAM lets you create individual users within your AWS account and give them each their own user name, password, and access keys. Individual users can then log into the console using a URL that’s specific to your account. You can also create access keys for individual users so that they can make programmatic calls to access AWS resources. All charges for activities performed by your IAM users are billed to your AWS account. As a best practice, we recommend that you create an IAM user even for yourself, and that you do not use your AWS account credentials for everyday access to AWS. See Security Best Practices in IAM for more information.

Regions, Availability Zones, and Endpoints

You should also be familiar with regions, Availability Zones, and endpoints, which are components of the AWS secure global infrastructure. Use AWS regions to manage network latency and regulatory compliance. When you store data in a specific region, it is not replicated outside that region. It is your responsibility to replicate data across regions if your business needs require that. AWS provides information about the country and, where applicable, the state where each region resides; you are responsible for selecting the
region to store data with your compliance and network latency requirements in mind. Regions are designed with availability in mind and consist of at least two, often more, Availability Zones.

Availability Zones are designed for fault isolation. They are connected to multiple Internet Service Providers (ISPs) and different power grids. They are interconnected using high-speed links, so applications can rely on Local Area Network (LAN) connectivity for communication between Availability Zones within the same region. You are responsible for carefully selecting the Availability Zones where your systems will reside. Systems can span multiple Availability Zones, and we recommend that you design your systems to survive temporary or prolonged failure of an Availability Zone in the case of a disaster.

AWS provides web access to services through the AWS Management Console, and then through individual consoles for each service. AWS provides programmatic access to services through Application Programming Interfaces (APIs) and command line interfaces (CLIs). Service endpoints, which are managed by AWS, provide management (“backplane”) access.

Sharing Security Responsibility for AWS Services

AWS offers a variety of different infrastructure and platform services. For the purpose of understanding security and shared responsibility of these AWS services, let’s categorize them in three main categories: infrastructure, container, and abstracted services. Each category comes with a slightly different security ownership model, based on how you interact with and access the functionality.

• Infrastructure Services: This category includes compute services, such as Amazon EC2, and related services, such as Amazon Elastic Block Store (Amazon EBS), Auto Scaling, and Amazon Virtual Private Cloud (Amazon VPC). With these services, you can architect and build a cloud infrastructure using technologies similar to, and largely compatible with, on-premises
solutions. You control the operating system, and you configure and operate any identity management system that provides access to the user layer of the virtualization stack.

• Container Services: Services in this category typically run on separate Amazon EC2 or other infrastructure instances, but sometimes you don’t manage the operating system or the platform layer. AWS provides a managed service for these application “containers”. You are responsible for setting up and managing network controls, such as firewall rules, and for managing platform-level identity and access management separately from IAM. Examples of container services include Amazon Relational Database Service (Amazon RDS), Amazon Elastic MapReduce (Amazon EMR), and AWS Elastic Beanstalk.

• Abstracted Services: This category includes high-level storage, database, and messaging services, such as Amazon Simple Storage Service (Amazon S3), Amazon Glacier, Amazon DynamoDB, Amazon Simple Queue Service (Amazon SQS), and Amazon Simple Email Service (Amazon SES). These services abstract the platform or management layer on which you can build and operate cloud applications. You access the endpoints of these abstracted services using AWS APIs, and AWS manages the underlying service components or the operating system on which they reside. You share the underlying infrastructure, and abstracted services provide a multi-tenant platform which isolates your data in a secure fashion and provides for powerful integration with IAM.

Let’s dig a little deeper into the shared responsibility model for each service type.

Shared Responsibility Model for Infrastructure Services

Infrastructure services, such as Amazon EC2, Amazon EBS, and Amazon VPC, run on top of the AWS global infrastructure. They vary in terms of availability and durability objectives, but always operate within the specific region where they have been launched. You can build systems that meet availability objectives exceeding those of individual services from AWS by employing resilient components in multiple Availability Zones. Figure 1 depicts the building blocks for the shared responsibility model for infrastructure services.

Figure 1: Shared Responsibility Model for Infrastructure Services

Building on the AWS secure global infrastructure, you install and configure your operating systems and platforms in the AWS cloud, just as you would do on premises in your own data centers. Then you install your applications on your platform. Ultimately, your data resides in and is managed by your own applications. Unless you have more stringent business or compliance requirements, you don’t need to introduce additional layers of protection beyond those provided by the AWS secure global infrastructure.

For certain compliance requirements, you might require an additional layer of protection between the services from AWS and your operating systems and platforms where your applications and data reside. You can impose additional controls, such as protection of data at rest and protection of data in transit, or introduce a layer of opacity between services from AWS and your platform. The opacity layer can include data encryption, data integrity authentication, software and data signing, secure time-stamping, and more. AWS provides technologies you can implement to protect data at rest and in transit. See the Managing OS-level Access to Amazon EC2 Instances and Secure Your Data sections in this whitepaper for more information. Alternatively, you might introduce your own data protection tools or leverage AWS partner offerings.

Shared Responsibility Model for Container Services

The AWS shared responsibility model also applies to container services, such as Amazon RDS and Amazon EMR. For these services, AWS manages the underlying infrastructure and foundation services, the operating system, and the application platform. For example, Amazon RDS for Oracle is a managed database service in which AWS manages all the layers of the container, up to and including the Oracle database platform. For services such as Amazon RDS, the AWS platform provides data backup and recovery tools, but it is your responsibility to configure and use tools in relation to your business continuity and disaster
recovery (BC/DR) policy. For AWS container services, you are responsible for the data and for firewall rules for access to the container service. For example, Amazon RDS provides RDS security groups, and Amazon EMR allows you to manage firewall rules through Amazon EC2 security groups for Amazon EMR instances. Figure 2 depicts the shared responsibility model for container services.

Figure 2: Shared Responsibility Model for Container Services

Shared Responsibility Model for Abstracted Services

For abstracted services, such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and you access the endpoints to store and retrieve data. Amazon S3 and DynamoDB are tightly integrated with IAM. You are responsible for managing your data (including classifying your assets), and for using IAM tools to apply ACL-type permissions to individual resources at the platform level, or permissions based on user identity or user responsibility at the IAM user/group level. For some services, such as Amazon S3, you can also use platform-provided encryption of data at rest, or platform-provided HTTPS encapsulation for your payloads, for protecting your data in transit to and from the service. Figure 3 outlines the shared responsibility model for AWS abstracted services.

Figure 3: Shared Responsibility Model for Abstracted Services

Using the Trusted Advisor Tool

Some AWS Premium Support plans include access to the Trusted Advisor tool, which offers a one-view snapshot of your service and helps identify common security misconfigurations, suggestions for improving system performance, and underutilized resources. In this whitepaper we cover the security aspects of Trusted Advisor that apply to Amazon EC2. Trusted Advisor checks for compliance with the following security recommendations:

• Limited access to common administrative
ports to only a small subset of addresses. This includes ports 22 (SSH), 23 (Telnet), 3389 (RDP), and 5500 (VNC).
• Limited access to common database ports. This includes ports 1433 (MSSQL Server), 1434 (MSSQL Monitor), 3306 (MySQL), 1521 (Oracle), and 5432 (PostgreSQL).
• IAM is configured to help ensure secure access control of AWS resources.
• Multi-factor authentication (MFA) token is enabled to provide two-factor authentication for the root AWS account.

Define and Categorize Assets on AWS

Before you design your ISMS, identify all the information assets that you need to protect, and then devise a technically and financially viable solution for protecting them. It can be difficult to quantify every asset in financial terms, so you might find that using qualitative metrics (such as negligible/low/medium/high/very high) is a better option.

Assets fall into two categories:
• Essential elements, such as business information, processes, and activities
• Components that support the essential elements, such as hardware, software, personnel, sites, and partner organizations

Table 1 shows a sample matrix of assets.

Table 1: Sample asset matrix

Asset Name | Asset Owner | Asset Category | Dependencies
Customer-facing website applications | E-Commerce team | Essential | EC2, Elastic Load Balancing, Amazon RDS, development
Customer credit card data | E-Commerce team | Essential | PCI card holder environment, encryption, AWS PCI service
Personnel data | COO | Essential | Amazon RDS, encryption provider, dev and ops IT, third party
Data archive | COO | Essential | S3, S3 Glacier, dev and ops IT
HR management system | HR | Essential | EC2, S3, RDS, dev and ops IT, third party
AWS Direct Connect infrastructure | CIO | Network | Network ops, TelCo provider, AWS Direct Connect
Business intelligence platform | BI team | Software | EMR, Redshift, DynamoDB, S3, dev and ops
Business intelligence services | COO | Essential | BI infrastructure, BI analysis teams
LDAP directory | IT Security team | Security | EC2, IAM, custom
software, dev and ops
Windows AMI | Server team | Software | EC2, patch management software, dev and ops
Customer credentials | Compliance team | Security | Daily updates; archival infrastructure

Design Your ISMS to Protect Your Assets on AWS

After you have determined assets, categories, and costs, establish a standard for implementing, operating, monitoring, reviewing, maintaining, and improving your information security management system (ISMS) on AWS. Security requirements differ in every organization, depending on the following factors:
• Business needs and objectives
• Processes employed
• Size and structure of the organization

All these factors can change over time, so it is a good practice to build a cyclical process for managing all of this information. Table 2 suggests a phased approach to designing and building an ISMS in AWS. You might also find standard frameworks, such as ISO 27001, helpful with ISMS design and implementation.

Table 2: Phases of building an ISMS

Phase 1 – Define scope and boundaries: Define which regions, Availability Zones, instances, and AWS resources are “in scope”. If you exclude any component (for example, AWS manages facilities, so you can leave it out of your own management system), state explicitly what you have excluded and why.

Phase 2 – Define an ISMS policy: Include the following:
• Objectives that set the direction and principles for action regarding information security
• Legal, contractual, and regulatory requirements
• Risk management objectives for your organization
• How you will measure risk
• How management approves the plan

Phase 3 – Select a risk assessment methodology: Select a risk assessment methodology based on input from groups in your organization about the following factors:
• Business needs
• Information security requirements
• Information technology capabilities and use
• Legal requirements
• Regulatory responsibilities

Because public cloud infrastructure operates differently from
legacy environments, it is critical to set criteria for accepting risks and identifying the acceptable levels of risk (risk tolerances). We recommend starting with a risk assessment and leveraging automation as much as possible. AWS risk automation can narrow down the scope of resources required for risk management. There are several risk assessment methodologies, including OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation), ISO 31000:2009 Risk Management, ENISA (European Network and Information Security Agency) IRAM (Information Risk Analysis Methodology), and NIST (National Institute of Standards & Technology) Special Publication (SP) 800-30 rev. 1, Risk Management Guide.

Phase 4 – Identify risks: We recommend that you create a risk register by mapping all your assets to threats, and then, based on the vulnerability assessment and impact analysis results, creating a new risk matrix for each AWS environment. Here’s an example risk register:
• Assets
• Threats to those assets
• Vulnerabilities that could be exploited by those threats
• Consequences if those vulnerabilities are exploited

Phase 5 – Analyze and evaluate risks: Analyze and evaluate the risk by calculating business impact, likelihood and probability, and risk levels.

Phase 6 – Address risks: Select options for addressing risks. Options include applying security controls, accepting risks, avoiding risk, or transferring risks.

Phase 7 – Choose a security control framework: When you choose your security controls, use a framework, such as ISO 27002, NIST SP 800-53, COBIT (Control Objectives for Information and related Technology), or CSA CCM (Cloud Security Alliance Cloud Control Matrix). These frameworks comprise a set of reusable best practices and will help you to choose relevant controls.

Phase 8 – Get management approval: Even after you have implemented all controls, there will be residual risk. We recommend that you get approval from your business management that
acknowledges all residual risks, and approvals for implementing and operating the ISMS.

Phase 9 – Statement of applicability: Create a statement of applicability that includes the following information:
• Which controls you chose and why
• Which controls are in place
• Which controls you plan to put in place
• Which controls you excluded and why

Manage AWS Accounts, IAM Users, Groups, and Roles

Ensuring that users have appropriate levels of permissions to access the resources they need, but no more than that, is an important part of every ISMS. You can use IAM to help perform this function. You create IAM users under your AWS account and then assign them permissions directly, or assign them to groups to which you assign permissions. Here’s a little more detail about AWS accounts and IAM users:

• AWS account. This is the account that you create when you first sign up for AWS. Your AWS account represents a business relationship between you and AWS. You use your AWS account to manage your AWS resources and services. AWS accounts have root permissions to all AWS resources and services, so they are very powerful. Do not use root account credentials for day-to-day interactions with AWS. In some cases, your organization might choose to use several AWS accounts, one for each major department, for example, and then create IAM users within each of the AWS accounts for the appropriate people and resources.

• IAM users. With IAM, you can create multiple users, each with individual security credentials, all controlled under a single AWS account. IAM users can be a person, service, or application that needs access to your AWS resources through the management console, CLI, or directly via APIs. Best practice is to create individual IAM users for each individual that needs to access services and resources in your AWS account. You can create fine-grained permissions to resources under your AWS account, apply them to groups you create, and then assign users to
those groups. This best practice helps ensure users have least privilege to accomplish tasks.

Strategies for Using Multiple AWS Accounts

Design your AWS account strategy to maximize security and follow your business and governance requirements. Table 3 discusses possible strategies.

Table 3: AWS account strategies

Business Requirement | Proposed Design | Comments
Centralized security management | Single AWS account | Centralize information security management and minimize overhead.
Separation of production, development, and testing environments | Three AWS accounts | Create one AWS account for production services, one for development, and one for testing.
Multiple autonomous departments | Multiple AWS accounts | Create separate AWS accounts for each autonomous part of the organization. You can assign permissions and policies under each account.
Centralized security management with multiple autonomous independent projects | Multiple AWS accounts | Create a single AWS account for common project resources (such as DNS services, Active Directory, CMS, etc.). Then create separate AWS accounts per project. You can assign permissions and policies under each project account and grant access to resources across accounts.

You can configure a consolidated billing relationship across multiple accounts to ease the complexity of managing a different bill for each account and leverage economies of scale. When you use billing consolidation, the resources and credentials are not shared between accounts.

Managing IAM Users

IAM users with the appropriate level of permissions can create new IAM users, or manage and delete existing ones. This highly privileged IAM user can create a distinct IAM user for each individual, service, or application within your organization that manages AWS configuration or accesses AWS resources directly. We strongly discourage the use of shared user identities, where multiple entities share
the same credentials.

Managing IAM Groups

IAM groups are collections of IAM users in one AWS account. You can create IAM groups on a functional, organizational, or geographic basis, or by project, or on any other basis where IAM users need to access similar AWS resources to do their jobs. You can provide each IAM group with permissions to access AWS resources by assigning one or more IAM policies. All policies assigned to an IAM group are inherited by the IAM users who are members of the group.

For example, let’s assume that IAM user John is responsible for backups within an organization and needs to access objects in the Amazon S3 bucket called Archives. You can give John permissions directly so he can access the Archives bucket. But then your organization places Sally and Betty on the same team as John. While you can assign user permissions individually to John, Sally, and Betty to give them access to the Archives bucket, assigning the permissions to a group and placing John, Sally, and Betty in that group will be easier to manage and maintain. If additional users require the same access, you can give it to them by adding them to the group. When a user no longer needs access to a resource, you can remove them from the groups that provide access to that resource.

IAM groups are a powerful tool for managing access to AWS resources. Even if you only have one user who requires access to a specific resource, as a best practice you should identify or create a new AWS group for that access, and provision user access via group membership, as well as permissions and policies assigned at the group level.

Managing AWS Credentials

Each AWS account or IAM user is a unique identity and has unique long-term credentials. There are two primary types of credentials associated with these identities: (1) those used for sign-in to the AWS Management Console and AWS portal pages, and (2) those used for programmatic access to the AWS APIs. Table 4
describes the two types of sign-in credentials.

Table 4: Sign-in credentials

Username/Password: User names for AWS accounts are always email addresses. IAM user names allow for more flexibility. Your AWS account password can be anything you define. IAM user passwords can be forced to comply with a policy you define (that is, you can require minimum password length or the use of non-alphanumeric characters).

Multi-factor authentication (MFA): AWS multi-factor authentication (MFA) provides an extra level of security for sign-in credentials. With MFA enabled, when users sign in to an AWS website, they are prompted for their user name and password (the first factor, what they know), as well as for an authentication code from their MFA device (the second factor, what they have). You can also require MFA for users to delete S3 objects. We recommend you activate MFA for your AWS account and your IAM users to prevent unauthorized access to your AWS environment. Currently, AWS supports Gemalto hardware MFA devices, as well as virtual MFA devices in the form of smartphone applications.

Table 5 describes the types of credentials used for programmatic access to APIs.

Table 5: API access credentials

Access keys: Access keys are used to digitally sign API calls made to AWS services. Each access key credential is comprised of an access key ID and a secret key. The secret key portion must be secured by the AWS account holder or the IAM user to whom it is assigned. Users can have two sets of active access keys at any one time. As a best practice, users should rotate their access keys on a regular basis.

MFA for API calls: Multi-factor authentication (MFA)-protected API access requires IAM users to enter a valid MFA code before they can use certain functions (APIs). Policies you create in IAM determine which APIs require MFA. Because the AWS Management Console calls AWS
service APIs, you can enforce MFA on APIs whether access is through the console or via APIs.

Understanding Delegation Using IAM Roles and Temporary Security Credentials

There are scenarios in which you want to delegate access to users or services that don’t normally have access to your AWS resources. Table 6 below outlines common use cases for delegating such access.

Table 6: Common delegation use cases

Applications running on Amazon EC2 instances that need to access AWS resources: Applications that run on an Amazon EC2 instance and that need access to AWS resources, such as Amazon S3 buckets or an Amazon DynamoDB table, must have security credentials in order to make programmatic requests to AWS. Developers might distribute their credentials to each instance, and applications can then use those credentials to access resources, but distributing long-term credentials to each instance is challenging to manage and a potential security risk.

Cross-account access: To manage access to resources, you might have multiple AWS accounts, for example, to isolate a development environment from a production environment. However, users from one account might need to access resources in the other account, such as promoting an update from the development environment to the production environment. Although users who work in both accounts could have a separate identity in each account, managing credentials for multiple accounts makes identity management difficult.

Identity federation: Users might already have identities outside of AWS, such as in your corporate directory. However, those users might need to work with AWS resources (or work with applications that access those resources). If so, these users also need AWS security credentials in order to make requests to AWS.

IAM roles and temporary security credentials address these use cases. An IAM role lets you define a set of permissions to access the
resources that a user or service needs, but the permissions are not attached to a specific IAM user or group. Instead, IAM users, mobile and EC2-based applications, or AWS services (like Amazon EC2) can programmatically assume a role. Assuming the role returns temporary security credentials that the user or application can use to make programmatic requests to AWS. These temporary security credentials have a configurable expiration and are automatically rotated. Using IAM roles and temporary security credentials means you don’t always have to manage long-term credentials and IAM users for each entity that requires access to a resource.

IAM Roles for Amazon EC2

IAM Roles for Amazon EC2 is a specific implementation of IAM roles that addresses the first use case in Table 6. In the following figure, a developer is running an application on an Amazon EC2 instance that requires access to the Amazon S3 bucket named photos. An administrator creates the Get-pics role. The role includes policies that grant read permissions for the bucket and that allow the developer to launch the role with an Amazon EC2 instance. When the application runs on the instance, it can access the photos bucket by using the role’s temporary credentials. The administrator doesn’t have to grant the developer permission to access the photos bucket, and the developer never has to share credentials.

Figure 4: How roles for EC2 work

1. An administrator uses IAM to create the Get-pics role. In the role, the administrator uses a policy that specifies that only Amazon EC2 instances can assume the role, and that specifies only read permissions for the photos bucket.
2. A developer launches an Amazon EC2 instance and associates the Get-pics role with that instance.
3. When the application runs, it retrieves credentials from the instance metadata on the Amazon EC2 instance.
4. Using the role credentials, the application accesses the photos bucket with read-only permissions.
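To make step 1 concrete, the sketch below shows what the two policy documents behind a role like Get-pics could look like: a trust policy that lets only the EC2 service assume the role, and a permissions policy granting read-only access to the photos bucket. The role and bucket names follow the example above; everything else is a generic illustration expressed as Python dictionaries serialized to IAM's JSON policy format, not code from the whitepaper itself.

```python
import json

# Trust policy (who may assume the role): only the EC2 service,
# matching step 1 of the example above.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy (what the role may do): read-only access to the
# "photos" bucket and the objects inside it.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::photos",
            "arn:aws:s3:::photos/*",
        ],
    }],
}

def render(policy):
    """Serialize a policy document to the JSON form the IAM API accepts."""
    return json.dumps(policy, indent=2)

if __name__ == "__main__":
    print(render(trust_policy))
    print(render(permissions_policy))
```

An administrator would attach documents like these when creating the role; because the application on the instance obtains temporary credentials scoped by the permissions policy, no long-term access keys ever need to be stored on the instance.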
Cross-Account Access

You can use IAM roles to address the second use case in Table 6 by enabling IAM users from another AWS account to access resources within your AWS account. This process is referred to as cross-account access. Cross-account access lets you share access to your resources with users in other AWS accounts. To establish cross-account access, in the trusting account (Account A) you create an IAM policy that grants the trusted account (Account B) access to specific resources. Account B can then delegate this access to its IAM users. Account B cannot delegate more access to its IAM users than the permissions it has been granted by Account A.

Identity Federation

You can use IAM roles to address the third use case in Table 6 by creating an identity broker that sits between your corporate users and your AWS resources to manage the authentication and authorization process without needing to re-create all your users as IAM users in AWS.

Figure 5: AWS identity federation with temporary security credentials

1. The enterprise user accesses the identity broker application.
2. The identity broker application authenticates the users against the corporate identity store.
3. The identity broker application has permissions to access the AWS Security Token Service (STS) to request temporary security credentials.
4. Enterprise users can get a temporary URL that gives them access to the AWS APIs or the Management Console.

A sample identity broker application for use with Microsoft Active Directory is provided by AWS.

Managing OS-level Access to Amazon EC2 Instances

The previous section describes the ways in which you can manage access to resources that require authentication to AWS services. However, in order to access the operating system on your EC2 instances, you need a different set of credentials. In the shared responsibility model, you own the operating system credentials, but AWS helps you bootstrap the initial
access to the operating system. When you launch a new Amazon EC2 instance from a standard AMI, you can access that instance using secure remote system access protocols such as Secure Shell (SSH) or the Windows Remote Desktop Protocol (RDP). You must successfully authenticate at the operating-system level before you can access and configure the Amazon EC2 instance to your requirements. After you have authenticated and have remote access into the Amazon EC2 instance, you can set up the operating system authentication mechanisms you want, which might include X.509 certificate authentication, Microsoft Active Directory, or local operating system accounts.

To enable authentication to the EC2 instance, AWS provides asymmetric key pairs, known as Amazon EC2 key pairs. These are industry-standard RSA key pairs. Each user can have multiple Amazon EC2 key pairs and can launch new instances using different key pairs. EC2 key pairs are not related to the AWS account or IAM user credentials discussed previously. Those credentials control access to other AWS services; EC2 key pairs control access only to your specific instance.

You can choose to generate your own Amazon EC2 key pairs using industry-standard tools like OpenSSL. You generate the key pair in a secure and trusted environment, and only the public key of the key pair is imported into AWS; you store the private key securely. We advise using a high-quality random number generator if you take this path.

You can choose to have Amazon EC2 key pairs generated by AWS. In this case, both the private and public key of the RSA key pair are presented to you when you first create the instance. You must download and securely store the private key of the Amazon EC2 key pair. AWS does not store the private key; if it is lost, you must generate a new key pair.
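As a sketch of the bring-your-own-key-pair path, the commands below generate an RSA key pair locally with ssh-keygen (an alternative to the OpenSSL tooling mentioned above). The file name my-ec2-key is arbitrary, and the import command is shown only as a comment because it requires configured AWS credentials.

```shell
# Generate a 2048-bit RSA key pair locally; the private key never leaves this machine.
ssh-keygen -t rsa -b 2048 -m PEM -f my-ec2-key -N ""

# Only the public half would be imported into AWS (requires AWS CLI credentials):
# aws ec2 import-key-pair --key-name my-ec2-key --public-key-material fileb://my-ec2-key.pub
```

This produces my-ec2-key (the private key, to be stored securely) and my-ec2-key.pub (the public key in OpenSSH format).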
For Amazon EC2 Linux instances using the cloud-init service, when a new instance from a standard AWS AMI is launched, the public key of the Amazon EC2 key pair is appended to the initial operating system user's ~/.ssh/authorized_keys file. That user can then use an SSH client to connect to the Amazon EC2 Linux instance by configuring the client to use the correct Amazon EC2 instance user name as its identity (for example, ec2-user) and providing the private key file for user authentication.

For Amazon EC2 Windows instances using the ec2config service, when a new instance from a standard AWS AMI is launched, the ec2config service sets a new random Administrator password for the instance and encrypts it using the corresponding Amazon EC2 key pair's public key. The user can get the Windows instance password by using the AWS Management Console or command-line tools and by providing the corresponding Amazon EC2 private key to decrypt the password. This password, along with the default Administrative account for the Amazon EC2 instance, can be used to authenticate to the Windows instance.

AWS provides a set of flexible and practical tools for managing Amazon EC2 keys and providing industry-standard authentication into newly launched Amazon EC2 instances. If you have higher security requirements, you can implement alternative authentication mechanisms, including LDAP or Active Directory authentication, and disable Amazon EC2 key pair authentication.

Secure Your Data

This section discusses protecting data at rest and in transit on the AWS platform. We assume that you have already identified and classified your assets and established protection objectives for them based on their risk profiles.

Resource Access Authorization

After a user or IAM role has been authenticated, they can access resources to which they are authorized. You provide resource authorization using resource policies or capability policies, depending on whether you want the user to have control over the resources or whether you want to override individual user control.

• Resource policies are
appropriate in cases where the user creates resources and then wants to allow other users to access those resources. In this model, the policy is attached directly to the resource and describes who can do what with the resource. The user is in control of the resource. You can provide an IAM user with explicit access to a resource. The root AWS account always has access to manage resource policies and is the owner of all resources created in that account. Alternatively, you can grant users explicit access to manage permissions on a resource.

• Capability policies (referred to in the IAM documentation as "user-based permissions") are often used to enforce company-wide access policies. Capability policies are assigned to an IAM user, either directly or indirectly using an IAM group. They can also be assigned to a role that will be assumed at run time. Capability policies define what capabilities (actions) the user is allowed or denied to perform. They can override resource-based policy permissions by explicitly denying them.

• IAM policies can be used to restrict access to a specific source IP address range, or to specific days and times of the day, as well as based on other conditions.

• Resource policies and capability policies are cumulative in nature: an individual user's effective permissions are the union of the resource's policies and the capability permissions granted directly or through group membership.

Storing and Managing Encryption Keys in the Cloud

Security measures that rely on encryption require keys. In the cloud, as in an on-premises system, it is essential to keep your keys secure. You can use existing processes to manage encryption keys in the cloud, or you can leverage server-side encryption with AWS key management and storage capabilities. If you decide to use your own key management processes, you can use different approaches to store and protect key material. We strongly recommend that you
store keys in tamper-proof storage, such as Hardware Security Modules (HSMs). Amazon Web Services provides an HSM service in the cloud, known as AWS CloudHSM. Alternatively, you can use HSMs that store keys on premises and access them over secure links, such as IPSec virtual private networks (VPNs) to Amazon VPC, or AWS Direct Connect with IPSec.

You can use on-premises HSMs or CloudHSM to support a variety of use cases and applications, such as database encryption, Digital Rights Management (DRM), and Public Key Infrastructure (PKI), including authentication and authorization, document signing, and transaction processing. CloudHSM currently uses Luna SA HSMs from SafeNet. The Luna SA is designed to meet Federal Information Processing Standard (FIPS) 140-2 and Common Criteria EAL4+ standards, and supports a variety of industry-standard cryptographic algorithms.

When you sign up for CloudHSM, you receive dedicated single-tenant access to CloudHSM appliances. Each appliance appears as a resource in your Amazon VPC. You, not AWS, initialize and manage the cryptographic domain of the CloudHSM. The cryptographic domain is a logical and physical security boundary that restricts access to your keys. Only you can control your keys and the operations performed by the CloudHSM. AWS administrators manage, maintain, and monitor the health of the CloudHSM appliance, but do not have access to the cryptographic domain. After you initialize the cryptographic domain, you can configure clients on your EC2 instances that allow applications to use the APIs provided by CloudHSM. Your applications can use the standard APIs supported by the CloudHSM, such as PKCS#11, MS CAPI, and Java JCA/JCE (Java Cryptography Architecture/Java Cryptography Extensions). The CloudHSM client provides the APIs to your applications and implements each API call by connecting to the CloudHSM appliance using a mutually authenticated SSL connection. You can implement CloudHSMs in multiple
Availability Zones with replication between them to provide for high availability and storage resilience.

Protecting Data at Rest

For regulatory or business reasons, you might want to further protect your data at rest stored in Amazon S3, on Amazon EBS, in Amazon RDS, or in other services from AWS. Table 7 lists concerns to consider when you are implementing protection of data at rest on AWS.

Table 7: Threats to data at rest

Concern: Accidental information disclosure
Recommended protection approach: Designate data as confidential and limit the number of users who can access it. Use AWS permissions to manage access to resources for services such as Amazon S3. Use encryption to protect confidential data on Amazon EBS or Amazon RDS.
Strategies: Permissions; file, partition, volume, or application-level encryption.

Concern: Data integrity compromise
Recommended protection approach: To ensure that data integrity is not compromised through deliberate or accidental modification, use resource permissions to limit the scope of users who can modify the data. Even with resource permissions, accidental deletion by a privileged user is still a threat (including a potential attack by a Trojan using the privileged user's credentials), which illustrates the importance of the principle of least privilege. Perform data integrity checks, such as Message Authentication Codes (SHA-1/SHA-2), Hashed Message Authentication Codes (HMACs), digital signatures, or authenticated encryption (AES-GCM), to detect data integrity compromise. If you detect data compromise, restore the data from backup or, in the case of Amazon S3, from a previous object version.
Strategies: Permissions; data integrity checks (MAC/HMAC/digital signatures/authenticated encryption); backup; versioning (Amazon S3).

Concern: Accidental deletion
Recommended protection approach: Using the correct permissions and the rule of least privilege is the best protection against accidental or malicious deletion. For services such as
Amazon S3, you can use MFA Delete to require multi-factor authentication to delete an object, limiting access to Amazon S3 objects to privileged users. If you detect data compromise, restore the data from backup or, in the case of Amazon S3, from a previous object version.
Strategies: Permissions; backup; versioning (Amazon S3); MFA Delete (Amazon S3).

Concern: System, infrastructure, hardware, or software availability
Recommended protection approach: In the case of a system failure or a natural disaster, restore your data from backup or from replicas. Some services, such as Amazon S3 and Amazon DynamoDB, provide automatic data replication between multiple Availability Zones within a region. Other services require you to configure replication or backups.
Strategies: Backup; replication.

Analyze the threat landscape that applies to you and employ the relevant protection techniques as outlined in Table 7. The following sections describe how you can configure different services from AWS to protect data at rest.

Protecting Data at Rest on Amazon S3

Amazon S3 provides a number of security features for protection of data at rest, which you can use or not depending on your threat profile. Table 8 summarizes these features.

Table 8: Amazon S3 features for protecting data at rest

Permissions: Use bucket-level or object-level permissions alongside IAM policies to protect resources from unauthorized access and to prevent information disclosure, data integrity compromise, or deletion.

Versioning: Amazon S3 supports object versions. Versioning is disabled by default. Enable versioning to store a new version for every modified or deleted object, from which you can restore compromised objects if necessary.

Replication: Amazon S3 replicates each object across all Availability Zones within the respective region. Replication can provide data and service availability in the case of system failure, but provides no protection against accidental deletion or data
integrity compromise – it replicates changes across all Availability Zones where it stores copies. Amazon S3 offers standard redundancy and reduced redundancy options, which have different durability objectives and price points.

Backup: Amazon S3 supports data replication and versioning instead of automatic backups. You can, however, use application-level technologies to back up data stored in Amazon S3 to other AWS regions or to on-premises backup systems.

Encryption – server side: Amazon S3 supports server-side encryption of user data. Server-side encryption is transparent to the end user. AWS generates a unique encryption key for each object and then encrypts the object using AES-256. The encryption key is then itself encrypted using AES-256 with a master key that is stored in a secure location. The master key is rotated on a regular basis.

Encryption – client side: With client-side encryption, you create and manage your own encryption keys. Keys you create are not exported to AWS in clear text. Your applications encrypt data before submitting it to Amazon S3 and decrypt it after receiving it from Amazon S3. Data is stored in an encrypted form, with keys and algorithms known only to you. While you can use any encryption algorithm and either symmetric or asymmetric keys to encrypt the data, the AWS-provided Java SDK offers Amazon S3 client-side encryption features. See Further Reading for more information.

Protecting Data at Rest on Amazon EBS

Amazon EBS is the AWS abstract block storage service. You receive each Amazon EBS volume in raw, unformatted mode, as if it were a new hard disk. You can partition the Amazon EBS volume, create software RAID arrays, format the partitions with any file system you choose, and ultimately protect the data on the Amazon EBS volume. All of these decisions and operations on the Amazon EBS volume are opaque to AWS operations. You can attach Amazon EBS volumes to Amazon EC2 instances. Table 9 summarizes features for protecting Amazon EBS data at rest with the
operating system running on an Amazon EC2 instance.

Table 9: Amazon EBS features for protecting data at rest

Replication: Each Amazon EBS volume is stored as a file, and AWS creates two copies of the EBS volume for redundancy. Both copies reside in the same Availability Zone, however, so while Amazon EBS replication can survive hardware failure, it is not suitable as an availability tool for prolonged outages or for disaster recovery purposes. We recommend that you replicate data at the application level and/or create backups.

Backup: Amazon EBS provides snapshots that capture the data stored on an Amazon EBS volume at a specific point in time. If the volume is corrupt (for example, due to system failure) or data from it is deleted, you can restore the volume from snapshots. Amazon EBS snapshots are AWS objects to which IAM users, groups, and roles can be assigned permissions, so that only authorized users can access Amazon EBS backups.

Encryption – Microsoft Windows EFS: If you are running Microsoft Windows Server on AWS and you require an additional level of data confidentiality, you can implement Encrypting File System (EFS) to further protect sensitive data stored on system or data partitions. EFS is an extension to the NTFS file system that provides transparent file and folder encryption and integrates with Windows and Active Directory key management facilities and PKI. You can manage your own keys on EFS.

Encryption – Microsoft Windows BitLocker: BitLocker is a volume (or partition, in the case of a single drive) encryption solution included in Windows Server 2008 and later operating systems. BitLocker uses AES 128- and 256-bit encryption. By default, BitLocker requires a Trusted Platform Module (TPM) to store keys; this is not supported on Amazon EC2. However, you can protect EBS volumes using BitLocker if you configure it to use a password.

Encryption – Linux dm-crypt: On Linux instances
running kernel versions 2.6 and later, you can use dm-crypt to configure transparent data encryption on Amazon EBS volumes and swap space. You can use various ciphers, as well as Linux Unified Key Setup (LUKS) for key management.

Encryption – TrueCrypt: TrueCrypt is a third-party tool that offers transparent encryption of data at rest on Amazon EBS volumes. TrueCrypt supports both Microsoft Windows and Linux operating systems.

Encryption and integrity authentication – SafeNet ProtectV: SafeNet ProtectV is a third-party offering that allows for full disk encryption of Amazon EBS volumes and pre-boot authentication of AMIs. SafeNet ProtectV provides data confidentiality and data integrity authentication for data and the underlying operating system.

Protecting Data at Rest on Amazon RDS

Amazon RDS leverages the same secure infrastructure as Amazon EC2. You can use the Amazon RDS service without additional protection, but if you require encryption or data integrity authentication of data at rest for compliance or other purposes, you can add protection at the application layer, or at the platform layer using SQL cryptographic functions.

You could add protection at the application layer, for example, by using a built-in encryption function that encrypts all sensitive database fields, using an application key, before storing them in the database. The application can manage keys by using symmetric encryption with PKI infrastructure or other asymmetric key techniques to provide for a master encryption key.

You could add protection at the platform layer using MySQL cryptographic functions, which can take the form of a statement like the following:

INSERT INTO Customers (CustomerFirstName, CustomerLastName)
VALUES (AES_ENCRYPT('John', @key), AES_ENCRYPT('Smith', @key));

Platform-level encryption keys would be managed at the application level, like application-level encryption keys. Table
10 summarizes Amazon RDS platform-level protection options.

Table 10: Amazon RDS platform-level data protection at rest

MySQL: MySQL cryptographic functions include encryption, hashing, and compression. For more information, see https://dev.mysql.com/doc/refman/5.5/en/encryption-functions.html.

Oracle: Oracle Transparent Data Encryption is supported on Amazon RDS for Oracle Enterprise Edition under the Bring Your Own License (BYOL) model.

Microsoft SQL: Microsoft Transact-SQL data protection functions include encryption, signing, and hashing. For more information, see http://msdn.microsoft.com/en-us/library/ms173744.

Note that SQL range queries are no longer applicable to the encrypted portion of the data. For example, this query would not return the expected results for names like "John", "Jonathan", and "Joan" if the contents of column CustomerFirstName are encrypted at the application or platform layer:

SELECT CustomerFirstName, CustomerLastName FROM Customers
WHERE CustomerFirstName LIKE 'Jo%';

Direct comparisons, such as the following, would work and return the expected result for all fields where CustomerFirstName matches 'John' exactly:

SELECT CustomerFirstName, CustomerLastName FROM Customers
WHERE CustomerFirstName = AES_ENCRYPT('John', @key);

Range queries would also work on fields that are not encrypted. For example, a Date field in a table could be left unencrypted so that you could use it in range queries.

One-way functions are a good way to obfuscate personal identifiers, such as social security numbers or equivalent personal IDs, where they are used as unique identifiers. While you can encrypt personal identifiers, and decrypt them at the application or platform layer before using them, it is more convenient to use a one-way function, such as a keyed HMAC-SHA1, to convert the personal identifier to a fixed-length hash value. The personal identifier is still unique, because collisions in commercial HMACs are extremely rare. The HMAC is not reversible to the original personal identifier, however, so you cannot track the data back to the original individual unless you know the original personal ID and process it via the same keyed HMAC function.
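The keyed-HMAC approach above can be sketched in a few lines with Python's standard hmac module. The key and the sample identifier below are made up for illustration; in practice the key would live in your key management system, and the resulting token has a fixed length regardless of the input.

```python
import hmac
import hashlib

# Hypothetical application-managed key; in practice this comes from your key store.
key = b"example-hmac-key"

def pseudonymize(personal_id: str) -> str:
    """Convert a personal identifier to a fixed-length, non-reversible token."""
    return hmac.new(key, personal_id.encode("utf-8"), hashlib.sha1).hexdigest()

# Sample SSN-like value (a well-known dummy number, not a real SSN).
token = pseudonymize("078-05-1120")
print(token, len(token))
```

The same input and key always yield the same token, so the token can still serve as a unique join key, while a different key (or a different identifier) yields an unrelated token.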
In all regions, Amazon RDS supports Transparent Data Encryption and Native Network Encryption, both of which are components of the Advanced Security option for Oracle Database 11g Enterprise Edition. Oracle Database 11g Enterprise Edition is available on Amazon RDS for Oracle under the Bring Your Own License (BYOL) model. There is no additional charge to use these features.

Oracle Transparent Data Encryption encrypts data before it is written to storage and decrypts data when it is read from storage. With Oracle Transparent Data Encryption, you can encrypt tablespaces or specific table columns using industry-standard encryption algorithms, such as Advanced Encryption Standard (AES) and Triple Data Encryption Standard (Triple DES).

Protecting Data at Rest on Amazon S3 Glacier

Data at rest stored in Amazon S3 Glacier is automatically server-side encrypted using 256-bit Advanced Encryption Standard (AES-256), with keys maintained by AWS. The encryption key is then itself encrypted using AES-256 with a master key that is stored in a secure location. The master key is rotated on a regular basis. For more information about the default encryption behavior for an Amazon S3 bucket, see Amazon S3 Default Encryption.

Protecting Data at Rest on Amazon DynamoDB

Amazon DynamoDB is a shared service from AWS. You can use DynamoDB without adding protection, but you can also implement a data encryption layer over the standard DynamoDB service. See the previous section for considerations for protecting data at the application layer, including the impact on range queries. DynamoDB supports number, string, and raw binary data type formats. When storing encrypted fields in DynamoDB, it is a best practice to use raw binary fields or Base64-encoded string fields.
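As a small illustration of the Base64 recommendation, the snippet below assumes you already have ciphertext bytes from your application-layer encryption (here just a placeholder byte string, not real encrypted data) and prepares them for storage in a DynamoDB string attribute.

```python
import base64

# Placeholder standing in for ciphertext produced by application-layer encryption.
ciphertext = b"\x00\x01\xfe\xffencrypted-bytes"

# Encode to a Base64 ASCII string suitable for a DynamoDB string attribute...
stored_value = base64.b64encode(ciphertext).decode("ascii")

# ...and decode it back to the original bytes after reading the item.
recovered = base64.b64decode(stored_value)
assert recovered == ciphertext
print(stored_value)
```

Base64 keeps arbitrary binary ciphertext intact inside a string attribute; with a raw binary attribute type, the encoding step is unnecessary.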
Protecting Data at Rest on Amazon EMR

Amazon EMR is a managed service in the cloud. AWS provides the AMIs required to run Amazon EMR, and you can't use custom AMIs or your own EBS volumes. By default, Amazon EMR instances do not encrypt data at rest.

Amazon EMR clusters often use either Amazon S3 or DynamoDB as the persistent data store. When an Amazon EMR cluster starts, it can copy the data required for it to operate from the persistent store into HDFS, or use data directly from Amazon S3 or DynamoDB. To provide for a higher level of data-at-rest confidentiality or integrity, you can employ a number of techniques, summarized in Table 11.

Table 11: Protecting data at rest in Amazon EMR

Amazon S3 server-side encryption, no HDFS copy: Data is permanently stored on Amazon S3 only and not copied to HDFS at all. Hadoop fetches data from Amazon S3 and processes it locally without making persistent local copies. See the Protecting Data at Rest on Amazon S3 section for more information on Amazon S3 server-side encryption.

Amazon S3 client-side encryption: Data is permanently stored on Amazon S3 only and not copied to HDFS at all. Hadoop fetches data from Amazon S3 and processes it locally without making persistent local copies. To apply client-side decryption, you can use a custom Serializer/Deserializer (SerDe) with products such as Hive, or InputFormat for Java MapReduce jobs. Apply encryption at each individual row or record so that you can split the file. See the Protecting Data at Rest on Amazon S3 section for more information on Amazon S3 client-side encryption.

Application-level encryption, entire file encrypted: You can encrypt or protect the integrity of the data (for example, by using HMAC-SHA1) at the application level while you store data in Amazon S3 or DynamoDB. To decrypt the data, you would use a custom SerDe with Hive, or a script or bootstrap action to fetch the data from Amazon S3, decrypt
it, and load it into HDFS before processing. Because the entire file is encrypted, you might need to execute this action on a single node, such as the master node. You can use tools such as S3DistCp with special codecs.

Application-level encryption, individual fields encrypted/structure preserved: Hadoop can use a standard SerDe, such as JSON. Data decryption can take place during the Map stage of the Hadoop job, and you can use standard input/output redirection via custom decryption tools for streaming jobs.

Hybrid: You might want to employ a combination of Amazon S3 server-side encryption and client-side encryption, as well as application-level encryption.

AWS Partner Network (APN) partners provide specialized solutions for protecting data at rest and in transit on Amazon EMR. For more information, visit the AWS Security Partner Solutions page.

Decommission Data and Media Securely

You decommission data differently in the cloud than you do in traditional on-premises environments. When you ask AWS to delete data in the cloud, AWS does not decommission the underlying physical media; instead, the storage blocks are marked as unallocated. AWS uses secure mechanisms to reassign the blocks elsewhere. When you provision block storage, the hypervisor or virtual machine manager (VMM) keeps track of which blocks your instance has written to. When an instance writes to a block of storage, the previous block is zeroed out and then overwritten with your block of data. If your instance attempts to read from a block it previously wrote to, your previously stored data is returned. If an instance attempts to read from a block it has not previously written to, the hypervisor zeros out the previous data on disk and returns a zero to the instance.

When AWS determines that media has reached the end of its useful life, or the media experiences a hardware fault, AWS follows the techniques detailed in Department of Defense (DoD) 5220.22-M (“National
Industrial Security Program Operating Manual”) or NIST SP 800-88 (“Guidelines for Media Sanitization”) to destroy data as part of the decommissioning process. For more information about deletion of data in the cloud, see the AWS Overview of Security Processes whitepaper.

When you have regulatory or business reasons to require further controls for securely decommissioning data, you can implement data encryption at rest using customer-managed keys, which are not stored in the cloud. Then, in addition to following the previous process, you would delete the key used to protect the decommissioned data, making it irrecoverable.

Protect Data in Transit

Cloud applications often communicate over public links, such as the Internet, so it is important to protect data in transit when you run applications in the cloud. This involves protecting network traffic between clients and servers, and network traffic between servers. Table 12 lists common concerns for communication over public links such as the Internet.

Table 12: Threats to data in transit

Concern: Accidental information disclosure
Comments: Access to your confidential data should be limited. When data is traversing the public network, it should be protected from disclosure through encryption.
Recommended protection: Encrypt data in transit using IPSec ESP and/or SSL/TLS.

Concern: Data integrity compromise
Comments: Whether or not data is confidential, you want to know that data integrity is not compromised through deliberate or accidental modification.
Recommended protection: Authenticate data integrity using IPSec ESP/AH and/or SSL/TLS.

Concern: Peer identity compromise/identity spoofing/man-in-the-middle
Comments: Encryption and data integrity authentication are important for protecting the communications channel. It is equally important to authenticate the identity of the remote end of the connection. An encrypted channel is worthless if the remote end happens to be an attacker, or an imposter relaying
the connection to the intended recipient.
Recommended protection: Use IPSec with IKE, with pre-shared keys or X.509 certificates, to authenticate the remote end. Alternatively, use SSL/TLS with server certificate authentication based on the server common name (CN) or Subject Alternative Name (SAN).

Services from AWS provide support for both IPSec and SSL/TLS for protection of data in transit. IPSec is a protocol that extends the IP protocol stack, often in network infrastructure, and allows applications on upper layers to communicate securely without modification. SSL/TLS, on the other hand, operates at the session layer, and while there are third-party SSL/TLS wrappers, it often requires support at the application layer as well. The following sections provide details on protecting data in transit.

Managing Application and Administrative Access to AWS Public Cloud Services

When accessing applications running in the AWS public cloud, your connections traverse the Internet. In most cases, your security policies consider the Internet an insecure communications medium and require application data protection in transit. Table 13 outlines approaches for protecting data in transit when accessing public cloud services.

Table 13: Protecting application data in transit when accessing public cloud services

Protocol/scenario: HTTP/HTTPS traffic (web applications)
Description: By default, HTTP traffic is unprotected. SSL/TLS protection for HTTP traffic, also known as HTTPS, is the industry standard and is widely supported by web servers and browsers. HTTP traffic can include not just client access to web pages, but also web services (REST-based access).
Recommended protection approach: Use HTTPS (HTTP over SSL/TLS) with server certificate authentication.

Protocol/scenario: HTTPS offload (web applications)
Description: While using HTTPS is often recommended, especially for sensitive data, SSL/TLS processing requires additional CPU and memory resources from both the web server and the client. This can put a
considerable load on web servers handling thousands of SSL/TLS sessions. There is less impact on the client, where only a limited number of SSL/TLS connections are terminated.
Recommended protection approach: Offload HTTPS processing onto Elastic Load Balancing to minimize the impact on web servers while still protecting data in transit. Further protect the back-end connection to instances using an application protocol such as HTTP over SSL.

Protocol/scenario: Remote Desktop Protocol (RDP) traffic
Description: Users who access Windows Terminal Services in the public cloud usually use the Microsoft Remote Desktop Protocol (RDP). By default, RDP connections establish an underlying SSL/TLS connection.
Recommended protection approach: For optimal protection, the Windows server being accessed should be issued a trusted X.509 certificate to protect against identity spoofing or man-in-the-middle attacks. By default, Windows RDP servers use self-signed certificates, which are not trusted and should be avoided.

Protocol/scenario: Secure Shell (SSH) traffic
Description: SSH is the preferred approach for establishing administrative connections to Linux servers. SSH is a protocol that, like SSL, provides a secure communications channel between the client and the server. In addition, SSH also supports tunneling, which you should use for running applications such as X-Windows on top of SSH and protecting the application session in transit.
Recommended protection approach: Use SSH version 2 with non-privileged user accounts.

Protocol/scenario: Database server traffic
Description: If clients or servers need to access databases in the cloud, they might need to traverse the Internet as well.
Recommended protection approach: Most modern databases support SSL/TLS wrappers for native database protocols. For database servers running on Amazon EC2, we recommend this approach to protecting data in transit. Amazon RDS provides support for SSL/TLS in some cases. See the Protecting Data in Transit to Amazon RDS section for more details.
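The repeated advice in Table 13 to use server certificate authentication can be illustrated with Python's standard ssl module: a default client context already enforces certificate-chain validation and hostname (CN/SAN) checking, which is exactly what protects against the man-in-the-middle scenario described above. This is a generic client-side sketch, not AWS-specific.

```python
import ssl

# A default client context verifies the server's X.509 certificate chain
# and checks that the certificate matches the hostname (CN/SAN).
context = ssl.create_default_context()

print(context.check_hostname)                    # hostname checking is on by default
print(context.verify_mode == ssl.CERT_REQUIRED)  # a valid certificate is mandatory

# Disabling either setting re-opens the door to identity spoofing, e.g.:
#   context.check_hostname = False
#   context.verify_mode = ssl.CERT_NONE   # do NOT do this in production
```

A context configured this way is what you would pass to `ssl.SSLContext.wrap_socket` (or to an HTTPS client) so the connection fails closed when the remote end cannot prove its identity.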
such as Amazon EC2 and Amazon S3 using the AWS Man agement Console or AWS APIs Examples of service management traffic include launching a new Amazon EC2 instance saving an object to an Amazon S3 bucket or amending a security group on Amazon VPC The AWS Management Console uses SSL/TLS between the client browser and console service endpoints to protect AWS service management traffic Traffic is encrypted data integrity is authenticated and the client browser authenticates the identity of the console service endpoint by using an X509 certificate After an SSL/TLS session is established between the client browser and the console service endpoint all subsequent HTTP traffic is protected within the SSL/TLS session You can alternatively use AWS APIs to manage services from AWS either directly from applicat ions or third party tools or via SDKs or via AWS command line tools AWS ArchivedAmazon Web Services AWS Security Best Practices Page 36 APIs are web services (REST) over HTTPS SSL/TLS sessions are established between the client and the specific AWS service endpoint depending on the APIs used and all subsequent tr affic including the REST envelope and user payload is protected within the SSL/TLS session Protecting Data in Transit to Amazon S3 Like AWS service management traffic Amazon S3 is accessed over HTTPS This includes all Amazon S3 service management requ ests as well as user payload such as the contents of objects being stored/retrieved from Amazon S3 and associated metadata When the AWS service console is used to manage Amazon S3 an SSL/TLS secure connection is established between the client browser a nd the service console endpoint All subsequent traffic is protected within this connection When Amazon S3 APIs are used directly or indirectly an SSL/TLS connection is established between the client and the Amazon S3 endpoint and then all subsequent HTTP and user payload traffic is encapsulated within the protected session Protecting Data in Transit to Amazon RDS If 
you’re connecting to Amazon RDS from Amazon EC2 instances in the same region you can rely on the security of the AWS networ k but if you’re connecting from the Internet you might want to use SSL/TLS for additional protection SSL/TLS provides peer authentication via server X509 certificates data integrity authentication and data encryption for the client server connection SSL/TLS is currently supported for connections to Amazon RDS MySQL and Microsoft SQL instances For both products Amazon Web Services provides a single self signed certificate associated with the MySQL or Microsoft SQL listener You can download the selfsigned certificate and designate it as trusted This provides for peer identity authentication and prevents man inthemiddle or identity spoofing attacks on the server side SSL/TLS provides for native encryption and data integrity authentication of the communications channel between the client and the server Because the same self signed certificate is used on all Amazon RDS MySQL instances on AWS and another single self signed certificate is used across all Amazon RDS Microsoft SQL instances on AWS peer identity authentication does not provide for individual instance authentication If you require individual server authentication via SSL/TLS you might need to leverage Amazon EC2 and self managed relational database services ArchivedAmazon Web Services AWS Security Best Practices Page 37 Amazon RDS for Oracle Na tive Network Encryption encrypts the data as it moves into and out of the database With Oracle Native Network Encryption you can encrypt network traffic travelling over Oracle Net Services using industry standard encryption algorithms such as AES and Tri ple DES Protecting Data in Transit to Amazon DynamoDB If you're connecting to DynamoDB from other services from AWS in the same region you can rely on the security of the AWS network but if you're connecting to DynamoDB across the Internet you should u se HTTP over SSL/TLS (HTTPS) to connect to 
DynamoDB service endpoints Avoid using HTTP for access to DynamoDB and for all connections across the Internet Protecting Data in Transit to Amazon EMR Amazon EMR includes a number of application communication paths each of which requires separate protection mechanisms for data in transit Table 14 outlines the communications paths and the protection approach we recommend Table 14: Protecting data in transit on Amazon EMR Type of Amazon EMR Traffic Description Recommended Protection Approach Between Hadoop nodes Hadoop Master Worker and Core nodes all communicate with one another using proprietary plain TCP connections However all Hadoop nodes on Amazon EMR reside in the same Availability Zone and are protected by security standards at the physical and infrastructure layer No additional protection typically required – all nodes reside in the same facility Between Hadoop Cluster and Amazon S3 Amazon EMR uses HTTPS to send data between DynamoDB and Amazon EC2 For more information see the Protecting Data in Transit to Amazon S3 section HTTPS used by default ArchivedAmazon Web Services AWS Security Best Practices Page 38 Type of Amazon EMR Traffic Description Recommended Protection Approach Between Hadoop Cluster and Amazon DynamoDB Amazon EMR uses HTTPS to send data between Amazon S3 and Amazon EC2 For more information see the Protecting Data in Transit to Amazon DynamoDB section HTTPS used by default Use SSL/TLS if Thrift REST or Avro are used User or application access to Hadoop cluster Clients or applications on premises can access Amazon EMR clusters across the Internet using scripts (SSH based access) REST or protocols such as Thrift or Avro Use SSH for interactive access to applications or for tunneling other protocols within SSH Administrative access to Hadoop cluster Amazon EMR cluster administrators typically use SSH to manage the cluster Use SSH to the Amazon EMR master node Secure Your Operating Systems and Applications With the AWS shared responsibility 
model, you manage the security of your operating systems and applications. Amazon EC2 presents a true virtual computing environment, in which you can use web service interfaces to launch instances with a variety of operating systems and custom, preloaded applications. You can standardize operating system and application builds and centrally manage the security of your operating systems and applications in a single, secure build repository. You can build and test a pre-configured AMI to meet your security requirements.

Recommendations include:

• Disable root API access keys and secret key.
• Restrict access to instances from limited IP ranges using security groups.
• Password-protect the .pem file on user machines.
• Delete keys from the authorized_keys file on your instances when someone leaves your organization or no longer requires access.
• Rotate credentials (such as database access keys).
• Regularly run least-privilege checks using the IAM user Access Advisor and IAM user last-used access keys.
• Use bastion hosts to enforce control and visibility.

This section is not intended to provide a comprehensive list of hardening standards for AMIs. Sources of industry-accepted system hardening standards include, but are not limited to:

• Center for Internet Security (CIS)
• International Organization for Standardization (ISO)
• SysAdmin, Audit, Network, Security (SANS) Institute
• National Institute of Standards and Technology (NIST)

We recommend that you develop configuration standards for all system components. Ensure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards.

If a published AMI is found to be in violation of best practices, or poses a significant risk to customers running the AMI, AWS reserves the right to take measures to remove the AMI from the public catalog and notify the publisher and those running the AMI of the findings.

Creating Custom AMIs

You can create your own AMIs that meet the specific requirements of your organization and publish them for internal (private) or external (public) use. As a publisher of an AMI, you are responsible for the initial security posture of the machine images that you use in production. The security controls you apply on the AMI are effective at a specific point in time; they are not dynamic.

You can configure private AMIs in any way that meets your business needs and does not violate the AWS Acceptable Use Policy. For more information, see the Amazon Web Services Acceptable Use Policy. Users who launch from AMIs, however, might not be security experts, so we recommend that you meet certain minimum security standards.

Before you publish an AMI, make sure that the published software is up to date with relevant security patches, and perform the clean-up and hardening tasks listed in Table 15.

Table 15: Clean-up tasks before publishing an AMI

Disable insecure applications: Disable services and protocols that authenticate users in clear text over the network, or otherwise insecurely.
Minimize exposure: Disable non-essential network services on startup. Only administrative services (SSH/RDP) and the services required for essential applications should be started.
Protect credentials: Securely delete all AWS credentials from disk and configuration files.
Protect credentials: Securely delete any third-party credentials from disk and configuration files.
Protect credentials: Securely delete all additional certificates or key material from the system.
Protect credentials: Ensure that installed software does not use default internal accounts and passwords.
Use good governance: Ensure that the system does not violate the Amazon Web Services Acceptable Use Policy. Examples of violations include open SMTP relays or proxy servers. For more information, see the Amazon Web Services Acceptable Use Policy.

Tables 16 and 17 list additional operating system-specific clean-up tasks. Table 16 lists the steps for securing Linux AMIs.

Table 16: Securing Linux/UNIX AMIs

Secure services: Configure sshd to allow only public key authentication. Set PubkeyAuthentication to Yes and PasswordAuthentication to No in sshd_config.
Secure services: Generate a unique SSH host key on instance creation. If the AMI uses cloud-init, it will handle this automatically.
Protect credentials: Remove and disable passwords for all user accounts, so that they cannot be used to log in and do not have a default password. Run passwd -l for each account.
Protect credentials: Securely delete all user SSH public and private key pairs.
Protect data: Securely delete all shell history and system log files containing sensitive data.

Table 17: Securing Windows AMIs

Protect credentials: Ensure that all enabled user accounts have new, randomly generated passwords upon instance creation. You can configure the EC2 Config Service to do this for the Administrator account upon boot, but you must explicitly do so before bundling the image.
Protect credentials: Ensure that the Guest account is disabled.
Protect data: Clear the Windows event logs.
Protect credentials: Make sure the AMI is not part of a Windows domain.
Minimize exposure: Do not enable file sharing, print spooler, RPC, or other Windows services that are not essential but are enabled by default.

Bootstrapping

After the hardened AMI is instantiated, you can still amend and update security controls by using bootstrapping applications. Common bootstrapping applications include Puppet, Chef, Capistrano, Cloud-Init, and Cfn-Init. You can also run custom bootstrapping Bash or Microsoft Windows PowerShell scripts without using third-party tools.

Here are a few bootstrap actions to consider:

• Security software updates: install the latest patches, service packs, and critical updates beyond the patch level of the AMI.
• Initial application patches: install application-level updates beyond the current application-level build as captured in the AMI.
• Contextual data and configuration: enable instances to apply configurations specific to the environment in which they are being launched; production, test, or DMZ/internal, for example.
• Register instances with remote security monitoring and management systems.

Managing Patches

You are responsible for patch management for your AMIs and live instances. We recommend that you institutionalize patch management and maintain a written procedure. While you can use third-party patch management systems for operating systems and major applications, it is a good practice to keep an inventory of all software and system components, and to compare the list of security patches installed on each system to the most recent vendor security patch list, to verify that current vendor patches are installed. Implement processes to identify new security vulnerabilities and assign risk rankings to such vulnerabilities. At a minimum, rank the most critical, highest-risk vulnerabilities as "High."

Controlling Security for Public AMIs

Take care that you don't leave important credentials on AMIs when you share them publicly. For more information, see How To Share and Use Public AMIs in A Secure Manner.

Protecting Your System from Malware

Protect your systems in the cloud as you would protect a conventional infrastructure from threats such as viruses, worms, Trojans, rootkits, botnets, and spam. It's important to understand the implications of a malware infection to an individual instance, as well as to the entire cloud system: when a user, wittingly or unwittingly, executes a program on a Linux or Windows system, the executable assumes the privileges of that user (or, in some cases, impersonates another user). The code can carry out any action that the user who launched it has permissions for. Users must
ensure that they only execute trusted code.

If you execute a piece of untrusted code on your system, it's no longer your system; it belongs to someone else. If a superuser or a user with administrative privileges executes an untrusted program, the system on which the program was executed can no longer be trusted: malicious code might change parts of the operating system, install a rootkit, or establish back doors for accessing the system. It might delete data, compromise data integrity, compromise the availability of services, or disclose information in a covert or overt fashion to third parties.

Consider the instance on which the code was executed to be infected. If the infected instance is part of a single sign-on environment, or if there is an implicit trust model for access between instances, the infection can quickly spread beyond the individual instance into the entire system and beyond. An infection of this scale can quickly lead to data leakage and data and service compromise, and it can erode the company's reputation. It might also have direct financial consequences if, for example, it compromises services to third parties or over-consumes cloud resources. You must manage the threat of malware. Table 18 outlines some common approaches to malware protection.

Table 18: Approaches for protection from malware

Untrusted AMIs: Launch instances from trusted AMIs only. Trusted AMIs include the standard Windows and Linux AMIs provided by AWS, and AMIs from trusted third parties. If you derive your own custom AMIs from the standard and trusted AMIs, all the additional software and settings you apply must be trusted as well. Launching an untrusted third-party AMI can compromise and infect your entire cloud environment.

Untrusted software: Only install and run trusted software from a trusted software provider. A trusted software provider is one who is well regarded in the industry and develops software in a secure and responsible fashion, not allowing malicious code into its software packages. Open source software can also be trusted software, and you should be able to compile your own executables. We strongly recommend that you perform careful code reviews to ensure that source code is non-malicious. Trusted software providers often sign their software using code signing certificates, or provide MD5 or SHA-1 signatures of their products, so that you can verify the integrity of the software you download.

Untrusted software depots: Download trusted software from trusted sources. Random sources of software on the Internet or elsewhere on the network might actually be distributing malware inside an otherwise legitimate and reputable software package. Such untrusted parties might provide MD5 or SHA-1 signatures of the derivative package with malware in it, so such signatures should not be trusted. We advise that you set up your own internal software depots of trusted software for your users to install and use. Strongly discourage users from the dangerous practice of downloading and installing software from random sources on the Internet.

Principle of least privilege: Give users the minimum privileges they need to carry out their tasks. That way, even if a user accidentally launches an infected executable, the impact on the instance and the wider cloud system is minimized.

Patching: Patch external-facing and internal systems to the latest security level. Worms often spread through unpatched systems on the network.

Botnets: If an infection, whether from a conventional virus, a Trojan, or a worm, spreads beyond the individual instance and infects a wider fleet, it might carry malicious code that creates a botnet: a network of infected hosts that can be controlled by a remote adversary. Follow all the previous recommendations to avoid a botnet infection.

Spam: Infected systems can be used by attackers to send large amounts of unsolicited mail (spam). AWS provides special controls to limit how much email an Amazon EC2 instance can send, but you are still responsible for preventing infection in the first place. Avoid SMTP open relay, which can be used to spread spam and which might also represent a breach of the AWS Acceptable Use Policy. For more information, see the Amazon Web Services Acceptable Use Policy.

Antivirus/antispam software: Be sure to use a reputable and up-to-date antivirus and antispam solution on your system.

Host-based IDS software: Many AWS customers install host-based IDS software, such as the open source product OSSEC, that includes file integrity checking and rootkit detection software. Use these products to analyze important system files and folders and calculate checksums that reflect their trusted state, then regularly check to see whether these files have been modified and alert the system administrator if so.

If an instance is infected, antivirus software might be able to detect the infection and remove the virus. We recommend the most secure and widely recommended approach, which is to save all the system data, reinstall all the systems, platforms, and application executables from a trusted source, and then restore the data only from backup.

Mitigating Compromise and Abuse

AWS provides a global infrastructure for customers to build solutions, many of which face the Internet. Our customer solutions must operate in a manner that does no harm to the rest of the Internet community; that is, they must avoid abuse activities. Abuse activities are externally observed behaviors of AWS customers' instances or other resources that are malicious, offensive, illegal, or could harm other Internet sites.

AWS works with you to detect and address suspicious and malicious activities from your AWS resources. Unexpected or suspicious behaviors from your resources can indicate that your AWS resources have
been compromised, which signals potential risks to your business. AWS uses the following mechanisms to detect abuse activities from customer resources:

• AWS internal event monitoring
• External security intelligence against the AWS network space
• Internet abuse complaints against AWS resources

While the AWS abuse response team aggressively monitors and shuts down malicious abusers or fraudsters running on AWS, the majority of abuse complaints refer to customers who have legitimate business on AWS. Common causes of unintentional abuse activities include:

• Compromised resource. For example, an unpatched Amazon EC2 instance could be infected and become a botnet agent.
• Unintentional abuse. For example, an overly aggressive web crawler might be classified as a DOS attacker by some Internet sites.
• Secondary abuse. For example, an end user of the service provided by an AWS customer might post malware files on a public Amazon S3 bucket.
• False complaints. Internet users might mistake legitimate activities for abuse.

AWS is committed to working with AWS customers to prevent, detect, and mitigate abuse, and to defend against future re-occurrences. When you receive an AWS abuse warning, your security and operational staffs must immediately investigate the matter. Delay can prolong the damage to other Internet sites and lead to reputation and legal liability for you. More importantly, the implicated abuse resource might be compromised by malicious users, and ignoring the compromise could magnify damages to your business.

Malicious, illegal, or harmful activities that use your AWS resources violate the AWS Acceptable Use Policy and can lead to account suspension. For more information, see the Amazon Web Services Acceptable Use Policy. It is your responsibility to maintain a well-behaved service, as evaluated by the Internet community. If an AWS customer fails to address reported abuse activities, AWS will suspend the AWS account to protect the integrity of the AWS platform and the Internet community.

Table 19 lists best practices that can help you respond to abuse incidents.

Table 19: Best practices for mitigating abuse

Never ignore AWS abuse communication: When an abuse case is filed, AWS immediately sends an email notification to the customer's registered email address. You can simply reply to the abuse warning email to exchange information with the AWS abuse response team. All communications are saved in the AWS abuse tracking system for future reference. The AWS abuse response team is committed to helping customers understand the nature of the complaints, and AWS helps customers mitigate and prevent abuse activities. Account suspension is the last action the AWS abuse response team takes to stop abuse activities. We work with our customers to mitigate problems and avoid having to take any punitive action, but you must respond to abuse warnings, take action to stop the malicious activities, and prevent future re-occurrence. Lack of customer response is the leading reason for instance and account blocks.

Follow security best practices: The best protection against resource compromise is to follow the security best practices outlined in this document. While AWS provides certain security tools to help you establish strong defenses for your cloud environment, you must follow security best practices as you would for servers within your own data center. Consistently adopt simple defense practices, such as applying the latest software patches, restricting network traffic via a firewall and/or Amazon EC2 security groups, and providing least-privilege access to users.

Mitigating compromises: If your computing environment has been compromised or infected, we recommend taking the following steps to recover to a safe state. Consider any known compromised Amazon EC2 instance or AWS resource unsafe. If your Amazon EC2 instance is generating traffic that cannot be explained by your application usage, your instance has probably been compromised or infected with malicious software. Shut down and rebuild that instance completely to get back to a safe state. While a fresh re-launch can be challenging in the physical world, in the cloud environment it is the first mitigation approach. You might need to carry out forensic analysis on a compromised instance to detect the root cause. Only well-trained security experts should perform such an investigation, and you should isolate the infected instance to prevent further damage and infection during the investigation. To isolate an Amazon EC2 instance for investigation, you can set up a very restrictive security group; for example, close all ports except to accept inbound SSH or RDP traffic from one single IP address from which the forensic investigator can safely examine the instance. You can also take an offline Amazon EBS snapshot of the infected instance and then deliver the offline snapshot to forensic investigators for deep analysis. AWS does not have access to the private information inside your instances or other resources, so we cannot detect guest operating system or application-level compromises, such as application account takeover. AWS cannot retroactively provide information (such as access logs, IP traffic logs, or other attributes) if you are not recording that information via your own tools. Most deep incident investigation and mitigation activities are your responsibility. The final step you must take to recover from compromised Amazon EC2 instances is to back up key business data, completely terminate the infected instances, and re-launch them as fresh resources. To avoid future compromises, we recommend that you review the security control environment on the newly launched instances. Simple steps like applying the latest software patches and restricting firewalls go a long way.

Set up a security communication email address: The AWS abuse response team uses email for abuse warning notifications. By default, this email goes to your registered email address, but if you are in a large enterprise, you might want to create a dedicated response email address. You can set up additional email addresses on your Personal Information page, under Configure Additional Contacts.

Using Additional Application Security Practices

Here are some additional general security best practices for your operating systems and applications:

• Always change vendor-supplied defaults before creating new AMIs or prior to deploying new applications, including but not limited to passwords, simple network management protocol (SNMP) community strings, and security configuration.
• Remove or disable unnecessary user accounts.
• Implement a single primary function per Amazon EC2 instance to keep functions that require different security levels from co-existing on the same server. For example, implement web servers, database servers, and DNS on separate servers.
• Enable only necessary and secure services, protocols, daemons, etc., as required for the functioning of the system. Disable all non-essential services, because they increase the security risk exposure for the instance as well as the entire system.
• Disable or remove all unnecessary functionality, such as scripts, drivers, features, subsystems, and EBS volumes.

Configure all services with security best practices in mind. Enable security features for any required services, protocols, or daemons. Choose services such as SSH, which have built-in security mechanisms for user/peer authentication, encryption, and data integrity authentication, over less secure equivalents such as Telnet. Use SSH for file transfers rather than insecure protocols like FTP. Where you can't avoid using less secure protocols and services, introduce additional security layers around them, such as IPSec or other virtual private network (VPN) technologies, to protect the
communications cha nnel at the network layer or GSS API Kerberos SSL or TLS to protect network traffic at the application layer While security governance is important for all organizations it is a best practice to enforce security policies Wherever possible configure your system security parameters to comply with your security policies and guidelines to prevent misuse For administrative access to systems and applications encrypt all non console administrative access using strong cryptographic mechanisms Use technolo gies such ArchivedAmazon Web Services AWS Security Best Practices Page 49 as SSH user and site tosite IPSec VPNs or SSL/TLS to further secure remote system management Secure Your Infrastructure This section provides recommendations for securing infrastructure services on the AWS platform Using Amazon Virtual Priva te Cloud (VPC) With Amazon Virtual Private Cloud (VPC) you can create private clouds within the AWS public cloud Each customer Amazon VPC uses IP address space allocated by customer You can use private IP addresses (as recommended by RFC 1918) for your Amazon VPCs building private clouds and associated networks in the cloud that are not directly routable to the Internet Amazon VPC provides not only isolation from other customers in the private cloud it provides layer 3 (Network Layer IP routing) isola tion from the Internet as well Table 20 lists options for protecting your applications in Amazon VPC: ArchivedAmazon Web Services AWS Security Best Practices Page 50 Table 20: Accessing resources in Amazon VPC Concern Description Recommended Protection Approach Internet only The Amazon VPC is not connected to any of your infrastructure on premises or elsewhere You might or might not have additional infrastructure residing on premises or elsewhere If you need to accept connections from Internet users you can provide inbound access by allo cating elastic IP addresses (EIPs) to only those Amazon VPC instances that need them You can further limit 
inbound connections by using security groups or NACLs for only specific ports and source IP address ranges If you can balance the load of traffic i nbound from the Internet you don’t need EIPs You can place instances behind Elastic Load Balancing For outbound (to the Internet) access for example to fetch software updates or to access data on AWS public services such as Amazon S3 you can use a NA T instance to provide masquerading for outgoing connections No EIPs are required Encrypt application and administrative traffic using SSL/TLS or build custom user VPN solutions Carefully plan routing and server placement in public and private subnets Use security groups and NACLs IPSec over the Internet AWS provides industry standard and resilient IPSec termination infrastructure for VPC Customers can establish IPSec tunnels from their on premises or other VPN infrastructure to Amazon VPC IPSec tunnels are established between AWS and your infrastructure endpoints Applications running in the cloud or on premises don’t require any modification and can benefit from IPSec data protection in transit immediately Establish a private IPSec connec tion using IKEv1 and IPSec using standard AWS VPN facilities (Amazon VPC VPN gateways customer gateways and VPN connections) Alternatively establish customer specific VPN software infrastructure in the cloud and on premises AWS Direct Connect witho ut IPSec With AWS Direct Connect you can establish a connection to your Amazon VPC using private peering with AWS over dedicated links without using the Internet You can opt to not use IPSec in this case subject to your data protection requirements Depending on your data protection requirements you might not need additional protection over private peering ArchivedAmazon Web Services AWS Security Best Practices Page 51 Concern Description Recommended Protection Approach AWS Direct Connect with IPSec You can use IPSec over AWS Direct Connect links for additional end to end protection See IPSec 
over the Internet a bove Hybrid Consider using a combination of these approaches Employ adequate protection mechanisms for each connectivity approach you use You can leverage Amazon VPC IPSec or VPC AWS Direct Connect to seamlessly integrate on premises or other hosted infrastructure with your Amazon VPC resources in a secure fashion With either approach IPSec connections protect data in transit while BGP on IPSec or AWS Direct Connect links integrate your Amazon VPC and on premises routing domains for transpar ent integration for any application even applications that don’t support native network security mechanisms Although VPC IPSec provides industry standard and transparent protection for your applications you might want to use additional levels of protect ion mechanisms such as SSL/TLS over VPC IPSec links For more information please refer to the Amazon VPC Connectivity Options whitepaper Using Security Zoning a nd Network Segmentation Different security requirements mandate different security controls It is a security best practice to segment infrastructure into zones that impose similar security controls While most of the AWS underlying infrastructure is manag ed by AWS operations and security teams you can build your own overlay infrastructure components Amazon VPCs subnets routing tables segmented/zoned applications and custom service instances such as user repositories DNS and time servers supplement t he AWS managed cloud infrastructure Usually network engineering teams interpret segmentation as another infrastructure design component and apply network centric access control and firewall rules to manage access Security zoning and network segmentation are two different concepts however: A network segment simply isolates one network from another where a security zone creates a group of system components with similar security levels with common controls ArchivedAmazon Web Services AWS Security Best Practices Page 52 On AWS you can build network segmen ts 
using the following access control methods:

• Using Amazon VPC to define an isolated network for each workload or organizational entity.

• Using security groups to manage access to instances that have similar functions and security requirements. Security groups are stateful firewalls that enable firewall rules in both directions for every allowed and established TCP session or UDP communications channel.

• Using Network Access Control Lists (NACLs) to manage IP traffic statelessly. NACLs are agnostic of TCP and UDP sessions, but they allow granular control over IP protocols (for example, GRE, IPSec ESP, ICMP), as well as control per source/destination IP address and port for TCP and UDP. NACLs work in conjunction with security groups and can allow or deny traffic even before it reaches the security group.

• Using host-based firewalls to control access to each instance.

• Creating a threat protection layer in the traffic flow and forcing all traffic to traverse the zone.

• Applying access control at other layers (for example, applications and services).

Traditional environments require separate network segments, representing separate broadcast entities, to route traffic via a central security enforcement system such as a firewall. The concept of security groups in the AWS cloud makes this requirement obsolete. Security groups are a logical grouping of instances, and they allow the enforcement of inbound and outbound traffic rules on those instances regardless of the subnet where the instances reside.

Creating a security zone requires additional controls per network segment, often including:

• Shared access control – A central Identity and Access Management (IDAM) system. Note that although federation is possible, this will often be separate from IAM.

• Shared audit logging – Shared logging is required for event analysis and correlation, and for tracking security events.

• Shared data classification – See Table 1: Sample Asset Matrix in the Design Your ISMS to Protect Your Assets section for more information.

• Shared management infrastructure – Various components, such as anti-virus/anti-spam systems, patching systems, and performance monitoring systems.

• Shared security (confidentiality/integrity) requirements – Often considered in conjunction with data classification.

To assess your network segmentation and security zoning requirements, answer the following questions:

• Do I control inter-zone communication? Can I use network segmentation tools to manage communications between security zones A and B? Usually, access control elements such as security groups, ACLs, and network firewalls should build the walls between security zones. Amazon VPCs build inter-zone isolation walls by default.

• Can I monitor inter-zone communication using an IDS/IPS/DLP/SIEM/NBAD system, depending on business requirements? Blocking access and managing access are different things. Porous communication between security zones mandates sophisticated security monitoring tools between zones. The horizontal scalability of AWS instances makes it possible to zone each instance at the operating system level and leverage host-based security monitoring agents.

• Can I apply per-zone access control rights? One of the benefits of zoning is controlling egress access. It is technically possible to control access to resources such as Amazon S3 and Amazon SMS with resource policies.

• Can I manage each zone using dedicated management channels/roles? Role-based access control for privileged access is a common requirement. You can use IAM to create groups and roles on AWS to create different privilege levels. You can also mimic the same approach with application and system users. One key feature of Amazon VPC-based networks is support for multiple elastic network interfaces; security engineers can create a management overlay network using dual-homed instances.

• Can I apply per-zone confidentiality and integrity rules?
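The stateless, ordered evaluation that NACLs perform (described in the access control methods above) can be illustrated with a short, self-contained Python sketch. The rule set and field names here are hypothetical, not the EC2 API; the point is only the semantics: rules are evaluated in ascending rule-number order, the first match decides, and unmatched traffic hits the implicit deny.

```python
import ipaddress

# Illustrative (hypothetical) inbound rule set for a "web" zone:
# deny a known-bad range first, allow HTTPS from anywhere, and allow
# return traffic on ephemeral ports.
RULES = [
    {"num": 90,  "cidr": "198.51.100.0/24", "ports": range(0, 65536), "action": "deny"},
    {"num": 100, "cidr": "0.0.0.0/0",       "ports": [443],           "action": "allow"},
    {"num": 110, "cidr": "0.0.0.0/0",       "ports": range(1024, 65536), "action": "allow"},
]

def evaluate(rules, src_ip, dst_port):
    """Decide 'allow' or 'deny' for one packet, NACL-style:
    the lowest-numbered matching rule wins; otherwise the implicit deny applies."""
    ip = ipaddress.ip_address(src_ip)
    for rule in sorted(rules, key=lambda r: r["num"]):
        if ip in ipaddress.ip_network(rule["cidr"]) and dst_port in rule["ports"]:
            return rule["action"]
    return "deny"  # the implicit "*" deny rule
```

Because evaluation is stateless, return traffic has to be allowed explicitly (rule 110's ephemeral port range), in contrast to a stateful security group, which tracks established sessions automatically.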
Per-zone encryption, data classification, and DRM simply increase the overall security posture. If the security requirements differ per security zone, then the data security requirements must differ as well. And it is always good policy to use different encryption options, with rotating keys, in each security zone.

AWS provides flexible security zoning options. Security engineers and architects can leverage the following AWS features to build isolated security zones/segments on AWS:

• Per-Amazon VPC access control
• Per-subnet access control
• Per-security group access control
• Per-instance access control (host-based)
• Per-Amazon VPC routing block
• Per-resource policies (S3/SNS/SMS)
• Per-zone IAM policies
• Per-zone log management
• Per-zone IAM and administrative users
• Per-zone log feeds
• Per-zone administrative channels (roles, interfaces, management consoles)
• Per-zone AMIs
• Per-zone data storage resources (Amazon S3 buckets or Glacier archives)
• Per-zone user directories
• Per-zone applications/application controls

With elastic cloud infrastructure and automated deployment, you can apply the same security controls across all AWS regions. Repeatable and uniform deployments improve your overall security posture.

Strengthening Network Security

Following the shared responsibility model, AWS configures infrastructure components such as data center networks, routers, switches, and firewalls in a secure fashion. You are responsible for controlling access to your systems in the cloud and for configuring network security within your Amazon VPC, as well as for securing inbound and outbound network traffic. While applying authentication and authorization for resource access is essential, it doesn't prevent adversaries from acquiring network-level access and trying to impersonate authorized users. Controlling access to applications and services based on the network location of the user provides an additional layer of security. For example, a web-based application with strong user authentication could also benefit from an IP-address-based firewall that limits source traffic to a specific range of IP addresses, and from an intrusion prevention system, to limit security exposure and minimize the potential attack surface for the application.

Best practices for network security in the AWS cloud include the following:

• Always use security groups. They provide stateful firewalls for Amazon EC2 instances at the hypervisor level. You can apply multiple security groups to a single instance and to a single ENI.

• Augment security groups with Network ACLs. They are stateless, but they provide fast and efficient controls. Network ACLs are not instance-specific, so they can provide another layer of control in addition to security groups. You can apply separation of duties between ACL management and security group management.

• Use IPSec or AWS Direct Connect for trusted connections to other sites. Use a virtual private gateway (VGW) where Amazon VPC-based resources require remote network connectivity.

• Protect data in transit to ensure the confidentiality and integrity of data, as well as the identities of the communicating parties.

• For large-scale deployments, design network security in layers. Instead of creating a single layer of network security protection, apply network security at external, DMZ, and internal layers.

• Use VPC Flow Logs for further visibility; they enable you to capture information about the IP traffic going to and from network interfaces in your VPC.

Many of the AWS service endpoints that you interact with do not provide native firewall functionality or access control lists. AWS monitors and protects these endpoints with state-of-the-art network- and application-level control systems. You can use IAM policies to restrict access to your resources based on the source IP address of the request.

Securing Periphery Systems: User Repositories,
DNS, and NTP

Overlay security controls are effective only on top of a secure infrastructure. DNS query traffic is a good example of this type of control. When DNS systems are not properly secured, DNS client traffic can be intercepted, and DNS names in queries and responses can be spoofed. Spoofing is a simple but efficient attack against an infrastructure that lacks basic controls. SSL/TLS can provide additional protection.

Some AWS customers use Amazon Route 53, which is a secure DNS service. If you require internal DNS, you can implement a custom DNS solution on Amazon EC2 instances. DNS is an essential part of the solution infrastructure and, as such, becomes a critical part of your security management plan. All DNS systems, as well as other important custom infrastructure components, should apply the following controls:

Table 21: Controls for periphery systems

• Separate administrative-level access – Implement role separation and access controls to limit access to such services, often separate from the access control required for application access or access to other parts of the infrastructure.

• Monitoring, alerting, audit trail – Log and monitor authorized and unauthorized activity.

• Network-layer access control – Restrict network access to only the systems that require it. If possible, apply protocol enforcement for all network-level access attempts (that is, enforce RFC standards for NTP and DNS).

• Latest stable software with security patches – Ensure that the software is patched and not subject to any known vulnerabilities or other risks.

• Continuous security testing (assessments) – Ensure that the infrastructure is tested regularly.

• All other security controls and processes in place – Make sure the periphery systems follow your information security management system (ISMS) best practices, in addition to service-specific custom security controls.

In addition to DNS, other infrastructure services might require specific controls. Centralized access control is essential for managing risk. The IAM service provides role-based identity and access management for AWS, but AWS does not provide end-user repositories like Active Directory, LDAP, or RADIUS for your operating systems and applications. Instead, you establish user identification and authentication systems alongside Authentication, Authorization, Accounting (AAA) servers, or sometimes proprietary database tables. All identity and access management servers for user platforms and applications are critical to security and require special attention.

Time servers are also critical custom services. They are essential in many security-related transactions, including log time-stamping and certificate validation. It is important to use a centralized time server and to synchronize all systems with the same time server. The Payment Card Industry (PCI) Data Security Standard (DSS) proposes a good approach to time synchronization:

• Verify that time synchronization technology is implemented and kept current.

• Obtain and review the process for acquiring, distributing, and storing the correct time within the organization, and review the time-related system parameter settings for a sample of system components.

• Verify that only designated central time servers receive time signals from external sources, and that time signals from external sources are based on International Atomic Time or Coordinated Universal Time (UTC).

• Verify that the designated central time servers peer with each other to keep accurate time, and that other internal servers receive time only from the central time servers.

• Review system configurations and time synchronization settings to verify that access to time data is restricted to only personnel who have a business need to access time data.

• Review system configurations and time synchronization settings and processes to verify that any changes to time settings
on critical systems are logged, monitored, and reviewed.

• Verify that the time servers accept time updates from specific, industry-accepted external sources. (This helps prevent a malicious individual from changing the clock.) You have the option of receiving those updates encrypted with a symmetric key, and you can create access control lists that specify the IP addresses of client machines that will be updated. (This prevents unauthorized use of internal time servers.)

Validating the security of custom infrastructure is an integral part of managing security in the cloud.

Building Threat Protection Layers

Many organizations consider layered security to be a best practice for protecting network infrastructure. In the cloud, you can use a combination of Amazon VPC, implicit firewall rules at the hypervisor layer, network access control lists, security groups, host-based firewalls, and IDS/IPS systems to create a layered solution for network security. While security groups, NACLs, and host-based firewalls meet the needs of many customers, if you're looking for defense in depth you should deploy a network-level security control appliance, and you should do so inline, where traffic is intercepted and analyzed before being forwarded to its final destination, such as an application server.

Figure 6: Layered Network Defense in the Cloud

Examples of inline threat protection technologies include the following:

• Third-party firewall devices installed on Amazon EC2 instances (also known as soft blades)
• Unified threat management (UTM) gateways
• Intrusion prevention systems
• Data loss management gateways
• Anomaly detection gateways
• Advanced persistent threat detection gateways

The following key features of the Amazon VPC infrastructure support deploying threat protection layer technologies:

• Support for multiple layers of load balancers: When you use threat protection gateways to secure clusters of web servers, application servers, or other critical servers, scalability is a key issue. AWS reference architectures underline deployment of external and internal load balancers for threat management, internal server load distribution, and high availability. You can leverage Elastic Load Balancing or your own custom load balancer instances for your multi-tiered designs. You must manage session persistence at the load balancer level for stateful gateway deployments.

• Support for multiple IP addresses: When threat protection gateways protect a presentation layer that consists of several instances (for example, web servers, email servers, or application servers), these multiple instances must use one security gateway in a many-to-one relationship. AWS provides support for multiple IP addresses on a single network interface.

• Support for multiple elastic network interfaces (ENIs): Threat protection gateways must be dual-homed and, in many cases, depending on the complexity of the network, must have multiple interfaces. Using the concept of ENIs, AWS supports multiple network interfaces on several different instance types, which makes it possible to deploy multi-zone security features.

Latency, complexity, and other architectural constraints sometimes rule out implementing an inline threat management layer, in which case you can choose one of the following alternatives:

• A distributed threat protection solution: This approach installs threat protection agents on individual instances in the cloud. A central threat management server communicates with all host-based threat management agents for log collection, analysis, correlation, and active threat response purposes.

• An overlay network threat protection solution: Build an overlay network on top of your Amazon VPC using technologies such as GRE tunnels or vtun interfaces, or by forwarding traffic on another ENI to a centralized network traffic analysis and intrusion detection system, which can
provide active or passive threat response.

Test Security

Every ISMS must ensure regular reviews of the effectiveness of security controls and policies. To guarantee the efficiency of controls against new threats and vulnerabilities, customers need to ensure that the infrastructure is protected against attacks. Verifying existing controls requires testing. AWS customers should undertake a number of test approaches:

• External vulnerability assessment: A third party evaluates system vulnerabilities with little or no knowledge of the infrastructure and its components.

• External penetration tests: A third party with little or no knowledge of the system actively tries to break into it, in a controlled fashion.

• Internal gray/white box review of applications and platforms: A tester who has some or full knowledge of the system validates the efficiency of the controls in place, or evaluates applications and platforms for known vulnerabilities.

The AWS Acceptable Use Policy outlines permitted and prohibited behavior in the AWS cloud, and defines security violations and network abuse. AWS supports both inbound and outbound penetration testing in the cloud, although you must request permission to conduct penetration tests. For more information, see the Amazon Web Services Acceptable Use Policy.

To request penetration testing for your resources, complete and submit the AWS Vulnerability Penetration Testing Request Form. You must be logged in to the AWS Management Console using the credentials associated with the instances you want to test, or the form will not pre-populate correctly. For third-party penetration testing, you must fill out the form yourself and then notify the third parties when AWS grants approval. The form includes information about the instances to be tested and the expected start and end dates and times of the tests, and it requires you to read and agree to the terms and conditions specific to penetration testing and to the use of appropriate tools for testing. AWS policy does not permit testing of m1.small or t1.micro instance types. After you submit the form, you will receive a response confirming receipt of the request within one business day. If you need more time for additional testing, you can reply to the authorization email asking to extend the test period. Each request is subject to a separate approval process.

Managing Metrics and Improvement

Measuring control effectiveness is an integral process of each ISMS. Metrics provide visibility into how well controls are protecting the environment. Risk management often depends on qualitative and quantitative metrics. Table 22 outlines measurement and improvement best practices:

Table 22: Measuring and improving metrics

• Monitoring and reviewing procedures and other controls – Promptly detect errors in the results of processing; promptly identify attempted and successful security breaches and incidents; enable management to determine whether the security activities delegated to people or implemented by information technology are performing as expected; help detect security events, and thereby prevent security incidents, by the use of indicators; and determine whether the actions taken to resolve a breach of security were effective.

• Regular reviews of the effectiveness of the ISMS – Consider results from security audits, incidents, and effectiveness measurements, as well as suggestions and feedback from all interested parties; ensure that the ISMS meets the policy and objectives; and review security controls.

• Measure control effectiveness – Verify that security requirements have been met.

• Risk assessment reviews at planned intervals – Review the residual risks and the identified acceptable levels of risk, taking into account: changes to the organization, technology, business objectives, and processes; identified threats; the effectiveness of the implemented controls; and
external events, such as changes to the legal or regulatory environment, changed contractual obligations, and changes in social climate.

• Internal ISMS audits – First-party audits (internal audits) are conducted by, or on behalf of, the organization itself for internal purposes.

• Regular management reviews – Ensure that the scope remains adequate, and identify improvements in the ISMS process.

• Update security plans – Take into account the findings of monitoring and reviewing activities, and record actions and events that could affect ISMS effectiveness or performance.

Mitigating and Protecting Against DoS & DDoS Attacks

Organizations running Internet applications recognize the risk of being the subject of Denial of Service (DoS) or Distributed Denial of Service (DDoS) attacks by competitors, activists, or individuals. Risk profiles vary depending on the nature of the business, recent events, the political situation, and technology exposure. Mitigation and protection techniques are similar to those used on premises.

If you're concerned about DoS/DDoS attack protection and mitigation, we strongly advise you to enroll in AWS Premium Support services, so that you can proactively and reactively involve AWS support services in the process of mitigating attacks or containing ongoing incidents in your environment on AWS.

Some services, such as Amazon S3, use a shared infrastructure, which means that multiple AWS accounts access and store data on the same set of Amazon S3 infrastructure components. In this case, a DoS/DDoS attack on abstracted services is likely to affect multiple customers. AWS provides both mitigation and protection controls for DoS/DDoS on abstracted services to minimize the impact to you in the event of such an attack. You are not required to provide additional DoS/DDoS protection for such services, but we do advise that you follow the best practices outlined in this whitepaper.

Other services, such as Amazon EC2, use shared physical infrastructure, but you are expected to manage the operating system, platform, and customer data. For such services, we need to work together to provide effective DDoS mitigation and protection. AWS uses proprietary techniques to mitigate and contain DoS/DDoS attacks on the AWS platform. However, to avoid interference with actual user traffic, and following the shared responsibility model, AWS does not provide mitigation for, or actively block, network traffic affecting individual Amazon EC2 instances: only you can determine whether excessive traffic is expected and benign, or part of a DoS/DDoS attack.

While a number of techniques can be used to mitigate DoS/DDoS attacks in the cloud, we strongly recommend that you establish a security and performance baseline that captures system parameters under normal circumstances, potentially also considering daily, weekly, annual, or other patterns applicable to your business. Some DoS/DDoS protection techniques, such as statistical and behavioral models, can detect anomalies compared to a given baseline of normal operation. For example, a customer who typically expects 2,000 concurrent sessions to their website at a specific time of day might trigger an alarm, using Amazon CloudWatch and Amazon SNS, if the current number of concurrent sessions exceeds twice that amount (4,000).

Consider the same components that apply to on-premises deployments when you establish your secure presence in the cloud. Table 23 outlines common approaches for DoS/DDoS mitigation and protection in the cloud.

Table 23: Techniques for mitigation and protection from DoS/DDoS attacks

• Firewalls (security groups, network access control lists, and host-based firewalls) – Traditional firewall techniques limit the attack surface for potential attackers and deny traffic to and from the source or destination of an attack. Protections include the ability to:
Manage the list of allowed destination servers and services (IP addresses & TCP/UDP ports); manage the list of allowed sources of traffic and protocols; and explicitly deny access, temporarily or permanently, from specific IP addresses.

• Web application firewalls (WAF) – Web application firewalls provide deep packet inspection for web traffic. They protect against platform- and application-specific attacks, protocol sanity attacks, and unauthorized user access.

• Host-based or inline IDS/IPS systems – IDS/IPS systems can use statistical/behavioral or signature-based algorithms to detect and contain network attacks and Trojans. They protect against all types of attacks.

• Traffic shaping/rate limiting – Often, DoS/DDoS attacks deplete network and system resources. Rate limiting is a good technique for protecting scarce resources from overconsumption, such as ICMP flooding and application request flooding.

• Embryonic session limits – TCP SYN flooding attacks can take place in both simple and distributed form. In either case, if you have a baseline of the system, you can detect considerable deviations from the norm in the number of half-open (embryonic) TCP sessions and drop any further TCP SYN packets from the specific sources. This protects against TCP SYN flooding.

Along with conventional approaches to DoS/DDoS attack mitigation and protection, the AWS cloud provides capabilities based on its elasticity. DoS/DDoS attacks are attempts to deplete limited compute, memory, disk, or network resources, which often works against on-premises infrastructure. By definition, however, the AWS cloud is elastic, in the sense that new resources can be employed on demand if and when required. For example, you might be under a DDoS attack from a botnet that generates hundreds of thousands of requests per second that are indistinguishable from legitimate user requests to your web servers. Using conventional containment techniques, you would start denying traffic from specific sources, often entire geographies, on the assumption that there are only attackers and no valid customers there. But these assumptions and actions result in a denial of service to your customers themselves.

In the cloud, you have the option of absorbing such an attack. Using AWS technologies like Elastic Load Balancing and Auto Scaling, you can configure the web servers to scale out when under attack (based on load) and to shrink back when the attack stops. Even under heavy attack, the web servers could scale to perform and provide an optimal user experience by leveraging cloud elasticity. By absorbing the attack you might incur additional AWS service costs, but sustaining such an attack is so financially challenging for the attacker that absorbed attacks are unlikely to persist.

You could also use Amazon CloudFront to absorb DoS/DDoS flooding attacks. AWS WAF integrates with Amazon CloudFront to help protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. Potential attackers trying to attack content behind CloudFront are likely to send most or all requests to CloudFront edge locations, where the AWS infrastructure would absorb the extra requests with minimal to no impact on the back-end customer web servers. Again, there would be additional AWS service charges for absorbing the attack, but you should weigh these against the costs the attacker would incur in order to continue the attack.

To effectively mitigate, contain, and generally manage your exposure to DoS/DDoS attacks, you should build a layered defense model, as outlined elsewhere in this document.

Manage Security Monitoring, Alerting, Audit Trail, and Incident Response

The shared responsibility model requires you to monitor and manage your environment at the operating system and higher layers. You probably already do this on premises or
in other environments, so you can adapt your existing processes, tools, and methodologies for use in the cloud. For extensive guidance on security monitoring, see the ENISA Procure Secure whitepaper, which outlines the concepts of continuous security monitoring in the cloud (see Further Reading).

Security monitoring starts with answering the following questions:

• What parameters should we measure?
• How should we measure them?
• What are the thresholds for these parameters?
• How will escalation processes work?
• Where will data be kept?

Perhaps the most important question you must answer is "What do I need to log?" We recommend configuring the following areas for logging and analysis:

• Actions taken by any individual with root or administrative privileges
• Access to all audit trails
• Invalid logical access attempts
• Use of identification and authentication mechanisms
• Initialization of audit logs
• Creation and deletion of system-level objects

When you design your log files, keep the considerations in Table 24 in mind:

Table 24: Log file considerations

• Log collection – Note how log files are collected. Often, operating system, application, or third-party/middleware agents collect log file information.

• Log transport – When log files are centralized, transfer them to the central location in a secure, reliable, and timely fashion.

• Log storage – Centralize log files from multiple instances to facilitate retention policies, as well as analysis and correlation.

• Log taxonomy – Present different categories of log files in a format suitable for analysis.

• Log analysis/correlation – Log files provide security intelligence after you analyze them and correlate events in them. You can analyze logs in real time or at scheduled intervals.

• Log protection/security – Log files are sensitive. Protect them through network control, identity and access management, encryption, data integrity authentication, and tamper-proof time-stamping.

You might have multiple sources of security logs. Various network components, such as firewalls, IDP, DLP, and AV systems, as well as the operating system, platforms, and applications, will generate log files. Many will be related to security, and those need to be part of the log file strategy. Others, which are not related to security, are better excluded from the strategy. Logs should include all user activities, exceptions, and security events, and you should keep them for a predetermined time for future investigations.

To determine which log files to include, answer the following questions:

• Who are the users of the cloud systems? How do they register, how do they authenticate, and how are they authorized to access resources?

• Which applications access cloud systems? How do they get credentials, how do they authenticate, and how are they authorized for such access?

• Which users have privileged (administrative-level) access to AWS infrastructure, operating systems, and applications? How do they authenticate, and how are they authorized for such access?
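The tamper-proof log protection called for in Table 24 is often approximated with hash chaining: each entry's hash covers its own content plus the previous entry's hash, so modifying or deleting an earlier record invalidates every hash that follows. The Python sketch below is only an illustration of that idea (the record fields are hypothetical), not a production audit-trail system.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(chain, record):
    """Append a record whose hash covers the previous entry's hash,
    linking the entries into a tamper-evident chain."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev_hash, "hash": entry_hash})

def verify_chain(chain):
    """Recompute every hash from the start; any edited, reordered,
    or removed entry breaks verification from that point onward."""
    prev_hash = GENESIS
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

In practice the same property is obtained from file integrity monitoring or append-only log services; the design point is simply that verification depends on the entire history, so regular users cannot quietly erase evidence.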
Many services provide built-in access control audit trails (for example, Amazon S3 and Amazon EMR provide such logs), but in some cases your business requirements for logging might be higher than what's available from the native service log. In such cases, consider using a privilege escalation gateway to manage access control logs and authorization. When you use a privilege escalation gateway, you centralize all access to the system via a single (clustered) gateway. Instead of your operating systems or applications making direct calls to the AWS infrastructure, all requests are performed by proxy systems that act as trusted intermediaries to the infrastructure. Often, such systems are required to provide or do the following:

• Automated password management for privileged access: Privileged access control systems can rotate passwords and credentials automatically, based on given policies, using built-in connectors for Microsoft Active Directory, UNIX, LDAP, MySQL, etc.

• Regularly run least-privilege checks, using the AWS IAM user Access Advisor and AWS IAM user last-used access keys.

• User authentication on the front end and delegated access to services from AWS on the back end: Typically a website that provides single sign-on for all users. Users are assigned access privileges based on their authorization profiles. A common approach is to use token-based authentication for the website and acquire click-through access to the other systems allowed in the user's profile.

• Tamper-proof audit trail storage of all critical activities.

• Different sign-on credentials for shared accounts: Sometimes multiple users need to share the same password. A privilege escalation gateway can allow remote access without disclosing the shared account.

• Restrict leapfrogging or remote desktop hopping by allowing access only to target systems.

• Manage commands that can be used during sessions: For interactive sessions like SSH, appliance management, or the AWS CLI, such solutions can enforce policies by limiting the range of available commands and actions.

• Provide an audit trail for terminal and GUI-based sessions, for compliance and security purposes.

• Log everything, and alert based on given thresholds for the policies.

Using Change Management Logs

By managing security logs, you can also track changes. These might include planned changes that are part of the organization's change control process (sometimes referred to as MACD – Move/Add/Change/Delete), ad hoc changes, or unexpected changes such as incidents. Changes might occur on the infrastructure side of the system, but they might also be related to other categories, such as changes in code repositories, gold image/application inventory changes, process and policy changes, or documentation changes.

As a best practice, we recommend employing a tamper-proof log repository for all of the above categories of changes. Correlate and interconnect the change management and log management systems. You need a dedicated user with privileges for deleting or modifying change logs; for most systems, devices, and applications, change logs should be tamper-proof, and regular users should not have privileges to manage the logs. Regular users should be unable to erase evidence from change logs. AWS customers sometimes use file integrity monitoring or change detection software on logs to ensure that existing log data cannot be changed without generating alerts, while adding new entries does not generate alerts.

All logs for system components must be reviewed at a minimum on a daily basis. Log reviews must include those servers that perform security functions, such as intrusion detection system (IDS) and authentication, authorization, and accounting protocol (AAA) servers (for example, RADIUS). To facilitate this process, you can use log harvesting, parsing, and alerting tools.

Managing Logs for Critical Transactions

For critical applications, all Add, Change/Modify, and Delete activities or transactions must
generate a log entry Each log entry should contain the following information: • User identification information • Type of event • Date and time stamp ArchivedAmazon Web Services AWS Security Best Practices Page 69 • Success or failure indication • Origination of event • Identity or name of affected data system component or resource Protecting Log Information Logging facilities and log information must be protected against tampering and unauthorized access Administrator and operator logs are often targets for erasing trails of activities Common controls for pr otecting log information include the following: • Verifying that audit trails are enabled and active for system components • Ensuring that only individuals who have a job related need can view audit trail files • Confirming that current audit trail files are pro tected from unauthorized modifications via access control mechanisms physical segregation and/or network segregation • Ensuring that current audit trail files are promptly backed up to a centralized log server or media that is difficult to alter • Verifying that logs for external facing technologies (for example wireless firewalls DNS mail) are offloaded or copied onto a secure centralized internal log server or media • Using file integrity monitoring or change detection software for logs by examining syste m settings and monitored files and results from monitoring activities • Obtaining and examining security policies and procedures to verify that they include procedures to review security logs at least daily and that follow up to exceptions is required • Verify ing that regular log reviews are performed for all system components • Ensuring that security policies and procedures include audit log retention policies and require audit log retention for a period of time defined by the business and compliance requiremen ts ArchivedAmazon Web Services AWS Security Best Practices Page 70 Logging Faults In addition to monitoring MACD events monitor software 
or component failure Faults might be the result of hardware or software failure and while they might have service and data availability implications they might not be related to a security incident Or a service failure might be the result of deliberate malicious activity such as a denial of service attack In any case faults should generate alerts and then you should use event analysis and correlation techniques to determine t he cause of the fault and whether it should trigger a security response Conclusion AWS Cloud Platform provides a number of important benefits to modern businesses including flexibility elasticity utility billing and reduced time to market It provid es a range of security services and features that that you can use to manage security of your assets and data in the AWS While AWS provides an excellent service management layer around infrastructure or platform services businesses are still responsible for protecting the confidentiality integrity and availability of their data in the cloud and for meeting specific business requirements for information protection Conventional security and compliance concepts still apply in the cloud Using the various best practices highlighted in this whitepaper we encourage you to build a set of security policies and processes for your organization so you can deploy applications and data quickly and securely Contributors Contributors to this document include : • Dob Todorov • Yinal Ozkan Further Reading For additional information see: • Amazon Web Services: Overview of Security Processes • Amazon Web Services Risk and Compliance Whitepaper ArchivedAmazon Web Services AWS Security Best Practices Page 71 • Amazon VPC Network Connectivity Options • AWS SDK support for Amazon S3 client side encryption • Amazon S3 Default Encryption f or S3 Buckets • AWS Security Partner Solutions • Identity Federation Sample Application for an Active Directory Use Case • Single Sign on to Amazon EC2 NET Applications from an On Premises 
Windows Domain
• Authenticating Users of AWS Mobile Applications with a Token Vending Machine
• Client-Side Data Encryption with the AWS SDK for Java and Amazon S3
• Amazon Web Services Acceptable Use Policy
• ENISA Procure Secure: A Guide to Monitoring of Security Service Levels in Cloud Contracts
• The PCI Data Security Standard
• ISO/IEC 27001:2013

Document Revisions
August 2016 – First publication",General,consultant,Best Practices
AWS_Security_Checklist,AWS Security Checklist
This checklist provides customer recommendations that align with the Well-Architected Framework Security Pillar.

Identity & Access Management
1. Secure your AWS account. Use AWS Organizations to manage your accounts, use the root user by exception with multi-factor authentication (MFA) enabled, and configure account contacts.
2. Rely on a centralized identity provider. Centralize identities using either AWS Single Sign-On or a third-party provider to avoid routinely creating IAM users or using long-term access keys; this approach makes it easier to manage multiple AWS accounts and federated applications.
3. Use multiple AWS accounts to separate workloads and workload stages, such as production and non-production. Multiple AWS accounts allow you to separate data and resources, and enable the use of Service Control Policies to implement guardrails. AWS Control Tower can help you easily set up and govern a multi-account AWS environment.
4. Store and use secrets securely. Where you cannot use temporary credentials, such as tokens from AWS Security Token Service, store your secrets (like database passwords) using AWS Secrets Manager, which handles encryption, rotation, and access control.

Detection
1. Enable foundational services: AWS CloudTrail, Amazon GuardDuty, and AWS Security Hub. For all your AWS accounts, configure CloudTrail to log API activity, use GuardDuty for continuous monitoring, and use AWS Security Hub for a comprehensive view of your security posture.
2. Configure service and application-level logging. In addition to your application logs, enable logging at the service level, such as Amazon VPC Flow Logs and Amazon S3, CloudTrail, and Elastic Load Balancing access logging, to gain visibility into events. Configure logs to flow to a central account, and protect them from manipulation or deletion.
3. Configure monitoring and alerts, and investigate events. Enable AWS Config to track the history of resources, and Config Managed Rules to automatically alert or remediate on undesired changes. For all your sources of logs and events, from AWS CloudTrail to Amazon GuardDuty and your application logs, configure alerts for high-priority events and investigate.

Infrastructure Protection
1. Patch your operating system, applications, and code. Use AWS Systems Manager Patch Manager to automate the patching process of all systems and code for which you are responsible, including your OS, applications, and code dependencies.
2. Implement distributed denial-of-service (DDoS) protection for your internet-facing resources. Use Amazon CloudFront, AWS WAF, and AWS Shield to provide layer 7 and layer 3/layer 4 DDoS protection.
3. Control access using VPC security groups and subnet layers. Use security groups for controlling inbound and outbound traffic, and automatically apply rules for both security groups and WAFs using AWS Firewall Manager. Group different resources into different subnets to create routing layers; for example, database resources do not need a route to the internet.

Data Protection
1. Protect data at rest. Use AWS Key Management Service (KMS) to protect data at rest across a wide range of AWS services and your applications. Enable default encryption for Amazon EBS volumes and Amazon S3 buckets.
2. Encrypt data in transit. Enable encryption for all network traffic, including Transport Layer Security (TLS) for web-based network infrastructure you control, using AWS Certificate Manager to manage and provision certificates.
3. Use mechanisms to keep people away from data. Keep all users away from directly accessing sensitive data and systems; for example, provide an Amazon QuickSight dashboard to business users instead of direct access to a database, and perform actions at a distance using AWS Systems Manager automation documents and Run Command.

Incident Response
1. Ensure you have an incident response (IR) plan. Begin your IR plan by building runbooks to respond to unexpected events in your workload. For details, see the AWS Security Incident Response Guide.
2. Make sure that someone is notified to take action on critical findings. Begin with GuardDuty findings: turn on GuardDuty and ensure that someone with the ability to take action receives the notifications. Automatically creating trouble tickets is the best way to ensure that GuardDuty findings are integrated with your operational processes.
3. Practice responding to events. Simulate and practice incident response by running regular game days, incorporating the lessons learned into your incident management plans, and continuously improving them.

For more best practices, see the Security Pillar of the Well-Architected Framework and the Security Documentation.

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2020 Amazon Web Services Inc or its affiliates All rights reserved,General,consultant,Best Practices
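As a minimal illustration of the "Protect data at rest" item above, the sketch below applies default encryption and blocks public access on a single S3 bucket. This is a hedged example, not a prescribed implementation: the function name, stub, and bucket name are illustrative, while the two S3 API calls (`put_bucket_encryption`, `put_public_access_block`) are real boto3 client operations; in practice you would pass `boto3.client("s3")`.

```python
# Illustrative sketch, assuming a boto3-style S3 client is supplied by the
# caller (e.g., s3_client = boto3.client("s3")). Names are hypothetical.

def harden_bucket(s3_client, bucket_name):
    """Enable SSE-KMS default encryption, then block all public access."""
    # Checklist item "Protect data at rest": default encryption for S3 buckets.
    s3_client.put_bucket_encryption(
        Bucket=bucket_name,
        ServerSideEncryptionConfiguration={
            "Rules": [
                {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
            ]
        },
    )
    # Defense in depth: ensure the bucket cannot be made public.
    s3_client.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
```

A central log-archive bucket from the Detection section would be a natural candidate for this kind of hardening, since the checklist calls for protecting centralized logs from manipulation or deletion.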
AWS_Serverless_MultiTier_Architectures_Using_Amazon_API_Gateway_and_AWS_Lambda,AWS Serverless Multi-Tier Architectures with Amazon API Gateway and AWS Lambda
First published November 2015; updated October 20, 2021

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. © 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents: Introduction • Three-tier architecture overview • Serverless logic tier • AWS Lambda • API Gateway • Data tier • Presentation tier • Sample architecture patterns • Mobile backend • Single-page application • Web application • Microservices with Lambda • Conclusion • Contributors • Further reading • Document revisions

Abstract
This whitepaper illustrates how innovations from Amazon Web Services (AWS) can be used to change the way you design multi-tier architectures and implement popular patterns such as microservices, mobile backends, and single-page applications. Architects and developers can use Amazon API Gateway, AWS Lambda, and other services to reduce the development and operations cycles required to create and manage multi-tiered applications.

Introduction
The multi-tier application (three-tier, n-tier, and so forth) has been a cornerstone architecture pattern for decades and remains a popular pattern for user-facing applications. Although the language used to describe a multi-tier architecture varies, a multi-tier application generally consists of the following components:
• Presentation tier – The component that the user directly interacts with (for example, webpages and mobile app UIs)
• Logic tier – The code required to translate user actions to application functionality (for example, CRUD database operations and data processing)
• Data tier – The storage media (for example, databases, object stores, caches, and file systems) that hold the data relevant to the application

The multi-tier architecture pattern provides a general framework to ensure that decoupled and independently scalable application components can be separately developed, managed, and maintained (often by distinct teams). As a consequence of this pattern, in which the network acts as the boundary between tiers (a tier must make a network call to interact with another tier), developing a multi-tier application often requires creating many undifferentiated application components. Some of these components include:
• Code that defines a message queue for communication between tiers
• Code that defines an application programming interface (API) and a data model
• Security-related code that ensures appropriate access to the application

All of these examples can be considered "boilerplate" components that, while necessary in multi-tier applications, do not vary greatly in their implementation from one application to the next. AWS offers a number of services that enable the creation of serverless multi-tier applications, greatly simplifying the process of deploying such applications to production and removing the overhead associated with traditional server management. Amazon API Gateway, a service for creating and managing APIs, and AWS Lambda, a service for running arbitrary code functions, can be used together to simplify the creation of robust multi-tier applications. API Gateway's integration with AWS Lambda enables user-defined code functions to be initiated directly through HTTPS requests. Regardless of the request volume, both API Gateway and Lambda scale automatically to support exactly the needs of your application (refer to Amazon API Gateway quotas and important notes for scalability information). By combining these two services, you can create a tier that enables you to write only the code that matters to your application, and not focus on the various other undifferentiating aspects of implementing a multi-tiered architecture, such as architecting for high availability, writing client SDKs, server and operating system (OS) management, scaling, and implementing a client authorization mechanism. API Gateway and Lambda enable the creation of a serverless logic tier. Depending on your application requirements, AWS also provides options to create a serverless presentation tier (for example, with Amazon CloudFront and Amazon Simple Storage Service (Amazon S3)) and a serverless data tier (for example, Amazon Aurora and Amazon DynamoDB). This whitepaper focuses on the most popular example of a multi-tiered architecture, the three-tier web application. However, you can apply this multi-tier pattern well beyond a typical three-tier web application.

Three-tier architecture overview
The three-tier architecture is the most popular implementation of a multi-tier architecture and consists of a single presentation tier, a logic tier, and a data tier. The following illustration shows an example of a simple, generic three-tier application. [Figure: Architectural pattern for a three-tier application] There are many great online resources where you can learn more about the general three-tier architecture pattern. This whitepaper focuses on a specific implementation pattern for this architecture using API Gateway and Lambda.

Serverless logic tier
The logic tier of the three-tier architecture represents
the brains of the application. This is where using API Gateway and AWS Lambda can have the most impact compared to a traditional server-based implementation. The features of these two services enable you to build a serverless application that is highly available, scalable, and secure. In a traditional model, your application could require thousands of servers; however, by using Amazon API Gateway and AWS Lambda, you are not responsible for server management in any capacity. In addition, by using these managed services together, you gain the following benefits:
• Lambda
o No OS to choose, secure, patch, or manage
o No servers to right-size, monitor, or scale
o Reduced risk to your cost from overprovisioning
o Reduced risk to your performance from underprovisioning
• API Gateway
o Simplified mechanisms to deploy, monitor, and secure APIs
o Improved API performance through caching and content delivery

AWS Lambda
AWS Lambda is a compute service that enables you to run arbitrary code functions in any of the supported languages (Node.js, Python, Ruby, Java, Go, .NET; for more information, refer to the Lambda FAQs) without provisioning, managing, or scaling servers. Lambda functions run in a managed, isolated container and are launched in response to an event, which can be one of several programmatic triggers that AWS makes available, called event sources. Refer to the Lambda FAQs for all event sources. Many popular use cases for Lambda revolve around event-driven data processing workflows, such as processing files stored in Amazon S3 or streaming data records from Amazon Kinesis. When used in conjunction with API Gateway, a Lambda function performs the functionality of a typical web service: it initiates code in response to a client HTTPS request. API Gateway acts as the front door for your logic tier, and Lambda invokes the application code. Your business logic goes here, no servers necessary. Lambda requires that you write code functions, called handlers, which run when initiated by an event. To use Lambda with API Gateway, you can configure API Gateway to launch handler functions when an HTTPS request to your API occurs. In a serverless multi-tier architecture, each of the APIs you create in API Gateway will integrate with a Lambda function (and the handler within) that invokes the business logic required. Using AWS Lambda functions to compose the logic tier enables you to define the desired level of granularity for exposing the application functionality (one Lambda function per API, or one Lambda function per API method). Inside the Lambda function, the handler can reach out to any other dependencies (for example, other methods you've uploaded with your code, libraries, native binaries, and external web services) or even other Lambda functions. Creating or updating a Lambda function requires either uploading code as a Lambda deployment package in a .zip file to an Amazon S3 bucket, or packaging code as a container image along with all the dependencies. Functions can be deployed using different methods, such as the AWS Management Console, the AWS Command Line Interface (AWS CLI), or infrastructure-as-code templates or frameworks such as AWS CloudFormation, the AWS Serverless Application Model (AWS SAM), or the AWS Cloud Development Kit (AWS CDK). When you create your function using any of these methods, you specify which method inside your deployment package will act as the request handler. You can reuse the same deployment package for multiple Lambda function definitions, where each Lambda function might have a unique handler within the same deployment package.

Lambda security
To run a Lambda function, it must be invoked by an event or service that is permitted by an AWS Identity and Access Management (IAM) policy. Using IAM policies, you can create a Lambda function that cannot be initiated at all unless it is invoked by an API Gateway resource that you define. Such a policy can be defined using a resource-based policy across various AWS services. Each Lambda function assumes an IAM role that is assigned when the Lambda function is deployed. This IAM role defines the other AWS services and resources your Lambda function can interact with (for example, an Amazon DynamoDB table or Amazon S3). In the context of a Lambda function, this is called an execution role. Do not store sensitive information inside a Lambda function. IAM handles access to AWS services through the Lambda execution role; if you need to access other credentials (for example, database credentials or API keys) from inside your Lambda function, you can use AWS Key Management Service (AWS KMS) with environment variables, or use a service such as AWS Secrets Manager to keep this information safe when not in use.

Performance at scale
Code pulled in as a container image from Amazon Elastic Container Registry (Amazon ECR), or from a .zip file uploaded to Amazon S3, runs in an isolated environment managed by AWS. You do not have to scale your Lambda functions: each time an event notification is received by your function, AWS Lambda locates available capacity within its compute fleet and runs your code with the runtime, memory, disk, and timeout configurations that you define. With this pattern, AWS can start as many copies of your function as needed. A Lambda-based logic tier is always right-sized for your customer needs. The ability to quickly absorb surges in traffic through managed scaling and concurrent code initiation, combined with Lambda's pay-per-use pricing, enables you to always meet customer requests while not paying for idle compute capacity.

Serverless deployment and management
To help you deploy and manage your Lambda functions, use the AWS Serverless Application Model (AWS SAM), an open-source framework that includes:
• AWS SAM template specification – Syntax used to define your functions and describe their environments, permissions, configurations, and events for
simplified upload and deployment
• AWS SAM CLI – Commands that enable you to verify AWS SAM template syntax, invoke functions locally, debug Lambda functions, and package and deploy functions

You can also use the AWS CDK, a software development framework for defining cloud infrastructure using programming languages and provisioning it through CloudFormation. The AWS CDK provides an imperative way to define AWS resources, whereas AWS SAM provides a declarative way. Typically, when you deploy a Lambda function, it is invoked with permissions defined by its assigned IAM role and is able to reach internet-facing endpoints. As the core of your logic tier, AWS Lambda is the component directly integrating with the data tier. If your data tier contains sensitive business or user information, it is important to ensure that this data tier is appropriately isolated (in a private subnet). You can configure a Lambda function to connect to private subnets in a virtual private cloud (VPC) in your AWS account if you want the Lambda function to access resources that you cannot expose publicly, like a private database instance. When you connect a function to a VPC, Lambda creates an elastic network interface for each subnet in your function's VPC configuration, and the elastic network interface is used to access your internal resources privately. [Figure: Lambda architecture pattern inside a VPC] The use of Lambda with a VPC means that databases and other storage media that your business logic depends on can be made inaccessible from the internet. The VPC also ensures that the only way to interact with your data from the internet is through the APIs that you've defined and the Lambda code functions that you have written.

API Gateway
API Gateway is a fully managed service that enables developers to create, publish, maintain, monitor, and secure APIs at any scale. Clients (that is, presentation tiers) integrate with the APIs exposed through API Gateway using standard HTTPS requests. The applicability of APIs exposed through API Gateway to a service-oriented multi-tier architecture lies in the ability to separate individual pieces of application functionality and expose this functionality through REST endpoints. API Gateway has specific features and qualities that can add powerful capabilities to your logic tier.

Integration with Lambda
Amazon API Gateway supports both REST APIs and HTTP APIs. An API Gateway API is made up of resources and methods. A resource is a logical entity that an app can access through a resource path (for example, /tickets). A method corresponds to an API request that is submitted to an API resource (for example, GET /tickets). API Gateway enables you to back each method with a Lambda function; that is, when you call the API through the HTTPS endpoint exposed in API Gateway, API Gateway invokes the Lambda function. You can connect API Gateway and Lambda functions using proxy integrations and non-proxy integrations.

Proxy integrations
In a proxy integration, the entire client HTTPS request is sent as-is to the Lambda function. API Gateway passes the entire client request as the event parameter of the Lambda handler function, and the output of the Lambda function is returned directly to the client (including the status code, headers, and so forth).

Non-proxy integrations
In a non-proxy integration, you configure how the parameters, headers, and body of the client request are passed to the event parameter of the Lambda handler function. Additionally, you configure how the Lambda output is translated back to the user.

Note: API Gateway can also proxy to additional serverless resources outside of AWS Lambda, such as mock integrations (useful for initial application development) and direct proxy to S3 objects.

Stable API performance across regions
Each deployment of API Gateway includes an Amazon CloudFront distribution under the hood. CloudFront is a content delivery service that uses Amazon's global network of edge locations as connection points for clients using your API. This helps decrease the response latency of your API. By using multiple edge locations across the world, CloudFront also provides capabilities to combat distributed denial-of-service (DDoS) attack scenarios. For more information, review the AWS Best Practices for DDoS Resiliency whitepaper. You can improve the performance of specific API requests by using API Gateway to store responses in an optional in-memory cache. This approach not only provides performance benefits for repeated API requests, but it also reduces the number of times your Lambda functions are invoked, which can reduce your overall cost.

Encourage innovation and reduce overhead with built-in features
The development cost to build any new application is an investment. Using API Gateway can reduce the amount of time required for certain development tasks and lower the total development cost, enabling organizations to more freely experiment and innovate. During initial application development phases, implementation of logging and metrics gathering is often neglected in order to deliver a new application more quickly. This can lead to technical debt and operational risk when deploying these features to an application running in production. API Gateway integrates seamlessly with Amazon CloudWatch, which collects and processes raw data from API Gateway into readable, near-real-time metrics for monitoring the API implementation. API Gateway also supports access logging with configurable reports and AWS X-Ray tracing for debugging. Each of these features requires no code to be written and can be adjusted in applications running in production without risk to the core business logic. The overall lifetime of an application might be unknown, or it might be known to be short-lived. Creating a business case for building such applications can be
made easier if your starting point alread y includes the managed features that API Gateway provides and if you only incur infrastructure costs after your APIs begin receiving requests For more information refer to Amazon API Gateway pr icing Amazon Web Services AWS Serverless Multi Tier Architectures Page 9 Iterate rapidly stay agile Using API Gateway and AWS Lambda to build the logic tier of your API enables you to quickly adapt to the changing demands of your user base by simplifying API deployment and version management Stage deployment When you deploy an API in API Gateway you must associate the deployment with an API Gateway stage—each stage is a snapshot of the API and is made available for client apps to call Using this convention you can easily deploy apps to dev test stage or prod stages and move deployments between stages Each time you deploy your API to a stage you create a different version of the API which can be r everted if necessary These features enable existing functionality and client dependencies to continue undis turbed while new functionality is released as a separate API version Decouple d integration with Lambda The integration between API in API Gateway and Lambda function can be decoupled using API Gateway stage variables and a Lambda function alias This simp lifies and speeds up the API deployment Instead of configuring the Lambda function name or alias in the API directly you can configure stage variable in API which can point to a particular alias in the Lambda function During deployment change the stage variable value to point to a Lambda function alias and API will run the Lambda function version behind the Lambda alias for a particular stage Canary release deployment Canary release is a software development strategy in which a new version of an API is deployed for testing purposes and the base version remains deployed as a production release for normal operations on the same stage In a canary release deployment tota l API traffic is 
Custom domain names

You can provide an intuitive, business-friendly URL for an API instead of the URL that API Gateway generates. API Gateway provides features to configure a custom domain for your APIs. With custom domain names, you can set up your API's hostname and choose a multi-level base path (for example, myservice, myservice/cat/v1, or myservice/dog/v2) to map the alternative URL to your API.

Prioritize API security

All applications must ensure that only authorized clients have access to their API resources. When designing a multi-tier application, you can take advantage of several different ways in which API Gateway contributes to securing your logic tier:

Transit security. All requests to your APIs can be made through HTTPS to enable encryption in transit. API Gateway provides built-in SSL/TLS certificates; if you use the custom domain name option for public APIs, you can provide your own SSL/TLS certificate using AWS Certificate Manager. API Gateway also supports mutual TLS (mTLS) authentication. Mutual TLS enhances the security of your API and helps protect your data from attacks such as client spoofing or man-in-the-middle attacks.

API authorization. Each resource and method combination that you create as part of your API is granted a unique Amazon Resource Name (ARN) that can be referenced in AWS Identity and Access Management (IAM) policies. There are three general methods to add authorization to an API in API Gateway:

• IAM roles and policies. Clients use AWS Signature Version 4 (SigV4) authorization and IAM policies for API access. The same credentials can restrict or permit access to other AWS services and resources as needed (for example, S3 buckets or Amazon DynamoDB tables).

• Amazon Cognito user pools. Clients sign in through an Amazon Cognito user pool and obtain tokens, which are included in the Authorization header of a request.

• Lambda authorizer. Define a Lambda function that implements a custom authorization scheme that uses a bearer token strategy (for example, OAuth or SAML) or uses request parameters to identify users.

Access restrictions. API Gateway supports the generation of API keys and the association of these keys with a configurable usage plan. You can monitor API key usage with CloudWatch. API Gateway supports throttling, rate limits, and burst rate limits for each method in your API.

Private APIs. Using API Gateway, you can create private REST APIs that can only be accessed from your virtual private cloud in Amazon VPC by using an interface VPC endpoint. This is an endpoint network interface that you create in your VPC. Using resource policies, you can enable or deny access to your API from selected VPCs and VPC endpoints, including across AWS accounts. Each endpoint can be used to access multiple private APIs. You can also use AWS Direct Connect to establish a connection from an on-premises network to Amazon VPC and access your private API over that connection. In all cases, traffic to your private API uses secure connections and does not leave the Amazon network; it is isolated from the public internet.

Firewall protection using AWS WAF. Internet-facing APIs are vulnerable to malicious attacks. AWS WAF is a web application firewall that helps protect APIs from such attacks, including common web exploits such as SQL injection and cross-site scripting. You can use AWS WAF with API Gateway to help protect your APIs.

Data tier

Using AWS Lambda as your logic tier does not limit the data storage options available in your data tier. A Lambda function connects to any data storage option by including the appropriate database driver in the Lambda deployment package, and uses IAM role-based access or encrypted credentials (through AWS KMS or AWS Secrets Manager).
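As a concrete but hypothetical illustration of this model, the sketch below builds a Lambda handler for a /tickets-style lookup with its data store injected, so the same logic can run against a real boto3 DynamoDB Table object or a lightweight test double. The table shape, key names, and FakeTable class are invented for the example.

```python
import json

def make_handler(table):
    """Build a Lambda handler with its data-store dependency injected.

    In a real deployment, `table` would be boto3.resource("dynamodb").Table("tickets");
    for local testing, any object with a compatible get_item method works.
    """
    def handler(event, context=None):
        ticket_id = event["pathParameters"]["id"]
        item = table.get_item(Key={"id": ticket_id}).get("Item")
        if item is None:
            return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
        return {"statusCode": 200, "body": json.dumps(item)}
    return handler

class FakeTable:
    """Minimal in-memory stand-in for a DynamoDB Table (illustrative only)."""
    def __init__(self, items):
        self._items = items
    def get_item(self, Key):
        item = self._items.get(Key["id"])
        return {"Item": item} if item else {}

handler = make_handler(FakeTable({"t1": {"id": "t1", "seat": "12A"}}))
print(handler({"pathParameters": {"id": "t1"}})["statusCode"])  # 200
```

Keeping the data-store handle out of the handler's body also keeps the function easy to exercise locally before it is packaged and deployed.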
Choosing a data store for your application is highly dependent on your application requirements. AWS offers a number of serverless and non-serverless data stores that you can use to compose the data tier of your application.

Serverless data storage options

• Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance.

• Amazon Aurora is a MySQL-compatible and PostgreSQL-compatible relational database built for the cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open-source databases. Aurora offers both serverless and traditional usage models.

• Amazon DynamoDB is a key-value and document database that delivers single-digit-millisecond performance at any scale. It is a fully managed, serverless, multi-Region, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications.

• Amazon Timestream is a fast, scalable, fully managed time-series database service for IoT and operational applications that makes it simple to store and analyze trillions of events per day at 1/10th the cost of relational databases. Driven by the rise of IoT devices, IT systems, and smart industrial machines, time-series data (data that measures how things change over time) is one of the fastest-growing data types.

• Amazon Quantum Ledger Database (Amazon QLDB) is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log owned by a central trusted authority. Amazon QLDB tracks each and every application data change and maintains a complete and verifiable history of changes over time.

• Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra-compatible database
service. With Amazon Keyspaces, you can run your Cassandra workloads on AWS using the same Cassandra application code and developer tools that you use today. You don't have to provision, patch, or manage servers, and you don't have to install, maintain, or operate software. Amazon Keyspaces is serverless, so you pay for only the resources you use, and the service can automatically scale tables up and down in response to application traffic.

• Amazon Elastic File System (Amazon EFS) provides a simple, serverless, set-and-forget elastic file system that lets you share file data without provisioning or managing storage. It can be used with AWS Cloud services and on-premises resources, and is built to scale on demand to petabytes without disrupting applications. With Amazon EFS, your file systems grow and shrink automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS can be mounted from a Lambda function, which makes it a viable file storage option for APIs.

Non-serverless data storage options

• Amazon Relational Database Service (Amazon RDS) is a managed web service that enables you to set up, operate, and scale a relational database using several engines (Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server), running on several different database instance types that are optimized for memory, performance, or I/O.

• Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud.

• Amazon ElastiCache is a fully managed deployment of Redis or Memcached. Seamlessly deploy, run, and scale popular open-source-compatible in-memory data stores.

• Amazon Neptune is a fast, reliable, fully managed graph database service that makes it simple to build and run applications that work with highly connected datasets. Neptune supports popular graph models (property graphs and W3C Resource Description Framework (RDF)) and their
respective query languages, enabling you to easily build queries that efficiently navigate highly connected datasets.

• Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads.

• Finally, you can also use data stores running independently on Amazon EC2 as the data tier of a multi-tier application.

Presentation tier

The presentation tier is responsible for interacting with the logic tier through the API Gateway REST endpoints exposed over the internet. Any HTTPS-capable client or device can communicate with these endpoints, giving your presentation tier the flexibility to take many forms (desktop applications, mobile apps, webpages, IoT devices, and so forth). Depending on your requirements, your presentation tier can use the following AWS serverless offerings:

• Amazon Cognito – A serverless user identity and data synchronization service that enables you to add user sign-up, sign-in, and access control to your web and mobile apps quickly and efficiently. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers through SAML 2.0.

• Amazon S3 with CloudFront – Enables you to serve static websites, such as single-page applications, directly from an S3 bucket without requiring you to provision a web server. You can use CloudFront as a managed content delivery network (CDN) to improve performance, and enable SSL/TLS using managed or custom certificates.

AWS Amplify is a set of tools and services that can be used together or on their own to help front-end web and mobile developers build scalable full-stack applications powered by AWS. Amplify offers a fully managed service for deploying and hosting static web applications globally, served by Amazon's reliable CDN with hundreds of points of presence and with built-in CI/CD workflows that accelerate your application release cycle. Amplify supports popular web frameworks, including JavaScript, React, Angular, Vue, and Next.js, and mobile platforms, including Android, iOS, React Native, Ionic, and Flutter.

Depending on your networking configuration and application requirements, you might need to make your API Gateway APIs cross-origin resource sharing (CORS) compliant. CORS compliance allows web browsers to directly invoke your APIs from within static webpages.

When you deploy a website with CloudFront, you are provided a CloudFront domain name to reach your application (for example, d2d47p2vcczkh2.cloudfront.net). You can use Amazon Route 53 to register domain names and direct them to your CloudFront distribution, or to direct domain names you already own to your CloudFront distribution. This enables users to access your site using a familiar domain name. Note that you can also assign a custom domain name (using Route 53) to your API Gateway distribution, which enables users to invoke APIs using familiar domain names.

Sample architecture patterns

You can implement popular architecture patterns using API Gateway and AWS Lambda as your logic tier. This whitepaper includes the most popular architecture patterns that use an AWS Lambda-based logic tier:

• Mobile backend – A mobile application communicates with API Gateway and Lambda to access application data. This pattern can be extended to generic HTTPS clients that don't use serverless AWS resources to host presentation tier resources (such as desktop clients, a web server running on EC2, and so forth).

• Single-page application – A single-page application hosted in Amazon S3 and CloudFront communicates with API Gateway and AWS Lambda to access application data.

• Web application – A general-purpose, event-driven web application back end that uses AWS Lambda with API Gateway for its business logic. It also uses DynamoDB as its database and Amazon Cognito for user management. All static content is hosted using Amplify.
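Browser-hosted patterns such as the single-page application rely on the CORS support described above. A minimal sketch of a Lambda proxy-integration response that returns the required headers follows; the allowed origin reuses the example CloudFront domain from earlier and would be your own domain in practice.

```python
import json

# Illustrative origin; in practice this is your CloudFront or custom domain.
ALLOWED_ORIGIN = "https://d2d47p2vcczkh2.cloudfront.net"

def handler(event, context=None):
    """Return an API Gateway Lambda-proxy response carrying CORS headers,
    so browsers serving the static site can call this API directly."""
    return {
        "statusCode": 200,
        "headers": {
            "Access-Control-Allow-Origin": ALLOWED_ORIGIN,
            "Access-Control-Allow-Methods": "GET,OPTIONS",
            "Access-Control-Allow-Headers": "Authorization,Content-Type",
        },
        "body": json.dumps({"ok": True}),
    }
```

For REST APIs, API Gateway can also generate the OPTIONS preflight responses for you when you enable CORS on a resource.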
In addition to these patterns, this whitepaper discusses the applicability of AWS Lambda and API Gateway to a general microservice architecture. A microservice architecture is a popular pattern that, although not a standard three-tier architecture, involves decoupling application components and deploying them as stateless, individual units of functionality that communicate with each other.

Mobile backend

[Figure: Architectural pattern for a serverless mobile backend]

Table 1: Mobile backend tier components

Presentation – Mobile application running on a user device.

Logic – API Gateway with AWS Lambda. This architecture shows three exposed services (/tickets, /shows, and /info). API Gateway endpoints are secured by Amazon Cognito user pools. In this method, users sign in to Amazon Cognito user pools (using a federated third party if necessary) and receive access and ID tokens that are used to authorize API Gateway calls. Each Lambda function is assigned its own Identity and Access Management (IAM) role to provide access to the appropriate data source.

Data – DynamoDB is used for the /tickets and /shows services. Amazon RDS is used for the /info service; this Lambda function retrieves the Amazon RDS credentials from Secrets Manager and uses an elastic network interface to access the private subnet.

Single-page application

[Figure: Architectural pattern for a serverless single-page application]

Table 2: Single-page application components

Presentation – Static website content is hosted in Amazon S3 and distributed by CloudFront. AWS Certificate Manager allows a custom SSL/TLS certificate to be used.

Logic – API Gateway with AWS Lambda. This architecture shows three exposed services (
/tickets, /shows, and /info). API Gateway endpoints are secured by a Lambda authorizer. In this method, users sign in through a third-party identity provider and obtain access and ID tokens. These tokens are included in API Gateway calls, and the Lambda authorizer validates them and generates an IAM policy containing API invocation permissions. Each Lambda function is assigned its own IAM role to provide access to the appropriate data source.

Data – DynamoDB is used for the /tickets and /shows services. ElastiCache is used by the /shows service to improve database performance; cache misses are sent to DynamoDB. Amazon S3 is used to host static content used by the /info service.

Web application

[Figure: Architectural pattern for a serverless web application]

Table 3: Web application components

Presentation – The front-end application is all static content (HTML, CSS, JavaScript, and images), generated by React utilities such as create-react-app. Amazon CloudFront hosts all these objects. When used, the web application downloads all the resources to the browser and runs from there. The web application connects to the back end by calling the APIs.

Logic – The logic layer is built using Lambda functions fronted by API Gateway REST APIs. This architecture shows multiple exposed services, with multiple different Lambda functions, each handling a different aspect of the application. The Lambda functions sit behind API Gateway and are accessible via API URL paths. User authentication is handled using Amazon Cognito user pools or federated identity providers; API Gateway offers out-of-the-box integration with Amazon Cognito. Only after a user is authenticated does the client receive a JSON Web Token (JWT), which it then uses when making API calls. Each Lambda function is assigned its own IAM role to provide access to the appropriate data
source.

Data – In this particular example, DynamoDB is used for data storage, but other purpose-built Amazon database or storage services can be used, depending on the use case and usage scenario.

Microservices with Lambda

[Figure: Architectural pattern for microservices with Lambda]

The microservice architecture pattern is not bound to the typical three-tier architecture; however, this popular pattern can realize significant benefits from the use of serverless resources. In this architecture, each application component is decoupled and independently deployed and operated. An API created with API Gateway, and functions subsequently launched by AWS Lambda, is all that you need to build a microservice. Your team can use these services to decouple and fragment your environment to the level of granularity desired.

In general, a microservices environment can introduce the following difficulties: repeated overhead for creating each new microservice, issues with optimizing server density and utilization, the complexity of running multiple versions of multiple microservices simultaneously, and proliferation of client-side code requirements to integrate with many separate services.

When you create microservices using serverless resources, these problems become less difficult to solve and, in some cases, simply disappear. The serverless microservices pattern lowers the barrier for the creation of each subsequent microservice (API Gateway even allows for the cloning of existing APIs and the use of Lambda functions in other accounts). Optimizing server utilization is no longer relevant with this pattern. Finally, API Gateway provides programmatically generated client SDKs in a number of popular languages to reduce integration overhead.
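A minimal sketch of such a microservice: a single Lambda handler dispatching API Gateway proxy events by their resource path. The paths and stub responses are illustrative, not a prescribed layout; in practice each route would typically be its own independently deployed function.

```python
import json

def tickets(event):
    """One 'microservice' worth of business logic (stubbed for illustration)."""
    return {"tickets": []}

def shows(event):
    return {"shows": []}

# Map API Gateway resource paths to their handlers (paths are invented).
ROUTES = {"/tickets": tickets, "/shows": shows}

def handler(event, context=None):
    """Dispatch an API Gateway proxy event to the function for its resource."""
    fn = ROUTES.get(event.get("resource"))
    if fn is None:
        return {"statusCode": 404, "body": json.dumps({"error": "no such resource"})}
    return {"statusCode": 200, "body": json.dumps(fn(event))}

print(handler({"resource": "/shows"})["statusCode"])  # 200
```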
Conclusion

The multi-tier architecture pattern encourages the best practice of creating application components that are simple to maintain, decouple, and scale. When you create a logic tier where integration occurs through API Gateway and computation occurs within AWS Lambda, you realize these goals while reducing the amount of effort required to achieve them. Together, these services provide an HTTPS API front end for your clients and a secure environment in which to apply your business logic, while removing the overhead involved with managing typical server-based infrastructure.

Contributors

Contributors to this document include:

• Andrew Baird, AWS Solutions Architect
• Bryant Bost, AWS ProServe Consultant
• Stefano Buliani, Senior Product Manager, Tech, AWS Mobile
• Vyom Nagrani, Senior Product Manager, AWS Mobile
• Ajay Nair, Senior Product Manager, AWS Mobile
• Rahul Popat, Global Solutions Architect
• Brajendra Singh, Senior Solutions Architect

Further reading

For additional information, refer to AWS Whitepapers and Guides.

Document revisions

October 20, 2021 – Updated for new service features and patterns
June 1, 2021 – Updated for new service features and patterns
September 25, 2019 – Updated for new service features
November 1, 2015 – First publication

AWS Storage Optimization

March 2018

This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether
express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2018, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 1
Identify Your Data Storage Requirements 1
AWS Storage Services 2
Object storage 2
Block storage 3
File storage 5
Optimizing Amazon S3 Storage 5
Optimizing Amazon EBS Storage 7
Delete Unattached Amazon EBS Volumes 8
Resize or Change the EBS Volume Type 8
Delete Stale Amazon EBS Snapshots 9
Optimizing Storage is an Ongoing Process 9
Conclusion 10

Abstract

This is the last in a series of whitepapers designed to support your cloud journey. This paper seeks to empower you to maximize value from your investments, improve forecasting accuracy and cost predictability, create a culture of ownership and cost transparency, and continuously measure your optimization status. It discusses how to choose and optimize AWS storage services to meet your data storage needs and help you save costs.

Amazon Web Services – Storage Optimization, Page 1

Introduction

Organizations tend to think of data storage as an ancillary service and do not optimize storage after data is moved to the cloud. Many also fail to clean up unused storage and let these services run for days, weeks, and even months at significant cost. According to a blog post by RightScale, up to 7% of all cloud spend is wasted on unused storage volumes and old snapshots (copies of storage volumes). AWS
offers a broad and flexible set of data storage options that let you move between different tiers of storage and change storage types at any time. This whitepaper discusses how to choose AWS storage services that meet your data storage needs at the lowest cost. It also discusses how to optimize these services to achieve a balance between performance, availability, and durability.

Identify Your Data Storage Requirements

To optimize storage, the first step is to understand the performance profile of each of your workloads. You should conduct a performance analysis to measure input/output operations per second (IOPS), throughput, and other variables. AWS storage services are optimized for different storage scenarios; there is no single data storage option that is ideal for all workloads. When evaluating your storage requirements, consider data storage options for each workload separately.

The following questions can help you segment the data within each of your workloads and determine your storage requirements:

• How often, and how quickly, do you need to access your data? AWS offers storage options and pricing tiers for frequently accessed, less frequently accessed, and infrequently accessed data.

• Does your data store require high IOPS or throughput? AWS provides categories of storage that are optimized for performance and throughput. Understanding your IOPS and throughput requirements will help you provision the right amount of storage and avoid overpaying.

• How critical (durable) is your data? Critical or regulated data needs to be retained at almost any expense and tends to be stored for a long time.

• How sensitive is your data?
Highly sensitive data needs to be protected from accidental and malicious changes, not just data loss or corruption. Durability, cost, and security are equally important to consider.

• How large is your data set? Knowing the total size of the data set helps in estimating storage capacity and cost.

• How transient is your data? Transient data is short-lived and typically does not require high durability. (Note: durability refers to average annual expected data loss.) Clickstream and Twitter data are good examples of transient data.

• How much are you prepared to pay to store the data? Setting a budget for data storage will inform your decisions about storage options.

AWS Storage Services

Choosing the right AWS storage service for your data means finding the closest match in terms of data availability, durability, and performance. (Note: availability refers to a storage volume's ability to deliver data upon request. Performance refers to the number of IOPS or the amount of throughput, measured in megabytes per second, that the storage volume can deliver.)

Amazon offers three broad categories of storage services: object, block, and file storage. Each offering is designed to meet a different storage requirement, which gives you the flexibility to find the solution that works best for your storage scenarios.

Object storage

Amazon Simple Storage Service (Amazon S3) is highly durable, general-purpose object storage that works well for unstructured data sets such as media content. Amazon S3 provides the highest level of data durability and availability on the AWS Cloud. There are three tiers of storage: one each for hot, warm, or
to optimize storage costs: • Amazon S3 Standard – The best storage option for data that you frequently access Amazon S3 delivers low latency and high throughput and is ideal for use cases such as cloud applications dynamic websites content distribution gaming and data analytics • Amazon S3 Standard Infrequent Access (Amazon S3 Standard IA) – Use this storage option for data that you access less frequently such as long term backups and disaster recovery It offers cheaper storage over time but higher charges to retrieve or transfer data • Amazon Glacier – Designed for long term storage of infr equently accessed data such as end oflifecycle compliance or regulatory backups Different methods of data retrieval are available at various speeds and cost Retrieval can take from a few minutes to several hours The following table shows comparative pricing for Amazon S3 Amazon S3 Pricing* Per Gigabyte Month Amazon S3 $0023 Amazon S3 Standard IA $00125 (plus $001/GB retrieval charge) Amazon Glacier $0004 *Based on US East (N Virginia) prices Block storage Amazon Elastic Block Store (Amazon EBS) volumes provide a durable block storage option for use with EC2 instances Use Amazon EBS for data that requires long term persistence and quick access at guaranteed levels of performance There are two types of block storage: solid state drive (SSD) storage and hard diskdrive (HDD) storage This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Storage Optimiz ation Page 4 • SSD storage is optimized for transactional workloads where performance is closely tied to IOPS There are two SSD vol ume options to choose from: o EBS Provisioned IOPS SSD (io1) – Best for latency sensitive workloads that requir e specific minimum guaranteed IOPS With io1 volumes you pay separately for provisioned IOPS so unless you need high levels of provisioned IOPS gp2 volumes are a better match at lower 
cost.

o EBS General Purpose SSD (gp2) – Designed for general use, gp2 volumes offer a balance between cost and performance.

• HDD storage is designed for throughput-intensive workloads, such as data warehouses and log processing. There are two types of HDD volumes:

o Throughput Optimized HDD (st1) – Best for frequently accessed, throughput-intensive workloads.

o Cold HDD (sc1) – Designed for less frequently accessed, throughput-intensive workloads.

The following table shows comparative pricing for Amazon EBS.

Amazon EBS Pricing* (per gigabyte-month)
General Purpose SSD (gp2): $0.10 per GB-month of provisioned storage
Provisioned IOPS SSD (io1): $0.125 per GB-month of provisioned storage, plus $0.065 per provisioned IOPS-month
Throughput Optimized HDD (st1): $0.045 per GB-month of provisioned storage
Cold HDD (sc1): $0.025 per GB-month of provisioned storage
Amazon EBS snapshots to Amazon S3: $0.05 per GB-month of data stored
*Based on US East (N. Virginia) prices

File storage

Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with EC2 instances. Amazon EFS supports any number of instances at the same time. Its storage capacity can scale from gigabytes to petabytes of data without your needing to provision storage. Amazon EFS is designed for workloads and applications such as big data, media processing workflows, content management, and web serving. Amazon EFS also supports file synchronization capabilities, so that you can efficiently and securely synchronize files from on-premises or cloud file systems to Amazon EFS at speeds of up to five times faster than standard Linux copy tools.

Amazon S3 and Amazon EFS allocate storage based on your usage, and you pay for what you use. For EBS volumes, however, you are charged for provisioned (allocated) storage, whether or not you use it. The key to keeping storage costs low without sacrificing required functionality is to maximize the use of Amazon S3 when possible, and to use more expensive EBS volumes with provisioned I/O only when application requirements demand it.
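The provisioned-capacity billing model above can be made concrete with a little arithmetic. This sketch uses the per-GB rates from the EBS pricing table; the volume sizes and IOPS figures are illustrative.

```python
# Per-GB-month rates from the EBS pricing table above (US East, N. Virginia).
RATES = {"gp2": 0.10, "io1": 0.125, "st1": 0.045, "sc1": 0.025}
IO1_PER_IOPS = 0.065  # io1 additionally bills per provisioned IOPS-month

def monthly_cost(volume_type, size_gb, provisioned_iops=0):
    """Monthly charge for a provisioned volume, whether or not it is used."""
    cost = RATES[volume_type] * size_gb
    if volume_type == "io1":
        cost += IO1_PER_IOPS * provisioned_iops
    return round(cost, 2)

print(monthly_cost("gp2", 500))        # 50.0
print(monthly_cost("sc1", 500))        # 12.5  (same 500 GB on Cold HDD)
print(monthly_cost("io1", 500, 1000))  # 127.5 (storage 62.5 + IOPS 65.0)
```

Comparing the same capacity across volume types this way makes it clear why converting large, throughput-oriented volumes to sc1 can cut the storage rate substantially.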
The following table shows pricing for Amazon EFS.

Amazon EFS Pricing* (per gigabyte-month)
Amazon EFS: $0.30
*Based on US East (N. Virginia) prices

Optimize Amazon S3 Storage

Amazon S3 lets you analyze data access patterns, create inventory lists, and configure lifecycle policies. You can set up rules to automatically move data objects to cheaper S3 storage tiers as objects are accessed less frequently, or to automatically delete objects after an expiration date. To manage storage data most effectively, you can use tagging to categorize your S3 objects and filter on these tags in your data lifecycle policies.

To determine when to transition data to another storage class, you can use Amazon S3 analytics storage class analysis to analyze storage access patterns.
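Lifecycle rules of the kind described above are expressed as an S3 lifecycle configuration. The following is a hypothetical example; the bucket prefix and day thresholds are placeholders to tune to your own access patterns.

```python
# Illustrative lifecycle configuration: objects under logs/ move to
# Standard-IA after 30 days, to Glacier after a year, and are deleted
# after roughly seven years. All values here are placeholders.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 2555},
        }
    ]
}
# Applied with, for example:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle_configuration)
```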
Presto Apache Hive and Apace Spark Amazon S3 can also publish storage request and data transfer metrics to Amazon CloudWatch Storage metrics are reported daily are available at one minute intervals for granular visibility and can be collected and reported for an entire bucket or a subset of objects (selected via pref ix or tags) With all the information these storage management tools provide you can create policies to move less frequently accessed data S3 data to cheaper storage tiers for considerable savings For example by moving data from Amazon S3 Standard to Am azon S3 Standard IA you can save up to 60 % (on a per gigabyte basis) of Amazon S3 pricing By moving data that is at the end of its lifecycle and accessed on rare occasions to Amazon Glacier you can save up to 80 % of Amazon S3 pricing The following table compares the monthly cost of storing 1 petabyte of content on Amazon S3 Standard versus Amazon S3 Standard IA (the cost includes the content retrieval fee) It demonstrates that if 10 % of the content is accessed per month the savings would be 41 % with Amazon S3 Standard IA If 50 % of the content is accessed the savings would be 24 %—which is still significant Even if 100 % of the content is accessed per month you would still save 2 % using Amazon S3 Standard IA Comparing 1 Petabyte of Object Storage* This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Storage Optimiz ation Page 7 Content Accessed Per Month S3 Standard S3 Standard IA Savings 1 PB Monthly 10% $24117 $14116 41% 1 PB Monthly 50% $24117 $18350 24% 1 PB Monthly 100% $24117 $23593 2% *Based on US East prices Note: There is no charge for transferring data between Amazon S3 storage options as long as they are within the same AWS Region To further optimize costs associated to storage and data retrieval AWS announced the launch of Amazon S3 Select and Amazon Glacier Select 
Traditionally data in object storage had to be accessed as whole entities regardless of the size of the object Amazon S3 Select now lets you retrieve a subset of data from an object using simple SQL expressions which means that your applications no longer have to use compute resources to scan and filter the data from an object Using Amazon S3 Select you can potentially improve query performance by up to 400 % and reduce query costs as much as 8 0% AWS also supports efficient data retrieval with Amazon Glacier so that you do not have to restore an archived object to find the bytes needed for analytics With both Amazon S3 Select and Amazon Glacier Select you can lower your costs and uncover more insights from your data regardles s of what storage tier it’s in Optimize Amazon EBS Storage With Amazon EBS it’s important to keep in mind that you are paying for provisioned capacity and performance —even if the volume is unattached or has very low wri te activity To optimize storage performance and costs for Amazon EBS monitor volumes periodically to identify ones that are unattached or appear to be underutilized or overutilized and adjust provisioning to match actual usage This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Storage Optimiz ation Page 8 AWS offers tools that can help you optimize block storage Amazon CloudWatch automatically collects a range of data points for EBS volumes and lets you set alarms on volume behavior AWS Trusted Advisor is another way for you to analyze your infrastructure to identify unattached underutilized and overutilized EBS volumes Third party tools such as Cloudability can also provide insight i nto performance of EBS volumes Delet e Unattached Amazon EBS Volumes An easy way to reduce wasted spend is to find and delete unattached volumes However w hen EC2 instances are stopped or terminated attached EBS volumes are not 
automatically deleted and will continue to accrue charges, because the storage remains provisioned. To find unattached EBS volumes, look for volumes whose status is available, which indicates that they are not attached to an EC2 instance. You can also look at network throughput and IOPS to see whether there has been any volume activity over the previous two weeks. If the volume is in a non-production environment, hasn’t been used in weeks, or hasn’t been attached in a month, there is a good chance you can delete it.

Before deleting a volume, store an Amazon EBS snapshot (a backup copy of an EBS volume) so that the volume can be quickly restored later if needed. You can automate the process of deleting unattached volumes by using AWS Lambda functions with Amazon CloudWatch.

Resize or Change the EBS Volume Type

Another way to optimize storage costs is to identify volumes that are underutilized, and downsize them or change the volume type. Monitor the read-write access of EBS volumes to determine if throughput is low. If you have a current-generation EBS volume attached to a current-generation EC2 instance type, you can use the elastic volumes feature to change the size or volume type, or (for an SSD io1 volume) adjust IOPS performance, without detaching the volume.

The following tips can help you optimize your EBS volumes:

• For General Purpose SSD gp2 volumes, you’ll want to optimize for capacity so that you’re paying only for what you use.
• With Provisioned IOPS SSD io1 volumes, pay close attention to IOPS utilization rather than throughput, since you pay for IOPS directly. Provision 10–20% above maximum IOPS utilization.
• You can save by reducing provisioned IOPS, or by switching from a Provisioned IOPS SSD io1 volume type to a General Purpose SSD gp2 volume type.
• If the volume is 500 gigabytes or
larger, consider converting to a Cold HDD sc1 volume to save on your storage rate.
• You can always return a volume to its original settings if needed.

Delete Stale Amazon EBS Snapshots

If you have a backup policy that takes EBS volume snapshots daily or weekly, you will quickly accumulate snapshots. Check for stale snapshots that are over 30 days old, and delete them to reduce storage costs. Deleting a snapshot has no effect on the volume. You can use the AWS Management Console or AWS Command Line Interface (CLI) for this purpose, or third-party tools such as Skeddly.

Storage Optimization is an Ongoing Process

Maintaining a storage architecture that is both right-sized and right-priced is an ongoing process. To get the most efficient use of your storage spend, you should optimize storage on a monthly basis. You can streamline this effort by:

• Establishing an ongoing mechanism for optimizing storage and setting up storage policies.
• Monitoring costs closely using AWS cost and reporting tools, such as Cost Explorer, budgets, and detailed billing reports in the Billing and Cost Management console.
• Enforcing Amazon S3 object tagging and establishing S3 lifecycle policies to continually optimize data storage throughout the data lifecycle.

Conclusion

Storage optimization is the ongoing process of evaluating changes in data storage usage and needs and choosing the most cost-effective and appropriate AWS storage option. For object stores, you want to implement Amazon S3 lifecycle policies to automatically move data to cheaper storage tiers as data is accessed less frequently. For Amazon EBS block stores, monitor your storage usage and resize underutilized (or overutilized) volumes. You also want to delete unattached volumes and stale Amazon EBS snapshots so that you’re not paying for unused
resources. You can streamline the process of storage optimization by setting up a monthly schedule for this task and taking advantage of the powerful tools offered by AWS and third-party vendors to monitor storage costs and evaluate volume usage.,General,consultant,Best Practices

AWS_Storage_Services_Overview,AWS Storage Services Overview
A Look at Storage Services Offered by AWS
December 2016

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS’s current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Abstract 6 Introduction 1 Amazon S3 1 Usage Patterns 2 Performance 3 Durability and Availability 4 Scalability and Elasticity 5 Security 5 Interfaces 6 Cost Model 7 Amazon Glacier 7 Usage Patterns 8 Performance 8 Durability and Availability 9 Scalability and Elasticity 9 Security 9 Interfaces 10 Cost Model 11 Amazon EFS 11 Usage Patterns 12 Performance 13 Durability and Availability 15 Scalability and Elasticity 15 Security 15 Interfaces 16 Cost Model 16 Amazon EBS 17 Usage Patterns 17 Performance 18 Durability and Availability 21 Scalability and Elasticity 22 Security
23 Interfaces 23 Cost Model 24 Amazon EC2 Instance Storage 24 Usage Patterns 26 Performance 27 Durability and Availability 28 Scalability and Elasticity 28 Security 29 Interfaces 29 Cost Model 30 AWS Storage Gateway 30 Usage Patterns 31 Performance 32 Durability and Availability 32 Scalability and Elasticity 32 Security 33 Interfaces 33 Cost Model 34 AWS Snowball 34 Usage Patterns 34 Performance 35 Durability and Availability 36 Scalability and Elasticity 36 Security 36 Interfaces 37 Cost Model 38 Amazon CloudFront 39 Usage Patterns 39 Performance 40 Durability and Availability 40 Scalability and Elasticity 40 Security 41 Interfaces 41 Cost Model 42 Conclusion 42 Contributors 43 References and Further Reading 44 AWS Storage Services 44 Other Resources 44

Abstract
Amazon Web Services (AWS) is a flexible, cost-effective, easy-to-use cloud computing platform. This whitepaper is designed to help architects and developers understand the different storage services and features available in the AWS Cloud. We provide an overview of each storage service or feature and describe usage patterns, performance, durability and availability, scalability and elasticity, security, interfaces, and the cost model.

Amazon Web Services – AWS Storage Services Overview, Page 1

Introduction
Amazon Web Services (AWS) provides low-cost data storage with high durability and availability. AWS offers storage choices for backup, archiving, and disaster recovery use cases and provides block, file, and object storage. In this whitepaper, we examine the following AWS Cloud storage services and features.

• Amazon Simple Storage Service (Amazon S3) - A service that provides scalable and highly durable object storage in the cloud.
• Amazon Glacier - A service that provides low-cost, highly durable archive storage in the cloud.
• Amazon Elastic File System (Amazon EFS) - A service that provides scalable network file storage for Amazon EC2 instances.
• Amazon Elastic Block Store (Amazon EBS) - A service that provides
block storage volumes for Amazon EC2 instances.
• Amazon EC2 Instance Storage - Temporary block storage volumes for Amazon EC2 instances.
• AWS Storage Gateway - An on-premises storage appliance that integrates with cloud storage.
• AWS Snowball - A service that transports large amounts of data to and from the cloud.
• Amazon CloudFront - A service that provides a global content delivery network (CDN).

Amazon S3
Amazon Simple Storage Service (Amazon S3) provides developers and IT teams secure, durable, highly scalable object storage at a very low cost.[1] You can store and retrieve any amount of data, at any time, from anywhere on the web, through a simple web service interface. You can write, read, and delete objects containing from zero to 5 TB of data. Amazon S3 is highly scalable, allowing concurrent read or write access to data by many separate clients or application threads.

Amazon S3 offers a range of storage classes designed for different use cases, including the following:
• Amazon S3 Standard, for general-purpose storage of frequently accessed data
• Amazon S3 Standard - Infrequent Access (Standard-IA), for long-lived but less frequently accessed data
• Amazon Glacier, for low-cost archival data

Usage Patterns
There are four common usage patterns for Amazon S3. First, Amazon S3 is used to store and distribute static web content and media. This content can be delivered directly from Amazon S3, because each object in Amazon S3 has a unique HTTP URL. Alternatively, Amazon S3 can serve as an origin store for a content delivery network (CDN) such as Amazon CloudFront. The elasticity of Amazon S3 makes it particularly well suited for hosting web content that requires bandwidth for addressing extreme demand spikes. Also, because no storage provisioning is required, Amazon S3 works well for fast-growing websites hosting data-intensive, user-generated content, such as video- and photo-sharing sites. Second, Amazon S3 is used to host entire static
websites. Amazon S3 provides a low-cost, highly available, and highly scalable solution, including storage for static HTML files, images, videos, and client-side scripts in formats such as JavaScript. Third, Amazon S3 is used as a data store for computation and large-scale analytics, such as financial transaction analysis, clickstream analytics, and media transcoding. Because of the horizontal scalability of Amazon S3, you can access your data from multiple computing nodes concurrently without being constrained by a single connection. Finally, Amazon S3 is often used as a highly durable, scalable, and secure solution for backup and archiving of critical data. You can easily move cold data to Amazon Glacier using lifecycle management rules on data stored in Amazon S3. You can also use Amazon S3 cross-region replication to automatically copy objects across S3 buckets in different AWS Regions asynchronously, providing disaster recovery solutions for business continuity.[2]

Amazon S3 doesn’t suit all storage situations. The following table presents some storage needs for which you should consider other AWS storage options.

Storage Need: File system
Solution: Amazon S3 uses a flat namespace and isn’t meant to serve as a standalone, POSIX-compliant file system. Instead, consider using Amazon EFS as a file system.
AWS Services: Amazon EFS

Storage Need: Structured data with query
Solution: Amazon S3 doesn’t offer query capabilities to retrieve specific objects. When you use Amazon S3, you need to know the exact bucket name and key for the files you want to retrieve from the service. Amazon S3 can’t be used as a database or search engine by itself. Instead, you can pair Amazon S3 with Amazon DynamoDB, Amazon CloudSearch, or Amazon Relational Database Service (Amazon RDS) to index and query metadata about Amazon S3 buckets and objects.
AWS Services: Amazon DynamoDB, Amazon RDS, Amazon CloudSearch

Storage Need: Rapidly changing data
Solution: Data that must be updated very frequently might be better served
by storage solutions that take into account read and write latencies, such as Amazon EBS volumes, Amazon RDS, Amazon DynamoDB, Amazon EFS, or relational databases running on Amazon EC2.
AWS Services: Amazon EBS, Amazon EFS, Amazon DynamoDB, Amazon RDS

Storage Need: Archival data
Solution: Data that requires encrypted archival storage with infrequent read access, with a long recovery time objective (RTO), can be stored in Amazon Glacier more cost-effectively.
AWS Services: Amazon Glacier

Storage Need: Dynamic website hosting
Solution: Although Amazon S3 is ideal for static-content websites, dynamic websites that depend on database interaction or use server-side scripting should be hosted on Amazon EC2 or Amazon EFS.
AWS Services: Amazon EC2, Amazon EFS

Performance
In scenarios where you use Amazon S3 from within Amazon EC2 in the same Region, access to Amazon S3 from Amazon EC2 is designed to be fast. Amazon S3 is also designed so that server-side latencies are insignificant relative to Internet latencies. In addition, Amazon S3 is built to scale storage, requests, and numbers of users to support an extremely large number of web-scale applications. If you access Amazon S3 using multiple threads, multiple applications, or multiple clients concurrently, total Amazon S3 aggregate throughput typically scales to rates that far exceed what any single server can generate or consume.

To improve the upload performance of large objects (typically over 100 MB), Amazon S3 offers a multipart upload command to upload a single object as a set of parts.[3] After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. Using multipart upload, you can get improved throughput and quick recovery from any network issues. Another benefit of using multipart upload is that you can upload multiple parts of a single object in parallel and restart the upload of smaller parts instead of restarting the upload of the entire large object.

To speed up access to relevant data, many developers pair Amazon S3 with a
search engine such as Amazon CloudSearch or a database such as Amazon DynamoDB or Amazon RDS. In these scenarios, Amazon S3 stores the actual information, and the search engine or database serves as the repository for associated metadata (for example, the object name, size, keywords, and so on). Metadata in the database can easily be indexed and queried, making it very efficient to locate an object’s reference by using a search engine or a database query. This result can be used to pinpoint and retrieve the object itself from Amazon S3.

Amazon S3 Transfer Acceleration enables fast, easy, and secure transfer of files over long distances between your client and your Amazon S3 bucket. It leverages Amazon CloudFront globally distributed edge locations to route traffic to your Amazon S3 bucket over an Amazon-optimized network path. To get started with Amazon S3 Transfer Acceleration, you first must enable it on an Amazon S3 bucket. Then modify your Amazon S3 PUT and GET requests to use the s3-accelerate endpoint domain name (s3-accelerate.amazonaws.com). The Amazon S3 bucket can still be accessed using the regular endpoint. Some customers have measured performance improvements in excess of 500 percent when performing intercontinental uploads.

Durability and Availability
Amazon S3 Standard storage and Standard-IA storage provide high levels of data durability and availability by automatically and synchronously storing your data across both multiple devices and multiple facilities within your selected geographical region. Error correction is built in, and there are no single points of failure. Amazon S3 is designed to sustain the concurrent loss of data in two facilities, making it very well suited to serve as the primary data storage for mission-critical data. In fact, Amazon S3 is designed for 99.999999999 percent (11 nines) durability per object and 99.99 percent availability over a one-year period. Additionally, you have a
choice of enabling cross-region replication on each Amazon S3 bucket. Once enabled, cross-region replication automatically copies objects across buckets in different AWS Regions asynchronously, providing 11 nines of durability and 4 nines of availability on both the source and destination Amazon S3 objects.

Scalability and Elasticity
Amazon S3 has been designed to offer a very high level of automatic scalability and elasticity. Unlike a typical file system, which encounters issues when storing a large number of files in a directory, Amazon S3 supports a virtually unlimited number of files in any bucket. Also, unlike a disk drive, which has a limit on the total amount of data that can be stored before you must partition the data across drives and/or servers, an Amazon S3 bucket can store a virtually unlimited number of bytes. You can store any number of objects (files) in a single bucket, and Amazon S3 will automatically manage scaling and distributing redundant copies of your information to other servers in other locations in the same Region, all using Amazon’s high-performance infrastructure.

Security
Amazon S3 is highly secure. It provides multiple mechanisms for fine-grained control of access to Amazon S3 resources, and it supports encryption. You can manage access to Amazon S3 by granting other AWS accounts and users permission to perform the resource operations by writing an access policy.[4] You can protect Amazon S3 data at rest by using server-side encryption,[5] in which you request Amazon S3 to encrypt your object before it’s written to disks in data centers and decrypt it when you download the object, or by using client-side encryption,[6] in which you encrypt your data on the client side and upload the encrypted data to Amazon S3. You can protect the data in transit by using Secure Sockets Layer (SSL) or client-side encryption.

You can use versioning to preserve, retrieve, and restore every version of every object
stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures. Additionally, you can add an optional layer of security by enabling Multi-Factor Authentication (MFA) Delete for a bucket.[7] With this option enabled for a bucket, two forms of authentication are required to change the versioning state of the bucket or to permanently delete an object version: valid AWS account credentials, plus a six-digit code (a single-use, time-based password) from a physical or virtual token device.

To track requests for access to your bucket, you can enable access logging.[8] Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and error code, if any. Access log information can be useful in security and access audits. It can also help you learn about your customer base and understand your Amazon S3 bill.

Interfaces
Amazon S3 provides standards-based REST web service application program interfaces (APIs) for both management and data operations. These APIs allow Amazon S3 objects to be stored in uniquely named buckets (top-level folders). Each object must have a unique object key (file name) that serves as an identifier for the object within that bucket. Although Amazon S3 is a web-based object store with a flat naming structure rather than a traditional file system, you can easily emulate a file system hierarchy (folder1/folder2/file) in Amazon S3 by creating object key names that correspond to the full path name of each file. Most developers building applications on Amazon S3 use a higher-level toolkit or software development kit (SDK) that wraps the underlying REST API. AWS SDKs are available for Android, Browser (JavaScript), iOS, Java, .NET, Node.js, PHP, Python, Ruby, and Go. The integrated AWS Command Line Interface (AWS CLI) also provides a set of high-level, Linux-like Amazon S3 file commands for common operations, such as ls, cp, mv, sync, and so on. Using the
AWS CLI for Amazon S3, you can perform recursive uploads and downloads using a single folder-level Amazon S3 command, and also perform parallel transfers. You can also use the AWS CLI for command-line access to the low-level Amazon S3 API. Using the AWS Management Console, you can easily create and manage Amazon S3 buckets, upload and download objects, and browse the contents of your S3 buckets using a simple web-based user interface.

Additionally, you can use the Amazon S3 notification feature to receive notifications when certain events happen in your bucket. Currently, Amazon S3 can publish events when an object is uploaded or when an object is deleted. Notifications can be issued to Amazon Simple Notification Service (SNS) topics,[9] Amazon Simple Queue Service (SQS) queues,[10] and AWS Lambda functions.[11]

Cost Model
With Amazon S3, you pay only for the storage you actually use. There is no minimum fee and no setup cost. Amazon S3 Standard has three pricing components: storage (per GB per month), data transfer in or out (per GB per month), and requests (per thousand requests per month). For new customers, AWS provides the AWS Free Tier, which includes up to 5 GB of Amazon S3 storage, 20,000 Get requests, 2,000 Put requests, and 15 GB of data transfer out each month for one year, for free.[12] You can find pricing information at the Amazon S3 pricing page.[13] There are Data Transfer IN and OUT fees if you enable Amazon S3 Transfer Acceleration on a bucket and the transfer performance is faster than regular Amazon S3 transfer. If we determine that Transfer Acceleration is not likely to be faster than a regular Amazon S3 transfer of the same object to the same destination, we will not charge for that use of Transfer Acceleration for that transfer, and may bypass the Transfer Acceleration system for that upload.

Amazon Glacier
Amazon Glacier is an extremely low-cost storage service that provides highly secure, durable, and flexible
storage for data archiving and online backup.[14] With Amazon Glacier, you can reliably store your data for as little as $0.007 per gigabyte per month. Amazon Glacier enables you to offload the administrative burdens of operating and scaling storage to AWS, so that you don’t have to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and repair, or time-consuming hardware migrations.

You store data in Amazon Glacier as archives. An archive can represent a single file, or you can combine several files to be uploaded as a single archive. Retrieving archives from Amazon Glacier requires the initiation of a job. You organize your archives in vaults. Amazon Glacier is designed for use with other Amazon web services. You can seamlessly move data between Amazon Glacier and Amazon S3 using S3 data lifecycle policies.

Usage Patterns
Organizations are using Amazon Glacier to support a number of use cases. These include archiving offsite enterprise information, media assets, and research and scientific data, and also performing digital preservation and magnetic tape replacement. Amazon Glacier doesn’t suit all storage situations. The following table presents a few storage needs for which you should consider other AWS storage options.

Storage Need: Rapidly changing data
Solution: Data that must be updated very frequently might be better served by a storage solution with lower read/write latencies, such as Amazon EBS, Amazon RDS, Amazon EFS, Amazon DynamoDB, or relational databases running on Amazon EC2.
AWS Services: Amazon EBS, Amazon RDS, Amazon EFS, Amazon DynamoDB, Amazon EC2

Storage Need: Immediate access
Solution: Data stored in Amazon Glacier is not available immediately. Retrieval jobs typically require 3–5 hours to complete, so if you need immediate access to your object data, Amazon S3 is a better choice.
AWS Services: Amazon S3

Performance
Amazon Glacier is a low-cost storage service designed to store data that is
infrequently accessed and long lived. Amazon Glacier retrieval jobs typically complete in 3 to 5 hours. You can improve the upload experience for larger archives by using multipart upload for archives up to about 40 TB (the single archive limit).[15] You can upload separate parts of a large archive independently, in any order and in parallel. You can even perform range retrievals on archives stored in Amazon Glacier by specifying a range or portion of the archive.[16] Specifying a range of bytes for a retrieval can help control bandwidth costs, manage your data downloads, and retrieve a targeted part of a large archive.

Durability and Availability
Amazon Glacier is designed to provide average annual durability of 99.999999999 percent (11 nines) for an archive. The service redundantly stores data in multiple facilities and on multiple devices within each facility. To increase durability, Amazon Glacier synchronously stores your data across multiple facilities before returning SUCCESS on uploading an archive. Unlike traditional systems, which can require laborious data verification and manual repair, Amazon Glacier performs regular, systematic data integrity checks and is built to be automatically self-healing.

Scalability and Elasticity
Amazon Glacier scales to meet growing and often unpredictable storage requirements. A single archive is limited to 40 TB in size, but there is no limit to the total amount of data you can store in the service. Whether you’re storing petabytes or gigabytes, Amazon Glacier automatically scales your storage up or down as needed.

Security
By default, only you can access your Amazon Glacier data. If other people need to access your data, you can set up data access control in Amazon Glacier by using the AWS Identity and Access Management (IAM) service.[17] To do so, simply create an IAM policy that specifies which account users have rights to
operations on a given vault. Amazon Glacier uses server-side encryption to encrypt all data at rest. Amazon Glacier handles key management and key protection for you by using one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256). Customers who want to manage their own keys can encrypt data prior to uploading it.

Amazon Glacier allows you to lock vaults where long-term records retention is mandated by regulations or compliance rules. You can set compliance controls on individual Amazon Glacier vaults and enforce these by using lockable policies. For example, you might specify controls such as “undeletable records” or “time-based data retention” in a Vault Lock policy and then lock the policy from future edits. After it’s locked, the policy becomes immutable, and Amazon Glacier enforces the prescribed controls to help achieve your compliance objectives.

To help monitor data access, Amazon Glacier is integrated with AWS CloudTrail, allowing any API calls made to Amazon Glacier in your AWS account to be captured and stored in log files that are delivered to an Amazon S3 bucket that you specify.[18]

Interfaces
There are two ways to use Amazon Glacier, each with its own interfaces. The Amazon Glacier API provides both management and data operations. First, Amazon Glacier provides a native, standards-based REST web services interface. This interface can be accessed using the Java SDK or the .NET SDK. You can use the AWS Management Console or Amazon Glacier API actions to create vaults to organize the archives in Amazon Glacier. You can then use the Amazon Glacier API actions to upload and retrieve archives, to monitor the status of your jobs, and also to configure your vault to send you a notification through Amazon SNS when a job is complete. Second, Amazon Glacier can be used as a storage class in Amazon S3 by using object lifecycle management that provides automatic, policy-driven
archiving from Amazon S3 to Amazon Glacier. You simply set one or more lifecycle rules for an Amazon S3 bucket, defining what objects should be transitioned to Amazon Glacier and when. You can specify an absolute or relative time period (including 0 days) after which the specified Amazon S3 objects should be transitioned to Amazon Glacier.

The Amazon S3 API includes a RESTORE operation. The retrieval process from Amazon Glacier using RESTORE takes three to five hours, the same as other Amazon Glacier retrievals. Retrieval puts a copy of the retrieved object in Amazon S3 Reduced Redundancy Storage (RRS) for a specified retention period; the original archived object remains stored in Amazon Glacier. For more information on how to use Amazon Glacier from Amazon S3, see the Object Lifecycle Management section of the Amazon S3 Developer Guide.[19]

Note that when using Amazon Glacier as a storage class in Amazon S3, you use the Amazon S3 API, and when using “native” Amazon Glacier, you use the Amazon Glacier API. For example, objects archived to Amazon Glacier using Amazon S3 lifecycle policies can only be listed and retrieved by using the Amazon S3 API or the Amazon S3 console. You can’t see them as archives in an Amazon Glacier vault.

Cost Model
With Amazon Glacier, you pay only for what you use, and there is no minimum fee. In normal use, Amazon Glacier has three pricing components: storage (per GB per month), data transfer out (per GB per month), and requests (per thousand UPLOAD and RETRIEVAL requests per month). Note that Amazon Glacier is designed with the expectation that retrievals are infrequent and unusual, and data will be stored for extended periods of time. You can retrieve up to 5 percent of your average monthly storage (prorated daily) for free each month. If you retrieve more than this amount of data in a month, you are charged an additional (per GB) retrieval fee. A prorated charge (per GB) also applies for
items deleted prior to 90 days’ passage. You can find pricing information at the Amazon Glacier pricing page.[20]

Amazon EFS
Amazon Elastic File System (Amazon EFS) delivers a simple, scalable, elastic, highly available, and highly durable network file system as a service to EC2 instances.[21] It supports Network File System versions 4 (NFSv4) and 4.1 (NFSv4.1), which makes it easy to migrate enterprise applications to AWS or build new ones. We recommend clients run NFSv4.1 to take advantage of the many performance benefits found in the latest version, including scalability and parallelism. You can create and configure file systems quickly and easily through a simple web services interface. You don’t need to provision storage in advance, and there is no minimum fee or setup cost; you simply pay for what you use. Amazon EFS is designed to provide a highly scalable network file system that can grow to petabytes, which allows massively parallel access from EC2 instances to
EFS is designed to meet the needs of multi threaded applications and applications that concurrently access data from multiple EC2 instances and that require substantial levels of aggregate throughput and input/output operations per second (IOPS) Its distributed design enables high levels of availability durability and scalability which results in a small latency overhead for each file operation Because o f this per operation overhead overall throughput generally increases as the average input/output (I/O) size increases since the overhead is amortized over a larger amount of data This makes Amazon EFS ideal for growing datasets consisting of larger files that need both high performance and multi client access Amazon EFS supports highly parallelized workloads and is designed to meet the performance needs of big data and analytics media processing content management web serving and home directories Amazon EFS doesn’t suit all storage situations The following table presents some storage needs for which you should consider other AWS storage options Storage Need Solution AWS Services Archival data Data that requires encrypted archival storage with infrequent read access with a long recovery time objective (RTO) can be stored in Amazon Glacier more costeffectively Amazon Glacier ArchivedAmazon Web Services – AWS Storage Services Overview Page 13 Storage Need Solution AWS Services Relational database storage In most cases relational databases require storage that is mounted accessed and locked by a single node (EC2 instance etc) When running relational databases on AWS look at leveraging Amazon RDS or Amazon EC2 with Amazon EBS PIOPS volumes Amazon RDS Amazon EC2 Amazon EBS Temporary storage Consider using local instance store volumes for needs such as scratch disks buffers queues and caches Amazon EC2 Local Instance Store Performance Amazon EFS file systems are distributed across an unconstrained number of storage servers e nabling file systems to grow elastically to 
Performance

Amazon EFS file systems are distributed across an unconstrained number of storage servers, enabling file systems to grow elastically to petabyte scale and allowing massively parallel access from EC2 instances within a Region. This distributed data storage design means that multi-threaded applications and applications that concurrently access data from multiple EC2 instances can drive substantial levels of aggregate throughput and IOPS.

There are two performance modes available for Amazon EFS: General Purpose and Max I/O. General Purpose is the default mode and is appropriate for most file systems. However, if your overall Amazon EFS workload will exceed 7,000 file operations per second per file system, we recommend that the file system use Max I/O performance mode. Max I/O performance mode is optimized for applications where tens, hundreds, or thousands of EC2 instances are accessing the file system; with this mode, file systems scale to higher levels of aggregate throughput and operations per second, with a tradeoff of slightly higher latencies for file operations.

Due to the spiky nature of file-based workloads, Amazon EFS is optimized to burst at high throughput levels for short periods of time, while delivering low levels of throughput the rest of the time. A credit system determines when an Amazon EFS file system can burst. Over time, each file system earns burst credits at a baseline rate determined by its size, and spends these credits whenever it reads or writes data. A file system can drive throughput continuously at its baseline rate. It accumulates credits during periods of inactivity or when throughput is below its baseline rate. These accumulated burst credits allow a file system to drive throughput above its baseline rate, and it can continue to do so as long as it has a positive burst credit balance. You can see the burst credit balance for a file system by viewing the BurstCreditBalance metric in Amazon CloudWatch.[23]
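The credit model reduces to simple arithmetic. The sketch below uses the 50 MiB/s-per-TiB baseline rate and the burst behavior documented for EFS (burst to 100 MiB/s for file systems up to 1 TiB, and to twice the baseline for larger ones); it ignores the credit-balance cap:

```python
def efs_baseline_mib_s(size_gib):
    # Credits accrue at the baseline rate: 50 MiB/s per TiB of storage.
    return size_gib * 50.0 / 1024.0

def efs_burst_mib_s(size_gib):
    # File systems up to 1 TiB can burst to 100 MiB/s; larger file
    # systems can burst to twice their baseline rate.
    return max(100.0, 2.0 * efs_baseline_mib_s(size_gib))

def efs_burst_fraction(size_gib):
    # Long-run fraction of time the file system can sustain its burst
    # rate: credits are earned at baseline and spent at the burst rate.
    return efs_baseline_mib_s(size_gib) / efs_burst_mib_s(size_gib)
```

For a 1 TiB file system this gives a 50 MiB/s baseline, a 100 MiB/s burst rate, and the ability to burst 50 percent of the time, consistent with the bursting examples in this section.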
Newly created file systems start with a credit balance of 2.1 TiB, with a baseline rate of 50 MiB/s per TiB of storage and a burst rate of 100 MiB/s. The following table describes some examples of bursting behaviors for file systems of different sizes.

File system size (GiB) | Baseline aggregate throughput (MiB/s) | Burst aggregate throughput (MiB/s) | Maximum burst duration (hours) | % of time file system can burst
10 | 0.5 | 100 | 6.0 | 0.5%
256 | 12.5 | 100 | 6.9 | 12.5%
512 | 25.0 | 100 | 8.0 | 25.0%
1024 | 50.0 | 100 | 12.0 | 50.0%
1536 | 75.0 | 150 | 12.0 | 50.0%
2048 | 100.0 | 200 | 12.0 | 50.0%
3072 | 150.0 | 300 | 12.0 | 50.0%
4096 | 200.0 | 400 | 12.0 | 50.0%

Here are a few recommendations to get the most performance out of your Amazon EFS file system:

- Because of the distributed architecture of Amazon EFS, larger I/O workloads generally experience higher throughput.
- EFS file systems can be mounted by thousands of EC2 instances concurrently. If your application is parallelizable across multiple instances, you can drive higher throughput levels on your file system in aggregate across instances.
- If your application can handle asynchronous writes to your file system, and you're able to trade off consistency for speed, enabling asynchronous writes may improve performance.
- We recommend Linux kernel version 4 or later and NFSv4.1 for all clients accessing EFS file systems.
- When mounting EFS file systems, use the mount options recommended in the Mounting File Systems and Additional Mounting Considerations sections of the Amazon EFS User Guide.[24][25]

Durability and Availability

Amazon EFS is designed to be highly durable and highly available. Each EFS file system object (such as a directory, file, or link) is redundantly stored across multiple Availability Zones within a Region. Amazon EFS is designed to be as highly durable and available as Amazon S3.

Scalability and Elasticity

Amazon EFS automatically scales your file system storage capacity up or down as you add or remove files, without disrupting your applications, giving you just the
storage you need, when you need it, while eliminating the time-consuming administration tasks associated with traditional storage management (such as planning, buying, provisioning, and monitoring). Your EFS file system can grow from an empty file system to multiple petabytes automatically, and there is no provisioning, allocating, or administration.

Security

There are three levels of access control to consider when planning EFS file system security: IAM permissions for API calls; security groups for EC2 instances and mount targets; and Network File System-level users, groups, and permissions.

IAM enables access control for administering EFS file systems, allowing you to specify an IAM identity (either an IAM user or an IAM role) that can create, delete, and describe EFS file system resources. The primary resource in Amazon EFS is a file system. All other EFS resources, such as mount targets and tags, are referred to as subresources. Identity-based policies, such as IAM policies, are used to assign permissions to IAM identities to manage the EFS resources and subresources.

Security groups play a critical role in establishing network connectivity between EC2 instances and EFS file systems. You associate one security group with an EC2 instance and another security group with an EFS mount target associated with the file system. These security groups act as firewalls and enforce rules that define the traffic flow between EC2 instances and EFS file systems.

EFS file system objects work in a Unix-style mode, which defines the permissions needed to perform actions on objects. Users and groups are mapped to numeric identifiers, which are mapped to EFS users to represent file ownership. Files and directories within Amazon EFS are owned by a single owner and a single group. Amazon EFS uses these numeric IDs to check permissions when a user attempts to access a file system object.
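As an illustration of the IAM layer, the sketch below constructs a minimal identity-based policy for EFS administration. The action names follow the elasticfilesystem API namespace; the specific action list and wildcard resource are illustrative assumptions, and production policies should scope Resource to specific file system ARNs:

```python
import json

# Minimal illustrative identity-based policy granting an IAM identity
# the create/delete/describe administration actions described above.
efs_admin_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "elasticfilesystem:CreateFileSystem",
                "elasticfilesystem:DeleteFileSystem",
                "elasticfilesystem:DescribeFileSystems",
            ],
            # Broad for illustration only; restrict to specific file
            # system ARNs in real deployments.
            "Resource": "*",
        }
    ],
}

policy_document = json.dumps(efs_admin_policy, indent=2)
```

The resulting JSON document can be attached to an IAM user or role that administers EFS resources.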
For more information about Amazon EFS security, see the Amazon EFS User Guide.[26]

Interfaces

Amazon offers a network-protocol-based HTTP (RFC 2616) API for managing Amazon EFS, as well as support for EFS operations within the AWS SDKs and the AWS CLI. The API actions and EFS operations are used to create, delete, and describe file systems; create, delete, and describe mount targets; create, delete, and describe tags; and describe and modify mount target security groups. If you prefer to work with a graphical user interface, the AWS Management Console gives you all the capabilities of the API in a browser interface.

EFS file systems use Network File System version 4 (NFSv4) and version 4.1 (NFSv4.1) for data access. We recommend using NFSv4.1 to take advantage of the performance benefits in the latest version, including scalability and parallelism.

Cost Model

Amazon EFS provides the capacity you need, when you need it, without requiring you to provision storage in advance. It is also designed to be highly available and highly durable, because each file system object (such as a directory, file, or link) is redundantly stored across multiple Availability Zones. This highly durable, highly available architecture is built into the pricing model: you pay only for the amount of storage you put into your file system. As files are added, your EFS file system dynamically grows, and you pay only for the storage you use; as files are removed, it dynamically shrinks, and you stop paying for the data you deleted. There are no charges for bandwidth or requests, and there are no minimum commitments or upfront fees. You can find pricing information for Amazon EFS at the Amazon EFS pricing page.[27]

Amazon EBS

Amazon Elastic Block Store (Amazon EBS) volumes provide durable block-level storage for use with EC2 instances.[28] Amazon EBS volumes are network-attached storage that persists independently of the running life of a single EC2 instance. After an EBS volume is attached to
an EC2 instance, you can use the EBS volume like a physical hard drive, typically by formatting it with the file system of your choice and using the file I/O interface provided by the instance operating system. Most Amazon Machine Images (AMIs) are backed by Amazon EBS and use an EBS volume to boot EC2 instances. You can also attach multiple EBS volumes to a single EC2 instance. Note, however, that any single EBS volume can be attached to only one EC2 instance at any time.

EBS also provides the ability to create point-in-time snapshots of volumes, which are stored in Amazon S3. These snapshots can be used as the starting point for new EBS volumes and to protect data for long-term durability. To learn more about Amazon EBS durability, see the EBS Durability and Availability section of this whitepaper. The same snapshot can be used to instantiate as many volumes as you want. These snapshots can be copied across AWS Regions, making it easier to leverage multiple AWS Regions for geographical expansion, data center migration, and disaster recovery. Sizes for EBS volumes range from 1 GiB to 16 TiB, depending on the volume type, and are allocated in 1 GiB increments.

You can find information about Amazon EBS previous-generation Magnetic volumes at the Amazon EBS Previous Generation Volumes page.[29]

Usage Patterns

Amazon EBS is meant for data that changes relatively frequently and needs to persist beyond the life of an EC2 instance. Amazon EBS is well suited for use as the primary storage for a database or file system, or for any application or instance (operating system) that requires direct access to raw block-level storage. Amazon EBS provides a range of options that allow you to optimize storage performance and cost for your workload. These options are divided into two major categories: solid-state drive (SSD)-backed storage for transactional workloads, such as databases and boot volumes (performance depends primarily on IOPS), and hard disk drive (HDD)-backed storage for throughput-intensive
workloads, such as big data, data warehouse, and log processing (performance depends primarily on MB/s).

Amazon EBS doesn't suit all storage situations. The following table presents some storage needs for which you should consider other AWS storage options.

Storage Need | Solution | AWS Services
Temporary storage | Consider using local instance store volumes for needs such as scratch disks, buffers, queues, and caches. | Amazon Local Instance Store
Multi-instance storage | Amazon EBS volumes can be attached to only one EC2 instance at a time. If you need multiple EC2 instances accessing volume data at the same time, consider using Amazon EFS as a file system. | Amazon EFS
Highly durable storage | If you need very highly durable storage, use Amazon S3 or Amazon EFS. Amazon S3 Standard storage is designed for 99.999999999 percent (11 nines) annual durability per object. You can also take a snapshot of your EBS volumes; such a snapshot is then saved in Amazon S3, providing you the durability of Amazon S3. For more information on EBS durability, see the Durability and Availability section. EFS is designed for high durability and high availability, with data stored in multiple Availability Zones within an AWS Region. | Amazon S3, Amazon EFS
Static data or web content | If your data doesn't change often, Amazon S3 might represent a more cost-effective and scalable solution for storing this fixed information. Also, web content served out of Amazon EBS requires a web server running on Amazon EC2; in contrast, you can deliver web content directly out of Amazon S3 or from multiple EC2 instances using Amazon EFS. | Amazon S3, Amazon EFS

Performance

As described previously, Amazon EBS provides a range of volume types that are divided into two major categories: SSD-backed storage volumes and HDD-backed storage volumes. SSD-backed storage volumes offer great price/performance characteristics for random, small-block workloads, such as transactional
applications, whereas HDD-backed storage volumes offer the best price/performance characteristics for large-block, sequential workloads. You can attach and stripe data across multiple volumes of any type to increase the I/O performance available to your Amazon EC2 applications. The following table presents the storage characteristics of the current-generation volume types.

 | SSD-Backed Provisioned IOPS (io1) | SSD-Backed General Purpose (gp2)* | HDD-Backed Throughput Optimized (st1) | HDD-Backed Cold (sc1)
Use Cases | I/O-intensive NoSQL and relational databases | Boot volumes, low-latency interactive apps, dev & test | Big data, data warehouse, log processing | Colder data requiring fewer scans per day
Volume Size | 4 GiB to 16 TiB | 1 GiB to 16 TiB | 500 GiB to 16 TiB | 500 GiB to 16 TiB
Max IOPS** per Volume | 20,000 | 10,000 | 500 | 250
Max Throughput per Volume | 320 MiB/s | 160 MiB/s | 500 MiB/s | 250 MiB/s
Max IOPS per Instance | 65,000 | 65,000 | 65,000 | 65,000
Max Throughput per Instance | 1,250 MiB/s | 1,250 MiB/s | 1,250 MiB/s | 1,250 MiB/s
Dominant Performance Attribute | IOPS | IOPS | MiB/s | MiB/s

*Default volume type
**io1/gp2 based on 16 KiB I/O; st1/sc1 based on 1 MiB I/O

General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit-millisecond latencies, the ability to burst to 3,000 IOPS for extended periods of time, and a baseline performance of 3 IOPS/GiB up to a maximum of 10,000 IOPS (at 3,334 GiB). The gp2 volumes can range in size from 1 GiB to 16 TiB. These volumes have a throughput limit of 128 MiB/s for volumes less than or equal to 170 GiB; for volumes over 170 GiB, this limit increases at the rate of 768 KiB/s per GiB, to a maximum of 160 MiB/s (at 214 GiB and larger). You can see the percentage of I/O credits remaining in the burst buckets for gp2 volumes by viewing the Burst Balance metric in Amazon CloudWatch.[30]
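The gp2 baseline IOPS and throughput limits just described are simple functions of volume size. Here is a sketch (ignoring burst behavior and any minimum-IOPS floor):

```python
def gp2_baseline_iops(size_gib):
    # Baseline of 3 IOPS per GiB, capped at 10,000 IOPS (the cap is
    # reached at 3,334 GiB).
    return min(3 * size_gib, 10_000)

def gp2_throughput_limit_mib_s(size_gib):
    # 128 MiB/s for volumes up to 170 GiB; beyond that the limit grows
    # by 768 KiB/s (0.75 MiB/s) per GiB, capping at 160 MiB/s.
    if size_gib <= 170:
        return 128.0
    return min(128.0 + (size_gib - 170) * 0.75, 160.0)
```

For example, a 200 GiB gp2 volume gets a 600 IOPS baseline and a throughput limit of 150.5 MiB/s.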
Provisioned IOPS SSD (io1) volumes are designed to deliver predictable, high performance for I/O-intensive workloads with small I/O sizes, where the dominant performance attribute is IOPS, such as database workloads that are sensitive to storage performance and consistency in random-access I/O throughput. You specify an IOPS rate when creating a volume, and Amazon EBS then delivers within 10 percent of the provisioned IOPS performance 99.9 percent of the time over a given year, when attached to an EBS-optimized instance. The io1 volumes can range in size from 4 GiB to 16 TiB, and you can provision up to 20,000 IOPS per volume. The ratio of IOPS provisioned to the volume size requested can be at most 50; for example, a volume with 5,000 IOPS must be at least 100 GB in size.

Throughput Optimized HDD (st1) volumes are ideal for frequently accessed, throughput-intensive workloads with large datasets and large I/O sizes, where the dominant performance attribute is throughput (MiB/s), such as streaming workloads, big data, data warehouse, log processing, and ETL workloads. These volumes deliver performance in terms of throughput, measured in MiB/s, and include the ability to burst up to 250 MiB/s per TiB, with a baseline throughput of 40 MiB/s per TiB and a maximum throughput of 500 MiB/s per volume. The st1 volumes are designed to deliver the expected throughput performance 99 percent of the time and have enough I/O credits to support a full-volume scan at the burst rate. The st1 volumes can't be used as boot volumes. You can see the throughput credits remaining in the burst bucket for st1 volumes by viewing the Burst Balance metric in Amazon CloudWatch.[31]

Cold HDD (sc1) volumes provide the lowest cost per GiB of all EBS volume types. They are ideal for infrequently accessed workloads with large, cold datasets and large I/O sizes, where the dominant performance attribute is throughput (MiB/s). Similarly to st1, sc1 volumes provide a burst model and can burst up to 80 MiB/s
per TiB, with a baseline throughput of 12 MiB/s per TiB and a maximum throughput of 250 MiB/s per volume. The sc1 volumes are designed to deliver the expected throughput performance 99 percent of the time and have enough I/O credits to support a full-volume scan at the burst rate. The sc1 volumes can't be used as boot volumes. You can see the throughput credits remaining in the burst bucket for sc1 volumes by viewing the Burst Balance metric in CloudWatch.[32]

Because all EBS volumes are network-attached devices, other network I/O performed by an EC2 instance, as well as the total load on the shared network, can affect the performance of individual EBS volumes. To enable your EC2 instances to maximize the performance of EBS volumes, you can launch selected EC2 instance types as EBS-optimized instances. Most of the latest-generation EC2 instances (m4, c4, x1, and p2) are EBS-optimized by default. EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with speeds between 500 Mbps and 10,000 Mbps, depending on the instance type. When attached to EBS-optimized instances, Provisioned IOPS volumes are designed to deliver within 10 percent of the provisioned IOPS performance 99.9 percent of the time within a given year.

Newly created EBS volumes receive their maximum performance the moment they are available, and they don't require initialization (formerly known as prewarming). However, you must initialize the storage blocks on volumes that were restored from snapshots before you can access them.[33]

Using Amazon EC2 with Amazon EBS, you can take advantage of many of the same disk performance optimization techniques that you do with on-premises servers and storage.
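Several of the provisioning limits above reduce to simple arithmetic. The following sketch checks the io1 50:1 IOPS-to-size ratio and the st1/sc1 per-TiB burst ceilings described earlier:

```python
def io1_min_size_gb(provisioned_iops):
    # The ratio of provisioned IOPS to volume size can be at most 50:1,
    # so a volume with 5,000 IOPS must be at least 100 GB.
    return provisioned_iops / 50.0

def hdd_burst_mib_s(size_tib, per_tib_burst_mib_s, per_volume_cap_mib_s):
    # st1 bursts at 250 MiB/s per TiB (capped at 500 MiB/s per volume);
    # sc1 bursts at 80 MiB/s per TiB (capped at 250 MiB/s per volume).
    return min(size_tib * per_tib_burst_mib_s, per_volume_cap_mib_s)

st1_burst = hdd_burst_mib_s(4.0, 250.0, 500.0)  # 4 TiB st1 volume
sc1_burst = hdd_burst_mib_s(2.0, 80.0, 250.0)   # 2 TiB sc1 volume
```

The 4 TiB st1 volume hits the 500 MiB/s per-volume cap, while the 2 TiB sc1 volume bursts at 160 MiB/s.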
For example, by attaching multiple EBS volumes to a single EC2 instance, you can partition the total application I/O load by allocating one volume for database log data, one or more volumes for database file storage, and other volumes for file system data. Each separate EBS volume can be configured as EBS General Purpose (SSD), Provisioned IOPS (SSD), Throughput Optimized (HDD), or Cold (HDD) as needed. Some of the best price/performance-balanced workloads take advantage of different volume types on a single EC2 instance: for example, Cassandra using General Purpose (SSD) volumes for data but Throughput Optimized (HDD) volumes for logs, or Hadoop using General Purpose (SSD) volumes for both data and logs. Alternatively, you can stripe your data across multiple similarly provisioned EBS volumes using RAID 0 (disk striping) or logical volume manager software, thus aggregating available IOPS, total volume throughput, and total volume size.

Durability and Availability

Amazon EBS volumes are designed to be highly available and reliable. EBS volume data is replicated across multiple servers in a single Availability Zone to prevent the loss of data from the failure of any single component. Taking snapshots of your EBS volumes increases the durability of the data stored on them. EBS snapshots are incremental, point-in-time backups, containing only the data blocks changed since the last snapshot.

EBS volumes are designed for an annual failure rate (AFR) of between 0.1 and 0.2 percent, where failure refers to a complete or partial loss of the volume, depending on the size and performance of the volume. This means that if you have 1,000 EBS volumes, over the course of a year you can expect unrecoverable failures of 1 or 2 of your volumes. This AFR makes EBS volumes 20 times more reliable than typical commodity disk drives, which fail with an AFR of around 4 percent. Despite these very low EBS AFR numbers, we still recommend that you create snapshots of your EBS volumes to improve the durability of your data. The Amazon EBS snapshot feature makes it easy to take application-consistent backups of your data.
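The failure-rate claim above is straightforward to sanity-check, assuming independent failures:

```python
def expected_annual_failures(volume_count, annual_failure_rate):
    # Expected number of volumes lost per year at a given annual
    # failure rate (AFR), assuming failures are independent.
    return volume_count * annual_failure_rate

low = expected_annual_failures(1000, 0.001)       # 0.1% AFR: 1 expected
high = expected_annual_failures(1000, 0.002)      # 0.2% AFR: 2 expected
commodity = expected_annual_failures(1000, 0.04)  # ~4% AFR: 40 expected
```

The same 1,000 volumes on commodity disks with a 4 percent AFR would see roughly 40 expected failures per year, which is where the "20 times more reliable" comparison comes from.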
For more information on EBS durability, see the Amazon EBS Availability and Durability section of the Amazon EBS Product Details page.[34]

To maximize both durability and availability of Amazon EBS data, you should create snapshots of your EBS volumes frequently. (For application-consistent backups, we recommend briefly pausing any write operations to the volume, or unmounting the volume, while you issue the snapshot command. You can then safely continue to use the volume while the snapshot is pending completion.) All EBS volume types offer durable snapshot capabilities and are designed for 99.999 percent availability.

If your EBS volume does fail, all snapshots of that volume remain intact, and you can recreate your volume from the last snapshot point. Because an EBS volume is created in a particular Availability Zone, the volume will be unavailable if the Availability Zone itself is unavailable. A snapshot of a volume, however, is available across all of the Availability Zones within a Region, and you can use a snapshot to create one or more new EBS volumes in any Availability Zone in the Region. EBS snapshots can also be copied from one Region to another, and can easily be shared with other user accounts. Thus, EBS snapshots provide an easy-to-use disk clone or disk image mechanism for backup, sharing, and disaster recovery.

Scalability and Elasticity

Using the AWS Management Console or the Amazon EBS API, you can easily and rapidly provision and release EBS volumes to scale in and out with your total storage demands. The simplest approach is to create and attach a new EBS volume and begin using it together with your existing ones. However, if you need to expand the size of a single EBS volume, you can effectively resize a volume using a snapshot:

1. Detach the original EBS volume.
2. Create a snapshot of the original EBS volume's data in Amazon S3.
3. Create a new EBS volume from the snapshot, but specify a larger size than the original volume.
4. Attach the new, larger volume
to your EC2 instance in place of the original. (In many cases, an OS-level utility must also be used to expand the file system.)
5. Delete the original EBS volume.

Security

IAM enables access control for your EBS volumes, allowing you to specify who can access which EBS volumes.

EBS encryption enables data-at-rest and data-in-motion security. It offers seamless encryption of both EBS boot volumes and data volumes, as well as snapshots, eliminating the need to build and manage a secure key management infrastructure. The encryption keys are either Amazon-managed or keys that you create and manage using the AWS Key Management Service (AWS KMS).[35] Data-in-motion security occurs on the servers that host EC2 instances, providing encryption of data as it moves between EC2 instances and EBS volumes. Access control plus encryption offers a strong defense-in-depth security strategy for your data. For more information, see Amazon EBS Encryption in the Amazon EBS User Guide.[36]

Interfaces

Amazon offers a REST management API for Amazon EBS, as well as support for Amazon EBS operations within both the AWS SDKs and the AWS CLI. The API actions and EBS operations are used to create, delete, describe, attach, and detach EBS volumes for your EC2 instances; to create, delete, and describe snapshots from Amazon EBS to Amazon S3; and to copy snapshots from one Region to another. If you prefer to work with a graphical user interface, the AWS Management Console gives you all the capabilities of the API in a browser interface. Regardless of how you create your EBS volume, note that all storage is allocated at the time of volume creation, and that you are charged for this allocated storage even if you don't write data to it.

Amazon EBS doesn't provide a data API. Instead, Amazon EBS presents a block device interface to the EC2 instance; that is, to the EC2 instance, an EBS volume appears just like a local disk drive. To write to and read data from EBS volumes, you
use the native file system I/O interfaces of your chosen operating system.

Cost Model

As with other AWS services, with Amazon EBS you pay only for what you provision, in increments down to 1 GB. In contrast, hard disks come in fixed sizes, and you pay for the entire size of the disk regardless of the amount you use or allocate. Amazon EBS pricing has three components: provisioned storage, I/O requests, and snapshot storage. Amazon EBS General Purpose (SSD), Throughput Optimized (HDD), and Cold (HDD) volumes are charged per GB-month of provisioned storage. Amazon EBS Provisioned IOPS (SSD) volumes are charged per GB-month of provisioned storage and per provisioned IOPS-month. For all volume types, Amazon EBS snapshots are charged per GB-month of data stored. An Amazon EBS snapshot copy is charged for the data transferred between Regions and for the standard Amazon EBS snapshot charges in the destination Region.

It's important to remember that for EBS volumes you are charged for provisioned (allocated) storage, whether or not you actually use it. For Amazon EBS snapshots, you are charged only for storage actually used (consumed). Note that Amazon EBS snapshots are incremental, so the storage used in any snapshot is generally much less than the storage consumed by an EBS volume.

Note that there is no charge for transferring information among the various AWS storage offerings (that is, an EC2 instance transferring information with Amazon EBS, Amazon S3, Amazon RDS, and so on), as long as the storage offerings are within the same AWS Region. You can find pricing information for Amazon EBS at the Amazon EBS pricing page.[37]

Amazon EC2 Instance Storage

Amazon EC2 instance store volumes (also called ephemeral drives) provide temporary block-level storage for many EC2 instance types.[38] This storage consists of a preconfigured and pre-attached block of disk storage on the same physical server that hosts the EC2 instance for
which the block provides storage. The amount of disk storage provided varies by EC2 instance type. In the EC2 instance families that provide instance storage, larger instances tend to provide both more and larger instance store volumes.

Note that some instance types, such as the micro instances (t1, t2) and the compute-optimized c4 instances, use EBS storage only, with no instance storage provided. Note also that instances using Amazon EBS for the root device (in other words, that boot from Amazon EBS) don't expose the instance store volumes by default. You can choose to expose the instance store volumes at instance launch time by specifying a block device mapping. For more information, see Block Device Mapping in the Amazon EC2 User Guide.[39]

AWS offers two EC2 instance families that are purpose-built for storage-centric workloads. Performance specifications of the storage-optimized (i2) and dense-storage (d2) instance families are outlined in the following table.

 | SSD-Backed Storage Optimized (i2) | HDD-Backed Dense Storage (d2)
Use Cases | NoSQL databases like Cassandra and MongoDB, scale-out transactional databases, data warehousing, Hadoop, and cluster file systems | Massively Parallel Processing (MPP) data warehousing, MapReduce and Hadoop distributed computing, distributed file systems, network file systems, log or data processing applications
Read Performance | 365,000 random IOPS | 3.5 GiB/s*
Write Performance | 315,000 random IOPS | 3.1 GiB/s*
Instance Store Max Capacity | 6.4 TiB SSD | 48 TiB HDD
Optimized For | Very high random IOPS | High disk throughput

* 2 MiB block size

Usage Patterns

In general, EC2 local instance store volumes are ideal for temporary storage of information that is continually changing, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. EC2 instance storage is well suited for this purpose. It
consists of the virtual machine's boot device (for instance store AMIs only), plus one or more additional volumes that are dedicated to the EC2 instance (for both Amazon EBS AMIs and instance store AMIs). This storage can only be used from a single EC2 instance during that instance's lifetime. Note that, unlike EBS volumes, instance store volumes cannot be detached or attached to another instance.

For high-I/O and high-storage workloads, use EC2 instance storage targeted to these use cases. High I/O instances (the i2 family) provide instance store volumes backed by SSD and are ideally suited for many high-performance database workloads. Example applications include NoSQL databases like Cassandra and MongoDB, clustered databases, and online transaction processing (OLTP) systems. High storage instances (the d2 family) support much higher storage density per EC2 instance and are ideally suited for applications that benefit from high sequential I/O performance across very large datasets. Example applications include data warehouses, Hadoop/MapReduce storage nodes, and parallel file systems.

Note that applications using instance storage for persistent data generally provide data durability through replication, or by periodically copying data to durable storage.

EC2 instance store volumes don't suit all storage situations. The following table presents some storage needs for which you should consider other AWS storage options.

Storage Need | Solution | AWS Services
Persistent storage | If you need persistent virtual disk storage similar to a physical disk drive, for files or other data that must persist longer than the lifetime of a single EC2 instance, EBS volumes, Amazon EFS file systems, or Amazon S3 are more appropriate. | Amazon EC2, Amazon EBS, Amazon EFS, Amazon S3
Relational database storage | In most cases, relational databases require storage that persists beyond the lifetime of a single EC2 instance, making EBS volumes the natural choice. | Amazon EC2, Amazon EBS
Shared storage | Instance store volumes are dedicated to a single EC2 instance and can't be shared with other systems or users. If you need storage that can be detached from one instance and attached to a different instance, or if you need the ability to share data easily, Amazon EFS, Amazon S3, or Amazon EBS are better choices. | Amazon EFS, Amazon S3, Amazon EBS
Snapshots | If you need the convenience, long-term durability, availability, and ability to share point-in-time disk snapshots, EBS volumes are a better choice. | Amazon EBS

Performance

The instance store volumes that are not SSD-based in most EC2 instance families have performance characteristics similar to standard EBS volumes. Because the EC2 instance virtual machine and the local instance store volumes are located on the same physical server, interaction with this storage is very fast, particularly for sequential access. To increase aggregate IOPS, or to improve sequential disk throughput, multiple instance store volumes can be grouped together using RAID 0 (disk striping) software. Because the bandwidth of the disks is not limited by the network, aggregate sequential throughput for multiple instance volumes can be higher than for the same number of EBS volumes.

Because of the way that EC2 virtualizes disks, the first write operation to any location on an instance store volume performs more slowly than subsequent writes. For most applications, amortizing this cost over the lifetime of the instance is acceptable. However, if you require high disk performance, we recommend that you prewarm your drives by writing once to every drive location before production use. The i2, r3, and hi1 instance types use direct-attached SSD backing that provides maximum performance at launch time, without prewarming.

Additionally, r3 and i2 instance store-backed volumes support the TRIM command on Linux instances. For these volumes, you can use TRIM to notify the SSD controller whenever you no
longer need data that you've written. This notification lets the controller free space, which can reduce write amplification and increase performance.

The SSD instance store volumes in EC2 high-I/O instances provide from tens of thousands to hundreds of thousands of low-latency, random 4 KB IOPS. Because of the I/O characteristics of SSD devices, write performance can be variable. For more information, see High I/O Instances in the Amazon EC2 User Guide.[40]

The instance store volumes in EC2 high-storage instances provide very high storage density and high sequential read and write performance. For more information, see High Storage Instances in the Amazon EC2 User Guide.[41]

Durability and Availability

Amazon EC2 local instance store volumes are not intended to be used as durable disk storage. Unlike Amazon EBS volume data, data on instance store volumes persists only during the life of the associated EC2 instance. This means that data on instance store volumes is persistent across orderly instance reboots, but if the EC2 instance is stopped and restarted, terminates, or fails, all data on the instance store volumes is lost. For more information on the lifecycle of an EC2 instance, see Instance Lifecycle in the Amazon EC2 User Guide.[42]

You should not use local instance store volumes for any data that must persist over time, such as permanent file or database storage, without providing data persistence by replicating data or periodically copying data to durable storage such as Amazon EBS or Amazon S3. Note that this usage recommendation also applies to the special-purpose SSD and high-density instance store volumes in the high-I/O and high-storage instance types.

Scalability and Elasticity

The number and storage capacity of Amazon EC2 local instance store volumes are fixed and defined by the instance type. Although you can't increase or decrease the number of instance store volumes on a single EC2
instance, this storage is still scalable and elastic; you can scale the total amount of instance store up or down by increasing or decreasing the number of running EC2 instances. To achieve full storage elasticity, include one of the other suitable storage options, such as Amazon S3, Amazon EFS, or Amazon EBS, in your Amazon EC2 storage strategy.

Security
IAM helps you securely control which users can perform operations, such as launch and termination of EC2 instances, in your account, and instance store volumes can be mounted and accessed only by the EC2 instances they belong to. Also, when you stop or terminate an instance, the applications and data in its instance store are erased, so no other instance can have access to the instance store in the future.

Access to an EC2 instance is controlled by the guest operating system. If you are concerned about the privacy of sensitive data stored in an instance store volume, we recommend encrypting your data for extra protection. You can do so by using your own encryption tools, or by using third-party encryption tools available on the AWS Marketplace.43

Interfaces
There is no separate management API for EC2 instance store volumes. Instead, instance store volumes are specified using the block device mapping feature of the Amazon EC2 API and the AWS Management Console. You cannot create or destroy instance store volumes, but you can control whether they are exposed to the EC2 instance and what device name is mapped to each volume.

There is also no separate data API for instance store volumes. Just like EBS volumes, instance store volumes present a block device interface to the EC2 instance. To the EC2 instance, an instance store volume appears just like a local disk drive. To write to and read data from instance store volumes, you use the native file system I/O interfaces of your chosen operating system. Note that in some cases, a local instance store volume device is
attached to an EC2 instance upon launch but must be formatted with an appropriate file system and mounted before use. Also, keep careful track of your block device mappings: there is no simple way for an application running on an EC2 instance to determine which block device is an instance store (ephemeral) volume and which is an EBS (persistent) volume.

Cost Model
The cost of an EC2 instance includes any local instance store volumes, if the instance type provides them. Although there is no additional charge for data storage on local instance store volumes, note that data transferred to and from Amazon EC2 instance store volumes from other Availability Zones or outside of an Amazon EC2 Region can incur data transfer charges, and additional charges apply for use of any persistent storage, such as Amazon S3, Amazon Glacier, Amazon EBS volumes, and Amazon EBS snapshots. You can find pricing information for Amazon EC2, Amazon EBS, and data transfer at the Amazon EC2 Pricing web page.44

AWS Storage Gateway
AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless and secure storage integration between an organization's on-premises IT environment and the AWS storage infrastructure.45 The service enables you to securely store data in the AWS Cloud for scalable and cost-effective storage. AWS Storage Gateway supports industry-standard storage protocols that work with your existing applications. It provides low-latency performance by maintaining frequently accessed data on-premises while securely storing all of your data encrypted in Amazon S3 or Amazon Glacier. For disaster recovery scenarios, AWS Storage Gateway, together with Amazon EC2, can serve as a cloud-hosted solution that mirrors your entire production environment.

You can download the AWS Storage Gateway software appliance as a virtual machine (VM) image that you install on a host in your data center or as an EC2
instance. Once you've installed your gateway and associated it with your AWS account through the AWS activation process, you can use the AWS Management Console to create gateway-cached volumes, gateway-stored volumes, or a gateway virtual tape library (VTL), each of which can be mounted as an iSCSI device by your on-premises applications.

With gateway-cached volumes, you can use Amazon S3 to hold your primary data, while retaining some portion of it locally in a cache for frequently accessed data. Gateway-cached volumes minimize the need to scale your on-premises storage infrastructure while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TiB in size and mount them as iSCSI devices from your on-premises application servers. Each gateway configured for gateway-cached volumes can support up to 20 volumes and total volume storage of 150 TiB. Data written to these volumes is stored in Amazon S3, with only a cache of recently written and recently read data stored locally on your on-premises storage hardware.

Gateway-stored volumes store your primary data locally, while asynchronously backing up that data to AWS. These volumes provide your on-premises applications with low-latency access to their entire datasets, while providing durable, off-site backups. You can create storage volumes up to 1 TiB in size and mount them as iSCSI devices from your on-premises application servers. Each gateway configured for gateway-stored volumes can support up to 12 volumes and total volume storage of 12 TiB. Data written to your gateway-stored volumes is stored on your on-premises storage hardware and asynchronously backed up to Amazon S3 in the form of Amazon EBS snapshots.

A gateway VTL allows you to perform offline data archiving by presenting your existing backup application with an iSCSI-based virtual tape library consisting of a virtual media changer and virtual tape drives.
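The per-gateway volume limits quoted above lend themselves to a quick feasibility check before provisioning. A sketch using the figures from this paper (these limits have changed in later generations of the service, so treat them as illustrative):

```python
# Feasibility check for a Storage Gateway volume plan, using the
# per-gateway limits quoted in this paper (illustrative; limits change).
LIMITS = {
    # mode: (max volume size in TiB, max volume count, max total TiB)
    "cached": (32, 20, 150),
    "stored": (1, 12, 12),
}

def check_plan(mode, volume_sizes_tib):
    """Return a list of problems with a proposed set of volumes (empty if OK)."""
    max_size, max_count, max_total = LIMITS[mode]
    problems = []
    if len(volume_sizes_tib) > max_count:
        problems.append(f"too many volumes: {len(volume_sizes_tib)} > {max_count}")
    for size in volume_sizes_tib:
        if size > max_size:
            problems.append(f"volume of {size} TiB exceeds {max_size} TiB limit")
    if sum(volume_sizes_tib) > max_total:
        problems.append(f"total {sum(volume_sizes_tib)} TiB exceeds {max_total} TiB")
    return problems

print(check_plan("cached", [32, 32, 32]))  # -> []
print(check_plan("stored", [2]))           # one problem: volume too large
```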
You can create virtual tapes in your VTL by using the AWS Management Console, and you can size each virtual tape from 100 GiB to 2.5 TiB. A VTL can hold up to 1,500 virtual tapes, with a maximum aggregate capacity of 150 TiB. Once the virtual tapes are created, your backup application can discover them by using its standard media inventory procedure. Once created, tapes are available for immediate access and are stored in Amazon S3.

Virtual tapes that you need to access frequently should be stored in a VTL. Data that you don't need to retrieve frequently can be archived to your virtual tape shelf (VTS), which is stored in Amazon Glacier, further reducing your storage costs.

Usage Patterns
Organizations are using AWS Storage Gateway to support a number of use cases. These use cases include corporate file sharing, enabling existing on-premises backup applications to store primary backups on Amazon S3, disaster recovery, and mirroring data to cloud-based compute resources and then later archiving it to Amazon Glacier.

Performance
Because the AWS Storage Gateway VM sits between your application, Amazon S3, and your underlying on-premises storage, the performance you experience depends upon a number of factors. These factors include the speed and configuration of your underlying local disks, the network bandwidth between your iSCSI initiator and the gateway VM, the amount of local storage allocated to the gateway VM, and the bandwidth between the gateway VM and Amazon S3. For gateway-cached volumes, to provide low-latency read access to your on-premises applications, it's important that you provide enough local cache storage to hold your recently accessed data. The AWS Storage Gateway documentation provides guidance on how to optimize your environment setup for best performance, including how to properly size your local storage.46

AWS Storage Gateway efficiently uses your Internet bandwidth to speed up
the upload of your on-premises application data to AWS. AWS Storage Gateway only uploads data that has changed, which minimizes the amount of data sent over the Internet. To further increase throughput and reduce your network costs, you can also use AWS Direct Connect to establish a dedicated network connection between your on-premises gateway and AWS.47

Durability and Availability
AWS Storage Gateway durably stores your on-premises application data by uploading it to Amazon S3 or Amazon Glacier. Both of these AWS services store data in multiple facilities and on multiple devices within each facility, and are designed to provide an average annual durability of 99.999999999 percent (11 nines). They also perform regular, systematic data integrity checks and are built to be automatically self-healing.

Scalability and Elasticity
In both gateway-cached and gateway-stored volume configurations, AWS Storage Gateway stores data in Amazon S3, which has been designed to offer a very high level of scalability and elasticity automatically. Unlike a typical file system that can encounter issues when storing a large number of files in a directory, Amazon S3 supports a virtually unlimited number of files in any bucket. Also, unlike a disk drive that has a limit on the total amount of data that can be stored before you must partition the data across drives or servers, an Amazon S3 bucket can store a virtually unlimited number of bytes. You are able to store any number of objects, and Amazon S3 will manage scaling and distributing redundant copies of your information onto other servers in other locations in the same region, all using Amazon's high-performance infrastructure.

In a gateway VTL configuration, AWS Storage Gateway stores data in Amazon S3 or Amazon Glacier, providing a virtual tape infrastructure that scales seamlessly with your business needs and eliminates the operational burden of provisioning, scaling, and maintaining a physical tape infrastructure.
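The 11-nines durability figure quoted above corresponds to an average annual loss probability of about 1 in 100 billion per object, which a quick back-of-the-envelope calculation puts in perspective:

```python
# Back-of-the-envelope: expected annual object loss at 11-nines durability.
durability = 0.99999999999          # 99.999999999% average annual durability
annual_loss_rate = 1 - durability   # probability a given object is lost in a year

for stored_objects in (10_000, 10_000_000, 10_000_000_000):
    expected_losses = stored_objects * annual_loss_rate
    print(f"{stored_objects:>14,} objects -> {expected_losses:.6f} expected losses/year")
```

At this design target, even ten billion stored objects work out to roughly one expected loss per decade on average.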
Security
IAM helps you provide security in controlling access to AWS Storage Gateway. With IAM, you can create multiple IAM users under your AWS account. The AWS Storage Gateway API enables a list of actions each IAM user can perform on AWS Storage Gateway.48

AWS Storage Gateway encrypts all data in transit to and from AWS by using SSL. All volume and snapshot data stored in AWS using gateway-stored or gateway-cached volumes, and all virtual tape data stored in AWS using a gateway VTL, is encrypted at rest using AES-256, a secure symmetric-key encryption standard using 256-bit encryption keys. Storage Gateway supports authentication between your gateway and iSCSI initiators by using the Challenge-Handshake Authentication Protocol (CHAP).

Interfaces
The AWS Management Console can be used to download the AWS Storage Gateway VM on-premises or onto an EC2 instance (an AMI that contains the gateway VM image). You can then select between a gateway-cached, gateway-stored, or gateway-VTL configuration, and activate your storage gateway by associating your gateway's IP address with your AWS account. All the detailed steps for AWS Storage Gateway deployment can be found in Getting Started in the AWS Storage Gateway User Guide.49

The integrated AWS CLI also provides a set of high-level, Linux-like commands for common operations of the AWS Storage Gateway service. You can also use the AWS SDKs to develop applications that interact with AWS Storage Gateway. The AWS SDKs for Java, .NET, JavaScript, Node.js, Ruby, PHP, and Go wrap the underlying AWS Storage Gateway API to simplify your programming tasks.

Cost Model
With AWS Storage Gateway, you pay only for what you use. AWS Storage Gateway has the following pricing components: gateway usage (per gateway per month), snapshot storage usage (per GB per month), volume storage usage (per GB per month), virtual tape shelf storage (per GB per month), virtual tape library storage (per GB per month), retrieval from the virtual tape shelf (per GB), and data transfer out (per GB per month).
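A monthly bill under this model is just a sum of rate times usage over the components listed above. A sketch with placeholder rates (these are not real AWS prices; consult the AWS Storage Gateway pricing page for current figures):

```python
# Sketch of a monthly Storage Gateway estimate from the pricing components
# listed above. All rates are placeholders, NOT real AWS prices.
RATES = {
    "gateway_per_month": 125.00,  # hypothetical per-gateway fee
    "snapshot_gb_month": 0.05,    # hypothetical snapshot storage rate
    "volume_gb_month": 0.025,     # hypothetical volume storage rate
    "vts_gb_month": 0.01,         # hypothetical virtual tape shelf rate
    "vtl_gb_month": 0.025,        # hypothetical virtual tape library rate
    "vts_retrieval_gb": 0.30,     # hypothetical retrieval rate
    "transfer_out_gb": 0.09,      # hypothetical data transfer out rate
}

def monthly_cost(usage):
    """usage maps a component name to its units (gateways or GB)."""
    return sum(RATES[component] * units for component, units in usage.items())

estimate = monthly_cost({
    "gateway_per_month": 1,
    "volume_gb_month": 2000,   # 2 TB of volume data
    "snapshot_gb_month": 500,
    "transfer_out_gb": 100,
})
print(f"Estimated monthly cost: ${estimate:.2f}")
```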
You can find pricing information at the AWS Storage Gateway pricing page.50

AWS Snowball
AWS Snowball accelerates moving large amounts of data into and out of AWS using secure Snowball appliances.51 The Snowball appliance is purpose-built for efficient data storage and transfer. All AWS Regions have 80 TB Snowballs, while US Regions have both 50 TB and 80 TB models. The Snowball appliance is rugged enough to withstand an 8.5 G jolt. At less than 50 pounds, the appliance is light enough for one person to carry. It is entirely self-contained, with a power cord, one RJ45 1 GigE and two SFP+ 10 GigE network connections on the back, and an E Ink display and control panel on the front. Each Snowball appliance is water-resistant and dustproof, and serves as its own rugged shipping container.

AWS transfers your data directly onto and off of Snowball storage devices using Amazon's high-speed internal network, bypassing the Internet. For datasets of significant size, Snowball is often faster than Internet transfer and more cost-effective than upgrading your connectivity. AWS Snowball supports importing data into and exporting data from Amazon S3 buckets. From there, the data can be copied or moved to other AWS services, such as Amazon EBS and Amazon Glacier, as desired.

Usage Patterns
Snowball is ideal for securely transferring anywhere from terabytes to many petabytes of data in and out of the AWS Cloud. This is especially beneficial in cases where you don't want to make expensive upgrades to your network infrastructure, or in areas where high-speed Internet connections are not available or are cost-prohibitive. In general, if loading your data over the Internet would take a week or more, you should consider using Snowball. Common use cases include cloud migration, disaster recovery, data center decommission, and content distribution.
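The week-or-more rule of thumb above is easy to turn into arithmetic. A rough estimator, where link speed and sustained utilization are assumptions you would measure in practice:

```python
# Rough rule of thumb: how long would an online transfer take, and does it
# cross the ~1 week threshold where Snowball becomes attractive?
def transfer_days(data_tb, link_gbps, utilization=0.8):
    """Days to move data_tb terabytes over a link_gbps link at the given
    sustained utilization (a decimal fraction)."""
    bits = data_tb * 1e12 * 8                      # decimal TB -> bits
    seconds = bits / (link_gbps * 1e9 * utilization)
    return seconds / 86400

for tb in (10, 100, 500):
    days = transfer_days(tb, link_gbps=1)          # a 1 Gbps Internet link
    verdict = "consider Snowball" if days >= 7 else "online transfer is fine"
    print(f"{tb:>4} TB over 1 Gbps: {days:6.1f} days -> {verdict}")
```

On a 1 Gbps link at 80 percent utilization, roughly 60 TB is the point where the transfer crosses a week, so anything in the hundreds of terabytes lands firmly in Snowball territory.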
When you decommission a data center, many steps are involved to make sure valuable data is not lost, and Snowball can help ensure that data is securely and cost-effectively transferred to AWS. In a content distribution scenario, you might use Snowball appliances if you regularly receive, or need to share, large amounts of data with clients, customers, or business associates. Snowball appliances can be sent directly from AWS to client or customer locations. Snowball might not be the ideal solution if your data can be transferred over the Internet in less than one week.

Performance
The Snowball appliance is purpose-built for efficient data storage and transfer, including a high-speed, 10 Gbps network connection designed to minimize data transfer times, allowing you to transfer up to 80 TB of data from your data source to the appliance in 2.5 days, plus shipping time. In this case, the end-to-end time to transfer the data into AWS is approximately a week, including default shipping and handling time to AWS data centers. Copying 160 TB of data can be completed in the same amount of time by using two 80 TB Snowballs in parallel. You can use the Snowball client to estimate the time it takes to transfer your data (refer to the AWS Import/Export User Guide for more details).52

In general, you can improve your transfer speed from your data source to the Snowball appliance by reducing local network use, eliminating unnecessary hops between the Snowball appliance and the workstation, using a powerful computer as your workstation, and combining smaller objects. Parallelization can also help achieve maximum performance of your data transfer. This could involve one or more of the following parallelization types: using multiple instances of the Snowball client on a single workstation with a single Snowball appliance; using multiple instances of the Snowball client on multiple workstations with a single Snowball appliance; and/or using multiple instances of the Snowball client on multiple workstations with multiple Snowball appliances.
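The parallelization options above all amount to running several independent copy streams at once. A toy stand-in using a thread pool over local files (a real transfer would run multiple Snowball client instances or adapter connections instead):

```python
# Toy illustration of parallel copy streams, as in the Snowball
# parallelization tips above. Local temp files stand in for real data.
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

src = Path(tempfile.mkdtemp(prefix="src_"))
dst = Path(tempfile.mkdtemp(prefix="dst_"))
for i in range(8):                        # eight small stand-in "objects"
    (src / f"object_{i}.bin").write_bytes(b"x" * 1024)

def copy_one(path: Path) -> str:
    shutil.copy2(path, dst / path.name)   # one independent copy stream
    return path.name

# Four workers stand in for four parallel client instances.
with ThreadPoolExecutor(max_workers=4) as pool:
    copied = list(pool.map(copy_one, src.glob("*.bin")))

print(f"copied {len(copied)} objects with 4 parallel streams")
shutil.rmtree(src)
shutil.rmtree(dst)
```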
Durability and Availability
Once the data is imported to AWS, the durability and availability characteristics of the target storage apply. Amazon S3 is designed for 99.999999999 percent (11 nines) durability and 99.99 percent availability.

Scalability and Elasticity
Each AWS Snowball appliance is capable of storing 50 TB or 80 TB of data. If you want to transfer more data than that, you can use multiple appliances. For Amazon S3, individual files are loaded as objects and can range up to 5 TB in size, but you can load any number of objects into Amazon S3. The aggregate total amount of data that can be imported is virtually unlimited.

Security
You can integrate Snowball with IAM to control which actions a user can perform.53 You can give the IAM users on your AWS account access to all Snowball actions or to a subset of them. Similarly, an IAM user that creates a Snowball job must have permissions to access the Amazon S3 buckets that will be used for the import operations.

For Snowball, AWS KMS protects the encryption keys used to protect data on each Snowball appliance. All data loaded onto a Snowball appliance is encrypted using 256-bit encryption. Snowball is physically secured by using an industry-standard Trusted Platform Module (TPM) that uses a dedicated processor designed to detect any unauthorized modifications to the hardware, firmware, or software. Snowball is included in the AWS HIPAA compliance program, so you can use Snowball to transfer large amounts of Protected Health Information (PHI) data into and out of AWS.54

Interfaces
There are two ways to get started with Snowball. You can create an import or export job using the AWS Snowball Management Console, or you can use the Snowball Job Management API and integrate AWS Snowball as a part of your data management solution. The primary
functions of the API are to create, list, and describe import and export jobs; it uses a simple, standards-based REST web services interface. For more details on using the Snowball Job Management API, see the API Reference documentation.55

You also have two ways to locally transfer data between a Snowball appliance and your on-premises data center. The Snowball client, available as a download from the AWS Import/Export Tools page, is a standalone terminal application that you run on your local workstation to do your data transfer.56 You use simple copy (cp) commands to transfer data, and handling errors and logs are written to your local workstation for troubleshooting and auditing.

The second option is the Amazon S3 Adapter for Snowball, which is also available as a download from the AWS Import/Export Tools page. You can programmatically transfer data between your on-premises data center and a Snowball appliance using a subset of the Amazon S3 REST API commands. This allows you to have direct access to a Snowball appliance as if it were an Amazon S3 endpoint; for example, you can reference the appliance as the endpoint when executing an AWS CLI S3 list command. By default, the adapter runs on port 8080, but a different port can be specified by changing the adapter.config file.

The following example steps you through how to implement a Snowball appliance to import your data into AWS using the AWS Snowball Management Console.

1. To start, sign in to the AWS Snowball Management Console and create a job.
2. AWS then prepares a Snowball appliance for your job.
3. The Snowball appliance is shipped to you through a regional shipping carrier (UPS in all AWS Regions except India, which uses Amazon Logistics). You can find your tracking number and a link to the tracking website on the AWS Snowball Management Console.
4. A few days later, the regional shipping carrier delivers the Snowball appliance to the address you provided when you created the job.
5. Next, get ready to transfer your data by downloading your credentials, your job manifest, and the manifest's unlock code from the AWS Management Console, and by downloading the Snowball client. The Snowball client is the tool that you'll use to manage the flow of data from your on-premises data source to the Snowball appliance.
6. Install the Snowball client on the computer workstation that has your data source mounted on it.
7. Move the Snowball appliance into your data center, open it, and connect it to power and your local network.
8. Power on the Snowball appliance and start the Snowball client. You provide the IP address of the Snowball appliance, the path to your manifest, and the unlock code. The Snowball client decrypts the manifest and uses it to authenticate your access to the Snowball appliance.
9. Use the Snowball client to transfer the data that you want to import into Amazon S3 from your data source into the Snowball appliance.
10. After your data transfer is complete, power off the Snowball appliance and unplug its cables. The E Ink shipping label automatically updates to show the correct AWS facility to ship to. You can track job status by using Amazon SNS, text messages, or directly in the console.
11. The regional shipping carrier returns the Snowball appliance to AWS.
12. AWS receives the Snowball appliance and imports your data into Amazon S3. On average, it takes about a day for AWS to begin importing your data into Amazon S3, and the import can take a few days. If there are any complications or issues, we contact you through email.

Once the data transfer job has been processed and verified, AWS performs a software erasure of the Snowball appliance that follows the National Institute of Standards and Technology (NIST) 800-88 guidelines for media sanitization.

Cost Model
With Snowball, as with most other AWS services, you pay only for
what you use. Snowball has three pricing components: a service fee (per job), extra-day charges as required (the first 10 days of onsite usage are free), and data transfer. For the destination storage, the standard Amazon S3 storage pricing applies. You can find pricing information at the AWS Snowball Pricing page.57

Amazon CloudFront
Amazon CloudFront is a content delivery web service that speeds up the distribution of your website's dynamic, static, and streaming content by making it available from a global network of edge locations.58 When a user requests content that you're serving with Amazon CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so content is delivered with better performance than if the user had accessed the content from a data center farther away. If the content is already in the edge location with the lowest latency, Amazon CloudFront delivers it immediately. If the content is not currently in that edge location, Amazon CloudFront retrieves it from an Amazon S3 bucket or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content. Amazon CloudFront caches content at edge locations for a period of time that you specify.

Amazon CloudFront supports all files that can be served over HTTP. These files include dynamic web pages, such as HTML or PHP pages, and any popular static files that are a part of your web application, such as website images, audio, video, media files, or software downloads. For on-demand media files, you can also choose to stream your content using Real-Time Messaging Protocol (RTMP) delivery. Amazon CloudFront also supports delivery of live media over HTTP.

Amazon CloudFront is optimized to work with other Amazon web services, such as Amazon S3, Amazon EC2, Elastic Load Balancing, and Amazon Route 53. Amazon CloudFront also works seamlessly with any non-AWS origin servers that
store the original, definitive versions of your files.

Usage Patterns
CloudFront is ideal for the distribution of frequently accessed static content that benefits from edge delivery, such as popular website images, videos, media files, or software downloads. Amazon CloudFront can also be used to deliver dynamic web applications over HTTP. These applications can include static content, dynamic content, or a whole site with a mixture of the two. Amazon CloudFront is also commonly used to stream audio and video files to web browsers and mobile devices. To get a better understanding of your end-user usage patterns, you can use Amazon CloudFront reports.59

If you need to remove an object from Amazon CloudFront edge server caches before it expires, you can either invalidate the object or use object versioning to serve a different version of the object that has a different name.60 61 Additionally, it might be better to serve infrequently accessed data directly from the origin server, avoiding the additional cost of origin fetches for data that is not likely to be reused at the edge; however, origin fetches from Amazon S3 are free.

Performance
Amazon CloudFront is designed for low-latency and high-bandwidth delivery of content. Amazon CloudFront speeds up the distribution of your content by routing end users to the edge location that can best serve each end user's request in a worldwide network of edge locations. Typically, requests are routed to the nearest Amazon CloudFront edge location in terms of latency. This approach dramatically reduces the number of networks that your users' requests must pass through, and improves performance. Users get both lower latency (here, latency is the time it takes to load the first byte of an object) and the higher sustained data transfer rates needed to deliver popular objects at scale.

Durability and Availability
Because a CDN is an edge cache, Amazon CloudFront does not provide durable storage. The origin server, such as Amazon S3 or a web server running on Amazon EC2, provides the durable file storage needed.
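The object-versioning alternative to invalidation, mentioned under Usage Patterns above, is commonly implemented by embedding a short content hash in the object name: a changed file gets a new name, so stale cached copies are simply never referenced again. A minimal sketch (the naming convention is illustrative):

```python
# Cache-busting via versioned object names: a changed file gets a new
# name, so edge caches never serve a stale copy of it.
import hashlib

def versioned_name(name: str, content: bytes) -> str:
    """Insert a short content hash before the file extension."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    stem, dot, ext = name.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{name}.{digest}"

v1 = versioned_name("site.css", b"body { color: black; }")
v2 = versioned_name("site.css", b"body { color: navy; }")
print(v1, v2)
assert v1 != v2  # changed content -> new object name, no invalidation needed
```

References to the object (for example, in HTML) are updated to the new name at deploy time, which sidesteps invalidation requests entirely.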
Amazon CloudFront provides high availability by using a distributed global network of edge locations. Origin requests from the edge locations to AWS origin servers (for example, Amazon EC2, Amazon S3, and so on) are carried over network paths that Amazon constantly monitors and optimizes for both availability and performance. This edge network provides increased reliability and availability because there is no longer a central point of failure. Copies of your files are now held in edge locations around the world.

Scalability and Elasticity
Amazon CloudFront is designed to provide seamless scalability and elasticity. You can easily start very small and grow to massive numbers of global connections. With Amazon CloudFront, you don't need to worry about maintaining expensive web server capacity to meet the demand from potential traffic spikes for your content. The service automatically responds as demand spikes and fluctuates for your content, without any intervention from you. Amazon CloudFront also uses multiple layers of caching at each edge location and collapses simultaneous requests for the same object before contacting your origin server. These optimizations further reduce the need to scale your origin infrastructure as your website becomes more popular.

Security
Amazon CloudFront is a very secure service for distributing your data. It integrates with IAM so that you can create users for your AWS account and specify which Amazon CloudFront actions a user (or a group of users) can perform in your AWS account. You can configure Amazon CloudFront to create log files that contain detailed information about every user request that Amazon CloudFront receives. These access logs are available for both web and RTMP distributions.62 Additionally, Amazon CloudFront integrates with Amazon CloudWatch metrics so that you can monitor your website or application.63
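The access logs mentioned above are plain tab-separated text, so they are easy to post-process. A minimal parser sketch (the field list follows the documented web-distribution log format, but verify it against the AccessLogs documentation before relying on it):

```python
# Minimal parse of a CloudFront web-distribution access log line.
# Field order here is an assumption based on the documented format;
# confirm against the AccessLogs documentation before relying on it.
FIELDS = ["date", "time", "x-edge-location", "sc-bytes", "c-ip",
          "cs-method", "cs(Host)", "cs-uri-stem", "sc-status"]

def parse_line(line: str) -> dict:
    values = line.split("\t")
    return dict(zip(FIELDS, values))  # extra trailing fields are ignored

sample = ("2017-01-01\t12:00:00\tIAD53\t2048\t192.0.2.10\tGET\t"
          "d111.cloudfront.net\t/index.html\t200")
entry = parse_line(sample)
print(entry["cs-uri-stem"], entry["sc-status"])  # /index.html 200
```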
Interfaces
You can manage and configure Amazon CloudFront in several ways. The AWS Management Console provides an easy way to manage Amazon CloudFront and supports all features of the Amazon CloudFront API. For example, you can enable or disable distributions, configure CNAMEs, and enable end-user logging using the console. You can also use the Amazon CloudFront command-line tools, the native REST API, or one of the supported SDKs.

There is no data API for Amazon CloudFront and no command to preload data. Instead, data is automatically pulled into Amazon CloudFront edge locations on the first access of an object from that location. Clients access content from CloudFront edge locations using either HTTP or HTTPS from locations across the Internet; these protocols are configurable as part of a given CloudFront distribution.

Cost Model
With Amazon CloudFront, there are no long-term contracts or required minimum monthly commitments; you pay only for as much content as you actually deliver through the service. Amazon CloudFront has two pricing components: regional data transfer out (per GB) and requests (per 10,000). As part of the Free Usage Tier, new AWS customers aren't charged for 50 GB of data transfer out and 2,000,000 HTTP and HTTPS requests each month for one year. Note that if you use an AWS service as the origin (for example, Amazon S3, Amazon EC2, Elastic Load Balancing, or others), data transferred from the origin to edge locations (that is, Amazon CloudFront "origin fetches") is free of charge. For web distributions, data transfer out of Amazon CloudFront to your origin server is billed at the "Regional Data Transfer Out of Origin" rates.

CloudFront provides three different price classes according to where your content needs to be distributed. If you don't need your content to be distributed globally, but only within certain locations such as the US and Europe, you can lower the prices you pay to deliver by choosing a price class that includes only these locations.
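Given the two pricing components and the Free Usage Tier allowances above, a monthly estimate is simple arithmetic. A sketch with placeholder rates (not real AWS prices; the free-tier figures are the ones quoted in this paper):

```python
# Sketch of a CloudFront monthly bill from the two pricing components
# above. Rates are placeholders, NOT real prices; the free-tier figures
# (50 GB out, 2,000,000 requests per month) are the ones in this paper.
RATE_PER_GB = 0.085        # hypothetical regional data-transfer-out rate
RATE_PER_10K_REQ = 0.0075  # hypothetical rate per 10,000 requests

def monthly_bill(gb_out, requests, free_tier=False):
    if free_tier:  # first-year allowance for new customers
        gb_out = max(0, gb_out - 50)
        requests = max(0, requests - 2_000_000)
    return gb_out * RATE_PER_GB + (requests / 10_000) * RATE_PER_10K_REQ

print(f"${monthly_bill(500, 10_000_000):.2f}")                  # regular account
print(f"${monthly_bill(500, 10_000_000, free_tier=True):.2f}")  # first year
```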
Although there are no long-term contracts or required minimum monthly commitments, CloudFront offers an optional reserved capacity plan that gives you the option to commit to a minimum monthly usage level for 12 months or longer and, in turn, receive a significant discount. You can find pricing information at the Amazon CloudFront pricing page.64

Conclusion
Cloud storage is a critical component of cloud computing because it holds the information used by applications. Big data analytics, data warehouses, Internet of Things, databases, and backup and archive applications all rely on some form of data storage architecture. Cloud storage is typically more reliable, scalable, and secure than traditional on-premises storage systems.

AWS offers a complete range of cloud storage services to support both application and archival compliance requirements. This whitepaper provides guidance for understanding the different storage services and features available in the AWS Cloud. Usage patterns, performance, durability and availability, scalability and elasticity, security, interfaces, and cost models are outlined and described for these cloud storage services. While this gives you a better understanding of the features and characteristics of these cloud services, it is crucial for you to understand your workloads and requirements, and then decide which storage service is best suited for your needs.

Contributors
The following individuals contributed to this document:
• Darryl S. Osborne, Solutions Architect, Amazon Web Services
• Shruti Worlikar, Solutions Architect, Amazon Web Services
• Fabio Silva, Solutions Architect, Amazon Web Services

References and Further Reading
AWS Storage Services
• Amazon S3 65
• Amazon Glacier 66
• Amazon EFS 67
• Amazon EBS 68
• Amazon EC2
Instance Store 69
• AWS Storage Gateway 70
• AWS Snowball 71
• Amazon CloudFront 72

Other Resources
• AWS SDKs, IDE Toolkits, and Command Line Tools 73
• Amazon Web Services Simple Monthly Calculator 74
• Amazon Web Services Blog 75
• Amazon Web Services Forums 76
• AWS Free Usage Tier 77
• AWS Case Studies 78

Notes
1. https://aws.amazon.com/s3/
2. https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html
3. http://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
4. http://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-overview.html#access-control-resources-manage-permissions-basics
5. http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
6. http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
7. http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html#MultiFactorAuthenticationDelete
8. http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html
9. http://aws.amazon.com/sns/
10. http://aws.amazon.com/sqs/
11. http://aws.amazon.com/lambda/
12. http://aws.amazon.com/free/
13. http://aws.amazon.com/s3/pricing/
14. http://aws.amazon.com/glacier/
15. http://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-archive-mpu.html
16. http://docs.aws.amazon.com/amazonglacier/latest/dev/downloading-an-archive.html#downloading-an-archive-range
17. https://aws.amazon.com/iam/
18. http://aws.amazon.com/cloudtrail/
19. http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
20. http://aws.amazon.com/glacier/pricing/
21. http://aws.amazon.com/efs/
22. http://docs.aws.amazon.com/efs/latest/ug/how-it-works.html
23. http://docs.aws.amazon.com/efs/latest/ug/monitoring-cloudwatch.html#efs-metrics
24. http://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html
25. http://docs.aws.amazon.com/efs/latest/ug/mounting-fs-mount-cmd-general.html
26. http://docs.aws.amazon.com/efs/latest/ug/security-considerations.html
27. http://aws.amazon.com/efs/pricing/
28.
http://awsamazoncom/ebs/ 29 https://awsamazoncom/ebs/previous generation/ 30 http://docsawsamazoncom/AWSEC2/latest/UserGuide/EBSVolumeTypesht ml#monitoring_burstbucket 31 http://docsawsamazoncom/AWSEC2/latest/UserGuide/EBSVolumeTypesht ml#monitoring_burstbucket 32 http://docsawsamazoncom/AWSEC2/latest/UserGuide/EBSVolumeTypesht ml#monitoring_burstbucket 33 http://docsawsamazoncom/AWSEC2/latest/UserGuide/ebs initializ ehtml 34 https://awsamazoncom/ebs/details/ 35 https://awsamazoncom/kms/ 36 http://docsawsamazoncom/AWSEC2/latest/UserGuide/EBSEncryptionhtml 37 http://awsamazoncom/ebs/pricing/ 38 http://docsawsamazoncom/AWSEC2/latest/UserGuide/InstanceStoragehtm l 39 http://docsawsamazoncom/AWSEC2/latest/WindowsGuide/block device mapping conceptshtml 40 http://docsawsamazoncom/AWSEC2/latest/UserGuide/i2 instanceshtml 41 http://docsawsamazoncom/AWSEC2/latest/UserGuide/high_storage_instan ceshtml ArchivedAmazon Web Services – AWS Storage Services Overview Page 47 42 http://docsawsamazoncom/AWSEC2/latest/UserGuide/ec2 instance lifecyclehtml 43 https://awsamazoncom/marketplace 44 http://awsamazoncom/ec2/pricing/ 45 http://awsamazoncom/storagegateway/ 46 http://docsawsamazoncom/storagegateway/latest/userguide/Wh atIsStorage Gatewayhtml 47 http://awsamazoncom/directconnect/ 48 http://docsawsamazoncom/storagegateway/latest/userguide/AWSStorageGa tewayAPIhtml 49 http://docsawsamazoncom/storagegateway/latest/userguide/GettingStarted commonhtml 50 http://awsamazoncom/storagegateway/pricing/ 51 https://awsamazoncom/importexport/ 52 http://awsamazoncom/importexport/tools/ 53 http://docsawsamazoncom/AWSImportE xport/latest/DG/auth access controlhtml 54 https://awsamazoncom/about aws/whats new/2016/11/aws snowball now ahipaa eligible service/ 55 https://docsawsamazoncom/AWSImportExport/latest/ug/api referencehtml 56 https://awsamazoncom/importexport/tools/ 57 http://awsamazoncom/importexport/pricing/ 58 http://awsamazoncom/cloudfront/pricing/ 59 
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/reports.html
60. http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html
61. http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ReplacingObjects.html
62. http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html
63. http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/monitoring-using-cloudwatch.html
64. http://aws.amazon.com/cloudfront/pricing/
65. http://aws.amazon.com/s3/
66. http://aws.amazon.com/glacier/
67. http://aws.amazon.com/efs/
68. http://aws.amazon.com/ebs/
69. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
70. http://aws.amazon.com/storagegateway/
71. http://aws.amazon.com/snowball
72. http://aws.amazon.com/cloudfront/
73. http://aws.amazon.com/tools/
74. http://calculator.s3.amazonaws.com/index.html
75. https://aws.amazon.com/blogs/aws/
76. https://forums.aws.amazon.com/index.jspa
77. http://aws.amazon.com/free/
78. http://aws.amazon.com/solutions/case-studies/

AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines

April 2020

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any
agreement between AWS and its customers. © 2020, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Overview
Security and the Shared Responsibility Model
Security IN the Cloud
Security OF the Cloud
AWS Compliance Assurance Programs
AWS Artifact
AWS Regions
Hong Kong Insurance Authority Guideline on Outsourcing (GL14)
Prior Notification of Material Outsourcing
Outsourcing Policy
Outsourcing Agreement
Information Confidentiality
Monitoring and Control
Contingency Planning
Hong Kong Insurance Authority Guideline on the Use of Internet for Insurance Activities (GL8)
Next Steps
Additional Resources
Document Revisions

About this Guide

This document provides information to assist Authorized Insurers (AIs) in Hong Kong regulated by the Hong Kong Insurance Authority (IA) as they accelerate their use of Amazon Web Services (AWS) Cloud services.

Overview

The Hong Kong Insurance Authority (IA) issues guidelines to provide the Hong Kong insurance industry with practical guidance to facilitate compliance with regulatory requirements. The guidelines relevant to the use of outsourced services instruct Authorized Insurers (AIs) to perform materiality assessments and risk assessments, perform due diligence reviews of service providers, ensure controls are in place to preserve information confidentiality, maintain sufficient monitoring and control oversight of the outsourcing arrangement, and establish contingency arrangements. The following sections provide considerations for AIs as they assess their responsibilities with regard to the following guidelines:
• Guideline on Outsourcing (GL14) – This guideline sets out the IA's supervisory approach to outsourcing and the major points that the IA recommends AIs address when outsourcing their activities, including the use of cloud services.
• Guideline on the Use of Internet for Insurance Activities (GL8) – This guideline outlines the specific points that AIs (and other groups regulated by the IA) need to be aware of when engaging in internet-based insurance activities.

For a full list of the IA guidelines, see the Guidelines section of Legislative and Regulatory Framework on the IA website.

Security and the Shared Responsibility Model

Cloud security is a shared responsibility. At AWS, we maintain a high bar for security OF the cloud through robust governance, automation, and testing, and validate our approach through compliance with global and regional regulatory requirements and best practices. Security IN the cloud is the responsibility of the customer. This means that customers retain control of the security program they choose to implement to protect their own content, platform, applications, systems, and networks. Customers should carefully consider how they will manage the services they choose, as their responsibilities vary depending on the services they use, the integration of those services into their IT environments, and applicable laws and regulations. We recommend that customers think about their security responsibilities on a service-by-service basis, because the extent of their responsibilities may differ between services.

Figure 1 – Shared Responsibility Model

Security IN the Cloud

Customers are responsible for their security in the cloud. For services such as Amazon Elastic Compute Cloud (EC2), the customer is responsible for managing the guest operating system (including installing updates and security patches) and other associated application software, as well as the configuration of the AWS-provided security group firewall. Customers can also use managed services such as databases, directory, and web application firewall services, which provide customers the
resources they need to perform specific tasks without having to launch and maintain virtual machines. For example, a customer can launch an Amazon Aurora database, which Amazon Relational Database Service (RDS) manages to handle tasks such as provisioning, patching, backup, recovery, failure detection, and repair. It is important to note that when using AWS services, customers maintain control over their content and are responsible for managing critical content security requirements, including:
• The content that they choose to store on AWS
• The AWS services that are used with the content
• The country where their content is stored
• The format and structure of their content and whether it is masked, anonymized, or encrypted
• How their content is encrypted and where the keys are stored
• Who has access to their content and how those access rights are granted, managed, and revoked

Because customers, rather than AWS, control these important factors, customers retain responsibility for their choices. Customers are responsible for the security of the content they put on AWS or that they connect to their AWS infrastructure, such as the guest operating system, applications on their compute instances, and content stored and processed in AWS storage platforms, databases, or other services.

Security OF the Cloud

For many services, such as EC2, AWS operates, manages, and controls the IT components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate. In order to provide assurance about security of the AWS Cloud, we continuously audit our environment. AWS infrastructure and services are validated against multiple compliance standards and industry certifications across geographies and industries. Customers can use the AWS compliance certifications to validate the implementation and effectiveness of AWS security controls, including internationally recognized security best practices and certifications. The AWS compliance program is based on the following actions:
• Validate that AWS services and facilities across the globe maintain a ubiquitous control environment that is operating effectively. The AWS control environment encompasses the people, processes, and technology necessary to establish and maintain an environment that supports the operating effectiveness of the AWS control framework. AWS has integrated applicable cloud-specific controls identified by leading cloud computing industry bodies into the AWS control framework. AWS monitors these industry groups to identify leading practices that can be implemented, and to better assist customers with managing their control environment.
• Demonstrate the AWS compliance posture to help customers verify compliance with industry and government requirements. AWS engages with external certifying bodies and independent auditors to provide customers with information regarding the policies, processes, and controls established and operated by AWS. Customers can use this information to perform their control evaluation and verification procedures, as required under the applicable compliance standard.
• Monitor that AWS maintains compliance with global standards and best practices through the use of thousands of security control requirements.

AWS Compliance Assurance Programs

In order to help customers establish, operate, and leverage the AWS security control environment, AWS has developed a security assurance program that uses global privacy and data protection best practices. These security protections and control processes are independently validated by multiple third-party independent assessments. The following are of particular importance to Hong Kong AIs:

ISO 27001 –
ISO 27001 is a security management standard that specifies security management best practices and comprehensive security controls following the ISO 27002 best practice guidance. The basis of this certification is the development and implementation of a rigorous security program, which includes the development and implementation of an Information Security Management System that defines how AWS perpetually manages security in a holistic, comprehensive manner. For more information, or to download the AWS ISO 27001 certification, see the ISO 27001 Compliance webpage.

ISO 27017 – ISO 27017 provides guidance on the information security aspects of cloud computing, recommending the implementation of cloud-specific information security controls that supplement the guidance of the ISO 27002 and ISO 27001 standards. This code of practice provides additional security controls implementation guidance specific to cloud service providers. For more information, or to download the AWS ISO 27017 certification, see the ISO 27017 Compliance webpage.

ISO 27018 – ISO 27018 is a code of practice that focuses on protection of personal data in the cloud. It is based on ISO information security standard 27002 and provides implementation guidance on ISO 27002 controls applicable to public cloud Personally Identifiable Information (PII). It also provides a set of additional controls and associated guidance intended to address public cloud PII protection requirements not addressed by the existing ISO 27002 control set. For more information, or to download the AWS ISO 27018 certification, see the ISO 27018 Compliance webpage.

ISO 9001 – ISO 9001 outlines a process-oriented approach to documenting and reviewing the structure, responsibilities, and procedures required to achieve effective quality management within an organization. The key to ongoing certification under this standard is establishing, maintaining, and improving the organizational structure, responsibilities, procedures, processes, and resources in a manner where AWS products and services consistently satisfy ISO 9001 quality requirements. For more information, or to download the AWS ISO 9001 certification, see the ISO 9001 Compliance webpage.

PCI DSS Level 1 – The Payment Card Industry Data Security Standard (also known as PCI DSS) is a proprietary information security standard administered by the PCI Security Standards Council. PCI DSS applies to all entities that store, process, or transmit cardholder data (CHD) and/or sensitive authentication data (SAD), including merchants, processors, acquirers, issuers, and service providers. The PCI DSS is mandated by the card brands and administered by the Payment Card Industry Security Standards Council. For more information, or to request the PCI DSS Attestation of Compliance and Responsibility Summary, see the PCI DSS Compliance webpage.

SOC – AWS System & Organization Controls (SOC) Reports are independent third-party audit reports that demonstrate how AWS achieves key compliance controls and objectives. The purpose of these reports is to help customers and their auditors understand the AWS controls established to support operations and compliance. For more information, see the SOC Compliance webpage. There are three types of AWS SOC Reports:
• SOC 1: Provides information about the AWS control environment that may be relevant to a customer's internal controls over financial reporting, as well as information for assessment and opinion of the effectiveness of internal controls over financial reporting (ICOFR).
• SOC 2: Provides customers and their service users with a business need with an independent assessment of the AWS control environment relevant to system security, availability, and confidentiality.
• SOC 3: Provides customers and their service users with a business need with an independent assessment of the AWS control environment relevant to
system security, availability, and confidentiality, without disclosing AWS internal information.

By tying together governance-focused, audit-friendly service features with such certifications, attestations, and audit standards, AWS Compliance enablers build on traditional programs, helping customers to establish and operate in an AWS security control environment. For more information about other AWS certifications and attestations, see AWS Compliance Programs.

AWS Artifact

Customers can review and download reports and details about more than 2,600 security controls by using AWS Artifact, the automated compliance reporting tool available in the AWS Management Console. The AWS Artifact portal provides on-demand access to AWS's security and compliance documents, including SOC reports, PCI reports, and certifications from accreditation bodies across geographies and compliance verticals.

AWS Regions

The AWS Cloud infrastructure is built around AWS Regions and Availability Zones. An AWS Region is a physical location in the world that is made up of multiple Availability Zones. Availability Zones consist of one or more discrete data centers that are housed in separate facilities, each with redundant power, networking, and connectivity. These Availability Zones offer customers the ability to operate production applications and databases at higher availability, fault tolerance, and scalability than would be possible from a single data center. For current information on AWS Regions and Availability Zones, see https://aws.amazon.com/about-aws/global-infrastructure/.

Hong Kong Insurance Authority Guideline on Outsourcing (GL14)

The Hong Kong Insurance Authority Guideline on Outsourcing (GL14) provides guidance and recommendations on prudent risk management practices for outsourcing, including the use of cloud services by AIs. AIs that use cloud services are expected to carry out due diligence, evaluate and address risks, and enter into appropriate outsourcing agreements. Section 5 of the GL14 states that the AI's materiality and risk assessments should include considerations such as a determination of the importance and criticality of the services to be outsourced, and the impact on the AI's risk profile (in respect of financial, operational, legal, and reputational risks, and potential losses to customers) if the outsourced service is disrupted or falls short of acceptable standards. AIs should be able to demonstrate their observance of the guidelines as required by the IA. A full analysis of the GL14 is beyond the scope of this document. However, the following sections address the considerations in the GL14 that most frequently arise in interactions with AIs.

Prior Notification of Material Outsourcing

Under Section 6.1 of the GL14, an AI is required to notify the IA when the AI is planning to enter into a new material outsourcing arrangement or significantly vary an existing one. The notification includes the following requirements:
• Unless otherwise justifiable by the AI, the notification should be made at least 3 months before the day on which the new outsourcing arrangement is proposed to be entered into, or the existing arrangement is proposed to be varied significantly.
• A detailed description of the proposed outsourcing arrangement to be entered into, or the significant proposed change.
• Sufficient information to satisfy the IA that the AI has taken into account and properly addressed all of the essential issues set out in Section 5 of the GL14.

Outsourcing Policy

Section 5.8 of the GL14 sets out a list of factors that should be evaluated in the context of service provider due diligence when an AI is considering an outsourcing arrangement, including the use of cloud services. The
following table includes considerations for each component of Section 5.8.

Due Diligence Requirement – Customer Considerations

(a) Reputation, experience, and quality of service – Since 2006, AWS has provided flexible, scalable, and secure IT infrastructure to businesses of all sizes around the world. AWS continues to grow and scale, allowing us to provide new services that help millions of active customers.

(b) Financial soundness, in particular the ability to continue to provide the expected level of service – The financial statements of Amazon.com, Inc. include AWS's sales and income, permitting assessment of its financial position and ability to service its debts and/or liabilities. These financial statements are available from the SEC or at Amazon's Investor Relations website.

(c) Managerial skills, technical and operational expertise, and competence, in particular the ability to deal with disruptions in business continuity – AWS management has developed a strategic business plan that includes risk identification and the implementation of controls to mitigate or manage risks. AWS management re-evaluates the strategic business plan at least biannually. This process requires management to identify risks within its areas of responsibility and to implement appropriate measures designed to address those risks. The AWS Cloud operates a global infrastructure with multiple Availability Zones within multiple geographic AWS Regions around the world. For more information, see AWS Global Infrastructure. AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity, and availability of customers' systems and data.
Maintaining customer trust and confidence is of the utmost importance to AWS. AWS performs a continuous risk assessment process to identify, evaluate, and mitigate risks across the company. The process involves developing and implementing risk treatment plans to mitigate risks as necessary. The AWS risk management team monitors and escalates risks on a continuous basis, performing risk assessments on newly implemented controls at least every six months.

(d) Any license, registration, permission, or authorization required by law to perform the outsourced service – While Hong Kong does not have specific licensing or certification requirements for operating cloud services, AWS has multiple attestations for secure and compliant operation of its services. Globally, these include certification to ISO 27017 (guidelines for information security controls applicable to the provision and use of cloud services) and ISO 27018 (code of practice for protection of personally identifiable information (PII) in public clouds). For more information about our assurance programs, see AWS Assurance Programs.

(e) Extent of reliance on sub-contractors and effectiveness in monitoring the work of sub-contractors – AWS creates and maintains written agreements with third parties (for example, contractors or vendors) in accordance with the work or service to be provided, and implements appropriate relationship management mechanisms in line with their relationship to the business.

(f) Compatibility with the insurer's corporate culture and future development strategies – AWS maintains a systematic approach to planning and developing new services for the AWS environment, to ensure that the quality and security requirements are met with each release. The AWS strategy for the design and development of services is to clearly define services in terms of customer use cases, service performance, marketing and distribution requirements, production and testing, and legal and regulatory requirements.

(g) Familiarity with the insurance industry and capacity to keep pace with innovation in the market – For a list of case studies from financial services customers that have deployed applications on the AWS Cloud, see Financial Services Customer Stories. For a list of financial services cloud solutions provided by AWS, see Financial Services Cloud Solutions. The AWS Cloud platform expands daily; for a list of the latest AWS Cloud services and news, see What's New with AWS.

Outsourcing Agreement

An outsourcing agreement should be undertaken in the form of a legally binding written agreement. Section 5.10 of the Guideline on Outsourcing (GL14) clarifies the matters that an AI should consider when entering into an outsourcing arrangement with a service provider, including performance standards, certain reporting or notification requirements, and contingency plans. AWS customers may have the option to enroll in an Enterprise Agreement with AWS. Enterprise Agreements give customers the option to tailor agreements that best suit their organization's needs. For more information about AWS Enterprise Agreements, contact your AWS representative.

Information Confidentiality

Under Sections 5.12, 5.13, and 5.14 of the Guideline on Outsourcing (GL14), AIs need to ensure that the outsourcing arrangements comply with relevant laws and statutory requirements on customer confidentiality. The following table includes considerations for Sections 5.12, 5.13, and 5.14.

5.12 – The insurer should ensure that it and the service provider have proper safeguards in place to protect the integrity and confidentiality of the insurer's information and customer data.

Data Protection – You
choose how your data is secured. AWS offers you strong encryption for your data in transit or at rest, and AWS provides you with the option to manage your own encryption keys. If you want to tokenize data before it leaves your organization, you can achieve this through a number of AWS partners that provide this service.

Data Integrity – For access and system monitoring, AWS Config provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. Config rules enable you to create rules that automatically check the configuration of AWS resources recorded by AWS Config. When your resources are created, updated, or deleted, AWS Config streams these configuration changes to Amazon Simple Notification Service (Amazon SNS), which notifies you of all configuration changes. AWS Config represents relationships between resources, so that you can assess how a change to one resource might impact other resources.

Data Segregation – Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.

Access Rights – AWS provides a number of ways for you to identify users and securely access your AWS account. A complete list of credentials supported by AWS can be found in the AWS Management Console by choosing your user name in the navigation bar and then choosing My Security Credentials. AWS also provides additional security options that enable you to further protect your AWS account and control access using the following: AWS Identity and Access Management (IAM), key management and rotation, temporary security credentials, and multi-factor authentication (MFA).

5.13 – An authorized insurer should take into account any legal or contractual obligation to notify customers of the outsourcing arrangement and circumstances under which their data may be disclosed or lost. In the event of the termination of the outsourcing agreement, the insurer should ensure that all customer data are either retrieved from the service provider or destroyed.

AWS provides you with the ability to delete your data. Because you retain control and ownership of your data, it is your responsibility to manage data retention to your own requirements. If you decide to leave AWS, you can manage access to your data and AWS services and resources, including the ability to import and export data. AWS provides services such as AWS Import/Export to transfer large amounts of data into and out of AWS using physical storage appliances. For more information, see Cloud Storage with AWS. Additionally, AWS offers AWS Database Migration Service, a web service that you can use to migrate a database from an AWS service to an on-premises database.

In alignment with ISO 27001 standards, when a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent your organization's data from being exposed to unauthorized individuals. AWS uses the techniques detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual") or NIST 800-88 ("Guidelines for Media Sanitization") to destroy data as part of the decommissioning process. If a hardware device is unable to be decommissioned using these procedures, the device will be degaussed or physically destroyed in accordance with industry-standard practices. For more information, see ISO 27001 standards, Annex A, domain 8. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. For additional details
see AWS Cloud Security. Also see Section 7.3 of the Customer Agreement, which is available at AWS Customer Agreement.

5.14 – An authorized insurer should notify the IA forthwith of any unauthorized access or breach of confidentiality by the service provider or its sub-contractor that affects the insurer or its customers.

AWS employees are trained on how to recognize suspected security incidents and where to report them. When appropriate, incidents are reported to relevant authorities. AWS maintains the AWS Security Bulletins webpage, located at https://aws.amazon.com/security/security-bulletins, to notify customers of security and privacy events affecting AWS services. Customers can subscribe to the Security Bulletin RSS feed to keep abreast of security announcements on the Security Bulletins webpage. The customer support team maintains a Service Health Dashboard webpage, located at http://status.aws.amazon.com/, to alert customers to any broadly impacting availability issues.

Customers are responsible for their security in the cloud. It is important to note that when using AWS services, customers maintain control over their content and are responsible for managing critical content security requirements, including who has access to their content and how those access rights are granted, managed, and revoked. AWS customers should consider implementation of the following best practices to protect against and detect security breaches:
• Use encryption to secure customer data.
• Configure the AWS services to keep customer data secure. AWS provides customers with information on how to secure their resources within the AWS service's documentation at http://docs.aws.amazon.com/.
• Implement least-privilege permissions for access to your resources and customer data.
• Use monitoring tools like Amazon CloudWatch to track when
customer data is accessed and by whom Monitoring and Control Under Section 515 of the Guideline on Outsourcing (GL14) AIs should ensure that they have suf ficient and appropriate resources to monitor and control outsourcing arrangements at all times Section 516 further sets out that once an AI implements an outsourcing arrangement it should regularly review the effectiveness and adequacy of its controls i n monitoring the performance of the service provider AWS has implemented a formal documented incident response policy and program this can be reviewed in the SOC 2 report via AWS Artifact You can also see security Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 13 notifications on the AWS Security Bulletins website AWS provides you with various tools you can use to monitor your services including those already noted and others you can find on the AWS Marketplace Contingency Planning Under Sections 517 and 518 of the Guideline on Outsourcing (GL14) if an AI chooses to outsource service to a service provider they should put in place a contingency plan to ensure that the AI’s busine ss won’t be disrupted as a result of undesired contingencies of the service provider such as system failures The AI should also ensure that the service provider has its own contingency plan that covers daily operational and systems problems The AI shoul d have an adequate understanding of the service provider's contingency plan and consider the implications for its own contingency planning in the event that the outsourced service is interrupted due to undesired contingencies of the service provider AWS a nd regulated AIs share a common interest in maintaining operational resilience ie the ability to provide continuous service despite disruption Continuity of service especially for critical economic functions is a key prerequisite for financial stabi lity For more information about AWS 
operational resilience approaches, see the AWS whitepaper Amazon Web Services' Approach to Operational Resilience in the Financial Sector & Beyond.

The AWS Business Continuity Plan details the process that AWS follows in the case of an outage, from detection to deactivation. This plan has been developed to recover and reconstitute AWS using a three-phased approach: Activation and Notification Phase, Recovery Phase, and Reconstitution Phase. This approach ensures that AWS performs system recovery and reconstitution efforts in a methodical sequence, maximizing the effectiveness of the recovery and reconstitution efforts and minimizing system outage time due to errors and omissions. For more information, see the AWS whitepaper Amazon Web Services: Overview of Security Processes and the SOC 2 report in the AWS Artifact console.

AWS provides you with the capability to implement a robust continuity plan, including frequent server instance backups, data redundancy replication, and the flexibility to place instances and store data within multiple geographic Regions, as well as across multiple Availability Zones within each Region. For more information about disaster recovery approaches, see Disaster Recovery.

Hong Kong Insurance Authority Guideline on the Use of Internet for Insurance Activities (GL8)

The Hong Kong Insurance Authority Guideline on the Use of Internet for Insurance Activities (GL8) aims to draw attention to the special considerations that AIs (and other groups regulated by the IA) need to be aware of when engaging in internet-based insurance activities. Section 5.1, items (a) to (g), of the Guideline on the Use of Internet for Insurance Activities (GL8) sets out a series of requirements regarding information security, confidentiality, integrity, data protection, payment systems security, and related concerns for AIs to address when
carrying out internet insurance activities. AIs should take all practicable steps to ensure the following:

Requirement (a): A comprehensive set of security policies and measures that keep up with the advancement in internet security technologies shall be in place.

Customer considerations: AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity, and availability of your systems and data. Maintaining customer trust and confidence is of the utmost importance to AWS. AWS works to comply with applicable federal, state, and local laws, statutes, ordinances, and regulations concerning security, privacy, and data protection of AWS services, in order to minimize the risk of accidental or unauthorized access or disclosure of customer data.

Requirement (b): Mechanisms shall be in place to maintain the integrity of data stored in the system hardware, whilst in transit, and as displayed on the website.

Customer considerations: AWS is designed to protect the confidentiality and integrity of transmitted data through the comparison of a cryptographic hash of data transmitted. This is done to help ensure that the message is not corrupted or altered in transit. Data that has been altered or corrupted in transit is immediately rejected. AWS provides many methods for you to securely handle your data: AWS enables you to open a secure, encrypted channel to AWS servers using HTTPS (TLS/SSL). Amazon S3 provides a mechanism that enables you to use MD5 checksums to validate that data sent to AWS is bitwise identical to what is received, and that data sent by Amazon S3 is identical to what is received by the user. When you choose to provide your own keys for encryption and decryption of Amazon S3 objects
(S3 SSE-C), Amazon S3 does not store the encryption key that you provide. Amazon S3 generates and stores a one-way salted HMAC of your encryption key, and that salted HMAC value is not logged. Connections between your applications and Amazon RDS MySQL DB instances can be encrypted using TLS/SSL. Amazon RDS generates a TLS/SSL certificate for each database instance, which can be used to establish an encrypted connection using the default MySQL client. When an encrypted connection is established, data transferred between the database instance and your application is encrypted during transfer. If you require data to be encrypted while at rest in the database, your application must manage the encryption and decryption of data. Additionally, you can set up controls to have your database instances only accept encrypted connections for specific user accounts.

Data is encrypted with 256-bit keys when you enable AWS KMS to encrypt Amazon S3 objects, Amazon EBS volumes, Amazon RDS DB instances, Amazon Redshift data blocks, AWS CloudTrail log files, Amazon SES messages, Amazon WorkSpaces volumes, Amazon WorkMail messages, and Amazon EMR S3 storage. AWS offers you the ability to add an additional layer of security to data at rest in the cloud, providing scalable and efficient encryption features. This includes:
• Data encryption capabilities available in AWS storage and database services, such as Amazon EBS, Amazon S3, Amazon Glacier, Amazon RDS for Oracle Database, Amazon RDS for SQL Server, and Amazon Redshift
• Flexible key management options, including AWS Key Management Service (AWS KMS), that allow you to choose whether to have AWS manage the encryption keys or enable you to keep complete control over your keys
• Dedicated hardware-based cryptographic key storage using AWS CloudHSM, which enables you to satisfy compliance
requirements. In addition, AWS provides APIs that you can use to integrate encryption and data protection with any of the services you develop or deploy in the AWS Cloud.

Requirement (c): Appropriate backup procedures for the database and application software shall be implemented.

Customer considerations: AWS maintains a retention policy applicable to AWS internal data and system components in order to continue operations of the AWS business and services. Critical AWS system components, including audit evidence and logging records, are replicated across multiple Availability Zones, and backups are maintained and monitored. You retain control and ownership of your data. When you store data in a specific Region, it is not replicated outside that Region; it is your responsibility to replicate data across Regions if your business needs require this capability.

Amazon S3 supports data replication and versioning instead of automatic backups. You can, however, back up data stored in Amazon S3 to other AWS Regions or to on-premises backup systems. Amazon S3 replicates each object across all Availability Zones within the respective Region. Replication can provide data and service availability in the case of system failure, but provides no protection against accidental deletion or data integrity compromise; it replicates changes across all Availability Zones where it stores copies. Amazon S3 offers standard redundancy and reduced redundancy options, which have different durability objectives and price points.

Each Amazon EBS volume is stored as a file, and AWS creates two copies of the EBS volume for redundancy. Both copies reside in the same Availability Zone, however, so while Amazon EBS replication can survive hardware failure, it is not suitable as an availability tool for prolonged outages or disaster recovery purposes. We recommend that you replicate data
at the application level or create backups. Amazon EBS provides snapshots that capture the data stored on an Amazon EBS volume at a specific point in time. If the volume is corrupt (for example, due to system failure) or data from it is deleted, you can restore the volume from snapshots. Amazon EBS snapshots are AWS objects to which IAM users, groups, and roles can be assigned permissions, so that only authorized users can access Amazon EBS backups.

Requirement (d): A client's personal information (including password, if any) shall be protected against loss, or unauthorized access, use, modification, or disclosure, etc.

Customer considerations: You control your data. With AWS, you can do the following:
• Determine where your data is stored, including the type of storage and geographic Region of that storage.
• Choose the secured state of your data. We offer you strong encryption for your content in transit or at rest, and we provide you with the option to manage your own encryption keys.
• Manage access to your data and AWS services and resources through users, groups, permissions, and credentials that you control.

Requirement (e): A client's electronic signature, if any, shall be verified.

Customer considerations: Amazon Partner Network (APN) Technology Partners provide software solutions (including electronic signature solutions) that are either hosted on or integrated with the AWS Cloud platform. The AWS Partner Solutions Finder provides you with a centralized place to search, discover, and connect with trusted APN Technology and Consulting Partners based on your business needs. For more information, see AWS Partner Solutions Finder.

Requirement (f): The electronic payment system (e.g., credit card payment system) shall be
secure.

Customer considerations: AWS is a Payment Card Industry (PCI) compliant cloud service provider, having been PCI DSS certified since 2010. The most recent assessment validated that AWS successfully completed the PCI DSS 3.2 Level 1 Service Provider assessment and was found to be compliant for all the services outlined on AWS Services in Scope by Compliance Program. The AWS PCI Compliance Package, which is available through AWS Artifact, includes the AWS PCI DSS 3.2 Attestation of Compliance (AOC) and the AWS 2016 PCI DSS 3.2 Responsibility Summary.

PCI compliance on AWS is a shared responsibility. In accordance with the shared responsibility model, all entities must manage their own PCI DSS compliance certification. While your organization's QSA can rely on the AWS Attestation of Compliance (AOC) for the portion of the PCI cardholder environment deployed in AWS, you are still required to satisfy all other PCI DSS requirements. The AWS 2016 PCI DSS 3.2 Responsibility Summary provides you with guidance on what you are responsible for. For more information about AWS PCI DSS compliance, see PCI DSS Level 1 Service Provider.

Requirement (g): A valid insurance contract shall not be cancelled accidentally, maliciously, or consequent upon careless computer handling.

Customer considerations: Your data is validated for integrity, and corrupted or tampered data is not written to storage. Amazon S3 utilizes checksums internally to confirm the continued integrity of content in transit within the system and at rest. Amazon S3 provides a facility for you to send checksums along with data transmitted to the service. The service validates the checksum upon receipt of the data to determine that no corruption occurred in transit. Regardless of whether a checksum is sent with an object to Amazon S3, the service utilizes checksums internally to confirm the continued
integrity of content in transit within the system and at rest. When disk corruption or device failure is detected, the system automatically attempts to restore normal levels of object storage redundancy. External access to content stored in Amazon S3 is logged, and the logs are retained for at least 90 days, including relevant access request information such as the accessor IP address, object, and operation.

Next Steps

Each organization's cloud adoption journey is unique. In order to successfully execute your adoption, you need to understand your organization's current state, the target state, and the transition required to achieve the target state. Knowing this will help you set goals and create work streams that will enable staff to thrive in the cloud.

The AWS Cloud Adoption Framework (AWS CAF) offers structure to help organizations develop an efficient and effective plan for their cloud adoption journey. Guidance and best practices prescribed within the framework can help you build a comprehensive approach to cloud computing across your organization, throughout your IT lifecycle. The AWS CAF breaks down the complicated process of planning into manageable areas of focus. Many organizations choose to apply the AWS CAF methodology with a facilitator-led workshop. To find out more about such workshops, please contact your AWS representative. Alternatively, AWS provides access to tools and resources for self-service application of the AWS CAF methodology at AWS Cloud Adoption Framework.

For AIs in Hong Kong, next steps typically also include the following:
• Contact your AWS representative to discuss how the AWS Partner Network and AWS Solution Architects, Professional Services teams, and Training instructors can assist with your cloud adoption journey. If you do not have an AWS representative, contact us at https://aws.amazon.com/contact-us/.
• Obtain
and review a copy of the latest AWS SOC 1 & 2 reports, PCI DSS Attestation of Compliance and Responsibility Summary, and ISO 27001 certification from the AWS Artifact portal (accessible via the AWS Management Console).
• Consider the relevance and application of the CIS AWS Foundations Benchmark, as appropriate for your cloud journey and use cases. These industry-accepted best practices published by the Center for Internet Security go beyond the high-level security guidance already available, providing AWS users with clear, step-by-step implementation and assessment recommendations.
• Dive deeper on other governance and risk management practices as necessary, in light of your due diligence and risk assessment, using the tools and resources referenced throughout this whitepaper and in the Additional Resources section below.
• Speak to your AWS representative about an AWS Enterprise Agreement.

Additional Resources

For additional information, see:
• AWS Cloud Security Whitepapers & Guides
• AWS Compliance
• AWS Cloud Security Services
• AWS Best Practices for DDoS Resiliency
• AWS Security Checklist
• Cloud Adoption Framework Security Perspective
• AWS Security Best Practices
• AWS Risk & Compliance
• Using AWS in the Context of Hong Kong Privacy Considerations

Document Revisions

April 2020 – Updates to Additional Resources section
February 2020 – Revision and updates
October 2017 – First publication

AWS User Guide to the Hong Kong Monetary Authority on Outsourcing and General Principles for Technology Risk Management Supervisory Policy Manuals

April 2020

Notices

Customers are responsible for making their own independent assessment of the information in this document. This
document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Overview
Security and the Shared Responsibility Model
Security IN the Cloud
Security OF the Cloud
AWS Compliance Assurance Programs
AWS Artifact
AWS Regions
HKMA Supervisory Policy Manual on Outsourcing (SA-2)
Outsourcing Notification
Assessment of Service Providers
Outsourcing Agreement
Information Confidentiality
Monitoring and Control
Contingency Planning
Access to Outsourced Data
HKMA Supervisory Policy Manual on General Principles for Technology Risk Management (TM-G-1)
Next Steps
Additional Resources
Document Revisions

About this Guide

This document provides information to assist Authorized Institutions (AIs) in Hong Kong regulated by the Hong Kong Monetary Authority (HKMA) as they accelerate their use of Amazon Web Services' (AWS) Cloud services.

Overview

The Hong Kong Monetary Authority (HKMA) issues guidelines to provide the Hong Kong banking industry with practical guidance to facilitate compliance with regulatory requirements. The guidelines relevant to the use of outsourced services instruct Authorized Institutions (AIs) to perform risk assessments, perform due diligence reviews of service providers, ensure controls
are in place to preserve information confidentiality, have sufficient monitoring and control oversight of the outsourcing arrangement, and establish contingency arrangements. The following sections provide considerations for AIs as they assess their responsibilities with regard to the following guidelines:
• Supervisory Policy Manual on Outsourcing (SA-2) – This Supervisory Policy Manual sets out the HKMA's supervisory approach to outsourcing and the major points which the HKMA recommends AIs to address when outsourcing their activities, including the use of cloud services.
• Supervisory Policy Manual on General Principles for Technology Risk Management (TM-G-1) – This Supervisory Policy Manual provides AIs with guidance on general principles which AIs are expected to consider in managing technology-related risks.

Taken together, AIs can use this information to perform their due diligence and assess how to implement an appropriate information security, risk management, and governance program for their use of AWS. For a list of the guidelines, see the Regulatory Resources – Regulatory Guides section on the HKMA website.

Security and the Shared Responsibility Model

Cloud security is a shared responsibility. At AWS, we maintain a high bar for security OF the cloud through robust governance, automation, and testing, and we validate our approach through compliance with global and regional regulatory requirements and best practices. Security IN the cloud is the responsibility of the customer. What this means is that customers retain control of the security program they choose to implement to protect their own content, platform, applications, systems, and networks. Customers should carefully consider how they will manage the services they choose, as their responsibilities vary depending on the services they use, the integration of those services into their IT environments, and applicable laws and regulations. We recommend that customers think about their security responsibilities on a service-by-service basis, because the extent of their responsibilities may differ between services.

Figure 1 – Shared Responsibility Model

Security IN the Cloud

Customers are responsible for their security in the cloud. For services such as Amazon Elastic Compute Cloud (Amazon EC2), the customer is responsible for managing the guest operating system (including installing updates and security patches) and other associated application software, as well as the configuration of the AWS-provided security group firewall. Customers can also use managed services such as database, directory, and web application firewall services, which provide customers the resources they need to perform specific tasks without having to launch and maintain virtual machines. For example, a customer can launch an Amazon Aurora database, which Amazon Relational Database Service (Amazon RDS) manages to handle tasks such as provisioning, patching, backup, recovery, failure detection, and repair.

It is important to note that when using AWS services, customers maintain control over their content and are responsible for managing critical content security requirements, including:
• The content that they choose to store on AWS
• The AWS services that are used with the content
• The country where their content is stored
• The format and structure of their content and whether it is masked, anonymized, or encrypted
• How their content is encrypted and where the keys are stored
• Who has access to their content and how those access rights are granted, managed, and revoked

Because customers, rather than AWS, control these important factors, customers retain responsibility for their choices. Customers are responsible for the security of the content
they put on AWS, or that they connect to their AWS infrastructure, such as the guest operating system, applications on their compute instances, and content stored and processed in AWS storage platforms, databases, or other services.

Security OF the Cloud

For many services, such as Amazon EC2, AWS operates, manages, and controls the IT components, from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate. In order to provide assurance about security of the AWS Cloud, we continuously audit our environment. AWS infrastructure and services are validated against multiple compliance standards and industry certifications across geographies and industries. Customers can use the AWS compliance certifications to validate the implementation and effectiveness of AWS security controls, including internationally recognized security best practices and certifications. The AWS compliance program is based on the following actions:
• Validate that AWS services and facilities across the globe maintain a ubiquitous control environment that is operating effectively. The AWS control environment encompasses the people, processes, and technology necessary to establish and maintain an environment that supports the operating effectiveness of the AWS control framework. AWS has integrated applicable cloud-specific controls identified by leading cloud computing industry bodies into the AWS control framework. AWS monitors these industry groups to identify leading practices that can be implemented, and to better assist customers with managing their control environment.
• Demonstrate the AWS compliance posture to help customers verify compliance with industry and government requirements. AWS engages with external certifying bodies and independent auditors to provide customers with information regarding
the policies, processes, and controls established and operated by AWS. Customers can use this information to perform their control evaluation and verification procedures, as required under the applicable compliance standard.
• Monitor that AWS maintains compliance with global standards and best practices through the use of thousands of security control requirements.

AWS Compliance Assurance Programs

In order to help customers establish, operate, and leverage the AWS security control environment, AWS has developed a security assurance program that uses global privacy and data protection best practices. These security protections and control processes are independently validated by multiple third-party assessments. The following are of particular importance to Hong Kong AIs:

ISO 27001 – ISO 27001 is a security management standard that specifies security management best practices and comprehensive security controls following the ISO 27002 best practice guidance. The basis of this certification is the development and implementation of a rigorous security program, which includes the development and implementation of an Information Security Management System that defines how AWS perpetually manages security in a holistic, comprehensive manner. For more information, or to download the AWS ISO 27001 certification, see the ISO 27001 Compliance webpage.

ISO 27017 – ISO 27017 provides guidance on the information security aspects of cloud computing, recommending the implementation of cloud-specific information security controls that supplement the guidance of the ISO 27002 and ISO 27001 standards. This code of practice provides additional security controls implementation guidance specific to cloud service providers. For more information, or to download the AWS ISO 27017 certification, see the ISO 27017 Compliance webpage.

ISO 27018 – ISO 27018 is a code of practice that focuses on protection of personal data in the cloud. It is based on ISO information security standard 27002 and
provides implementation guidance on ISO 27002 controls applicable to public cloud Personally Identifiable Information (PII). It also provides a set of additional controls and associated guidance intended to address public cloud PII protection requirements not addressed by the existing ISO 27002 control set. For more information, or to download the AWS ISO 27018 certification, see the ISO 27018 Compliance webpage.

ISO 9001 – ISO 9001 outlines a process-oriented approach to documenting and reviewing the structure, responsibilities, and procedures required to achieve effective quality management within an organization. The key to ongoing certification under this standard is establishing, maintaining, and improving the organizational structure, responsibilities, procedures, processes, and resources in a manner where AWS products and services consistently satisfy ISO 9001 quality requirements. For more information, or to download the AWS ISO 9001 certification, see the ISO 9001 Compliance webpage.

PCI DSS Level 1 – The Payment Card Industry Data Security Standard (also known as PCI DSS) is a proprietary information security standard administered by the PCI Security Standards Council. PCI DSS applies to all entities that store, process, or transmit cardholder data (CHD) and/or sensitive authentication data (SAD), including merchants, processors, acquirers, issuers, and service providers. The PCI DSS is mandated by the card brands and administered by the Payment Card Industry Security Standards Council. For more information, or to request the PCI DSS Attestation of Compliance and Responsibility Summary, see the PCI DSS Compliance webpage.

SOC – AWS System & Organization Controls (SOC) Reports are independent third-party audit reports that demonstrate how AWS achieves key compliance controls and objectives. The purpose of these reports is to help customers
and their auditors understand the AWS controls established to support operations and compliance. For more information, see the SOC Compliance webpage. There are three types of AWS SOC Reports:
• SOC 1: Provides information about the AWS control environment that may be relevant to a customer's internal controls over financial reporting, as well as information for assessment and opinion of the effectiveness of internal controls over financial reporting (ICOFR).
• SOC 2: Provides customers and their service users that have a business need with an independent assessment of the AWS control environment relevant to system security, availability, and confidentiality.
• SOC 3: Provides customers and their service users that have a business need with an independent assessment of the AWS control environment relevant to system security, availability, and confidentiality, without disclosing AWS internal information.

By tying together governance-focused, audit-friendly service features with such certifications, attestations, and audit standards, AWS Compliance enablers build on traditional programs, helping customers to establish and operate in an AWS security control environment. For more information about other AWS certifications and attestations, see AWS Compliance Programs.

AWS Artifact

Customers can review and download reports and details about more than 2,600 security controls by using AWS Artifact, the automated compliance reporting tool available in the AWS Management Console. The AWS Artifact portal provides on-demand access to AWS's security and compliance documents, including SOC reports, PCI reports, and certifications from accreditation bodies across geographies and compliance verticals.

AWS Regions

The AWS Cloud infrastructure is built around AWS Regions and Availability Zones. An AWS Region is a physical location in the world that is made up of
multiple Availability Zones. Availability Zones consist of one or more discrete data centers that are housed in separate facilities, each with redundant power, networking, and connectivity. These Availability Zones offer customers the ability to operate production applications and databases at higher availability, fault tolerance, and scalability than would be possible from a single data center. For current information on AWS Regions and Availability Zones, see https://aws.amazon.com/about-aws/global-infrastructure/.

HKMA Supervisory Policy Manual on Outsourcing (SA-2)

The HKMA Supervisory Policy Manual on Outsourcing (SA-2) provides guidance and recommendations on prudent risk management practices for outsourcing, including use of cloud services by AIs. AIs that use the cloud are expected to carry out due diligence, evaluate and address risks, and enter into appropriate outsourcing agreements. Section 2.2 of SA-2 states that the AI's risk assessment should include a determination of the importance and criticality of the services to be outsourced, the cost and benefit of the outsourcing, and the impact on the AI's risk profile (in respect of operational, legal, and reputation risks) of the outsourcing. AIs should be able to demonstrate their observance of the guidelines to the HKMA through the submission of the HKMA Risk Assessment Form on Technology-related Outsourcing (including Cloud Computing) six weeks before the target implementation date. A full analysis of SA-2 is beyond the scope of this document; however, the following sections address the considerations in SA-2 that most frequently arise in interactions with AIs.

Outsourcing Notification

Under Section 1.3.2 of SA-2, AIs are required to notify the HKMA via a Notification Letter prior to implementing solutions which leverage public cloud services in respect of banking
related business areas, including in cases where the AI is outsourcing a banking activity to a service provider who is providing services using the public cloud. In general, a notification letter should be submitted to the HKMA three months prior to the commencement of the outsourcing activity. The AI must affirm specific compliance with controls related to outsourcing and cloud operation, together with general compliance with other relevant HKMA guidelines, such as the Supervisory Policy Manual on General Principles for Technology Risk Management (TM-G-1). The HKMA expects AIs to fully comply with all relevant regulatory control requirements prior to launching any new outsourced services, including when deploying on the AWS Cloud.

Assessment of Service Providers

Sections 2.1, 2.2, and 2.3 of SA-2 set out a list of topics that should be evaluated in the course of due diligence when an AI is considering an outsourcing arrangement, including use of cloud services. The following table includes considerations for each component of Section 2.3.1 of SA-2.

Due diligence requirement: Financial soundness.
Customer considerations: The financial statements of Amazon.com, Inc. include AWS's sales and income, permitting assessment of its financial position and ability to service its debts and/or liabilities. These financial statements are available from the SEC or at Amazon's Investor Relations website.

Due diligence requirement: Reputation.
Customer considerations: Since 2006, AWS has provided flexible, scalable, and secure IT infrastructure to businesses of all sizes around the world. AWS continues to grow and scale, allowing us to provide new services that help millions of active customers.

Due diligence requirement: Managerial skills.
Customer considerations: AWS management has developed a strategic business plan which includes risk identification and the implementation of controls to mitigate or manage risks.
AWS management re-evaluates the strategic business plan at least biannually. This process requires management to identify risks within its areas of responsibility and to implement appropriate measures designed to address those risks.

Technical capabilities, operational capability and capacity: The AWS Cloud operates a global infrastructure with multiple Availability Zones within multiple geographic AWS Regions around the world. For more information, see AWS Global Infrastructure. AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity, and availability of customers' systems and data. Maintaining customer trust and confidence is of the utmost importance to AWS. AWS performs a continuous risk assessment process to identify, evaluate, and mitigate risks across the company. The process involves developing and implementing risk treatment plans to mitigate risks as necessary. The AWS risk management team monitors and escalates risks on a continuous basis, performing risk assessments on newly implemented controls at least every six months.

Compatibility with the AI's corporate culture and future development strategies: AWS maintains a systematic approach to planning and developing new services for the AWS environment to ensure that the quality and security requirements are met with each release. The AWS strategy for the design and development of services is to clearly define services in terms of customer use cases, service performance, marketing and distribution requirements, production and testing, and legal and regulatory requirements.

Familiarity with the banking industry and capacity to keep pace with innovation in the market: For a list of case studies from financial services customers that have deployed applications on the AWS Cloud, see Financial Services Customer Stories. For a list of financial services cloud solutions provided by AWS, see Financial Services Cloud Solutions. The AWS Cloud platform expands daily; for a list of the latest AWS Cloud services and news, see What's New with AWS.

Outsourcing Agreement

Section 2.4 of the SA-2 clarifies that the type and level of services to be provided, and the contractual liabilities and obligations of the service provider, must be clearly set out in a service agreement between the AI and their service provider. The HKMA expects AIs to regularly review their outsourcing agreements. AWS customers may have the option to enroll in an Enterprise Agreement with AWS. Enterprise Agreements give customers the option to tailor agreements that best suit their organization's needs. For more information about AWS Enterprise Agreements, contact your AWS representative.

Information Confidentiality

Under Section 2.5 of the SA-2, AIs need to ensure that, as part of the outsourcing, they can continue to comply with local and regional data protection requirements. The following considerations address the relevant requirements.

Section 2.5.2: AIs should have controls in place to ensure that the requirements of customer data confidentiality are observed, and proper safeguards are established to protect the integrity and confidentiality of customer information.

Data Protection – You choose how your data is secured. AWS offers you strong encryption for your data in transit or at rest, and AWS provides you with the option to manage your own encryption keys. If you want to tokenize data before it leaves your organization, you can engage a number of AWS partners with relevant expertise.
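As a minimal illustration of the tokenization pattern described above, the following Python sketch (standard library only) shows one way sensitive values can be transformed before they leave an organization. It is not an AWS API or any partner product: the function name, secret key, and sample value are all hypothetical, and a keyed HMAC is used purely to demonstrate the idea of storing an opaque token, rather than the original value, in the cloud.

```python
import hashlib
import hmac

def tokenize(value: str, secret: bytes) -> str:
    """Map a sensitive value to an opaque, deterministic token.

    Illustrative only: production tokenization solutions typically keep a
    reversible token vault on premises; a keyed HMAC is one-way and is
    shown here simply to demonstrate transforming data before it leaves
    your organization.
    """
    return hmac.new(secret, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical secret held on premises, never sent to the cloud.
on_prem_key = b"example-secret-key"

# The token, rather than the sensitive value itself, is what would be
# stored or processed in AWS.
token = tokenize("example-account-number", on_prem_key)
print(token)
```

Because the mapping is deterministic for a given key, the same input always yields the same token, which preserves the ability to match records without exposing the underlying value.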
Data Integrity – For access and system monitoring, AWS Config provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. Config rules enable you to create rules that automatically check the configuration of AWS resources recorded by AWS Config. When your resources are created, updated, or deleted, AWS Config streams these configuration changes to Amazon Simple Notification Service (Amazon SNS), which notifies you of all configuration changes. AWS Config represents relationships between resources so that you can assess how a change to one resource might impact other resources.

Data Segregation – Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.

Access Rights – AWS provides a number of ways for you to identify users and securely access your AWS account. A complete list of credentials supported by AWS can be found in the AWS Management Console by choosing your user name in the navigation bar and then choosing My Security Credentials. AWS also provides additional security options that enable you to further protect your AWS account and control access, including AWS Identity and Access Management (IAM), key management and rotation, temporary security credentials, and multi-factor authentication (MFA).

Section 2.5.4: In the event of a termination of the outsourcing agreement for whatever reason, AIs should ensure that all customer data is either retrieved from the service provider or destroyed.

AWS provides you with the ability to delete your data. Because you retain control and ownership of your data, it is your responsibility to manage data retention to your own requirements. If you decide to leave AWS, you can manage access to your data and AWS services and resources, including the ability to import and export data. AWS provides services such as AWS Import/Export to transfer large amounts of data into and out of AWS using physical storage appliances. For more information, see Cloud Storage with AWS. Additionally, AWS offers AWS Database Migration Service, a web service that you can use to migrate a database from an AWS service to an on-premises database.

In alignment with ISO 27001 standards, when a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent your organization's data from being exposed to unauthorized individuals. AWS uses the techniques detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual") or NIST 800-88 ("Guidelines for Media Sanitization") to destroy data as part of the decommissioning process. If a hardware device is unable to be decommissioned using these procedures, the device will be degaussed or physically destroyed in accordance with industry-standard practices. For more information, see ISO 27001, Annex A, domain 8. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. For additional details, see AWS Cloud Security. Also see Section 7.3 of the Customer Agreement, which is available at AWS Customer Agreement.

Monitoring and Control

Under Section 2.6 of the SA-2, AIs need to ensure that they have sufficient and effective procedures for monitoring the performance of the service provider, the relationship with the service provider, and the risks associated with the outsourced activity. AWS has implemented a formal, documented incident response policy and program; this can be reviewed in the SOC 2 report via AWS Artifact. You can also see security notifications on the AWS Security Bulletins website. AWS provides you with various tools you can use to monitor your services, including those already noted and others you can find on the AWS Marketplace.

Contingency Planning

Under Section 2.7 of the SA-2, AIs should maintain contingency plans that take the following into consideration: the service provider's contingency plan, a breakdown in the systems of the service provider, and telecommunication problems in the host country. Section 2.7.2 of the SA-2 states that contingency arrangements in respect of daily operational and systems problems would normally be covered in the service provider's own contingency plan. AIs should ensure that they have an adequate understanding of their service provider's contingency plan, and consider the implications for their own contingency planning in the event that the outsourced service is interrupted.

AWS and regulated AIs share a common interest in maintaining operational resilience, i.e., the ability to provide continuous service despite disruption. Continuity of service, especially for critical economic functions, is a key prerequisite for financial stability. For more information about AWS operational resilience approaches, see the AWS whitepaper Amazon Web Services' Approach to Operational Resilience in the Financial Sector & Beyond.

The AWS Business Continuity plan details the process that AWS follows in the case of an outage, from detection to deactivation. This plan has been developed to recover and reconstitute AWS using a three-phased approach: Activation and Notification Phase, Recovery Phase, and Reconstitution Phase. This approach ensures that AWS performs system recovery and reconstitution efforts in a methodical sequence, maximizing the effectiveness of the recovery and reconstitution efforts and minimizing system outage time due to errors and omissions. For more information, see the AWS whitepaper Amazon Web Services: Overview of Security Processes and the SOC 2 report in the AWS Artifact console.

AWS provides you with the capability to implement a robust continuity plan, including frequent server instance backups, data redundancy, replication, and the flexibility to place instances and store data within multiple geographic Regions, as well as across multiple Availability Zones within each Region. For more information about disaster recovery approaches, see Disaster Recovery.

Access to Outsourced Data

The SA-2 clarifies that an AI's outsourcing arrangements should not interfere with the ability of the AI to effectively manage its business activities, or impede the HKMA in carrying out its supervisory functions and objectives.

You retain ownership and control of your data when using AWS services. You have complete control over which services you use and whom you empower to access your content and services, including what credentials will be required. You control how you configure your environments and secure your data, including whether you encrypt your data (at rest and in transit), what other security features and tools you use, and how you use them. AWS does not change your configuration settings; these settings are determined and controlled by you. You have complete freedom to design your security architecture to meet your compliance needs. This is a key difference from traditional hosting solutions, where the provider decides on the architecture. AWS enables and empowers you to decide when and how security measures will be implemented in the cloud, in accordance with your business needs. For example, if a higher availability architecture is required to protect your data, you may add
redundant systems, backups, locations, network uplinks, etc., to create a more resilient, high-availability architecture. If restricted access to your data is required, AWS enables you to implement system-level access rights management controls and data-level encryption. For more information, see Using AWS in the Context of Hong Kong Privacy Considerations.

You can validate the security controls in place within the AWS environment through AWS certifications and reports, including the AWS Service Organization Control (SOC) 1, 2, and 3 reports; ISO 27001, 27017, and 27018 certifications; and PCI DSS compliance reports. These reports and certifications are produced by independent third-party auditors and attest to the design and operating effectiveness of AWS security controls. For more information about the AWS approach to audit and inspection, please contact your AWS representative.

HKMA Supervisory Policy Manual on General Principles for Technology Risk Management (TM-G-1)

The HKMA Supervisory Policy Manual on General Principles for Technology Risk Management (TM-G-1) sets out risk management principles and best practice standards to guide AIs in meeting their legal obligations. The HKMA expects AIs to have an effective technology risk management framework in place to ensure the adequacy of IT controls and the quality of their computer systems.

AWS has produced a TM-G-1 Workbook that covers the six domains documented within the TM-G-1. For shared controls, where AWS is expected to provide information as part of the Shared Responsibility Model, AWS controls are mapped against the control requirements of the TM-G-1. The following shows the AWS response to guidelines Sections 2.1.1 and 3.3.2 of the TM-G-1.

Section 2.1.1 (Responsibility: Customer Specific): Achieving a consistent standard of sound practices for IT controls across an AI requires clear direction and commitment from the Board and senior management. In this connection, senior management, who may be assisted by a delegated subcommittee, is responsible for developing a set of IT control policies which establish the ground rules for IT controls. These policies should be formally approved by the Board or its designated committee and properly implemented among IT functions and business units. Customer considerations: Not applicable.

Section 3.3.2 (Responsibility: Shared): Proper segregation of duties within the security administration function, or other compensating controls (e.g., peer reviews), should be in place to mitigate the risk of unauthorized activities being performed by the security administration function. Customer considerations – Identity & Access Management: Segregation of Duties. Privileged access to AWS systems by AWS employees is allocated based on least privilege, approved by an authorized individual prior to access provisioning, and assigned a different user ID than that used for normal business use. Duties and areas of responsibility (for example, access request and approval; change management request and approval; change development, testing, and deployment; etc.) are segregated across different individuals to reduce opportunities for an unauthorized or unintentional modification or misuse of AWS systems. Customers retain the ability to manage segregation of duties of their AWS resources by using AWS Identity and Access Management (IAM). IAM enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.

You can get a copy of the TM-G-1 Workbook by accessing AWS Artifact within the AWS Management Console. To use the TM-G-1 Workbook, you should review the AWS responses and then
enrich them with your own organizational controls. Let's use the previous control statements as an example. Section 2.1.1 of the TM-G-1 discusses the sound practices for IT controls oversight by the AI's board of directors/senior management. This is a principle that would only apply to you, and is not specific to the cloud or particular applications. This control can only be fulfilled by you, the AI. In contrast, Section 3.3.2 of the TM-G-1 is a shared control. This control requires formal procedures for administering the access rights to system resources and application systems. This is a shared control because AWS administers the access rights to the system resources AWS uses to operate the cloud services, and you administer the system resources that you create using our services. The Workbook also positions you to more clearly consider whether and how to add supplementary technology risk controls that are specific to your line-of-business or application teams, or your particular needs.

Note that it is important to appreciate the implications of the shared security responsibility model and understand which party is responsible for a particular control. Where AWS is responsible, the AI should identify which of the AWS assurance reports, certifications, or attestations are used to establish or assess that the control is operating.

Next Steps

Each organization's cloud adoption journey is unique. In order to successfully execute your adoption, you need to understand your organization's current state, the target state, and the transition required to achieve the target state. Knowing this will help you set goals and create work streams that will enable staff to thrive in the cloud.

The AWS Cloud Adoption Framework (AWS CAF) offers structure to help organizations develop an efficient and effective plan for their cloud adoption journey. Guidance and best practices prescribed within the framework can help you build a comprehensive approach to cloud computing across your organization, throughout your IT lifecycle. The AWS CAF breaks down the complicated process of planning into manageable areas of focus. Many organizations choose to apply the AWS CAF methodology with a facilitator-led workshop. To find out more about such workshops, please contact your AWS representative. Alternatively, AWS provides access to tools and resources for self-service application of the AWS CAF methodology at AWS Cloud Adoption Framework.

For AIs in Hong Kong, next steps typically also include the following:

• Contact your AWS representative to discuss how the AWS Partner Network, as well as AWS Solution Architects, Professional Services teams, and Training instructors, can assist with your cloud adoption journey. If you do not have an AWS representative, please contact us at https://aws.amazon.com/contact-us/

• Obtain and review a copy of the latest AWS SOC 1 & 2 reports, PCI DSS Attestation of Compliance and Responsibility Summary, and ISO 27001 certification from the AWS Artifact portal (accessible via the AWS Management Console).

• Consider the relevance and application of the CIS AWS Foundations Benchmark as appropriate for your cloud journey and use cases. These industry-accepted best practices published by the Center for Internet Security go beyond the high-level security guidance already available, providing AWS users with clear, step-by-step implementation and assessment recommendations.

• Dive deeper on other governance and risk management practices as necessary, in light of your due diligence and risk assessment, using the tools and resources referenced throughout this whitepaper and in the Additional Resources section below.

• Speak with your AWS representative to learn more about
how AWS is helping Financial Services customers migrate their critical workloads to the cloud.

Additional Resources

For additional information, see:

• AWS Cloud Security Whitepapers & Guides
• AWS Compliance
• AWS Cloud Security Services
• AWS Best Practices for DDoS Resiliency
• AWS Security Checklist
• Cloud Adoption Framework Security Perspective
• AWS Security Best Practices
• AWS Risk & Compliance
• Using AWS in the Context of Hong Kong Privacy Considerations

Document Revisions

April 2020 – Updates to Additional Resources
February 2020 – Revision and updates
November 2017 – Style and content updates
August 2017 – First publication

AWS User Guide to Financial Services Regulations & Guidelines in Singapore
First Published July 2017; Updated January 3, 2022

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2022 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

About this guide
Security of the cloud
AWS compliance programs
AWS Artifact
AWS Regions
MAS Guidelines on Outsourcing
Assessment of service providers
Cloud computing
Outsourcing agreements
Audit and inspection
MAS Technology Risk Management Guidelines
Notice 655 on Cyber Hygiene
ABS Cloud Computing Implementation Guide 2.0
Key controls
Next steps
Conclusion
Additional resources
Contributors
Document revisions

Abstract

This document provides information to help regulated financial institutions (FIs) operating in Singapore as they accelerate their use of Amazon Web Services (AWS) Cloud services.

About this guide

This document provides information to assist banks and financial services institutions in Singapore regulated by the Monetary Authority of Singapore (MAS) as they adopt and accelerate their use of the AWS Cloud. This guide:

• Describes the respective roles that the customer and AWS each play in managing and securing the cloud environment;
• Provides an overview of the regulatory requirements and guidance that financial institutions can consider when using AWS; and
• Provides additional resources that financial institutions can use to help them design and architect their AWS environment to be secure and meet regulatory expectations.

The Monetary Authority of Singapore (MAS) Guidelines on Outsourcing for financial institutions (FIs) acknowledge that FIs can leverage cloud services to enhance their operations and reap the benefits of the scale, standardization, and security of the cloud. The MAS Guidelines on Outsourcing instruct FIs to perform due diligence and apply sound governance and risk management practices to their use of cloud services. The following sections provide considerations for FIs as they assess their responsibilities related to the following guidelines:

• MAS Guidelines on Outsourcing – The Guidelines on Outsourcing provide expanded guidance to the industry on prudent risk management practices for outsourcing, including cloud services.

• MAS Technology Risk Management (TRM)
Guidelines – These include guidance for a high level of reliability, availability, and recoverability of critical IT systems, and for FIs to implement IT controls to protect customer information from unauthorized access or disclosure.

• Notice 655 on Cyber Hygiene – This notice sets out cyber security requirements on securing administrative accounts, applying security patching, establishing baseline security standards, deploying network security devices, implementing anti-malware measures, and strengthening user authentication.

• Association of Banks in Singapore (ABS) Cloud Computing Implementation Guide 2.0 – This guide is intended to assist FIs in further understanding approaches to due diligence, vendor management, and key controls that should be implemented in cloud outsourcing arrangements.

Taken together, FIs can use this information for their due diligence and to assess how to implement an appropriate information security, risk management, and governance program for their use of AWS.

Security and the Shared Responsibility Model

Before exploring the requirements included in the various guidelines, it is important that FIs understand the AWS Shared Responsibility Model. AWS operates, manages, and controls the IT components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate. The customer assumes responsibility for and management of the guest operating system (including updates and security patches), other associated application software, as well as the configuration of the AWS-provided security group firewall. Customers should carefully consider the services they choose, as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations. The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment. This differentiation of responsibility is commonly referred to as Security "of" the Cloud versus Security "in" the Cloud.

When using AWS services, customers maintain control over their content and are responsible for managing critical content security requirements, including:

• The content that customers choose to store on AWS
• The AWS services that are used with the content
• The country where the content is stored
• The format and structure of that content, and whether it is masked, anonymized, or encrypted
• How the data is encrypted and where the keys are stored
• Who has access to that content, and how those access rights are granted, managed, and revoked

It is possible to enhance security and meet more stringent compliance requirements by leveraging technology such as host-based firewalls, host-based intrusion detection and prevention, and encryption. AWS provides tools and information to assist customers in their efforts to account for, and validate, whether controls are operating effectively in their extended IT environment. For more information, refer to the AWS Compliance Center at http://aws.amazon.com/compliance. For more information on the Shared Responsibility Model and its implications for the storage and processing of personal data and other content using AWS, refer to Using AWS in the Context of Singapore Privacy Considerations.

Security of the cloud

To provide security of the cloud, AWS environments are nearly continuously audited
and the infrastructure and services are approved to operate under several compliance standards and industry certifications across geographies and verticals. Customers can use these certifications to validate the implementation and effectiveness of AWS security controls, including internationally recognized security best practices and certifications.

The AWS compliance program is based on the following actions:

• Validate that AWS services and facilities across the globe maintain a ubiquitous control environment that is operating effectively. The AWS control environment includes policies, processes, and control activities that leverage various aspects of the AWS overall control environment. The collective control environment encompasses the people, processes, and technology necessary to establish and maintain an environment that supports the operating effectiveness of our control framework. AWS has integrated applicable cloud-specific controls identified by leading cloud computing industry bodies into the AWS control framework. AWS monitors these industry groups to identify leading practices that it can implement, and to better assist customers with managing their control environment.

• Demonstrate the AWS compliance posture to help customers verify compliance with industry and government requirements. AWS engages with external certifying bodies and independent auditors to provide customers with considerable information regarding the policies, processes, and controls established and operated by AWS. Customers can leverage this information to perform their control evaluation and verification procedures, as required under the applicable compliance standard.

• Monitor that AWS maintains compliance with global standards and best practices through the use of thousands of security control requirements.

AWS compliance programs

AWS has obtained certifications and independent third-party attestations for a variety of industry-specific workloads. The following are of particular importance to FIs:

• ISO 27001 – ISO 27001 is a security management standard that specifies security management best practices and comprehensive security controls following the ISO 27002 best practice guidance. The basis of this certification is the development and implementation of a rigorous security program, which includes the development and implementation of an Information Security Management System, which defines how AWS perpetually manages security in a holistic, comprehensive manner. For more information, or to download the AWS ISO 27001 certification, refer to https://aws.amazon.com/compliance/iso-27001-faqs/

• ISO 27017 – ISO 27017 provides guidance on the information security aspects of cloud computing, recommending the implementation of cloud-specific information security controls that supplement the guidance of the ISO 27002 and ISO 27001 standards. This code of practice provides additional information security controls and implementation guidance specific to cloud service providers. For more information, or to download the AWS ISO 27017 certification, refer to https://aws.amazon.com/compliance/iso-27017-faqs/

• ISO 27018 – ISO 27018 is a code of practice that focuses on protection of personal data in the cloud. It is based on ISO information security standard 27002 and provides implementation guidance on ISO 27002 controls applicable to public cloud Personally Identifiable Information (PII). It also provides a set of additional controls and associated guidance intended to address public cloud PII protection requirements not addressed by the existing ISO 27002 control set. For more information, or to download the AWS ISO 27018 certification, refer to https://aws.amazon.com/compliance/iso-27018-faqs/

• ISO 9001 – ISO 9001 outlines a process-oriented approach to documenting and reviewing the structure, responsibilities, and procedures required to achieve effective quality
management within an organization The key to the ongoing certification under this standard is establishing maintaini ng and improving the organizational structure responsibilities procedures processes and resources in a manner in which AWS products and services consistently satisfy ISO 9001 quality requirements For mor e information or to download the AWS ISO 9001 certification refer to https://awsamazoncom/compliance/iso 9001 faqs/ Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 6 • MTCS Level 3 – Multi Tier Cloud Security (MTCS) is an operational Singapore security management Standard (SPRING SS 584:2013) based on ISO 27001/02 Information Security Management System (ISMS) standards The key to the ongoing three year certification under this standa rd is the effective management of a rigorous security program and annual monitoring by an MTCS Certifying Body (CB) The Information Security Management System (ISMS) required under this standard defines how AWS perpetually manages security in a holistic comprehensive way For more information refer to https://awsamazoncom/compliance/aws multitiered cloud security standard certification/ • Outsourced Service Provider’s Audit Report (OSPAR) – The ABS Guidelines recommend that Singapore banks select outsourced service providers that meet the controls set out in the ABS Guidelines which can be demonstrated through an OSPAR An OSPAR attestation involves an external audit of the service provider’s controls against the criteria specified in the ABS Guidelines For more information refer to https://awsamazoncom/compliance/OSPAR/ • PCI DSS Level 1 – The Payment Card Industry Data Security Standard (also known as PCI DSS) is a proprietary information security standard administered by the PCI Security Standards Council PCI DSS applies to all entities that store process or transmit cardholder data (CHD ) and/or sensitive authentication data (SAD) including merchants processors 
acquirers, issuers, and service providers. The PCI DSS is mandated by the card brands and administered by the Payment Card Industry Security Standards Council. For more information, or to request the PCI DSS Attestation of Compliance and Responsibility Summary, refer to https://aws.amazon.com/compliance/pci-dss-level-1-faqs/.

• SOC – AWS Service Organization Control (SOC) Reports are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives. The purpose of these reports is to help customers and their auditors understand the AWS controls established to support operations and compliance. For more information, refer to https://aws.amazon.com/compliance/soc-faqs/. There are three types of AWS SOC Reports:

o SOC 1 – Provides information about the AWS control environment that might be relevant to a customer's internal controls over financial reporting, as well as information for assessment and opinion of the effectiveness of internal controls over financial reporting (ICOFR).

o SOC 2 – Provides customers and their service users that have a business need with an independent assessment of the AWS control environment that is relevant to system security, availability, and confidentiality.

o SOC 3 – Provides customers and their service users that have a business need with an independent assessment of the AWS control environment that is relevant to system security, availability, and confidentiality, without disclosing AWS internal information.

For more information about the other certifications and attestations from AWS, refer to the AWS Compliance Center at https://aws.amazon.com/compliance/. For a description of general security controls and service-specific security from AWS, refer to AWS Overview of Security Processes.

AWS Artifact

Customers can review and download reports and details about more than 2,500 security controls by using AWS
Artifact, the self-service audit artifact retrieval portal available in the AWS Management Console. The AWS Artifact portal provides on-demand access to AWS security and compliance documents, including Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, the AWS MAS Technology Risk Management Workbook, and certifications from accreditation bodies across geographies and compliance verticals.

AWS Regions

The AWS Cloud infrastructure is built around Regions and Availability Zones. A Region is a physical location in the world with multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, all housed in separate facilities. These Availability Zones offer customers the ability to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center. For additional information on AWS Regions and Availability Zones, refer to https://aws.amazon.com/about-aws/global-infrastructure/.

MAS Guidelines on Outsourcing

The MAS Guidelines on Outsourcing provide guidance and recommendations on prudent risk management practices for outsourcing, including the use of cloud services by FIs. FIs that use the cloud are expected to carry out due diligence, evaluate and address risks, and enter into appropriate outsourcing agreements. The Guidelines on Outsourcing expressly state that the extent and degree to which an FI implements the specific guidance therein should be commensurate with the nature of risks in, and materiality of, the outsourcing. FIs should also demonstrate their observance of the guidelines to MAS through the submission of an outsourcing register to MAS annually or on request. A full analysis of the Guidelines on Outsourcing is beyond the scope of this document. However, the following information
includes the considerations in the Guidelines that AWS most frequently encounters in interactions with Singapore's FIs.

Assessment of service providers

Section 5.4.3 of the Guidelines on Outsourcing includes a partial list of topics that should be evaluated in the course of due diligence when an FI is considering an outsourcing arrangement, such as use of the cloud. The following table includes considerations for each component of section 5.4.3 of the MAS Outsourcing Guidelines.

Table 1 – Considerations for section 5.4.3 of the MAS Outsourcing Guidelines

Due diligence requirement 5.4.3 (a): Experience and capability to implement and support the outsourcing arrangement over the contracted period.
AWS response: Since 2006, AWS has provided flexible, scalable, and secure IT infrastructure to businesses of all sizes around the world. AWS continues to grow and scale, which allows us to provide new services that help millions of active customers.

Due diligence requirement 5.4.3 (b): Financial strength and resources.
AWS response: The financial statements of Amazon.com, Inc. include sales and income information from AWS, permitting assessment of its financial position and the ability to service its debts and/or liabilities. These financial statements are available from the SEC or at the Amazon Investor Relations website.

Due diligence requirement 5.4.3 (c): Corporate governance, business reputation and culture, compliance, and pending or potential litigation.
AWS response: AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity, and availability of customers' systems and content. Maintaining customer trust and confidence is of the utmost importance to AWS. AWS performs a nearly continuous risk assessment process to identify, evaluate, and mitigate
risks across the company. The process involves developing and implementing risk treatment plans to mitigate risks as necessary. The AWS risk management team monitors and escalates risks on a nearly continuous basis, performing risk assessments on newly implemented controls at least every six months. For additional information, see these AWS Audit Reports: SOC 2, PCI DSS, ISO 27001, ISO 27017. Amazon.com has a Code of Business Conduct and Ethics, available at the Amazon Investor Relations website, which encompasses considerations such as compliance with laws, conflicts of interest, bribery, discrimination and harassment, health and safety, recordkeeping, and financial integrity. Information on legal proceedings can be found within the Amazon.com, Inc. Form 10-K filing, available at the Amazon Investor Relations website or the website of the US Securities and Exchange Commission.

Due diligence requirement 5.4.3 (d): Security and internal controls, audit coverage, reporting and monitoring environment.
AWS response: AWS management re-evaluates the security program at least biannually. This process includes risk assessment and implementation of appropriate measures designed to address those risks. AWS has established a formal audit program that includes continual independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment. To learn more about each of the audit programs leveraged by AWS, refer to the AWS Compliance Programs. Compliance reports from these assessments are made available through AWS Artifact to customers to enable them to evaluate AWS. The AWS Compliance reports identify the scope of AWS services and Regions assessed, as well as the assessor's attestation of compliance. Customers can also leverage reports and certifications available through AWS Artifact to evaluate vendors or suppliers according to their requirements.
Due diligence requirement 5.4.3 (e): Risk management framework and capabilities, including technology risk management and business continuity management in respect of the outsourcing arrangement.
AWS response: AWS performs a nearly continuous risk assessment process to identify, evaluate, and mitigate risks across the company. The process involves developing and implementing risk treatment plans to mitigate risks as necessary. AWS monitors and escalates risks on a nearly continuous basis, regularly performing risk assessments on newly implemented controls.

Due diligence requirement 5.4.3 (f): Disaster recovery arrangements and disaster recovery track record.
AWS response: The AWS Business Continuity plan details the process that AWS follows in the case of an outage, from detection to deactivation. This plan has been developed to recover and reconstitute AWS using a three-phased approach: Activation and Notification Phase, Recovery Phase, and Reconstitution Phase. This approach ensures that AWS performs system recovery and reconstitution efforts in a methodical sequence, maximizing the effectiveness of the recovery and reconstitution efforts and minimizing system outage time due to errors and omissions. AWS maintains a ubiquitous security control environment across all Regions. Each data center is built to physical, environmental, and security standards in an active-active configuration, employing an N+1 redundancy model designed to ensure system availability in the event of component failure. Components (N) have at least one independent backup component (+1), so the backup component is active in the operation even if all other components are fully functional. In order to reduce single points of failure, this model is applied throughout AWS, including network and data center implementation. All data centers are online and serving traffic; no data center is cold. In case of failure, there is sufficient capacity to enable traffic to be load-balanced to the remaining sites. Customers are responsible for properly implementing contingency planning, training, and testing
for their systems hosted on AWS. AWS provides customers with the capability to implement a robust continuity plan, including the utilization of frequent server instance backups, data redundancy replication, and the flexibility to place instances and store data within multiple geographic Regions, as well as across multiple Availability Zones within each Region. Each Availability Zone is designed as an independent failure zone. In the case of failure, automated processes move customer data traffic away from the affected area. This means that Availability Zones are typically physically separated within a metropolitan region and are in different flood plains.

Customers use AWS to enable faster disaster recovery of their critical IT systems without incurring the infrastructure expense of a second physical site. The AWS Cloud supports many popular disaster recovery (DR) architectures, from pilot light environments that are ready to scale up at a moment's notice to hot standby environments that enable rapid failover.

Due diligence requirement 5.4.3 (g): Reliance on and success in dealing with subcontractors.
AWS response: AWS has a program in place for selecting vendors and periodically evaluating vendor performance and compliance with contractual obligations. AWS implements policies and controls to monitor access to resources that process or store customer content. Vendors and third parties with restricted access that engage in business with Amazon are subject to confidentiality commitments as part of their agreements with Amazon. To monitor subcontractor access year-round, refer to https://aws.amazon.com/compliance/third-party-access/.

Due diligence requirement 5.4.3 (h): Insurance coverage.
AWS response: Amazon's Memorandum of Insurance is available on the Amazon Investor Relations website.

Due diligence requirement 5.4.3 (i): External environment (such as the political, economic, social, and legal environment of the jurisdiction in which the service provider
operates); and 5.4.3 (j): Ability to comply with applicable laws and regulations, and track record in relation to its compliance with applicable laws and regulations.
AWS response: AWS complies with applicable federal, state, and local laws, statutes, ordinances, and regulations concerning security, privacy, and data protection of AWS services, which helps to minimize the risk of accidental or unauthorized access or disclosure of customer content. AWS formally tracks and monitors its regulatory and contractual agreements and obligations. AWS has performed and maintains the following activities:

• Identified applicable laws and regulations for each of the jurisdictions in which AWS operates
• Documented and maintains all statutory, regulatory, and contractual requirements relevant to AWS

Cloud computing

The updated MAS Guidelines on Outsourcing include a chapter on cloud computing. MAS notes that cloud services can potentially offer many advantages, including the following:

• Economies of scale
• Cost savings
• Access to quality system administration
• Operations that adhere to uniform security standards and best practices
• Flexibility and agility for institutions to scale up or pare down on computing resources quickly as usage requirements change
• Enhanced system resilience during location-specific disasters or disruptions

MAS also clarified that it considers cloud computing a form of outsourcing, and that the types of risks arising to FIs from using the cloud are not distinct from those of other forms of outsourcing arrangements. FIs are still expected to perform the necessary due diligence and apply sound governance and risk management practices in a similar manner as the FI would for any other outsourcing arrangement. Section 6 of the Guidelines on Outsourcing outlines a partial list of specific risks that should be evaluated and addressed by an FI that uses cloud services. The following
table includes considerations relevant to each risk mentioned in paragraph 6.7 of the Guidelines.

Table 2 — Considerations relevant to paragraph 6.7 of the Outsourcing Guidelines

Risk area: Data access, confidentiality, and integrity.
AWS controls: AWS gives customers ownership and control over their customer content by design through simple but powerful tools that allow customers to determine where to store their customer content, secure their customer content in transit or at rest, and manage access to AWS services and resources for their users. AWS implements responsible and sophisticated technical and physical controls designed to prevent unauthorized access to or disclosure of customer content. AWS seeks to maintain data integrity through all phases, including transmission, storage, and processing. AWS treats all customer data and associated assets as highly confidential. AWS services are content agnostic, which means that they offer the same high level of security to all customers, regardless of the type of content being stored. AWS is vigilant about customers' security and has implemented sophisticated technical and physical measures against unauthorized access. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, and how it is used and protected from disclosure. Customer-provided data is validated for integrity, and corrupted or tampered data is not written to storage. Amazon Simple Storage Service (Amazon S3) uses checksums internally to confirm the continued integrity of data in transit within the system and at rest. Amazon S3 provides a facility for customers to send checksums with the data transmitted to the service. The service validates the checksum upon receipt of the data to determine that no corruption occurred
in transit. Regardless of whether a checksum is sent with an object to Amazon S3, the service uses checksums internally to confirm the continued integrity of data in transit within the system and at rest. When disk corruption or device failure is detected, the system automatically attempts to restore normal levels of object storage redundancy. External access to data stored in Amazon S3 is logged, and the logs are retained for at least 90 days, including relevant access request information such as the data accessor IP address, object, and operation. For more information, see the following AWS Audit Reports: SOC 1, SOC 2, PCI DSS, ISO 27001, ISO 27017.

Risk area: Sovereignty.
AWS controls: AWS customers choose the physical Region in which their data and servers are located. AWS does not move customers' content from the selected Regions without notifying the customer, unless required to comply with the law or a binding order of a governmental body. For more information, refer to Using AWS in the Context of Singapore Privacy Considerations.

Risk area: Recoverability.
AWS controls: The Amazon infrastructure has a high level of availability and provides customers the features to deploy a resilient IT architecture. AWS has designed its systems to tolerate system or hardware failures with minimal customer impact. AWS provides customers with the flexibility to place instances and store data within multiple geographic Regions, as well as across multiple Availability Zones within each Region. Each Availability Zone is designed as an independent failure zone. This means that Availability Zones are physically separated within a typical metropolitan region and are located in lower-risk flood plains (specific flood zone categorization varies by Region). In addition to discrete uninterruptible power supply (UPS) and onsite backup generation facilities, they are each fed through different grids from independent utilities to further reduce single points of failure. Availability Zones are all redundantly connected to multiple tier 1 transit providers.
Risk area: Regulatory compliance.
AWS controls: AWS formally tracks and monitors its regulatory and contractual agreements and obligations. To do so, AWS has performed and maintains the following activities:

• Identified applicable laws and regulations for each of the jurisdictions in which AWS operates
• Documented and maintains all statutory, regulatory, and contractual requirements relevant to AWS
• Categorized records into types, with details of retention periods and type of storage media, through the Data Classification Policy
• Informed and trained personnel (employees, contractors, third-party users) that must be made aware of compliance policies to protect sensitive AWS information (such as intellectual property rights and AWS records) through the Data Handling Policy
• Monitors the use of AWS facilities for unauthorized activities, with a process in place to enforce appropriate disciplinary action

AWS maintains relationships with outside parties to monitor business and regulatory requirements. Should a new security directive be issued, AWS has documented plans in place to implement that directive within designated timeframes. For more information, see the following AWS Audit Reports: SOC 1, SOC 2, PCI DSS, ISO 27001, ISO 27017.

Risk area: Auditing.
AWS controls: Enabling our customers to protect the confidentiality, integrity, and availability of systems and content is of the utmost importance to AWS, as is maintaining customer trust and confidence. To make sure these standards are met, AWS has established a formal audit program to validate the implementation and effectiveness of the AWS control environment. The AWS audit program includes internal audits and third-party accreditation audits. The objective of these audits is to evaluate the operating effectiveness of the AWS control environment. Internal audits are planned and performed periodically. Audits by third-party accreditation are conducted to review
the continued performance of AWS against standards-based criteria and to identify general improvement opportunities. Compliance reports from these assessments are made available to customers to enable them to evaluate AWS. The AWS Compliance reports identify the scope of AWS services and Regions assessed, as well as the assessor's attestation of compliance. Customers can also leverage reports and certifications available through AWS Artifact to evaluate vendors or suppliers according to their requirements. Some of our key audit programs and certifications are described in the AWS compliance programs section of this document. For a full list of audits, certifications, and attestations, refer to the AWS Compliance Center.

Risk area: Segregation of customer data.
AWS controls: Customer environments are logically segregated to prevent users and customers from accessing resources not assigned to them. Customers maintain full control over who has access to their data. Services that provide virtualized operational environments to customers (for example, Amazon EC2) are designed to ensure that customers are segregated from one another, and to prevent cross-tenant privilege escalation and information disclosure via hypervisors and instance isolation. Customers can also use Amazon Virtual Private Cloud (VPC), which gives them complete control over their virtual networking environment, including resource placement, connectivity, and security. The first step is to create your VPC. Then you can add resources to it, such as Amazon Elastic Compute Cloud (EC2) and Amazon Relational Database Service (RDS) instances. Finally, you can define how your VPCs communicate with each other across accounts, Availability Zones (AZs), or Regions.

Outsourcing agreements

Section 5.5 of the Guidelines on Outsourcing clarifies that contractual terms and conditions governing the use of the cloud should be defined in written
agreements. MAS expects such agreements to address, at the least: the scope of the outsourcing arrangement; performance, operational, internal control, and risk management standards; confidentiality and security; business continuity management; monitoring and control; audit and inspection; notification of adverse developments; dispute resolution; default termination and early exit; sub-contracting; and applicable laws. AWS customers have the option to enroll in an Enterprise Agreement with AWS. Enterprise Agreements give customers the option to tailor agreements that best suit their needs. AWS also provides an introductory guide to help Singapore's FIs assess the AWS Enterprise Agreement against the Guidelines on Outsourcing. For more information about AWS Enterprise Agreements, contact your AWS representative.

Audit and inspection

The Guidelines on Outsourcing clarify that an FI's outsourcing arrangements should not interfere with the ability of the FI to effectively manage its business activities, or impede MAS in carrying out its supervisory functions and objectives. Customers retain ownership and control of their content when they use AWS services, and do not cede that ownership and control of their content to AWS. Customers have complete control over which services they use and whom they allow to access their content and services, including what credentials are required. Customers control how they configure their environments and secure their content, including whether they encrypt their content (at rest and in transit), and what other security features and tools they use and how they use them. AWS does not change customer configuration settings, because these settings are determined and controlled by the customer. AWS customers have the complete freedom to design their security architecture to meet their compliance needs. This is a key difference from traditional hosting
solutions, where the provider decides on the architecture. AWS enables and empowers the customer to decide when and how security measures are implemented in the cloud, in accordance with each customer's business needs. For example, if a higher availability architecture is required to protect customer content, the customer can add redundant systems, backups, locations, and network uplinks to create a more resilient, high-availability architecture. If restricted access to customer content is required, AWS enables the customer to implement access rights management controls, both on a systems level and through encryption on a data level. For more information, refer to Using AWS in the Context of Singapore Privacy Considerations.

The Guidelines on Outsourcing also require FIs to have access to audit reports and findings made on service providers, whether produced by the service provider's or its subcontractors' internal or external auditors, or by agents appointed by the service provider and its sub-contractor in relation to the outsourcing agreement. Customers can validate the security controls in place within the AWS environment through AWS certifications and reports, including the AWS Service Organization Control (SOC) 1, 2, and 3 reports; ISO 27001, 27017, and 27018 certifications; and PCI DSS compliance reports. These reports and certifications are produced by independent third-party auditors and attest to the design and operating effectiveness of AWS security controls. For more information about how AWS approaches audits and inspections, and how these requirements may be addressed in an Enterprise Agreement with AWS, contact your AWS representative.

MAS Technology Risk Management Guidelines

The MAS Technology Risk Management (TRM) Guidelines define risk management principles and best practice standards to guide FIs in the following:

• Establishing a sound and robust technology risk
management framework
• Strengthening system security, reliability, resiliency, and recoverability
• Deploying strong authentication to protect customer data, transactions, and systems

AWS has produced a MAS TRM Guidelines Workbook that maps AWS security and compliance controls (OF the cloud) and best practice guidance provided by the AWS Well-Architected Framework (IN the cloud) to the requirements within the MAS TRM Guidelines. Where applicable under the AWS Shared Responsibility Model, the workbook provides supporting details and references to assist FIs when they adapt the MAS TRM Guidelines for their workloads on AWS.

The Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using the framework, you learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. The process for reviewing an architecture is a constructive conversation about architectural decisions, and is not an audit mechanism. AWS believes that having well-architected systems greatly increases the likelihood of business success. AWS Solutions Architects have years of experience architecting solutions across a wide variety of business verticals and use cases. They have helped design and review thousands of customers' architectures on AWS. From this experience, they have identified best practices and core strategies for architecting systems in the cloud.

The AWS Well-Architected Framework documents a set of foundational questions that allow you to understand whether a specific architecture aligns well with cloud best practices. The Framework provides a consistent approach to evaluating systems against the qualities you expect from modern cloud-based
systems, and the remediation that would be required to achieve those qualities. As AWS continues to evolve and continues to learn more from working with customers, the definition of well-architected will continue to be refined. The Framework is intended for those in technology roles, such as chief technology officers (CTOs), architects, developers, and operations team members. It describes AWS best practices and strategies to use when designing and operating a cloud workload, and provides links to further implementation details and architectural patterns. For more information, refer to the AWS Well-Architected page.

The following table excerpt shows an example of the response from AWS to guideline 9.1.5 in the TRM Guidelines:

Table 3 — Response from AWS to guideline 9.1.5 in the TRM Guidelines

Requirement 9.1.5: Multi-factor authentication should be implemented for users with access to sensitive system functions to safeguard the systems and data from unauthorized access.
Responsibility: AWS
AWS supporting information: AWS Control Objective: Governance and Risk Management. Shared Responsibility Model. Security and compliance is a shared responsibility between AWS and the customer. AWS is responsible for the security and compliance 'of' the cloud and implements security controls to secure the underlying infrastructure that runs the AWS services and hosts and connects customer resources. AWS customers are responsible for security 'in' the cloud and should determine, design, and implement the security controls needed based on their security and compliance needs and the AWS services they select. The customer responsibility will be determined by the AWS services that a customer selects. AWS provides customers with best practices on how to secure their resources within the AWS service's documentation at http://docs.aws.amazon.com/.
Responsibility: AWS
AWS supporting information: AWS Control Objective: Identity and Access Management. Administrative Access. Amazon personnel with a business need to access the management plane are required to first use multi-factor authentication to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured, and hardened to protect the management plane; such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems are revoked.

Requirement 9.1.5: Multi-factor authentication should be implemented for users with access to sensitive system functions to safeguard the systems and data from unauthorized access.
Responsibility: Customer
Well-Architected Question/Best Practice: SEC2 How do you manage authentication for people and machines? Use strong sign-in mechanisms. Enforce minimum password length, and educate users to avoid common or reused passwords. Enforce multi-factor authentication (MFA) with software or hardware mechanisms to provide an additional layer.

FIs can create an AWS account at AWS Artifact and get a copy of the AWS MAS TRM Workbook from the AWS Artifact portal after logging in. FIs should review responses from AWS in the AWS MAS TRM Workbook and enrich them with the FI's own company-wide controls. For example, section 3 of the MAS TRM Guidelines discusses the oversight of technology risk by the board of directors and senior management. This is a principle that is likely to apply company-wide, is not specific to cloud or particular applications, and can only be addressed by the FI. The AWS MAS TRM Workbook also positions FIs to more clearly consider whether and how to add extra or supplementary technology risk controls that are specific to lines of business or application teams, or the FI's particular needs.

Notice
655 on Cyber Hygiene

The Notice 655 on Cyber Hygiene applies to all banks in Singapore. It sets out cyber security requirements on securing administrative accounts, applying security patching, establishing baseline security standards, deploying network security devices, implementing anti-malware measures, and strengthening user authentication. AWS has produced the AWS Workbook for MAS Notice 655 on Cyber Hygiene, which maps AWS security and compliance controls (OF the cloud) and best practice guidance provided by the AWS Well-Architected Framework (IN the cloud) to the requirements within the Notice 655. Where applicable under the AWS Shared Responsibility Model, the workbook provides supporting details and references to assist FIs when they adapt the Notice 655 on Cyber Hygiene for their workloads on AWS.

The following table excerpt shows an example of the response from AWS to Cyber Hygiene Practice 4.3 in the Notice 655 on Cyber Hygiene:

Table 4 — Response from AWS to Cyber Hygiene Practice 4.3

Requirement 4.3: Security Standards, IV Cyber Hygiene Practices.
Responsibility: AWS
AWS supporting information: AWS Control Objective: Governance and Risk Management. Baseline Requirements. AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity, and availability of customers' systems and content. Maintaining customer trust and confidence is of the utmost importance to AWS. AWS complies with applicable federal, state, and local laws, statutes, ordinances, and regulations concerning security, privacy, and data protection of AWS services, which helps to minimize the risk of accidental or unauthorized access or disclosure of customer content.
Requirement: 4.3 Security Standards (IV. Cyber Hygiene Practices)
Responsibility: AWS
AWS supporting information: AWS Control Objective: Governance and Risk Management – Security Control Framework. AWS has developed and implemented a security control environment designed to protect the confidentiality, integrity, and availability of customers' systems and content. AWS maintains a broad range of industry- and geography-specific compliance programs and is continually assessed by external certifying bodies and independent auditors to provide assurance that the policies, processes, and controls established and operated by AWS are in alignment with these program standards and the highest open standards.

Requirement: 4.3 Security Standards (IV. Cyber Hygiene Practices)
Responsibility: AWS
AWS supporting information: AWS Control Objective: Governance and Risk Management – Shared Responsibility Model. Security and compliance is a shared responsibility between AWS and the customer. AWS is responsible for the security and compliance 'of' the cloud and implements security controls to secure the underlying infrastructure that runs the AWS services and hosts and connects customer resources. AWS customers are responsible for security 'in' the cloud and should determine, design, and implement the security controls needed based on their security and compliance needs and the AWS services they select. The customer responsibility will be determined by the AWS services that a customer selects. AWS provides customers with best practices on how to secure their resources within the AWS service's documentation at http://docs.aws.amazon.com/. AWS customers are responsible for all scanning, penetration testing, file integrity monitoring, and intrusion detection for their Amazon EC2 and Amazon ECS instances and applications. Refer to http://aws.amazon.com/security/penetration-testing for terms of service regarding penetration testing. Penetration tests should include customer IP addresses and not AWS endpoints.
Additional information: AWS endpoints are tested as part of AWS compliance vulnerability scans.

Table 5 — Guidance provided by the AWS Well-Architected Framework for Cyber Hygiene Practice 4.3

Requirement: 4.3 Security Standards (IV. Cyber Hygiene Practices)
Responsibility: Customer
Well-Architected Question/Best Practice: OPS 3 How do you reduce defects, ease remediation, and improve flow into production? Share design standards: share best practices across teams to increase awareness and maximize the benefits of development efforts. (Learn more)

Requirement: 4.3 Security Standards (IV. Cyber Hygiene Practices)
Responsibility: Customer
Well-Architected Question/Best Practice: SEC 7 How do you protect your compute resources? Automate configuration management: enforce and validate secure configurations automatically by using a configuration management service or tool to reduce human error. (Learn more)

Requirement: 4.3 Security Standards (IV. Cyber Hygiene Practices)
Responsibility: Customer
Well-Architected Question/Best Practice: SEC 6 How do you protect your networks?
Automate configuration management: enforce and validate secure configurations automatically by using a configuration management service or tool to reduce human error. (Learn more)

FIs can create an AWS account at AWS Artifact and get a copy of the AWS Workbook for MAS Notice 655 on Cyber Hygiene from the AWS Artifact portal after logging in. FIs should review responses from AWS in the AWS Workbook for MAS Notice 655 on Cyber Hygiene and enrich them with the FI's own company-wide controls.

ABS Cloud Computing Implementation Guide 2.0

The Association of Banks in Singapore (ABS) has also published an implementation guide for banks that are entering into cloud outsourcing arrangements. The ABS Cloud Computing Implementation Guide 2.0 includes recommendations that were discussed and agreed by members of the ABS Standing Committee for Cyber Security, and are intended to assist banks in further understanding approaches to due diligence, vendor management, and key controls that should be implemented in cloud outsourcing arrangements. Importantly, while the MAS Guidelines on Outsourcing and Technology Risk Management Guidelines are issued by the relevant regulator and provide guidance for a broad class of financial institutions, the ABS Cloud Computing Implementation Guide 2.0 comprises a series of practical recommendations from the banking industry body.

Key controls

The ABS Cloud Computing Implementation Guide recommends that a number of key controls be implemented when entering into a cloud outsourcing arrangement. AWS has produced the AWS Workbook for ABS Cloud Computing Implementation Guide 2.0 that maps AWS security and compliance controls (OF the cloud) and best practice guidance provided by the AWS Well-Architected Framework (IN the cloud) to the requirements within the guide. Where applicable under the AWS Shared Responsibility Model, the workbook provides supporting details and
references to assist FIs when they adapt the guide for their workloads on AWS. The following table excerpt shows an example of the response from AWS to controls in section 4 C) Run the Cloud, 1. Change Management – Considerations/Standard Workloads of the ABS Cloud Computing Implementation Guide 2.0:

Table 6 — Response from AWS to controls in section 4 C.1

Requirement: 1. Change management procedures should be mutually agreed between the CSP and the FI. Such procedures should be formalized and include change request and approval procedures, as well as a reporting component. (Considerations for Standard Workloads)
Responsibility: AWS
AWS supporting information: AWS Control Objective: Governance and Risk Management – Shared Responsibility Model. Security and compliance is a shared responsibility between AWS and the customer. AWS is responsible for the security and compliance 'of' the cloud and implements security controls to secure the underlying infrastructure that runs the AWS services and hosts and connects customer resources. AWS customers are responsible for security 'in' the cloud and should determine, design, and implement the security controls needed based on their security and compliance needs and the AWS services they select. The customer responsibility will be determined by the AWS services that a customer selects. AWS provides customers with best practices on how to secure their resources within the AWS service's documentation at http://docs.aws.amazon.com/. AWS customers are responsible for all scanning, penetration testing, file integrity monitoring, and intrusion detection for their Amazon EC2 and Amazon ECS instances and applications. Refer to http://aws.amazon.com/security/penetration-testing for terms of service regarding penetration testing. Penetration tests should include customer IP addresses and not AWS endpoints. AWS endpoints are tested
as part of AWS compliance vulnerability scans.
Additional information: n/a

Requirement: 1. Change management procedures should be mutually agreed between the CSP and the FI. Such procedures should be formalized and include change request and approval procedures, as well as a reporting component. (Considerations for Standard Workloads)
Responsibility: AWS
AWS supporting information: AWS Control Objective: OSPAR. The Association of Banks in Singapore (ABS) Guidelines on Control Objectives and Procedures for Outsourced Service Providers (ABS Guidelines) recommend that Singapore banks select outsourced service providers that meet the controls set out in the ABS Guidelines, which can be demonstrated through an OSPAR. Amazon Web Services (AWS) achieved the Outsourced Service Provider's Audit Report (OSPAR) attestation. An OSPAR attestation involves an external audit of the service provider's controls against the criteria specified in the ABS Guidelines. The audit report can be downloaded on AWS Artifact.
Additional information: n/a

Requirement: 1. Change management procedures should be mutually agreed between the CSP and the FI. Such procedures should be formalized and include change request and approval procedures, as well as a reporting component. (Considerations for Standard Workloads)
Responsibility: Customer
Well-Architected Question/Best Practice: REL 5 How do you implement change?
Deploy changes in a planned manner: deployments and patching follow a documented process. (Learn more)

FIs can create an AWS account at AWS Artifact and get a copy of the AWS Workbook for ABS Cloud Computing Implementation Guide 2.0 from the AWS Artifact portal after logging in. FIs should review responses from the AWS Workbook for ABS Cloud Computing Implementation Guide 2.0 and enrich them with the FI's own company-wide controls.

Next steps

Each organization's cloud adoption journey is unique. To successfully complete cloud adoption, FIs need to understand their organization's current state, the target state, and the transition required to achieve the target state. Knowing this will help FIs set goals and create work streams that will enable a successful move to the cloud.

The AWS Cloud Adoption Framework (AWS CAF) offers structure to help organizations develop an efficient and effective plan for their cloud adoption journey. Guidance and best practices prescribed in the Framework can help FIs build a comprehensive approach to cloud computing across their organization, throughout their IT lifecycle. The AWS CAF breaks down the complicated process of planning into manageable areas of focus. Many organizations choose to apply the AWS CAF methodology with a facilitator-led workshop. To find out more about such workshops, contact your AWS representative. Alternatively, AWS provides access to tools and resources for self-service application of the AWS CAF methodology at AWS Cloud Adoption Framework.

For FIs regulated by the Monetary Authority of Singapore (MAS), next steps typically also include the following:

• Contact your AWS representative to discuss how the AWS Partner Network, as well as AWS solution architects, Professional Services teams, and training instructors, can assist with your cloud adoption journey. If you do not have an AWS representative, contact AWS.

• Obtain and review a copy
of the latest AWS SOC 1 and 2 reports, PCI DSS Attestation of Compliance and Responsibility Summary, and ISO 27001 certification from the AWS Artifact portal (accessible via the AWS Management Console).

• Consider the relevance and application of the CIS Amazon Web Services Foundations as appropriate for your cloud journey and use cases. These industry-accepted best practices published by the Center for Internet Security go beyond the high-level security guidance already available, providing AWS users with clear step-by-step implementation and assessment recommendations.

• Dive deeper on other governance and risk management practices as necessary in light of your due diligence and risk assessment, using the tools and resources referenced throughout this whitepaper and in the Additional Resources section below.

• Speak with your AWS representative to learn more about how AWS is helping financial services customers migrate their critical workloads to the cloud.

• Review a copy of the AWS MAS TRM Workbook, Notice 655 on Cyber Hygiene Workbook, and ABS Cloud Computing Implementation Guide 2.0 Workbook from the AWS Artifact portal (accessible through the AWS Management Console). FIs should populate the workbook with additional controls that they have implemented or will implement.

• Update and maintain your register of outsourcing arrangements as appropriate for submission to MAS, at least annually or upon request.

Conclusion

Providing highly secure and resilient infrastructure and services to customers is a top priority for AWS. The AWS commitment to customers is focused on working to continuously earn customer trust and ensure customers maintain confidence in operating their workloads securely on AWS. To achieve this, AWS has integrated risk and compliance mechanisms that include:

• The implementation of a wide array of security controls and automated tools

• Nearly continuous
monitoring and assessment of security controls to help ensure AWS operational effectiveness and strict adherence to compliance regimes.

In addition, AWS regularly undergoes independent third-party audits to provide assurance that the control activities are operating as intended. These audits, along with the many certifications AWS has obtained, provide an additional level of validation of the AWS control environment that benefits customers. Taken together with customer-managed security controls, these efforts allow AWS to securely innovate on behalf of customers and help customers improve their security posture when building on AWS.

Additional resources

Set out below are additional resources to help financial institutions think about security, compliance, and designing a secure and resilient AWS environment:

• AWS Compliance Quick Reference Guide — AWS has many compliance-enabling features that you can use for your regulated workloads in the AWS Cloud. These features can allow you to achieve a higher level of security at scale. Cloud-based compliance offers a lower cost of entry, easier operations, and improved agility by providing more oversight, security control, and central automation.

• AWS Well-Architected Framework — The Well-Architected Framework has been developed to help cloud architects build the most secure, high-performing, resilient, and efficient infrastructure possible for their applications. This framework provides a consistent approach for customers and partners to evaluate architectures, and provides guidance to help implement designs that will scale with application needs over time. The Well-Architected Framework consists of five pillars: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.

• Global Financial Services Regulatory Principles — AWS has identified five common principles related to financial services regulation that
customers should consider when using AWS Cloud services, and specifically when applying the shared responsibility model to their regulatory requirements. Customers can access a whitepaper on these principles under a nondisclosure agreement at AWS Artifact.

• NIST Cybersecurity Framework (CSF) — The AWS whitepaper NIST Cybersecurity Framework (CSF): Aligning to the NIST CSF in the AWS Cloud demonstrates how public and commercial sector organizations can assess the AWS environment against the NIST CSF and improve the security measures they implement and operate (security in the cloud). The whitepaper also provides a third-party auditor letter attesting to the AWS Cloud offering's conformance to NIST CSF risk management practices (security of the cloud). FIs can leverage NIST CSF and AWS resources to elevate their risk management frameworks.

• Using AWS in the Context of Common Privacy and Data Protection Considerations — This document provides information to assist customers who want to use AWS to store or process content containing personal data, in the context of common privacy and data protection considerations. It will help customers understand:

o The way AWS services operate, including how customers can address security and encrypt their content

o The geographic locations where customers can choose to store content, and other relevant considerations

o The respective roles the customer and AWS each play in managing and securing content stored on AWS services

Contributors

Contributors to this document include:

• Bella Khabbaz, Senior Corporate Counsel, Amazon Web Services
• Alvin Li, Sr. Security Strategist, Amazon Web Services
• Brandon Lim, Principal, FS Security, Amazon Web Services
• Daniel Wu, Principal, Public Policy, Amazon Web Services
• Genevieve Ding, Public Policy Head, SG & ASEAN, Amazon Web Services
• Melissa Yoong, Public Policy Manager, SG, Amazon Web Services
Document revisions

Date / Description
January 3, 2022 – Third publication. Updated the MAS TRMG and ABS Cloud Computing Implementation Guide 2.0 sections to reflect the updated AWS TRMG Workbook and the new ABS Cloud Computing Implementation Guide 2.0.
May 2019 – Second publication. Updated the MAS TRM section to reflect the security in the cloud guidance provided by AWS Well-Architected and the associated enhanced MAS TRM Guidance Workbook.
July 2017 – First publication.

AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines

April 2020

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Overview
Security and the Shared Responsibility Model
Security IN the Cloud
Security OF the Cloud
AWS Compliance Assurance Programs
AWS Artifact
AWS Regions
Hong Kong Insurance Authority Guideline on Outsourcing (GL14)
Prior Notification of Material Outsourcing
Outsourcing Policy
Outsourcing Agreement
Information Confidentiality
Monitoring and Control
Contingency Planning
Hong Kong Insurance Authority Guideline on the Use of Internet for Insurance Activities (GL8)
Next Steps
Additional
Resources
Document Revisions

About this Guide

This document provides information to assist Authorized Insurers (AIs) in Hong Kong regulated by the Hong Kong Insurance Authority (IA) as they accelerate their use of Amazon Web Services' (AWS) Cloud services.

Overview

The Hong Kong Insurance Authority (IA) issues guidelines to provide the Hong Kong insurance industry with practical guidance to facilitate compliance with regulatory requirements. The guidelines relevant to the use of outsourced services instruct Authorized Insurers (AIs) to perform materiality assessments and risk assessments, perform due diligence reviews of service providers, ensure controls are in place to preserve information confidentiality, have sufficient monitoring and control oversight on the outsourcing arrangement, and establish contingency arrangements. The following sections provide considerations for AIs as they assess their responsibilities with regard to the following guidelines:

• Guideline on Outsourcing (GL14) – This guideline sets out the IA's supervisory approach to outsourcing and the major points that the IA recommends AIs address when outsourcing their activities, including the use of cloud services.

• Guideline on the Use of Internet for Insurance Activities (GL8) – This guideline outlines the specific points that AIs (and other groups regulated by the IA) need to be aware of when engaging in internet-based insurance activities.

For a full list of the IA guidelines, see the Guidelines section of Legislative and Regulatory Framework on the IA website.

Security and the Shared Responsibility Model

Cloud security is a shared responsibility. At AWS, we maintain a high bar for security OF the cloud through robust governance, automation, and testing, and we validate our approach through compliance with global and regional regulatory requirements and best
practices. Security IN the cloud is the responsibility of the customer. What this means is that customers retain control of the security program they choose to implement to protect their own content, platform, applications, systems, and networks. Customers should carefully consider how they will manage the services they choose, as their responsibilities vary depending on the services they use, the integration of those services into their IT environments, and applicable laws and regulations. We recommend that customers think about their security responsibilities on a service-by-service basis, because the extent of their responsibilities may differ between services.

Figure 1 – Shared Responsibility Model

Security IN the Cloud

Customers are responsible for their security in the cloud. For services such as Elastic Compute Cloud (EC2), the customer is responsible for managing the guest operating system (including installing updates and security patches) and other associated application software, as well as the configuration of the AWS-provided security group firewall. Customers can also use managed services such as database, directory, and web application firewall services, which provide customers the resources they need to perform specific tasks without having to launch and maintain virtual machines. For example, a customer can launch an Amazon Aurora database, which Amazon Relational Database Service (RDS) manages to handle tasks such as provisioning, patching, backup, recovery, failure detection, and repair.

It is important to note that when using AWS services, customers maintain control over their content and are responsible for managing critical content security requirements, including:

• The content that they choose to store on AWS
• The AWS services that are used with the content
• The country where their content is stored
• The format and
structure of their content, and whether it is masked, anonymized, or encrypted
• How their content is encrypted and where the keys are stored
• Who has access to their content, and how those access rights are granted, managed, and revoked

Because customers, rather than AWS, control these important factors, customers retain responsibility for their choices. Customers are responsible for the security of the content they put on AWS, or that they connect to their AWS infrastructure, such as the guest operating system, applications on their compute instances, and content stored and processed in AWS storage platforms, databases, or other services.

Security OF the Cloud

For many services, such as EC2, AWS operates, manages, and controls the IT components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate. In order to provide assurance about security of the AWS Cloud, we continuously audit our environment. AWS infrastructure and services are validated against multiple compliance standards and industry certifications across geographies and industries. Customers can use the AWS compliance certifications to validate the implementation and effectiveness of AWS security controls, including internationally recognized security best practices and certifications. The AWS compliance program is based on the following actions:

• Validate that AWS services and facilities across the globe maintain a ubiquitous control environment that is operating effectively. The AWS control environment encompasses the people, processes, and technology necessary to establish and maintain an environment that supports the operating effectiveness of the AWS control framework. AWS has integrated applicable cloud-specific controls identified by leading cloud computing industry bodies into the AWS control
framework. AWS monitors these industry groups to identify leading practices that can be implemented, and to better assist customers with managing their control environment.

• Demonstrate the AWS compliance posture to help customers verify compliance with industry and government requirements. AWS engages with external certifying bodies and independent auditors to provide customers with information regarding the policies, processes, and controls established and operated by AWS. Customers can use this information to perform their control evaluation and verification procedures, as required under the applicable compliance standard.

• Monitor that AWS maintains compliance with global standards and best practices through the use of thousands of security control requirements.

AWS Compliance Assurance Programs

In order to help customers establish, operate, and leverage the AWS security control environment, AWS has developed a security assurance program that uses global privacy and data protection best practices. These security protections and control processes are independently validated by multiple third-party independent assessments. The following are of particular importance to Hong Kong AIs:

ISO 27001 – ISO 27001 is a security management standard that specifies security management best practices and comprehensive security controls following the ISO 27002 best practice guidance. The basis of this certification is the development and implementation of a rigorous security program, which includes the development and implementation of an Information Security Management System that defines how AWS perpetually manages security in a holistic, comprehensive manner. For more information, or to download the AWS ISO 27001 certification, see the ISO 27001 Compliance webpage.

ISO 27017 – ISO 27017 provides guidance on the information security aspects of
cloud computing, recommending the implementation of cloud-specific information security controls that supplement the guidance of the ISO 27002 and ISO 27001 standards. This code of practice provides additional security controls implementation guidance specific to cloud service providers. For more information, or to download the AWS ISO 27017 certification, see the ISO 27017 Compliance webpage.

ISO 27018 – ISO 27018 is a code of practice that focuses on protection of personal data in the cloud. It is based on ISO information security standard 27002 and provides implementation guidance on ISO 27002 controls applicable to public cloud Personally Identifiable Information (PII). It also provides a set of additional controls and associated guidance intended to address public cloud PII protection requirements not addressed by the existing ISO 27002 control set. For more information, or to download the AWS ISO 27018 certification, see the ISO 27018 Compliance webpage.

ISO 9001 – ISO 9001 outlines a process-oriented approach to documenting and reviewing the structure, responsibilities, and procedures required to achieve effective quality management within an organization. The key to ongoing certification under this standard is establishing, maintaining, and improving the organizational structure, responsibilities, procedures, processes, and resources in a manner where AWS products and services consistently satisfy ISO 9001 quality requirements. For more information, or to download the AWS ISO 9001 certification, see the ISO 9001 Compliance webpage.

PCI DSS Level 1 – The Payment Card Industry Data Security Standard (also known as PCI DSS) is a proprietary information security standard administered by the PCI Security Standards Council. PCI DSS applies to all entities that store, process, or transmit cardholder data (CHD) and/or sensitive authentication data
(SAD), including merchants, processors, acquirers, issuers, and service providers. The PCI DSS is mandated by the card brands and administered by the Payment Card Industry Security Standards Council. For more information, or to request the PCI DSS Attestation of Compliance and Responsibility Summary, see the PCI DSS Compliance webpage.

SOC – AWS System & Organization Controls (SOC) Reports are independent third-party audit reports that demonstrate how AWS achieves key compliance controls and objectives. The purpose of these reports is to help customers and their auditors understand the AWS controls established to support operations and compliance. For more information, see the SOC Compliance webpage. There are three types of AWS SOC Reports:

• SOC 1: Provides information about the AWS control environment that may be relevant to a customer's internal controls over financial reporting, as well as information for assessment and opinion of the effectiveness of internal controls over financial reporting (ICOFR).

• SOC 2: Provides customers and their service users that have a business need with an independent assessment of the AWS control environment relevant to system security, availability, and confidentiality.

• SOC 3: Provides customers and their service users that have a business need with an independent assessment of the AWS control environment relevant to system security, availability, and confidentiality, without disclosing AWS internal information.

By tying together governance-focused, audit-friendly service features with such certifications, attestations, and audit standards, AWS Compliance enablers build on traditional programs, helping customers to establish and operate in an AWS security control environment. For more information about other AWS certifications and attestations, see AWS Compliance Programs.

AWS Artifact

Customers can review and
download reports and details about more than 2,600 security controls by using AWS Artifact, the automated compliance reporting tool available in the AWS Management Console. The AWS Artifact portal provides on-demand access to AWS's security and compliance documents, including SOC reports, PCI reports, and certifications from accreditation bodies across geographies and compliance verticals.

AWS Regions

The AWS Cloud infrastructure is built around AWS Regions and Availability Zones. An AWS Region is a physical location in the world that is made up of multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each housed in separate facilities with redundant power, networking, and connectivity. These Availability Zones offer customers the ability to operate production applications and databases at higher availability, fault tolerance, and scalability than would be possible from a single data center. For current information on AWS Regions and Availability Zones, see https://aws.amazon.com/about-aws/global-infrastructure/.

Hong Kong Insurance Authority Guideline on Outsourcing (GL14)

The Hong Kong Insurance Authority Guideline on Outsourcing (GL14) provides guidance and recommendations on prudent risk management practices for outsourcing, including the use of cloud services by AIs. AIs that use cloud services are expected to carry out due diligence, evaluate and address risks, and enter into appropriate outsourcing agreements. Section 5 of the GL14 states that the AI's materiality and risk assessments should include considerations such as a determination of the importance and criticality of the services to be outsourced, and the impact on the AI's risk profile (in respect of financial, operational, legal, and reputational risks, and potential losses to customers) if the outsourced service is disrupted or falls short of acceptable standards. AIs should be able to demonstrate their observance of the guidelines as required by the IA. A full analysis of the
GL14 is beyond the scope of this document. However, the following sections address the considerations in the GL14 that most frequently arise in interactions with AIs.

Amazon Web Services: AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines

Prior Notification of Material Outsourcing

Under Section 6.1 of the GL14, an AI is required to notify the IA when the AI is planning to enter into a new material outsourcing arrangement or significantly vary an existing one. The notification includes the following requirements:
• Unless otherwise justifiable by the AI, the notification should be made at least 3 months before the day on which the new outsourcing arrangement is proposed to be entered into or the existing arrangement is proposed to be varied significantly.
• A detailed description of the proposed outsourcing arrangement to be entered into or the significant proposed change.
• Sufficient information to satisfy the IA that the AI has taken into account and properly addressed all of the essential issues set out in Section 5 of the GL14.

Outsourcing Policy

Section 5.8 of the GL14 sets out a list of factors that should be evaluated in the context of service provider due diligence when an AI is considering an outsourcing arrangement, including the use of cloud services. The following considerations address each component of Section 5.8.

Due Diligence Requirement (a): reputation, experience, and quality of service
Customer Considerations: Since 2006, AWS has provided flexible, scalable, and secure IT infrastructure to businesses of all sizes around the world. AWS continues to grow and scale, allowing us to provide new services that help millions of active customers.

Due Diligence Requirement (b): financial soundness, in particular the ability to continue to provide the expected level of service
Customer Considerations: The financial statements of Amazon.com, Inc. include AWS's sales and income, permitting assessment of its financial position and ability to service its
debts and/or liabilities. These financial statements are available from the SEC or at Amazon's Investor Relations website.

Due Diligence Requirement (c): managerial skills, technical and operational expertise, and competence, in particular the ability to deal with disruptions in business continuity
Customer Considerations: AWS management has developed a strategic business plan, which includes risk identification and the implementation of controls to mitigate or manage risks. AWS management re-evaluates the strategic business plan at least biannually. This process requires management to identify risks within its areas of responsibility and to implement appropriate measures designed to address those risks. The AWS Cloud operates a global infrastructure with multiple Availability Zones within multiple geographic AWS Regions around the world. For more information, see AWS Global Infrastructure. AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity, and availability of customers' systems and data. Maintaining customer trust and confidence is of the utmost importance to AWS. AWS performs a continuous risk assessment process to identify, evaluate, and mitigate risks across the company. The process involves developing and implementing risk treatment plans to mitigate risks as necessary. The AWS risk management team monitors and escalates risks on a continuous basis, performing risk assessments on newly implemented controls at least every six months.

Due Diligence Requirement (d): any license, registration, permission, or authorization required by law to perform the outsourced service
Customer Considerations: While Hong Kong does not have specific licensing or certification requirements for operating
cloud services, AWS has multiple attestations for secure and compliant operation of its services. Globally, these include certification to ISO 27017 (guidelines for information security controls applicable to the provision and use of cloud services) and ISO 27018 (code of practice for protection of personally identifiable information (PII) in public clouds). For more information about our assurance programs, see AWS Assurance Programs.

Due Diligence Requirement (e): extent of reliance on sub-contractors and effectiveness in monitoring the work of sub-contractors
Customer Considerations: AWS creates and maintains written agreements with third parties (for example, contractors or vendors) in accordance with the work or service to be provided, and implements appropriate relationship management mechanisms in line with their relationship to the business.

Due Diligence Requirement (f): compatibility with the insurer's corporate culture and future development strategies
Customer Considerations: AWS maintains a systematic approach to planning and developing new services for the AWS environment to ensure that the quality and security requirements are met with each release. The AWS strategy for the design and development of services is to clearly define services in terms of customer use cases, service performance, marketing and distribution requirements, production and testing, and legal and regulatory requirements.

Due Diligence Requirement (g): familiarity with the insurance industry and capacity to keep pace with innovation in the market
Customer Considerations: For a list of case studies from financial services customers that have deployed applications on the AWS Cloud, see Financial Services Customer Stories. For a list of financial services cloud solutions provided by AWS, see Financial Services Cloud Solutions. The AWS Cloud platform expands daily. For a list of the latest AWS Cloud services and news, see What's New with AWS.

Outsourcing Agreement
An outsourcing agreement should be undertaken in the form of a legally binding written agreement. Section 5.10 of the Guideline on Outsourcing (GL14) clarifies the matters that an AI should consider when entering into an outsourcing arrangement with a service provider, including performance standards, certain reporting or notification requirements, and contingency plans. AWS customers may have the option to enroll in an Enterprise Agreement with AWS. Enterprise Agreements give customers the option to tailor agreements that best suit their organization's needs. For more information about AWS Enterprise Agreements, contact your AWS representative.

Information Confidentiality

Under Sections 5.12, 5.13, and 5.14 of the Guideline on Outsourcing (GL14), AIs need to ensure that the outsourcing arrangements comply with relevant laws and statutory requirements on customer confidentiality. The following considerations address Sections 5.12, 5.13, and 5.14.

Requirement 5.12: The insurer should ensure that it and the service provider have proper safeguards in place to protect the integrity and confidentiality of the insurer's information and customer data.
Customer Considerations: Data Protection: You choose how your data is secured. AWS offers you strong encryption for your data in transit or at rest, and AWS provides you with the option to manage your own encryption keys. If you want to tokenize data before it leaves your organization, you can achieve this through a number of AWS Partners that provide this capability. Data Integrity: For access and system monitoring, AWS Config provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. Config rules enable you to create rules that automatically check the configuration of AWS resources recorded by AWS Config. When your resources
are created, updated, or deleted, AWS Config streams these configuration changes to Amazon Simple Notification Service (Amazon SNS), which notifies you of all configuration changes. AWS Config represents relationships between resources, so that you can assess how a change to one resource might impact other resources. Data Segregation: Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. Access Rights: AWS provides a number of ways for you to identify users and securely access your AWS account. A complete list of credentials supported by AWS can be found in the AWS Management Console by choosing your user name in the navigation bar and then choosing My Security Credentials. AWS also provides additional security options that enable you to further protect your AWS account and control access using the following: AWS Identity and Access Management (IAM), key management and rotation, temporary security credentials, and multi-factor authentication (MFA).

Requirement 5.13: An authorized insurer should take into account any legal or contractual obligation to notify customers of the outsourcing arrangement and circumstances under which their data may be disclosed or lost. In the event of the termination of the outsourcing agreement, the insurer should ensure that all customer data are either retrieved from the service provider or destroyed.
Customer Considerations: AWS provides you with the ability to delete your data. Because you retain control and ownership of your data, it is your responsibility to manage data retention to your own requirements.
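Because retention is the customer's responsibility, insurers often codify their retention schedule in tooling rather than checking it by hand. The following is a minimal, hypothetical sketch (the seven-year period, the helper name, and the object inventory are illustrative assumptions, not AWS requirements): it flags stored objects whose age exceeds the policy before any deletion is issued.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention period; set this to your own regulatory schedule.
RETENTION = timedelta(days=7 * 365)

def expired_keys(objects, now=None):
    """Return keys of stored objects older than the retention period.

    `objects` is an iterable of (key, last_modified) pairs, such as the
    Key/LastModified fields returned by an S3 list operation.
    """
    now = now or datetime.now(timezone.utc)
    return [key for key, modified in objects if now - modified > RETENTION]

# Example: one object past the cutoff, one within it.
now = datetime(2020, 1, 1, tzinfo=timezone.utc)
inventory = [
    ("policies/2010/contract.pdf", datetime(2010, 1, 1, tzinfo=timezone.utc)),
    ("policies/2019/contract.pdf", datetime(2019, 1, 1, tzinfo=timezone.utc)),
]
print(expired_keys(inventory, now))  # → ['policies/2010/contract.pdf']
```

The deletion itself would then go through your storage API (for Amazon S3, a DeleteObjects call), subject to your own approval workflow.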
If you decide to leave AWS, you can manage access to your data and AWS services and resources, including the ability to import and export data. AWS provides services such as AWS Import/Export to transfer large amounts of data into and out of AWS using physical storage appliances. For more information, see Cloud Storage with AWS. Additionally, AWS offers AWS Database Migration Service, a web service that you can use to migrate a database from an AWS service to an on-premises database. In alignment with ISO 27001 standards, when a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent your organization's data from being exposed to unauthorized individuals. AWS uses the techniques detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual") or NIST 800-88 ("Guidelines for Media Sanitization") to destroy data as part of the decommissioning process. If a hardware device is unable to be decommissioned using these procedures, the device will be degaussed or physically destroyed in accordance with industry-standard practices. For more information, see ISO 27001 standards, Annex A, domain 8. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. For additional details, see AWS Cloud Security. Also see Section 7.3 of the Customer Agreement, which is available at AWS Customer Agreement.

Requirement 5.14: An authorized insurer should notify the IA forthwith of any unauthorized access or breach of confidentiality by the service provider or its sub-contractor that affects the insurer or its customers.
Customer Considerations: AWS employees are trained on how to recognize suspected security incidents and where to report them. When appropriate, incidents are reported to relevant
authorities. AWS maintains the AWS security bulletin webpage, located at https://aws.amazon.com/security/security-bulletins/, to notify customers of security and privacy events affecting AWS services. Customers can subscribe to the Security Bulletin RSS feed to keep abreast of security announcements on the Security Bulletin webpage. The customer support team maintains a Service Health Dashboard webpage, located at http://status.aws.amazon.com/, to alert customers to any broadly impacting availability issues. Customers are responsible for their security in the cloud. It is important to note that when using AWS services, customers maintain control over their content and are responsible for managing critical content security requirements, including who has access to their content and how those access rights are granted, managed, and revoked. AWS customers should consider implementation of the following best practices to protect against and detect security breaches:
• Use encryption to secure customer data.
• Configure the AWS services to keep customer data secure. AWS provides customers with information on how to secure their resources within each AWS service's documentation at http://docs.aws.amazon.com/.
• Implement least-privilege permissions for access to your resources and customer data.
• Use monitoring tools like Amazon CloudWatch to track when customer data is accessed and by whom.

Monitoring and Control

Under Section 5.15 of the Guideline on Outsourcing (GL14), AIs should ensure that they have sufficient and appropriate resources to monitor and control outsourcing arrangements at all times. Section 5.16 further sets out that once an AI implements an outsourcing arrangement, it should regularly review the effectiveness and adequacy of its controls in monitoring the performance of the service provider. AWS has implemented a formal, documented incident response policy and program; this can be reviewed in the SOC 2 report via AWS Artifact. You can also see security notifications on the AWS Security Bulletins website.
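Tracking who accessed customer data, per the monitoring best practices above, is typically done by reviewing AWS CloudTrail events, which CloudWatch or other tooling can alert on. As an illustrative sketch (the field names follow CloudTrail's documented JSON record layout, but the summarizing helper itself is hypothetical), a monitoring job might extract the accessor identity and source IP for each S3 data event:

```python
import json

def summarize_data_events(records):
    """Extract who accessed what, and from where, out of CloudTrail-style records."""
    summary = []
    for event in records:
        summary.append({
            "user": event["userIdentity"].get("userName", "unknown"),
            "action": event["eventName"],
            "source_ip": event["sourceIPAddress"],
            "bucket": event["requestParameters"].get("bucketName"),
        })
    return summary

# A single, abbreviated CloudTrail record for demonstration.
raw = json.dumps({"Records": [{
    "eventName": "GetObject",
    "sourceIPAddress": "203.0.113.10",
    "userIdentity": {"userName": "claims-analyst"},
    "requestParameters": {"bucketName": "example-policy-archive"},
}]})

trail = json.loads(raw)
print(summarize_data_events(trail["Records"]))
```

In production, such summaries would feed a review queue or an alerting rule rather than a print statement, giving the AI evidence of the ongoing monitoring that Sections 5.15 and 5.16 expect.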
AWS provides you with various tools that you can use to monitor your services, including those already noted and others that you can find on AWS Marketplace.

Contingency Planning

Under Sections 5.17 and 5.18 of the Guideline on Outsourcing (GL14), if an AI chooses to outsource a service to a service provider, it should put in place a contingency plan to ensure that the AI's business won't be disrupted as a result of undesired contingencies of the service provider, such as system failures. The AI should also ensure that the service provider has its own contingency plan that covers daily operational and systems problems. The AI should have an adequate understanding of the service provider's contingency plan and consider the implications for its own contingency planning in the event that the outsourced service is interrupted due to undesired contingencies of the service provider. AWS and regulated AIs share a common interest in maintaining operational resilience, i.e., the ability to provide continuous service despite disruption. Continuity of service, especially for critical economic functions, is a key prerequisite for financial stability. For more information about AWS operational resilience approaches, see the AWS whitepaper Amazon Web Services' Approach to Operational Resilience in the Financial Sector & Beyond. The AWS Business Continuity plan details the process that AWS follows in the case of an outage, from detection to deactivation. This plan has been developed to recover and reconstitute AWS using a three-phased approach: Activation and Notification Phase, Recovery Phase, and Reconstitution Phase. This approach ensures that AWS performs system recovery and reconstitution efforts in a methodical sequence, maximizing the effectiveness of the recovery and reconstitution efforts and minimizing system outage time due to
errors and omissions. For more information, see the AWS whitepaper Amazon Web Services: Overview of Security Processes, and the SOC 2 report in the AWS Artifact console. AWS provides you with the capability to implement a robust continuity plan, including frequent server instance backups, data redundancy replication, and the flexibility to place instances and store data within multiple geographic Regions, as well as across multiple Availability Zones within each Region. For more information about disaster recovery approaches, see Disaster Recovery.

Hong Kong Insurance Authority Guideline on the Use of Internet for Insurance Activities (GL8)

The Hong Kong Insurance Authority Guideline on the Use of Internet for Insurance Activities (GL8) aims to draw attention to the special considerations that AIs (and other groups regulated by the IA) need to be aware of when engaging in internet-based insurance activities. Section 5.1, items (a) to (g), of the Guideline on the Use of Internet for Insurance Activities (GL8) sets out a series of requirements regarding information security, confidentiality, integrity, data protection, payment systems security, and related concerns for AIs to address when carrying out internet insurance activities. AIs should take all practicable steps to ensure the following:

Requirement (a): a comprehensive set of security policies and measures that keep up with the advancement in internet security technologies shall be in place
Customer Considerations: AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity, and availability of your systems and data. Maintaining customer trust and confidence is of the utmost importance to AWS.
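On the customer side of the shared responsibility model, "measures that keep up with internet security technologies" commonly include enforcing encrypted transport on storage. As a hedged sketch (the bucket name is a placeholder, and the policy follows the widely documented aws:SecureTransport pattern rather than any GL8-mandated text), a bucket policy that denies all non-HTTPS access can be generated as:

```python
import json

BUCKET = "example-policy-archive"  # placeholder bucket name

# Deny any S3 request to the bucket that does not arrive over TLS.
tls_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

print(json.dumps(tls_only_policy, indent=2))
```

The JSON produced here is what you would attach through the S3 console or the PutBucketPolicy API; periodically reviewing such policies is one way an AI can evidence item (a) of GL8 Section 5.1.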
AWS works to comply with applicable federal, state, and local laws, statutes, ordinances, and regulations concerning security, privacy, and data protection of AWS services, in order to minimize the risk of accidental or unauthorized access or disclosure of customer data.

Requirement (b): mechanisms shall be in place to maintain the integrity of data stored in the system hardware, whilst in transit, and as displayed on the website
Customer Considerations: AWS is designed to protect the confidentiality and integrity of transmitted data through the comparison of a cryptographic hash of data transmitted. This is done to help ensure that the message is not corrupted or altered in transit. Data that has been altered or corrupted in transit is immediately rejected. AWS provides many methods for you to securely handle your data: AWS enables you to open a secure, encrypted channel to AWS servers using HTTPS (TLS/SSL). Amazon S3 provides a mechanism that enables you to use MD5 checksums to validate that data sent to AWS is bitwise identical to what is received, and that data sent by Amazon S3 is identical to what is received by the user. When you choose to provide your own keys for encryption and decryption of Amazon S3 objects (SSE-C), Amazon S3 does not store the encryption key that you provide. Amazon S3 generates and stores a one-way, salted HMAC of your encryption key, and that salted HMAC value is not logged. Connections between your applications and Amazon RDS MySQL DB instances can be encrypted using TLS/SSL. Amazon RDS generates a TLS/SSL certificate for each database instance, which can be used to establish an encrypted connection using the default MySQL client. When an encrypted connection is established, data transferred between the database instance and your application is encrypted during transfer. If you require data to be encrypted while at rest in the database, your application must manage the encryption and decryption of data. Additionally, you can set up controls to have your database instances only accept encrypted connections for specific user accounts.
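The MD5 validation that Amazon S3 performs works by comparing a base64-encoded digest supplied by the client (the Content-MD5 header) against the digest S3 computes on receipt. A minimal sketch of computing that value client-side follows; the payload is illustrative, and MD5 here serves as an integrity check for transport, not a security control.

```python
import base64
import hashlib

def content_md5(payload: bytes) -> str:
    """Base64-encoded MD5 digest, the format S3 expects in the Content-MD5 header."""
    return base64.b64encode(hashlib.md5(payload).digest()).decode("ascii")

payload = b"hello"
print(content_md5(payload))  # → XUFAKrxLKna5cZ2REBfFkg==
```

If the digest S3 computes on the received bytes does not match the supplied header, the upload is rejected, which is how bitwise-identical delivery is verified.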
Data is encrypted with 256-bit keys when you enable AWS KMS to encrypt Amazon S3 objects, Amazon EBS volumes, Amazon RDS DB instances, Amazon Redshift data blocks, AWS CloudTrail log files, Amazon SES messages, Amazon WorkSpaces volumes, Amazon WorkMail messages, and Amazon EMR S3 storage. AWS offers you the ability to add an additional layer of security to data at rest in the cloud, providing scalable and efficient encryption features. This includes:
• Data encryption capabilities available in AWS storage and database services, such as Amazon EBS, Amazon S3, Amazon Glacier, Amazon RDS for Oracle Database, Amazon RDS for SQL Server, and Amazon Redshift.
• Flexible key management options, including AWS Key Management Service (AWS KMS), that allow you to choose whether to have AWS manage the encryption keys or enable you to keep complete control over your keys.
• Dedicated hardware-based cryptographic key storage using AWS CloudHSM, which enables you to satisfy compliance requirements.
In addition, AWS provides APIs that you can use to integrate encryption and data protection with any of the services you develop or deploy in the AWS Cloud.

Requirement (c): appropriate backup procedures for the database and application software shall be implemented
Customer Considerations: AWS maintains a retention policy applicable to AWS internal data and system components in order to continue operations of AWS business and services. Critical AWS system components, including audit evidence and
logging records, are replicated across multiple Availability Zones, and backups are maintained and monitored. You retain control and ownership of your data. When you store data in a specific Region, it is not replicated outside that Region. It is your responsibility to replicate data across Regions if your business needs require this capability. Amazon S3 supports data replication and versioning instead of automatic backups. You can, however, back up data stored in Amazon S3 to other AWS Regions or to on-premises backup systems. Amazon S3 replicates each object across all Availability Zones within the respective Region. Replication can provide data and service availability in the case of system failure, but it provides no protection against accidental deletion or data integrity compromise, because it replicates changes across all Availability Zones where it stores copies. Amazon S3 offers standard redundancy and reduced redundancy options, which have different durability objectives and price points. Each Amazon EBS volume is stored as a file, and AWS creates two copies of the EBS volume for redundancy. Both copies reside in the same Availability Zone, however, so while Amazon EBS replication can survive hardware failure, it is not suitable as an availability tool for prolonged outages or disaster recovery purposes. We recommend that you replicate data at the application level or create backups. Amazon EBS provides snapshots that capture the data stored on an Amazon EBS volume at a specific point in time. If the volume is corrupt (for example, due to system failure) or data from it is deleted, you can restore the volume from snapshots. Amazon EBS snapshots are AWS objects to which IAM users, groups, and roles can be assigned permissions, so that only authorized users can access Amazon EBS backups.

Requirement (d): a client's personal
information (including password, if any) shall be protected against loss, or unauthorized access, use, modification, or disclosure, etc.
Customer Considerations: You control your data. With AWS, you can do the following:
• Determine where your data is stored, including the type of storage and the geographic Region of that storage.
• Choose the secured state of your data. We offer you strong encryption for your content in transit or at rest, and we provide you with the option to manage your own encryption keys.
• Manage access to your data and AWS services and resources through users, groups, permissions, and credentials that you control.

Requirement (e): a client's electronic signature, if any, shall be verified
Customer Considerations: Amazon Partner Network (APN) Technology Partners provide software solutions (including electronic signature solutions) that are either hosted on or integrated with the AWS Cloud platform. The AWS Partner Solutions Finder provides you with a centralized place to search, discover, and connect with trusted APN Technology and Consulting Partners based on your business needs. For more information, see AWS Partner Solutions Finder.

Requirement (f): the electronic payment system (e.g., credit card payment system) shall be secure
Customer Considerations: AWS is a Payment Card Industry (PCI) compliant cloud service provider, having been PCI DSS certified since 2010. The most recent assessment validated that AWS successfully completed the PCI Data Security Standards 3.2 Level 1 Service Provider assessment and was found to be compliant for all the services outlined on AWS Services in Scope by Compliance Program. The AWS PCI Compliance Package, which is available through AWS Artifact, includes the AWS PCI DSS 3.2 Attestation of Compliance (AOC) and the AWS 2016 PCI DSS 3.2 Responsibility Summary. PCI compliance on AWS is a shared responsibility. In accordance with the shared responsibility model, all
entities must manage their own PCI DSS compliance certification. While your organization's QSA can rely on the AWS Attestation of Compliance (AOC) for the portion of the PCI cardholder environment deployed in AWS, you are still required to satisfy all other PCI DSS requirements. The AWS 2016 PCI DSS 3.2 Responsibility Summary provides you with guidance on what you are responsible for. For more information about AWS PCI DSS compliance, see PCI DSS Level 1 Service Provider.

Requirement (g): a valid insurance contract shall not be cancelled accidentally, maliciously, or consequent upon careless computer handling
Customer Considerations: Your data is validated for integrity, and corrupted or tampered data is not written to storage. Amazon S3 utilizes checksums internally to confirm the continued integrity of content in transit within the system and at rest. Amazon S3 provides a facility for you to send checksums along with data transmitted to the service. The service validates the checksum upon receipt of the data to determine that no corruption occurred in transit. Regardless of whether a checksum is sent with an object to Amazon S3, the service utilizes checksums internally to confirm the continued integrity of content in transit within the system and at rest. When disk corruption or device failure is detected, the system automatically attempts to restore normal levels of object storage redundancy. External access to content stored in Amazon S3 is logged, and the logs are retained for at least 90 days, including relevant access request information such as the accessor IP address, object, and operation.

Next Steps

Each organization's cloud adoption journey is unique. In order to successfully execute your adoption, you need to understand your organization's current state, the target state, and the transition required to achieve the target state. Knowing
this will help you set goals and create work streams that will enable staff to thrive in the cloud. The AWS Cloud Adoption Framework (AWS CAF) offers structure to help organizations develop an efficient and effective plan for their cloud adoption journey. Guidance and best practices prescribed within the framework can help you build a comprehensive approach to cloud computing across your organization, throughout your IT lifecycle. The AWS CAF breaks down the complicated process of planning into manageable areas of focus. Many organizations choose to apply the AWS CAF methodology with a facilitator-led workshop. To find out more about such workshops, please contact your AWS representative. Alternatively, AWS provides access to tools and resources for self-service application of the AWS CAF methodology at AWS Cloud Adoption Framework.

For AIs in Hong Kong, next steps typically also include the following:
• Contact your AWS representative to discuss how the AWS Partner Network, AWS Solutions Architects, Professional Services teams, and Training instructors can assist with your cloud adoption journey. If you do not have an AWS representative, contact us at https://aws.amazon.com/contact-us/
• Obtain and review a copy of the latest AWS SOC 1 & 2 reports, PCI DSS Attestation of Compliance and Responsibility Summary, and ISO 27001 certification from the AWS Artifact portal (accessible via the AWS Management Console).
• Consider the relevance and application of the CIS AWS Foundations Benchmark, available here and here, as appropriate for your cloud journey and use cases. These industry-accepted best practices published by the Center for Internet Security go beyond the high-level security guidance already available, providing AWS users with clear, step-by-step implementation and assessment recommendations.
• Dive deeper on other governance and risk
management practices as necessary, in light of your due diligence and risk assessment, using the tools and resources referenced throughout this whitepaper and in the Additional Resources section below.
• Speak to your AWS representative about an AWS Enterprise Agreement.

Additional Resources

For additional information, see:
• AWS Cloud Security Whitepapers & Guides
• AWS Compliance
• AWS Cloud Security Services
• AWS Best Practices for DDoS Resiliency
• AWS Security Checklist
• Cloud Adoption Framework Security Perspective
• AWS Security Best Practices
• AWS Risk & Compliance
• Using AWS in the Context of Hong Kong Privacy Considerations

Document Revisions
April 2020: Updates to Additional Resources section.
February 2020: Revision and updates.
October 2017: First publication.

Cost Optimization Pillar: AWS Well-Architected Framework (July 2020)

This paper has been archived. The latest version is now available at: https://docs.aws.amazon.com/wellarchitected/latest/cost-optimization-pillar/welcome.html

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. © 2020 Amazon Web Services, Inc. or its
affiliates. All rights reserved.

Contents

Introduction
Cost Optimization
Design Principles
Definition
Practice Cloud Financial Management
Functional Ownership
Finance and Technology Partnership
Cloud Budgets and Forecasts
Cost-Aware Processes
Cost-Aware Culture
Quantify Business Value Delivered Through Cost Optimization
Expenditure and Usage Awareness
Governance
Monitor Cost and Usage
Decommission Resources
Cost-Effective Resources
Evaluate Cost When Selecting Services
Select the Correct Resource Type, Size, and Number
Select the Best Pricing Model
Plan for Data Transfer
Manage Demand and Supply Resources
Manage Demand
Dynamic Supply
Optimize Over Time
Review and Implement New Services
Conclusion
Contributors
Further Reading
Document Revisions

Abstract

This whitepaper focuses on the cost optimization pillar of the Amazon Web Services (AWS) Well-Architected Framework. It provides guidance to help customers apply best practices in the design, delivery, and maintenance of AWS environments. A cost-optimized workload fully utilizes all resources, achieves an outcome at the lowest possible price point, and meets your functional requirements. This whitepaper provides in-depth guidance for building capability within your organization, designing your workload, selecting your services, configuring and operating the services, and applying cost optimization techniques.

Introduction

The AWS Well-Architected Framework helps you understand the decisions you make while building workloads on AWS. The Framework provides architectural best practices for designing and operating reliable, secure, efficient, and cost-effective workloads in the cloud. It demonstrates a way to consistently measure your architectures against best practices and identify areas for improvement. We believe that having well-architected workloads greatly increases the likelihood of
business success.

The framework is based on five pillars:

• Operational Excellence
• Security
• Reliability
• Performance Efficiency
• Cost Optimization

This paper focuses on the cost optimization pillar, and how to architect workloads with the most effective use of services and resources to achieve business outcomes at the lowest price point. You'll learn how to apply the best practices of the cost optimization pillar within your organization.

Cost optimization can be challenging in traditional on-premises solutions because you must predict future capacity and business needs while navigating complex procurement processes. Adopting the practices in this paper will help your organization achieve the following goals:

• Practice Cloud Financial Management
• Expenditure and usage awareness
• Cost-effective resources
• Manage demand and supply resources
• Optimize over time

This paper is intended for those in technology and finance roles, such as chief technology officers (CTOs), chief financial officers (CFOs), architects, developers, financial controllers, financial planners, business analysts, and operations team members. This paper does not provide implementation details or architectural patterns; however, it does include references to appropriate resources.

Cost Optimization

Cost optimization is a continual process of refinement and improvement over the span of a workload's lifecycle. The practices in this paper help you build and operate cost-aware workloads that achieve business outcomes while minimizing costs and allowing your organization to maximize its return on investment.

Design Principles

Consider the following design principles for cost optimization:

Implement cloud financial management: To achieve financial success and accelerate business value realization in the cloud, you must invest in Cloud Financial Management. Your organization must dedicate the necessary time and resources for building capability in this new domain of
technology and usage management. Similar to your Security or Operations capability, you need to build capability through knowledge-building programs, resources, and processes to help you become a cost-efficient organization.

Adopt a consumption model: Pay only for the computing resources you consume, and increase or decrease usage depending on business requirements. For example, development and test environments are typically only used for eight hours a day during the work week. You can stop these resources when they're not in use, for a potential cost savings of 75% (40 hours versus 168 hours).

Measure overall efficiency: Measure the business output of the workload and the costs associated with delivery. Use this data to understand the gains you make from increasing output, increasing functionality, and reducing cost.

Stop spending money on undifferentiated heavy lifting: AWS does the heavy lifting of data center operations like racking, stacking, and powering servers. It also removes the operational burden of managing operating systems and applications with managed services. This allows you to focus on your customers and business projects rather than on IT infrastructure.

Analyze and attribute expenditure: The cloud makes it easier to accurately identify the cost and usage of workloads, which then allows transparent attribution of IT costs to revenue streams and individual workload owners. This helps measure return on investment (ROI) and gives workload owners an opportunity to optimize their resources and reduce costs.

Definition

There are five focus areas for cost optimization in the cloud:

• Practice Cloud Financial Management
• Expenditure and usage awareness
• Cost-effective resources
• Manage demand and supply resources
• Optimize over time

Similar to the other pillars within the Well-Architected Framework, there are trade-offs to consider for cost optimization, for example, whether to optimize for speed to market or
for cost. In some cases it's best to optimize for speed (going to market quickly, shipping new features, or meeting a deadline) rather than investing in upfront cost optimization. Design decisions are sometimes directed by haste rather than data, and the temptation always exists to overcompensate rather than spend time benchmarking for the most cost-optimal deployment. Overcompensation can lead to over-provisioned and under-optimized deployments. However, it may be a reasonable choice if you must "lift and shift" resources from your on-premises environment to the cloud and then optimize afterwards. Investing the right amount of effort in a cost optimization strategy up front allows you to realize the economic benefits of the cloud more readily, by ensuring a consistent adherence to best practices and avoiding unnecessary over-provisioning.

The following sections provide techniques and best practices for the initial and ongoing implementation of Cloud Financial Management and cost optimization for your workloads.

Practice Cloud Financial Management

Cloud Financial Management (CFM) enables organizations to realize business value and financial success as they optimize their cost and usage and scale on AWS.

The following are Cloud Financial Management best practices:

• Functional ownership
• Finance and technology partnership
• Cloud budgets and forecasts
• Cost-aware processes
• Cost-aware culture
• Quantify business value delivered through cost optimization

Functional Ownership

Establish a cost optimization function: This function is responsible for establishing and maintaining a culture of cost awareness. It can be an existing individual, a team within your organization, or a new team of key finance, technology, and organization stakeholders from across the organization. The function (individual or team) prioritizes and spends the required percentage of its time on cost management and cost optimization activities. For a small
organization, the function might spend a smaller percentage of time compared to a full-time function for a larger enterprise. The function requires a multi-disciplined approach, with capabilities in project management, data science, financial analysis, and software/infrastructure development. The function can improve efficiencies of workloads by executing cost optimizations (centralized approach), by influencing technology teams to execute optimizations (decentralized), or by a combination of both (hybrid). The function may be measured against its ability to execute and deliver against cost optimization goals (for example, workload efficiency metrics).

You must secure executive sponsorship for this function. The sponsor is regarded as a champion for cost-efficient cloud consumption, and provides escalation support for the function to ensure that cost optimization activities are treated with the level of priority defined by the organization. Together, the sponsor and function ensure that your organization consumes the cloud efficiently and continues to deliver business value.

Finance and Technology Partnership

Establish a partnership between finance and technology: Technology teams innovate faster in the cloud due to shortened approval, procurement, and infrastructure deployment cycles. This can be an adjustment for finance organizations previously used to executing time-consuming and resource-intensive processes for procuring and deploying capital in data center and on-premises environments, and to performing cost allocation only at project approval.

Establish a partnership between key finance and technology stakeholders to create a shared understanding of organizational goals, and develop mechanisms to succeed financially in the variable spend model of cloud computing. Relevant teams within your organization must be involved in cost and usage discussions at all stages of your cloud journey, including:

• Financial leads: CFOs, financial controllers,
financial planners, business analysts, procurement, sourcing, and accounts payable must understand the cloud model of consumption, purchasing options, and the monthly invoicing process. Because of the fundamental differences between the cloud (such as the rate of change in usage, pay-as-you-go pricing, tiered pricing, pricing models, and detailed billing and usage information) and on-premises operation, it is essential that the finance organization understands how cloud usage can impact business aspects including procurement processes, incentive tracking, cost allocation, and financial statements.

• Technology leads: Technology leads (including product and application owners) must be aware of the financial requirements (for example, budget constraints) as well as business requirements (for example, service level agreements). This allows the workload to be implemented to achieve the desired goals of the organization.

The partnership of finance and technology provides the following benefits:

• Finance and technology teams have near real-time visibility into cost and usage.
• Finance and technology teams establish a standard operating procedure to handle cloud spend variance.
• Finance stakeholders act as strategic advisors with respect to how capital is used to purchase commitment discounts (for example, Reserved Instances or AWS Savings Plans), and how the cloud is used to grow the organization.
• Existing accounts payable and procurement processes are used with the cloud.
• Finance and technology teams collaborate on forecasting future AWS cost and usage to align and build organizational budgets.
• Better cross-organizational communication through a shared language and common understanding of financial concepts.

Additional stakeholders within your organization that should be involved in cost and usage discussions include:

• Business unit owners: Business unit owners must understand the cloud business model so that they can provide
direction to both the business units and the entire company. This cloud knowledge is critical when there is a need to forecast growth and workload usage, and when assessing longer-term purchasing options such as Reserved Instances or Savings Plans.

• Third parties: If your organization uses third parties (for example, consultants or tools), ensure that they are aligned to your financial goals and can demonstrate both alignment through their engagement models and a return on investment (ROI). Typically, third parties will contribute to reporting and analysis of any workloads that they manage, and they will provide cost analysis of any workloads that they design.

Cloud Budgets and Forecasts

Establish cloud budgets and forecasts: Customers use the cloud for efficiency, speed, and agility, which creates a highly variable amount of cost and usage. Costs can decrease with increases in workload efficiency, or as new workloads and features are deployed. Or workloads will scale to serve more of your customers, which increases cloud usage and costs. Existing organizational budgeting processes must be modified to incorporate this variability.

Adjust existing budgeting and forecasting processes to become more dynamic, using a trend-based algorithm (using historical costs as inputs), business-driver-based algorithms (for example, new product launches or regional expansion), or a combination of both trend and business drivers. You can use AWS Cost Explorer to forecast daily (up to 3 months) or monthly (up to 12 months) cloud costs, based on machine learning algorithms applied to your historical costs (trend-based).

Cost-Aware Processes

Implement cost awareness in your organizational processes: Cost awareness must be implemented in new and existing organizational processes. It is recommended to reuse and modify existing processes where possible; this minimizes the impact to agility and velocity. The following recommendations will
help implement cost awareness in your workload:

• Ensure that change management includes a cost measurement, to quantify the financial impact of your changes. This helps proactively address cost-related concerns and highlight cost savings.
• Ensure that cost optimization is a core component of your operating capabilities. For example, you can leverage existing incident management processes to investigate and identify root causes for cost and usage anomalies (cost overages).
• Accelerate cost savings and business value realization through automation or tooling. When thinking about the cost of implementation, frame the conversation to include an ROI component to justify the investment of time or money.
• Extend existing training and development programs to include cost-aware training throughout your organization. It is recommended that this includes continuous training and certification. This will build an organization that is capable of self-managing cost and usage.

Report and notify on cost and usage optimization: You must regularly report on cost and usage optimization within your organization. You can implement dedicated sessions on cost optimization, or include cost optimization in your regular operational reporting cycles for your workloads. AWS Cost Explorer provides dashboards and reports. You can track your progress of cost and usage against configured budgets with AWS Budgets Reports. You can also use Amazon QuickSight with Cost and Usage Report (CUR) data to provide highly customized reporting with more granular data.

Implement notifications on cost and usage to ensure that changes in cost and usage can be acted upon quickly. AWS Budgets allows you to provide notifications against targets. We recommend configuring notifications on both increases and decreases, and in both cost and usage, for workloads.

Monitor cost and usage proactively: It is recommended to monitor cost and usage proactively within your organization, not just when there are exceptions or anomalies.
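The recommendation to notify on both increases and decreases can be sketched as data plus a simple threshold check. This is a minimal illustration, not a definitive implementation: the budget name, limit, and thresholds are assumed values, and in practice you would submit a payload like `budget` to the AWS Budgets CreateBudget API (for example, via an AWS SDK) rather than evaluate thresholds yourself.

```python
# Illustrative sketch of an AWS Budgets configuration with alerts on both
# cost increases and decreases. The $1,000 monthly limit and the 80%/20%
# thresholds are assumptions for illustration, not AWS-defined values.

budget = {
    "BudgetName": "monthly-workload-budget",  # hypothetical name
    "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST",
}

# Notify when actual spend rises above 80% of the limit, and also when it
# falls below 20% (a sharp drop can signal decommissioned or failed workloads).
notifications = [
    {"NotificationType": "ACTUAL", "ComparisonOperator": "GREATER_THAN", "Threshold": 80.0},
    {"NotificationType": "ACTUAL", "ComparisonOperator": "LESS_THAN", "Threshold": 20.0},
]

def triggered(notifications, limit, actual_spend):
    """Return the notifications that a given actual spend would trigger."""
    pct = actual_spend / limit * 100
    hits = []
    for n in notifications:
        if n["ComparisonOperator"] == "GREATER_THAN" and pct > n["Threshold"]:
            hits.append(n)
        elif n["ComparisonOperator"] == "LESS_THAN" and pct < n["Threshold"]:
            hits.append(n)
    return hits

# 900 USD of spend against a 1,000 USD budget crosses the 80% threshold.
print(len(triggered(notifications, 1000, 900)))
```

Pairing a GREATER_THAN alert with a LESS_THAN alert, as above, is one way to act on both overspend and unexpected underspend without watching dashboards continuously.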
Highly visible dashboards throughout your office or work environment ensure that key people have access to the information they need, and indicate the organization's focus on cost optimization. Visible dashboards enable you to actively promote successful outcomes and implement them throughout your organization.

Cost-Aware Culture

Create a cost-aware culture: Implement changes or programs across your organization to create a cost-aware culture. It is recommended to start small, then, as your capabilities increase and your organization's use of the cloud increases, implement large and wide-ranging programs. A cost-aware culture allows you to scale cost optimization and Cloud Financial Management through best practices that are performed in an organic and decentralized manner across your organization. This creates high levels of capability across your organization with minimal effort, compared to a strict top-down, centralized approach. Small changes in culture can have large impacts on the efficiency of your current and future workloads. Examples of this include:

• Gamifying cost and usage across your organization. This can be done through a publicly visible dashboard, or a report that compares normalized costs and usage across teams (for example, cost per workload, cost per transaction).
• Recognizing cost efficiency. Reward voluntary or unsolicited cost optimization accomplishments publicly or privately, and learn from mistakes to avoid repeating them in the future.
• Creating top-down organizational requirements for workloads to run at predefined budgets.

Keep up to date with new service releases: You may be able to implement new AWS services and features to increase cost efficiency in your workload. Regularly review the AWS News Blog, the AWS Cost Management blog, and What's New with AWS for information on new service and feature releases.

Quantify Business Value Delivered Through Cost Optimization

Quantify business value from
cost optimization: In addition to reporting savings from cost optimization, it is recommended that you quantify the additional value delivered. Cost optimization benefits are typically quantified in terms of lower costs per business outcome. For example, you can quantify On-Demand Amazon Elastic Compute Cloud (Amazon EC2) cost savings when you purchase Savings Plans, which reduce cost and maintain workload output levels. You can quantify cost reductions in AWS spending when idle Amazon EC2 instances are terminated, or unattached Amazon Elastic Block Store (Amazon EBS) volumes are deleted.

Quantifying business value from cost optimization allows you to understand the entire set of benefits to your organization. Because cost optimization is a necessary investment, quantifying business value allows you to explain the return on investment to stakeholders. Quantifying business value can help you gain more buy-in from stakeholders on future cost optimization investments, and provides a framework to measure the outcomes for your organization's cost optimization activities.

The benefits from cost optimization, however, go above and beyond cost reduction or avoidance. Consider capturing additional data to measure efficiency improvements and business value. Examples of improvement include:

• Executing cost optimization best practices: For example, resource lifecycle management reduces infrastructure and operational costs, and creates time and unexpected budget for experimentation. This increases organization agility and uncovers new opportunities for revenue generation.
• Implementing automation: For example, Auto Scaling, which ensures elasticity at minimal effort and increases staff productivity by eliminating manual capacity planning work. For more details on operational resiliency, refer to the Well-Architected Reliability Pillar whitepaper.
• Forecasting future AWS costs: Forecasting enables finance stakeholders to set expectations with
other internal and external organization stakeholders, and helps improve your organization's financial predictability. AWS Cost Explorer can be used to perform forecasting for your cost and usage.

Resources

Refer to the following resources to learn more about AWS best practices for budgeting and forecasting cloud spend:

• Reporting your budget metrics with budget reports
• Forecasting with AWS Cost Explorer
• AWS Training
• AWS Certification
• AWS Cloud Management Tools partners

Expenditure and Usage Awareness

Understanding your organization's costs and drivers is critical for managing your cost and usage effectively, and for identifying cost-reduction opportunities. Organizations typically operate multiple workloads run by multiple teams. These teams can be in different organization units, each with its own revenue stream. The capability to attribute resource costs to the workloads, individual organization, or product owners drives efficient usage behavior and helps reduce waste. Accurate cost and usage monitoring allows you to understand how profitable organization units and products are, and allows you to make more informed decisions about where to allocate resources within your organization. Awareness of usage at all levels in the organization is key to driving change, as change in usage drives changes in cost.

Consider taking a multi-faceted approach to becoming aware of your usage and expenditures. Your team must gather data, analyze it, and then report. Key factors to consider include:

• Governance
• Monitoring cost and usage
• Decommissioning

Governance

In order to manage your costs in the cloud, you must manage your usage through the governance areas below:

Develop organizational policies: The first step in performing governance is to use your organization's requirements to develop policies for your cloud usage. These policies define how your organization uses the cloud and how resources are managed. Policies should cover
all aspects of resources and workloads that relate to cost or usage, including creation, modification, and decommissioning over the resource's lifetime. Policies should be simple, so that they are easily understood and can be implemented effectively throughout the organization. Start with broad, high-level policies, such as which geographic Region usage is allowed in, or times of the day that resources should be running. Gradually refine the policies for the various organizational units and workloads. Common policies include which services and features can be used (for example, lower-performance storage in test/development environments), and which types of resources can be used by different groups (for example, the largest size of resource in a development account is medium).

Develop goals and targets: Develop cost and usage goals and targets for your organization. Goals provide guidance and direction to your organization on expected outcomes. Targets provide specific, measurable outcomes to be achieved. An example of a goal is: platform usage should increase significantly, with only a minor (non-linear) increase in cost. An example target is: a 20% increase in platform usage, with less than a 5% increase in costs. Another common goal is that workloads need to be more efficient every 6 months; the accompanying target would be that the cost per output of the workload needs to decrease by 5% every 6 months.

A common goal for cloud workloads is to increase workload efficiency, which is to decrease the cost per business outcome of the workload over time. It is recommended to implement this goal for all workloads, and also to set a target, such as a 5% increase in efficiency every 6 to 12 months. This can be achieved in the cloud through building capability in cost optimization, and through the release of new services and service features.

Account structure: AWS has a one-parent-to-many-children account structure that is commonly known as a master
(the parent, formerly payer) account and member (the child, formerly linked) accounts. A best practice is to always have at least one master with one member account, regardless of your organization size or usage. All workload resources should reside only within member accounts.

There is no one-size-fits-all answer for how many AWS accounts you should have. Assess your current and future operational and cost models to ensure that the structure of your AWS accounts reflects your organization's goals. Some companies create multiple AWS accounts for business reasons, for example:

• Administrative and/or fiscal and billing isolation is required between organization units, cost centers, or specific workloads.
• AWS service limits are set to be specific to particular workloads.
• There is a requirement for isolation and separation between workloads and resources.

Within AWS Organizations, consolidated billing creates the construct between one or more member accounts and the master account. Member accounts allow you to isolate and distinguish your cost and usage by groups. A common practice is to have separate member accounts for each organization unit (such as finance, marketing, and sales), or for each environment lifecycle (such as development, testing, and production), or for each workload (workload a, b, and c), and then aggregate these linked accounts using consolidated billing.

Consolidated billing allows you to consolidate payment for multiple member AWS accounts under a single master account, while still providing visibility for each linked account's activity. As costs and usage are aggregated in the master account, this allows you to maximize your service volume discounts, and maximize the use of your commitment discounts (Savings Plans and Reserved Instances), to achieve the highest discounts. AWS Control Tower can quickly set up and configure multiple AWS accounts, ensuring that governance is aligned with your organization's requirements.
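The volume-discount effect of consolidated billing described above can be sketched numerically. The tier sizes and rates below are hypothetical illustrations, not actual AWS prices; the point is only that aggregating member-account usage before tiered pricing is applied reaches the cheaper tiers sooner than billing each account separately.

```python
# Sketch of how consolidated billing can maximize volume discounts by
# aggregating usage across member accounts before pricing tiers are applied.
# The tiers and rates below are hypothetical examples, not actual AWS prices.

TIERS = [
    (50_000, 0.023),        # first 50 TB (in GB) at a higher rate ($/GB)
    (450_000, 0.022),       # next 450 TB at a lower rate
    (float("inf"), 0.021),  # everything beyond that at the lowest rate
]

def tiered_cost(usage_gb):
    """Price usage through the tiers; cheaper rates apply to later bands."""
    cost, remaining = 0.0, usage_gb
    for tier_size, rate in TIERS:
        band = min(remaining, tier_size)
        cost += band * rate
        remaining -= band
        if remaining <= 0:
            break
    return cost

member_usage_gb = [40_000, 35_000, 30_000]  # three hypothetical member accounts

# Billed separately, no single account ever leaves the most expensive tier.
separate = sum(tiered_cost(u) for u in member_usage_gb)

# Under consolidated billing, usage is aggregated first, so 55 TB of the
# combined 105 TB falls into the cheaper second tier.
consolidated = tiered_cost(sum(member_usage_gb))

print(separate > consolidated)  # aggregation yields the lower total
```

The same aggregation effect applies to commitment discounts: unused Savings Plans or Reserved Instance capacity in one member account can offset eligible usage in another when both roll up to the same master account.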
Organizational groups and roles: After you develop policies, you can create logical groups and roles of users within your organization. This allows you to assign permissions and control usage. Begin with high-level groupings of people; typically this aligns with organizational units and job roles (for example, systems administrator in the IT department, or financial controller). The groups join people that do similar tasks and need similar access. Roles define what a group must do. For example, a systems administrator in IT requires access to create all resources, but an analytics team member only needs to create analytics resources.

Controls (notifications): A common first step in implementing cost controls is to set up notifications for when cost or usage events occur outside of the policies. This enables you to act quickly and verify whether corrective action is required, without restricting or negatively impacting workloads or new activity. After you know the workload and environment limits, you can enforce governance. In AWS, notifications are conducted with AWS Budgets, which allows you to define a monthly budget for your AWS costs, usage, and commitment discounts (Savings Plans and Reserved Instances). You can create budgets at an aggregate cost level (for example, all costs), or at a more granular level where you include only specific dimensions, such as linked accounts, services, tags, or Availability Zones. You can also attach email notifications to your budgets, which will trigger when current or forecasted costs or usage exceeds a defined percentage threshold.

Controls (enforcement): As a second step, you can enforce governance policies in AWS through AWS Identity and Access Management (IAM) and AWS Organizations Service Control Policies (SCPs). IAM allows you to securely manage access to AWS services and resources. Using IAM, you can control who can create and manage AWS resources, the type of resources that can be created, and where they
can be created. This minimizes the creation of resources that are not required. Use the roles and groups created previously, and assign IAM policies to enforce the correct usage. SCPs offer central control over the maximum available permissions for all accounts in your organization, ensuring that your accounts stay within your access control guidelines. SCPs are available only in an organization that has all features enabled, and you can configure the SCPs to either deny or allow actions for member accounts by default. Refer to the Well-Architected Security Pillar whitepaper for more details on implementing access management.

Controls (Service Quotas): Governance can also be implemented through management of Service Quotas. By ensuring Service Quotas are set with minimum overhead and accurately maintained, you can minimize resource creation outside of your organization's requirements. To achieve this, you must understand how quickly your requirements can change, understand projects in progress (both creation and decommissioning of resources), and factor in how fast quota changes can be implemented. Service Quotas can be used to increase your quotas when required.

AWS Cost Management services are integrated with the AWS Identity and Access Management (IAM) service. You use the IAM service in conjunction with Cost Management services to control access to your financial data and to the AWS tools in the billing console.

Track workload lifecycle: Ensure that you track the entire lifecycle of the workload. This ensures that when workloads or workload components are no longer required, they can be decommissioned or modified. This is especially useful when you release new services or features: existing workloads and components may appear to be in use, but should be decommissioned to redirect customers to the new service. Notice previous stages of workloads; after a workload is in production, previous environments can be decommissioned or greatly reduced in capacity until they are required
again.

AWS provides a number of management and governance services you can use for entity lifecycle tracking. You can use AWS Config or AWS Systems Manager to provide a detailed inventory of your AWS resources and configuration. It is recommended that you integrate with your existing project or asset management systems to keep track of active projects and products within your organization. Combining your current system with the rich set of events and metrics provided by AWS allows you to build a view of significant lifecycle events and proactively manage resources to reduce unnecessary costs. Refer to the Well-Architected Operational Excellence Pillar whitepaper for more details on implementing entity lifecycle tracking.

Monitor Cost and Usage

Enable teams to take action on their cost and usage through detailed visibility into the workload. Cost optimization begins with a granular understanding of the breakdown in cost and usage, the ability to model and forecast future spend, usage, and features, and the implementation of sufficient mechanisms to align cost and usage to your organization's objectives. The following are required areas for monitoring your cost and usage:

Configure detailed data sources: Enable hourly granularity in Cost Explorer and create a Cost and Usage Report (CUR). These data sources provide the most accurate view of cost and usage across your entire organization. The CUR provides daily or hourly usage granularity, rates, costs, and usage attributes for all chargeable AWS services. All possible dimensions are in the CUR, including tagging, location, resource attributes, and account IDs.

Configure your CUR with the following customizations:

• Include resource IDs
• Automatically refresh the CUR
• Hourly granularity
• Versioning: overwrite existing report
• Data integration: Athena (Parquet format and compression)

Use AWS Glue to prepare the data for analysis, and use Amazon Athena to perform data analysis, using
SQL to query the data. You can also use Amazon QuickSight to build custom and complex visualizations and distribute them throughout your organization.

Identify cost attribution categories: Work with your finance team and other relevant stakeholders to understand the requirements of how costs must be allocated within your organization. Workload costs must be allocated throughout the entire lifecycle, including development, testing, production, and decommissioning. Understand how the costs incurred for learning, staff development, and idea creation are attributed in the organization. This can be helpful to correctly allocate accounts used for this purpose to training and development budgets, instead of generic IT cost budgets.

Establish workload metrics: Understand how your workload's output is measured against business success. Each workload typically has a small set of major outputs that indicate performance. If you have a complex workload with many components, then you can prioritize the list, or define and track metrics for each component. Work with your teams to understand which metrics to use. This unit will be used to understand the efficiency of the workload, or the cost for each business output.

Assign organization meaning to cost and usage: Implement tagging in AWS to add organization information to your resources, which will then be added to your cost and usage information. A tag is a key-value pair: the key is defined and must be unique across your organization, and the value is unique to a group of resources. An example of a key-value pair is the key Environment with a value of Production; all resources in the production environment will have this key-value pair. Tagging allows you to categorize and track your costs with meaningful, relevant organization information. You can apply tags that represent organization categories (such as cost centers, application names, projects, or owners), and identify workloads and
characteristics of workloads (such as test or production), to attribute your costs and usage throughout your organization. When you apply tags to your AWS resources (such as EC2 instances or Amazon S3 buckets) and activate the tags, AWS adds this information to your Cost and Usage Reports. You can run reports and perform analysis on tagged and untagged resources, to allow greater compliance with internal cost management policies and to ensure accurate attribution.

Creating and implementing an AWS tagging standard across your organization's accounts enables you to manage and govern your AWS environments in a consistent and uniform manner. Use Tag Policies in AWS Organizations to define rules for how tags can be used on AWS resources in your accounts in AWS Organizations. Tag Policies allow you to easily adopt a standardized approach for tagging AWS resources. AWS Tag Editor allows you to add, delete, and manage tags of multiple resources.

AWS Cost Categories allows you to assign organization meaning to your costs, without requiring tags on resources. You can map your cost and usage information to unique internal organization structures. You define category rules to map and categorize costs, using billing dimensions such as accounts and tags. This provides another level of management capability in addition to tagging. You can also map specific accounts and tags to multiple projects.

Configure billing and cost optimization tools: To modify usage and adjust costs, each person in your organization must have access to their cost and usage information. It is recommended that all workloads and teams have the following tooling configured when they use the cloud:

• Reports: Summaries of all cost and usage information.
• Notifications: Provide notifications when cost or usage is outside of defined limits.
• Current state: Configure a dashboard showing current levels of cost and usage. The dashboard should be available in a highly visible
place within the work environment (similar to an operations dashboard).
• Trending: Provide the capability to show the variability in cost and usage over the required period of time, with the required granularity.
• Forecasts: Provide the capability to show estimated future costs.
• Tracking: Show the current cost and usage against configured goals or targets.
• Analysis: Provide the capability for team members to perform custom and deep analysis, down to the hourly granularity, with all possible dimensions.

You can use AWS native tooling, such as AWS Cost Explorer, AWS Budgets, and Amazon Athena with QuickSight, to provide this capability. You can also use third-party tooling; however, you must ensure that the costs of this tooling provide value to your organization.

Allocate costs based on workload metrics: Cost optimization is delivering business outcomes at the lowest price point, which can only be achieved by allocating workload costs by workload metrics (measured by workload efficiency). Monitor the defined workload metrics through log files or other application monitoring. Combine this data with the workload costs, which can be obtained by looking at costs with a specific tag value or account ID. It is recommended to perform this analysis at the hourly level. Your efficiency will typically change if you have some static cost components (for example, a backend database running 24/7) with a varying request rate (for example, usage peaks at 9am–5pm with few requests at night). Understanding the relationship between the static and variable costs will help you to focus your optimization activities.

Decommission Resources

After you manage a list of projects, employees, and technology resources over time, you will be able to identify which resources are no longer being used, and which projects no longer have an owner.

Track resources over their lifetime: Decommission workload resources that are no longer required. A common example is resources used for testing: after testing has been completed, the resources can be removed. Tracking resources with tags (and running reports on those tags) will help you identify assets for decommission. Using tags is an effective way to track resources, by labeling the resource with its function, or a known date when it can be decommissioned. Reporting can then be run on these tags. Example values for feature tagging are "featureX testing" to identify the purpose of the resource in terms of the workload lifecycle.

Implement a decommissioning process: Implement a standardized process across your organization to identify and remove unused resources. The process should define the frequency searches are performed, and the processes to remove the resource, to ensure that all organization requirements are met.

Decommission resources: The frequency and effort to search for unused resources should reflect the potential savings, so an account with a small cost should be analyzed less frequently than an account with larger costs. Searches and decommission events can be triggered by state changes in the workload, such as a product going end of life or being replaced. Searches and decommission events may also be triggered by external events, such as changes in market conditions or product termination.

Decommission resources automatically: Use automation to reduce or remove the associated costs of the decommissioning process. Designing your workload to perform automated decommissioning will reduce the overall workload costs during its lifetime. You can use AWS Auto Scaling to perform the decommissioning process. You can also implement custom code using the API or SDK to decommission workload resources automatically.

Resources

Refer to the following resources to learn more about AWS best practices for expenditure awareness:
• AWS Tagging Strategies
• Activating User-Defined Cost Allocation Tags
• AWS Billing and Cost Management
• Cost Management Blog
• Multiple Account Billing Strategy
• AWS SDK and Tools
• Tagging best practices
• Well-Architected Labs: Cost Fundamentals
• Well-Architected Labs: Expenditure Awareness

Cost Effective Resources

Using the appropriate services, resources, and configurations for your workloads is key to cost savings. Consider the following when creating cost-effective resources:
• Evaluate cost when selecting services
• Select the correct resource type, size, and number
• Select the best pricing model
• Plan for data transfer

You can use AWS Solutions Architects, AWS Solutions, AWS Reference Architectures, and APN Partners to help you choose an architecture based on what you have learned.

Evaluate Cost When Selecting Services

Identify organization requirements: When selecting services for your workload, it is key that you understand your organization priorities. Ensure that you have a balance between cost and other Well-Architected pillars, such as performance and reliability. A fully cost-optimized workload is the solution that is most aligned to your organization's requirements, not necessarily the lowest cost. Meet with all teams within your organization to collect information, such as product, business, technical, and finance.

Analyze all workload components: Perform a thorough analysis on all components in your workload. Ensure that there is a balance between the cost of analysis and the potential savings in the workload over its lifecycle. You must find the current impact, and potential future impact, of the component. For example, if the cost of the proposed resource is $10/month, and under forecasted loads would not exceed $15/month, spending a day of effort to reduce costs by 50% ($5 a month) could exceed the potential benefit over the life of the system. Using a faster and more efficient data-based estimation will create the best overall outcome for this component.

Workloads can change over time; the right set of services may not be optimal if
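The effort-versus-savings check described above can be made explicit as a simple break-even calculation. This is a minimal sketch: the $5/month saving comes from the text, but the loaded cost of a day of effort (~$600) and the 36-month remaining lifetime are hypothetical assumptions for illustration.

```python
# Sketch of the effort-versus-savings break-even check: optimization
# effort is only worthwhile when the savings accumulated over the
# component's remaining lifetime exceed the one-off cost of the effort.
def optimization_worthwhile(effort_cost: float,
                            monthly_saving: float,
                            remaining_months: int) -> bool:
    """True when lifetime savings exceed the one-off cost of the effort."""
    return monthly_saving * remaining_months > effort_cost

# A day of effort (assumed ~$600 loaded cost) to save $5/month on a
# component with 36 months of expected life does not pay for itself:
print(optimization_worthwhile(effort_cost=600, monthly_saving=5, remaining_months=36))
# -> False
```

The same check applied to a larger saving (for example, $50/month) returns True, which is why the modeling effort should be proportional to the component's cost.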
the workload architecture or usage changes. Analysis for selection of services must incorporate current and future workload states and usage levels. Implementing a service for a future workload state or usage may reduce overall costs, by reducing or removing the effort required to make future changes. AWS Cost Explorer and the CUR can analyze the cost of a Proof of Concept (PoC) or running environment. You can also use the AWS Simple Monthly Calculator or the AWS Pricing Calculator to estimate workload costs.

Managed Services: Managed services remove the operational and administrative burden of maintaining a service, which allows you to focus on innovation. Additionally, because managed services operate at cloud scale, they can offer a lower cost per transaction or service. Consider the time savings that will allow your team to focus on retiring technical debt, innovation, and value-adding features. For example, you might need to "lift and shift" your on-premises environment to the cloud as rapidly as possible, and optimize later. It is worth exploring the savings you could realize by using managed services that remove or reduce license costs.

Usually, managed services have attributes that you can set to ensure sufficient capacity. You must set and monitor these attributes so that your excess capacity is kept to a minimum and performance is maximized. You can modify the attributes of AWS managed services using the AWS Management Console or AWS APIs and SDKs to align resource needs with changing demand. For example, you can increase or decrease the number of nodes on an Amazon EMR cluster (or an Amazon Redshift cluster) to scale out or in. You can also pack multiple instances on an AWS resource to enable higher density usage. For example, you can provision multiple small databases on a single Amazon Relational Database Service (Amazon RDS) DB instance. As usage grows, you can migrate one of the databases to a dedicated RDS DB instance using a snapshot and restore process.

When provisioning workloads on managed services, you must understand the requirements of adjusting the service capacity. These requirements are typically time, effort, and any impact to normal workload operation. The provisioned resource must allow time for any changes to occur; provision the required overhead to allow this. The ongoing effort required to modify services can be reduced to virtually zero by using APIs and SDKs that are integrated with system and monitoring tools, such as Amazon CloudWatch.

Amazon Relational Database Service (RDS), Amazon Redshift, and Amazon ElastiCache provide a managed database service. Amazon Athena, Amazon Elastic Map Reduce (EMR), and Amazon Elasticsearch provide a managed analytics service.

AWS Managed Services (AMS) is a service that operates AWS infrastructure on behalf of enterprise customers and partners. It provides a secure and compliant environment that you can deploy your workloads onto. AMS uses enterprise cloud operating models with automation to allow you to meet your organization requirements, move into the cloud faster, and reduce your ongoing management costs.

Serverless or application-level services: You can use serverless or application-level services, such as AWS Lambda, Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), and Amazon Simple Email Service (Amazon SES). These services remove the need for you to manage a resource, and provide the function of code execution, queuing services, and message delivery. The other benefit is that they scale in performance and cost in line with usage, allowing efficient cost allocation and attribution. For more information on serverless, refer to the Well-Architected Serverless Application Lens whitepaper.

Analyze the workload for different usage over time: As AWS releases new services and features, the optimal services for your workload may change. Effort required should reflect potential benefits. Workload review frequency depends on your organization requirements. If it is a workload of significant cost, implementing new services sooner will maximize cost savings, so more frequent review can be advantageous. Another trigger for review is a change in usage patterns. Significant changes in usage can indicate that alternate services would be more optimal. For example, for higher data transfer rates, a Direct Connect service may be cheaper than a VPN, and provide the required connectivity. Predict the potential impact of service changes, so that you can monitor for these usage level triggers and implement the most cost-effective services sooner.

Licensing costs: The cost of software licenses can be eliminated through the use of open source software. This can have significant impact on workload costs as the size of the workload scales. Measure the benefits of licensed software against the total cost, to ensure that you have the most optimized workload. Model any changes in licensing and how they would impact your workload costs. If a vendor changes the cost of your database license, investigate how that impacts the overall efficiency of your workload. Consider historical pricing announcements from your vendors for trends of licensing changes across their products. Licensing costs may also scale independently of throughput or usage, such as licenses that scale by hardware (CPU-bound licenses). These licenses should be avoided, because costs can rapidly increase without corresponding outcomes. You can use AWS License Manager to manage the software licenses in your workload. You can configure licensing rules and enforce the required conditions to help prevent licensing violations, and also reduce costs due to license overages.

Select the Correct Resource Type, Size, and Number

By selecting the best resource type, size, and number of resources, you meet the technical requirements with the lowest cost resource. Right sizing activities take into account all of the resources of a workload, all of the attributes of each individual resource, and the effort involved in the right sizing operation. Right sizing can be an iterative process, triggered by changes in usage patterns and external factors, such as AWS price drops or new AWS resource types. Right sizing can also be one-off, if the cost of the effort to right size outweighs the potential savings over the life of the workload.

In AWS, there are a number of different approaches:
• Perform cost modeling
• Select size based on metrics or data
• Select size automatically (based on metrics)

Cost Modeling: Perform cost modeling for your workload and each of its components to understand the balance between resources, and find the correct size for each resource in the workload, given a specific level of performance. Perform benchmark activities for the workload under different predicted loads and compare the costs. The modeling effort should reflect the potential benefit; for example, time spent is proportional to component cost or predicted saving. For best practices, refer to the Review section of the Performance Efficiency Pillar of the AWS Well-Architected Framework whitepaper.

AWS Compute Optimizer can assist with cost modeling for running workloads. It provides right sizing recommendations for compute resources based on historical usage. This is the ideal data source for compute resources because it is a free service, and it utilizes machine learning to make multiple recommendations depending on levels of risk. You can also use Amazon CloudWatch and CloudWatch Logs with custom logs as data sources for right sizing operations for other services and workload components.

The following are recommendations for cost modeling data and metrics:
• The monitoring must accurately reflect the end-user experience. Select the correct granularity for the time period, and thoughtfully choose the maximum or 99th percentile instead of the average.
• Select the
correct granularity for the time period of analysis that is required to cover any workload cycles. For example, if a two-week analysis is performed, you might be overlooking a monthly cycle of high utilization, which could lead to under-provisioning.

Metrics or data-based selection: Select resource size or type based on workload and resource characteristics; for example, compute, memory, throughput, or write intensive. This selection is typically made using cost modeling, a previous version of the workload (such as an on-premises version), using documentation, or using other sources of information about the workload (whitepapers, published solutions).

Automatic selection based on metrics: Create a feedback loop within the workload that uses active metrics from the running workload to make changes to that workload. You can use a managed service, such as AWS Auto Scaling, which you configure to perform the right sizing operations for you. AWS also provides APIs, SDKs, and features that allow resources to be modified with minimal effort. You can program a workload to stop and start an EC2 instance to allow a change of instance size or instance type. This provides the benefits of right sizing while removing almost all the operational cost required to make the change. Some AWS services have built-in automatic type or size selection, such as S3 Intelligent-Tiering. S3 Intelligent-Tiering automatically moves your data between two access tiers, frequent access and infrequent access, based on your usage patterns.

Select the Best Pricing Model

Perform workload cost modeling: Consider the requirements of the workload components and understand the potential pricing models. Define the availability requirement of the component. Determine if there are multiple independent resources that perform the function in the workload, and what the workload requirements are over time. Compare the cost of the resources using the default On-Demand pricing model and other applicable models. Factor in any potential changes in resources or workload components.

Perform regular account-level analysis: Performing regular cost modeling ensures that opportunities to optimize across multiple workloads can be implemented. For example, if multiple workloads use On-Demand, at an aggregate level the risk of change is lower, and implementing a commitment-based discount will achieve a lower overall cost. It is recommended to perform analysis in regular cycles of two weeks to one month. This allows you to make small adjustment purchases, so the coverage of your pricing models continues to evolve with your changing workloads and their components. Use the AWS Cost Explorer recommendations tool to find opportunities for commitment discounts. To find opportunities for Spot workloads, use an hourly view of your overall usage, and look for regular periods of changing usage or elasticity.

Pricing Models: AWS has multiple pricing models that allow you to pay for your resources in the most cost-effective way that suits your organization's needs. The following section describes each purchasing model:
• On-Demand
• Spot
• Commitment discounts – Savings Plans
• Commitment discounts – Reserved Instances/Capacity
• Geographic selection
• Third-party agreements and pricing

On-Demand: This is the default, pay-as-you-go pricing model. When you use resources (for example, EC2 instances, or services such as DynamoDB on demand), you pay a flat rate, and you have no long-term commitments. You can increase or decrease the capacity of your resources or services based on the demands of your application. On-Demand has an hourly rate but, depending on the service, can be billed in increments of one second (for example, Amazon RDS or Linux EC2 instances). On-Demand is recommended for applications with short-term workloads (for example, a four-month project) that spike periodically, or unpredictable workloads that can't be interrupted. On-Demand is also suitable for workloads such as pre-production environments, which require uninterrupted runtimes but do not run long enough for a commitment discount (Savings Plans or Reserved Instances).

Spot: A Spot Instance is spare EC2 compute capacity available at discounts of up to 90% off On-Demand prices, with no long-term commitment required. With Spot Instances you can significantly reduce the cost of running your applications, or scale your application's compute capacity for the same budget. Unlike On-Demand, Spot Instances can be interrupted with a two-minute warning if EC2 needs the capacity back, or if the Spot Instance price exceeds your configured price. On average, Spot Instances are interrupted less than 5% of the time. Spot is ideal when there is a queue or buffer in place, or where there are multiple resources working independently to process the requests (for example, Hadoop data processing). Typically, these workloads are fault-tolerant, stateless, and flexible, such as batch processing, big data and analytics, containerized environments, and high performance computing (HPC). Non-critical workloads, such as test and development environments, are also candidates for Spot. Spot is also integrated into multiple AWS services, such as EC2 Auto Scaling groups (ASGs), Elastic MapReduce (EMR), Elastic Container Service (ECS), and AWS Batch.

When a Spot Instance needs to be reclaimed, EC2 sends a two-minute warning via a Spot Instance interruption notice, delivered through CloudWatch Events as well as in the instance metadata. During that two-minute period, your application can use the time to save its state, drain running containers, upload final log files, or remove itself from a load balancer. At the end of the two minutes, you have the option to hibernate, stop, or terminate the Spot Instance.

Consider the following best practices when adopting Spot Instances in your workloads:
• Set your maximum price as the On-Demand rate: This ensures that you will pay the current Spot rate (the cheapest available price), and will never pay more than the On-Demand rate. Current and historical rates are available via the console and API.
• Be flexible across as many instance types as possible: Be flexible in both the family and size of the instance type, to improve the likelihood of fulfilling your target capacity requirements, obtain the lowest possible cost, and minimize the impact of interruptions.
• Be flexible about where your workload will run: Available capacity can vary by Availability Zone. This improves the likelihood of fulfilling your target capacity by tapping into multiple spare capacity pools, and provides the lowest possible cost.
• Design for continuity: Design your workloads for statelessness and fault tolerance, so that if some of your EC2 capacity gets interrupted, it will not impact the availability or performance of the workload.
• We recommend using Spot Instances in combination with On-Demand and Savings Plans/Reserved Instances, to maximize workload cost optimization with performance.

Commitment Discounts – Savings Plans: AWS provides a number of ways for you to reduce your costs by reserving or committing to use a certain amount of resources, and receiving a discounted rate for your resources. A Savings Plan allows you to make an hourly spend commitment for one or three years, and receive discounted pricing across your resources. Savings Plans provide discounts for AWS compute services such as EC2, Fargate, and Lambda. When you make the commitment, you pay that commitment amount every hour, and it is subtracted from your On-Demand usage at the discount rate. For example, you commit to $50 an hour and have $150 an hour of On-Demand usage. Considering the Savings Plans pricing, your specific usage has a discount rate of 50%, so your $50 commitment covers $100 of On-Demand usage. You will pay $50 (commitment) and $50 of remaining On-Demand usage. Compute Savings Plans are the most flexible and provide a
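The $50-commitment example can be restated as arithmetic. This is a simplified sketch: it assumes a single blended discount rate applies to all of the hour's usage, whereas real Savings Plans apply per-resource discount rates.

```python
# Sketch of the Savings Plans hourly arithmetic: the commitment is paid
# every hour and covers On-Demand usage at the discounted rate; any
# usage beyond what the commitment covers is billed at On-Demand rates.
def hourly_bill(commitment: float, on_demand_usage: float, discount_rate: float):
    """Return (covered_on_demand, remaining_on_demand, total_paid) for one hour."""
    # Each committed dollar pays for 1 / (1 - discount_rate) dollars of
    # On-Demand usage (at a 50% discount, $50 covers $100).
    covered = min(on_demand_usage, commitment / (1 - discount_rate))
    remaining = on_demand_usage - covered
    return covered, remaining, commitment + remaining

covered, remaining, total = hourly_bill(commitment=50, on_demand_usage=150, discount_rate=0.50)
print(covered, remaining, total)  # -> 100.0 50.0 100.0
```

With the numbers from the text, the hour costs $100 in total instead of $150 of pure On-Demand usage.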
discount of up to 66%. They automatically apply across Availability Zones, instance size, instance family, operating system, tenancy, Region, and compute service. Instance Savings Plans have less flexibility, but provide a higher discount rate (up to 72%). They automatically apply across Availability Zones, instance size, instance family, operating system, and tenancy.

There are three payment options:
• No upfront payment: There is no upfront payment; you then pay a reduced hourly rate each month for the total hours in the month.
• Partial upfront payment: Provides a higher discount rate than No upfront. Part of the usage is paid up front; you then pay a smaller reduced hourly rate each month for the total hours in the month.
• All upfront payment: Usage for the entire period is paid up front, and no other costs are incurred for the remainder of the term for usage that is covered by the commitment.

You can apply any combination of these three purchasing options across your workloads. Savings Plans apply first to the usage in the account they are purchased in, from the highest discount percentage to the lowest; then they apply to the consolidated usage across all other accounts, from the highest discount percentage to the lowest. It is recommended to purchase all Savings Plans in an account with no usage or resources, such as the master account. This ensures that the Savings Plan applies to the highest discount rates across all of your usage, maximizing the discount amount.

Workloads and usage typically change over time. It is recommended to continually purchase small amounts of Savings Plans commitment over time. This ensures that you maintain high levels of coverage to maximize your discounts, and that your plans closely match your workload and organization requirements at all times. Do not set a target coverage in your accounts, due to the variability of discount that is possible. Low coverage does not necessarily indicate high potential savings: you may have low coverage in your account, but if your usage is made up of small instances with a licensed operating system, the potential saving could be as low as a few percent. Instead, track and monitor the potential savings available in the Savings Plans recommendation tool. Frequently review the Savings Plans recommendations in Cost Explorer (perform regular analysis), and continue to purchase commitments until the estimated savings are below the required discount for the organization. For example, track and monitor that your potential discounts remain below 20%; if they go above that, a purchase must be made. Monitor the utilization and coverage, but only to detect changes. Do not aim for a specific utilization percentage or coverage percentage, as this does not necessarily scale with savings. Ensure that a purchase of Savings Plans results in an increase in coverage, and if there are decreases in coverage or utilization, ensure they are quantified and known. For example, you might migrate a workload resource to a newer instance type, which reduces utilization of an existing plan, but the performance benefit outweighs the saving reduction.

Commitment Discounts – Reserved Instances/Capacity: Similar to Savings Plans, Reserved Instances (RIs) offer discounts of up to 72% for a commitment to running a minimum amount of resources. Reserved Instances are available for RDS, Elasticsearch, ElastiCache, Amazon Redshift, and DynamoDB. Amazon CloudFront and AWS Elemental MediaConvert also provide discounts when you make minimum usage commitments. Reserved Instances are currently available for EC2; however, Savings Plans offer the same discount levels with increased flexibility and no management overhead.

Reserved Instances offer the same pricing options of no upfront, partial upfront, and all upfront, and the same terms of one or three years. Reserved Instances can be purchased in a Region or a specific Availability Zone; they provide a capacity reservation when purchased in an Availability Zone. EC2 features convertible RIs; however, Savings Plans should be used for all EC2 instances, due to increased flexibility and reduced operational costs. The same process and metrics should be used to track and make purchases of Reserved Instances. It is recommended not to track coverage of RIs across your accounts. It is also recommended that utilization percentage is not monitored or tracked; instead, view the utilization report in Cost Explorer and use the net savings column in the table. If the net savings is a significantly large negative amount, you must take action to remediate the unused RI.

EC2 Fleet: EC2 Fleet is a feature that allows you to define a target compute capacity, and then specify the instance types and the balance of On-Demand and Spot for the fleet. EC2 Fleet will automatically launch the lowest price combination of resources to meet the defined capacity.

Geographic selection: When you architect your solutions, a best practice is to seek to place computing resources closer to users, to provide lower latency and strong data sovereignty. For global audiences, you should use multiple locations to meet these needs. You should select the geographic location that minimizes your costs.

The AWS Cloud infrastructure is built around Regions and Availability Zones. A Region is a physical location in the world where we have multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. Each AWS Region operates within local market conditions, and resource pricing is different in each Region. Choose a specific Region to operate a component of, or your entire, solution so that you can run at the lowest possible price globally. You can use the AWS Simple Monthly Calculator to estimate the costs of your workload in various Regions.

Third-party agreements and pricing: When you utilize third-party solutions or services in the cloud, it is
important that the pricing structures are aligned to Cost Optimization outcomes Pricing should scale with the outcomes and value it provides An example of this is software that takes a percentage of savings it provides the more you save (outcome) the more it charges Agreements that scale with your bill are ty pically not ArchivedAmazon Web Services Cost Optimization Pillar 28 aligned to Cost Optimization unless they provide outcomes for every part of your specific bill For example a solution that provides recommendations for EC2 and charges a percentage of your entire bill will increase if you use other services for which it provides no benefit Another example is a managed service that is charged at a percentage of the cost of resources that are managed A larger instance size may not necessarily require more management effort but will be charged more Ensure that these service pricing arrangements include a cost optimization program or features in their service to drive efficiency Plan for Data Transfer An advantage of the cloud is that it is a managed network service There is no longer the need to m anage and operate a fleet of switches routers and other associated network equipment Networking resources in the cloud are consumed and paid for in the same way you pay for CPU and storage —you only pay for what you use Efficient use of networking resou rces is required for cost optimization in the cloud Perform data transfer modeling: Understand where the data transfer occurs in your workload the cost of the transfer and its associated benefit This allows you to make an informed decision to modify or accept the architectural decision For example you may have a Multi Availability Zone configuration where you replicate data between the Availability Zones You model the cost of structure and decide that this is an acceptable cost (similar to paying for compute and storage in both Availability Zone) to achieve the required reliability and resilience Model the costs over 
different usage levels Workload usage can change over time and different services may be more cost effective at different levels Use AWS Cost Explorer or the Cost and Usage Report (CUR) to understand and model your data transfer costs Configure a proof of concept (PoC) or test your workload and run a test with a realistic simulated load You can model your costs at different workload demands Optimize Data Transfer: Architecting for data transfer ensures that you minim ize data transfer costs This may involve using content delivery networks to locate data closer to users or using dedicated network links from your premises to AWS You can also use WAN optimization and application optimization to reduce the amount of dat a that is transferred between components Select services to reduce data transfer costs: Amazon CloudFront is a global content delivery network that delivers data with low latency and high transfer speeds It ArchivedAmazon Web Services Cost Optimization Pillar 29 caches data at edge locations across the world which reduces the load on your resources By using CloudFront you can reduce the administrative effort in delivering content to large numbers of users globally with minimum latency AWS Direct Connect allows you to establish a dedicated network connection to AWS This can reduce network costs increase bandwidth and provide a more consistent network experience than internet based connections AWS VPN allows you to establish a secure and private connection between your private network and the AWS global network It is ideal for small offices or business partners because it provides quick and easy connectivity and it is a fully managed and elastic service VPC Endpoints allow connectivity between AWS services over private networking and can be used to reduce public data transfe r and NAT gateways costs Gateway VPC endpoints have no hourly charges and support Ama zon S3 and Amazon DynamoDB Interface VPC endpoints are provided by AWS PrivateLink and have an 
hourly fee and per GB usage cost Resources Refer to the following resource s to learn more about AWS best practices for cost effective resources • AWS Managed Services: Enterprise Transformation Journey Video • Analyzing Your Costs with Cost Explorer • Accessing Reserved Instance Recommendations • Getting Started with Rightsizing Recommendations • Spot Instances Best Practices • Spot Fleets • How Reserved Instances Work • AWS Global Infrastructure • Spot Instance Advisor • WellArchitected Labs Cost Effective Resources ArchivedAmazon Web Services Cost Optimization Pillar 30 Manage Demand and Supply Resources When you move to the cloud you pay only for what you need You can supply resources to match the workload demand at the time they’re needed — eliminating the need for costly and wasteful overprovisioning You can also modify the demand using a throttle buffer or queue to smooth the demand and serve it with less resources The economic benefits of just intime supply should be balanced against the need to provision to account for resource failures high availability and provision time Depending on whether your demand is fixed or variable plan to create metrics and automation that will ensure that management of your environment is minimal – even as you scale When mod ifying the demand you must know the acceptable and maximum delay that the workload can allow In AWS you can use a number of different approaches for managing demand and supplying resources The following sections describe how to use these approaches: • Analyze the workload • Manage demand • Demand based supply • Time based supply Analyze the workload: Know the requirements of the workload The organization requirements should indicate the workload response times for requests The response time can be used to det ermine if the demand is managed or if the supply of resources will change to meet the demand The analysis should include the predictability and repeatability of the demand the rate of change in demand 
and the amount of change in demand. Ensure that the analysis is performed over a long enough period to incorporate any seasonal variance, such as end-of-month processing or holiday peaks. Also ensure that the analysis effort reflects the potential benefits of implementing scaling: look at the expected total cost of the component and any increases or decreases in usage and cost over the workload lifetime. You can use AWS Cost Explorer or Amazon QuickSight with the Cost and Usage Report (CUR) or your application logs to perform a visual analysis of workload demand.

Manage Demand

Manage demand – Throttling: If the source of the demand has retry capability, you can implement throttling. Throttling tells the source that if it cannot service the request at the current time, it should try again later. The source will wait for a period of time and then retry the request. Implementing throttling has the advantage of limiting the maximum amount of resources and costs of the workload. In AWS, you can use Amazon API Gateway to implement throttling. Refer to the Well-Architected Reliability Pillar whitepaper for more details on implementing throttling.

Manage demand – Buffer-based: Similar to throttling, a buffer defers request processing, allowing applications that run at different rates to communicate effectively. A buffer-based approach uses a queue to accept messages (units of work) from producers. Messages are read by consumers and processed, allowing the messages to run at the rate that meets the consumers' business requirements. You don't have to worry about producers having to deal with throttling issues such as data durability and backpressure (where producers slow down because their consumer is running slowly). In AWS, you can choose from multiple services to implement a buffering approach. Amazon SQS is a managed service that provides queues that allow a single consumer to read individual messages. Amazon Kinesis provides a stream that allows many consumers
to read the same messages. When architecting with a buffer-based approach, ensure that you architect your workload to service the request in the required time and that you are able to handle duplicate requests for work.

Dynamic Supply

Demand-based supply: Leverage the elasticity of the cloud to supply resources to meet changing demand. Take advantage of APIs or service features to programmatically vary the amount of cloud resources in your architecture dynamically. This allows you to scale components in your architecture, automatically increasing the number of resources during demand spikes to maintain performance and decreasing capacity when demand subsides to reduce costs.

Auto Scaling helps you adjust your capacity to maintain steady, predictable performance at the lowest possible cost. It is a fully managed and free service that integrates with Amazon EC2 instances and Spot Fleets, Amazon ECS, Amazon DynamoDB, and Amazon Aurora. Auto Scaling provides automatic resource discovery to help find resources in your workload that can be configured; it has built-in scaling strategies to optimize performance, costs, or a balance between the two; and it provides predictive scaling to assist with regularly occurring spikes.

Auto Scaling can implement manual, scheduled, or demand-based scaling. You can also use metrics and alarms from Amazon CloudWatch to trigger scaling events for your workload. Typical metrics are standard Amazon EC2 metrics, such as CPU utilization, network throughput, and ELB observed request/response latency. When possible, you should use a metric that is indicative of customer experience; typically, this is a custom metric that might originate from application code within your workload.

When architecting with a demand-based approach, keep in mind two key considerations. First, understand how quickly you must provision new resources. Second, understand that the size of the margin between supply and demand will shift. You must be ready
to cope with the rate of change in demand, and also be ready for resource failures.

Elastic Load Balancing (ELB) helps you to scale by distributing demand across multiple resources. As you implement more resources, you add them to the load balancer to take on the demand. ELB has support for EC2 instances, containers, IP addresses, and Lambda functions.

Time-based supply: A time-based approach aligns resource capacity to demand that is predictable or well defined by time. This approach is typically not dependent upon the utilization levels of the resources. A time-based approach ensures that resources are available at the specific time they are required, and can be provided without any delays due to start-up procedures and system or consistency checks. Using a time-based approach, you can provide additional resources or increase capacity during busy periods.

You can use scheduled Auto Scaling to implement a time-based approach. Workloads can be scheduled to scale out or in at defined times (for example, the start of business hours), thus ensuring that resources are available when users or demand arrives. You can also leverage the AWS APIs and SDKs and AWS CloudFormation to automatically provision and decommission entire environments as you need them. This approach is well suited for development or test environments that run only in defined business hours or periods of time.

You can use APIs to scale the size of resources within an environment (vertical scaling). For example, you could scale up a production workload by changing the instance size or class. This can be achieved by stopping and starting the instance and selecting the different instance size or class. This technique can also be applied to other resources, such as EBS Elastic Volumes, which can be modified to increase size, adjust performance (IOPS), or change the volume type while in use.

When architecting with a time-based approach, keep in mind two key considerations. First,
how consistent is the usage pattern? Second, what is the impact if the pattern changes? You can increase the accuracy of predictions by monitoring your workloads and by using business intelligence. If you see significant changes in the usage pattern, you can adjust the times to ensure that coverage is provided.

Dynamic supply: You can use AWS Auto Scaling, or incorporate scaling in your code with the AWS API or SDKs. This reduces your overall workload costs by removing the operational cost of manually making changes to your environment, and it can be performed much faster. It also ensures that the workload resourcing best matches the demand at any time.

Resources

Refer to the following resources to learn more about AWS best practices for managing demand and supplying resources:

• API Gateway Throttling
• Getting Started with Amazon SQS
• Getting Started with Amazon EC2 Auto Scaling

Optimize Over Time

In AWS, you optimize over time by reviewing new services and implementing them in your workload.

Review and Implement New Services

As AWS releases new services and features, it is a best practice to review your existing architectural decisions to ensure that they remain cost effective. As your requirements change, be aggressive in decommissioning resources, components, and workloads that you no longer require. Consider the following to help you optimize over time:

• Develop a workload review process
• Review and implement services

Develop a workload review process: To ensure that you always have the most cost-efficient workload, you must regularly review the workload to identify opportunities to implement new services, features, and components. To ensure that you achieve overall lower costs, the process must be proportional to the potential amount of savings. For example, workloads that are 50% of your overall spend should be reviewed more regularly and more thoroughly than workloads that are 5% of your overall spend. Factor
in any external factors or volatility. If the workload services a specific geography or market segment, and change in that area is predicted, more frequent reviews could lead to cost savings. Another factor in review is the effort to implement changes: if there are significant costs in testing and validating changes, reviews should be less frequent.

Factor in the long-term cost of maintaining outdated and legacy components and resources, and the inability to implement new features into them. The current cost of testing and validation may exceed the proposed benefit. However, over time, the cost of making the change may significantly increase as the gap between the workload and the current technologies widens, resulting in even larger costs. For example, the cost of moving to a new programming language may not currently be cost effective. However, in five years' time, the cost of people skilled in that language may increase, and due to workload growth, you would be moving an even larger system to the new language, requiring even more effort than previously.

Break down your workload into components, assign the cost of each component (an estimate is sufficient), and then list the factors (for example, effort and external markets) next to each component. Use these indicators to determine a review frequency for each workload. For example, you may have web servers that are high cost, low change effort, and high external factors, resulting in a high frequency of review. A central database may be medium cost, high change effort, and low external factors, resulting in a medium frequency of review.

Review the workload and implement services: To realize the benefits of new AWS services and features, you must execute the review process on your workloads and implement new services and features as required. For example, you might review your workloads and replace the messaging component with Amazon Simple Email Service (SES). This removes the cost of operating and maintaining a fleet of instances while providing all
the functionality at a reduced cost.

Conclusion

Cost optimization and Cloud Financial Management is an ongoing effort. You should regularly work with your finance and technology teams, review your architectural approach, and update your component selection. AWS strives to help you minimize cost while you build highly resilient, responsive, and adaptive deployments. To truly optimize the cost of your deployment, take advantage of the tools, techniques, and best practices discussed in this paper.

Contributors

Contributors to this document include:

• Philip Fitzsimons, Sr Manager Well-Architected, Amazon Web Services
• Nathan Besh, Cost Lead Well-Architected, Amazon Web Services
• Levon Stepanian, BDM Cloud Financial Management, Amazon Web Services
• Keith Jarrett, Business Development Lead – Cost Optimization
• PT Ng, Commercial Architect, Amazon Web Services
• Arthur Basbaum, Business Development Manager, Amazon Web Services
• Jarman Hauser, Commercial Architect, Amazon Web Services

Further Reading

For additional information, see:

• AWS Well-Architected Framework

Document Revisions

2020 – Updated to incorporate Cloud Financial Management new services and integration with the Well-Architected Tool
July 2018 – Updated to reflect changes to AWS and incorporate learnings from reviews with customers
November 2017 – Updated to reflect changes to AWS and incorporate learnings from reviews with customers
November 2016 – First publication

High Performance Computing Lens
AWS Well-Architected Framework
December 2019

This paper has been archived. The latest version is now available at: https://docs.aws.amazon.com/wellarchitected/latest/high-performance-computing-lens/welcome.html

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for
informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2019, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
Definitions
General Design Principles
Scenarios
  Loosely Coupled Scenarios
  Tightly Coupled Scenarios
  Reference Architectures
The Five Pillars of the Well-Architected Framework
  Operational Excellence Pillar
  Security Pillar
  Reliability Pillar
  Performance Efficiency Pillar
  Cost Optimization Pillar
Conclusion
Contributors
Further Reading
Document Revisions

Abstract

This document describes the High Performance Computing (HPC) Lens for the AWS Well-Architected Framework. The document covers common HPC scenarios and identifies key elements to ensure that your workloads are architected according to best practices.

Amazon Web Services – AWS Well-Architected Framework: High Performance Computing Lens

Introduction

The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS.¹ Use the Framework to learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. The Framework provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. We believe that having well-architected systems greatly increases the likelihood of business success.

In this "Lens," we focus on how to design, deploy,
and architect your High Performance Computing (HPC) workloads on the AWS Cloud. HPC workloads run exceptionally well in the cloud. The natural ebb and flow and bursting characteristics of HPC workloads make them well suited for pay-as-you-go cloud infrastructure. The ability to fine-tune cloud resources and create cloud-native architectures naturally accelerates the turnaround of HPC workloads.

For brevity, we only cover details from the Well-Architected Framework that are specific to HPC workloads. We recommend that you consider best practices and questions from the AWS Well-Architected Framework whitepaper² when designing your architecture.

This paper is intended for those in technology roles, such as chief technology officers (CTOs), architects, developers, and operations team members. After reading this paper, you will understand AWS best practices and strategies to use when designing and operating HPC in a cloud environment.

Definitions

The AWS Well-Architected Framework is based on five pillars: operational excellence, security, reliability, performance efficiency, and cost optimization. When architecting solutions, you make tradeoffs between pillars based upon your business context. These business decisions can drive your engineering priorities. You might reduce cost at the expense of reliability in development environments, or, for mission-critical solutions, you might optimize reliability with increased costs. Security and operational excellence are generally not traded off against other pillars.

Throughout this paper, we make the crucial distinction between loosely coupled – sometimes referred to as high throughput computing (HTC) in the community – and tightly coupled workloads. We also cover server-based and serverless designs. Refer to the Scenarios section for a detailed discussion of these distinctions.

Some vocabulary of the AWS Cloud may differ from common HPC terminology. For
example, HPC users may refer to a server as a "node," while AWS refers to a virtual server as an "instance." Where HPC users commonly speak of "jobs," AWS refers to them as "workloads." AWS documentation uses the term "vCPU" synonymously with a "thread" or a "hyperthread" (that is, half of a physical core). Don't miss this factor of 2 when quantifying the performance or cost of an HPC application on AWS.

Cluster placement groups are an AWS method of grouping your compute instances for applications with the highest network requirements. A placement group is not a physical hardware element; it is simply a logical rule keeping all nodes within a low-latency radius of the network.

The AWS Cloud infrastructure is built around Regions and Availability Zones. A Region is a physical location in the world where we have multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. Depending on the characteristics of your HPC workload, you may want your cluster to span Availability Zones (increasing reliability) or stay within a single Availability Zone (emphasizing low latency).

General Design Principles

In traditional computing environments, architectural decisions are often implemented as static, one-time events, sometimes with no major software or hardware upgrades during a computing system's lifetime. As a project and its context evolve, these initial decisions may hinder the system's ability to meet changing business requirements.

It's different in the cloud. A cloud infrastructure can grow as the project grows, allowing for a continuously optimized capability. In the cloud, the capability to automate and test on demand lowers the risk of impact from infrastructure design changes. This allows systems to evolve over time so that projects can take advantage of innovations as a standard
practice.

The Well-Architected Framework proposes a set of general design principles to facilitate good design in the cloud with high-performance computing:

• Dynamic architectures: Avoid frozen, static architectures and cost estimates that use a steady-state model. Your architecture must be dynamic: growing and shrinking to match your demands for HPC over time. Match your architecture design and cost analysis explicitly to the natural cycles of HPC activity. For example, a period of intense simulation efforts might be followed by a reduction in demand as the work moves from the design phase to the lab. Or, a long and steady data accumulation phase might be followed by a large-scale analysis and data reduction phase. Unlike many traditional supercomputing centers, the AWS Cloud helps you avoid long queues, lengthy quota applications, and restrictions on customization and software installation. Many HPC endeavors are intrinsically bursty and well matched to the cloud paradigms of elasticity and pay-as-you-go. The elasticity and pay-as-you-go model of AWS eliminates the painful choice between oversubscribed systems (waiting in queues) or idle systems (wasted money). Environments such as compute clusters can be "right sized" for a given need at any given time.

• Align the procurement model to the workload: AWS makes a range of compute procurement models available for the various HPC usage patterns. Selecting the correct model ensures that you are only paying for what you need. For example, a research institute might run the same weather forecast application in different ways:

  o An academic research project investigates the role of a weather variable with a large number of parameter sweeps and ensembles. These simulations are not urgent, and cost is a primary concern. They are a great match for Amazon EC2 Spot Instances. Spot Instances let you take advantage of Amazon EC2 unused capacity and are
available at up to a 90% discount compared to On-Demand prices.

  o During the wildfire season, up-to-the-minute local wind forecasts ensure the safety of firefighters. Every minute of delay in the simulations decreases their chance of safe evacuation. On-Demand Instances must be used for these simulations to allow for the bursting of analyses and ensure that results are obtained without interruption.

  o Every morning, weather forecasts are run for television broadcasts in the afternoon. Scheduled Reserved Instances can be used to make sure that the needed capacity is available every day at the right time. Use of this pricing model provides a discount compared with On-Demand Instances.

• Start from the data: Before you begin designing your architecture, you must have a clear picture of the data. Consider data origin, size, velocity, and updates. A holistic optimization of performance and cost focuses on compute and includes data considerations. AWS has a strong offering of data and related services, including data visualization, which enables you to extract the most value from your data.

• Automate to simplify architectural experimentation: Automation through code allows you to create and replicate your systems at low cost and avoid the expense of manual effort. You can track changes to your code, audit their impact, and revert to previous versions when necessary. The ability to easily experiment with infrastructure allows you to optimize the architecture for performance and cost. AWS offers tools such as AWS ParallelCluster that help you get started with treating your HPC cloud infrastructure as code.

• Enable collaboration: HPC work often occurs in a collaborative context, sometimes spanning many countries around the world. Beyond immediate collaboration, methods and results are often shared with the wider HPC and scientific community. It's important to consider in advance which tools, code, and data may be shared, and with whom. The delivery methods should be part of this design process.
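One lightweight sharing vehicle is a cluster configuration file checked into version control. The sketch below is a hypothetical AWS ParallelCluster configuration in the 2.x-era INI style; the key pair name, instance types, and queue sizes are illustrative assumptions, not recommendations, and a real file also needs networking (VPC) settings:

```ini
# Hypothetical AWS ParallelCluster (2.x-style) configuration sketch.
# All values are illustrative assumptions; adjust for your workload.
[aws]
aws_region_name = us-east-1

[cluster default]
key_name = my-keypair                ; assumed EC2 key pair name
scheduler = slurm                    ; traditional scheduler experience
master_instance_type = c5.xlarge
compute_instance_type = c5n.18xlarge
initial_queue_size = 0               ; scale from zero as jobs arrive
max_queue_size = 16
placement_group = DYNAMIC            ; keep nodes in a low-latency group
```

A collaborator who receives a file like this can stand up an equivalent cluster with a single `pcluster create` invocation, which makes the infrastructure itself shareable alongside the code and data.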
For example, workflows can be shared in many ways on AWS: you can use Amazon Machine Images (AMIs), Amazon Elastic Block Store (Amazon EBS) snapshots, Amazon Simple Storage Service (Amazon S3) buckets, AWS CloudFormation templates, AWS ParallelCluster configuration files, AWS Marketplace products, and scripts. Take full advantage of the AWS security and collaboration features that make AWS an excellent environment for you and your collaborators to solve your HPC problems. This helps your computing solutions and datasets achieve a greater impact by securely sharing within a selective group or publicly sharing with the broader community.

• Use cloud-native designs: It is usually unnecessary and suboptimal to replicate your on-premises environment when you migrate workloads to AWS. The breadth and depth of AWS services enables HPC workloads to run in new ways using new design patterns and cloud-native solutions. For example, each user or group can use a separate cluster, which can independently scale depending on the load. Users can rely on a managed service like AWS Batch or serverless computing like AWS Lambda to manage the underlying infrastructure. Consider not using a traditional cluster scheduler, and instead use a scheduler only if your workload requires it. In the cloud, HPC clusters do not require permanence and can be ephemeral resources. When you automate your cluster deployment, you can terminate one cluster and launch a new one quickly, with the same or different parameters. This method creates environments as necessary.

• Test real-world workloads: The only way to know how your production workload will perform in the cloud is to test it on the cloud. Most HPC applications are complex, and their memory, CPU, and network patterns often can't be reduced to a simple test. Also, application requirements for infrastructure vary based on which application solvers (mathematical
methods or algorithms) your models use, the size and complexity of your models, and so on. For this reason, generic benchmarks aren't reliable predictors of actual HPC production performance. Similarly, there is little value in testing an application with a small benchmark set or "toy problem." With AWS, you only pay for what you actually use; therefore, it is feasible to do a realistic proof of concept with your own representative models. A major advantage of a cloud-based platform is that a realistic, full-scale test can be done before migration.

• Balance time-to-results and cost reduction: Analyze performance using the most meaningful parameters: time and cost. A focus on cost optimization should be used for workloads that are not time sensitive. Spot Instances are usually the least expensive method for non-time-critical workloads. For example, if a researcher has a large number of lab measurements that must be analyzed sometime before next year's conference, Spot Instances can help analyze the largest possible number of measurements within the fixed research budget. Conversely, for time-critical workloads, such as emergency response modeling, cost optimization can be traded for performance: instance type, procurement model, and cluster size should be chosen for the lowest and most immediate execution time. If comparing platforms, it's important to take the entire time-to-solution into account, including non-compute aspects such as provisioning resources, staging data, or, in more traditional environments, time spent in job queues.

Scenarios

HPC cases are typically complex computational problems that require parallel processing techniques. To support the calculations, a well-architected HPC infrastructure is capable of sustained performance for the duration of the calculations. HPC workloads span traditional applications, like genomics, computational chemistry, financial risk modeling, computer-aided
engineering, weather prediction, and seismic imaging, as well as emerging applications, like machine learning, deep learning, and autonomous driving. Still, the traditional grids or HPC clusters that support these calculations are remarkably similar in architecture, with select cluster attributes optimized for the specific workload. In AWS, the network, storage type, compute (instance) type, and even deployment method can be strategically chosen to optimize performance, cost, and usability for a particular workload.

HPC is divided into two categories based on the degree of interaction between the concurrently running parallel processes: loosely coupled and tightly coupled workloads. Loosely coupled HPC cases are those where the multiple or parallel processes don't strongly interact with each other in the course of the entire simulation. Tightly coupled HPC cases are those where the parallel processes are simultaneously running and regularly exchanging information between each other at each iteration or step of the simulation.

With loosely coupled workloads, the completion of an entire calculation or simulation often requires hundreds to millions of parallel processes. These processes occur in any order and at any speed through the course of the simulation. This offers flexibility on the computing infrastructure required for loosely coupled simulations.

Tightly coupled workloads have processes that regularly exchange information at each iteration of the simulation. Typically, these tightly coupled simulations run on a homogeneous cluster. The total core or processor count can range from tens to thousands, and occasionally to hundreds of thousands, if the infrastructure allows. The interactions of the processes during the simulation place extra demands on the infrastructure, such as the compute nodes and network infrastructure.

The infrastructure used to run the huge variety of loosely and tightly coupled applications is differentiated by its ability for process interactions across nodes. There are fundamental aspects that apply to both scenarios, and specific design considerations for each. Consider the following fundamentals for both scenarios when selecting an HPC infrastructure on AWS:

• Network: Network requirements can range from cases with low requirements, such as loosely coupled applications with minimal communication traffic, to tightly coupled and massively parallel applications that require a performant network with large bandwidth and low latency.

• Storage: HPC calculations use, create, and move data in unique ways. Storage infrastructure must support these requirements during each step of the calculation. Input data is frequently stored on startup, more data is created and stored while running, and output data is moved to a reservoir location upon run completion. Factors to be considered include data size, media type, transfer speeds, shared access, and storage properties (for example, durability and availability). It is helpful to use a shared file system between nodes, for example, a Network File System (NFS) share, such as Amazon Elastic File System (EFS), or a Lustre file system, such as Amazon FSx for Lustre.

• Compute: The Amazon EC2 instance type defines the hardware capabilities available for your HPC workload. Hardware capabilities include the processor type, core frequency, processor features (for example, vector extensions), memory-to-core ratio, and network performance. On AWS, an instance is considered to be the same as an HPC node, and these terms are used interchangeably in this whitepaper.

  o AWS offers managed services with the ability to access compute without the need to choose the underlying EC2 instance type. AWS Lambda and AWS Fargate are compute services that allow you to run workloads without having to provision and manage the underlying servers.

• Deployment: AWS provides many options for deploying HPC workloads. Instances can be manually launched from
the AWS Management Console. For an automated deployment, a variety of Software Development Kits (SDKs) is available for coding end-to-end solutions in different programming languages. A popular HPC deployment option combines bash shell scripting with the AWS Command Line Interface (AWS CLI).

  o AWS CloudFormation templates allow the specification of application-tailored HPC clusters described as code, so that they can be launched in minutes. AWS ParallelCluster is open-source software that coordinates the launch of a cluster through CloudFormation, with software already installed (for example, compilers and schedulers) for a traditional cluster experience.

  o AWS provides managed deployment services for container-based workloads, such as Amazon EC2 Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, and AWS Batch.

  o Additional software options are available from third-party companies in the AWS Marketplace and the AWS Partner Network (APN).

Cloud computing makes it easy to experiment with infrastructure components and architecture design. AWS strongly encourages testing instance types, EBS volume types, deployment methods, etc., to find the best performance at the lowest cost.

Loosely Coupled Scenarios

A loosely coupled workload entails the processing of a large number of smaller jobs. Generally, the smaller job runs on one node, either consuming one process or multiple processes with shared memory parallelization (SMP) for parallelization within that node. The parallel processes, or the iterations in the simulation, are post-processed to create one solution or discovery from the simulation. Loosely coupled applications are found in many areas, including Monte Carlo simulations, image processing, genomics analysis, and Electronic Design Automation (EDA). The loss of one node or job in a loosely coupled workload usually doesn't delay the entire calculation. The lost
work can be picked up later or omitted altogether. The nodes involved in the calculation can vary in specification and power.

A suitable architecture for a loosely coupled workload has the following considerations:

• Network: Because parallel processes do not typically interact with each other, the feasibility or performance of the workloads is not sensitive to the bandwidth and latency capabilities of the network between instances. Therefore, clustered placement groups are not necessary for this scenario, because they weaken the resiliency without providing a performance gain.

• Storage: Loosely coupled workloads vary in storage requirements and are driven by the dataset size and desired performance for transferring, reading, and writing the data.

• Compute: Each application is different, but in general, the application's memory-to-compute ratio drives the underlying EC2 instance type. Some applications are optimized to take advantage of graphics processing units (GPUs) or field-programmable gate array (FPGA) accelerators on EC2 instances.

• Deployment: Loosely coupled simulations often run across many (sometimes millions of) compute cores that can be spread across Availability Zones without sacrificing performance. Loosely coupled simulations can be deployed with end-to-end services and solutions, such as AWS Batch and AWS ParallelCluster, or through a combination of AWS services, such as Amazon Simple Queue Service (Amazon SQS), Auto Scaling, AWS Lambda, and AWS Step Functions.

Tightly Coupled Scenarios

Tightly coupled applications consist of parallel processes that are dependent on each other to carry out the calculation. Unlike a loosely coupled computation, all processes of a tightly coupled simulation iterate together and require communication with one another. An iteration is defined as one step of the overall simulation. Tightly coupled calculations rely on tens to thousands of
processes or cores over one to millions of iterations. The failure of one node usually leads to the failure of the entire calculation. To mitigate the risk of complete failure, application-level checkpointing regularly occurs during a computation to allow for the restarting of a simulation from a known state. These simulations rely on a Message Passing Interface (MPI) for interprocess communication. Shared Memory Parallelism via OpenMP can be used with MPI. Examples of tightly coupled HPC workloads include computational fluid dynamics, weather prediction, and reservoir simulation.

A suitable architecture for a tightly coupled HPC workload has the following considerations:

• Network: The network requirements for tightly coupled calculations are demanding. Slow communication between nodes results in the slowdown of the entire calculation. The largest instance size, enhanced networking, and cluster placement groups are required for consistent networking performance. These techniques minimize simulation runtimes and reduce overall costs. Tightly coupled applications range in size. A large problem size, spread over a large number of processes or cores, usually parallelizes well. Small cases with lower total computational requirements place the greatest demand on the network. Certain Amazon EC2 instances use the Elastic Fabric Adapter (EFA), a network interface that enables running applications that require high levels of internode communication at scale on AWS. EFA's custom-built operating system bypass hardware interface enhances the performance of inter-instance communications, which is critical to scaling tightly coupled applications.
• Storage: Tightly coupled workloads vary in storage requirements and are driven by the dataset size and desired performance for transferring, reading, and writing the data. Temporary data storage or scratch space requires special consideration.
• Compute:
EC2 instances are offered in a variety of configurations with varying core-to-memory ratios. For parallel applications, it is helpful to spread memory-intensive parallel simulations across more compute nodes to lessen the memory per-core requirements and to target the best-performing instance type. Tightly coupled applications require a homogenous cluster built from similar compute nodes. Targeting the largest instance size minimizes internode network latency while providing the maximum network performance when communicating between nodes.
• Deployment: A variety of deployment options are available. End-to-end automation is achievable, as is launching simulations in a “traditional” cluster environment. Cloud scalability enables you to launch hundreds of large multi-process cases at once, so there is no need to wait in a queue. Tightly coupled simulations can be deployed with end-to-end solutions such as AWS Batch and AWS ParallelCluster, or through solutions based on AWS services such as CloudFormation or EC2 Fleet.

Reference Architectures

Many architectures apply to both loosely coupled and tightly coupled workloads and may require slight modifications based on the scenario. Traditional on-premises clusters force a one-size-fits-all approach to the cluster infrastructure. However, the cloud offers a wide range of possibilities and allows for optimization of performance and cost. In the cloud, your configuration can range from a traditional cluster experience, with a scheduler and login node, to a cloud-native architecture with the cost efficiencies obtainable with cloud-native solutions. Five reference architectures are below:

1. Traditional cluster environment
2. Batch-based architecture
3. Queue-based architecture
4. Hybrid deployment
5. Serverless workflow

Traditional Cluster Environment

Many users begin their cloud journey with an environment that is similar to traditional
HPC environments. The environment often involves a login node with a scheduler to launch jobs. A common approach to traditional cluster provisioning is based on an AWS CloudFormation template for a compute cluster, combined with customization for a user's specific tasks. AWS ParallelCluster is an example of an end-to-end cluster provisioning capability based on AWS CloudFormation. Although the complex description of the architecture is hidden inside the template, typical configuration options allow the user to select the instance type, scheduler, or bootstrap actions, such as installing applications or synchronizing data. The template can be constructed and executed to provide an HPC environment with the “look and feel” of conventional HPC clusters, but with the added benefit of scalability. The login node maintains the scheduler, shared file system, and running environment. Meanwhile, an automatic scaling mechanism allows additional instances to spin up as jobs are submitted to a job queue. As instances become idle, they are automatically terminated.

A cluster can be deployed in a persistent configuration or treated as an ephemeral resource. Persistent clusters are deployed with a login instance and a compute fleet that can either be a fixed size or tied to an Auto Scaling group, which increases and decreases the compute fleet depending on the number of submitted jobs. Persistent clusters always have some infrastructure running. Alternatively, clusters can be treated as ephemeral, where each workload runs on its own cluster. Ephemeral clusters are enabled by automation. For example, a bash script combined with the AWS CLI, or a Python script with the AWS SDK, provides end-to-end case automation. For each case, resources are provisioned and launched, data is placed on the nodes, jobs are run across multiple nodes, and the case output is either retrieved automatically or sent to Amazon S3. Upon completion of the job, the infrastructure is terminated. These clusters treat infrastructure as code,
optimize costs, and allow for complete version control of infrastructure changes.

Traditional cluster architectures can be used for loosely and tightly coupled workloads. For best performance, tightly coupled workloads must use a compute fleet in a clustered placement group with homogenous instance types.

Reference Architecture

Figure 1: Traditional cluster deployed with AWS ParallelCluster

Workflow steps:

1. User initiates the creation of a cluster through the AWS ParallelCluster CLI and specification in the configuration file.
2. AWS CloudFormation builds the cluster architecture as described in the cluster template file, where the user contributed a few custom settings (for example, by editing a configuration file or using a web interface).
3. AWS CloudFormation deploys the infrastructure from EBS snapshots created with customized HPC software/applications that cluster instances can access through an NFS export.
4. The user logs into the login instance and submits jobs to the scheduler (for example, SGE, Slurm).
5. The login instance emits metrics to CloudWatch based on the job queue size.
6. CloudWatch triggers Auto Scaling events to increase the number of compute instances if the job queue size exceeds a threshold.
7. Scheduled jobs are processed by the compute fleet.
8. [Optional] User initiates cluster deletion and termination of all resources.

Batch-Based Architecture

AWS Batch is a fully managed service that enables you to run large-scale compute workloads in the cloud without provisioning resources or managing schedulers.[3] AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (for example, CPU- or memory-optimized
instances) based on the volume and specified resource requirements of the batch jobs submitted. It plans, schedules, and executes your batch computing workloads across the full range of AWS compute services and features, such as Amazon EC2 [4] and Spot Instances.[5] Without the need to install and manage the batch computing software or server clusters necessary for running your jobs, you can focus on analyzing results and gaining new insights.

With AWS Batch, you package your application in a container, specify your job's dependencies, and submit your batch jobs using the AWS Management Console, the CLI, or an SDK. You can specify execution parameters and job dependencies, and integrate with a broad range of popular batch computing workflow engines and languages (for example, Pegasus WMS, Luigi, and AWS Step Functions). AWS Batch provides default job queues and compute environment definitions that enable you to get started quickly.

An AWS Batch-based architecture can be used for both loosely and tightly coupled workloads. Tightly coupled workloads should use Multi-node Parallel Jobs in AWS Batch.

Reference Architecture

Figure 2: Example AWS Batch architecture

Workflow steps:

1. User creates a job container, uploads the container to the Amazon EC2 Container Registry (Amazon ECR) or another container registry (for example, DockerHub), and creates a job definition in AWS Batch.
2. User submits jobs to a job queue in AWS Batch.
3. AWS Batch pulls the image from the container registry and processes the jobs in the queue.
4. Input and output data from each job is stored in an S3 bucket.

Queue-Based Architecture

Amazon SQS is a fully managed message queuing service that makes it easy to decouple pre-processing steps from compute steps and post-processing steps.[6] Building applications from individual components that perform discrete functions improves scalability and reliability. Decoupling components is a
best practice for designing modern applications. Amazon SQS frequently lies at the heart of cloud-native, loosely coupled solutions. Amazon SQS is often orchestrated with AWS CLI or AWS SDK scripted solutions for the deployment of applications from the desktop, without users interacting with AWS components directly. A queue-based architecture with SQS and EC2 requires self-managed compute infrastructure, in contrast with a service-managed deployment such as AWS Batch.

A queue-based architecture is best for loosely coupled workloads and can quickly become complex if applied to tightly coupled workloads.

Reference Architecture

Figure 3: Amazon SQS deployed for a loosely coupled workload

Workflow steps:

1. Multiple users submit jobs with the AWS CLI or SDK.
2. The jobs are queued as messages in Amazon SQS.
3. EC2 instances poll the queue and start processing jobs.
4. Amazon SQS emits metrics based on the number of messages (jobs) in the queue.
5. An Amazon CloudWatch alarm is configured to notify Auto Scaling if the queue is longer than a specified length; Auto Scaling increases the number of EC2 instances.
6. The EC2 instances pull source data and store result data in an S3 bucket.

Hybrid Deployment

Hybrid deployments are primarily considered by organizations that are invested in their on-premises infrastructure and also want to use AWS. This approach allows organizations to augment on-premises resources and creates an alternative path to AWS, rather than an immediate full migration. Hybrid scenarios vary from minimal coordination, like workload separation, to tightly integrated approaches, like scheduler-driven job placement. For example, an organization may separate their workloads and run all workloads of a certain type on AWS infrastructure. Alternatively, organizations with a large investment in
their on-premises processes and infrastructure may desire a more seamless experience for their end users by managing AWS resources with their job scheduling software and, potentially, a job submission portal. Several job schedulers, commercial and open source, provide the capability to dynamically provision and deprovision AWS resources as necessary. The underlying resource management relies on native AWS integrations (for example, AWS CLI or API) and can allow for a highly customized environment, depending on the scheduler.

Although job schedulers help manage AWS resources, the scheduler is only one aspect of a successful deployment. Critical factors in successfully operating a hybrid scenario are data locality and data movement. Some HPC workloads do not require or generate significant datasets; therefore, data management is less of a concern. However, jobs that require large input data or that generate significant output data can become a bottleneck. Techniques to address data management vary depending on the organization. For example, one organization may have their end users manage the data transfer in their job submission scripts, others might only run certain jobs in the location where a dataset resides, a third organization might choose to duplicate data in both locations, and yet another organization might choose to use a combination of several options.

Depending on the data management approach, AWS provides several services to aid in a hybrid deployment. For example, AWS Direct Connect establishes a dedicated network connection between an on-premises environment and AWS, and AWS DataSync automatically moves data from on-premises storage to Amazon S3 or Amazon Elastic File System. Additional software options are available from third-party companies in the AWS Marketplace and the AWS Partner Network (APN).

Hybrid deployment architectures can be used for loosely and tightly coupled workloads. However, a single tightly coupled workload should reside either on premises or in AWS for
best performance.

Reference Architecture

Figure 4: Example hybrid scheduler-based deployment

Workflow steps:

1. User submits the job to a scheduler (for example, Slurm) on an on-premises login node.
2. Scheduler executes the job on either on-premises compute or AWS infrastructure, based on configuration.
3. The jobs access shared storage based on their run location.

Serverless

The loosely coupled cloud journey often leads to an environment that is entirely serverless, meaning that you can concentrate on your applications and leave the server provisioning responsibility to managed services. AWS Lambda can run code without the need to provision or manage servers. You pay only for the compute time you consume; there is no charge when your code is not running. You upload your code, and Lambda takes care of everything required to run and scale your code. Lambda can also be automatically triggered by events from other AWS services.

Scalability is a second advantage of the serverless Lambda approach. Although each worker may be modest in size (for example, a compute core with some memory), the architecture can spawn thousands of concurrent Lambda workers, thus reaching a large compute throughput capacity and earning the HPC label. For example, a large number of files can be analyzed by invocations of the same algorithm, a large number of genomes can be analyzed in parallel, or a large number of gene sites within a genome can be modeled.

The largest attainable scale and the speed of scaling matter. While server-based architectures require time on the order of minutes to increase capacity in response to a request (even when using virtual machines such as EC2 instances), serverless Lambda functions scale in seconds. AWS Lambda enables HPC infrastructure that responds immediately to any unforeseen
requests for compute-intensive results and can fulfill a variable number of requests without requiring any resources to be wastefully provisioned in advance.

In addition to compute, there are other serverless architectures that aid HPC workflows. AWS Step Functions lets you coordinate multiple steps in a pipeline by stitching together different AWS services. For example, an automated genomics pipeline can be created with AWS Step Functions for coordination, Amazon S3 for storage, AWS Lambda for small tasks, and AWS Batch for data processing.

Serverless architectures are best for loosely coupled workloads, or as workflow coordination when combined with another HPC architecture.

Reference Architecture

Figure 5: Example Lambda-deployed loosely coupled workload

Workflow steps:

1. The user uploads a file to an S3 bucket through the AWS CLI or SDK.
2. The input file is saved with an incoming prefix (for example, input/).
3. An S3 event automatically triggers a Lambda function to process the incoming data.
4. The output file is saved back to the S3 bucket with an outgoing prefix (for example, output/).

The Five Pillars of the Well-Architected Framework

This section describes HPC in the context of the five pillars of the Well-Architected Framework. Each pillar discusses design principles, definitions, best practices, evaluation questions, considerations, key AWS services, and useful links.

Operational Excellence Pillar

The operational excellence pillar includes the ability to run and monitor systems to deliver business value and continually improve supporting processes and procedures.

Design Principles

In the cloud, a number of principles drive operational excellence. In particular, the following are emphasized for HPC workloads. See also the design principles in the AWS Well-Architected Framework whitepaper.

•
Automate cluster operations: In the cloud, you can define your entire workload as code and update it with code. This enables you to automate repetitive processes or procedures. You benefit from being able to consistently reproduce infrastructure and implement operational procedures. This includes automating the job submission process and responses to events such as job start, completion, or failure. In HPC, it is common for users to expect to repeat multiple steps for every job, including, for example, uploading case files, submitting a job to a scheduler, and moving result files. Automate these repetitive steps with scripts or event-driven code to maximize usability and minimize costs and failures.
• Use cloud-native architectures where applicable: HPC architectures typically take one of two forms. The first is a traditional cluster configuration with a login instance, compute nodes, and a job scheduler. The second is a cloud-native architecture with automated deployments and managed services. A single workload can run on each (ephemeral) cluster, or use serverless capabilities. Cloud-native architectures can optimize operations by democratizing advanced technologies; however, the best technology approach aligns with the desired environment for HPC users.

Definition

There are three best practice areas for operational excellence in the cloud:

• Prepare
• Operate
• Evolve

For more information on the prepare, operate, and evolve areas, see the AWS Well-Architected Framework whitepaper. Evolve is not described in this whitepaper.

Best Practices

Prepare

Review the corresponding section in the AWS Well-Architected Framework whitepaper. As you prepare to deploy your workload, consider using specialized software packages (commercial or open source) to gain visibility into system information, and leverage this information to define architecture patterns for your workloads. Use automation tools such as
AWS ParallelCluster or AWS CloudFormation to define these architectures in a way that is configurable with variables.

The cloud provides multiple scheduling options. One option is to use AWS Batch, a fully managed batch processing service with support for both single-node and multi-node tasks. Another option is to not use a scheduler. For example, you can create an ephemeral cluster to run a single job directly.

HPCOPS 1: How do you standardize architectures across clusters?
HPCOPS 2: How do you schedule jobs – traditional schedulers, AWS Batch, or no scheduler with ephemeral clusters?

Operate

Operations must be standardized and managed routinely. Focus on automation, small frequent changes, regular quality assurance testing, and defined mechanisms to track, audit, roll back, and review changes. Changes should not be large and infrequent, should not require scheduled downtime, and should not require manual execution. A wide range of logs and metrics, based on key operational indicators for a workload, must be collected and reviewed to ensure continuous operations.

AWS provides the opportunity to use additional tools for handling HPC operations. These tools can vary from monitoring assistance to automating deployments. For example, you can have Auto Scaling restart failed instances, use CloudWatch to monitor your cluster's load metrics, configure notifications for when jobs finish, or use a managed service (such as AWS Batch) to implement retry rules for failed jobs. Cloud-native tools can greatly improve your application deployment and change management.

Release management processes, whether manual or automated, must be based on small, incremental changes and tracked versions. You must be able to revert releases that introduce issues without causing operational impact. Use continuous integration and continuous deployment tools such as AWS CodePipeline and AWS CodeDeploy to automate change
deployment. Track source code changes with version control tools, such as AWS CodeCommit, and infrastructure configurations with automation tools, such as AWS CloudFormation templates.

HPCOPS 3: How are you evolving your workload while minimizing the impact of change?
HPCOPS 4: How do you monitor your workload to ensure that it is operating as expected?

Using the cloud for HPC introduces new operational considerations. While on-premises clusters are fixed in size, cloud clusters can scale to meet demand. Cloud-native architectures for HPC also operate differently than on-premises architectures. For example, they use different mechanisms for job submission and for provisioning On-Demand Instance resources as jobs arrive. You must adopt operational procedures that accommodate the elasticity of the cloud and the dynamic nature of cloud-native architectures.

Evolve

There are no best practices unique to HPC for the evolve practice area. For more information, see the corresponding section in the AWS Well-Architected Framework whitepaper.

Security Pillar

The security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

Design Principles

In the cloud, there are a number of principles that help you strengthen your system's security. The design principles from the AWS Well-Architected Framework whitepaper are recommended and do not vary for HPC workloads.

Definition

There are five best practice areas for security in the cloud:

• Identity and access management (IAM)
• Detective controls
• Infrastructure protection
• Data protection
• Incident response

Before architecting any system, you must establish security practices. You must be able to control permissions, identify security incidents, protect your systems and services, and maintain the confidentiality and integrity of data through data protection.
You should have a well-defined and practiced process for responding to security incidents. These tools and techniques are important because they support objectives such as preventing data loss and complying with regulatory obligations. The AWS Shared Responsibility Model enables organizations that adopt the cloud to achieve their security and compliance goals. Because AWS physically secures the infrastructure that supports our cloud services, you can focus on using services to accomplish your goals. The AWS Cloud provides access to security data and an automated approach to responding to security events.

All of the security best practice areas are vital and well documented in the AWS Well-Architected Framework whitepaper. The detective controls, infrastructure protection, and incident response areas are described in the AWS Well-Architected Framework whitepaper. They are not described in this whitepaper and do not require modification for HPC workloads.

Best Practices

Identity and Access Management (IAM)

Identity and access management are key parts of an information security program. They ensure that only authorized and authenticated users are able to access your resources. For example, you define principals (users, groups, services, and roles that take action in your account), build out policies referencing these principals, and implement strong credential management. These privilege management elements form the core concepts of authentication and authorization.

Run HPC workloads autonomously and ephemerally to limit the exposure of sensitive data. Autonomous deployments require minimal human access to instances, which minimizes the exposure of the resources. HPC data is produced within a limited time, minimizing the possibility of potential unauthorized data access.

HPCSEC 1: How are you using managed services, autonomous methods, and ephemeral clusters to minimize human access to the
workload infrastructure?

HPC architectures can use a variety of managed (for example, AWS Batch, AWS Lambda) and unmanaged compute services (for example, Amazon EC2). When architectures require direct access to the compute environments, such as connecting to an EC2 instance, users commonly connect through a Secure Shell (SSH) and authenticate with an SSH key. This access model is typical in a traditional cluster scenario. All credentials, including SSH keys, must be appropriately protected and regularly rotated. Alternatively, AWS Systems Manager has a fully managed service (Session Manager) that provides an interactive browser-based shell and CLI experience. It provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, and manage SSH keys. Session Manager can be accessed through any SSH client that supports ProxyCommand.

HPCSEC 2: What methods are you using to protect and manage your credentials?

Detective Controls

There are no best practices unique to HPC for the detective controls best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Infrastructure Protection

There are no best practices unique to HPC for the infrastructure protection best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Data Protection

Before architecting any system, you must establish foundational security practices. For example, data classification provides a way to categorize organizational data based on levels of sensitivity, and encryption protects data by rendering it unintelligible to unauthorized access. These tools and techniques are important because they support objectives such as preventing data loss or complying with regulatory obligations.

HPCSEC 3: How does your architecture address data requirements for storage, availability, and durability through the lifecycle
of your results?

In addition to the level of sensitivity and regulatory obligations, HPC data can also be categorized according to when and how the data will next be used. Final results are often retained, while intermediate results, which can be recreated if necessary, may not need to be retained. Careful evaluation and categorization of data allows for programmatic migration of important data to more resilient storage solutions, such as Amazon S3 and Amazon EFS. An understanding of data longevity, combined with programmatic handling of the data, offers the minimum exposure and maximum protection for a Well-Architected infrastructure.

Incident Response

There are no best practices unique to HPC for the incident response best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Reliability Pillar

The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.

Design Principles

In the cloud, a number of principles help you increase reliability. In particular, the following are emphasized for HPC workloads. For more information, refer to the design principles in the AWS Well-Architected Framework whitepaper.

• Scale horizontally to increase aggregate system availability: It is important to consider horizontal scaling options that might reduce the impact of a single failure on the overall system. For example, rather than having one large shared HPC cluster running multiple jobs, consider creating multiple clusters across the Amazon infrastructure to further isolate your risk of potential failures. Since infrastructure can be treated as code, you can horizontally scale resources inside a single cluster, and you can horizontally scale the number of clusters running
individual cases.
• Stop guessing capacity: A set of HPC clusters can be provisioned to meet current needs and scaled, manually or automatically, to meet increases or decreases in demand. For example, terminate idle compute nodes when not in use, and run concurrent clusters for processing multiple computations rather than waiting in a queue.
• Manage change in automation: Automating changes to your infrastructure allows you to place a cluster infrastructure under version control and make exact duplicates of a previously created cluster. Automation changes must be managed.

Definition

There are three best practice areas for reliability in the cloud:

• Foundations
• Change management
• Failure management

The change management area is described in the AWS Well-Architected Framework whitepaper.

Best Practices

Foundations

HPCREL 1: How do you manage AWS service limits for your accounts?
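One way to reason about this question is to estimate, before deployment, whether a planned fleet fits within the account's vCPU limit. The sketch below is illustrative only: the instance-type table and helper names are assumptions, not an AWS API (in practice you would read quotas from the Service Quotas console or API).

```python
# Hedged sketch: pre-deployment check that a planned homogeneous fleet
# stays within an account-level vCPU service limit. The table and helpers
# are hypothetical, not part of any AWS SDK.

# Published vCPU counts for a few example instance types.
VCPUS_PER_TYPE = {
    "c5.18xlarge": 72,
    "c5.9xlarge": 36,
    "r5.4xlarge": 16,
}

def required_vcpus(instance_type: str, count: int) -> int:
    """Total vCPUs a fleet of `count` identical instances would consume."""
    return VCPUS_PER_TYPE[instance_type] * count

def fleet_fits(instance_type: str, count: int, vcpu_quota: int) -> bool:
    """True if the planned fleet stays within the account's vCPU quota."""
    return required_vcpus(instance_type, count) <= vcpu_quota
```

For example, a 100-node c5.18xlarge cluster consumes 7,200 vCPUs, which far exceeds a typical default quota, so the limit increase should be requested before the workload is deployed.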
AWS sets service limits (an upper limit on the number of each resource your team can request) to protect you from accidentally over-provisioning resources. HPC applications often require a large number of compute instances simultaneously. The ability and advantages of scaling horizontally are highly desirable for HPC workloads. However, scaling horizontally may require an increase to the AWS service limits before a large workload is deployed, either to one large cluster or to many smaller clusters all at once. Service limits must often be increased from the default values in order to handle the requirements of a large deployment. Contact AWS Support to request an increase.

Change Management

There are no best practices unique to HPC for the change management best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Failure Management

Any complex system can expect occasional failures, and it is critical to become aware of these failures, respond to them, and prevent them from happening again. Failure scenarios can include the failure of a cluster to start up or the failure of a specific workload.

HPCREL 2: How does your application use checkpointing to recover from failures?
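As a minimal illustration of the checkpointing pattern this question asks about, the sketch below periodically persists intermediate state so an interrupted run resumes from the last known state rather than restarting. The file name, state layout, and `run` loop are hypothetical; real HPC applications typically build checkpointing into the solver itself and write to durable shared storage.

```python
# Hedged sketch of application-level checkpointing: periodically write out
# intermediate results so a failed or interrupted case can restart from a
# known state. Names and state layout are illustrative.
import json
import os

CHECKPOINT_FILE = "checkpoint.json"  # in practice: durable shared storage or S3

def load_checkpoint():
    """Resume from the last saved state, or start fresh."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)
    return {"iteration": 0, "partial_sum": 0.0}

def save_checkpoint(state):
    """Persist intermediate results; also useful for inspecting errors."""
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump(state, f)

def run(total_iterations, checkpoint_every=100):
    state = load_checkpoint()
    for i in range(state["iteration"], total_iterations):
        state["partial_sum"] += i          # stand-in for one simulation step
        state["iteration"] = i + 1
        if state["iteration"] % checkpoint_every == 0:
            save_checkpoint(state)          # known state to restart from
    save_checkpoint(state)
    return state["partial_sum"]
```

If the process dies mid-run, the next invocation of `run` picks up at the saved iteration, losing at most `checkpoint_every` steps of work.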
Failure tolerance can be improved in multiple ways. For long-running cases, incorporating regular checkpoints in your code allows you to continue from a partial state in the event of a failure. Checkpointing is a common feature of application-level failure management already built into many HPC applications. The most common approach is for applications to periodically write out intermediate results. The intermediate results offer potential insight into application errors and the ability to restart the case as needed while losing only part of the work.

Checkpointing is particularly useful on Spot Instances, which are highly cost-effective but potentially interruptible. In addition, some applications may benefit from changing the default Spot interruption behavior (for example, stopping or hibernating the instance rather than terminating it). It is important to consider the durability of the storage option when relying on checkpointing for failure management.

HPCREL 3: How have you planned for failure tolerance in your architecture?
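For loosely coupled work, one simple failure-management tactic is to retry cluster launches across several Availability Zones. This stubbed sketch shows the control flow only; the zone names and the launcher are hypothetical stand-ins for real infrastructure calls (for example, a CloudFormation stack creation per zone).

```python
AZS = ["us-east-1a", "us-east-1b", "us-east-1c"]  # assumed zone names

def try_launch(az, capacity_available):
    """Stub: returns True if the cluster comes up in this zone."""
    return capacity_available.get(az, False)

def launch_with_fallback(azs, capacity_available):
    """Try each Availability Zone in order. Loosely coupled work can move
    zones freely; a tightly coupled job must land entirely in one zone."""
    for az in azs:
        if try_launch(az, capacity_available):
            return az
    raise RuntimeError("no capacity in any configured Availability Zone")

# Example: the first zone is out of capacity, the second succeeds.
chosen = launch_with_fallback(AZS, {"us-east-1a": False, "us-east-1b": True})
```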
Failure tolerance can be improved by deploying to multiple Availability Zones. The low-latency requirements of tightly coupled HPC applications require that each individual case reside within a single cluster placement group and Availability Zone. Loosely coupled applications, by contrast, do not have such low-latency requirements and can improve failure management by deploying to several Availability Zones.

Consider the tradeoff between the reliability and cost pillars when making this design decision. Duplication of compute and storage infrastructure (for example, a head node and attached storage) incurs additional cost, and there may be data transfer charges for moving data to another Availability Zone or to another AWS Region. For non-urgent use cases, it may be preferable to move into another Availability Zone only as part of a disaster recovery (DR) event.

Performance Efficiency Pillar

The performance efficiency pillar focuses on the efficient use of computing resources to meet requirements, and on maintaining that efficiency as demand changes and technologies evolve.

Design Principles

When designing for HPC in the cloud, a number of principles help you achieve performance efficiency:

• Design the cluster for the application: Traditional clusters are static and require that the application be designed for the cluster. AWS offers the capability to design the cluster for the application. With individual clusters for each application, a one-size-fits-all model is no longer necessary. When running a variety of applications on AWS, a variety of architectures can be used to meet each application's demands. This allows for the best performance while minimizing cost.

• Test performance with a meaningful use case: The best method to gauge an HPC application's performance on a particular architecture is to run a meaningful demonstration of the application itself. An
inadvertently small or large demonstration case, one without the expected compute, memory, data transfer, or network traffic, will not provide a meaningful test of application performance on AWS. Although system-specific benchmarks offer an understanding of the underlying compute infrastructure's performance, they do not reflect how an application will perform in the aggregate. The AWS pay-as-you-go model makes a proof of concept quick and cost-effective.

• Use cloud-native architectures where applicable: In the cloud, managed, serverless, and cloud-native architectures remove the need for you to run and maintain servers to carry out traditional compute activities. Cloud-native components for HPC target compute, storage, job orchestration, and organization of the data and metadata. The variety of AWS services allows each step in the workload process to be decoupled and optimized for a more performant capability.

• Experiment often: Virtual and automatable resources allow you to quickly carry out comparative testing using different types of instances, storage, and configurations.

Definition

There are four best practice areas for performance efficiency in the cloud:

• Selection
• Review
• Monitoring
• Tradeoffs

The review, monitoring, and tradeoffs areas are described in the AWS Well-Architected Framework whitepaper.

Best Practices

Selection

The optimal solution for a particular system varies based on the kind of workload you have. Well-Architected systems use multiple solutions and enable different features to improve performance. An HPC architecture can rely on one or more different architectural elements, for example: queued batch, cluster compute, containers, serverless, and event-driven.

Compute

HPCPERF 1: How do you select your compute solution?
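As one illustration of vertical scaling within an instance family, a scheduler script might pick the smallest size whose vCPU count covers an application's process count. The size table below is hypothetical; real values come from the instance type matrix.

```python
# Hypothetical vCPUs-per-size table for a single instance family; consult
# the instance type matrix for actual figures.
SIZES = {"large": 2, "xlarge": 4, "4xlarge": 16, "12xlarge": 48, "24xlarge": 96}

def pick_size(processes_needed):
    """Return the smallest size in the family that fits the process count.
    Tightly coupled workloads generally perform best on the largest size,
    so this helper suits loosely coupled or embarrassingly parallel jobs."""
    for name, vcpus in sorted(SIZES.items(), key=lambda kv: kv[1]):
        if vcpus >= processes_needed:
            return name
    return None  # does not fit in one instance; scale horizontally instead
```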
The optimal compute solution for a particular HPC architecture depends on the workload, deployment method, degree of automation, usage patterns, and configuration. Different compute solutions may be chosen for each step of a process. Selecting the wrong compute solutions for an architecture can lead to lower performance efficiency.

Instances are virtualized servers and come in different families and sizes to offer a wide variety of capabilities. Some instance families target specific workloads, for example compute-, memory-, or GPU-intensive workloads, while other instances are general purpose. Both the targeted-workload and general purpose instance families are useful for HPC applications. Instances of particular interest to HPC include the compute-optimized family and accelerated instance types such as GPUs and FPGAs.

Some instance families provide variants within the family for additional capabilities. For example, an instance family may have a variant with local storage, greater networking capabilities, or a different processor. These variants can be viewed in the Instance Type Matrix [7] and may improve the performance of some HPC workloads.

Within each instance family, one or more instance sizes allow vertical scaling of resources. Some applications require a larger instance type (for example, 24xlarge), while others run on smaller types (for example, large), depending on the number of processes supported by the application. For a tightly coupled workload, optimum performance is obtained with the largest instance type.

The T series instance family is designed for applications with moderate CPU usage that can benefit from bursting beyond a baseline level of CPU performance. Most HPC applications are compute intensive and suffer a performance decline with the T series instance family.

Applications vary in their requirements (for example, desired cores, processor speed, memory
requirements, storage needs, and networking specifications). When selecting an instance family and type, begin with the specific needs of the application. Instance types can be mixed and matched for applications requiring targeted instances for specific application components.

Containers are a method of operating-system virtualization that is attractive for many HPC workloads, particularly if the applications have already been containerized. AWS services such as AWS Batch, Amazon Elastic Container Service (ECS), and Amazon Elastic Container Service for Kubernetes (EKS) help deploy containerized applications.

Functions abstract the execution environment. AWS Lambda allows you to execute code without deploying, running, or maintaining an instance. Many AWS services emit events based on activity inside the service, and a Lambda function can often be triggered by those service events. For example, a Lambda function can be executed after an object is uploaded to Amazon S3. Many HPC users use Lambda to automatically execute code as part of their workflow.

There are several choices to make when launching your selected compute instance:

• Operating system: A current operating system is critical to achieving the best performance and ensuring access to the most up-to-date libraries.

• Virtualization type: New-generation EC2 instances run on the AWS Nitro System. The Nitro System delivers all the host hardware's compute and memory resources to your instances, resulting in better overall performance; instances do not hold back resources for management software. Dedicated Nitro Cards enable high-speed networking, high-speed EBS, and I/O acceleration. The Nitro Hypervisor is a lightweight hypervisor that manages memory and CPU allocation and delivers performance that is indistinguishable from bare metal. The Nitro System also makes bare metal instances available to run without the Nitro Hypervisor. Launching a bare metal instance boots the underlying server, which includes verifying all hardware and firmware
components. This means it can take longer for a bare metal instance to become available to start your workload than for a virtualized instance. The additional initialization time must be considered when operating in a dynamic HPC environment where resources launch and terminate based on demand.

HPCPERF 2: How do you optimize the compute environment for your application?

Underlying hardware features: In addition to choosing an AMI, you can further optimize your environment by taking advantage of the hardware features of the underlying Intel processors. There are four primary methods to consider when optimizing the underlying hardware:

1. Advanced processor features
2. Intel Hyper-Threading Technology
3. Processor affinity
4. Processor state control

HPC applications can benefit from advanced processor features (for example, Advanced Vector Extensions) and can increase their calculation speeds by compiling the software for the Intel architecture [8]. The compiler options for architecture-specific instructions vary by compiler (check the usage guide for your compiler).

AWS enables Intel Hyper-Threading Technology, commonly referred to as "hyperthreading," by default. Hyperthreading improves performance for some applications by allowing one process per hyperthread (two processes per core). However, most HPC applications benefit from disabling hyperthreading, and therefore disabling it tends to be the preferred environment for HPC applications. Hyperthreading is easily disabled in Amazon EC2. Unless an application has been tested with hyperthreading enabled, it is recommended that hyperthreading be disabled and that processes be launched and individually pinned to cores when running HPC applications. CPU or processor affinity allows process pinning to happen easily.

Processor affinity can be controlled in a variety of ways. For example, it can be configured at the operating system level
(available in both Windows and Linux), set as a compiler flag within the threading library, or specified as an MPI flag during execution. The chosen method of controlling processor affinity depends on your workload and application.

AWS enables you to tune the processor state control on certain instance types [9]. You may consider altering the C-state (idle state) and P-state (operational state) settings to optimize your performance. The default C-state and P-state settings provide maximum performance, which is optimal for most workloads. However, if your application would benefit from reduced latency at the cost of higher single- or dual-core frequencies, or from consistent performance at lower frequencies as opposed to spiky Turbo Boost frequencies, experiment with the C-state or P-state settings available on select instances.

There are many compute options available to optimize a compute environment. Cloud deployment allows experimentation on every level, from operating system to instance type to bare metal deployments. Because static clusters are tuned before deployment, time spent experimenting with cloud-based clusters is vital to achieving the desired performance.

Storage

HPCPERF 3: How do you select your storage solution?
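The storage discussion that follows can be condensed into a rough decision helper. The mapping below is a simplification for illustration only, not an official selection rule, and the returned labels merely name the options covered in this section.

```python
def pick_storage(shared, high_performance_parallel, temporary):
    """Rough first-pass mapping from access pattern to a storage option.
    Real selections should weigh throughput, IOPS, durability, and cost."""
    if temporary:
        return "instance store (NVMe) scratch volume"
    if shared and high_performance_parallel:
        return "Amazon FSx for Lustre"
    if shared:
        return "Amazon EFS or an NFS mount backed by EBS"
    return "Amazon EBS volume on the instance"
```

For example, a tightly coupled solver with a shared high-performance working directory maps to FSx for Lustre, while per-node scratch space maps to instance store.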
The optimal storage solution for a particular HPC architecture depends largely on the individual applications targeted for that architecture. Workload deployment method, degree of automation, and desired data lifecycle patterns are also factors.

AWS offers a wide range of storage options. As with compute, the best performance is obtained when targeting the specific storage needs of an application. AWS does not require you to overprovision your storage for a one-size-fits-all approach, and large high-speed shared file systems are not always required. Optimizing the compute choice is important for optimizing HPC performance, and many HPC applications will not benefit from the fastest storage solution possible.

HPC deployments often require a shared or high-performance file system that is accessed by the cluster compute nodes. There are several architecture patterns you can use to implement these storage solutions, from AWS managed services, AWS Marketplace offerings, and APN Partner solutions to open-source configurations deployed on EC2 instances. In particular, Amazon FSx for Lustre is a managed service that provides a cost-effective and performant solution for HPC architectures requiring a high-performance parallel file system. Shared file systems can also be created from Amazon Elastic File System (EFS), or from EC2 instances with Amazon EBS volumes or instance store volumes. Frequently, a simple NFS mount is used to create a shared directory.

When selecting your storage solution, you may select an EBS-backed instance for either or both of your local and shared storage. EBS volumes are often the basis for an HPC storage solution. Various types of EBS volumes are available, including magnetic hard disk drives (HDDs), general purpose solid state drives (SSDs), and Provisioned IOPS SSDs for high-IOPS solutions. They differ in throughput, IOPS performance, and cost.

You can gain further performance
enhancements by selecting an Amazon EBS-optimized instance. An EBS-optimized instance uses an optimized configuration stack and provides additional dedicated capacity for Amazon EBS I/O. This optimization provides the best performance for your EBS volumes by minimizing contention between Amazon EBS I/O and other network traffic to and from your instance. Choose an EBS-optimized instance for more consistent performance and for HPC applications that rely on a low-latency network or have intensive I/O data needs to EBS volumes. To launch an EBS-optimized instance, choose an instance type that enables EBS optimization by default, or choose an instance type that allows enabling EBS optimization at launch.

Instance store volumes, including non-volatile memory express (NVMe) SSD volumes (only available on certain instance families), can be used for temporary block-level storage. Refer to the instance type matrix for EBS optimization and instance store volume support [10].

When you select a storage solution, ensure that it aligns with your access patterns to achieve the desired performance. It is easy to experiment with different storage types and configurations. With HPC workloads, the most expensive option is not always the best-performing solution.

Networking

HPCPERF 4: How do you select your network solution?
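A first-pass answer to this question can be automated: check whether a tightly coupled case fits inside a single large instance before reaching for a cluster placement group and EFA. The core count below is an assumed placeholder, and the returned strings simply name the options discussed in this section.

```python
LARGEST_INSTANCE_CORES = 48  # assumed core count; check the instance type matrix

def network_plan(ranks, tightly_coupled):
    """Sketch of the network-selection reasoning described in the text."""
    if not tightly_coupled:
        # Loosely coupled work is latency-insensitive and can spread out.
        return "no placement group needed; may span Availability Zones"
    if ranks <= LARGEST_INSTANCE_CORES:
        # The whole case fits in one instance, so no network is crossed.
        return "single large instance; no network crossing"
    # Multiple instances with the lowest possible latency between them.
    return "cluster placement group with EFA or enhanced networking"
```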
The optimal network solution for an HPC workload varies based on latency, bandwidth, and throughput requirements. Tightly coupled HPC applications often require the lowest possible latency for network connections between compute nodes. For moderately sized tightly coupled workloads, it is possible to select a large instance type with a large number of cores, so that the application fits entirely within the instance without crossing the network at all.

Alternatively, some applications are network bound and require high network performance; instances with higher network performance can be selected for these applications. The highest network performance is obtained with the largest instance type in a family. Refer to the instance type matrix for more details [7].

Large tightly coupled applications require multiple instances with low latency between the instances. On AWS, this is achieved by launching compute nodes into a cluster placement group, which is a logical grouping of instances within an Availability Zone. A cluster placement group provides non-blocking and non-oversubscribed connectivity, including full bisection bandwidth between instances. Use cluster placement groups for latency-sensitive, tightly coupled applications spanning multiple instances.

In addition to cluster placement groups, tightly coupled applications benefit from an Elastic Fabric Adapter (EFA), a network device that can attach to your Amazon EC2 instance. EFA provides lower and more consistent latency and higher throughput than the TCP transport traditionally used in cloud-based HPC systems. It enables an OS-bypass access model through the Libfabric API that allows HPC applications to communicate directly with the network interface hardware. EFA enhances the performance of inter-instance communication, is optimized to work on the existing AWS network infrastructure, and is critical for scaling tightly coupled
applications [13].

If an application cannot take advantage of EFA's OS-bypass functionality, or an instance type does not support EFA, optimal network performance can be obtained by selecting an instance type that supports enhanced networking. Enhanced networking provides EC2 instances with higher networking performance and lower CPU utilization through the use of pass-through rather than hardware-emulated devices. This method allows EC2 instances to achieve higher bandwidth, higher packet-per-second processing, and lower inter-instance latency compared to traditional device virtualization. Enhanced networking is available on all current-generation instance types and requires an AMI with supported drivers. Although most current AMIs contain supported drivers, custom AMIs may require updated drivers. For more information on enabling enhanced networking and instance support, refer to the enhanced networking documentation [11].

Loosely coupled workloads are generally not sensitive to very low-latency networking and do not require the use of a cluster placement group or the need to keep instances in the same Availability Zone or Region.

Review

There are no best practices unique to HPC for the review best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Monitoring

There are no best practices unique to HPC for the monitoring best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Tradeoffs

There are no best practices unique to HPC for the tradeoffs best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Cost Optimization Pillar

The cost optimization pillar includes the continual process of refinement and improvement of an HPC system over its entire lifecycle. From the initial design of your first proof of concept to the ongoing operation of production
workloads, adopting the practices in this paper enables you to build and operate cost-aware systems that achieve business outcomes and minimize costs, allowing your business to maximize its return on investment.

Design Principles

For HPC in the cloud, you can follow a number of principles to achieve cost optimization:

• Adopt a consumption model: Pay only for the computing resources that you consume. HPC workloads ebb and flow, providing the opportunity to reduce costs by increasing and decreasing resource capacity on an as-needed basis. For example, a low-level run-rate HPC capacity can be provisioned and reserved upfront to benefit from higher discounts, while burst requirements may be provisioned with Spot or On-Demand pricing and brought online only as needed.

• Optimize infrastructure costs for specific jobs: Many HPC workloads are part of a data processing pipeline that includes data transfer, pre-processing, computational calculation, post-processing, data transfer, and storage steps. In the cloud, rather than running everything on one large and expensive server, the computing platform is optimized at each step. For example, if a single step in a pipeline requires a large amount of memory, you only need to pay for a more expensive large-memory server for the memory-intensive application, while all other steps can run well on smaller and less expensive computing platforms. Costs are reduced by optimizing the infrastructure for each step of a workload.

• Burst workloads in the most efficient way: Savings are obtained for HPC workloads through horizontal scaling in the cloud. When scaling horizontally, many jobs or iterations of an entire workload run simultaneously for less total elapsed time. Depending on the application, horizontal scaling can be cost neutral while offering indirect cost savings by delivering results in a fraction of the time.

• Make use of Spot pricing: Amazon EC2 Spot
Instances offer spare compute capacity in AWS at steep discounts compared to On-Demand Instances. However, Spot Instances can be interrupted when EC2 needs to reclaim the capacity. Spot Instances are frequently the most cost-effective resource for flexible or fault-tolerant workloads, and the intermittent nature of HPC workloads makes them well suited to Spot Instances. The risk of Spot Instance interruption can be minimized by working with the Spot Advisor, and the interruption impact can be mitigated by changing the default interruption behavior and by using Spot Fleet to manage your Spot Instances. The need to occasionally restart a workload is easily offset by the cost savings of Spot Instances.

• Assess the tradeoff of cost versus time: Tightly coupled, massively parallel workloads are able to run on a wide range of core counts. For these applications, the run efficiency of a case typically falls off at higher core counts. If many cases of similar type and size will run, a cost-versus-turnaround-time curve can be created. Curves are specific to both the case type and the application, as scaling depends significantly on the ratio of computational to network requirements; larger workloads are capable of scaling further than smaller workloads. With an understanding of the cost-versus-turnaround-time tradeoff, time-sensitive workloads can run more quickly on more cores, while cost savings can be achieved by running on fewer cores at maximum efficiency. Workloads can fall somewhere in between when you want to balance time sensitivity and cost sensitivity.

Definition

There are four best practice areas for cost optimization in the cloud:

• Cost-effective resources
• Matching supply and demand
• Expenditure awareness
• Optimizing over time

The matching supply and demand, expenditure awareness, and optimizing over time areas are described in the AWS Well-Architected Framework whitepaper.

Best Practices

Cost-Effective Resources

HPCCOST 1: How have you evaluated available compute and storage options for your workload to optimize cost?

HPCCOST 2: How have you evaluated the tradeoffs between job completion time and cost?

Using the appropriate instances, resources, and features for your system is key to cost management. The instance choice may increase or decrease the overall cost of running an HPC workload. For example, a tightly coupled HPC workload might take five hours to run on a cluster of several smaller servers, while a cluster of fewer, larger servers may cost double per hour but compute the result in one hour, saving money overall. The choice of storage can also impact cost. Consider the potential tradeoff between job turnaround and cost optimization, and test workloads with different instance sizes and storage options to optimize cost.

AWS offers a variety of flexible and cost-effective pricing options to acquire instances from EC2 and other services in a way that best fits your needs. On-Demand Instances allow you to pay for compute capacity by the hour with no minimum commitments required. Reserved Instances allow you to reserve capacity and offer savings relative to On-Demand pricing. With Spot Instances, you can leverage unused Amazon EC2 capacity for additional savings relative to On-Demand pricing. A Well-Architected system uses the most cost-effective resources.

You can also reduce costs by using managed services for pre-processing and post-processing. For example, rather than maintaining servers to store and post-process completed run data, the data can be stored on Amazon S3 and then post-processed with Amazon EMR or AWS Batch.

Many AWS services provide features that further reduce your costs. For example, Auto Scaling is integrated with EC2 to automatically launch and terminate instances based on workload demand. FSx for Lustre natively integrates with S3 and presents the entire contents of an S3 bucket as a Lustre file system. This allows you to optimize
your storage costs by provisioning a minimal Lustre file system for your immediate workload while maintaining your long-term data in cost-effective S3 storage. S3 provides different storage classes so that you can use the most cost-effective class for your data; the Glacier and Glacier Deep Archive storage classes enable you to archive data at the lowest cost.

Experimenting with different instance types, storage requirements, and architectures can minimize costs while maintaining desirable performance.

Matching Supply and Demand

There are no best practices unique to HPC for the matching supply and demand best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Expenditure Awareness

There are no best practices unique to HPC for the expenditure awareness best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Optimizing Over Time

There are no best practices unique to HPC for the optimizing over time best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Conclusion

This lens provides architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems for High Performance Computing workloads in the cloud. We covered prototypical HPC architectures and overarching HPC design principles. We revisited the five Well-Architected pillars through the lens of HPC, providing you with a set of questions to help you review an existing or proposed HPC architecture. Applying the Framework to your architecture helps you build stable and efficient systems, allowing you to focus on running HPC applications and pushing the boundaries of your field.

Contributors

The following individuals and organizations contributed to this document:

• Aaron
Bucher, HPC Specialist Solutions Architect, Amazon Web Services
• Omar Shorbaji, Global Solutions Architect, Amazon Web Services
• Linda Hedges, HPC Application Engineer, Amazon Web Services
• Nina Vogl, HPC Specialist Solutions Architect, Amazon Web Services
• Sean Smith, HPC Software Development Engineer, Amazon Web Services
• Kevin Jorissen, Solutions Architect – Climate and Weather, Amazon Web Services
• Philip Fitzsimons, Sr. Manager, Well-Architected, Amazon Web Services

Further Reading

For additional information, see the following:

• AWS Well-Architected Framework [12]
• https://aws.amazon.com/hpc
• https://d1.awsstatic.com/whitepapers/Intro_to_HPC_on_AWS.pdf
• https://d1.awsstatic.com/whitepapers/optimizing-electronic-design-automation-eda-workflows-on-aws.pdf
• https://aws.amazon.com/blogs/compute/real-world-aws-scalability/

Document Revisions

Date - Description
December 2019 - Minor updates
November 2018 - Minor updates
November 2017 - Original publication

Notes

1 https://aws.amazon.com/well-architected
2 https://d0.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf
3 https://aws.amazon.com/batch/
4 https://aws.amazon.com/ec2/
5 https://aws.amazon.com/ec2/spot/
6 https://aws.amazon.com/message-queue
7 https://aws.amazon.com/ec2/instance-types/#instance-type-matrix
8 https://aws.amazon.com/intel/
9 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/processor_state_control.html
10 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html#ebs-optimization-support
11 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html
12 https://aws.amazon.com/well-architected
13 https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html
,General,consultant,Best Practices
AWS_WellArchitected_Framework__IoT_Lens,"Archived: AWS IoT Lens, AWS Well-Architected Framework, December 2019. This paper has been archived. The latest version is now available at:
https://docs.aws.amazon.com/wellarchitected/latest/iot-lens/welcome.html

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
Definitions
Design and Manufacturing Layer
Edge Layer
Provisioning Layer
Communication Layer
Ingestion Layer
Analytics Layer
Application Layer
General Design Principles
Scenarios
Device Provisioning
Device Telemetry
Device Commands
Firmware Updates
The Pillars of the Well-Architected Framework
Operational Excellence Pillar
Security Pillar
Reliability Pillar
Performance Efficiency Pillar
Cost Optimization Pillar
Conclusion
Contributors
Document Revisions

Abstract

This whitepaper describes the AWS IoT Lens for the AWS Well-Architected Framework, which enables customers to review and improve their cloud-based architectures and better understand the business impact of their design decisions. The document describes general design principles, as well as specific best practices and guidance, for the five pillars of the Well-Architected Framework.

Introduction

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems
on AWS. Using the Framework allows you to learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. The Framework provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. We believe that having well-architected systems greatly increases the likelihood of business success.

In this "Lens," we focus on how to design, deploy, and architect your IoT (Internet of Things) workloads in the AWS Cloud. To implement a well-architected IoT application, you must follow well-architected principles, starting from the procurement of connected physical assets (things) through to the eventual decommissioning of those same assets in a secure, reliable, and automated fashion. In addition to AWS Cloud best practices, this document also articulates the impact, considerations, and recommendations for connecting physical assets to the internet. This document only covers IoT-specific workload details from the Well-Architected Framework. We recommend that you read the AWS Well-Architected Framework whitepaper and consider the best practices and questions for other lenses.

This document is intended for those in technology roles, such as chief technology officers (CTOs), architects, developers, embedded engineers, and operations team members. After reading this document, you will understand AWS best practices and strategies for IoT applications.

Definitions

The AWS Well-Architected Framework is based on five pillars: operational excellence, security, reliability, performance efficiency, and cost optimization. When architecting technology solutions, you must make informed tradeoffs between pillars based upon your business context. For IoT workloads, AWS provides multiple services that allow you to design robust architectures for your applications.

Internet of Things (IoT) applications are composed of many devices (or things) that securely connect and interact with complementary edge-based and
cloud-based components to deliver business value. IoT applications gather, process, analyze, and act on data generated by connected devices. This section presents an overview of the AWS components that are used throughout this document to architect IoT workloads. There are seven distinct logical layers to consider when building an IoT workload:

• Design and manufacturing layer
• Edge layer
• Provisioning layer
• Communication layer
• Ingestion layer
• Analytics layer
• Application layer

Design and Manufacturing Layer

The design and manufacturing layer consists of product conceptualization, business and technical requirements gathering, prototyping, module and product layout and design, component sourcing, and manufacturing. Decisions made in each phase impact the next logical layers of the IoT workload described below. For example, some IoT device creators prefer to have a common firmware image burned and tested by the contract manufacturer. This decision partly determines what steps are required during the provisioning layer. You may go a step further and burn a unique certificate and private key to each device during manufacturing. This decision can impact the communication layer, since the type of credential can impact the subsequent selection of network protocols. If the credential never expires, it can simplify the communication and provisioning layers, at the possible expense of increased data-loss risk due to compromise of the issuing certificate authority.

Edge Layer

The edge layer of your IoT workload consists of the physical hardware of your devices, the embedded operating system that manages the processes on your device, and the device firmware, which is the software and instructions programmed onto your IoT devices. The edge is responsible for sensing and acting on other peripheral devices. Common use cases are reading sensors connected to an edge device or changing the state of a peripheral
based on a user action, such as turning on a light when a motion sensor is activated.

AWS IoT Device SDKs simplify using AWS IoT Core with your devices and applications, with an API tailored to your programming language or platform. Amazon FreeRTOS is a real-time operating system for microcontrollers that lets you program small, low-power edge devices while leveraging memory-efficient, secure, embedded libraries. AWS IoT Greengrass is a software component that extends the Linux operating system of your IoT devices. AWS IoT Greengrass allows you to run local MQTT routing between devices, data caching, AWS IoT shadow sync, local AWS Lambda functions, and machine learning algorithms.

Provisioning Layer

The provisioning layer of your IoT workload consists of the Public Key Infrastructure (PKI) used to create unique device identities and the application workflow that provides configuration data to the device. The provisioning layer is also involved with ongoing maintenance and eventual decommissioning of devices over time. IoT applications need a robust and automated provisioning layer so that devices can be added to and managed by your IoT application in a frictionless way. When you provision IoT devices, you must install a unique cryptographic credential onto them. By using X.509 certificates, you can implement a provisioning layer that securely creates a trusted identity for your device, which can be used to authenticate and authorize against your communication layer. X.509 certificates are issued by a trusted entity called a certificate authority (CA). While X.509 certificates do consume resources on constrained devices due to memory and processing requirements, they are an ideal identity mechanism due to their operational scalability and widespread support by standard network protocols. AWS Certificate Manager Private CA helps you automate the process of managing the lifecycle of private certificates for IoT devices
using APIs. Private certificates, such as X.509 certificates, provide a secure way to give a device a long-term identity that can be created during provisioning and used to identify and authorize device permissions against your IoT application. AWS IoT Just-in-Time Registration (JITR) enables you to programmatically register devices to be used with managed IoT platforms such as AWS IoT Core. With Just-in-Time Registration, when devices first connect to your AWS IoT Core endpoint, you can automatically trigger a workflow that determines the validity of the certificate identity and what permissions it should be granted.

Communication Layer

The communication layer handles connectivity, message routing among remote devices, and routing between devices and the cloud. The communication layer lets you establish how IoT messages are sent and received by devices, and how devices represent and store their physical state in the cloud.

AWS IoT Core helps you build IoT applications by providing a managed message broker that supports the use of the MQTT protocol to publish and subscribe to IoT messages between devices.

The AWS IoT Device Registry helps you manage and operate your things. A thing is a representation of a specific device or logical entity in the cloud. Things can also have custom-defined static attributes that help you identify, categorize, and search for your assets once deployed.

With the AWS IoT Device Shadow Service, you can create a data store that contains the current state of a particular device. The Device Shadow Service maintains a virtual representation of each device you connect to AWS IoT as a distinct device shadow. Each device's shadow is uniquely identified by the name of the corresponding thing.

With Amazon API Gateway, your IoT applications can make HTTP requests to control your IoT devices. IoT applications require API interfaces for internal systems, such as dashboards
for remote technicians, and external systems, such as a home consumer mobile application. With Amazon API Gateway, you can create common API interfaces without provisioning and managing the underlying infrastructure.

Ingestion Layer

A key business driver for IoT is the ability to aggregate all the disparate data streams created by your devices and transmit the data to your IoT application in a secure and reliable manner. The ingestion layer plays a key role in collecting device data while decoupling the flow of data from the communication between devices.

With the AWS IoT rules engine, you can build IoT applications such that your devices can interact with AWS services. AWS IoT rules are analyzed, and actions are performed, based on the MQTT topic stream a message is received on.

Amazon Kinesis is a managed service for streaming data, enabling you to get timely insights and react quickly to new information from IoT devices. Amazon Kinesis integrates directly with the AWS IoT rules engine, creating a seamless way of bridging from a lightweight device protocol such as MQTT to your internal IoT applications that use other protocols.

Similar to Kinesis, Amazon Simple Queue Service (Amazon SQS) should be used in your IoT application to decouple the communication layer from your application layer. Amazon SQS enables an event-driven, scalable ingestion queue when your application needs to process IoT messages once and message order is not required.

Analytics Layer

One of the benefits of implementing IoT solutions is the ability to gain deep insights and data about what's happening in the local/edge environment. A primary way of realizing contextual insights is by implementing solutions that can process and perform analytics on IoT data.

Storage Services

IoT workloads are often designed to generate large quantities of data. Ensure that this discrete data is transmitted, processed, and consumed securely while
being stored durably. Amazon S3 is object-based storage engineered to store and retrieve any amount of data from anywhere on the internet. With Amazon S3, you can build IoT applications that store large amounts of data for a variety of purposes: regulatory, business evolution, metrics, longitudinal studies, analytics, machine learning, and organizational enablement. Amazon S3 gives you a broad range of flexibility in the way you manage data, not just for cost optimization and latency, but also for access control and compliance.

Analytics and Machine Learning Services

After your IoT data reaches a central storage location, you can begin to unlock the full value of IoT by implementing analytics and machine learning on device behavior. With analytics systems, you can begin to operationalize improvements in your device firmware, as well as your edge and cloud logic, by making data-driven decisions based on your analysis. With analytics and machine learning, IoT systems can implement proactive strategies, like predictive maintenance or anomaly detection, to improve the efficiencies of the system.

AWS IoT Analytics makes it easy to run sophisticated analytics on volumes of IoT data. AWS IoT Analytics manages the underlying IoT data store while you build different materialized views of your data using your own analytical queries or Jupyter notebooks.

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and customers pay only for the queries that they run.

Amazon SageMaker is a fully managed platform that enables you to quickly build, train, and deploy machine learning models in the cloud and at the edge layer. With Amazon SageMaker, IoT architectures can develop a model of historical device telemetry in order to infer future behavior.

Application Layer

AWS IoT provides several ways to ease the way cloud
native applications consume data generated by IoT devices. These connected capabilities include features from serverless computing, relational databases to create materialized views of your IoT data, and management applications to operate, inspect, secure, and manage your IoT operations.

Management Applications

The purpose of management applications is to create scalable ways to operate your devices once they are deployed in the field. Common operational tasks, such as inspecting the connectivity state of a device, ensuring device credentials are configured correctly, and querying devices based on their current state, must be in place before launch so that your system has the required visibility to troubleshoot applications.

AWS IoT Device Defender is a fully managed service that audits your device fleets, detects abnormal device behavior, alerts you to security issues, and helps you investigate and mitigate commonly encountered IoT security issues.

AWS IoT Device Management eases the organizing, monitoring, and managing of IoT devices at scale. At scale, customers are managing fleets of devices across multiple physical locations. AWS IoT Device Management enables you to group devices for easier management. You can also enable real-time search indexing against the current state of your devices through Device Management Fleet Indexing. Both device groups and Fleet Indexing can be used with over-the-air (OTA) updates when determining which target devices must be updated.

User Applications

In addition to management applications, other internal and external systems need different segments of your IoT data for building different applications. To support end-consumer views, business operational dashboards, and the other net-new applications you build over time, you will need several other technologies that can receive the required information from your connectivity and ingestion layer and format it to be used by other systems.
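As a concrete illustration of formatting ingested device data for other systems, the sketch below expands a lean telemetry payload with static metadata before writing it to a view store. This is a minimal, hypothetical example: the registry contents, field names, and `format_for_dashboard` helper are assumptions for illustration, not an AWS schema or API.

```python
import json

# Hypothetical static metadata, as might be kept in a device registry or
# database; the layout here is illustrative only.
DEVICE_REGISTRY = {
    "thermostat-0001": {"building": "plant-7", "model": "t100", "units": "celsius"},
}

def format_for_dashboard(raw_message: str) -> dict:
    """Expand a lean device payload into a record a dashboard view could consume."""
    payload = json.loads(raw_message)
    meta = DEVICE_REGISTRY.get(payload["id"], {})
    return {
        "device_id": payload["id"],
        "temperature": payload["t"],  # short field names keep the radio payload small
        "units": meta.get("units", "unknown"),
        "building": meta.get("building", "unknown"),
        "model": meta.get("model", "unknown"),
    }
```

For example, the two-field message `{"id": "thermostat-0001", "t": 21.5}` becomes a fully labeled record; the device transmits only the lean fields, and the cloud supplies the rest.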
Database Services – NoSQL and SQL

While a data lake can function as a landing zone for all of your unformatted IoT-generated data, to support the formatted views on top of your IoT data you need to complement your data lake with structured and semi-structured data stores. For these purposes, you should leverage both NoSQL and SQL databases. These types of databases enable you to create different views of your IoT data for distinct end users of your application.

Amazon DynamoDB is a fast and flexible NoSQL database service for IoT data. With IoT applications, customers often require flexible data models with reliable performance and automatic scaling of throughput capacity.

With Amazon Aurora, your IoT architecture can store structured data in a performant and cost-effective open-source database. When your data needs to be accessible to other IoT applications for predefined SQL queries, relational databases provide another mechanism for decoupling the device stream of the ingestion layer from your eventual business applications, which need to act on discrete segments of your data.

Compute Services

Frequently, IoT workloads require application code to be executed when the data is generated, ingested, or consumed. Regardless of when compute code needs to be executed, serverless compute is a highly cost-effective choice. Serverless compute can be leveraged from the edge to the core, and from the core to applications and analytics.

AWS Lambda allows you to run code without provisioning or managing servers. Due to the scale of ingestion for IoT workloads, AWS Lambda is an ideal fit for running stateless, event-driven IoT applications on a managed platform.

General Design Principles

The Well-Architected Framework identifies the following set of design principles to facilitate good design in the cloud with IoT:

• Decouple ingestion from processing: In IoT applications, the ingestion layer must be a
highly scalable platform that can handle a high rate of streaming device data. By decoupling the fast rate of ingestion from the processing portion of your application through the use of queues, buffers, and messaging services, your IoT application can make several decisions without impacting devices, such as the frequency at which it processes data or the type of data it is interested in.

• Design for offline behavior: Due to things like connectivity issues or misconfigured settings, devices may go offline for much more extended periods of time than anticipated. Design your embedded software to handle extended periods of offline connectivity, and create metrics in the cloud to track devices that are not communicating on a regular timeframe.

• Design lean data at the edge and enrich in the cloud: Given the constrained nature of IoT devices, the initial device schema will be optimized for storage on the physical device and efficient transmission from the device to your IoT application. For this reason, unformatted device data will often not be enriched with static application information that can be inferred from the cloud. For these reasons, as data is ingested into your application, you should prefer to first enrich the data with human-readable attributes, deserialize or expand any fields that the device serialized, and then format the data in a data store that is tuned to support your application's read requirements.

• Handle personalization: Devices that connect to the edge or cloud via Wi-Fi must receive the access point name and network password as one of the first steps performed when setting up the device. This data is usually infeasible to write to the device during manufacturing, since it's sensitive and site-specific, or from the cloud, since the device isn't connected yet. These factors frequently make personalization data distinct from the device client certificate and private key, which are conceptually upstream, and from cloud-provided firmware and configuration updates, which
are conceptually downstream. Supporting personalization can impact design and manufacturing, since it may mean that the device itself requires a user interface for direct data input, or the need to provide a smartphone application to connect the device to the local network.

• Ensure that devices regularly send status checks: Even if devices are regularly offline for extended periods of time, ensure that the device firmware contains application logic that sets a regular interval to send device status information to your IoT application. Devices must be active participants in ensuring that your application has the right level of visibility. Sending this regularly occurring IoT message ensures that your IoT application gets an updated view of the overall status of a device and can create processes for when a device does not communicate within its expected period of time.

Scenarios

This section addresses common scenarios related to IoT applications, with a focus on how each scenario impacts the architecture of your IoT workload. These examples are not exhaustive, but they encompass common patterns in IoT. We present a background on each scenario, general considerations for the design of the system, and a reference architecture showing how the scenarios should be implemented.

Device Provisioning

In IoT, device provisioning is composed of several sequential steps. The most important aspect is that each device must be given a unique identity and then subsequently authenticated by your IoT application using that identity. As such, the first step in provisioning a device is to install an identity. The decisions you make in device design and manufacturing determine whether the device has a production-ready firmware image and/or unique client credential by the time it reaches the customer. Your decisions determine whether there are additional provisioning-time steps that must be performed before a production device identity can be
installed. Use X.509 client certificates in IoT for your applications; they tend to be more secure and easier to manage at scale than static passwords. In AWS IoT Core, the device is registered using its certificate along with a unique thing identifier. The registered device is then associated with an IoT policy. An IoT policy gives you the ability to create fine-grained permissions per device. Fine-grained permissions ensure that only one device has permission to interact with its own MQTT topics and messages. This registration process ensures that a device is recognized as an IoT asset and that the data it generates can be consumed through AWS IoT by the rest of the AWS ecosystem. To provision a device, you must enable automatic registration and associate a provisioning template or an AWS Lambda function with the initial device provisioning event. This registration mechanism relies on the device receiving a unique certificate during provisioning (which can happen either during or after manufacturing), which is used to authenticate to the IoT application, in this case AWS IoT. One advantage of this approach is that the device can be transferred to another entity and be reprovisioned, allowing the registration process to be repeated with the new owner's AWS IoT account details.

Figure 1: Registration Flow

1. Set up the manufacturing device identifier in a database.
2. The device connects to API Gateway and requests registration from the CPM. The request is validated.
3. Lambda requests X.509 certificates from your private certificate authority (CA).
4. Your provisioning system registers your CA with AWS IoT Core.
5. API Gateway passes the device credentials to the device.
6. The device initiates the registration workflow with AWS IoT Core.

Device Telemetry

There are many use cases (such as industrial IoT) where the value of IoT is in
collecting telemetry on how a machine is performing. For example, this data can be used to enable predictive maintenance, preventing costly unforeseen equipment failures. Telemetry must be collected from the machine and uploaded to an IoT application. Another benefit of sending telemetry is the ability of your cloud applications to use this data for analysis and to interpret optimizations that can be made to your firmware over time.

Telemetry data is read-only data that is collected and transmitted to the IoT application. Since telemetry data is passive, ensure that the MQTT topic for telemetry messages does not overlap with any topics that relate to IoT commands. For example, a telemetry topic could be data/device/sensortype, where any MQTT topic that begins with "data" is considered a telemetry topic. From a logical perspective, we have defined several scenarios for capturing and interacting with device data telemetry.

Figure 2: Options for capturing telemetry
indus trial vertical is to use remote commands to request specific data from a piece of equipment A n example usage in the smart home vertical is to use remote commands to schedule an alarm system remotely With AWS IoT Core you can implement commands using MQT T topics or the AWS IoT Device Shadow to send commands to a device and receive an acknowledgment when a device has executed the command Use the Device Shadow over MQTT topics for implementing commands The Device Shadow has several benefits over using standard MQTT topics such as a clientToken to track the origin of a request version numbers for managing conflict resolution and the abilit y to store commands in the cloud in the event that a device is offline and unable to receive the command when it is issued The device’s shadow is commonly used in cases where a command needs to be persisted in the cloud even if the device is currently not online When the device is back online the device requests the latest shadow information and executes the command ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 13 Figure 3: Using a message broker to send commands to a device AWS IoT Device Shadow Service IoT solutions that use the Device Shadow ser vice in AWS IoT Core manage command requests in a reliable scalable and straightforward fashion The Device Shadow service follows a prescriptive approach to both the management of device related state and how the state changes are communicated This app roach describes how the Device Shadows service uses a JSON document to store a device's current state desired future state and the difference between current and desired states Figure 4: Using Device Shadow with devices 1 A device reports initial devi ce state by publishing that state as a message to the update topic deviceID/shadow/update 2 The Device Shadow reads the message from the topic and records the device state in a persistent data store ArchivedAmazon Web Services AWS Well Architected Framework — IoT 
Lens 14 3 A device subscribes to the delta messaging topic deviceId/shadow/update/delta upon which device related state change messages will arrive 4 A component of the solution publishes a desired state message to the topic deviceID/shadow/update and the Device Shadow tracking this device records the desired device state in a persistent data store 5 The Device Shadow publishes a delta message to the topic deviceId/shadow/update/delta and the Message Broker sends the message to the device 6 A device receives the delta message and performs the desired state changes 7 A device publishes an acknowledgment message reflecting the new state to the update topic deviceID/shadow/update and the Device Shadow tracking this device records the new state in a persistent data store 8 The Device Shadow publishes a message to the deviceId/shad ow/update/accepted topic 9 A component of the solution can now request the updated state from the Device Shadow Firmware Updates All IoT solutions must allow device firmware updates Supporting firmware upgrades without human intervention is critical for s ecurity scalability and delivering new capabilities AWS IoT Device Management provides a secure and easy way for you to manage IoT deployments including executing and tracking the status of firmware updates AWS IoT Device Management uses the MQTT protocol with AWS IoT message broker and AWS IoT Jobs to send firmware update commands to devices as well as to receive the status of those firmware updates over time An IoT solution must implement firmware updates using AWS IoT Jobs shown in the followi ng diagram to deliver this functionality ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 15 Figure 5: Updating fir mware on devices 1 A device subscribes to the IoT job notification topic deviceId/jobs/notify next upon which IoT job notification messages will arrive 2 A device publishes a message to deviceId/jobs/start next to start the next job and get the next job its job 
document and other details including any state saved in statusDetails 3 The AWS IoT Jobs service retrieves the next job document for the specific device and sends this document on the subscribed topic deviceId/jobs/start next/accepted 4 A device performs the actions specified by the job document using the deviceId/jobs/jobId/update MQTT topic to report on the progress of the job 5 During the upgrade process a device downloads firmware using a presigned URL for Amazon S3 Use code signing to sign the firmware when uploading to Amazon S3 By code signing your firmware the end device can verify the authenticity of the firmware before installing Amazon FreeRTOS devices can downloa d the firmware image directly over MQTT to eliminate the need for a separate HTTPS connection 6 The device publishes an update status message to the job topic deviceId/jobs/jobId/update reporting success or failure 7 Because this job's execution status has c hanged to final state the next IoT job available for execution (if any) will change ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 16 The Pillars of the Well Architected Framework This section describes each of the pillars and includes definitions best practices questions considerations and essenti al AWS services that are relevant when architecting solutions for AWS IoT Operational Excellence Pillar The Operational Excellence pillar includes operational practices and procedures used to manage production workloads Operational excellence comprises how planned changes are executed as well as responses to unexpected operational events Change execution and responses should be automated All processes and procedures of operational excellence must be documented tested and regularly reviewed Design Principles In addition to the overall Well Architected Framework operational excellence design principles there are five design principles for operational excellence for IoT in the cloud: • Plan for device provisioning : Design 
your device provisioning proce ss to create your initial device identity in a secure location Implement a public key infrastructure (PKI) that is responsible for distributing unique certificates to IoT devices As described above selection of crypto hardware with a pre generated priva te key and certificate eliminates the operational cost of running a PKI Otherwise PKI can be done offline with a Hardware Security Module (HSM) during the manufacturing process or during device bootstrapping Use technologies that can manage the Certifi cate Authority (CA) and HSM in the cloud ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 17 • Implement device bootstrapping : Devices that support personalization by a technician (in the industrial vertical) or user (in the consumer vertical) can also undergo provisioning For example a smartphone applica tion that interacts with the device over Bluetooth LE and with the cloud over Wi Fi You must d esign the ability for devices to programmatically update their configuration information using a globally distributed bootstrap API A bootstrapping design ensur es that you can programmatically send the device new configuration settings through the cloud These changes should include settings such as which IoT endpoint to communicate with how frequently to send an overall status for the device and any updated se curity settings such as server certificates The process of bootstrapping goes beyond initial provisioning and plays a critical role in device operations by providing a programmatic way to update device configuration through the cloud • Document device com munication patterns : In an IoT application device behavior is documented by hand at the hardware level In the cloud an operations team must formulate how the behavior of a device will scale once deployed to a fleet of devices A cloud engineer should re view the device communication patterns and extrapolate the total expected inbound and outbound traffic of device 
data and determine the expected infrastructure necessary in the cloud to support the entire fleet of devices During operational planning thes e patterns should be measured using device and cloud side metrics to ensure that expected usage patterns are met in the system • Implement over the air (OTA) updates : In order to benefit from long term investments in hardware you must be able to continuously update the firmware on the devices with new capabilities In the cloud you can apply a robust firmware update process that allows you to target specific devices for firmware updates roll out changes over time track success and failures of updates and have the ability to roll back or put a stop to firmware changes based on KPIs • Implement functional testing on physical assets : IoT device hardware and firmware must undergo rigorous testing before being deployed in the field Acceptance and functional testing are critical for your path to production The goal of functional testing is to run your hardware components embedded firmware and device application software through rigorous testing scenarios such as intermittent or reduced connectivity or failure of peripheral sensors while profiling the performance of the hardware The tests ensure that your IoT device will perform as expected when deployed ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 18 Definition There are three best practice areas for operational excellence in the cloud: 1 Preparation 2 Operation 3 Evolution In addition to what is covered by the Well Architected Framework concerning process runbooks and game days there are specific areas you should review to drive operational excellence within IoT applications Best Practices Preparation For IoT applications the need to procure provision test and deploy hardware in various environments means that the preparati on for operational excellence must be expanded to cover aspects of your deployment that will primarily run on physical devices and 
will not run in the cloud Operational metrics must be defined to measure and improve business outcomes and then determine if devices should generate and send any of those metrics to your IoT application You also must plan for operational excellence by creating a streamlined process of functional testing that allows you to simulate how devices may behave in their various enviro nments It is essential that you ask how to ensure that your IoT workloads are resilient to failures how devices can self recover from issues without human intervention and how your cloud based IoT application will scale to meet the needs of an ever increasing load of connected hardware When using an IoT platform you have the opportunity to use additional components/tools for handling IoT operations These tools include services that allow you to monitor and inspect device behavior capture connectivi ty metrics provision devices using unique identities and perform long term analysis on top of device data ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 19 IOTOPS 1 What factors drive your operational priorities? IOTOPS 2 How do you ensure that you are ready to support the operations of devices of your IoT workload? IOTOPS 3 How are you ensuring that newly provisioned devices have the required operational prerequisites? 
Logical security for IoT and data centers is similar in that both involve predominantly machine-to-machine authentication. They differ, however, in that IoT devices are frequently deployed to environments that cannot be assumed to be physically secure, and IoT applications commonly require sensitive data to traverse the internet. Because of these considerations, it is vital to have an architecture that determines how devices will securely gain an identity, continuously prove that identity, be seeded with the appropriate level of metadata, be organized and categorized for monitoring, and be enabled with the right set of permissions. For successful and scalable IoT applications, the management processes should be automated, data driven, and based on previous, current, and expected device behavior. IoT applications must support incremental rollout and rollback strategies. By making this part of your operational efficiency plan, you will be equipped to launch a fault-tolerant, efficient IoT application.
In AWS IoT, you can use multiple features to provision your individual device identities, signed by your CA, to the cloud. This path involves provisioning devices with identities and then using just-in-time provisioning (JITP), just-in-time registration (JITR), or Bring Your Own Certificate (BYOC) to securely register your device certificates with the cloud. Using AWS services including Route 53, Amazon API Gateway, Lambda, and DynamoDB, you can create a simple API interface to extend the provisioning process with device bootstrapping.

Operate
In IoT, operational health goes beyond the operational health of the cloud application. It extends to the ability to measure, monitor, troubleshoot, and remediate devices that are part of your application but are remotely deployed in locations that may be difficult or impossible to troubleshoot locally. This requirement of remote operations must be considered at design and implementation time in
order to ensure your ability to inspect, analyze, and act on metrics sent from these remote devices. In IoT, you must establish the right baseline metrics of behavior for your devices, be able to aggregate and infer issues that are occurring across devices, and have a robust remediation plan that is not only executed in the cloud but is also part of your device firmware. You must implement a variety of device simulation canaries that continually test common device interactions directly against your production system. Device canaries assist in narrowing down the potential areas to investigate when operational metrics are not met, and they can be used to raise preemptive alarms when the canary metrics fall below your expected SLA.
In AWS, you can create an AWS IoT thing for each physical device in the device registry of AWS IoT Core. By creating a thing in the registry, you can associate metadata with devices, group devices, and configure security permissions for devices. An AWS IoT thing should be used to store static data in the thing registry, while dynamic device data is stored in the thing's associated device shadow. A device's shadow is a JSON document that is used to store and retrieve state information for a device. Along with creating a virtual representation of your device in the device registry, as part of the operational process you must create thing types that encapsulate the similar static attributes that define your IoT devices. A thing type is analogous to the product classification for a device. The combination of thing, thing type, and device shadow can act as your first entry point for storing important metadata that will be used for IoT operations.
In AWS IoT, thing groups allow you to manage devices by category. Groups can also contain other groups, allowing you to build hierarchies. With organizational structure in your IoT application, you can quickly identify and act on related devices by device group. Leveraging the cloud allows you to automate the addition or removal of
devices from groups based on your business logic and the lifecycle of your devices.
In IoT, your devices create telemetry and diagnostic messages that are not stored in the registry or the device's shadow. Instead, these messages are delivered to AWS IoT on a number of MQTT topics. To make this data actionable, use the AWS IoT rules engine to route error messages to your automated remediation process and to add diagnostic information to IoT messages. An example of how you would route a message containing an error status code to a custom workflow is below. The rules engine inspects the status of a message and, if it is an error, starts the Step Functions workflow to remediate the device based on the error message detail payload.
{
  "sql": "SELECT * FROM 'command/iot/response' WHERE code = 'error'",
  "ruleDisabled": false,
  "description": "Error Handling Workflow",
  "awsIotSqlVersion": "2016-03-23",
  "actions": [{
    "stepFunctions": {
      "executionNamePrefix": "errorExecution",
      "stateMachineName": "errorStateMachine",
      "roleArn": "arn:aws:iam::123456789012:role/aws_iot_step_functions"
    }
  }]
}
To support operational insights into your cloud application, generate dashboards for all metrics collected from the device broker of AWS IoT Core. These metrics are available through CloudWatch Metrics. In addition, CloudWatch Logs contain information such as total successful messages, inbound messages, outbound connectivity successes, and errors. To augment your production device deployments, implement IoT simulations on Amazon Elastic Compute Cloud (Amazon EC2) as device canaries across several AWS Regions. These device canaries are responsible for mirroring several of your business use cases, such as simulating error conditions like long-running transactions, sending telemetry, and implementing control operations. The device simulation framework must output extensive metrics, including but not limited to successes,
errors, latency, and device ordering, and then transmit all the metrics to your operations system. In addition to custom dashboards, AWS IoT provides fleet-level and device-level insights driven from the thing registry and Device Shadow service through search capabilities such as AWS IoT Fleet Indexing. The ability to search across your fleet eases the operational overhead of diagnosing IoT issues, whether they occur at the device level or fleet-wide.

Evolve
IOTOPS 4: How do you evolve your IoT application with minimum impact to downstream IoT devices?
IoT solutions frequently involve a combination of low-power devices, remote locations, low bandwidth, and intermittent network connectivity. Each of these factors poses communication challenges, including for upgrading firmware. Therefore, it's important to incorporate and implement an IoT update process that minimizes the impact to downstream devices and operations. In addition to reducing downstream impact, devices must be resilient to common challenges that exist in local environments, such as intermittent network connectivity and power loss.
Use a combination of grouping IoT devices for deployment and staggering firmware upgrades over a period of time. Monitor the behavior of devices as they are updated in the field, and proceed only after a percentage of devices have upgraded successfully. Use AWS IoT Device Management for creating deployment groups of devices and delivering over-the-air (OTA) updates to specific device groups. During upgrades, continue to collect all of the CloudWatch Logs, telemetry, and IoT device job messages, and combine that information with the KPIs used to measure overall application health and the performance of any long-running canaries. Before and after firmware updates, perform a retrospective analysis of operations metrics, with participants spanning the business, to determine opportunities and methods for improvement. Services
like AWS IoT Analytics and AWS IoT Device Defender are used to track anomalies in overall device behavior and to measure deviations in performance that may indicate an issue in the updated firmware.

Key AWS Services
Several services can be used to drive operational excellence for your IoT application. The AWS Device Qualification Program helps you select hardware components that have been designed and tested for AWS IoT interoperability. Qualified hardware can get you to market faster and reduce operational friction. AWS IoT Core offers features used to manage the initial onboarding of a device. AWS IoT Device Management reduces the operational overhead of performing fleet-wide operations such as device grouping and searching. In addition, Amazon CloudWatch is used for monitoring IoT metrics, collecting logs, generating alerts, and triggering responses. Other services and features that support the three areas of operational excellence are as follows:
• Preparation: AWS IoT Core supports provisioning and onboarding your devices in the field, including registering the device identity using just-in-time provisioning, just-in-time registration, or Bring Your Own Certificate. Devices can then be associated with their metadata and device state using the device registry and the Device Shadow.
• Operations: AWS IoT thing groups and Fleet Indexing allow you to quickly develop an organizational structure for your devices and search across the current metadata of your devices to perform recurring device operations. Amazon CloudWatch allows you to monitor the operational health of your devices and your application.
• Responses: AWS IoT Jobs enables you to proactively push updates, such as firmware updates or device configuration, to one or more devices. The AWS IoT rules engine allows you to inspect IoT messages as they are received by AWS IoT Core and immediately respond to the data at the most granular level. AWS IoT
Analytics and AWS IoT Device Defender enable you to proactively trigger notifications or remediation, based on real-time analysis with AWS IoT Analytics and on real-time security and data thresholds with Device Defender.

Security Pillar
The Security pillar includes the ability to protect information, systems, and assets while delivering business value.

Design Principles
In addition to the overall Well-Architected Framework security design principles, there are specific design principles for IoT security:
• Manage the device security lifecycle holistically: Data security starts at the design phase and ends with the retirement and destruction of the hardware and data. It is important to take an end-to-end approach to the security lifecycle of your IoT solution in order to maintain your competitive advantage and retain customer trust.
• Ensure least privilege permissions: Devices should all have fine-grained access permissions that limit which topics a device can use for communication. By restricting access, one compromised device will have fewer opportunities to impact any other devices.
• Secure device credentials at rest: Devices should securely store credential information at rest using mechanisms such as a dedicated crypto element or secure flash.
• Implement device identity lifecycle management: Devices maintain a device identity from creation through end of life. A well-designed identity system will keep track of a device's identity, track the validity of that identity, and proactively extend or revoke IoT permissions over time.
• Take a holistic view of data security: IoT deployments involving a large number of remotely deployed devices present a significant attack surface for data theft and privacy loss. Use a model such as the Open Trusted Technology Provider Standard to systematically review your supply chain and solution design for risk, and then apply appropriate mitigations.

Definition
There are five
best practice areas for security in the cloud:
1. Identity and access management (IAM)
2. Detective controls
3. Infrastructure protection
4. Data protection
5. Incident response
Infrastructure and data protection encompass the IoT device hardware as well as the end-to-end solution. IoT implementations require expanding your security model to ensure that devices implement hardware security best practices and that your IoT applications follow security best practices for factors such as adequately scoped device permissions and detective controls. The security pillar focuses on protecting information and systems. Key topics include confidentiality and integrity of data, identifying and managing who can do what with privilege management, protecting systems, and establishing controls to detect security events.

Best Practices

Identity and Access Management (IAM)
IoT devices are often a target because they are provisioned with a trusted identity, may store or have access to strategic customer or business data (such as the firmware itself), may be remotely accessible over the internet, and may be vulnerable to direct physical tampering. To provide protection against unauthorized access, always begin by implementing security at the device level. From a hardware perspective, there are several mechanisms that you can implement to reduce the attack surface for tampering with sensitive information on the device, such as:
• Hardware crypto modules
• Software-supported solutions, including secure flash
• Physical function modules that cannot be cloned
• Up-to-date cryptographic libraries and standards, including PKCS #11 and TLS 1.2
To secure device hardware, implement solutions such that private keys and sensitive identity material are unique to, and only stored on, the device in a secure hardware location. Implement hardware- or software-based modules that securely store and manage access to the private keys used to communicate
with AWS IoT. In addition to hardware security, IoT devices must be given a valid identity, which will be used for authentication and authorization in your IoT application. During the lifetime of a device, you will need to be able to manage certificate renewal and revocation. To handle any changes to certificate information on a device, you must first have the ability to update a device in the field. The ability to perform firmware updates on hardware is a vital underpinning of a well-architected IoT application. Through OTA updates, securely rotate device certificates, including certificate authorities, before expiry.

IOTSEC 1: How do you securely store device certificates and private keys for devices?
IOTSEC 2: How do you associate AWS IoT identities with your devices?

For example, with AWS IoT you first provision an X.509 certificate and then separately create the IoT permissions for connecting to IoT, publishing and subscribing to messages, and receiving updates. This separation of identity and permissions provides flexibility in managing your device security. During the configuration of permissions, you can ensure that any device has the right level of identity as well as the right level of access control by creating an IoT policy that restricts access to MQTT actions for each device. Ensure that each device has its own unique X.509 certificate in AWS IoT and that devices never share certificates (the one-certificate-for-one-device rule). In addition to using a single certificate per device, when using AWS IoT each device must have its own unique thing in the IoT registry, and the thing name is used as the basis for the MQTT ClientID on MQTT connect. By creating this association, where a single certificate is paired with its own thing in AWS IoT Core, you can ensure that one compromised certificate cannot inadvertently assume the identity of another device. It also eases troubleshooting and remediation when
the MQTT ClientID and the thing name match, since you can correlate any ClientID log message to the thing associated with that particular piece of communication.
To support device identity updates, use AWS IoT Jobs, a managed platform for distributing OTA communication and binaries to your devices. AWS IoT Jobs is used to define a set of remote operations that are sent to, and executed on, one or more devices connected to AWS IoT. AWS IoT Jobs by default integrates several best practices, including mutual authentication and authorization, device tracking of update progress, and fleet-wide metrics for a given update. Enable AWS IoT Device Defender audits to track device configuration and device policies and to check for expiring certificates in an automated fashion. For example, Device Defender can run audits on a scheduled basis and trigger a notification for expiring certificates. By receiving notifications of revoked certificates or certificates pending expiry, you can automatically schedule an OTA that proactively rotates the certificate.

IOTSEC 3: How do you authenticate and authorize user access to your IoT application?
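The scheduled certificate audit described above can be configured programmatically. The sketch below builds the request payload for the AWS IoT CreateScheduledAudit API; the check names are real Device Defender audit checks, while the audit name and weekly cadence are illustrative assumptions:

```python
def build_expiry_audit(audit_name="certificate-lifecycle-audit"):
    """Request payload for the CreateScheduledAudit API, covering the
    certificate checks discussed above. Name and cadence are examples."""
    return {
        "scheduledAuditName": audit_name,
        "frequency": "WEEKLY",
        "dayOfWeek": "MON",
        "targetCheckNames": [
            "DEVICE_CERTIFICATE_EXPIRING_CHECK",             # device certs near expiry
            "CA_CERTIFICATE_EXPIRING_CHECK",                 # CAs near expiry
            "REVOKED_DEVICE_CERTIFICATE_STILL_ACTIVE_CHECK", # revoked but still active
        ],
    }

audit = build_expiry_audit()
print(audit["targetCheckNames"])
```

Passing this dictionary to a boto3 `iot` client's `create_scheduled_audit` call would then let an SNS notification or Lambda function schedule the rotation OTA when a check fails.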
Although many applications focus on the thing aspect of IoT, in almost all verticals of IoT there is also a human component that needs the ability to communicate with, and receive notifications from, devices. For example, consumer IoT generally requires users to onboard their devices by associating them with an online account. Industrial IoT typically entails the ability to analyze hardware telemetry in near real time. In either case, it's essential to determine how your application will identify, authenticate, and authorize users that require the ability to interact with particular devices.
Controlling user access to your IoT assets begins with identity. Your IoT application must have in place a store (typically a database) that keeps track of a user's identity and of how a user authenticates using that identity. The identity store may include additional user attributes that can be used at authorization time (for example, user group membership). IoT device telemetry data is an example of a securable asset. By treating it as such, you can control the access each user has and audit individual user interactions.
When using AWS to authenticate and authorize IoT application users, you have several options for implementing your identity store and how that store maintains user attributes. For your own applications, use Amazon Cognito for your identity store. Amazon Cognito provides a standard mechanism to express identity and to authenticate users in a way that can be directly consumed by your app and other AWS services in order to make authorization decisions. When using AWS IoT, you can choose from several identity and authorization services, including Amazon Cognito identity pools, AWS IoT policies, and AWS IoT custom authorizers. For implementing a decoupled view of telemetry for your users, use a mobile service such as AWS AppSync or Amazon API Gateway. With both of these AWS services, you can create an abstraction layer
that decouples your IoT data stream from your user's device data notification stream. By creating a separate view of your data for your external users in an intermediary datastore, for example Amazon DynamoDB or Amazon Elasticsearch Service, you can use AWS AppSync to receive user-specific notifications based only on the allowed data in your intermediary store. In addition to using external data stores with AWS AppSync, you can define user-specific notification topics that can be used to push specific views of your IoT data to your external users. If an external user needs to communicate directly with an AWS IoT endpoint, ensure that the user identity is either an authorized Amazon Cognito federated identity that is associated with an authorized Amazon Cognito role and a fine-grained IoT policy, or uses an AWS IoT custom authorizer where the authorization is managed by your own authorization service. With either approach, associate a fine-grained policy with each user that limits what the user can connect as, publish to, subscribe to, and receive messages from over MQTT.

IOTSEC 4: How do you ensure that least privilege is applied to principals that communicate with your IoT application?
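To make the least-privilege model concrete, the helper below enumerates the reserved Device Shadow topics a single device should be limited to, assuming the thing name doubles as the MQTT ClientID as recommended above. The topic layout is the standard AWS IoT shadow scheme; the thing name is illustrative:

```python
def shadow_topics(thing_name):
    """Reserved shadow topics for one device; a least-privilege IoT policy
    would allow publish/subscribe only on topics derived like this."""
    base = f"$aws/things/{thing_name}/shadow"
    return {
        "publish": [f"{base}/update", f"{base}/get"],
        "subscribe": [
            f"{base}/update/accepted",
            f"{base}/update/rejected",
            f"{base}/get/accepted",
            f"{base}/get/rejected",
        ],
    }

topics = shadow_topics("sensor-0042")
print(topics["publish"][0])  # $aws/things/sensor-0042/shadow/update
```

Generating the allowed topic list from the thing name, rather than hand-writing it per device, is what makes the single-policy-with-variables approach described next feasible at fleet scale.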
After registering a device and establishing its identity, it may be necessary to seed additional device information needed for monitoring, metrics, telemetry, or command and control. Each resource requires its own assignment of access control rules. By reducing the actions that a device or user can take against your application, and by ensuring that each resource is secured separately, you limit the impact that can occur if any single identity or resource is used inadvertently.
In AWS IoT, create fine-grained permissions by using a consistent set of naming conventions in the IoT registry. The first convention is to use the same unique identifier for a device as both the MQTT ClientID and the AWS IoT thing name. By using the same unique identifier in all of these locations, you can easily create an initial set of IoT permissions that applies to all of your devices using AWS IoT thing policy variables. The second naming convention is to embed the unique identifier of the device into the device certificate. Continuing with this approach, store the unique identifier as the CommonName in the subject name of the certificate in order to use certificate policy variables to bind IoT permissions to each unique device credential.
By using policy variables, you can create a few IoT policies that can be applied to all of your device certificates while maintaining least privilege. For example, the IoT policy below restricts any device to connect only using the unique identifier of the device (which is stored in the common name) as its MQTT ClientID, and only if the certificate is attached to the device. This policy also restricts a device to publishing only on its individual shadow:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["iot:Connect"],
    "Resource": ["arn:aws:iot:us-east-1:123456789012:client/${iot:Certificate.Subject.CommonName}"],
    "Condition": {
      "Bool": {
        "iot:Connection.Thing.IsAttached": ["true"]
      }
    }
  },
  {
    "Effect": "Allow",
    "Action": ["iot:Publish"],
    "Resource": ["arn:aws:iot:us-east-1:123456789012:topic/$aws/things/${iot:Connection.Thing.ThingName}/shadow/update"]
  }]
}
Attach your device identity (certificate or Amazon Cognito federated identity) to the thing in the AWS IoT registry using AttachThingPrincipal.
Although these scenarios apply to a single device communicating with its own set of topics and device shadows, there are scenarios where a single device needs to act upon the state or topics of other devices. For example, you may be operating an edge application in an industrial setting, creating a home gateway to coordinate automation in the home, or allowing a user to gain access to a different set of devices based on their specific role. For these use cases, leverage a known entity, such as a group identifier or the identity of the edge gateway, as the prefix for all of the devices that communicate with the gateway. By making all of the endpoint devices use the same prefix, you can make use of wildcards ("*") in your IoT policies. This approach balances MQTT topic security with manageability:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["iot:Publish"],
    "Resource": ["arn:aws:iot:us-east-1:123456789012:topic/$aws/things/edgegateway123-*/shadow/update"]
  }]
}
In the preceding example, the IoT operator would associate the policy with the edge gateway with the identifier edgegateway123. The permissions in this policy would then allow the edge appliance to publish to the other device shadows that are managed by the edge gateway. This is accomplished by enforcing that all devices connected to the gateway have a thing name prefixed with the identifier of the gateway. For example, a downstream motion sensor would have the identifier edgegateway123-motionsensor1 and therefore can now be managed
by the edge gateway while still restricting permissions.

Detective Controls
Due to the scale of data, metrics, and logs in IoT applications, aggregation and monitoring are an essential part of a well-architected IoT application. Unauthorized users will probe for bugs in your IoT application and will look to take advantage of individual devices to gain further access into other devices, applications, and cloud resources. In order to operate an entire IoT solution, you will need to manage detective controls not only for an individual device, but for the entire fleet of devices in your application. You will need to enable several levels of logging, monitoring, and alerting to detect issues at the device level as well as at the fleet-wide level.
In a well-architected IoT application, each layer of the IoT application generates metrics and logs. At a minimum, your architecture should have metrics and logs related to the physical device, the connectivity behavior of your device, message input and output rates per device, provisioning activities, authorization attempts, and internal routing events of device data from one application to another.

IOTSEC 5: How are you analyzing application logs and metrics across cloud and devices?
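One way to turn the baseline behavior metrics above into detective controls is an AWS IoT Device Defender security profile. The sketch below assembles the request payload for the CreateSecurityProfile API; the metric names are real cloud-side Device Defender metrics, while the profile name and thresholds are illustrative assumptions:

```python
def build_security_profile(profile_name="baseline-device-behavior"):
    """Payload for iot.create_security_profile(). The thresholds below are
    examples; tune them from your own fleet's observed baseline."""
    return {
        "securityProfileName": profile_name,
        "behaviors": [
            {
                # Repeated authorization failures often indicate probing.
                "name": "excessive-auth-failures",
                "metric": "aws:num-authorization-failures",
                "criteria": {
                    "comparisonOperator": "greater-than",
                    "value": {"count": 10},
                    "durationSeconds": 300,
                },
            },
            {
                # Oversized payloads can signal a compromised or faulty device.
                "name": "oversized-messages",
                "metric": "aws:message-byte-size",
                "criteria": {
                    "comparisonOperator": "greater-than",
                    "value": {"count": 1024},
                },
            },
        ],
    }

profile = build_security_profile()
print([b["name"] for b in profile["behaviors"]])
```

Attaching the created profile to a thing group (via `attach_security_profile` on a boto3 `iot` client) then raises violations into CloudWatch or SNS, where they can feed the centralized log analysis described above.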
In AWS IoT, you can implement detective controls using AWS IoT Device Defender, CloudWatch Logs, and CloudWatch Metrics. AWS IoT Device Defender processes logs and metrics related to the device behavior and connectivity behaviors of your devices. AWS IoT Device Defender also lets you continuously monitor security metrics from devices and AWS IoT Core for deviations from what you have defined as appropriate behavior for each device. Set a default set of thresholds for when device behavior or connectivity behavior deviates from normal activity. Augment Device Defender metrics with the Amazon CloudWatch Metrics and Amazon CloudWatch Logs generated by AWS IoT Core and with Amazon GuardDuty. These service-level logs provide important insight not only into activities related to AWS IoT platform services and AWS IoT Core protocol usage, but also into the downstream applications running in AWS that are critical components of your end-to-end IoT application. All Amazon CloudWatch Logs should be analyzed centrally to correlate log information across all sources.

IOTSEC 6: How are you managing invalid identities in your IoT application?
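Handling an invalid certificate typically involves a fixed sequence of AWS IoT control-plane calls. The sketch below encodes that order using real API names (UpdateCertificate, DetachPolicy, DetachThingPrincipal, DeleteCertificate); the identifiers are illustrative, and in practice each step would be a boto3 call on an `iot` client:

```python
def revocation_plan(certificate_id, certificate_arn, thing_name, policy_name):
    """Ordered control-plane steps to take a compromised or expired
    certificate out of service. API names are real; arguments are examples."""
    return [
        # 1. Stop the certificate from authenticating new connections.
        ("UpdateCertificate", {"certificateId": certificate_id, "newStatus": "REVOKED"}),
        # 2. Remove permissions so in-flight sessions lose authorization.
        ("DetachPolicy", {"policyName": policy_name, "target": certificate_arn}),
        # 3. Unbind the certificate from its thing, then delete it.
        ("DetachThingPrincipal", {"thingName": thing_name, "principal": certificate_arn}),
        ("DeleteCertificate", {"certificateId": certificate_id}),
    ]

plan = revocation_plan(
    "abc123",
    "arn:aws:iot:us-east-1:123456789012:cert/abc123",
    "sensor-0042",
    "per-device-policy",
)
print([step for step, _ in plan])
```

Encoding the sequence as data makes it straightforward to drive the same remediation from a CRL sync job, a Device Defender audit finding, or a manual administrator action.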
Security identities are the focal point of device trust and authorization in your IoT application. It's vital to be able to manage invalid identities, such as certificates, centrally. An invalid certificate can be revoked, expired, or made inactive. As part of a well-architected application, you must have a process for capturing all invalid certificates and an automated response based on the state of the certificate trigger. In addition to the ability to capture the events of an invalid certificate, your devices should also have a secondary means of establishing secure communications with your IoT platform. By enabling a bootstrapping pattern as described previously, where two forms of identity are used for a device, you can create a reliable fallback mechanism for detecting invalid certificates and provide a way for a device or an administrator to establish trusted, secure communication for remediation.
A well-architected IoT application establishes a certificate revocation list (CRL) that tracks all revoked device certificates or certificate authorities (CAs). Use your own trusted CA for onboarding devices, and synchronize your CRL on a regular basis with your IoT application. Your IoT application must reject connections from identities that are no longer valid. With AWS, you do not need to manage your entire PKI on premises. Use AWS Certificate Manager (ACM) Private Certificate Authority to host your CA in the cloud, or work with an APN Partner to add preconfigured secure elements to your IoT device hardware specification. ACM can export revoked certificates to a file in an S3 bucket. That same file can be used to programmatically revoke certificates against AWS IoT Core.
Another state for certificates is to be near their expiry date but still valid. The client certificate must be valid for at least the service lifetime of the device. It's up to your IoT application to keep track of devices near their expiry date and perform an OTA process to
update their certificate to a new one with a later expiry, along with logging information about why the certificate rotation was required, for audit purposes.
Enable AWS IoT Device Defender audits related to certificate and CA expiry. Device Defender produces an audit log of certificates that are set to expire within 30 days. Use this list to programmatically update devices before certificates are no longer valid. You may also choose to build your own expiry store to manage certificate expiry dates and to programmatically query, identify, and trigger an OTA for device certificate replacement or renewal.

Infrastructure Protection
Design time is the ideal phase for considering security requirements for infrastructure protection across the entire lifecycle of your device and solution. By considering your devices an extension of your infrastructure, you can take into account how the entire device lifecycle impacts your design for infrastructure protection. From a cost standpoint, changes made in the design phase are less expensive than changes made later. From an effectiveness standpoint, data loss mitigations implemented at design time are likely to be more comprehensive than mitigations retrofitted later. Therefore, planning the device and solution security lifecycle at design time reduces business risk and provides an opportunity to perform upfront infrastructure security analysis before launch.
One way to approach the device security lifecycle is through supply chain analysis. For example, even a modestly sized IoT device manufacturer or solution integrator has a large number of suppliers that make up its supply chain, whether directly or indirectly. To maximize solution lifetime and reliability, ensure that you are receiving authentic components. Software is also part of the supply chain. The production firmware image for a device includes drivers and libraries from many sources, including silicon partners, open
source aggregation sites such as GitHub and SourceForge, previous first-party products, and new code developed by internal engineering. To understand the downstream maintenance and support for first-party firmware and software, you must analyze each software provider in the supply chain to determine whether it offers support and how it delivers patches. This analysis is especially important for connected devices: software bugs are inevitable and represent a risk to your customers, because a vulnerable device can be exploited remotely. Your IoT device manufacturer or solution engineering team must learn about and patch bugs in a timely manner to reduce these risks.

IOTSEC 7: How are you vetting your suppliers, contract manufacturers, and other outsource relationships?
IOTSEC 8: How are you planning the security lifecycle of your IoT devices?
IOTSEC 9: How are you ensuring timely notification of security bugs in your third-party firmware and software components?
Although there is no cloud infrastructure to manage when using AWS IoT services, there are integration points where AWS IoT Core interacts on your behalf with other AWS services. For example, the AWS IoT rules engine consists of rules that, when evaluated, can trigger downstream actions to other AWS services based on the MQTT topic stream. Since AWS IoT communicates with your other AWS resources, you must ensure that the right service role permissions are configured for your application.

Data Protection
Before architecting an IoT application, data classification, governance, and controls must be designed and documented to reflect how data can be persisted in the cloud and how data should be encrypted, whether on a device or between the devices and the cloud. Unlike traditional cloud applications, data sensitivity and governance extend to the IoT devices that are deployed in remote locations outside of your network boundary. These techniques are important because they support protecting personally identifiable data transmitted from devices and complying with regulatory obligations. During the design process, determine how hardware, firmware, and data are handled at device end-of-life.
Store long-term historical data in the cloud. Store locally on a device only a portion of current sensor readings, namely the data required to perform local operations. By storing only the minimum data required on the device, you limit the risk of unintended access. In addition to reducing local data storage, there are other mitigations that must be implemented at the end of life of a device. First, the device should offer a reset option that can restore the hardware and firmware to a default factory version. Second, your IoT application can run periodic scans for the last logon time of every device. Devices that have been offline for too long a period of time, or that are associated with inactive customer accounts, can be revoked. Third, encrypt sensitive data that must be persisted on the device using a key
that is unique to that particular device IOTSEC 10: How do you classify manage and protect your data in transit and at rest? ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 34 All traffic to and from AW S IoT must be encrypted using Transport Layer Security (TLS) In AWS IoT security mechanisms protect data as it moves between AWS IoT and other devices or AWS services In addition to AWS IoT you must implement device level security to protect not only t he device’s private key but also the data collected and processed on the device For embedded development AWS has several services that abstract components of the application layer while incorporating AWS security best practices by default on the edge F or microcontrollers AWS recommends using Amazon FreeRTOS Amazon FreeRTOS extends the FreeRTOS kernel with libraries for Bluetooth LE TCP/IP and other protocols In addition Amazon FreeRTOS contains a set of security APIs that allow you to create embedded applications that securely communicate with AWS IoT For Linux based edge gateways AWS IoT Greengrass can be used to extend cloud functionality to the edge of your network AWS IoT Greengrass implements several security features including mutual X509 certificate based authentication with connected devices AWS IAM policies and roles to manage communication permissions between AWS IoT Greengrass and cloud applications and subscriptions which are used to determine how and if data can be routed between connected devices and Greengrass core Incident Response Being prepared for incident response in IoT requires planning on how you will deal with two types of incidents in your IoT workload The first incident is an attack against an individual IoT device in an attempt to disrupt the performance or impact the device’s behavior The second incident is a larger scale IoT event such as network outages and DDoS attack In both scenarios the architecture of your IoT application play s a large role in determining how 
quickly you will be able to diagnose incidents correlate the data across the incident and then subsequently apply runbooks to the affected dev ices in an automated reliable fashion For IoT applications follow the following best practices for incident responses: • IoT devices are organized in different groups based on device attributes such as location and hardware version • IoT devices are sear chable by dynamic attributes such as connectivity status firmware version application status and device health ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 35 • OTA updates can be staged for devices and deployed over a period of time Deployment rollouts are monitored and can be automatically aborted if devices fail to maintain the appropriate KPIs • Any update process is resilient to errors and devices can recover and roll back from a failed software update • Detailed logging metrics and device telemetry are available that contain contextual informa tion about how a device is currently performing and has performed over a period of time • Fleet wide metrics monitor the overall health of your fleet and alert when operational KPIs are not met for a period of time • Any individual device that deviates from expected behavior can be quarantined inspected and analyzed for potential compromise o f the firmware and applications IOTSEC 11: How do you prepare to respond to an incident that impacts a single device or a fleet of devices? 
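The quarantine step in the last bullet above can be sketched in code. This is a minimal illustration, not the documented AWS procedure: the "Quarantine" group name and the shape of the restrictive policy are assumptions, and in practice the dicts below would be passed to boto3's `iot` client (`add_thing_to_thing_group`, `attach_policy`).

```python
def quarantine_request(thing_name: str, group: str = "Quarantine") -> dict:
    """Request payload for moving a suspect device into a quarantine
    thing group (group name is hypothetical)."""
    return {"thingGroupName": group, "thingName": thing_name}


def quarantine_policy() -> dict:
    """A restrictive IoT policy sketch: the quarantined device may still
    connect and publish to a diagnostics topic, so logs can be collected,
    but nothing else. Topic layout is an assumption."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "iot:Connect", "Resource": "*"},
            {
                "Effect": "Allow",
                "Action": "iot:Publish",
                "Resource": "arn:aws:iot:*:*:topic/diagnostics/"
                            "${iot:Connection.Thing.ThingName}",
            },
        ],
    }
```

Pairing the group with custom group logging, as described in the following section, is what narrows the device to troubleshooting-only behavior.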
Implement a strategy in which your InfoSec team can quickly identify the devices that need remediation. Ensure that the InfoSec team has runbooks that consider firmware versioning and patching for device updates. Create automated processes that proactively apply security patches to vulnerable devices as they come online. At a minimum, your security team should be able to detect an incident on a specific device based on the device logs and current device behavior.

After an incident is identified, the next phase is to quarantine the device. To implement this with AWS IoT services, you can use AWS IoT thing groups with more restrictive IoT policies, along with enabling custom group logging for those devices. This allows you to enable only the features that relate to troubleshooting, as well as gather more data to understand root cause and remediation. Lastly, after an incident has been resolved, you must be able to deploy a firmware update to the device to return it to a known state.

Key AWS Services

The essential AWS security services in IoT are the AWS IoT registry, AWS IoT Device Defender, AWS Identity and Access Management (IAM), and Amazon Cognito. In combination, these services allow you to securely control access to IoT devices, AWS services, and resources for your users. The following services and features support the five areas of security:

Design: The AWS Device Qualification Program provides IoT endpoint and edge hardware that has been pre-tested for interoperability with AWS IoT. Tests include mutual authentication and OTA support for remote patching.

AWS Identity and Access Management (IAM): Device credentials (X.509 certificates, IAM, Amazon Cognito identity pools and user pools, or custom authorization tokens) enable you to securely control device and external user access to AWS resources. AWS IoT policies add the ability to implement fine-grained access to IoT devices. ACM Private CA provides a cloud-based approach to creating and managing device certificates. Use AWS IoT thing groups to manage IoT permissions at the group level instead of individually.

Detective controls: AWS IoT Device Defender records device communication and cloud-side metrics from AWS IoT Core. AWS IoT Device Defender can automate security responses by sending notifications through Amazon Simple Notification Service (Amazon SNS) to internal systems or administrators. AWS CloudTrail logs administrative actions of your IoT application. Amazon CloudWatch is a monitoring service that integrates with AWS IoT Core and can trigger CloudWatch Events to automate security responses. CloudWatch captures detailed logs related to connectivity and security events between IoT edge components and cloud services.

Infrastructure protection: AWS IoT Core is a cloud service that lets connected devices easily and securely interact with cloud applications and other devices. The AWS IoT rules engine in AWS IoT Core uses IAM permissions to communicate with other downstream AWS services.

Data protection: AWS IoT includes encryption capabilities over TLS to protect your data in transit. AWS IoT integrates directly with services such as Amazon S3 and Amazon DynamoDB, which support encryption at rest. In addition, AWS Key Management Service (AWS KMS) supports the ability for you to create and control the keys used for encryption. On devices, you can use AWS edge offerings such as Amazon FreeRTOS, AWS IoT Greengrass, or the AWS IoT Embedded C SDK to support secure communication.

Incident response: AWS IoT Device Defender allows you to create security profiles that can be used to detect deviations from normal device behavior and trigger automated responses, including AWS Lambda. AWS IoT Device Management should be used to group devices that need remediation, and AWS IoT Jobs can then deploy fixes to those devices.

Resources

Refer to the following resources to learn more about our best practices for security:

Documentation and Blogs
• IoT Security Identity
• AWS IoT Device Defender
• IoT Authentication Model
• MQTT on port 443
• Detect Anomalies with Device Defender

Whitepapers
• MQTT Topic Design

Reliability Pillar

The reliability pillar focuses on the ability to prevent and quickly recover from failures to meet business and customer demand. Key topics include foundational elements around setup, cross-project requirements, recovery planning, and change management.

Design Principles

In addition to the overall Well-Architected Framework design principles, there are three design principles for reliability for IoT in the cloud:

• Simulate device behavior at production scale: Create a production-scale test environment that closely mirrors your production deployment. Leverage a multi-step simulation plan that allows you to test your applications with a more significant load before your go-live date. During development, ramp up your simulation tests over a period of time, starting with 10% of overall traffic for a single test and incrementing over time (that is, 25%, 50%, then 100% of day-one device traffic). During simulation tests, monitor performance and review logs to ensure that the entire solution behaves as expected.

• Buffer message delivery from the IoT rules engine with streams or queues: Leverage managed services to enable high-throughput telemetry. By injecting a queuing layer behind high-throughput topics, IoT applications can manage failures, aggregate messaging, and scale other downstream services.

• Design for failure and resiliency: It's essential to plan for resiliency on the device itself. Depending on your use case, resiliency may entail robust retry logic for intermittent connectivity, the ability to roll back firmware updates, the ability to fail over to a different networking protocol or communicate locally for critical message delivery, running redundant sensors or edge gateways to be resilient to hardware failures, and the ability to perform a factory reset.

Definition

There are three best practice areas for reliability in the cloud:

1. Foundations
2. Change management
3. Failure management

To achieve reliability, a system must have a well-planned foundation and monitoring in place, with mechanisms for handling changes in demand or requirements, and potentially defending against an unauthorized denial of service attack. The system should be designed to detect failure and automatically heal itself.

Best Practices

Foundations

IoT devices must continue to operate in some capacity in the face of network or cloud errors. Design device firmware to handle intermittent connectivity, or loss of connectivity, in a way that is sensitive to memory and power constraints. IoT cloud applications must also be designed to handle remote devices that frequently transition between being online and offline, to maintain data coherency, and to scale horizontally over time.

Monitor overall IoT utilization and create a mechanism to automatically increase capacity to ensure that your application can manage peak IoT traffic. To prevent devices from creating unnecessary peak traffic, implement device firmware so that the entire fleet of devices does not attempt the same operations at the same time. For example, if an IoT application is composed of alarm systems and all the alarm systems send an activation event at 9 AM local time, the IoT application is inundated with an immediate spike from your entire fleet. Instead, you should incorporate a randomization factor into scheduled activities such as timed events, along with exponential back-off, to permit the IoT devices to more evenly distribute their peak traffic within a window of time.

The following questions focus on the considerations for reliability.

IOTREL 1. How do you handle AWS service limits for peaks in your IoT application?

AWS IoT provides a set of soft and hard limits for different dimensions of usage; all of the data plane limits are outlined on the AWS IoT limits page. Data plane operations (for example, MQTT Connect, MQTT Publish, and MQTT Subscribe) are the primary driver of your device connectivity. Therefore, it's important to review the IoT limits and ensure that your application adheres to any soft limits related to the data plane while not exceeding any hard limits imposed by the data plane.

The most important part of your IoT scaling approach is to ensure that you architect around any hard limits, because exceeding limits that are not adjustable results in application errors such as throttling and client errors. Hard limits are related to throughput on a single IoT connection. If you find your application exceeds a hard limit, we recommend redesigning your application to avoid those scenarios. This can be done in several ways, such as restructuring your MQTT topics or implementing cloud-side logic to aggregate or filter messages before delivering the messages to the interested devices.

Soft limits in AWS IoT traditionally correlate to account-level limits that are independent of a single device. For any account-level limits, you should calculate your IoT usage for a single device and then multiply that usage by the number of devices to determine the base IoT limits that your application will require for your initial product launch. AWS recommends that you have a ramp-up period where your limit increases align closely to your current production peak usage, with an additional buffer. To ensure that the IoT application is not under-provisioned:

• Consult published AWS IoT CloudWatch metrics for all of the limits.
• Monitor CloudWatch metrics in AWS IoT Core.
• Alert on CloudWatch throttle metrics, which would signal if you need a limit increase.
• Set alarms for all thresholds in IoT, including MQTT connect, publish, subscribe, receive, and rules engine actions.
• Ensure that you request a limit increase in a timely fashion, before reaching 100% capacity.

In addition to data plane limits, the AWS IoT service has a control plane for administrative APIs. The control plane manages the process of creating and storing IoT policies and principals, creating the thing in the registry, and associating IoT principals, including certificates and Amazon Cognito federated identities. Because bootstrapping and device registration are critical to the overall process, it's important to plan control plane operations and limits.

Control plane API calls are based on throughput measured in requests per second, normally on the order of magnitude of tens of requests per second. It's important for you to work backward from peak expected registration usage to determine whether any limit increases for control plane operations are needed. Plan for sustained ramp-up periods for onboarding devices so that the IoT limit increases align with regular day-to-day data plane usage. To protect against a burst in control plane requests, your architecture should limit access to these APIs to only authorized users or internal applications. Implement back-off and retry logic, and queue inbound requests to control data rates to these APIs.

IOTREL 2. What is your strategy for managing ingestion and processing throughput of IoT data to other applications?
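The guidance above to alert on CloudWatch throttle metrics can be sketched as the parameter set for a CloudWatch `put_metric_alarm` call. This is an illustrative sketch only: the alarm name and SNS topic are placeholders, and the metric name is an assumption that should be verified against the AWS IoT CloudWatch metrics reference for your region.

```python
def throttle_alarm_params(sns_topic_arn: str) -> dict:
    """Parameters for CloudWatch put_metric_alarm: raise an alert when
    AWS IoT throttles requests, signaling that a service limit increase
    may be needed before reaching 100% capacity."""
    return {
        "AlarmName": "iot-request-throttling",  # hypothetical alarm name
        "Namespace": "AWS/IoT",
        "MetricName": "Throttle.Exceeded",      # assumed metric name; verify in docs
        "Statistic": "Sum",
        "Period": 300,                          # evaluate in 5-minute windows
        "EvaluationPeriods": 1,
        "Threshold": 1,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "AlarmActions": [sns_topic_arn],        # notify operators via SNS
    }
```

In practice you would create one such alarm per throttle metric you care about (connect, publish, subscribe, receive, and rules engine actions).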
Although some communication in an IoT application is routed only between devices, there will be messages that are processed and stored in your application. In these cases, the rest of your IoT application must be prepared to respond to incoming data. All internal services that depend on that data need a way to seamlessly scale the ingestion and processing of the data.

In a well-architected IoT application, internal systems are decoupled from the connectivity layer of the IoT platform through the ingestion layer. The ingestion layer is composed of queues and streams that enable durable short-term storage while allowing compute resources to process data independent of the rate of ingestion. To optimize throughput, use AWS IoT rules to route inbound device data to services such as Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, or Amazon Simple Queue Service before performing any compute operations. Ensure that all the intermediate streaming points are provisioned to handle peak capacity. This approach creates the queueing layer necessary for upstream applications to process data resiliently.

IOTREL 3. How do you handle device reliability when communicating with the cloud?

IoT solution reliability must also encompass the device itself. Devices are deployed in remote locations and deal with intermittent connectivity, or loss of connectivity, due to a variety of external factors that are out of your IoT application's control. For example, if an ISP connection is interrupted for several hours, how will the device behave and respond to these long periods of potential network outage?
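A common answer to the outage scenario above is to reconnect with exponential back-off and jitter, so that a fleet that lost connectivity at the same moment does not retry in lockstep. A minimal sketch; the base and cap values are arbitrary choices for illustration:

```python
import random


def backoff_delays(attempts: int, base: float = 1.0, cap: float = 60.0) -> list:
    """Exponential back-off with full jitter for reconnect attempts.
    Each delay is drawn uniformly from [0, min(cap, base * 2**n)], so the
    average wait grows exponentially but devices spread out randomly."""
    delays = []
    for n in range(attempts):
        ceiling = min(cap, base * (2 ** n))
        delays.append(random.uniform(0, ceiling))
    return delays
```

A device would sleep for each successive delay between connection attempts, resetting the attempt counter once a connection succeeds.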
Implement a minimum set of embedded operations on the device to make it more resilient to the nuances of managing connectivity and communication to AWS IoT Core. Your IoT device must be able to operate without internet connectivity. The robust operations you implement in your firmware should provide the following capabilities:

• Store important messages durably while offline and, once reconnected, send those messages to AWS IoT Core.
• Implement exponential retry and back-off logic when connection attempts fail.
• If necessary, have a separate failover network channel to deliver critical messages to AWS IoT. This can include failing over from Wi-Fi to a standby cellular network, or failing over to a wireless personal area network protocol (such as Bluetooth LE) to send messages to a connected device or gateway.
• Have a method to set the current time using an NTP client or a low-drift real-time clock. A device should wait until it has synchronized its time before attempting a connection with AWS IoT Core. If this isn't possible, the system should provide a way for a user to set the device's time so that subsequent connections can succeed.
• Send error codes and overall diagnostics messages to AWS IoT Core.

Change Management

IOTREL 4. How do you roll out and roll back changes to your IoT application?
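The store-and-forward capability in the first bullet above can be sketched as a small bounded buffer. The drop-oldest policy and the 100-message default are assumptions chosen to respect the memory constraints the text mentions:

```python
from collections import deque


class OfflineBuffer:
    """Store-and-forward sketch: keep important messages in a bounded
    queue while offline, then flush them on reconnect. When the buffer
    is full, the oldest messages are dropped first (deque maxlen)."""

    def __init__(self, maxlen: int = 100):
        self._queue = deque(maxlen=maxlen)

    def record(self, message: dict) -> None:
        """Buffer a message while the device has no connectivity."""
        self._queue.append(message)

    def flush(self, publish) -> int:
        """On reconnect, publish buffered messages oldest-first via the
        supplied callable (e.g. an MQTT publish). Returns count sent."""
        sent = 0
        while self._queue:
            publish(self._queue.popleft())
            sent += 1
        return sent
```

On a real device the buffer would be backed by non-volatile storage so messages survive a power cycle.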
It is important to implement the capability to revert to a previous version of your device firmware, or of your cloud application, in the event of a failed rollout. If your application is well-architected, you will capture metrics from the device as well as metrics generated by AWS IoT Core and AWS IoT Device Defender, and you will be alerted when your device canaries deviate from expected behavior after any cloud-side changes. Based on any deviations in your operational metrics, you need the ability to:

• Version all of the device firmware using Amazon S3.
• Version the manifest or execution steps for your device firmware.
• Implement a known safe default firmware version for your devices to fall back to in the event of an error.
• Implement an update strategy using cryptographic code signing, version checking, and multiple non-volatile storage partitions to deploy software images and roll back.
• Version all IoT rules engine configurations in CloudFormation.
• Version all downstream AWS Cloud resources using CloudFormation.
• Implement a rollback strategy for reverting cloud-side changes using CloudFormation and other infrastructure-as-code tools.

Treating your infrastructure as code on AWS allows you to automate monitoring and change management for your IoT application. Version all of the device firmware artifacts and ensure that updates can be verified, installed, or rolled back when necessary.

Failure Management

IOTREL 5. How does your IoT application withstand failure?
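One failure-management mechanism is the rules engine error action, which can be expressed as the `topicRulePayload` dict that boto3's `iot.create_topic_rule` accepts. This is a sketch: the topic filter, stream name, queue URL, and role ARN are placeholders, and the choice of Kinesis as primary and SQS as fallback is an assumption for illustration.

```python
def telemetry_rule_payload(role_arn: str, stream_name: str, queue_url: str) -> dict:
    """TopicRulePayload routing device telemetry to Kinesis, with an SQS
    error action on a different service so that messages the primary
    action cannot deliver are retained durably for later retry."""
    return {
        "sql": "SELECT * FROM 'telemetry/#'",   # hypothetical topic filter
        "awsIotSqlVersion": "2016-03-23",
        "actions": [
            {"kinesis": {"roleArn": role_arn, "streamName": stream_name}}
        ],
        "errorAction": {
            "sqs": {"roleArn": role_arn, "queueUrl": queue_url, "useBase64": False}
        },
    }
```

Keeping the error action on a different service than the primary action, as the text below recommends, avoids a single outage disabling both paths.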
Because IoT is an event-driven workload, your application code must be resilient to known and unknown errors that can occur as events permeate your application. A well-architected IoT application has the ability to log and retry errors in data processing, and it archives all data in its raw format. By archiving all data, valid and invalid, an architecture can more accurately restore data to a given point in time.

With the IoT rules engine, an application can enable an IoT error action. If a problem occurs when invoking an action, the rules engine invokes the error action. This allows you to capture, monitor, alert on, and eventually retry messages that could not be delivered to their primary IoT action. We recommend that an IoT error action be configured with a different AWS service from the primary action, using durable storage for error actions such as Amazon SQS or Amazon Kinesis.

Beginning with the rules engine, your application logic should initially process messages from a queue and validate that the schema of each message is correct. Your application logic should catch and log any known errors and optionally move those messages to their own dead-letter queue (DLQ) for further analysis. Have a catch-all IoT rule that uses Amazon Kinesis Data Firehose and AWS IoT Analytics channels to transfer all raw and unformatted messages into long-term storage in Amazon S3, AWS IoT Analytics data stores, and Amazon Redshift for data warehousing.

IOTREL 6. How do you verify different levels of hardware failure modes for your physical assets?
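Finding devices that have dropped offline, as the next section recommends via fleet indexing, can be sketched as the parameter dict for boto3's `iot.search_index` call. The index name shown is the standard thing index; the fleet naming prefix is hypothetical, and fleet indexing with connectivity status must be enabled for the query to return results.

```python
def offline_devices_query(fleet_prefix: str = "sensor") -> dict:
    """Parameters for iot.search_index: list devices that are currently
    disconnected so they can be flagged for remediation. Assumes fleet
    indexing is enabled with connectivity status recording."""
    return {
        "indexName": "AWS_Things",
        "queryString": f"thingName:{fleet_prefix}* AND connectivity.connected:false",
    }
```

An operational job could run this query periodically and alert on any device offline longer than an acceptable threshold.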
IoT implementations must allow for multiple types of failure at the device level. Failures can be due to hardware, software, connectivity, or unexpected adverse conditions. One way to plan for thing failure is to deploy devices in pairs, if possible, or to deploy dual sensors across a fleet of devices covering the same area (meshing). Regardless of the underlying cause of device failures, if the device can communicate with your cloud application, it should send diagnostic information about the hardware failure to AWS IoT Core using a diagnostics topic. If the device loses connectivity because of the hardware failure, use fleet indexing with connectivity status to track the change in connectivity status. If the device is offline for extended periods of time, trigger an alert that the device may require remediation.

Key AWS Services

Use Amazon CloudWatch to monitor runtime metrics and ensure reliability. Other services and features that support the three areas of reliability are as follows:

Foundations: AWS IoT Core enables you to scale your IoT application without having to manage the underlying infrastructure. You can scale AWS IoT Core by requesting account-level limit increases.

Change management: AWS IoT Device Management enables you to update devices in the field while using Amazon S3 to version all firmware, software, and update manifests for devices. AWS CloudFormation lets you document your IoT infrastructure as code and provision cloud resources using a CloudFormation template.

Failure management: Amazon S3 allows you to durably archive telemetry from devices. The AWS IoT rules engine error action enables you to fall back to other AWS services when a primary AWS service is returning errors.

Resources

Refer to the following resources to learn more about our best practices related to reliability:

Documentation and Blogs
• Using Device Time to Validate AWS IoT Server Certificates
• AWS IoT Core Limits
• IoT Error Action
• Fleet Indexing
• IoT Atlas

Performance Efficiency Pillar

The performance efficiency pillar focuses on the efficient use of computing resources to meet workload requirements, and on maintaining that efficiency as demand changes and technologies evolve. Key topics include selecting the right resource types and sizes based on workload requirements, monitoring performance, and making informed decisions to maintain efficiency as business and technology needs evolve.

Design Principles

In addition to the overall Well-Architected Framework performance efficiency design principles, there are three design principles for performance efficiency for IoT in the cloud:

• Use managed services: AWS provides several managed services across databases, compute, and storage that can assist your architecture in increasing overall reliability and performance.

• Process data in batches: Decouple the connectivity portion of IoT applications from the ingestion and processing portion. By decoupling the ingestion layer, your IoT application can handle data in aggregate and can scale more seamlessly by processing multiple IoT messages at once.

• Use event-driven architectures: IoT systems publish events from devices and permeate those events to other subsystems in your IoT application. Design mechanisms that cater to event-driven architectures, such as leveraging queues, message handling, idempotency, dead-letter queues, and state machines.

Definition

There are four best practice areas for performance efficiency in the cloud:

1. Selection
2. Review
3. Monitoring
4. Tradeoffs

Use a data-driven approach when selecting a high-performance architecture. Gather data on all aspects of the architecture, from the high-level design to the selection and configuration of resource types. By reviewing your choices on a cyclical basis, you will ensure that you are taking advantage of the continually evolving AWS platform. Monitoring ensures that you are aware of any deviation from expected performance and allows you to act. Your architecture can make tradeoffs to improve performance, such as using compression or caching, or relaxing consistency requirements.

Best Practices

Selection

Well-architected IoT solutions are made up of multiple systems and components, such as devices, connectivity, databases, data processing, and analytics. In AWS, there are several IoT services, database offerings, and analytics solutions that enable you to quickly build well-architected solutions while allowing you to focus on business objectives. AWS recommends that you leverage a mix of managed AWS services that best fit your workload. The following questions focus on these considerations for performance efficiency.

IOTPERF 1. How do you select the best-performing IoT architecture?
When you select the implementation for your architecture, use a data-driven approach based on the long-term view of your operation. IoT applications align naturally to event-driven architectures. Your architecture will combine services that integrate with event-driven patterns such as notifications, publish/subscribe, data stream processing, and event-driven compute. In the following sections, we look at the five main IoT resource types that you should consider: devices, connectivity, databases, compute, and analytics.

Devices

The optimal embedded software for a particular system will vary based on the hardware footprint of the device. For example, network security protocols, while necessary for preserving data privacy and integrity, can have a relatively large RAM footprint. For intranet and internet connections, use TLS with a combination of a strong cipher suite and a minimal footprint. AWS IoT supports Elliptic Curve Cryptography (ECC) for devices connecting to AWS IoT using TLS. A secure software and hardware platform on the device should take precedence in the selection criteria for your devices. AWS also has a number of IoT partners that provide hardware solutions that can securely integrate with AWS IoT. In addition to selecting the right hardware partner, you may choose from a number of software components to run your application logic on the device, including Amazon FreeRTOS and AWS IoT Greengrass.

IOTPERF 2. How do you select your hardware and operating system for IoT devices?
IoT Connectivity

Before firmware is developed to communicate with the cloud, implement a secure, scalable connectivity platform to support the long-term growth of your devices over time. Based on the anticipated volume of devices, an IoT platform must be able to scale the communication workflows between devices and the cloud, whether that is simple ingestion of telemetry or command-and-response communication between devices.

You can build your IoT application using AWS services such as EC2, but you then take on the undifferentiated heavy lifting of building unique value into your IoT offering. Therefore, AWS recommends that you use AWS IoT Core for your IoT platform. AWS IoT Core supports HTTP, WebSockets, and MQTT, a lightweight communication protocol designed to tolerate intermittent connections, minimize the code footprint on devices, and reduce network bandwidth requirements.

IOTPERF 3. How do you select your primary IoT platform?

Databases

You will have multiple databases in your IoT application, each selected for attributes such as the write frequency of data to the database, the read frequency of data from the database, and how the data is structured and queried. There are other criteria to consider when selecting a database offering:

• Volume of data and retention period
• Intrinsic data organization and structure
• Users and applications consuming the data (either raw or processed) and their geographical location/dispersion
• Advanced analytics needs, such as machine learning or real-time visualizations
• Data synchronization across other teams, organizations, and business units
• Security of the data at the row, table, and database levels
• Interactions with other related data-driven events, such as enterprise applications, drill-through dashboards, or systems of interaction

AWS has several database offerings that support IoT solutions. For structured data, you should use Amazon Aurora, a highly scalable relational database. For semi-structured data that requires low latency for queries and will be used by multiple consumers, use DynamoDB, a fully managed multi-region, multi-master database that provides consistent single-digit-millisecond latency and offers built-in security, backup and restore, and in-memory caching.

For storing raw, unformatted event data, use AWS IoT Analytics. AWS IoT Analytics filters, transforms, and enriches IoT data before storing it in a time-series data store for analysis. Use Amazon SageMaker to build, train, and deploy machine learning models based on your IoT data, in the cloud and on the edge, using AWS IoT services such as Greengrass Machine Learning Inference.

Consider storing your raw, formatted time-series data in a data warehouse solution such as Amazon Redshift. Unformatted data can be imported into Amazon Redshift via Amazon S3 and Amazon Kinesis Data Firehose. By archiving unformatted data in a scalable, managed data storage solution, you can begin to gain business insights, explore your data, and identify trends and patterns over time.

In addition to storing and leveraging the historical trends of your IoT data, you must have a system that stores the current state of each device and provides the ability to query against the current state of all of your devices. This supports internal analytics and customer-facing views into your IoT data. The AWS IoT Device Shadow service is an effective mechanism to store a virtual representation of your device in the cloud, and it is best suited for managing the current state of each device. For internal teams that need to query against the shadow for operational needs, leverage the managed capabilities of fleet indexing, which provides a searchable index incorporating your IoT registry and shadow metadata. If there is a need to provide index-based searching or filtering to a large number of external users, such as for a consumer application, dynamically archive the shadow state using a combination of the IoT rules engine, Kinesis Data Firehose, and Amazon Elasticsearch Service to store your data in a format that allows fine-grained query access for external users.

IOTPERF 4. How do you select the database for your IoT device state?

Compute

IoT applications lend themselves to a high flow of ingestion that requires continuous processing over the stream of messages. Therefore, an architecture must choose compute services that support the steady enrichment of stream processing and the execution of business applications during and prior to data storage. The most common compute service used in IoT is AWS Lambda, which allows actions to be invoked when telemetry data reaches AWS IoT Core or AWS IoT Greengrass. AWS Lambda can be used at different points throughout an IoT application; where you elect to trigger your business logic with AWS Lambda is influenced by when you want to process a specific data event.

Amazon EC2 instances can also be used for a variety of IoT use cases, such as self-managed relational database systems and applications such as web reporting, or to host existing on-premises solutions.

IOTPERF 5. How do you select your compute solutions for processing AWS IoT events?
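The Lambda pattern described above can be sketched as a plain handler function: the rules engine forwards a JSON payload and the handler applies business logic before or during storage. The payload shape (`device_id`, `temp_c`) and the 80-degree threshold are hypothetical:

```python
def lambda_handler(event, context=None):
    """Sketch of a Lambda handler invoked by an AWS IoT rule. Validates
    the incoming telemetry payload and flags out-of-range readings."""
    temp = event.get("temp_c")
    if temp is None:
        # Malformed payload: skip rather than fail, so one bad message
        # does not block the stream (see the DLQ discussion above).
        return {"status": "ignored"}
    if temp > 80:
        return {"status": "alert", "device": event.get("device_id")}
    return {"status": "ok"}
```

In a deployed application the alert branch would publish to SNS or write to a database instead of just returning a status.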
Analytics

The primary business case for implementing IoT solutions is to respond more quickly to how devices are performing and being used in the field. By acting directly on incoming telemetry, businesses can make more informed decisions about which new products or features to prioritize, or how to operate workflows within their organization more efficiently. Analytics services must be selected in a way that gives you varying views of your data based on the type of analysis you are performing. AWS provides several services that align with different analytics workflows, including time-series analytics, real-time metrics, and archival and data lake use cases.

With IoT data, your application can generate time-series analytics on top of the streaming data messages. You can calculate metrics over time windows and then stream values to other AWS services. In addition, IoT applications that use AWS IoT Analytics can implement a managed data pipeline consisting of data transformation, enrichment, and filtering before storing data in a time-series data store. Additionally, with AWS IoT Analytics, visualizations and analytics can be performed natively using QuickSight and Jupyter notebooks.

Review

IOTPERF 6. How do you evolve your architecture based on the historical analysis of your IoT application?
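The time-window metric calculation described above can be illustrated with a minimal local sketch. The 60-second tumbling window and the (timestamp, value) reading format are assumptions for illustration; in practice a streaming service would maintain these windows for you.

```python
from collections import defaultdict

def tumbling_window_avg(readings, window_seconds=60):
    """Average a metric over fixed (tumbling) time windows.

    readings: iterable of (epoch_seconds, value) pairs.
    Returns {window_start_epoch: average_value}.
    """
    sums = defaultdict(lambda: [0.0, 0])
    for ts, value in readings:
        window = (ts // window_seconds) * window_seconds  # align to window start
        sums[window][0] += value
        sums[window][1] += 1
    return {w: total / count for w, (total, count) in sorted(sums.items())}

readings = [(0, 10.0), (30, 20.0), (65, 30.0), (90, 50.0)]
print(tumbling_window_avg(readings))  # {0: 15.0, 60: 40.0}
```

The per-window aggregates are what you would then stream onward to other AWS services for alerting or storage.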
When building complex IoT solutions, you can devote a large percentage of time to efforts that do not directly impact your business outcome; for example, managing IoT protocols, securing device identities, and transferring telemetry between devices and the cloud. Although these aspects of IoT are important, they do not directly lead to differentiating value. The pace of innovation in IoT can also be a challenge. AWS regularly releases new features and services based on the common challenges of IoT. Perform a regular review of your data to see if new AWS IoT services can solve a current gap in your architecture, or if they can replace components of your architecture that are not core business differentiators.

Leverage services built to aggregate your IoT data, store your data, and then later visualize your data to implement historical analysis. You can combine timestamp information sent from your IoT devices with services like AWS IoT Analytics and time-based indexing to archive your data with its associated timestamp information. Data in AWS IoT Analytics can be stored in your own Amazon S3 bucket along with additional IT or OT operational and efficiency logs from your devices. By combining this archival state of data with visualization tools, you can make data-driven decisions about how new AWS services can provide additional value, and measure how the services improve efficiency across your fleet.

Monitoring

IOTPERF 7. How are you running end-to-end simulation tests of your IoT application?
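One way to pair device timestamps with time-based indexing, sketched below, is to derive a date-partitioned object key for each archived message so that later analysis can scan by time range. The key layout and prefix are assumptions for illustration, not a prescribed AWS format.

```python
from datetime import datetime, timezone

def archive_key(device_id, epoch_seconds, prefix="telemetry"):
    """Build a date-partitioned key so archived data can be scanned by time range."""
    ts = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    return f"{prefix}/{ts:%Y/%m/%d/%H}/{device_id}-{epoch_seconds}.json"

print(archive_key("sensor-1", 1575158400))
# telemetry/2019/12/01/00/sensor-1-1575158400.json
```

With keys shaped like this, a query over "December 2019" only needs to touch objects under the `telemetry/2019/12/` prefix.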
IoT applications can be simulated using production devices set up as test devices (with a specific test MQTT namespace) or by using simulated devices. All incoming data captured using the IoT rules engine is processed using the same workflows that are used for production. The frequency of end-to-end simulations must be driven by your specific release cycle or device adoption. You should test failure pathways (code that is only executed during a failure) to ensure that the solution is resilient to errors. You should also continuously run device canaries against your production and pre-production accounts. The device canaries act as key indicators of system performance during simulation tests. Outputs of the tests should be documented, remediation plans should be drafted, and user acceptance tests should be performed.

IOTPERF 8. How are you using performance monitoring in your IoT implementation?

There are several key types of performance monitoring related to IoT deployments, including device, cloud performance, and storage/analytics monitoring. Create appropriate performance metrics using data collected from logs with telemetry and command data. Start with basic performance tracking and build on the metrics as your business core competencies expand. Leverage CloudWatch Logs metric filters to transform your IoT application's standard output into custom metrics through regex (regular expression) pattern matching. Create CloudWatch alarms based on your application's custom metrics to gain quick insight into your IoT application's behavior. Set up fine-grained logs to track specific thing groups. During IoT solution development, enable DEBUG logging for a clear understanding of the progress of events as each IoT message passes from your devices through the message broker and the rules engine. In production, change the logging level to ERROR and WARN. In addition to cloud instrumentation, you must run instrumentation on
devices prior to deployment to ensure that devices make the most efficient use of their local resources and that firmware code does not lead to unwanted scenarios like memory leaks. Deploy code that is highly optimized for constrained devices, and monitor the health of your devices using device diagnostic messages published to AWS IoT from your embedded application.

Tradeoffs

IoT solutions drive rich analytics capabilities across vast areas of crucial enterprise functions, such as operations, customer care, finance, sales, and marketing. At the same time, they can be used as efficient egress points for edge gateways. Careful consideration must be given to architecting highly efficient IoT implementations where data and analytics are pushed to the cloud by devices, and where machine learning algorithms are pulled down from the cloud onto the device gateways. Individual devices will be constrained by the throughput supported over a given network. The frequency with which data is exchanged must be balanced against the transport layer and the ability of the device to optionally store, aggregate, and then send data to the cloud. Send data from devices to the cloud at timing intervals that align with the time required by backend applications to process and take action on the data. For example, if you need to see data at a one-second increment, your device must send data at a time interval more frequent than one second. Conversely, if your application only reads data at an hourly rate, you can make a trade-off in performance by aggregating data points at the edge and sending the data every half hour.

IOTPERF 9. How are you ensuring that data from your IoT devices is ready to be consumed by business and operational systems?

IOTPERF 10. How frequently is data transmitted from devices to your IoT application?
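A minimal sketch of the edge-side aggregation described above: the device buffers raw samples and sends one summary message per window. The statistics chosen and the count-based flush threshold are assumptions for illustration; a real device would typically also flush on a timer.

```python
class EdgeAggregator:
    """Buffers readings on the device and emits one aggregate per window.

    flush_every is the number of raw samples per uplink message; in practice
    it would be derived from how quickly the backend needs fresh data.
    """
    def __init__(self, flush_every):
        self.flush_every = flush_every
        self.buffer = []
        self.sent = []  # stands in for publishes to the cloud

    def add(self, value):
        self.buffer.append(value)
        if len(self.buffer) >= self.flush_every:
            self.sent.append({
                "count": len(self.buffer),
                "min": min(self.buffer),
                "max": max(self.buffer),
                "mean": sum(self.buffer) / len(self.buffer),
            })
            self.buffer.clear()

agg = EdgeAggregator(flush_every=4)
for v in [10, 12, 11, 13, 9, 9, 10, 12]:
    agg.add(v)
print(agg.sent)  # two aggregate messages instead of eight raw ones
```

Eight raw samples become two uplink messages, trading data freshness for a fourfold reduction in transmissions.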
The speed with which enterprise applications, business, and operations need to gain visibility into IoT telemetry data determines the most efficient point to process IoT data. In network-constrained environments where the hardware is not limited, use edge solutions such as AWS IoT Greengrass to operate and process data offline from the cloud. In cases where both the network and hardware are constrained, look for opportunities to compress message payloads by using binary formatting and grouping similar messages together into a single request.

For visualizations, Amazon Kinesis Data Analytics enables you to quickly author SQL code that continuously reads, processes, and stores data in near real time. Using standard SQL queries on the streaming data allows you to construct applications that transform and provide insights into your data. With Kinesis Data Analytics, you can expose IoT data for streaming analytics.

Key AWS Services

The key AWS service for performance efficiency is Amazon CloudWatch, which integrates with several IoT services, including AWS IoT Core, AWS IoT Device Defender, AWS IoT Device Management, AWS Lambda, and DynamoDB. Amazon CloudWatch provides visibility into your application's overall performance and operational health. The following services also support performance efficiency:

Selection

Devices: AWS hardware partners provide production-ready IoT devices that can be used as part of your IoT application. Amazon FreeRTOS is an operating system with software libraries for microcontrollers. AWS IoT Greengrass allows you to run local compute, messaging, data caching, sync, and ML at the edge.

Connectivity: AWS IoT Core is a managed IoT platform that supports MQTT, a lightweight publish-and-subscribe protocol for device communication.

Database: Amazon DynamoDB is a fully managed NoSQL datastore that supports single-digit millisecond latency requests to support quick retrieval of different views of your IoT data.
Compute: AWS Lambda is an event-driven compute service that lets you run application code without provisioning servers. Lambda integrates natively with IoT events triggered from AWS IoT Core or upstream services such as Amazon Kinesis and Amazon SQS.

Analytics: AWS IoT Analytics is a managed service that operationalizes device-level analytics while providing a time-series data store for your IoT telemetry.

Review: The AWS IoT Blog section of the AWS website is a resource for learning about what is newly launched as part of AWS IoT.

Monitoring: Amazon CloudWatch Metrics and Amazon CloudWatch Logs provide metrics, logs, filters, alarms, and notifications that you can integrate with your existing monitoring solution. These metrics can be augmented with device telemetry to monitor your application.

Tradeoff: AWS IoT Greengrass and Amazon Kinesis are services that allow you to aggregate and batch data at different locations in your IoT application, providing you more efficient compute performance.

Resources

Refer to the following resources to learn more about our best practices related to performance efficiency:

Documentation and Blogs
• AWS Lambda Getting Started
• DynamoDB Getting Started
• AWS IoT Analytics User Guide
• Amazon FreeRTOS Getting Started
• AWS IoT Greengrass Getting Started
• AWS IoT Blog

Cost Optimization Pillar

The Cost Optimization pillar includes the continual process of refinement and improvement of a system over its entire lifecycle. From the initial design of your first proof of concept to the ongoing operation of production workloads, adopting the practices in this paper will enable you to build and operate cost-aware systems that achieve business outcomes and minimize costs, allowing your business to maximize its return on investment.

Design Principles

In addition to the overall Well-Architected Framework cost optimization design principles, there is one design
principle for cost optimization for IoT in the cloud:

• Manage manufacturing cost tradeoffs: Business partnering criteria, hardware component selection, firmware complexity, and distribution requirements all play a role in manufacturing cost. Minimizing that cost helps determine whether a product can be brought to market successfully over multiple product generations. However, taking shortcuts in the selection of your components and manufacturer can increase downstream costs. For example, partnering with a reputable manufacturer helps minimize downstream hardware failure and customer support cost. Selecting a dedicated crypto component can increase bill of materials (BOM) cost but reduce downstream manufacturing and provisioning complexity, since the part may already come with an onboard private key and certificate.

Definition

There are four best practice areas for Cost Optimization in the cloud:

1. Cost-effective resources
2. Matching supply and demand
3. Expenditure awareness
4. Optimizing over time

There are tradeoffs to consider. For example, do you want to optimize for speed to market, or for cost?
In some cases, it's best to optimize for speed (going to market quickly, shipping new features, or meeting a deadline) rather than investing in upfront cost optimization. Design decisions are sometimes guided by haste as opposed to empirical data, as the temptation always exists to overcompensate rather than spend time benchmarking for a cost-optimal deployment. This leads to over-provisioned and under-optimized deployments. The following sections provide techniques and strategic guidance for your deployment's initial and ongoing cost optimization.

Best Practices

Cost-Effective Resources

Given the scale of devices and data that can be generated by an IoT application, using the appropriate AWS services for your system is key to cost savings. In addition to the overall cost for your IoT solution, IoT architects often look at connectivity through the lens of BOM costs. For BOM calculations, you must predict and monitor the long-term costs of managing connectivity to your IoT application throughout the lifetime of each device. Leveraging AWS services will help you calculate initial BOM costs, make use of cost-effective services that are event-driven, and update your architecture to continue to lower your overall lifetime cost for connectivity.

The most straightforward way to increase the cost-effectiveness of your resources is to group IoT events into batches and process data collectively. By processing events in groups, you are able to lower the overall compute time for each individual message. Aggregation can help you save on compute resources and enable solutions where data is compressed and archived before being persisted. This strategy decreases the overall storage footprint without losing data or compromising the queryability of the data.

COST 1. How do you select an approach for batching, enriching, and aggregating data delivered from your IoT platform to other services?
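The compute savings from batching described above are easy to quantify: the number of downstream invocations (each of which carries fixed per-message overhead) shrinks with batch size. A minimal sketch:

```python
def invocations_needed(n_events, batch_size):
    """Downstream invocations required to process n_events in groups of batch_size."""
    return -(-n_events // batch_size)  # ceiling division

# 10,000 device events processed one at a time vs. in batches of 100:
print(invocations_needed(10_000, 1))    # 10000
print(invocations_needed(10_000, 100))  # 100
```

A hundredfold reduction in invocations is the kind of saving that services such as Kinesis Data Firehose realize by delivering buffered batches rather than individual records.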
AWS IoT is best suited for streaming data, for either immediate consumption or historical analyses. There are several ways to batch data from AWS IoT Core to other AWS services, and the differentiating factor is whether you batch raw data (as is) or enrich the data and then batch it. Enriching, transforming, and filtering IoT telemetry data during (or immediately after) ingestion is best performed by creating an AWS IoT rule that sends the data to Kinesis Data Streams, Kinesis Data Firehose, AWS IoT Analytics, or Amazon SQS. These services allow you to process multiple data events at once. When dealing with raw device data from this batch pipeline, you can use AWS IoT Analytics and Amazon Kinesis Data Firehose to transfer data to S3 buckets and Amazon Redshift. To lower storage costs in Amazon S3, an application can leverage lifecycle policies that archive data to lower-cost storage such as Amazon S3 Glacier.

Matching Supply and Demand

Optimally matching supply to demand delivers the lowest cost for a system. However, given the susceptibility of IoT workloads to data bursts, solutions must be dynamically scalable and consider peak capacity when provisioning resources. With the event-driven flow of data, you can choose to automatically provision your AWS resources to match your peak capacity and then scale up and down during known low periods of traffic. The following questions focus on the considerations for cost optimization:

COST 2. How do you match the supply of resources with device demand?
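As a sketch of the lifecycle policies mentioned above, an S3 lifecycle rule that transitions archived telemetry to Amazon S3 Glacier can be expressed in the shape that boto3's `put_bucket_lifecycle_configuration` expects. The rule ID, key prefix, and 90-day threshold are assumptions for illustration; nothing here is executed against AWS.

```python
# Shaped like the LifecycleConfiguration argument to
# s3.put_bucket_lifecycle_configuration; built locally, not sent to AWS here.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-telemetry",           # hypothetical rule name
            "Filter": {"Prefix": "telemetry/"},  # hypothetical key prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER"}
            ],
        }
    ]
}

rule = lifecycle_configuration["Rules"][0]
print(rule["Transitions"][0]["StorageClass"])  # GLACIER
```

Once applied to the archive bucket, objects under the prefix would move to Glacier-class storage automatically after the configured age, with no change to the ingestion pipeline.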
Serverless technologies such as AWS Lambda and API Gateway help you create a more scalable and resilient architecture, and you pay only when your application uses those services. AWS IoT Core, AWS IoT Device Management, AWS IoT Device Defender, AWS IoT Greengrass, and AWS IoT Analytics are also managed services that are pay-per-usage and do not charge you for idle compute capacity. The benefit of managed services is that AWS manages the automatic provisioning of your resources. If you utilize managed services, you are responsible for monitoring and setting alerts for limit increases for AWS services. When architecting to match supply against demand, proactively plan your expected usage over time and the limits that you are most likely to exceed. Factor those limit increases into your future planning.

Optimizing Over Time

Evaluating new AWS features allows you to optimize cost by analyzing how your devices are performing and making changes to how your devices communicate with your IoT application. To optimize the cost of your solution through changes to device firmware, you should review the pricing components of AWS services such as AWS IoT, determine where you are below billing metering thresholds for a given service, and then weigh the trade-offs between cost and performance.

COST 3. How do you optimize payload size between devices and your IoT platform?
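As the following paragraphs explain, AWS IoT Core bills each published message in 5 KB increments, up to 128 KB, which makes the billed increment count easy to model. The sketch below treats payload size in whole kilobytes for simplicity.

```python
import math

INCREMENT_KB = 5
MAX_METERED_KB = 128  # payloads are metered up to 128 KB

def billed_increments(payload_kb):
    """Number of 5 KB billing increments for one published message."""
    metered = min(payload_kb, MAX_METERED_KB)
    return max(1, math.ceil(metered / INCREMENT_KB))

# A 6 KB payload and a 10 KB payload both bill as two increments:
print(billed_increments(6), billed_increments(10))  # 2 2

# 60 messages of 2 KB each vs. 30 aggregated messages of 4 KB each:
print(60 * billed_increments(2), 30 * billed_increments(4))  # 60 30
```

The second comparison shows why aggregating two 2 KB messages into one 4 KB payload halves the billed increments: both payload sizes fit within a single 5 KB increment.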
IoT applications must balance the networking throughput that can be realized by end devices with the most efficient way for data to be processed by your IoT application. We recommend that IoT deployments initially optimize data transfer based on the device constraints. Begin by sending discrete data events from the device to the cloud, making minimal use of batching multiple events in a single message. Later, if necessary, you can use serialization frameworks to compress the messages prior to sending them to your IoT platform.

From a cost perspective, the MQTT payload size is a critical cost optimization element for AWS IoT Core. An IoT message is billed in 5 KB increments, up to 128 KB. For this reason, each MQTT payload size should be as close as possible to a 5 KB increment. For example, a payload that is currently sized at 6 KB is not as cost-efficient as a payload that is 10 KB, because the overall cost of publishing each message is identical despite one message being larger than the other. To take advantage of the payload size, look for opportunities to either compress data or aggregate data into messages:

• You should shorten values while keeping them legible. If 5 digits of precision are sufficient, then you should not use 12 digits in the payload.
• If you do not require IoT rules engine payload inspection, you can use serialization frameworks to compress payloads to smaller sizes.
• You can send data less frequently and aggregate messages together within the billable increments. For example, sending a single 2 KB message every second can be achieved at a lower IoT message cost by aggregating two 2 KB messages into one payload sent every other second.

This approach has tradeoffs that should be considered before implementation. Adding complexity or delay in your devices may unexpectedly increase processing costs. A cost optimization exercise for IoT payloads should only happen after your solution has been in production and you
can use a data-driven approach to determine the cost impact of changing the way data is sent to AWS IoT Core.

COST 4. How do you optimize the costs of storing the current state of your IoT device?

Well-Architected IoT applications have a virtual representation of the device in the cloud. This virtual representation is composed of a managed data store or specialized IoT application data store. In both cases, your end devices must be programmed in a way that efficiently transmits device state changes to your IoT application. For example, your device should only send its full device state if your firmware logic dictates that the full device state may be out of sync and would be best reconciled by sending all current settings. As individual state changes occur, the device should optimize the frequency with which it transmits those changes to the cloud.

In AWS IoT, device shadow and registry operations are metered in 1 KB increments, and billing is per million access/modify operations. The shadow stores the desired or actual state of each device, and the registry is used to name and manage devices. Cost optimization processes for device shadows and the registry focus on managing how many operations are performed and the size of each operation. If your application is cost sensitive to shadow and registry operations, you should look for ways to optimize shadow operations. For example, for the shadow, you could aggregate several reported fields together into one shadow message update instead of sending each reported change independently. Grouping shadow updates together reduces the overall cost of the shadow by consolidating updates to the service.

Key AWS Services

The key AWS feature supporting cost optimization is cost allocation tags, which help you to understand the costs of a system. The following services and features are important in the three areas of cost optimization:

• Cost-effective resources: Amazon Kinesis, AWS IoT Analytics,
and Amazon S3 are AWS services that enable you to process multiple IoT messages in a single request in order to improve the cost-effectiveness of compute resources.

• Matching supply and demand: AWS IoT Core is a managed IoT platform for managing connectivity, device security to the cloud, messaging, routing, and device state.

• Optimizing over time: The AWS IoT Blog section of the AWS website is a resource for learning about what is newly launched as part of AWS IoT.

Resources

Refer to the following resources to learn more about AWS best practices for cost optimization.

Documentation and Blogs
• AWS IoT Blogs

Conclusion

The AWS Well-Architected Framework provides architectural best practices across the pillars for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud for IoT applications. The framework provides a set of questions that you can use to review an existing or proposed IoT architecture, as well as a set of AWS best practices for each pillar. Using the framework in your architecture helps you produce stable and efficient systems, which allows you to focus on your functional requirements.

Contributors

The following individuals and organizations contributed to this document:
• Olawale Oladehin, Solutions Architect, Specialist, IoT
• Dan Griffin, Software Development Engineer, IoT
• Catalin Vieru, Solutions Architect, Specialist, IoT
• Brett Francis, Product Solutions Architect, IoT
• Craig Williams, Partner Solutions Architect, IoT
• Philip Fitzsimons, Sr Manager, Well-Architected, Amazon Web Services

Document Revisions

December 2019: Updated to include additional guidance on IoT SDK usage, bootstrapping, device lifecycle management, and IoT
November 2018: First publication

Operational Excellence Pillar
AWS Well-Architected Framework
July 2020

This paper has
been archived. The latest version is now available at: https://docs.aws.amazon.com/wellarchitected/latest/operational-excellence-pillar/welcome.html

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 1
Operational Excellence 1
Design Principles 1
Definition 2
Organization 2
Organization Priorities 3
Operating Model 6
Organizational Culture 14
Prepare 18
Design Telemetry 18
Improve Flow 21
Mitigate Deployment Risks 24
Operational Readiness 26
Operate 30
Understanding Workload Health 30
Understanding Operational Health 33
Responding to Events 35
Evolve 39
Learn, Share, and Improve 39
Conclusion 42
Contributors 42
Further Reading 42
Document Revisions 43

Abstract

The focus of this paper is the operational excellence pillar of the AWS Well-Architected Framework. It provides guidance to help you apply best practices in the design, delivery, and maintenance of AWS workloads.

Introduction

The AWS Well-Architected Framework helps you understand the benefits and risks of decisions you make while building workloads on AWS. By using the Framework, you will learn operational and architectural best practices for designing and operating reliable, secure,
efficient, and cost-effective workloads in the cloud. It provides a way to consistently measure your operations and architectures against best practices and identify areas for improvement. We believe that having Well-Architected workloads that are designed with operations in mind greatly increases the likelihood of business success.

The framework is based on five pillars:
• Operational Excellence
• Security
• Reliability
• Performance Efficiency
• Cost Optimization

This paper focuses on the operational excellence pillar and how to apply it as the foundation of your well-architected solutions. Operational excellence is challenging to achieve in environments where operations is perceived as a function isolated and distinct from the lines of business and development teams that it supports. By adopting the practices in this paper, you can build architectures that provide insight into their status, are enabled for effective and efficient operation and event response, and can continue to improve and support your business goals.

This paper is intended for those in technology roles, such as chief technology officers (CTOs), architects, developers, and operations team members. After reading this paper, you will understand AWS best practices and the strategies to use when designing cloud architectures for operational excellence. This paper does not provide implementation details or architectural patterns. However, it does include references to appropriate resources for this information.

Operational Excellence

The operational excellence pillar includes how your organization supports your business objectives, your ability to run workloads effectively, gain insight into their operations, and continuously improve supporting processes and procedures to deliver business value.

Design Principles

There are five design principles for operational excellence in the cloud:

• Perform operations as code: In the cloud, you can apply the same
engineering discipline that you use for application code to your entire environment. You can define your entire workload (applications, infrastructure, etc.) as code and update it with code. You can script your operations procedures and automate their execution by triggering them in response to events. By performing operations as code, you limit human error and enable consistent responses to events.

• Make frequent, small, reversible changes: Design workloads to allow components to be updated regularly to increase the flow of beneficial changes into your workload. Make changes in small increments that can be reversed if they fail, to aid in the identification and resolution of issues introduced to your environment (without affecting customers when possible).

• Refine operations procedures frequently: As you use operations procedures, look for opportunities to improve them. As you evolve your workload, evolve your procedures appropriately. Set up regular game days to review and validate that all procedures are effective and that teams are familiar with them.

• Anticipate failure: Perform “pre-mortem” exercises to identify potential sources of failure so that they can be removed or mitigated. Test your failure scenarios and validate your understanding of their impact. Test your response procedures to ensure they are effective and that teams are familiar with their execution. Set up regular game days to test workload and team responses to simulated events.

• Learn from all operational failures: Drive improvement through lessons learned from all operational events and failures. Share what is learned across teams and through the entire organization.

Definition

Operational excellence in the cloud is composed of four areas:
• Organization
• Prepare
• Operate
• Evolve

Your organization's leadership defines business objectives. Your organization must understand requirements and priorities and use these to organize and conduct
work to support the achievement of business outcomes. Your workload must emit the information necessary to support it. Implementing services to enable integration, deployment, and delivery of your workload will enable an increased flow of beneficial changes into production by automating repetitive processes.

There may be risks inherent in the operation of your workload. You must understand those risks and make an informed decision to enter production. Your teams must be able to support your workload. Business and operational metrics derived from desired business outcomes will enable you to understand the health of your workload and your operations activities, and respond to incidents. Your priorities will change as your business needs and business environment change. Use these as a feedback loop to continually drive improvement for your organization and the operation of your workload.

Organization

You need to understand your organization's priorities, your organizational structure, and how your organization supports your team members, so that they can support your business outcomes. To enable operational excellence, you must understand the following:
• Organization Priorities
• Operating Model
• Organizational Culture

Organization Priorities

Your teams need to have a shared understanding of your entire workload, their role in it, and shared business goals to set the priorities that will enable business success. Well-defined priorities will maximize the benefits of your efforts. Review your priorities regularly so that they can be updated as needs change.

Evaluate external customer needs: Involve key stakeholders, including business, development, and operations teams, to determine where to focus efforts on external customer needs.

Evaluate internal customer needs: Involve key stakeholders, including business, development, and operations teams, to determine where to focus efforts on internal customer needs. Evaluating customer needs
will ensure that you have a thorough understanding of the support that is required to achieve business outcomes. Use your established priorities to focus your improvement efforts where they will have the greatest impact (for example, developing team skills, improving workload performance, reducing costs, automating runbooks, or enhancing monitoring). Update your priorities as needs change.

Evaluate governance requirements: Ensure that you are aware of guidelines or obligations defined by your organization that may mandate or emphasize specific focus. Evaluate internal factors, such as organization policy, standards, and requirements. Validate that you have mechanisms to identify changes to governance. If no governance requirements are identified, ensure that you have applied due diligence to this determination.

Evaluate external compliance requirements: Ensure that you are aware of guidelines or obligations that may mandate or emphasize specific focus. Evaluate external factors, such as regulatory compliance requirements and industry standards. Validate that you have mechanisms to identify changes to compliance requirements. If no compliance requirements are identified, ensure that you have applied due diligence to this determination. If there are external regulatory or compliance requirements that apply to your organization, you should use the resources provided by AWS Cloud Compliance to help educate your teams so that they can determine the impact on your priorities.

Evaluate threat landscape: Evaluate threats to the business (for example, competition, business risk and liabilities, operational risks, and information security threats) and maintain current information in a risk registry. Include the impact of risks when determining where to focus efforts.

The Well-Architected Framework emphasizes learning, measuring, and improving. It provides a consistent approach for you to evaluate architectures and implement designs that will
scale over time. AWS provides the AWS Well-Architected Tool to help you review your approach prior to development, the state of your workloads prior to production, and the state of your workloads in production. You can compare them to the latest AWS architectural best practices, monitor the overall status of your workloads, and gain insight into potential risks.

Enterprise Support customers are eligible for a guided Well-Architected Review of their mission-critical workloads to measure their architectures against AWS best practices. They are also eligible for an Operations Review designed to help them identify gaps in their approach to operating in the cloud. The cross-team engagement of these reviews helps to establish a common understanding of your workloads and how team roles contribute to success. The needs identified through the review can help shape your priorities.

AWS Trusted Advisor is a tool that provides access to a core set of checks that recommend optimizations that may help shape your priorities. Business and Enterprise Support customers receive access to additional checks focusing on security, reliability, performance, and cost optimization that can further help shape their priorities.

Evaluate tradeoffs: Evaluate the impact of tradeoffs between competing interests or alternative approaches to help make informed decisions when determining where to focus operations efforts or choosing a course of action. For example, accelerating speed to market for new features may be emphasized over cost optimization, or you may choose a relational database for non-relational data to simplify the effort to migrate a system, rather than migrating to a database optimized for your data type and updating your application.

AWS can help you educate your teams about AWS and its services to increase their understanding of how their choices can have an impact on your workload. You should use the resources provided by AWS Support (AWS Knowledge Center, AWS Discussion Forums, and AWS Support Center) and AWS Documentation to educate your teams. Reach out to AWS Support through AWS Support Center for help with your AWS questions. AWS also shares best practices and patterns that we have learned through the operation of AWS in The Amazon Builders' Library. A wide variety of other useful information is available through the AWS Blog and The Official AWS Podcast.

Manage benefits and risks: Manage benefits and risks to make informed decisions when determining where to focus efforts. For example, it may be beneficial to deploy a workload with unresolved issues so that significant new features can be made available to customers. It may be possible to mitigate associated risks, or it may become unacceptable to allow a risk to remain, in which case you will take action to address the risk.

You might find that you want to emphasize a small subset of your priorities at some point in time. Use a balanced approach over the long term to ensure the development of needed capabilities and management of risk. Review your priorities regularly and update them as needs change.

Resources
Refer to the following resources to learn more about AWS best practices for organizational priorities.

Documentation
• AWS Trusted Advisor
• AWS Cloud Compliance
• AWS Well-Architected Framework
• AWS Business Support
• AWS Enterprise Support
• AWS Enterprise Support Entitlements
• AWS Support Cloud Operations Reviews
• AWS Cloud Adoption Framework

Operating Model
Your teams must understand their part in achieving business outcomes. Teams need to understand their roles in the success of other teams, the role of other teams in their success, and to have shared goals. Understanding responsibility, ownership, how decisions are made, and who has authority to make decisions will help focus efforts and maximize the benefits from your teams. The needs of a team will be shaped by their
industry, their organization, the makeup of the team, and the characteristics of their workload. It is unreasonable to expect a single operating model to be able to support all teams and their workloads. The number of operating models present in an organization is likely to increase with the number of development teams. You may need to use a combination of operating models.

Adopting standards and consuming services can simplify operations and limit the support burden in your operating model. The benefit of development efforts on shared standards is magnified by the number of teams who have adopted the standard and who will adopt new features. It's critical that mechanisms exist to request additions, changes, and exceptions to standards in support of the teams' activities. Without this option, standards become a constraint on innovation. Requests should be approved where viable and determined to be appropriate after an evaluation of benefits and risks.

A well-defined set of responsibilities will reduce the frequency of conflicting and redundant efforts. Business outcomes are easier to achieve when there is strong alignment and relationships between business, development, and operations teams.

Operating Model 2 by 2 Representations
These operating model 2 by 2 representations are illustrations to help you understand the relationships between teams in your environment. These diagrams focus on who does what and the relationships between teams, but we will also discuss governance and decision making in the context of these examples. Your teams may have responsibilities in multiple parts of multiple models, depending on the workloads they support. You may wish to break out more specialized discipline areas than the high-level ones described. There is the potential for endless variation on these models as you separate or aggregate activities, or overlay teams and provide more specific detail.

You may identify that you have overlapping or unrecognized capabilities across teams that can provide additional advantage or lead to efficiencies. You may also identify unsatisfied needs in your organization that you can plan to address. When evaluating organizational change, examine the trade-offs between models, where your individual teams exist within the models (now and after the change), how your teams' relationships and responsibilities will change, and whether the benefits merit the impact on your organization.

You can be successful using each of the following four operating models. Some models are more appropriate for specific use cases or at specific points in your development. Some of these models may provide advantages over the ones in use in your environment.
• Fully Separated Operating Model
• Separated Application Engineering and Operations (AEO) and Infrastructure Engineering and Operations (IEO) with Centralized Governance
• Separated AEO and IEO with Centralized Governance and a Service Provider
• Separated AEO and IEO with Decentralized Governance

Fully Separated Operating Model
In the following diagram, on the vertical axis we have applications and infrastructure. Applications refer to the workload serving a business outcome, and can be custom developed or purchased software. Infrastructure refers to the physical and virtual infrastructure and other software that supports that workload. On the horizontal axis we have engineering and operations. Engineering refers to the development, building, and testing of applications and infrastructure. Operations is the deployment, update, and ongoing support of applications and infrastructure.

In many organizations, this "fully separated" model is present. The activities in each quadrant are performed by a separate team. Work is passed between teams through mechanisms such as work requests, work queues, tickets, or by using an IT service management (ITSM) system. The transition of tasks to or between teams
increases complexity and creates bottlenecks and delays. Requests may be delayed until they are a priority. Defects identified late may require significant rework and may have to pass through the same teams and their functions once again. If there are incidents that require action by engineering teams, their responses are delayed by the hand-off activity.

There is a higher risk of misalignment when business, development, and operations teams are organized around the activities or functions that are being performed. This can lead to teams focusing on their specific responsibilities instead of focusing on achieving business outcomes. Teams may be narrowly specialized, physically isolated, or logically isolated, hindering communication and collaboration.

Separated AEO and IEO with Centralized Governance
This "Separated AEO and IEO" model follows a "you build it, you run it" methodology. Your application engineers and developers perform both the engineering and the operation of their workloads. Similarly, your infrastructure engineers perform both the engineering and operation of the platforms they use to support application teams.

For this example, we are going to treat governance as centralized. Standards are distributed, provided, or shared to the application teams. You should use tools or services that enable you to centrally govern your environments across accounts, such as AWS Organizations. Services like AWS Control Tower expand this management capability, enabling you to define blueprints (supporting your operating models) for the setup of accounts, apply ongoing governance using AWS Organizations, and automate provisioning of new accounts.

"You build it, you run it" does not mean that the application team is responsible for the full stack tool chain and platform. The platform engineering team provides a standardized set of services (for example, development tools, monitoring tools, backup and recovery tools, and network) to the application team. The platform team may also provide the application team access to approved cloud provider services, specific configurations of the same, or both. Mechanisms that provide a self-service capability for deploying approved services and configurations, such as AWS Service Catalog, can help limit delays associated with fulfillment requests while enforcing governance.

The platform team enables full stack visibility so that application teams can differentiate between issues with their application components and the services and infrastructure components their applications consume. The platform team may also provide assistance configuring these services, and guidance on how to improve the application teams' operations.

As discussed previously, it's critical that mechanisms exist for the application team to request additions, changes, and exceptions to standards in support of the teams' activities and the innovation of their application.

The Separated AEO and IEO model provides strong feedback loops to application teams. Day-to-day operation of a workload increases contact with customers, either through direct interaction or indirectly through support and feature requests. This heightened visibility allows application teams to address issues more quickly. The deeper engagement and closer relationship provides insight into customer needs and enables more rapid innovation. All of this is also true for the platform team supporting the application teams.

Adopted standards may be pre-approved for use, reducing the amount of review necessary to enter production. Consuming supported and tested standards provided by the platform team may reduce the frequency of issues with those services. Adoption of standards enables application teams to focus on differentiating their workloads.

Separated AEO and IEO with Centralized Governance and a Service Provider
This "Separated AEO and IEO" model follows a "you build it, you run it" methodology.
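The pre-approved-standards pattern described above — deploy approved configurations through self-service immediately, and route everything else through an exception request evaluated for benefit and risk — can be sketched in a few lines. This is an illustrative sketch only, not an AWS API: the APPROVED_SERVICES allow-list and request_deployment function are hypothetical stand-ins for the self-service and exception-request mechanisms that a service such as AWS Service Catalog provides in practice.

```python
# Illustrative sketch only -- not an AWS API. APPROVED_SERVICES and
# request_deployment are hypothetical stand-ins for the self-service
# and exception-request mechanisms described in the text.

# Configurations the platform team has tested and pre-approved.
APPROVED_SERVICES = {
    "database": {"small", "medium"},
    "monitoring": {"standard"},
}

def request_deployment(service: str, configuration: str) -> str:
    """Approve self-service deployment of pre-approved configurations;
    route anything else to an exception request (to be evaluated for
    benefit and risk) rather than rejecting it outright, so that
    standards do not become a constraint on innovation."""
    if configuration in APPROVED_SERVICES.get(service, set()):
        return "approved"
    return "exception-requested"

print(request_deployment("database", "small"))    # approved
print(request_deployment("database", "xlarge"))   # exception-requested
```

The design point is that the default answer to an unapproved request is "evaluate it," not "deny it," which keeps governance centralized without blocking the application teams' innovation.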
Your application engineers and developers perform both the engineering and the operation of their workloads. Your organization may not have the existing skills or team members to support a dedicated platform engineering and operations team, or you may not want to make the investments of time and effort to do so. Alternatively, you may wish to have a platform team that is focused on creating capabilities that will differentiate your business, but you want to offload the undifferentiated day-to-day operations to an outsourcer. Managed services providers, such as AWS Managed Services, AWS Managed Services Partners, or Managed Services Providers in the AWS Partner Network, provide expertise implementing cloud environments, and support your security and compliance requirements and business goals.

For this variation, we are going to treat governance as centralized and managed by the platform team, with account creation and policies managed with AWS Organizations and AWS Control Tower. This model does require you to modify your mechanisms to work with those of your service provider. It does not address the bottlenecks and delays created by the transition of tasks between teams, including your service provider, or the potential rework related to the late identification of defects. You gain the advantage of your providers' standards, best practices, processes, and expertise. You also gain the benefits of their ongoing development of their service offerings. Adding managed services to your operating model can save you time and resources, and lets you keep your internal teams lean and focused on strategic outcomes that will differentiate your business, rather than developing new skills and capabilities.

Separated AEO and IEO with Decentralized Governance
This "Separated AEO and IEO" model follows a "you build it, you run it" methodology. Your application engineers and developers perform both the engineering and the operation of their workloads. Similarly, your infrastructure engineers perform both the engineering and operation of the platforms they use to support application teams.

For this example, we are going to treat governance as decentralized. Standards are still distributed, provided, or shared to application teams by the platform team, but application teams are free to engineer and operate new platform capabilities in support of their workload. In this model there are fewer constraints on the application team, but that comes with a significant increase in responsibilities. Additional skills, and potentially team members, must be present to support the additional platform capabilities. The risk of significant rework is increased if skill sets are not adequate and defects are not recognized early. You should enforce policies that are not specifically delegated to application teams. Use tools or services that enable you to centrally govern your environments across accounts, such as AWS Organizations. Services like AWS Control Tower expand this management capability, enabling you to define blueprints (supporting your operating models) for the setup of accounts, apply ongoing governance using AWS Organizations, and automate provisioning of new accounts.

It's beneficial to have mechanisms for the application team to request additions and changes to standards. They may be able to contribute new standards that can provide benefit to other application teams. The platform teams may decide that providing direct support for these additional capabilities is an effective support for business outcomes.

This model limits constraints on innovation, but with significant skill and team member requirements. It addresses many of the bottlenecks and delays created by the transition of tasks between teams, while still promoting the development of effective relationships between teams and customers.

Relationships and
Ownership
Your operating model defines the relationships between teams and supports identifiable ownership and responsibility.

Resources have identified owners: Understand who has ownership of each application, workload, platform, and infrastructure component, what business value is provided by that component, and why that ownership exists. Understanding the business value of these individual components, and how they support business outcomes, informs the processes and procedures applied against them.

Processes and procedures have identified owners: Understand who has ownership of the definition of individual processes and procedures, why those specific processes and procedures are used, and why that ownership exists. Understanding the reasons that specific processes and procedures are used enables identification of improvement opportunities.

Operations activities have identified owners responsible for their performance: Understand who has responsibility to perform specific activities on defined workloads, and why that responsibility exists. Understanding responsibility for the performance of operations activities informs who will perform the action, validate the result, and provide feedback to the owner of the activity.

Team members know what they are responsible for: Understanding your role informs the prioritization of your tasks. This enables team members to recognize needs and respond appropriately.

Mechanisms exist to identify responsibility and ownership: Where no individual or team is identified, there are defined escalation paths to someone with the authority to assign ownership, or to plan for that need to be addressed.

Mechanisms exist to request additions, changes, and exceptions: You are able to make requests to owners of processes, procedures, and resources. Make informed decisions to approve requests where viable and determined to be appropriate after an evaluation of benefits and risks.

Responsibilities between teams are predefined or negotiated: There are defined or negotiated agreements between teams describing how they work with and support each other (for example, response times, service level objectives, or service level agreements). Understanding the impact of the teams' work on business outcomes, and the outcomes of other teams and organizations, informs the prioritization of their tasks and enables them to respond appropriately.

When responsibility and ownership are undefined or unknown, you are at risk of both not addressing necessary activities in a timely fashion, and of redundant and potentially conflicting efforts emerging to address those needs.

Resources
Refer to the following resources to learn more about AWS best practices for operations design.

Videos
• AWS re:Invent 2019: [REPEAT 1] How to ensure configuration compliance (MGT303-R1)
• AWS re:Invent 2019: Automate everything: Options and best practices (MGT304)

Documentation
• AWS Managed Services
• AWS Organizations Features
• AWS Control Tower Features

Organizational Culture
Provide support for your team members so that they can be more effective in taking action and supporting your business outcomes.

Executive sponsorship: Senior leadership clearly sets expectations for the organization and evaluates success. Senior leadership is the sponsor, advocate, and driver for the adoption of best practices and the evolution of the organization.

Team members are empowered to take action when outcomes are at risk: The workload owner has defined guidance and scope, empowering team members to respond when outcomes are at risk. Escalation mechanisms are used to get direction when events are outside of the defined scope.

Escalation is encouraged: Team members have mechanisms and are encouraged to escalate concerns to decision makers and stakeholders if they believe outcomes are at risk. Escalation should be performed early and often so that risks can be identified and prevented from
causing incidents.

Communications are timely, clear, and actionable: Mechanisms exist and are used to provide timely notice to team members of known risks and planned events. Necessary context, details, and time (when possible) are provided to support determining if action is necessary, what action is required, and to take action in a timely manner. For example, providing notice of software vulnerabilities so that patching can be expedited, or providing notice of planned sales promotions so that a change freeze can be implemented to avoid the risk of service disruption. Planned events can be recorded in a change calendar or maintenance schedule so that team members can identify what activities are pending. On AWS, AWS Systems Manager Change Calendar can be used to record these details. It supports programmatic checks of calendar status to determine if the calendar is open or closed to activity at a particular point of time. Operations activities may be planned around specific "approved" windows of time that are reserved for potentially disruptive activities. AWS Systems Manager Maintenance Windows allows you to schedule activities against instances and other supported resources to automate the activities and make those activities discoverable.

Experimentation is encouraged: Experimentation accelerates learning and keeps team members interested and engaged. An undesired result is a successful experiment that has identified a path that will not lead to success. Team members are not punished for successful experiments with undesired results. Experimentation is required for innovation to happen and to turn ideas into outcomes.

Team members are enabled and encouraged to maintain and grow their skill sets: Teams must grow their skill sets to adopt new technologies, and to support changes in demand and responsibilities in support of your workloads. Growth of skills in new technologies is frequently a source of team member satisfaction and supports innovation. Support your team members' pursuit and maintenance of industry certifications that validate and acknowledge their growing skills. Cross-train to promote knowledge transfer and reduce the risk of significant impact when you lose skilled and experienced team members with institutional knowledge. Provide dedicated, structured time for learning.

AWS provides resources, including the AWS Getting Started Resource Center, AWS Blogs, AWS Online Tech Talks, AWS Events and Webinars, and the AWS Well-Architected Labs, that provide guidance, examples, and detailed walkthroughs to educate your teams. AWS also shares best practices and patterns that we have learned through the operation of AWS in The Amazon Builders' Library, and a wide variety of other useful educational material through the AWS Blog and The Official AWS Podcast.

You should take advantage of the education resources provided by AWS, such as the Well-Architected Labs, AWS Support (AWS Knowledge Center, AWS Discussion Forums, and AWS Support Center), and AWS Documentation to educate your teams. Reach out to AWS Support through AWS Support Center for help with your AWS questions. AWS Training and Certification provides some free training through self-paced digital courses on AWS fundamentals. You can also register for instructor-led training to further support the development of your teams' AWS skills.

Resource teams appropriately: Maintain team member capacity, and provide tools and resources to support your workload needs. Overtasking team members increases the risk of incidents resulting from human error. Investments in tools and resources (for example, providing automation for frequently executed activities) can scale the effectiveness of your team, enabling them to support additional activities.

Diverse opinions are encouraged and sought within and across teams: Leverage cross-organizational diversity to seek multiple unique perspectives. Use these perspectives to increase innovation, challenge your
assumptions, and reduce the risk of confirmation bias. Grow inclusion, diversity, and accessibility within your teams to gain beneficial perspectives.

Organizational culture has a direct impact on team member job satisfaction and retention. Enable the engagement and capabilities of your team members to enable the success of your business.

Resources
Refer to the following resources to learn more about AWS best practices for operations design.

Videos
• AWS re:Invent 2019: [REPEAT 1] How to ensure configuration compliance (MGT303-R1)
• AWS re:Invent 2019: Automate everything: Options and best practices (MGT304)

Documentation
• AWS Managed Services
• AWS Managed Services Service Description
• AWS Organizations Features
• AWS Control Tower Features

Prepare
To prepare for operational excellence, you have to understand your workloads and their expected behaviors. You will then be able to design them to provide insight into their status, and build the procedures to support them.

To prepare for operational excellence, you need to perform the following:
• Design Telemetry
• Improve Flow
• Mitigate Deployment Risks
• Understand Operational Readiness

Design Telemetry
Design your workload so that it provides the information necessary for you to understand its internal state (for example, metrics, logs, events, and traces) across all components, in support of observability and investigating issues. Iterate to develop the telemetry necessary to monitor the health of your workload, identify when outcomes are at risk, and enable effective responses.

In AWS, you can emit and collect logs, metrics, and events from your applications and workload components to enable you to understand their internal state and health. You can integrate distributed tracing to track requests as they travel through your workload. Use this data to understand how your application and underlying components interact, and to analyze issues and performance. When instrumenting your workload, capture a broad set of information to enable situational awareness (for example, changes in state, user activity, privilege access, utilization counters), knowing that you can use filters to select the most useful information over time.

Implement application telemetry: Instrument your application code to emit information about its internal state, status, and achievement of business outcomes, for example, queue depth, error messages, and response times. Use this information to determine when a response is required.

You should install and configure the unified Amazon CloudWatch agent to send system-level application logs and advanced metrics from your EC2 instances and physical servers to Amazon CloudWatch. Generate and publish custom metrics using the AWS CLI or the CloudWatch API. Ensure that you publish insightful business metrics as well as technical metrics to help you understand your customers' behaviors. You can send logs directly from your application to CloudWatch using the CloudWatch Logs API, or send events using the AWS SDK and Amazon EventBridge. Insert logging statements into your AWS Lambda code to automatically store them in CloudWatch Logs.

Implement and configure workload telemetry: Design and configure your workload to emit information about its internal state and current status, for example, API call volume, HTTP status codes, and scaling events. Use this information to help determine when a response is required. Use a service like Amazon CloudWatch to aggregate logs and metrics from workload components (for example, API logs from AWS CloudTrail, AWS Lambda metrics, Amazon VPC Flow Logs, and other services).

Implement user activity telemetry: Instrument your application code to emit information about user activity, for example, click streams, or started, abandoned, and completed transactions. Use this information to help understand how the application is used and its patterns of usage, and to determine when a response is required.

Implement dependency telemetry: Design and configure your workload to emit information about the status (for example, reachability or response time) of resources it depends on. Examples of external dependencies can include external databases, DNS, and network connectivity. Use this information to determine when a response is required.

Implement transaction traceability: Implement your application code, and configure your workload components, to emit information about the flow of transactions across the workload. Use this information to determine when a response is required, and to assist you in identifying the factors contributing to an issue. On AWS, you can use distributed tracing services such as AWS X-Ray to collect and record traces as transactions travel through your workload, generate maps to see how transactions flow across your workload and services, gain insight into the relationships between components, and identify and analyze issues in real time.

Iterate and develop telemetry as workloads evolve to ensure that you continue to receive the information necessary to gain insight into the health of your workload.

Resources
Refer to the following resources to learn more about AWS best practices for operations design.

Videos
• AWS re:Invent 2016: Infrastructure Continuous Delivery Using AWS CloudFormation (DEV313)
• AWS re:Invent 2016: DevOps on AWS: Accelerating Software Delivery with AWS Developer Tools (DEV201)
• AWS CodeStar: The Central Experience to Quickly Start Developing Applications on AWS

Documents
• Accessing Amazon CloudWatch Logs for AWS Lambda
• Monitoring CloudTrail Log Files with Amazon CloudWatch Logs
• Publishing Flow Logs to CloudWatch Logs

Documentation
• Enhancing workload observability using Amazon CloudWatch Embedded Metric Format
• Getting Started With Amazon CloudWatch
• Store and Monitor OS &
Application Log Files with Amazon CloudWatch • HighResolution Custom Metrics and Alarms for Amaz on CloudWatch • Monitoring AWS Health Events with Amazon CloudWatch Events • AWS CloudFormation Documentation • AWS Developer Tools • Set Up a CI/CD Pipeline on AWS • AWS X Ray • AWS Tagging Strategies • Enhancing workload observability using Amazon CloudWatch Embedded Metric Format ArchivedAmazon Web Services Operational Excellence Pillar 21 Improve Flow Adopt approaches that improve the flow of changes into production and that enable refactoring fast feedback on quality and bug fixing These accelerate beneficial changes entering production limit issues deployed and enable rapid identification and remediation of issues introduced thro ugh deployment activities In AWS you can view your entire workload (applications infrastructure policy governance and operations) as code It can all be defined in and updated using code This means you can apply the same engineering discipline that you use for application code to every element of your stack Use version control: Use version control to enable tracking of changes and releases Many AWS services offer version control capabilities Use a revision or source control system like AWS CodeCommit to manage code and other artifacts such as version controlled AWS CloudFormation templates of your infras tructure Test and validate changes: Test and validate changes to help limit and detect errors Automate testing to reduce errors caused by manual processes and reduce the level of effort to test On AWS you can create temporary parallel environments to lower the risk effort and cost of expe rimentation and testing Automate the deployment of these environments using AWS CloudFormation to ensure consistent implementations of your temporary environments Use configuration management systems : Use configuration management systems to make and track configuration changes These systems reduce errors caused by manual processes and reduce the level of 
effort to deploy changes Use build and deployment management systems: Use build and deployment management systems These systems reduce errors caused by manual processes and reduce the level of effort to deploy changes In AWS you can build Continuous Integration/Continuous Deployment (CI/CD) pipelines using services like the AWS Developer Tools (for example AWS CodeCommit AWS Co deBuild AWS CodePipeline AWS CodeDeploy and AWS CodeStar ) ArchivedAmazon Web Services Operational Excellence Pillar 22 Perform patch management: Perform patc h management to gain features address issues and remain compliant with governance Automate patch management to reduce errors caused by manual processes and reduce the level of effort to patch Patch and vulnerability management are part of your benefit and risk management activities It is preferable to have immutable infrastructures and deploy workloads in verified known good states Where that is not viable patching in place is the remaining option Updating machine images container images or Lambd a custom runtimes and additional libraries to remove vulnerabilities are part of patch management You should manage updates to Amazon Machine Images (AMIs) for Linux or Windows Server images using EC2 Image Builder You can use Amazon Elastic Container Registry with your existing pipeline to manage Amazon ECS images and manage Amazon EKS images AWS Lambda includes version management features Patching should not be performed on production systems without first testing in a safe environment Patches should only be applied if they support an operational or business outcome On AWS you can use AWS Systems Manager Patch Manager to automate the process of patching managed systems and schedule the activity using AWS Systems Manager Maintenance Windows Share design standards: Share best practices across teams to increase awareness and maximize the benefits of development efforts On AWS application compute infrastructure and operations can be defined and 
managed using code methodologies. This allows for easy release, sharing, and adoption. Many AWS services and resources are designed to be shared across accounts, enabling you to share created assets and learnings across your teams. For example, you can share CodeCommit repositories, Lambda functions, Amazon S3 buckets, and AMIs to specific accounts. When you publish new resources or updates, use Amazon SNS to provide cross-account notifications. Subscribers can use Lambda to get new versions. If shared standards are enforced in your organization, it's critical that mechanisms exist to request additions, changes, and exceptions to standards in support of teams' activities. Without this option, standards become a constraint on innovation.

Implement practices to improve code quality: Implement practices to improve code quality and minimize defects, for example, test-driven development, code reviews, and standards adoption.

Use multiple environments: Use multiple environments to experiment, develop, and test your workload. Use increasing levels of controls as environments approach production to gain confidence your workload will operate as intended when deployed.

Make frequent, small, reversible changes: Frequent, small, and reversible changes reduce the scope and impact of a change. This eases troubleshooting, enables faster remediation, and provides the option to roll back a change.

Fully automate integration and deployment: Automate build, deployment, and testing of the workload. This reduces errors caused by manual processes and reduces the effort to deploy changes. Apply metadata using Resource Tags and AWS Resource Groups, following a consistent tagging strategy, to enable identification of your resources. Tag your resources for organization, cost accounting, access controls, and targeting the execution of automated operations activities.

Resources

Refer to the following resources to learn more about AWS best practices for operations design.
Videos

• AWS re:Invent 2016: Infrastructure Continuous Delivery Using AWS CloudFormation (DEV313)
• AWS re:Invent 2016: DevOps on AWS: Accelerating Software Delivery with AWS Developer Tools (DEV201)
• AWS CodeStar: The Central Experience to Quickly Start Developing Applications on AWS

Documentation

• What Is AWS Resource Groups
• Getting Started With Amazon CloudWatch
• Store and Monitor OS & Application Log Files with Amazon CloudWatch
• High-Resolution Custom Metrics and Alarms for Amazon CloudWatch
• Monitoring AWS Health Events with Amazon CloudWatch Events
• AWS CloudFormation Documentation
• AWS Developer Tools
• Set Up a CI/CD Pipeline on AWS
• AWS X-Ray
• AWS Tagging Strategies

Mitigate Deployment Risks

Adopt approaches that provide fast feedback on quality and enable rapid recovery from changes that do not have desired outcomes. Using these practices mitigates the impact of issues introduced through the deployment of changes. The design of your workload should include how it will be deployed, updated, and operated. You will want to implement engineering practices that align with defect reduction and quick and safe fixes.

Plan for unsuccessful changes: Plan to revert to a known good state, or remediate in the production environment, if a change does not have the desired outcome. This preparation reduces recovery time through faster responses.

Test and validate changes: Test changes and validate the results at all lifecycle stages to confirm new features and minimize the risk and impact of failed deployments. On AWS, you can create temporary parallel environments to lower the risk, effort, and cost of experimentation and testing. Automate the deployment of these environments using AWS CloudFormation to ensure consistent implementations of your temporary environments.

Use deployment management systems: Use deployment management systems to track and implement change. This reduces errors caused by manual processes
and reduces the effort to deploy changes. In AWS, you can build Continuous Integration/Continuous Deployment (CI/CD) pipelines using services like the AWS Developer Tools (for example, AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeDeploy, and AWS CodeStar).

Have a change calendar, and track when significant business or operational activities or events are planned that may be impacted by implementation of change. Adjust activities to manage risk around those plans. AWS Systems Manager Change Calendar provides a mechanism to document blocks of time as open or closed to changes and why, and to share that information with other AWS accounts. AWS Systems Manager Automation scripts can be configured to adhere to the change calendar state. AWS Systems Manager Maintenance Windows can be used to schedule the performance of AWS SSM Run Command or Automation scripts, AWS Lambda invocations, or AWS Step Functions activities at specified times. Mark these activities in your change calendar so that they can be included in your evaluation.

Test using limited deployments: Test with limited deployments alongside existing systems to confirm desired outcomes prior to full-scale deployment. For example, use deployment canary testing or one-box deployments.

Deploy using parallel environments: Implement changes onto parallel environments, and then transition over to the new environment. Maintain the prior environment until there is confirmation of successful deployment. Doing so minimizes recovery time by enabling rollback to the previous environment.

Deploy frequent, small, reversible changes: Use frequent, small, and reversible changes to reduce the scope of a change. This results in easier troubleshooting and faster remediation, with the option to roll back a change.

Fully automate integration and deployment: Automate build, deployment, and testing of the workload. This reduces errors caused by manual processes and reduces the effort to deploy
changes.

Automate testing and rollback: Automate testing of deployed environments to confirm desired outcomes. Automate rollback to a previous known good state when outcomes are not achieved, to minimize recovery time and reduce errors caused by manual processes.

Resources

Refer to the following resources to learn more about AWS best practices for operations design.

Videos

• AWS re:Invent 2016: Infrastructure Continuous Delivery Using AWS CloudFormation (DEV313)
• AWS re:Invent 2016: DevOps on AWS: Accelerating Software Delivery with AWS Developer Tools (DEV201)
• AWS CodeStar: The Central Experience to Quickly Start Developing Applications on AWS

Documentation

• Getting Started With Amazon CloudWatch
• Store and Monitor OS & Application Log Files with Amazon CloudWatch
• High-Resolution Custom Metrics and Alarms for Amazon CloudWatch
• Monitoring AWS Health Events with Amazon CloudWatch Events
• AWS CloudFormation Documentation
• AWS Developer Tools
• Set Up a CI/CD Pipeline on AWS
• AWS X-Ray
• AWS Tagging Strategies

Operational Readiness

Evaluate the operational readiness of your workload, processes, procedures, and personnel to understand the operational risks related to your workload. You should use a consistent process (including manual or automated checklists) to know when you are ready to go live with your workload or a change. This will also enable you to find any areas that you need to make plans to address. You will have runbooks that document your routine activities and playbooks that guide your processes for issue resolution.

Ensure personnel capability: Have a mechanism to validate that you have the appropriate number of trained personnel to provide support for operational needs. Train personnel and adjust personnel capacity as necessary to maintain effective support. You will need to have enough team members to cover all activities (including on-call). Ensure that your teams have the necessary skills to
be successful, with training on your workload, your operations tools, and AWS.

AWS provides resources, including the AWS Getting Started Resource Center, AWS Blogs, AWS Online Tech Talks, AWS Events and Webinars, and the AWS Well-Architected Labs, that provide guidance, examples, and detailed walkthroughs to educate your teams. Additionally, AWS Training and Certification provides some free training through self-paced digital courses on AWS fundamentals. You can also register for instructor-led training to further support the development of your teams' AWS skills.

Ensure consistent review of operational readiness: Ensure you have a consistent review of your readiness to operate a workload. Reviews must include, at a minimum, the operational readiness of the teams and the workload, and security requirements. Implement review activities in code and trigger automated review in response to events where appropriate, to ensure consistency, speed of execution, and reduced errors caused by manual processes. You should automate workload configuration testing by making baselines using AWS Config and checking your configurations using AWS Config rules. You can evaluate security requirements and compliance using the services and features of AWS Security Hub. These services will aid in determining if your workloads are aligned with best practices and standards.

Use runbooks to perform procedures: Runbooks are documented procedures to achieve specific outcomes. Enable consistent and prompt responses to well-understood events by documenting procedures in runbooks. Implement runbooks as code, and trigger the execution of runbooks in response to events where appropriate, to ensure consistency, speed responses, and reduce errors caused by manual processes.

Use playbooks to identify issues: Playbooks are documented processes to investigate issues. Enable consistent and prompt responses to failure scenarios by documenting investigation processes in
playbooks. Implement playbooks as code, and trigger playbook execution in response to events where appropriate, to ensure consistency, speed responses, and reduce errors caused by manual processes.

AWS allows you to treat your operations as code, scripting your runbook and playbook activities to reduce the risk of human error. You can use Resource Tags or Resource Groups with your scripts to selectively execute based on criteria you have defined (for example, environment, owner, role, or version). You can use scripted procedures to enable automation by triggering the scripts in response to events. By treating both your operations and workloads as code, you can also script and automate the evaluation of your environments.

You should script procedures on your instances using AWS Systems Manager (SSM) Run Command, use AWS Systems Manager Automation to script actions and create workflows on instances and other resources, or use AWS Lambda serverless compute functions to script responses to events across AWS service APIs and your own custom interfaces. You can also use AWS Step Functions to coordinate multiple AWS services scripted into serverless workflows. Automate your responses by triggering these scripts using CloudWatch Events, and route desired events to additional operations support systems using Amazon EventBridge.

You should test your procedures, failure scenarios, and the success of your responses (for example, by holding game days and testing prior to going live) to identify areas you need to plan to address. On AWS, you can create temporary parallel environments to lower the risk, effort, and cost of experimentation and testing. Automate the deployment of these environments using AWS CloudFormation to ensure consistent implementations of your temporary environments. Perform failure injection testing in safe environments where there will be acceptable or no customer impact, and develop or revise appropriate responses.

Make
informed decisions to deploy systems and changes: Evaluate the capabilities of the team to support the workload, and the workload's compliance with governance. Evaluate these against the benefits of deployment when determining whether to transition a system or change into production. Understand the benefits and risks to make informed decisions. Use "pre-mortems" to anticipate failure and create procedures where appropriate. When you make changes to the checklists you use to evaluate your workloads, plan what you will do with live systems that no longer comply.

Resources

Refer to the following resources to learn more about AWS best practices for operational readiness.

Documentation

• AWS Lambda
• AWS Systems Manager
• AWS Config Rules – Dynamic Compliance Checking for Cloud Resources
• How to track configuration changes to CloudFormation stacks using AWS Config
• Amazon Inspector Update blog post
• AWS Events and Webinars
• AWS Training
• AWS Well-Architected Labs
• AWS launches Tag Policies
• Using AWS Systems Manager Change Calendar to prevent changes during critical events

Operate

Success is the achievement of business outcomes as measured by the metrics you define. By understanding the health of your workload and operations, you can identify when organizational and business outcomes may become at risk, or are at risk, and respond appropriately. To be successful, you must be able to:

• Understand Workload Health
• Understand Operational Health
• Respond to Events

Understanding Workload Health

Define, capture, and analyze workload metrics to gain visibility to workload events so that you can take appropriate action. Your team should be able to understand the health of your workload easily. You will want to use metrics based on workload outcomes to gain useful insights. You should use these metrics to implement dashboards with business and technical viewpoints
that will help team members make informed decisions. AWS makes it easy to bring together and analyze your workload logs so that you can generate metrics, understand the health of your workload, and gain insight from operations over time.

Identify key performance indicators: Identify key performance indicators (KPIs) based on desired business outcomes (for example, order rate, customer retention rate, and profit versus operating expense) and customer outcomes (for example, customer satisfaction). Evaluate KPIs to determine workload success.

Define workload metrics: Define workload metrics to measure the achievement of KPIs (for example, abandoned shopping carts, orders placed, cost, price, and allocated workload expense). Define workload metrics to measure the health of the workload (for example, interface response time, error rate, requests made, requests completed, and utilization). Evaluate metrics to determine if the workload is achieving desired outcomes and to understand the health of the workload.

You should send log data to a service like CloudWatch Logs and generate metrics from observations of necessary log content. CloudWatch has specialized features, like Amazon CloudWatch Application Insights for .NET and SQL Server and Container Insights, that can assist you by identifying and setting up key metrics, logs, and alarms across your specifically supported application resources and technology stack.

Collect and analyze workload metrics: Perform regular, proactive reviews of metrics to identify trends and determine where appropriate responses are needed. You should aggregate log data from your application, workload components, services, and API calls to a service like CloudWatch Logs. Generate metrics from observations of necessary log content to enable insight into the performance of operations activities.

In the AWS shared responsibility model, portions of monitoring are delivered to you through the AWS Personal Health Dashboard. This
dashboard provides alerts and remediation guidance when AWS is experiencing events that might affect you. Customers with Business and Enterprise Support subscriptions also get access to the AWS Health API, enabling integration to their event management systems.

On AWS, you can export your log data to Amazon S3, or send logs directly to Amazon S3, for long-term storage. Using AWS Glue, you can discover and prepare your log data in Amazon S3 for analytics, storing associated metadata in the AWS Glue Data Catalog. Amazon Athena, through its native integration with Glue, can then be used to analyze your log data, querying it using standard SQL. Using a business intelligence tool like Amazon QuickSight, you can visualize, explore, and analyze your data. An alternative solution would be to use the Amazon Elasticsearch Service and Kibana to collect, analyze, and display logs on AWS across multiple accounts and AWS Regions.

Establish workload metrics baselines: Establish baselines for metrics to provide expected values as the basis for comparison and identification of under- and over-performing components. Identify thresholds for improvement, investigation, and intervention.

Learn expected patterns of activity for workload: Establish patterns of workload activity to identify anomalous behavior so that you can respond appropriately if necessary.

CloudWatch, through the CloudWatch Anomaly Detection feature, applies statistical and machine learning algorithms to generate a range of expected values that represent normal metric behavior.

Alert when workload outcomes are at risk: Raise an alert when workload outcomes are at risk so that you can respond appropriately if necessary. Ideally, you have previously identified a metric threshold that you are able to alarm upon, or an event that you can use to trigger an automated response. You can also use CloudWatch Logs Insights to interactively search and analyze your log data using a purpose-built
query language. CloudWatch Logs Insights automatically discovers fields in logs from AWS services and custom log events in JSON. It scales with your log volume and query complexity, and gives you answers in seconds, helping you to search for the contributing factors of an incident.

Alert when workload anomalies are detected: Raise an alert when workload anomalies are detected so that you can respond appropriately if necessary. Your analysis of your workload metrics over time may establish patterns of behavior that you can quantify sufficiently to define an event or raise an alarm in response. Once trained, the CloudWatch Anomaly Detection feature can be used to alarm on detected anomalies, or can provide overlaid expected values onto a graph of metric data for ongoing comparison.

Validate the achievement of outcomes and the effectiveness of KPIs and metrics: Create a business-level view of your workload operations to help you determine if you are satisfying needs and to identify areas that need improvement to reach business goals. Validate the effectiveness of KPIs and metrics, and revise them if necessary.

AWS also has support for third-party log analysis systems and business intelligence tools through the AWS service APIs and SDKs (for example, Grafana, Kibana, and Logstash).

Resources

Refer to the following resources to learn more about AWS best practices for understanding workload health.

Videos

• AWS re:Invent 2015: Log, Monitor and Analyze your IT with Amazon CloudWatch (DVO315)
• AWS re:Invent 2016: Amazon CloudWatch Logs and AWS Lambda: A Match Made in Heaven (DEV301)

Documentation

• What Is Amazon CloudWatch Application Insights for .NET and SQL Server?
• Store and Monitor OS & Application Log Files with Amazon CloudWatch
• API & CloudFormation Support for Amazon CloudWatch Dashboards
• AWS Answers: Centralized Logging

Understanding Operational Health

Define, capture, and analyze operations metrics to gain visibility to workload events so that you can take appropriate action. Your team should be able to understand the health of your operations easily. You will want to use metrics based on operations outcomes to gain useful insights. You should use these metrics to implement dashboards with business and technical viewpoints that will help team members make informed decisions. AWS makes it easier to bring together and analyze your operations logs so that you can generate metrics, know the status of your operations, and gain insight from operations over time.

Identify key performance indicators: Identify key performance indicators (KPIs) based on desired business outcomes (for example, new features delivered) and customer outcomes (for example, customer support cases). Evaluate KPIs to determine operations success.

Define operations metrics: Define operations metrics to measure the achievement of KPIs (for example, successful deployments and failed deployments). Define operations metrics to measure the health of operations activities (for example, mean time to detect an incident (MTTD) and mean time to recovery (MTTR) from an incident). Evaluate metrics to determine if operations are achieving desired outcomes and to understand the health of your operations activities.

Collect and analyze operations metrics: Perform regular, proactive reviews of metrics to identify trends and determine where appropriate responses are needed. You should aggregate log data from the execution of your operations activities and operations API calls into a service like CloudWatch Logs. Generate metrics from observations of necessary log content to gain insight into the performance of operations activities.
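As an illustration, operations metrics such as MTTR and deployment success rate can be derived from log records with simple aggregation. The following Python sketch uses hypothetical record shapes; the field names (`detected`, `resolved`, `status`) are assumptions for illustration, not an AWS log schema:

```python
from datetime import datetime

# Hypothetical incident records (field names are illustrative, not an AWS format):
# each marks when an incident was detected and when it was resolved.
incidents = [
    {"detected": "2021-03-01T10:00:00", "resolved": "2021-03-01T10:45:00"},
    {"detected": "2021-03-05T14:00:00", "resolved": "2021-03-05T14:15:00"},
]

# Hypothetical deployment records with a final status.
deployments = [{"status": "SUCCEEDED"}, {"status": "FAILED"}, {"status": "SUCCEEDED"}]

FMT = "%Y-%m-%dT%H:%M:%S"


def mean_time_to_recovery_minutes(records):
    """Average minutes between detection and resolution (MTTR)."""
    durations = [
        (datetime.strptime(r["resolved"], FMT) - datetime.strptime(r["detected"], FMT)).total_seconds() / 60
        for r in records
    ]
    return sum(durations) / len(durations)


def deployment_success_rate(records):
    """Fraction of deployments that succeeded."""
    succeeded = sum(1 for r in records if r["status"] == "SUCCEEDED")
    return succeeded / len(records)


print(mean_time_to_recovery_minutes(incidents))  # 30.0
print(deployment_success_rate(deployments))
```

In practice, the same aggregation would run over log data collected in a service like CloudWatch Logs rather than in-memory lists; the point is that each operations metric is a simple function of well-defined log fields.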
On AWS, you can export your log data to Amazon S3, or send logs directly to Amazon S3, for long-term storage. Using AWS Glue, you can discover and prepare your log data in Amazon S3 for analytics, storing associated metadata in the AWS Glue Data Catalog. Amazon Athena, through its native integration with Glue, can then be used to analyze your log data, querying it using standard SQL. Using a business intelligence tool like Amazon QuickSight, you can visualize, explore, and analyze your data.

Establish operations metrics baselines: Establish baselines for metrics to provide expected values as the basis for comparison and identification of under- and over-performing operations activities.

Learn expected patterns of activity for operations: Establish patterns of operations activities to identify anomalous activity so that you can respond appropriately if necessary.

Alert when operations outcomes are at risk: Raise an alert when operations outcomes are at risk so that you can respond appropriately if necessary. Ideally, you have previously identified a metric that you are able to alarm upon, or an event that you can use to trigger an automated response. You can also use CloudWatch Logs Insights to interactively search and analyze your log data using a purpose-built query language. CloudWatch Logs Insights automatically discovers fields in logs from AWS services and custom log events in JSON. It scales with your log volume and query complexity, and gives you answers in seconds, helping you to search for the contributing factors of an incident.

Alert when operations anomalies are detected: Raise an alert when operations anomalies are detected so that you can respond appropriately if necessary. Your analysis of your operations metrics over time may establish patterns of behavior that you can quantify sufficiently to define an event or raise an alarm in response. Once trained, the CloudWatch Anomaly Detection feature can be used to
alarm on detected anomalies, or can provide overlaid expected values onto a graph of metric data for ongoing comparison.

Validate the achievement of outcomes and the effectiveness of KPIs and metrics: Create a business-level view of your operations activities to help you determine if you are satisfying needs and to identify areas that need improvement to reach business goals. Validate the effectiveness of KPIs and metrics, and revise them if necessary.

AWS also has support for third-party log analysis systems and business intelligence tools through the AWS service APIs and SDKs (for example, Grafana, Kibana, and Logstash).

Resources

Refer to the following resources to learn more about AWS best practices for understanding operational health.

Videos

• AWS re:Invent 2015: Log, Monitor and Analyze your IT with Amazon CloudWatch (DVO315)
• AWS re:Invent 2016: Amazon CloudWatch Logs and AWS Lambda: A Match Made in Heaven (DEV301)

Documentation

• Store and Monitor OS & Application Log Files with Amazon CloudWatch
• API & CloudFormation Support for Amazon CloudWatch Dashboards
• AWS Answers: Centralized Logging

Responding to Events

You should anticipate operational events, both planned (for example, sales promotions, deployments, and failure tests) and unplanned (for example, surges in utilization and component failures). You should use your existing runbooks and playbooks to deliver consistent results when you respond to alerts. Defined alerts should be owned by a role or a team that is accountable for the response and escalations. You will also want to know the business impact of your system components and use this to target efforts when needed. You should perform a root cause analysis (RCA) after events, and then prevent recurrence of failures or document workarounds.

AWS simplifies your event response by providing tools supporting all aspects of your workload and operations as code. These tools allow you to script responses to
operations events and trigger their execution in response to monitoring data. In AWS, you can improve recovery time by replacing failed components with known good versions rather than trying to repair them. You can then carry out analysis on the failed resource out of band.

Use processes for event, incident, and problem management: Have processes to address observed events, events that require intervention (incidents), and events that require intervention and either recur or cannot currently be resolved (problems). Use these processes to mitigate the impact of these events on the business and your customers by ensuring timely and appropriate responses.

On AWS, you can use AWS Systems Manager OpsCenter as a central location to view, investigate, and resolve operational issues related to any AWS resource. It aggregates operational issues and provides contextually relevant data to assist in incident response.

Have a process per alert: Have a well-defined response (runbook or playbook), with a specifically identified owner, for any event for which you raise an alert. This ensures effective and prompt responses to operations events and prevents actionable events from being obscured by less valuable notifications.

Prioritize operational events based on business impact: Ensure that when multiple events require intervention, those that are most significant to the business are addressed first. For example, impacts can include loss of life or injury, financial loss, or damage to reputation or trust.

Define escalation paths: Define escalation paths in your runbooks and playbooks, including what triggers escalation and procedures for escalation. Specifically identify owners for each action to ensure effective and prompt responses to operations events. Identify when a human decision is required before an action is taken. Work with decision makers to have that decision made in advance, and the action pre-approved, so that MTTR is not extended waiting for a response.

Enable push notifications: Communicate
directly with your users (for example, with email or SMS) when the services they use are impacted, and again when the services return to normal operating conditions, to enable users to take appropriate action.

Communicate status through dashboards: Provide dashboards tailored to their target audiences (for example, internal technical teams, leadership, and customers) to communicate the current operating status of the business and provide metrics of interest. You can create dashboards using Amazon CloudWatch Dashboards on customizable home pages in the CloudWatch console. Using business intelligence services like Amazon QuickSight, you can create and publish interactive dashboards of your workload and operational health (for example, order rates, connected users, and transaction times). Create dashboards that present system- and business-level views of your metrics.

Automate responses to events: Automate responses to events to reduce errors caused by manual processes and to ensure prompt and consistent responses. There are multiple ways to automate the execution of runbook and playbook actions on AWS. To respond to an event from a state change in your AWS resources, or from your own custom events, you should create CloudWatch Events rules to trigger responses through CloudWatch targets (for example, Lambda functions, Amazon Simple Notification Service (Amazon SNS) topics, Amazon ECS tasks, and AWS Systems Manager Automation). To respond to a metric that crosses a threshold for a resource (for example, wait time), you should create CloudWatch alarms to perform one or more actions using Amazon EC2 actions or Auto Scaling actions, or to send a notification to an Amazon SNS topic. If you need to perform custom actions in response to an alarm, invoke Lambda through an Amazon SNS notification. Use Amazon SNS to publish event notifications and escalation messages to keep people informed.

AWS also supports third-party systems through the AWS
service APIs and SDKs. There are a number of monitoring tools provided by APN Partners and third parties that allow for monitoring, notifications, and responses. Some of these tools include New Relic, Splunk, Loggly, SumoLogic, and Datadog. You should keep critical manual procedures available for use when automated procedures fail.

Resources

Refer to the following resources to learn more about AWS best practices for responding to events.

Video

• AWS re:Invent 2016: Automating Security Event Response, from Idea to Code to Execution (SEC313)

Documentation

• What is Amazon CloudWatch Events?
• How to Automatically Tag Amazon EC2 Resources in Response to API Events
• Amazon EC2 Systems Manager Automation is now an Amazon CloudWatch Events Target
• EC2 Run Command is Now a CloudWatch Events Target
• Automate remediation actions for Amazon EC2 notifications and beyond using EC2 Systems Manager Automation and AWS Health
• High-Resolution Custom Metrics and Alarms for Amazon CloudWatch

Evolve

Evolution is the continuous cycle of improvement over time. Implement frequent, small, incremental changes based on the lessons learned from your operations activities, and evaluate their success at bringing about improvement. To evolve your operations over time, you must be able to:

• Learn, Share, and Improve

Learn, Share, and Improve

It's essential that you regularly provide time for analysis of operations activities, analysis of failures, experimentation, and making improvements. When things fail, you will want to ensure that your team, as well as your larger engineering community, learns from those failures. You should analyze failures to identify lessons learned and plan improvements. You will want to regularly review your lessons learned with other teams to validate your insights.

Have a process for continuous improvement: Regularly evaluate and prioritize opportunities for improvement
to focus efforts where they can provide the greatest benefits.

Perform post-incident analysis: Review customer-impacting events and identify the contributing causes and preventative action items. Use this information to develop mitigations to limit or prevent recurrence. Develop procedures for prompt and effective responses. Communicate contributing factors and corrective actions as appropriate, tailored to target audiences.

Implement feedback loops: Include feedback loops in your procedures and workloads to help you identify issues and areas that need improvement.

Perform knowledge management: Mechanisms exist for your team members to discover the information that they are looking for in a timely manner, access it, and identify that it's current and complete. Mechanisms are present to identify needed content, content in need of refresh, and content that should be archived so that it's no longer referenced.

Define drivers for improvement: Identify drivers for improvement to help you evaluate and prioritize opportunities.

On AWS, you can aggregate the logs of all your operations activities, workloads, and infrastructure to create a detailed activity history. You can then use AWS tools to analyze your operations and workload health over time (for example, identify trends, correlate events and activities to outcomes, and compare and contrast between environments and across systems) to reveal opportunities for improvement based on your drivers.

You should use CloudTrail to track API activity (through the AWS Management Console, CLI, SDKs, and APIs) to know what is happening across your accounts. Track your AWS Developer Tools deployment activities with CloudTrail and CloudWatch. This will add a detailed activity history of your deployments and their outcomes to your CloudWatch Logs log data. Export your log data to Amazon S3 for long-term storage. Using AWS Glue, you discover and prepare your log data in Amazon S3 for analytics. Use
Amazon Athena, through its native integration with Glue, to analyze your log data. Use a business intelligence tool like Amazon QuickSight to visualize, explore, and analyze your data.

Validate insights: Review your analysis results and responses with cross-functional teams and business owners. Use these reviews to establish common understanding, identify additional impacts, and determine courses of action. Adjust responses as appropriate.

Perform operations metrics reviews: Regularly perform retrospective analysis of incidents and operations metrics with cross-team participants, including leadership from different areas of the business. Use these reviews to identify opportunities for improvement and potential courses of action, and to share lessons learned. Look for opportunities to improve in all of your environments (for example, development, test, and production).

Document and share lessons learned: Document and share lessons learned from the execution of operations activities so that you can use them internally and across teams. You should share what your teams learn to increase the benefit across your organization. You will want to share information and resources to prevent avoidable errors and ease development efforts. This will allow you to focus on delivering desired features.

Use AWS Identity and Access Management (IAM) to define permissions, enabling controlled access to the resources you wish to share within and across accounts. You should then use version-controlled AWS CodeCommit repositories to share application libraries, scripted procedures, procedure documentation, and other system documentation. Share your compute standards by sharing access to your AMIs and by authorizing the use of your Lambda functions across accounts. You should also share your infrastructure standards as CloudFormation templates. Through the AWS APIs and SDKs, you can integrate external and third-party tools and repositories (for example, GitHub,
BitBucket, and SourceForge). When sharing what you have learned and developed, be careful to structure permissions to ensure the integrity of shared repositories.

Allocate time to make improvements: Dedicate time and resources within your processes to make continuous incremental improvements possible. On AWS, you can create temporary duplicates of environments, lowering the risk, effort, and cost of experimentation and testing. These duplicated environments can be used to test the conclusions from your analysis, experiment, and develop and test planned improvements.

Resources
Refer to the following resources to learn more about AWS best practices for learning from experience.

Documentation
• Querying Amazon VPC Flow Logs
• Monitoring Deployments with Amazon CloudWatch Tools
• Analyzing VPC Flow Logs with Amazon Kinesis Data Firehose, Amazon Athena, and Amazon QuickSight
• Share an AWS CodeCommit Repository
• Use resource-based policies to give other accounts and AWS services permission to use your Lambda resources
• Sharing an AMI with Specific AWS Accounts
• Using AWS Lambda with Amazon SNS

Conclusion
Operational excellence is an ongoing and iterative effort. Set up your organization for success by having shared goals. Ensure that everyone understands their part in achieving business outcomes and how they impact the ability of others to succeed. Provide support for your team members so that they can support your business outcomes. Every operational event and failure should be treated as an opportunity to improve the operations of your architecture. By understanding the needs of your workloads, predefining runbooks for routine activities and playbooks to guide issue resolution, using the operations-as-code features in AWS, and maintaining situational awareness, your operations will be better prepared and able to respond more effectively when incidents occur. Through focusing on incremental improvement based on priorities as
they change, and lessons learned from event response and retrospective analysis, you will enable the success of your business by increasing the efficiency and effectiveness of your activities. AWS strives to help you build and operate architectures that maximize efficiency while you build highly responsive and adaptive deployments. To increase the operational excellence of your workloads, you should use the best practices discussed in this paper.

Contributors
• Brian Carlson, Operations Lead Well-Architected, Amazon Web Services
• Jon Steele, Sr. Technical Account Manager, Amazon Web Services
• Ryan King, Technical Program Manager, Amazon Web Services
• Philip Fitzsimons, Sr. Manager Well-Architected, Amazon Web Services

Further Reading
For additional help, consult the following sources:
• AWS Well-Architected Framework

Document Revisions
Date | Description
July 2020 | Updates to reflect new AWS services and features and latest best practices
July 2018 | Updates to reflect new AWS services and features and updated references
November 2017 | First publication,General,consultant,Best Practices
AWS_WellArchitected_Framework__Performance_Efficiency_Pillar,Archived
Performance Efficiency Pillar
AWS Well-Architected Framework
July 2020

This paper has been archived. The latest version is now available at: https://docs.aws.amazon.com/wellarchitected/latest/performance-efficiency-pillar/welcome.html

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only; (b) represents current AWS product offerings and practices, which are subject to change without notice; and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of
AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
Performance Efficiency
Design Principles
Definition
Selection
Performance Architecture Selection
Compute Architecture Selection
Storage Architecture Selection
Database Architecture Selection
Network Architecture Selection
Review
Evolve Your Workload to Take Advantage of New Releases
Monitoring
Monitor Your Resources to Ensure That They Are Performing as Expected
Trade-offs
Using Trade-offs to Improve Performance
Conclusion
Contributors
Further Reading
Document Revisions

Abstract
This whitepaper focuses on the performance efficiency pillar of the Amazon Web Services (AWS) Well-Architected Framework. It provides guidance to help customers apply best practices in the design, delivery, and maintenance of AWS environments. The performance efficiency pillar addresses best practices for managing production environments. This paper does not cover the design and management of non-production environments and processes, such as continuous integration or delivery.

Introduction
The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building workloads on AWS. Using the Framework helps you learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective workloads in the cloud. The Framework provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. We believe that having well-architected workloads greatly increases the likelihood of business success.

The framework is based on five pillars:
• Operational Excellence
• Security
• Reliability
• Performance Efficiency
• Cost
Optimization

This paper focuses on applying the principles of the performance efficiency pillar to your workloads. In traditional on-premises environments, achieving high and lasting performance is challenging. Using the principles in this paper will help you build architectures on AWS that efficiently deliver sustained performance over time.

This paper is intended for those in technology roles, such as chief technology officers (CTOs), architects, developers, and operations team members. After reading this paper, you'll understand AWS best practices and strategies to use when designing a performant cloud architecture. This paper does not provide implementation details or architectural patterns; however, it does include references to appropriate resources.

Performance Efficiency
The performance efficiency pillar focuses on the efficient use of computing resources to meet requirements, and on how to maintain efficiency as demand changes and technologies evolve.

Design Principles
The following design principles can help you achieve and maintain efficient workloads in the cloud.

• Democratize advanced technologies: Make advanced technology implementation easier for your team by delegating complex tasks to your cloud vendor. Rather than asking your IT team to learn about hosting and running a new technology, consider consuming the technology as a service. For example, NoSQL databases, media transcoding, and machine learning are all technologies that require specialized expertise. In the cloud, these technologies become services that your team can consume, allowing your team to focus on product development rather than resource provisioning and management.
• Go global in minutes: Deploying your workload in multiple AWS Regions around the world allows you to provide lower latency and a better experience for your customers at minimal cost.
• Use serverless architectures: Serverless architectures remove the need for you to run and maintain
physical servers for traditional compute activities. For example, serverless storage services can act as static websites (removing the need for web servers), and event services can host code. This removes the operational burden of managing physical servers and can lower transactional costs, because managed services operate at cloud scale.
• Experiment more often: With virtual and automatable resources, you can quickly carry out comparative testing using different types of instances, storage, or configurations.
• Consider mechanical sympathy: Use the technology approach that aligns best with your goals. For example, consider data access patterns when you select database or storage approaches.

Definition
Focus on the following areas to achieve performance efficiency in the cloud:
• Selection
• Review
• Monitoring
• Trade-offs

Take a data-driven approach to building a high-performance architecture. Gather data on all aspects of the architecture, from the high-level design to the selection and configuration of resource types. Reviewing your choices on a regular basis ensures that you are taking advantage of the continually evolving AWS Cloud. Monitoring ensures that you are aware of any deviance from expected performance. Make trade-offs in your architecture to improve performance, such as using compression or caching, or relaxing consistency requirements.

Selection
The optimal solution for a particular workload varies, and solutions often combine multiple approaches. Well-architected workloads use multiple solutions and enable different features to improve performance. AWS resources are available in many types and configurations, which makes it easier to find an approach that closely matches your needs. You can also find options that are not easily achievable with on-premises infrastructure. For example, a managed service such as Amazon DynamoDB provides a fully managed
NoSQL database with single-digit millisecond latency at any scale.

Performance Architecture Selection
Often, multiple approaches are required to get optimal performance across a workload. Well-architected systems use multiple solutions and enable different features to improve performance. Use a data-driven approach to select the patterns and implementation for your architecture and achieve a cost-effective solution. AWS Solutions Architects, AWS Reference Architectures, and AWS Partner Network (APN) partners can help you select an architecture based on industry knowledge, but data obtained through benchmarking or load testing will be required to optimize your architecture.

Your architecture will likely combine a number of different architectural approaches (for example, event-driven, ETL, or pipeline). The implementation of your architecture will use the AWS services that are specific to the optimization of your architecture's performance. In the following sections, we discuss the four main resource types to consider: compute, storage, database, and network.

Understand the available services and resources: Learn about and understand the wide range of services and resources available in the cloud. Identify the relevant services and configuration options for your workload, and understand how to achieve optimal performance. If you are evaluating an existing workload, you must generate an inventory of the various services and resources it consumes. Your inventory helps you evaluate which components can be replaced with managed services and newer technologies.

Define a process for architectural choices: Use internal experience and knowledge of the cloud, or external resources such as published use cases, relevant documentation, or whitepapers, to define a process to choose resources and services. You should define a process that encourages experimentation and benchmarking with the services that could be used in your workload. When you write
critical user stories for your architecture, you should include performance requirements, such as specifying how quickly each critical story should execute. For these critical stories, you should implement additional scripted user journeys to ensure that you have visibility into how these stories perform against your requirements.

Factor cost requirements into decisions: Workloads often have cost requirements for operation. Use internal cost controls to select resource types and sizes based on predicted resource need. Determine which workload components could be replaced with fully managed services, such as managed databases, in-memory caches, and other services. Reducing your operational workload allows you to focus resources on business outcomes. For cost-requirement best practices, refer to the Cost-Effective Resources section of the Cost Optimization Pillar whitepaper.

Use policies or reference architectures: Maximize performance and efficiency by evaluating internal policies and existing reference architectures, and using your analysis to select services and configurations for your workload.

Use guidance from your cloud provider or an appropriate partner: Use cloud company resources, such as solutions architects, professional services, or an appropriate partner, to guide your decisions. These resources can help review and improve your architecture for optimal performance. Reach out to AWS for assistance when you need additional guidance or product information. AWS Solutions Architects and AWS Professional Services provide guidance for solution implementation. APN Partners provide AWS expertise to help you unlock agility and innovation for your business.

Benchmark existing workloads: Benchmark the performance of an existing workload to understand how it performs on the cloud. Use the data collected from benchmarks to drive architectural decisions. Use benchmarking with synthetic tests to generate data about how your workload's components perform. Benchmarking is generally quicker
to set up than load testing, and is used to evaluate the technology for a particular component. Benchmarking is often used at the start of a new project, when you lack a full solution to load test. You can either build your own custom benchmark tests, or you can use an industry-standard test, such as TPC-DS, to benchmark your data warehousing workloads. Industry benchmarks are helpful when comparing environments. Custom benchmarks are useful for targeting specific types of operations that you expect to make in your architecture.

When benchmarking, it is important to pre-warm your test environment to ensure valid results. Run the same benchmark multiple times to ensure that you've captured any variance over time. Because benchmarks are generally faster to run than load tests, they can be used earlier in the deployment pipeline and provide faster feedback on performance deviations. When you evaluate a significant change in a component or service, a benchmark can be a quick way to see if you can justify the effort to make the change. Using benchmarking in conjunction with load testing is important, because load testing informs you about how your workload will perform in production.

Load test your workload: Deploy your latest workload architecture on the cloud using different resource types and sizes. Monitor the deployment to capture performance metrics that identify bottlenecks or excess capacity. Use this performance information to design or improve your architecture and resource selection. Load testing uses your actual workload, so you can see how your solution performs in a production environment. Load tests must be executed using synthetic or sanitized versions of production data (remove sensitive or identifying information). Use replayed or pre-programmed user journeys through your workload at scale that exercise your entire architecture. Automatically carry out load tests as part of your delivery pipeline, and compare the
results against pre-defined KPIs and thresholds. This ensures that you continue to achieve required performance. Amazon CloudWatch can collect metrics across the resources in your architecture. You can also collect and publish custom metrics to surface business or derived metrics. Use CloudWatch to set alarms that indicate when thresholds are breached and signal that a test is outside of the expected performance.

Using AWS services, you can run production-scale environments to test your architecture aggressively. Since you only pay for the test environment when it is needed, you can carry out full-scale testing at a fraction of the cost of using an on-premises environment. Take advantage of the AWS Cloud to test your workload to see where it fails to scale, or scales in a non-linear way. You can use Amazon EC2 Spot Instances to generate loads at low cost and discover bottlenecks before they are experienced in production.

When load tests take considerable time to execute, parallelize them using multiple copies of your test environment. Your costs will be similar, but your testing time will be reduced. (It costs the same to run one EC2 instance for 100 hours as it does to run 100 instances for one hour.) You can also lower the costs of load testing by using Spot Instances and selecting Regions that have lower costs than the Regions you use for production. The location of your load test clients should reflect the geographic spread of your end users.

Resources
Refer to the following resources to learn more about AWS best practices for load testing.

Videos
• Introducing The Amazon Builders' Library (DOP328)

Documentation
• AWS Architecture Center
• Amazon S3 Performance Optimization
• Amazon EBS Volume Performance
• AWS CodeDeploy
• AWS CloudFormation
• Load Testing CloudFront
• AWS CloudWatch Dashboards

Compute Architecture Selection
The optimal compute choice for a
particular workload can vary based on application design, usage patterns, and configuration settings. Architectures may use different compute choices for various components and enable different features to improve performance. Selecting the wrong compute choice for an architecture can lead to lower performance efficiency.

Evaluate the available compute options: Understand the performance characteristics of the compute-related options available to you. Know how instances, containers, and functions work, and what advantages or disadvantages they bring to your workload. In AWS, compute is available in three forms: instances, containers, and functions.

Instances
Instances are virtualized servers, allowing you to change their capabilities with a button or an API call. Because resource decisions in the cloud aren't fixed, you can experiment with different server types. Amazon Elastic Compute Cloud (Amazon EC2) virtual server instances come in different families and sizes, and they offer a wide variety of capabilities, including solid-state drives (SSDs) and graphics processing units (GPUs). When you launch an EC2 instance, the instance type that you specify determines the hardware of the host computer used for your instance. Each instance type offers different compute, memory, and storage capabilities. Instance types are grouped in instance families based on these capabilities. Use data to select the optimal EC2 instance type for your workload, ensure that you have the correct networking and storage options, and consider operating system settings that can improve the performance of your workload.

Containers
Containers are a method of operating system virtualization that allows you to run an application and its dependencies in resource-isolated processes. When running containers on AWS, you have two choices to
make. First, choose whether or not you want to manage servers: AWS Fargate is serverless compute for containers, or Amazon EC2 can be used if you need control over the installation, configuration, and management of your compute environment. Second, choose which container orchestrator to use: Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS).

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that allows you to automatically execute and manage containers on a cluster of EC2 instances, or on serverless instances using AWS Fargate. You can natively integrate Amazon ECS with other services, such as Amazon Route 53, Secrets Manager, AWS Identity and Access Management (IAM), and Amazon CloudWatch.

Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service. You can choose to run your EKS clusters using AWS Fargate, removing the need to provision and manage servers. EKS is deeply integrated with services such as Amazon CloudWatch, Auto Scaling Groups, AWS Identity and Access Management (IAM), and Amazon Virtual Private Cloud (VPC).

When using containers, you must use data to select the optimal type for your workload, just as you use data to select your EC2 or AWS Fargate instance types. Consider container configuration options such as memory, CPU, and tenancy configuration. To enable network access between container services, consider using a service mesh such as AWS App Mesh, which standardizes how your services communicate. A service mesh gives you end-to-end visibility and ensures high availability for your applications.

Functions
Functions abstract the execution environment from the code you want to execute. For example, AWS Lambda allows you to execute code without running an instance. You can use AWS Lambda to run code for any type of application or backend service with zero administration. Simply upload your code, and AWS Lambda will manage
everything required to run and scale that code. You can set up your code to automatically trigger from other AWS services, call it directly, or use it with Amazon API Gateway.

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. You can create an API that acts as a "front door" to your Lambda function. API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

To deliver optimal performance with AWS Lambda, choose the amount of memory you want for your function. You are allocated proportional CPU power and other resources. For example, choosing 256 MB of memory allocates approximately twice as much CPU power to your Lambda function as requesting 128 MB of memory. You can control the amount of time each function is allowed to run (up to a maximum of 300 seconds).

Understand the available compute configuration options: Understand how various options complement your workload, and which configuration options are best for your system. Examples of these options include instance family, sizes, features (GPU, I/O), function sizes, container instances, and single versus multi-tenancy. When selecting instance families and types, you must also consider the configuration options available to meet your workload's needs:

• Graphics Processing Units (GPU) — Using general-purpose computing on GPUs (GPGPU), you can build applications that benefit from the high degree of parallelism that GPUs provide, by leveraging platforms (such as CUDA) in the development process. If your workload requires 3D rendering or video compression, GPUs enable hardware-accelerated computation and encoding, making your workload more efficient.
• Field Programmable Gate Arrays (FPGA) — Using FPGAs, you can optimize your
workloads by having custom hardware-accelerated execution for your most demanding workloads. You can define your algorithms by leveraging supported general programming languages, such as C or Go, or hardware-oriented languages, such as Verilog or VHDL.
• AWS Inferentia (Inf1) — Inf1 instances are built to support machine learning inference applications. Using Inf1 instances, customers can run large-scale machine learning inference applications, like image recognition, speech recognition, natural language processing, personalization, and fraud detection. You can build a model in one of the popular machine learning frameworks, such as TensorFlow, PyTorch, or MXNet, and use GPU instances, such as P3 or P3dn, to train your model. After your machine learning model is trained to meet your requirements, you can deploy your model on Inf1 instances by using AWS Neuron, a specialized software development kit (SDK) consisting of a compiler, runtime, and profiling tools that optimize the machine learning inference performance of Inferentia chips.
• Burstable instance families — Burstable instances are designed to provide moderate baseline performance, and the capability to burst to significantly higher performance when required by your workload. These instances are intended for workloads that do not use the full CPU often or consistently, but occasionally need to burst. They are well suited for general-purpose workloads, such as web servers, developer environments, and small databases. These instances provide CPU credits that can be consumed when the instance must provide performance. Credits accumulate when the instance doesn't need them.
• Advanced computing features — Amazon EC2 gives you access to advanced computing features, such as managing C-state and P-state registers and controlling turbo boost of processors. Access to co-processors allows cryptography operations offloading through AES-NI, or advanced computation through AVX extensions. The
AWS Nitro System is a combination of dedicated hardware and a lightweight hypervisor, enabling faster innovation and enhanced security. Utilize AWS Nitro Systems when available to enable full consumption of the compute and memory resources of the host hardware. Additionally, dedicated Nitro Cards enable high-speed networking, high-speed EBS, and I/O acceleration.

Collect compute-related metrics: One of the best ways to understand how your compute systems are performing is to record and track the true utilization of various resources. This data can be used to make more accurate determinations about resource requirements. Workloads (such as those running on microservices architectures) can generate large volumes of data in the form of metrics, logs, and events. Determine if your existing monitoring and observability service can manage the data generated. Amazon CloudWatch can be used to collect, access, and correlate this data on a single platform from across all your AWS resources, applications, and services running on AWS and on-premises servers, so you can easily gain system-wide visibility and quickly resolve issues.

Determine the required configuration by right-sizing: Analyze the various performance characteristics of your workload and how these characteristics relate to memory, network, and CPU usage. Use this data to choose resources that best match your workload's profile. For example, a memory-intensive workload, such as a database, could be served best by the r family of instances; however, a bursting workload can benefit more from an elastic container system.

Use the available elasticity of resources: The cloud provides the flexibility to expand or reduce your resources dynamically through a variety of mechanisms to meet changes in demand. Combined with compute-related metrics, a workload can automatically respond to changes and utilize the optimal set of resources to achieve its goal. Optimally matching supply to demand
delivers the lowest cost for a workload, but you also must plan for sufficient supply to allow for provisioning time and individual resource failures. Demand can be fixed or variable, requiring metrics and automation to ensure that management does not become a burdensome and disproportionately large cost. With AWS, you can use a number of different approaches to match supply with demand. The Cost Optimization Pillar whitepaper describes how to use the following approaches to cost:
• Demand-based approach
• Buffer-based approach
• Time-based approach

You must ensure that workload deployments can handle both scale-up and scale-down events. Create test scenarios for scale-down events to ensure that the workload behaves as expected.

Reevaluate compute needs based on metrics: Use system-level metrics to identify the behavior and requirements of your workload over time. Evaluate your workload's needs by comparing the available resources with these requirements, and make changes to your compute environment to best match your workload's profile. For example, over time, a system might be observed to be more memory-intensive than initially thought, so moving to a different instance family or size could improve both performance and efficiency.

Resources
Refer to the following resources to learn more about AWS best practices for compute.

Videos
• Amazon EC2 foundations (CMP211-R2)
• Powering next-gen Amazon EC2: Deep dive into the Nitro system
• Deliver high performance ML inference with AWS Inferentia (CMP324-R1)
• Optimize performance and cost for your AWS compute (CMP323-R1)
• Better, faster, cheaper compute: Cost-optimizing Amazon EC2 (CMP202-R1)

Documentation
• Instances: Instance Types; Processor State Control for Your EC2 Instance
• EKS Containers: EKS Worker Nodes
• ECS Containers: Amazon ECS Container Instances
• Functions: Lambda Function Configuration
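The metrics-driven right-sizing and re-evaluation guidance above can be sketched as a simple decision rule. This is an illustrative example only: the utilization thresholds and the mapping to the r, c, t, and m instance families are assumptions made for the sketch, not AWS guidance. The idea is to compare observed CPU and memory utilization and suggest a family whose resource ratio better matches the workload's profile.

```python
def suggest_family(avg_cpu_pct, avg_mem_pct):
    """Toy right-sizing rule based on observed utilization percentages.

    Thresholds and family names are illustrative; base real decisions on
    your own CloudWatch data, workload profile, and testing.
    """
    if avg_mem_pct > 80 and avg_cpu_pct < 40:
        # Memory-bound: plenty of idle CPU, memory near capacity.
        return "memory-optimized (e.g., r family)"
    if avg_cpu_pct > 80 and avg_mem_pct < 40:
        # CPU-bound: compute is the constraint, memory is underused.
        return "compute-optimized (e.g., c family)"
    if avg_cpu_pct < 20 and avg_mem_pct < 20:
        # Mostly idle: downsize, or consider a burstable instance.
        return "downsize or burstable (e.g., t family)"
    # Balanced utilization: a general-purpose family fits.
    return "general purpose (e.g., m family)"

print(suggest_family(avg_cpu_pct=25, avg_mem_pct=90))
# memory-optimized (e.g., r family)
```

In practice, the inputs would come from system-level metrics gathered over a representative period (for example, via CloudWatch), and the output would feed a periodic review rather than an automated change.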
Storage Architecture Selection
The optimal storage solution for a particular system varies based on the kind of access method (block, file, or object), patterns of access (random or sequential), required throughput, frequency of access (online, offline, archival), frequency of update (WORM, dynamic), and availability and durability constraints. Well-architected systems use multiple storage solutions and enable different features to improve performance.

In AWS, storage is virtualized and is available in a number of different types. This makes it easier to match your storage methods with your needs, and offers storage options that are not easily achievable with on-premises infrastructure. For example, Amazon S3 is designed for 11 nines of durability. You can also change from using magnetic hard disk drives (HDDs) to SSDs, and easily move virtual drives from one instance to another in seconds.

Performance can be measured by looking at throughput, input/output operations per second (IOPS), and latency. Understanding the relationship between those measurements will help you select the most appropriate storage solution.

Storage | Services | Latency | Throughput | Shareable
Block | Amazon EBS, EC2 instance store | Lowest, consistent | Single | Mounted on EC2 instance, copies via snapshots
File system | Amazon EFS, Amazon FSx | Low, consistent | Multiple | Many clients
Object | Amazon S3 | Low latency | Web scale | Many clients
Archival | Amazon S3 Glacier | Minutes to hours | High | No

From a latency perspective, if your data is only accessed by one instance, then you should use block storage, such as Amazon EBS. Distributed file systems, such as Amazon EFS, generally have a small latency overhead for each file operation, so they should be used where multiple instances need access.

Amazon S3 has features that can reduce latency and increase throughput. You can use cross-region replication (CRR) to provide lower-latency data access to different geographic regions. From a throughput
perspective, Amazon EFS supports highly parallelized workloads (for example, using concurrent operations from multiple threads and multiple EC2 instances), which enables high levels of aggregate throughput and operations per second. For Amazon EFS, use a benchmark or load test to select the appropriate performance mode.

Understand storage characteristics and requirements: Understand the different characteristics (for example, shareable, file size, cache size, access patterns, latency, throughput, and persistence of data) that are required to select the services that best fit your workload, such as object storage, block storage, file storage, or instance storage. Determine the expected growth rate for your workload, and choose a storage solution that will meet those rates. Object and file storage solutions, such as Amazon S3 and Amazon Elastic File System, enable unlimited storage; Amazon EBS volumes have predetermined storage sizes. Elastic volumes allow you to dynamically increase capacity, tune performance, and change the type of any new or existing current-generation volume with no downtime or performance impact, but capacity increases require OS filesystem changes.

Evaluate available configuration options: Evaluate the various characteristics and configuration options and how they relate to storage. Understand where and how to use provisioned IOPS, SSDs, magnetic storage, object storage, archival storage, or ephemeral storage to optimize storage space and performance for your workload.

Amazon EBS provides a range of options that allow you to optimize storage performance and cost for your workload. These options are divided into two major categories: SSD-backed storage for transactional workloads, such as databases and boot volumes (performance depends primarily on IOPS), and HDD-backed storage for throughput-intensive workloads, such as MapReduce and log processing (performance depends primarily on MB/s). SSD-backed volumes include the highest-performance provisioned IOPS SSD for latency-sensitive transactional
workloads, and general-purpose SSD that balances price and performance for a wide variety of transactional data.

Amazon S3 Transfer Acceleration enables fast transfer of files over long distances between your client and your S3 bucket. Transfer Acceleration leverages Amazon CloudFront globally distributed edge locations to route data over an optimized network path. For a workload in an S3 bucket that has intensive GET requests, use Amazon S3 with CloudFront. When uploading large files, use multipart uploads with multiple parts uploading at the same time to help maximize network throughput.

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed, elastic NFS file system for use with AWS Cloud services and on-premises resources. To support a wide variety of cloud storage workloads, Amazon EFS offers two performance modes: general purpose performance mode and max I/O performance mode. There are also two throughput modes to choose from for your file system: Bursting Throughput and Provisioned Throughput. To determine which settings to use for your workload, see the Amazon EFS User Guide.

Amazon FSx provides two file systems to choose from: Amazon FSx for Windows File Server for enterprise workloads and Amazon FSx for Lustre for high-performance workloads. FSx is SSD-backed and is designed to deliver fast, predictable, scalable, and consistent performance. Amazon FSx file systems deliver sustained high read and write speeds and consistent low-latency data access. You can choose the throughput level you need to match your workload's needs.

Make decisions based on access patterns and metrics: Choose storage systems based on your workload's access patterns, and configure them by determining how the workload accesses data. Increase storage efficiency by choosing object storage over block storage. Configure the storage options you choose to match your data access patterns. How you access data impacts how the
storage solution performs. Select the storage solution that aligns best to your access patterns, or consider changing your access patterns to align with the storage solution, to maximize performance.

Creating a RAID 0 (zero) array allows you to achieve a higher level of performance for a file system than what you can provision on a single volume. Consider using RAID 0 when I/O performance is more important than fault tolerance. For example, you could use it with a heavily used database where data replication is already set up separately.

Select appropriate storage metrics for your workload, across all of the storage options consumed for the workload. When utilizing filesystems that use burst credits, create alarms to let you know when you are approaching those credit limits. You must create storage dashboards to show the overall workload storage health. For storage systems that are a fixed size, such as Amazon EBS or Amazon FSx, ensure that you are monitoring the amount of storage used versus the overall storage size, and create automation, if possible, to increase the storage size when reaching a threshold.

Resources

Refer to the following resources to learn more about AWS best practices for storage.

Videos
• Deep dive on Amazon EBS (STG303-R1)
• Optimize your storage performance with Amazon S3 (STG343)

Documentation
• Amazon EBS:
o Amazon EC2 Storage
o Amazon EBS Volume Types
o I/O Characteristics
• Amazon S3: Request Rate and Performance Considerations
• Amazon Glacier: Amazon Glacier Documentation
• Amazon EFS: Amazon EFS Performance
• Amazon FSx:
o Amazon FSx for Lustre Performance
o Amazon FSx for Windows File Server Performance

Database Architecture Selection

The optimal database solution for a system varies based on requirements for availability, consistency, partition tolerance, latency, durability, scalability, and query capability. Many systems use different database solutions for various subsystems and enable
different features to improve performance. Selecting the wrong database solution and features for a system can lead to lower performance efficiency.

Understand data characteristics: Understand the different characteristics of data in your workload. Determine if the workload requires transactions, how it interacts with data, and what its performance demands are. Use this data to select the best-performing database approach for your workload (for example, relational databases, NoSQL key-value, document, wide column, graph, time series, or in-memory storage).

You can choose from many purpose-built database engines, including relational, key-value, document, in-memory, graph, time series, and ledger databases. By picking the best database to solve a specific problem (or a group of problems), you can break away from restrictive one-size-fits-all monolithic databases and focus on building applications to meet the needs of your customers.

Relational databases store data with predefined schemas and relationships between them. These databases are designed to support ACID (atomicity, consistency, isolation, durability) transactions, and maintain referential integrity and strong data consistency. Many traditional applications, enterprise resource planning (ERP), customer relationship management (CRM), and ecommerce use relational databases to store their data. You can run many of these database engines on Amazon EC2, or choose from one of the AWS managed database services: Amazon Aurora, Amazon RDS, and Amazon Redshift.

Key-value databases are optimized for common access patterns, typically to store and retrieve large volumes of data. These databases deliver quick response times, even in extreme volumes of concurrent requests. High-traffic web apps, ecommerce systems, and gaming applications are typical use cases for key-value databases. In AWS, you can utilize Amazon DynamoDB, a fully managed, multi-Region, multi-master, durable database with built-in
security, backup and restore, and in-memory caching for internet-scale applications.

In-memory databases are used for applications that require real-time access to data. By storing data directly in memory, these databases deliver microsecond latency to applications for whom millisecond latency is not enough. You may use in-memory databases for application caching, session management, gaming leaderboards, and geospatial applications. Amazon ElastiCache is a fully managed, in-memory data store, compatible with Redis or Memcached.

A document database is designed to store semistructured data as JSON-like documents. These databases help developers build and update applications, such as content management, catalogs, and user profiles, quickly. Amazon DocumentDB is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads.

A wide column store is a type of NoSQL database. It uses tables, rows, and columns, but unlike a relational database, the names and format of the columns can vary from row to row in the same table. You typically see a wide column store in high-scale industrial apps for equipment maintenance, fleet management, and route optimization. Amazon Managed Apache Cassandra Service is a wide column, scalable, highly available, and managed Apache Cassandra-compatible database service.

Graph databases are for applications that must navigate and query millions of relationships between highly connected graph datasets with millisecond latency at large scale. Many companies use graph databases for fraud detection, social networking, and recommendation engines. Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets.

Time series databases efficiently collect, synthesize, and derive insights from data that changes over time. IoT applications, DevOps, and industrial telemetry can utilize time
series databases. Amazon Timestream is a fast, scalable, fully managed time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day.

Ledger databases provide a centralized and trusted authority to maintain a scalable, immutable, and cryptographically verifiable record of transactions for every application. We see ledger databases used for systems of record, supply chain, registrations, and even banking transactions. Amazon Quantum Ledger Database (QLDB) is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log, owned by a central trusted authority. Amazon QLDB tracks every application data change and maintains a complete and verifiable history of changes over time.

Evaluate the available options: Evaluate the services and storage options that are available as part of the selection process for your workload's storage mechanisms. Understand how and when to use a given service or system for data storage. Learn about available configuration options that can optimize database performance or efficiency, such as provisioned IOPS, memory and compute resources, and caching.

Database solutions generally have configuration options that allow you to optimize for the type of workload. Using benchmarking or load testing, identify database metrics that matter for your workload. Consider the configuration options for your selected database approach, such as storage optimization, database-level settings, memory, and cache.

Evaluate database caching options for your workload. The three most common types of database caches are the following:

• Database integrated caches: Some databases (such as Amazon Aurora) offer an integrated cache that is managed within the database engine and has built-in write-through capabilities.

• Local caches: A local cache stores your frequently used data within your application. This
speeds up your data retrieval and removes network traffic associated with retrieving data, making data retrieval faster than other caching architectures.

• Remote caches: Remote caches are stored on dedicated servers and are typically built upon key/value NoSQL stores, such as Redis and Memcached. They provide up to a million requests per second per cache node.

For Amazon DynamoDB workloads, DynamoDB Accelerator (DAX) provides a fully managed, in-memory cache. DAX is an in-memory cache that delivers fast read performance for your tables at scale by enabling you to use a fully managed in-memory cache. Using DAX, you can improve the read performance of your DynamoDB tables by up to 10 times, taking the time required for reads from milliseconds to microseconds, even at millions of requests per second.

Collect and record database performance metrics: Use tools, libraries, and systems that record performance measurements related to database performance. For example, measure transactions per second, slow queries, or system latency introduced when accessing the database. Use this data to understand the performance of your database systems.

Instrument as many database activity metrics as you can gather from your workload. These metrics may need to be published directly from the workload or gathered from an application performance management service. You can use AWS X-Ray to analyze and debug production, distributed applications, such as those built using a microservices architecture. An X-Ray trace can include segments, which encapsulate all the data points for a single component. For example, when your application makes a call to a database in response to a request, it creates a segment for that request with a subsegment representing the database call and its result. The subsegment can contain data such as the query, table used, timestamp, and error status. Once instrumented, you should enable alarms for your database metrics that indicate when thresholds are breached.
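The alarm guidance above amounts to deriving a statistic from recorded measurements and comparing it against a threshold. A minimal sketch of computing a latency percentile and testing it against an alarm threshold follows; the sample values and the 50 ms threshold are illustrative assumptions, not recommendations:

```python
# Hypothetical sketch: compute the p99 of recorded query latencies and
# flag when it breaches an alarm threshold. Values are illustrative.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of measurements."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def breaches_alarm(latencies_ms, threshold_ms, pct=99):
    """True when the chosen percentile exceeds the alarm threshold."""
    return percentile(latencies_ms, pct) > threshold_ms

# 100 queries: most fast, with a few slow outliers an average would hide.
latencies = [5.0] * 97 + [40.0, 55.0, 120.0]
print(percentile(latencies, 99))       # 55.0
print(breaches_alarm(latencies, 50))   # True
```

This is why the percentile you alarm on matters: the p99 here is 55 ms and would trip a 50 ms alarm, while the mean (about 7 ms) would not.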
Choose data storage based on access patterns: Use the access patterns of the workload to decide which services and technologies to use. For example, utilize a relational database for workloads that require transactions, or a key-value store that provides higher throughput but is eventually consistent, where applicable.

Optimize data storage based on access patterns and metrics: Use performance characteristics and access patterns that optimize how data is stored or queried to achieve the best possible performance. Measure how optimizations such as indexing, key distribution, data warehouse design, or caching strategies impact system performance or overall efficiency.

Resources

Refer to the following resources to learn more about AWS best practices for databases.

Videos
• AWS purpose-built databases (DAT209-L)
• Amazon Aurora storage demystified: How it all works (DAT309-R)
• Amazon DynamoDB deep dive: Advanced design patterns (DAT403-R1)

Documentation
• AWS Database Caching
• Cloud Databases with AWS
• Amazon Aurora best practices
• Amazon Redshift performance
• Amazon Athena top 10 performance tips
• Amazon Redshift Spectrum best practices
• Amazon DynamoDB best practices
• Amazon DynamoDB Accelerator

Network Architecture Selection

The optimal network solution for a workload varies based on latency, throughput requirements, jitter, and bandwidth. Physical constraints, such as user or on-premises resources, determine location options. These constraints can be offset with edge locations or resource placement.

On AWS, networking is virtualized and is available in a number of different types and configurations. This makes it easier to match your networking methods with your needs. AWS offers product features (for example, Enhanced Networking, Amazon EC2 networking optimized instances, Amazon S3 transfer acceleration, and dynamic Amazon CloudFront) to optimize network traffic. AWS also offers
networking features (for example, Amazon Route 53 latency routing, Amazon VPC endpoints, AWS Direct Connect, and AWS Global Accelerator) to reduce network distance or jitter.

Understand how networking impacts performance: Analyze and understand how network-related features impact workload performance. For example, network latency often impacts the user experience, and not providing enough network capacity can bottleneck workload performance. Since the network sits between all application components, it can have large positive and negative impacts on application performance and behavior. There are also applications that are heavily dependent on network performance, such as High Performance Computing (HPC), where deep network understanding is important to increase cluster performance. You must determine the workload requirements for bandwidth, latency, jitter, and throughput.

Evaluate available networking features: Evaluate networking features in the cloud that may increase performance. Measure the impact of these features through testing, metrics, and analysis. For example, take advantage of network-level features that are available to reduce latency, network distance, or jitter. Many services commonly offer features to optimize network performance. Consider product features such as EC2 instance network capability, enhanced networking instance types, Amazon EBS-optimized instances, Amazon S3 transfer acceleration, and dynamic CloudFront, to optimize network traffic.

AWS Global Accelerator is a service that improves global application availability and performance using the AWS global network. It optimizes the network path, taking advantage of the vast, congestion-free AWS global network. It provides static IP addresses that make it easy to move endpoints between Availability Zones or AWS Regions without needing to update your DNS configuration or change client-facing applications.

Amazon S3 Transfer Acceleration is a feature that lets
external users benefit from the networking optimizations of CloudFront to upload data to Amazon S3. This makes it easy to transfer large amounts of data from remote locations that don't have dedicated connectivity to the AWS Cloud.

Newer EC2 instances can leverage enhanced networking. N-series EC2 instances, such as M5n and M5dn, leverage the fourth generation of custom Nitro card and Elastic Network Adapter (ENA) device to deliver up to 100 Gbps of network throughput to a single instance. These instances offer 4x the network bandwidth and packet processing compared to the base M5 instances, and are ideal for network-intensive applications. Customers can also enable Elastic Fabric Adapter (EFA) on certain instance sizes of M5n and M5dn instances for low and consistent network latency.

Amazon Elastic Network Adapters (ENA) provide further optimization by delivering 20 Gbps of network capacity for your instances within a single placement group.

Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables you to run workloads requiring high levels of inter-node communications at scale on AWS. With EFA, High Performance Computing (HPC) applications using the Message Passing Interface (MPI) and Machine Learning (ML) applications using NVIDIA Collective Communications Library (NCCL) can scale to thousands of CPUs or GPUs.

Amazon EBS-optimized instances use an optimized configuration stack and provide additional, dedicated capacity for Amazon EBS I/O. This optimization provides the best performance for your EBS volumes by minimizing contention between Amazon EBS I/O and other traffic from your instance.

Latency-based routing (LBR) for Amazon Route 53 helps you improve your workload's performance for a global audience. LBR works by routing your customers to the AWS endpoint (for EC2 instances, Elastic IP addresses, or ELB load balancers) that provides the fastest experience based on actual performance measurements of the different AWS Regions where your workload
is running.

Amazon VPC endpoints provide reliable connectivity to AWS services (for example, Amazon S3) without requiring an internet gateway or a Network Address Translation (NAT) instance.

Choose appropriately sized dedicated connectivity or VPN for hybrid workloads: When there is a requirement for on-premises communication, ensure that you have adequate bandwidth for workload performance. Based on bandwidth requirements, a single dedicated connection or a single VPN might not be enough, and you must enable traffic load balancing across multiple connections.

You must estimate the bandwidth and latency requirements for your hybrid workload. These numbers will drive the sizing requirements for AWS Direct Connect or your VPN endpoints.

AWS Direct Connect provides dedicated connectivity to the AWS environment, from 50 Mbps up to 10 Gbps. This gives you managed and controlled latency and provisioned bandwidth, so your workload can connect easily and in a performant way to other environments. Using one of the AWS Direct Connect partners, you can have end-to-end connectivity from multiple environments, thus providing an extended network with consistent performance.

AWS Site-to-Site VPN is a managed VPN service for VPCs. When a VPN connection is created, AWS provides tunnels to two different VPN endpoints. With AWS Transit Gateway, you can simplify the connectivity between multiple VPCs and also connect to any VPC attached to AWS Transit Gateway with a single VPN connection. AWS Transit Gateway also enables you to scale beyond the 1.25 Gbps IPsec VPN throughput limit by enabling equal-cost multi-path (ECMP) routing support over multiple VPN tunnels.

Leverage load balancing and encryption offloading: Distribute traffic across multiple resources or services to allow your workload to take advantage of the elasticity that the cloud provides. You can also use load balancing for offloading encryption termination to improve performance
and to manage and route traffic effectively.

When implementing a scale-out architecture, where you want to use multiple instances for serving content, you can leverage load balancers inside your Amazon VPC. AWS provides multiple models for your applications in the ELB service. Application Load Balancer is best suited for load balancing of HTTP and HTTPS traffic, and provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers.

Network Load Balancer is best suited for load balancing of TCP traffic where extreme performance is required. It is capable of handling millions of requests per second while maintaining ultra-low latencies, and it is optimized to handle sudden and volatile traffic patterns.

Elastic Load Balancing provides integrated certificate management and SSL/TLS decryption, allowing you the flexibility to centrally manage the SSL settings of the load balancer and offload CPU-intensive work from your workload.

Choose network protocols to optimize network traffic: Make decisions about protocols for communication between systems and networks based on the impact to the workload's performance. There is a relationship between latency and bandwidth to achieve throughput. If your file transfer is using TCP, higher latencies will reduce overall throughput. There are approaches to fix this with TCP tuning and optimized transfer protocols; some approaches use UDP.

Choose location based on network requirements: Use the cloud location options available to reduce network latency or improve throughput. Utilize AWS Regions, Availability Zones, placement groups, and edge locations, such as Outposts, Local Zones, and Wavelength, to reduce network latency or improve throughput.

The AWS Cloud infrastructure is built around Regions and Availability Zones. A Region is a physical location in the world having multiple Availability Zones. Availability Zones consist of
one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. These Availability Zones offer you the ability to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center.

Choose the appropriate Region or Regions for your deployment based on the following key elements:

• Where your users are located: Choosing a Region close to your workload's users ensures lower latency when they use the workload.

• Where your data is located: For data-heavy applications, the major bottleneck in latency is data transfer. Application code should execute as close to the data as possible.

• Other constraints: Consider constraints such as security and compliance.

Amazon EC2 provides placement groups for networking. A placement group is a logical grouping of instances within a single Availability Zone. Using placement groups with supported instance types and an Elastic Network Adapter (ENA) enables workloads to participate in a low-latency, 25 Gbps network. Placement groups are recommended for workloads that benefit from low network latency, high network throughput, or both. Using placement groups has the benefit of lowering jitter in network communications.

Latency-sensitive services are delivered at the edge using a global network of edge locations. These edge locations commonly provide services such as content delivery network (CDN) and domain name system (DNS). By having these services at the edge, workloads can respond with low latency to requests for content or DNS resolution. These services also provide geographic services, such as geotargeting of content (providing different content based on the end users' location) or latency-based routing to direct end users to the nearest Region (minimum latency).

Amazon CloudFront is a global CDN that can be used to accelerate both static content
such as images, scripts, and videos, as well as dynamic content such as APIs or web applications. It relies on a global network of edge locations that will cache the content and provide high-performance network connectivity to your users. CloudFront also accelerates many other features, such as content uploading and dynamic applications, making it a performance addition to all applications serving traffic over the internet. Lambda@Edge is a feature of Amazon CloudFront that will let you run code closer to users of your workload, which improves performance and reduces latency.

Amazon Route 53 is a highly available and scalable cloud DNS web service. It's designed to give developers and businesses an extremely reliable and cost-effective way to route end users to internet applications by translating names like www.example.com into numeric IP addresses, like 192.168.2.1, that computers use to connect to each other. Route 53 is fully compliant with IPv6.

AWS Outposts is designed for workloads that need to remain on-premises due to latency requirements, where you want that workload to run seamlessly with the rest of your other workloads in AWS. AWS Outposts are fully managed and configurable compute and storage racks built with AWS-designed hardware that allow you to run compute and storage on-premises, while seamlessly connecting to AWS's broad array of services in the cloud.

AWS Local Zones are a new type of AWS infrastructure designed to run workloads that require single-digit millisecond latency, like video rendering and graphics-intensive virtual desktop applications. Local Zones allow you to gain all the benefits of having compute and storage resources closer to end users.

AWS Wavelength is designed to deliver ultra-low latency applications to 5G devices by extending AWS infrastructure, services, APIs, and tools to 5G networks. Wavelength embeds storage and compute inside telco providers' 5G networks to help your 5G workload if it requires single-digit millisecond latency, such as IoT
devices, game streaming, autonomous vehicles, and live media production.

Use edge services to reduce latency and to enable content caching. Ensure that you have configured cache control correctly for both DNS and HTTP/HTTPS to gain the most benefit from these approaches.

Optimize network configuration based on metrics: Use collected and analyzed data to make informed decisions about optimizing your network configuration. Measure the impact of those changes, and use the impact measurements to make future decisions.

Enable VPC Flow Logs for all VPC networks that are used by your workload. VPC Flow Logs are a feature that allows you to capture information about the IP traffic going to and from network interfaces in your VPC. VPC Flow Logs help you with a number of tasks, such as troubleshooting why specific traffic is not reaching an instance, which in turn helps you diagnose overly restrictive security group rules. You can use flow logs as a security tool to monitor the traffic that is reaching your instance, to profile your network traffic, and to look for abnormal traffic behaviors.

Use networking metrics to make changes to networking configuration as the workload evolves. Cloud-based networks can be quickly rebuilt, so evolving your network architecture over time is necessary to maintain performance efficiency.

Resources

Refer to the following resources to learn more about AWS best practices for networking.

Videos
• Connectivity to AWS and hybrid AWS network architectures (NET317-R1)
• Optimizing Network Performance for Amazon EC2 Instances (CMP308-R1)

Documentation
• Transitioning to Latency-Based Routing in Amazon Route 53
• Networking Products with AWS
• EC2
o Amazon EBS-Optimized Instances
o EC2 Enhanced Networking on Linux
o EC2 Enhanced Networking on Windows
o EC2 Placement Groups
o Enabling Enhanced Networking with the Elastic Network Adapter (ENA)
on Linux Instances
• VPC
o Transit Gateway
o VPC Endpoints
o VPC Flow Logs
• Elastic Load Balancers
o Application Load Balancer
o Network Load Balancer

Review

When architecting workloads, there are finite options that you can choose from. However, over time, new technologies and approaches become available that could improve the performance of your workload. In the cloud, it's much easier to experiment with new features and services because your infrastructure is code.

To adopt a data-driven approach to architecture, you should implement a performance review process that considers the following:

• Infrastructure as code: Define your infrastructure as code using approaches such as AWS CloudFormation templates. The use of templates allows you to place your infrastructure into source control alongside your application code and configurations. This allows you to apply the same practices you use to develop software to your infrastructure, so you can iterate rapidly.

• Deployment pipeline: Use a continuous integration/continuous deployment (CI/CD) pipeline (for example, source code repository, build systems, deployment, and testing automation) to deploy your infrastructure. This enables you to deploy in a repeatable, consistent, and low-cost fashion as you iterate.

• Well-defined metrics: Set up your metrics and monitoring to capture key performance indicators (KPIs). We recommend that you use both technical and business metrics. For websites or mobile apps, key metrics are capturing time to first byte or rendering. Other generally applicable metrics include thread count, garbage collection rate, and wait states. Business metrics, such as the aggregate cumulative cost per request, can alert you to ways to drive down costs. Carefully consider how you plan to interpret metrics. For example, you could choose the maximum or 99th percentile instead of the average.

• Performance test automatically: As part of your deployment process, automatically
trigger performance tests after the quicker-running tests have passed successfully. The automation should create a new environment, set up initial conditions such as test data, and then execute a series of benchmarks and load tests. Results from these tests should be tied back to the build so you can track performance changes over time. For long-running tests, you can make this part of the pipeline asynchronous from the rest of the build. Alternatively, you could execute performance tests overnight using Amazon EC2 Spot Instances.

• Load generation: You should create a series of test scripts that replicate synthetic or prerecorded user journeys. These scripts should be idempotent and not coupled, and you might need to include “pre-warming” scripts to yield valid results. As much as possible, your test scripts should replicate the behavior of usage in production. You can use software or software-as-a-service (SaaS) solutions to generate the load. Consider using AWS Marketplace solutions and Spot Instances; they can be cost-effective ways to generate the load.

• Performance visibility: Key metrics should be visible to your team, especially metrics against each build version. This allows you to see any significant positive or negative trend over time. You should also display metrics on the number of errors or exceptions to make sure you are testing a working system.

• Visualization: Use visualization techniques that make it clear where performance issues, hot spots, wait states, or low utilization is occurring. Overlaying performance metrics over architecture diagrams, call graphs, or code can help identify issues quickly.

This performance review process can be implemented as a simple extension of your existing deployment pipeline and then evolved over time as your testing requirements become more sophisticated. For future architectures, you can generalize your approach and reuse the same process and artifacts.

Architectures performing
poorly is usually the result of a nonexistent or broken performance review process. If your architecture is performing poorly, implementing a performance review process allows you to apply Deming's plan-do-check-act (PDCA) cycle to drive iterative improvement.

Evolve Your Workload to Take Advantage of New Releases

Take advantage of the continual innovation at AWS, driven by customer need. We release new Regions, edge locations, services, and features regularly. Any of these releases could positively improve the performance efficiency of your architecture.

Stay up to date on new resources and services: Evaluate ways to improve performance as new services, design patterns, and product offerings become available. Determine which of these could improve performance or increase the efficiency of the workload through ad hoc evaluation, internal discussion, or external analysis.

Define a process to evaluate updates, new features, and services from AWS. For example, build proofs of concept that use new technologies, or consult with an internal group. When trying new ideas or services, run performance tests to measure the impact that they have on the efficiency or performance of the workload. Take advantage of the flexibility that you have in AWS to test new ideas or technologies frequently, with minimal cost or risk.

Define a process to improve workload performance: Define a process to evaluate new services, design patterns, resource types, and configurations as they become available. For example, run existing performance tests on new instance offerings to determine their potential to improve your workload. Your workload's performance has a few key constraints. Document these so that you know what kinds of innovation might improve the performance of your workload. Use this information when learning about new services or technology as it becomes available to identify ways to alleviate constraints or bottlenecks.

Evolve workload
performance over time: As an organization, use the information gathered through the evaluation process to actively drive adoption of new services or resources when they become available. Use the information you gather when evaluating new services or technologies to drive change. As your business or workload changes, performance needs also change. Use data gathered from your workload metrics to evaluate areas where you can get the biggest gains in efficiency or performance, and proactively adopt new services and technologies to keep up with demand.

Resources

Refer to the following resources to learn more about AWS best practices for benchmarking.

Videos
• Amazon Web Services YouTube Channel
• AWS Online Tech Talks YouTube Channel
• AWS Events YouTube Channel

Monitoring

After you implement your architecture, you must monitor its performance so that you can remediate any issues before they impact your customers. Monitoring metrics should be used to raise alarms when thresholds are breached. Monitoring at AWS consists of five distinct phases, which are explained in more detail in the Reliability Pillar whitepaper:

1. Generation – scope of monitoring, metrics, and thresholds
2. Aggregation – creating a complete view from multiple sources
3. Real-time processing and alarming – recognizing and responding
4. Storage – data management and retention policies
5. Analytics – dashboards, reporting, and insights

CloudWatch is a monitoring service for AWS Cloud resources and the workloads that run on AWS. You can use CloudWatch to collect and track metrics, collect and monitor log files, and set alarms. CloudWatch can monitor AWS resources such as EC2 instances and RDS DB instances, as well as custom metrics generated by your workloads and services, and any log files your applications generate. You can use CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. You can use these
insights to react quickly and keep your workload running smoothly. CloudWatch dashboards enable you to create reusable graphs of AWS resources and custom metrics, so you can monitor operational status and identify issues at a glance.

Ensuring that you do not see false positives is key to an effective monitoring solution. Automated triggers avoid human error and can reduce the time it takes to fix problems. Plan for game days, where simulations are conducted in the production environment, to test your alarm solution and ensure that it correctly recognizes issues.

Monitoring solutions fall into two types: active monitoring (AM) and passive monitoring (PM). AM and PM complement each other to give you a full view of how your workload is performing.

Active monitoring simulates user activity in scripted user journeys across critical paths in your product. AM should be performed continuously in order to test the performance and availability of a workload. AM complements PM by being continuous, lightweight, and predictable. It can be run across all environments (especially pre-production environments) to identify problems or performance issues before they impact end users.

Passive monitoring is commonly used with web-based workloads. PM collects performance metrics from the browser (non-web-based workloads can use a similar approach). You can collect metrics across all users (or a subset of users), geographies, browsers, and device types. Use PM to understand the following issues:

• User experience performance: PM provides you with metrics on what your users are experiencing, which gives you a continuous view into how production is working, as well as a view into the impact of changes over time.

• Geographic performance variability: If a workload has a global footprint and users access the workload from all around the world, using PM can enable you to spot a performance problem impacting users in a specific geography.

• The impact of
API use: Modern workloads use internal APIs and third-party APIs. PM provides visibility into the use of APIs, so you can identify performance bottlenecks that originate not only from internal APIs, but also from third-party API providers.

CloudWatch provides the ability to monitor and send notification alarms. You can use automation to work around performance issues by triggering actions through Amazon Kinesis, Amazon Simple Queue Service (Amazon SQS), and AWS Lambda.

Monitor Your Resources to Ensure That They Are Performing as Expected

System performance can degrade over time. Monitor system performance to identify degradation and remediate internal or external factors, such as the operating system or application load.

Record performance-related metrics: Use a monitoring and observability service to record performance-related metrics. For example, record database transactions, slow queries, I/O latency, HTTP request throughput, service latency, or other key data. Identify the performance metrics that matter for your workload and record them. This data is an important part of being able to identify which components are impacting the overall performance or efficiency of the workload.

Working back from the customer experience, identify the metrics that matter. For each metric, identify the target, measurement approach, and priority. Use these to build alarms and notifications to proactively address performance-related issues.

Analyze metrics when events or incidents occur: In response to (or during) an event or incident, use monitoring dashboards or reports to understand and diagnose the impact. These views provide insight into which portions of the workload are not performing as expected. When you write critical user stories for your architecture, include performance requirements, such as specifying how quickly each critical story should execute. For these critical stories, implement additional scripted user journeys to ensure that you know
how these stories perform against your requirements.

Establish key performance indicators (KPIs) to measure workload performance: Identify the KPIs that indicate whether the workload is performing as intended. For example, an API-based workload might use overall response latency as an indication of overall performance, and an e-commerce site might choose to use the number of purchases as its KPI. Document the performance experience required by customers, including how customers will judge the performance of the workload. Use these requirements to establish your key performance indicators (KPIs), which will indicate how the system is performing overall.

Use monitoring to generate alarm-based notifications: Using the performance-related KPIs that you defined, use a monitoring system that generates alarms automatically when these measurements are outside expected boundaries. Amazon CloudWatch can collect metrics across the resources in your architecture. You can also collect and publish custom metrics to surface business or derived metrics. Use CloudWatch or a third-party monitoring service to set alarms that indicate when thresholds are breached; the alarms signal that a metric is outside of the expected boundaries.

Review metrics at regular intervals: As routine maintenance, or in response to events or incidents, review which metrics are collected. Use these reviews to identify which metrics were key in addressing issues and which additional metrics, if they were being tracked, would help to identify, address, or prevent issues.

As part of responding to incidents or events, evaluate which metrics were helpful in addressing the issue, and which metrics could have helped that are not currently being tracked. Use this to improve the quality of the metrics you collect, so that you can prevent or more quickly resolve future incidents.

Monitor and alarm proactively: Use key performance indicators (KPIs),
combined with monitoring and alerting systems, to proactively address performance-related issues. Use alarms to trigger automated actions to remediate issues where possible. Escalate the alarm to those able to respond if an automated response is not possible. For example, you may have a system that can predict expected KPI values and alarm when they breach certain thresholds, or a tool that can automatically halt or roll back deployments if KPIs are outside of expected values.

Implement processes that provide visibility into performance as your workload is running. Build monitoring dashboards and establish baseline norms for performance expectations to determine if the workload is performing optimally.

Resources

Refer to the following resources to learn more about AWS best practices for monitoring to promote performance efficiency.

Videos
• Cut through the chaos: Gain operational visibility and insight (MGT301-R1)

Documentation
• X-Ray Documentation
• CloudWatch Documentation

Trade-offs

When you architect solutions, think about trade-offs to ensure an optimal approach. Depending on your situation, you could trade consistency, durability, and space for time or latency, to deliver higher performance.

Using AWS, you can go global in minutes and deploy resources in multiple locations across the globe to be closer to your end users. You can also dynamically add read-only replicas to information stores (such as database systems) to reduce the load on the primary database.

AWS offers caching solutions such as Amazon ElastiCache, which provides an in-memory data store or cache, and Amazon CloudFront, which caches copies of your static content closer to end users. Amazon DynamoDB Accelerator (DAX) provides a read-through/write-through distributed caching tier in front of Amazon DynamoDB, supporting the same API but providing sub-millisecond latency for entities that are in the cache.

Using Trade-offs to
Improve Performance

When architecting solutions, actively considering trade-offs enables you to select an optimal approach. Often you can improve performance by trading consistency, durability, and space for time and latency. Trade-offs can increase the complexity of your architecture, and require load testing to ensure that a measurable benefit is obtained.

Understand the areas where performance is most critical: Understand and identify the areas where increasing the performance of your workload will have a positive impact on efficiency or customer experience. For example, a website that has a large amount of customer interaction can benefit from using edge services to move content delivery closer to customers.

Learn about design patterns and services: Research and understand the various design patterns and services that help improve workload performance. As part of the analysis, identify what you could trade to achieve higher performance. For example, using a cache service can help to reduce the load placed on database systems; however, it requires some engineering to implement safe caching, or may introduce eventual consistency in some areas.

Learn which performance configuration options are available to you and how they could impact the workload. Optimizing the performance of your workload depends on understanding how these options interact with your architecture and the impact that they will have on both measured performance and the performance perceived by users.

The Amazon Builders' Library provides readers with a detailed description of how Amazon builds and operates technology. These free articles are written by Amazon's senior engineers and cover topics across architecture, software delivery, and operations. For example, you can see how Amazon automates software delivery to achieve over 150 million deployments a year, or how Amazon's engineers implement principles such as shuffle sharding to build resilient systems that are highly available and fault tolerant.
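The caching trade-off described above, accepting possibly stale data in exchange for lower latency and reduced load on the backing store, can be sketched in a few lines. This is an illustrative sketch, not AWS or ElastiCache code; the class and names are hypothetical:

```python
import time

class ReadThroughCache:
    """Minimal read-through cache: serve from memory while fresh,
    otherwise fall back to the (slow) backing store. Staleness of up
    to `ttl` seconds is the consistency traded away for latency."""

    def __init__(self, loader, ttl=30.0):
        self._loader = loader      # e.g., a database query function
        self._ttl = ttl
        self._store = {}           # key -> (value, fetched_at)

    def get(self, key):
        hit = self._store.get(key)
        if hit and time.monotonic() - hit[1] < self._ttl:
            return hit[0]          # cache hit: backing store not touched
        value = self._loader(key)
        self._store[key] = (value, time.monotonic())
        return value

# Illustrative backing store that records how often it is actually hit.
calls = []
def slow_lookup(key):
    calls.append(key)
    return key.upper()

cache = ReadThroughCache(slow_lookup, ttl=60.0)
first = cache.get("sku-1")   # miss: loads from the backing store
second = cache.get("sku-1")  # hit: served from cache within the TTL
```

The invalidation strategy (here, a simple TTL) is exactly the engineering cost the text mentions: a longer TTL reduces load further but widens the window of eventual consistency.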
Identify how trade-offs impact customers and efficiency: When evaluating performance-related improvements, determine which choices will impact your customers and workload efficiency. For example, if using a key-value data store increases system performance, it is important to evaluate how its eventually consistent nature will impact customers.

Identify areas of poor performance in your system through metrics and monitoring. Determine how you can make improvements, what trade-offs those improvements bring, and how they impact the system and the user experience. For example, caching data can help dramatically improve performance, but it requires a clear strategy for how and when to update or invalidate cached data to prevent incorrect system behavior.

Measure the impact of performance improvements: As changes are made to improve performance, evaluate the collected metrics and data. Use this information to determine the impact that the performance improvement had on the workload, the workload's components, and your customers. This measurement helps you understand the improvements that result from the trade-off, and helps you determine if any negative side effects were introduced.

A well-architected system uses a combination of performance-related strategies. Determine which strategy will have the largest positive impact on a given hotspot or bottleneck. For example, sharding data across multiple relational database systems could improve overall throughput while retaining support for transactions, and within each shard, caching can help to reduce the load.

Use various performance-related strategies: Where applicable, use multiple strategies to improve performance. For example: caching data to prevent excessive network or database calls, using read replicas for database engines to improve read rates, sharding or compressing data where possible to reduce data volumes, and buffering and streaming of
results as they are available to avoid blocking.

As you make changes to the workload, collect and evaluate metrics to determine the impact of those changes. Measure the impacts to the system and to the end user to understand how your trade-offs impact your workload. Use a systematic approach, such as load testing, to explore whether the trade-off improves performance.

Resources

Refer to the following resources to learn more about AWS best practices for caching.

Video
• Introducing The Amazon Builders' Library (DOP328)

Documentation
• Amazon Builders' Library
• Best Practices for Implementing Amazon ElastiCache

Conclusion

Achieving and maintaining performance efficiency requires a data-driven approach. You should actively consider access patterns and trade-offs that will allow you to optimize for higher performance. Using a review process based on benchmarks and load tests allows you to select the appropriate resource types and configurations. Treating your infrastructure as code enables you to rapidly and safely evolve your architecture, while you use data to make fact-based decisions about your architecture. Putting in place a combination of active and passive monitoring ensures that the performance of your architecture does not degrade over time.

AWS strives to help you build architectures that perform efficiently while delivering business value. Use the tools and techniques discussed in this paper to ensure success.

Contributors

The following individuals and organizations contributed to this document:
• Eric Pullen, Performance Efficiency Lead, Well-Architected, Amazon Web Services
• Philip Fitzsimons, Sr. Manager, Well-Architected, Amazon Web Services
• Julien Lépine, Specialist SA Manager, Amazon Web Services
• Ronnen Slasky, Solutions Architect, Amazon Web Services

Further Reading

For additional help, consult the following sources:
• AWS Well-Architected Framework
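One of the strategies discussed under trade-offs, sharding data across multiple relational database systems, depends on a stable routing rule so that the same key always lands on the same shard. A minimal sketch, where the shard names are hypothetical placeholders for real database connections:

```python
import hashlib

# Hypothetical shard set; in practice these would be connection handles
# to separate database instances.
SHARDS = ["orders-db-0", "orders-db-1", "orders-db-2"]

def shard_for(key, shards):
    """Route a key to a shard with a stable hash, so a given customer
    always maps to the same database regardless of which application
    server computes the route."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return shards[int.from_bytes(digest[:8], "big") % len(shards)]

target = shard_for("customer-1234", SHARDS)
same_target = shard_for("customer-1234", SHARDS)  # identical routing
```

Note that simple modulo routing reshuffles most keys when the shard count changes; consistent hashing is the usual refinement when shards are added or removed frequently.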
Document Revisions

• July 2020 – Major review and update of content
• July 2018 – Minor update for grammatical issues
• November 2017 – Refreshed the whitepaper to reflect changes in AWS
• November 2016 – First publication

Reliability Pillar: AWS Well-Architected Framework

This paper has been archived. The latest version is now available at: https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/welcome.html

Copyright © 2020 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon.

Abstract

Publication date: July 2020

The focus of this paper is the reliability pillar of the AWS Well-Architected Framework. It provides guidance to help customers apply best practices in the design, delivery, and maintenance of Amazon Web Services (AWS) environments.

Introduction

The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building workloads on AWS. By using the Framework, you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective workloads in the cloud. It provides a way to consistently measure your architectures against best practices and identify areas for improvement. We believe that having a well-architected workload greatly increases the likelihood of business success.

The AWS Well-Architected Framework is based on five pillars:
• Operational Excellence
• Security
• Reliability
• Performance Efficiency
• Cost Optimization

This paper focuses on the
reliability pillar and how to apply it to your solutions. Achieving reliability can be challenging in traditional on-premises environments due to single points of failure, lack of automation, and lack of elasticity. By adopting the practices in this paper, you will build architectures that have strong foundations, resilient architecture, consistent change management, and proven failure recovery processes.

This paper is intended for those in technology roles, such as chief technology officers (CTOs), architects, developers, and operations team members. After reading this paper, you will understand AWS best practices and strategies to use when designing cloud architectures for reliability. This paper includes high-level implementation details and architectural patterns, as well as references to additional resources.

Reliability

The reliability pillar encompasses the ability of a workload to perform its intended function correctly and consistently when it's expected to. This includes the ability to operate and test the workload through its total lifecycle. This paper provides in-depth best practice guidance for implementing reliable workloads on AWS.

Topics
• Design Principles
• Definitions
• Understanding Availability Needs

Design Principles

In the cloud, there are a number of principles that can help you increase reliability. Keep these in mind as we discuss best practices:

• Automatically recover from failure: By monitoring a workload for key performance indicators (KPIs), you can trigger automation when a threshold is breached. These KPIs should be a measure of business value, not of the technical aspects of the operation of the service. This allows for automatic notification and tracking of failures, and for automated recovery processes that work around or repair the failure. With more sophisticated automation, it's possible to anticipate and remediate failures before they occur.

• Test recovery procedures:
In an on-premises environment, testing is often conducted to prove that the workload works in a particular scenario. Testing is not typically used to validate recovery strategies. In the cloud, you can test how your workload fails, and you can validate your recovery procedures. You can use automation to simulate different failures or to recreate scenarios that led to failures before. This approach exposes failure pathways that you can test and fix before a real failure scenario occurs, thus reducing risk.

• Scale horizontally to increase aggregate workload availability: Replace one large resource with multiple small resources to reduce the impact of a single failure on the overall workload. Distribute requests across multiple, smaller resources to ensure that they don't share a common point of failure.

• Stop guessing capacity: A common cause of failure in on-premises workloads is resource saturation, when the demands placed on a workload exceed the capacity of that workload (this is often the objective of denial of service attacks). In the cloud, you can monitor demand and workload utilization, and automate the addition or removal of resources to maintain the optimal level to satisfy demand without over- or under-provisioning. There are still limits, but some quotas can be controlled and others can be managed (see Manage Service Quotas and Constraints).

• Manage change in automation: Changes to your infrastructure should be made using automation. The changes that need to be managed include changes to the automation itself, which then can be tracked and reviewed.

Definitions

This whitepaper covers reliability in the cloud, describing best practice for these four areas:
• Foundations
• Workload Architecture
• Change Management
• Failure Management

To achieve reliability, you must start with the foundations: an environment where service quotas and network topology accommodate the workload. The
workload architecture of the distributed system must be designed to prevent and mitigate failures. The workload must handle changes in demand or requirements, and it must be designed to detect failure and automatically heal itself.

Topics
• Resiliency and the Components of Reliability
• Availability
• Disaster Recovery (DR) Objectives

Resiliency and the Components of Reliability

Reliability of a workload in the cloud depends on several factors, the primary of which is resiliency:

• Resiliency is the ability of a workload to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions, such as misconfigurations or transient network issues.

The other factors impacting workload reliability are:

• Operational Excellence, which includes automation of changes, use of playbooks to respond to failures, and Operational Readiness Reviews (ORRs) to confirm that applications are ready for production operations.

• Security, which includes preventing harm to data or infrastructure from malicious actors, which would impact availability. For example, encrypt backups to ensure that data is secure.

• Performance Efficiency, which includes designing for maximum request rates and minimizing latencies for your workload.

• Cost Optimization, which includes trade-offs such as whether to spend more on EC2 instances to achieve static stability, or to rely on automatic scaling when more capacity is needed.

Resiliency is the primary focus of this whitepaper. The other four aspects are also important, and they are covered by their respective pillars of the AWS Well-Architected Framework. Many of the best practices here also address those aspects of reliability, but the focus is on resiliency.

Availability

Availability (also known as service availability) is both a commonly used metric to quantitatively measure resiliency, as well as a target resiliency objective.

• Availability is the percentage of time that a workload is available for use.
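To make the percentage definition concrete, an availability target translates directly into an annual downtime budget. A small sketch, assuming a 365-day year:

```python
def max_annual_downtime_hours(availability_pct):
    """Annual downtime budget implied by an availability target,
    assuming a 365-day (8,760-hour) year."""
    return (100.0 - availability_pct) / 100.0 * 365 * 24

two_nines = max_annual_downtime_hours(99.0)    # ~87.6 hours (3 days 15 hours)
three_nines = max_annual_downtime_hours(99.9)  # ~8.76 hours (8 hours 45 minutes)
```

Each additional nine divides the allowable downtime by ten, which is why the engineering cost of availability grows so quickly with the target.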
"Available for use" means that it performs its agreed function successfully when required. This percentage is calculated over a period of time, such as a month, year, or trailing three years. Applying the strictest possible interpretation, availability is reduced any time that the application isn't operating normally, including both scheduled and unscheduled interruptions. We define availability as follows:

• Availability is a percentage uptime (such as 99.9%) over a period of time (commonly a month or year).

• Common shorthand refers only to the "number of nines"; for example, "five nines" translates to being 99.999% available.

• Some customers choose to exclude scheduled service downtime (for example, planned maintenance) from the Total Time in the formula. However, this is not advised, as your users will likely want to use your service during these times.

Here is a table of common application availability design goals and the maximum length of time that interruptions can occur within a year while still meeting the goal. The table contains examples of the types of applications we commonly see at each availability tier. Throughout this document we refer to these values.

Availability | Maximum Unavailability (per year) | Application Categories
99%      | 3 days 15 hours   | Batch processing, data extraction, transfer, and load jobs
99.9%    | 8 hours 45 minutes | Internal tools like knowledge management, project tracking
99.95%   | 4 hours 22 minutes | Online commerce, point of sale
99.99%   | 52 minutes        | Video delivery, broadcast workloads
99.999%  | 5 minutes         | ATM transactions, telecommunications workloads

Measuring availability based on requests: For your service, it may be easier to count successful and failed requests instead of "time available for use." In this case, the following calculation can be used:

Availability = (Successful responses / Valid requests) × 100%

This is often measured for one-minute or five-minute periods. Then a monthly uptime percentage (time-base availability
measurement) can be calculated from the average of these periods. If no requests are received in a given period, it is counted as 100% available for that time.

Calculating availability with hard dependencies: Many systems have hard dependencies on other systems, where an interruption in a dependent system directly translates to an interruption of the invoking system. This is opposed to a soft dependency, where a failure of the dependent system is compensated for in the application. Where such hard dependencies occur, the invoking system's availability is the product of the dependent systems' availabilities. For example, if you have a system designed for 99.99% availability that has a hard dependency on two other independent systems that each are designed for 99.99% availability, the workload can theoretically achieve 99.97% availability:

Avail_invoking × Avail_dep1 × Avail_dep2 = Avail_workload
99.99% × 99.99% × 99.99% = 99.97%

It's therefore important to understand your dependencies and their availability design goals as you calculate your own.

Calculating availability with redundant components: When a system involves the use of independent, redundant components (for example, redundant resources in different Availability Zones), the theoretical availability is computed as 100% minus the product of the component failure rates. For example, if a system makes use of two independent components, each with an availability of 99.9%, the effective availability of this dependency is 99.9999%:

Avail_effective = Avail_MAX − ((100% − Avail_dependency) × (100% − Avail_dependency))
99.9999% = 100% − (0.1% × 0.1%)

Shortcut calculation: If the availabilities of all components in your calculation consist solely of the digit nine, then you can sum the count of the number of nines digits to get your answer. In the above example, two redundant, independent components with three nines availability results in six nines.

Calculating dependency availability: Some
dependencies provide guidance on their availability, including availability design goals for many AWS services (see Appendix A: Designed-For Availability for Select AWS Services). But in cases where this isn't available (for example, a component where the manufacturer does not publish availability information), one way to estimate is to determine the Mean Time Between Failure (MTBF) and Mean Time to Recover (MTTR). An availability estimate can be established by:

Availability = MTBF / (MTBF + MTTR)

For example, if the MTBF is 150 days and the MTTR is 1 hour, the availability estimate is 99.97%.

For additional details, see the document Calculating Total System Availability, which can help you calculate your availability.

Costs for availability: Designing applications for higher levels of availability typically results in increased cost, so it's appropriate to identify the true availability needs before embarking on your application design. High levels of availability impose stricter requirements for testing and validation under exhaustive failure scenarios. They require automation for recovery from all manner of failures, and require that all aspects of system operations be similarly built and tested to the same standards. For example, the addition or removal of capacity, the deployment or rollback of updated software or configuration changes, or the migration of system data must be conducted to the desired availability goal. Compounding the costs for software development, at very high levels of availability innovation suffers, because of the need to move more slowly in deploying systems. The guidance, therefore, is to be thorough in applying the standards and considering the appropriate availability target for the entire lifecycle of operating the system.

Another way that costs escalate in systems that operate with higher availability design goals is in the selection of dependencies. At these higher goals, the set of software or
services that can be chosen as dependencies diminishes, based on which of these services have had the deep investments we previously described. As the availability design goal increases, it's typical to find fewer multipurpose services (such as a relational database) and more purpose-built services. This is because the latter are easier to evaluate, test, and automate, and have a reduced potential for surprise interactions with included but unused functionality.

Disaster Recovery (DR) Objectives

In addition to availability objectives, your resiliency strategy should also include Disaster Recovery (DR) objectives, based on strategies to recover your workload in case of a disaster event. Disaster Recovery focuses on one-time recovery objectives in response to natural disasters, large-scale technical failures, or human threats such as attack or error. This is different than availability, which measures mean resiliency over a period of time in response to component failures, load spikes, or software bugs.

Recovery Time Objective (RTO): Defined by the organization. RTO is the maximum acceptable delay between the interruption of service and restoration of service. This determines what is considered an acceptable time window when service is unavailable.

Recovery Point Objective (RPO): Defined by the organization. RPO is the maximum acceptable amount of time since the last data recovery point. This determines what is considered an acceptable loss of data between the last recovery point and the interruption of service.

Note the relationship between RPO (Recovery Point Objective), RTO (Recovery Time Objective), and the disaster event: RTO is similar to MTTR (Mean Time to Recovery) in that both measure the time between the start of an outage and workload recovery. However, MTTR is a mean value taken over several availability-impacting events over a period of time, while RTO is a target, or maximum value allowed, for a single
availability-impacting event.

Understanding Availability Needs

It's common to initially think of an application's availability as a single target for the application as a whole. However, upon closer inspection, we frequently find that certain aspects of an application or service have different availability requirements. For example, some systems might prioritize the ability to receive and store new data ahead of retrieving existing data. Other systems prioritize real-time operations over operations that change a system's configuration or environment. Services might have very high availability requirements during certain hours of the day, but can tolerate much longer periods of disruption outside of these hours. These are a few of the ways that you can decompose a single application into constituent parts, and evaluate the availability requirements for each. The benefit of doing this is to focus your efforts (and expense) on availability according to specific needs, rather than engineering the whole system to the strictest requirement.

Recommendation: Critically evaluate the unique aspects of your applications and, where appropriate, differentiate the availability and disaster recovery design goals to reflect the needs of your business.

Within AWS, we commonly divide services into the "data plane" and the "control plane". The data plane is responsible for delivering real-time service, while control planes are used to configure the environment. For example, Amazon EC2 instances, Amazon RDS databases, and Amazon DynamoDB table read/write operations are all data plane operations. In contrast, launching new EC2 instances or RDS databases, or adding or changing table metadata in DynamoDB, are all considered control plane operations. While high levels of availability are important for all of these capabilities, the data planes typically have higher availability design goals than the control planes. Therefore, workloads with high availability requirements should avoid runtime dependency on control plane
operations. Many AWS customers take a similar approach to critically evaluating their applications and identifying subcomponents with different availability needs. Availability design goals are then tailored to the different aspects, and the appropriate work efforts are executed to engineer the system. AWS has significant experience engineering applications with a range of availability design goals, including services with 99.999% or greater availability. AWS Solutions Architects (SAs) can help you design appropriately for your availability goals. Involving AWS early in your design process improves our ability to help you meet your availability goals. Planning for availability is not only done before your workload launches. It's also done continuously, to refine your design as you gain operational experience, learn from real-world events, and endure failures of different types. You can then apply the appropriate work effort to improve upon your implementation. The availability needs that are required for a workload must be aligned to the business need and criticality. By first defining a business criticality framework with defined RTO, RPO, and availability, you can then assess each workload. Such an approach requires that the people involved in implementation of the workload are knowledgeable of the framework, and the impact their workload has on business needs.

Foundations

Foundational requirements are those whose scope extends beyond a single workload or project. Before architecting any system, foundational requirements that influence reliability should be in place. For example, you must have sufficient network bandwidth to your data center. In an on-premises environment, these requirements can cause long lead times due to dependencies, and therefore must be incorporated during initial planning. With AWS, however, most of these foundational requirements are already incorporated or can be addressed as
needed. The cloud is designed to be nearly limitless, so it's the responsibility of AWS to satisfy the requirement for sufficient networking and compute capacity, leaving you free to change resource size and allocations on demand. The following sections explain best practices that focus on these considerations for reliability.

Topics
•Manage Service Quotas and Constraints (p. 9)
•Plan your Network Topology (p. 10)

Manage Service Quotas and Constraints

For cloud-based workload architectures, there are service quotas (which are also referred to as service limits). These quotas exist to prevent accidentally provisioning more resources than you need, and to limit request rates on API operations so as to protect services from abuse. There are also resource constraints, for example, the rate that you can push bits down a fiber-optic cable, or the amount of storage on a physical disk. If you are using AWS Marketplace applications, you must understand the limitations of those applications. If you are using third-party web services or software as a service, you must be aware of those limits also.

Aware of service quotas and constraints: You are aware of your default quotas and quota increase requests for your workload architecture. You additionally know which resource constraints, such as disk or network, are potentially impactful.

Service Quotas is an AWS service that helps you manage your quotas for over 100 AWS services from one location. Along with looking up the quota values, you can also request and track quota increases from the Service Quotas console or via the AWS SDK. AWS Trusted Advisor offers a service quotas check that displays your usage and quotas for some aspects of some services. The default service quotas per service are also in the AWS documentation for each respective service (for example, see Amazon VPC Quotas). Rate limits on throttled APIs are set within the API Gateway itself by configuring a usage plan. Other limits that are set as configuration on their respective services include
Provisioned IOPS, RDS storage allocated, and EBS volume allocations. Amazon Elastic Compute Cloud (Amazon EC2) has its own service limits dashboard that can help you manage your instance, Amazon Elastic Block Store (Amazon EBS), and Elastic IP address limits. If you have a use case where service quotas impact your application's performance, and they are not adjustable to your needs, then contact AWS Support to see if there are mitigations.

Manage quotas across accounts and regions: If you are using multiple AWS accounts or AWS Regions, ensure that you request the appropriate quotas in all environments in which your production workloads run. Service quotas are tracked per account. Unless otherwise noted, each quota is AWS Region-specific. In addition to the production environments, also manage quotas in all applicable non-production environments, so that testing and development are not hindered.

Accommodate fixed service quotas and constraints through architecture: Be aware of unchangeable service quotas and physical resources, and architect to prevent these from impacting reliability. Examples include network bandwidth, AWS Lambda payload size, throttle burst rate for API Gateway, and concurrent user connections to an Amazon Redshift cluster.

Monitor and manage quotas: Evaluate your potential usage and increase your quotas appropriately, allowing for planned growth in usage. For supported services, you can manage your quotas by configuring CloudWatch alarms to monitor usage and alert you to approaching quotas. These alarms can be triggered from Service Quotas or from Trusted Advisor. You can also use metric filters on CloudWatch Logs to search and extract patterns in logs to determine if usage is approaching quota thresholds.

Automate quota management: Implement tools to alert you when thresholds are being approached. By using Service Quotas APIs, you can automate quota increase requests. If you integrate your Configuration
Management Database (CMDB) or ticketing system with Service Quotas, you can automate the tracking of quota increase requests and current quotas. In addition to the AWS SDK, Service Quotas offers automation using AWS command line tools.

Ensure that a sufficient gap exists between the current quotas and the maximum usage to accommodate failover: When a resource fails, it may still be counted against quotas until it's successfully terminated. Ensure that your quotas cover the overlap of all failed resources with replacements, before the failed resources are terminated. You should consider an Availability Zone failure when calculating this gap.

Resources

Video
•AWS Live re:Inforce 2019 - Service Quotas

Documentation
•What Is Service Quotas?
•AWS Service Quotas (formerly referred to as service limits)
•Amazon EC2 Service Limits
•AWS Trusted Advisor Best Practice Checks (see the Service Limits section)
•AWS Limit Monitor on AWS Answers
•AWS Marketplace: CMDB products that help track limits
•APN Partner: partners that can help with configuration management

Plan your Network Topology

Workloads often exist in multiple environments. These include multiple cloud environments (both publicly accessible and private) and possibly your existing data center infrastructure. Plans must include network considerations, such as intra-system and inter-system connectivity, public IP address management, private IP address management, and domain name resolution. When architecting systems using IP address-based networks, you must plan network topology and addressing in anticipation of possible failures, and to accommodate future growth and integration with other systems and their networks. Amazon Virtual Private Cloud (Amazon VPC) lets you provision a private, isolated section of the AWS Cloud where you can launch AWS resources in a virtual network.

Use highly available network connectivity for your workload public endpoints: These
endpoints, and the routing to them, must be highly available. To achieve this, use highly available DNS, content delivery networks (CDNs), API Gateway, load balancing, or reverse proxies. Amazon Route 53, AWS Global Accelerator, Amazon CloudFront, Amazon API Gateway, and Elastic Load Balancing (ELB) all provide highly available public endpoints. You might also choose to evaluate AWS Marketplace software appliances for load balancing and proxying. Consumers of the service your workload provides, whether they are end users or other services, make requests on these service endpoints. Several AWS resources are available to enable you to provide highly available endpoints. Elastic Load Balancing provides load balancing across Availability Zones, performs Layer 4 (TCP) or Layer 7 (HTTP/HTTPS) routing, integrates with AWS WAF, and integrates with AWS Auto Scaling to help create a self-healing infrastructure and absorb increases in traffic while releasing resources when traffic decreases. Amazon Route 53 is a scalable and highly available Domain Name System (DNS) service that connects user requests to infrastructure running in AWS (such as Amazon EC2 instances, Elastic Load Balancing load balancers, or Amazon S3 buckets), and can also be used to route users to infrastructure outside of AWS. AWS Global Accelerator is a network layer service that you can use to direct traffic to optimal endpoints over the AWS global network. Distributed Denial of Service (DDoS) attacks risk shutting out legitimate traffic and lowering availability for your users. AWS Shield provides automatic protection against these attacks, at no extra cost, for AWS service endpoints on your workload. You can augment these features with virtual appliances from APN Partners and the AWS Marketplace to meet your needs.

Provision redundant connectivity between private networks in the cloud and on-premises environments: Use multiple AWS Direct Connect (DX) connections or VPN tunnels between separately deployed private networks. Use multiple DX
locations for high availability. If using multiple AWS Regions, ensure redundancy in at least two of them. You might want to evaluate AWS Marketplace appliances that terminate VPNs. If you use AWS Marketplace appliances, deploy redundant instances for high availability in different Availability Zones.

AWS Direct Connect is a cloud service that makes it easy to establish a dedicated network connection from your on-premises environment to AWS. Using Direct Connect Gateway, your on-premises data center can be connected to multiple AWS VPCs spread across multiple AWS Regions. This redundancy addresses possible failures that impact connectivity resiliency:
• How are you going to be resilient to failures in your topology?
• What happens if you misconfigure something and remove connectivity?
• Will you be able to handle an unexpected increase in traffic/use of your services?
• Will you be able to absorb an attempted Distributed Denial of Service (DDoS) attack?

When connecting your VPC to your on-premises data center via VPN, you should consider the resiliency and bandwidth requirements that you need when you select the vendor and instance size on which you need to run the appliance. If you use a VPN appliance that is not resilient in its implementation, then you should have a redundant connection through a second appliance. For all these scenarios, you need to define an acceptable time to recovery, and test to ensure that you can meet those requirements.

If you choose to connect your VPC to your data center using a Direct Connect connection, and you need this connection to be highly available, have redundant DX connections from each data center. The redundant connection should use a second DX connection from a different location than the first. If you have multiple data centers, ensure that the connections terminate at different locations. Use the Direct Connect Resiliency Toolkit to help you set this up. If you
choose to fail over to VPN over the internet using AWS VPN, it's important to understand that it supports up to 1.25 Gbps throughput per VPN tunnel, but does not support Equal Cost Multi Path (ECMP) for outbound traffic in the case of multiple AWS Managed VPN tunnels terminating on the same VGW. We do not recommend that you use AWS Managed VPN as a backup for Direct Connect connections unless you can tolerate speeds less than 1 Gbps during failover.

You can also use VPC endpoints to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink, without traversing the public internet. Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components. They allow communication between instances in your VPC and services, without imposing availability risks or bandwidth constraints on your network traffic.

Ensure IP subnet allocation accounts for expansion and availability: Amazon VPC IP address ranges must be large enough to accommodate workload requirements, including factoring in future expansion and allocation of IP addresses to subnets across Availability Zones. This includes load balancers, EC2 instances, and container-based applications. When you plan your network topology, the first step is to define the IP address space itself. Private IP address ranges (following RFC 1918 guidelines) should be allocated for each VPC. Accommodate the following requirements as part of this process:
• Allow IP address space for more than one VPC per Region.
• Within a VPC, allow space for multiple subnets that span multiple Availability Zones.
• Always leave unused CIDR block space within a VPC for future expansion.
• Ensure that there is IP address space to meet the needs of any transient fleets of EC2 instances that you might use, such as Spot Fleets for machine learning, Amazon EMR clusters, or Amazon Redshift clusters.
• Note that the first four IP addresses and the last IP address in each subnet CIDR block are reserved
and not available for your use.

You should plan on deploying large VPC CIDR blocks. Note that the initial VPC CIDR block allocated to your VPC cannot be changed or deleted, but you can add additional non-overlapping CIDR blocks to the VPC. Subnet IPv4 CIDRs cannot be changed, however IPv6 CIDRs can. Keep in mind that deploying the largest VPC possible (/16) results in over 65,000 IP addresses. In the base 10.x.x.x IP address space alone, you could provision 255 such VPCs. You should therefore err on the side of being too large rather than too small, to make it easier to manage your VPCs.

Prefer hub-and-spoke topologies over many-to-many mesh: If more than two network address spaces (for example, VPCs and on-premises networks) are connected via VPC peering, AWS Direct Connect, or VPN, then use a hub-and-spoke model like those provided by AWS Transit Gateway. If you have only two such networks, you can simply connect them to each other, but as the number of networks grows, the complexity of such meshed connections becomes untenable. AWS Transit Gateway provides an easy-to-maintain hub-and-spoke model, allowing the routing of traffic across your multiple networks.

Figure 1: Without AWS Transit Gateway: You need to peer each Amazon VPC to each other and to each onsite location using a VPN connection, which can become complex as it scales.

Figure 2: With AWS Transit Gateway: You simply connect each Amazon VPC or VPN to the AWS Transit Gateway, and it routes traffic to and from each VPC or VPN.

Enforce non-overlapping private IP address ranges in all private address spaces where they are connected: The IP address ranges of each of your VPCs must not overlap when peered or connected via VPN. You must similarly avoid IP address conflicts between a VPC and on-premises environments, or with other cloud providers that you use. You must also have a way to allocate private IP address ranges when needed. An IP address management (IPAM)
system can help with this. Several IPAMs are available from the AWS Marketplace.

Resources

Videos
•AWS re:Invent 2018: Advanced VPC Design and New Capabilities for Amazon VPC (NET303)
•AWS re:Invent 2019: AWS Transit Gateway reference architectures for many VPCs (NET406-R1)

Documentation
•What Is a Transit Gateway?
•What Is Amazon VPC?
•Working with Direct Connect Gateways
•Using the Direct Connect Resiliency Toolkit to get started
•Multiple data center HA network connectivity
•What Is AWS Global Accelerator?
•Using redundant Site-to-Site VPN connections to provide failover
•VPC Endpoints and VPC Endpoint Services (AWS PrivateLink)
•Amazon Virtual Private Cloud Connectivity Options Whitepaper
•AWS Marketplace for Network Infrastructure
•APN Partner: partners that can help plan your networking

Workload Architecture

A reliable workload starts with upfront design decisions for both software and infrastructure. Your architecture choices will impact your workload behavior across all five Well-Architected pillars. For reliability, there are specific patterns you must follow. The following sections explain best practices to use with these patterns for reliability.

Topics
•Design Your Workload Service Architecture (p. 15)
•Design Interactions in a Distributed System to Prevent Failures (p. 17)
•Design Interactions in a Distributed System to Mitigate or Withstand Failures (p. 20)

Design Your Workload Service Architecture

Build highly scalable and reliable workloads using a service-oriented architecture (SOA) or a microservices architecture. Service-oriented architecture (SOA) is the practice of making software components reusable via service interfaces. Microservices architecture goes further, to make components smaller and simpler. SOA interfaces use common communication standards so that they
can be rapidly incorporated into new workloads. SOA replaced the practice of building monolith architectures, which consisted of interdependent, indivisible units. At AWS, we have always used SOA, but have now embraced building our systems using microservices. While microservices have several attractive qualities, the most important benefit for availability is that microservices are smaller and simpler. They allow you to differentiate the availability required of different services, and thereby focus investments more specifically to the microservices that have the greatest availability needs. For example, to deliver product information pages on Amazon.com ("detail pages"), hundreds of microservices are invoked to build discrete portions of the page. While there are a few services that must be available to provide the price and the product details, the vast majority of content on the page can simply be excluded if the service isn't available. Even such things as photos and reviews are not required to provide an experience where a customer can buy a product.

Choose how to segment your workload: Monolithic architecture should be avoided. Instead, you should choose between SOA and microservices. When making each choice, balance the benefits against the complexities; what is right for a new product racing to first launch is different than what a workload built to scale from the start needs. The benefits of using smaller segments include greater agility, organizational flexibility, and scalability. Complexities include possible increased latency, more complex debugging, and increased operational burden. Even if you choose to start with a monolith architecture, you must ensure that it's modular, and has the ability to ultimately evolve to SOA or microservices as your product scales with user adoption. SOA and microservices offer, respectively, smaller segmentation, which is preferred as a modern, scalable, and reliable architecture, but there are trade-offs to consider, especially when deploying a microservice
architecture. One is that you now have a distributed compute architecture that can make it harder to achieve user latency requirements, and there is additional complexity in debugging and tracing of user interactions. AWS X-Ray can be used to assist you in solving this problem. Another effect to consider is increased operational complexity as you proliferate the number of applications that you are managing, which requires the deployment of multiple independent components.

Figure 3: Monolithic architecture versus microservices architecture

Build services focused on specific business domains and functionality: SOA builds services with well-delineated functions defined by business needs. Microservices use domain models and bounded context to limit this further, so that each service does just one thing. Focusing on specific functionality enables you to differentiate the reliability requirements of different services, and target investments more specifically. A concise business problem and a small team associated with each service also enables easier organizational scaling.

In designing a microservice architecture, it's helpful to use Domain-Driven Design (DDD) to model the business problem using entities. For example, for Amazon.com, entities might include package, delivery, schedule, price, discount, and currency. The model is then further divided into smaller models using Bounded Context, where entities that share similar features and attributes are grouped together. So, using the Amazon example, package, delivery, and schedule would be part of the shipping context, while price, discount, and currency are part of the pricing context. With the model divided into contexts, a template for how to draw microservice boundaries emerges.

Provide service contracts per API: Service contracts are documented agreements between teams on service integration, and include a machine-readable API definition, rate limits, and
performance expectations. A versioning strategy allows clients to continue using the existing API, and migrate their applications to the newer API when they are ready. Deployment can happen anytime, as long as the contract is not violated. The service provider team can use the technology stack of their choice to satisfy the API contract. Similarly, the service consumer can use their own technology.

Microservices take the concept of SOA to the point of creating services that have a minimal set of functionality. Each service publishes an API and design goals, limits, and other considerations for using the service. This establishes a "contract" with calling applications. This accomplishes three main benefits:
• The service has a concise business problem to be served, and a small team that owns the business problem. This allows for better organizational scaling.
• The team can deploy at any time, as long as they meet their API and other "contract" requirements.
• The team can use any technology stack they want to, as long as they meet their API and other "contract" requirements.

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. It handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. Using the OpenAPI Specification (OAS), formerly known as the Swagger Specification, you can define your API contract and import it into API Gateway. With API Gateway, you can then version and deploy the APIs.

Resources

Documentation
•Amazon API Gateway: Configuring a REST API Using OpenAPI
•Implementing Microservices on AWS
•Microservices on AWS

External Links
•Microservices (a definition of this new architectural term)
•Microservice Trade-Offs
•Bounded Context (a central pattern in Domain-Driven Design)
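The "contract" idea described above can be sketched in a few lines of code. This is a minimal illustration, not an API Gateway feature: the contract is reduced to a set of promised fields and types, and all names (CONTRACT_V1, satisfies_contract, the response fields) are hypothetical.

```python
# Sketch of a machine-readable service contract: a provider may add fields
# in a new version, but must keep every field promised in the published
# contract. All names here are hypothetical, for illustration only.

CONTRACT_V1 = {"product_id": str, "price_cents": int, "currency": str}

def satisfies_contract(response: dict, contract: dict) -> bool:
    """True if the response contains every promised field with the promised type."""
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )

# A v2 deployment adds a field; v1 clients are unaffected because the
# original contract is still honored.
v2_response = {"product_id": "B00X", "price_cents": 1299,
               "currency": "USD", "discount_cents": 100}

# Dropping a promised field, however, violates the v1 contract.
broken_response = {"product_id": "B00X"}

assert satisfies_contract(v2_response, CONTRACT_V1)
assert not satisfies_contract(broken_response, CONTRACT_V1)
```

The same principle is what makes the deployment freedom above possible: the provider team can ship at any time, in any technology stack, because clients validate only the published interface.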
Design Interactions in a Distributed System to Prevent Failures

Distributed systems rely on communications networks to interconnect components, such as servers or services. Your workload must operate reliably despite data loss or latency in these networks. Components of the distributed system must operate in a way that does not negatively impact other components or the workload. These best practices prevent failures and improve mean time between failures (MTBF).

Identify which kind of distributed system is required: Hard real-time distributed systems require responses to be given synchronously and rapidly, while soft real-time systems have a more generous time window of minutes or more for response. Offline systems handle responses through batch or asynchronous processing. Hard real-time distributed systems have the most stringent reliability requirements. The most difficult challenges with distributed systems are for the hard real-time distributed systems, also known as request/reply services. What makes them difficult is that requests arrive unpredictably, and responses must be given rapidly (for example, the customer is actively waiting for the response). Examples include front-end web servers, the order pipeline, credit card transactions, every AWS API, and telephony.

Implement loosely coupled dependencies: Dependencies such as queuing systems, streaming systems, workflows, and load balancers are loosely coupled. Loose coupling helps isolate behavior of a component from other components that depend on it, increasing resiliency and agility. If changes to one component force other components that rely on it to also change, then they are tightly coupled. Loose coupling breaks this dependency, so that dependent components only need to know the versioned and published interface. Implementing loose coupling between dependencies isolates a failure in one from impacting another.
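The queue-mediated, loosely coupled interaction can be sketched as follows. This sketch uses Python's in-process queue.Queue purely as a stand-in for a durable intermediary such as an Amazon SQS queue, and the producer/consumer names are hypothetical.

```python
import queue

# Loose coupling through an intermediate queue: the producer knows only the
# queue's interface, not the consumer. A slow or temporarily failed consumer
# does not prevent the producer from accepting and registering new work.

events = queue.Queue()

def producer(order_ids):
    # Registers each request and returns immediately; an acknowledgment
    # that the request is durably recorded is enough for the caller.
    for order_id in order_ids:
        events.put(order_id)
    return "accepted"

def consumer():
    # Drains the queue on its own schedule, independently of the producer.
    processed = []
    while not events.empty():
        processed.append(events.get())
    return processed

status = producer(["order-1", "order-2", "order-3"])
handled = consumer()
```

In a real workload the queue would be durable and the consumer would run as a separate process or service, but the decoupling property is the same: either side can be changed, scaled, or restarted without the other needing to know.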
Loose coupling enables you to add additional code or features to a component, while minimizing risk to components that depend on it. Also, scalability is improved, as you can scale out or even change the underlying implementation of the dependency.

To further improve resiliency through loose coupling, make component interactions asynchronous where possible. This model is suitable for any interaction that does not need an immediate response, and where an acknowledgment that a request has been registered will suffice. It involves one component that generates events, and another that consumes them. The two components do not integrate through direct point-to-point interaction, but usually through an intermediate durable storage layer, such as an SQS queue, a streaming data platform such as Amazon Kinesis, or AWS Step Functions.

Figure 4: Dependencies such as queuing systems and load balancers are loosely coupled

Amazon SQS queues and Elastic Load Balancers are just two ways to add an intermediate layer for loose coupling. Event-driven architectures can also be built in the AWS Cloud using Amazon EventBridge, which can abstract clients (event producers) from the services they rely on (event consumers). Amazon Simple Notification Service (Amazon SNS) is an effective solution when you need high-throughput, push-based, many-to-many messaging. Using Amazon SNS topics, your publisher systems can fan out messages to a large number of subscriber endpoints for parallel processing.

While queues offer several advantages, in most hard real-time systems, requests older than a threshold time (often seconds) should be considered stale (the client has given up and is no longer waiting for a response), and not processed. This way, more recent (and likely still valid) requests can be processed instead.

Make all responses idempotent: An idempotent service promises that each request is completed exactly once, such that making multiple identical requests has the same
effect as making a single request. An idempotent service makes it easier for a client to implement retries without fear that a request will be erroneously processed multiple times. To do this, clients can issue API requests with an idempotency token; the same token is used whenever the request is repeated. An idempotent service API uses the token to return a response identical to the response that was returned the first time that the request was completed. In a distributed system, it's easy to perform an action at most once (client makes only one request), or at least once (keep requesting until client gets confirmation of success). But it's hard to guarantee an action is idempotent, which means it's performed exactly once, such that making multiple identical requests has the same effect as making a single request. Using idempotency tokens in APIs, services can receive a mutating request one or more times without creating duplicate records or side effects.

Do constant work: Systems can fail when there are large, rapid changes in load. For example, a health check system that monitors the health of thousands of servers should send the same size payload (a full snapshot of the current state) each time. Whether no servers are failing, or all of them are, the health check system is doing constant work with no large, rapid changes. For example, if the health check system is monitoring 100,000 servers, the load on it is nominal under the normally light server failure rate. However, if a major event makes half of those servers unhealthy, then the health check system would be overwhelmed trying to update notification systems and communicate state to its clients. So instead, the health check system should send the full snapshot of the current state each time. 100,000 server health states, each represented by a bit, would only be a 12.5 KB payload. Whether no servers are failing, or all of them are, the health check system is doing constant work, and large, rapid changes are not a threat to the system's stability. This is
actually how the control plane is designed for Amazon Route 53 health checks.

Resources

Videos
• AWS re:Invent 2019: Moving to event-driven architectures (SVS308)
• AWS re:Invent 2018: Close Loops & Opening Minds: How to Take Control of Systems, Big & Small (ARC337) (includes loose coupling, constant work, static stability)
• AWS New York Summit 2019: Intro to Event-driven Architectures and Amazon EventBridge (MAD205) (discusses EventBridge, SQS, SNS)

Documentation
• AWS Services That Publish CloudWatch Metrics
• What Is Amazon Simple Queue Service?
• Amazon EC2: Ensuring Idempotency
• The Amazon Builders' Library: Challenges with distributed systems
• Centralized Logging solution
• AWS Marketplace: products that can be used for monitoring and alerting
• APN Partner: partners that can help you with monitoring and logging

Reliability Pillar, AWS Well-Architected Framework

Design Interactions in a Distributed System to Mitigate or Withstand Failures

Distributed systems rely on communications networks to interconnect components (such as servers or services). Your workload must operate reliably despite data loss or latency over these networks. Components of the distributed system must operate in a way that does not negatively impact other components or the workload. These best practices enable workloads to withstand stresses or failures, more quickly recover from them, and mitigate the impact of such impairments. The result is improved mean time to recovery (MTTR). These best practices also prevent failures and improve mean time between failures (MTBF).

Implement graceful degradation to transform applicable hard dependencies into soft dependencies: When a component's dependencies are unhealthy, the component itself can still function, although in a degraded manner. For example, when a dependency call fails, instead use a predetermined static response. Consider a service B that is called by service A, and in turn calls
service C.

Figure 5: Service C fails when called from service B. Service B returns a degraded response to service A.

When service B calls service C, it receives an error or timeout from it. Service B, lacking a response from service C (and the data it contains), instead returns what it can: the last cached good value, or a predetermined static response substituted for what it would have received from service C. It can then return a degraded response to its caller, service A. Without this static response, the failure in service C would cascade through service B to service A, resulting in a loss of availability. As per the multiplicative factor in the availability equation for hard dependencies (see Calculating availability with hard dependencies (p. 5)), any drop in the availability of C seriously impacts the effective availability of B. By returning the static response, service B mitigates the failure in C and, although degraded, makes service C's availability look like 100% (assuming service B reliably returns the static response under error conditions). Note that the static response is a simple alternative to returning an error, and is not an attempt to recompute the response using different means. Such attempts at a completely different mechanism to achieve the same result are called fallback behavior, and are an antipattern to be avoided.

Another example of graceful degradation is the circuit breaker pattern. Retry strategies should be used when the failure is transient. When this is not the case, and the operation is likely to fail, the circuit breaker pattern prevents the client from performing a request that is likely to fail. When requests are being processed normally, the circuit breaker is closed and requests flow through. When the remote system begins returning errors or exhibits high latency, the circuit breaker opens and the dependency is ignored, or results are replaced with more simply obtained but less comprehensive responses (which
might simply be a response cache). Periodically, the system attempts to call the dependency to determine if it has recovered. When that occurs, the circuit breaker is closed.

Figure 6: Circuit breaker showing closed and open states

In addition to the closed and open states shown in the diagram, after a configurable period of time in the open state the circuit breaker can transition to half-open. In this state, it periodically attempts to call the service at a much lower rate than normal. This probe is used to check the health of the service. After a number of successes in the half-open state, the circuit breaker transitions to closed, and normal requests resume.

Throttle requests: This is a mitigation pattern to respond to an unexpected increase in demand. Some requests are honored, but those over a defined limit are rejected and return a message indicating they have been throttled. The expectation on clients is that they will back off and abandon the request, or try again at a slower rate. Your services should be designed to a known capacity of requests that each node or cell can process. This can be established through load testing. You then need to track the arrival rate of requests, and if the temporary arrival rate exceeds this limit, the appropriate response is to signal that the request has been throttled. This allows the user to retry, potentially to a different node/cell that might have available capacity. Amazon API Gateway provides methods for throttling requests. Amazon SQS and Amazon Kinesis can buffer requests, smoothing out the request rate and alleviating the need for throttling for requests that can be addressed asynchronously.

Control and limit retry calls: Use exponential backoff to retry after progressively longer intervals. Introduce jitter to randomize those retry intervals, and limit the maximum number of retries. Typical components in a distributed
software system include servers, load balancers, databases, and DNS servers. In operation, and subject to failures, any of these can start generating errors. The default technique for dealing with errors is to implement retries on the client side. This technique increases the reliability and availability of the application. However, at scale, if clients attempt to retry the failed operation as soon as an error occurs, the network can quickly become saturated with new and retried requests, each competing for network bandwidth. This can result in a retry storm, which will reduce the availability of the service. This pattern might continue until a full system failure occurs.

To avoid such scenarios, backoff algorithms such as the common exponential backoff should be used. Exponential backoff algorithms gradually decrease the rate at which retries are performed, thus avoiding network congestion. Many SDKs and software libraries, including those from AWS, implement a version of these algorithms. However, never assume a backoff algorithm exists; always test and verify this to be the case. Simple backoff alone is not enough, because in distributed systems all clients may back off simultaneously, creating clusters of retry calls. Marc Brooker, in his blog post Exponential Backoff And Jitter, explains how to modify the wait() function in the exponential backoff to prevent clusters of retry calls. The solution is to add jitter in the wait() function. To avoid retrying for too long, implementations should cap the backoff to a maximum value. Finally, it's important to configure a maximum number of retries or elapsed time, after which retrying will simply fail. AWS SDKs implement this by default, and it can be configured. For services lower in the stack, a maximum retry limit of zero or one will limit risk, yet still be effective, as retries are delegated to services higher in the stack.
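The capped exponential backoff with jitter described above can be sketched as follows. This is a minimal illustration of the "full jitter" variant from Brooker's post; the function names and the TransientError type are illustrative stand-ins, not part of any AWS SDK:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a retryable failure (timeout, throttle, 5xx)."""

def backoff_with_jitter(attempt, base=0.1, cap=5.0):
    """Full jitter: sleep a uniformly random interval between zero and the
    capped exponential delay, so clients do not retry in synchronized waves."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def call_with_retries(operation, max_retries=3):
    """Retry transient failures with capped exponential backoff plus jitter,
    and give up (re-raise) once the retry budget is exhausted."""
    for attempt in range(max_retries + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_retries:
                raise
            time.sleep(backoff_with_jitter(attempt))
```

Setting max_retries to zero or one for services lower in the stack matches the guidance above: retries are then delegated to the services higher in the stack.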
Fail fast and limit queues: If the workload is unable to respond successfully to a request, then fail fast. This allows the releasing of resources associated with a request, and permits the service to recover if it's running out of resources. If the workload is able to respond successfully, but the rate of requests is too high, then use a queue to buffer requests instead. However, do not allow long queues that can result in serving stale requests that the client has already given up on. This best practice applies to the server side, or receiver, of the request. Be aware that queues can be created at multiple levels of a system, and can seriously impede the ability to quickly recover, as older stale requests (that no longer need a response) are processed before newer requests in need of a response. Be aware of places where queues exist; they often hide in workflows or in work that's recorded to a database.

Set client timeouts: Set timeouts appropriately, verify them systematically, and do not rely on default values, as they are generally set too high. This best practice applies to the client side, or sender, of the request. Set both a connection timeout and a request timeout on any remote call, and generally on any call across processes. Many frameworks offer built-in timeout capabilities, but be careful, as many have default values that are infinite or too high. A value that is too high reduces the usefulness of the timeout, because resources continue to be consumed while the client waits for the timeout to occur. A value that is too low can generate increased traffic on the backend and increased latency, because too many requests are retried. In some cases, this can lead to complete outages because all requests are being retried. To learn more about how Amazon uses timeouts, retries, and backoff with jitter, refer to the Builders' Library: Timeouts, retries, and backoff with jitter.

Make services stateless where possible: Services should either not require state, or should offload state such that between different
client requests, there is no dependence on locally stored data on disk or in memory. This enables servers to be replaced at will without causing an availability impact. Amazon ElastiCache or Amazon DynamoDB are good destinations for offloaded state.

Figure 7: In this stateless web application, session state is offloaded to Amazon ElastiCache

When users or services interact with an application, they often perform a series of interactions that form a session. A session is unique data for users that persists between requests while they use the application. A stateless application is an application that does not need knowledge of previous interactions and does not store session information. Once designed to be stateless, you can then use serverless compute platforms, such as AWS Lambda or AWS Fargate. In addition to server replacement, another benefit of stateless applications is that they can scale horizontally, because any of the available compute resources (such as EC2 instances and AWS Lambda functions) can service any request.

Implement emergency levers: These are rapid processes that may mitigate availability impact on your workload. They can be operated in the absence of a root cause. An ideal emergency lever reduces the cognitive burden on the resolvers to zero by providing fully deterministic activation and deactivation criteria. Example levers include blocking all robot traffic or serving a static response. Levers are often manual, but they can also be automated.

Tips for implementing and using emergency levers:
• When levers are activated, do LESS, not more
• Keep it simple; avoid bimodal behavior
• Test your levers periodically

These are examples of actions that are NOT emergency levers:
• Adding capacity
• Calling up service owners of clients that depend on your service and asking them to reduce calls
• Making a change to code and releasing it
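An emergency lever of the serve-a-static-response kind can be sketched in a few lines. All names and the payload here are hypothetical illustrations, not an AWS API: the lever has explicit, deterministic activate and deactivate operations, and while it is thrown the handler does less, returning a predetermined static response instead of calling its dependency at all:

```python
# Predetermined static response served while the lever is active.
# The payload and all names below are illustrative.
STATIC_RESPONSE = {"status": "degraded", "recommendations": []}

class EmergencyLever:
    """A simple on/off switch with deterministic activation criteria,
    operable by a resolver without knowing the root cause."""
    def __init__(self):
        self.active = False

    def activate(self):
        self.active = True

    def deactivate(self):
        self.active = False

def handle_request(lever, fetch_recommendations):
    """Serve the full response normally; while the lever is active, skip the
    dependency call entirely and serve the static response instead."""
    if lever.active:
        return STATIC_RESPONSE
    return {"status": "ok", "recommendations": fetch_recommendations()}
```

Because activation only ever removes work, the lever avoids the bimodal behavior warned against above: the degraded path is strictly simpler than the normal one.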
Resources

Video
• Retry, backoff, and jitter: AWS re:Invent 2019: Introducing The Amazon Builders' Library (DOP328)

Documentation
• Error Retries and Exponential Backoff in AWS
• Amazon API Gateway: Throttle API Requests for Better Throughput
• The Amazon Builders' Library: Timeouts, retries, and backoff with jitter
• The Amazon Builders' Library: Avoiding fallback in distributed systems
• The Amazon Builders' Library: Avoiding insurmountable queue backlogs
• The Amazon Builders' Library: Caching challenges and strategies

Labs
• Well-Architected lab: Level 300: Implementing Health Checks and Managing Dependencies to Improve Reliability

External Links
• CircuitBreaker (summarizes Circuit Breaker from the "Release It!" book)

Books
• Michael Nygard, "Release It! Design and Deploy Production-Ready Software"

Change Management

Changes to your workload or its environment must be anticipated and accommodated to achieve reliable operation of the workload. Changes include those imposed on your workload, such as spikes in demand, as well as those from within, such as feature deployments and security patches. The following sections explain the best practices for change management.

Topics
• Monitor Workload Resources (p. 25)
• Design your Workload to Adapt to Changes in Demand (p. 28)
• Implement Change (p. 30)

Monitor Workload Resources

Logs and metrics are powerful tools to gain insight into the health of your workload. You can configure your workload to monitor logs and metrics and send notifications when thresholds are crossed or significant events occur. Monitoring enables your workload to recognize when low-performance thresholds are crossed or failures occur, so it can recover automatically in response. Monitoring is critical to ensure that you are meeting your availability requirements. Your monitoring needs to effectively detect failures. The worst
failure mode is the "silent" failure, where the functionality is no longer working but there is no way to detect it except indirectly: your customers know before you do. Alerting when you have problems is one of the primary reasons you monitor. Your alerting should be decoupled from your systems as much as possible; if your service interruption removes your ability to alert, you will have a longer period of interruption.

At AWS, we instrument our applications at multiple levels. We record latency, error rates, and availability for each request, for all dependencies, and for key operations within the process. We record metrics of successful operation as well. This allows us to see impending problems before they happen. We don't just consider average latency; we focus even more closely on latency outliers, like the 99.9th and 99.99th percentiles. This is because if one request out of 1,000 or 10,000 is slow, that is still a poor experience. Also, although your average may be acceptable, if one in 100 of your requests causes extreme latency, it will eventually become a problem as your traffic grows.

Monitoring at AWS consists of four distinct phases:
1. Generation — Monitor all components for the workload
2. Aggregation — Define and calculate metrics
3. Real-time processing and alarming — Send notifications and automate responses
4. Storage and Analytics

Generation — Monitor all components for the workload: Monitor the components of the workload with Amazon CloudWatch or third-party tools, and monitor AWS services with Personal Health Dashboard. All components of your workload should be monitored, including the front end, business logic, and storage tiers. Define key metrics, and how to extract them from logs if necessary, and create thresholds for corresponding alarm events.

Monitoring in the cloud offers new opportunities. Most cloud providers have developed customizable hooks and insights into multiple layers of your workload.
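The focus on outlier percentiles can be made concrete with a small sketch: a nearest-rank percentile computed over in-memory latency samples. This is illustrative arithmetic only; a real workload would compute percentiles in a metrics service such as CloudWatch rather than by hand:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample value such that at
    least p percent of all samples are less than or equal to it."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# One in 100 requests is extremely slow: the median looks fine,
# but the 99.9th percentile exposes the poor tail experience.
latencies_ms = [10] * 9900 + [900] * 100
p50 = percentile(latencies_ms, 50)     # typical request looks healthy
p999 = percentile(latencies_ms, 99.9)  # the outliers surface here
```

This is exactly the "average may be acceptable" trap described above: p50 stays at the nominal latency while p99.9 reveals the slow tail that will grow into a problem with traffic.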
AWS makes an abundance of monitoring and log information available for consumption, which can be used to define change-in-demand processes. The following is just a partial list of services and features that generate log and metric data:
• Amazon ECS, Amazon EC2, Elastic Load Balancing, AWS Auto Scaling, and Amazon EMR publish metrics for CPU, network I/O, and disk I/O averages
• Amazon CloudWatch Logs can be enabled for Amazon Simple Storage Service (Amazon S3), Classic Load Balancers, and Application Load Balancers
• VPC Flow Logs can be enabled to analyze network traffic into and out of a VPC
• AWS CloudTrail logs AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, and command line tools
• Amazon EventBridge delivers a real-time stream of system events that describes changes in AWS services
• AWS provides tooling to collect operating system-level logs and stream them into CloudWatch Logs
• Custom Amazon CloudWatch metrics can be used for metrics of any dimension
• Amazon ECS and AWS Lambda stream log data to CloudWatch Logs
• Amazon Machine Learning (Amazon ML), Amazon Rekognition, Amazon Lex, and Amazon Polly provide metrics for successful and unsuccessful requests
• AWS IoT provides metrics for the number of rule executions, as well as specific success and failure metrics around the rules
• Amazon API Gateway provides metrics for the number of requests, erroneous requests, and latency for your APIs
• Personal Health Dashboard gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources

In addition, monitor all of your external endpoints from remote locations to ensure that they are independent of your base implementation. This active monitoring can be done with synthetic transactions (sometimes referred to as "user canaries", but not to be confused with canary deployments), which periodically execute some number of common tasks performed by consumers of the application. Keep these short in duration, and be
sure not to overload your workload during testing. Amazon CloudWatch Synthetics enables you to create canaries to monitor your endpoints and APIs. You can also combine the synthetic canary client nodes with the AWS X-Ray console to pinpoint which synthetic canaries are experiencing issues with errors, faults, or throttling rates for the selected time frame.

Aggregation — Define and calculate metrics: Store log data, and apply filters where necessary to calculate metrics, such as counts of a specific log event, or latency calculated from log event timestamps. Amazon CloudWatch and Amazon S3 serve as the primary aggregation and storage layers. For some services, like AWS Auto Scaling and Elastic Load Balancing, default metrics are provided "out of the box" for CPU load or average request latency across a cluster or instance. For streaming services, like VPC Flow Logs and AWS CloudTrail, event data is forwarded to CloudWatch Logs, and you need to define and apply metrics filters to extract metrics from the event data. This gives you time series data, which can serve as inputs to CloudWatch alarms that you define to trigger alerts.

Real-time processing and alarming — Send notifications: Organizations that need to know receive notifications when significant events occur. Alerts can also be sent to Amazon Simple Notification Service (Amazon SNS) topics, and then pushed to any number of subscribers. For example, Amazon SNS can forward alerts to an email alias so that technical staff can respond.

Real-time processing and alarming — Automate responses: Use automation to take action when an event is detected, for example, to replace failed components. Alerts can trigger AWS Auto Scaling events, so that clusters react to changes in demand. Alerts can be sent to Amazon Simple Queue Service (Amazon SQS), which can serve as an integration point for third-party ticket systems. AWS Lambda can also subscribe to alerts, providing users an
asynchronous serverless model that reacts to change dynamically. AWS Config continuously monitors and records your AWS resource configurations, and can trigger AWS Systems Manager Automation to remediate issues.

Storage and Analytics: Collect log files and metrics histories, and analyze these for broader trends and workload insights. Amazon CloudWatch Logs Insights supports a simple yet powerful query language that you can use to analyze log data. Amazon CloudWatch Logs also supports subscriptions that allow data to flow seamlessly to Amazon S3, where you can use Amazon Athena to query the data. Athena supports queries on a large array of formats; for more information, see Supported SerDes and Data Formats in the Amazon Athena User Guide. For analysis of huge log file sets, you can run an Amazon EMR cluster to perform petabyte-scale analyses.

There are a number of tools provided by partners and third parties that allow for aggregation, processing, storage, and analytics. These tools include New Relic, Splunk, Loggly, Logstash, CloudHealth, and Nagios. However, outside generation of system and application logs is unique to each cloud provider, and often unique to each service.

An often-overlooked part of the monitoring process is data management. You need to determine the retention requirements for monitoring data, and then apply lifecycle policies accordingly. Amazon S3 supports lifecycle management at the S3 bucket level, and this lifecycle management can be applied differently to different paths in the bucket. Toward the end of the lifecycle, you can transition data to Amazon S3 Glacier for long-term storage, and then expire it after the end of the retention period is reached. The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead.

Conduct reviews regularly: Frequently review how workload monitoring is implemented, and update it based on significant events and changes. Effective
monitoring is driven by key business metrics. Ensure these metrics are accommodated in your workload as business priorities change. Auditing your monitoring helps ensure that you know when an application is meeting its availability goals. Root cause analysis requires the ability to discover what happened when failures occur. AWS provides services that allow you to track the state of your services during an incident:
• Amazon CloudWatch Logs: You can store your logs in this service and inspect their contents
• Amazon CloudWatch Logs Insights: A fully managed service that enables you to analyze massive logs in seconds, with fast, interactive queries and visualizations
• AWS Config: You can see what AWS infrastructure was in use at different points in time
• AWS CloudTrail: You can see which AWS APIs were invoked, at what time, and by what principal

At AWS, we conduct a weekly meeting to review operational performance and to share learnings between teams. Because there are so many teams in AWS, we created The Wheel to randomly pick a workload to review. Establishing a regular cadence for operational performance reviews and knowledge sharing enhances your ability to achieve higher performance from your operational teams.

Monitor end-to-end tracing of requests through your system: Use AWS X-Ray or third-party tools so that developers can more easily analyze and debug distributed systems to understand how their applications and their underlying services are performing.

Resources

Documentation
• Using Amazon CloudWatch Metrics
• Using Canaries (Amazon CloudWatch Synthetics)
• Amazon CloudWatch Logs Insights Sample Queries
• AWS Systems Manager Automation
• What is AWS X-Ray?
• Debugging with Amazon CloudWatch Synthetics and AWS X-Ray
• The Amazon Builders' Library: Instrumenting distributed systems for operational visibility

Design your Workload to Adapt to Changes in Demand

A scalable workload provides elasticity to add or remove resources automatically, so that they closely match the current demand at any given point in time.

Use automation when obtaining or scaling resources: When replacing impaired resources or scaling your workload, automate the process by using managed AWS services, such as Amazon S3 and AWS Auto Scaling. You can also use third-party tools and AWS SDKs to automate scaling. Managed AWS services include Amazon S3, Amazon CloudFront, AWS Auto Scaling, AWS Lambda, Amazon DynamoDB, AWS Fargate, and Amazon Route 53. AWS Auto Scaling lets you detect and replace impaired instances. It also lets you build scaling plans for resources, including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas. When scaling EC2 instances, or Amazon ECS containers hosted on EC2 instances, ensure that you use multiple Availability Zones (preferably at least three), and add or remove capacity to maintain balance across these Availability Zones. AWS Lambda functions scale automatically: every time an event notification is received for your function, AWS Lambda quickly locates free capacity within its compute fleet and runs your code, up to the allocated concurrency. You need to ensure that the necessary concurrency is configured on the specific Lambda function and in your Service Quotas. Amazon S3 automatically scales to handle high request rates. For example, your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. There are no limits to the number of prefixes in a bucket. You can increase your read or write performance by parallelizing reads. For example, if you create 10 prefixes in an Amazon S3 bucket to parallelize reads, you could scale your
read performance to 55,000 read requests per second. Configure and use Amazon CloudFront or a trusted content delivery network (CDN). A CDN can provide faster end-user response times, and can serve requests for content that might otherwise cause unnecessary scaling of your workloads.

Obtain resources upon detection of impairment to a workload: Scale resources reactively when necessary, if availability is impacted, so as to restore workload availability. You first must configure health checks, and the criteria on these checks to indicate when availability is impacted by lack of resources. Then, either notify the appropriate personnel to manually scale the resource, or trigger automation to automatically scale it.

Scale can be manually adjusted for your workload; for example, changing the number of EC2 instances in an Auto Scaling group, or modifying the throughput of a DynamoDB table, can be done through the console or AWS CLI. However, automation should be used whenever possible (see Use automation when obtaining or scaling resources).

Obtain resources upon detection that more resources are needed for a workload: Scale resources proactively to meet demand and avoid availability impact. Many AWS services automatically scale to meet demand (see Use automation when obtaining or scaling resources). If using EC2 instances or Amazon ECS clusters, you can configure automatic scaling of these, based on usage metrics that correspond to demand for your workload. For Amazon EC2, average CPU utilization, load balancer request count, or network bandwidth can be used to scale out (or scale in) EC2 instances. For Amazon ECS, average CPU utilization, load balancer request count, and memory utilization can be used to scale out (or scale in) ECS tasks. Using Target Auto Scaling on AWS, the autoscaler acts like a household thermostat, adding or removing resources to maintain the target value (for example, 70% CPU utilization) that you
specify. AWS Auto Scaling can also do Predictive Auto Scaling, which uses machine learning to analyze each resource's historical workload and regularly forecasts the future load for the next two days.

Little's Law helps calculate how many instances of compute (EC2 instances, concurrent Lambda functions, and so on) you need:

L = λW

L = number of instances (or mean concurrency in the system)
λ = mean rate at which requests arrive (req/sec)
W = mean time that each request spends in the system (sec)

For example, at 100 rps, if each request takes 0.5 seconds to process, you will need 50 instances to keep up with demand.

Load test your workload: Adopt a load testing methodology to measure whether scaling activity will meet workload requirements. It's important to perform sustained load testing. Load tests should discover the breaking point and test the performance of your workload. AWS makes it easy to set up temporary testing environments that model the scale of your production workload. In the cloud, you can create a production-scale test environment on demand, complete your testing, and then decommission the resources. Because you only pay for the test environment when it's running, you can simulate your live environment for a fraction of the cost of testing on premises. Load testing in production should also be considered as part of game days, where the production system is stressed during hours of lower customer usage, with all personnel on hand to interpret results and address any problems that arise.

Resources

Documentation
• AWS Auto Scaling: How Scaling Plans Work
• What Is Amazon EC2 Auto Scaling?
• Managing Throughput Capacity Automatically with DynamoDB Auto Scaling
• What is Amazon CloudFront?
• Distributed Load Testing on AWS: simulate thousands of connected users
• AWS Marketplace: products that can be used with auto scaling
• APN Partner: partners that can help you create automated compute solutions

External Links
• Telling Stories About Little's Law

Implement Change

Controlled changes are necessary to deploy new functionality and to ensure that the workloads and the operating environment are running known, properly patched software. If these changes are uncontrolled, it is difficult to predict their effect, or to address issues that arise because of them.

Use runbooks for standard activities such as deployment: Runbooks are the predefined steps to achieve specific outcomes. Use runbooks to perform standard activities, whether done manually or automatically. Examples include deploying a workload, patching it, or making DNS modifications. For example, put processes in place to ensure rollback safety during deployments. Ensuring that you can roll back a deployment without any disruption for your customers is critical in making a service reliable. For runbook procedures, start with a valid, effective manual process, implement it in code, and trigger automated execution where appropriate. Even for sophisticated workloads that are highly automated, runbooks are still useful for running game days (p. 46) or meeting rigorous reporting and auditing requirements. Note that playbooks are used in response to specific incidents, and runbooks are used to achieve specific outcomes. Often, runbooks are for routine activities, while playbooks are used for responding to non-routine events.

Integrate functional testing as part of your deployment: Functional tests are run as part of automated deployment. If success criteria are not met, the pipeline is halted or rolled back. These tests are run in a pre-production environment, which is staged prior to production in the pipeline. Ideally, this is done as
part of a deployment pipeline.

Integrate resiliency testing as part of your deployment: Resiliency tests (as part of chaos engineering) are run as part of the automated deployment pipeline, in a pre-production environment. These tests are staged and run in the pipeline prior to production. They should also be run in production, but as part of game days (p. 46).

Deploy using immutable infrastructure: This is a model that mandates that no updates, security patches, or configuration changes happen in place on production systems. When a change is needed, the architecture is built onto new infrastructure and deployed into production. The most common implementation of the immutable infrastructure paradigm is the immutable server. This means that if a server needs an update or a fix, new servers are deployed instead of updating the ones already in use. So, instead of logging into the server via SSH and updating the software version, every change in the application starts with a software push to the code repository, for example, git push. Since changes are not allowed in immutable infrastructure, you can be sure about the state of the deployed system. Immutable infrastructures are inherently more consistent, reliable, and predictable, and they simplify many aspects of software development and operations.

Use a canary or blue/green deployment when deploying applications in immutable infrastructures. Canary deployment is the practice of directing a small number of your customers to the new version, usually running on a single service instance (the canary). You then deeply scrutinize any behavior changes or errors that are generated. You can remove traffic from the canary if you encounter critical problems, and send the users back to the previous version. If the deployment is successful, you can continue to deploy at your desired velocity, while monitoring the changes for errors, until you are fully deployed. AWS CodeDeploy can be
configured with a deployment configuration that will enable a canary deployment Blue/green deployment is similar to the canary deployment except that a full fleet of the application is deployed in parallel You alternate your deployments across the two stacks (blue and green) Once again you can send traffic to the new version and fall back to the old version if you see problems with the deployment Commonly all traffic is switched at once however you can also use fractions of your traffic to each version to dial up the adoption of the new version using the weighted DNS routing capabilities of Amazon Route 53 AWS CodeDeploy and AWS Elastic Beanstalk can be configured with a deployment configuration that will enable a blue/green deployment Figure 8: Blue/green deployment with AWS Elastic Beanstalk and Amazon Route 53 Benefits of immutable infrastructure: •Reduction in configuration drifts: By frequently replacing servers from a base known and version controlled configuration the infrastructure is reset to a known state avoiding configuration drifts •Simplified deployments: Deployments are simplified because they don’t need to support upgrades Upgrades are just new deployments •Reliable atomic deployments: Deployments either complete successfully or nothing changes It gives more trust in the deployment process •Safer deployments with fast rollback and recovery processes: Deployments are safer because the previous working version is not changed You can roll back to it if errors are detected •Consistent testing and debugging environments: Since all servers use the same image there are no differences between environments One build is deployed to multiple environments It also prevents inconsistent environments and simplifies testing and debugging •Increased scalability: Since servers use a base image are consistent and repeatable automatic scaling is trivial •Simplified toolchain: The toolchain is simplified since you can get rid of configuration management tools managing 
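The fractional traffic shifting described above can be sketched as a weighted choice between the two stacks; the function name, weights, and request counts are illustrative, not the Route 53 API:

```python
import random

# Hypothetical sketch of weighted traffic splitting between a "blue" (current)
# and "green" (new) stack, similar in spirit to Route 53 weighted routing.
def route_request(green_weight, rng):
    """Return which stack serves a request, given the green stack's weight (0.0-1.0)."""
    return "green" if rng.random() < green_weight else "blue"

rng = random.Random(42)  # fixed seed so the demonstration is repeatable
sent_to_green = sum(route_request(0.1, rng) == "green" for _ in range(10_000))
# Roughly 10% of requests reach the new version while it is being validated;
# dialing the weight up gradually increases adoption of the new version.
print(800 <= sent_to_green <= 1200)
```

Dialing the weight to 0 is the fallback path: all traffic returns to the blue stack without redeploying anything.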
No extra tools or agents are installed on servers. Changes are made to the base image, tested, and rolled out.
•Increased security: By denying all changes to servers, you can disable SSH on instances and remove keys. This reduces the attack vector, improving your organization's security posture.

Deploy changes with automation: Deployments and patching are automated to eliminate negative impact. Making changes to production systems is one of the largest risk areas for many organizations. We consider deployments a first-class problem to be solved alongside the business problems that the software addresses. Today, this means the use of automation wherever practical in operations, including testing and deploying changes, adding or removing capacity, and migrating data. AWS CodePipeline lets you manage the steps required to release your workload. This includes a deployment state using AWS CodeDeploy to automate deployment of application code to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services.

Recommendation
Although conventional wisdom suggests that you keep humans in the loop for the most difficult operational procedures, we suggest that you automate the most difficult procedures for that very reason.

Additional deployment patterns to minimize risk:

Feature flags (also known as feature toggles) are configuration options on an application. You can deploy the software with a feature turned off, so that your customers don't see the feature. You can then turn on the feature, as you would for a canary deployment, or you can set the change pace to 100% to see the effect. If the deployment has problems, you can simply turn the feature back off without rolling back.

Fault-isolated zonal deployment: One of the most important rules AWS has established for its own deployments is to avoid touching multiple Availability Zones within a Region at the same time.
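The feature-flag pattern described above can be sketched as follows; the class, flag name, and hash-based bucketing scheme are hypothetical illustrations, not a specific feature-flag library:

```python
import hashlib

# Hypothetical sketch of a feature flag with a gradual rollout percentage.
class FeatureFlag:
    def __init__(self, name, rollout_percent=0):
        self.name = name
        self.rollout_percent = rollout_percent  # 0 = off, 100 = fully on

    def enabled_for(self, customer_id):
        """Deterministically bucket each customer (0-99) so the same customers
        keep seeing the feature as the rollout percentage is dialed up."""
        digest = hashlib.sha256(f"{self.name}:{customer_id}".encode()).digest()
        bucket = digest[0] * 100 // 256
        return bucket < self.rollout_percent

flag = FeatureFlag("new-checkout", rollout_percent=10)  # ~10% of customers
# If problems appear, turning the feature off is a config change, not a rollback:
flag.rollout_percent = 0
print(any(flag.enabled_for(c) for c in range(1000)))  # False: feature is dark
```

Because the deployed code already contains both paths, disabling the feature takes effect immediately, with no redeployment.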
This is critical to ensuring that Availability Zones are independent for purposes of our availability calculations. We recommend that you use similar considerations in your deployments.

Operational Readiness Reviews (ORRs): AWS finds it useful to perform operational readiness reviews that evaluate the completeness of the testing, the ability to monitor, and, importantly, the ability to audit the application's performance against its SLAs and provide data in the event of an interruption or other operational anomaly. A formal ORR is conducted prior to initial production deployment. AWS will repeat ORRs periodically (once per year, or before critical performance periods) to ensure that there has not been "drift" from operational expectations. For more information on operational readiness, see the Operational Excellence pillar of the AWS Well-Architected Framework.

Recommendation
Conduct an Operational Readiness Review (ORR) for applications prior to initial production use, and periodically thereafter.

Resources

Videos
•AWS Summit 2019: CI/CD on AWS

Documentation
•What Is AWS CodePipeline?
•What Is CodeDeploy?
•Overview of a Blue/Green Deployment
•Deploying Serverless Applications Gradually
•The Amazon Builders' Library: Ensuring rollback safety during deployments
•The Amazon Builders' Library: Going faster with continuous delivery
•AWS Marketplace: products that can be used to automate your deployments
•APN Partner: partners that can help you create automated deployment solutions

Labs
•Well-Architected lab: Level 300: Testing for Resiliency of EC2, RDS, and S3

External Links
•CanaryRelease

Failure Management

"Failures are a given, and everything will eventually fail over time: from routers to hard disks, from operating systems to memory units corrupting TCP packets, from transient errors to permanent failures. This is a given, whether you are using the highest-quality hardware or lowest-cost components." – Werner Vogels, CTO, Amazon.com

Low-level hardware component failures are something to be dealt with every day in an on-premises data center. In the cloud, however, you should be protected against most of these types of failures. For example, Amazon EBS volumes are placed in a specific Availability Zone, where they are automatically replicated to protect you from the failure of a single component. All EBS volumes are designed for 99.999% availability. Amazon S3 objects are stored across a minimum of three Availability Zones, providing 99.999999999% durability of objects over a given year. Regardless of your cloud provider, there is the potential for failures to impact your workload. Therefore, you must take steps to implement resiliency if you need your workload to be reliable.

A prerequisite to applying the best practices discussed here is that you must ensure that the people designing, implementing, and operating your workloads are aware of the business objectives and the reliability goals to achieve them. These people must be aware of and trained for these reliability requirements. The following sections explain the best practices for managing failures to prevent impact on your workload.

Topics
•Back up Data (p 34)
•Use Fault Isolation to Protect Your Workload (p 36)
•Design your Workload to Withstand Component Failures (p 41)
•Test Reliability (p 44)
•Plan for Disaster Recovery (DR) (p 47)

Back up Data

Back up data, applications, and configuration to meet requirements for recovery time objectives (RTO) and recovery point objectives (RPO).

Identify and back up all data that needs to be backed up, or reproduce the data from sources: Amazon S3 can be used as a backup destination for multiple data sources. AWS services like Amazon EBS, Amazon RDS, and Amazon DynamoDB have built-in capabilities to create backups, or third-party backup software can be used. Alternatively, if the data can be reproduced from other sources to meet RPO, you might not require a backup. On-premises data can be backed up to the AWS Cloud using Amazon S3 buckets and AWS Storage Gateway. Backup data can be archived using Amazon S3 Glacier or S3 Glacier Deep Archive for affordable, non-time-sensitive cloud storage. If you have loaded data from Amazon S3 to a data warehouse (like Amazon Redshift) or a MapReduce cluster (like Amazon EMR) to do analysis on that data, this may be an example of data that can be reproduced from other sources. As long as the results of these analyses are either stored somewhere or reproducible, you would not suffer a data loss from a failure in the data warehouse or MapReduce cluster. Other examples of data that can be reproduced from sources include caches (like Amazon ElastiCache) and RDS read replicas.

Secure and encrypt backups: Control access using authentication and authorization, such as AWS Identity and Access Management (IAM), and detect data integrity compromise by using encryption. Amazon S3 supports several methods of encryption of your data at rest. Using server-side encryption, Amazon S3 accepts your objects as unencrypted data and then encrypts them before persisting them. Using client-side encryption, your workload application is responsible for encrypting the data before it is sent to S3. Both methods allow you to either use AWS Key Management Service (AWS KMS) to create and store the data key, or you may provide your own key (which you are then responsible for). Using AWS KMS, you can set policies using IAM on who can and cannot access your data keys and decrypted data. For Amazon RDS, if you have chosen to encrypt your databases, then your backups are encrypted also. DynamoDB backups are always encrypted.

Perform data backup automatically: Configure backups to be made automatically based on a periodic schedule, or by changes in the dataset. RDS instances, EBS volumes, DynamoDB tables, and S3 objects can all be configured for automatic backup. AWS Marketplace solutions or third-party solutions can also be used. Amazon Data Lifecycle Manager can be used to automate EBS snapshots. Amazon RDS and Amazon DynamoDB enable continuous backup with point-in-time recovery. Amazon S3 versioning, once enabled, is automatic. For a centralized view of your backup automation and history, AWS Backup provides a fully managed, policy-based backup solution. It centralizes and automates the backup of data across multiple AWS services in the cloud, as well as on premises using the AWS Storage Gateway. In addition to versioning, Amazon S3 features replication: the entire S3 bucket can be automatically replicated to another bucket in a different AWS Region.

Perform periodic recovery of the data to verify backup integrity and processes: Validate that your backup process implementation meets your recovery time objective (RTO) and recovery point objective (RPO) by performing a recovery test. Using AWS, you can stand up a testing environment and restore your backups there to assess RTO and RPO capabilities, and run tests on data content and integrity. Additionally, Amazon RDS and Amazon DynamoDB allow point-in-time recovery (PITR). Using continuous backup, you are able to restore your dataset to the state it was in at a specified date and time.
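The RPO side of such a recovery test can be sketched as simple arithmetic: the worst-case data loss is the gap between a failure and the most recent completed backup. The timestamps and the one-hour RPO target below are illustrative:

```python
from datetime import datetime, timedelta

# Sketch of checking a backup schedule against an RPO target (assumed values).
def data_loss_window(backup_times, failure_time):
    """Return the time elapsed since the last backup taken before the failure."""
    prior = [t for t in backup_times if t <= failure_time]
    if not prior:
        raise ValueError("no backup exists before the failure")
    return failure_time - max(prior)

backups = [datetime(2020, 1, 1, h) for h in (0, 6, 12, 18)]  # every 6 hours
loss = data_loss_window(backups, datetime(2020, 1, 1, 17, 30))
rpo = timedelta(hours=1)
print(loss)          # 5:30:00 since the 12:00 backup
print(loss <= rpo)   # False: a 6-hour schedule cannot meet a 1-hour RPO
```

The same check run against actual restore timestamps from a test environment validates RPO with real data rather than schedule assumptions.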
Resources

Videos
•AWS re:Invent 2019: Deep dive on AWS Backup ft Rackspace (STG341)

Documentation
•What Is AWS Backup?
•Amazon S3: Protecting Data Using Encryption
•Encryption for Backups in AWS
•On-demand backup and restore for DynamoDB
•EFS-to-EFS backup
•AWS Marketplace: products that can be used for backup
•APN Partner: partners that can help with backup

Labs
•Well-Architected lab: Level 200: Testing Backup and Restore of Data
•Well-Architected lab: Level 200: Implementing Bi-Directional Cross-Region Replication (CRR) for Amazon Simple Storage Service (Amazon S3)

Use Fault Isolation to Protect Your Workload

Fault-isolated boundaries limit the effect of a failure within a workload to a limited number of components. Components outside of the boundary are unaffected by the failure. Using multiple fault-isolated boundaries, you can limit the impact on your workload.

Deploy the workload to multiple locations: Distribute workload data and resources across multiple Availability Zones or, where necessary, across AWS Regions. These locations can be as diverse as required.

One of the bedrock principles for service design in AWS is the avoidance of single points of failure in underlying physical infrastructure. This motivates us to build software and systems that use multiple Availability Zones and are resilient to failure of a single zone. Similarly, systems are built to be resilient to failure of a single compute node, single storage volume, or single instance of a database. When building a system that relies on redundant components, it's important to ensure that the components operate independently and, in the case of AWS Regions, autonomously. The benefits achieved from theoretical availability calculations with redundant components are only valid if this holds true.

Availability Zones (AZs)

AWS Regions are composed of multiple Availability Zones that are designed to be independent of each other. Each Availability Zone is separated by a meaningful physical distance from other zones to avoid correlated failure scenarios due to environmental hazards like fires, floods, and tornadoes. Each Availability Zone also has independent physical infrastructure: dedicated connections to utility power, standalone backup power sources, independent mechanical services, and independent network connectivity within and beyond the Availability Zone. This design limits faults in any of these systems to just the one affected AZ. Despite being geographically separated, Availability Zones are located in the same regional area, which enables high-throughput, low-latency networking. The entire AWS Region (across all Availability Zones, consisting of multiple physically independent data centers) can be treated as a single logical deployment target for your workload, including the ability to synchronously replicate data (for example, between databases). This allows you to use Availability Zones in an active/active or active/standby configuration.

Availability Zones are independent, and therefore workload availability is increased when the workload is architected to use multiple zones. Some AWS services (including the Amazon EC2 instance data plane) are deployed as strictly zonal services, where they have shared fate with the Availability Zone they are in. Amazon EC2 instances in the other AZs will, however, be unaffected and continue to function. Similarly, if a failure in an Availability Zone causes an Amazon Aurora database to fail, a read-replica Aurora instance in an unaffected AZ can be automatically promoted to primary. Regional AWS services, like Amazon DynamoDB, on the other hand, internally use multiple Availability Zones in an active/active configuration to achieve the availability design goals for that service without you needing to configure AZ placement.

Figure 9: Multi-tier architecture deployed across three Availability Zones. Note that Amazon S3 and Amazon DynamoDB are always Multi-AZ automatically. The ELB also is deployed to all three zones.

While AWS control planes typically provide the ability to manage resources within the entire Region (multiple Availability Zones), certain control planes (including Amazon EC2 and Amazon EBS) have the ability to filter results to a single Availability Zone. When this is done, the request is processed only in the specified Availability Zone, reducing exposure to disruption in other Availability Zones. This AWS CLI example illustrates getting Amazon EC2 instance information from only the us-east-2c Availability Zone:

aws ec2 describe-instances --filters Name=availability-zone,Values=us-east-2c

AWS Local Zones

AWS Local Zones act similarly to Availability Zones within their respective AWS Region in that they can be selected as a placement location for zonal AWS resources like subnets and EC2 instances. What makes them special is that they are located not in the associated AWS Region itself, but near large population, industry, and IT centers where no AWS Region exists today. Yet they still retain a high-bandwidth, secure connection between local workloads in the Local Zone and those running in the AWS Region. You should use AWS Local Zones to deploy workloads closer to your users for low-latency requirements.

Amazon Global Edge Network

The Amazon Global Edge Network consists of edge locations in cities around the world. Amazon CloudFront uses this network to deliver content to end users with lower latency. AWS Global Accelerator enables you to create your workload endpoints in these edge locations to provide onboarding to the AWS global network close to your users. Amazon API Gateway enables edge-optimized API endpoints using a CloudFront distribution to facilitate client access through the closest edge location.

AWS Regions

AWS Regions are designed to be autonomous; therefore, to use a multi-Region approach you would deploy dedicated copies of services to each Region. A multi-Region approach is common for disaster recovery strategies, to meet recovery objectives when one-off, large-scale events occur. See Plan for Disaster Recovery (DR) (p 47) for more information on these strategies. Here, however, we focus instead on availability, which seeks to deliver a mean uptime objective over time. For high-availability objectives, a multi-Region architecture will generally be designed to be active/active, where each service copy (in its respective Region) is active (serving requests).

Recommendation
Availability goals for most workloads can be satisfied using a Multi-AZ strategy within a single AWS Region. Consider multi-Region architectures only when workloads have extreme availability requirements, or other business goals, that require a multi-Region architecture.

AWS provides customers the capabilities to operate services cross-Region. For example, AWS provides continuous, asynchronous data replication using Amazon Simple Storage Service (Amazon S3) Replication, Amazon RDS read replicas (including Aurora read replicas), and Amazon DynamoDB Global Tables. With continuous replication, versions of your data are available for immediate use in each of your active Regions.

Using AWS CloudFormation, you can define your infrastructure and deploy it consistently across AWS accounts and across AWS Regions. AWS CloudFormation StackSets extends this functionality by enabling you to create, update, or delete AWS CloudFormation stacks across multiple accounts and Regions with a single operation. For Amazon EC2 instance deployments, an AMI (Amazon Machine Image) is used to supply information such as hardware configuration and installed software. You can implement an Amazon EC2 Image Builder pipeline that creates the AMIs you need and copies these to your active Regions. This ensures that these "golden AMIs" have everything you need to deploy and scale out your workload in each new Region.

To route traffic, both Amazon Route 53 and AWS Global Accelerator enable the definition of policies that determine which users go to which active regional endpoint. With Global Accelerator, you set a traffic dial to control the percentage of traffic that is directed to each application endpoint. Route 53 supports this percentage approach, as well as multiple other available policies, including geoproximity- and latency-based ones. Global Accelerator automatically leverages the extensive network of AWS edge servers to onboard traffic to the AWS network backbone as soon as possible, resulting in lower request latencies.

All of these capabilities operate so as to preserve each Region's autonomy. There are very few exceptions to this approach, including our services that provide global edge delivery (such as Amazon CloudFront and Amazon Route 53), along with the control plane for the AWS Identity and Access Management (IAM) service. The vast majority of services operate entirely within a single Region.

On-premises data center

For workloads that run in an on-premises data center, architect a hybrid experience when possible. AWS Direct Connect provides a dedicated network connection from your premises to AWS, enabling you to run in both environments. Another option is to run AWS infrastructure and services on premises using AWS Outposts. AWS Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to your data center. The same hardware infrastructure used in the AWS Cloud is installed in your data center. AWS Outposts are then connected to the nearest AWS Region. You can then use AWS Outposts to support your workloads that have low-latency or local data processing requirements.

Automate recovery for components constrained to a single location: If components of the workload can only run in a single Availability Zone or on-premises data center, you must implement the capability to do a complete rebuild of the workload within defined recovery objectives. If the best practice to deploy the workload to multiple locations is not possible due to technological constraints, you must implement an alternate path to resiliency. You must automate the ability to recreate necessary infrastructure, redeploy applications, and recreate necessary data for these cases.

For example, Amazon EMR launches all nodes for a given cluster in the same Availability Zone, because running a cluster in the same zone improves performance of the job flows, as it provides a higher data access rate. If this component is required for workload resilience, then you must have a way to redeploy the cluster and its data. Also, for Amazon EMR, you should provision redundancy in ways other than using Multi-AZ. You can provision multiple master nodes. Using the EMR File System (EMRFS), data in EMR can be stored in Amazon S3, which in turn can be replicated across multiple Availability Zones or AWS Regions. Similarly, for Amazon Redshift, by default it provisions your cluster in a randomly selected Availability Zone within the AWS Region that you select. All the cluster nodes are provisioned in the same zone.

Use bulkhead architectures to limit scope of impact: Like the bulkheads on a ship, this pattern ensures that a failure is contained to a small subset of requests/users, so that the number of impaired requests is limited and most can continue without error. Bulkheads for data are often called partitions, while bulkheads for services are known as cells.

In a cell-based architecture, each cell is a complete, independent instance of the service, and has a fixed maximum size. As load increases, workloads grow by adding more cells. A partition key is used on incoming traffic to determine which cell will process the request. Any failure is contained to the single cell it occurs in, so that the number of impaired requests is limited, as other cells continue without error. It is important to identify the proper partition key to minimize cross-cell interactions and avoid the need to involve complex mapping services in each request. Services that require complex mapping end up merely shifting the problem to the mapping services, while services that require cross-cell interactions create dependencies between cells (and thus reduce the assumed availability improvements of doing so).

Figure 10: Cell-based architecture

In his AWS blog post, Colm MacCarthaigh explains how Amazon Route 53 uses the concept of shuffle sharding to isolate customer requests into shards. A shard in this case consists of two or more cells. Based on the partition key, traffic from a customer (or resources, or whatever you want to isolate) is routed to its assigned shard. In the case of eight cells with two cells per shard, and customers divided among the four shards, 25% of customers would experience impact in the event of a problem.

Figure 11: Service divided into four traditional shards of two cells each

With shuffle sharding, you create virtual shards of two cells each, and assign your customers to one of those virtual shards. When a problem happens, you can still lose a quarter of the whole service, but the way that customers or resources are assigned means that the scope of impact with shuffle sharding is considerably smaller than 25%. With eight cells, there are 28 unique combinations of two cells, which means that there are 28 possible shuffle shards (virtual shards). If you have hundreds or thousands of customers, and assign each customer to a shuffle shard, then the scope of impact due to a problem is just 1/28th. That's seven times better than regular sharding.

Figure 12: Service divided into 28 shuffle shards (virtual shards) of two cells each (only two shuffle shards out of the 28 possible are shown)

A shard can be used for servers, queues, or other resources in addition to cells.
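The sharding arithmetic above can be verified with a short sketch; the cell count matches the example in the text, while the customer-to-shard mapping is an illustrative stand-in, not Route 53's implementation:

```python
from itertools import combinations

# With 8 cells and shards of 2 cells, traditional sharding yields 4 disjoint
# shards, while shuffle sharding treats every 2-cell combination as a virtual
# shard, as described in the text.
cells = list(range(8))

traditional_shards = [cells[i:i + 2] for i in range(0, 8, 2)]
shuffle_shards = list(combinations(cells, 2))

print(len(traditional_shards))  # 4  -> a bad shard impacts 1/4 of customers
print(len(shuffle_shards))      # 28 -> a bad shard impacts only 1/28th

# Each customer is deterministically mapped to one virtual shard; this simple
# modulo mapping is hypothetical (a real system would hash the partition key).
def assign_shard(customer_id):
    return shuffle_shards[customer_id % len(shuffle_shards)]
```

Since any two customers share a full shard only if both their cells coincide, a single bad shard impacts roughly 1/28th of customers rather than a quarter.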
Resources

Videos
•AWS re:Invent 2018: Architecture Patterns for Multi-Region Active-Active Applications (ARC209-R2)
•Shuffle sharding: AWS re:Invent 2019: Introducing The Amazon Builders' Library (DOP328)
•AWS re:Invent 2018: How AWS Minimizes the Blast Radius of Failures (ARC338)
•AWS re:Invent 2019: Innovation and operation of the AWS global network infrastructure (NET339)

Documentation
•What is AWS Outposts?
•Global Tables: Multi-Region Replication with DynamoDB
•AWS Local Zones FAQ
•AWS Global Infrastructure
•Regions, Availability Zones, and Local Zones
•The Amazon Builders' Library: Workload isolation using shuffle-sharding

Design your Workload to Withstand Component Failures

Workloads with a requirement for high availability and low mean time to recovery (MTTR) must be architected for resiliency.

Monitor all components of the workload to detect failures: Continuously monitor the health of your workload so that you and your automated systems are aware of degradation or complete failure as soon as they occur. Monitor for key performance indicators (KPIs) based on business value. All recovery and healing mechanisms must start with the ability to detect problems quickly. Technical failures should be detected first, so that they can be resolved. However, availability is based on the ability of your workload to deliver business value, so this needs to be a key measure of your detection and remediation strategy.

Failover to healthy resources: Ensure that if a resource failure occurs, healthy resources can continue to serve requests. For location failures (such as an Availability Zone or AWS Region), ensure that you have systems in place to fail over to healthy resources in unimpaired locations. This is easier for individual resource failures (such as an EC2 instance) or impairment of an Availability Zone in a multi-AZ workload, as AWS services such as Elastic Load Balancing and AWS Auto Scaling help distribute load across resources and Availability Zones. For multi-Region workloads, this is more complicated. For example, cross-Region read replicas enable you to deploy your data to multiple AWS Regions, but you still must promote the read replica to primary and point your traffic at it in the event of a primary location failure. Amazon Route 53 and AWS Global Accelerator can also help route traffic across AWS Regions.

If your workload is using AWS services such as Amazon S3 or Amazon DynamoDB, then they are automatically deployed to multiple Availability Zones. In case of failure, the AWS control plane automatically routes traffic to healthy locations for you. For Amazon RDS, you must choose Multi-AZ as a configuration option, and then, on failure, AWS automatically directs traffic to the healthy instance. For Amazon EC2 instances or Amazon ECS tasks, you choose which Availability Zones to deploy to. Elastic Load Balancing then provides the solution to detect instances in unhealthy zones and route traffic to the healthy ones. Elastic Load Balancing can even route traffic to components in your on-premises data center.

For multi-Region approaches (which might also include on-premises data centers), Amazon Route 53 provides a way to define internet domains and assign routing policies that can include health checks, to ensure that traffic is routed to healthy Regions. Alternately, AWS Global Accelerator provides static IP addresses that act as a fixed entry point to your application, and then routes to endpoints in the AWS Regions of your choosing, using the AWS global network instead of the internet, for better performance and reliability.

AWS approaches the design of our services with fault recovery in mind. We design services to minimize the time to recover from failures and the impact on data. Our services primarily use data stores that acknowledge requests only after they are durably stored across multiple replicas. These services and resources include Amazon Aurora, Amazon Relational Database Service (Amazon RDS) Multi-AZ DB instances, Amazon S3, Amazon DynamoDB, Amazon Simple Queue Service (Amazon SQS), and Amazon Elastic File System (Amazon EFS). They are constructed to use cell-based isolation and use the fault isolation provided by Availability Zones. We use automation extensively in our operational procedures. We also optimize our replace-and-restart functionality to recover quickly from interruptions.

Automate healing on all layers: Upon detection of a failure, use automated capabilities to perform actions to remediate. The ability to restart is an important tool to remediate failures. As discussed previously, a best practice for distributed systems is to make services stateless where possible. This prevents loss of data or availability on restart. In the cloud, you can (and generally should) replace the entire resource (for example, an EC2 instance or a Lambda function) as part of the restart. The restart itself is a simple and reliable way to recover from failure.

Many different types of failures occur in workloads. Failures can occur in hardware, software, communications, and operations. Rather than constructing novel mechanisms to trap, identify, and correct each of the different types of failures, map many different categories of failures to the same recovery strategy. An instance might fail due to hardware failure, an operating system bug, a memory leak, or other causes. Rather than building custom remediation for each situation, treat any of them as an instance failure: terminate the instance and allow AWS Auto Scaling to replace it. Later, carry out the analysis on the failed resource out of band.

Another example is the ability to restart a network request. Apply the same recovery approach to both a network timeout and a dependency failure where the dependency returns an error. Both events have a similar effect on the system, so rather than attempting to make either event a "special case", apply a similar strategy of limited retry with exponential backoff and jitter. The ability to restart is a recovery mechanism featured in Recovery Oriented Computing (ROC) and high-availability cluster architectures.

Amazon EventBridge can be used to monitor and filter for events such as CloudWatch Alarms or changes in state in other AWS services. Based on event information, it can then trigger AWS Lambda (or other targets) to execute custom remediation logic on your workload. Amazon EC2 Auto Scaling can be configured to check for EC2 instance health. If the instance is in any state other than running, or if the system status is impaired, Amazon EC2 Auto Scaling considers the instance to be unhealthy and launches a replacement instance. If using AWS OpsWorks, you can configure auto healing of EC2 instances at the layer level.

For large-scale replacements (such as the loss of an entire Availability Zone), static stability is preferred for high availability, instead of trying to obtain multiple new resources at once.

Use static stability to prevent bimodal behavior: Bimodal behavior is when your workload exhibits different behavior under normal and failure modes, for example, relying on launching new instances if an Availability Zone fails. You should instead build systems that are statically stable and operate in only one mode. In this case, provision enough instances in each zone to handle the workload load if one zone were removed, and then use Elastic Load Balancing or Amazon Route 53 health checks to shift load away from the impaired instances. Static stability for compute deployment (such as EC2 instances or containers) will result in the highest reliability. This must be weighed against cost concerns. It's less expensive to provision less compute capacity and rely on launching new instances in the case of a failure, but for large-scale failures (such as an Availability Zone failure), this approach is less effective, because it relies on reacting to impairments as they happen, rather than being prepared for those impairments before they happen.
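The limited-retry strategy with exponential backoff and jitter described above can be sketched as follows; the attempt limit, base delay, and cap are illustrative defaults (this is the "full jitter" variant, with the sleep elided so the sketch runs instantly):

```python
import random

# Minimal sketch of limited retry with exponential backoff and full jitter.
def backoff_delays(max_attempts=5, base=0.1, cap=5.0, rng=None):
    """Yield a randomized delay (seconds) before each attempt."""
    rng = rng or random.Random()
    for attempt in range(max_attempts):
        yield rng.uniform(0, min(cap, base * 2 ** attempt))

def call_with_retries(operation, max_attempts=5):
    """Treat timeouts and dependency errors uniformly: retry a limited
    number of times, then surface the failure to the caller."""
    last_error = None
    for delay in backoff_delays(max_attempts):
        try:
            return operation()
        except Exception as err:  # real code would catch specific retryable errors
            last_error = err
            # time.sleep(delay) would go here in a real client
    raise last_error

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise TimeoutError("transient")
    return "ok"

print(call_with_retries(flaky))  # ok (succeeds after two transient failures)
```

The jitter spreads retries out in time so that many clients recovering from the same event do not retry in synchronized waves and overwhelm the dependency again.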
as they happen rather than being prepared for those impairments before they happen Your solution should weigh reliability versus the cost needs for your workload By using more Availability Zones the amount of additional compute you need for static stability decreases 42ArchivedReliability Pillar AWS WellArchitected Framework Resources Figure 13: After traffic has shifted use AWS Auto Scaling to asynchronously replace instances from the failed zone and launch them in the healthy zones Another example of bimodal behavior would be a network timeout that could cause a system to attempt to refresh the configuration state of the entire system This would add unexpected load to another component and might cause it to fail triggering other unexpected consequences This negative feedback loop impacts availability of your workload Instead you should build systems that are statically stable and operate in only one mode A statically stable design would be to do constant work (p 19) and always refresh the configuration state on a fixed cadence When a call fails the workload uses the previously cached value and triggers an alarm Another example of bimodal behavior is allowing clients to bypass your workload cache when failures occur This might seem to be a solution that accommodates client needs but should not be allowed because it significantly changes the demands on your workload and is likely to result in failures Send notifications when events impact availability: Notifications are sent upon the detection of significant events even if the issue caused by the event was automatically resolved Automated healing enables your workload to be reliable However it can also obscure underlying problems that need to be addressed Implement appropriate monitoring and events so that you can detect patterns of problems including those addressed by auto healing so that you can resolve root cause issues Amazon CloudWatch Alarms can be triggered based on failures that occur They can also trigger 
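The limited-retry guidance above (exponential backoff with full jitter, then give up and alarm) can be sketched in a few lines. This is an illustrative helper, not an AWS SDK API; the `sleep` and `rng` parameters are injected here only so the behavior is observable in tests.

```python
import random
import time

def retry_with_backoff(op, max_retries=3, base=0.1, cap=5.0,
                       sleep=time.sleep, rng=random):
    """Limited retry with exponential backoff and full jitter.

    Retries op() at most max_retries times; each delay is drawn uniformly
    from [0, min(cap, base * 2**attempt)] so that a fleet of clients does
    not retry in lockstep.
    """
    for attempt in range(max_retries + 1):
        try:
            return op()
        except Exception:
            if attempt == max_retries:
                raise  # give up: surface the failure rather than retry forever
            sleep(rng.uniform(0, min(cap, base * 2 ** attempt)))
```

Capping both the delay and the number of attempts keeps the retry path bounded, so a dependency brownout does not turn into an unbounded queue of retries.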
CloudWatch alarms can be configured to send emails, or to log incidents in third-party incident tracking systems, using Amazon SNS integration.

Resources

Videos
•Static stability in AWS: AWS re:Invent 2019: Introducing The Amazon Builders’ Library (DOP328)

Documentation
• AWS OpsWorks: Using Auto Healing to Replace Failed Instances
•What Is Amazon EventBridge?
•Amazon Route 53: Choosing a Routing Policy
•What Is AWS Global Accelerator?
• The Amazon Builders' Library: Static stability using Availability Zones
• The Amazon Builders' Library: Implementing health checks
•AWS Marketplace: products that can be used for fault tolerance
•APN Partner: partners that can help with automation of your fault tolerance

Labs
• Well-Architected lab: Level 300: Implementing Health Checks and Managing Dependencies to Improve Reliability

External Links
•The Berkeley/Stanford Recovery-Oriented Computing (ROC) Project

Test Reliability

After you have designed your workload to be resilient to the stresses of production, testing is the only way to ensure that it will operate as designed and deliver the resiliency you expect. Test to validate that your workload meets functional and non-functional requirements, because bugs or performance bottlenecks can impact its reliability. Test the resiliency of your workload to help you find latent bugs that only surface in production. Exercise these tests regularly.

Use playbooks to investigate failures: Enable consistent and prompt responses to failure scenarios that are not well understood by documenting the investigation process in playbooks. Playbooks are the predefined steps performed to identify the factors contributing to a failure scenario. The results from any process step are used to determine the next steps to take until the issue is identified or escalated. The playbook is proactive planning that you must do to be able to take reactive actions effectively. When failure scenarios not covered by the playbook are encountered in production, first address the issue (put out the fire). Then go back, look at the steps you took to address the issue, and use these to add a new entry in the playbook. Note that playbooks are used in response to specific incidents, while runbooks are used to achieve specific outcomes. Often, runbooks are used for routine activities, and playbooks are used to respond to non-routine events.

Perform post-incident analysis: Review customer-impacting events, and identify the contributing factors and preventative action items. Use this information to develop mitigations to limit or prevent recurrence. Develop procedures for prompt and effective responses. Communicate contributing factors and corrective actions as appropriate, tailored to target audiences. Assess why existing testing did not find the issue, and add tests for this case if they do not already exist.

Test functional requirements: These include unit tests and integration tests that validate required functionality. You achieve the best outcomes when these tests are run automatically as part of build and deployment actions. For instance, using AWS CodePipeline, developers commit changes to a source repository, where CodePipeline automatically detects the changes. Those changes are built, and tests are run. After the tests are complete, the built code is deployed to staging servers for testing. From the staging server, CodePipeline runs more tests, such as integration or load tests. Upon the successful completion of those tests, CodePipeline deploys the tested and approved code to production instances.

Additionally, experience shows that synthetic transaction testing (also known as “canary testing,” but not to be confused with canary deployments), which can run and simulate customer behavior, is among the most important testing processes. Run these tests constantly against your workload endpoints from diverse remote locations.
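A synthetic transaction boils down to issuing a scripted request and judging the response the way a customer would. A minimal sketch of that health judgment follows; the `fetch` callable is an assumed stand-in for a real HTTP client, injected so the check is testable offline.

```python
def canary_ok(fetch, url, expected_status=200, max_latency_s=1.0):
    """One synthetic transaction: healthy only if the endpoint returns
    the expected status code within the latency budget.

    fetch(url) is expected to return (status_code, latency_seconds).
    """
    status, latency_s = fetch(url)
    return status == expected_status and latency_s <= max_latency_s
```

In practice, run such checks on a schedule from several locations, and alarm on consecutive failures rather than a single blip.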
Amazon CloudWatch Synthetics enables you to create canaries to monitor your endpoints and APIs.

Test scaling and performance requirements: This includes load testing to validate that the workload meets scaling and performance requirements. In the cloud, you can create a production-scale test environment on demand for your workload. If you run these tests on scaled-down infrastructure, you must scale your observed results to what you think will happen in production. Load and performance testing can also be done in production if you are careful not to impact actual users, and if you tag your test data so that it does not commingle with real user data and corrupt usage statistics or production reports. With testing, ensure that your base resources, scaling settings, service quotas, and resiliency design operate as expected under load.

Test resiliency using chaos engineering: Run tests that regularly inject failures into pre-production and production environments. Hypothesize how your workload will react to the failure, then compare your hypothesis to the testing results, and iterate if they do not match. Ensure that production testing does not impact users. In the cloud, you can test how your workload fails, and you can validate your recovery procedures. You can use automation to simulate different failures, or to recreate scenarios that led to failures before. This exposes failure pathways that you can test and fix before a real failure scenario occurs, thus reducing risk.

Chaos Engineering is the discipline of experimenting on a system in order to build confidence in the system’s capability to withstand turbulent conditions in production. – Principles of Chaos Engineering

In pre-production and testing environments, chaos engineering should be done regularly and be part of your CI/CD cycle. In production, teams must take care not to disrupt availability, and should use game days as a way to control the risk of chaos engineering in production. The testing effort should be commensurate with your availability goals. Testing to ensure that you can meet your availability goals is the only way you can have confidence that you will meet those goals. Test for the component failures that you have designed your workload to be resilient against. These include loss of EC2 instances, failure of the primary Amazon RDS database instance, and Availability Zone outages. Test for external dependency unavailability: your workload’s resiliency to transient failures of dependencies should be tested for durations that may last from less than a second to hours.

Other modes of degradation might cause reduced functionality and slow responses, often resulting in a brownout of your services. Common sources of this degradation are increased latency on critical services and unreliable network communication (dropped packets). You want the ability to inject such failures into your system, including networking effects such as latency and dropped messages, and DNS failures such as being unable to resolve a name or to establish connections to dependent services.

There are several third-party options for injecting failures. These include open source options such as Netflix Chaos Monkey, The Chaos ToolKit, and Shopify Toxiproxy, as well as commercial options like Gremlin. We advise that initial investigations of how to implement chaos engineering use self-authored scripts. This enables engineering teams to become comfortable with how chaos is introduced into their workloads. For examples of these, see Testing for Resiliency of EC2, RDS, and S3, which uses multiple languages such as Bash, Python, Java, and PowerShell. You should also implement Injecting Chaos to Amazon EC2 using AWS Systems Manager, which enables you to simulate brownouts and high CPU conditions using AWS Systems Manager documents.

Conduct game days regularly: Use game days to regularly exercise your procedures for responding to events and failures, as close to production as possible (including in production environments), with the people who will be involved in actual failure scenarios.
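The self-authored failure-injection scripts recommended above often start as a simple wrapper around a dependency call that adds latency or raises errors at a configured rate (the same idea Toxiproxy applies at the network layer). A sketch follows; the random source and sleep function are injected here so experiments are deterministic in tests.

```python
import random
import time

def with_chaos(op, failure_rate=0.0, extra_latency_s=0.0,
               rng=random, sleep=time.sleep):
    """Wrap a dependency call for chaos experiments: optionally delay it
    (simulated brownout) and fail it with the given probability
    (simulated unreliable dependency)."""
    def wrapped(*args, **kwargs):
        if extra_latency_s:
            sleep(extra_latency_s)        # inject increased latency
        if rng.random() < failure_rate:   # inject dropped / failed calls
            raise ConnectionError("injected fault")
        return op(*args, **kwargs)
    return wrapped
```

Wrapping the client side this way lets you verify that your retry, timeout, and fallback logic actually engages before running the same experiment against real infrastructure.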
Game days enforce measures to ensure that production events do not impact users. Game days simulate a failure or event to test systems, processes, and team responses. The purpose is to actually perform the actions the team would perform as if an exceptional event happened. This will help you understand where improvements can be made, and can help develop organizational experience in dealing with events. Game days should be conducted regularly, so that your team builds “muscle memory” for how to respond.

After your design for resiliency is in place and has been tested in non-production environments, a game day is the way to ensure that everything works as planned in production. A game day, especially the first one, is an “all hands on deck” activity in which engineers and operations staff are all informed when it will happen and what will occur. Runbooks are in place. Simulated events are executed, including possible failure events, in the production systems in the prescribed manner, and impact is assessed. If all systems operate as designed, detection and self-healing will occur with little to no impact. However, if negative impact is observed, the test is rolled back and the workload issues are remedied, manually if necessary (using the runbook). Since game days often take place in production, all precautions should be taken to ensure that there is no impact on availability to your customers.

Resources

Videos
•AWS re:Invent 2019: Improving resiliency with chaos engineering (DOP309R1)

Documentation
•Continuous Delivery and Continuous Integration
•Using Canaries (Amazon CloudWatch Synthetics)
•Use CodePipeline with AWS CodeBuild to test code and run builds
•Automate your operational playbooks with AWS Systems Manager
•AWS Marketplace: products that can be used for continuous integration
•APN Partner: partners that can help with implementation of a continuous integration pipeline

Labs
• Well-Architected lab: Level 300: Testing for Resiliency of EC2, RDS, and S3

External Links
•Principles of Chaos Engineering
•Resilience Engineering: Learning to Embrace Failure
•Apache JMeter

Books
• Casey Rosenthal and Nora Jones, “Chaos Engineering” (April 2020)

Plan for Disaster Recovery (DR)

Having backups and redundant workload components in place is the start of your DR strategy. RTO and RPO are your objectives (p. 7) for restoration of your workload. Set these based on business needs. Implement a strategy to meet these objectives, considering the locations and function of workload resources and data. The probability of disruption and the cost of recovery are also key factors that help to inform the business value of providing disaster recovery for a workload.

Both availability and disaster recovery rely on the same best practices, such as monitoring for failures, deploying to multiple locations, and automatic failover. However, availability focuses on components of the workload, while disaster recovery focuses on discrete copies of the entire workload. Disaster recovery has different objectives from availability, focusing on time to recovery after a disaster.

Define recovery objectives for downtime and data loss: The workload has a recovery time objective (RTO) and a recovery point objective (RPO). Recovery Time Objective (RTO) is defined by the organization. RTO is the maximum acceptable delay between the interruption of service and the restoration of service. This determines what is considered an acceptable time window when service is unavailable. Recovery Point Objective (RPO) is defined by the organization. RPO is the maximum acceptable amount of time since the last data recovery point. This determines what is considered an acceptable loss of data between the last recovery point and the interruption of service.

Use defined recovery strategies to meet the recovery objectives: Define a disaster recovery (DR) strategy that meets your workload objectives.
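The two objectives just defined translate directly into arithmetic on timestamps: a recovery met its objectives only if downtime stayed within the RTO and the age of the last recovery point stayed within the RPO. An illustrative helper (not an AWS API):

```python
from datetime import datetime, timedelta

def recovery_met_objectives(outage_start, restored_at, last_recovery_point,
                            rto, rpo):
    """True only if downtime <= RTO and data loss (time from the last
    recovery point to the interruption) <= RPO."""
    downtime = restored_at - outage_start
    data_loss = outage_start - last_recovery_point
    return downtime <= rto and data_loss <= rpo
```

Evaluating every DR test with a check like this makes "did we meet RTO and RPO?" a recorded fact rather than an impression.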
Choose a strategy such as backup and restore, standby (active/passive), or active/active. When architecting a multi-region disaster recovery strategy for your workload, you should choose one of the following multi-region strategies. They are listed in increasing order of complexity, and decreasing order of RTO and RPO. DR Region refers to an AWS Region other than the one primarily used for your workload (or any AWS Region if your workload is on premises). Some workloads have regulatory data residency requirements. If this applies to your workload in a locality that currently has only one AWS Region, then you can use the Availability Zones within that Region as discrete locations instead of AWS Regions.

•Backup and restore (RPO in hours, RTO in 24 hours or less): Back up your data and applications using point-in-time backups into the DR Region. Restore this data when necessary to recover from a disaster.
•Pilot light (RPO in minutes, RTO in hours): Replicate your data from one Region to another and provision a copy of your core workload infrastructure. Resources required to support data replication and backup, such as databases and object storage, are always on. Other elements, such as application servers, are loaded with application code and configurations, but are switched off and are only used during testing or when disaster recovery failover is invoked.
•Warm standby (RPO in seconds, RTO in minutes): Maintain a scaled-down but fully functional version of your workload always running in the DR Region. Business-critical systems are fully duplicated and are always on, but with a scaled-down fleet. When the time comes for recovery, the system is scaled up quickly to handle the production load. The more scaled-up the warm standby is, the lower the RTO and control plane reliance will be. When scaled up to full scale, this is known as a hot standby.
•Multi-region (multi-site) active-active (RPO near zero, RTO potentially zero): Your workload is deployed to, and actively serving traffic from, multiple AWS Regions. This strategy requires you to synchronize data across Regions. Possible conflicts caused by writes to the same record in two different regional replicas must be avoided or handled. Data replication is useful for data synchronization and will protect you against some types of disaster, but it will not protect you against data corruption or destruction unless your solution also includes options for point-in-time recovery. Use services like Amazon Route 53 or AWS Global Accelerator to route your user traffic to where your workload is healthy. For more details on AWS services you can use for active-active architectures, see the AWS Regions section of Use Fault Isolation to Protect Your Workload (p. 36).

Recommendation
The difference between pilot light and warm standby can sometimes be difficult to understand. Both include an environment in your DR Region with copies of your primary Region assets. The distinction is that pilot light cannot process requests without additional action taken first, while warm standby can handle traffic (at reduced capacity levels) immediately. Pilot light will require you to turn on servers, possibly deploy additional (non-core) infrastructure, and scale up, while warm standby only requires you to scale up (everything is already deployed and running). Choose between these based on your RTO and RPO needs.

Test disaster recovery implementation to validate the implementation: Regularly test failover to DR to ensure that RTO and RPO are met. A pattern to avoid is developing recovery paths that are rarely executed. For example, you might have a secondary data store that is used for read-only queries. When you write to a data store and the primary fails, you might want to fail over to the secondary data store. If you don’t frequently test this failover, you might find that your assumptions about the capabilities of the secondary data store are incorrect.
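The four strategies above form a ladder: each step toward active-active costs more but buys a smaller RTO. A selection rule can be sketched as below; the hour thresholds are illustrative approximations of the ranges quoted above, not AWS guidance.

```python
def choose_dr_strategy(rto_hours):
    """Pick the least complex multi-region strategy whose typical RTO
    fits the requirement (thresholds are illustrative)."""
    if rto_hours >= 24:
        return "backup and restore"          # RTO in 24 hours or less
    if rto_hours >= 1:
        return "pilot light"                 # RTO in hours
    if rto_hours >= 1 / 60:
        return "warm standby"                # RTO in minutes
    return "multi-region active-active"      # RTO potentially zero
```

A real decision would also weigh RPO, cost, and data residency, but making the ladder explicit helps keep the choice tied to stated objectives.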
The capacity of the secondary, which might have been sufficient when you last tested, may no longer be able to tolerate the load under this scenario. Our experience has shown that the only error recovery that works is the path you test frequently. This is why having a small number of recovery paths is best. You can establish recovery patterns and regularly test them. If you have a complex or critical recovery path, you still need to regularly execute that failure in production to convince yourself that the recovery path works. In the example we just discussed, you should fail over to the standby regularly, regardless of need.

Manage configuration drift at the DR site or region: Ensure that your infrastructure, data, and configuration are as needed at the DR site or region. For example, check that AMIs and service quotas are up to date. AWS Config continuously monitors and records your AWS resource configurations. It can detect drift and trigger AWS Systems Manager Automation to fix it and raise alarms. AWS CloudFormation can additionally detect drift in stacks you have deployed.

Automate recovery: Use AWS or third-party tools to automate system recovery and route traffic to the DR site or region. Based on configured health checks, AWS services such as Elastic Load Balancing and AWS Auto Scaling can distribute load to healthy Availability Zones, while services such as Amazon Route 53 and AWS Global Accelerator can route load to healthy AWS Regions. For workloads on existing physical or virtual data centers or private clouds, CloudEndure Disaster Recovery, available through AWS Marketplace, enables organizations to set up an automated disaster recovery strategy to AWS. CloudEndure also supports cross-region / cross-AZ disaster recovery in AWS.

Resources

Videos
•AWS re:Invent 2019: Backup-and-restore and disaster-recovery solutions with AWS (STG208)

Documentation
•What Is AWS Backup?
•Remediating Noncompliant AWS Resources by AWS Config Rules
•AWS Systems Manager Automation
• AWS CloudFormation: Detect Drift on an Entire CloudFormation Stack
•Amazon RDS: Cross-region backup copy
•RDS: Replicating a Read Replica Across Regions
•S3: Cross-Region Replication
•Route 53: Configuring DNS Failover
•CloudEndure Disaster Recovery
• How do I implement an Infrastructure Configuration Management solution on AWS?
•CloudEndure Disaster Recovery to AWS
•AWS Marketplace: products that can be used for disaster recovery
•APN Partner: partners that can help with disaster recovery

Example Implementations for Availability Goals

In this section, we’ll review workload designs using the deployment of a typical web application that consists of a reverse proxy, static content on Amazon S3, an application server, and a SQL database for persistent storage of data. For each availability target, we provide an example implementation. This workload could instead use containers or AWS Lambda for compute, and NoSQL (such as Amazon DynamoDB) for the database, but the approaches are similar. In each scenario, we demonstrate how to meet availability goals through workload design for these topics:

Topic: For more information, see this section
Monitor resources: Monitor Workload Resources (p. 25)
Adapt to changes in demand: Design your Workload to Adapt to Changes in Demand (p. 28)
Implement change: Implement Change (p. 30)
Back up data: Back up Data (p. 34)
Architect for resiliency: Use Fault Isolation to Protect Your Workload (p. 36); Design your Workload to Withstand Component Failures (p. 41)
Test resiliency: Test Reliability (p. 44)
Plan for disaster recovery (DR): Plan for Disaster Recovery (DR) (p. 47)

Dependency Selection

We have chosen to use Amazon EC2 for our applications. We will show how using Amazon RDS and multiple Availability Zones improves the availability of our applications. We will use Amazon Route 53 for DNS.
When we use multiple Availability Zones, we will use Elastic Load Balancing. Amazon S3 is used for backups and static content. As we design for higher reliability, we must use services that themselves have higher availability. See Appendix A: Designed-For Availability for Select AWS Services (p. 68) for the design goals of the respective AWS services.

Single-Region Scenarios

Topics
•2 9s (99%) Scenario (p. 51)
•3 9s (99.9%) Scenario (p. 52)
•4 9s (99.99%) Scenario (p. 54)

2 9s (99%) Scenario

These workloads are helpful to the business, but it is only an inconvenience if they are unavailable. This type of workload can be internal tooling, internal knowledge management, or project tracking. Or, these can be actual customer-facing workloads, but served from an experimental service, with a feature toggle that can hide the service if needed. These workloads can be deployed with one Region and one Availability Zone.

Monitor resources: We will have simple monitoring, indicating whether the service home page is returning an HTTP 200 OK status. When problems occur, our playbook will indicate that logging from the instance will be used to establish root cause.

Adapt to changes in demand: We will have playbooks for common hardware failures, urgent software updates, and other disruptive changes.

Implement change: We will use AWS CloudFormation to define our infrastructure as code, and specifically to speed up reconstruction in the event of a failure. Software updates are manually performed using a runbook, with downtime required for the installation and restart of the service. If a problem happens during deployment, the runbook describes how to roll back to the previous version. Any corrections of the error are done using analysis of logs by the operations and development teams, and the correction is deployed after the fix is prioritized and completed.

Back up data: We will use a vendor or purpose-built backup solution to send encrypted backup data to Amazon S3 using a runbook. We will test that the backups work by restoring the data, and ensuring the ability to use it, on a regular basis, using a runbook. We configure versioning on our Amazon S3 objects and remove permissions for deletion of the backups. We use an Amazon S3 bucket lifecycle policy to archive or permanently delete backups according to our requirements.

Architect for resiliency: Workloads are deployed with one Region and one Availability Zone. We deploy the application, including the database, to a single instance.

Test resiliency: The deployment pipeline of new software is scheduled, with some unit testing, but mostly white-box/black-box testing of the assembled workload.

Plan for disaster recovery (DR): During failures, we wait for the failure to finish, optionally routing requests to a static website using DNS modification, via a runbook. The recovery time will be determined by the speed at which the infrastructure can be deployed and the database restored to the most recent backup. This deployment can be either into the same Availability Zone, or into a different Availability Zone in the event of an Availability Zone failure, using a runbook.

Availability design goal: We take 30 minutes to understand and decide to execute recovery, deploy the whole stack in AWS CloudFormation in 10 minutes, assume that we deploy to a new Availability Zone, and assume that the database can be restored in 30 minutes. This implies that it takes about 70 minutes to recover from a failure. Assuming one failure per quarter, our estimated impact time for the year is 280 minutes, or four hours and 40 minutes. This means that the upper limit on availability is 99.9%. The actual availability also depends on the real rate of failure, the duration of failure, and how quickly each failure actually recovers. For this architecture, we require the application to be offline for updates (estimating 24 hours per year: four hours per change, six times per year), plus actual events.
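The downtime arithmetic in this scenario is easy to check with a small helper:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def availability_from_downtime(downtime_minutes_per_year):
    """Availability as a fraction of the year, given total downtime."""
    return 1 - downtime_minutes_per_year / MINUTES_PER_YEAR
```

The 280 minutes of unplanned impact alone allows roughly 99.95%; adding the estimated 1,440 minutes of planned update downtime brings the total to about 99.67%, which is why the stated design goal is the more conservative 99%.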
So, referring to the table on application availability earlier in the whitepaper, we see that our availability design goal is 99%.

Summary
Topic: Implementation
Monitor resources: Site health check only; no alerting
Adapt to changes in demand: Vertical scaling via redeployment
Implement change: Runbook for deploy and rollback
Back up data: Runbook for backup and restore
Architect for resiliency: Complete rebuild; restore from backup
Test resiliency: Complete rebuild; restore from backup
Plan for disaster recovery (DR): Encrypted backups, restored to a different Availability Zone if needed

3 9s (99.9%) Scenario

The next availability goal is for applications for which it is important to be highly available, but that can tolerate short periods of unavailability. This type of workload is typically used for internal operations that affect employees when it is down. This type of workload can also be customer facing, but not a high revenue driver for the business, and can tolerate a longer recovery time or recovery point. Such workloads include administrative applications for account or information management. We can improve availability for these workloads by using two Availability Zones for our deployment and by separating the applications into separate tiers.

Monitor resources: Monitoring will be expanded to alert on the availability of the website overall, by checking for an HTTP 200 OK status on the home page. In addition, there will be alerting on every replacement of a web server and when the database fails over. We will also monitor the static content on Amazon S3 for availability, and alert if it becomes unavailable. Logging will be aggregated for ease of management and to help in root cause analysis.

Adapt to changes in demand: Automatic scaling is configured to monitor CPU utilization on EC2 instances, and to add or remove instances to maintain the CPU target at 70%, but with no fewer than one EC2 instance per Availability Zone. If load patterns on our RDS instance indicate that scale-up is needed, we will change the instance type during a maintenance window.

Implement change: The infrastructure deployment technologies remain the same as in the previous scenario. Delivery of new software is on a fixed schedule of every two to four weeks. Software updates will be automated, not using canary or blue/green deployment patterns, but rather using replace-in-place. The decision to roll back will be made using the runbook. We will have playbooks for establishing the root cause of problems. After the root cause has been identified, the correction for the error will be identified by a combination of the operations and development teams, and the correction will be deployed after the fix is developed.

Back up data: Backup and restore can be done using Amazon RDS. It will be executed regularly using a runbook to ensure that we can meet recovery requirements.

Architect for resiliency: We can improve availability for applications by using two Availability Zones for our deployment and by separating the applications into separate tiers. We will use services that work across multiple Availability Zones, such as Elastic Load Balancing, Auto Scaling, and Amazon RDS Multi-AZ with encrypted storage via AWS Key Management Service. This will ensure tolerance to failures at the resource level and at the Availability Zone level. The load balancer will only route traffic to healthy application instances. The health check needs to be at the data plane/application layer, indicating the capability of the application on the instance. This check should not be against the control plane. A health check URL for the web application will be present and configured for use by the load balancer and Auto Scaling, so that instances that fail are removed and replaced. Amazon RDS will manage the active database engine to be available in the second Availability Zone if the instance fails in the primary Availability Zone, and will then repair to restore the same resiliency.
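The 70% CPU target described above is a target-tracking policy: conceptually, the fleet is resized in proportion to how far observed utilization is from the target, with a floor of one instance per Availability Zone. A rough sketch of that arithmetic (illustrative only, not the Auto Scaling algorithm itself):

```python
import math

def desired_capacity(current_instances, observed_cpu_pct,
                     target_cpu_pct=70.0, min_per_az=1, az_count=2):
    """Scale the fleet proportionally toward the utilization target,
    never dropping below the per-AZ minimum."""
    proportional = math.ceil(current_instances * observed_cpu_pct
                             / target_cpu_pct)
    return max(proportional, min_per_az * az_count)
```

The per-AZ floor matters for this scenario: even at negligible load, one instance stays in each of the two zones so an Availability Zone failure never leaves the workload with zero capacity.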
After we have separated the tiers, we can use distributed system resiliency patterns to increase the reliability of the application, so that it can still be available even when the database is temporarily unavailable during an Availability Zone failover.

Test resiliency: We do functional testing, the same as in the previous scenario. We do not test the self-healing capabilities of ELB, automatic scaling, or RDS failover. We will have playbooks for common database problems, security-related incidents, and failed deployments.

Plan for disaster recovery (DR): Runbooks exist for total workload recovery and common reporting. Recovery uses backups stored in the same Region as the workload.

Availability design goal: We assume that at least some failures will require a manual decision to execute recovery. However, with the greater automation in this scenario, we assume that only two events per year will require this decision. We take 30 minutes to decide to execute recovery, and assume that recovery is completed within 30 minutes. This implies 60 minutes to recover from failure. Assuming two incidents per year, our estimated impact time for the year is 120 minutes. This means that the upper limit on availability is 99.95%. The actual availability also depends on the real rate of failure, the duration of the failure, and how quickly each failure actually recovers. For this architecture, we require the application to be briefly offline for updates, but these updates are automated. We estimate 150 minutes per year for this: 15 minutes per change, 10 times per year. This adds up to 270 minutes per year when the service is not available, so our availability design goal is 99.9%.

Summary
Topic: Implementation
Monitor resources: Site health check only; alerts sent when down
Adapt to changes in demand: ELB for the web tier and automatic scaling for the application tier; resizing Multi-AZ RDS
Implement change: Automated deploy in place, and runbook for rollback
Back up data: Automated backups via RDS to meet RPO, and runbook for restoring
Architect for resiliency: Automatic scaling to provide self-healing web and application tiers; RDS is Multi-AZ
Test resiliency: ELB and application are self-healing; RDS is Multi-AZ; no explicit testing
Plan for disaster recovery (DR): Encrypted backups via RDS to the same AWS Region

4 9s (99.99%) Scenario

This availability goal requires the application to be highly available and tolerant of component failures. The application must be able to absorb failures without needing to obtain additional resources. This availability goal is for mission-critical applications that are main or significant revenue drivers for a corporation, such as an e-commerce site, a business-to-business web service, or a high-traffic content/media site. We can improve availability further by using an architecture that is statically stable within the Region. This availability goal doesn’t require a control plane change in the behavior of our workload to tolerate failure. For example, there should be enough capacity to withstand the loss of one Availability Zone. We should not require updates to Amazon Route 53 DNS. We should not need to create any new infrastructure, whether it is creating or modifying an S3 bucket, creating new IAM policies (or modifying policies), or modifying Amazon ECS task configurations.

Monitor resources: Monitoring will include success metrics, as well as alerting when problems occur. In addition, there will be alerting on every replacement of a failed web server, when the database fails over, and when an AZ fails.

Adapt to changes in demand: We will use Amazon Aurora as our relational database, which enables automatic scaling of read replicas. For these applications, engineering for read availability over write availability of primary content is also a key architecture decision. Aurora can also grow storage automatically as needed, in 10 GB increments, up to 64 TB.
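The storage growth rule just described is simple arithmetic; a sketch, using the 10 GB increment and 64 TB ceiling quoted above (illustrative only):

```python
import math

def aurora_allocated_gb(data_gb):
    """Allocated storage grows in 10 GB increments, capped at 64 TB,
    per the figures quoted above."""
    return min(math.ceil(data_gb / 10) * 10, 64 * 1024)
```

Because growth is automatic, capacity planning for the database shifts from provisioning storage up front to monitoring the approach to the ceiling.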
to 64 TB 54ArchivedReliability Pillar AWS WellArchitected Framework 4 9s (9999%) Scenario Implement change We will deploy updates using canary or blue/green deployments into each isolation zone separately The deployments are fully automated including a roll back if KPIs indicate a problem Runbooks will exist for rigorous reporting requirements and performance tracking If successful operations are trending toward failure to meet performance or availability goals a playbook will be used to establish what is causing the trend Playbooks will exist for undiscovered failure modes and security incidents Playbooks will also exist for establishing the root cause of failures We will also engage with AWS Support for Infrastructure Event Management offering The team that builds and operates the website will identify the correction of error of any unexpected failure and prioritize the fix to be deployed after it is implemented Back up data Backup and restore can be done using Amazon RDS It will be executed regularly using a runbook to ensure that we can meet recovery requirements Architect for resiliency We recommend three Availability Zones for this approach Using a three Availability Zone deployment each AZ has static capacity of 50% of peak Two Availability Zones could be used but the cost of the statically stable capacity would be more because both zones would have to have 100% of peak capacity We will add Amazon CloudFront to provide geographic caching as well as request reduction on our application’s data plane We will use Amazon Aurora as our RDS and deploy read replicas in all three zones The application will be built using the software/application resiliency patterns in all layers Test resiliency The deployment pipeline will have a full test suite including performance load and failure injection testing We will practice our failure recovery procedures constantly through game days using runbooks to ensure that we can perform the tasks and not deviate from the procedures 
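The static-capacity tradeoff described under Architect for resiliency, and the downtime arithmetic behind each availability design goal in these scenarios, reduce to two small formulas. A minimal sketch (the helper names are ours, not AWS tooling):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def availability_from_downtime(minutes_down: float) -> float:
    """Availability (%) implied by total unavailable minutes per year."""
    return 100.0 * (MINUTES_PER_YEAR - minutes_down) / MINUTES_PER_YEAR

def static_capacity_per_az(peak: float, zones: int) -> float:
    """Per-AZ capacity (as a fraction of peak) needed so that the
    remaining zones still serve peak load after losing one AZ."""
    return peak / (zones - 1)

# 99.9% scenario: 2 incidents x 60 min unplanned + 150 min of planned updates
print(round(availability_from_downtime(2 * 60 + 150), 2))  # prints 99.95
# 99.99% scenario: 2 incidents x 15 min
print(round(availability_from_downtime(2 * 15), 2))        # prints 99.99
# Three AZs: each zone carries 50% of peak; two AZs: 100% each
print(static_capacity_per_az(1.0, 3), static_capacity_per_az(1.0, 2))
```

The design goals quoted in the text are then set at or below these computed upper bounds (for example, 270 minutes of yearly downtime still supports roughly 99.95%, and the paper sets the goal conservatively at 99.9%).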
The team that builds the website also operates the website.

Plan for disaster recovery (DR)

Runbooks exist for total workload recovery and common reporting. Recovery uses backups stored in the same Region as the workload. Restore procedures are regularly exercised as part of game days.

Availability design goal

We assume that at least some failures will require a manual decision to execute recovery. However, with greater automation in this scenario, we assume that only two events per year will require this decision, and that the recovery actions will be rapid. We take 10 minutes to decide to execute recovery and assume that recovery is completed within five minutes. This implies 15 minutes to recover from failure. Assuming two failures per year, our estimated impact time for the year is 30 minutes.

This means that the upper limit on availability is 99.99%. The actual availability will also depend on the real rate of failure, the duration of the failure, and how quickly each failure actually recovers. For this architecture, we assume that the application is online continuously through updates. Based on this, our availability design goal is 99.99%.

Summary

• Monitor resources: Health checks at all layers and on KPIs; alerts sent when configured alarms are tripped; alerting on all failures. Operational meetings are rigorous to detect trends and manage to design goals.
• Adapt to changes in demand: ELB for web and automatic scaling application tier; automatic scaling storage and read replicas in multiple zones for Aurora RDS.
• Implement change: Automated deploy via canary or blue/green, and automated rollback when KPIs or alerts indicate undetected problems in the application. Deployments are made by isolation zone.
• Back up data: Automated backups via RDS to meet RPO, and automated restoration that is practiced regularly in a game day.
• Architect for resiliency: Implemented fault isolation zones for the application; automatic scaling to provide self-healing web and application tier; RDS is Multi-AZ.
• Test resiliency: Component and isolation zone fault testing is in the pipeline and practiced with operational staff regularly in a game day; playbooks exist for diagnosing unknown problems; and a Root Cause Analysis process exists.
• Plan for disaster recovery (DR): Encrypted backups via RDS to the same AWS Region.

Multi-Region Scenarios

Implementing our application in multiple AWS Regions will increase the cost of operation, partly because we isolate Regions to maintain their autonomy. It should be a very thoughtful decision to pursue this path. That said, Regions provide a strong isolation boundary, and we take great pains to avoid correlated failures across Regions. Using multiple Regions will give you greater control over your recovery time in the event of a hard dependency failure on a regional AWS service. In this section, we'll discuss various implementation patterns and their typical availability.

Topics
• 3½ 9s (99.95%) with a Recovery Time between 5 and 30 Minutes
• 5 9s (99.999%) or Higher Scenario with a Recovery Time under 1 Minute

3½ 9s (99.95%) with a Recovery Time between 5 and 30 Minutes

This availability goal for applications requires extremely short downtime and very little data loss during specific times. Applications with this availability goal include applications in areas such as banking, investing, emergency services, and data capture. These applications have very short recovery times and recovery points.

We can improve recovery time further by using a Warm Standby approach across two AWS Regions. We will deploy the entire workload to both Regions, with our passive site scaled down and all data kept eventually consistent. Both deployments will be statically stable within their respective Regions. The applications should be built using the distributed system resiliency patterns. We'll need to create a lightweight routing component that monitors workload health and can be configured to route traffic to the passive Region if necessary.

Monitor resources

There will be alerting on every replacement of a web server, when the database fails over, and when the Region fails over. We will also monitor the static content on Amazon S3 for availability and alert if it becomes unavailable. Logging will be aggregated for ease of management and to help in root cause analysis in each Region. The routing component monitors both our application health and any regional hard dependencies we have.

Adapt to changes in demand

Same as the 4 9s scenario.

Implement change

Delivery of new software is on a fixed schedule of every two to four weeks. Software updates will be automated using canary or blue/green deployment patterns. Runbooks exist for when Region failover occurs, for common customer issues that occur during those events, and for common reporting. We will have playbooks for common database problems, security-related incidents, failed deployments, unexpected customer issues on Region failover, and establishing the root cause of problems. After the root cause has been identified, the correction of error will be identified by a combination of the operations and development teams and deployed when the fix is developed. We will also engage with AWS Support for Infrastructure Event Management.

Back up data

As in the 4 9s scenario, we automate RDS backups and use S3 versioning. Data is automatically and asynchronously replicated from the Aurora RDS cluster in the active Region to cross-region read replicas in the passive Region. S3 cross-region replication is used to automatically and asynchronously move data from the active to the passive Region.

Architect for resiliency

Same as the 4 9s scenario, plus regional failover is possible. This is managed manually. During failover, we will route requests to a static website using DNS failover until recovery in
the second Region.

Test resiliency

Same as the 4 9s scenario, plus we will validate the architecture through game days using runbooks. Also, RCA correction is prioritized above feature releases for immediate implementation and deployment.

Plan for disaster recovery (DR)

Regional failover is manually managed. All data is asynchronously replicated. Infrastructure in the warm standby is scaled out. This can be automated using a workflow executed on AWS Step Functions. AWS Systems Manager (SSM) can also help with this automation, as you can create SSM documents that update Auto Scaling groups and resize instances.

Availability design goal

We assume that at least some failures will require a manual decision to execute recovery. However, with good automation in this scenario, we assume that only two events per year will require this decision. We take 20 minutes to decide to execute recovery and assume that recovery is completed within 10 minutes. This implies that it takes 30 minutes to recover from failure. Assuming two failures per year, our estimated impact time for the year is 60 minutes.

This means that the upper limit on availability is 99.95%. The actual availability will also depend on the real rate of failure, the duration of the failure, and how quickly each failure actually recovers. For this architecture, we assume that the application is online continuously through updates. Based on this, our availability design goal is 99.95%.

Summary

• Monitor resources: Health checks at all layers, including DNS health at the AWS Region level, and on KPIs; alerts sent when configured alarms are tripped; alerting on all failures. Operational meetings are rigorous to detect trends and manage to design goals.
• Adapt to changes in demand: ELB for web and automatic scaling application tier; automatic scaling storage and read replicas in multiple zones in the active and passive Regions for Aurora RDS. Data and infrastructure synchronized between AWS Regions for static stability.
• Implement change: Automated deploy via canary or blue/green, and automated rollback when KPIs or alerts indicate undetected problems in the application. Deployments are made to one isolation zone in one AWS Region at a time.
• Back up data: Automated backups in each AWS Region via RDS to meet RPO, and automated restoration that is practiced regularly in a game day. Aurora RDS and S3 data is automatically and asynchronously replicated from the active to the passive Region.
• Architect for resiliency: Automatic scaling to provide self-healing web and application tier; RDS is Multi-AZ; regional failover is managed manually, with a static site presented while failing over.
• Test resiliency: Component and isolation zone fault testing is in the pipeline and practiced with operational staff regularly in a game day; playbooks exist for diagnosing unknown problems; and a Root Cause Analysis process exists with communication paths for what the problem was and how it was corrected or prevented. RCA correction is prioritized above feature releases for immediate implementation and deployment.
• Plan for disaster recovery (DR): Warm Standby deployed in another Region. Infrastructure is scaled out using workflows executed with AWS Step Functions or AWS Systems Manager documents. Encrypted backups via RDS. Cross-region read replicas between two AWS Regions. Cross-region replication of static assets in Amazon S3. Restoration to the currently active AWS Region is practiced in a game day and is coordinated with AWS.

5 9s (99.999%) or Higher Scenario with a Recovery Time under 1 Minute

This availability goal for applications requires almost no downtime or data loss for specific times. Applications that could have this availability goal include, for example, certain banking, investing, finance,
government, and critical business applications that are the core business of an extremely large revenue-generating business. The desire is to have strongly consistent data stores and complete redundancy at all layers.

We have selected a SQL-based data store. However, in some scenarios we will find it difficult to achieve a very small RPO. If you can partition your data, it's possible to have no data loss. This might require you to add application logic and latency to ensure that you have consistent data between geographic locations, as well as the capability to move or copy data between partitions. Performing this partitioning might be easier if you use a NoSQL database.

We can improve availability further by using an Active-Active approach across multiple AWS Regions. The workload will be deployed in all desired Regions and statically stable across Regions (so the remaining Regions can handle load with the loss of one Region). A routing layer directs traffic to geographic locations that are healthy and automatically changes the destination when a location is unhealthy, as well as temporarily stopping the data replication layers. Amazon Route 53 offers 10-second interval health checks and also offers TTLs on your record sets as low as one second.

Monitor resources

Same as the 3½ 9s scenario, plus alerting when a Region is detected as unhealthy and traffic is routed away from it.

Adapt to changes in demand

Same as the 3½ 9s scenario.

Implement change

The deployment pipeline will have a full test suite, including performance, load, and failure injection testing. We will deploy updates using canary or blue/green deployments to one isolation zone at a time, in one Region, before starting in the other. During the deployment, the old versions will still be kept running on instances to facilitate a faster rollback. These deployments are fully automated, including a rollback if KPIs indicate a problem. Monitoring will include success metrics as well as alerting when problems occur.

Runbooks will exist for rigorous reporting requirements and performance tracking. If successful operations are trending toward failure to meet performance or availability goals, a playbook will be used to establish what is causing the trend. Playbooks will exist for undiscovered failure modes and security incidents. Playbooks will also exist for establishing the root cause of failures. The team that builds the website also operates the website. That team will identify the correction of error of any unexpected failure and prioritize the fix to be deployed after it's implemented. We will also engage with AWS Support for Infrastructure Event Management.

Back up data

Same as the 3½ 9s scenario.

Architect for resiliency

The applications should be built using the software/application resiliency patterns. It's possible that many other routing layers may be required to implement the needed availability. The complexity of this additional implementation should not be underestimated. The application will be implemented in deployment fault isolation zones, and partitioned and deployed such that even a Region-wide event will not affect all customers.

Test resiliency

We will validate the architecture through game days using runbooks to ensure that we can perform the tasks and not deviate from the procedures.

Plan for disaster recovery (DR)

Active-Active multi-region deployment, with complete workload infrastructure and data in multiple Regions. Using a read local, write global strategy, one Region is the primary database for all writes, and data is replicated for reads to other Regions. If the primary DB Region fails, a new DB will need to be promoted. A read local, write partitioned strategy instead assigns users to a home Region where their DB writes are handled. This lets users read or write from any Region, but requires complex logic to manage potential data conflicts across writes in different Regions. When a Region is detected as unhealthy, the routing layer automatically routes traffic to the remaining healthy Regions. No manual intervention is required. Data stores must be replicated between the Regions in a manner that can resolve potential conflicts. Tools and automated processes will need to be created to copy or move data between the partitions for latency reasons, and to balance requests or amounts of data in each partition. Remediation during data conflict resolution will also require additional operational runbooks.

Availability design goal

We assume that heavy investments are made to automate all recovery, and that recovery can be completed within one minute. We assume no manually triggered recoveries, but up to one automated recovery action per quarter. This implies four minutes per year to recover. We assume that the application is online continuously through updates. Based on this, our availability design goal is 99.999%.

Summary

• Monitor resources: Health checks at all layers, including DNS health at the AWS Region level, and on KPIs; alerts sent when configured alarms are tripped; alerting on all failures. Operational meetings are rigorous to detect trends and manage to design goals.
• Adapt to changes in demand: ELB for web and automatic scaling application tier; automatic scaling storage and read replicas in multiple zones in the active and passive Regions for Aurora RDS. Data and infrastructure synchronized between AWS Regions for static stability.
• Implement change: Automated deploy via canary or blue/green, and automated rollback when KPIs or alerts indicate undetected problems in the application. Deployments are made to one isolation zone in one AWS Region at a time.
• Back up data: Automated backups in each AWS Region via RDS to meet RPO, and automated restoration that is practiced regularly in a game day. Aurora RDS and S3 data is automatically and asynchronously replicated from the active to the passive Region.
• Architect for resiliency: Implemented fault isolation zones for the application; automatic scaling to provide self-healing web and application tier; RDS is Multi-AZ; regional failover automated.
• Test resiliency: Component and isolation zone fault testing is in the pipeline and practiced with operational staff regularly in a game day; playbooks exist for diagnosing unknown problems; and a Root Cause Analysis process exists with communication paths for what the problem was and how it was corrected or prevented. RCA correction is prioritized above feature releases for immediate implementation and deployment.
• Plan for disaster recovery (DR): Active-Active deployed across at least two Regions. Infrastructure is fully scaled and statically stable across Regions. Data is partitioned and synchronized across Regions. Encrypted backups via RDS. Region failure is practiced in a game day and is coordinated with AWS. During restoration, a new database primary may need to be promoted.

Resources

Documentation
• The Amazon Builders' Library: How Amazon builds and operates software

Labs
• AWS Architecture Center Labs
• AWS Well-Architected Reliability Labs

External Links
• Adaptive Queuing Pattern: Fail at Scale
• Calculating Total System Availability

Books
• Robert S. Hanmer, "Patterns for Fault Tolerant Software"
• Andrew Tanenbaum and Maarten van Steen, "Distributed Systems: Principles and Paradigms"

Conclusion

Whether you are new to the topics of availability and reliability or a seasoned veteran seeking insights to maximize your mission-critical workload's availability, we hope this whitepaper has triggered your thinking, offered a new idea, or introduced a new line of questioning. We hope this leads to a deeper understanding of the right level of availability based on the needs of your business, and of how to design the reliability to achieve it. We encourage you to take advantage of the design, operational, and
recovery-oriented recommendations offered here, as well as the knowledge and experience of our AWS Solutions Architects. We'd love to hear from you, especially about your success stories achieving high levels of availability on AWS. Contact your account team or use Contact Us on our website.

Contributors

Contributors to this document include:
• Seth Eliot, Principal Reliability Solutions Architect, Well-Architected, Amazon Web Services
• Adrian Hornsby, Principal Technical Evangelist, Architecture, Amazon Web Services
• Philip Fitzsimons, Sr. Manager, Well-Architected, Amazon Web Services
• Rodney Lester, Principal Solutions Architect, Well-Architected, Amazon Web Services
• Kevin Miller, Director, Software Development, Amazon Web Services
• Shannon Richards, Sr. Technical Program Manager, Amazon Web Services

Further Reading

For additional information, see:
• AWS Well-Architected Framework

Document Revisions

To be notified about updates to this whitepaper, subscribe to the RSS feed.

• Whitepaper updated (December 7, 2020): Updated Appendix A to update the Availability Design Goal for Amazon SQS, Amazon SNS, and Amazon MQ; reordered rows in the table for easier lookup; improved the explanation of the differences between availability and disaster recovery and how they both contribute to resiliency; expanded coverage of multi-region architectures (for availability) and multi-region strategies (for disaster recovery); updated the referenced book to the latest version; expanded availability calculations to include request-based and shortcut calculations; improved the description of Game Days.
• Minor update (October 27, 2020): Updated Appendix A to update the Availability Design Goal for AWS Lambda.
• Minor update (July 24, 2020): Updated Appendix A to add the Availability Design Goal for AWS Global Accelerator.
• Updates for new Framework (July 8, 2020): Substantial updates and new/revised content, including: added a "Workload Architecture" best practices section; reorganized best practices into Change Management and Failure Management sections; updated Resources to include the latest AWS resources and services, such as AWS Global Accelerator, AWS Service Quotas, and AWS Transit Gateway; added/updated definitions for Reliability, Availability, and Resiliency; better aligned the whitepaper to the AWS Well-Architected Tool (questions and best practices) used for Well-Architected Reviews; reordered design principles, moving "Automatically recover from failure" before "Test recovery procedures"; updated diagrams and formats for equations; removed Key Services sections and instead integrated references to key AWS services into the best practices.
• Minor update (October 1, 2019): Fixed broken link.
• Whitepaper updated (April 1, 2019): Appendix A updated.
• Whitepaper updated (September 1, 2018): Added specific AWS Direct Connect networking recommendations and additional service design goals.
• Whitepaper updated (June 1, 2018): Added Design Principles and Limit Management sections. Updated links, removed ambiguity of upstream/downstream terminology, and added explicit references to the remaining Reliability Pillar topics in the availability scenarios.
• Whitepaper updated (March 1, 2018): Changed the DynamoDB Cross-Region solution to DynamoDB Global Tables. Added service design goals.
• Minor updates (December 1, 2017): Minor correction to the availability calculation to include application availability.
• Whitepaper updated (November 1, 2017): Updated to provide guidance on high availability designs, including concepts, best practices, and example implementations.
• Initial publication (November 1, 2016): Reliability Pillar, AWS Well-Architected Framework published.

Appendix A:
Designed-For Availability for Select AWS Services

Below we provide the availability that select AWS services were designed to achieve. These values do not represent a Service Level Agreement or guarantee, but rather provide insight into the design goals of each service. In certain cases, we differentiate portions of the service where there's a meaningful difference in the availability design goal. This list is not comprehensive for all AWS services, and we expect to periodically update it with information about additional services. Amazon CloudFront, Amazon Route 53, AWS Global Accelerator, and the AWS Identity and Access Management control plane provide global service, and the component availability goal is stated accordingly. Other services provide services within an AWS Region, and the availability goal is stated accordingly. Many services operate within an Availability Zone, separate from those in other Availability Zones. In these cases, we provide the availability design goal for a single AZ and when any two (or more) Availability Zones are used.

Note: The numbers in the following table do not refer to durability (long-term retention of data); they are availability numbers (access to data or functions).

Service / Component: Availability Design Goal

Amazon API Gateway
  Control Plane: 99.950%
  Data Plane: 99.990%
Amazon Aurora
  Control Plane: 99.950%
  Single-AZ Data Plane: 99.950%
  Multi-AZ Data Plane: 99.990%
Amazon CloudFront
  Control Plane: 99.900%
  Data Plane (content delivery): 99.990%
Amazon CloudSearch
  Control Plane: 99.950%
  Data Plane: 99.950%
Amazon CloudWatch
  CW Metrics (service): 99.990%
  CW Events (service): 99.990%
  CW Logs (service): 99.950%
Amazon DynamoDB
  Service (standard): 99.990%
  Service (Global Tables): 99.999%
Amazon Elastic Block Store
  Control Plane: 99.950%
  Data Plane (volume availability): 99.999%
Amazon Elastic Compute Cloud (Amazon EC2)
  Control Plane: 99.950%
  Single-AZ Data Plane: 99.950%
  Multi-AZ Data Plane: 99.990%
Amazon Elastic Container Service (Amazon ECS)
  Control Plane: 99.900%
  EC2 Container Registry: 99.990%
  EC2 Container Service: 99.990%
Amazon Elastic File System
  Control Plane: 99.950%
  Data Plane: 99.990%
Amazon ElastiCache
  Service: 99.990%
Amazon Elasticsearch Service
  Control Plane: 99.950%
  Data Plane: 99.950%
Amazon EMR
  Control Plane: 99.950%
Amazon Kinesis Data Firehose
  Service: 99.900%
Amazon Kinesis Data Streams
  Service: 99.990%
Amazon Kinesis Video Streams
  Service: 99.900%
Amazon MQ
  Data Plane: 99.950%
  Control Plane: 99.950%
Amazon Neptune
  Service: 99.900%
Amazon Redshift
  Control Plane: 99.950%
  Data Plane: 99.950%
Amazon Rekognition
  Service: 99.980%
Amazon Relational Database Service (Amazon RDS)
  Control Plane: 99.950%
  Single-AZ Data Plane: 99.950%
  Multi-AZ Data Plane: 99.990%
Amazon Route 53
  Control Plane: 99.950%
  Data Plane (query resolution): 100.000%
Amazon SageMaker
  Data Plane (Model Hosting): 99.990%
  Control Plane: 99.950%
Amazon Simple Notification Service (Amazon SNS)
  Data Plane: 99.990%
  Control Plane: 99.900%
Amazon Simple Queue Service (Amazon SQS)
  Data Plane: 99.980%
  Control Plane: 99.900%
Amazon Simple Storage Service (Amazon S3)
  Service (Standard): 99.990%
Amazon S3 Glacier
  Service: 99.900%
AWS Auto Scaling
  Control Plane: 99.900%
  Data Plane: 99.990%
AWS Batch
  Control Plane: 99.900%
  Data Plane: 99.950%
AWS CloudFormation
  Service: 99.950%
AWS CloudHSM
  Control Plane: 99.900%
  Single-AZ Data Plane: 99.900%
  Multi-AZ Data Plane: 99.990%
AWS CloudTrail
  Control Plane (config): 99.900%
  Data Plane (data events): 99.990%
  Data Plane (management events): 99.999%
AWS Config
  Service: 99.950%
AWS Data Pipeline
  Service: 99.990%
AWS Database Migration Service (AWS DMS)
  Control Plane: 99.900%
  Data Plane: 99.950%
AWS Direct Connect
  Control Plane: 99.900%
AWS Global Accelerator
  Control Plane: 99.900%
  Data Plane: 99.995%
AWS Glue
  Service: 99.990%
AWS Identity and Access Management
  Control Plane: 99.900%
  Data Plane (authentication): 99.995%
AWS IoT Core
  Service: 99.900%
AWS IoT Device Management
  Service: 99.900%
AWS IoT Greengrass
  Service: 99.900%
AWS Key Management Service (AWS KMS)
  Control Plane: 99.990%
  Data Plane: 99.995%
  Single Location Data Plane: 99.900%
  Multi Location Data Plane: 99.990%
AWS Lambda
  Function Invocation: 99.990%
AWS Secrets Manager
  Service: 99.900%
AWS Shield
  Control Plane: 99.500%
  Data Plane (detection): 99.000%
  Data Plane (mitigation): 99.900%
AWS Storage Gateway
  Control Plane: 99.950%
  Data Plane: 99.950%
AWS X-Ray
  Control Plane (console): 99.900%
  Data Plane: 99.950%
Elastic Load Balancing
  Control Plane: 99.950%
  Data Plane: 99.990%
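These per-component design goals can be combined into a rough upper bound for a workload: for hard dependencies invoked in series, availabilities multiply. A minimal sketch of that arithmetic (our own illustration, not part of the appendix), using the Multi-AZ EC2 data plane, the ELB data plane, and the EBS volume data plane as the example chain:

```python
def serial_availability(*components: float) -> float:
    """Upper bound on availability (%) for a chain of hard dependencies:
    the product of the individual component availabilities."""
    result = 1.0
    for a in components:
        result *= a / 100.0
    return 100.0 * result

# EC2 Multi-AZ data plane x ELB data plane x EBS volume data plane
print(round(serial_availability(99.990, 99.990, 99.999), 3))  # prints 99.979
```

Note that this is only an upper bound on the infrastructure contribution; the actual workload availability also depends on application failures and recovery times, as discussed in the scenarios above.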
Infrastructure Protect ion 19 Protecting Networks 20 Protecting Compute 23 Data Protection 27 Data Classification 27 Protecting Dat a at Rest 29 Protecting Data in Transit 32 Incident Response 34 Design Goals of Cloud Response 34 Educate 35 Prepare 36 Simulate 38 Iterate 39 Conclusion 40 ArchivedContributors 40 Further Reading 41 Document Revisions 41 ArchivedAbstract The focus of this paper is the security pillar of the WellArchitected Framework It provides guidance to help you apply best practices current recommendations in the design delivery and maintenance of secure AWS workloads ArchivedAmazon Web Services Security Pillar 1 Introduction The AWS Well Architected Framework helps you understand trade offs for decisions you make while building workloads on AWS By using the Framework you will learn current architectural best practices for designing and operating reliable secure efficient and cost effective workloads in the cloud It provides a way fo r you to consistently measure your workload against best practices and identify areas for improvement We believe that having well architected workload s greatly increases the likelihood of business success The framework is based on five pillars: • Operation al Excellence • Security • Reliability • Performance Efficiency • Cost Optimization This paper focuses on the security pillar This will help you meet your business and regulatory requirements by following current AWS recommendations It’s intended for those in technology roles such as chief technology officers (CTOs) chief information security officers (CSOs/CISOs) architects developers and operations team members After reading this paper you will understand AWS current recommendations and strategies to use when designing cloud architectures with security in mind This paper doesn ’t provide implementation details or architectural patterns but does include references to appropriate resources for this information By adopting the practices in this paper you can build 
architectures that protect your data and systems control access and respond automatically to security events ArchivedAmazon Web Services Security P illar 2 Security The security pillar describes how to take advantage of cloud technologies to protect data systems and assets in a way that can improve your security posture This paper provides in depth best practice guidance for architecting secure workloads on AWS Design Principles In the cloud there are a number of principles that can help you strengthen your workload security: • Implement a strong identity foundation: Implement the principle of least privilege and enforce separation of duties with appropriate authorization for each interaction with your AWS resources Centralize identity management and aim to eliminate reliance on long term static credentials • Enable traceability: Monitor alert and audit act ions and changes to your environment in real time Integrate log and metric collection with systems to automatically investigate and take action • Apply security at all layers: Apply a defense in depth approach with multiple security controls Apply to all layers (for example edge of network VPC load balancing every instance and compute service operating system application and code) • Automate security best practices: Automated software based security mechanisms improve your ability to securely scale more rapidly and cost effectively Create secure architectures including the implementation of controls that are defined and managed as code in version controlled templates • Protect data in transit and at rest : Classify your data into sensitivity le vels and use mechanisms such as encryption tokenization and access control where appropriate • Keep people away from data: Use mechanisms and tools to reduce or eliminate the need for direct access or manual processing of data This reduces the risk of mishandling or modification and human error when handling sensitive data • Prepare for security events: Prepare for an 
incident by having incident management and investigation policy and processes that align to your organizational requirements. Run incident response simulations, and use tools with automation to increase your speed for detection, investigation, and recovery.

Definition

Security in the cloud is composed of five areas:

1. Identity and access management
2. Detection
3. Infrastructure protection
4. Data protection
5. Incident response

Security and Compliance is a shared responsibility between AWS and you, the customer. This shared model can help reduce your operational burden. You should carefully examine the services you choose, as your responsibilities vary depending on the services used, the integration of those services into your IT environment, and applicable laws and regulations. The nature of this shared responsibility also provides the flexibility and control that permits the deployment.

Operating Your Workload Securely

To operate your workload securely, you must apply overarching best practices to every area of security. Take requirements and processes that you have defined in operational excellence at an organizational and workload level, and apply them to all areas. Staying up to date with AWS and industry recommendations and threat intelligence helps you evolve your threat model and control objectives. Automating security processes, testing, and validation allow you to scale your security operations.

Identify and prioritize risks using a threat model: Use a threat model to identify and maintain an up-to-date register of potential threats. Prioritize your threats, and adapt your security controls to prevent, detect, and respond. Revisit and maintain this in the context of the evolving security landscape.

Identify and validate control objectives: Based on your compliance requirements and risks identified from your threat model, derive and validate the control objectives and controls that you need to apply to your workload. Ongoing validation of
control objectives and controls helps you measure the effectiveness of risk mitigation.

Keep up to date with security threats: Recognize attack vectors by staying up to date with the latest security threats to help you define and implement appropriate controls.

Keep up to date with security recommendations: Stay up to date with both AWS and industry security recommendations to evolve the security posture of your workload.

Evaluate and implement new security services and features regularly: Evaluate and implement security services and features from AWS and APN Partners that allow you to evolve the security posture of your workload.

Automate testing and validation of security controls in pipelines: Establish secure baselines and templates for security mechanisms that are tested and validated as part of your build pipelines and processes. Use tools and automation to test and validate all security controls continuously. For example, scan items such as machine images and infrastructure as code templates for security vulnerabilities, irregularities, and drift from an established baseline at each stage. Reducing the number of security misconfigurations introduced into a production environment is critical: the more quality control and reduction of defects you can perform in the build process, the better. Design continuous integration and continuous deployment (CI/CD) pipelines to test for security issues whenever possible. CI/CD pipelines offer the opportunity to enhance security at each stage of build and delivery. CI/CD security tooling must also be kept updated to mitigate evolving threats.

Resources

Refer to the following resources to learn more about operating your workload securely.

Videos
• Security Best Practices the Well-Architected Way
• Enable AWS adoption at scale with automation and governance
• AWS Security Hub: Manage Security Alerts & Automate Compliance
• Automate your security on AWS

Documentation
• Overview of Security
Processes
• Security Bulletins
• Security Blog
• What's New with AWS
• AWS Security Audit Guidelines
• Set Up a CI/CD Pipeline on AWS

AWS Account Management and Separation

We recommend that you organize workloads in separate accounts and group accounts based on function, compliance requirements, or a common set of controls rather than mirroring your organization's reporting structure. In AWS, accounts are a hard boundary, zero trust container for your resources. For example, account-level separation is strongly recommended for isolating production workloads from development and test workloads.

Separate workloads using accounts: Start with security and infrastructure in mind to enable your organization to set common guardrails as your workloads grow. This approach provides boundaries and controls between workloads. Account-level separation is strongly recommended for isolating production environments from development and test environments, or providing a strong logical boundary between workloads that process data of different sensitivity levels, as defined by external compliance requirements (such as PCI DSS or HIPAA), and workloads that don't.

Secure AWS accounts: There are a number of aspects to securing your AWS accounts, including securing, and not using, the root user, and keeping the contact information up to date. You can use AWS Organizations to centrally manage and govern your accounts as you grow and scale your workloads. AWS Organizations helps you manage accounts, set controls, and configure services across your accounts.

Manage accounts centrally: AWS Organizations automates AWS account creation and management, and control of those accounts after they are created. When you create an account through AWS Organizations, it is important to consider the email address you use, as this will be the root user that allows the password to be reset. Organizations allows you to group accounts into organizational units (OUs), which can
represent different environments based on the workload's requirements and purpose.

Set controls centrally: Control what your AWS accounts can do by only allowing specific services, Regions, and service actions at the appropriate level. AWS Organizations allows you to use service control policies (SCPs) to apply permission guardrails at the organization, organizational unit, or account level, which apply to all AWS Identity and Access Management (IAM) users and roles. For example, you can apply an SCP that restricts users from launching resources in Regions that you have not explicitly allowed. AWS Control Tower offers a simplified way to set up and govern multiple accounts. It automates the setup of accounts in your AWS Organization, automates provisioning, applies guardrails (which include prevention and detection), and provides you with a dashboard for visibility.

Configure services and resources centrally: AWS Organizations helps you configure AWS services that apply to all of your accounts. For example, you can configure central logging of all actions performed across your organization using AWS CloudTrail, and prevent member accounts from disabling logging. You can also centrally aggregate data for rules that you've defined using AWS Config, enabling you to audit your workloads for compliance and react quickly to changes. AWS CloudFormation StackSets allow you to centrally manage AWS CloudFormation stacks across accounts and OUs in your organization. This allows you to automatically provision a new account to meet your security requirements.

Resources

Refer to the following resources to learn more about AWS recommendations for deploying and managing multiple AWS accounts.

Videos
• Managing and governing multi-account AWS environments using AWS Organizations
• AXA: Scaling adoption with a Global Landing Zone
• Using AWS Control Tower to Govern Multi-Account AWS Environments

Documentation
• Establishing your best practice AWS
environment
• AWS Organizations
• AWS Control Tower
• Working with AWS CloudFormation StackSets
• How to use service control policies to set permission guardrails across accounts in your AWS Organization

Hands-on
• Lab: AWS Account and Root User

Identity and Access Management

To use AWS services, you must grant your users and applications access to resources in your AWS accounts. As you run more workloads on AWS, you need robust identity management and permissions in place to ensure that the right people have access to the right resources under the right conditions. AWS offers a large selection of capabilities to help you manage your human and machine identities and their permissions. The best practices for these capabilities fall into two main areas:

• Identity management
• Permissions management

Identity Management

There are two types of identities you need to manage when approaching operating secure AWS workloads:

• Human Identities: The administrators, developers, operators, and consumers of your applications require an identity to access your AWS environments and applications. These can be members of your organization, or external users with whom you collaborate, who interact with your AWS resources via a web browser, client application, mobile app, or interactive command-line tools.
• Machine Identities: Your workload applications, operational tools, and components require an identity to make requests to AWS services, for example, to read data. These identities include machines running in your AWS environment, such as Amazon EC2 instances or AWS Lambda functions. You can also manage machine identities for external parties who need access. Additionally, you might also have machines outside of AWS that need access to your AWS environment.

Rely on a centralized identity provider: For workforce identities, rely on an identity provider that enables you to manage identities in a centralized place. This makes it easier to manage access
across multiple applications and services, because you are creating, managing, and revoking access from a single location. For example, if someone leaves your organization, you can revoke access for all applications and services (including AWS) from one location. This reduces the need for multiple credentials and provides an opportunity to integrate with existing human resources (HR) processes.

For federation with individual AWS accounts, you can use centralized identities for AWS with a SAML 2.0-based provider with AWS IAM. You can use any provider, whether hosted by you in AWS, external to AWS, or supplied by the AWS Partner Network (APN), that is compatible with the SAML 2.0 protocol. You can use federation between your AWS account and your chosen provider to grant a user or application access to call AWS API operations by using a SAML assertion to get temporary security credentials. Web-based single sign-on is also supported, allowing users to sign in to the AWS Management Console from your sign-in portal.

For federation to multiple accounts in your AWS Organization, you can configure your identity source in AWS Single Sign-On (AWS SSO) and specify where your users and groups are stored. Once configured, your identity provider is your source of truth, and information can be synchronized using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. You can then look up users or groups and grant them single sign-on access to AWS accounts, cloud applications, or both.

AWS SSO integrates with AWS Organizations, which enables you to configure your identity provider once and then grant access to existing and new accounts managed in your organization. AWS SSO provides you with a default store, which you can use to manage your users and groups. If you choose to use the AWS SSO store, create your users and groups and assign their level of access to your AWS accounts and applications, keeping in mind the best practice of least privilege.
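The level of access you assign to a group is ultimately expressed as an IAM policy document. As a minimal sketch of what least privilege looks like in practice (the bucket name "example-reports" and the read-only S3 use case are illustrative assumptions, not values from this paper), a policy for a group that only needs to read from one bucket might be constructed like this:

```python
import json

# Minimal sketch of a least-privilege IAM policy document.
# The bucket name and the choice of S3 read-only actions are
# assumptions for illustration only.
def read_only_s3_policy(bucket: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ReadOnlyForOneBucket",
                "Effect": "Allow",
                # Only the actions this group actually needs,
                # rather than a broad grant such as s3:*.
                "Action": ["s3:ListBucket", "s3:GetObject"],
                # Scoped to one bucket: the bucket ARN covers
                # ListBucket, the object ARN pattern covers GetObject.
                "Resource": [
                    "arn:aws:s3:::" + bucket,
                    "arn:aws:s3:::" + bucket + "/*",
                ],
            }
        ],
    }

if __name__ == "__main__":
    print(json.dumps(read_only_s3_policy("example-reports"), indent=2))
```

The design point is the narrow `Action` and `Resource` lists: a group whose task is reading reports gets exactly those two actions on exactly one bucket, so an audit of "who can access which resources" stays tractable as groups multiply.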
Alternatively, you can choose to Connect to Your External Identity Provider using SAML 2.0, or Connect to Your Microsoft AD Directory using AWS Directory Service. Once configured, you can sign into the AWS Management Console, command line interface, or the AWS mobile app by authenticating through your central identity provider.

For managing end users or consumers of your workloads, such as a mobile app, you can use Amazon Cognito. It provides authentication, authorization, and user management for your web and mobile apps. Your users can sign in directly with a user name and password, or through a third party, such as Amazon, Apple, Facebook, or Google.

Leverage user groups and attributes: As the number of users you manage grows, you will need to determine ways to organize them so that you can manage them at scale. Place users with common security requirements in groups defined by your identity provider, and put mechanisms in place to ensure that user attributes that may be used for access control (for example, department or location) are correct and updated. Use these groups and attributes to control access, rather than individual users. This allows you to manage access centrally by changing a user's group membership or attributes once with a permission set, rather than updating many individual policies when a user's access needs change. You can use AWS SSO to manage user groups and attributes. AWS SSO supports most commonly used attributes, whether they are entered manually during user creation or automatically provisioned using a synchronization engine, such as defined in the System for Cross-domain Identity Management (SCIM) specification.

Use strong sign-in mechanisms: Enforce minimum password length, and educate your users to avoid common or reused passwords. Enforce multi-factor authentication (MFA) with software or hardware mechanisms to provide an additional layer of verification. For example, when using AWS SSO as the identity source, configure
the "context-aware" or "always-on" setting for MFA, and allow users to enroll their own MFA devices to accelerate adoption. When using an external identity provider (IdP), configure your IdP for MFA.

Use temporary credentials: Require identities to dynamically acquire temporary credentials. For workforce identities, use AWS SSO or federation with IAM to access AWS accounts. For machine identities, such as EC2 instances or Lambda functions, require the use of IAM roles instead of IAM users with long-term access keys.

For human identities using the AWS Management Console, require users to acquire temporary credentials and federate into AWS. You can do this using the AWS SSO user portal or configuring federation with IAM. For users requiring CLI access, ensure that they use AWS CLI v2, which supports direct integration with AWS Single Sign-On (AWS SSO). Users can create CLI profiles that are linked to AWS SSO accounts and roles. The CLI automatically retrieves AWS credentials from AWS SSO and refreshes them on your behalf. This eliminates the need to copy and paste temporary AWS credentials from the AWS SSO console. For the SDK, users should rely on AWS STS to assume roles to receive temporary credentials. In certain cases, temporary credentials might not be practical. You should be aware of the risks of storing access keys, rotate these often, and require MFA as a condition when possible.

For cases where you need to grant consumers access to your AWS resources, use Amazon Cognito identity pools, and assign them a set of temporary, limited-privilege credentials to access your AWS resources. The permissions for each user are controlled through IAM roles that you create. You can define rules to choose the role for each user based on claims in the user's ID token. You can define a default role for authenticated users. You can also define a separate IAM role with limited permissions for guest users who are not authenticated.

For machine identities, you
should rely on IAM roles to grant access to AWS. For EC2 instances, you can use roles for Amazon EC2. You can attach an IAM role to your EC2 instance to enable your applications running on Amazon EC2 to use temporary security credentials that AWS creates, distributes, and rotates automatically. For accessing EC2 instances using keys or passwords, AWS Systems Manager is a more secure way to access and manage your instances, using a pre-installed agent without the stored secret. Additionally, other AWS services, such as AWS Lambda, enable you to configure an IAM service role to grant the service permissions to perform AWS actions using temporary credentials.

Audit and rotate credentials periodically: Periodic validation, preferably through an automated tool, is necessary to verify that the correct controls are enforced. For human identities, you should require users to change their passwords periodically and retire access keys in favor of temporary credentials. We also recommend that you continuously monitor MFA settings in your identity provider. You can set up AWS Config Rules to monitor these settings. For machine identities, you should rely on temporary credentials using IAM roles. For situations where this is not possible, frequent auditing and rotating of access keys is necessary.

Store and use secrets securely: For credentials that are not IAM-related, such as database logins, use a service that is designed to handle management of secrets, such as AWS Secrets Manager. AWS Secrets Manager makes it easy to manage, rotate, and securely store encrypted secrets using supported services. Calls to access the secrets are logged in CloudTrail for auditing purposes, and IAM permissions can grant least privilege access to them.

Resources

Refer to the following resources to learn more about AWS best practices for protecting your AWS credentials.

Videos
• Mastering identity at every layer of the cake
• Managing user permissions at scale with AWS SSO
• Best Practices for Managing, Retrieving, & Rotating
Secrets at Scale

Documentation
• The AWS Account Root User
• AWS Account Root User Credentials vs. IAM User Credentials
• IAM Best Practices
• Setting an Account Password Policy for IAM Users
• Getting Started with AWS Secrets Manager
• Using Instance Profiles
• Temporary Security Credentials
• Identity Providers and Federation

Permissions Management

Manage permissions to control access to people and machine identities that require access to AWS and your workloads. Permissions control who can access what, and under what conditions. Set permissions to specific human and machine identities to grant access to specific service actions on specific resources. Additionally, specify conditions that must be true for access to be granted. For example, you can allow developers to create new Lambda functions, but only in a specific Region. When managing your AWS environments at scale, adhere to the following best practices to ensure that identities only have the access they need and nothing more.

Define permission guardrails for your organization: As you grow and manage additional workloads in AWS, you should separate these workloads using accounts and manage those accounts using AWS Organizations. We recommend that you establish common permission guardrails that restrict access to all identities in your organization. For example, you can restrict access to specific AWS Regions, or prevent your team from deleting common resources, such as an IAM role used by your central security team. You can get started by implementing example service control policies, such as preventing users from disabling key services.

You can use AWS Organizations to group accounts and set common controls on each group of accounts. To set these common controls, you can use services integrated with AWS Organizations. Specifically, you can use service control policies (SCPs) to restrict access to groups of accounts. SCPs use the IAM policy language and enable you to establish
controls that all IAM principals (users and roles) adhere to. You can restrict access to specific service actions, resources, and based on specific conditions, to meet the access control needs of your organization. If necessary, you can define exceptions to your guardrails. For example, you can restrict service actions for all IAM entities in the account, except for a specific administrator role.

Grant least privilege access: Establishing a principle of least privilege ensures that identities are only permitted to perform the most minimal set of functions necessary to fulfill a specific task, while balancing usability and efficiency. Operating on this principle limits unintended access and helps ensure that you can audit who has access to which resources. In AWS, identities have no permissions by default, with the exception of the root user, which should only be used for a few specific tasks.

You use policies to explicitly grant permissions attached to IAM or resource entities, such as an IAM role used by federated identities or machines, or resources (for example, S3 buckets). When you create and attach a policy, you can specify the service actions, resources, and conditions that must be true for AWS to allow access. AWS supports a variety of conditions to help you scope down access. For example, using the PrincipalOrgID condition key, the identifier of the AWS Organization is verified, so access can be granted within your AWS Organization. You can also control requests that AWS services make on your behalf, like AWS CloudFormation creating an AWS Lambda function, by using the CalledVia condition key. This enables you to set granular permissions for your human and machine identities across AWS.

AWS also has capabilities that enable you to scale your permissions management and adhere to least privilege.

Permissions Boundaries: You can use permission boundaries to set the maximum permissions that an administrator can set. This enables you to
delegate the ability to create and manage permissions to developers, such as the creation of an IAM role, but limit the permissions they can grant, so that they cannot escalate their privilege using what they have created.

Attribute-based access control (ABAC): AWS enables you to grant permissions based on attributes. In AWS, these are called tags. Tags can be attached to IAM principals (users or roles) and to AWS resources. Using IAM policies, administrators can create a reusable policy that applies permissions based on the attributes of the IAM principal. For example, as an administrator, you can use a single IAM policy that grants developers in your organization access to AWS resources that match the developers' project tags. As the team of developers adds resources to projects, permissions are automatically applied based on attributes. As a result, no policy update is required for each new resource.

Analyze public and cross-account access: In AWS, you can grant access to resources in another account. You grant direct cross-account access using policies attached to resources (for example, S3 bucket policies), or by allowing an identity to assume an IAM role in another account. When using resource policies, you want to ensure you grant access to identities in your organization, and are intentional about when you make a resource public. Making a resource public should be used sparingly, as this action allows anyone to access the resource. IAM Access Analyzer uses mathematical methods (that is, provable security) to identify all access paths to a resource from outside of its account. It reviews resource policies continuously, and reports findings of public and cross-account access to make it easy for you to analyze potentially broad access.

Share resources securely: As you manage workloads using separate accounts, there will be cases where you need to share resources between those accounts. We recommend that you share resources using AWS
Resource Access Manager (AWS RAM). This service enables you to easily and securely share AWS resources within your AWS Organization and organizational units. Using AWS RAM, access to shared resources is automatically granted or revoked as accounts are moved in and out of the Organization or organizational unit with which they are shared. This helps you ensure that resources are only shared with the accounts that you intend.

Reduce permissions continuously: Sometimes, when teams and projects are just getting started, you might choose to grant broad access to inspire innovation and agility. We recommend that you evaluate access continuously, and restrict access to only the permissions required to achieve least privilege. AWS provides access analysis capabilities to help you identify unused access. To help you identify unused users and roles, AWS analyzes access activity and provides access key and role last used information. You can use the last accessed timestamp to identify unused users and roles, and remove them. Moreover, you can review service and action last accessed information to identify and tighten permissions for specific users and roles. For example, you can use last accessed information to identify the specific S3 actions that your application role requires and restrict access to only those. These features are available in the console and programmatically, to enable you to incorporate them into your infrastructure workflows and automated tools.

Establish emergency access process: You should have a process that allows emergency access to your workload, in particular your AWS accounts, in the unlikely event of an automated process or pipeline issue. This process could include a combination of different capabilities, for example, an emergency AWS cross-account
current AWS best practices for finegrained authorization Videos • Become an IAM Policy Master in 60 Minutes or Less • Separation of Duties Least Privilege Delegation & CI/CD Documentation • Grant least privilege • Working with Policies • Delegating Permissions to Administer IAM Users Groups and Credentials • IAM Access Analyze r • Remove unnecessary credentials • Assuming a role in the CLI with MFA • Permissions Boundaries • Attribute based access control (ABAC) Hands on • Lab: IAM Permission Boundaries Delegating Role Creation • Lab: IAM Tag Based Access Control for EC2 • Lab: Lambda Cross Account IAM Role Assumption ArchivedAmazon Web Services Security Pillar 15 Detection Detection enables you to identify a potential security misconfiguration threat or unexpec ted behavior It’s an essential part of the security lifecycle and can be used to support a quality process a legal or compliance obligation and for threat identification and response efforts There are different types of detection mechanisms For exampl e logs from your workload can be analyzed for exploits that are being used You should regularly review the detection mechanisms related to your workload to ensure that you are meeting internal and external policies and requirements Automated alerting an d notifications should be based on defined conditions to enable your teams or tools to investigate These mechanisms are important reactive factors that can help your organization identify and understand the scope of anomalous activity In AWS there are a number of approaches you can use when addressing detective mechanisms The following sections describe how to use these approaches: • Configure • Investigate Configure Configure service and application logging : A foundational practice is to establish a set of detection mechanisms at the account level This base set of mechanisms is aimed at recording and detecting a wide range of actions on all resources in your account They allow you to build out a comprehensive 
detective capability, with options that include automated remediation and partner integrations to add functionality. In AWS, services in this base set include:

• AWS CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.
• AWS Config monitors and records your AWS resource configurations, and allows you to automate the evaluation and remediation against desired configurations.
• Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads.
• AWS Security Hub provides a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services and optional third-party products, to give you a comprehensive view of security alerts and compliance status.

Building on the foundation at the account level, many core AWS services, for example Amazon Virtual Private Cloud (VPC), provide service-level logging features. VPC Flow Logs enable you to capture information about the IP traffic going to and from network interfaces, which can provide valuable insight into connectivity history and trigger automated actions based on anomalous behavior.

For EC2 instances and application-based logging that doesn't originate from AWS services, logs can be stored and analyzed using Amazon CloudWatch Logs. An agent collects the logs from the operating system and the applications that are running, and automatically stores them. Once the logs are available in CloudWatch Logs, you can process them in real time, or dive into analysis using Insights.

Equally important to collecting and aggregating logs is the ability to extract meaningful insight from the great volumes of log and event data generated by complex architectures. See the Monitoring section of the Reliability Pillar whitepaper for more detail. Logs can
themselves contain data that is considered sensitive, either when application data has erroneously found its way into log files that the CloudWatch Logs agent is capturing, or when cross-region logging is configured for log aggregation and there are legislative considerations about shipping certain kinds of information across borders.

One approach is to use Lambda functions, triggered on events when logs are delivered, to filter and redact log data before forwarding it into a central logging location, such as an S3 bucket. The unredacted logs can be retained in a local bucket until a "reasonable time" has passed (as determined by legislation and your legal team), at which point an S3 lifecycle rule can automatically delete them. Logs can further be protected in Amazon S3 by using S3 Object Lock, where you can store objects using a write-once-read-many (WORM) model.

Analyze logs, findings, and metrics centrally: Security operations teams rely on the collection of logs and the use of search tools to discover potential events of interest, which might indicate unauthorized activity or unintentional change. However, simply analyzing collected data and manually processing information is insufficient to keep up with the volume of information flowing from complex architectures. Analysis and reporting alone don't facilitate the assignment of the right resources to work an event in a timely fashion.

A best practice for building a mature security operations team is to deeply integrate the flow of security events and findings into a notification and workflow system, such as a ticketing system, a bug/issue system, or other security information and event management (SIEM) system. This takes the workflow out of email and static reports, and allows you to route, escalate, and manage events or findings. Many organizations are also integrating security alerts into their chat/collaboration and developer productivity platforms. For organizations embarking on
automation, an API-driven, low-latency ticketing system offers considerable flexibility when planning "what to automate first." This best practice applies not only to security events generated from log messages depicting user activity or network events, but also to changes detected in the infrastructure itself. The ability to detect change, determine whether a change was appropriate, and then route that information to the correct remediation workflow is essential in maintaining and validating a secure architecture, particularly for changes whose undesirability is sufficiently subtle that their execution cannot currently be prevented with a combination of IAM and Organizations configuration.

GuardDuty and Security Hub provide aggregation, deduplication, and analysis mechanisms for log records that are also made available to you via other AWS services. Specifically, GuardDuty ingests, aggregates, and analyzes information from the VPC DNS service, and information which you can otherwise see via CloudTrail and VPC Flow Logs. Security Hub can ingest, aggregate, and analyze output from GuardDuty, AWS Config, Amazon Inspector, Macie, AWS Firewall Manager, and a significant number of third-party security products available in the AWS Marketplace, and, if built accordingly, your own code. Both GuardDuty and Security Hub have a Master/Member model that can aggregate findings and insights across multiple accounts, and Security Hub is often used by customers who have an on-premises SIEM as an AWS-side log and alert preprocessor and aggregator, from which they can then ingest Amazon EventBridge via a Lambda-based processor and forwarder.

Resources

Refer to the following resources to learn more about current AWS recommendations for capturing and analyzing logs.

Videos
• Threat management in the cloud: Amazon GuardDuty & AWS Security Hub
• Centrally Monitoring Resource Configuration & Compliance

Documentation
• Setting up Amazon
GuardDuty
• AWS Security Hub
• Getting started: Amazon CloudWatch Logs
• Amazon EventBridge
• Configuring Athena to analyze CloudTrail logs
• Amazon CloudWatch
• AWS Config
• Creating a trail in CloudTrail
• Centralize logging solution

Hands on
• Lab: Enable Security Hub
• Lab: Automated Deployment of Detective Controls
• Lab: Amazon GuardDuty hands on

Investigate

Implement actionable security events: For each detective mechanism you have, you should also have a process, in the form of a runbook or playbook, to investigate. For example, when you enable Amazon GuardDuty, it generates different findings. You should have a runbook entry for each finding type; for example, if a trojan is discovered, your runbook has simple instructions that direct someone to investigate and remediate.

Automate response to events: In AWS, routing events of interest and information about potentially unexpected changes into an automated workflow can be achieved using Amazon EventBridge. This service provides a scalable rules engine designed to broker both native AWS event formats (such as CloudTrail events) and custom events that you can generate from your application. Amazon EventBridge also allows you to route events to a workflow system for those building incident response systems (Step Functions), to a central Security Account, or to a bucket for further analysis.

Detecting change and routing this information to the correct workflow can also be accomplished using AWS Config rules. AWS Config detects changes to in-scope services (though with higher latency than Amazon EventBridge) and generates events that can be parsed using AWS Config rules for rollback, enforcement of compliance policy, and forwarding of information to systems such as change management platforms and operational ticketing systems. As well as writing your own Lambda functions to respond to AWS Config events, you can also take advantage of the AWS Config Rules Development
Kit and a library of open-source AWS Config Rules.

Resources

Refer to the following resources to learn more about current AWS best practices for integrating auditing controls with notification and workflow.

Videos
• Amazon Detective
• Remediating Amazon GuardDuty and AWS Security Hub Findings
• Best Practices for Managing Security Operations on AWS
• Achieving Continuous Compliance using AWS Config

Documentation
• Amazon Detective
• Amazon EventBridge
• AWS Config Rules
• AWS Config Rules Repository (open source)
• AWS Config Rules Development Kit

Hands on
• Solution: Real-Time Insights on AWS Account Activity
• Solution: Centralized Logging

Infrastructure Protection

Infrastructure protection encompasses control methodologies, such as defense in depth, that are necessary to meet best practices and organizational or regulatory obligations. Use of these methodologies is critical for successful, ongoing operations in the cloud.

Infrastructure protection is a key part of an information security program. It ensures that systems and services within your workload are protected against unintended and unauthorized access and potential vulnerabilities. For example, you'll define trust boundaries (for example, network and account boundaries), system security configuration and maintenance (for example, hardening, minimization, and patching), operating system authentication and authorizations (for example, users, keys, and access levels), and other appropriate policy enforcement points (for example, web application firewalls and/or API gateways).

In AWS, there are a number of approaches to infrastructure protection. The following sections describe how to use these approaches:
• Protecting networks
• Protecting compute

Protecting Networks

The careful planning and management of your network design forms the foundation of how you provide isolation and boundaries for resources within your workload. Because many resources in your workload operate in a VPC and
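A custom rule of the kind you would build with the AWS Config Rules Development Kit reduces to an evaluation function over a configuration item. The sketch below assumes a deliberately simplified item shape; real configuration items carry many more fields:

```python
def evaluate_ebs_volume(configuration_item: dict) -> str:
    """Return the compliance verdict a custom AWS Config rule would report
    for an EBS volume: NON_COMPLIANT unless the volume is encrypted."""
    configuration = configuration_item.get("configuration", {})
    return "COMPLIANT" if configuration.get("encrypted") else "NON_COMPLIANT"
```

In a deployed rule, a Lambda function would receive the configuration item in its event, apply logic like this, and report the verdict back through the AWS Config put_evaluations API.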
inherit the security properties, it's critical that the design is supported with inspection and protection mechanisms backed by automation. Likewise, for workloads that operate outside a VPC, using purely edge services and/or serverless, the best practices apply in a more simplified approach. Refer to the AWS Well-Architected Serverless Applications Lens for specific guidance on serverless security.

Create network layers: Components such as EC2 instances, RDS database clusters, and Lambda functions that share reachability requirements can be segmented into layers formed by subnets. For example, an RDS database cluster in a VPC with no need for internet access should be placed in subnets with no route to or from the internet. This layered approach to controls mitigates the impact of a single-layer misconfiguration, which could allow unintended access. For AWS Lambda, you can run your functions in your VPC to take advantage of VPC-based controls.

For network connectivity that can include thousands of VPCs, AWS accounts, and on-premises networks, you should use AWS Transit Gateway. It acts as a hub that controls how traffic is routed among all the connected networks, which act like spokes. Traffic between an Amazon VPC and AWS Transit Gateway remains on the AWS private network, which reduces external threat vectors such as distributed denial of service (DDoS) attacks and common exploits such as SQL injection, cross-site scripting, cross-site request forgery, or abuse of broken authentication code. AWS Transit Gateway inter-region peering also encrypts inter-region traffic, with no single point of failure or bandwidth bottleneck.

Control traffic at all layers: When architecting your network topology, you should examine the connectivity requirements of each component: for example, whether a component requires internet accessibility (inbound and outbound), connectivity to VPCs, edge services, and external data centers. A VPC allows you to define your
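One way to audit the layering described above is to classify each subnet from its route table. The sketch below assumes route dictionaries shaped like the Routes entries that EC2's DescribeRouteTables API returns, which is a simplification:

```python
def is_internet_routable(routes: list) -> bool:
    """A subnet is internet-routable when its route table contains a route
    targeting an internet gateway (gateway IDs start with 'igw-')."""
    return any(route.get("GatewayId", "").startswith("igw-") for route in routes)
```

A periodic audit job could flag any subnet that is internet-routable but tagged as belonging to a private data tier.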
network topology that spans an AWS Region, with a private IPv4 address range that you set or an IPv6 address range that AWS selects. You should apply multiple controls with a defense-in-depth approach for both inbound and outbound traffic, including the use of security groups (stateful inspection firewalls), network ACLs, subnets, and route tables. Within a VPC, you can create subnets in an Availability Zone. Each subnet can have an associated route table that defines routing rules for managing the paths that traffic takes within the subnet. You can define an internet-routable subnet by having a route that goes to an internet or NAT gateway attached to the VPC, or through another VPC.

When an instance, RDS database, or other service is launched within a VPC, it has its own security group per network interface. This firewall is outside the operating system layer and can be used to define rules for allowed inbound and outbound traffic. You can also define relationships between security groups. For example, instances within a database-tier security group can accept traffic only from instances within the application tier, by reference to the security groups applied to the instances involved. Unless you are using non-TCP protocols, it shouldn't be necessary to have an EC2 instance directly accessible by the internet (even with ports restricted by security groups) without a load balancer or CloudFront. This helps protect it from unintended access through an operating system or application issue. A subnet can also have a network ACL attached to it, which acts as a stateless firewall. You should configure the network ACL to narrow the scope of traffic allowed between layers; note that you need to define both inbound and outbound rules.

While some AWS services require components to access the internet to make API calls (this being where AWS API endpoints are located), others use endpoints within your VPCs. Many AWS services, including Amazon S3 and DynamoDB, support VPC endpoints, and this technology has been
generalized in AWS PrivateLink. For VPC assets that need to make outbound connections to the internet, these can be made outbound-only (one-way) through an AWS-managed NAT gateway, an outbound-only internet gateway, or web proxies that you create and manage.

Implement inspection and protection: Inspect and filter your traffic at each layer. For components transacting over HTTP-based protocols, a web application firewall can help protect from common attacks. AWS WAF is a web application firewall that lets you monitor and block HTTP(S) requests that match your configurable rules and that are forwarded to an Amazon API Gateway API, Amazon CloudFront, or an Application Load Balancer. To get started with AWS WAF, you can use AWS Managed Rules in combination with your own, or use existing partner integrations.

For managing AWS WAF, AWS Shield Advanced protections, and Amazon VPC security groups across AWS Organizations, you can use AWS Firewall Manager. It allows you to centrally configure and manage firewall rules across your accounts and applications, making it easier to scale enforcement of common rules. It also enables you to rapidly respond to attacks, using AWS Shield Advanced, or solutions that can automatically block unwanted requests to your web applications.

Automate network protection: Automate protection mechanisms to provide a self-defending network based on threat intelligence and anomaly detection: for example, intrusion detection and prevention tools that can adapt to current threats and reduce their impact. A web application firewall is an example of where you can automate network protection, for example, by using the AWS WAF Security Automations solution (https://github.com/awslabs/aws-waf-security-automations) to automatically block requests originating from IP addresses associated with known threat actors.

Resources

Refer to the following resources to learn more about AWS best practices for protecting networks.

Video
• AWS Transit Gateway reference architectures for many VPCs
• Application Acceleration and Protection with Amazon CloudFront, AWS WAF, and AWS Shield
• DDoS Attack Detection at Scale

Documentation
• Amazon VPC Documentation
• Getting started with AWS WAF
• Network Access Control Lists
• Security Groups for Your VPC
• Recommended Network ACL Rules for Your VPC
• AWS Firewall Manager
• AWS PrivateLink
• VPC Endpoints
• Amazon Inspector

Hands on
• Lab: Automated Deployment of VPC
• Lab: Automated Deployment of Web Application Firewall

Protecting Compute

Perform vulnerability management: Frequently scan and patch for vulnerabilities in your code, your dependencies, and your infrastructure to help protect against new threats. Using a build and deployment pipeline, you can automate many parts of vulnerability management:

• Using third-party static code analysis tools to identify common security issues, such as unchecked function input bounds, as well as more recent CVEs. You can use Amazon CodeGuru for supported languages.
• Using third-party dependency checking tools to determine whether the libraries your code links against are the latest versions, are themselves free of CVEs, and have licensing conditions that meet your software policy requirements.
• Using Amazon Inspector, you can perform configuration assessments against your instances for known common vulnerabilities and exposures (CVEs), assess against security benchmarks, and fully automate the notification of defects. Amazon Inspector runs on production instances or in a build pipeline, and it notifies developers and engineers when findings are present. You can access findings programmatically and direct your team to backlogs and bug-tracking systems. EC2 Image Builder can be used to maintain server images (AMIs) with automated patching, AWS-provided security policy enforcement, and other customizations.
• When using containers, implement ECR Image Scanning in
your build pipeline, and on a regular basis against your image repository, to look for CVEs in your containers.
• While Amazon Inspector and other tools are effective at identifying configurations and any CVEs that are present, other methods are required to test your workload at the application level. Fuzzing is a well-known method of finding bugs: using automation to inject malformed data into input fields and other areas of your application.

A number of these functions can be performed using AWS services, products in the AWS Marketplace, or open-source tooling.

Reduce attack surface: Reduce your attack surface by hardening operating systems and minimizing the components, libraries, and externally consumable services in use. To reduce your attack surface, you need a threat model to identify the entry points and potential threats that could be encountered. A common practice in reducing attack surface is to start by removing unused components, whether they are operating system packages and applications (for EC2-based workloads) or external software modules in your code (for all workloads). Many hardening and security configuration guides exist for common operating systems and server software, for example from the Center for Internet Security, that you can use as a starting point and iterate on.

Enable people to perform actions at a distance: Removing the ability for interactive access reduces the risk of human error and the potential for manual configuration or management. For example, use a change management workflow to manage EC2 instances using tools such as AWS Systems Manager, instead of allowing direct access or access via a bastion host. AWS Systems Manager can automate a variety of maintenance and deployment tasks, using features including automation workflows, documents (playbooks), and the run command. AWS CloudFormation stacks, built from pipelines, can automate your infrastructure deployment and management tasks without using the AWS Management
Console or APIs directly.

Implement managed services: Implement services that manage resources, such as Amazon RDS, AWS Lambda, and Amazon ECS, to reduce your security maintenance tasks as part of the shared responsibility model. For example, Amazon RDS helps you set up, operate, and scale a relational database, and automates administration tasks such as hardware provisioning, database setup, patching, and backups. This means you have more free time to focus on securing your application in the other ways described in the AWS Well-Architected Framework. AWS Lambda lets you run code without provisioning or managing servers, so you only need to focus on the connectivity, invocation, and security at the code level, not the infrastructure or operating system.

Validate software integrity: Implement mechanisms (e.g., code signing) to validate that the software, code, and libraries used in the workload are from trusted sources and have not been tampered with. For example, you should verify the code-signing certificate of binaries and scripts to confirm the author, and to ensure the artifact has not been tampered with since it was created by the author. Additionally, comparing a checksum of software that you download against the checksum published by the provider can help ensure it has not been tampered with.

Automate compute protection: Automate your protective compute mechanisms, including vulnerability management, reduction in attack surface, and management of resources. The automation will help you invest time in securing other aspects of your workload, and reduce the risk of human error.

Resources

Refer to the following resources to learn more about AWS best practices for protecting compute.

Video
• Security best practices for the Amazon EC2 instance metadata service
• Securing Your Block Storage on AWS
• Securing Serverless and Container Services
• Running high-security workloads on Amazon EKS
• Architecting Security through Policy Guardrails in Amazon EKS

Documentation
• Security
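The checksum comparison described above takes only a few lines; the file path and the published digest here are placeholders:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded file, reading in chunks
    so large artifacts don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def integrity_ok(path: str, published_digest: str) -> bool:
    """Compare the local digest against the one the provider publishes."""
    return sha256_of(path) == published_digest.lower()
```

This only establishes that the download matches what the provider published; code signing additionally confirms who published it.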
Overview of AWS Lambda
• Security in Amazon EC2
• AWS Systems Manager
• Amazon Inspector
• Writing your own AWS Systems Manager documents
• Replacing a Bastion Host with Amazon EC2 Systems Manager

Hands on
• Lab: Automated Deployment of EC2 Web Application

Data Protection

Before architecting any workload, foundational practices that influence security should be in place. For example, data classification provides a way to categorize data based on levels of sensitivity, and encryption protects data by rendering it unintelligible to unauthorized access. These methods are important because they support objectives such as preventing mishandling or complying with regulatory obligations.

In AWS, there are a number of different approaches you can use when addressing data protection. The following section describes how to use these approaches:
• Data classification
• Protecting data at rest
• Protecting data in transit

Data Classification

Data classification provides a way to categorize organizational data based on criticality and sensitivity, in order to help you determine appropriate protection and retention controls.

Identify the data within your workload: You need to understand the type and classification of data your workload is processing, the associated business processes, the data owner, applicable legal and compliance requirements, where it's stored, and the resulting controls that need to be enforced. This may include classifications to indicate whether the data is intended to be publicly available, whether the data is for internal use only, such as customer personally identifiable information (PII), or whether the data is for more restricted access, such as intellectual property, legally privileged, or marked sensitive, and more. By carefully managing an appropriate data classification system, along with each workload's level of protection requirements, you can map the controls and level of access/protection appropriate for the data. For example,
public content is available for anyone to access, but important content is encrypted and stored in a protected manner that requires authorized access to a key for decrypting the content.

Define data protection controls: By using resource tags, separate AWS accounts per sensitivity (and potentially also per caveat / enclave / community of interest), IAM policies, Organizations SCPs, AWS KMS, and AWS CloudHSM, you can define and implement your policies for data classification and protection with encryption. For example, if you have a project with S3 buckets that contain highly critical data, or EC2 instances that process confidential data, they can be tagged with a "Project=ABC" tag. Only your immediate team knows what the project code means, and it provides a way to use attribute-based access control. You can define levels of access to the AWS KMS encryption keys through key policies and grants, to ensure that only appropriate services have access to the sensitive content through a secure mechanism. If you are making authorization decisions based on tags, you should make sure that the permissions on the tags are defined appropriately, using tag policies in AWS Organizations.

Define data lifecycle management: Your defined lifecycle strategy should be based on sensitivity level, as well as legal and organization requirements. Aspects to consider include the duration for which you retain data, data destruction processes, data access management, data transformation, and data sharing. When choosing a data classification methodology, balance usability versus access. You should also accommodate the multiple levels of access and nuances for implementing a secure, but still usable, approach for each level. Always use a defense-in-depth approach, and reduce human access to data and to mechanisms for transforming, deleting, or copying data. For example, require users to strongly authenticate to an application, and give the application, rather than the users,
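The tag-based approach above can be made mechanical by mapping a classification tag to the controls it requires. The DataClassification tag key, the level names, and the control flags below are hypothetical examples rather than an AWS convention:

```python
# Hypothetical classification levels and the controls each one requires.
REQUIRED_CONTROLS = {
    "public":       {"encrypt_with_kms": False, "restricted_access": False},
    "internal":     {"encrypt_with_kms": True,  "restricted_access": False},
    "confidential": {"encrypt_with_kms": True,  "restricted_access": True},
}

def controls_for(tags: dict) -> dict:
    """Look up the required controls from a resource's tags, defaulting to
    the strictest level when the classification tag is missing or unknown."""
    level = tags.get("DataClassification", "confidential")
    return REQUIRED_CONTROLS.get(level, REQUIRED_CONTROLS["confidential"])
```

Defaulting unknown or untagged resources to the strictest level fails safe: a tagging gap tightens controls rather than loosening them.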
the requisite access permission to perform "action at a distance." In addition, ensure that users come from a trusted network path, and require access to the decryption keys. Use tools such as dashboards and automated reporting to give users information from the data, rather than giving them direct access to the data.

Automate identification and classification: Automating the identification and classification of data can help you implement the correct controls. Using automation for this, instead of direct access from a person, reduces the risk of human error and exposure. You should evaluate using a tool such as Amazon Macie, which uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property, and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved.

Resources

Refer to the following resources to learn more about data classification.

Documentation
• Data Classification Whitepaper
• Tagging Your Amazon EC2 Resources
• Amazon S3 Object Tagging

Protecting Data at Rest

Data at rest represents any data that you persist in non-volatile storage for any duration in your workload. This includes block storage, object storage, databases, archives, IoT devices, and any other storage medium on which data is persisted. Protecting your data at rest reduces the risk of unauthorized access when encryption and appropriate access controls are implemented.

Encryption and tokenization are two important but distinct data protection schemes. Tokenization is a process that allows you to define a token to represent an otherwise sensitive piece of information (for example, a token to represent a customer's credit card number). A token must be meaningless on its own, and must not be derived from the data it is tokenizing; therefore, a cryptographic digest is not usable as a
token. By carefully planning your tokenization approach, you can provide additional protection for your content, and you can ensure that you meet your compliance requirements. For example, you can reduce the compliance scope of a credit card processing system if you leverage a token instead of a credit card number.

Encryption is a way of transforming content in a manner that makes it unreadable without a secret key necessary to decrypt the content back into plaintext. Both tokenization and encryption can be used to secure and protect information as appropriate. Further, masking is a technique that allows part of a piece of data to be redacted to a point where the remaining data is not considered sensitive. For example, PCI-DSS allows the last four digits of a card number to be retained outside the compliance scope boundary for indexing.

Implement secure key management: By defining an encryption approach that includes the storage, rotation, and access control of keys, you can help provide protection for your content against unauthorized users and against unnecessary exposure to authorized users. AWS KMS helps you manage encryption keys, and integrates with many AWS services. This service provides durable, secure, and redundant storage for your master keys. You can define your key aliases, as well as key-level policies. The policies help you define key administrators as well as key users. Additionally, AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys in the AWS Cloud. It helps you meet corporate, contractual, and regulatory compliance requirements for data security by using FIPS 140-2 Level 3 validated HSMs.

Enforce encryption at rest: You should ensure that the only way to store data is by using encryption. AWS KMS integrates seamlessly with many AWS services to make it easier for you to encrypt all your data at rest. For example, in Amazon S3, you can set default encryption on a bucket so that all new objects are automatically encrypted. Additionally, Amazon EC2 supports the enforcement of encryption by setting a default encryption option for an entire Region.

Enforce access control: Different controls, including access (using least privilege), backups (see the Reliability whitepaper), isolation, and versioning, can all help protect your data at rest. Access to your data should be audited using the detective mechanisms covered earlier in this paper, including CloudTrail and service-level logs such as S3 access logs. You should inventory what data is publicly accessible, and plan for how you can reduce the amount of data available over time. Amazon S3 Glacier Vault Lock and S3 Object Lock are capabilities providing mandatory access control: once a vault policy is locked with the compliance option, not even the root user can change it until the lock expires. The mechanism meets the Books and Records Management requirements of the SEC, CFTC, and FINRA. For more details, see this whitepaper.

Audit the use of encryption keys: Ensure that you understand and audit the use of encryption keys, to validate that the access control mechanisms on the keys are appropriately implemented. For example, any AWS service using an AWS KMS key logs each use in AWS CloudTrail. You can then query AWS CloudTrail, using a tool such as Amazon CloudWatch Insights, to ensure that all uses of your keys are valid.

Use mechanisms to keep people away from data: Keep all users away from directly accessing sensitive data and systems under normal operational circumstances. For example, use a change management workflow to manage EC2 instances using tools, instead of allowing direct access or a bastion host. This can be achieved using AWS Systems Manager Automation, which uses automation documents that contain the steps you use to perform tasks. These documents can be stored in source control, peer reviewed before running, and tested thoroughly to minimize risk compared to shell access. Business users
could have a dashboard instead of direct access to a data store to run queries. Where CI/CD pipelines are not used, determine which controls and processes are required to adequately provide a normally disabled break-glass access mechanism.

Automate data-at-rest protection: Use automated tools to validate and enforce data-at-rest controls continuously, for example, to verify that there are only encrypted storage resources. You can automate validation that all EBS volumes are encrypted using AWS Config Rules. AWS Security Hub can also verify a number of different controls through automated checks against security standards. Additionally, your AWS Config Rules can automatically remediate noncompliant resources.

Resources

Refer to the following resources to learn more about AWS best practices for protecting data at rest.

Video
• How Encryption Works in AWS
• Securing Your Block Storage on AWS
• Achieving security goals with AWS CloudHSM
• Best Practices for Implementing AWS Key Management Service
• A Deep Dive into AWS Encryption Services

Documentation
• Protecting Amazon S3 Data Using Encryption
• Amazon EBS Encryption
• Encrypting Amazon RDS Resources
• Protecting Data Using Encryption
• How AWS services use AWS KMS
• AWS Key Management Service
• AWS CloudHSM
• AWS KMS Cryptographic Details Whitepaper
• Using Key Policies in AWS KMS
• Using Bucket Policies and User Policies
• AWS Crypto Tools

Protecting Data in Transit

Data in transit is any data that is sent from one system to another. This includes communication between resources within your workload, as well as communication between other services and your end users. By providing the appropriate level of protection for your data in transit, you protect the confidentiality and integrity of your workload's data.

Implement secure key and certificate management: Store encryption keys and certificates securely, and
rotate them at appropriate time intervals with strict access control. The best way to accomplish this is to use a managed service such as AWS Certificate Manager (ACM). It lets you easily provision, manage, and deploy public and private Transport Layer Security (TLS) certificates for use with AWS services and your internal connected resources. TLS certificates are used to secure network communications and establish the identity of websites over the internet, as well as resources on private networks. ACM integrates with AWS resources, such as Elastic Load Balancers, Amazon CloudFront distributions, and APIs on API Gateway, also handling automatic certificate renewals. If you use ACM to deploy a private root CA, both certificates and private keys can be provided by it for use in EC2 instances, containers, and so on.

Enforce encryption in transit: Enforce your defined encryption requirements, based on appropriate standards and recommendations, to help you meet your organizational, legal, and compliance requirements. AWS services provide HTTPS endpoints using TLS for communication, thus providing encryption in transit when communicating with the AWS APIs. Insecure protocols, such as HTTP, can be audited and blocked in a VPC through the use of security groups. HTTP requests can also be automatically redirected to HTTPS in Amazon CloudFront, or on an Application Load Balancer. You have full control over your computing resources to implement encryption in transit across your services. Additionally, you can use VPN connectivity into your VPC from an external network to facilitate encryption of traffic. Third-party solutions are available in the AWS Marketplace if you have special requirements.

Authenticate network communications: Using network protocols that support authentication allows for trust to be established between the parties. This adds to the encryption used in the protocol to reduce the risk of communications being altered or intercepted. Common protocols that implement authentication include
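A widely documented way to enforce encryption in transit for S3, in line with the guidance above, is a bucket policy that denies any request not made over TLS; the bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "Bool": { "aws:SecureTransport": "false" }
      }
    }
  ]
}
```

The explicit Deny overrides any Allow, so even a principal with broad S3 permissions cannot read or write this bucket over plain HTTP.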
Transport Layer Security (TLS), which is used in many AWS services, and IPsec, which is used in AWS Virtual Private Network (AWS VPN).

Automate detection of unintended data access: Use tools such as Amazon GuardDuty to automatically detect attempts to move data outside of defined boundaries, based on data classification level: for example, to detect a trojan that is copying data to an unknown or untrusted network using the DNS protocol. In addition to Amazon GuardDuty, Amazon VPC Flow Logs, which capture network traffic information, can be used with Amazon EventBridge to trigger detection of abnormal connections, both successful and denied. S3 Access Analyzer can help assess what data is accessible to whom in your S3 buckets.

Resources

Refer to the following resources to learn more about AWS best practices for protecting data in transit.

Video
• How can I add certificates for websites to the ELB using AWS Certificate Manager
• Deep Dive on AWS Certificate Manager Private CA

Documentation
• AWS Certificate Manager
• HTTPS Listeners for Your Application Load Balancer
• AWS VPN
• API Gateway Edge-Optimized

Incident Response

Even with extremely mature preventive and detective controls, your organization should still implement mechanisms to respond to and mitigate the potential impact of security incidents. Your preparation strongly affects the ability of your teams to operate effectively during an incident, to isolate and contain issues, and to restore operations to a known good state. Putting the tools and access in place ahead of a security incident, then routinely practicing incident response through game days, helps ensure that you can recover while minimizing business disruption.

Design Goals of Cloud Response

Although the general processes and mechanisms of incident response, such as those defined in the NIST SP 800-61 Computer Security Incident Handling Guide, remain true, we encourage you to
evaluate these specific design goals that are relevant to responding to security incidents in a cloud environment:

• Establish response objectives: Work with your stakeholders, legal counsel, and organizational leadership to determine the goal of responding to an incident. Some common goals include containing and mitigating the issue, recovering the affected resources, preserving data for forensics, and attribution.
• Document plans: Create plans to help you respond to, communicate during, and recover from an incident.
• Respond using the cloud: Implement your response patterns where the event and data occur.
• Know what you have and what you need: Preserve logs, snapshots, and other evidence by copying them to a centralized security cloud account. Use tags, metadata, and mechanisms that enforce retention policies. For example, you might choose to use the Linux dd command, or a Windows equivalent, to make a complete copy of the data for investigative purposes.
• Use redeployment mechanisms: If a security anomaly can be attributed to a misconfiguration, the remediation might be as simple as removing the variance by redeploying the resources with the proper configuration. When possible, make your response mechanisms safe to execute more than once and in environments in an unknown state.
• Automate where possible: As you see issues or incidents repeat, build mechanisms that programmatically triage and respond to common situations. Use human responses for unique, new, and sensitive incidents.
• Choose scalable solutions: Strive to match the scalability of your organization's approach to cloud computing, and reduce the time between detection and response.
• Learn and improve your process: When you identify gaps in your process, tools, or people, implement plans to fix them. Simulations are safe methods to find gaps and improve processes.

In AWS, there are a number of different approaches you can use when addressing incident response. The following
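The forensic-copy step mentioned above (the Linux dd command) and its verification can be sketched as follows; the paths are placeholders for a captured volume or image:

```shell
# Make a bit-for-bit copy of the evidence, then verify that the copy's
# digest matches the original before any analysis begins.
printf 'captured evidence' > /tmp/source.img   # stand-in for a real capture
dd if=/tmp/source.img of=/tmp/copy.img bs=4096 status=none
sha256sum /tmp/source.img /tmp/copy.img        # the two digests must match
```

Recording the digests alongside the copy also supports chain-of-custody requirements, since any later tampering with the copy would change its digest.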
The following section describes how to use these approaches:
• Educate your security operations and incident response staff about cloud technologies and how your organization intends to use them.
• Prepare your incident response team to detect and respond to incidents in the cloud, enable detective capabilities, and ensure appropriate access to the necessary tools and cloud services. Additionally, prepare the necessary runbooks, both manual and automated, to ensure reliable and consistent responses. Work with other teams to establish expected baseline operations, and use that knowledge to identify deviations from those normal operations.
• Simulate both expected and unexpected security events within your cloud environment to understand the effectiveness of your preparation.
• Iterate on the outcome of your simulation to improve the scale of your response posture, reduce time to value, and further reduce risk.

Educate
Automated processes enable organizations to spend more time focusing on measures to increase the security of their workloads. Automated incident response also makes humans available to correlate events, practice simulations, devise new response procedures, perform research, develop new skills, and test or build new tools. Despite increased automation, your team, specialists, and responders within a security organization still require continuous education. Beyond general cloud experience, you need to significantly invest in your people to be successful. Your organization can benefit by providing additional training to your staff to learn programming skills, development processes (including version control systems and deployment practices), and infrastructure automation. The best way to learn is hands-on, through running incident response game days. This allows the experts on your team to hone their tools and techniques while teaching others.

Prepare
During an incident, your incident response teams must have access to various tools and the workload resources involved in the incident. Make sure that your teams have appropriate pre-provisioned access to perform their duties before an event occurs. All tools, access, and plans should be documented and tested before an event occurs to make sure that they can provide a timely response.

Identify key personnel and external resources: When you define your approach to incident response in the cloud, in unison with other teams (such as your legal counsel, leadership, business stakeholders, AWS Support Services, and others), you must identify key personnel, stakeholders, and relevant contacts. To reduce dependency and decrease response time, make sure that your team, specialist security teams, and responders are educated about the services that you use and have opportunities to practice hands-on. We encourage you to identify external AWS security partners that can provide you with outside expertise and a different perspective to augment your response capabilities. Your trusted security partners can help you identify potential risks or threats that you might not be familiar with.

Develop incident management plans: Create plans to help you respond to, communicate during, and recover from an incident. For example, you can start an incident response plan with the most likely scenarios for your workload and organization, written in the form of playbooks. Include how you would communicate and escalate both internally and externally. These might be events that are currently generated. If you need a starting place, look at AWS Trusted Advisor and Amazon GuardDuty findings. Use a simple format, such as markdown, so the plan is easily maintained, but ensure that important commands or code snippets are included so they can be executed without having to look up other documentation. Start simple and iterate.

Work closely with your security experts and partners to identify the tasks required to ensure that the processes are possible. Define the manual descriptions of the processes you perform. After this, test the processes and iterate on the runbook pattern to improve the core logic of your response. Determine what the exceptions are and what the alternative resolutions are for those scenarios. For example, in a development environment you might want to terminate a misconfigured Amazon EC2 instance. But if the same event occurred in a production environment, instead of terminating the instance you might stop the instance and verify with stakeholders that critical data will not be lost and that termination is acceptable. When you are comfortable with the manual response to the process, automate it to reduce the time to resolution.

Pre-provision access: Ensure that incident responders have the correct access pre-provisioned into AWS and other relevant systems to reduce the time needed for investigation through to recovery. Determining how to get access for the right people during an incident delays the time it takes to respond, and can introduce other security weaknesses if access is shared or not properly provisioned while under pressure. You must know what level of access your team members require (for example, what kinds of actions they are likely to take), and you must provision access in advance. Access in the form of roles or users created specifically to respond to a security incident is often privileged in order to provide sufficient access. Therefore, use of these user accounts should be restricted, they should not be used for daily activities, and usage should be alerted on.

Pre-deploy tools: Ensure that security personnel have the right tools pre-deployed into AWS to reduce the time needed for investigation through to recovery. To automate security engineering and operations functions, you can use a comprehensive set of APIs and tools from AWS.
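The environment-aware remediation described above (terminate in development, stop and verify in production) can be reduced to a small, testable decision plus a thin execution layer. A sketch, assuming Python with boto3; the environment names are illustrative.

```python
def remediation_action(environment: str) -> str:
    """In development, terminating a misconfigured instance is acceptable;
    in production, stop it first so stakeholders can confirm no critical
    data will be lost before termination. Environment names are
    illustrative assumptions."""
    return "stop" if environment == "production" else "terminate"

def remediate_instance(instance_id: str, environment: str) -> str:
    """Apply the chosen action to the instance and return it for logging."""
    import boto3  # imported here so the decision logic stays testable offline
    ec2 = boto3.client("ec2")
    action = remediation_action(environment)
    if action == "stop":
        ec2.stop_instances(InstanceIds=[instance_id])
    else:
        ec2.terminate_instances(InstanceIds=[instance_id])
    return action
```

Keeping the decision separate from the AWS calls makes the runbook logic easy to exercise in a simulation without touching real resources.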
You can fully automate identity management, network security, data protection, and monitoring capabilities, and deliver them using popular software development methods that you already have in place. When you build security automation, your system can monitor, review, and initiate a response, rather than having people monitor your security position and manually react to events.

If your incident response teams continue to respond to alerts in the same way, they risk alert fatigue. Over time, the team can become desensitized to alerts and can either make mistakes handling ordinary situations or miss unusual alerts. Automation helps avoid alert fatigue by using functions that process the repetitive and ordinary alerts, leaving humans to handle the sensitive and unique incidents. You can improve manual processes by programmatically automating steps in the process. After you define the remediation pattern for an event, you can decompose that pattern into actionable logic and write the code to perform that logic. Responders can then execute that code to remediate the issue. Over time, you can automate more and more steps, and ultimately automatically handle whole classes of common incidents.

For tools that execute within the operating system of your EC2 instance, you should evaluate using AWS Systems Manager Run Command, which enables you to remotely and securely administer instances using an agent that you install on your Amazon EC2 instance operating system. It requires the AWS Systems Manager Agent (SSM Agent), which is installed by default on many Amazon Machine Images (AMIs). Be aware, though, that once an instance has been compromised, no responses from tools or agents running on it should be considered trustworthy.

Prepare forensic capabilities: Identify and prepare forensic investigation capabilities that are suitable, including external specialists, tools, and automation. Some of your incident response activities might include analyzing disk images, file systems, RAM dumps, or other artifacts that are involved in an incident.
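As a sketch of the Run Command approach just described, the following assumes Python with boto3 and the AWS-managed `AWS-RunShellScript` document; the commands collected are illustrative, and, as noted above, output gathered from a compromised instance should not be treated as trustworthy.

```python
def run_command_params(instance_ids, commands):
    """Build a Run Command request for the AWS-managed AWS-RunShellScript
    document. The command list passed in is an illustrative assumption."""
    return {
        "InstanceIds": list(instance_ids),
        "DocumentName": "AWS-RunShellScript",
        "Parameters": {"commands": list(commands)},
    }

def collect_artifacts(instance_ids):
    """Run basic triage commands on instances via SSM and return the
    command ID so results can be fetched later."""
    import boto3  # imported lazily so the builder stays testable offline
    ssm = boto3.client("ssm")
    params = run_command_params(instance_ids, ["uname -a", "ps aux"])
    return ssm.send_command(**params)["Command"]["CommandId"]
```

Because Run Command goes through the SSM Agent and the AWS API rather than SSH, responders need no inbound network access to the instance, and every invocation is recorded in CloudTrail.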
Build a customized forensic workstation that your team can use to mount copies of any affected data volumes. As forensic investigation techniques require specialist training, you might need to engage external specialists.

Simulate
Run game days: Game days, also known as simulations or exercises, are internal events that provide a structured opportunity to practice your incident management plans and procedures during a realistic scenario. Game days are fundamentally about being prepared and iteratively improving your response capabilities. Some of the reasons you might find value in performing game day activities include:
• Validating readiness
• Developing confidence: learning from simulations and training staff
• Following compliance or contractual obligations
• Generating artifacts for accreditation
• Being agile: incremental improvement
• Becoming faster and improving tools
• Refining communication and escalation
• Developing comfort with the rare and the unexpected
For these reasons, the value derived from participating in a security incident response simulation (SIRS) activity increases an organization's effectiveness during stressful events. Developing a SIRS activity that is both realistic and beneficial can be a difficult exercise. Although testing your procedures or automation that handles well-understood events has certain advantages, it is just as valuable to participate in creative SIRS activities to test yourself against the unexpected and continuously improve.

Iterate
Automate containment and recovery capability: Automate containment and recovery of an incident to reduce response times and organizational impact. Once you create and practice the processes and tools from your playbooks, you can deconstruct the logic into a code-based solution, which can be used as a tool by many responders to automate the response and remove variance or guesswork by your responders. This can speed up the lifecycle of a response. The next goal is to enable this code to be fully automated by being invoked by the alerts or events themselves, rather than by a human responder, to create an event-driven response. With an event-driven response system, a detective mechanism triggers a responsive mechanism to automatically remediate the event. You can use event-driven response capabilities to reduce the time-to-value between detective mechanisms and responsive mechanisms. To create this event-driven architecture, you can use AWS Lambda, which is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you.

For example, assume that you have an AWS account with the AWS CloudTrail service enabled. If AWS CloudTrail is ever disabled (through the cloudtrail:StopLogging API call), you can use Amazon EventBridge to monitor for the specific cloudtrail:StopLogging event and invoke an AWS Lambda function to call cloudtrail:StartLogging to restart logging.

Resources
Refer to the following resources to learn more about current AWS best practices for incident response.
Videos
• Prepare for & respond to security incidents in your AWS environment
• Automating Incident Response and Forensics
• DIY guide to runbooks, incident reports, and incident response
Documentation
• AWS Incident Response Guide
• AWS Step Functions
• Amazon EventBridge
• CloudEndure Disaster Recovery
Hands-on
• Lab: Incident Response with AWS Console and CLI
• Lab: Incident Response Playbook with Jupyter - AWS IAM
• Blog: Orchestrating a security incident response with AWS Step Functions

Conclusion
Security is an ongoing effort. When incidents occur, they should be treated as opportunities to improve the security of the architecture. Having strong identity controls, automating responses to security events, protecting infrastructure at multiple levels, and managing well-classified data with encryption provides defense in depth that every organization should implement.
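The CloudTrail remediation described earlier (restart logging when StopLogging is called) lends itself to a short Lambda handler. This is a sketch, assuming Python with boto3 and an EventBridge rule that matches `StopLogging` API calls; the event parsing follows CloudTrail's documented API-call event structure.

```python
def trail_from_event(event: dict):
    """Pull the trail name out of a CloudTrail API-call event delivered
    by EventBridge. Defensive .get() calls make the handler safe to run
    against unexpected event shapes."""
    return event.get("detail", {}).get("requestParameters", {}).get("name")

def handler(event, context):
    """Invoked by an EventBridge rule matching cloudtrail:StopLogging;
    turns logging back on for the affected trail."""
    import boto3  # imported lazily so the parser stays testable offline
    trail = trail_from_event(event)
    if trail:
        boto3.client("cloudtrail").start_logging(Name=trail)
    return {"restarted": trail}
```

Because the handler is idempotent (calling StartLogging on an already-logging trail is harmless), it is safe to execute more than once, in line with the redeployment design goal above.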
This effort is easier thanks to the programmatic functions and AWS features and services discussed in this paper. AWS strives to help you build and operate architectures that protect information systems and assets while delivering business value.

Contributors
The following individuals and organizations contributed to this document:
• Ben Potter, Principal Security Lead, Well-Architected, Amazon Web Services
• Bill Shinn, Senior Principal, Office of the CISO, Amazon Web Services
• Brigid Johnson, Senior Software Development Manager, AWS Identity, Amazon Web Services
• Byron Pogson, Senior Solution Architect, Amazon Web Services
• Darran Boyd, Principal Security Solutions Architect, Financial Services, Amazon Web Services
• Dave Walker, Principal Specialist Solutions Architect, Security and Compliance, Amazon Web Services
• Paul Hawkins, Senior Security Strategist, Amazon Web Services
• Sam Elmalak, Senior Technology Leader, Amazon Web Services

Further Reading
For additional help, please consult the following sources:
• AWS Well-Architected Framework whitepaper

Document Revisions
• July 2020: Updated guidance on account, identity, and permissions management
• April 2020: Updated to expand advice in every area; new best practices, services, and features
• July 2018: Updates to reflect new AWS services and features, and updated references
• May 2017: Updated System Security Configuration and Maintenance section to reflect new AWS services and features
• November 2016: First publication

Serverless Applications Lens: AWS Well-Architected Framework (December 2019)
This paper has been archived. The latest version is now available at: https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/welcome.html

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. © 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
• Introduction
• Definitions
• Compute Layer
• Data Layer
• Messaging and Streaming Layer
• User Management and Identity Layer
• Edge Layer
• Systems Monitoring and Deployment
• Deployment Approaches
• General Design Principles
• Scenarios
• RESTful Microservices
• Alexa Skills
• Mobile Backend
• Stream Processing
• Web Application
• The Pillars of the Well-Architected Framework
• Operational Excellence Pillar
• Security Pillar
• Reliability Pillar
• Performance Efficiency Pillar
• Cost Optimization Pillar
• Conclusion
• Contributors
• Further Reading
• Document Revisions

Abstract
This document describes the Serverless Applications Lens for the AWS Well-Architected Framework. The document covers common serverless application scenarios and identifies key elements to ensure that your workloads are architected according to best practices.

Introduction
The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using the Framework, you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures
against best practices and identify areas for improvement. We believe that having well-architected systems greatly increases the likelihood of business success.

In this Lens, we focus on how to design, deploy, and architect your serverless application workloads in the AWS Cloud. For brevity, we have only covered details from the Well-Architected Framework that are specific to serverless workloads. You should still consider best practices and questions that have not been included in this document when designing your architecture. We recommend that you read the AWS Well-Architected Framework whitepaper.

This document is intended for those in technology roles, such as chief technology officers (CTOs), architects, developers, and operations team members. After reading this document, you will understand AWS best practices and strategies to use when designing architectures for serverless applications.

Definitions
The AWS Well-Architected Framework is based on five pillars: operational excellence, security, reliability, performance efficiency, and cost optimization. For serverless workloads, AWS provides multiple core components (serverless and non-serverless) that allow you to design robust architectures for your serverless applications. In this section, we present an overview of the services that will be used throughout this document. There are seven areas you should consider when building a serverless workload:
• Compute layer
• Data layer
• Messaging and streaming layer
• User management and identity layer
• Edge layer
• Systems monitoring and deployment
• Deployment approaches

Compute Layer
The compute layer of your workload manages requests from external systems, controlling access and ensuring requests are appropriately authorized. It contains the runtime environment in which your business logic will be deployed and executed. AWS Lambda lets you run stateless serverless applications on a managed platform that supports microservice architectures, deployment, and management of execution at the function layer. With Amazon API Gateway, you can run a fully managed REST API that integrates with Lambda to execute your business logic and includes traffic management, authorization and access control, monitoring, and API versioning. AWS Step Functions orchestrates serverless workflows, including coordination, state, and function chaining, as well as combining long-running executions not supported within Lambda execution limits by breaking them into multiple steps or by calling workers running on Amazon Elastic Compute Cloud (Amazon EC2) instances or on-premises.

Data Layer
The data layer of your workload manages persistent storage from within a system. It provides a secure mechanism to store the states that your business logic will need, and a mechanism to trigger events in response to data changes. Amazon DynamoDB helps you build serverless applications by providing a managed NoSQL database for persistent storage. Combined with DynamoDB Streams, you can respond in near real time to changes in your DynamoDB table by invoking Lambda functions. DynamoDB Accelerator (DAX) adds a highly available in-memory cache for DynamoDB that delivers up to 10x performance improvement, from milliseconds to microseconds. With Amazon Simple Storage Service (Amazon S3), you can build serverless web applications and websites by providing a highly available key-value store from which static assets can be served via a Content Delivery Network (CDN) such as Amazon CloudFront. Amazon Elasticsearch Service (Amazon ES) makes it easy to deploy, secure, operate, and scale Elasticsearch for log analytics, full-text search, application monitoring, and more. Amazon ES is a fully managed service that provides both a search engine and analytics tools. AWS AppSync is a managed GraphQL service with real-time and offline capabilities, as well as enterprise-grade security controls, that make developing applications simple.
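The DynamoDB Streams pattern described above can be sketched as a Lambda handler; the record layout follows the documented Streams event structure, while what you do with each changed key is application-specific (the `print` stand-in here is an illustrative assumption).

```python
def changed_keys(stream_event: dict) -> list:
    """Extract the primary-key images from a DynamoDB Streams batch.
    The record layout follows the documented Streams event structure."""
    keys = []
    for record in stream_event.get("Records", []):
        keys.append(record["dynamodb"]["Keys"])
    return keys

def handler(event, context):
    """Lambda entry point: react to table changes in near real time."""
    for key in changed_keys(event):
        # Application-specific reaction goes here; printing is a stand-in.
        print("item changed:", key)
```

Because Streams delivers each change at least once, the reaction should be idempotent, in line with the "design for failures and duplicates" principle later in this Lens.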
AWS AppSync provides a data-driven API and consistent programming language for applications and devices to connect to services such as DynamoDB, Amazon ES, and Amazon S3.

Messaging and Streaming Layer
The messaging layer of your workload manages communications between components. The streaming layer manages real-time analysis and processing of streaming data. Amazon Simple Notification Service (Amazon SNS) provides a fully managed messaging service for pub/sub patterns using asynchronous event notifications and mobile push notifications for microservices, distributed systems, and serverless applications. Amazon Kinesis makes it easy to collect, process, and analyze real-time streaming data. With Amazon Kinesis Data Analytics, you can run standard SQL or build entire streaming applications using SQL. Amazon Kinesis Data Firehose captures, transforms, and loads streaming data into Kinesis Data Analytics, Amazon S3, Amazon Redshift, and Amazon ES, enabling near-real-time analytics with existing business intelligence tools.

User Management and Identity Layer
The user management and identity layer of your workload provides identity, authentication, and authorization for both external and internal customers of your workload's interfaces. With Amazon Cognito, you can easily add user sign-up, sign-in, and data synchronization to serverless applications. Amazon Cognito user pools provide built-in sign-in screens and federation with Facebook, Google, Amazon, and Security Assertion Markup Language (SAML). Amazon Cognito Federated Identities lets you securely provide scoped access to AWS resources that are part of your serverless architecture.

Edge Layer
The edge layer of your workload manages the presentation layer and connectivity to external customers. It provides an efficient delivery method to external customers residing in distinct geographical locations. Amazon CloudFront provides a CDN that securely delivers web application content and data with low latency and high transfer speeds.

Systems Monitoring and Deployment
The system monitoring layer of your workload manages system visibility through metrics and creates contextual awareness of how it operates and behaves over time. The deployment layer defines how your workload changes are promoted through a release management process. With Amazon CloudWatch, you can access system metrics on all the AWS services you use, consolidate system- and application-level logs, and create business key performance indicators (KPIs) as custom metrics for your specific needs. It provides dashboards and alerts that can trigger automated actions on the platform. AWS X-Ray lets you analyze and debug serverless applications by providing distributed tracing and service maps to easily identify performance bottlenecks by visualizing a request end-to-end. AWS Serverless Application Model (AWS SAM) is an extension of AWS CloudFormation that is used to package, test, and deploy serverless applications. The AWS SAM CLI can also enable faster debugging cycles when developing Lambda functions locally.

Deployment Approaches
A best practice for deployments in a microservice architecture is to ensure that a change does not break the service contract of the consumer. If the API owner makes a change that breaks the service contract and the consumer is not prepared for it, failures can occur. Being aware of which consumers are using your APIs is the first step to ensuring that deployments are safe. Collecting metadata on consumers and their usage allows you to make data-driven decisions about the impact of changes. API keys are an effective way to capture metadata about API consumers/clients, and are often used as a form of contact if a breaking change is made to an API. Some customers who want to take a risk-averse approach to breaking changes may choose to clone the API and route customers to a different subdomain (
for example, v2.my-service.com) to ensure that existing consumers aren't impacted. While this approach enables new deployments with a new service contract, the tradeoff is that maintaining dual APIs (and the subsequent backend infrastructure) requires additional overhead.

The table shows the different approaches to deployment:

Deployment | Consumer Impact | Rollback | Event Model Factors | Deployment Speed
All-at-once | All at once | Redeploy older version | Any event model at low concurrency rate | Immediate
Blue/Green | All at once, with some level of production environment testing beforehand | Revert traffic to previous environment | Better for async and sync event models at medium-concurrency workloads | Minutes to hours of validation, then immediate to customers
Canary/Linear | 1-10% typical initial traffic shift, then phased increases or all at once | Revert 100% of traffic to previous deployment | Better for high-concurrency workloads | Minutes to hours

All-at-once Deployments
All-at-once deployments involve making changes on top of the existing configuration. An advantage of this style of deployment is that backend changes to data stores, such as a relational database, require a much smaller level of effort to reconcile transactions during the change cycle. While this type of deployment style is low effort and can be made with little impact in low-concurrency models, it adds risk when it comes to rollback and usually causes downtime. An example scenario for this deployment model is a development environment where the user impact is minimal.

Blue/Green Deployments
Another traffic-shifting pattern is enabling blue/green deployments. This near-zero-downtime release enables traffic to shift to the new live environment (green) while still keeping the old production environment (blue) warm in case a rollback is necessary. Since API Gateway allows you to define what percentage of traffic is shifted to a particular environment, this style of deployment can be an effective technique. Since blue/green deployments are designed to reduce downtime, many customers adopt this pattern for production changes. Serverless architectures that follow the best practices of statelessness and idempotency are amenable to this deployment style because there is no affinity to the underlying infrastructure. You should bias these deployments toward smaller, incremental changes so that you can easily roll back to a working environment if necessary. You need the right indicators in place to know if a rollback is required. As a best practice, we recommend using CloudWatch high-resolution metrics, which can monitor in 1-second intervals and quickly capture downward trends. Used with CloudWatch alarms, they enable an expedited rollback to occur. CloudWatch metrics can be captured on API Gateway, Step Functions, Lambda (including custom metrics), and DynamoDB.

Canary Deployments
Canary deployments are an increasingly popular way to leverage a new software release in a controlled environment while enabling rapid deployment cycles. Canary deployments involve routing a small number of requests to the new change to analyze the impact on a small number of your users. Since you no longer need to worry about provisioning and scaling the underlying infrastructure of the new deployment, the AWS Cloud has helped facilitate this adoption. With canary deployments in API Gateway, you can deploy a change to your backend endpoint (for example, Lambda) while still maintaining the same API Gateway HTTP endpoint for consumers. In addition, you can control what percentage of traffic is routed to the new deployment for a controlled traffic cutover. A practical scenario for a canary deployment might be a new website: you can monitor the click-through rates of a small number of end users before shifting all traffic to the new deployment.
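A related mechanism is Lambda's own weighted alias routing, which can implement the small-percentage traffic shift of a canary at the function level; note this is distinct from the API Gateway canary feature described above. A sketch, assuming Python with boto3; the function name, alias, and version identifiers are illustrative.

```python
def routing_config(new_version: str, weight: float) -> dict:
    """Weight is the fraction of invocations sent to the new version;
    the remainder stays on the version the alias currently points to."""
    if not 0.0 <= weight < 1.0:
        raise ValueError("weight must be in [0, 1)")
    return {"AdditionalVersionWeights": {new_version: weight}}

def shift_canary(function_name: str, alias: str, new_version: str, weight: float):
    """Shift a fraction of an alias's traffic to a newly published version."""
    import boto3  # imported lazily so routing_config stays testable offline
    lam = boto3.client("lambda")
    return lam.update_alias(
        FunctionName=function_name,
        Name=alias,
        RoutingConfig=routing_config(new_version, weight),
    )
```

Starting with a small weight (for example, 0.05) and watching CloudWatch alarms before increasing it mirrors the canary/linear row in the table above.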
Lambda Version Control
Like all software, maintaining versioning enables quick visibility into previously functioning code, as well as the ability to revert to a previous version if a new deployment is unsuccessful. Lambda allows you to publish one or more immutable versions of individual Lambda functions, such that previous versions cannot be changed. Each Lambda function version has a unique Amazon Resource Name (ARN), and new version changes are auditable, as they are recorded in CloudTrail. As a best practice in production, customers should enable versioning to best leverage a reliable architecture.

To simplify deployment operations and reduce the risk of error, Lambda aliases enable different variations of your Lambda function in your development workflow, such as development, beta, and production. An example of this is when an API Gateway integration with Lambda points to the ARN of a production alias. The production alias points to a Lambda version. The value of this technique is that it enables a safe deployment when promoting a new version to the live environment, because the Lambda alias within the caller configuration remains static, so there are fewer changes to make.

General Design Principles
The Well-Architected Framework identifies a set of general design principles to facilitate good design in the cloud for serverless applications:
• Speedy, simple, singular: Functions are concise, short, and single purpose, and their environment may live up to their request lifecycle. Transactions are efficiently cost aware, and thus faster executions are preferred.
• Think concurrent requests, not total requests: Serverless applications take advantage of the concurrency model, and tradeoffs at the design level are evaluated based on concurrency.
• Share nothing: The function runtime environment and underlying infrastructure are short-lived, therefore local resources such as temporary storage are not guaranteed. State can be manipulated within a state machine execution lifecycle, and persistent storage is preferred for highly durable requirements.
• Assume no hardware affinity: The underlying infrastructure may change. Leverage code or dependencies that are hardware agnostic, as CPU flags, for example, may not be available consistently.
• Orchestrate your application with state machines, not functions: Chaining Lambda executions within code to orchestrate the workflow of your application results in a monolithic and tightly coupled application. Instead, use a state machine to orchestrate transactions and communication flows.
• Use events to trigger transactions: Events such as writing a new Amazon S3 object or an update to a database allow for transaction execution in response to business functionalities. This asynchronous event behavior is often consumer agnostic and drives just-in-time processing to ensure lean service design.
• Design for failures and duplicates: Operations triggered from requests/events must be idempotent, as failures can occur and a given request/event can be delivered more than once. Include appropriate retries for downstream calls.

Scenarios
In this section, we cover five key scenarios that are common in many serverless applications and how they influence the design and architecture of your serverless application workloads on AWS. We present the assumptions we made for each of these scenarios, the common drivers for the design, and a reference architecture showing how each scenario should be implemented.

RESTful Microservices
When building a microservice, you're thinking about how a business context can be delivered as a reusable service for your consumers. The specific implementation will be tailored to individual use cases, but there are several common themes across microservices to ensure that your implementation is secure, resilient, and constructed to give the best experience for your customers. Building serverless microservices on AWS enables you to not only take advantage of the serverless capabilities themselves
but also to use other AWS services and features, as well as the ecosystem of AWS and AWS Partner Network (APN) tools. Serverless technologies are built on top of fault-tolerant infrastructure, enabling you to build reliable services for your mission-critical workloads. The ecosystem of tooling enables you to streamline the build, automate tasks, orchestrate dependencies, and monitor and govern your microservices. Lastly, AWS serverless tools are pay-as-you-go, enabling you to grow the service with your business and keep your costs down during entry phases and non-peak times.

Characteristics:
• You want a secure, easy-to-operate framework that is simple to replicate and has high levels of resiliency and availability.
• You want to log utilization and access patterns to continually improve your backend to support customer usage.
• You are seeking to leverage managed services as much as possible for your platforms, which reduces the heavy lifting associated with managing common platforms, including security and scalability.

Reference Architecture
Figure 1: Reference architecture for RESTful microservices (Consumer → Amazon API Gateway → AWS Lambda → Amazon DynamoDB)
1. Customers leverage your microservices by making HTTP API calls. Ideally, your consumers should have a tightly bound service contract to your API to achieve consistent expectations of service levels and change control.
2. Amazon API Gateway hosts RESTful HTTP requests and responses to customers. In this scenario, API Gateway provides built-in authorization, throttling, security, fault tolerance, request/response mapping, and performance optimizations.
3. AWS Lambda contains the business logic to process incoming API calls and leverages DynamoDB as persistent storage.
4. Amazon DynamoDB persistently stores microservices data and scales based on demand. Since microservices are often designed to do one thing well, a schemaless NoSQL data store is regularly incorporated.
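Steps 2 through 4 above can be sketched as a single Lambda handler behind an API Gateway Lambda proxy integration. This assumes Python with boto3; the `orders` table name and `id` path parameter are hypothetical choices for illustration.

```python
import json

def response(status: int, body: dict) -> dict:
    """Shape an API Gateway Lambda proxy integration response."""
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }

def handler(event, context):
    """Look up a single item by its path parameter; table and key
    names here are illustrative assumptions."""
    import boto3  # imported lazily so response() stays testable offline
    table = boto3.resource("dynamodb").Table("orders")
    order_id = (event.get("pathParameters") or {}).get("id")
    if not order_id:
        return response(400, {"message": "missing id"})
    item = table.get_item(Key={"id": order_id}).get("Item")
    if item is None:
        return response(404, {"message": "not found"})
    return response(200, item)
```

Keeping the handler single purpose, as the design principles above recommend, means each microservice operation can be deployed, versioned, and rolled back independently.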
• Leverage API Gateway logging to gain visibility into your microservices consumers' access behaviors. This information is visible in Amazon CloudWatch Logs and can be quickly viewed through Log Pivots, analyzed in CloudWatch Logs Insights, or fed into other searchable engines such as Amazon ES or Amazon S3 (with Amazon Athena). The information delivered gives key visibility, such as:
o Understanding common customer locations, which may change geographically based on the proximity of your backend
o Understanding how customer input requests may have an impact on how you partition your database
o Understanding the semantics of abnormal behavior, which can be a security flag
o Understanding errors, latency, and cache hits/misses to optimize configuration

This model provides a framework that is easy to deploy and maintain, and a secure environment that will scale as your needs grow.

Alexa Skills

The Alexa Skills Kit gives developers the ability to extend Alexa's capabilities by building natural and engaging voice and visual experiences. Successful skills are habit-forming: users routinely come back because the skill offers something unique and provides value in new, novel, and frictionless ways. The biggest cause of user frustration is a skill that doesn't act how users expect it to, taking multiple interactions before accomplishing what they need. It's essential to start by designing a voice interaction model and working backwards from it, since some users may say too little, too much, or possibly something you aren't expecting. The voice design process involves creating, scripting, and planning for expected as well as unexpected utterances.

Figure 2: Alexa Skill example design script

With a basic script in mind, you can use the following techniques before you start building a skill:
• Outline the shortest route to completion
o The shortest route to
completion is generally when the user gives all information and slots at once, an account is already linked (if relevant), and other prerequisites are satisfied in a single invocation of the skill.
• Outline alternate paths and decision trees
o Often, what the user says doesn't include all the information necessary to complete the request. In the flow, identify alternate pathways and user decisions.
• Outline behind-the-scenes decisions the system logic will have to make
o Identify behind-the-scenes system decisions, for example with new or returning users. A background system check might change the flow a user follows.
• Outline how the skill will help the user
o Include clear directions in the help for what users can do with the skill. Based on the complexity of the skill, the help might provide one simple response or many responses.
• Outline the account linking process, if present
o Determine the information that is required for account linking. You also need to identify how the skill will respond when account linking hasn't been completed.

Characteristics:
• You want to create a complete serverless architecture without managing any instances or servers.
• You want your content to be decoupled from your skill as much as possible.
• You are looking to provide engaging voice experiences exposed as an API to optimize development across wide-ranging Alexa devices, Regions, and languages.
• You want elasticity that scales up and down to meet the demands of users and handles unexpected usage patterns.

Reference Architecture

Figure 3: Reference architecture for an Alexa Skill

1. Alexa users interact with Alexa skills by speaking to Alexa-enabled devices, using voice as the primary method of interaction.
2. Alexa-enabled devices listen for a wake word and activate as soon as one is recognized. Supported wake words are Alexa, Computer, and Echo.
3. The Alexa Service performs common
Speech Language Understanding (SLU) processing on behalf of your Alexa skill, including Automated Speech Recognition (ASR), Natural Language Understanding (NLU), and Text-to-Speech (TTS) conversion.
4. The Alexa Skills Kit (ASK) is a collection of self-service APIs, tools, documentation, and code examples that make it fast and easy for you to add skills to Alexa. ASK is a trusted AWS Lambda trigger, allowing for seamless integration.
5. An Alexa Custom Skill gives you control over the user experience, allowing you to build a custom interaction model. It is the most flexible type of skill, but also the most complex.
6. A Lambda function using the Alexa Skills Kit lets you seamlessly build skills while avoiding unneeded complexity. Using it, you can process the different types of requests sent from the Alexa Service and build speech responses.
7. A DynamoDB database can provide a NoSQL data store that elastically scales with the usage of your skill. It is commonly used by skills for persisting user state and sessions.
8. An Alexa Smart Home Skill allows you to control devices such as lights, thermostats, smart TVs, and so on, using the Smart Home API. Smart Home skills are simpler to build than custom skills because they don't give you control over the interaction model.
9. A Lambda function is used to respond to device discovery and control requests from the Alexa Service. Developers use it to control a wide-ranging number of devices, including entertainment devices, cameras, lighting, thermostats, locks, and many more.
10. AWS Internet of Things (IoT) allows developers to securely connect their devices to AWS and control interaction between their Alexa skill and their devices.
11. An Alexa-enabled Smart Home can have an unlimited number of IoT-connected devices receiving and responding to directives from an Alexa skill.
12. Amazon S3 stores your skill's static assets, including images, content, and media. Its contents are securely served using CloudFront.
13. Amazon
CloudFront provides a content delivery network (CDN) that serves content faster to geographically distributed mobile users and includes security mechanisms for static assets in Amazon S3.
14. Account Linking is needed when your skill must authenticate with another system. This action associates the Alexa user with a specific user in the other system.

Configuration notes:
• Validate Smart Home request and response payloads against the JSON schema for all possible Alexa Smart Home messages sent by a skill to Alexa.
• Ensure that your Lambda function timeout is less than eight seconds and that it can handle requests within that timeframe. (The Alexa Service timeout is eight seconds.)
• Follow best practices [7] when creating your DynamoDB tables. Use on-demand tables when you are not certain how much read/write capacity you need; otherwise, choose provisioned capacity with automatic scaling enabled. For skills that are read-heavy, DynamoDB Accelerator (DAX) can greatly improve response times.
• Account linking can provide user information that may be stored in an external system. Use that information to provide a contextual and personalized experience for your user. Alexa has guidelines on account linking to provide frictionless experiences.
• Use the skill beta testing tool to collect early feedback during skill development, and use skills versioning to reduce the impact on skills that are already live.
• Use the ASK CLI to automate skill development and deployment.

Mobile Backend

Users increasingly expect their mobile applications to have a fast, consistent, and feature-rich user experience. At the same time, mobile user patterns are dynamic, with unpredictable peak usage, and often have a global footprint. The growing demand from mobile users means that applications need a rich set of mobile services that work together seamlessly without sacrificing control and flexibility of the backend infrastructure. Certain capabilities are expected across mobile applications by default:
• Ability to query, mutate, and subscribe to database changes
• Offline persistence of data and bandwidth optimizations when connected
• Search, filtering, and discovery of data in applications
• Analytics of user behavior
• Targeted messaging through multiple channels (push notifications, SMS, email)
• Rich content such as images and videos
• Data synchronization across multiple devices and multiple users
• Fine-grained authorization controls for viewing and manipulating data

Building a serverless mobile backend on AWS enables you to provide these capabilities while automatically managing scalability, elasticity, and availability in an efficient and cost-effective way.

Characteristics:
• You want to control application data behavior from the client and explicitly select what data you want from the API.
• You want your business logic to be decoupled from your mobile application as much as possible.
• You are looking to provide business functionalities as an API to optimize development across multiple platforms.
• You are seeking to leverage managed services to reduce the undifferentiated heavy lifting of maintaining mobile backend infrastructure while providing high levels of scalability and availability.
• You want to optimize your mobile backend costs based on actual user demand versus paying for idle resources.

Reference Architecture

Figure 4: Reference architecture for a mobile backend

1. Amazon Cognito is used for user management and as an identity provider for your mobile application. Additionally, it allows mobile users to sign in with existing social identities such as Facebook, Twitter, Google+, and Amazon.
2. Mobile users interact with the mobile application backend by performing GraphQL operations against AWS AppSync and AWS service APIs (for example, Amazon S3 and Amazon Cognito).
3. Amazon S3 stores mobile application static assets, including certain
mobile user data such as profile images. Its contents are securely served via CloudFront.
4. AWS AppSync hosts GraphQL HTTP requests and responses to mobile users. In this scenario, data from AWS AppSync is delivered in real time when devices are connected, and data is available offline as well. Data sources for this scenario are Amazon DynamoDB, Amazon Elasticsearch Service, or AWS Lambda functions.
5. Amazon Elasticsearch Service acts as the main search engine for your mobile application, as well as for analytics.
6. DynamoDB provides persistent storage for your mobile application, including mechanisms to expire unwanted data from inactive mobile users through a Time to Live (TTL) feature.
7. A Lambda function handles interaction with other third-party services, or calls other AWS services for custom flows, which can be part of the GraphQL response to clients.
8. DynamoDB Streams captures item-level changes and enables a Lambda function to update additional data sources.
9. A Lambda function manages streaming data between DynamoDB and Amazon ES, allowing customers to combine data sources, logical GraphQL types, and operations.
10. Amazon Pinpoint captures analytics from clients, including user sessions and custom metrics for application insights.
11. Amazon Pinpoint delivers messages to all users/devices, or to a targeted subset based on the analytics that have been gathered. Messages can be customized and sent through push notification, email, or SMS channels.

Configuration notes:
• Performance test [3] your Lambda functions with different memory and timeout settings to ensure that you're using the most appropriate resources for the job.
• Follow best practices [4] when creating your DynamoDB tables, and consider having AWS AppSync automatically provision them from a GraphQL schema, which will use a well-distributed hash key and create indexes for your operations. Make certain to calculate your read/write capacity and table partitioning to ensure reasonable response
times.
• Use AWS AppSync server-side data caching to optimize your application experience, as all subsequent query requests to your API will be returned from the cache, which means data sources won't be contacted directly unless the TTL expires.
• Follow best practices [5] when managing Amazon ES domains. Additionally, Amazon ES provides an extensive guide [6] on design concerning sharding and access patterns that also applies here.
• Use the fine-grained access controls of AWS AppSync, configured in resolvers, to filter GraphQL requests down to the per-user or group level if necessary. This can be applied to AWS Identity and Access Management (IAM) or Amazon Cognito User Pools authorization with AWS AppSync.
• Use AWS Amplify and the Amplify CLI to compose and integrate your application with multiple AWS services. The Amplify Console also takes care of deploying and managing stacks.

For low-latency requirements where near-to-no business logic is required, Amazon Cognito Federated Identities can provide scoped credentials so that your mobile application can talk directly to an AWS service, for example, uploading a user's profile picture or retrieving metadata files from Amazon S3 scoped to a user.

Stream Processing

Ingesting and processing real-time streaming data requires scalability and low latency to support a variety of applications, such as activity tracking, transaction order processing, clickstream analysis, data cleansing, metrics generation, log filtering, indexing, social media analysis, and IoT device data telemetry and metering. These applications are often spiky and process thousands of events per second. Using AWS Lambda and Amazon Kinesis, you can build a serverless stream processor that automatically scales without provisioning or managing servers. Data processed by AWS Lambda can be stored in DynamoDB and analyzed later.

Characteristics:
• You want to create a complete serverless architecture without managing any
instance or server for processing streaming data.
• You want to use the Amazon Kinesis Producer Library (KPL) to take care of data ingestion from a data producer perspective.

Reference Architecture

Here we present a common stream processing scenario: a reference architecture for analyzing social media data.

Figure 5: Reference architecture for stream processing

1. Data producers use the Amazon Kinesis Producer Library (KPL) to send social media streaming data to a Kinesis stream. The Amazon Kinesis Agent and custom data producers that leverage the Kinesis API can also be used.
2. An Amazon Kinesis stream collects, processes, and analyzes real-time streaming data produced by data producers. Data ingested into the stream can be processed by a consumer, which in this case is Lambda.
3. AWS Lambda acts as a consumer of the stream and receives an array of the ingested data as a single event/invocation. Further processing is carried out by the Lambda function. The transformed data is then stored in persistent storage, which in this case is DynamoDB.
4. Amazon DynamoDB provides a fast and flexible NoSQL database service, including triggers that can integrate with AWS Lambda to make such data available elsewhere.
5. Business users leverage a reporting interface on top of DynamoDB to gather insights out of social media trend data.

Configuration notes:
• Follow best practices [7] when re-sharding Kinesis streams to accommodate a higher ingestion rate. Concurrency for stream processing is dictated by the number of shards and by the parallelization factor, so adjust it according to your throughput requirements.
• Consider reviewing the Streaming Data Solutions whitepaper [8] for batch processing, analytics on streams, and other useful patterns.
• When not using the KPL, make certain to take into account partial failures for non-atomic operations, such as PutRecords, since
the Kinesis API returns both successfully and unsuccessfully processed records [9] upon ingestion.
• Duplicate records [10] may occur, and you must leverage both retries and idempotency within your application, for both consumers and producers.
• Consider using Kinesis Data Firehose over Lambda when ingested data needs to be continuously loaded into Amazon S3, Amazon Redshift, or Amazon ES.
• Consider using Kinesis Data Analytics over Lambda when standard SQL can be used to query streaming data and load only its results into Amazon S3, Amazon Redshift, Amazon ES, or Kinesis Streams.
• Follow best practices for AWS Lambda stream-based invocation [11], as they cover the effects of batch size, concurrency per shard, and monitoring stream processing in more detail.
• Use the Lambda maximum retry attempts, maximum record age, bisect batch on function error, and on-failure destination error controls to build more resilient stream processing applications.

Web Application

Web applications typically have demanding requirements to ensure a consistent, secure, and reliable user experience. To ensure high availability, global availability, and the ability to scale to thousands or potentially millions of users, you often had to reserve substantial excess capacity to handle web requests at their highest anticipated demand. This often required managing fleets of servers and additional infrastructure components, which in turn led to significant capital expenditures and long lead times for capacity provisioning. Using serverless computing on AWS, you can deploy your entire web application stack without performing the undifferentiated heavy lifting of managing servers, guessing at provisioning capacity, or paying for idle resources. Additionally, you do not have to compromise on security, reliability, or performance.

Characteristics:
• You want a scalable web application that can go global in minutes, with a high level of resiliency and availability.
• You want a consistent user experience with adequate response
times.
• You are seeking to leverage managed services as much as possible for your platforms to limit the heavy lifting associated with managing common platforms.
• You want to optimize your costs based on actual user demand versus paying for idle resources.
• You want to create a framework that is easy to set up and operate, and that you can extend with limited impact later.

Reference Architecture

Figure 6: Reference architecture for a web application

1. Consumers of this web application might be geographically concentrated or distributed worldwide. Leveraging Amazon CloudFront not only provides a better performance experience for these consumers through caching and optimal origin routing, but also limits redundant calls to your backend.
2. Amazon S3 hosts web application static assets, which are securely served through CloudFront.
3. An Amazon Cognito user pool provides user management and identity provider features for your web application.
4. In many scenarios, as static content from Amazon S3 is downloaded by the consumer, dynamic content needs to be sent to or received by your application. For example, when a user submits data through a form, Amazon API Gateway serves as the secure endpoint to make these calls and return responses displayed through your web application.
5. An AWS Lambda function provides create, read, update, and delete (CRUD) operations on top of DynamoDB for your web application.
6. Amazon DynamoDB can provide the backend NoSQL data store to elastically scale with the traffic of your web application.

Configuration notes:
• Follow best practices for deploying your serverless web application frontend on AWS. More information can be found in the operational excellence pillar.
• For single-page web applications, use AWS Amplify Console to manage atomic deployments, cache expiration, custom domains, and user interface (UI) testing.
• Refer to the security
pillar for recommendations on authentication and authorization.
• Refer to the RESTful Microservices scenario for recommendations on the web application backend.
• For web applications that offer personalized services, you can leverage API Gateway usage plans [12] as well as Amazon Cognito user pools to scope what different sets of users have access to. For example, a premium user can have higher throughput for API calls, access to additional APIs, additional storage, and so on.
• Refer to the Mobile Backend scenario if your application uses search capabilities that are not covered in this scenario.

The Pillars of the Well-Architected Framework

This section describes each of the pillars and includes definitions, best practices, questions, considerations, and key AWS services that are relevant when architecting solutions for serverless applications. For brevity, we have selected only the questions from the Well-Architected Framework that are specific to serverless workloads. Questions that have not been included in this document should still be considered when designing your architecture. We recommend that you read the AWS Well-Architected Framework whitepaper.

Operational Excellence Pillar

The operational excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.

Definition

There are three best practice areas for operational excellence in the cloud:
• Prepare
• Operate
• Evolve

In addition to what is covered by the Well-Architected Framework concerning processes, runbooks, and game days, there are specific areas you should look into to drive operational excellence within serverless applications.

Best Practices

Prepare

There are no operational practices unique to serverless applications that belong to this subsection.

Operate

OPS 1: How do you understand the health of your serverless application?
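As a minimal, concrete illustration of one practice behind this question: custom metrics can be emitted in CloudWatch Embedded Metric Format (EMF), a structured JSON log line that CloudWatch extracts into metrics asynchronously. This sketch only builds the log line; the namespace, dimension, and metric names are hypothetical, and a production function would typically use one of the EMF client libraries listed under Resources.

```python
import json
import time

def emf_metric(namespace, service, metric_name, value, unit="Count"):
    """Build a CloudWatch Embedded Metric Format (EMF) log line.

    Printing this JSON from a Lambda function (stdout is captured in
    CloudWatch Logs) is enough for CloudWatch to extract the metric
    asynchronously, without a synchronous PutMetricData call.
    """
    return json.dumps({
        "_aws": {
            "Timestamp": int(time.time() * 1000),  # epoch milliseconds
            "CloudWatchMetrics": [{
                "Namespace": namespace,
                "Dimensions": [["service"]],
                "Metrics": [{"Name": metric_name, "Unit": unit}],
            }],
        },
        "service": service,   # dimension value, referenced by name above
        metric_name: value,   # metric value, referenced by name above
    })

# Hypothetical business metric for a booking service
print(emf_metric("MyApp", "booking", "OrdersPlaced", 1))
```

Because the metric is just a log line, it adds no latency to the request path, which is why EMF is recommended below for custom metrics.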
Metrics and Alerts

It's important to understand Amazon CloudWatch metrics and dimensions for every AWS service you intend to use, so that you can put a plan in place to assess their behavior and add custom metrics where you see fit. Amazon CloudWatch provides automated cross-service and per-service dashboards to help you understand key metrics for the AWS services that you use. For custom metrics, use the Amazon CloudWatch Embedded Metric Format to log a batch of metrics that will be processed asynchronously by CloudWatch, without impacting the performance of your serverless application.

The following guidelines can be used whether you are creating a dashboard or formulating a plan for new and existing applications when it comes to metrics:

• Business metrics
o Business KPIs measure your application's performance against business goals, and are important for knowing when something is critically affecting your overall business, revenue-wise or otherwise.
o Examples: Orders placed, debit/credit card operations, flights purchased, etc.
• Customer experience metrics
o Customer experience data dictates not only the overall effectiveness of your UI/UX, but also whether changes or anomalies are affecting the customer experience in a particular section of your application. These are often measured in percentiles so that outliers don't skew your understanding of the impact over time and how it's spread across your customer base.
o Examples: Perceived latency, time it takes to add an item to a basket or to check out, page load times, etc.
• System metrics
o Vendor and application metrics are important for underpinning the root causes of issues surfaced by the previous categories. They also tell you whether your systems are healthy, at risk, or already affecting your customers.
o Examples: Percentage of HTTP errors/successes, memory utilization, function duration/errors/throttling, queue length, stream records length, integration latency, etc.
• Operational metrics
o Operational
metrics are equally important for understanding the sustainability and maintenance of a given system, and are crucial for pinpointing how stability has progressed or degraded over time.
o Examples: Number of tickets (successful and unsuccessful resolutions, etc.), number of times people on call were paged, availability, CI/CD pipeline stats (successful/failed deployments, feedback time, cycle and lead time, etc.)

CloudWatch alarms should be configured at both individual and aggregated levels. An individual-level example is alarming on the Duration metric from Lambda, or on IntegrationLatency from API Gateway when invoked through an API, since different parts of the application likely have different profiles. In this instance, you can quickly identify a bad deployment that makes a function execute for much longer than usual.

Aggregate-level examples include alarming on, but are not limited to, the following metrics:

• AWS Lambda: Duration, Errors, Throttling, and ConcurrentExecutions. For stream-based invocations, alert on IteratorAge. For asynchronous invocations, alert on DeadLetterErrors.
• Amazon API Gateway: IntegrationLatency, Latency, 5XXError
• Application Load Balancer: HTTPCode_ELB_5XX_Count, RejectedConnectionCount, HTTPCode_Target_5XX_Count, UnHealthyHostCount, LambdaInternalError, LambdaUserError
• AWS AppSync: 5XX and Latency
• Amazon SQS: ApproximateAgeOfOldestMessage
• Amazon Kinesis Data Streams: ReadProvisionedThroughputExceeded, WriteProvisionedThroughputExceeded, GetRecords.IteratorAgeMilliseconds, PutRecord.Success, PutRecords.Success (if using the Kinesis Producer Library), and GetRecords.Success
• Amazon SNS: NumberOfNotificationsFailed, NumberOfNotificationsFilteredOut-InvalidAttributes
• Amazon SES: Rejects, Bounces, Complaints, Rendering Failures
• AWS Step Functions: ExecutionThrottled, ExecutionsFailed, ExecutionsTimedOut
• Amazon EventBridge: FailedInvocations, ThrottledRules
• Amazon S3: 5xxErrors, TotalRequestLatency
• Amazon DynamoDB: ReadThrottleEvents
, WriteThrottleEvents, SystemErrors, ThrottledRequests, UserErrors

Centralized and structured logging

Standardize your application logging to emit operational information about transactions, correlation identifiers, request identifiers across components, and business outcomes. Use this information to answer arbitrary questions about the state of your workload. Below is an example of structured logging using JSON as the output:

{
  "timestamp": "2019-11-26 18:17:33.774",
  "level": "INFO",
  "location": "cancel.cancel_booking:45",
  "service": "booking",
  "lambda_function_name": "test",
  "lambda_function_memory_size": "128",
  "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test",
  "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72",
  "cold_start": "true",
  "message": {
    "operation": "update_item",
    "details": {
      "Attributes": {
        "status": "CANCELLED"
      },
      "ResponseMetadata": {
        "RequestId": "G7S3SCFDEMEINPG6AOC6CL5IDNVV4KQNSO5AEMVJF66Q9ASUAAJG",
        "HTTPStatusCode": 200,
        "HTTPHeaders": {
          "server": "Server",
          "date": "Thu, 26 Nov 2019 18:17:33 GMT",
          "content-type": "application/x-amz-json-1.0",
          "content-length": "43",
          "connection": "keep-alive",
          "x-amzn-requestid": "G7S3SCFDEMEINPG6AOC6CL5IDNVV4KQNSO5AEMVJF66Q9ASUAAJG",
          "x-amz-crc32": "1848747586"
        },
        "RetryAttempts": 0
      }
    }
  }
}
Centralized logging helps you search and analyze your serverless application logs. Structured logging makes it easier to derive queries to answer arbitrary questions about the health of your application. As your system grows and more logging is ingested, consider using appropriate logging levels and a sampling mechanism to log a small percentage of logs in DEBUG mode.

Distributed Tracing

Similar to non-serverless applications, anomalies can occur at larger scale in distributed systems. Due to the nature of serverless architectures, it's fundamental to have distributed tracing.

Making changes to your serverless application entails many of the same principles of deployment, change, and release management used in traditional workloads. However, there are subtle changes in how you use existing tools to accomplish these principles.

Active tracing with AWS X-Ray should be enabled to provide distributed tracing capabilities and visual service maps for faster troubleshooting. X-Ray helps you identify performance degradation and quickly understand anomalies, including latency distributions.

Figure 7: AWS X-Ray Service Map visualizing two services

Service maps are helpful for understanding integration points that need attention and resiliency practices. For integration calls, retries, backoffs, and possibly circuit breakers are necessary to prevent faults from propagating to downstream services.

Another example is networking anomalies. You should not rely on default timeout and retry settings; instead, tune them to fail fast if a socket read/write timeout happens, since the defaults can be seconds, if not minutes, in certain clients.

X-Ray also provides two powerful features that can improve the efficiency of identifying anomalies within applications: annotations and subsegments. Subsegments are helpful for understanding how application logic is constructed and what external dependencies it has to talk to. Annotations are key-value pairs with string, number, or Boolean values that are automatically indexed by AWS X-Ray. Combined, they can help you quickly identify performance statistics on specific operations and business transactions, for example, how long it takes to query a database or how long it takes to process pictures with large crowds.

Figure 8: AWS X-Ray trace with subsegments beginning with ##

Figure 9: AWS X-Ray traces grouped by custom annotations

OPS 2: How do you approach application lifecycle management
?

Prototyping

Use infrastructure as code to create temporary environments for new features that you want to prototype, and tear them down as you complete them. You can use dedicated accounts per team or per developer, depending on the size of the team and the level of automation within the organization. Temporary environments allow for higher fidelity when working with managed services, and increase levels of control to help you gain confidence that your workload integrates and operates as intended.

For configuration management, use environment variables for infrequent changes, such as logging level and database connection strings. Use AWS Systems Manager Parameter Store for dynamic configuration, such as feature toggles, and store sensitive data using AWS Secrets Manager.

Testing

Testing is commonly done through unit, integration, and acceptance tests. Developing robust testing strategies allows you to emulate your serverless application under different loads and conditions.

Unit tests shouldn't differ from those for non-serverless applications, and can therefore run locally without any changes.

Integration tests shouldn't mock services you can't control, since those services might change and provide unexpected results. These tests are better performed using real services, because they provide the same environment a serverless application would use when processing requests in production.

Acceptance or end-to-end tests should be performed without any changes, because the primary goal is to simulate the end users' actions through the available external interface; therefore, there is no unique recommendation to be aware of here.

In general, Lambda and third-party tools available in the AWS Marketplace can be used as a test harness in the context of performance testing. Here are some considerations to be aware of during performance testing:

• Metrics such as invocations, max memory used, and init duration are available in CloudWatch
Logs. For more information, read the performance pillar section.
• If your Lambda function runs inside an Amazon Virtual Private Cloud (VPC), pay attention to the available IP address space inside your subnet.
• Creating modularized code as separate functions outside of the handler enables more unit-testable functions.
• Establishing externalized connection code (such as a connection pool to a relational database) referenced in the Lambda function's static constructor/initialization code (that is, global scope, outside the handler) ensures that external connection thresholds aren't reached if the Lambda execution environment is reused.
• Use DynamoDB on-demand tables unless your performance tests exceed the current limits in your account.
• Take into account any other service limits that might be reached within your serverless application under performance testing.

Deploying

Use infrastructure as code and version control to enable tracking of changes and releases. Isolate development and production stages in separate environments. This reduces errors caused by manual processes and helps increase levels of control, so you can gain confidence that your workload operates as intended.

Use a serverless framework, such as AWS SAM or the Serverless Framework, to model, prototype, build, package, and deploy serverless applications. With infrastructure as code and a framework, you can parametrize your serverless application and its dependencies to ease deployment across isolated stages and across AWS accounts. For example, a CI/CD pipeline Beta stage can create the following resources in a beta AWS account, and equally for the respective stages you may want to have in different accounts (Gamma, Dev, Prod): OrderAPIBeta, OrderServiceBeta, OrderStateMachineBeta, OrderBucketBeta, OrderTableBeta.

Figure 10: CI/CD pipeline for multiple accounts

When deploying to production, favor safe deployments over all-at-once systems, as new changes will gradually shift over
time towards the end user in a canary or linear deployment. Use CodeDeploy hooks (BeforeAllowTraffic, AfterAllowTraffic) and alarms to gain more control over deployment validation, rollback, and any customization you may need for your application.

Figure 5: AWS CodeDeploy Lambda deployment and hooks

You can also combine the use of synthetic traffic, custom metrics, and alerts as part of a rollout deployment. These help you proactively detect errors with new changes that otherwise would have impacted your customer experience.

Evolve

There are no operational practices unique to serverless applications that belong to this subsection.

Key AWS Services

Key AWS services for operational excellence include AWS Systems Manager Parameter Store, AWS SAM, CloudWatch, AWS CodePipeline, AWS X-Ray, Lambda, and API Gateway.

Resources

Refer to the following resources to learn more about our best practices for operational excellence.

Documentation & Blogs
• API Gateway stage variables 13
• Lambda environment variables 14
• AWS SAM CLI 15
• X-Ray latency distribution 16
• Troubleshooting Lambda-based applications with X-Ray 17
• Systems Manager (SSM) Parameter Store 18
• Continuous Deployment for Serverless applications blog post 19
• SamFarm: CI/CD example 20
• Serverless Application example using CI/CD
• Serverless Application example automating Alerts and Dashboard
• CloudWatch Embedded Metric Format library for Python
• CloudWatch Embedded Metric Format library for Node.js
• Example library to implement tracing, structured logging, and custom metrics
• General AWS Limits
• Stackery: Multi-Account Best Practices Whitepaper
• Practicing Continuous Integration/Continuous Delivery on AWS 21

Third-Party Tools
• Serverless Developer Tools page, including third-party frameworks/tools 22
• Stelligent: CodePipeline Dashboard for operational metrics

Security Pillar

The security pillar
includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

Definition

There are five best practice areas for security in the cloud:
• Identity and access management
• Detective controls
• Infrastructure protection
• Data protection
• Incident response

Serverless addresses some of today's biggest security concerns because it removes infrastructure management tasks, such as operating system patching and updating binaries. Although the attack surface is reduced compared to non-serverless architectures, the Open Web Application Security Project (OWASP) and application security best practices still apply.

The questions in this section are designed to help you address specific ways an attacker could try to gain access to or exploit misconfigured permissions, which could lead to abuse. The practices described in this section strongly influence the security of your entire cloud platform, so they should be validated carefully and reviewed frequently.

The incident response category is not described in this document because the practices from the AWS Well-Architected Framework still apply.

Best Practices

Identity and Access Management

SEC 1: How do you control access to your serverless API?
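Among the authorization mechanisms discussed in this section is the API Gateway Lambda authorizer. As a rough sketch of the shape such a function takes, the following token-based authorizer validates a bearer token and returns the IAM policy document API Gateway expects. The token store and field values are hypothetical placeholders; a real authorizer would validate a JWT or call your identity provider.

```python
# Hedged sketch of an API Gateway TOKEN Lambda authorizer.
# VALID_TOKENS is a placeholder for illustration only -- a real authorizer
# would verify a signed token or query an identity provider instead.
VALID_TOKENS = {"allow-me": "user-123"}  # hypothetical token -> principal map

def build_policy(principal_id, effect, method_arn):
    """Return the response document API Gateway expects from an authorizer."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,          # "Allow" or "Deny"
                "Resource": method_arn,
            }],
        },
        # Optional context is forwarded to the backend integration,
        # so the backend never has to map tokens to user data itself.
        "context": {"userId": principal_id},
    }

def handler(event, context):
    token = event.get("authorizationToken", "")
    user = VALID_TOKENS.get(token)
    if user is None:
        return build_policy("anonymous", "Deny", event["methodArn"])
    return build_policy(user, "Allow", event["methodArn"])
```

API Gateway caches the returned policy for a configurable TTL, so the authorizer does not run on every request for the same token.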
APIs are often targeted by attackers because of the operations they can perform and the valuable data they can obtain. There are various security best practices to defend against these attacks. From an authentication/authorization perspective, there are currently four mechanisms to authorize an API call within API Gateway:
• AWS_IAM authorization
• Amazon Cognito user pools
• API Gateway Lambda authorizers
• Resource policies

Primarily, you want to understand if and how any of these mechanisms are implemented. For consumers who are currently located within your AWS environment, or who have the means to retrieve AWS Identity and Access Management (IAM) temporary credentials to access your environment, you can use AWS_IAM authorization and add least-privileged permissions to the respective IAM role to securely invoke your API. The following diagram illustrates using AWS_IAM authorization in this context:

Figure 10: AWS_IAM authorization

If you have an existing Identity Provider (IdP), you can use an API Gateway Lambda authorizer to invoke a Lambda function that authenticates and validates a given user against your IdP. You can use a Lambda authorizer for custom validation logic based on identity metadata. A Lambda authorizer can send additional information derived from a bearer token or request context values to your backend service. For example, the authorizer can return a map containing user IDs, user names, and scope. By using Lambda authorizers, your backend does not need to map authorization tokens to user-centric data, allowing you to limit the exposure of such information to just the authorization function.

Figure 6: API Gateway Lambda authorizer

If you don't have an IdP, you can leverage Amazon Cognito user pools to either provide built-in user management or integrate with external identity providers such as Facebook, Twitter, Google+, and Amazon. This is commonly seen
in the mobile backend scenario, where users authenticate by using existing accounts in social media platforms while also being able to register and sign in with their email address or username. This approach also provides granular authorization through OAuth scopes.

Figure 7: Amazon Cognito user pools

API Gateway API keys are not a security mechanism and should not be used for authorization unless it's a public API. They should be used primarily to track a consumer's usage across your API, and can be used in addition to the authorizers previously mentioned in this section.

When using Lambda authorizers, we strictly advise against passing credentials or any sort of sensitive data via query string parameters or headers; otherwise, you may open your system up to abuse.

Amazon API Gateway resource policies are JSON policy documents that can be attached to an API to control whether a specified AWS principal can invoke the API. This mechanism allows you to restrict API invocations by:
• Users from a specified AWS account, or any AWS IAM identity
• Specified source IP address ranges or CIDR blocks
• Specified virtual private clouds (VPCs) or VPC endpoints (in any account)

With resource policies, you can restrict common scenarios, such as only allowing requests coming from known clients with a specific IP range or from another AWS account. If you plan to restrict requests coming from private IP addresses, it's recommended to use API Gateway private endpoints instead.

Figure 14: Amazon API Gateway resource policy based on IP CIDR

With private endpoints, API Gateway restricts access to services and resources inside your VPC, or those connected via Direct Connect to your own data centers. Combining both private endpoints and resource policies, an API can be limited to specific resource invocations within a specific private IP range. This combination is mostly used on internal
microservices, where they may be in the same or another account.

When it comes to large deployments and multiple AWS accounts, organizations can leverage cross-account Lambda authorizers in API Gateway to reduce maintenance and centralize security practices. For example, API Gateway has the ability to use Amazon Cognito user pools in a separate account. Lambda authorizers can also be created and managed in a separate account and then reused across multiple APIs managed by API Gateway. Both scenarios are common for deployments with multiple microservices that need to standardize authorization practices across APIs.

Figure 15: API Gateway cross-account authorizers

SEC 2: How are you managing the security boundaries of your serverless application?

With Lambda functions, it's recommended that you follow least-privileged access and only allow the access needed to perform a given operation. Attaching a role with more permissions than necessary can open up your systems for abuse. In this security context, having smaller functions that perform scoped activities contributes to a more well-architected serverless application. Regarding IAM roles, sharing an IAM role between more than one Lambda function will likely violate least-privileged access.

Detective Controls

Log management is an important part of a well-architected design, for reasons ranging from security and forensics to regulatory or legal requirements. It is equally important that you track vulnerabilities in application dependencies, because attackers can exploit known vulnerabilities found in dependencies regardless of which programming language is used. For application dependency vulnerability scans, there are several commercial and open-source solutions, such as OWASP Dependency-Check, that can integrate within your CI/CD pipeline. It's important to include all your dependencies, including AWS SDKs, as part of your version control software repository.

Infrastructure Protection

For scenarios where your serverless application needs to interact with other components deployed in a virtual private cloud (VPC), or with applications residing on premises, it's important to ensure that networking boundaries are considered. Lambda functions can be configured to access resources within a VPC. Control traffic at all layers, as described in the AWS Well-Architected Framework. For workloads that require outbound traffic filtering due to compliance reasons, proxies can be used in the same manner that they are applied in non-serverless architectures.

Enforcing networking boundaries solely at the application code level, by giving instructions as to what resources one could access, is not recommended, due to separation of concerns. For service-to-service communication, favor dynamic authentication, such as temporary credentials with AWS IAM, over static keys. API Gateway and AWS AppSync both support IAM authorization, which makes them well suited to protect communication to and from AWS services.

Data Protection

Consider enabling API Gateway access logs, and selectively choose only what you need, since the logs might contain sensitive data, depending on your serverless application design. For this reason, we recommend that you encrypt any sensitive data traversing your serverless application.

API Gateway and AWS AppSync employ TLS across all communications, clients, and integrations. Although HTTP payloads are encrypted in transit, request paths and query strings that are part of a URL might not be. Therefore, sensitive data can be accidentally exposed via CloudWatch Logs if sent to standard output.

Additionally, malformed or intercepted input can be used as an attack vector, either to gain access to a system or to cause a malfunction. Sensitive data should be protected at all times, in as many layers as possible, as discussed in detail in the AWS Well-Architected Framework. The recommendations in that whitepaper still apply here. With regard to
API Gateway, sensitive data should either be encrypted at the client side before it becomes part of an HTTP request, or be sent as a payload as part of an HTTP POST request. That also includes encrypting any headers that might contain sensitive data prior to making a given request.

Concerning Lambda functions, or any integrations that API Gateway may be configured with, sensitive data should be encrypted before any processing or data manipulation. This prevents data leakage if such data gets exposed in persistent storage, or in standard output that is streamed and persisted by CloudWatch Logs. In the scenarios described earlier in this document, Lambda functions would persist encrypted data in either DynamoDB, Amazon ES, or Amazon S3, along with encryption at rest.

We strictly advise against sending, logging, and storing unencrypted sensitive data, either as part of HTTP request paths/query strings or in the standard output of a Lambda function. Enabling logging in API Gateway where sensitive data is unencrypted is also discouraged. As mentioned in the Detective Controls subsection, consult your compliance team before enabling API Gateway logging in such cases.

SEC 3: How do you implement application security in your workload?
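A first line of application security discussed in this section is validating and sanitizing inbound events beyond API Gateway's basic JSON Schema request validation. As a hedged, standard-library-only sketch of such application-specific deep validation (the field names and rules are hypothetical, not from the whitepaper):

```python
# Hedged sketch: application-level validation of an inbound payload,
# applied after API Gateway's basic request validation has passed.
# REQUIRED_FIELDS and the business rule on "quantity" are illustrative only.
import json

REQUIRED_FIELDS = {"orderId": str, "quantity": int}  # hypothetical contract

def validate_order(body: str) -> dict:
    """Parse and validate an order payload; raise ValueError on bad input."""
    try:
        payload = json.loads(body)
    except json.JSONDecodeError as exc:
        raise ValueError(f"body is not valid JSON: {exc}") from None
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            raise ValueError(f"missing required field: {field}")
        if not isinstance(payload[field], expected_type):
            raise ValueError(f"{field} must be of type {expected_type.__name__}")
    # Example of a deeper, business-specific rule beyond type checking.
    if payload["quantity"] <= 0:
        raise ValueError("quantity must be positive")
    return payload
```

In practice this logic might live in a shared library, a dedicated Lambda function, or a validation framework, as the section notes; the point is that schema validation at the gateway and deep validation in the application are complementary layers.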
Review security awareness documents authored by AWS, security bulletins, and industry threat intelligence, as covered in the AWS Well-Architected Framework. OWASP guidelines for application security still apply.

Validate and sanitize inbound events, and perform a security code review as you normally would for non-serverless applications. For API Gateway, set up basic request validation as a first step to ensure that the request adheres to the configured JSON Schema request model, as well as any required parameters in the URI, query string, or headers. Application-specific deep validation should be implemented, whether as a separate Lambda function, library, framework, or service.

Store your secrets, such as database passwords or API keys, in a secrets manager that allows for rotation and secure, audited access. AWS Secrets Manager allows fine-grained policies for secrets, including auditing.

Key AWS Services

Key AWS services for security are Amazon Cognito, IAM, Lambda, CloudWatch Logs, AWS CloudTrail, AWS CodePipeline, Amazon S3, Amazon ES, DynamoDB, and Amazon Virtual Private Cloud (Amazon VPC).

Resources

Refer to the following resources to learn more about our best practices for security.

Documentation & Blogs
• IAM role for Lambda function with Amazon S3 example 23
• API Gateway Request Validation 24
• API Gateway Lambda Authorizers 25
• Securing API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway 26
• Configuring VPC Access for AWS Lambda 27
• Filtering VPC outbound traffic with Squid Proxies 28
• Using AWS Secrets Manager with Lambda
• Auditing Secrets with AWS Secrets Manager
• OWASP Input validation cheat sheet
• AWS Serverless Security Workshop

Whitepapers
• OWASP Secure Coding Best Practices 29
• AWS Security Best Practices 30

Partner Solutions
• PureSec Serverless Security
• Twistlock Serverless Security 31
• Protego Serverless Security
• Snyk – Commercial Vulnerability
DB and Dependency Check 32
• Using HashiCorp Vault with Lambda & API Gateway

Third-Party Tools
• OWASP Vulnerability Dependency Check 33

Reliability Pillar

The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.

Definition

There are three best practice areas for reliability in the cloud:
• Foundations
• Change management
• Failure management

To achieve reliability, a system must have a well-planned foundation and monitoring in place, with mechanisms for handling changes in demand or requirements, or potentially defending against an unauthorized denial-of-service attack. The system should be designed to detect failure and, ideally, automatically heal itself.

Best Practices

Foundations

REL 1: How are you regulating inbound request rates?

Throttling

In a microservices architecture, API consumers may be in separate teams or even outside the organization. This creates a vulnerability due to unknown access patterns, as well as the risk of consumer credentials being compromised. The service API can potentially be affected if the number of requests exceeds what the processing logic or backend can handle. Additionally, events that trigger new transactions, such as an update in a database row or new objects being added to an S3 bucket as part of the API, will trigger additional executions throughout a serverless application.

Throttling should be enabled at the API level to enforce the access patterns established by a service contract. Defining a request access pattern strategy is fundamental to establishing how a consumer should use a service, whether at the resource or global level. Returning the appropriate HTTP status codes within your API (such as a 429 for throttling) helps consumers plan
for throttled access by implementing backoff and retries accordingly.

For more granular throttling and usage metering, issuing API keys to consumers with usage plans, in addition to global throttling, enables API Gateway to enforce quotas and detect unexpected access patterns. API keys also simplify the process for administrators to cut off access if an individual consumer is making suspicious requests.

A common way to capture API keys is through a developer portal. This provides you, as the service provider, with additional metadata associated with the consumers and requests. You may capture the application, contact information, and business area or purpose, and store this data in a durable data store, such as DynamoDB. This gives you additional validation of your consumers and provides traceability of logging with identities, so that you can contact consumers about breaking changes, upgrades, or issues.

As discussed in the security pillar, API keys are not a security mechanism to authorize requests, and therefore should only be used with one of the authorization options available within API Gateway.

Concurrency controls are sometimes necessary to protect specific workloads against service failure, as they may not scale as rapidly as Lambda. Concurrency controls enable you to set how many concurrent invocations of a particular Lambda function are allowed, at the individual function level. Lambda invocations that exceed the concurrency set for an individual function will be throttled by the AWS Lambda service, and the result will vary depending on the event source: synchronous invocations return an HTTP 429 error, asynchronous invocations are queued and retried, and stream-based event sources retry up to their record expiration time.

Figure 16: AWS Lambda concurrency controls

Controlling concurrency is particularly useful for the following scenarios:
• Sensitive backend or integrated systems
that may have scaling limitations
• Database connection pool restrictions, such as a relational database that may impose concurrency limits
• Critical-path services: prioritizing higher-priority Lambda functions, such as authorization, over lower-priority functions (for example, back office) against limits in the same account
• The ability to disable a Lambda function (concurrency = 0) in the event of anomalies
• Limiting the desired execution concurrency to protect against Distributed Denial of Service (DDoS) attacks

Concurrency controls for a Lambda function also limit its ability to scale beyond the concurrency set, and draw from your account's reserved concurrency pool. For asynchronous processing, use Kinesis Data Streams to effectively control concurrency with a single shard, as opposed to Lambda function concurrency control. This gives you the flexibility to increase the number of shards, or the parallelization factor, to increase the concurrency of your Lambda function.

Figure 8: Concurrency controls for synchronous and asynchronous requests

REL 2: How are you building resiliency into your serverless application?
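Resiliency on the consumer side starts with honoring the throttling signals described above: a client receiving HTTP 429 should back off and retry rather than hammer the API. A minimal sketch of capped exponential backoff with full jitter follows; in production the retry behavior built into the AWS SDKs usually covers this, so the code only illustrates the shape of the logic.

```python
# Hedged sketch: capped exponential backoff with full jitter for retrying
# throttled calls. ThrottledError stands in for whatever exception your
# client raises on an HTTP 429 response.
import random
import time

class ThrottledError(Exception):
    """Raised by the callable when the service responds with HTTP 429."""

def call_with_backoff(fn, max_attempts=5, base_delay=0.1, max_delay=2.0,
                      sleep=time.sleep):
    """Invoke fn(), retrying on ThrottledError; re-raise after max_attempts."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise  # give up: budget exhausted
            # Full jitter: pick a random delay in [0, min(cap, base * 2^attempt)]
            # so retrying clients do not synchronize into waves.
            delay = random.uniform(0, min(max_delay, base_delay * (2 ** attempt)))
            sleep(delay)
```

The injectable `sleep` parameter is a testing convenience; the jitter keeps many throttled consumers from retrying in lockstep, which matters when a burst caused the throttling in the first place.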
Asynchronous Calls and Events

Asynchronous calls reduce the latency of HTTP responses. Multiple synchronous calls, as well as long-running wait cycles, may result in timeouts and "locked" code that prevents retry logic. Event-driven architectures enable streamlining asynchronous executions of code, thus limiting consumer wait cycles. These architectures are commonly implemented asynchronously using queues, streams, pub/sub, webhooks, state machines, and event rule managers across multiple components that together perform a business function.

With asynchronous calls, the user experience is decoupled. Instead of blocking the entire experience until the overall execution is completed, frontend systems receive a reference/job ID as part of their initial request, and then subscribe to real-time changes or, in legacy systems, use an additional API to poll the status. This decoupling allows the frontend to be more efficient by using event loops, parallelism, or concurrency techniques while making such requests, and by lazily loading parts of the application when a response is partially or completely available.

The frontend becomes a key element in asynchronous calls, as it becomes more robust with custom retries and caching. It can halt an in-flight request if no response has been received within an acceptable SLA, whether that is caused by an anomaly, a transient condition, networking, or a degraded environment.

Alternatively, when synchronous calls are necessary, it's recommended, at a minimum, to ensure that the total execution time doesn't exceed the API Gateway or AWS AppSync maximum timeout. Use an external service (for example, AWS Step Functions) to coordinate business transactions across multiple services, to control state, and to handle errors that occur along the request lifecycle.

Change Management

This is covered in the AWS Well-Architected Framework, and specific information on serverless can be found in the operational excellence pillar.

Failure Management

Certain parts of a serverless application are dictated by asynchronous calls to various components in an event-driven fashion, such as by pub/sub and other patterns. When asynchronous calls fail, they should be captured and retried whenever possible; otherwise, data loss can occur, resulting in a degraded customer experience.

For Lambda functions, build retry logic into your Lambda queries to ensure that spiky workloads don't overwhelm your backend. Use structured logging, as covered in the operational excellence pillar, to log retries, including contextual information about errors, so they can be captured as a custom metric. Use Lambda Destinations to send contextual information about errors, stack traces, and retries into dedicated Dead Letter Queues (DLQs), such as SNS topics and SQS queues. You also want to develop a plan to poll by a separate mechanism and redrive these failed events back to their intended service.

AWS SDKs provide backoff and retry mechanisms by default when talking to other AWS services, and these are sufficient in most cases. However, review and tune them to suit your needs, especially HTTP keep-alive, connection, and socket timeouts.

Whenever possible, use Step Functions to minimize the amount of custom try/catch, backoff, and retry logic within your serverless applications. For more information, see the cost optimization pillar section. Use the Step Functions integration to save failed executions and their state into a DLQ.

Figure 9: Step Functions state machine with DLQ step

Partial failures can occur in non-atomic operations such as PutRecords (Kinesis) and BatchWriteItem (DynamoDB), since they return success if at least one record has been ingested successfully. Always inspect the response when using such operations, and programmatically deal with partial failures. When consuming from Kinesis or DynamoDB streams, use Lambda error handling controls, such as maximum record age, maximum retry attem
pts, DLQ on failure, and bisect batch on function error, to build additional resiliency into your application.

For synchronous parts that are transaction-based and depend on certain guarantees and requirements, rolling back failed transactions as described by the Saga pattern 34 can also be achieved by using Step Functions state machines, which will decouple and simplify the logic of your application.

Figure 10: Saga pattern in Step Functions by Yan Cui

Limits

In addition to what is covered in the Well-Architected Framework, consider reviewing limits for burst and spiky use cases. For example, API Gateway and Lambda have different limits for steady and burst request rates. Use scaling layers and asynchronous patterns when possible, and perform load tests to ensure that your current account limits can sustain your actual customer demand.

Key AWS Services

Key AWS services for reliability are AWS Marketplace, Trusted Advisor, CloudWatch Logs, CloudWatch, API Gateway, Lambda, X-Ray, Step Functions, Amazon SQS, and Amazon SNS.

Resources

Refer to the following resources to learn more about our best practices for reliability.

Documentation & Blogs
• Limits in Lambda 35
• Limits in API Gateway 36
• Limits in Kinesis Streams 37
• Limits in DynamoDB 38
• Limits in Step Functions 39
• Error handling patterns 40
• Serverless testing with Lambda 41
• Monitoring Lambda Functions Logs 42
• Versioning Lambda 43
• Stages in API Gateway 44
• API Retries in AWS 45
• Step Functions error handling 46
• X-Ray 47
• Lambda DLQ 48
• Error handling patterns with API Gateway and Lambda 49
• Step Functions Wait state 50
• Saga pattern 51
• Applying Saga pattern via Step Functions 52
• Serverless Application Repository App – DLQ Redriver
• Troubleshooting retry and timeout issues with AWS SDK
• Lambda resiliency controls for stream processing
• Lambda Destinations
• Serverless Application
Repository App – Event Replay
• Serverless Application Repository App – Event Storage and Backup

Whitepapers
• Microservices on AWS 53

Performance Efficiency Pillar

The performance efficiency pillar focuses on the efficient use of computing resources to meet requirements, and on maintaining that efficiency as demand changes and technologies evolve.

Definition

Performance efficiency in the cloud is composed of four areas:
• Selection
• Review
• Monitoring
• Tradeoffs

Take a data-driven approach to selecting a high-performance architecture. Gather data on all aspects of the architecture, from the high-level design to the selection and configuration of resource types. By reviewing your choices on a cyclical basis, you will ensure that you are taking advantage of the continually evolving AWS Cloud. Monitoring will ensure that you are aware of any deviation from expected performance, and can take action on it. Finally, you can make tradeoffs in your architecture to improve performance, such as using compression or caching, or by relaxing consistency requirements.

PER 1: How have you optimized the performance of your serverless application?
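One optimization that recurs in this pillar is reusing the Lambda execution context: performing expensive initialization in global scope so that warm invocations skip it. A hedged sketch of the pattern, where `ConnectionPool` is a hypothetical stand-in for real setup work such as opening a database connection:

```python
# Hedged sketch of Lambda execution-context reuse: expensive setup runs at
# module import (cold start), and warm invocations of the handler reuse it.
# ConnectionPool is illustrative only, standing in for a real client/pool.
INIT_COUNT = {"n": 0}  # instrumentation so the reuse is observable

class ConnectionPool:
    def __init__(self):
        # In a real function this would open network connections, load
        # configuration, etc. -- work you want to pay for only once.
        INIT_COUNT["n"] += 1

    def query(self, sql):
        return f"result-for:{sql}"  # placeholder for a real query

# Global scope: executed once per execution environment, not per invocation.
POOL = ConnectionPool()

def handler(event, context):
    # Warm invocations reuse POOL instead of reconnecting; production code
    # should also verify the connection is still valid before using it.
    return POOL.query(event.get("sql", "SELECT 1"))
```

Calling `handler` repeatedly in the same environment leaves `INIT_COUNT["n"]` at 1, which is exactly the cost profile you want from container reuse.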
Selection

Run performance tests on your serverless application using steady and burst rates. Using the results, try tuning capacity units, and load test after changes, to help you select the best configuration:
• Lambda: Test different memory settings, as CPU, network, and storage IOPS are allocated proportionally.
• API Gateway: Use edge-optimized endpoints for geographically dispersed customers. Use regional endpoints for regional customers and when using other AWS services within the same Region.
• DynamoDB: Use on-demand mode for unpredictable application traffic; otherwise, use provisioned mode for consistent traffic.
• Kinesis: Use enhanced fan-out for a dedicated input/output channel per consumer in multiple-consumer scenarios. Use an extended batch window for low-volume transactions with Lambda.

Configure VPC access for your Lambda functions only when necessary. Set up a NAT gateway if your VPC-enabled Lambda function needs access to the internet. As covered in the Well-Architected Framework, configure your NAT gateway across multiple Availability Zones for high availability and performance.

API Gateway edge-optimized APIs provide a fully managed CloudFront distribution to optimize access for geographically dispersed consumers. API requests are routed to the nearest CloudFront Point of Presence (POP), which typically improves connection time.

Figure 11: Edge-optimized API Gateway deployment

An API Gateway regional endpoint doesn't provide a CloudFront distribution, and enables HTTP/2 by default, which helps reduce overall latency when requests originate from the same Region. Regional endpoints also allow you to associate your own Amazon CloudFront distribution or an existing CDN.

Figure 21: Regional endpoint API Gateway deployment

This table can help you decide whether to deploy an edge-optimized API or a regional API endpoint:

                                                          Edge-optimized API   Regional API endpoint
API is accessed across Regions                                    X
Includes API Gateway-managed CloudFront distribution              X
API is accessed within the same Region                                                   X
Least request latency when the API is accessed from
the same Region in which it is deployed                                                  X
Ability to associate your own CloudFront distribution                                    X

This decision tree can help you decide when to deploy your Lambda function in a VPC:

Figure 12: Decision tree for deploying a Lambda function in a VPC

Optimize

As a serverless architecture grows organically, there are certain mechanisms that are commonly used across a variety of workload profiles. Despite performance testing, design tradeoffs should be considered to increase your application's performance, always keeping your SLA and requirements in mind.

API Gateway and AWS AppSync caching can be enabled to improve performance for applicable operations. DAX can improve read responses significantly, and Global and Local Secondary Indexes can prevent DynamoDB full-table scan operations. These details and resources were described in the Mobile Backend scenario.

API Gateway content encoding allows API clients to request that the payload be compressed before being sent back in the response to an API request. This reduces the number of bytes that are sent from API Gateway to API clients, and decreases the time it takes to transfer the data. You can enable content encoding in the API definition, and you can also set the minimum response size that triggers compression. By default, APIs do not have content encoding support enabled.

Set your function timeout a few seconds higher than the average execution time, to account for any transient issues in downstream services used in the communication path. This also applies when working with Step Functions activities, tasks, and SQS message visibility.

Choosing a default memory setting and timeout in AWS Lambda may have an undesired effect on performance, cost, and operational procedures. Setting the timeout much higher
than the average execution time may cause functions to execute for longer upon code malfunction, resulting in higher costs and possibly reaching concurrency limits, depending on how such functions are invoked. Setting a timeout that equals one successful function execution may cause a serverless application to abruptly halt an execution should a transient networking issue or an abnormality in downstream services occur. Setting a timeout without performing load testing, and, more importantly, without considering upstream services, may result in errors whenever any part reaches its timeout first.

Follow best practices for working with Lambda functions, 54 such as container reuse, minimizing the deployment package size to its runtime necessities, and minimizing the complexity of your dependencies, including frameworks that may not be optimized for fast startup. The 99th percentile latency (P99) should always be taken into account, so that you do not impact the application SLA agreed with other teams.

For Lambda functions in a VPC, avoid DNS resolution of public host names of underlying resources in your VPC. For example, if your Lambda function accesses an Amazon RDS DB instance in your VPC, launch the instance with the not-publicly-accessible option.

After a Lambda function has executed, AWS Lambda maintains the execution context for some arbitrary time in anticipation of another Lambda function invocation. That allows you to use the global scope for one-off expensive operations, for example, establishing a database connection or any initialization logic. In subsequent invocations, you can verify whether it's still valid and reuse the existing connection.

Asynchronous Transactions

Because your customers expect more modern and interactive user interfaces, you can no longer sustain complex workflows using synchronous transactions. The more service interaction you need, the more you end up chaining calls, which may end up increasing the risk on service
stability as well as response time. Modern UI frameworks such as AngularJS, Vue.js, and React, together with asynchronous transactions and cloud-native workflows, provide a sustainable approach to meeting customer demand, as well as helping you decouple components and focus on process and business domains instead.

These asynchronous transactions (oftentimes described as an event-driven architecture) kick off downstream, choreographed events in the cloud instead of constraining clients to lock-and-wait (I/O blocking) for a response. Asynchronous workflows handle a variety of use cases including, but not limited to: data ingestion, ETL operations, and order/request fulfillment. In these use cases, data is processed as it arrives and is retrieved as it changes. We outline best practices for two common asynchronous workflows where you can learn a few optimization patterns for integration and async processing.

Serverless Data Processing

In a serverless data processing workflow, data is ingested from clients into Kinesis (using the Kinesis agent, SDK, or API) and arrives in Amazon S3. New objects kick off a Lambda function that is automatically executed. This function is commonly used to transform or partition data for further processing, with the results possibly stored in other destinations, such as DynamoDB or another S3 bucket where data is in its final format. As you may have different transformations for different data types, we recommend granularly splitting the transformations into different Lambda functions for optimal performance. With this approach, you have the flexibility to run data transformations in parallel and gain speed as well as cost savings.

Figure 23: Asynchronous data ingestion

Kinesis Data Firehose offers native data transformations that can be used as an alternative to Lambda where no additional logic is necessary, for transforming records in Apache logs or system logs to CSV or JSON, and JSON to Parquet or ORC.

Serverless Event Submission with
Status Updates

Suppose you have an e-commerce site and a customer submits an order that kicks off an inventory deduction and shipment process, or an enterprise application that submits a large query that may take minutes to respond. The processes required to complete this common transaction may require multiple service calls that may take a couple of minutes to complete. Within those calls, you want to safeguard against potential failures by adding retries and exponential backoffs. However, that can cause a suboptimal user experience for whoever is waiting for the transaction to complete.

For long and complex workflows similar to this, you can integrate API Gateway or AWS AppSync with Step Functions that, upon new authorized requests, will start this business workflow. Step Functions responds immediately with an execution ID to the caller (mobile app, SDK, web service, etc.). For legacy systems, you can use the execution ID to poll Step Functions for the business workflow status via another REST API. With WebSockets, whether you're using REST or GraphQL, you can receive business workflow status in real time by providing updates in every step of the workflow.

Figure 24: Asynchronous workflow with Step Functions state machines

Another common scenario is integrating API Gateway directly with SQS or Kinesis as a scaling layer. A Lambda function would only be necessary if additional business information or a custom request ID format is expected from the caller.

Figure 25: Asynchronous workflow using a queue as a scaling layer

In this second example, SQS serves multiple purposes:

1. Storing the request record durably is important because the client can confidently proceed throughout the workflow, knowing that the request will eventually be processed.
2. Upon a burst of events that may temporarily overwhelm the backend, the request can be polled for
processing when resources become available.

In the first example, which has no queue, Step Functions stores the data durably, without the need for a queue or state-tracking data sources. In both examples, the best practice is to pursue an asynchronous workflow after the client submits the request, and to avoid blocking on the resulting response if completion can take several minutes.

With WebSockets, AWS AppSync provides this capability out of the box via GraphQL subscriptions. With subscriptions, an authorized client can listen for the data mutations they're interested in. This is ideal for data that is streaming or may yield more than a single response. With AWS AppSync, as status updates change in DynamoDB, clients can automatically subscribe and receive updates as they occur, which is the perfect pattern for when data drives the user interface.

Figure 26: Asynchronous updates via WebSockets with AWS AppSync and GraphQL

Webhooks can be implemented with SNS topic HTTP subscriptions. Consumers can host an HTTP endpoint that SNS calls back via a POST method upon an event (for example, a data file arriving in Amazon S3). This pattern is ideal when the clients are configurable, such as another microservice, which could host an endpoint. Alternatively, Step Functions supports callbacks, where a state machine blocks until it receives a response for a given task.

Figure 27: Asynchronous notification via webhook with SNS

Lastly, polling could be costly from both a cost and a resource perspective, due to multiple clients constantly polling an API for status. If polling is the only option due to environment constraints, it's a best practice to establish SLAs with the clients to limit the number of "empty polls".

Figure 28: Client polling for updates on a recently made transaction

For example, if a large data warehouse query takes an average of two minutes for a response, the client should poll the API after two minutes, with exponential backoff if the data is not available. There are two common patterns to ensure that clients aren't polling more frequently than expected: throttling, and a timestamp for when it is safe to poll again. For timestamps, the system being polled can return an extra field with a timestamp or time period indicating when it is safe for the consumer to poll once again. This approach follows an optimistic scenario where the consumer will respect and use this wisely; in the event of abuse, you can also employ throttling for a more complete implementation.

Review

See the AWS Well-Architected Framework whitepaper for best practices in the review area for performance efficiency that apply to serverless applications.

Monitoring

See the AWS Well-Architected Framework whitepaper for best practices in the monitoring area for performance efficiency that apply to serverless applications.

Tradeoffs

See the AWS Well-Architected Framework whitepaper for best practices in the tradeoffs area for performance efficiency that apply to serverless applications.

Key AWS Services

Key AWS services for performance efficiency are DynamoDB Accelerator, API Gateway, Step Functions, NAT gateway, Amazon VPC, and Lambda.

Resources

Refer to the following resources to learn more about our best practices for performance efficiency.

Documentation & Blogs

• AWS Lambda FAQs 55
• Best Practices for Working with AWS Lambda Functions 56
• AWS Lambda: How It Works 57
• Understanding Container Reuse in AWS Lambda 58
• Configuring a Lambda Function to Access Resources in an Amazon VPC 59
• Enable API Caching to Enhance Responsiveness 60
• DynamoDB:
Global Secondary Indexes 61
• Amazon DynamoDB Accelerator (DAX) 62
• Developer Guide: Kinesis Streams 63
• Java SDK: Performance improvement configuration
• Node.js SDK: Enabling HTTP Keep-Alive
• Node.js SDK: Improving Imports
• Using Amazon SQS queues and AWS Lambda for high throughput
• Increasing stream processing performance with enhanced fan-out
• Lambda Power Tuning
• When to use Amazon DynamoDB on-demand and provisioned mode
• Analyzing Log Data with Amazon CloudWatch Logs Insights
• Integrating multiple data sources with AWS AppSync
• Step Functions Service Integrations
• Caching patterns
• Caching Serverless Applications
• Best Practices for Amazon Athena and AWS Glue

Cost Optimization Pillar

The cost optimization pillar includes the continual process of refinement and improvement of a system over its entire lifecycle. From the initial design of your first proof of concept to the ongoing operation of production workloads, adopting the practices in this document will enable you to build and operate cost-aware systems that achieve business outcomes and minimize costs, allowing your business to maximize its return on investment.

Definition

There are four best practice areas for cost optimization in the cloud:

• Cost-effective resources
• Matching supply and demand
• Expenditure awareness
• Optimizing over time

As with the other pillars, there are tradeoffs to consider. For example, do you want to optimize for speed to market or for cost?
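One way to ground that tradeoff in numbers rather than guesswork is to estimate cost from empirical measurements before optimizing. The following is a minimal sketch, not an official pricing calculator: the 100 ms rounding reflects the Lambda billing increment this paper describes, and the rate constants are illustrative assumptions rather than current AWS prices.

```python
import math

# Illustrative rates only; assumptions, not current AWS prices.
PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_REQUEST = 0.0000002  # i.e., $0.20 per million requests


def monthly_lambda_cost(memory_mb, avg_duration_ms, invocations):
    """Estimate monthly Lambda cost, rounding duration up to the
    100 ms billing increment described in this paper."""
    billed_ms = math.ceil(avg_duration_ms / 100) * 100
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000) * invocations
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST


# A faster execution at the same memory setting costs proportionally less:
slow = monthly_lambda_cost(512, 950, 1_000_000)  # billed as 1000 ms
fast = monthly_lambda_cost(512, 180, 1_000_000)  # billed as 200 ms
```

Running the two estimates side by side makes the performance pillar's point in cost terms: shaving average duration from 950 ms to 180 ms at 512 MB cuts the compute portion of the bill by a factor of five under this billing model.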
In some cases it's best to optimize for speed: going to market quickly, shipping new features, or simply meeting a deadline, rather than investing in upfront cost optimization. Design decisions are sometimes guided by haste as opposed to empirical data, as the temptation always exists to overcompensate "just in case" rather than spend time benchmarking for the most cost-optimal deployment. This often leads to drastically overprovisioned and under-optimized deployments.

The following sections provide techniques and strategic guidance for the initial and ongoing cost optimization of your deployment. Generally, serverless architectures tend to reduce costs because some of the services, such as AWS Lambda, don't cost anything while they're idle. However, following certain best practices and making tradeoffs will help you reduce the cost of these solutions even more.

Best Practices

COST 1: How do you optimize your costs?

Cost-Effective Resources

Serverless architectures are easier to manage in terms of correct resource allocation. Due to their pay-per-value pricing model and scaling based on demand, serverless architectures effectively reduce the capacity planning effort. As covered in the operational excellence and performance pillars, optimizing your serverless application has a direct impact on the value it produces and on its cost. As Lambda proportionally allocates CPU, network, and storage IOPS based on memory, the faster the execution, the cheaper and more valuable your function becomes, due to the 100 ms billing increment.

Matching Supply and Demand

The AWS serverless architecture is designed to scale based on demand, so there are no applicable practices to be followed.

Expenditure Awareness

As covered in the AWS Well-Architected Framework, the increased flexibility and agility that the cloud enables encourages innovation and fast-paced development and deployment. It eliminates the manual processes and time associated with
provisioning on-premises infrastructure, including identifying hardware specifications, negotiating price quotations, managing purchase orders, scheduling shipments, and then deploying the resources.

As your serverless architecture grows, the number of Lambda functions, APIs, stages, and other assets will multiply. Most of these architectures need to be budgeted and forecasted in terms of costs and resource management; tagging can help you here. You can allocate costs from your AWS bill to individual functions and APIs, and obtain a granular view of your costs per project in AWS Cost Explorer. A good implementation is to share the same key-value tag programmatically across assets that belong to the project, and create custom reports based on the tags that you have created. This feature will help you not only allocate your costs but also identify which resources belong to which projects.

Optimizing Over Time

See the AWS Well-Architected Framework whitepaper for best practices in the optimizing over time area for cost optimization that apply to serverless applications.

Logging Ingestion and Storage

AWS Lambda uses CloudWatch Logs to store the output of executions, both to identify and troubleshoot problems with executions and to monitor the serverless application. This impacts cost in the CloudWatch Logs service in two dimensions: ingestion and storage.

Set appropriate logging levels and remove unnecessary logging information to optimize log ingestion. Use environment variables to control the application logging level, and sample logging in DEBUG mode to ensure you have additional insight when necessary. Set log retention periods for new and existing CloudWatch Logs groups. For log archival, export logs and set cost-effective storage classes that best suit your needs.

Direct Integrations

If your Lambda function is not performing custom logic while integrating with other AWS services, chances are that it may be unnecessary. API
Gateway, AWS AppSync, Step Functions, EventBridge, and Lambda Destinations can integrate directly with a number of services, providing you more value with less operational overhead. Most public serverless applications provide an API with an agnostic implementation of the contract provided, as described in the Microservices scenario.

An example scenario where a direct integration is a better fit is ingesting clickstream data through a REST API.

Figure 13: Sending data to Amazon S3 using Kinesis Data Firehose

In this scenario, API Gateway executes a Lambda function that simply ingests the incoming record into Kinesis Data Firehose, which subsequently batches records before storing them in an S3 bucket. As no additional logic is necessary in this example, we can use an API Gateway service proxy to integrate directly with Kinesis Data Firehose.

Figure 14: Reducing the cost of sending data to Amazon S3 by implementing an AWS service proxy

With this approach, we remove the cost of using Lambda and its unnecessary invocations by implementing the AWS service proxy within API Gateway. As a tradeoff, this might introduce some extra complexity if multiple shards are necessary to meet the ingestion rate. If latency sensitive, you can stream data directly to your Kinesis Data Firehose by having the correct credentials, at the expense of abstraction, contract, and API features.

Figure 15: Reducing the cost of sending data to Amazon S3 by streaming directly using the Kinesis Data Firehose SDK

For scenarios where you need to connect with internal resources within your VPC or on premises, and no custom logic is required, use API Gateway private integration.

Figure 32: Amazon API Gateway private integration over Lambda in a VPC to access private resources

With this approach, API Gateway sends each incoming request to an internal Network Load Balancer that you own in your VPC, which can forward the
traffic to any backend, either in the same VPC or on premises, via IP address. This approach has both cost and performance benefits, as you don't need an additional hop to send requests to a private backend, with the added benefits of authorization, throttling, and caching mechanisms.

Another scenario is a fan-out pattern where Amazon SNS broadcasts messages to all of its subscribers. This approach requires additional application logic to filter messages and avoid unnecessary Lambda invocations.

Figure 33: Amazon SNS without message attribute filtering

SNS can filter events based on message attributes and more efficiently deliver the message to the correct subscriber.

Figure 34: Amazon SNS with message attribute filtering

Another example is long-running processing tasks where you may need to wait for task completion before proceeding to the next step. This wait state may be implemented within the Lambda code; however, it's far more efficient to either transform the workflow into asynchronous processing using events or implement the waiting state using Step Functions. For example, in the following image, we poll an AWS Batch job and review its state every 30 seconds to see if it has finished. Instead of coding this wait within the Lambda function, we implement a poll (GetJobStatus) + wait (Wait30Seconds) + decider (CheckJobStatus).

Figure 16: Implementing a wait state with AWS Step Functions

Implementing a wait state with Step Functions won't incur any further cost, as the pricing model for Step Functions is based on transitions between states, not on the time spent within a state.

Figure 17: Step Functions service integration synchronous wait

Depending on the integration you have to wait for, Step Functions can wait synchronously before moving to the next task, saving you an additional transition.

Code Optimization

As covered
in the performance pillar, optimizing your serverless application can effectively improve the value it produces per execution. The use of global variables to maintain connections to your data stores or other services and resources will increase performance and reduce execution time, which also reduces cost. For more information, see the performance pillar section.

An example where the use of managed service features can improve the value per execution is retrieving and filtering objects from Amazon S3, since fetching large objects from Amazon S3 requires higher memory for Lambda functions.

Figure 37: Lambda function retrieving a full S3 object

In the previous diagram, we can see that when retrieving large objects from Amazon S3, we might increase the memory consumption of the Lambda function and increase the execution time (so the function can transform, iterate, or collect the required data), and in some cases only part of this information is needed. This is represented with three columns in red (data not required) and one column in green (data required).

Using Athena SQL queries to gather the granular information needed for your execution reduces the retrieval time and the size of the object on which to perform transformations.

Figure 38: Lambda with Athena object retrieval

In the next diagram, we can see that by querying Athena to get the specific data, we reduce the size of the object retrieved. As an extra benefit, we can reuse that content, since Athena saves its query results in an S3 bucket, and invoke the Lambda function asynchronously as the results land in Amazon S3.

A similar approach could be used with S3 Select. S3 Select enables applications to retrieve only a subset of data from an object by using simple SQL expressions. As in the previous example with Athena, retrieving a smaller object from Amazon S3 reduces execution time and the memory used by the Lambda function.

    # Download and process all keys (200 seconds)
    for key in src_keys:
        response = s3_client.get_object(Bucket=src_bucket, Key=key)
        contents = response['Body'].read()
        for line in contents.split('\n')[:-1]:
            line_count += 1
            try:
                data = line.split(',')
                srcIp = data[0][:8]
                ...

    # Select IP address and keys (95 seconds)
    for key in src_keys:
        response = s3_client.select_object_content(
            Bucket=src_bucket, Key=key,
            expression="SELECT SUBSTR(obj._1, 1, 8), obj._2 FROM s3object as obj")
        contents = response['Body'].read()
        for line in contents:
            line_count += 1
            try:
                ...

Figure 18: Lambda performance statistics using Amazon S3 (200 seconds) vs S3 Select (95 seconds)

Resources

Refer to the following resources to learn more about our best practices for cost optimization.

Documentation & Blogs

• CloudWatch Logs Retention 64
• Exporting CloudWatch Logs to Amazon S3 65
• Streaming CloudWatch Logs to Amazon ES 66
• Defining wait states in Step Functions state machines 67
• Coca-Cola Vending Pass State Machine Powered by Step Functions 68
• Building high-throughput genomics batch workflows on AWS 69
• Simplify your Pub/Sub Messaging with Amazon SNS Message Filtering
• S3 Select and Glacier Select
• Lambda Reference Architecture for MapReduce
• Serverless Application Repository App – Auto-set CloudWatch Logs group retention
• Ten resources every Serverless Architect should know

Whitepaper

• Optimizing Enterprise Economics with Serverless Architectures 70

Conclusion

While serverless applications take the undifferentiated heavy lifting off developers, there are still important principles to apply. For reliability, by regularly testing failure pathways you will be more likely to catch errors before they reach production. For performance, starting backward from customer expectations will allow you to design for an optimal experience; there are a number of AWS tools to help optimize performance as well. For cost optimization, you can reduce unnecessary waste within your serverless application by
sizing resources in accordance with traffic demand, and improve value by optimizing your application. For operations, your architecture should strive toward automation in responding to events. Finally, a secure application will protect your organization's sensitive information assets and meet any compliance requirements at every layer.

The landscape of serverless applications is continuing to evolve, with the ecosystem of tooling and processes growing and maturing. As this occurs, we will continue to update this paper to help you ensure that your serverless applications are well architected.

Contributors

The following individuals and organizations contributed to this document:

• Adam Westrich: Sr Solutions Architect, Amazon Web Services
• Mark Bunch: Enterprise Solutions Architect, Amazon Web Services
• Ignacio Garcia Alonso: Solutions Architect, Amazon Web Services
• Heitor Lessa: Principal Serverless Lead, Well-Architected, Amazon Web Services
• Philip Fitzsimons: Sr Manager, Well-Architected, Amazon Web Services
• Dave Walker: Principal Specialist Solutions Architect, Amazon Web Services
• Richard Threlkeld: Sr Product Manager, Mobile, Amazon Web Services
• Julian Hambleton-Jones: Sr Solutions Architect, Amazon Web Services

Further Reading

For additional information, see the following:

• AWS Well-Architected Framework 71

Document Revisions

December 2019: Updates throughout for new features and evolution of best practice
November 2018: New scenarios for Alexa and Mobile and updates throughout to reflect new features and evolution of best practice
November 2017: Initial publication

Notes

1 https://aws.amazon.com/well-architected
2 http://d0.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf
3 https://github.com/alexcasalboni/aws-lambda-power-tuning
4 http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/BestPractices.html
5
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains.html
6 https://www.elastic.co/guide/en/elasticsearch/guide/current/scale.html
7 http://docs.aws.amazon.com/streams/latest/dev/kinesis-record-processor-scaling.html
8 https://d0.awsstatic.com/whitepapers/whitepaper-streaming-data-solutions-on-aws-with-amazon-kinesis.pdf
9 http://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecords.html
10 http://docs.aws.amazon.com/streams/latest/dev/kinesis-record-processor-duplicates.html
11 http://docs.aws.amazon.com/lambda/latest/dg/best-practices.html#stream-events
12 http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
13 http://docs.aws.amazon.com/apigateway/latest/developerguide/stage-variables.html
14 http://docs.aws.amazon.com/lambda/latest/dg/env_variables.html
15 https://github.com/awslabs/serverless-application-model
16 https://aws.amazon.com/blogs/aws/latency-distribution-graph-in-aws-x-ray/
17 http://docs.aws.amazon.com/lambda/latest/dg/lambda-x-ray.html
18 http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html
19 https://aws.amazon.com/blogs/compute/continuous-deployment-for-serverless-applications/
20 https://github.com/awslabs/aws-serverless-samfarm
21 https://d0.awsstatic.com/whitepapers/DevOps/practicing-continuous-integration-continuous-delivery-on-AWS.pdf
22 https://aws.amazon.com/serverless/developer-tools/
23 http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-create-iam-role.html
24 http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-method-request-validation.html
25 http://docs.aws.amazon.com/apigateway/latest/developerguide/use-custom-authorizer.html
26 https://aws.amazon.com/blogs/compute/secure-api-access-with-amazon-cognito-federated-identities-amazon-cognito-user-pools-and-amazon-api-gateway/
27 http://docs.aws.amazon.com/lambda/latest/dg/vpc.html
28 https://aws.amazon.com/pt/articles/using-squid-proxy-instances-for-web-service-access-in-amazon-vpc-another-example-with-aws-codedeploy-and-amazon-cloudwatch/
29 https://www.owasp.org/images/0/08/OWASP_SCP_Quick_Reference_Guide_v2.pdf
30 https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf
31 https://www.twistlock.com/products/serverless-security/
32 https://snyk.io/
33 https://www.owasp.org/index.php/OWASP_Dependency_Check
34 http://theburningmonk.com/2017/07/applying-the-saga-pattern-with-aws-lambda-and-step-functions/
35 http://docs.aws.amazon.com/lambda/latest/dg/limits.html
36 http://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html#api-gateway-limits
37 http://docs.aws.amazon.com/streams/latest/dev/service-sizes-and-limits.html
38 http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html
39 http://docs.aws.amazon.com/step-functions/latest/dg/limits.html
40 https://aws.amazon.com/blogs/compute/error-handling-patterns-in-amazon-api-gateway-and-aws-lambda/
41 https://aws.amazon.com/blogs/compute/serverless-testing-with-aws-lambda/
42 http://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-logs.html
43 http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html
44 http://docs.aws.amazon.com/apigateway/latest/developerguide/stages.html
45 http://docs.aws.amazon.com/general/latest/gr/api-retries.html
46 http://docs.aws.amazon.com/step-functions/latest/dg/tutorial-handling-error-conditions.html#using-state-machine-error-conditions-step-4
47 http://docs.aws.amazon.com/xray/latest/devguide/xray-services-lambda.html
48 http://docs.aws.amazon.com/lambda/latest/dg/dlq.html
49 https://aws.amazon.com/blogs/compute/error-handling-patterns-in-amazon-api-gateway-and-aws-lambda/
50 http://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-wait-state.html
51 http://microservices.io/patterns/data/saga.html
52 http://theburningmonk.com/2017/07/applying-the-saga-pattern-with-aws-lambda-and-step-functions/
53 https://d0.awsstatic.com/whitepapers/microservices-on-aws.pdf
54
http://docs.aws.amazon.com/lambda/latest/dg/best-practices.html
55 https://aws.amazon.com/lambda/faqs/
56 http://docs.aws.amazon.com/lambda/latest/dg/best-practices.html
57 http://docs.aws.amazon.com/lambda/latest/dg/lambda-introduction.html
58 https://aws.amazon.com/blogs/compute/container-reuse-in-lambda/
59 http://docs.aws.amazon.com/lambda/latest/dg/vpc.html
60 http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html
61 http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
62 https://aws.amazon.com/dynamodb/dax/
63 http://docs.aws.amazon.com/streams/latest/dev/amazon-kinesis-streams.html
64 http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SettingLogRetention.html
65 http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html
66 http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_ES_Stream.html
67 http://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-wait-state.html
68 https://aws.amazon.com/blogs/aws/things-go-better-with-step-functions/
69 https://aws.amazon.com/blogs/compute/building-high-throughput-genomics-batch-workflows-on-aws-workflow-layer-part-4-of-4/
70 https://d0.awsstatic.com/whitepapers/optimizing-enterprise-economics-serverless-architectures.pdf
71 https://aws.amazon.com/well-architected

,General,consultant,Best Practices AWS_WellArchitected_Framework,AWS Well-Architected Framework

July 2020

This whitepaper describes the AWS Well-Architected Framework. It provides guidance to help customers apply best practices in the design, delivery, and maintenance of AWS environments. We address general design principles, as well as specific best practices and guidance, in five conceptual areas that we define as the pillars of the Well-Architected Framework.

This paper has been archived. The latest version is available at: https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html

Notices

Customers are responsible for
making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Copyright © 2020 Amazon Web Services, Inc. or its affiliates

Contents: Introduction; Definitions; On Architecture; General Design Principles; The Five Pillars of the Framework (Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization); The Review Process; Conclusion; Contributors; Further Reading; Document Revisions; Appendix: Questions and Best Practices (Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization)

Introduction

The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using the Framework you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. The process for reviewing an architecture is a constructive conversation about architectural decisions, and is not an audit mechanism. We believe that having well-architected systems greatly increases the likelihood of business success.

AWS Solutions Architects have years of experience architecting solutions across a wide
variety of business verticals and use cases. We have helped design and review thousands of customers' architectures on AWS. From this experience, we have identified best practices and core strategies for architecting systems in the cloud.

The AWS Well-Architected Framework documents a set of foundational questions that allow you to understand if a specific architecture aligns well with cloud best practices. The framework provides a consistent approach to evaluating systems against the qualities you expect from modern cloud-based systems, and the remediation that would be required to achieve those qualities. As AWS continues to evolve, and we continue to learn more from working with our customers, we will continue to refine the definition of well-architected.

This framework is intended for those in technology roles, such as chief technology officers (CTOs), architects, developers, and operations team members. It describes AWS best practices and strategies to use when designing and operating a cloud workload, and provides links to further implementation details and architectural patterns. For more information, see the AWS Well-Architected homepage.

AWS also provides a service for reviewing your workloads at no charge. The AWS Well-Architected Tool (AWS WA Tool) is a service in the cloud that provides a consistent process for you to review and measure your architecture using the AWS Well-Architected Framework. The AWS WA Tool provides recommendations for making your workloads more reliable, secure, efficient, and cost-effective.

To help you apply best practices, we have created AWS Well-Architected Labs, which provides you with a repository of code and documentation to give you hands-on experience implementing best practices. We also have teamed up with select AWS Partner Network (APN) Partners, who are members of the AWS Well-Architected Partner program. These APN Partners have deep AWS knowledge and can help you review and improve your workloads.

Definitions
Every day, experts at AWS assist customers in architecting systems to take advantage of best practices in the cloud. We work with you on making architectural trade-offs as your designs evolve. As you deploy these systems into live environments, we learn how well these systems perform and the consequences of those trade-offs.

Based on what we have learned, we have created the AWS Well-Architected Framework, which provides a consistent set of best practices for customers and partners to evaluate architectures, and provides a set of questions you can use to evaluate how well an architecture is aligned to AWS best practices.

The AWS Well-Architected Framework is based on five pillars: operational excellence, security, reliability, performance efficiency, and cost optimization.

Table 1. The pillars of the AWS Well-Architected Framework

- Operational Excellence: The ability to support development and run workloads effectively, gain insight into their operations, and continuously improve supporting processes and procedures to deliver business value.
- Security: The ability to protect data, systems, and assets, taking advantage of cloud technologies to improve your security.
- Reliability: The ability of a workload to perform its intended function correctly and consistently when it's expected to. This includes the ability to operate and test the workload through its total lifecycle.
- Performance Efficiency: The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
- Cost Optimization: The ability to run systems to deliver business value at the lowest price point.

In the AWS Well-Architected Framework we use these terms:

- A component is the code, configuration, and AWS Resources that together deliver against a requirement. A component is often the unit of technical ownership, and is decoupled from other components.
- The term workload is used to identify a set of components that together deliver business value. A workload is usually the level of detail that business and technology leaders communicate about.
- We think about architecture as being how components work together in a workload. How components communicate and interact is often the focus of architecture diagrams.
- Milestones mark key changes in your architecture as it evolves throughout the product lifecycle (design, testing, go live, and in production).
- Within an organization, the technology portfolio is the collection of workloads that are required for the business to operate.

When architecting workloads, you make trade-offs between pillars based on your business context. These business decisions can drive your engineering priorities. You might optimize to reduce cost at the expense of reliability in development environments, or, for mission-critical solutions, you might optimize reliability with increased costs. In ecommerce solutions, performance can affect revenue and customer propensity to buy. Security and operational excellence are generally not traded off against the other pillars.

On Architecture

In on-premises environments, customers often have a central team for technology architecture that acts as an overlay to other product or feature teams to ensure they are following best practice. Technology architecture teams typically include a set of roles such as: Technical Architect (infrastructure), Solutions Architect (software), Data Architect, Networking Architect, and Security Architect. Often these teams use TOGAF or the Zachman Framework as part of an enterprise architecture capability.

At AWS, we prefer to distribute capabilities into teams rather than having a centralized team with that capability. There are risks when you choose to distribute decision-making authority, for example, ensuring that teams are meeting
internal standards. We mitigate these risks in two ways. First, we have practices [1] that focus on enabling each team to have that capability, and we put in place experts who ensure that teams raise the bar on the standards they need to meet. Second, we implement mechanisms [2] that carry out automated checks to ensure standards are being met. This distributed approach is supported by the Amazon leadership principles, and establishes a culture across all roles that works backward [3] from the customer. Customer-obsessed teams build products in response to a customer need.

[1] Ways of doing things, process, standards, and accepted norms.
[2] "Good intentions never work, you need good mechanisms to make anything happen." (Jeff Bezos) This means replacing humans' best efforts with mechanisms (often automated) that check for compliance with rules or process.
[3] Working backward is a fundamental part of our innovation process. We start with the customer and what they want, and let that define and guide our efforts.

For architecture, this means that we expect every team to have the capability to create architectures and to follow best practices. To help new teams gain these capabilities, or existing teams to raise their bar, we enable access to a virtual community of principal engineers who can review their designs and help them understand what AWS best practices are. The principal engineering community works to make best practices visible and accessible. One way they do this, for example, is through lunchtime talks that focus on applying best practices to real examples. These talks are recorded and can be used as part of onboarding materials for new team members.

AWS best practices emerge from our experience running thousands of systems at internet scale. We prefer to use data to define best practice, but we also use subject matter experts, like principal engineers, to set them. As principal engineers see new best practices emerge, they work as a community to ensure that teams follow them. In time, these best practices are formalized into our internal review processes, as well as into mechanisms that enforce compliance. The Well-Architected Framework is the customer-facing implementation of our internal review process, where we have codified our principal engineering thinking across field roles like Solutions Architecture and internal engineering teams. The Well-Architected Framework is a scalable mechanism that lets you take advantage of these learnings.

By following the approach of a principal engineering community with distributed ownership of architecture, we believe that a Well-Architected enterprise architecture can emerge that is driven by customer need. Technology leaders (such as CTOs or development managers) carrying out Well-Architected reviews across all your workloads will allow you to better understand the risks in your technology portfolio. Using this approach, you can identify themes across teams that your organization could address by mechanisms, training, or lunchtime talks where your principal engineers can share their thinking on specific areas with multiple teams.

General Design Principles

The Well-Architected Framework identifies a set of general design principles to facilitate good design in the cloud:

- Stop guessing your capacity needs: If you make a poor capacity decision when deploying a workload, you might end up sitting on expensive idle resources or dealing with the performance implications of limited capacity. With cloud computing, these problems can go away. You can use as much or as little capacity as you need, and scale up and down automatically.
- Test systems at production scale: In the cloud, you can create a production-scale test environment on demand, complete your testing, and then decommission the resources. Because you only pay for the test environment when it's running, you can simulate your live environment for a fraction of the cost of testing on premises.
- Automate to
make architectural experimentation easier: Automation allows you to create and replicate your workloads at low cost and avoid the expense of manual effort. You can track changes to your automation, audit the impact, and revert to previous parameters when necessary.
- Allow for evolutionary architectures: In a traditional environment, architectural decisions are often implemented as static, one-time events, with a few major versions of a system during its lifetime. As a business and its context continue to evolve, these initial decisions might hinder the system's ability to deliver changing business requirements. In the cloud, the capability to automate and test on demand lowers the risk of impact from design changes. This allows systems to evolve over time so that businesses can take advantage of innovations as a standard practice.
- Drive architectures using data: In the cloud, you can collect data on how your architectural choices affect the behavior of your workload. This lets you make fact-based decisions on how to improve your workload. Your cloud infrastructure is code, so you can use that data to inform your architecture choices and improvements over time.
- Improve through game days: Test how your architecture and processes perform by regularly scheduling game days to simulate events in production. This will help you understand where improvements can be made, and can help develop organizational experience in dealing with events.

The Five Pillars of the Framework

Creating a software system is a lot like constructing a building. If the foundation is not solid, structural problems can undermine the integrity and function of the building. When architecting technology solutions, if you neglect the five pillars of operational excellence, security, reliability, performance efficiency, and cost optimization, it can become challenging to build a system that delivers on your expectations and requirements. Incorporating these pillars into your architecture will help you produce stable and efficient systems. This will allow you to focus on the other aspects of design, such as functional requirements.

Operational Excellence

The Operational Excellence pillar includes the ability to support development and run workloads effectively, gain insight into their operations, and continuously improve supporting processes and procedures to deliver business value.

The operational excellence pillar provides an overview of design principles, best practices, and questions. You can find prescriptive guidance on implementation in the Operational Excellence Pillar whitepaper.

Design Principles

There are five design principles for operational excellence in the cloud:

- Perform operations as code: In the cloud, you can apply the same engineering discipline that you use for application code to your entire environment. You can define your entire workload (applications, infrastructure) as code, and update it with code. You can implement your operations procedures as code, and automate their execution by triggering them in response to events. By performing operations as code, you limit human error and enable consistent responses to events.
- Make frequent, small, reversible changes: Design workloads to allow components to be updated regularly. Make changes in small increments that can be reversed if they fail (without affecting customers when possible).
- Refine operations procedures frequently: As you use operations procedures, look for opportunities to improve them. As you evolve your workload, evolve your procedures appropriately. Set up regular game days to review and validate that all procedures are effective and that teams are familiar with them.
- Anticipate failure: Perform "pre-mortem" exercises to identify potential sources of failure so that they can be removed or mitigated. Test your failure scenarios and validate your understanding of their impact. Test your response procedures to ensure that they are effective and that teams are familiar with their execution. Set up regular game days to test workloads and team responses to simulated events.
- Learn from all operational failures: Drive improvement through lessons learned from all operational events and failures. Share what is learned across teams and through the entire organization.

Definition

There are four best practice areas for operational excellence in the cloud:

- Organization
- Prepare
- Operate
- Evolve

Your organization's leadership defines business objectives. Your organization must understand requirements and priorities, and use these to organize and conduct work to support the achievement of business outcomes. Your workload must emit the information necessary to support it. Implementing services to enable integration, deployment, and delivery of your workload will enable an increased flow of beneficial changes into production by automating repetitive processes.

There may be risks inherent in the operation of your workload. You must understand those risks and make an informed decision to enter production. Your teams must be able to support your workload. Business and operational metrics derived from desired business outcomes will enable you to understand the health of your workload and your operations activities, and respond to incidents. Your priorities will change as your business needs and business environment changes. Use these as a feedback loop to continually drive improvement for your organization and the operation of your workload.

Best Practices

Organization

Your teams need to have a shared understanding of your entire workload, their role in it, and shared business goals in order to set the priorities that will enable business success. Well-defined priorities will maximize the benefits of your efforts. Evaluate internal and external customer needs, involving key stakeholders, including business, development, and operations teams, to determine where to focus efforts. Evaluating customer needs will ensure that you have a thorough understanding of the support that is required to achieve business outcomes. Ensure that you are aware of guidelines or obligations defined by your organizational governance and external factors, such as regulatory compliance requirements and industry standards, that may mandate or emphasize specific focus. Validate that you have mechanisms to identify changes to internal governance and external compliance requirements. If no requirements are identified, ensure that you have applied due diligence to this determination. Review your priorities regularly so that they can be updated as needs change.

Evaluate threats to the business (for example, business risk and liabilities, and information security threats) and maintain this information in a risk registry. Evaluate the impact of risks, and trade-offs between competing interests or alternative approaches. For example, accelerating speed to market for new features may be emphasized over cost optimization, or you may choose a relational database for non-relational data to simplify the effort to migrate a system without refactoring. Manage benefits and risks to make informed decisions when determining where to focus efforts. Some risks or choices may be acceptable for a time, it may be possible to mitigate associated risks, or it may become unacceptable to allow a risk to remain, in which case you will take action to address the risk.

Your teams must understand their part in achieving business outcomes. Teams need to understand their roles in the success of other teams, the role of other teams in their success, and have shared goals. Understanding responsibility, ownership, how decisions are made, and who has authority to make decisions will help focus efforts and maximize the benefits from your teams. The needs of a team will be shaped by the customer they support, their organization, the makeup of the team, and the characteristics of their workload.
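The risk registry described above can be kept as lightweight, structured data so that risks are ranked consistently and revisited as priorities change. A minimal sketch (the field names and the likelihood-times-impact scoring scheme are illustrative assumptions, not an AWS-prescribed format):

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class Risk:
    """One entry in a simple risk registry (illustrative fields)."""
    name: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)
    owner: str
    review_by: date
    mitigation: str = ""

    @property
    def score(self) -> int:
        # A common, simple heuristic: likelihood x impact.
        return self.likelihood * self.impact

def prioritize(registry: List[Risk]) -> List[Risk]:
    """Highest-scoring risks first, so reviews focus on what matters most."""
    return sorted(registry, key=lambda r: r.score, reverse=True)

registry = [
    Risk("Unpatched machine images in prod", 3, 4, "platform-team", date(2020, 9, 1)),
    Risk("Single-AZ database", 2, 5, "data-team", date(2020, 8, 1),
         mitigation="Enable Multi-AZ failover"),
    Risk("Stale IAM access keys", 4, 3, "security-team", date(2020, 7, 15)),
]

for risk in prioritize(registry):
    print(risk.score, risk.name, risk.owner)
```

Because every entry carries an owner and a review date, the same structure also supports the regular review cadence recommended above.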
It's unreasonable to expect a single operating model to be able to support all teams and their workloads in your organization.

Ensure that there are identified owners for each application, workload, platform, and infrastructure component, and that each process and procedure has an identified owner responsible for its definition, and owners responsible for their performance. Having an understanding of the business value of each component, process, and procedure, of why those resources are in place or activities are performed, and why that ownership exists, will inform the actions of your team members. Clearly define the responsibilities of team members so that they may act appropriately, and have mechanisms to identify responsibility and ownership. Have mechanisms to request additions, changes, and exceptions so that you do not constrain innovation. Define agreements between teams describing how they work together to support each other and your business outcomes.

Provide support for your team members so that they can be more effective in taking action and supporting your business outcomes. Engaged senior leadership should set expectations and measure success. They should be the sponsor, advocate, and driver for the adoption of best practices and evolution of the organization. Empower team members to take action when outcomes are at risk, to minimize impact, and encourage them to escalate to decision makers and stakeholders when they believe there is a risk, so that it can be addressed and incidents avoided. Provide timely, clear, and actionable communications of known risks and planned events so that team members can take timely and appropriate action.

Encourage experimentation to accelerate learning and keep team members interested and engaged. Teams must grow their skill sets to adopt new technologies, and to support changes in demand and responsibilities. Support and encourage this by providing dedicated, structured time for learning. Ensure your team members have the resources, both tools and team members, to be successful and scale to support your business outcomes. Leverage cross-organizational diversity to seek multiple unique perspectives. Use this perspective to increase innovation, challenge your assumptions, and reduce the risk of confirmation bias. Grow inclusion, diversity, and accessibility within your teams to gain beneficial perspectives.

If there are external regulatory or compliance requirements that apply to your organization, you should use the resources provided by AWS Cloud Compliance to help educate your teams so that they can determine the impact on your priorities. The Well-Architected Framework emphasizes learning, measuring, and improving. It provides a consistent approach for you to evaluate architectures, and implement designs that will scale over time. AWS provides the AWS Well-Architected Tool to help you review your approach prior to development, the state of your workloads prior to production, and the state of your workloads in production. You can compare workloads to the latest AWS architectural best practices, monitor their overall status, and gain insight into potential risks. AWS Trusted Advisor is a tool that provides access to a core set of checks that recommend optimizations that may help shape your priorities. Business and Enterprise Support customers receive access to additional checks focusing on security, reliability, performance, and cost optimization that can further help shape their priorities.

AWS can help you educate your teams about AWS and its services to increase their understanding of how their choices can have an impact on your workload. You should use the resources provided by AWS Support (AWS Knowledge Center, AWS Discussion Forums, and AWS Support Center) and AWS Documentation to educate your teams. Reach out to AWS Support through AWS Support Center for help with your AWS questions. AWS also shares best practices and patterns that we have learned through the operation of AWS in The Amazon Builders' Library.
A wide variety of other useful information is available through the AWS Blog and The Official AWS Podcast. AWS Training and Certification provides some free training through self-paced digital courses on AWS fundamentals. You can also register for instructor-led training to further support the development of your teams' AWS skills.

You should use tools or services that enable you to centrally govern your environments across accounts, such as AWS Organizations, to help manage your operating models. Services like AWS Control Tower expand this management capability by enabling you to define blueprints (supporting your operating models) for the setup of accounts, apply ongoing governance using AWS Organizations, and automate provisioning of new accounts. Managed Services providers, such as AWS Managed Services, AWS Managed Services Partners, or Managed Services Providers in the AWS Partner Network, provide expertise implementing cloud environments, and support your security and compliance requirements and business goals. Adding Managed Services to your operating model can save you time and resources, and lets you keep your internal teams lean and focused on strategic outcomes that will differentiate your business, rather than developing new skills and capabilities.

The following questions focus on these considerations for operational excellence. (For a list of operational excellence questions and best practices, see the Appendix.)

OPS 1: How do you determine what your priorities are?
Everyone needs to understand their part in enabling business success. Have shared goals in order to set priorities for resources. This will maximize the benefits of your efforts.

OPS 2: How do you structure your organization to support your business outcomes?
Your teams must understand their part in achieving business outcomes. Teams need to understand their roles in the success of other teams, the role of other teams in their success, and have shared goals. Understanding responsibility, ownership, how decisions are made, and who has authority to make decisions will help focus efforts and maximize the benefits from your teams.

OPS 3: How does your organizational culture support your business outcomes?
Provide support for your team members so that they can be more effective in taking action and supporting your business outcome.

You might find that you want to emphasize a small subset of your priorities at some point in time. Use a balanced approach over the long term to ensure the development of needed capabilities and management of risk. Review your priorities regularly, and update them as needs change. When responsibility and ownership are undefined or unknown, you are at risk of both not performing necessary action in a timely fashion and of redundant and potentially conflicting efforts emerging to address those needs. Organizational culture has a direct impact on team member job satisfaction and retention. Enable the engagement and capabilities of your team members to enable the success of your business. Experimentation is required for innovation to happen and turn ideas into outcomes. Recognize that an undesired result is a successful experiment that has identified a path that will not lead to success.

Prepare

To prepare for operational excellence, you have to understand your workloads and their expected behaviors. You will then be able to design them to provide insight into their status, and build the procedures to support them.

Design your workload so that it provides the information necessary for you to understand its internal state (for example, metrics, logs, events, and traces) across all components, in support of observability and investigating issues. Iterate to develop the telemetry necessary to monitor the health of your workload, identify when outcomes are at risk, and enable effective responses. When instrumenting your workload, capture a broad set of information to enable situational awareness (for example, changes in state, user activity, privilege access, utilization counters), knowing that you can use filters to select the most useful information over time.

Adopt approaches that improve the flow of changes into production and that enable refactoring, fast feedback on quality, and bug fixing. These accelerate beneficial changes entering production, limit issues deployed, and enable rapid identification and remediation of issues introduced through deployment activities or discovered in your environments.

Adopt approaches that provide fast feedback on quality and enable rapid recovery from changes that do not have desired outcomes. Using these practices mitigates the impact of issues introduced through the deployment of changes. Plan for unsuccessful changes so that you are able to respond faster if necessary, and test and validate the changes you make. Be aware of planned activities in your environments so that you can manage the risk of changes impacting planned activities. Emphasize frequent, small, reversible changes to limit the scope of change. This results in easier troubleshooting and faster remediation, with the option to roll back a change. It also means you are able to get the benefit of valuable changes more frequently.

Evaluate the operational readiness of your workload, processes, procedures, and personnel to understand the operational risks related to your workload. You should use a consistent process (including manual or automated checklists) to know when you are ready to go live with your workload or a change. This will also enable you to find any areas that you need to make plans to address. Have runbooks that document your routine activities, and playbooks that guide your processes for issue resolution. Understand the benefits and risks to make informed decisions to allow changes to enter production.
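A consistent go-live checklist like the one described above can itself be implemented as code, so that readiness is evaluated the same way for every change. The individual checks below are invented placeholders for whatever your organization requires; the point is the mechanism: every item must pass, and the failing items are reported, before a change enters production.

```python
from typing import Callable, Dict, List, Tuple

# Each check returns True when its readiness criterion is met.
# These criteria are illustrative, not an official AWS checklist.
def runbooks_documented() -> bool:
    return True

def rollback_tested() -> bool:
    return True

def alarms_configured() -> bool:
    return False  # e.g., a missing alarm would surface here

CHECKLIST: Dict[str, Callable[[], bool]] = {
    "Runbooks documented": runbooks_documented,
    "Rollback tested": rollback_tested,
    "Alarms configured": alarms_configured,
}

def operational_readiness(checklist: Dict[str, Callable[[], bool]]) -> Tuple[bool, List[str]]:
    """Run every check; return overall go/no-go plus the failing items."""
    failures = [name for name, check in checklist.items() if not check()]
    return (not failures, failures)

ready, failures = operational_readiness(CHECKLIST)
print("GO" if ready else "NO-GO: " + ", ".join(failures))
```

Keeping the checklist in code also makes the earlier advice actionable: when you change the checklist, the same run immediately shows which live systems no longer comply.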
decisions to allow changes to enter production AWS enables you to view your entire workload (applications infrastructure policy governance and operations) as code This means you can apply the same engineering discipline that you use for application code to every element of your stack and share these across teams or organizations to magnify the benefits of development efforts Use operations as code in the cloud and the ability to safely experiment to develop your workload your operations procedures and practice failure Using AWS CloudFor mation enables you to have consistent templated sandbox development test and production environments with increasing levels of operations control 11ArchivedAWS WellArchitected Framework The following questions focus on these considerations for operational excellence OPS 4: How do you design your workload so that you can understand its state? Design your workload so that it provides the information necessary across all components (for example metrics logs and traces) for you to understand its internal state This enables you to provide effective responses when appropriate OPS 5: How do you reduce defects ease remediation and improve flow into production? Adopt approaches that improve flow of changes into production that enable refactoring fast feedback on quality and bug fixing These accelerate beneficial changes entering pro duction limit issues deployed and enable rapid identification and remediation of issues in troduced through deployment activities OPS 6: How do you mitigate deployment risks? Adopt approaches that provide fast feedback on quality and enable rapid recovery from changes that do not have desired outcomes Using these practices mitigates the impact of is sues introduced through the deployment of changes OPS 7: How do you know that you are ready to support a workload? 
Evaluate the operational readiness of your workload, processes and procedures, and personnel to understand the operational risks related to your workload.

Invest in implementing operations activities as code to maximize the productivity of operations personnel, minimize error rates, and enable automated responses. Use "pre-mortems" to anticipate failure and create procedures where appropriate. Apply metadata using Resource Tags and AWS Resource Groups, following a consistent tagging strategy, to enable identification of your resources. Tag your resources for organization, cost accounting, access controls, and targeting the execution of automated operations activities. Adopt deployment practices that take advantage of the elasticity of the cloud to facilitate development activities, and pre-deployment of systems for faster implementations. When you make changes to the checklists you use to evaluate your workloads, plan what you will do with live systems that no longer comply.

Operate

Successful operation of a workload is measured by the achievement of business and customer outcomes. Define expected outcomes, determine how success will be measured, and identify metrics that will be used in those calculations to determine if your workload and operations are successful. Operational health includes both the health of the workload and the health and success of the operations activities performed in support of the workload (for example, deployment and incident response). Establish metrics baselines for improvement, investigation, and intervention, collect and analyze your metrics, and then validate your understanding of operations success and how it changes over time. Use collected metrics to determine if you are satisfying customer and business needs, and identify areas for improvement.

Efficient and effective management of operational events is required to achieve operational excellence. This applies to both planned and unplanned operational events.
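The baseline-and-deviation idea described above can start very simply: compare the latest observation of a metric against a rolling baseline. In this sketch, both the seven-observation window and the 20% tolerance are arbitrary assumptions you would tune for your workload:

```python
from statistics import mean
from typing import List, Optional

def deviates(history: List[float], latest: float,
             window: int = 7, tolerance: float = 0.20) -> Optional[bool]:
    """Compare the latest observation against a rolling baseline.

    Returns True if `latest` is more than `tolerance` away from the mean
    of the last `window` observations, False if it is within tolerance,
    and None if there is not yet enough history to form a baseline.
    """
    if len(history) < window:
        return None
    baseline = mean(history[-window:])
    return abs(latest - baseline) > tolerance * baseline

# e.g., p99 latency in milliseconds over the last seven days
p99_latency = [210.0, 205.0, 198.0, 202.0, 207.0, 199.0, 204.0]
print(deviates(p99_latency, 320.0))  # well above baseline: True
print(deviates(p99_latency, 201.0))  # within normal range: False
```

In practice the same comparison would be expressed as an alarm threshold on a collected metric; the sketch just makes the baseline logic explicit.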
Use established runbooks for well-understood events, and use playbooks to aid in the investigation and resolution of issues. Prioritize responses to events based on their business and customer impact. Ensure that if an alert is raised in response to an event, there is an associated process to be executed, with a specifically identified owner. Define in advance the personnel required to resolve an event, and include escalation triggers to engage additional personnel as it becomes necessary, based on urgency and impact. Identify and engage individuals with the authority to make a decision on courses of action where there will be a business impact from an event response not previously addressed.

Communicate the operational status of workloads through dashboards and notifications that are tailored to the target audience (for example, customer, business, developers, operations) so that they may take appropriate action, so that their expectations are managed, and so that they are informed when normal operations resume.

In AWS, you can generate dashboard views of your metrics collected from workloads, and natively from AWS. You can leverage CloudWatch or third-party applications to aggregate and present business, workload, and operations level views of operations activities. AWS provides workload insights through logging capabilities, including AWS X-Ray, CloudWatch, CloudTrail, and VPC Flow Logs, enabling the identification of workload issues in support of root cause analysis and remediation.

The following questions focus on these considerations for operational excellence.

OPS 8: How do you understand the health of your workload?
Define, capture, and analyze workload metrics to gain visibility to workload events so that you can take appropriate action.

OPS 9: How do you understand the health of your operations?
Define, capture, and analyze operations metrics to gain visibility to operations events so that you can take appropriate action.

OPS 10: How do you manage workload and operations events?
Prepare and validate procedures for responding to events to minimize their disruption to your workload.

All of the metrics you collect should be aligned to a business need and the outcomes they support. Develop scripted responses to well-understood events, and automate their performance in response to recognizing the event.

Evolve

You must learn, share, and continuously improve to sustain operational excellence. Dedicate work cycles to making continuous incremental improvements. Perform post-incident analysis of all customer-impacting events. Identify the contributing factors and preventative action to limit or prevent recurrence. Communicate contributing factors with affected communities as appropriate. Regularly evaluate and prioritize opportunities for improvement (for example, feature requests, issue remediation, and compliance requirements), including both the workload and operations procedures.

Include feedback loops within your procedures to rapidly identify areas for improvement and capture learnings from the execution of operations. Share lessons learned across teams to share the benefits of those lessons. Analyze trends within lessons learned, and perform cross-team retrospective analysis of operations metrics to identify opportunities and methods for improvement. Implement changes intended to bring about improvement, and evaluate the results to determine success.

On AWS, you can export your log data to Amazon S3, or send logs directly to Amazon S3, for long-term storage. Using AWS Glue, you can discover and prepare your log data in Amazon S3 for analytics, and store associated metadata in the AWS Glue Data Catalog. Amazon Athena, through its native integration with AWS Glue, can then be used to analyze your log data, querying it using standard SQL. Using a business intelligence tool like Amazon QuickSight, you can visualize, explore, and analyze your data, discovering trends and events of interest that may drive improvement.
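The trend-spotting idea above can be illustrated without any AWS services: aggregate structured log records by error type and day, then look for the categories that are growing. The records below are fabricated for the example; in practice the equivalent aggregation would be an Athena GROUP BY query over log data in Amazon S3.

```python
from collections import Counter
from typing import Dict, List

# Fabricated structured log records standing in for logs stored in S3.
logs: List[Dict[str, str]] = [
    {"date": "2020-07-01", "error": "Throttling"},
    {"date": "2020-07-01", "error": "Timeout"},
    {"date": "2020-07-02", "error": "Throttling"},
    {"date": "2020-07-02", "error": "Throttling"},
    {"date": "2020-07-02", "error": "Timeout"},
    {"date": "2020-07-03", "error": "Throttling"},
    {"date": "2020-07-03", "error": "Throttling"},
    {"date": "2020-07-03", "error": "Throttling"},
]

def daily_counts(records: List[Dict[str, str]], error_type: str) -> List[int]:
    """Count occurrences of one error type per day, in date order."""
    counts = Counter(r["date"] for r in records if r["error"] == error_type)
    return [counts[d] for d in sorted(counts)]

def is_growing(series: List[int]) -> bool:
    """A crude trend test: strictly increasing day over day."""
    return all(a < b for a, b in zip(series, series[1:]))

throttling = daily_counts(logs, "Throttling")
print(throttling, is_growing(throttling))
```

A growing error category like this one is exactly the kind of "trend or event of interest" that should feed back into the improvement backlog described above.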
focus on these considerations for operational excellence.

OPS 11: How do you evolve operations?
Dedicate time and resources for continuous incremental improvement to evolve the effectiveness and efficiency of your operations.

Successful evolution of operations is founded in: frequent small improvements; providing safe environments and time to experiment, develop, and test improvements; and environments in which learning from failures is encouraged. Operations support for sandbox, development, test, and production environments, with increasing level of operational controls, facilitates development and increases the predictability of successful results from changes deployed into production.

Resources

Refer to the following resources to learn more about our best practices for Operational Excellence.

Documentation
•DevOps and AWS

Whitepaper
•Operational Excellence Pillar

Video
•DevOps at Amazon

Security

The Security pillar encompasses the ability to protect data, systems, and assets to take advantage of cloud technologies to improve your security.

The security pillar provides an overview of design principles, best practices, and questions. You can find prescriptive guidance on implementation in the Security Pillar whitepaper.

Design Principles

There are seven design principles for security in the cloud:

•Implement a strong identity foundation: Implement the principle of least privilege and enforce separation of duties with appropriate authorization for each interaction with your AWS resources. Centralize identity management, and aim to eliminate reliance on long-term static credentials.
•Enable traceability: Monitor, alert, and audit actions and changes to your environment in real time. Integrate log and metric collection with systems to automatically investigate and take action.
•Apply security at all layers: Apply a defense in depth approach with multiple security controls. Apply to all layers (for example, edge of
network, VPC, load balancing, every instance and compute service, operating system, application, and code).
•Automate security best practices: Automated software-based security mechanisms improve your ability to securely scale more rapidly and cost-effectively. Create secure architectures, including the implementation of controls that are defined and managed as code in version-controlled templates.
•Protect data in transit and at rest: Classify your data into sensitivity levels and use mechanisms, such as encryption, tokenization, and access control, where appropriate.
•Keep people away from data: Use mechanisms and tools to reduce or eliminate the need for direct access or manual processing of data. This reduces the risk of mishandling or modification and human error when handling sensitive data.
•Prepare for security events: Prepare for an incident by having incident management and investigation policy and processes that align to your organizational requirements. Run incident response simulations, and use tools with automation to increase your speed for detection, investigation, and recovery.

Definition

There are six best practice areas for security in the cloud:

•Security
•Identity and Access Management
•Detection
•Infrastructure Protection
•Data Protection
•Incident Response

Before you architect any workload, you need to put in place practices that influence security. You will want to control who can do what. In addition, you want to be able to identify security incidents, protect your systems and services, and maintain the confidentiality and integrity of data through data protection. You should have a well-defined and practiced process for responding to security incidents. These tools and techniques are important because they support objectives such as preventing financial loss or complying with regulatory obligations.

The AWS Shared Responsibility Model enables organizations that adopt the cloud to achieve their security and compliance
goals. Because AWS physically secures the infrastructure that supports our cloud services, as an AWS customer you can focus on using services to accomplish your goals. The AWS Cloud also provides greater access to security data and an automated approach to responding to security events.

Best Practices

Security

To operate your workload securely, you must apply overarching best practices to every area of security. Take requirements and processes that you have defined in operational excellence at an organizational and workload level, and apply them to all areas. Staying up to date with AWS and industry recommendations and threat intelligence helps you evolve your threat model and control objectives. Automating security processes, testing, and validation allow you to scale your security operations.

The following questions focus on these considerations for security. (For a list of security questions and best practices, see the Appendix.)

SEC 1: How do you securely operate your workload?
To operate your workload securely, you must apply overarching best practices to every area of security. Take requirements and processes that you have defined in operational excellence at an organizational and workload level, and apply them to all areas. Staying up to date with AWS and industry recommendations and threat intelligence helps you evolve your threat model and control objectives. Automating security processes, testing, and validation allow you to scale your security operations.

In AWS, segregating different workloads by account, based on their function and compliance or data sensitivity requirements, is a recommended approach.

Identity and Access Management

Identity and access management are key parts of an information security program, ensuring that only authorized and authenticated users and components are able to access your resources, and only in a manner that you intend. For example, you should define principals (that is, accounts, users, roles, and services that can perform actions in your account), build out policies aligned with these principals, and implement strong credential management. These privilege-management elements form the core of authentication and authorization.

In AWS, privilege management is primarily supported by the AWS Identity and Access Management (IAM) service, which allows you to control user and programmatic access to AWS services and resources. You should apply granular policies, which assign permissions to a user, group, role, or resource. You also have the ability to require strong password practices, such as complexity level, avoiding reuse, and enforcing multi-factor authentication (MFA). You can use federation with your existing directory service. For workloads that require systems to have access to AWS, IAM enables secure access through roles, instance profiles, identity federation, and temporary credentials.

The following questions focus on these considerations for security.

SEC 2: How do you manage identities for
people and machines?
There are two types of identities you need to manage when operating secure AWS workloads. Understanding the type of identity you need to manage and grant access helps you ensure the right identities have access to the right resources under the right conditions.

Human Identities: Your administrators, developers, operators, and end users require an identity to access your AWS environments and applications. These are members of your organization, or external users with whom you collaborate, who interact with your AWS resources via a web browser, client application, or interactive command-line tools.

Machine Identities: Your service applications, operational tools, and workloads require an identity to make requests to AWS services, for example, to read data. These identities include machines running in your AWS environment, such as Amazon EC2 instances or AWS Lambda functions. You may also manage machine identities for external parties who need access. Additionally, you may have machines outside of AWS that need access to your AWS environment.

SEC 3: How do you manage permissions for people and machines?
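The least-privilege and MFA practices this question covers can be pictured as a concrete policy document. The sketch below builds an illustrative IAM-style policy as a Python dictionary; the bucket name is an assumption, and in practice you would author and attach such a policy through IAM rather than construct it by hand.

```python
import json

# Illustrative least-privilege policy: read-only access to a single
# hypothetical S3 bucket, allowed only when MFA is present.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports-bucket",       # assumed bucket name
            "arn:aws:s3:::example-reports-bucket/*",
        ],
        # Restrict the grant to sessions authenticated with MFA.
        "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
    }],
}

policy_json = json.dumps(read_only_policy, indent=2)
```

Note what the policy omits: no write or delete actions, no wildcard resources, and no long-term credential assumptions, which is the shape least-privilege grants tend to take.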
Manage permissions to control access to people and machine identities that require access to AWS and your workload. Permissions control who can access what, and under what conditions.

Credentials must not be shared between any user or system. User access should be granted using a least-privilege approach, with best practices including password requirements and MFA enforced. Programmatic access, including API calls to AWS services, should be performed using temporary and limited-privilege credentials, such as those issued by the AWS Security Token Service.

AWS provides resources that can help you with identity and access management. To help learn best practices, explore our hands-on labs on managing credentials and authentication, controlling human access, and controlling programmatic access.

Detection

You can use detective controls to identify a potential security threat or incident. They are an essential part of governance frameworks, and can be used to support a quality process, a legal or compliance obligation, and threat identification and response efforts. There are different types of detective controls. For example, conducting an inventory of assets and their detailed attributes promotes more effective decision making (and lifecycle controls) to help establish operational baselines. You can also use internal auditing, an examination of controls related to information systems, to ensure that practices meet policies and requirements, and that you have set the correct automated alerting notifications based on defined conditions. These controls are important reactive factors that can help your organization identify and understand the scope of anomalous activity.

In AWS, you can implement detective controls by processing logs, events, and monitoring that allows for auditing, automated analysis, and alarming. CloudTrail logs AWS API calls, CloudWatch provides monitoring of metrics with alarming, and AWS Config provides configuration history. Amazon
GuardDuty is a managed threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. Service-level logs are also available, for example, you can use Amazon Simple Storage Service (Amazon S3) to log access requests.

The following questions focus on these considerations for security.

SEC 4: How do you detect and investigate security events?
Capture and analyze events from logs and metrics to gain visibility. Take action on security events and potential threats to help secure your workload.

Log management is important to a Well-Architected workload for reasons ranging from security or forensics to regulatory or legal requirements. It is critical that you analyze logs and respond to them so that you can identify potential security incidents. AWS provides functionality that makes log management easier to implement by giving you the ability to define a data-retention lifecycle, or define where data will be preserved, archived, or eventually deleted. This makes predictable and reliable data handling simpler and more cost effective.

Infrastructure Protection

Infrastructure protection encompasses control methodologies, such as defense in depth, necessary to meet best practices and organizational or regulatory obligations. Use of these methodologies is critical for successful, ongoing operations in either the cloud or on-premises.

In AWS, you can implement stateful and stateless packet inspection, either by using AWS-native technologies or by using partner products and services available through the AWS Marketplace. You should use Amazon Virtual Private Cloud (Amazon VPC) to create a private, secured, and scalable environment in which you can define your topology, including gateways, routing tables, and public and private subnets.

The following questions focus on these considerations for security.

SEC 5: How do you protect your network resources?
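One small, concrete instance of the layered network defense this question asks about is auditing ingress rules. The sketch below is a stateless check over hypothetical security-group rules (the rule format and values are invented for illustration): flag any rule that exposes an administrative port to the whole internet.

```python
# Administrative ports that should never be open to 0.0.0.0/0.
ADMIN_PORTS = {22, 3389}  # SSH, RDP

def open_admin_ports(rules):
    """Return the rules that allow the whole internet (0.0.0.0/0)
    to reach an administrative port."""
    return [r for r in rules
            if r["cidr"] == "0.0.0.0/0" and r["port"] in ADMIN_PORTS]

# Hypothetical rule set for one security group.
rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},    # public HTTPS: acceptable
    {"port": 22, "cidr": "10.0.0.0/16"},   # SSH from the private network only
    {"port": 22, "cidr": "0.0.0.0/0"},     # SSH from anywhere: flag it
]

violations = open_admin_ports(rules)
```

A check like this belongs in the automated, code-defined controls the design principles call for, run against configuration before deployment rather than discovered after an incident.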
Any workload that has some form of network connectivity, whether it's the internet or a private network, requires multiple layers of defense to help protect from external and internal network-based threats.

SEC 6: How do you protect your compute resources?
Compute resources in your workload require multiple layers of defense to help protect from external and internal threats. Compute resources include EC2 instances, containers, AWS Lambda functions, database services, IoT devices, and more.

Multiple layers of defense are advisable in any type of environment. In the case of infrastructure protection, many of the concepts and methods are valid across cloud and on-premises models. Enforcing boundary protection, monitoring points of ingress and egress, and comprehensive logging, monitoring, and alerting are all essential to an effective information security plan.

AWS customers are able to tailor, or harden, the configuration of an Amazon Elastic Compute Cloud (Amazon EC2), Amazon EC2 Container Service (Amazon ECS) container, or AWS Elastic Beanstalk instance, and persist this configuration to an immutable Amazon Machine Image (AMI). Then, whether triggered by Auto Scaling or launched manually, all new virtual servers (instances) launched with this AMI receive the hardened configuration.

Data Protection

Before architecting any system, foundational practices that influence security should be in place. For example, data classification provides a way to categorize organizational data based on levels of sensitivity, and encryption protects data by way of rendering it unintelligible to unauthorized access. These tools and techniques are important because they support objectives such as preventing financial loss or complying with regulatory obligations.

In AWS, the following practices facilitate protection of data:

• As an AWS customer you maintain full control over your data.
• AWS makes it easier for you to encrypt your data and manage keys, including regular key
rotation, which can be easily automated by AWS or maintained by you.
• Detailed logging that contains important content, such as file access and changes, is available.
• AWS has designed storage systems for exceptional resiliency. For example, Amazon S3 Standard, S3 Standard-IA, S3 One Zone-IA, and Amazon Glacier are all designed to provide 99.999999999% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.000000001% of objects.
• Versioning, which can be part of a larger data lifecycle management process, can protect against accidental overwrites, deletes, and similar harm.
• AWS never initiates the movement of data between Regions. Content placed in a Region will remain in that Region unless you explicitly enable a feature or leverage a service that provides that functionality.

The following questions focus on these considerations for security.

SEC 7: How do you classify your data?
Classification provides a way to categorize data, based on criticality and sensitivity, in order to help you determine appropriate protection and retention controls.

SEC 8: How do you protect your data at rest?
Protect your data at rest by implementing multiple controls, to reduce the risk of unauthorized access or mishandling.

SEC 9: How do you protect your data in transit?
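The durability arithmetic quoted above can be checked directly: eleven nines of durability means an expected annual loss rate of 10^-11, which is 0.000000001% of objects. The worked example below uses a hypothetical object count to make the figure tangible.

```python
# Eleven nines of durability, as quoted for Amazon S3 storage classes.
durability = 0.99999999999
expected_loss_rate = 1 - durability   # fraction of objects lost per year, ~1e-11

# For a hypothetical fleet of 10 billion stored objects:
objects = 10_000_000_000
expected_losses_per_year = objects * expected_loss_rate   # ~0.1 objects/year
```

In other words, at that durability level a customer storing ten billion objects would expect to lose roughly one object per decade on average.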
Protect your data in transit by implementing multiple controls, to reduce the risk of unauthorized access or loss.

AWS provides multiple means for encrypting data at rest and in transit. We build features into our services that make it easier to encrypt your data. For example, we have implemented server-side encryption (SSE) for Amazon S3 to make it easier for you to store your data in an encrypted form. You can also arrange for the entire HTTPS encryption and decryption process (generally known as SSL termination) to be handled by Elastic Load Balancing (ELB).

Incident Response

Even with extremely mature preventive and detective controls, your organization should still put processes in place to respond to and mitigate the potential impact of security incidents. The architecture of your workload strongly affects the ability of your teams to operate effectively during an incident, to isolate or contain systems, and to restore operations to a known good state. Putting in place the tools and access ahead of a security incident, then routinely practicing incident response through game days, will help you ensure that your architecture can accommodate timely investigation and recovery.

In AWS, the following practices facilitate effective incident response:

• Detailed logging is available that contains important content, such as file access and changes.
• Events can be automatically processed and trigger tools that automate responses through the use of AWS APIs.
• You can pre-provision tooling and a "clean room" using AWS CloudFormation. This allows you to carry out forensics in a safe, isolated environment.

The following questions focus on these considerations for security.

SEC 10: How do you anticipate, respond to, and recover from incidents?
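The incident response practices above, automated isolation plus capturing data and state for forensics, can be sketched as an ordered plan. The action names below are illustrative descriptions, not real AWS API calls; a real responder would implement each step with the relevant service APIs.

```python
def isolation_plan(instance_id: str) -> list:
    """Return the ordered forensic actions an automated responder would
    take for a suspected instance. Steps are illustrative placeholders."""
    return [
        f"snapshot volumes of {instance_id}",        # preserve disk state first
        f"capture memory and metadata of {instance_id}",
        f"move {instance_id} to a quarantine security group",
        f"revoke credentials issued to {instance_id}",
        "hand off evidence to the pre-provisioned clean room",
    ]

plan = isolation_plan("i-0123456789abcdef0")
```

The ordering matters: evidence capture precedes isolation and credential revocation, so that the actions taken to contain the incident do not destroy the state needed for the investigation.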
Preparation is critical to timely and effective investigation, response to, and recovery from security incidents, to help minimize disruption to your organization.

Ensure that you have a way to quickly grant access for your security team, and automate the isolation of instances, as well as the capturing of data and state for forensics.

Resources

Refer to the following resources to learn more about our best practices for Security.

Documentation
•AWS Cloud Security
•AWS Compliance
•AWS Security Blog

Whitepaper
•Security Pillar
•AWS Security Overview
•AWS Security Best Practices
•AWS Risk and Compliance

Video
•AWS Security State of the Union
•Shared Responsibility Overview

Reliability

The Reliability pillar encompasses the ability of a workload to perform its intended function correctly and consistently when it's expected to. This includes the ability to operate and test the workload through its total lifecycle. This paper provides in-depth, best practice guidance for implementing reliable workloads on AWS.

The reliability pillar provides an overview of design principles, best practices, and questions. You can find prescriptive guidance on implementation in the Reliability Pillar whitepaper.

Design Principles

There are five design principles for reliability in the cloud:

•Automatically recover from failure: By monitoring a workload for key performance indicators (KPIs), you can trigger automation when a threshold is breached. These KPIs should be a measure of business value, not of the technical aspects of the operation of the service. This allows for automatic notification and tracking of failures, and for automated recovery processes that work around or repair the failure. With more sophisticated automation, it's possible to anticipate and remediate failures before they occur.
•Test recovery procedures: In an on-premises environment, testing is often conducted to prove that the workload works in a particular
scenario. Testing is not typically used to validate recovery strategies. In the cloud, you can test how your workload fails, and you can validate your recovery procedures. You can use automation to simulate different failures or to recreate scenarios that led to failures before. This approach exposes failure pathways that you can test and fix before a real failure scenario occurs, thus reducing risk.
•Scale horizontally to increase aggregate workload availability: Replace one large resource with multiple small resources to reduce the impact of a single failure on the overall workload. Distribute requests across multiple, smaller resources to ensure that they don't share a common point of failure.
•Stop guessing capacity: A common cause of failure in on-premises workloads is resource saturation, when the demands placed on a workload exceed the capacity of that workload (this is often the objective of denial of service attacks). In the cloud, you can monitor demand and workload utilization, and automate the addition or removal of resources to maintain the optimal level to satisfy demand without over- or under-provisioning. There are still limits, but some quotas can be controlled and others can be managed (see Manage Service Quotas and Constraints).
•Manage change in automation: Changes to your infrastructure should be made using automation. The changes that need to be managed include changes to the automation, which then can be tracked and reviewed.

Definition

There are four best practice areas for reliability in the cloud:

•Foundations
•Workload Architecture
•Change Management
•Failure Management

To achieve reliability, you must start with the foundations: an environment where service quotas and network topology accommodate the workload. The workload architecture of the distributed system must be designed to prevent and mitigate failures. The workload must handle changes in demand or requirements, and it must be designed to detect failure
and automatically heal itself.

Best Practices

Foundations

Foundational requirements are those whose scope extends beyond a single workload or project. Before architecting any system, foundational requirements that influence reliability should be in place. For example, you must have sufficient network bandwidth to your data center.

With AWS, most of these foundational requirements are already incorporated or can be addressed as needed. The cloud is designed to be nearly limitless, so it's the responsibility of AWS to satisfy the requirement for sufficient networking and compute capacity, leaving you free to change resource size and allocations on demand.

The following questions focus on these considerations for reliability. (For a list of reliability questions and best practices, see the Appendix.)

REL 1: How do you manage service quotas and constraints?
For cloud-based workload architectures, there are service quotas (which are also referred to as service limits). These quotas exist to prevent accidentally provisioning more resources than you need, and to limit request rates on API operations so as to protect services from abuse. There are also resource constraints, for example, the rate that you can push bits down a fiber-optic cable, or the amount of storage on a physical disk.

REL 2: How do you plan your network topology?
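The quota monitoring that REL 1 calls for can be sketched as a simple threshold check. The quota names, limits, and usage figures below are invented for illustration; in practice you would read the actual quotas and usage from the provider rather than hard-coding them.

```python
# Hypothetical service quotas for one account and Region.
QUOTAS = {"vpc-per-region": 5, "on-demand-vcpus": 64}

def quota_alerts(usage: dict, threshold: float = 0.8) -> list:
    """Return the quota names whose usage has reached the given
    fraction of the limit, so capacity can be raised before it runs out."""
    return [name for name, limit in QUOTAS.items()
            if usage.get(name, 0) / limit >= threshold]

# 4 of 5 VPCs used (80%) trips the alert; 32 of 64 vCPUs (50%) does not.
alerts = quota_alerts({"vpc-per-region": 4, "on-demand-vcpus": 32})
```

Alerting at a buffer below the hard limit, rather than at the limit itself, leaves time to request an increase before a deployment fails.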
Workloads often exist in multiple environments. These include multiple cloud environments (both publicly accessible and private) and possibly your existing data center infrastructure. Plans must include network considerations, such as intra- and intersystem connectivity, public IP address management, private IP address management, and domain name resolution.

For cloud-based workload architectures, there are service quotas (which are also referred to as service limits). These quotas exist to prevent accidentally provisioning more resources than you need, and to limit request rates on API operations to protect services from abuse. Workloads often exist in multiple environments. You must monitor and manage these quotas for all workload environments. These include multiple cloud environments (both publicly accessible and private) and may include your existing data center infrastructure. Plans must include network considerations, such as intrasystem and intersystem connectivity, public IP address management, private IP address management, and domain name resolution.

Workload Architecture

A reliable workload starts with upfront design decisions for both software and infrastructure. Your architecture choices will impact your workload behavior across all five Well-Architected pillars. For reliability, there are specific patterns you must follow.

With AWS, workload developers have their choice of languages and technologies to use. AWS SDKs take the complexity out of coding by providing language-specific APIs for AWS services. These SDKs, plus the choice of languages, allow developers to implement the reliability best practices listed here. Developers can also read about and learn from how Amazon builds and operates software in The Amazon Builders' Library.

The following questions focus on these considerations for reliability.

REL 3: How do you design your workload service architecture?
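One reliability best practice the SDKs mentioned above implement for you is retrying transient failures with exponential backoff and jitter. The sketch below computes the retry delays only; the parameter values are illustrative defaults, not the settings of any particular SDK.

```python
import random

def backoff_delays(max_attempts: int = 5, base: float = 0.1, cap: float = 5.0,
                   rng=random.random):
    """Exponential backoff with full jitter: the delay before each retry
    grows as base * 2^attempt, is capped, and is scaled by a random
    factor so that many clients do not retry in lockstep."""
    return [min(cap, base * (2 ** attempt)) * rng()
            for attempt in range(max_attempts)]

# With jitter pinned to 1.0 the deterministic upper bounds are visible.
delays = backoff_delays(rng=lambda: 1.0)
```

Injecting the random source (`rng`) is a small design choice that makes the jittered schedule testable; production code would keep the default `random.random`.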
Build highly scalable and reliable workloads using a service-oriented architecture (SOA) or a microservices architecture. Service-oriented architecture (SOA) is the practice of making software components reusable via service interfaces. Microservices architecture goes further to make components smaller and simpler.

REL 4: How do you design interactions in a distributed system to prevent failures?
Distributed systems rely on communications networks to interconnect components, such as servers or services. Your workload must operate reliably despite data loss or latency in these networks. Components of the distributed system must operate in a way that does not negatively impact other components or the workload. These best practices prevent failures and improve mean time between failures (MTBF).

REL 5: How do you design interactions in a distributed system to mitigate or withstand failures?
Distributed systems rely on communications networks to interconnect components (such as servers or services). Your workload must operate reliably despite data loss or latency over these networks. Components of the distributed system must operate in a way that does not negatively impact other components or the workload. These best practices enable workloads to withstand stresses or failures, more quickly recover from them, and mitigate the impact of such impairments. The result is improved mean time to recovery (MTTR).

Distributed systems rely on communications networks to interconnect components, such as servers or services. Your workload must operate reliably despite data loss or latency in these networks. Components of the distributed system must operate in a way that does not negatively impact other components or the workload.

Change Management

Changes to your workload or its environment must be anticipated and accommodated to achieve reliable operation of the workload. Changes include those imposed on your workload, such as spikes in demand, as well as those from within, such as feature deployments
and security patches.

Using AWS, you can monitor the behavior of a workload and automate the response to KPIs. For example, your workload can add additional servers as a workload gains more users. You can control who has permission to make workload changes, and audit the history of these changes.

The following questions focus on these considerations for reliability.

REL 6: How do you monitor workload resources?
Logs and metrics are powerful tools to gain insight into the health of your workload. You can configure your workload to monitor logs and metrics and send notifications when thresholds are crossed or significant events occur. Monitoring enables your workload to recognize when low-performance thresholds are crossed or failures occur, so it can recover automatically in response.

REL 7: How do you design your workload to adapt to changes in demand?
A scalable workload provides elasticity to add or remove resources automatically, so that they closely match the current demand at any given point in time.

REL 8: How do you implement change?
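The elasticity described in REL 7 comes down to a capacity decision that automation makes for you. The sketch below shows one possible form of that decision, sizing a fleet so average utilization moves toward a target; the target and minimum values are illustrative assumptions, not AWS defaults.

```python
import math

def desired_capacity(current: int, load_per_server: float,
                     target: float = 0.5, minimum: int = 2) -> int:
    """Return the fleet size that brings average utilization toward
    the target, never dropping below a fixed minimum for resilience."""
    needed = math.ceil(current * load_per_server / target)
    return max(minimum, needed)

# A fleet of 4 servers at 90% load grows; at 15% load it shrinks to the floor.
scale_out = desired_capacity(4, 0.90)
scale_in = desired_capacity(4, 0.15)
```

Keeping a minimum fleet size even at low demand is the resilience trade-off: capacity closely matches demand, but never so closely that a single failure removes all serving capacity.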
Controlled changes are necessary to deploy new functionality, and to ensure that the workloads and the operating environment are running known software and can be patched or replaced in a predictable manner. If these changes are uncontrolled, then it makes it difficult to predict the effect of these changes, or to address issues that arise because of them.

When you architect a workload to automatically add and remove resources in response to changes in demand, this not only increases reliability, but also ensures that business success doesn't become a burden. With monitoring in place, your team will be automatically alerted when KPIs deviate from expected norms. Automatic logging of changes to your environment allows you to audit and quickly identify actions that might have impacted reliability. Controls on change management ensure that you can enforce the rules that deliver the reliability you need.

Failure Management

In any system of reasonable complexity, it is expected that failures will occur. Reliability requires that your workload be aware of failures as they occur, and take action to avoid impact on availability. Workloads must be able to both withstand failures and automatically repair issues.

With AWS, you can take advantage of automation to react to monitoring data. For example, when a particular metric crosses a threshold, you can trigger an automated action to remedy the problem. Also, rather than trying to diagnose and fix a failed resource that is part of your production environment, you can replace it with a new one and carry out the analysis on the failed resource out of band. Since the cloud enables you to stand up temporary versions of a whole system at low cost, you can use automated testing to verify full recovery processes.

The following questions focus on these considerations for reliability.

REL 9: How do you back up data?
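The relationship between a backup schedule and the recovery point objective this question references is simple arithmetic worth making explicit: in the worst case, the data lost in a failure is everything written since the last backup. A minimal sketch, with illustrative values:

```python
def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """Worst-case data loss equals the time since the last backup,
    so the interval between backups must not exceed the RPO."""
    return backup_interval_hours <= rpo_hours

# A daily backup satisfies a 24-hour RPO but not a 4-hour one.
daily_ok = meets_rpo(backup_interval_hours=24, rpo_hours=24)
daily_too_coarse = meets_rpo(backup_interval_hours=24, rpo_hours=4)
```

The same worst-case framing applies to RTO: the restore procedure's measured duration, not its hoped-for duration, is what must fit inside the objective, which is why the text insists on testing backup files.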
Back up data, applications, and configuration to meet your requirements for recovery time objectives (RTO) and recovery point objectives (RPO).

REL 10: How do you use fault isolation to protect your workload?
Fault isolated boundaries limit the effect of a failure within a workload to a limited number of components. Components outside of the boundary are unaffected by the failure. Using multiple fault isolated boundaries, you can limit the impact on your workload.

REL 11: How do you design your workload to withstand component failures?
Workloads with a requirement for high availability and low mean time to recovery (MTTR) must be architected for resiliency.

REL 12: How do you test reliability?
After you have designed your workload to be resilient to the stresses of production, testing is the only way to ensure that it will operate as designed, and deliver the resiliency you expect.

REL 13: How do you plan for disaster recovery (DR)?
Having backups and redundant workload components in place is the start of your DR strategy. RTO and RPO are your objectives for restoration of availability. Set these based on business needs. Implement a strategy to meet these objectives, considering locations and function of workload resources and data.

Regularly back up your data, and test your backup files to ensure that you can recover from both logical and physical errors. A key to managing failure is the frequent and automated testing of workloads to cause failure, and then observe how they recover. Do this on a regular schedule, and ensure that such testing is also triggered after significant workload changes. Actively track KPIs, such as the recovery time objective (RTO) and recovery point objective (RPO), to assess a workload's resiliency (especially under failure-testing scenarios). Tracking KPIs will help you identify and mitigate single points of failure. The objective is to thoroughly test your workload-recovery processes so that you are confident that you can recover all your data and continue
to serve your customers, even in the face of sustained problems. Your recovery processes should be as well exercised as your normal production processes.

Resources

Refer to the following resources to learn more about our best practices for Reliability.

Documentation
•AWS Documentation
•AWS Global Infrastructure
•AWS Auto Scaling: How Scaling Plans Work
•What Is AWS Backup?

Whitepaper
•Reliability Pillar: AWS Well-Architected
•Implementing Microservices on AWS

Performance Efficiency

The Performance Efficiency pillar includes the ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.

The performance efficiency pillar provides an overview of design principles, best practices, and questions. You can find prescriptive guidance on implementation in the Performance Efficiency Pillar whitepaper.

Design Principles

There are five design principles for performance efficiency in the cloud:

•Democratize advanced technologies: Make advanced technology implementation easier for your team by delegating complex tasks to your cloud vendor. Rather than asking your IT team to learn about hosting and running a new technology, consider consuming the technology as a service. For example, NoSQL databases, media transcoding, and machine learning are all technologies that require specialized expertise. In the cloud, these technologies become services that your team can consume, allowing your team to focus on product development rather than resource provisioning and management.
•Go global in minutes: Deploying your workload in multiple AWS Regions around the world allows you to provide lower latency and a better experience for your customers at minimal cost.
•Use serverless architectures: Serverless architectures remove the need for you to run and maintain physical servers for traditional compute activities. For example, serverless storage services can act as static
websites (removing the need for web servers), and event services can host code. This removes the operational burden of managing physical servers, and can lower transactional costs because managed services operate at cloud scale.
• Experiment more often: With virtual and automatable resources, you can quickly carry out comparative testing using different types of instances, storage, or configurations.
• Consider mechanical sympathy: Understand how cloud services are consumed, and always use the technology approach that aligns best with your workload goals. For example, consider data access patterns when you select database or storage approaches.

Definition

There are four best practice areas for performance efficiency in the cloud:

• Selection
• Review
• Monitoring
• Tradeoffs

Take a data-driven approach to building a high-performance architecture. Gather data on all aspects of the architecture, from the high-level design to the selection and configuration of resource types.

Reviewing your choices on a regular basis ensures that you are taking advantage of the continually evolving AWS Cloud. Monitoring ensures that you are aware of any deviance from expected performance. Make tradeoffs in your architecture to improve performance, such as using compression or caching, or relaxing consistency requirements.

Best Practices

Selection

The optimal solution for a particular workload varies, and solutions often combine multiple approaches. Well-architected workloads use multiple solutions and enable different features to improve performance.

AWS resources are available in many types and configurations, which makes it easier to find an approach that closely matches your workload needs. You can also find options that are not easily achievable with on-premises infrastructure. For example, a managed service such as Amazon DynamoDB provides a fully managed NoSQL database with single-digit millisecond latency at any scale.

The
following questions focus on these considerations for performance efficiency. (For a list of performance efficiency questions and best practices, see the Appendix.)

PERF 1: How do you select the best performing architecture?

Often, multiple approaches are required for optimal performance across a workload. Well-architected systems use multiple solutions and features to improve performance.

Use a data-driven approach to select the patterns and implementation for your architecture, and achieve a cost-effective solution. AWS Solutions Architects, AWS Reference Architectures, and AWS Partner Network (APN) partners can help you select an architecture based on industry knowledge, but data obtained through benchmarking or load testing will be required to optimize your architecture.

Your architecture will likely combine a number of different architectural approaches (for example, event-driven, ETL, or pipeline). The implementation of your architecture will use the AWS services that are specific to the optimization of your architecture's performance. In the following sections, we discuss the four main resource types to consider (compute, storage, database, and network).

Compute

Selecting compute resources that meet your requirements and performance needs, and that provide great efficiency of cost and effort, will enable you to accomplish more with the same number of resources. When evaluating compute options, be aware of your workload's performance and cost requirements, and use this to make informed decisions.

In AWS, compute is available in three forms: instances, containers, and functions:

• Instances are virtualized servers, allowing you to change their capabilities with a button or an API call. Because resource decisions in the cloud aren't fixed, you can experiment with different server types. At AWS, these virtual server instances come in different families and sizes, and they offer a wide variety of capabilities, including solid-state drives (SSDs) and graphics processing units (GPUs).
• Containers are a method of operating system virtualization that allows you to run an application and its dependencies in resource-isolated processes. AWS Fargate is serverless compute for containers, or Amazon EC2 can be used if you need control over the installation, configuration, and management of your compute environment. You can also choose from multiple container orchestration platforms: Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS).
• Functions abstract the execution environment from the code you want to execute. For example, AWS Lambda allows you to execute code without running an instance.

The following questions focus on these considerations for performance efficiency.

PERF 2: How do you select your compute solution?

The optimal compute solution for a workload varies based on application design, usage patterns, and configuration settings. Architectures can use different compute solutions for various components and enable different features to improve performance. Selecting the wrong compute solution for an architecture can lead to lower performance efficiency.

When architecting your use of compute, you should take advantage of the elasticity mechanisms available to ensure you have sufficient capacity to sustain performance as demand changes.

Storage

Cloud storage is a critical component of cloud computing, holding the information used by your workload. Cloud storage is typically more reliable, scalable, and secure than traditional on-premises storage systems. Select from object, block, and file storage services, as well as cloud data migration options, for your workload.

In AWS, storage is available in three forms: object, block, and file:

• Object storage provides a scalable, durable platform to make data accessible from any internet location, for user-generated content, active archive, serverless computing, Big Data storage, or backup and recovery. Amazon Simple Storage Service (Amazon S3) is an object storage
service that offers industry-leading scalability, data availability, security, and performance. Amazon S3 is designed for 99.999999999% (11 9s) of durability, and stores data for millions of applications for companies all around the world.
• Block storage provides highly available, consistent, low-latency block storage for each virtual host, and is analogous to direct-attached storage (DAS) or a Storage Area Network (SAN). Amazon Elastic Block Store (Amazon EBS) is designed for workloads that require persistent storage accessible by EC2 instances, and helps you tune applications with the right storage capacity, performance, and cost.
• File storage provides access to a shared file system across multiple systems. File storage solutions like Amazon Elastic File System (EFS) are ideal for use cases such as large content repositories, development environments, media stores, or user home directories. Amazon FSx makes it easy and cost effective to launch and run popular file systems, so you can leverage the rich feature sets and fast performance of widely used open-source and commercially licensed file systems.

The following questions focus on these considerations for performance efficiency.

PERF 3: How do you select your storage solution?
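As a toy illustration of the storage-selection question, the three storage forms described above can be sketched as a simple decision helper. The profile fields and decision order are invented for this example; they are a simplification, not an AWS API.

```python
# Toy sketch only: map a simplified access profile to one of the three
# storage forms discussed above (object, block, file). The two boolean
# inputs are invented simplifications of real access-pattern analysis.

def suggest_storage(shared_filesystem: bool, block_level: bool) -> str:
    """Return a storage category for a simplified access profile."""
    if block_level:
        return "block"   # e.g., low-latency volumes attached to an instance
    if shared_filesystem:
        return "file"    # e.g., a shared file system across multiple hosts
    return "object"      # e.g., durable, internet-accessible objects
```

A real selection would also weigh throughput, access frequency, and durability constraints, as the next paragraph describes.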
The optimal storage solution for a system varies based on the kind of access method (block, file, or object), patterns of access (random or sequential), required throughput, frequency of access (online, offline, archival), frequency of update (WORM, dynamic), and availability and durability constraints. Well-architected systems use multiple storage solutions and enable different features to improve performance and use resources efficiently.

When you select a storage solution, ensuring that it aligns with your access patterns will be critical to achieving the performance you want.

Database

The cloud offers purpose-built database services that address different problems presented by your workload. You can choose from many purpose-built database engines, including relational, key-value, document, in-memory, graph, time-series, and ledger databases. By picking the best database to solve a specific problem (or a group of problems), you can break away from restrictive one-size-fits-all monolithic databases and focus on building applications to meet the performance needs of your customers.

In AWS, you can choose from multiple purpose-built database engines, including relational, key-value, document, in-memory, graph, time-series, and ledger databases. With AWS databases, you don't need to worry about database management tasks such as server provisioning, patching, setup, configuration, backups, or recovery. AWS continuously monitors your clusters to keep your workloads up and running with self-healing storage and automated scaling, so that you can focus on higher-value application development.

The following questions focus on these considerations for performance efficiency.

PERF 4: How do you select your database solution?
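The engine families listed above can be paired with the data-model need they address. The lookup helper below is a toy illustration drawn from the list in the text; the default and the function are invented, not a recommendation engine.

```python
# Toy lookup: pair a data-model need with the purpose-built engine
# family named in the text. Mapping keys are from the document; the
# helper function and its default are invented for this sketch.
ENGINE_FAMILIES = {
    "relational": "relational database",
    "key-value": "key-value store",
    "document": "document database",
    "in-memory": "in-memory database",
    "graph": "graph database",
    "time-series": "time-series database",
    "ledger": "ledger database",
}

def engine_for(data_model: str) -> str:
    """Look up the engine family for a data model; default to relational."""
    return ENGINE_FAMILIES.get(data_model, ENGINE_FAMILIES["relational"])
```

As the surrounding text stresses, the real decision should be data-driven, based on the workload's access patterns rather than a static mapping.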
The optimal database solution for a system varies based on requirements for availability, consistency, partition tolerance, latency, durability, scalability, and query capability. Many systems use different database solutions for various subsystems and enable different features to improve performance. Selecting the wrong database solution and features for a system can lead to lower performance efficiency.

Your workload's database approach has a significant impact on performance efficiency. It's often an area that is chosen according to organizational defaults rather than through a data-driven approach. As with storage, it is critical to consider the access patterns of your workload, and also to consider whether other non-database solutions could solve the problem more efficiently (such as a graph, time-series, or in-memory database).

Network

Because the network is between all workload components, it can have great impacts, both positive and negative, on workload performance and behavior. There are also workloads that are heavily dependent on network performance, such as High Performance Computing (HPC), where deep network understanding is important to increase cluster performance. You must determine the workload requirements for bandwidth, latency, jitter, and throughput.

On AWS, networking is virtualized and is available in a number of different types and configurations. This makes it easier to match your networking methods with your needs. AWS offers product features (for example, Enhanced Networking, Amazon EBS-optimized instances, Amazon S3 Transfer Acceleration, and dynamic Amazon CloudFront) to optimize network traffic. AWS also offers networking features (for example, Amazon Route 53 latency routing, Amazon VPC endpoints, AWS Direct Connect, and AWS Global Accelerator) to reduce network distance or jitter.

The following questions focus on these considerations for performance efficiency.

PERF 5: How do you configure your networking solution?
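Two of the network requirements named above, latency and jitter, can be quantified from round-trip-time samples. The sketch below uses Python's standard statistics module; treating jitter as the standard deviation of the samples is one common simplification, and the sample values are invented.

```python
# Minimal sketch: derive latency and jitter figures from a sample of
# round-trip times. Jitter is approximated as the population standard
# deviation of the samples; the sample values are invented.
import statistics

def latency_stats(samples_ms):
    """Return (median latency, jitter) in milliseconds."""
    return statistics.median(samples_ms), statistics.pstdev(samples_ms)

median_ms, jitter_ms = latency_stats([10.0, 12.0, 11.0, 13.0, 11.0])
```

In practice you would feed such a calculation with measured metrics and use the results to decide whether latency-reducing features (edge locations, latency routing) are warranted.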
The optimal network solution for a workload varies based on latency, throughput requirements, jitter, and bandwidth. Physical constraints, such as user or on-premises resources, determine location options. These constraints can be offset with edge locations or resource placement.

You must consider location when deploying your network. You can choose to place resources close to where they will be used to reduce distance. Use networking metrics to make changes to networking configuration as the workload evolves. By taking advantage of Regions, placement groups, and edge services, you can significantly improve performance. Cloud-based networks can be quickly rebuilt or modified, so evolving your network architecture over time is necessary to maintain performance efficiency.

Review

Cloud technologies are rapidly evolving, and you must ensure that workload components are using the latest technologies and approaches to continually improve performance. You must continually evaluate and consider changes to your workload components to ensure you are meeting its performance and cost objectives. New technologies, such as machine learning and artificial intelligence (AI), can allow you to reimagine customer experiences and innovate across all of your business workloads.

Take advantage of the continual innovation at AWS, driven by customer need. We release new Regions, edge locations, services, and features regularly. Any of these releases could positively improve the performance efficiency of your architecture.

The following questions focus on these considerations for performance efficiency.

PERF 6: How do you evolve your workload to take advantage of new releases?
When architecting workloads, there are finite options that you can choose from. However, over time, new technologies and approaches become available that could improve the performance of your workload.

Architectures performing poorly are usually the result of a non-existent or broken performance review process. If your architecture is performing poorly, implementing a performance review process will allow you to apply Deming's plan-do-check-act (PDCA) cycle to drive iterative improvement.

Monitoring

After you implement your workload, you must monitor its performance so that you can remediate any issues before they impact your customers. Monitoring metrics should be used to raise alarms when thresholds are breached.

Amazon CloudWatch is a monitoring and observability service that provides you with data and actionable insights to monitor your workload, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events from workloads that run on AWS and from on-premises servers. AWS X-Ray helps developers analyze and debug distributed applications in production. With AWS X-Ray, you can glean insights into how your application is performing, discover root causes, and identify performance bottlenecks. You can use these insights to react quickly and keep your workload running smoothly.

The following questions focus on these considerations for performance efficiency.

PERF 7: How do you monitor your resources to ensure they are performing?
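The alarm idea above, raising an alarm only when a metric breaches a threshold for several consecutive datapoints (which also helps avoid false positives), can be sketched in plain Python. This is not the CloudWatch API; the function and its values are invented for illustration.

```python
# Minimal sketch of threshold alarming: alarm only when the last
# `periods` datapoints all exceed the threshold, which reduces false
# positives from a single noisy datapoint. Names/values are invented.

def breaches_threshold(datapoints, threshold, periods):
    """True if the last `periods` datapoints all exceed `threshold`."""
    if len(datapoints) < periods:
        return False
    return all(p > threshold for p in datapoints[-periods:])
```

For example, a single CPU spike to 95% would not alarm with `periods=3`, but three sustained datapoints above the threshold would.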
System performance can degrade over time. Monitor system performance to identify degradation and remediate internal or external factors, such as the operating system or application load.

Ensuring that you do not see false positives is key to an effective monitoring solution. Automated triggers avoid human error and can reduce the time it takes to fix problems. Plan for game days, where simulations are conducted in the production environment, to test your alarm solution and ensure that it correctly recognizes issues.

Tradeoffs

When you architect solutions, think about tradeoffs to ensure an optimal approach. Depending on your situation, you could trade consistency, durability, and space for time or latency, to deliver higher performance.

Using AWS, you can go global in minutes and deploy resources in multiple locations across the globe to be closer to your end users. You can also dynamically add read-only replicas to information stores (such as database systems) to reduce the load on the primary database.

The following questions focus on these considerations for performance efficiency.

PERF 8: How do you use tradeoffs to improve performance?
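One tradeoff named above, trading space (memory) for time, can be sketched with a memoized lookup. The counter below stands in for an expensive backend call and is invented for this example.

```python
# Minimal sketch of a space-for-time tradeoff: cache results in memory
# so repeated lookups skip the expensive work. The counter stands in
# for a slow backend query and exists only to make the effect visible.
from functools import lru_cache

CALLS = {"count": 0}

@lru_cache(maxsize=128)
def expensive_lookup(key: str) -> str:
    CALLS["count"] += 1          # simulated slow backend call
    return key.upper()

expensive_lookup("a")
expensive_lookup("a")            # served from cache; no backend call
expensive_lookup("b")
```

As the next paragraph advises, measure such changes (for example, cache hit rate and memory use) to confirm the tradeoff actually improves your workload.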
When architecting solutions, determining tradeoffs enables you to select an optimal approach. Often you can improve performance by trading consistency, durability, and space for time and latency.

As you make changes to the workload, collect and evaluate metrics to determine the impact of those changes. Measure the impacts to the system and to the end user to understand how your tradeoffs impact your workload. Use a systematic approach, such as load testing, to explore whether the tradeoff improves performance.

Resources

Refer to the following resources to learn more about our best practices for Performance Efficiency.

Documentation

• Amazon S3 Performance Optimization
• Amazon EBS Volume Performance

Whitepaper

• Performance Efficiency Pillar

Video

• AWS re:Invent 2019: Amazon EC2 foundations (CMP211-R2)
• AWS re:Invent 2019: Leadership session: Storage state of the union (STG201-L)
• AWS re:Invent 2019: Leadership session: AWS purpose-built databases (DAT209-L)
• AWS re:Invent 2019: Connectivity to AWS and hybrid AWS network architectures (NET317-R1)
• AWS re:Invent 2019: Powering next-gen Amazon EC2: Deep dive into the Nitro system (CMP303-R2)
• AWS re:Invent 2019: Scaling up to your first 10 million users (ARC211-R)

Cost Optimization

The Cost Optimization pillar includes the ability to run systems to deliver business value at the lowest price point.

The cost optimization pillar provides an overview of design principles, best practices, and questions. You can find prescriptive guidance on implementation in the Cost Optimization Pillar whitepaper.

Design Principles

There are five design principles for cost optimization in the cloud:

• Implement Cloud Financial Management: To achieve financial success and accelerate business value realization in the cloud, you need to invest in Cloud Financial Management and cost optimization. Your organization needs to dedicate time and resources to build capability in this new domain of technology and usage management.
Similar to your Security or Operational Excellence capability, you need to build capability through knowledge building, programs, resources, and processes to become a cost-efficient organization.
• Adopt a consumption model: Pay only for the computing resources that you require, and increase or decrease usage depending on business requirements, not by using elaborate forecasting. For example, development and test environments are typically only used for eight hours a day during the work week. You can stop these resources when they are not in use, for a potential cost savings of 75% (40 hours versus 168 hours).
• Measure overall efficiency: Measure the business output of the workload and the costs associated with delivering it. Use this measure to know the gains you make from increasing output and reducing costs.
• Stop spending money on undifferentiated heavy lifting: AWS does the heavy lifting of data center operations like racking, stacking, and powering servers. It also removes the operational burden of managing operating systems and applications with managed services. This allows you to focus on your customers and business projects rather than on IT infrastructure.
• Analyze and attribute expenditure: The cloud makes it easier to accurately identify the usage and cost of systems, which then allows transparent attribution of IT costs to individual workload owners. This helps measure return on investment (ROI) and gives workload owners an opportunity to optimize their resources and reduce costs.

Definition

There are five best practice areas for cost optimization in the cloud:

• Practice Cloud Financial Management
• Expenditure and usage awareness
• Cost-effective resources
• Manage demand and supply resources
• Optimize over time

As with the other pillars within the Well-Architected Framework, there are tradeoffs to consider, for example, whether to optimize for speed-to-market or for cost. In some cases, it's best to optimize for speed, going to market
quickly, shipping new features, or simply meeting a deadline, rather than investing in upfront cost optimization. Design decisions are sometimes directed by haste rather than data, and the temptation always exists to overcompensate "just in case" rather than spend time benchmarking for the most cost-optimal deployment. This might lead to over-provisioned and under-optimized deployments. However, this is a reasonable choice when you need to "lift and shift" resources from your on-premises environment to the cloud and then optimize afterwards. Investing the right amount of effort in a cost optimization strategy up front allows you to realize the economic benefits of the cloud more readily, by ensuring a consistent adherence to best practices and avoiding unnecessary over-provisioning. The following sections provide techniques and best practices for both the initial and ongoing implementation of Cloud Financial Management and cost optimization of your workloads.

Best Practices

Practice Cloud Financial Management

With the adoption of cloud technology, teams innovate faster due to shortened approval, procurement, and infrastructure deployment cycles. A new approach to financial management in the cloud is required to realize business value and financial success. This approach is Cloud Financial Management, which builds capability across your organization by implementing organization-wide knowledge building programs, resources, and processes.

Many organizations are composed of many different units with different priorities. The ability to align your organization to an agreed set of financial objectives, and to provide your organization the mechanisms to meet them, will create a more efficient organization. A capable organization will innovate and build faster, be more agile, and adjust to any internal or external factors.

In AWS, you can use Cost Explorer, and optionally Amazon Athena and Amazon QuickSight with the Cost and Usage Report (CUR), to provide cost and usage awareness throughout your
organization. AWS Budgets provides proactive notifications for cost and usage. The AWS blogs provide information on new services and features to ensure you keep up to date with new service releases.

The following questions focus on these considerations for cost optimization. (For a list of cost optimization questions and best practices, see the Appendix.)

COST 1: How do you implement cloud financial management?

Implementing Cloud Financial Management enables organizations to realize business value and financial success as they optimize their cost and usage and scale on AWS.

When building a cost optimization function, use existing team members and supplement the team with experts in Cloud Financial Management and cost optimization. Existing team members will understand how the organization currently functions and how to rapidly implement improvements. Also consider including people with supplementary or specialist skill sets, such as analytics and project management.

When implementing cost awareness in your organization, improve or build on existing programs and processes. It is much faster to add to what exists than to build new processes and programs, and will result in achieving outcomes much faster.

Expenditure and usage awareness

The increased flexibility and agility that the cloud enables encourages innovation and fast-paced development and deployment. It eliminates the manual processes and time associated with provisioning on-premises infrastructure, including identifying hardware specifications, negotiating price quotations, managing purchase orders, scheduling shipments, and then deploying the resources. However, the ease of use and virtually unlimited on-demand capacity requires a new way of thinking about expenditures.

Many businesses are composed of multiple systems run by various teams. The capability to attribute resource costs to the individual organization or product owners drives efficient usage behavior and helps reduce waste. Accurate cost attribution allows you to know which
products are truly profitable, and allows you to make more informed decisions about where to allocate budget.

In AWS, you create an account structure with AWS Organizations or AWS Control Tower, which provides separation and assists in allocation of your costs and usage. You can also use resource tagging to apply business and organization information to your usage and cost. Use AWS Cost Explorer for visibility into your cost and usage, or create customized dashboards and analytics with Amazon Athena and Amazon QuickSight. Controlling your cost and usage is done by notifications through AWS Budgets, and controls using AWS Identity and Access Management (IAM) and Service Quotas.

The following questions focus on these considerations for cost optimization.

COST 2: How do you govern usage?

Establish policies and mechanisms to ensure that appropriate costs are incurred while objectives are achieved. By employing a checks-and-balances approach, you can innovate without overspending.

COST 3: How do you monitor usage and cost?

Establish policies and procedures to monitor and appropriately allocate your costs. This allows you to measure and improve the cost efficiency of this workload.

COST 4: How do you decommission resources?
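The tag-based cost attribution described above can be sketched as a simple grouping of billing line items by a tag key. The records and the cost-center tag below are invented; a real cost and usage report has many more columns, but the attribution idea is the same. Surfacing untagged spend separately is one way to spot candidate resources for decommissioning.

```python
# Minimal sketch of cost attribution by tag: group invented line items
# by a hypothetical "cost-center" tag, with untagged spend kept visible.
from collections import defaultdict

def cost_by_tag(line_items, tag_key):
    """Sum item costs grouped by the value of `tag_key`."""
    totals = defaultdict(float)
    for item in line_items:
        totals[item["tags"].get(tag_key, "untagged")] += item["cost"]
    return dict(totals)

items = [
    {"cost": 12.0, "tags": {"cost-center": "web"}},
    {"cost": 3.5,  "tags": {"cost-center": "web"}},
    {"cost": 7.0,  "tags": {}},   # possibly an orphaned resource
]
totals = cost_by_tag(items, "cost-center")
```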
Implement change control and resource management from project inception to end-of-life. This ensures you shut down or terminate unused resources to reduce waste.

You can use cost allocation tags to categorize and track your AWS usage and costs. When you apply tags to your AWS resources (such as EC2 instances or S3 buckets), AWS generates a cost and usage report with your usage and your tags. You can apply tags that represent organization categories (such as cost centers, workload names, or owners) to organize your costs across multiple services.

Ensure you use the right level of detail and granularity in cost and usage reporting and monitoring. For high-level insights and trends, use daily granularity with AWS Cost Explorer. For deeper analysis and inspection, use hourly granularity in AWS Cost Explorer, or Amazon Athena and Amazon QuickSight with the Cost and Usage Report (CUR) at an hourly granularity.

Combining tagged resources with entity lifecycle tracking (employees, projects) makes it possible to identify orphaned resources or projects that are no longer generating value to the organization and should be decommissioned. You can set up billing alerts to notify you of predicted overspending.

Cost-effective resources

Using the appropriate instances and resources for your workload is key to cost savings. For example, a reporting process might take five hours to run on a smaller server but one hour to run on a larger server that is twice as expensive. Both servers give you the same outcome, but the smaller server incurs more cost over time.

A well-architected workload uses the most cost-effective resources, which can have a significant and positive economic impact. You also have the opportunity to use managed services to reduce costs. For example, rather than maintaining servers to deliver email, you can use a service that charges on a per-message basis.

AWS offers a variety of flexible and cost-effective pricing options to acquire instances from Amazon EC2 and other services in a way that best
fits your needs. On-Demand Instances allow you to pay for compute capacity by the hour, with no minimum commitments required. Savings Plans and Reserved Instances offer savings of up to 75% off On-Demand pricing. With Spot Instances, you can leverage unused Amazon EC2 capacity for savings of up to 90% off On-Demand pricing. Spot Instances are appropriate where the system can tolerate using a fleet of servers where individual servers can come and go dynamically, such as stateless web servers, batch processing, or when using HPC and big data.

Appropriate service selection can also reduce usage and costs, such as using CloudFront to minimize data transfer, or completely eliminate costs, such as utilizing Amazon Aurora on RDS to remove expensive database licensing costs.

The following questions focus on these considerations for cost optimization.

COST 5: How do you evaluate cost when you select services?

Amazon EC2, Amazon EBS, and Amazon S3 are building-block AWS services. Managed services, such as Amazon RDS and Amazon DynamoDB, are higher-level or application-level AWS services. By selecting the appropriate building blocks and managed services, you can optimize this workload for cost. For example, using managed services, you can reduce or remove much of your administrative and operational overhead, freeing you to work on applications and business-related activities.

COST 6: How do you meet cost targets when you select resource type, size, and number?

Ensure that you choose the appropriate resource size and number of resources for the task at hand. You minimize waste by selecting the most cost-effective type, size, and number.

COST 7: How do you use pricing models to reduce cost?

Use the pricing model that is most appropriate for your resources to minimize expense.

COST 8: How do you plan for data transfer charges?
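The pricing models above can be compared with a back-of-envelope calculation using the discount ceilings quoted in the text (up to 75% off with Savings Plans or Reserved Instances, up to 90% off with Spot). The hourly rate and hours below are invented inputs; real pricing varies by instance type, Region, and commitment.

```python
# Back-of-envelope sketch: compare a month of compute under the three
# pricing models named above. Rate ($0.10/hour) and hours (730/month)
# are invented; discounts are the "up to" ceilings quoted in the text.

def monthly_cost(on_demand_rate, hours, discount):
    """Cost for `hours` at an hourly On-Demand rate, less `discount`."""
    return on_demand_rate * hours * (1 - discount)

on_demand = monthly_cost(0.10, 730, 0.00)   # ≈ $73.00
reserved  = monthly_cost(0.10, 730, 0.75)   # ≈ $18.25
spot      = monthly_cost(0.10, 730, 0.90)   # ≈ $7.30
```

The same arithmetic shows why the consumption-model principle matters: stopping a dev/test instance outside a 40-hour work week cuts its hours, and therefore its cost, by roughly 75% before any discount is applied.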
Ensure that you plan and monitor data transfer charges so that you can make architectural decisions to minimize costs. A small yet effective architectural change can drastically reduce your operational costs over time.

By factoring in cost during service selection, and using tools such as Cost Explorer and AWS Trusted Advisor to regularly review your AWS usage, you can actively monitor your utilization and adjust your deployments accordingly.

Manage demand and supply resources

When you move to the cloud, you pay only for what you need. You can supply resources to match the workload demand at the time they're needed; this eliminates the need for costly and wasteful over-provisioning. You can also modify the demand, using a throttle, buffer, or queue to smooth the demand and serve it with fewer resources, resulting in a lower cost, or process it at a later time with a batch service.

In AWS, you can automatically provision resources to match the workload demand. Auto Scaling, using demand-based or time-based approaches, allows you to add and remove resources as needed. If you can anticipate changes in demand, you can save more money and ensure your resources match your workload needs. You can use Amazon API Gateway to implement throttling, or Amazon SQS to implement a queue in your workload. These will both allow you to modify the demand on your workload components.

The following questions focus on these considerations for cost optimization.

COST 9: How do you manage demand and supply resources?
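The throttle/buffer/queue idea above can be sketched as a queue drained by a fixed-capacity worker pool: bursty demand is absorbed by the buffer and served over several ticks instead of provisioning for the peak. The sizes below are invented.

```python
# Minimal sketch of demand smoothing: buffer requests in a queue and
# serve a fixed number per tick, rather than sizing capacity for the
# burst. `per_tick` stands in for a fixed worker-pool capacity.
from collections import deque

def drain(requests, per_tick):
    """Serve `per_tick` requests each tick; return ticks needed."""
    if per_tick <= 0:
        raise ValueError("per_tick must be positive")
    queue, ticks = deque(requests), 0
    while queue:
        for _ in range(min(per_tick, len(queue))):
            queue.popleft()          # request served
        ticks += 1
    return ticks
```

A burst of 10 requests served 3 at a time drains in 4 ticks; the cost tradeoff is the added latency for requests at the back of the queue, which must fit within the required response time noted below.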
For a workload that has balanced spend and performance, ensure that everything you pay for is used, and avoid significantly underutilizing instances. A skewed utilization metric in either direction has an adverse impact on your organization, in either operational costs (degraded performance due to over-utilization) or wasted AWS expenditures (due to over-provisioning).

When designing to modify demand and supply resources, actively think about the patterns of usage, the time it takes to provision new resources, and the predictability of the demand pattern. When managing demand, ensure you have a correctly sized queue or buffer, and that you are responding to workload demand in the required amount of time.

Optimize over time

As AWS releases new services and features, it's a best practice to review your existing architectural decisions to ensure they continue to be the most cost effective. As your requirements change, be aggressive in decommissioning resources, entire services, and systems that you no longer require.

Implementing new features or resource types can optimize your workload incrementally, while minimizing the effort required to implement the change. This provides continual improvements in efficiency over time, and ensures you remain on the most updated technology to reduce operating costs. You can also replace or add new components to the workload with new services. This can provide significant increases in efficiency, so it's essential to regularly review your workload and implement new services and features.

The following questions focus on these considerations for cost optimization.

COST 10: How do you evaluate new services?
As AWS releases new services and features, it's a best practice to review your existing architectural decisions to ensure they continue to be the most cost effective.

When regularly reviewing your deployments, assess how newer services can help save you money. For example, Amazon Aurora on RDS can reduce costs for relational databases. Using serverless services such as Lambda can remove the need to operate and manage instances to run code.

Resources

Refer to the following resources to learn more about our best practices for Cost Optimization.

Documentation

• AWS Documentation

Whitepaper

• Cost Optimization Pillar

The Review Process

The review of architectures needs to be done in a consistent manner, with a blame-free approach that encourages diving deep. It should be a lightweight process (hours, not days) that is a conversation and not an audit. The purpose of reviewing an architecture is to identify any critical issues that might need addressing, or areas that could be improved. The outcome of the review is a set of actions that should improve the experience of a customer using the workload.

As discussed in the "On Architecture" section, you will want each team member to take responsibility for the quality of its architecture. We recommend that the team members who build an architecture use the Well-Architected Framework to continually review their architecture, rather than holding a formal review meeting. A continuous approach allows your team members to update answers as the architecture evolves, and improve the architecture as you deliver features.

The AWS Well-Architected Framework is aligned to the way that AWS reviews systems and services internally. It is premised on a set of design principles that influences architectural approach, and questions that ensure that people don't neglect areas that often featured in Root Cause Analysis (RCA). Whenever there is a significant issue with an internal system, AWS
service or customer we look at the RCA to see if we could improve the review processes we use Reviews should be applied at key milestones in the product lifecycle early on in the design phase to avoid oneway doors 1 that are difficult to change and then before the golive date After you go into production your workload will continue to evolve as you add new features and change technology implementations The architecture of a workload changes over time You will need to follow good hygiene practices to stop its architectural characteristics from degrading as you evolve it As you make sig nificant architecture changes you should follow a set of hygiene processes including a WellArchitected review If you want to use the review as a onetime snapshot or independent measurement you will want to ensure that you have all the right people in the conversation Often we find that reviews are the first time that a team truly understands what they have implemented An approach that works well when reviewing another team's workload is to have a series of informal conversations about their architecture where you can glean the answers to most questions You can then follow up with one or two meet ings where you can gain clarity or dive deep on areas of ambiguity or perceived risk Here are some suggested items to facilitate your meetings: • A meeting room with whiteboards 1Many decisions are reversible twoway doors Those decisions can use a light weight process Oneway doors are hard or impossible to reverse and require more inspection before making them 43ArchivedAWS WellArchitected Framework • Print outs of any diagrams or design notes • Action list of questions that require outofband research to answer (for example “did we enable encryption or not?” ) After you have done a review you should have a list of issues that you can prioritize based on your business context You will also want to take into account the impact of those issues on the daytoday work of your team If you address 
these issues early you could free up time to work on creating business value rather than solving recur ring problems As you address issues you can update your review to see how the ar chitecture is improving While the value of a review is clear after you have done one you may find that a new team might be resistant at first Here are some objections that can be handled through educating the team on the benefits of a review: • “We are too busy!” (Often said when the team is getting ready for a big launch) • If you are getting ready for a big launch you will want it to go smoothly The re view will allow you to understand any problems you might have missed • We recommend that you carry out reviews early in the product lifecycle to uncov er risks and develop a mitigation plan aligned with the feature delivery roadmap • “We don’t have time to do anything with the results!” (Often said when there is an immovable event such as the Super Bowl that they are targeting) • These events can’t be moved Do you really want to go into it without knowing the risks in your architecture? 
  • Even if you don't address all of these issues, you can still have playbooks for handling them if they materialize.
• "We don't want others to know the secrets of our solution implementation!"
  • If you point the team at the questions in the Well-Architected Framework, they will see that none of the questions reveal any commercial or technical proprietary information.

As you carry out multiple reviews with teams in your organization, you might identify thematic issues. For example, you might see that a group of teams has clusters of issues in a particular pillar or topic. You will want to look at all your reviews in a holistic manner, and identify any mechanisms, training, or principal engineering talks that could help address those thematic issues.

Conclusion

The AWS Well-Architected Framework provides architectural best practices across the five pillars for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. The Framework provides a set of questions that allows you to review an existing or proposed architecture. It also provides a set of AWS best practices for each pillar. Using the Framework in your architecture will help you produce stable and efficient systems, which allow you to focus on your functional requirements.

Contributors

The following individuals and organizations contributed to this document:

• Rodney Lester: Senior Manager, Well-Architected, Amazon Web Services
• Brian Carlson: Operations Lead, Well-Architected, Amazon Web Services
• Ben Potter: Security Lead, Well-Architected, Amazon Web Services
• Eric Pullen: Performance Lead, Well-Architected, Amazon Web Services
• Seth Eliot: Reliability Lead, Well-Architected, Amazon Web Services
• Nathan Besh: Cost Lead, Well-Architected, Amazon Web Services
• Jon Steele: Sr. Technical Account Manager, Amazon Web Services
• Ryan King: Technical Program Manager, Amazon Web Services
• Erin Rifkin: Senior Product Manager, Amazon Web Services
• Max Ramsay: Principal Security Solutions Architect, Amazon Web Services
• Scott Paddock: Security Solutions Architect, Amazon Web Services
• Callum Hughes: Solutions Architect, Amazon Web Services

Further Reading

AWS Cloud Compliance
AWS Well-Architected Partner program
AWS Well-Architected Tool
AWS Well-Architected homepage
Cost Optimization Pillar whitepaper
Operational Excellence Pillar whitepaper
Performance Efficiency Pillar whitepaper
Reliability Pillar whitepaper
Security Pillar whitepaper
The Amazon Builders' Library

Document Revisions

Table 2: Major revisions

July 2020: Review and rewrite of most questions and answers.
July 2019: Addition of AWS Well-Architected Tool, links to AWS Well-Architected Labs and AWS Well-Architected Partners; minor fixes to enable multiple language versions of the framework.
November 2018: Review and rewrite of most questions and answers to ensure questions focus on one topic at a time. This caused some previous questions to be split into multiple questions. Added common terms to definitions (workload, component, etc.). Changed presentation of questions in the main body to include descriptive text.
June 2018: Updates to simplify question text, standardize answers, and improve readability.
November 2017: Operational Excellence moved to front of pillars and rewritten so it frames other pillars. Refreshed other pillars to reflect evolution of AWS.
November 2016: Updated the Framework to include the operational excellence pillar, and revised and updated the other pillars to reduce duplication and incorporate learnings from carrying out reviews with thousands of customers.
November 2015: Updated the Appendix with current Amazon CloudWatch Logs information.
October 2015: Original publication.

Appendix: Questions and Best Practices

Operational Excellence

Organization

OPS 1: How do you determine what your priorities are?
Everyone needs to understand their part in enabling business success. Have shared goals in order to set priorities for resources. This will maximize the benefits of your efforts.

Best Practices:
• Evaluate external customer needs: Involve key stakeholders, including business development and operations teams, to determine where to focus efforts on external customer needs. This will ensure that you have a thorough understanding of the operations support that is required to achieve your desired business outcomes.
• Evaluate internal customer needs: Involve key stakeholders, including business development and operations teams, when determining where to focus efforts on internal customer needs. This will ensure that you have a thorough understanding of the operations support that is required to achieve business outcomes.
• Evaluate governance requirements: Ensure that you are aware of guidelines or obligations defined by your organization that may mandate or emphasize specific focus. Evaluate internal factors, such as organization policy, standards, and requirements. Validate that you have mechanisms to identify changes to governance. If no governance requirements are identified, ensure that you have applied due diligence to this determination.
• Evaluate compliance requirements: Evaluate external factors, such as regulatory compliance requirements and industry standards, to ensure that you are aware of guidelines or obligations that may mandate or emphasize specific focus. If no compliance requirements are identified, ensure that you apply due diligence to this determination.
• Evaluate threat landscape: Evaluate threats to the business (for example, competition, business risk and liabilities, operational risks, and information security threats) and maintain current information in a risk registry. Include the impact of risks when determining where to focus efforts.
• Evaluate tradeoffs: Evaluate the impact of tradeoffs between competing interests or alternative approaches, to help make informed decisions when determining where to focus efforts or choosing a course of action. For example, accelerating speed to market for new features may be emphasized over cost optimization, or you may choose a relational database for non-relational data to simplify the effort to migrate a system, rather than migrating to a database optimized for your data type and updating your application.
• Manage benefits and risks: Manage benefits and risks to make informed decisions when determining where to focus efforts. For example, it may be beneficial to deploy a workload with unresolved issues so that significant new features can be made available to customers. It may be possible to mitigate associated risks, or it may become unacceptable to allow a risk to remain, in which case you will take action to address the risk.

OPS 2: How do you structure your organization to support your business outcomes?

Your teams must understand their part in achieving business outcomes. Teams need to understand their roles in the success of other teams, the role of other teams in their success, and have shared goals. Understanding responsibility, ownership, how decisions are made, and who has authority to make decisions will help focus efforts and maximize the benefits from your teams.

Best Practices:
• Resources have identified owners: Understand who has ownership of each application, workload, platform, and infrastructure component, what business value is provided by that component, and why that ownership exists. Understanding the business value of these individual components and how they support business outcomes informs the processes and procedures applied against them.
• Processes and procedures have identified owners: Understand who has ownership of the definition of individual processes and procedures, why those specific processes and procedures are used, and why that ownership exists. Understanding the reasons that specific processes and procedures are used enables identification of improvement opportunities.
• Operations activities have identified owners responsible for their performance: Understand who has responsibility to perform specific activities on defined workloads, and why that responsibility exists. Understanding who has responsibility to perform activities informs who will conduct the activity, validate the result, and provide feedback to the owner of the activity.
• Team members know what they are responsible for: Understanding the responsibilities of your role and how you contribute to business outcomes informs the prioritization of your tasks and why your role is important. This enables team members to recognize needs and respond appropriately.
• Mechanisms exist to identify responsibility and ownership: Where no individual or team is identified, there are defined escalation paths to someone with the authority to assign ownership, or plan for that need to be addressed.
• Mechanisms exist to request additions, changes, and exceptions: You are able to make requests to owners of processes, procedures, and resources. Make informed decisions to approve requests where viable and determined to be appropriate after an evaluation of benefits and risks.
• Responsibilities between teams are predefined or negotiated: There are defined or negotiated agreements between teams describing how they work with and support each other (for example, response times, service level objectives, or service level agreements). Understanding the impact of the teams' work on business outcomes, and the outcomes of other teams and organizations, informs the prioritization of their tasks and enables them to respond appropriately.

OPS 3: How does your organizational culture support your business outcomes?
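The service level objectives mentioned under OPS 2 above can be made concrete with an error-budget calculation. The sketch below uses hypothetical figures and is not drawn from the framework itself:

```python
# Hypothetical error-budget calculation for an availability SLO agreed
# between teams. The SLO target and request counts are illustrative.

def error_budget(slo_target, total_requests, failed_requests):
    """Return (allowed_failures, remaining_budget) for an availability SLO."""
    allowed = total_requests * (1.0 - slo_target)
    return allowed, allowed - failed_requests

# A 99.9% availability SLO over 1,000,000 requests allows ~1,000 failures;
# 400 observed failures leave ~600 in the budget.
allowed, remaining = error_budget(0.999, 1_000_000, 400)
print(round(allowed), round(remaining))
```

When the remaining budget approaches zero, the agreement gives both teams an objective trigger to prioritize reliability work over new features.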
Provide support for your team members so that they can be more effective in taking action and supporting your business outcome.

Best Practices:
• Executive Sponsorship: Senior leadership clearly sets expectations for the organization and evaluates success. Senior leadership is the sponsor, advocate, and driver for the adoption of best practices and evolution of the organization.
• Team members are empowered to take action when outcomes are at risk: The workload owner has defined guidance and scope empowering team members to respond when outcomes are at risk. Escalation mechanisms are used to get direction when events are outside of the defined scope.
• Escalation is encouraged: Team members have mechanisms and are encouraged to escalate concerns to decision makers and stakeholders if they believe outcomes are at risk. Escalation should be performed early and often so that risks can be identified, and prevented from causing incidents.
• Communications are timely, clear, and actionable: Mechanisms exist and are used to provide timely notice to team members of known risks and planned events. Necessary context, details, and time (when possible) are provided to support determining if action is necessary, what action is required, and to take action in a timely manner. For example, providing notice of software vulnerabilities so that patching can be expedited, or providing notice of planned sales promotions so that a change freeze can be implemented to avoid the risk of service disruption.
• Experimentation is encouraged: Experimentation accelerates learning and keeps team members interested and engaged. An undesired result is a successful experiment that has identified a path that will not lead to success. Team members are not punished for successful experiments with undesired results. Experimentation is required for innovation to happen and turn ideas into outcomes.
• Team members are enabled and encouraged to maintain and grow their skill sets: Teams must grow their skill sets to adopt new technologies, and to support changes in demand and responsibilities in support of your workloads. Growth of skills in new technologies is frequently a source of team member satisfaction and supports innovation. Support your team members' pursuit and maintenance of industry certifications that validate and acknowledge their growing skills. Cross train to promote knowledge transfer and reduce the risk of significant impact when you lose skilled and experienced team members with institutional knowledge. Provide dedicated structured time for learning.
• Resource teams appropriately: Maintain team member capacity, and provide tools and resources, to support your workload needs. Overtasking team members increases the risk of incidents resulting from human error. Investments in tools and resources (for example, providing automation for frequently executed activities) can scale the effectiveness of your team, enabling them to support additional activities.
• Diverse opinions are encouraged and sought within and across teams: Leverage cross-organizational diversity to seek multiple unique perspectives. Use this perspective to increase innovation, challenge your assumptions, and reduce the risk of confirmation bias. Grow inclusion, diversity, and accessibility within your teams to gain beneficial perspectives.

Prepare

OPS 4: How do you design your workload so that you can understand its state?

Design your workload so that it provides the information necessary across all components (for example, metrics, logs, and traces) for you to understand its internal state. This enables you to provide effective responses when appropriate.

Best Practices:
• Implement application telemetry: Instrument your application code to emit information about its internal state, status, and achievement of business outcomes. For example, queue depth, error messages, and response times. Use this information to determine when a response is required.
• Implement and configure workload telemetry: Design and configure your workload to emit information about its internal state and current status. For example, API call volume, HTTP status codes, and scaling events. Use this information to help determine when a response is required.
• Implement user activity telemetry: Instrument your application code to emit information about user activity, for example, click streams, or started, abandoned, and completed transactions. Use this information to help understand how the application is used, patterns of usage, and to determine when a response is required.
• Implement dependency telemetry: Design and configure your workload to emit information about the status (for example, reachability or response time) of resources it depends on. Examples of external dependencies can include external databases, DNS, and network connectivity. Use this information to determine when a response is required.
• Implement transaction traceability: Implement your application code, and configure your workload components, to emit information about the flow of transactions across the workload. Use this information to determine when a response is required, and to assist you in identifying the factors contributing to an issue.

OPS 5: How do you reduce defects, ease remediation, and improve flow into production?
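The application-telemetry practice described under OPS 4 above can be illustrated with a minimal, vendor-neutral recorder. The class and metric names here are hypothetical; a real workload would ship these values to a monitoring service rather than keep them in memory:

```python
# Minimal application-telemetry sketch: record internal state such as
# queue depth and response times, then summarize it for inspection.
# This is an illustrative in-process recorder, not a specific AWS API.
import time
from collections import defaultdict

class TelemetryRecorder:
    def __init__(self):
        # metric name -> list of (timestamp, value) samples
        self.metrics = defaultdict(list)

    def record(self, name, value):
        self.metrics[name].append((time.time(), value))

    def summary(self, name):
        values = [v for _, v in self.metrics[name]]
        return {"count": len(values), "avg": sum(values) / len(values)}

telemetry = TelemetryRecorder()
telemetry.record("queue_depth", 4)
telemetry.record("queue_depth", 6)
telemetry.record("response_time_ms", 120.0)
print(telemetry.summary("queue_depth"))  # {'count': 2, 'avg': 5.0}
```

The point of the sketch is the instrumentation pattern: each component emits named measurements at the moment they occur, so that summaries and alerts can be derived later.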
Adopt approaches that improve flow of changes into production that enable refactoring fast feedback on quality and bug fixing These accelerate beneficial changes entering pro duction limit issues deployed and enable rapid identification and remediation of issues in troduced through deployment activities Best Practices: •Use version control : Use version control to enable tracking of changes and releases •Test and validate changes: Test and validate changes to help limit and detect errors Au tomate testing to reduce errors caused by manual processes and reduce the level of effort to test •Use configuration management systems : Use configuration management systems to make and track configuration changes These systems reduce errors caused by manual processes and reduce the level of effort to deploy changes •Use build and deployment management systems: Use build and deployment manage ment systems These systems reduce errors caused by manual processes and reduce the level of effort to deploy changes •Perform patch management : Perform patch management to gain features address issues and remain compliant with governance Automate patch management to reduce errors caused by manual processes and reduce the level of effort to patch •Share design standards: Share best practices across teams to increase awareness and maximize the benefits of development efforts •Implement practices to improve code quality : Implement practices to improve code qual ity and minimize defects For example testdriven development code reviews and stan dards adoption •Use multiple environments : Use multiple environments to experiment develop and test your workload Use increasing levels of controls as environments approach production to gain confidence your workload will operate as intended when deployed •Make frequent small reversible changes: Frequent small and reversible changes reduce the scope and impact of a change This eases troubleshooting enables faster remediation and provides the option to 
roll back a change •Fully automate integration and deployment : Automate build deployment and testing of the workload This reduces errors caused by manual processes and reduces the effort to deploy changes 53ArchivedAWS WellArchitected Framework OPS 6 How do you mitigate deployment risks? Adopt approaches that provide fast feedback on quality and enable rapid recovery from changes that do not have desired outcomes Using these practices mitigates the impact of is sues introduced through the deployment of changes Best Practices: •Plan for unsuccessful changes : Plan to revert to a known good state or remediate in the production environment if a change does not have the desired outcome This preparation reduces recovery time through faster responses •Test and validate changes: Test changes and validate the results at all lifecycle stages to confirm new features and minimize the risk and impact of failed deployments •Use deployment management systems: Use deployment management systems to track and implement change This reduces errors cause by manual processes and reduces the ef fort to deploy changes •Test using limited deployments : Test with limited deployments alongside existing sys tems to confirm desired outcomes prior to full scale deployment For example use deploy ment canary testing or onebox deployments •Deploy using parallel environments: Implement changes onto parallel environments and then transition over to the new environment Maintain the prior environment until there is confirmation of successful deployment Doing so minimizes recovery time by enabling roll back to the previous environment •Deploy frequent small reversible changes: Use frequent small and reversible changes to reduce the scope of a change This results in easier troubleshooting and faster remedia tion with the option to roll back a change •Fully automate integration and deployment : Automate build deployment and testing of the workload This reduces errors cause by manual processes and 
reduces the effort to de ploy changes •Automate testing and rollback : Automate testing of deployed environments to confirm desired outcomes Automate rollback to previous known good state when outcomes are not achieved to minimize recovery time and reduce errors caused by manual processes 54ArchivedAWS WellArchitected Framework OPS 7 How do you know that you are ready to support a workload? Evaluate the operational readiness of your workload processes and procedures and person nel to understand the operational risks related to your workload Best Practices: •Ensure personnel capability: Have a mechanism to validate that you have the appropri ate number of trained personnel to provide support for operational needs Train personnel and adjust personnel capacity as necessary to maintain effective support •Ensure consistent review of operational readiness : Ensure you have a consistent review of your readiness to operate a workload Reviews must include at a minimum the oper ational readiness of the teams and the workload and security requirements Implement review activities in code and trigger automated review in response to events where ap propriate to ensure consistency speed of execution and reduce errors caused by manual processes •Use runbooks to perform procedures : Runbooks are documented procedures to achieve specific outcomes Enable consistent and prompt responses to wellunderstood events by documenting procedures in runbooks Implement runbooks as code and trigger the execu tion of runbooks in response to events where appropriate to ensure consistency speed re sponses and reduce errors caused by manual processes •Use playbooks to investigate issues : Enable consistent and prompt responses to issues that are not well understood by documenting the investigation process in playbooks Playbooks are the predefined steps performed to identify the factors contributing to a fail ure scenario The results from any process step are used to determine the next steps to take 
until the issue is identified or escalated •Make informed decisions to deploy systems and changes: Evaluate the capabilities of the team to support the workload and the workload's compliance with governance Evaluate these against the benefits of deployment when determining whether to transition a sys tem or change into production Understand the benefits and risks to make informed deci sions 55ArchivedAWS WellArchitected Framework Operate OPS 8 How do you understand the health of your workload? Define capture and analyze workload metrics to gain visibility to workload events so that you can take appropriate action Best Practices: •Identify key performance indicators: Identify key performance indicators (KPIs) based on desired business outcomes (for example order rate customer retention rate and profit versus operating expense) and customer outcomes (for example customer satisfaction) Evaluate KPIs to determine workload success •Define workload metrics : Define workload metrics to measure the achievement of KPIs (for example abandoned shopping carts orders placed cost price and allocated workload expense) Define workload metrics to measure the health of the workload (for example interface response time error rate requests made requests completed and utilization) Evaluate metrics to determine if the workload is achieving desired outcomes and to un derstand the health of the workload •Collect and analyze workload metrics: Perform regular proactive reviews of metrics to identify trends and determine where appropriate responses are needed •Establish workload metrics baselines: Establish baselines for metrics to provide expected values as the basis for comparison and identification of under and over performing com ponents Identify thresholds for improvement investigation and intervention •Learn expected patterns of activity for workload: Establish patterns of workload activity to identify anomalous behavior so that you can respond appropriately if required •Alert when 
workload outcomes are at risk: Raise an alert when workload outcomes are at risk so that you can respond appropriately if necessary •Alert when workload anomalies are detected: Raise an alert when workload anomalies are detected so that you can respond appropriately if necessary •Validate the achievement of outcomes and the effectiveness of KPIs and metrics : Cre ate a businesslevel view of your workload operations to help you determine if you are sat isfying needs and to identify areas that need improvement to reach business goals Vali date the effectiveness of KPIs and metrics and revise them if necessary 56ArchivedAWS WellArchitected Framework OPS 9 How do you understand the health of your operations? Define capture and analyze operations metrics to gain visibility to operations events so that you can take appropriate action Best Practices: •Identify key performance indicators: Identify key performance indicators (KPIs) based on desired business (for example new features delivered) and customer outcomes (for exam ple customer support cases) Evaluate KPIs to determine operations success •Define operations metrics: Define operations metrics to measure the achievement of KPIs (for example successful deployments and failed deployments) Define operations met rics to measure the health of operations activities (for example mean time to detect an in cident (MTTD) and mean time to recovery (MTTR) from an incident) Evaluate metrics to determine if operations are achieving desired outcomes and to understand the health of your operations activities •Collect and analyze operations metrics : Perform regular proactive reviews of metrics to identify trends and determine where appropriate responses are needed •Establish operations metrics baselines : Establish baselines for metrics to provide expect ed values as the basis for comparison and identification of under and over performing op erations activities •Learn the expected patterns of activity for operations: Establish 
patterns of operations activities to identify anomalous activity so that you can respond appropriately if necessary •Alert when operations outcomes are at risk : Raise an alert when operations outcomes are at risk so that you can respond appropriately if necessary •Alert when operations anomalies are detected : Raise an alert when operations anomalies are detected so that you can respond appropriately if necessary •Validate the achievement of outcomes and the effectiveness of KPIs and metrics : Cre ate a businesslevel view of your operations activities to help you determine if you are sat isfying needs and to identify areas that need improvement to reach business goals Vali date the effectiveness of KPIs and metrics and revise them if necessary 57ArchivedAWS WellArchitected Framework OPS 10 How do you manage workload and operations events? Prepare and validate procedures for responding to events to minimize their disruption to your workload Best Practices: •Use processes for event incident and problem management : Have processes to address observed events events that require intervention (incidents) and events that require in tervention and either recur or cannot currently be resolved (problems) Use these process es to mitigate the impact of these events on the business and your customers by ensuring timely and appropriate responses •Have a process per alert : Have a welldefined response (runbook or playbook) with a specifically identified owner for any event for which you raise an alert This ensures effec tive and prompt responses to operations events and prevents actionable events from be ing obscured by less valuable notifications •Prioritize operational events based on business impact: Ensure that when multiple events require intervention those that are most significant to the business are addressed first For example impacts can include loss of life or injury financial loss or damage to reputation or trust •Define escalation paths : Define escalation paths in 
your runbooks and playbooks includ ing what triggers escalation and procedures for escalation Specifically identify owners for each action to ensure effective and prompt responses to operations events •Enable push notifications : Communicate directly with your users (for example with email or SMS) when the services they use are impacted and again when the services return to normal operating conditions to enable users to take appropriate action •Communicate status through dashboards: Provide dashboards tailored to their target au diences (for example internal technical teams leadership and customers) to communicate the current operating status of the business and provide metrics of interest •Automate responses to events : Automate responses to events to reduce errors caused by manual processes and to ensure prompt and consistent responses 58ArchivedAWS WellArchitected Framework Evolve OPS 11 How do you evolve operations? Dedicate time and resources for continuous incremental improvement to evolve the effec tiveness and efficiency of your operations Best Practices: •Have a process for continuous improvement: Regularly evaluate and prioritize opportuni ties for improvement to focus efforts where they can provide the greatest benefits •Perform postincident analysis : Review customerimpacting events and identify the con tributing factors and preventative actions Use this information to develop mitigations to limit or prevent recurrence Develop procedures for prompt and effective responses Com municate contributing factors and corrective actions as appropriate tailored to target au diences •Implement feedback loops : Include feedback loops in your procedures and workloads to help you identify issues and areas that need improvement •Perform Knowledge Management : Mechanisms exist for your team members to discover the information that they are looking for in a timely manner access it and identify that it’s current and complete Mechanisms are present to identify needed 
content, content in need of refresh, and content that should be archived so that it's no longer referenced.
•Define drivers for improvement: Identify drivers for improvement to help you evaluate and prioritize opportunities.
•Validate insights: Review your analysis results and responses with cross-functional teams and business owners. Use these reviews to establish common understanding, identify additional impacts, and determine courses of action. Adjust responses as appropriate.
•Perform operations metrics reviews: Regularly perform retrospective analysis of operations metrics with cross-team participants from different areas of the business. Use these reviews to identify opportunities for improvement, potential courses of action, and to share lessons learned.
•Document and share lessons learned: Document and share lessons learned from the execution of operations activities so that you can use them internally and across teams.
•Allocate time to make improvements: Dedicate time and resources within your processes to make continuous, incremental improvements possible.
Security
SEC 1 How do you securely operate your workload?
To operate your workload securely, you must apply overarching best practices to every area of security. Take requirements and processes that you have defined in operational excellence at an organizational and workload level, and apply them to all areas. Staying up to date with AWS and industry recommendations and threat intelligence helps you evolve your threat model and control objectives. Automating security processes, testing, and validation allows you to scale your security operations.
Best Practices:
•Separate workloads using accounts: Organize workloads in separate accounts, and group accounts based on function or a common set of controls rather than mirroring your company's reporting structure. Start with security and infrastructure in mind to enable your organization to set common guardrails as your workloads grow.
•Secure AWS account: Secure access to your accounts, for example by enabling MFA and restricting use of the root user, and configure account contacts.
•Identify and validate control objectives: Based on your compliance requirements and risks identified from your threat model, derive and validate the control objectives and controls that you need to apply to your workload. Ongoing validation of control objectives and controls helps you measure the effectiveness of risk mitigation.
•Keep up to date with security threats: Recognize attack vectors by staying up to date with the latest security threats to help you define and implement appropriate controls.
•Keep up to date with security recommendations: Stay up to date with both AWS and industry security recommendations to evolve the security posture of your workload.
•Automate testing and validation of security controls in pipelines: Establish secure baselines and templates for security mechanisms that are tested and validated as part of your build pipelines and processes. Use tools and automation to test and validate all security controls continuously. For example, scan items such as machine images and infrastructure
as code templates for security vulnerabilities, irregularities, and drift from an established baseline at each stage.
•Identify and prioritize risks using a threat model: Use a threat model to identify and maintain an up-to-date register of potential threats. Prioritize your threats and adapt your security controls to prevent, detect, and respond. Revisit and maintain this in the context of the evolving security landscape.
•Evaluate and implement new security services and features regularly: AWS and APN Partners constantly release new features and services that allow you to evolve the security posture of your workload.
Identity and Access Management
SEC 2 How do you manage identities for people and machines?
There are two types of identities you need to manage when approaching operating secure AWS workloads. Understanding the type of identity you need to manage and grant access helps you ensure the right identities have access to the right resources under the right conditions.
Human Identities: Your administrators, developers, operators, and end users require an identity to access your AWS environments and applications. These are members of your organization, or external users with whom you collaborate, who interact with your AWS resources via a web browser, client application, or interactive command-line tools.
Machine Identities: Your service applications, operational tools, and workloads require an identity to make requests to AWS services, for example, to read data. These identities include machines running in your AWS environment, such as Amazon EC2 instances or AWS Lambda functions. You may also manage machine identities for external parties who need access. Additionally, you may also have machines outside of AWS that need access to your AWS environment.
Best Practices:
•Use strong sign-in mechanisms: Enforce minimum password length, and educate users to avoid common or reused passwords. Enforce multi-factor authentication (MFA) with software or
hardware mechanisms to provide an additional layer.
•Use temporary credentials: Require identities to dynamically acquire temporary credentials. For workforce identities, use AWS Single Sign-On or federation with IAM roles to access AWS accounts. For machine identities, require the use of IAM roles instead of long-term access keys.
•Store and use secrets securely: For workforce and machine identities that require secrets, such as passwords to third-party applications, store them with automatic rotation using the latest industry standards in a specialized service.
•Rely on a centralized identity provider: For workforce identities, rely on an identity provider that enables you to manage identities in a centralized place. This enables you to create, manage, and revoke access from a single location, making it easier to manage access. This reduces the requirement for multiple credentials and provides an opportunity to integrate with HR processes.
•Audit and rotate credentials periodically: When you cannot rely on temporary credentials and require long-term credentials, audit credentials to ensure that the defined controls (for example, MFA) are enforced, rotated regularly, and have appropriate access level.
•Leverage user groups and attributes: Place users with common security requirements in groups defined by your identity provider, and put mechanisms in place to ensure that user attributes that may be used for access control (e.g., department or location) are correct and updated. Use these groups and attributes, rather than individual users, to control access. This allows you to manage access centrally by changing a user's group membership or attributes once, rather than updating many individual policies when a user's access needs change.
SEC 3 How do you manage permissions for people and machines?
Manage permissions to control access to people and machine identities that require access to AWS and your workload. Permissions control who can access what, and under what conditions.
Best Practices:
•Define access requirements: Each component or resource of your workload needs to be accessed by administrators, end users, or other components. Have a clear definition of who or what should have access to each component, and choose the appropriate identity type and method of authentication and authorization.
•Grant least privilege access: Grant only the access that identities require by allowing access to specific actions on specific AWS resources under specific conditions. Rely on groups and identity attributes to dynamically set permissions at scale, rather than defining permissions for individual users. For example, you can allow a group of developers access to manage only resources for their project. This way, when a developer is removed from the group, access for the developer is revoked everywhere that group was used for access control, without requiring any changes to the access policies.
•Establish emergency access process: A process that allows emergency access to your workload in the unlikely event of an automated process or pipeline issue. This will help you rely on least privilege access, but ensure users can obtain the right level of access when they require it. For example, establish a process for administrators to verify and approve their request.
•Reduce permissions continuously: As teams and workloads determine what access they need, remove permissions they no longer use, and establish review processes to achieve least privilege permissions. Continuously monitor and reduce unused identities and permissions.
•Define permission guardrails for your organization: Establish common controls that restrict access to all identities in your organization. For example, you can restrict access to specific AWS Regions, or prevent your operators from deleting common resources, such as an
IAM role used for your central security team.
•Manage access based on life cycle: Integrate access controls with operator and application life cycle and your centralized federation provider. For example, remove a user's access when they leave the organization or change roles.
•Analyze public and cross-account access: Continuously monitor findings that highlight public and cross-account access. Reduce public access and cross-account access to only resources that require this type of access.
•Share resources securely: Govern the consumption of shared resources across accounts or within your AWS Organization. Monitor shared resources, and review shared resource access.
Detection
SEC 4 How do you detect and investigate security events?
Capture and analyze events from logs and metrics to gain visibility. Take action on security events and potential threats to help secure your workload.
Best Practices:
•Configure service and application logging: Configure logging throughout the workload, including application logs, resource logs, and AWS service logs. For example, ensure that AWS CloudTrail, Amazon CloudWatch Logs, Amazon GuardDuty, and AWS Security Hub are enabled for all accounts within your organization.
•Analyze logs, findings, and metrics centrally: All logs, metrics, and telemetry should be collected centrally and automatically analyzed to detect anomalies and indicators of unauthorized activity. A dashboard can provide you easy-to-access insight into real-time health. For example, ensure that Amazon GuardDuty and Security Hub logs are sent to a central location for alerting and analysis.
•Automate response to events: Using automation to investigate and remediate events reduces human effort and error, and enables you to scale investigation capabilities. Regular reviews will help you tune automation tools and continuously iterate. For example, automate responses to Amazon GuardDuty events by automating the first investigation step, then iterate to
gradually remove human effort.
•Implement actionable security events: Create alerts that are sent to, and can be actioned by, your team. Ensure that alerts include relevant information for the team to take action. For example, ensure that Amazon GuardDuty and AWS Security Hub alerts are sent to the team to action, or sent to response automation tooling with the team remaining informed by messaging from the automation framework.
Infrastructure Protection
SEC 5 How do you protect your network resources?
Any workload that has some form of network connectivity, whether it's the internet or a private network, requires multiple layers of defense to help protect from external and internal network-based threats.
Best Practices:
•Create network layers: Group components that share reachability requirements into layers. For example, a database cluster in a VPC with no need for internet access should be placed in subnets with no route to or from the internet. In a serverless workload operating without a VPC, similar layering and segmentation with microservices can achieve the same goal.
•Control traffic at all layers: Apply controls with a defense in depth approach for both inbound and outbound traffic. For example, for Amazon Virtual Private Cloud (VPC) this includes security groups, Network ACLs, and subnets. For AWS Lambda, consider running in your private VPC with VPC-based controls.
•Automate network protection: Automate protection mechanisms to provide a self-defending network based on threat intelligence and anomaly detection. For example, intrusion detection and prevention tools that can proactively adapt to current threats and reduce their impact.
•Implement inspection and protection: Inspect and filter your traffic at each layer. For example, use a web application firewall to help protect against inadvertent access at the application network layer. For Lambda functions, third-party tools can add application-layer firewalling to your runtime environment.
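The layered traffic controls described above can be illustrated with a small sketch. This is a conceptual model only, not how AWS evaluates VPC rules: the CIDR ranges, rule ordering, and port sets are assumptions chosen for the example, and a packet must pass both a stateless network-ACL layer and an implicit-deny security-group layer.

```python
from ipaddress import ip_address, ip_network

# Hypothetical two-layer evaluation mirroring VPC defense in depth:
# a stateless network ACL is checked first, then a security-group allow-list.
NACL_RULES = [  # evaluated in order; first match wins
    ("deny",  ip_network("203.0.113.0/24"), range(0, 65536)),   # known-bad range
    ("allow", ip_network("0.0.0.0/0"),      range(0, 65536)),
]
SG_ALLOWED_PORTS = {443}  # security groups are implicit-deny: only listed ports pass

def is_traffic_allowed(src_ip: str, dst_port: int) -> bool:
    """Packet must pass both layers to reach the instance."""
    for action, net, ports in NACL_RULES:
        if ip_address(src_ip) in net and dst_port in ports:
            if action == "deny":
                return False
            break  # first matching rule decides at the NACL layer
    else:
        return False  # no NACL match: default deny
    return dst_port in SG_ALLOWED_PORTS

print(is_traffic_allowed("198.51.100.7", 443))   # True
print(is_traffic_allowed("203.0.113.9", 443))    # False: blocked at the NACL layer
print(is_traffic_allowed("198.51.100.7", 22))    # False: blocked at the security group
```

The point of the sketch is that each layer can independently reject traffic, so a mistake in one layer does not expose the workload on its own.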
SEC 6 How do you protect your compute resources?
Compute resources in your workload require multiple layers of defense to help protect from external and internal threats. Compute resources include EC2 instances, containers, AWS Lambda functions, database services, IoT devices, and more.
Best Practices:
•Perform vulnerability management: Frequently scan and patch for vulnerabilities in your code, dependencies, and in your infrastructure to help protect against new threats.
•Reduce attack surface: Reduce your attack surface by hardening operating systems, and minimizing components, libraries, and externally consumable services in use.
•Implement managed services: Implement services that manage resources, such as Amazon RDS, AWS Lambda, and Amazon ECS, to reduce your security maintenance tasks as part of the shared responsibility model.
•Automate compute protection: Automate your protective compute mechanisms, including vulnerability management, reduction in attack surface, and management of resources.
•Enable people to perform actions at a distance: Removing the ability for interactive access reduces the risk of human error and the potential for manual configuration or management. For example, use a change management workflow to deploy EC2 instances using infrastructure as code, then manage EC2 instances using tools instead of allowing direct access or a bastion host.
•Validate software integrity: Implement mechanisms (for example, code signing) to validate that the software, code, and libraries used in the workload are from trusted sources and have not been tampered with.
Data Protection
SEC 7 How do you classify your data?
Classification provides a way to categorize data based on criticality and sensitivity, in order to help you determine appropriate protection and retention controls.
Best Practices:
•Identify the data within your workload: This includes the type and classification of data, the associated business processes, data owner, applicable legal and compliance requirements, where it's stored, and the resulting controls that are needed to be enforced. This may include classifications to indicate if the data is intended to be publicly available, if the data is internal use only (such as customer personally identifiable information (PII)), or if the data is for more restricted access (such as intellectual property, legally privileged, or marked sensitive), and more.
•Define data protection controls: Protect data according to its classification level. For example, secure data classified as public by using relevant recommendations, while protecting sensitive data with additional controls.
•Automate identification and classification: Automate identification and classification of data to reduce the risk of human error from manual interactions.
•Define data lifecycle management: Your defined lifecycle strategy should be based on sensitivity level, as well as legal and organization requirements. Aspects including the duration you retain data for, data destruction, data access management, data transformation, and data sharing should be considered.
SEC 8 How do you protect your data at rest?
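The SEC 7 classification practices above often reduce to a policy table that maps each classification level to its required controls. The following is a minimal sketch under assumed classification names and control flags; it is not an AWS API, only an illustration of enforcing controls per level and failing closed for unlabeled data.

```python
# Illustrative mapping from data classification to required controls.
# Classification names, control flags, and retention values are assumptions.
CONTROLS_BY_CLASSIFICATION = {
    "public":     {"encrypt_at_rest": False, "access": "anyone",       "retention_days": 365},
    "internal":   {"encrypt_at_rest": True,  "access": "employees",    "retention_days": 730},
    "restricted": {"encrypt_at_rest": True,  "access": "need-to-know", "retention_days": 2555},
}

def required_controls(classification: str) -> dict:
    """Fail closed: unknown or missing classifications get the most restrictive controls."""
    return CONTROLS_BY_CLASSIFICATION.get(
        classification, CONTROLS_BY_CLASSIFICATION["restricted"]
    )

assert required_controls("public")["encrypt_at_rest"] is False
assert required_controls("unlabeled")["access"] == "need-to-know"  # fail closed
```

Failing closed for unlabeled data supports the automation practice: anything the classifier has not yet labeled is treated as restricted rather than public.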
Protect your data at rest by implementing multiple controls, to reduce the risk of unauthorized access or mishandling.
Best Practices:
•Implement secure key management: Encryption keys must be stored securely, with strict access control, for example, by using a key management service such as AWS KMS. Consider using different keys, and access control to the keys, combined with AWS IAM and resource policies, to align with data classification levels and segregation requirements.
•Enforce encryption at rest: Enforce your encryption requirements based on the latest standards and recommendations to help protect your data at rest.
•Automate data at rest protection: Use automated tools to validate and enforce data at rest protection continuously, for example, verify that there are only encrypted storage resources.
•Enforce access control: Enforce access control with least privileges and mechanisms, including backups, isolation, and versioning, to help protect your data at rest. Prevent operators from granting public access to your data.
•Use mechanisms to keep people away from data: Keep all users away from directly accessing sensitive data and systems under normal operational circumstances. For example, provide a dashboard instead of direct access to a data store to run queries. Where CI/CD pipelines are not used, determine which controls and processes are required to adequately provide a normally disabled break-glass access mechanism.
SEC 9 How do you protect your data in transit?
Protect your data in transit by implementing multiple controls to reduce the risk of unauthorized access or loss.
Best Practices:
•Implement secure key and certificate management: Store encryption keys and certificates securely, and rotate them at appropriate time intervals while applying strict access control; for example, by using a certificate management service, such as AWS Certificate Manager (ACM).
•Enforce encryption in transit: Enforce your defined encryption requirements based on appropriate standards and recommendations to help you meet your organizational, legal, and compliance requirements.
•Automate detection of unintended data access: Use tools such as GuardDuty to automatically detect attempts to move data outside of defined boundaries based on data classification level, for example, to detect a trojan that is copying data to an unknown or untrusted network using the DNS protocol.
•Authenticate network communications: Verify the identity of communications by using protocols that support authentication, such as Transport Layer Security (TLS) or IPsec.
Incident Response
SEC 10 How do you anticipate, respond to, and recover from incidents?
Preparation is critical to timely and effective investigation, response to, and recovery from security incidents, to help minimize disruption to your organization.
Best Practices:
•Identify key personnel and external resources: Identify internal and external personnel, resources, and legal obligations that would help your organization respond to an incident.
•Develop incident management plans: Create plans to help you respond to, communicate during, and recover from an incident. For example, you can start an incident response plan with the most likely scenarios for your workload and organization. Include how you would communicate and escalate both internally and externally.
•Prepare forensic capabilities: Identify and prepare forensic investigation capabilities that are suitable, including external specialists, tools, and automation.
•Automate containment capability: Automate containment and recovery of an incident to reduce response times and organizational impact.
•Pre-provision access: Ensure that incident responders have the correct access pre-provisioned into AWS to reduce the time for investigation through to recovery.
•Pre-deploy tools: Ensure that security personnel have the right tools pre-deployed into AWS to reduce the time for investigation through to recovery.
•Run game days: Practice incident response game days (simulations) regularly, incorporate lessons learned into your incident management plans, and continuously improve.
Reliability
Foundations
REL 1 How do you manage service quotas and constraints?
For cloud-based workload architectures, there are service quotas (which are also referred to as service limits). These quotas exist to prevent accidentally provisioning more resources than you need, and to limit request rates on API operations so as to protect services from abuse. There are also resource constraints, for example, the rate that you can push bits down a fiber-optic cable, or the amount of storage on a physical disk.
Best Practices:
•Aware of service quotas and constraints: You are aware of your default quotas and quota increase requests for your workload architecture. You additionally know which resource constraints, such as disk or network, are potentially impactful.
•Manage service quotas across accounts and Regions: If you are using multiple AWS accounts or AWS Regions, ensure that you request the appropriate quotas in all environments in which your production workloads run.
•Accommodate fixed service quotas and constraints through architecture: Be aware of unchangeable service quotas and physical resources, and architect to prevent these from impacting reliability.
•Monitor and manage quotas: Evaluate your potential usage and increase your quotas appropriately, allowing for planned growth in usage.
•Automate quota management: Implement tools to alert you when thresholds are being approached. By using AWS Service Quotas APIs, you can automate quota increase requests.
•Ensure that a sufficient gap exists between the current quotas and the maximum usage to accommodate failover: When a resource fails, it may still be counted against quotas until it's successfully terminated. Ensure that your quotas cover the overlap of all failed resources with replacements before the failed resources are terminated. You should consider an Availability Zone failure when calculating this gap.
REL 2 How do you plan your network topology?
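The quota-gap guidance in REL 1 above comes down to simple arithmetic: during failover, failed resources and their replacements both count against the quota for a time. A minimal sketch, assuming an even spread across Availability Zones (the numbers and function name are illustrative, not AWS tooling):

```python
def quota_gap_is_sufficient(quota: int, in_use: int, az_count: int) -> bool:
    """Check that losing one Availability Zone still leaves quota headroom.

    Assumes instances are spread evenly across AZs, and that failed instances
    keep counting against the quota while their replacements launch, so peak
    usage is current usage plus one AZ's worth of replacements.
    """
    per_az = in_use / az_count
    peak_during_failover = in_use + per_az  # failed + replacement overlap
    return peak_during_failover <= quota

# 90 instances across 3 AZs: failover peaks at 120, so a quota of 100 is too low.
assert quota_gap_is_sufficient(quota=100, in_use=90, az_count=3) is False
assert quota_gap_is_sufficient(quota=120, in_use=90, az_count=3) is True
```

The same check could feed the automation practice: alert (or request an increase via the Service Quotas APIs) whenever the computed failover peak approaches the current quota.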
Workloads often exist in multiple environments. These include multiple cloud environments (both publicly accessible and private) and possibly your existing data center infrastructure. Plans must include network considerations, such as intra- and inter-system connectivity, public IP address management, private IP address management, and domain name resolution.
Best Practices:
•Use highly available network connectivity for your workload public endpoints: These endpoints, and the routing to them, must be highly available. To achieve this, use highly available DNS, content delivery networks (CDNs), API Gateway, load balancing, or reverse proxies.
•Provision redundant connectivity between private networks in the cloud and on-premises environments: Use multiple AWS Direct Connect (DX) connections or VPN tunnels between separately deployed private networks. Use multiple DX locations for high availability. If using multiple AWS Regions, ensure redundancy in at least two of them. You might want to evaluate AWS Marketplace appliances that terminate VPNs. If you use AWS Marketplace appliances, deploy redundant instances for high availability in different Availability Zones.
•Ensure IP subnet allocation accounts for expansion and availability: Amazon VPC IP address ranges must be large enough to accommodate workload requirements, including factoring in future expansion and allocation of IP addresses to subnets across Availability Zones. This includes load balancers, EC2 instances, and container-based applications.
•Prefer hub-and-spoke topologies over many-to-many mesh: If more than two network address spaces (for example, VPCs and on-premises networks) are connected via VPC peering, AWS Direct Connect, or VPN, then use a hub-and-spoke model, like that provided by AWS Transit Gateway.
•Enforce non-overlapping private IP address ranges in all private address spaces where they are connected: The IP address ranges of each of your VPCs must not overlap when peered or connected via VPN. You must similarly avoid IP
address conflicts between a VPC and on-premises environments, or with other cloud providers that you use. You must also have a way to allocate private IP address ranges when needed.
Workload Architecture
REL 3 How do you design your workload service architecture?
Build highly scalable and reliable workloads using a service-oriented architecture (SOA) or a microservices architecture. Service-oriented architecture (SOA) is the practice of making software components reusable via service interfaces. Microservices architecture goes further to make components smaller and simpler.
Best Practices:
•Choose how to segment your workload: Monolithic architecture should be avoided. Instead, you should choose between SOA and microservices. When making each choice, balance the benefits against the complexities: what is right for a new product racing to first launch is different than what a workload built to scale from the start needs. The benefits of using smaller segments include greater agility, organizational flexibility, and scalability. Complexities include possible increased latency, more complex debugging, and increased operational burden.
•Build services focused on specific business domains and functionality: SOA builds services with well-delineated functions defined by business needs. Microservices use domain models and bounded context to limit this further, so that each service does just one thing. Focusing on specific functionality enables you to differentiate the reliability requirements of different services, and target investments more specifically. A concise business problem and having a small team associated with each service also enables easier organizational scaling.
•Provide service contracts per API: Service contracts are documented agreements between teams on service integration, and include a machine-readable API definition, rate limits, and performance expectations. A versioning strategy allows clients to continue using the existing API and
migrate their applications to the newer API when they are ready. Deployment can happen anytime, as long as the contract is not violated. The service provider team can use the technology stack of their choice to satisfy the API contract. Similarly, the service consumer can use their own technology.
REL 4 How do you design interactions in a distributed system to prevent failures?
Distributed systems rely on communications networks to interconnect components, such as servers or services. Your workload must operate reliably despite data loss or latency in these networks. Components of the distributed system must operate in a way that does not negatively impact other components or the workload. These best practices prevent failures and improve mean time between failures (MTBF).
Best Practices:
•Identify which kind of distributed system is required: Hard real-time distributed systems require responses to be given synchronously and rapidly, while soft real-time systems have a more generous time window of minutes or more for response. Offline systems handle responses through batch or asynchronous processing. Hard real-time distributed systems have the most stringent reliability requirements.
•Implement loosely coupled dependencies: Dependencies such as queuing systems, streaming systems, workflows, and load balancers are loosely coupled. Loose coupling helps isolate behavior of a component from other components that depend on it, increasing resiliency and agility.
•Make all responses idempotent: An idempotent service promises that each request is completed exactly once, such that making multiple identical requests has the same effect as making a single request. An idempotent service makes it easier for a client to implement retries without fear that a request will be erroneously processed multiple times. To do this, clients can issue API requests with an idempotency token; the same token is used whenever the request is repeated. An idempotent service API
uses the token to return a response identical to the response that was returned the first time that the request was completed.
•Do constant work: Systems can fail when there are large, rapid changes in load. For example, a health check system that monitors the health of thousands of servers should send the same size payload (a full snapshot of the current state) each time. Whether no servers are failing, or all of them, the health check system is doing constant work with no large, rapid changes.
REL 5 How do you design interactions in a distributed system to mitigate or withstand failures?
Distributed systems rely on communications networks to interconnect components (such as servers or services). Your workload must operate reliably despite data loss or latency over these networks. Components of the distributed system must operate in a way that does not negatively impact other components or the workload. These best practices enable workloads to withstand stresses or failures, more quickly recover from them, and mitigate the impact of such impairments. The result is improved mean time to recovery (MTTR).
Best Practices:
•Implement graceful degradation to transform applicable hard dependencies into soft dependencies: When a component's dependencies are unhealthy, the component itself can still function, although in a degraded manner. For example, when a dependency call fails, fail over to a predetermined static response.
•Throttle requests: This is a mitigation pattern to respond to an unexpected increase in demand. Some requests are honored, but those over a defined limit are rejected and return a message indicating they have been throttled. The expectation on clients is that they will back off and abandon the request, or try again at a slower rate.
•Control and limit retry calls: Use exponential backoff to retry after progressively longer intervals. Introduce jitter to randomize those retry intervals, and limit the maximum number of retries.
•Fail fast and limit queues: If the workload is unable to respond successfully to a request, then fail fast. This allows the releasing of resources associated with a request, and permits the service to recover if it's running out of resources. If the workload is able to respond successfully, but the rate of requests is too high, then use a queue to buffer requests instead. However, do not allow long queues that can result in serving stale requests that the client has already given up on.
•Set client timeouts: Set timeouts appropriately, verify them systematically, and do not rely on default values, as they are generally set too high.
•Make services stateless where possible: Services should either not require state, or should offload state such that, between different client requests, there is no dependence on locally stored data, on disk or in memory. This enables servers to be replaced at will without causing an availability impact. Amazon ElastiCache or Amazon DynamoDB are good destinations for offloaded state.
•Implement emergency levers: These are rapid processes that may mitigate availability impact on your workload. They can be operated in the absence of a root cause. An ideal emergency lever reduces the cognitive burden on the resolvers to zero by providing fully deterministic activation and deactivation criteria. Example levers include blocking all robot traffic or serving a static response. Levers are often manual, but they can also be automated.
Change Management
REL 6 How do you monitor workload resources?
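The REL 5 retry guidance above (exponential backoff, jitter, and a cap on retries) can be sketched in a few lines. The function names, parameters, and the choice of `ConnectionError` as the "transient" failure are illustrative assumptions, not an AWS SDK API:

```python
import random

def backoff_delays(base: float = 0.1, cap: float = 5.0, max_retries: int = 5):
    """Yield capped exponential backoff delays with full jitter.

    Each attempt waits a random time between 0 and min(cap, base * 2**attempt),
    and the number of attempts is bounded by max_retries.
    """
    for attempt in range(max_retries):
        yield random.uniform(0, min(cap, base * 2 ** attempt))

def call_with_retries(operation, max_retries: int = 5):
    """Invoke operation(), retrying transient failures up to max_retries times."""
    last_error = None
    for delay in backoff_delays(max_retries=max_retries):
        try:
            return operation()
        except ConnectionError as err:  # treated as transient in this sketch
            last_error = err
            # time.sleep(delay) would go here in real code
    raise last_error

# A flaky operation that succeeds on the third attempt:
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(call_with_retries(flaky))  # ok
```

Jitter spreads retries from many clients over time so they do not retry in synchronized waves, and the retry cap keeps a persistently failing dependency from consuming resources indefinitely.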
Logs and metrics are powerful tools to gain insight into the health of your workload. You can configure your workload to monitor logs and metrics and send notifications when thresholds are crossed or significant events occur. Monitoring enables your workload to recognize when low-performance thresholds are crossed or failures occur, so it can recover automatically in response.
Best Practices:
•Monitor all components for the workload (Generation): Monitor the components of the workload with Amazon CloudWatch or third-party tools. Monitor AWS services with Personal Health Dashboard.
•Define and calculate metrics (Aggregation): Store log data, and apply filters where necessary to calculate metrics, such as counts of a specific log event, or latency calculated from log event timestamps.
•Send notifications (Real-time processing and alarming): Organizations that need to know receive notifications when significant events occur.
•Automate responses (Real-time processing and alarming): Use automation to take action when an event is detected, for example, to replace failed components.
•Storage and Analytics: Collect log files and metrics histories, and analyze these for broader trends and workload insights.
•Conduct reviews regularly: Frequently review how workload monitoring is implemented, and update it based on significant events and changes.
•Monitor end-to-end tracing of requests through your system: Use AWS X-Ray or third-party tools so that developers can more easily analyze and debug distributed systems to understand how their applications and their underlying services are performing.
REL 7 How do you design your workload to adapt to changes in demand?
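The REL 6 aggregation-and-alarming flow above can be sketched end to end: derive a latency metric from log timestamps, then alarm only after several consecutive threshold breaches. The event shape, threshold, and evaluation-period count are illustrative assumptions, not the CloudWatch API:

```python
# Minimal sketch: metric derived from logs, then threshold alarming.
def latencies_ms(log_events):
    """Each event records (start, end) timestamps in milliseconds."""
    return [end - start for start, end in log_events]

def should_alarm(datapoints, threshold_ms=200, periods=3):
    """Fire only after `periods` consecutive breaches, to avoid flapping."""
    streak = 0
    for value in datapoints:
        streak = streak + 1 if value > threshold_ms else 0
        if streak >= periods:
            return True
    return False

events = [(0, 120), (10, 260), (20, 290), (30, 280), (40, 150)]
print(should_alarm(latencies_ms(events)))  # True: three consecutive breaches
```

Requiring consecutive breaches is one simple way to trade alert latency for fewer false alarms; a single noisy datapoint does not page anyone.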
A scalable workload provides elasticity to add or remove resources automatically, so that they closely match the current demand at any given point in time.

Best Practices:
• Use automation when obtaining or scaling resources: When replacing impaired resources or scaling your workload, automate the process by using managed AWS services, such as Amazon S3 and AWS Auto Scaling. You can also use third-party tools and AWS SDKs to automate scaling.
• Obtain resources upon detection of impairment to a workload: Scale resources reactively when necessary, if availability is impacted, to restore workload availability.
• Obtain resources upon detection that more resources are needed for a workload: Scale resources proactively to meet demand and avoid availability impact.
• Load test your workload: Adopt a load testing methodology to measure if scaling activity meets workload requirements.

REL 8 How do you implement change?

Controlled changes are necessary to deploy new functionality and to ensure that the workloads and the operating environment are running known software and can be patched or replaced in a predictable manner. If these changes are uncontrolled, it is difficult to predict their effect or to address issues that arise because of them.

Best Practices:
• Use runbooks for standard activities such as deployment: Runbooks are the predefined steps used to achieve specific outcomes. Use runbooks to perform standard activities, whether done manually or automatically. Examples include deploying a workload, patching it, or making DNS modifications.
• Integrate functional testing as part of your deployment: Functional tests are run as part of automated deployment. If success criteria are not met, the pipeline is halted or rolled back.
• Integrate resiliency testing as part of your deployment: Resiliency tests (as part of chaos engineering) are run as part of the automated deployment pipeline in a pre-production environment.
• Deploy using immutable infrastructure: This is a
model that mandates that no updates, security patches, or configuration changes happen in-place on production workloads. When a change is needed, the architecture is built onto new infrastructure and deployed into production.
• Deploy changes with automation: Deployments and patching are automated to eliminate negative impact.

Failure Management

REL 9 How do you back up data?

Back up data, applications, and configuration to meet your requirements for recovery time objectives (RTO) and recovery point objectives (RPO).

Best Practices:
• Identify and back up all data that needs to be backed up, or reproduce the data from sources: Amazon S3 can be used as a backup destination for multiple data sources. AWS services such as Amazon EBS, Amazon RDS, and Amazon DynamoDB have built-in capabilities to create backups. Third-party backup software can also be used. Alternatively, if the data can be reproduced from other sources to meet RPO, you might not require a backup.
• Secure and encrypt backups: Control access using authentication and authorization, such as AWS IAM, and detect data integrity compromise by using encryption.
• Perform data backup automatically: Configure backups to be taken automatically, based on a periodic schedule or by changes in the dataset. RDS instances, EBS volumes, DynamoDB tables, and S3 objects can all be configured for automatic backup. AWS Marketplace solutions or third-party solutions can also be used.
• Perform periodic recovery of the data to verify backup integrity and processes: Validate that your backup process implementation meets your recovery time objectives (RTO) and recovery point objectives (RPO) by performing a recovery test.

REL 10 How do you use fault isolation to protect your workload?
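A recovery test like the one recommended in REL 9 above should confirm measured values against objectives. A minimal sketch of the RPO portion of that check — the function and field names are hypothetical, and in practice the timestamps would come from your backup service's API:

```python
def backup_meets_rpo(backup_timestamps, now, rpo_seconds):
    """True if the most recent backup is within the RPO window.
    All times are epoch seconds."""
    if not backup_timestamps:
        return False  # no backup at all can never satisfy an RPO
    age_of_newest = now - max(backup_timestamps)
    return age_of_newest <= rpo_seconds
```

A periodic job running this check (and alarming on `False`) turns the RPO from a stated objective into a continuously verified one.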
Fault isolated boundaries limit the effect of a failure within a workload to a limited number of components. Components outside of the boundary are unaffected by the failure. Using multiple fault isolated boundaries, you can limit the impact on your workload.

Best Practices:
• Deploy the workload to multiple locations: Distribute workload data and resources across multiple Availability Zones or, where necessary, across AWS Regions. These locations can be as diverse as required.
• Automate recovery for components constrained to a single location: If components of the workload can only run in a single Availability Zone or on-premises data center, you must implement the capability to do a complete rebuild of the workload within your defined recovery objectives.
• Use bulkhead architectures: Like the bulkheads on a ship, this pattern ensures that a failure is contained to a small subset of requests/users, so the number of impaired requests is limited and most can continue without error. Bulkheads for data are usually called partitions or shards, while bulkheads for services are known as cells.

REL 11 How do you design your workload to withstand component failures?
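The cell idea in the bulkhead bullet above rests on deterministic routing: a given customer always lands in the same cell, so a cell failure only impacts that subset of customers. A sketch — the hashing choice and function name are illustrative assumptions, not a prescribed AWS mechanism:

```python
import hashlib


def route_to_cell(customer_id: str, num_cells: int) -> int:
    """Deterministically map a customer to one cell (bulkhead), so a
    single cell failure impacts only the customers routed to it."""
    digest = hashlib.sha256(customer_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_cells
```

Note that the modulo mapping reshuffles customers if `num_cells` changes; real cell-based architectures typically keep cell count fixed or use a stable assignment table for that reason.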
Workloads with a requirement for high availability and low mean time to recovery (MTTR) must be architected for resiliency.

Best Practices:
• Monitor all components of the workload to detect failures: Continuously monitor the health of your workload so that you and your automated systems are aware of degradation or complete failure as soon as it occurs. Monitor for key performance indicators (KPIs) based on business value.
• Fail over to healthy resources: Ensure that if a resource failure occurs, healthy resources can continue to serve requests. For location failures (such as Availability Zone or AWS Region), ensure that you have systems in place to fail over to healthy resources in unimpaired locations.
• Automate healing on all layers: Upon detection of a failure, use automated capabilities to perform actions to remediate.
• Use static stability to prevent bimodal behavior: Bimodal behavior is when your workload exhibits different behavior under normal and failure modes, for example, relying on launching new instances if an Availability Zone fails. You should instead build workloads that are statically stable and operate in only one mode. In this case, provision enough instances in each Availability Zone to handle the workload load if one AZ were removed, and then use Elastic Load Balancing or Amazon Route 53 health checks to shift load away from the impaired instances.
• Send notifications when events impact availability: Notifications are sent upon the detection of significant events, even if the issue caused by the event was automatically resolved.

REL 12 How do you test reliability?
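The static-stability bullet above reduces to simple capacity arithmetic: pre-provision each Availability Zone so that losing any one AZ still leaves enough capacity, with no dependence on launching instances during the failure. A sketch, assuming instances and AZ counts as inputs:

```python
import math


def statically_stable_per_az(peak_instances, num_azs):
    """Instances to run in each AZ so that losing any single AZ
    still leaves at least peak_instances running overall."""
    if num_azs < 2:
        raise ValueError("static stability across AZs needs at least two AZs")
    return math.ceil(peak_instances / (num_azs - 1))
```

For example, with 3 AZs and a peak need of 30 instances, each AZ runs 15 (45 in total); if one AZ fails, the remaining 30 still meet peak demand without any scaling action. The cost of the extra standing capacity is the price of operating in only one mode.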
After you have designed your workload to be resilient to the stresses of production, testing is the only way to ensure that it will operate as designed and deliver the resiliency you expect.

Best Practices:
• Use playbooks to investigate failures: Enable consistent and prompt responses to failure scenarios that are not well understood by documenting the investigation process in playbooks. Playbooks are the predefined steps performed to identify the factors contributing to a failure scenario. The results from any process step are used to determine the next steps to take, until the issue is identified or escalated.
• Perform post-incident analysis: Review customer-impacting events and identify the contributing factors and preventative action items. Use this information to develop mitigations to limit or prevent recurrence. Develop procedures for prompt and effective responses. Communicate contributing factors and corrective actions as appropriate, tailored to target audiences. Have a method to communicate these causes to others as needed.
• Test functional requirements: These include unit tests and integration tests that validate required functionality.
• Test scaling and performance requirements: This includes load testing to validate that the workload meets scaling and performance requirements.
• Test resiliency using chaos engineering: Run tests that inject failures regularly into pre-production and production environments. Hypothesize how your workload will react to the failure, then compare your hypothesis to the testing results, and iterate if they do not match. Ensure that production testing does not impact users.
• Conduct game days regularly: Use game days to regularly exercise your failure procedures as close to production as possible (including in production environments) with the people who will be involved in actual failure scenarios. Game days enforce measures to ensure that production testing does not impact users.

REL 13 How do you
plan for disaster recovery (DR)?

Having backups and redundant workload components in place is the start of your DR strategy. RTO and RPO are your objectives for restoration of availability. Set these based on business needs. Implement a strategy to meet these objectives, considering the locations and function of workload resources and data.

Best Practices:
• Define recovery objectives for downtime and data loss: The workload has a recovery time objective (RTO) and recovery point objective (RPO).
• Use defined recovery strategies to meet the recovery objectives: A disaster recovery (DR) strategy has been defined to meet objectives.
• Test disaster recovery implementation to validate the implementation: Regularly test failover to DR to ensure that RTO and RPO are met.
• Manage configuration drift at the DR site or region: Ensure that the infrastructure, data, and configuration are as needed at the DR site or region. For example, check that AMIs and service quotas are up to date.
• Automate recovery: Use AWS or third-party tools to automate system recovery and route traffic to the DR site or region.

Performance Efficiency

Selection

PERF 1 How do you select the best performing architecture?
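The DR test recommended above yields measured values that can be compared directly against the stated objectives. A minimal sketch of that comparison, with hypothetical names and timestamps in epoch seconds:

```python
def measure_recovery(last_replicated_write, failure_time, service_restored):
    """Measured RPO = window of data loss; measured RTO = downtime.
    Both in seconds."""
    return {
        "rpo": failure_time - last_replicated_write,
        "rto": service_restored - failure_time,
    }


def dr_test_passed(measured, rpo_objective, rto_objective):
    """A DR test passes only if both measured values meet the objectives."""
    return measured["rpo"] <= rpo_objective and measured["rto"] <= rto_objective
```

Recording these measurements from every failover exercise also surfaces drift: a DR site that slowly falls behind on AMIs or quotas shows up as a growing measured RTO.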
Often, multiple approaches are required for optimal performance across a workload. Well-architected systems use multiple solutions and features to improve performance.

Best Practices:
• Understand the available services and resources: Learn about and understand the wide range of services and resources available in the cloud. Identify the relevant services and configuration options for your workload, and understand how to achieve optimal performance.
• Define a process for architectural choices: Use internal experience and knowledge of the cloud, or external resources such as published use cases, relevant documentation, or whitepapers, to define a process to choose resources and services. You should define a process that encourages experimentation and benchmarking with the services that could be used in your workload.
• Factor cost requirements into decisions: Workloads often have cost requirements for operation. Use internal cost controls to select resource types and sizes based on predicted resource need.
• Use policies or reference architectures: Maximize performance and efficiency by evaluating internal policies and existing reference architectures, and using your analysis to select services and configurations for your workload.
• Use guidance from your cloud provider or an appropriate partner: Use cloud company resources, such as solutions architects, professional services, or an appropriate partner, to guide your decisions. These resources can help review and improve your architecture for optimal performance.
• Benchmark existing workloads: Benchmark the performance of an existing workload to understand how it performs on the cloud. Use the data collected from benchmarks to drive architectural decisions.
• Load test your workload: Deploy your latest workload architecture on the cloud using different resource types and sizes. Monitor the deployment to capture performance metrics that identify bottlenecks or excess capacity. Use this performance information to design or improve your
architecture and resource selection.

PERF 2 How do you select your compute solution?

The optimal compute solution for a workload varies based on application design, usage patterns, and configuration settings. Architectures can use different compute solutions for various components and enable different features to improve performance. Selecting the wrong compute solution for an architecture can lead to lower performance efficiency.

Best Practices:
• Evaluate the available compute options: Understand the performance characteristics of the compute-related options available to you. Know how instances, containers, and functions work, and what advantages or disadvantages they bring to your workload.
• Understand the available compute configuration options: Understand how various options complement your workload, and which configuration options are best for your system. Examples of these options include instance family, sizes, features (GPU, I/O), function sizes, container instances, and single versus multi-tenancy.
• Collect compute-related metrics: One of the best ways to understand how your compute systems are performing is to record and track the true utilization of various resources. This data can be used to make more accurate determinations about resource requirements.
• Determine the required configuration by right-sizing: Analyze the various performance characteristics of your workload and how these characteristics relate to memory, network, and CPU usage. Use this data to choose resources that best match your workload's profile. For example, a memory-intensive workload, such as a database, could be served best by the r family of instances. However, a bursting workload can benefit more from an elastic container system.
• Use the available elasticity of resources: The cloud provides the flexibility to expand or reduce your resources dynamically through a variety of mechanisms, to meet changes in demand. Combined with compute-related metrics, a workload can
automatically respond to changes and utilize the optimal set of resources to achieve its goal.
• Re-evaluate compute needs based on metrics: Use system-level metrics to identify the behavior and requirements of your workload over time. Evaluate your workload's needs by comparing the available resources with these requirements, and make changes to your compute environment to best match your workload's profile. For example, over time a system might be observed to be more memory-intensive than initially thought, so moving to a different instance family or size could improve both performance and efficiency.

PERF 3 How do you select your storage solution?

The optimal storage solution for a system varies based on the kind of access method (block, file, or object), patterns of access (random or sequential), required throughput, frequency of access (online, offline, archival), frequency of update (WORM, dynamic), and availability and durability constraints. Well-architected systems use multiple storage solutions and enable different features to improve performance and use resources efficiently.

Best Practices:
• Understand storage characteristics and requirements: Understand the different characteristics (for example, shareable, file size, cache size, access patterns, latency, throughput, and persistence of data) that are required to select the services that best fit your workload, such as object storage, block storage, file storage, or instance storage.
• Evaluate available configuration options: Evaluate the various characteristics and configuration options and how they relate to storage. Understand where and how to use provisioned IOPS, SSDs, magnetic storage, object storage, archival storage, or ephemeral storage to optimize storage space and performance for your workload.
• Make decisions based on access patterns and metrics: Choose storage systems based on your workload's access patterns, and configure them by determining how the workload accesses data. Increase
storage efficiency by choosing object storage over block storage. Configure the storage options you choose to match your data access patterns.

PERF 4 How do you select your database solution?

The optimal database solution for a system varies based on requirements for availability, consistency, partition tolerance, latency, durability, scalability, and query capability. Many systems use different database solutions for various subsystems and enable different features to improve performance. Selecting the wrong database solution and features for a system can lead to lower performance efficiency.

Best Practices:
• Understand data characteristics: Understand the different characteristics of data in your workload. Determine if the workload requires transactions, how it interacts with data, and what its performance demands are. Use this data to select the best performing database approach for your workload (for example, relational databases, NoSQL key-value, document, wide column, graph, time series, or in-memory storage).
• Evaluate the available options: Evaluate the services and storage options that are available as part of the selection process for your workload's storage mechanisms. Understand how and when to use a given service or system for data storage. Learn about available configuration options that can optimize database performance or efficiency, such as provisioned IOPS, memory and compute resources, and caching.
• Collect and record database performance metrics: Use tools, libraries, and systems that record performance measurements related to database performance. For example, measure transactions per second, slow queries, or system latency introduced when accessing the database. Use this data to understand the performance of your database systems.
• Choose data storage based on access patterns: Use the access patterns of the workload to decide which services and technologies to use. For example, utilize a relational database for workloads that require
transactions, or a key-value store that provides higher throughput but is eventually consistent, where applicable.
• Optimize data storage based on access patterns and metrics: Use performance characteristics and access patterns that optimize how data is stored or queried to achieve the best possible performance. Measure how optimizations such as indexing, key distribution, data warehouse design, or caching strategies impact system performance or overall efficiency.

PERF 5 How do you configure your networking solution?

The optimal network solution for a workload varies based on latency, throughput requirements, jitter, and bandwidth. Physical constraints, such as user or on-premises resources, determine location options. These constraints can be offset with edge locations or resource placement.

Best Practices:
• Understand how networking impacts performance: Analyze and understand how network-related decisions impact workload performance. For example, network latency often impacts the user experience, and using the wrong protocols can starve network capacity through excessive overhead.
• Evaluate available networking features: Evaluate networking features in the cloud that may increase performance. Measure the impact of these features through testing, metrics, and analysis. For example, take advantage of network-level features that are available to reduce latency, network distance, or jitter.
• Choose appropriately sized dedicated connectivity or VPN for hybrid workloads: When there is a requirement for on-premises communication, ensure that you have adequate bandwidth for workload performance. Based on bandwidth requirements, a single dedicated connection or a single VPN might not be enough, and you must enable traffic load balancing across multiple connections.
• Leverage load balancing and encryption offloading: Distribute traffic across multiple resources or services to allow your workload to take advantage of the elasticity that the cloud provides. You can
also use load balancing for offloading encryption termination, to improve performance and to manage and route traffic effectively.
• Choose network protocols to improve performance: Make decisions about protocols for communication between systems and networks based on the impact to the workload's performance.
• Choose your workload's location based on network requirements: Use the cloud location options available to reduce network latency or improve throughput. Utilize AWS Regions, Availability Zones, placement groups, and edge locations such as Outposts, Local Regions, and Wavelength, to reduce network latency or improve throughput.
• Optimize network configuration based on metrics: Use collected and analyzed data to make informed decisions about optimizing your network configuration. Measure the impact of those changes, and use the impact measurements to make future decisions.

Review

PERF 6 How do you evolve your workload to take advantage of new releases?

When architecting workloads, there are finite options that you can choose from. However, over time, new technologies and approaches become available that could improve the performance of your workload.

Best Practices:
• Stay up-to-date on new resources and services: Evaluate ways to improve performance as new services, design patterns, and product offerings become available. Determine which of these could improve performance or increase the efficiency of the workload through ad hoc evaluation, internal discussion, or external analysis.
• Define a process to improve workload performance: Define a process to evaluate new services, design patterns, resource types, and configurations as they become available. For example, run existing performance tests on new instance offerings to determine their potential to improve your workload.
• Evolve workload performance over time: As an organization, use the information gathered through the evaluation process to actively drive adoption of new services or resources
when they become available.

Monitoring

PERF 7 How do you monitor your resources to ensure they are performing?

System performance can degrade over time. Monitor system performance to identify degradation and remediate internal or external factors, such as the operating system or application load.

Best Practices:
• Record performance-related metrics: Use a monitoring and observability service to record performance-related metrics. For example, record database transactions, slow queries, I/O latency, HTTP request throughput, service latency, or other key data.
• Analyze metrics when events or incidents occur: In response to (or during) an event or incident, use monitoring dashboards or reports to understand and diagnose the impact. These views provide insight into which portions of the workload are not performing as expected.
• Establish Key Performance Indicators (KPIs) to measure workload performance: Identify the KPIs that indicate whether the workload is performing as intended. For example, an API-based workload might use overall response latency as an indication of overall performance, and an e-commerce site might choose to use the number of purchases as its KPI.
• Use monitoring to generate alarm-based notifications: Using the performance-related key performance indicators (KPIs) that you defined, use a monitoring system that generates alarms automatically when these measurements are outside expected boundaries.
• Review metrics at regular intervals: As routine maintenance, or in response to events or incidents, review which metrics are collected. Use these reviews to identify which metrics were key in addressing issues, and which additional metrics, if they were being tracked, would help to identify, address, or prevent issues.
• Monitor and alarm proactively: Use key performance indicators (KPIs), combined with monitoring and alerting systems, to proactively address performance-related issues. Use alarms to trigger automated actions to remediate issues
where possible. Escalate the alarm to those able to respond if automated response is not possible. For example, you may have a system that can predict expected key performance indicator (KPI) values and alarm when they breach certain thresholds, or a tool that can automatically halt or roll back deployments if KPIs are outside of expected values.

Tradeoffs

PERF 8 How do you use tradeoffs to improve performance?

When architecting solutions, determining tradeoffs enables you to select an optimal approach. Often, you can improve performance by trading consistency, durability, and space for time and latency.

Best Practices:
• Understand the areas where performance is most critical: Understand and identify areas where increasing the performance of your workload will have a positive impact on efficiency or customer experience. For example, a website that has a large amount of customer interaction can benefit from using edge services to move content delivery closer to customers.
• Learn about design patterns and services: Research and understand the various design patterns and services that help improve workload performance. As part of the analysis, identify what you could trade to achieve higher performance. For example, using a cache service can help to reduce the load placed on database systems; however, it requires some engineering to implement safe caching, or the possible introduction of eventual consistency in some areas.
• Identify how tradeoffs impact customers and efficiency: When evaluating performance-related improvements, determine which choices will impact your customers and workload efficiency. For example, if using a key-value data store increases system performance, it is important to evaluate how its eventually consistent nature will impact customers.
• Measure the impact of performance improvements: As changes are made to improve performance, evaluate the collected metrics and data. Use this information to determine the impact that the
performance improvement had on the workload, the workload's components, and your customers. This measurement helps you understand the improvements that result from the tradeoff, and helps you determine if any negative side effects were introduced.
• Use various performance-related strategies: Where applicable, utilize multiple strategies to improve performance. For example, use strategies like caching data to prevent excessive network or database calls, using read replicas for database engines to improve read rates, sharding or compressing data where possible to reduce data volumes, and buffering and streaming of results as they are available to avoid blocking.

Cost Optimization

Practice Cloud Financial Management

COST 1 How do you implement cloud financial management?

Implementing Cloud Financial Management enables organizations to realize business value and financial success as they optimize their cost and usage and scale on AWS.

Best Practices:
• Establish a cost optimization function: Create a team that is responsible for establishing and maintaining cost awareness across your organization. The team requires people from finance, technology, and business roles across the organization.
• Establish a partnership between finance and technology: Involve finance and technology teams in cost and usage discussions at all stages of your cloud journey. Teams regularly meet and discuss topics such as organizational goals and targets, the current state of cost and usage, and financial and accounting practices.
• Establish cloud budgets and forecasts: Adjust existing organizational budgeting and forecasting processes to be compatible with the highly variable nature of cloud costs and usage. Processes must be dynamic, using trend-based or business driver-based algorithms, or a combination of both.
• Implement cost awareness in your organizational processes: Implement cost awareness into new or existing processes that impact usage, and leverage existing processes for cost
awareness. Implement cost awareness into employee training.
• Report and notify on cost optimization: Configure AWS Budgets to provide notifications on cost and usage against targets. Have regular meetings to analyze this workload's cost efficiency and to promote a cost-aware culture.
• Monitor cost proactively: Implement tooling and dashboards to monitor cost proactively for the workload. Do not just look at costs and categories when you receive notifications. This helps to identify positive trends and promote them throughout your organization.
• Keep up to date with new service releases: Consult regularly with experts or APN Partners to consider which services and features provide lower cost. Review AWS blogs and other information sources.

Expenditure and usage awareness

COST 2 How do you govern usage?

Establish policies and mechanisms to ensure that appropriate costs are incurred while objectives are achieved. By employing a checks-and-balances approach, you can innovate without overspending.

Best Practices:
• Develop policies based on your organization requirements: Develop policies that define how resources are managed by your organization. Policies should cover the cost aspects of resources and workloads, including creation, modification, and decommission over the resource lifetime.
• Implement goals and targets: Implement both cost and usage goals for your workload. Goals provide direction to your organization on cost and usage, and targets provide measurable outcomes for your workloads.
• Implement an account structure: Implement a structure of accounts that maps to your organization. This assists in allocating and managing costs throughout your organization.
• Implement groups and roles: Implement groups and roles that align to your policies, and control who can create, modify, or decommission instances and resources in each group. For example, implement development, test, and production groups. This applies to AWS services and third-party solutions.
• Implement cost controls: Implement controls based on organization policies and defined groups and roles. These ensure that costs are only incurred as defined by organization requirements: for example, control access to regions or resource types with IAM policies.
• Track project lifecycle: Track, measure, and audit the lifecycle of projects, teams, and environments to avoid using and paying for unnecessary resources.

COST 3 How do you monitor usage and cost?

Establish policies and procedures to monitor and appropriately allocate your costs. This allows you to measure and improve the cost efficiency of this workload.

Best Practices:
• Configure detailed information sources: Configure the AWS Cost and Usage Report and Cost Explorer hourly granularity to provide detailed cost and usage information. Configure your workload to have log entries for every delivered business outcome.
• Identify cost attribution categories: Identify organization categories that could be used to allocate cost within your organization.
• Establish organization metrics: Establish the organization metrics that are required for this workload. Example metrics of a workload are customer reports produced, or web pages served to customers.
• Configure billing and cost management tools: Configure AWS Cost Explorer and AWS Budgets in line with your organization policies.
• Add organization information to cost and usage: Define a tagging schema based on organization and workload attributes, and cost allocation categories. Implement tagging across all resources. Use Cost Categories to group costs and usage according to organization attributes.
• Allocate costs based on workload metrics: Allocate the workload's costs by metrics or business outcomes to measure workload cost efficiency. Implement a process to analyze the AWS Cost and Usage Report with Amazon Athena, which can provide insight and charge-back capability.

COST 4 How do you decommission resources?
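The "control access to regions" example in the cost-controls bullet above is commonly enforced with an IAM or Service Control Policy condition on the `aws:RequestedRegion` global condition key. A hedged sketch — the approved region list and statement ID are illustrative, and the `NotAction` list exempting global services would need to match your own usage:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideApprovedRegions",
      "Effect": "Deny",
      "NotAction": ["iam:*", "sts:*", "support:*"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
        }
      }
    }
  ]
}
```

Denying everything outside approved regions, rather than allowing within them, means newly launched services are covered by default; the `NotAction` carve-out keeps global services (which resolve to a single region) usable.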
Implement change control and resource management from project inception to end-of-life. This ensures you shut down or terminate unused resources to reduce waste.

Best Practices:
• Track resources over their lifetime: Define and implement a method to track resources and their associations with systems over their lifetime. You can use tagging to identify the workload or function of the resource.
• Implement a decommissioning process: Implement a process to identify and decommission orphaned resources.
• Decommission resources: Decommission resources triggered by events such as periodic audits, or changes in usage. Decommissioning is typically performed periodically, and is manual or automated.
• Decommission resources automatically: Design your workload to gracefully handle resource termination as you identify and decommission non-critical resources, resources that are not required, or resources with low utilization.

Cost-effective resources

COST 5 How do you evaluate cost when you select services?
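The decommissioning triggers above amount to a filter over inventory and utilization data. A pure-Python sketch — the record shape, field names, and thresholds are hypothetical; in practice the data would come from resource tags and CloudWatch metrics:

```python
def decommission_candidates(resources, max_idle_days, min_avg_utilization):
    """Flag resources that are orphaned (no owner tag), or both idle
    and underutilized, as candidates for decommissioning."""
    candidates = []
    for r in resources:
        orphaned = not r.get("tags", {}).get("owner")
        idle = r.get("days_since_last_use", 0) > max_idle_days
        underused = r.get("avg_utilization_pct", 100.0) < min_avg_utilization
        if orphaned or (idle and underused):
            candidates.append(r["id"])
    return candidates
```

Running a job like this on a schedule, then notifying owners (or terminating automatically for pre-approved resource classes), is one way to make the periodic-audit trigger concrete.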
Amazon EC2, Amazon EBS, and Amazon S3 are building-block AWS services. Managed services, such as Amazon RDS and Amazon DynamoDB, are higher-level or application-level AWS services. By selecting the appropriate building blocks and managed services, you can optimize this workload for cost. For example, using managed services, you can reduce or remove much of your administrative and operational overhead, freeing you to work on applications and business-related activities.
Best Practices:
•Identify organization requirements for cost: Work with team members to define the balance between cost optimization and other pillars, such as performance and reliability, for this workload.
•Analyze all components of this workload: Ensure every workload component is analyzed, regardless of current size or current costs. Review effort should reflect potential benefit, such as current and projected costs.
•Perform a thorough analysis of each component: Look at overall cost to the organization of each component. Look at total cost of ownership by factoring in cost of operations and management, especially when using managed services. Review effort should reflect potential benefit: for example, time spent analyzing is proportional to component cost.
•Select software with cost-effective licensing: Open source software will eliminate software licensing costs, which can contribute significant costs to workloads. Where licensed software is required, avoid licenses bound to arbitrary attributes such as CPUs; look for licenses that are bound to output or outcomes. The cost of these licenses scales more closely to the benefit they provide.
•Select components of this workload to optimize cost in line with organization priorities: Factor in cost when selecting all components. This includes using application-level and managed services, such as Amazon RDS, Amazon DynamoDB, Amazon SNS, and Amazon SES, to reduce overall organization cost. Use serverless and containers for compute, such as AWS Lambda, Amazon S3 for static
websites, and Amazon ECS. Minimize license costs by using open source software, or software that does not have license fees: for example, Amazon Linux for compute workloads, or migrate databases to Amazon Aurora.
•Perform cost analysis for different usage over time: Workloads can change over time. Some services or features are more cost effective at different usage levels. By performing the analysis on each component over time and at projected usage, you ensure the workload remains cost effective over its lifetime.

COST 6 How do you meet cost targets when you select resource type, size, and number?
Ensure that you choose the appropriate resource size and number of resources for the task at hand. You minimize waste by selecting the most cost-effective type, size, and number.
Best Practices:
•Perform cost modeling: Identify organization requirements and perform cost modeling of the workload and each of its components. Perform benchmark activities for the workload under different predicted loads and compare the costs. The modeling effort should reflect potential benefit: for example, time spent is proportional to component cost.
•Select resource type and size based on data: Select resource size or type based on data about the workload and resource characteristics: for example, compute, memory, throughput, or write intensive. This selection is typically made using a previous version of the workload (such as an on-premises version), using documentation, or using other sources of information about the workload.
•Select resource type and size automatically based on metrics: Use metrics from the currently running workload to select the right size and type to optimize for cost. Appropriately provision throughput, sizing, and storage for services such as Amazon EC2, Amazon DynamoDB, Amazon EBS (PIOPS), Amazon RDS, Amazon EMR, and networking. This can be done with a feedback loop, such as automatic scaling, or by custom code in the workload.

COST 7 How do you use
pricing models to reduce cost?
Use the pricing model that is most appropriate for your resources to minimize expense.
Best Practices:
•Perform pricing model analysis: Analyze each component of the workload. Determine if the component and resources will be running for extended periods (for commitment discounts), or are dynamic and short running (for Spot or On-Demand). Perform an analysis on the workload using the Recommendations feature in AWS Cost Explorer.
•Implement regions based on cost: Resource pricing can be different in each region. Factoring in region cost ensures you pay the lowest overall price for this workload.
•Select third-party agreements with cost-efficient terms: Cost-efficient agreements and terms ensure the cost of these services scales with the benefits they provide. Select agreements and pricing that scale when they provide additional benefits to your organization.
•Implement pricing models for all components of this workload: Permanently running resources should utilize reserved capacity, such as Savings Plans or Reserved Instances. Short-term capacity is configured to use Spot Instances or Spot Fleet. On-Demand is used only for short-term workloads that cannot be interrupted and do not run long enough for reserved capacity (between 25% and 75% of the period, depending on the resource type).
•Perform pricing model analysis at the master account level: Use Cost Explorer Savings Plans and Reserved Instance recommendations to perform regular analysis at the master account level for commitment discounts.

COST 8 How do you plan for data transfer charges?
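Data transfer planning often starts with a simple per-GB model of each path out of the workload. The rates below are placeholder assumptions for illustration only; check the current AWS pricing pages for real figures.

```python
def monthly_transfer_cost(gb_out, rate_per_gb):
    """Monthly data transfer cost for a single egress path."""
    return gb_out * rate_per_gb

# Assumed per-GB rates (hypothetical, for comparison of paths only).
direct_rate = 0.09   # e.g., instance -> internet
cdn_rate = 0.085     # e.g., via a CDN such as Amazon CloudFront

gb = 50_000
saving = monthly_transfer_cost(gb, direct_rate) - monthly_transfer_cost(gb, cdn_rate)
```

Modeling each component this way makes it easy to see when an architectural change, such as fronting content with a CDN or moving chatty components into the same Availability Zone, pays for itself.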
Ensure that you plan and monitor data transfer charges so that you can make architectural decisions to minimize costs. A small yet effective architectural change can drastically reduce your operational costs over time.
Best Practices:
•Perform data transfer modeling: Gather organization requirements and perform data transfer modeling of the workload and each of its components. This identifies the lowest cost point for its current data transfer requirements.
•Select components to optimize data transfer cost: All components are selected, and architecture is designed to reduce data transfer costs. This includes using components such as WAN optimization and Multi-AZ configurations.
•Implement services to reduce data transfer costs: Implement services to reduce data transfer: for example, using a CDN such as Amazon CloudFront to deliver content to end users, caching layers using Amazon ElastiCache, or using AWS Direct Connect instead of VPN for connectivity to AWS.

Manage demand and supply resources

COST 9 How do you manage demand and supply resources?
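One common way to smooth demand peaks is a token-bucket throttle: requests spend tokens, tokens refill at a fixed rate, and excess requests are rejected (or buffered) until capacity recovers. A minimal sketch, not tied to any particular AWS service:

```python
class TokenBucket:
    """Minimal token-bucket throttle: caps the request rate to smooth peaks."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should retry later or buffer the request

bucket = TokenBucket(rate_per_sec=2, capacity=2)
results = [bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.5)]
```

The third request arrives before enough tokens have refilled and is throttled; after a quiet period the bucket recovers and requests flow again. A buffer plays the complementary role: instead of rejecting the request, it stores it for deferred processing.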
For a workload that has balanced spend and performance, ensure that everything you pay for is used, and avoid significantly underutilizing instances. A skewed utilization metric in either direction has an adverse impact on your organization, in either operational costs (degraded performance due to overutilization) or wasted AWS expenditures (due to overprovisioning).
Best Practices:
•Perform an analysis on the workload demand: Analyze the demand of the workload over time. Ensure the analysis covers seasonal trends and accurately represents operating conditions over the full workload lifetime. Analysis effort should reflect potential benefit: for example, time spent is proportional to the workload cost.
•Implement a buffer or throttle to manage demand: Buffering and throttling modify the demand on your workload, smoothing out any peaks. Implement throttling when your clients perform retries. Implement buffering to store the request and defer processing until a later time. Ensure your throttles and buffers are designed so clients receive a response in the required time.
•Supply resources dynamically: Resources are provisioned in a planned manner. This can be demand-based, such as through automatic scaling, or time-based, where demand is predictable and resources are provided based on time. These methods result in the least amount of over- or under-provisioning.

Optimize over time

COST 10 How do you evaluate new services?
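A review process can be as simple as assigning each workload a review cadence by its share of the overall bill. The sketch below uses the 10% threshold from the best practices for this question; the workload names and costs are hypothetical.

```python
def review_cadence(workload_cost, total_bill, threshold=0.10):
    """Assign a review cadence based on a workload's share of the total bill."""
    share = workload_cost / total_bill
    return "quarterly" if share >= threshold else "annually"

# Hypothetical monthly bill and workload costs, for illustration only.
bill = 100_000.0
workloads = {"core-api": 42_000.0, "batch-reports": 3_500.0}
cadences = {name: review_cadence(cost, bill) for name, cost in workloads.items()}
```

Scaling review effort to bill share keeps the process cheap for small workloads while ensuring the expensive ones are revisited whenever AWS releases a potentially cheaper option.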
As AWS releases new services and features, it's a best practice to review your existing architectural decisions to ensure they continue to be the most cost effective.
Best Practices:
•Develop a workload review process: Develop a process that defines the criteria and process for workload review. The review effort should reflect potential benefit: for example, core workloads or workloads with a value of over 10% of the bill are reviewed quarterly, while workloads below 10% are reviewed annually.
•Review and analyze this workload regularly: Existing workloads are regularly reviewed as per defined processes.

Backup and Recovery Approaches Using AWS
June 2016

This paper has been archived. For the latest technical content about the AWS Cloud, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions or assurances from AWS, its affiliates, suppliers or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Abstract
Introduction
Why Use AWS
as a Data Protection Platform?
AWS Storage Services for Data Protection
Amazon S3
Amazon Glacier
AWS Storage Gateway
AWS Transfer Services
Designing a Backup and Recovery Solution
Cloud-Native Infrastructure
EBS Snapshot-Based Protection
Database Backup Approaches
On-Premises to AWS Infrastructure
Hybrid Environments
Backing Up AWS-Based Applications to Your Data Center
Migrating Backup Management to the Cloud for Availability
Example Hybrid Scenario
Archiving Data with AWS
Securing Backup Data in AWS
Conclusion
Contributors
Document Revisions

Abstract
This paper is intended for enterprise solution architects, backup architects, and IT administrators who are responsible for protecting data in their corporate IT environments. It discusses production workloads and architectures that can be implemented using AWS to augment or replace a backup and recovery solution. These approaches offer lower costs, higher scalability, and more durability to meet Recovery Time Objective (RTO), Recovery Point Objective (RPO), and compliance requirements.

Introduction
As the growth of enterprise data accelerates, the task of protecting it becomes more challenging. Questions about the durability and scalability of backup methods are commonplace, including this one: How does the cloud help meet my backup and archival needs? This paper covers a number of backup architectures (cloud-native applications, hybrid, and on-premises environments) and associated AWS services that can be used to build scalable and reliable data-protection solutions.

Why Use AWS as a Data Protection Platform?
Amazon Web Services (AWS) is a secure, high-performance, flexible, cost-effective, and easy-to-use cloud computing platform. AWS takes care of the undifferentiated heavy lifting and provides tools and resources you can use to build scalable backup and recovery solutions. There are many advantages to using AWS as part of your data protection strategy:
•Durability: Amazon Simple Storage Service (Amazon S3) and Amazon Glacier are designed for 99.999999999% (11 nines) of durability for the objects stored in them. Both platforms offer reliable locations for backup data.
•Security: AWS provides a number of options for access control and encrypting data in transit and at rest.
•Global infrastructure: AWS services are available around the globe, so you can back up and store data in the region that meets your compliance requirements.
•Compliance: AWS infrastructure is certified for compliance with standards such as Service Organization Controls (SOC), Statement on Standards for Attestation Engagements (SSAE) 16, International Organization for Standardization (ISO) 27001, Payment Card Industry Data Security Standard (PCI DSS), Health Insurance Portability and Accountability Act (HIPAA), SEC,1 and Federal Risk and Authorization Management Program (FedRAMP), so you can easily fit the backup solution into your existing compliance regimen.
•Scalability: With AWS, you don't have to worry about capacity. You can scale your consumption up or down as your needs change, without administrative overhead.
•Lower TCO: The scale of AWS operations drives down service costs and helps lower the total cost of ownership (TCO) of the storage. AWS passes these cost savings on to customers in the form of price drops.
•Pay-as-you-go pricing: Purchase AWS services as you need them and only for the period you plan to use them. AWS pricing has no upfront fees, termination penalties, or long-term contracts.

AWS Storage Services
for Data Protection
Amazon S3 and Amazon Glacier are ideal services for backup and archival. Both are durable, low-cost storage platforms. Both offer unlimited capacity and require no volume or media management as backup data sets grow. The pay-for-what-you-use model and low cost per GB/month make these services a good fit for data protection use cases.

1 https://aws.amazon.com/about-aws/whats-new/2015/09/amazon-glacier-receives-third-party-compliance-assessment-for-sec-rule-17a-4f-from-cohasset-associates-inc/

Amazon S3
Amazon S3 provides highly secure, scalable object storage. You can use Amazon S3 to store and retrieve any amount of data, at any time, from anywhere on the web. Amazon S3 stores data as objects within resources called buckets. AWS Storage Gateway and many third-party backup solutions can manage Amazon S3 objects on your behalf. You can store as many objects as you want in a bucket, and you can write, read, and delete objects in your bucket. Single objects can be up to 5 TB in size.
Amazon S3 offers a range of storage classes designed for different use cases. These include:
•Amazon S3 Standard, for general-purpose storage of frequently accessed data
•Amazon S3 Standard - Infrequent Access, for long-lived but less frequently accessed data
•Amazon Glacier, for long-term archive
Amazon S3 also offers lifecycle policies you can configure to manage your data throughout its lifecycle. After a policy is set, your data will be migrated to the appropriate storage class without any changes to your application. For more information, see S3 Storage Classes.

Amazon Glacier
Amazon Glacier is an extremely low-cost cloud archive storage service that provides secure and durable storage for data archiving and online backup. To keep costs low, Amazon Glacier is optimized for data that is infrequently accessed and for which retrieval times of several hours are acceptable. With Amazon Glacier, you can reliably store large or
small amounts of data for as little as $0.007 per gigabyte per month, a significant savings compared to on-premises solutions. Amazon Glacier is well suited for storage of backup data with long or indefinite retention requirements and for long-term data archiving. For more information, see Amazon Glacier.

AWS Storage Gateway
AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless and highly secure integration between your on-premises IT environment and the AWS storage infrastructure. For more information, see AWS Storage Gateway.

AWS Transfer Services
In addition to third-party gateways and connectors, you can use AWS options like AWS Direct Connect, AWS Snowball, AWS Storage Gateway, and Amazon S3 Transfer Acceleration to quickly transfer your data. For more information, see Cloud Data Migration.

Designing a Backup and Recovery Solution
When you develop a comprehensive strategy for backing up and restoring data, you must first identify the failure or disaster situations that can occur and their potential business impact. In some industries, you must consider regulatory requirements for data security, privacy, and records retention. You should implement backup processes that offer the appropriate level of granularity to meet the RTO and RPO of the business, including:
•File-level recovery
•Volume-level recovery
•Application-level recovery (for example, databases)
•Image-level recovery
The following sections describe backup, recovery, and archive approaches based on the organization of your infrastructure. IT infrastructure can broadly be categorized as cloud-native, on-premises, and hybrid.

Cloud-Native Infrastructure
This scenario describes a workload environment that exists entirely on AWS. As the following figure shows, it includes web servers, application
servers, monitoring servers, databases, and Active Directory. If you are running all of your services from AWS, you can leverage many built-in features to meet your data protection and recovery needs.

Figure 1: AWS Native Scenario

EBS Snapshot-Based Protection
When services are running in Amazon Elastic Compute Cloud2 (Amazon EC2), compute instances can use Amazon Elastic Block Store (Amazon EBS) volumes to store and access primary data. You can use this block storage for structured data, such as databases, or unstructured data, such as files in a file system on the volume.
Amazon EBS provides the ability to create snapshots (backups) of any Amazon EBS volume. It takes a copy of the volume and places it in Amazon S3, where it is stored redundantly in multiple Availability Zones. The first snapshot is a full copy of the volume; ongoing snapshots store incremental block-level changes only.
This is a fast and reliable way to restore full volume data. If you only need a partial restore, you can attach the volume to the running instance under a different device name, mount it, and then use operating system copy commands to copy the data from the backup volume to the production volume.
Amazon EBS snapshots can also be copied between AWS regions using the Amazon EBS snapshot copy capability, available in the console or from the command line, as described in the Amazon Elastic Compute Cloud User Guide.3 You can use this feature to store your backup in another region without having to manage the underlying replication technology.

2 http://aws.amazon.com/ec2/
3 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-copy-snapshot.html

Creating EBS Snapshots
When you create a snapshot, you protect your data directly to durable disk-based storage. You can use the AWS Management Console, the command line interface (CLI), or the APIs
to create the Amazon EBS snapshot. In the Amazon EC2 console, on the Elastic Block Store Volumes page, choose Create Snapshot from the Actions menu. On the Create Snapshot dialog box, choose Create to create a snapshot that will be stored in Amazon S3.

Figure 2: Using the EC2 Console to Create a Snapshot

To use the CLI to create the snapshot, run the following command:

aws ec2 create-snapshot

You can schedule and run the aws ec2 create-snapshot commands on a regular basis to back up the EBS data. The economical pricing of Amazon S3 makes it possible for you to retain many generations of data. And because snapshots are block-based, you consume space only for data that's changed after the initial snapshot was created.

Restoring from an EBS Snapshot
To restore data from a snapshot, you can use the AWS Management Console, the CLI, or the APIs to create a volume from an existing snapshot. For example, follow these steps to restore a volume to an earlier point-in-time backup:
1. Use the following command to create a volume from the backup snapshot:
aws ec2 create-volume --availability-zone us-west-1b --snapshot-id mysnapshotid
2. On the Amazon EC2 instance, unmount the existing volume. In Linux, use umount. In Windows, use the Logical Volume Manager (LVM).
3. Use the following command to detach the existing volume from the instance:
aws ec2 detach-volume --volume-id oldvolumeid --instance-id myec2instanceid
4. Use the following command to attach the volume that was created from the snapshot:
aws ec2 attach-volume --volume-id newvolumeid --instance-id myec2instanceid --device /dev/sdf
5. Remount the volume on the running instance.

Creating Consistent or Hot Backups
When you perform a backup, it's best to have the system in a state where it is not performing any I/O. In the ideal case, the machine isn't accepting traffic, but
this is increasingly rare as 24/7 IT operations become the norm. For this reason, you must quiesce the file system or database in order to make a clean backup. The way in which you do this depends on your database or file system. The process for a database is as follows:
•If possible, put the database into hot backup mode.
•Run the Amazon EBS snapshot commands.
•Take the database out of hot backup mode or, if using a read replica, terminate the read replica instance.
The process for a file system is similar, but depends on the capabilities of the operating system or file system. For example, XFS is a file system that can flush its data for a consistent backup. For more information, see xfs_freeze.4 If your file system does not support the ability to freeze, you should unmount it, issue the snapshot command, and then remount the file system. Alternatively, you can facilitate this process by using a logical volume manager that supports the freezing of I/O.
Because the snapshot process continues in the background, and the creation of the snapshot is fast to execute and captures a point in time, the volumes you're backing up only need to be unmounted for a matter of seconds. Because the backup window is as small as possible, the outage time is predictable and can be scheduled.

4 https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/xfsfreeze.html

Performing Multivolume Backups
In some cases, you can stripe data across multiple Amazon EBS volumes by using a logical volume manager to increase potential throughput. When you use a logical volume manager (for example, mdadm or LVM), it is important to perform the backup from the volume manager layer rather than the underlying EBS volumes. This ensures all metadata is consistent and the subcomponent volumes are coherent. There are a number of ways to accomplish this. For example, you can use the script created
by alestic.com.5 The memory buffers should be flushed to disk; the file system I/O to disk should be stopped; and a snapshot should be initiated simultaneously for all the volumes making up the RAID set. After the snapshot for the volumes is initiated (usually a second or two), the file system can continue its operations. The snapshots should be tagged so that you can manage them collectively during a restore.
You can also perform these backups from the logical volume manager or file system level. In these cases, using a traditional backup agent enables the data to be backed up over the network. A number of agent-based backup solutions are available on the internet and in the AWS Marketplace.6 Remember that agent-based backup software expects a consistent server name and IP address. As a result, using these tools with instances deployed in an Amazon virtual private cloud (VPC)7 is the best way to ensure reliability.
An alternative approach is to create a replica of the primary system volumes that exist on a single large volume. This simplifies the backup process, because only one large volume must be backed up and the backup does not take place on the primary system. However, you should first determine whether the single volume can perform sufficiently during the backup and whether the maximum volume size is appropriate for the application.

5 https://github.com/alestic/ec2-consistent-snapshot
6 https://aws.amazon.com/marketplace/
7 http://aws.amazon.com/vpc/

Database Backup Approaches
AWS has many options for databases. You can run your own database on an EC2 instance, or use one of the managed service database options provided by the Amazon Relational Database Service8 (Amazon RDS).
If you are running your own database on an EC2 instance, you can back up data to files using native tools (for example, MySQL,9 Oracle,10 MSSQL,11 PostgreSQL12), or create a snapshot of the volumes containing the data using
one of the methods described in "EBS Snapshot-Based Protection."

Using Database Replica Backups
For databases that are built on RAID sets of Amazon EBS volumes, you can remove the burden of backups on the primary database by creating a read replica of the database. This is an up-to-date copy of the database that runs on a separate Amazon EC2 instance. The replica database instance can be created using multiple disks, similar to the source, or the data can be consolidated to a single EBS volume. You can then use one of the procedures described in "EBS Snapshot-Based Protection" to snapshot the EBS volumes. This approach is often used for large databases that are required to run 24/7. When that is the case, the backup window required is too long, and the production database cannot be taken down for such long periods.

Using Amazon RDS for Backups
Amazon RDS includes features for automating database backups. Amazon RDS creates a storage volume snapshot of your database instance, backing up the entire DB instance, not just individual databases.

8 https://aws.amazon.com/rds/
9 http://dev.mysql.com/doc/refman/5.7/en/backup-and-recovery.html
10 http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmbckba.htm#BRADV8003
11 http://msdn.microsoft.com/en-us/library/ms187510.aspx
12 http://www.postgresql.org/docs/9.3/static/backup.html

Amazon RDS provides two different methods for backing up and restoring your DB instances:
•Automated backups enable point-in-time recovery of your DB instance. Automated backups are turned on by default when you create a new DB instance. Amazon RDS performs a full daily backup of your data during a window that you define when you create the DB instance. You can configure a retention period of up to 35 days for the automated backup. Amazon RDS uses these periodic data backups in conjunction with your transaction logs to enable you to restore your DB instance to any second during your
retention period, up to the LatestRestorableTime (typically the last five minutes). To find the latest restorable time for your DB instances, you can use the DescribeDBInstances API call, or look on the Description tab for the database in the Amazon RDS console. When you initiate a point-in-time recovery, transaction logs are applied to the most appropriate daily backup in order to restore your DB instance to the time you requested.
•DB snapshots are user-initiated backups that enable you to back up your DB instance to a known state as frequently as you like, and then restore to that state at any time. You can use the Amazon RDS console or the CreateDBSnapshot API call to create DB snapshots. These snapshots have unlimited retention. They are kept until you use the console or the DeleteDBSnapshot API call to explicitly delete them.
When you restore a database to a point in time or from a DB snapshot, a new database instance with a new endpoint will be created. In this way, you can create multiple database instances from a specific DB snapshot or point in time. You can use the AWS Management Console or a DeleteDBInstance call to delete the old database instance.

Using AMIs to Back Up EC2 Instances
AWS stores system images in what are called Amazon Machine Images (AMIs). These images consist of the template for the root volume required to launch an instance. You can use the AWS Management Console or the aws ec2 create-image CLI command to back up the root volume as an AMI.

Figure 3: Using an AMI to Back Up and Launch an Instance

When you register an AMI, it is stored in your account using Amazon EBS snapshots. These snapshots reside in Amazon S3 and are highly durable.

Figure 4: Using the EC2 Console to Create a Machine Image

After you have created an AMI of your Amazon EC2 instance, you can use the AMI to recreate the instance or launch more copies of the instance. You can also copy AMIs from one
region to another for application migration or disaster recovery.

On-Premises to AWS Infrastructure
This scenario describes a workload environment with no components in the cloud. All resources, including web servers, application servers, monitoring servers, databases, Active Directory, and more, are hosted either in the customer data center or through colocation.

Figure 5: On-Premises Environment

By using AWS storage services in this scenario, you can focus on backup and archiving tasks. You don't have to worry about storage scaling or infrastructure capacity to accomplish the backup task.
Amazon S3 and Amazon Glacier are natively API-based and available through the Internet. This allows backup software vendors to directly integrate their applications with AWS storage solutions, as shown in the following figure.

Figure 6: Backup Connector to Amazon S3 or Amazon Glacier

In this scenario, backup and archive software directly interfaces with AWS through the APIs. Because the backup software is AWS-aware, it will back up the data from the on-premises servers directly to Amazon S3 or Amazon Glacier.
If your existing backup software does not natively support the AWS cloud, you can use AWS storage gateway products. AWS Storage Gateway13 is a virtual appliance that provides seamless and secure integration between your data center and the AWS storage infrastructure. The service allows you to securely store data

13 http://aws.amazon.com/storagegateway/
service allows you to securely store data 13 http://awsamazoncom/storagegateway/ ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 19 of 26 in the AWS cloud for scalable and costeffective storage Storage Gateway supports industrystandard storage protocols that work with your existing applications while securely storing all of your data encrypted in Amazon S3 or Amazon Glacier Figure 7: Connecting OnPremises to AW S Storage AWS Storage Gateway supports the following configurations:  Volume gateways: Volume gateways provide cloudbacked storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your onpremises application servers The gateway supports the following volume configurations:  Gatewaycached volumes: You can store your primary data in Amazon S3 and retain your frequently accessed data locally Gatewaycached volumes provide substantial cost savings on primary storage minimize the need to scale your storage on premises and retain lowlatency access to your frequently accessed data  Gatewaystored volumes: In the event you need lowlatency access to your entire data set you can configure your onpremises data gateway to store your primary data locally and asynchronously back up point intime snapshots of this data to Amazon S3 Gatewaystored volumes provide durable and inexpensive offsite backups that you can recover locally or from Amazon EC2  Gatewayvirtual tape library (gatewayVTL): With gatewayVTL you can have a limitless collection of virtual tapes Each virtual tape can be stored ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 20 of 26 in a virtual tape library backed by Amazon S3 or a virtual tape shelf backed by Amazon Glacier The virtual tape library exposes an industrystandard iSCSI interface which provides your backup application with online access to the virtual tapes When you no longer require immediate or frequent access to data contained 
on a virtual tape, you can use your backup application to move it from its virtual tape library to your virtual tape shelf to further reduce your storage costs.

These gateways act as plug-and-play devices providing standard iSCSI devices, which can be integrated into your backup or archive framework. You can use the iSCSI disk devices as storage pools for your backup software, or use the gateway-VTL to offload tape-based backup or archive directly to Amazon S3 or Amazon Glacier. Using this method, your backups and archives are automatically offsite (for compliance purposes) and stored on durable media, eliminating the complexity and security risks of offsite tape management.

Hybrid Environments

The two infrastructure deployments discussed to this point, cloud-native and on-premises, can be combined into a hybrid scenario where the workload environment has both on-premises and AWS infrastructure components. Resources, including web servers, application servers, monitoring servers, databases, Active Directory, and more, are hosted either in the customer data center or in AWS. Applications running in the AWS cloud are connected to applications running on premises. This is becoming a common scenario for enterprise workloads.

Many enterprises have data centers of their own and use AWS to augment capacity. These customer data centers are often connected to the AWS network by high-capacity network links. For example, with AWS Direct Connect (http://aws.amazon.com/directconnect/) you can establish private, dedicated connectivity from your premises to AWS. This provides the bandwidth and consistent latency to upload data to the cloud for the purposes of data protection, and consistent performance and latency for hybrid workloads.

Figure 8: A Hybrid Infrastructure Scenario

Well-designed data protection solutions typically use a combination of the methods described in the cloud-native and on-premises solutions.

Backing Up
AWS-Based Applications to Your Data Center

If you already have a framework that backs up data for your on-premises servers, then it is easy to extend it to your AWS resources over a VPN connection or through AWS Direct Connect. You can install the backup agent on the Amazon EC2 instances and back them up per your data-protection policies.

Migrating Backup Management to the Cloud for Availability

Depending on your backup architecture, you may have a master backup server and one or more media or storage servers located on premises with the services they are protecting. In this case, you might want to move the master backup server to an Amazon EC2 instance to protect it from on-premises disasters and to have a highly available backup infrastructure. To manage the backup data flows, you might also want to create one or more media servers on Amazon EC2 instances. Media servers near the Amazon EC2 instances will save you money on internet transfer and, when backing up to Amazon S3 or Amazon Glacier, increase overall backup and recovery performance.

Figure 9: Using Gateways in the Hybrid Scenario

Example Hybrid Scenario

Assume that you are managing an environment where you are backing up Amazon EC2 instances, standalone servers, virtual machines, and databases. This environment has 1,000 servers, and you back up the operating system, file data, virtual machine images, and databases. There are 20 databases (a mixture of MySQL, Microsoft SQL Server, and Oracle) to back up. Your backup software has agents that back up operating systems, virtual machine images, data volumes, SQL Server databases, and Oracle databases (using RMAN). For applications like MySQL that your backup software does not have an agent for, you might use the mysqldump client utility to create a database dump file on disk, where standard backup agents can then protect
the data.

To protect this environment, your third-party backup software most likely has a global catalog server or master server that controls the backup, archive, and restore activities, as well as multiple media servers that are connected to disk-based storage, Linear Tape-Open (LTO) tape drives, and AWS storage services.

The simplest way to augment your backup solution with AWS storage services is to take advantage of your backup vendor's support for Amazon S3 or Amazon Glacier. We suggest you work with your vendor to understand their integration and connector options. For a list of backup software vendors who work with AWS, see our partner directory.15

If your existing backup software does not natively support cloud storage for backup or archive, you can use a storage gateway device as a bridge between the backup software and Amazon S3 or Amazon Glacier. There are many third-party gateway solutions. You can also use AWS Storage Gateway virtual appliances to bridge this gap, because it uses generic techniques such as iSCSI-based volumes and virtual tape libraries (VTLs). This configuration requires a supported hypervisor (VMware or Microsoft Hyper-V) and local storage to host the appliance.

15 http://www.awspartnerdirectory.com/PartnerDirectory/PartnerSearch?type=ISV

Archiving Data with AWS

When you need to preserve data for compliance or corporate reasons, you archive it. Unlike backups, which are usually performed to keep a copy of the production data for a short duration to recover from data corruption or data loss, archiving maintains all copies of data until the retention policy expires. A good archive meets the following criteria:

• Data durability for long-term integrity
• Data security
• Ease of recoverability
• Low cost

Immutable data stores can be another regulatory or compliance requirement. Amazon Glacier provides archives at low cost, native encryption of data at rest, 11
nines of durability, and unlimited capacity. Amazon S3 Standard – Infrequent Access is a good choice for use cases that require the quick retrieval of data. Amazon Glacier is a good choice for use cases where data is infrequently accessed and retrieval times of several hours are acceptable. Objects can be tiered into Amazon Glacier either through lifecycle rules in S3 or through the Amazon Glacier API.

The Amazon Glacier Vault Lock feature allows you to easily deploy and enforce compliance controls for individual Amazon Glacier vaults with a vault lock policy. You can specify controls such as "write once, read many" (WORM) in a vault lock policy and lock the policy from future edits. For more information, see Amazon Glacier.

Securing Backup Data in AWS

Data security is a common concern. AWS takes security very seriously; it's the foundation of every service we launch. Storage services like Amazon S3 provide strong capabilities for access control and encryption, both at rest and in transit. All Amazon S3 and Amazon Glacier API endpoints support SSL encryption for data in transit. Amazon Glacier encrypts all data at rest by default. With Amazon S3, customers can choose server-side encryption for objects at rest by letting AWS manage the encryption keys, providing their own keys when they upload an object, or using AWS Key Management Service (AWS KMS)16 integration for the encryption keys. Alternatively, customers can always encrypt their data before uploading it to AWS. For more information, see Amazon Web Services: Overview of Security Processes.

Conclusion

Gartner has recognized AWS as a leader in public cloud storage services.17 AWS is well positioned to help organizations move their workloads to cloud-based platforms, the next generation of backup. AWS provides cost-effective and scalable solutions to help organizations balance their requirements for backup and archiving. These services integrate well with
technologies you are using today.

Contributors

The following individuals contributed to this paper:

• Pawan Agnihotri, Solutions Architect, Amazon Web Services
• Lee Kear, Solutions Architect, Amazon Web Services
• Peter Levett, Solutions Architect, Amazon Web Services

16 http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
17 http://www.gartner.com/technology/reprints.do?id=1-1WWKTQ3&ct=140709&st=sb

Document Revisions

Updated May 2016

Best Practices for Deploying Alteryx Server on AWS

August 2019

This paper has been archived. For the latest technical guidance on the AWS Cloud, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers/

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 1
Alteryx Server 1
Designer 1
Scheduler 1
Controller 2
Worker 3
Database 3
Gallery 3
Options for Deploying Alteryx Server on AWS 4
Enterprise Deployment 5
Deploy Alteryx Server with Chef 8
Deploy a Windows Server EC2 instance and install Alteryx Server 8
Deploy an Amazon EC2 Instance from the Alteryx Server AMI 8
Sizing and Scaling
Alteryx Server on AWS 10
Performance Considerations 10
Availability Considerations 14
Management Considerations 15
Sizing and Scaling Summary 15
Operations 17
Backup and Restore 17
Monitoring 17
Network and Security 18
Connecting On-Premises Resources to Amazon VPC 18
Security Groups 20
Network Access Control Lists (NACLs) 20
Bastion Host (Jump Box) 20
Secure Sockets Layer (SSL) 21
Best Practices 21
Deployment 21
Scaling and Availability 22
Network and Security 22
Performance 23
Conclusion 23
Contributors 23
Further Reading 24
Document Revisions 25

Abstract

Alteryx Server is a scalable, server-based analytics solution that helps you create, publish, and share analytic applications; schedule and automate workflow jobs; create, manage, and share data connections; and control data access. This whitepaper discusses how to run Alteryx Server on AWS and provides an overview of the AWS services that relate to Alteryx Server. It also includes information on common architecture patterns and deployment of Alteryx Server on AWS. The paper is intended for information technology professionals who are new to Alteryx products and are considering deploying Alteryx Server on AWS.

Introduction

Alteryx Server provides a scalable platform that helps create analytical insights and empowers analysts and business users across your organization to make better data-driven decisions. Alteryx Server provides:

• Data blending
• Predictive analytics
• Interactive visualizations
• An easy-to-use drag-and-drop interface
• Support for a wide variety of data sources
• Data governance and security
• Sharing and collaboration

Alteryx Server is an end-to-end analytics platform for the enterprise, used by thousands of customers around the world. For details on how customers have successfully used Alteryx on AWS, see the Alteryx + AWS Customer Success Stories.

Alteryx Server

Alteryx Server consists of six main
components: Designer, Scheduler, Controller, Worker, Database, and Gallery. Each component is discussed in the following sections.

Designer

The Designer is a Windows software application that lets you create repeatable workflow processes. Designer is installed by default on the same instance as the Controller. You can use other installations of the Designer (for example, on your workstation) and connect them to the Controller using the controller token.

Scheduler

The Scheduler lets you schedule the execution of workflows or analytic applications developed within the Designer.

Controller

The Controller orchestrates workflow executions, manages the service settings, and delegates work to the Workers. The Controller also supports the Gallery and handles APIs for remote integration. The Controller has three key parts: authentication, controller token, and database drivers, which are described as follows.

Authentication

Alteryx Server supports local authentication, Microsoft Active Directory (Microsoft AD) authentication, and SAML 2.0 authentication. For short-term trial or proof-of-concept deployments, local authentication is a reasonable option. However, in most deployments we recommend that you use Microsoft AD or SAML 2.0 to connect your user directory.

Note: Changing authentication methods requires that you reinstall the Controller.

For deployments of Alteryx Server on AWS where you have chosen Microsoft AD, consider using AWS Directory Service. AWS Directory Service enables Alteryx Server to use a fully managed instance of Microsoft AD in the AWS Cloud. AWS Microsoft AD is built on Microsoft AD and does not require you to synchronize or replicate data from your existing Active Directory to the cloud (although this remains an option for later integration as your deployment evolves over time). For more information on this option, see AWS Directory Service.

Controller Token

The controller token connects
the Controller to Workers and Designer clients to schedule and run workflows from other Designer components. The token is automatically generated when you install Alteryx Server. The controller token is unique to your server instance, and administrators must safeguard it. You only need to regenerate the token if it is compromised. If you regenerate the token, all of the Worker and Gallery components must be updated with the new token.

Drivers

Alteryx Server communicates with numerous supported data sources, including databases such as Amazon Aurora and Amazon Redshift, and object stores such as Amazon Simple Storage Service (Amazon S3). For a complete list of supported sources, see Data Sources on the Alteryx Technical Specifications page.

Successfully connecting to most data sources is a simple process because the Controller has a network path to the database and proper credentials to access the database with the appropriate permissions. For help with troubleshooting database connections, see the Alteryx Community and Alteryx Support pages.

Each database requires you to install the appropriate driver. When using Alteryx Server, be sure to configure each required database driver on the server machine with the same version that is used for Designer clients. If a Designer client and the Alteryx Server do not have the same driver, the scheduled workflow may not complete properly.

Worker

The Worker executes workflows or analytic applications sent to the Controller. The same instance that runs the Controller can run the Worker; this setup is common in smaller-scale deployments. You can configure separate instances to run as Workers for scaling and performance purposes. You must configure at least one instance as a Worker; the total number of Workers you need depends on performance considerations.

Database

The persistence tier stores information that is critical to the functioning of the Controller,
such as Alteryx application files, the job queue, gallery information, and result data. Alteryx Server supports two different databases for persistence: MongoDB and SQLite. Most deployments use MongoDB, which can be deployed as an embedded database or as a user-managed database. Consider using MongoDB if you need a scalable or highly available architecture; note that most scalable deployments use a user-managed MongoDB database. Consider using SQLite if you do not need to use the Gallery and your deployment is limited to scheduling workloads.

Gallery

The Gallery is a web-based application for sharing workflows and outputs. The Gallery can be run on the Alteryx Server machine. Alternatively, multiple Gallery machines can be configured behind an Elastic Load Balancing (ELB) load balancer to handle the Gallery services at scale.

Options for Deploying Alteryx Server on AWS

Alteryx Server is contained as a Microsoft Windows Service. It can run easily on most Microsoft Windows Server operating systems.

Note: In order to install Alteryx Server on AWS, you will need an AWS account and an Alteryx Server license key. If you do not have a license key, trial options for Alteryx Server on AWS are available through AWS Marketplace.

You can install the Alteryx Server components into a multi-node cluster to create a scalable enterprise deployment of Alteryx Server:

Figure 1: Scalable enterprise deployment of Alteryx Server

Alternatively, you can install Alteryx Server in one self-contained EC2 instance:

Figure 2: Deployment of Alteryx Server on a single EC2 instance

The following sections discuss how to deploy Alteryx Server on AWS, from the most complex deployment to the simplest deployment.

Enterprise Deployment

The following architecture diagram shows a solution for a scalable enterprise deployment of Alteryx Server on AWS.
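As context for the user-managed MongoDB option described above, the Controller is pointed at the replica set by supplying its connection information in the Alteryx System Settings. The sketch below shows, under stated assumptions, how the standard MongoDB replica-set connection string for a three-node persistence tier might be assembled; the host names, replica-set name, and database name are hypothetical placeholders, and the exact settings fields are described in the Alteryx documentation.

```python
# Sketch: assemble a MongoDB replica-set connection string for the
# Alteryx Server persistence tier. Host names, replica-set name, and
# database name below are hypothetical placeholders.
def replica_set_uri(hosts, database, replica_set, port=27017):
    """Build a mongodb:// URI that lists every replica-set member,
    so the driver can fail over if one member becomes unavailable."""
    host_list = ",".join(f"{h}:{port}" for h in hosts)
    return f"mongodb://{host_list}/{database}?replicaSet={replica_set}"

# One MongoDB instance per Availability Zone, as recommended above.
uri = replica_set_uri(
    ["mongo-az-a.internal", "mongo-az-b.internal", "mongo-az-c.internal"],
    database="alteryx",
    replica_set="rs0",
)
print(uri)
# mongodb://mongo-az-a.internal:27017,mongo-az-b.internal:27017,mongo-az-c.internal:27017/alteryx?replicaSet=rs0
```

Listing all three members (rather than a single host) matters for availability: the client can discover the current primary even when the first host in the list is down.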
Figure 3: Alteryx Server architecture on AWS

The following high-level steps explain how to create a scalable enterprise deployment of Alteryx Server on AWS.

Note: To deploy Alteryx Server on AWS, you will need the controller token (to connect the Controller to Workers and Designer clients), the IP or DNS information of the Controller (for connection and failover, if needed), and the user-managed MongoDB connection information.

1. Create an Amazon Virtual Private Cloud (VPC), or use an existing VPC, with a minimum of two Availability Zones (called Availability Zone A and Availability Zone B).

2. Deploy a Controller instance in Availability Zone A. Document the controller token and connection information for later steps.

Note: It's possible to use an Elastic IP address to connect remote clients and users to the Controller, but we recommend that you use AWS Direct Connect or AWS Managed VPN for more complex, long-running deployments. VPC peering connection options and Direct Connect can enable private connectivity to the Controller instance, as well as a predictable, cost-effective network path back to on-premises data sources that you may wish to expose to the Controller.

3. Create a MongoDB replica set with at least three instances. Place each instance in a different Availability Zone. Document the connection information for the next step.

4. Connect the MongoDB cluster to the Controller instance by providing the MongoDB connection information in the Alteryx System Settings on the Controller.

5. Deploy a Worker instance in Availability Zone A and connect it to the Controller instance in the Availability Zone A subnet.

6. Deploy a Worker instance in Availability Zone B and connect it to the Controller instance in the Availability Zone A subnet.

7. Deploy and connect more Workers as needed to support your desired level of workflow
concurrency. You can have more than one Worker in each Availability Zone, but be aware that each Availability Zone represents a fault domain. You should also consider the performance implications of losing access to Workers deployed in a particular Availability Zone.

8. Create an ELB load balancer to handle requests to the Gallery instances.

9. Deploy Gallery instances and register them with the ELB load balancer. Be sure to deploy your Gallery instances in multiple Availability Zones.

10. Connect the Gallery instances to the Controller instance.

11. Connect the client Designer installations to the Controller instance using either the Elastic IP address or the optional private IP (chosen in Step 2), then test workflows and publishing to the Gallery.

12. (Optional) Deploy a cold/warm standby Controller instance in another Availability Zone or AWS Region. Failover is controlled by changing the Elastic IP address (if deployed in the same VPC) or the DNS name to this Controller instance.

Deploy Alteryx Server with Chef

You can use AWS OpsWorks with Chef cookbooks and recipes to deploy Alteryx Server. For Alteryx Chef resources, see cookbook alteryx server on GitHub.

Deploy a Windows Server EC2 instance and install Alteryx Server

You can deploy an Amazon Elastic Compute Cloud (Amazon EC2) instance running Windows Server and then install Alteryx Server. You can download the install package here. Make sure that you deploy an instance with the recommended compute size (at least 8 vCPUs), Windows operating system (Microsoft Windows Server 2008 R2 or later), and available Amazon Elastic Block Store (Amazon EBS) storage (1 TB).

Deploy an Amazon EC2 Instance from the Alteryx Server AMI

You can purchase an Amazon Machine Image (AMI) from Alteryx through AWS Marketplace and use it to launch an Amazon EC2 instance running Alteryx Server. You can find the Alteryx Server offering on AWS
Marketplace.

Note: You can try one instance of the product for 14 days. Please remember to turn your instance off once your trial is complete to avoid incurring charges.

You have two options for launching your Amazon EC2 instance: you can launch an instance using the Amazon EC2 launch wizard in the Amazon EC2 console, or by selecting the Alteryx Server AMI in the launch wizard. Note that the fastest way to deploy Alteryx Server on AWS is to launch an Amazon EC2 instance using the Marketplace website.

To launch Alteryx Server using the Marketplace website:

1. Navigate to AWS Marketplace.

2. Select Alteryx Server, then select Continue to Subscribe.

3. Once subscribed, select Continue to Configuration.

4. Review the configuration settings, choose a nearby Region, then select Continue to Launch.

5. Once you have configured the options on the page as desired, select Launch.

6. Go to the Amazon EC2 console to view the startup of the instance.

7. It can be helpful to note the Instance ID for later reference. You can give the instance a friendly name to find it more easily and to allow others to know what the instance is for. Click inside the Name field and enter the desired name.

8. Navigate to the instance Public IP address or Public DNS name in your browser. Enter your email address and take note of the token at the bottom.

9. Your token will be specific to your instance. If you selected the Bring Your Own License image, a similar registration will appear and prompt you for license information.

10. After selecting your server instance and clicking Connect, you will be guided through using Remote Desktop Protocol (RDP) to connect to the Controller instance of Alteryx.

11. Once connected, you can use your AWS instance running Alteryx Server. The desktop contains links to the Designer and Server System Settings.

12. Start using Alteryx
Server. See the Alteryx Community for more information on how to use Alteryx Server and Designer.

Sizing and Scaling Alteryx Server on AWS

When sizing and scaling your Alteryx Server deployment, consider performance, availability, and management.

Performance Considerations

This section covers options and best practices for improving the performance of your Alteryx Server workflows.

Scaling Up vs. Scaling Out

You can usually increase performance by scaling your Workers up or out. To scale up, relaunch Workers using a larger instance type with more vCPUs or memory, or configure faster storage. When scaling up, you should increase the size of all Workers, as the Controller does not schedule on specific Worker instances by priority and will not assign work to the machine with the most resources. To scale out, configure additional instances. Both options typically take only a few minutes. Below are two scenarios that discuss scaling up and scaling out:

• Long job queues – If you expect that a high number of jobs will be scheduled, or if you observe that the job queue length exceeds defined limits, then scale out to make sure you have enough instances to meet demand. Scale up if you already have a very large number of small nodes.

• Long-running jobs or large workflows – Larger instances, specifically instance types with more RAM, are best suited for long-running workloads. If you find that you have long-running jobs, first examine the query logic, load on the data source, and network path, and adjust if necessary. If the jobs are otherwise well tuned, consider scaling up.

This table presents heuristics that can help you determine the number of Workers you need to execute workloads with different run times. Cells show the number of Worker instances.

Number of Users   5-Second Workload   30-Second Workload   1-Minute Workload   2+-Minute Workload
1–20              1                   1                    2                   3
20–40             1                   2                    3                   4
40–100            2                   3                    4                   5
100+              3                   4                    5                   6

Table 1: Number of
Worker instances needed to execute workloads with different run times

Consider having your users run some of their frequently requested workflows on a test instance of Alteryx Server of your planned instance size. You can quickly deploy a test instance using the Alteryx Server AMI. These tests will help you understand the number of jobs and workflow sizes that your instance size can handle. To predict workflow sizes, review your current and planned Designer workflows.

In Alteryx benchmark testing, the engine running in Alteryx Designer performed nearly the same as in Alteryx Server when running on similar instance types (see Alteryx Analytics Benchmarking Results). Keep this in mind when determining how long workloads will take to run. You can test workload times without installing Alteryx Server by using the Designer on hardware that is similar to what you would use to deploy Alteryx Server.

Scaling Based on Demand

Many customers find they need to add more Workers at predictable times. For peak usage times, you can launch new Worker instances from the Alteryx Server AMI and pay for them using the pay-as-you-go option. With this model, you pay only for the instances you need, for as long as you use them. This is common for seasonal, end-of-month, end-of-quarter, or end-of-year workloads.

You can use an Amazon EC2 Auto Scaling group, with a script that inserts the controller token into these new instances, to scale additional Worker instances on demand with minimal or no post-launch configuration. Additionally, you can integrate Amazon EC2 Auto Scaling with Amazon CloudWatch to scale automatically based on custom metrics, such as the number of jobs queued. Scaling Alteryx Server to more instances will have licensing implications because it is licensed by cores.

Figure 4: Use Amazon EC2 Auto Scaling and Amazon CloudWatch to scale Worker instances on demand

You can perform additional
scheduled scaling actions with Amazon EC2 Auto Scaling. For example, you can configure an Amazon EC2 Auto Scaling group to spin up instances at the start of business hours and turn them off automatically at the end of the day. This allows Alteryx Server to reduce compute costs while meeting business analytic requirements.

Worker Performance

Workers have several configuration settings. The two settings that are the most important for optimizing workflow performance are simultaneous workflows and max sort/join memory.

Simultaneous workflows – You have the best starting point for simultaneous workflows when 2 vCPUs are available for each workflow. For example, if an instance has 8 vCPUs, then we recommend that you enable 4 workflows to run simultaneously. This setting is labeled Workflows allowed to run simultaneously in the Worker configuration interface. You can adjust this setting as a way to tune performance.

Note: 2 vCPUs = 1 workflow running simultaneously.

Max sort/join memory usage – This configuration manages the memory available to workflows that are more RAM intensive. The best practice is to take the total memory available to the machine and subtract a suggested 4 GB of memory for OS processes, then divide that number by the number of simultaneous workflows assigned:

Max Sort/Join Memory Usage = (Total Memory − Suggested 4 GB Operating System Memory) / (# of simultaneous workflows)

For example, for a Worker configured with 32 GB of memory and 8 vCPUs, the recommended number of simultaneous workflows is 4, because there are 8 vCPUs (1 workflow for every 2 vCPUs). In this example, the 4 GB of memory set aside for the OS is subtracted from the 32 GB of total memory. The remaining number (28 GB) is divided by the number of simultaneous workflows (4), leaving 7 GB. Therefore, the recommended max sort/join memory is 7 GB.

Max Sort/Join Memory Usage for a 32 GB instance with 8 vCPUs = (32 GB – 4 GB)
/ 4 simultaneous workflows = 7 GB

The following table shows a list of precomputed values for suggested max sort/join memory.

Instance vCPUs   Suggested Simultaneous Workflows   Total Memory (GB)   OS Memory (constant) (GB)   Suggested Max Sort/Join Memory (GB/Thread)
4                2                                  16                  4                           6
8                4                                  32                  4                           7
16               8                                  32                  4                           3.5
16               8                                  64                  4                           7.5
32               16                                 128                 4                           7.8

Table 2: Examples of suggested max sort/join memory

Database Performance

Using a user-managed MongoDB cluster allows you to control and tune the performance of the Alteryx Server persistence tier.

Availability Considerations

Except for the Controller, you can scale out the other major Alteryx Server components to multiple instances. Scaling the Worker, Gallery, and Database instances increases their availability, performance, or both. You can create a standby Controller to ensure availability in the event of a Controller issue, instance failure, or Availability Zone issue. For high availability, you should deploy Worker, Gallery, and Database instances in two or three Availability Zones. Consider deploying instances in more than one AWS Region for faster disaster recovery, to improve interactive access to data for your regional customers, and to reduce latency for users in different geographies.

Figure 5: High availability deployment of Alteryx Server on AWS

AWS recommends that you have approximately 3–5 Worker instances, 2–4 Gallery instances behind an ELB application load balancer, and 3–5 MongoDB instances configured in a MongoDB replica set for high availability deployments. The Worker instances depicted above were created with Amazon EC2 Auto Scaling. The exact numbers and instance sizes are dependent on costs and the performance sizing specific to your organization.

For multi-Region deployments, ensure that each AWS Region has a Controller instance that can be used with a DNS
name (Elastic IP addresses are local to a single AWS Region). We recommend using Amazon Route 53 in an active-passive configuration to ensure there is only one active Controller. The passive Controllers can be fully configured, but Amazon Route 53 will only route traffic to a passive Controller if the active Controller becomes unavailable.

Management Considerations

Many of the configurations we discussed allow for more flexible management of Alteryx Server. Control of the persistence tier gives you more options when replicating and backing up the database. Placing the Gallery behind a load balancer allows for easier maintenance when upgrading or deploying Gallery instances. From an operational standpoint, a scaled install gives you more options and less downtime for backups, monitoring, database permissions, and third-party tools. Remember, scaling Alteryx Server will have licensing implications based on the number of vCPUs in the deployment. You need to license all deployed nodes, regardless of function.

Sizing and Scaling Summary

A high-level overview of the reasons and decisions for sizing and scaling Alteryx is given in the table below.

Controller Scaled Up (Larger Instance Size)
  Performance Impact: Can help increase Gallery performance
  Availability Impact: No major impact
  Management Impact: No major impact

Controller Scaled Out (More Controller Instances)
  Performance Impact: No major impact
  Availability Impact: Having multiple Controllers requires that one Controller is on cold or warm standby
  Management Impact: Requires customized scripts or triggers to automatically fail over; you can create these with AWS services such as CloudWatch and SNS

Worker Scaled Up (Larger Instance Size)
  Performance Impact: Decreased workflow completion times; for best results, use instance types with more memory or optimized memory
  Availability Impact: No major impact
  Management Impact: No major impact

Worker Scaled Out (More Worker Instances)
  Performance Impact: More concurrent
workflows can be run More resiliency to Worker instance failures Reduced downtime during maintenance Gallery Scaled Out (More Gallery Instances) Better performance for more Gallery users More resiliency to Gallery instance failures Reduced downtime during maintenance User Managed MongoDB database More control for tuning and performance Clust ering and replication in MongoDB allow for higher availability Give you more control over the database but require s some knowledge about NoSQL databases Table 3: Scaling actions and impact on performance availability and management When considering Alteryx Server deployment options and which components to scale it's best to consider your organization ’s performance availability and management needs For example your organiza tion may have a few users creating analytic workflows but hundreds of users consuming those workflows via the Gallery In that ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 17 case you might need minimal infrastructure to handle analytic workflows and the database while the Controller which aids the G allery instances would need to be a larger instance and the Gallery instances would be best served using several instances behind a l oad balancer If you are concerned with data loss you should create a user managed MongoDB cluster and make sure that it is backed up regularly to multiple locations Operations This section discusses backup restore and monitoring operations Backup and Restore You can use the Amazon Elastic Block Store ( EBS) snapshot feature to back up the Controller Worker and Database instances You can use these s napshots to restore data in the event of a failure It is best to stop the Controller and Database tier before a snapshot The Gallery is stateless and does not need to be backed up For details on how to perform backup and recovery operations i f you are using a user managed MongoDB database see the MongoDB documentation for Amazon EC2 Backup and 
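The snapshot approach described above is easy to script. A sketch that builds the `create_snapshot` request for each Alteryx volume; the volume IDs and tag key are illustrative placeholders, and in practice you would stop the Controller and Database tier first, as noted above:

```python
from datetime import datetime, timezone

def snapshot_request(volume_id: str, component: str) -> dict:
    """Build kwargs for ec2_client.create_snapshot() for one Alteryx volume."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    return {
        "VolumeId": volume_id,
        "Description": f"Alteryx {component} backup {stamp}",
        "TagSpecifications": [{
            "ResourceType": "snapshot",
            # Illustrative tag so restores can locate the right component.
            "Tags": [{"Key": "alteryx-component", "Value": component}],
        }],
    }

# Hypothetical volume IDs; the Gallery is stateless and deliberately omitted.
volumes = {"vol-0aaa": "controller", "vol-0bbb": "worker", "vol-0ccc": "mongodb"}
requests = [snapshot_request(vol, comp) for vol, comp in volumes.items()]
# To execute: boto3.client("ec2").create_snapshot(**requests[0])
```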
Monitoring

AWS provides robust monitoring of Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon EBS volumes, and other services via Amazon CloudWatch. Amazon CloudWatch can be triggered to send a notification via Amazon Simple Notification Service (Amazon SNS) or email upon meeting user-defined thresholds on individual AWS services. Amazon CloudWatch can also be configured to trigger an auto recovery action on instance failure.

You can also write a custom metric to Amazon CloudWatch, for example, to monitor current queue sizes of workflows in your Controller, and to alarm or trigger automatic responses from those measures. By default, these metrics are not available from Alteryx Server, but they can be parsed from Alteryx logs and custom workflows and exposed to CloudWatch using Amazon CloudWatch Logs.

You can also use third-party monitoring tools to monitor status and performance for Alteryx Server. A free analytics workflow and application is available for reviewing Alteryx Server performance and logs. You can get that tool from the Alteryx support community.

Network and Security

This section covers network and security considerations for Alteryx Server deployment.

Connecting On-Premises Resources to Amazon VPC

In order for Alteryx Server to access your on-premises data sources, connect an Amazon Virtual Private Cloud (Amazon VPC) to your on-premises resources. In the following figure, the private subnet contains Alteryx Server. You can place all the Gallery services in a public subnet (not shown) for simple access to the internet and users, or you can configure AWS Direct Connect or use VPN to enable a private peering connection with no public IP addressing required. You can also place Gallery instances or Alteryx Server in the private subnets with configuration of a NAT Gateway. Scaling, hybrid, or disaster recovery options are also available in this model, with elements of Alteryx Server deployed as needed, either on premises or on AWS.
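The custom Controller queue-depth metric suggested in the Monitoring section above can be published with CloudWatch's PutMetricData API. A sketch; the namespace and metric name are illustrative, and the queue depth would come from parsing the Alteryx logs:

```python
def queue_depth_payload(queue_depth: int, namespace: str = "AlteryxServer") -> dict:
    """Build a PutMetricData payload for the Controller workflow queue."""
    return {
        "Namespace": namespace,  # illustrative namespace, not an AWS default
        "MetricData": [{
            "MetricName": "ControllerQueueDepth",  # illustrative metric name
            "Value": float(queue_depth),
            "Unit": "Count",
        }],
    }

# queue_depth would be parsed from the Alteryx Service logs.
payload = queue_depth_payload(12)
# To send: boto3.client("cloudwatch").put_metric_data(**payload)
```

A CloudWatch alarm on this metric can then trigger SNS notifications or scaling actions, as the text describes.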
Figure 6: Options for connecting on-premises services to Alteryx Server on AWS

Alteryx Server often uses information stored on private corporate resources. Be aware of the performance and traffic implications of accessing large amounts of data that are outside of AWS. AWS offers several solutions to handle this kind of expected traffic. You can provision a VPN connection to your VPC by provisioning an AWS Managed VPN Connection, AWS VPN CloudHub, or a third-party software VPN appliance running on an Amazon EC2 instance deployed in your VPC.

We recommend using AWS Direct Connect to connect to private data sources outside of AWS, as it provides a predictable, low-cost, and high-performance dedicated peering connection. You can also use VPN with Direct Connect to fully encrypt all traffic. This approach fits well into risk and security compliance standards for many corporations. You may already be using Direct Connect to connect with an existing AWS deployment. It is possible to share Direct Connect and create connections to multiple VPCs, even across AWS accounts, or to provision access to remote Regions. While possible, it is not recommended to connect to data sources directly over the internet from a public subnet, due to security concerns. For more details on a variety of connectivity scenarios, see the AWS Direct Connect documentation.

Security Groups

When running Alteryx Server on AWS, be sure to check your security group settings when attempting to add a connection to a data source. You will need to customize your security groups based on your needs, as some data sources may require specific ports. Refer to the data source documentation on the specific source you are connecting to and the ports and protocols used
for traffic.

| Port | Permitted Traffic |
|---|---|
| 3389 | RDP access |
| 80 | HTTP web traffic |
| 443 | HTTPS web traffic |
| 81 | Used only with the AWS Marketplace offering, for client connections |
| 5985 | Used only with the AWS Marketplace offering, for Windows management |

Table 4: Security groups for Alteryx Server

Network Access Control Lists (NACLs)

Amazon VPC and Alteryx Server support NACLs as an optional additional network security component. NACLs are not stateful and tend to be more restrictive, so they are not recommended for general deployments. They may be useful for organizations with specific compliance concerns or other internal security requirements. NACLs are supported for controlling network traffic that relates to Alteryx Server.

Bastion Host (Jump Box)

In the case that Alteryx Server components are placed in a private subnet, we recommend that a bastion host, or jump box, is placed in the public subnet, with security group rules to allow traffic between the public jump box and the private server. This adds another level of control and helps limit the types of connections that can reach the Alteryx Server. For details on bastion host deployment on AWS, see the Linux Bastion Hosts on the AWS Cloud Quick Start.

Secure Sockets Layer (SSL)

The Gallery component of Alteryx Server is available over HTTP or HTTPS. If you deploy Gallery instances in a public subnet, we recommend HTTPS. For information on how to properly configure TLS, see the Alteryx Server documentation.

Best Practices

The following sections summarize best practices and tips for deploying Alteryx Server on AWS.

Deployment

• Deploy Alteryx Server on an instance that meets the minimum requirements: Microsoft Windows Server 2008 R2 (or later), at least 8 vCPUs, and at least 1 TB of Amazon Elastic Block Store (Amazon EBS) storage.
• Do not change the Alteryx Server Authentication Mode once it has been set. Changing the Authentication Mode requires a reinstall.
• Microsoft Windows Active Directory (Microsoft AD) or SAML 2.0 are the recommended authentication methods.
• The controller token is unique to each Alteryx Server installation, and administrators must safeguard it.
• Be sure to configure each required database driver on the server machine with the same version that is used for Designer clients.
• Alteryx Server supports two different mechanisms for persistence: MongoDB and SQLite. Choose MongoDB if you need a scalable or highly available architecture. Choose SQLite if you do not need to use the Gallery and your deployment is limited to scheduling workloads.
• Worker instances, Gallery instances, and user-managed MongoDB instances can be scaled for deployments supporting user groups of 20 or more.
• If you use the pay-as-you-go AWS Marketplace image for test purposes, be sure to note the 14-day trial period, and remember to turn your instance off once your trial is complete.

Scaling and Availability

• For a more resilient architecture, be sure to scale out Worker, Gallery, and persistence instances across multiple Availability Zones. Consider deploying instances across AWS Regions to reduce latency for users in different geographies or to improve access to data.
• Multiple Gallery instances can be configured behind a load balancer to handle the Gallery services at scale.
• When scaling Worker instances, you should increase the size of all Worker instances, as the Controller does not schedule on specific Worker instances by priority.
• A standby Controller can be deployed for failover. AWS tools such as the AWS CLI, Amazon Route 53, and Amazon CloudWatch can help automate failover.
• Scaling Alteryx Server to more instances will likely have licensing implications, because it is licensed by cores.

Network and Security

• Alteryx Server on AWS commonly processes information stored on premises. Be aware of the potential performance and cost implications of using large
amounts of data outside of AWS.
• When using Alteryx Server on AWS, ensure that you check your security group settings when attempting to add a connection to a data source. You will need to customize security groups based on your needs, as some data sources may require specific ports. Refer to documentation on the specific database you are connecting to and the ports and protocols used for traffic.
• Amazon VPC and Alteryx Server support NACLs as an optional additional network security component. NACLs may be useful for organizations with specific compliance concerns or other internal security requirements.
• Be sure your Alteryx Designer clients have connectivity to any Controllers you plan to schedule workflows on. This is an easily missed requirement when Alteryx Server is deployed in the cloud.

Performance

• Instance types with a larger ratio of memory to vCPUs will often run Alteryx workflows faster. Consider EC2 memory-optimized instance types, such as the R4, when working to improve performance.
• We recommend two vCPUs per simultaneous workflow.
• The user-defined Controller setting max sort/join memory manages the memory available to workflows that are RAM intensive. The best practice is to take the total memory available to the machine and subtract a suggested 4 GB of memory for OS processes. Then take that number and divide it by the number of simultaneous workflows assigned. For example: 32 GB – 4 = 28 GB / 4 simultaneous workflows = 7 GB max sort/join memory.
• For workflows using geospatial tools, use EBS Provisioned IOPS SSD (io1) or EBS General Purpose SSD (gp2) volumes that have been optimized for I/O-intensive tasks to increase performance.

Conclusion

AWS lets you deploy scalable analytic tools such as Alteryx Server. Using Alteryx Server on AWS is a cost-effective and flexible way to manage and deploy various configurations of Alteryx Server. In this whitepaper we have discussed
several considerations and best practices for deploying Alteryx Server on AWS. Please send comments or feedback on this paper to the paper's authors or helpfeedback@alteryx.com.

Contributors

The following individuals and organizations contributed to this document:

• Mike Ruiz, Solutions Architect, AWS
• Claudine Morales, Solutions Architect, AWS
• Matt Braun, Product Manager, Alteryx
• Mark Hayford, Amazon Web Services Architect, Alteryx

Further Reading

For additional information, see the following:

• Alteryx Community
• Alteryx Knowledge Base
• Alteryx Server Install Guide
• Alteryx SSL Information
• Alteryx Documentation

Document Revisions

| Date | Description |
|---|---|
| August 2019 | Edits to clarify information about simultaneous workflows |
| August 2018 | First publication |

Best Practices for Deploying Amazon WorkSpaces

Network Access, Directory Services, Cost Optimization, and Security

December 2020

This version has been archived. For the latest technical information, refer to https://docs.aws.amazon.com/whitepapers/latest/best-practices-deploying-amazon-workspaces/welcome.html

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement
between AWS and its customers.

© 2020, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
WorkSpaces Requirements
Network Considerations
VPC Design
Network Interfaces
Traffic Flow
Client Device to WorkSpace
Amazon WorkSpaces Service to VPC
Example of a Typical Configuration
AWS Directory Service
AD DS Deployment Scenarios
Scenario 1: Using AD Connector to Proxy Authentication to On-Premises Active Directory Service
Scenario 2: Extending On-Premises AD DS into AWS (Replica)
Scenario 3: Standalone Isolated Deployment Using AWS Directory Service in the AWS Cloud
Scenario 4: AWS Microsoft AD and a Two-Way Transitive Trust to On-Premises
Scenario 5: AWS Microsoft AD Using a Shared Services Virtual Private Cloud (VPC)
Scenario 6: AWS Microsoft AD, Shared Services VPC, and a One-Way Trust to On-Premises
Design Considerations
VPC Design
Active Directory: Sites and Services
Multi-Factor Authentication (MFA)
Disaster Recovery / Business Continuity
WorkSpaces Interface VPC Endpoint (AWS PrivateLink) – API Calls
Amazon WorkSpaces Tags
Automating Amazon WorkSpaces Deployment
Common WorkSpaces Automation Methods
WorkSpaces Deployment Automation Best Practices
Amazon WorkSpaces Language Packs
Amazon WorkSpaces Profile Management
Folder Redirection
Best Practices
Things to Avoid
Other Considerations
Profile Settings
Amazon WorkSpaces Volumes
Amazon WorkSpaces Logging
Amazon WorkSpaces Migrate
Well-Architected Framework
Security
Encryption in Transit
Network Interfaces
WorkSpaces Security Group
Encrypted WorkSpaces
Access Control Options and Trusted Devices
IP Access Control Groups
Monitoring or Logging Using Amazon CloudWatch
Cost Optimization
Self-Service WorkSpace Management Capabilities
Amazon WorkSpaces Cost Optimizer
Troubleshooting
AD Connector Cannot Connect to Active Directory
Troubleshooting a WorkSpace Custom Image Creation Error
Troubleshooting a Windows WorkSpace Marked as Unhealthy
Collecting a WorkSpaces Support Log Bundle for Debugging
How to Check Latency to the Closest AWS Region
Conclusion
Contributors
Further Reading
Document Revisions

Abstract

This whitepaper outlines a set of best practices for the deployment of Amazon WorkSpaces. The paper covers network considerations, directory services and user authentication, security, and monitoring and logging. This whitepaper was written to enable quick access to relevant information. It is intended for network engineers, directory engineers, or security engineers.

Amazon Web Services – Best Practices for Deploying Amazon WorkSpaces

Introduction

Amazon WorkSpaces is a managed desktop computing service in the cloud. Amazon WorkSpaces removes the burden of procuring or deploying hardware or installing complex software, and delivers a desktop experience with either a few clicks on the AWS Management Console, using the Amazon Web Services (AWS) command line interface (CLI), or by using the application programming interface (API). With Amazon WorkSpaces, you can launch a Microsoft Windows or Amazon Linux desktop within minutes, which enables you to connect to and access your desktop software securely, reliably, and quickly, from on premises or from an external network.

You can:

• Leverage your existing on-premises Microsoft Active Directory (AD) by using AWS Directory Service: Active Directory Connector (AD Connector)
• Extend your directory to the AWS Cloud
• Build a managed directory with AWS Directory Service Microsoft AD or Simple AD to manage your users and WorkSpaces
• Leverage your on-premises or cloud-hosted RADIUS server with AD Connector to provide multi-factor authentication (MFA) to your WorkSpaces

You can automate the provisioning of Amazon WorkSpaces by using the CLI or API, which enables you to integrate Amazon WorkSpaces into your existing provisioning workflows.
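Such a provisioning workflow typically batches requests to the CreateWorkspaces API. A sketch that builds one request entry per user; the directory ID, bundle ID, user name, and tag are illustrative placeholders you would replace with values from your own account:

```python
def workspace_request(directory_id: str, user_name: str, bundle_id: str) -> dict:
    """Build one entry for the Workspaces list of the CreateWorkspaces API."""
    return {
        "DirectoryId": directory_id,
        "UserName": user_name,
        "BundleId": bundle_id,
        # Illustrative tag so downstream tooling can identify pipeline-built desktops.
        "Tags": [{"Key": "provisioned-by", "Value": "onboarding-pipeline"}],
    }

# Hypothetical IDs; real values come from your directory and bundle listings.
batch = [workspace_request("d-906711f1db", "jdoe", "wsb-b0s22j3d7")]
# To execute: boto3.client("workspaces").create_workspaces(Workspaces=batch)
```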
For security, in addition to the integrated network encryption that the Amazon WorkSpaces service provides, you can also enable encryption at rest for your WorkSpaces. See the Encrypted WorkSpaces section of this document. You can deploy applications to your WorkSpaces by using your existing on-premises tools, such as Microsoft System Center Configuration Manager (SCCM), Puppet Enterprise, or Ansible.

The following sections provide details about Amazon WorkSpaces, explain how the service works, describe what you need to launch the service, and tell you what options and features are available for you to use.

WorkSpaces Requirements

The Amazon WorkSpaces service requires three components to deploy successfully:

• WorkSpaces client application — An Amazon WorkSpaces supported client device. See Getting Started with Your WorkSpace. You can also use Personal Computer over Internet Protocol (PCoIP) Zero Clients to connect to WorkSpaces. For a list of available devices, see PCoIP Zero Clients for Amazon WorkSpaces.
• A directory service to authenticate users and provide access to their WorkSpace — Amazon WorkSpaces currently works with AWS Directory Service and Microsoft AD. You can use your on-premises AD server with AWS Directory Service to support your existing enterprise user credentials with Amazon WorkSpaces.
• Amazon Virtual Private Cloud (Amazon VPC) in which to run your Amazon WorkSpaces — You'll need a minimum of two subnets for an Amazon WorkSpaces deployment, because each AWS Directory Service construct requires two subnets in a Multi-AZ deployment.

Network Considerations

Each WorkSpace is associated with the specific Amazon VPC and AWS Directory Service construct that you used to create it. All AWS Directory Service constructs (Simple AD, AD Connector, and Microsoft AD) require two subnets to operate, each in different Availability Zones (AZs). Subnets are permanently affiliated with a Directory Service construct and can't be modified after it is created.
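Because subnet sizing can't be changed later, it helps to compute usable capacity up front. A sketch using Python's `ipaddress` module; the five-reserved-addresses figure (the first four and the last address in each subnet) is standard AWS VPC behavior, which this paper restates later:

```python
import ipaddress

AWS_RESERVED_PER_SUBNET = 5  # first four addresses plus the last one

def usable_workspace_ips(cidr: str) -> int:
    """Usable addresses in one subnet after AWS's five reserved addresses."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED_PER_SUBNET

# A Directory Service construct needs two subnets; size both for growth.
print(usable_workspace_ips("10.0.0.0/24"))  # 251
```

Remember that directory constructs such as AD Connector also consume an address in each subnet, so treat the result as an upper bound for WorkSpaces.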
Because of this, it's imperative that you determine the right subnet sizes before you create the Directory Service construct. Carefully consider the following before you create the subnets:

• How many WorkSpaces will you need over time?
• What is the expected growth?
• What types of users will you need to accommodate?
• How many AD domains will you connect?
• Where do your enterprise user accounts reside?

Amazon recommends defining user groups, or personas, based on the type of access and the user authentication you require, as part of your planning process. Answers to these questions are helpful when you need to limit access to certain applications or resources. Defined user personas can help you segment and restrict access using AWS Directory Service, network access control lists, routing tables, and VPC security groups. Each AWS Directory Service construct uses two subnets and applies the same settings to all WorkSpaces that launch from that construct. For example, you can use a security group that applies to all WorkSpaces attached to an AD Connector to specify whether MFA is required or whether an end user can have local administrator access on their WorkSpace.

Note: Each AD Connector connects to your existing enterprise Microsoft AD. To take advantage of this capability and specify an Organizational Unit (OU), you must construct your Directory Service to take your user personas into consideration.

VPC Design

This section describes best practices for sizing your VPC and subnets, traffic flow, and implications for directory services design. Here are a few things to consider when designing the VPC, subnets, security groups, routing policies, and network access control lists (ACLs) for your Amazon WorkSpaces, so that you can build your WorkSpaces environment for scale, security, and ease of management:

• VPC — We
recommend using a separate VPC specifically for your WorkSpaces deployment. With a separate VPC, you can specify the necessary governance and security guardrails for your WorkSpaces by creating traffic separation.
• Directory Services — Each AWS Directory Service construct requires a pair of subnets that provides a highly available directory service split between Amazon AZs.
• Subnet size — WorkSpaces deployments are tied to a directory construct and reside in the same VPC subnets as your chosen AWS Directory Service. A few considerations:
o Subnet sizes are permanent and cannot change. You should leave ample room for future growth.
o You can specify a default security group for your chosen AWS Directory Service. The security group applies to all WorkSpaces that are associated with the specific AWS Directory Service construct.
o You can have multiple AWS Directory Services use the same subnet.

Consider future plans when you design your VPC. For example, you might want to add management components, such as an antivirus server, a patch management server, or an AD or RADIUS MFA server. It's worth planning for additional available IP addresses in your VPC design to accommodate such requirements.

For in-depth guidance and considerations for VPC design and subnet sizing, see the re:Invent presentation How Amazon.com is Moving to Amazon WorkSpaces.

Network Interfaces

Each WorkSpace has two elastic network interfaces (ENIs): a management network interface (eth0) and a primary network interface (eth1). AWS uses the management network interface to manage the WorkSpace — it's the interface on which your client connection terminates. AWS uses a private IP address range for this interface. For network routing to work properly, you can't use this private address space on any network that can communicate with your WorkSpaces VPC. For a list of the private IP ranges that we use on a per-Region basis, see Amazon WorkSpaces Details.
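A quick way to validate that a candidate VPC or on-premises CIDR doesn't collide with the management interface range is `ipaddress.overlaps`. A sketch; the management range shown is a placeholder, so substitute the per-Region range published on the Amazon WorkSpaces Details page:

```python
import ipaddress

def conflicts_with_management_range(candidate_cidr: str, management_cidr: str) -> bool:
    """True if the candidate CIDR overlaps the WorkSpaces management range."""
    return ipaddress.ip_network(candidate_cidr).overlaps(
        ipaddress.ip_network(management_cidr))

# Placeholder management range -- look up the real one for your Region.
print(conflicts_with_management_range("10.0.0.0/16", "198.19.0.0/16"))  # False
```

Running this check against every network that can reach the WorkSpaces VPC catches routing conflicts before the directory construct (and its permanent subnets) is created.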
Note: Amazon WorkSpaces and their associated management network interfaces do not reside in your VPC, and you cannot view the management network interface or the Amazon Elastic Compute Cloud (Amazon EC2) instance ID in your AWS Management Console (see Figures 4, 5, and 6). However, you can view and modify the security group settings of your primary network interface (eth1) in the console. The primary network interface of each WorkSpace does count toward your ENI Amazon EC2 resource quotas. For large deployments of Amazon WorkSpaces, you need to open a support ticket via the AWS Management Console to increase your ENI quotas.

Traffic Flow

You can break down Amazon WorkSpaces traffic into two main components:

• The traffic between the client device and the Amazon WorkSpaces service
• The traffic between the Amazon WorkSpaces service and the customer network

In the next section, we discuss both of these components.

Client Device to WorkSpace

Regardless of its location (on premises or remote), the device running the Amazon WorkSpaces client uses the same two ports for connectivity to the Amazon WorkSpaces service. The client uses port 443 (HTTPS port) for all authentication and session-related information, and port 4172 (PCoIP port), with both Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), for pixel streaming to a given WorkSpace and network health checks. Traffic on both ports is encrypted. Port 443 traffic is used for authentication and session information, and uses TLS for encrypting the traffic. Pixel streaming traffic uses AES 256-bit encryption for communication between the client and eth0 of the WorkSpace, via the streaming gateway.
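The two ports above translate directly into egress rules on the client network. A sketch that builds security-group-style rule entries; the rule dictionary shape is our own, and the destination CIDR used in the example is a placeholder for the published per-Region PCoIP gateway ranges:

```python
def workspaces_egress_rules(gateway_cidrs: list[str]) -> list[dict]:
    """Egress entries for WorkSpaces client traffic: 443 (TCP) and 4172 (TCP+UDP)."""
    # Authentication and session information over HTTPS.
    rules = [{"protocol": "tcp", "port": 443, "cidr": "0.0.0.0/0"}]
    # Pixel streaming and health checks, restricted to the published gateway ranges.
    for cidr in gateway_cidrs:
        rules.append({"protocol": "tcp", "port": 4172, "cidr": cidr})
        rules.append({"protocol": "udp", "port": 4172, "cidr": cidr})
    return rules

# Placeholder CIDR; use the published PCoIP gateway ranges for your Region.
rules = workspaces_egress_rules(["203.0.113.0/24"])
```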
network to the AWS streaming gateway and network health check endpoints by allowing only outbound traffic on port 4172 to the specific AWS Regions in which you’re using Amazon WorkSpaces For the IP ranges and network health check endpoints see Amazon WorkSpaces PCoIP Gateway IP Ranges The Amazon WorkSpaces client has a built in network status check This utility shows users whether their network can support a connection by way of a status indicator on the bottom right of the application A more detailed view of the network status can be accessed by choosing Network on the topright side of the client See Figure 1 Figure 1 — WorkSpaces Client : network check A user initiates a connection from their client to the Amazon WorkSpaces service by supplying their login information for the directory used by the Directory Service construct typically their corporate directory The login information is sent via HTTPS to the authentication gateways of the Amazon WorkSpaces service in the Region where the WorkSpace is located The authentication gateway of the Amazon WorkSpaces service then forwards the traffic to the specific AWS Directory Service construct associated with your WorkSpace For example when using the AD Connector the AD Connector forwards the authentication request directly to your AD service which could be onpremises or in an AWS VPC See the AD DS Deployment Scenarios section of this document The AD Connector does not store any authentication information and it acts as a stateless ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 6 proxy As a result it’s imperative that the AD Connector has connec tivity to an AD server The AD Connector determines which AD server to connect to by using the DNS servers that you define when you create the AD Connector If you’re using an AD Connector and you have MFA enabled on the directory the MFA token is checked before the directory service authentication Should the MFA validation fail the user’s login 
information is not forwarded to your AWS Directory Service Once a user is authenticated the streaming traffic starts by using port 4172 (PCoIP port) through the AWS streaming gateway to the WorkSpace Session related information is still exchanged via HTTPS throughout the session The streaming traffic uses the first ENI on the WorkSpace ( eth0 on the WorkSpace) that is not connected to your VPC The network connection from the streaming gateway to the ENI is managed by AWS In the event of a connection failure from the streaming gateways to the WorkSpaces streamin g ENI a CloudWatch event is generated See the Monitoring or Logging Using Amazon CloudWatch section of this document The amount of data sent between the Amazon WorkSpaces service and the client depends on the level of pixel activity To ensure an optimal experience for users we recommend that the round trip time (RTT) between the WorkSpaces client and the AWS Region where your WorkSpaces are located is less than 100 m illiseconds (ms) Typically this means you r WorkSpaces client is located less than two thousand miles from the Region in which the WorkSpace is being hosted We provide a Connection Health Check webpage to help you determine the m ost optimal AWS Region to connect to the Amazon WorkSpaces service Amazon WorkSpaces Service to VPC After a connection is authenticated from a client to a WorkSpace and streaming traffic is initiated your WorkSpaces client will display either a Windows or Linux desktop (your Amazon WorkSpace) that is connected to your virtual private cloud ( VPC) and your network should show that you have established that connection The WorkSpace’s primary ENI identified as eth1 will have an IP address assigned to it f rom the Dynamic Host Configuration Protocol (DHCP) service that is provided by your VPC typically from the same subnets as your AWS Directory Service The IP address stays with the WorkSpace for the duration of the life of the WorkSpace The ENI in your V PC has access to 
any resource in the VPC and to any network you have connected to your VPC (via VPC peering, an AWS Direct Connect connection, or a VPN connection). ENI access to your network resources is determined by the route table of the subnet and the default security group that your AWS Directory Service configures for each WorkSpace, as well as any additional security groups that you assign to the ENI. You can add security groups to the ENI facing your VPC at any time by using the AWS Management Console or the AWS CLI. (For more information on security groups, see Security Groups for Your WorkSpaces.) In addition to security groups, you can use your preferred host-based firewall on a given WorkSpace to limit network access to resources within the VPC. Figure 4 in the AD DS Deployment Scenarios section of this whitepaper shows the traffic flow described.

Example of a Typical Configuration

Let's consider a scenario where you have two types of users, and your AWS Directory Service uses a centralized AD for user authentication:

• Workers who need full access from anywhere (for example, full-time employees) — These users will have full access to the internet and the internal network, and they will pass through a firewall from the VPC to the on-premises network.
• Workers who should have only restricted access from inside the corporate network (for example, contractors and consultants) — These users have restricted internet access, through a proxy server, to specific websites in the VPC, and will have limited network access in the VPC and to the on-premises network.

You'd like to give full-time employees the ability to have local administrator access on their WorkSpace to install software, and you would like to enforce two-factor authentication with MFA. You also want to allow full-time employees to access the internet without restrictions from their WorkSpace. For contractors, you want to block local administrator access so that they
can only use specific pre-installed applications. You want to apply restrictive network access controls using security groups for these WorkSpaces. You need to open ports 80 and 443 to specific internal websites only, and you want to entirely block their access to the internet.

In this scenario, there are two completely different types of user personas, with different requirements for network and desktop access. It's a best practice to manage and configure their WorkSpaces differently. You will need to create two AD Connectors, one for each user persona. Each AD Connector requires two subnets that have enough IP addresses available to meet your WorkSpaces usage growth estimates.

Note: Each AWS VPC subnet consumes five IP addresses (the first four and the last IP address) for management purposes, and each AD Connector consumes one IP address in each subnet in which it persists.

Further considerations for this scenario are as follows:
to limit their access to specific internal websites.

To plan for failure, design for high availability, and limit cross-AZ traffic charges, you should have two NAT gateways and NAT or proxy servers in two different subnets in a Multi-AZ deployment. In Regions that have more than two AZs, the two AZs that you select for the public subnets should match the two AZs that you use for your WorkSpaces subnets. You can route all traffic from each WorkSpaces AZ to the corresponding public subnet to limit cross-AZ traffic charges and provide easier management. Figure 2 shows the VPC configuration.

Figure 2 — High-level VPC design

The following information describes how to configure the two different WorkSpaces types.

To configure WorkSpaces for full-time employees:

1. In the Amazon WorkSpaces Management Console, choose the Directories option on the menu bar.
2. Choose the directory that hosts your full-time employees.
3. Choose Local Administrator Setting. By enabling this option, any newly created WorkSpace will have local administrator privileges.

To grant internet access, configure NAT for outbound internet access from your VPC. To enable MFA, you need to specify a RADIUS server: server IPs, ports, and a preshared key. For full-time employees' WorkSpaces, inbound traffic to the WorkSpace can be limited to Remote Desktop Protocol (RDP) from the Helpdesk subnet by applying a default security group via the AD Connector settings.

To configure WorkSpaces for contractors and consultants:

1. In the Amazon WorkSpaces Management Console, disable Internet Access and the Local Administrator setting.
2. Add a security group under the Security Group settings section to enforce a security group for all new WorkSpaces created under that directory.

For consultants' WorkSpaces, limit outbound and inbound traffic to the WorkSpaces by applying a default security group
via the AD Connector settings to all WorkSpaces associated with the AD Connector. The security group prevents outbound access from the WorkSpaces to anything other than HTTP and HTTPS traffic, and limits inbound traffic to RDP from the Helpdesk subnet in the on-premises network.

Note: The security group applies only to the ENI that is in the VPC (eth1 on the WorkSpace); access to the WorkSpace from the WorkSpaces client is not restricted as a result of a security group.

Figure 3 shows the final WorkSpaces VPC design.

Figure 3 — WorkSpaces design with user personas

AWS Directory Service

As mentioned in the introduction, AWS Directory Service is a core component of Amazon WorkSpaces. With AWS Directory Service, you can create three types of directories for use with Amazon WorkSpaces:

• AWS Managed Microsoft AD, a managed Microsoft AD powered by Windows Server 2012 R2. AWS Managed Microsoft AD is available in Standard or Enterprise Edition.
• Simple AD, a standalone, Microsoft AD-compatible managed directory service powered by Samba 4.
• AD Connector, a directory proxy that redirects authentication requests and user or group lookups to your existing on-premises Microsoft AD.

The following section describes communication flows for authentication between the Amazon WorkSpaces brokerage service and AWS Directory Service, best practices for implementing WorkSpaces with AWS Directory Service, and advanced concepts such as MFA. It also discusses infrastructure architecture concepts for running Amazon WorkSpaces at scale, including requirements on Amazon VPC and AWS Directory Service and integration with on-premises Microsoft AD Domain Services (AD DS).

AD DS Deployment Scenarios

Amazon WorkSpaces is backed by AWS Directory Service, and the proper design and deployment of the directory service is critical. The following scenarios build on the Active Directory Domain Services on AWS Quick Start guide and describe
the best practice deployment options for AD DS when used with Amazon WorkSpaces. The Design Considerations section of this document details the specific requirements and best practices for using AD Connector with WorkSpaces, which is an integral part of the overall WorkSpaces design concept.

• Scenario 1: Using AD Connector to proxy authentication to on-premises AD DS — In this scenario, network connectivity (VPN/Direct Connect) is in place to the customer, with all authentication proxied via AWS Directory Service (AD Connector) to the customer's on-premises AD DS.
• Scenario 2: Extending on-premises AD DS into AWS (Replica) — This scenario is similar to Scenario 1, but here a replica of the customer's AD DS is deployed on AWS in combination with AD Connector, reducing the latency of authentication and query requests to AD DS and the AD DS global catalog.
• Scenario 3: Standalone isolated deployment using AWS Directory Service in the AWS Cloud — This is an isolated scenario and doesn't include connectivity back to the customer for authentication. This approach uses AWS Directory Service (Microsoft AD) and AD Connector. Although this scenario doesn't rely on connectivity to the customer for authentication, it does make provision for application traffic, where required, over VPN or Direct Connect.
• Scenario 4: AWS Microsoft AD and a two-way transitive trust to on premises — This scenario includes the AWS Managed Microsoft AD service with a two-way transitive trust to the on-premises Microsoft AD forest.
• Scenario 5: AWS Microsoft AD using a Shared Services VPC — This scenario uses AWS Managed Microsoft AD in a Shared Services VPC as an identity domain for multiple AWS services (Amazon EC2, Amazon WorkSpaces, and so on), while using AD Connector to proxy Lightweight Directory Access Protocol (LDAP) user authentication requests to the AD domain controllers.
• Scenario 6: AWS Microsoft AD Shared
Services VPC and a one-way trust to on-premises AD — This scenario is similar to Scenario 5, but it includes disparate identity and resource domains, using a one-way trust to the on-premises domain.

Scenario 1: Using AD Connector to Proxy Authentication to On-Premises Active Directory Service

This scenario is for customers who don't want to extend their on-premises AD service into AWS, or for whom a new deployment of AD DS is not an option. Figure 4 depicts each of the components at a high level and shows the user authentication flow.

Figure 4 — AD Connector to on-premises Active Directory

In this scenario, AWS Directory Service (AD Connector) is used for all user and MFA authentication, which is proxied through the AD Connector to the customer's on-premises AD DS (see Figure 5). For details on the protocols and encryption used for the authentication process, see the Security section of this document.

Figure 5 — User authentication via the Authentication Gateway

Scenario 1 shows a hybrid architecture where the customer might already have resources in AWS, as well as resources in an on-premises data center, that could be accessed via Amazon WorkSpaces. The customer can leverage their existing on-premises AD DS and RADIUS servers for user and MFA authentication.

This architecture uses the following components or constructs:

AWS

• Amazon VPC — Creation of an Amazon VPC with at least two private subnets across two AZs.
• DHCP options set — Creation of an Amazon VPC DHCP options set. This allows a customer-specified domain name and domain name servers (DNS) (on-premises services) to be defined. For more information, see DHCP options sets.
• Amazon virtual private gateway — Enables communication with your own network over an IPsec VPN tunnel or an AWS Direct Connect connection.
• AWS Directory Service — AD Connector is deployed into a pair of Amazon VPC
private subnets.
• Amazon WorkSpaces — WorkSpaces are deployed in the same private subnets as the AD Connector. For more information, see the Active Directory: Sites and Services section of this document.

Customer

• Network connectivity — Corporate VPN or Direct Connect endpoints.
• AD DS — Corporate AD DS.
• MFA (optional) — Corporate RADIUS server.
• End user devices — Corporate or bring your own license (BYOL) end user devices (such as Windows, Macs, iPads, Android tablets, zero clients, and Chromebooks) used to access the Amazon WorkSpaces service. See this list of client applications for supported devices and web browsers.

Although this solution is great for customers who don't want to deploy AD DS into the cloud, it does come with some caveats:

• Reliance on connectivity — If connectivity to the data center is lost, users cannot log in to their respective WorkSpaces, and existing connections remain active only for the Kerberos Ticket-Granting Ticket (TGT) lifetime.
• Latency — If latency exists on the connection (more likely with VPN than with Direct Connect), WorkSpaces authentication and any AD DS-related activity, such as Group Policy (GPO) enforcement, will take more time.
• Traffic costs — All authentication must traverse the VPN or Direct Connect link, so costs depend on the connection type: either Data Transfer Out from Amazon EC2 to the internet, or Data Transfer Out (Direct Connect).

Note: AD Connector is a proxy service. It doesn't store or cache user credentials. Instead, all authentication, lookup, and management requests are handled by your AD. An account with delegation privileges is required in your directory service, with rights to read all user information and join a computer to the domain.

In general, the WorkSpaces experience is highly dependent on item 5 shown in Figure 4. For this scenario, the WorkSpaces authentication experience is highly dependent on the network
link between the customer's AD and the WorkSpaces VPC. The customer should ensure that this link is highly available.

Scenario 2: Extending On-Premises AD DS into AWS (Replica)

This scenario is similar to Scenario 1; however, in this scenario a replica of the customer's AD DS is deployed on AWS in combination with AD Connector. This reduces the latency of authentication and query requests to AD DS. Figure 6 shows a high-level view of each of the components and the user authentication flow.

Figure 6 — Extend customer Active Directory Domain to the cloud

As in Scenario 1, AD Connector is used for all user and MFA authentication, which in turn is proxied to the customer's AD DS (see Figure 5). In this scenario, the customer's AD DS is deployed across AZs on Amazon EC2 instances that are promoted to be domain controllers in the customer's on-premises AD forest, running in the AWS Cloud. Each domain controller is deployed into VPC private subnets to make AD DS highly available in the AWS Cloud. For best practices for deploying AD DS on AWS, see the Design Considerations section of this document.

After WorkSpaces instances are deployed, they have access to the cloud-based domain controllers for secure, low-latency directory services and DNS. All network traffic, including AD DS communication, authentication requests, and AD replication, is secured either within the private subnets or across the customer's VPN tunnel or Direct Connect.

This architecture uses the following components or constructs:

AWS

• Amazon VPC — Creation of an Amazon VPC with at least four private subnets across two AZs: two for the customer's AD DS, and two for AD Connector and Amazon WorkSpaces.
• DHCP options set — Creation of an Amazon VPC DHCP options set. This allows the customer to define a specified domain name and DNS servers (AD DS local). For more information, see DHCP options sets.
• Amazon virtual private gateway — Enables communication with a customer-owned network
over an IPsec VPN tunnel or AWS Direct Connect connection.
• Amazon EC2 —
  o Customer corporate AD DS domain controllers deployed on Amazon EC2 instances in dedicated private VPC subnets.
  o Customer (optional) RADIUS servers for MFA on Amazon EC2 instances in dedicated private VPC subnets.
• AWS Directory Service — AD Connector is deployed into a pair of Amazon VPC private subnets.
• Amazon WorkSpaces — WorkSpaces are deployed into the same private subnets as the AD Connector. For more information, see the Active Directory: Sites and Services section of this document.

Customer

• Network connectivity — Corporate VPN or AWS Direct Connect endpoints.
• AD DS — Corporate AD DS (required for replication).
• MFA (optional) — Corporate RADIUS server.
• End user devices — Corporate or BYOL end user devices (such as Windows, Macs, iPads, Android tablets, zero clients, and Chromebooks) used to access the Amazon WorkSpaces service. See this list of client applications for supported devices and web browsers.

This solution doesn't have the same caveats as Scenario 1, because Amazon WorkSpaces and AWS Directory Service have no reliance on the connectivity being in place:

• Reliance on connectivity — If connectivity to the customer's data center is lost, end users can continue to work, because authentication and (optional) MFA are processed locally.
• Latency — With the exception of replication traffic, all authentication is local and low latency. See the Active Directory: Sites and Services section of this document.
• Traffic costs — In this scenario, authentication is local, with only AD DS replication having to traverse the VPN or Direct Connect link, reducing data transfer.

In general, the WorkSpaces experience is enhanced and isn't highly dependent on item 5 as shown in Figure 6. This is also the case when a customer wants to scale WorkSpaces to thousands of desktops, especially in relation to AD DS global catalog queries, as this
traffic remains local to the WorkSpaces environment.

Scenario 3: Standalone Isolated Deployment Using AWS Directory Service in the AWS Cloud

In this scenario, shown in Figure 7, AD DS is deployed in the AWS Cloud in a standalone, isolated environment, and AWS Directory Service is used exclusively. Instead of fully managing AD DS themselves, customers can rely on AWS Directory Service for tasks such as building a highly available directory topology, monitoring domain controllers, and configuring backups and snapshots.

Figure 7 — Cloud only: AWS Directory Service (Microsoft AD)

As in Scenario 2, the AD DS (Microsoft AD) is deployed into dedicated subnets that span two AZs, making AD DS highly available in the AWS Cloud. In addition to Microsoft AD, AD Connector (as in all three scenarios) is deployed for WorkSpaces authentication and MFA. This ensures separation of roles and functions within the Amazon VPC, which is a standard best practice. For more information, see the Design Considerations section of this document.

Scenario 3 is a standard all-in configuration that works well for customers who want AWS to manage the deployment, patching, high availability, and monitoring of the AWS Directory Service. The scenario also works well for proof-of-concept, lab, and production environments because of its isolation.

In addition to the placement of AWS Directory Service, Figure 7 shows the flow of traffic from a user to a WorkSpace, and how the WorkSpace interacts with the AD server and MFA server.

This architecture uses the following components or constructs:

AWS

• Amazon VPC — Creation of an Amazon VPC with at least four private subnets across two AZs: two for AD DS (Microsoft AD), and two for AD Connector and WorkSpaces.
• DHCP options set — Creation of an Amazon VPC DHCP options set. This allows a customer to define a specified domain name and
DNS servers (Microsoft AD). For more information, see DHCP options sets.
• Optional: Amazon virtual private gateway — Enables communication with a customer-owned network over an IPsec VPN tunnel or AWS Direct Connect connection. Use it for accessing on-premises back-end systems.
• AWS Directory Service — Microsoft AD deployed into a dedicated pair of VPC subnets (AD DS managed service).
• Amazon EC2 — Customer (optional) RADIUS servers for MFA.
• AWS Directory Service — AD Connector is deployed into a pair of Amazon VPC private subnets.
• Amazon WorkSpaces — WorkSpaces are deployed into the same private subnets as the AD Connector. For more information, see the Active Directory: Sites and Services section of this document.

Customer

• Optional: Network connectivity — Corporate VPN or AWS Direct Connect endpoints.
• End user devices — Corporate or BYOL end user devices (such as Windows, Macs, iPads, Android tablets, zero clients, and Chromebooks) used to access the Amazon WorkSpaces service. See this list of client applications for supported devices and web browsers.

Like Scenario 2, this scenario doesn't have issues with reliance on connectivity to the customer's on-premises data center, latency, or data transfer out costs (except where internet access is enabled for WorkSpaces within the VPC), because by design this is an isolated, cloud-only scenario.

Scenario 4: AWS Microsoft AD and a Two-Way Transitive Trust to On Premises

In this scenario, shown in Figure 8, AWS Managed AD is deployed in the AWS Cloud with a two-way transitive trust to the customer's on-premises AD. User accounts and WorkSpaces are created in the Managed AD, with the AD trust enabling resources to be accessed in the on-premises environment.

Figure 8 — AWS Microsoft AD and a two-way transitive trust to on premises

As in Scenario 3, the AD DS (Microsoft AD) is deployed into dedicated subnets that span two AZs, making AD DS highly available in the AWS
Cloud. This scenario works well for customers who want a fully managed AWS Directory Service, including deployment, patching, high availability, and monitoring. This scenario also allows WorkSpaces users to access AD-joined resources on their existing networks.

This scenario requires a domain trust to be in place, and security groups and firewall rules need to allow communication between the two Active Directory domains.

In addition to the placement of AWS Directory Service, Figure 8 shows the flow of traffic from a user to a WorkSpace, and how the WorkSpace interacts with the AD server and MFA server.

This architecture uses the following components or constructs:

AWS

• Amazon VPC — Creation of an Amazon VPC with at least four private subnets across two AZs: two for AD DS (Microsoft AD), and two for AD Connector and WorkSpaces.
• DHCP options set — Creation of an Amazon VPC DHCP options set. This allows a customer to define a specified domain name and DNS servers (Microsoft AD). For more information, see DHCP options sets.
• Optional: Amazon virtual private gateway — Enables communication with a customer-owned network over an IPsec VPN tunnel or AWS Direct Connect connection. Use it for accessing on-premises back-end systems.
• AWS Directory Service — Microsoft AD deployed into a dedicated pair of VPC subnets (AD DS managed service).
• Amazon EC2 — Customer (optional) RADIUS servers for MFA.
• Amazon WorkSpaces — WorkSpaces are deployed into the same private subnets as the AD Connector. For more information, see the Active Directory: Sites and Services section of this document.

Customer

• Network connectivity — Corporate VPN or AWS Direct Connect endpoints.
• End user devices — Corporate or BYOL end user devices (such as Windows, Macs, iPads, Android tablets, zero clients, and Chromebooks) used to access the Amazon WorkSpaces service. See this list of client applications for supported devices and web browsers.
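The directory and trust that Scenario 4 relies on map to two AWS Directory Service API calls: one to create the AWS Managed Microsoft AD, and one to establish the two-way forest trust. The following sketch builds the request parameters using boto3 naming; all directory names, IDs, passwords, and addresses are hypothetical placeholders, and the actual API calls are shown commented out so the sketch runs without AWS credentials:

```python
# Sketch of the Scenario 4 building blocks using boto3 parameter shapes.
# All names, IDs, CIDRs, and passwords below are hypothetical placeholders.

def managed_ad_params(vpc_id, subnet_ids):
    """Request parameters for ds.create_microsoft_ad (AWS Managed Microsoft AD)."""
    return {
        "Name": "corp.example.com",           # fully qualified directory DNS name
        "ShortName": "CORP",                  # NetBIOS name
        "Password": "<admin-password>",       # password for the directory Admin account
        "Edition": "Standard",                # or "Enterprise"
        "VpcSettings": {
            "VpcId": vpc_id,
            "SubnetIds": subnet_ids,          # two private subnets in different AZs
        },
    }

def two_way_trust_params(directory_id, on_prem_dns_ips):
    """Request parameters for ds.create_trust (two-way forest trust to on premises)."""
    return {
        "DirectoryId": directory_id,
        "RemoteDomainName": "onprem.example.com",
        "TrustPassword": "<trust-password>",  # must match the trust password set on premises
        "TrustDirection": "Two-Way",
        "TrustType": "Forest",
        # Conditional forwarders so the Managed AD can resolve the on-premises domain
        "ConditionalForwarderIpAddrs": on_prem_dns_ips,
    }

if __name__ == "__main__":
    ad = managed_ad_params("vpc-0abc", ["subnet-0aaa", "subnet-0bbb"])
    trust = two_way_trust_params("d-1234567890", ["10.0.0.10", "10.0.0.11"])
    # With credentials configured, the calls would be:
    #   import boto3
    #   ds = boto3.client("ds")
    #   directory_id = ds.create_microsoft_ad(**ad)["DirectoryId"]
    #   trust_id = ds.create_trust(**trust)["TrustId"]
    print(ad["VpcSettings"]["SubnetIds"], trust["TrustDirection"])
```

Note that the trust call succeeds only after the security group and firewall rules described above allow AD-to-AD communication, and after the matching trust is configured on the on-premises forest.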
This solution requires connectivity to the customer's on-premises data center to allow the trust to operate. If WorkSpaces users use resources on the on-premises network, latency and outbound data transfer costs need to be considered.

Scenario 5: AWS Microsoft AD Using a Shared Services Virtual Private Cloud (VPC)

In this scenario, shown in Figure 9, AWS Managed AD is deployed in the AWS Cloud, providing authentication services for workloads that are either already hosted in AWS or planned to be as part of a broader migration. The best practice recommendation is to place Amazon WorkSpaces in a dedicated VPC. Customers should also create a specific AD organizational unit (OU) to organize the WorkSpaces computer objects.

To deploy WorkSpaces with a shared services VPC hosting Managed AD, deploy an AD Connector (ADC) with an ADC service account created in the Managed AD. The service account requires permissions to create computer objects in the WorkSpaces-designated OU in the shared services Managed AD.

Figure 9 — AWS Microsoft AD using a shared services VPC

This architecture uses the following components or constructs:

AWS

• Amazon VPC — Creation of an Amazon VPC with at least two private subnets across two AZs (for AD Connector and WorkSpaces).
• DHCP options set — Creation of an Amazon VPC DHCP options set. This allows a customer to define a specified domain name and DNS servers (Microsoft AD). For more information, see DHCP options sets.
• Optional: Amazon virtual private gateway — Enables communication with a customer-owned network over an IPsec VPN tunnel or AWS Direct Connect connection. Use it for accessing on-premises back-end systems.
• AWS Directory Service — Microsoft AD deployed into a dedicated pair of VPC subnets (AD DS managed service), plus AD Connector.
• AWS Transit Gateway/VPC peering — Enables connectivity between the WorkSpaces VPC and the Shared Services VPC.
• Amazon EC2 — Customer (optional) RADIUS
servers for MFA.
• Amazon WorkSpaces — WorkSpaces are deployed into the same private subnets as the AD Connector. For more information, see the Active Directory: Sites and Services section of this document.

Customer

• Network connectivity — Corporate VPN or AWS Direct Connect endpoints.
• End user devices — Corporate or BYOL end user devices (such as Windows, Macs, iPads, Android tablets, zero clients, and Chromebooks) used to access the Amazon WorkSpaces service. See this list of client applications for supported devices and web browsers.

Scenario 6: AWS Microsoft AD Shared Services VPC and a One-Way Trust to On Premises

This scenario, shown in Figure 10, uses an existing on-premises AD for user accounts and introduces a separate Managed AD in the AWS Cloud to host the computer objects associated with the WorkSpaces. This allows the computer objects and AD group policies to be managed independently from the corporate AD. The scenario is useful when a third party manages WorkSpaces on a customer's behalf, because it allows the third party to define and control the WorkSpaces and the policies associated with them without being granted access to the customer's AD.

In this scenario, a specific AD OU is created to organize the WorkSpaces computer objects in the Shared Services AD. To deploy WorkSpaces with the computer objects created in the Shared Services VPC hosting Managed AD, using user accounts from the customer's domain, deploy an AD Connector referencing the corporate AD. Use an ADC service account, created in the corporate AD, that has permissions to create computer objects in the OU configured for WorkSpaces in the Shared Services Managed AD.

Figure 10 — AWS Microsoft AD shared services VPC and a one-way trust to on-premises AD

This architecture uses the following components or constructs:

AWS
• Amazon VPC — Creation of an Amazon VPC with at least two private subnets across two AZs (for AD Connector and WorkSpaces).
• DHCP options set — Creation of an Amazon VPC DHCP options set. This allows a customer to define a specified domain name and DNS servers (Microsoft AD). For more information, see DHCP options sets.
• Optional: Amazon virtual private gateway — Enables communication with a customer-owned network over an IPsec VPN tunnel or AWS Direct Connect connection. Use it for accessing on-premises back-end systems.
• AWS Directory Service — Microsoft AD deployed into a dedicated pair of VPC subnets (AD DS managed service), plus AD Connector.
• Transit Gateway/VPC peering — Enables connectivity between the WorkSpaces VPC and the Shared Services VPC.
• Amazon EC2 — Customer (optional) RADIUS servers for MFA.
• Amazon WorkSpaces — WorkSpaces are deployed into the same private subnets as the AD Connector. For more information, see the Active Directory: Sites and Services section of this document.

Customer

• Network connectivity — Corporate VPN or AWS Direct Connect endpoints.
• End user devices — Corporate or BYOL end user devices (such as Windows, Macs, iPads, Android tablets, zero clients, and Chromebooks) used to access the Amazon WorkSpaces service. See this list of client applications for supported devices and web browsers.

Design Considerations

A functional AD DS deployment in the AWS Cloud requires a good understanding of both Active Directory concepts and specific AWS services. This section discusses key design considerations when deploying AD DS for Amazon WorkSpaces: VPC best practices for AWS Directory Service, DHCP and DNS requirements, AD Connector specifics, and AD sites and services.

VPC Design

As discussed in the Network Considerations section of this document, and as documented earlier for Scenarios 2 and 3, customers should deploy AD DS in the AWS Cloud into a dedicated pair of private subnets
across two AZs, separated from the AD Connector and WorkSpaces subnets. This construct provides highly available, low-latency access to AD DS services for WorkSpaces while maintaining the standard best practice of separation of roles and functions within the Amazon VPC.

Figure 11 shows the separation of AD DS and AD Connector into dedicated private subnets (Scenario 3). In this example, all services reside in the same Amazon VPC.

Figure 11 — AD DS network segregation

Figure 12 shows a design similar to Scenario 1; however, in this case the on-premises portion resides in a dedicated Amazon VPC.

Figure 12 — Dedicated WorkSpaces VPC

Note: For customers who have an existing AWS deployment where AD DS is already being used, we recommend locating the WorkSpaces in a dedicated VPC and using VPC peering for AD DS communications.

In addition to the creation of dedicated private subnets for AD DS, domain controllers and member servers require several security group rules to allow traffic for services such as AD DS replication, user authentication, Windows Time service, and Distributed File System (DFS).

Note: Best practice is to restrict the required security group rules to the WorkSpaces private subnets and, in the case of Scenario 2, to allow bidirectional AD DS communications between on premises and the AWS Cloud, as shown in the following table.

Table 1 — Bidirectional AD DS communications to and from the AWS Cloud

Protocol  Port                                  Use                                    Destination
TCP       53, 88, 135, 139, 389, 445, 464, 636  Auth (primary)                         Active Directory (private data center or Amazon EC2)*
TCP       49152–65535                           RPC high ports                         Active Directory (private data center or Amazon EC2)**
TCP       3268, 3269                            Trusts                                 Active Directory (private data center or Amazon EC2)*
TCP       9389                                  Remote Windows PowerShell (optional)   Active Directory (private data center or Amazon EC2)*
UDP       53, 88, 123, 137, 138, 389, 445, 464  Auth (primary)                         Active Directory (private data center or Amazon EC2)*
UDP       1812                                  Auth (MFA) (optional)                  RADIUS (private data center or Amazon EC2)*

*See Active Directory and Active Directory Domain Services Port Requirements.
**See Service overview and network port requirements for Windows.

For step-by-step guidance on implementing rules, see Adding Rules to a Security Group in the Amazon Elastic Compute Cloud User Guide.

VPC Design: DHCP and DNS

Within an Amazon VPC, DHCP services are provided by default for your instances. By default, every VPC provides an internal DNS server that is accessible via the Classless Inter-Domain Routing (CIDR) +2 address and is assigned to all instances via a default DHCP options set.

DHCP options sets are used within an Amazon VPC to define scope options, such as the domain name or the name servers, that should be handed to customer instances via DHCP. Correct functionality of Windows services within a customer VPC depends on these DHCP scope options. In each of the scenarios defined earlier, customers create and assign their own scope that defines the domain name and name servers. This ensures that domain-joined Windows instances or WorkSpaces are configured to use the AD DNS.

The following table is an example of the custom set of DHCP scope options that must be created for Amazon WorkSpaces and AWS Directory Service to function correctly.

Table 2 — Custom set of DHCP scope options

Parameter             Value
Name tag              Creates a tag with key = Name and the value set to a specific string. Example: example.com
Domain name           example.com
Domain name servers   DNS server addresses, separated by commas. Example: 192.0.2.10, 192.0.2.21
NTP servers           Leave this field blank
NetBIOS name servers  Enter the same comma-separated IPs as for the domain name servers. Example: 192.0.2.10, 192.0.2.21
NetBIOS node type     2

For details on
creating a custom DHCP options set and associating it with an Amazon VPC, see Working with DHCP options sets in the Amazon Virtual Private Cloud User Guide.

In Scenario 1, the DHCP scope would point to the on-premises DNS (AD DS). However, in Scenarios 2 and 3, it would point to the locally deployed directory service (AD DS on Amazon EC2 or AWS Directory Service: Microsoft AD). We recommend that each domain controller that resides in the AWS Cloud be a global catalog and a directory-integrated DNS server.

Active Directory: Sites and Services

For Scenario 2, sites and services are critical components for the correct function of AD DS. Site topology controls AD replication between domain controllers within the same site and across site boundaries. In Scenario 2, at least two sites are present: on premises, and the Amazon WorkSpaces environment in the cloud. Defining the correct site topology ensures client affinity, meaning that clients (in this case, WorkSpaces) use their preferred local domain controller.

Figure 13 — Active Directory sites and services: client affinity

Best practice: Define a high cost for site links between the on-premises AD DS and the AWS Cloud. Figure 13 shows an example of the cost to assign to such site links (cost 100) to ensure site-independent client affinity.
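To see why a high inter-site link cost keeps WorkSpaces on their local domain controllers, it helps to model the lowest-cost selection the AD DC locator performs. The following toy sketch is illustrative only (site names and costs are hypothetical, and real selection is done by Active Directory, not application code):

```python
# Toy model of AD DS site affinity: a client prefers a domain controller
# reachable at the lowest total site-link cost. Site names and costs are
# hypothetical; real selection is performed by the AD DC locator process.

def preferred_site(client_site, dc_sites, link_costs):
    """Return the DC site with the lowest cost from client_site.

    link_costs maps (site_a, site_b) pairs to a link cost; same-site
    access costs 0, and unconnected sites are treated as unreachable.
    """
    def cost(dc_site):
        if dc_site == client_site:
            return 0
        return link_costs.get((client_site, dc_site),
                              link_costs.get((dc_site, client_site), float("inf")))
    return min(dc_sites, key=cost)

# Scenario 2 layout: domain controllers exist both on premises and in the
# WorkSpaces environment, with an expensive link between the two sites
# (cost 100, per the best practice above).
links = {("aws-workspaces", "on-premises"): 100}
print(preferred_site("aws-workspaces", ["on-premises", "aws-workspaces"], links))
# A WorkSpace in the AWS site selects the in-cloud domain controllers,
# keeping authentication and global catalog queries off the VPN/Direct
# Connect link.
```

With the high cost in place, the cross-site path is chosen only when no local domain controller is available, which is exactly the failover behavior the site topology is meant to provide.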
Directory Service: AD Connector and a customer owned RADIUS server After MFA is enabled users are required to provide their Username Password and MFA Code to the WorkSpaces client for authentication to their respective WorkSpaces desktops Figure 14 — WorkSpaces client with MFA enabled ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 31 Hard rul e: Implementing MFA authentication requires customers to use AD Connector AD Connector doesn’t support selective “per user” MFA as this is a global per AD Connector setting If selective “per user” MFA is required users must be separated by an AD Connector WorkSpaces MFA requires one or more RADIUS servers Typically these are existing solutions for example RSA or the servers can be depl oyed within a VPC (see the AD DS Deployment Scenarios section of this document ) If deploying a new RADIUS solution several implementations exist such as FreeRADIUS and cloud services such as Duo Security For a list of prerequisites to implement MFA with Amazon WorkSpaces see the Amazon WorkSpaces Administration Guide Preparing Your Network for an AD Connector Directory The process for configuring AD Connector for MFA is described in Managing an AD Connector Directory: Multi factor Authentication in the Amazon WorkSpaces Administration Guide Disaster Recovery / Business Continuity WorkSpaces Cross Region Redirection Amazon WorkSpaces is a regional service that provides remote desktop access to customers Depending on business continuity and disaster recovery requirements (BC/DR) some customers require seamless failover to another region where the Amazon WorkSpaces service i s available This BC/DR requirement can be accomplished using the Amazon WorkSpaces Cross Region redirection option It allows customers to use a fully qualified domain name (FQDN) as their Amazon WorkSpaces registration code When your end users log in t o WorkSpaces you can redirect them across Amazon WorkSpaces Regions based on your DNS policies for 
the FQDN This option can be used with public or private DNS zones Cross region failure can be manual or automated The automated failover can be done by u sing DNS health checks to determine if a primary site is still available before failing over to the second region If you do n’t have DNS health checks you can creat e a TXT record within your managed DNS service An important consideration is to determine at what point a failure to a failover region should occur The criteria for this decision should be based on your company policy but should include the Recovery Time Objective (RTO) and the Recovery Point Objective (RPO) A Well Architected Amazon Worksp aces architecture design should include the potential for service failure The time tolerance for normal business operation recovery ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 32 will also factor into the decision Additionally with cross region redirection user data replication to the new region should be considered There are many options available for user data replication such as Amazon WorkDocs Windows FSx (DFS Share) or 3rd party utilities to synchronize data volumes between regions For more information see Cross Region Redirection for Amazon WorkSpaces WorkSpaces Interface VPC Endpoint (AWS PrivateLink) – API Calls Amazon WorkSpaces public APIs are supported on AWS PrivateLink AWS PrivateLink increases the security of data shared wi th cloud based applications by reducing the exposure of data to the public internet WorkSpaces API traffic can be secured inside a VPC by using a n interface endpoint which is an elastic network interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service This enables you to privately access WorkSpaces API services by using private I P addresses Using PrivateLink with WorkSpaces Public APIs also enables you to securely expose REST APIs to resources only within 
your VPC, or to those connected to your data centers via AWS Direct Connect. You can restrict access to selected Amazon VPCs and VPC endpoints, and enable cross-account access, by using resource-specific policies.

Ensure that the security group associated with the endpoint network interface allows communication between the endpoint network interface and the resources in your VPC that communicate with the service. If the security group restricts inbound HTTPS traffic (port 443) from resources in the VPC, you might not be able to send traffic through the endpoint network interface.

Keep in mind the following considerations for interface endpoints:
• An interface endpoint supports TCP traffic only.
• Endpoints support IPv4 traffic only.
• When you create an endpoint, you can attach an endpoint policy to it that controls access to the service to which you are connecting.
• You have a quota on the number of endpoints you can create per VPC.
• Endpoints are supported within the same Region only. You cannot create an endpoint between a VPC and a service in a different Region.

Create a notification to receive alerts on interface endpoint events — You can create a notification to receive alerts for specific events that occur on your interface endpoint. To create a notification, you must associate an Amazon SNS topic with the notification. You can subscribe to the SNS topic to receive an email notification when an endpoint event occurs.

Create a VPC endpoint policy for Amazon WorkSpaces — You can create a policy for Amazon VPC endpoints for Amazon WorkSpaces to specify the following:
• The principal that can perform actions
• The actions that can be performed
• The resources on which actions can be performed

Connect your private network to your VPC — To call the Amazon WorkSpaces API through your VPC, you have to connect from an instance that is inside the VPC, or connect your private network to your VPC by using a Virtual Private Network (VPN) connection or AWS Direct Connect. For information about VPN, see VPN Connections in the Amazon Virtual Private Cloud User Guide. For information about AWS Direct Connect, see Creating a Connection in the AWS Direct Connect User Guide.

For more information about using the Amazon WorkSpaces API through a VPC interface endpoint, see Infrastructure Security in Amazon WorkSpaces.

Amazon WorkSpaces Tags

Tags enable you to associate metadata with AWS resources. Tags can be used with Amazon WorkSpaces, registered directories, bundles, IP access control groups, and images. Tags assist with cost allocation to internal cost centers. Before using tags with Amazon WorkSpaces, review the Tagging Best Practices whitepaper.

Tag Restrictions
• Maximum number of tags per resource: 50
• Maximum key length: 127 Unicode characters
• Maximum value length: 255 Unicode characters
• Tag keys and values are case sensitive. Allowed characters are letters, spaces, and numbers representable in UTF-8, plus the following special characters: + - = . _ : / @. Do not use leading or trailing spaces.
• Do not use the "aws:" or "aws:workspaces:" prefixes in your tag names or values, because they are reserved for AWS use. You can't edit or delete tag names or values with these prefixes.

Resources That You Can Tag
• You can add tags to the following resources when you create them: WorkSpaces, imported images, and IP access control groups.
• You can add tags to existing resources of the following types: WorkSpaces, registered directories, custom bundles, images, and IP access control groups.

Using the Cost Allocation Tag

To view your WorkSpaces resource tags in Cost Explorer, activate the tags that you have applied to your WorkSpaces resources by following the instructions in Activating User-Defined Cost Allocation Tags in the AWS Billing and Cost Management User Guide. Although tags appear 24 hours after activation, it can take four to five days for values associated with those
tags to appear in Cost Explorer. To appear and provide cost data in Cost Explorer, WorkSpaces resources that have been tagged must incur charges during that time. Cost Explorer shows only cost data from the time the tags were activated forward; no historical data is available at this time.

Managing Tags

To update the tags for an existing resource using the AWS CLI, use the create-tags and delete-tags commands. For bulk updates, and to automate the task on a large number of WorkSpaces resources, Amazon WorkSpaces supports AWS Resource Groups Tag Editor. AWS Resource Groups Tag Editor enables you to add, edit, or delete AWS tags from your WorkSpaces, along with your other AWS resources.

Automating Amazon WorkSpaces Deployment

With Amazon WorkSpaces, you can launch a Microsoft Windows or Amazon Linux desktop within minutes, and connect to and access your desktop software from on-premises or an external network securely, reliably, and quickly. You can automate the provisioning of Amazon WorkSpaces so that you can integrate Amazon WorkSpaces into your existing provisioning workflows.

Common WorkSpaces Automation Methods

Customers can use a number of tools for rapid Amazon WorkSpaces deployment. These tools can simplify management of WorkSpaces, reduce costs, and enable an agile environment that can scale and move fast.

AWS CLI and API

There are Amazon WorkSpaces API operations you can use to interact with the service securely and at scale. All public APIs are available with the AWS CLI, the SDKs, and the Tools for PowerShell, while private APIs, such as image creation, are available only through the AWS Console. When considering operational management and business self-service for Amazon WorkSpaces, keep in mind that the WorkSpaces APIs require technical expertise and security permissions to use.

API calls can be made using the AWS SDKs. AWS Tools for Windows PowerShell and AWS Tools for PowerShell Core are PowerShell modules built on functionality exposed by the AWS SDK for .NET. These modules enable you to script operations on AWS resources from the PowerShell command line and integrate with existing tools and services. For example, API calls can enable you to automatically manage the WorkSpaces lifecycle by integrating with AD to provision and decommission WorkSpaces based on a user's AD group membership.

AWS CloudFormation

AWS CloudFormation enables you to model your entire infrastructure in a text file. This template becomes the single source of truth for your infrastructure, which helps you to standardize infrastructure components used across your organization, enabling configuration compliance and faster troubleshooting. AWS CloudFormation provisions your resources in a safe, repeatable manner, enabling you to build and rebuild your infrastructure and applications. You can use CloudFormation to commission and decommission environments, which is useful when you have a number of accounts that you want to build and decommission in a repeatable fashion. When considering operational management and business self-service for Amazon WorkSpaces, keep in mind that AWS CloudFormation also requires technical expertise and security permissions to use.

Self-Service WorkSpaces Portal

Customers can build on WorkSpaces API commands and other AWS services to create a WorkSpaces self-service portal. This helps customers streamline the process of deploying and reclaiming WorkSpaces at scale. Using a WorkSpaces portal, you can enable your workforce to provision their own WorkSpaces with an integrated approval workflow that does not require IT intervention for each request. This reduces IT operational costs while helping end users get started with WorkSpaces faster. The built-in approval workflow also simplifies the desktop approval process for businesses. A dedicated portal can offer an automated tool for provisioning Windows or Linux cloud desktops, and enable users to rebuild, restart, or
migrate their WorkSpace, as well as provide a facility for password resets.

There are guided examples of creating self-service WorkSpaces portals referenced in the Further Reading section of this document. AWS Partners also provide preconfigured WorkSpaces management portals via the AWS Marketplace.

Integration with Enterprise IT Service Management

As enterprises adopt Amazon WorkSpaces as their virtual desktop solution at scale, there is a need to implement or integrate with IT Service Management (ITSM) systems. ITSM integration allows for self-service offerings for provisioning and operations. AWS Service Catalog enables you to centrally manage commonly deployed AWS services and provisioned software products. This service helps your organization achieve consistent governance and compliance requirements, while enabling users to deploy only the approved AWS services they need. AWS Service Catalog can be used to enable a self-service lifecycle management offering for Amazon WorkSpaces from within ITSM tools such as ServiceNow.

WorkSpaces Deployment Automation Best Practices

You should consider Well-Architected principles when selecting and designing WorkSpaces deployment automation:
• Design for automation — Design for the least possible manual intervention in the process, to enable repeatability and scale.
• Design for cost optimization — By automatically creating and reclaiming WorkSpaces, you can reduce the administration effort needed to provide resources, and prevent idle or unused resources from generating unnecessary cost.
• Design for efficiency — Minimize the resources needed to create and terminate WorkSpaces. Where possible, provide Tier 0 self-service capabilities for the business to improve efficiency.
• Design for flexibility — Create a consistent deployment mechanism that can handle multiple scenarios and can scale with the same mechanism (customized using tagged use case and profile identifiers).
• Design for scalability — The pay-as-you-go model that Amazon WorkSpaces uses can drive cost savings by creating resources as needed and removing them when they are no longer necessary.
• Design for security — Design your WorkSpaces operations to allow for the correct authorization and validation to add or remove resources.
• Design for supportability — Design your WorkSpaces operations to allow for noninvasive support and recovery mechanisms and processes.

Amazon WorkSpaces Language Packs

Amazon WorkSpaces bundles that provide the Windows 10 desktop experience support English (US), French (Canadian), Korean, and Japanese. However, you can include additional language packs for Spanish, Italian, Portuguese, and many more language options. For more information, see How do I create a new Windows WorkSpace image with a client language other than English?
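As an illustration of the automation principles above, the sketch below builds a batch of CreateWorkspaces request entries from a list of AD user names and a tagged use-case profile. This is a minimal sketch: the profile names, bundle IDs, and directory ID are hypothetical placeholders, and the actual service call (via boto3) is left commented out.

```python
# A minimal sketch of profile-driven WorkSpaces provisioning. All IDs and
# profile names below are hypothetical; substitute your own values.

PROFILES = {
    # Hypothetical use-case profiles, each mapped to a bundle and running mode
    "developer": {"bundle_id": "wsb-EXAMPLE1", "running_mode": "ALWAYS_ON"},
    "task-worker": {"bundle_id": "wsb-EXAMPLE2", "running_mode": "AUTO_STOP"},
}

def build_workspace_requests(directory_id, users, profile_name):
    """Build CreateWorkspaces request entries for a list of AD user names."""
    profile = PROFILES[profile_name]
    return [
        {
            "DirectoryId": directory_id,
            "UserName": user,
            "BundleId": profile["bundle_id"],
            "WorkspaceProperties": {"RunningMode": profile["running_mode"]},
            # Tagging each WorkSpace supports cost allocation and the tagged
            # use-case identifiers described in the design principles above
            "Tags": [{"Key": "UseCase", "Value": profile_name}],
        }
        for user in users
    ]

entries = build_workspace_requests("d-EXAMPLE", ["alice", "bob"], "task-worker")
# CreateWorkspaces accepts up to 25 entries per call; a real workflow would
# submit them with, for example:
# boto3.client("workspaces").create_workspaces(Workspaces=entries)
```

In a real workflow, the user list would typically come from an AD group query, with a matching decommission path (TerminateWorkspaces) for users removed from the group.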
Amazon WorkSpaces Profile Management

Amazon WorkSpaces separates the user profile from the base operating system (OS) by redirecting all profile writes to a separate Amazon Elastic Block Store (Amazon EBS) volume. In Microsoft Windows, the user profile is stored in D:\Users\username. In Amazon Linux, the user profile is stored in /home. The EBS volume is snapshotted automatically every 12 hours, and the snapshot is automatically stored in an AWS-managed S3 bucket to be used in the event that an Amazon WorkSpace is rebuilt or restored.

For most organizations, having automatic snapshots every 12 hours is superior to the existing desktop deployment of no backups for user profiles. However, customers can require more granular control over user profiles; for example, for migration from desktop to WorkSpaces, migration to a new OS or AWS Region, support for DR, and so on. There are alternative methods for profile management available for Amazon WorkSpaces.

Folder Redirection

While folder redirection is a common design consideration in Virtual Desktop Infrastructure (VDI) architectures, it is not a best practice, or even a common requirement, in Amazon WorkSpaces designs. The reason is that Amazon WorkSpaces is a persistent Desktop as a Service (DaaS) solution, with application and user data persisting out of the box. There are specific scenarios where folder redirection for user shell folders (for example, D:\Users\username\Desktop redirected to \\Server\RedirectionShare$\username\Desktop) is required, such as an immediate recovery point objective (RPO) for user profile data in disaster recovery (DR) environments.

Best Practices

The following best practices support a robust folder redirection implementation:
• Host the Windows file servers in the same AWS Region and Availability Zone that the Amazon WorkSpaces are launched in.
• Ensure AD security group inbound rules include the Windows file server security group or private IP addresses; otherwise, ensure that the on-premises firewall allows the same TCP and UDP port-based traffic.
• Ensure Windows file server security group inbound rules include TCP 445 (SMB) for all Amazon WorkSpaces security groups.
• Create an AD security group for Amazon WorkSpaces users to authorize user access to the Windows file share.
• Use DFS Namespaces (DFS-N) and DFS Replication (DFS-R) to ensure your Windows file share is agile, not tied to any one specific Windows file server, and that all user data is automatically replicated between Windows file servers.
• Append '$' to the end of the share name to hide the share hosting user data from view when browsing network shares in Windows Explorer.
• Create the file share following Microsoft's guidance for redirected folders: Deploy Folder Redirection with Offline Files. Follow the guidance for security permissions and GPO configuration closely.
• If your Amazon WorkSpaces deployment is Bring Your Own License (BYOL), you must also disable Offline Files, following Microsoft's guidance: Disable Offline Files on Individual Redirected Folders.
• Install and run Data Deduplication (commonly referred to as 'dedupe') if your Windows file server is Windows Server 2016 or newer, to reduce storage consumption and optimize cost. See Install and enable Data Deduplication and Running Data Deduplication.
• Back up your Windows file server file shares using existing organizational backup solutions.

Things to Avoid
• Do not use Windows file servers that are accessible only across a wide area network (WAN) connection, as the SMB protocol is not designed for that use.
• Do not use the same Windows file share that is used for home directories, to mitigate the chances of users accidentally deleting their user shell folders.
• While enabling Volume Shadow Copy Service (VSS) is recommended for ease of file restores, this alone does not remove the requirement to back up the Windows file server file shares.

Other Considerations
• Amazon FSx for Windows File Server offers a managed service for Windows file shares, and simplifies the operational overhead of folder redirection, including automatic backups.
• Use AWS Storage Gateway with an SMB file share to back up your file shares if there is no existing organizational backup solution.

Profile Settings Group Policies

A common best practice in enterprise Microsoft Windows deployments is to define user environment settings through Group Policy Object (GPO) and Group Policy Preferences (GPP) settings. Settings such as shortcuts, drive mappings, registry keys, and printers are defined through the Group Policy Management Console. The benefits of defining the user environment through GPOs include, but are not limited to:
• Centralized configuration management
• User profile defined by AD security group membership or OU placement
• Protection against deletion of settings
• Automated profile creation and personalization at first logon
• Ease of future updating

Note: Follow Microsoft's best practices for optimizing Group Policy performance.

Interactive logon banner Group Policies must not be used, as they are not supported on Amazon WorkSpaces; banners are instead presented on the Amazon WorkSpaces client through AWS support requests. Additionally, removable devices must not be blocked through Group Policy, because they are required by Amazon WorkSpaces. GPOs can otherwise be used to manage Windows WorkSpaces; for more information, see Manage Your Windows WorkSpaces.

Amazon WorkSpaces Volumes

Each Amazon WorkSpaces instance contains two volumes: an operating system volume and a user volume.
• Amazon Windows WorkSpaces — The C:\ drive is used for the operating system (OS), and the D:\ drive is the user volume. The user profile is located on the user volume (AppData, Documents, Pictures, Downloads, and so on).
• Amazon Linux WorkSpaces — With an Amazon Linux WorkSpace, the
system volume (/dev/xvda1) mounts as the root folder. The user volume is for user data and applications; /dev/xvdf1 mounts as /home.

For operating system volumes, you can select a starting size of 80 GB or 175 GB. For user volumes, you can select a starting size of 10 GB, 50 GB, or 100 GB. Both volumes can be increased up to 2 TB in size as needed; however, to increase the user volume beyond 100 GB, the OS volume must be 175 GB. Volume changes can be performed only once every six hours per volume. For additional information on modifying the WorkSpaces volume size, see the Modify a WorkSpace section of the Administration Guide.

WorkSpaces Volumes Best Practices

When planning an Amazon WorkSpaces deployment, we recommend factoring in the minimum requirements for OS installation, in-place upgrades, and the additional core applications that will be added to the image on the OS volume. For the user volume, we recommend starting with a smaller disk allocation and incrementally increasing the user volume size as needed. Minimizing the size of the disk volumes reduces the cost of running the WorkSpace.

Note: While a volume size can be increased, it cannot be decreased.

Amazon WorkSpaces Logging

In an Amazon WorkSpaces environment, there are many log sources that can be captured to troubleshoot issues and monitor overall WorkSpaces performance.

Amazon WorkSpaces Client 3.x

On each Amazon WorkSpaces client, the client logs are located in the following directories:
• Windows — %LOCALAPPDATA%\Amazon Web Services\Amazon WorkSpaces\logs
• macOS — ~/Library/"Application Support"/"Amazon Web Services"/"Amazon WorkSpaces"/logs
• Linux (Ubuntu 18.04 or later) — /opt/workspacesclient/workspacesclient

There are many instances where diagnostic or debugging details may be needed for a WorkSpaces session from the client side. Advanced client logs can be enabled by adding "-l3" to the WorkSpaces executable invocation. For example: "C:\Program Files (x86)\Amazon Web Services, Inc\Amazon WorkSpaces\workspaces.exe" -l3

Amazon WorkSpaces Service

The Amazon WorkSpaces service is integrated with Amazon CloudWatch metrics, CloudWatch Events, and CloudTrail. This integration allows performance data and API calls to be logged to a central AWS service. When managing an Amazon WorkSpaces environment, it is important to continuously monitor certain CloudWatch metrics to determine the overall environment health status.

Metrics

While there are other CloudWatch metrics available for Amazon WorkSpaces (see Monitor Your WorkSpaces Using CloudWatch Metrics), the following three metrics will assist in maintaining WorkSpace instance availability:
• Unhealthy — The number of WorkSpaces that returned an unhealthy status.
• SessionLaunchTime — The amount of time it takes to initiate a WorkSpaces session.
• InSessionLatency — The round-trip time between the WorkSpaces client and the WorkSpace.

For more information on WorkSpaces logging options, see Logging Amazon WorkSpaces API Calls by Using CloudTrail. Additionally, CloudWatch Events assist with capturing the client-side IP of the user session, when the user connected to the WorkSpaces session, and what endpoint was used during the connection. All of these details assist with isolating or pinpointing user-reported issues during troubleshooting sessions.

Note: Some CloudWatch metrics are available only with AWS Managed AD.

Amazon WorkSpaces Migrate

The Amazon WorkSpaces migrate feature enables you to bring your user volume data to a new bundle. You can use this feature to:
• Migrate your WorkSpaces from the Windows 7 desktop experience to the Windows 10 desktop experience.
• Migrate from a PCoIP WorkSpace to a WorkSpaces Streaming Protocol (WSP) WorkSpace.
• Migrate WorkSpaces from one public or custom bundle to another. For example, you can migrate from GPU-enabled
(Graphics and GraphicsPro) bundles to non-GPU-enabled bundles, and vice versa.

Migration Process

With WorkSpaces migrate, you specify the target WorkSpaces bundle. The migration process recreates the WorkSpace using a new root volume from the target bundle image and the user volume from the latest original user volume snapshot. A new user profile is generated during migration for better compatibility; the data in your old user profile that cannot be moved to the new profile is stored in a NotMigrated folder.

During migration, the data on the user volume (the D:\ drive) is preserved, but all data on the root volume (the C:\ drive) is lost. This means that none of the installed applications, settings, and changes to the registry are preserved. The old user profile folder is renamed with the NotMigrated suffix, and a new user profile is created.

The migration process takes up to one hour per WorkSpace. In addition, if the migrate workflow fails to complete, the service automatically rolls back the WorkSpace to its original state before migration, minimizing any data loss risk. Any tags assigned to the original WorkSpace are carried over during migration, and the running mode of the WorkSpace is preserved. The migrated WorkSpace has a new WorkSpace ID, computer name, and IP address.

Migration Procedure

You can migrate WorkSpaces through the Amazon WorkSpaces console, the AWS CLI (using the migrate-workspace command), or the Amazon WorkSpaces API. All migration requests are queued, and the service automatically throttles the total number of migration requests if there are too many.

Migration Limits
• You cannot migrate to a public or custom Windows 7 desktop experience bundle, and you cannot migrate to BYOL Windows 7 bundles.
• You can migrate BYOL WorkSpaces only to other BYOL bundles.
• You cannot migrate a WorkSpace created from public or custom bundles to a BYOL bundle.
• Migrating Linux WorkSpaces is not currently supported.
• In AWS Regions that support more than one language, you can migrate WorkSpaces between language bundles.
• The source and target bundles must be different. (However, in Regions that support more than one language, you can migrate to the same Windows 10 bundle as long as the languages differ.) If you want to refresh your WorkSpace using the same bundle, rebuild the WorkSpace instead.
• You cannot migrate WorkSpaces across Regions.
• WorkSpaces cannot be migrated when they are in ADMIN_MAINTENANCE mode.

Cost

During the month in which migration occurs, you are charged prorated amounts for both the new and the original WorkSpaces. For example, if you migrate WorkSpace A to WorkSpace B on May 10, you will be charged for WorkSpace A from May 1 to May 10, and for WorkSpace B from May 11 to May 31.

WorkSpaces Migration Best Practices

Before you migrate a WorkSpace, do the following:
• Back up any important data on drive C to another location. All data on drive C is erased during migration.
• Make sure that the WorkSpace being migrated is at least 12 hours old, to ensure that a snapshot of the user volume has been created. On the Migrate WorkSpaces page in the Amazon WorkSpaces console, you can see the time of the last snapshot. Any data created after the last snapshot is lost during migration.
• To avoid potential data loss, make sure that your users log out of their WorkSpaces and don't log back in until after the migration process is finished.
• Make sure that the WorkSpaces you want to migrate have a status of AVAILABLE, STOPPED, or ERROR.
• Make sure that you have enough IP addresses for the WorkSpaces you are migrating. During migration, new IP addresses will be allocated for the WorkSpaces.
• If you are using scripts to migrate WorkSpaces, migrate them in batches of no more than 25 WorkSpaces at a time.

Well-Architected Framework

AWS Well-Architected helps cloud
architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. It describes the key concepts, design principles, and architectural best practices for designing and running workloads in the cloud. It is based on five key pillars:
• Operational Excellence (OE)
• Security
• Reliability
• Performance Efficiency
• Cost Optimization

When architecting an Amazon WorkSpaces environment, it is important to evaluate these key pillars to determine the maturity of the deployment and to discover additional features that can be used with Amazon WorkSpaces. While there is overall guidance for the AWS Well-Architected Framework, we are providing some key questions that can be included in the planning phase of your WorkSpaces deployment to ensure each of the five pillars is considered.

General
• What is the business driver for this project?

Operational Excellence
• How do you segregate access control between users and different admin groups?

Security
1. What are the security and compliance requirements for the environment in which the WorkSpaces will operate?
2. Are there any restrictions on routing to external IP addresses?
3. Are the required WorkSpaces ports allowed through the corporate firewall?
4. Is, or will, multi-factor authentication be used with this deployment?
5. How do you manage user identities and authorization requests today?

Reliability
1. What is the data retention policy for desktops?
2. What is the Recovery Point Objective (RPO) for end-user data?
3. What is the Recovery Time Objective (RTO) for end-user data?

Cost Optimization
1. Have the WorkSpaces bundles been right-sized for the use case and applications?
2. Will the users consume WorkSpaces more than 82 hours per month?
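The 82-hours question reflects the break-even point between hourly (AutoStop) and monthly (AlwaysOn) WorkSpaces billing: AutoStop billing combines a small monthly base fee with an hourly rate, while AlwaysOn billing is a flat monthly rate. The sketch below works the arithmetic; the rates used are illustrative placeholders, not actual AWS pricing, so substitute the current rates for your Region and bundle.

```python
# Illustrative break-even check for hourly vs. monthly WorkSpaces billing.
# The dollar amounts below are placeholders, not actual AWS pricing.

def breakeven_hours(always_on_monthly, auto_stop_base, auto_stop_hourly):
    """Hours per month at which AutoStop cost equals AlwaysOn cost."""
    # base + hours * hourly == monthly  =>  hours == (monthly - base) / hourly
    return (always_on_monthly - auto_stop_base) / auto_stop_hourly

def recommended_running_mode(expected_hours, always_on_monthly=35.00,
                             auto_stop_base=9.75, auto_stop_hourly=0.30):
    threshold = breakeven_hours(always_on_monthly, auto_stop_base,
                                auto_stop_hourly)
    return "ALWAYS_ON" if expected_hours > threshold else "AUTO_STOP"

# With these placeholder rates, the break-even is roughly 84 hours/month,
# in the same ballpark as the 82-hour question above.
print(recommended_running_mode(40))   # light usage -> AUTO_STOP
print(recommended_running_mode(160))  # full-time usage -> ALWAYS_ON
```

Per-user usage data from CloudWatch, or the tags discussed earlier, can feed this kind of check so that running modes are reviewed periodically rather than set once.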
While the questions above do not constitute an exhaustive list of items that should be considered, they provide some overarching guidance to assist you with a Well-Architected Amazon WorkSpaces deployment.

Security

This section explains how to secure data by using encryption when using Amazon WorkSpaces services. We describe encryption in transit and at rest, and the use of security groups to protect network access to the WorkSpaces. This section also provides information on how to control end device access to WorkSpaces by using trusted devices and IP access control groups. Additional information on authentication (including MFA support) in the AWS Directory Service can also be found in this section.

Encryption in Transit

Amazon WorkSpaces uses cryptography to protect confidentiality at different stages of communication (in transit), and also to protect data at rest (encrypted WorkSpaces). The processes in each stage of the encryption used by Amazon WorkSpaces in transit are described in the following sections. For information about encryption at rest, see the Encrypted WorkSpaces section of this document.

Registration and Updates

The desktop client application communicates with Amazon for updates and registration using HTTPS.

Authentication Stage

The desktop client initiates authentication by sending credentials to the authentication gateway. The communication between the desktop client and the authentication gateway uses HTTPS. At the end of this stage, if the authentication succeeds, the authentication gateway returns an OAuth 2.0 token to the desktop client through the same HTTPS connection.

Note: The desktop client application supports the use of a proxy server for port 443 (HTTPS) traffic, for updates, registration, and authentication.

After receiving credentials from the client, the authentication gateway sends an authentication request to AWS Directory Service. The communication from the authentication gateway to AWS Directory Service takes place over HTTPS, so no user credentials are transmitted in plaintext.

Authentication — Active Directory Connector (ADC)

AD Connector uses Kerberos to establish authenticated communication with on-premises AD, so that it can bind to LDAP and execute subsequent LDAP queries. Client-side LDAPS support in AD Connector is also available to encrypt queries between Microsoft AD and AWS applications. Before implementing client-side LDAPS functionality, review the prerequisites for client-side LDAPS. The AWS Directory Service also supports LDAP with TLS. No user credentials are transmitted in plaintext at any time.

For increased security, it is possible to connect a WorkSpaces VPC to the on-premises network (where AD resides) using a VPN connection. When using an AWS hardware VPN connection, customers can set up encryption in transit by using standard IPsec (Internet Key Exchange (IKE) and IPsec SAs) with AES-128 or AES-256 symmetric encryption keys, SHA-1 or SHA-256 for the integrity hash, and DH groups (2, 14-18, 22, 23, and 24 for phase 1; 1, 2, 5, 14-18, 22, 23, and 24 for phase 2) using perfect forward secrecy (PFS).

Broker Stage

After receiving the OAuth 2.0 token (from the authentication gateway, if the authentication succeeded), the desktop client queries the Amazon WorkSpaces services (broker connection manager) using HTTPS. The desktop client authenticates itself by sending the OAuth 2.0 token, and as a result the client receives the endpoint information of the WorkSpaces streaming gateway.

Streaming Stage

The desktop client requests to open a PCoIP session with the streaming gateway (using the OAuth 2.0 token). This session is AES-256 encrypted and uses the PCoIP port for communication control (4172/TCP). Using the OAuth 2.0 token, the streaming gateway requests the user-specific WorkSpaces information from the Amazon WorkSpaces service over HTTPS. The streaming gateway also receives the TGT from the
client (which is encrypted using the client user's password), and by using Kerberos TGT pass-through, the gateway initiates a Windows login on the WorkSpace using the user's retrieved Kerberos TGT. The WorkSpace then initiates an authentication request to the configured AWS Directory Service using standard Kerberos authentication. After the WorkSpace is successfully logged in, the PCoIP streaming starts. The connection is initiated by the client on port TCP 4172, with the return traffic on port UDP 4172. Additionally, the initial connection between the streaming gateway and a WorkSpaces desktop over the management interface is via UDP 55002. (See the documentation for IP Address and Port Requirements for Amazon WorkSpaces; the initial outbound UDP port is 55002.) The streaming connection using ports 4172 (TCP and UDP) is encrypted by using AES 128- and 256-bit ciphers, but defaults to 128-bit. Customers can actively change this to 256-bit, either using PCoIP-specific AD Group Policy settings for Windows WorkSpaces, or with the pcoip-agent.conf file for Amazon Linux WorkSpaces. To learn more about Group Policy administration for Amazon WorkSpaces, review the documentation.

Network Interfaces
Each Amazon WorkSpace has two network interfaces, called the primary network interface and the management network interface. The primary network interface provides connectivity to resources inside the customer VPC, such as access to AWS Directory Service, the internet, and the customer corporate network. It is possible to attach security groups to this primary network interface. Conceptually, we differentiate the security groups attached to this ENI based on the scope of the deployment: the WorkSpaces security group and ENI security groups.

Management Network Interface
The management network interface cannot be controlled via security groups; however, customers can use a host-based firewall on WorkSpaces to block ports or control access. We don't recommend applying restrictions on the management network interface.
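As a minimal illustration (not part of the original paper), the sketch below checks a host-based firewall rule set on a WorkSpace against the ports the Amazon WorkSpaces service uses, taken from the Verify Firewall Rules section later in this paper. The rule-set representation is a simplifying assumption.

```python
# Ports the Amazon WorkSpaces service uses on a WorkSpace (from the
# "Verify Firewall Rules" guidance in this paper). A host-based firewall
# on the WorkSpace must leave these reachable.
REQUIRED_RULES = {
    ("tcp", 4172, "in"),    # establish the streaming connection
    ("udp", 4172, "in"),    # stream user input
    ("tcp", 8200, "in"),    # manage and configure the WorkSpace
    ("udp", 55002, "out"),  # PCoIP streaming return traffic
}

def missing_rules(allowed: set) -> set:
    """Return the required (protocol, port, direction) rules absent from a rule set."""
    return REQUIRED_RULES - allowed

# Example: a hypothetical firewall configuration that forgot the management port.
allowed = {("tcp", 4172, "in"), ("udp", 4172, "in"), ("udp", 55002, "out")}
missing_rules(allowed)  # {("tcp", 8200, "in")}
```

A real deployment would derive `allowed` from the firewall's own rule export (for example, Windows Firewall rules) rather than a hand-written set.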
If a customer decides to add host-based firewall rules to manage this interface, a few ports should be open so the Amazon WorkSpaces service can manage the health and accessibility of the WorkSpace. See Network Interfaces in the Amazon WorkSpaces Administration Guide.

WorkSpaces Security Group
A default security group is created per AWS Directory Service directory and is automatically attached to all WorkSpaces that belong to that specific directory. As with any other security group, it is possible to modify the rules of a WorkSpaces security group. The results take effect immediately after the changes are applied. However, do not delete this security group. If you delete this security group, your WorkSpaces won't function correctly, and you won't be able to recreate this group and add it back. It is also possible to change the default WorkSpaces security group attached to an AWS Directory Service directory by changing the WorkSpaces security group association.

Note: A newly associated security group will be attached only to WorkSpaces created or rebuilt after the modification.

ENI Security Groups
Because the primary network interface is a regular ENI, it can be managed by using the different AWS management tools. See Elastic Network Interfaces. Look for the WorkSpace IP address (on the WorkSpaces page in the Amazon WorkSpaces console), and then use that IP address as a filter to find the corresponding ENI (in the Network Interfaces section of the Amazon EC2 console). Once the ENI is located, it can be directly managed by security groups. When manually assigning security groups to the primary network interface, consider the port requirements of Amazon WorkSpaces. See Network Interfaces in the Amazon WorkSpaces Administration Guide.

Figure 15 — WorkSpaces client with MFA enabled

Encrypted WorkSpaces
Each Amazon WorkSpace is provisioned with a root
volume (C: drive for Windows WorkSpaces, root for Amazon Linux WorkSpaces) and a user volume (D: drive for Windows WorkSpaces, /home for Amazon Linux WorkSpaces). The encrypted WorkSpaces feature enables one or both volumes to be encrypted.

What is Encrypted?
The data stored at rest, disk input/output (I/O) to the volume, and snapshots created from encrypted volumes are all encrypted.

When Does Encryption Occur?
Encryption for a WorkSpace should be specified when launching (creating) the WorkSpace. WorkSpaces volumes can be encrypted only at launch time: after launch, the volume encryption status cannot be changed. Figure 16 shows the Amazon WorkSpaces console page for choosing encryption during the launch of a new WorkSpace.

Figure 16 — Encrypting WorkSpace root volumes

How is a New WorkSpace Encrypted?
A customer can choose the encrypted WorkSpaces option from either the Amazon WorkSpaces console or the AWS CLI, or by using the Amazon WorkSpaces API, when launching a new WorkSpace. To encrypt the volumes, Amazon WorkSpaces uses a CMK from AWS Key Management Service (AWS KMS). A default AWS KMS CMK is created the first time a WorkSpace is launched in a Region (CMKs have Region scope). A customer can also create a customer managed CMK to use with encrypted WorkSpaces. The CMK is used to encrypt the data keys that are used by the Amazon WorkSpaces service to encrypt each of the WorkSpace volumes. (Strictly speaking, it is Amazon EBS that encrypts the volumes.) For current CMK limits, see AWS KMS Resource quotas.

Note: Creating custom images from an encrypted WorkSpace is not supported. Also, WorkSpaces launched with root volume encryption enabled can take up to an hour to be provisioned.

For a detailed description of the WorkSpaces encryption process, see How Amazon WorkSpaces uses AWS KMS. Consider how the use of the CMK will be monitored to ensure that a request for an encrypted WorkSpace is serviced
correctly. For additional information about AWS KMS customer master keys and data keys, see the AWS KMS page.

Access Control Options and Trusted Devices
Amazon WorkSpaces provides customers with options to manage which client devices can access WorkSpaces. Customers can limit WorkSpaces access to trusted devices only. Access to WorkSpaces can be allowed from macOS and Microsoft Windows PCs using digital certificates. Amazon WorkSpaces can also allow or block access for iOS, Android, Chrome OS, Linux, and zero clients, as well as the WorkSpaces Web Access client. These capabilities can further improve the security posture.

Access control options are enabled for new deployments so users can access their WorkSpaces from clients on Windows, macOS, iOS, Android, Chrome OS, and zero clients. Access using Web Access or a Linux WorkSpaces client is not enabled by default for a new WorkSpaces deployment and will need to be enabled.

If there are limits on corporate data access from trusted devices (also known as managed devices), WorkSpaces access can be restricted to trusted devices with valid certificates. When this feature is enabled, Amazon WorkSpaces uses certificate-based authentication to determine whether a device is trusted. If the WorkSpaces client application can't verify that a device is trusted, it blocks attempts to log in or reconnect from the device. For more information about controlling which devices can access WorkSpaces, see Restrict WorkSpaces Access to Trusted Devices.

Note: Certificates for trusted devices apply only to Amazon WorkSpaces Windows and macOS clients. This feature does not apply to the Amazon WorkSpaces Web Access client or any third-party clients, including but not limited to Teradici PCoIP software and mobile clients, Teradici PCoIP zero clients, RDP clients, and remote desktop applications.

IP Access Control Groups
Using IP address-based control groups, customers can define and manage
groups of trusted IP addresses and allow users to access their WorkSpaces only when they're connected to a trusted network. This feature helps customers gain greater control over their security posture. IP access control groups can be added at the WorkSpaces directory level. There are two ways to get started using IP access control groups:

• IP Access Controls page — From the WorkSpaces management console, IP access control groups can be created on the IP Access Controls page. Rules can be added to these groups by entering the IP addresses or IP ranges from which WorkSpaces can be accessed. These groups can then be added to directories on the Update Details page.
• WorkSpaces APIs — The WorkSpaces APIs can be used to create, delete, and view groups; create or delete access rules; or add and remove groups from directories.

For a detailed description of using IP access control groups with Amazon WorkSpaces, see IP Access Control Groups for Your WorkSpaces.

Monitoring or Logging Using Amazon CloudWatch
Monitoring network, servers, and logs is an integral part of any infrastructure. Customers who deploy Amazon WorkSpaces need to monitor their deployments, specifically the overall health and connection status of individual WorkSpaces.

Amazon CloudWatch Metrics for WorkSpaces
CloudWatch metrics for WorkSpaces are designed to provide administrators with additional insight into the overall health and connection status of individual WorkSpaces. Metrics are available per WorkSpace, or aggregated for all WorkSpaces in an organization within a given directory. These metrics, like all CloudWatch metrics, can be viewed in the AWS Management Console (see Figure 17), accessed via the CloudWatch APIs, and monitored by CloudWatch alarms and third-party tools. By default, the following metrics are enabled and are available at no extra cost:

• Available — WorkSpaces that respond to a status check are
counted in this metric.

Figure 17 — CloudWatch metrics: ConnectionAttempt / ConnectionFailure

• Unhealthy — WorkSpaces that don't respond to the same status check are counted in this metric.
• ConnectionAttempt — The number of connection attempts made to a WorkSpace.
• ConnectionSuccess — The number of successful connection attempts.
• ConnectionFailure — The number of unsuccessful connection attempts.
• SessionLaunchTime — The amount of time taken to initiate a session, as measured by the WorkSpaces client.
• InSessionLatency — The round-trip time between the WorkSpaces client and the WorkSpace, as measured and reported by the client.
• SessionDisconnect — The number of user-initiated and automatically closed sessions.

Additionally, alarms can be created, as shown in Figure 18.

Figure 18 — Create CloudWatch alarm for WorkSpaces connection errors

Amazon CloudWatch Events for WorkSpaces
Events from Amazon CloudWatch Events can be used to view, search, download, archive, analyze, and respond to successful logins to WorkSpaces. The service can monitor client WAN IP address, operating system, WorkSpaces ID, and Directory ID information for users' logins to WorkSpaces. For example, it can use events for the following purposes:

• Store or archive WorkSpaces login events as logs for future reference, analyze the logs to look for patterns, and take action based on those patterns.
• Use the WAN IP address to determine where users are logged in from, and then use policies to allow users access only to files or data from WorkSpaces that meet the access criteria found in the CloudWatch event type WorkSpaces Access.
• Use policy controls to block access to files and applications from unauthorized IP addresses.

For more information on how to use CloudWatch Events, see the Amazon CloudWatch Events User Guide. To learn more about CloudWatch Events for
WorkSpaces, see Monitor your WorkSpaces using CloudWatch Events.

Cost Optimization

Self-Service WorkSpace Management Capabilities
In Amazon WorkSpaces, self-service WorkSpace management capabilities can be enabled for users to provide them with more control over their experience. Allowing users self-service capability can reduce your IT support staff workload for Amazon WorkSpaces. When self-service capabilities are enabled, they allow users to perform one or more of the following tasks directly from their Windows, macOS, or Linux client for Amazon WorkSpaces:

• Cache their credentials on their client. This lets users reconnect to their WorkSpace without re-entering their credentials.
• Restart their WorkSpace.
• Increase the size of the root and user volumes on their WorkSpace.
• Change the compute type (bundle) for their WorkSpace.
• Switch the running mode of their WorkSpace.
• Rebuild their WorkSpace.

There are no ongoing cost implications for allowing users the Restart and Rebuild options for their WorkSpaces. Users should be aware that a rebuild of their WorkSpace will cause it to be unavailable for up to an hour while the rebuild process takes place. Options to increase the size of the volumes, change the compute type, and switch the running mode can incur additional costs for WorkSpaces.

A best practice is to enable self-service to reduce the workload for the support team. Self-service for additional-cost items should be allowed within a workflow process that ensures that authorization for additional charges has been obtained. This can be through a dedicated self-service portal for WorkSpaces, or by integration with existing IT Service Management (ITSM) services such as ServiceNow. For more detailed information, see Enabling Self-Service WorkSpace Management Capabilities for Your Users. For an example describing enabling a structured portal for user self-service, see Automate
Amazon WorkSpaces with a Self-Service Portal.

Amazon WorkSpaces Cost Optimizer
The running mode of a WorkSpace determines its immediate availability and how it will be billed. These are the current WorkSpaces running modes:

• AlwaysOn — Use when paying a fixed monthly fee for unlimited usage of WorkSpaces. This mode is best for users who use their WorkSpace full time as their primary desktop.
• AutoStop — Use when paying for WorkSpaces by the hour. With this mode, WorkSpaces stop after a specified period of inactivity, and the state of apps and data is saved. To set the automatic stop time, use AutoStop Time (hours).

A best practice is to monitor usage and set the WorkSpaces' running mode to be the most cost-effective. This can be done with the Amazon WorkSpaces Cost Optimizer. This solution deploys an Amazon CloudWatch event that invokes an AWS Lambda function every 24 hours. The solution can convert individual WorkSpaces from an hourly billing model to a monthly billing model on any day after the usage threshold is met. If it converts a WorkSpace from hourly billing to monthly billing, it does not convert the WorkSpace back to hourly billing until the beginning of the next month, and only if usage was below the threshold. However, the billing model can be manually changed at any time using the AWS Management Console. The solution's AWS CloudFormation template includes parameters that control these conversions.

Opting Out with Tags
To prevent the solution from converting a WorkSpace between billing models, apply a resource tag to the WorkSpace using the tag key Skip_Convert and any tag value. The solution will log tagged WorkSpaces, but it will not convert them. Remove the tag at any time to resume automatic conversion for that WorkSpace. For details, see Amazon WorkSpaces Cost Optimizer.

Troubleshooting
Common administration and client issues, such as error messages like "Your
device is not able to connect to the WorkSpaces Registration service" or "Can't connect to a WorkSpace with an interactive logon banner", can be found on the Client and Admin Troubleshooting pages in the Amazon WorkSpaces Administration Guide.

AD Connector Cannot Connect to Active Directory
For AD Connector to connect to the on-premises directory, the firewall for the on-premises network must have certain ports open to the CIDRs for both subnets in the VPC. See Scenario 1: Using AD Connector to Proxy Authentication to On-Premises Active Directory Service. To test whether these conditions are met, perform the following steps.

To test the connection:
1. Launch a Windows instance in the VPC and connect to it over RDP. The remaining steps are performed on the VPC instance.
2. Download and unzip the DirectoryServicePortTest test application. The source code and Microsoft Visual Studio project files are included, so you can modify the test application if desired.
3. From a Windows command prompt, run the DirectoryServicePortTest test application with the following options:

DirectoryServicePortTest.exe -d <domain_name> -ip <server_IP_address> -tcp "53,88,135,139,389,445,464,636,49152" -udp "53,88,123,137,138,389,445,464"

<domain_name> — The fully qualified domain name, used to test the forest and domain functional levels. If the domain name is excluded, the functional levels won't be tested.
<server_IP_address> — The IP address of a domain controller in the on-premises domain. The ports are tested against this IP address. If the IP address is excluded, the ports won't be tested.

This test determines whether the necessary ports are open from the VPC to the domain. The test app also verifies the minimum forest and domain functional levels.

Troubleshooting a WorkSpace Custom Image Creation Error
If a Windows or Amazon Linux WorkSpace has been launched and customized, a custom image can be created from that WorkSpace. A custom image contains the operating system, application software, and settings for the WorkSpace. Review
the requirements to create a Windows custom image or the requirements to create an Amazon Linux custom image. Image creation requires that all prerequisites are met before image creation can start.

To confirm that a Windows WorkSpace meets the requirements for image creation, we recommend running the Image Checker. The Image Checker performs a series of tests on the WorkSpace from which an image will be created, and provides guidance on how to resolve any issues it finds. For detailed information, see installing and configuring the Image Checker. After the WorkSpace passes all tests, a "Validation Successful" message appears, and you can then create a custom bundle. Otherwise, resolve any issues that cause test failures and warnings, and repeat the process of running the Image Checker until the WorkSpace passes all tests. All failures and warnings must be resolved before an image can be created. For more information, follow the tips for resolving issues detected by the Image Checker.

Troubleshooting a Windows WorkSpace Marked as Unhealthy
The Amazon WorkSpaces service periodically checks the health of a WorkSpace by sending it a status request. The WorkSpace is marked as Unhealthy if a response isn't received from the WorkSpace in a timely manner. Common causes for this problem are:

• An application on the WorkSpace is blocking the network connection between the Amazon WorkSpaces service and the WorkSpace.
• High CPU utilization on the WorkSpace.
• The computer name of the WorkSpace has been changed.
• The agent or service that responds to the Amazon WorkSpaces service isn't in a running state.

The following troubleshooting steps can return the WorkSpace to a healthy state:

• First, reboot the WorkSpace from the Amazon WorkSpaces console. If rebooting the WorkSpace doesn't resolve the issue, either use RDP or connect to an Amazon Linux WorkSpace using SSH.
• If the WorkSpace is unreachable by a different protocol, rebuild the
WorkSpace from the Amazon WorkSpaces console.
• If a WorkSpaces connection cannot be established, verify the following:

Verify CPU Utilization
Use Task Manager to determine whether the WorkSpace is experiencing high CPU utilization. If it is, try any of the following troubleshooting steps to resolve the issue:
1. Stop any service that is consuming a high amount of CPU.
2. Resize the WorkSpace to a compute type greater than what is currently used.
3. Reboot the WorkSpace.

Note: To diagnose high CPU utilization, and for guidance if the above steps don't resolve the issue, see How do I diagnose high CPU utilization on my EC2 Windows instance when my CPU is not throttled?

Verify the Computer Name of the WorkSpace
If the computer name of the WorkSpace was changed, change it back to the original name:
1. Open the Amazon WorkSpaces console, and then expand the Unhealthy WorkSpace to show its details.
2. Copy the Computer Name.
3. Connect to the WorkSpace using RDP.
4. Open a command prompt, and then enter hostname to view the current computer name.
   • If the name matches the Computer Name from step 2, skip to the next troubleshooting section.
   • If the names don't match, enter sysdm.cpl to open system properties, and then follow the remaining steps in this section.
5. Choose Change, and then paste the Computer Name from step 2.
6. Enter the domain user credentials if prompted.

Confirm that SkyLightWorkspaceConfigService is in a Running State
• From Services, verify that the WorkSpace service SkyLightWorkspaceConfigService is in a running state. If it's not, start the service.

Verify Firewall Rules
Confirm that the Windows Firewall, and any third-party firewall that is running, have rules to allow the following ports:
• Inbound TCP on port 4172: Establish the streaming connection
• Inbound UDP on port 4172: Stream user input
• Inbound TCP on port 8200: Manage and configure the WorkSpace
• Outbound UDP on port 55002:
PCoIP streaming.

If the firewall uses stateless filtering, then open ephemeral ports 49152-65535 to allow return communication. If the firewall uses stateful filtering, then ephemeral port 55002 is already open.

Collecting a WorkSpaces Support Log Bundle for Debugging
When troubleshooting WorkSpaces issues, it is necessary to gather the log bundle from the affected WorkSpace and from the host where the WorkSpaces client is installed. There are two fundamental categories of logs:

• Server-side logs: The WorkSpace is the server in this scenario, so these are logs that live on the WorkSpace itself.
• Client-side logs: Logs on the device that the end user is using to connect to the WorkSpace.
  o Note that only Windows and macOS clients write logs locally.
  o Zero clients and iOS clients do not log.
  o Android logs are encrypted on the local storage and uploaded automatically to the WorkSpaces client engineering team. Only that team can review the logs for Android devices.

PCoIP Server-Side Logs
All of the PCoIP components write their log files into one of two folders:
• Primary location: C:\ProgramData\Teradici\PCoIPAgent\logs
• Archive location: C:\ProgramData\Teradici\logs

Sometimes, when working with AWS Support on a complex issue, it is necessary to put the PCoIP Server agent into verbose logging mode. To enable this:
1. Open the following registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults
2. In the pcoip_admin_defaults key, create the following 32-bit DWORD: pcoip.event_filter_mode
3. Set the value of pcoip.event_filter_mode to 3 (Dec or Hex).

For reference, these are the log thresholds that can be defined in this DWORD:
• 0 — CRITICAL
• 1 — ERROR
• 2 — INFO
• 3 — DEBUG

If this DWORD doesn't exist, the log level is 2 by default. It is recommended to restore a value of 2 after verbose logs are no longer needed, as they are much larger and
will consume disk space unnecessarily.

Web Access Server-Side Logs
The WorkSpaces Web Access client uses the STXHD service. The logs for Web Access are stored at C:\ProgramData\Amazon\Stxhd\Logs.

Client-Side Logs
These logs come from the WorkSpaces client that the user connects with, so the logs are on the end user's computer. The log file locations are:
• Windows: "%LOCALAPPDATA%\Amazon Web Services\Amazon WorkSpaces\Logs"
• macOS: ~/Library/Logs/Amazon Web Services/
• Linux: ~/.local/share/Amazon Web Services/Amazon WorkSpaces/logs

To help troubleshoot issues that users might experience, enable advanced logging, which can be used on any Amazon WorkSpaces client. Advanced logging is enabled for every subsequent client session until it is disabled.
1. Before connecting to the WorkSpace, the end user should enable advanced logging for their WorkSpaces client.
2. The end user should then connect as usual, use their WorkSpace, and attempt to reproduce the issue.
3. Advanced logging generates log files that contain diagnostic information and debugging-level details, including verbose performance data. This setting persists until explicitly turned off.

After the user has successfully reproduced the issue with verbose logging on, this setting should be disabled, as it generates large log files.

Automated Server-Side Log Bundle Collection for Windows
The Get-WorkSpaceLogs.ps1 script is helpful for quickly gathering a server-side log bundle for AWS Premium Support. The script can be requested from AWS Premium Support in a support case.
1. Connect to the WorkSpace using the client or using Remote Desktop Protocol (RDP).
2. Start an administrative command prompt (run as administrator).
3. Launch the script from the command prompt with the following command:

powershell.exe -NoLogo -ExecutionPolicy RemoteSigned -NoProfile -File "C:\Program Files\Amazon\WorkSpacesConfig\Scripts\Get-
WorkSpaceLogs.ps1"

4. The script creates a log bundle on the user's desktop.

The script creates a zip file with the following folders:
• C — Contains the files from Program Files, Program Files (x86), ProgramData, and Windows related to Skylight, EC2Config, Teradici, Event Viewer, and Windows logs (Panther and others).
• CliXML — Contains XML files that can be imported in PowerShell by using Import-Clixml for interactive filtering. See Import-Clixml.
• Config — Detailed logs for each check that is performed.
• ScriptLogs — Logs about the script execution (not relevant to the investigation, but useful to debug what the script does).
• tmp — Temporary folder (it should be empty).
• Traces — Packet capture done during the log collection.

How to Check Latency to the Closest AWS Region
The Connection Health Check website quickly checks whether all the required services that Amazon WorkSpaces uses can be reached. It also does a performance check against each AWS Region where Amazon WorkSpaces is available, and lets users know which one will be the fastest.

Conclusion
There is a strategic shift in end user computing as organizations strive to be more agile, better protect their data, and help their workers be more productive. Many of the benefits already realized with cloud computing also apply to end user computing. By moving their Windows or Linux desktops to the AWS Cloud with Amazon WorkSpaces, organizations can quickly scale as they add workers, improve their security posture by keeping data off devices, and offer their workers a portable desktop with access from anywhere, using the device of their choice. Amazon WorkSpaces is designed to be integrated into existing IT systems and processes, and this whitepaper described the best practices for doing so. The result of following the guidelines in this whitepaper is a cost-effective cloud desktop deployment that can securely scale with your business on the AWS global
infrastructure.

Contributors
Contributors to this document include:
• Naviero Magee, Sr EUC Solutions Architect, Amazon Web Services
• Andrew Wood, Sr EUC Solutions Architect, Amazon Web Services
• Dzung Nguyen, Sr Consultant, Amazon Web Services
• Stephen Stetler, Sr EUC Solutions Architect, Amazon Web Services

Further Reading
For additional information, see:
• Amazon WorkSpaces Administration Guide
• Amazon WorkSpaces Developer Guide
• Amazon WorkSpaces Clients
• Managing Amazon Linux 2 Amazon WorkSpaces with AWS OpsWorks for Puppet Enterprise
• Customizing the Amazon Linux WorkSpace
• How to improve LDAP security in AWS Directory Service with client-side LDAPS
• Use Amazon CloudWatch Events with Amazon WorkSpaces and AWS Lambda for greater fleet visibility
• How Amazon WorkSpaces uses AWS KMS
• AWS CLI Command Reference – WorkSpaces
• Monitoring Amazon WorkSpaces Metrics
• MATE Desktop Environment
• Troubleshooting AWS Directory Service Administration Issues
• Troubleshooting Amazon WorkSpaces Administration Issues
• Troubleshooting Amazon WorkSpaces Client Issues
• Automate Amazon WorkSpaces with a Self-Service Portal

Document Revisions
December 2020: Updated content
May 2020: Updated content and added new diagrams
July 2016: First publication
",General,consultant,Best Practices
Best_Practices_for_Deploying_Microsoft_SQL_Server_on_AWS,This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/best-practices-for-deploying-microsoft-sql-server/best-practices-for-deploying-microsoft-sql-server.html

Best Practices for Deploying Microsoft SQL Server on Amazon EC2
First Published September 2018
Updated July 28, 2021

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction 1
High availability and disaster recovery 2
Availability Zones and multi-AZ deployment 3
Using AWS Launch Wizard to deploy Microsoft SQL Server on Amazon EC2 instances 5
Multi-Region deployments 6
Disaster recovery 8
Performance optimization 10
Using Amazon Elastic Block Store (Amazon EBS) 10
Instance storage 11
Amazon FSx for Windows File Server 13
Bandwidth and latency 13
Read replicas 14
Security optimization 15
Amazon VPC 15
Encryption at rest 15
Encryption in transit 16
Encryption in use 16
AWS Key Management Service (AWS KMS) 16
Security patches 16
Cost optimization 17
Using SQL Server Developer Edition for non-production 17
Amazon EC2 CPU optimization 18
Switch to SQL Server Standard Edition 18
Z1d and R5b EC2 instance types 19
Eliminating active replica licenses 20
SQL Server on Linux 22
Operational excellence 23
Observability and root cause analysis 23
Reducing mean time to resolution (MTTR) 24
Patch management 24
Contributors 25
Document revisions 25

Abstract
This whitepaper focuses on best practices to attain the most value for the least cost when running Microsoft SQL Server on AWS. Although for many general-purpose use cases Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server (MS SQL) provides an easy and quick solution, this paper focuses on scenarios where you need to push the limits to satisfy your special requirements. In particular, this paper explains how you can minimize your costs, maximize the availability of your SQL Server databases, optimize your infrastructure for maximum performance, and tighten it for security compliance, while enabling operational excellence for ongoing maintenance. The flexibility of AWS services, combined with the power of Microsoft SQL Server, can provide expanded capabilities for those who seek innovative approaches to optimize their applications and transform their businesses.

The main focus of this paper is on the capabilities available in Microsoft SQL Server 2019, which is the most current version at the time of this publication. Existing databases that run on previous versions (2008, 2012, 2014, 2016, and 2017) can be migrated to SQL Server 2019 and run in compatibility mode. Mainstream and extended support for SQL Server 2000, 2005, and 2008 has been discontinued by Microsoft. Any database running on those versions of SQL Server must be upgraded to a supported version first. Although it is possible to run those versions of SQL Server on AWS, that discussion is outside the scope of this whitepaper.
Amazon Web Services – Best Practices for Deploying Microsoft SQL Server on Amazon EC2

Introduction
AWS offers the best cloud for SQL Server, and it is the proven, reliable, and secure cloud platform for running Windows-based applications today and in the future. SQL Server on Windows or Linux on Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds, or even thousands of server instances simultaneously.

Deploying self-managed, fully functioning, and production-ready Microsoft SQL Server instances on Amazon EC2 is now possible within a few minutes for anyone, even those without deep skills in SQL Server and cloud features or configuration nuances, thanks to AWS Launch Wizard for SQL Server. Using AWS Launch Wizard, you can quickly deploy SQL Server on EC2 Windows or Linux instances with all the best practices already implemented and included in your deployment.

Independent benchmarks have proven that SQL Server runs 2x faster with 64% lower costs on AWS when compared with the next biggest cloud provider. AWS continues to be the most preferred option for deploying and running Microsoft SQL Server. This is due to the unique combination of breadth and depth of services and capabilities offered by AWS, providing the optimum platform for MS SQL Server workloads.

Requirements for running SQL Server often fall under the following categories:
• High availability and disaster recovery
• Performance
• Security
• Cost
• Monitoring and maintenance

These requirements map directly to the five pillars of the AWS Well-Architected Framework, namely:
• Reliability
• Performance efficiency
• Security
• Cost optimization
• Operational excellence

This paper discusses each of these requirements in further detail, along with best practices using AWS services to address them.

High availability and disaster recovery
Every business seeks data solutions that can address their operational requirements. These requirements often translate to specific values of the Recovery Time Objective (RTO) and Recovery Point Objective (RPO). The RTO indicates how long the business can endure database and application outages, and the RPO determines how much data loss is tolerable. For example, an RTO of one hour tells you that, in the event of an application outage, the recovery plans should aim to bring the application back online within one hour. An RPO of zero indicates that, should there be any minor or major issues impacting the application, there should be no data loss after the application is brought back online.

The combination of RTO and RPO requirements dictates what solution should be adopted. Typically, applications with RPO and RTO values close to zero should use a high availability (HA) solution, whereas disaster recovery (DR) solutions can be used for those with higher values. In many cases, HA and DR solutions can be mixed to address more complex requirements.

Microsoft SQL Server offers several HA/DR solutions, each suitable for specific requirements. The following table compares these solutions:

Table 1: HA/DR options in Microsoft SQL Server

Solution                                HA    DR     Enterprise edition   Standard edition
Always On availability groups          Yes   Yes    Yes                  Yes (2 replicas)*
Always On failover cluster instances   Yes   Yes**  Yes                  Yes (2 nodes)
Distributed availability groups        Yes   Yes    Yes                  No
Log shipping                           No    Yes    Yes                  Yes
Mirroring (deprecated)                 Yes   Yes    Yes                  Yes (Full safety only)
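The RTO/RPO-driven selection described above can be sketched as a simple rule of thumb. The five-minute threshold below is an illustrative assumption, not a value from this paper; real targets come from your business requirements:

```python
def suggest_strategy(rto_minutes: float, rpo_minutes: float) -> str:
    """Illustrative rule of thumb: near-zero RTO and RPO call for an HA
    solution (e.g., synchronous Always On availability groups); looser
    targets can be met with a DR solution (e.g., log shipping or other
    asynchronous replication); mixed targets may combine both."""
    near_zero = 5  # minutes; assumed threshold for "close to zero"
    if rto_minutes <= near_zero and rpo_minutes <= near_zero:
        return "HA"
    if rto_minutes <= near_zero or rpo_minutes <= near_zero:
        return "HA + DR"
    return "DR"

print(suggest_strategy(1, 0))     # HA
print(suggest_strategy(240, 60))  # DR
```

In practice the choice also weighs cost, licensing, and operational complexity, as the rest of this section discusses.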
*Always On basic availability groups in SQL Server 2019 Standard edition support a single passive replica (in addition to the primary replica) for a single database per availability group. If you need multiple databases in HA mode, a separate availability group needs to be defined for each database.

**MSSQL Failover Cluster Instance is often used as a pure HA solution. However, as discussed later in this document, in AWS the FCI can also serve as a complete HA/DR solution.

These solutions rely on one or more secondary servers with SQL Server running as active or passive standby. Based on the specific HA/DR requirements, these servers can be located in close proximity to each other or far apart. In AWS, you can choose between low latency or an extremely low probability of failure. You can also combine these options to create the solution that is most suitable to your use case. This paper looks at these options and how they can be used with SQL Server workloads.

Availability Zones and multi-AZ deployment
AWS Availability Zones are designed to provide separate failure domains while keeping workloads in relatively close proximity for low-latency communications. Availability Zones are a good solution for synchronous replication of your databases using Mirroring, Always On Availability Groups, Basic Availability Groups, or Failover Cluster Instances. SQL Server provides zero data loss, and when combined with the low-latency infrastructure of Availability Zones, provides high performance.

This is one of the main differences between most on-premises deployments and AWS. For example, an Always On Failover Cluster Instance (FCI) is often used inside a single data center, because all nodes in an FCI cluster must have access to the same shared storage. Locating these nodes in
different data centers could degrade performance. However, with AWS, FCI nodes can be located in separate Availability Zones and still provide high performance, because of the low-latency network link between all Availability Zones within a Region. This feature enables a higher level of availability and could eliminate the need for a third node, which is often coupled with an FCI cluster for disaster recovery purposes.

SQL Server FCI relies on shared storage being accessible from all nodes participating in the FCI. Amazon FSx for Windows File Server is a fully managed service providing shared storage that automatically replicates the data synchronously across two Availability Zones, provides high availability with automatic failure detection, failover, and failback, and fully supports the Server Message Block (SMB) Continuous Availability (CA) feature. This enables you to simplify your SQL Server Always On deployments and use Amazon FSx as the storage tier for MS SQL FCI. Scenarios where Amazon FSx is applicable for performance tuning and cost optimization are discussed in subsequent sections of this document.

Figure 1: Using Amazon FSx as file share for Failover Cluster Instance or as file share witness in Windows Server Failover Cluster

Using AWS Launch Wizard to deploy Microsoft SQL Server on Amazon EC2 instances
AWS Launch Wizard is a service that offers a guided way of sizing,
configuring, and deploying AWS resources for third-party applications, such as Microsoft SQL Server, without the need to manually identify and provision individual AWS resources. Today, you can use AWS Launch Wizard to deploy Microsoft SQL Server with the following configurations:
• SQL Server single instance on Windows
• SQL Server single instance on Linux
• SQL Server HA using Always On Availability Groups on Windows
• SQL Server HA using Always On Availability Groups on Linux
• SQL Server HA using Always On Failover Cluster Instance on Windows

To start, you input your MS SQL workload requirements, including performance, number of nodes, licensing model, MS SQL edition, and connectivity, on the service console. Launch Wizard then identifies the correct AWS resources, such as EC2 instances and EBS volumes, to deploy and run your MS SQL instance. Launch Wizard provides an estimated cost of deployment and enables you to modify your resources to instantly view an updated cost assessment. After you approve the AWS resources, Launch Wizard automatically provisions and configures the selected resources to create a fully functioning, production-ready application. AWS Launch Wizard handles all the heavy lifting, including installation and configuration of Always On Availability Groups or Failover Cluster Instances. This is especially useful with the Linux support, as most MS SQL administrators find Linux configuration non-trivial when done manually. AWS Launch Wizard also creates CloudFormation templates that can serve as a baseline to accelerate subsequent deployments.

For post-deployment management, AWS Systems Manager (SSM) Application Manager automatically imports application resources created by AWS Launch Wizard. From the Application Manager console, you can view operations details and perform operations tasks. As discussed later in this document, you can also use SSM Automation documents to manage or remediate issues with application components or resources.
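The list of supported Launch Wizard configurations above can be captured in a small lookup helper. This is purely an illustrative encoding of the rules listed in this section, not part of the AWS Launch Wizard API:

```python
# Deployment combinations supported by AWS Launch Wizard for SQL Server,
# per the list above. (Illustrative encoding; not an AWS API.)
SUPPORTED = {
    ("single", "windows"),
    ("single", "linux"),
    ("ha-availability-group", "windows"),
    ("ha-availability-group", "linux"),
    ("ha-failover-cluster", "windows"),
}

def is_supported(topology: str, os: str) -> bool:
    """Check whether a (topology, os) pair is in the supported matrix."""
    return (topology, os) in SUPPORTED

print(is_supported("ha-failover-cluster", "windows"))  # True
print(is_supported("ha-failover-cluster", "linux"))    # False
```

Note that, per the list above, the Failover Cluster Instance topology is offered on Windows only, while single-instance and availability-group deployments are available on both operating systems.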
Figure 2: AWS Launch Wizard deploys MS SQL FCI using Amazon FSx for Windows File Server

Multi-Region deployments
For those workloads that require even more resilience against unplanned events, you can leverage the global scale of AWS to ensure availability under almost any circumstances. By default, Amazon Virtual Private Cloud (Amazon VPC) is confined within a single Region. Therefore, for a multi-Region deployment, you need to establish connectivity between your SQL Server instances that are deployed in different Regions. In AWS, there are a number of ways to do this, each suitable for a range of requirements:
• VPC peering — Provides encrypted network connectivity between two VPCs. The traffic flows through the AWS networking backbone, avoiding the latency and other hazards of the internet.
• AWS Transit Gateway — If you need to connect two or more VPCs, or on-premises sites, you can use AWS Transit Gateway to simplify the management and configuration overhead of establishing network connections between them.
• VPN connections — AWS VPN solutions are especially useful when you need to operate in a hybrid environment and connect your AWS VPCs to your on-premises sites and clients.
• VPC sharing — If your applications or other clients are spread across multiple AWS accounts, an easy way to make your SQL Server instance available to all of them is using virtual private cloud (VPC) sharing. A shared VPC can also be connected
to other VPCs using AWS Transit Gateway, AWS VPN CloudHub, VPN connections, or VPC peering. These connections are useful when workloads are spread across multiple accounts and Regions.

If you have applications or users deployed in remote Regions that need to connect to your SQL Server instances, you can use AWS Direct Connect, which provides connectivity from any Direct Connect connection to all AWS Regions.

Although it is possible to have synchronous replication in a multi-Region SQL Server deployment, the farther apart your selected Regions are, the more severe the performance penalty is for synchronous replication. Often, the best practice for multi-Region deployments is to establish asynchronous replication, especially for Regions that are geographically distant. For those workloads that come with aggressive RPO requirements, an asynchronous multi-Region deployment can be combined with a multi-AZ or single-AZ synchronous replication. You can also combine all three methods into a single solution. However, these combinations would impose a significant increase in your SQL Server license costs, which must be considered as part of your planning.

In cases involving several replicas across two or more Regions, distributed availability groups might be the most suitable option. This feature enables you to combine availability groups deployed in each Region into a larger distributed availability group. Distributed availability groups can also be used to increase the number of read replicas. A traditional availability group allows up to eight read replicas. This means you can have a total of nine replicas, including the primary. Using a distributed availability group, a second availability group can be added to the first, increasing the total number of replicas to 18. This process can be repeated with a third availability group and a second distributed availability group. The second distributed availability group can be configured to include either the first or second
availability groups as its primary. The distributed availability group is the means through which SQL Server Always On can achieve virtually unlimited scale.

Another use case for a distributed availability group is zero-downtime database migrations, where, during the migration, a read-only replica is available at the target destination. The independence of the SQL Server distributed availability group from Active Directory and Windows Server Failover Cluster (WSFC) is the key enabler for these cases. It enables you to keep both sides of the migration synchronized without having to worry about the complexities of Active Directory or WSFC. See How to architect a hybrid Microsoft SQL Server solution using distributed availability groups for more details.

Figure 2: SQL Server distributed availability group in AWS

Disaster recovery
Similar to HA solutions, DR solutions require a replica of SQL Server databases on another server. However, for DR, the other server is often in a remote site, far away from the primary site. This means higher latency and, therefore, lower performance if you rely on HA solutions that use synchronous replication. DR solutions often rely on asynchronous replication of data.

Similar to HA, DR solutions are based on either block-level or database-level replication. For example, SQL Server Log Shipping replicates data at the database level, while Windows Storage Replica can be used to
implement block-level replication. DR solutions are selected based on requirements such as cost, RPO, RTO, complexity, and the effort to implement each solution.

In addition to common SQL Server DR solutions, such as Log Shipping and Windows Storage Replica, AWS also provides CloudEndure Disaster Recovery. You can use CloudEndure Disaster Recovery to reduce downtime to a few minutes, protect against data loss for sub-second RPO, simplify implementation, increase reliability, and decrease the total cost of ownership.

CloudEndure is an agent-based solution that replicates entire virtual machines, including the operating system, all installed applications, and all databases, into a staging area. The staging area contains low-cost resources automatically provisioned and managed by CloudEndure Disaster Recovery. This greatly reduces the cost of provisioning duplicate resources. Because the staging area does not run a live version of your workloads, you don't need to pay for duplicate software licenses or high-performance compute. Rather, you pay for low-cost compute and storage. The fully provisioned recovery environment, with the right-sized compute and higher-performance storage required for recovered workloads, is launched only during a disaster or drill. AWS also makes CloudEndure available at no additional cost for migration projects.

Figure 3: CloudEndure disaster recovery

Performance optimization
In some cases, maximizing performance might be your utmost priority. Both SQL Server and AWS have several options to substantially increase the performance of your workloads.

Using Amazon Elastic Block Store (Amazon EBS)
Amazon EBS is a single-AZ block storage service with a number of flexible
options to cater to diverse requirements. When it comes to maximizing performance with consistent and predictable results on a single volume, using a Provisioned IOPS Solid State Drive (SSD) volume type (io2 and io2 Block Express) is the easiest choice. You can provision up to 64,000 input/output operations per second (IOPS) per io2 EBS volume (based on 16 KiB I/O size), along with 1,000 MiB/s of throughput. For more demanding workloads, io2 Block Express EBS volumes support up to 256,000 IOPS and 4,000 MiB/s of throughput per volume.

If you need more IOPS and throughput than a single EBS volume provides, you can create multiple volumes and stripe them in your Windows or Linux instance (Microsoft SQL Server 2017 and later can be installed on both Windows and Linux systems). Striping enables you to further increase the available IOPS per instance, up to 260,000, and throughput per instance, up to 7,500 MB/s. Remember to use EBS-optimized EC2 instance types. This means a dedicated network connection is allocated to serve requests between your EC2 instance and the EBS volumes attached to it.

While you can use a single Provisioned IOPS (io1, io2, or io2 Block Express) volume to meet your IOPS and throughput requirements, General Purpose SSD (gp2 and gp3) volumes offer a better balance of price and performance for SQL Server workloads when configured appropriately. General Purpose SSD (gp2) volumes deliver single-digit millisecond latencies and the ability to burst to 16,000 IOPS for extended periods. This ability is well suited to SQL Server: the IOPS load generated by a relational database like SQL Server tends to spike frequently. For example, table scan operations require a burst of throughput, while other transactional operations require consistent low latency.

One of the major benefits of using EBS volumes is the ability to create point-in-time, instantaneous EBS snapshots. This feature copies the EBS snapshot to Amazon Simple Storage Service (Amazon S3) infrastructure, which provides
99.999999999% durability. Although EBS volumes are confined to a single AZ, EBS snapshots can be restored to any AZ within the same Region. Note that block-level snapshots are not the same as database backups, and not all features of database backups are attainable this way. Therefore, this method is often combined and complemented with a regular database backup plan.

Although each EBS volume can be as large as 64 TB, and therefore could take a long time to transfer all its data to Amazon S3, EBS snapshots are always point-in-time. This means SQL Server and other applications can continue reading and writing to and from the EBS volume while data is being transferred in the background. When you restore a volume from a snapshot, the volume is immediately available to applications for read and write operations. However, it takes some time until it reaches its full performance capacity. Using Amazon EBS fast snapshot restore, you can eliminate the latency of input/output (I/O) operations on a block when it is accessed for the first time. Volumes created using fast snapshot restore instantly deliver all of their provisioned performance.

You can use AWS Systems Manager Run Command to take application-consistent EBS snapshots of your online SQL Server files at any time, with no need to bring your database offline or into read-only mode. The snapshot process uses the Windows Volume Shadow Copy Service (VSS) to take image-level backups of VSS-aware applications. Microsoft SQL Server is VSS-aware and is fully compatible with this technique. It is also possible to take VSS snapshots of Linux instances; however, that process requires some manual steps, because Linux does not natively support VSS. You can
also take crash-consistent EBS snapshots across multiple EBS volumes attached to a Windows or Linux EC2 instance, without using orchestrator applications. Using this method, you only lose uncommitted transactions and writes that are not flushed to the disk. SQL Server is capable of restoring databases to a consistent point before the crash time. This feature is also supported through AWS Backup.

EBS volumes are simple and convenient to use, and in most cases effective, too. However, there might be circumstances where you need even higher IOPS and throughput than what is achievable using Amazon EBS.

Instance storage
Storage-optimized EC2 instance types use fixed-size local disks, and a variety of different storage technologies are available. Among these, Non-Volatile Memory Express (NVMe) is the fastest technology, with the highest IOPS and throughput. The i3 class of instance types provides NVMe SSD drives. For example, i3.16xlarge comes with eight disks, each with 1.9 TB of SSD storage.

When selecting storage-optimized EC2 instance types for maximum performance, it is essential to understand that some of the smaller instance types provide instance storage that is shared with other instances. These are virtual disks that reside on a physical disk attached to the physical host. By selecting a bigger instance type, such as i3.2xlarge, you ensure that there is a 1:1 correspondence between your instance store disk and the underlying physical disk. This ensures consistent disk performance and eliminates the noisy neighbor problem.

Instance disks are ephemeral and live only as long as their associated EC2 instance. If the EC2 instance fails, or is stopped or terminated, all of its instance storage disks are
wiped out, and the data stored on them is irrecoverable. Unlike EBS volumes, instance storage disks cannot be backed up using a snapshot. Therefore, if you choose to use EC2 instance storage for your permanent data, you need to provide a way to increase its durability.

One suitable use for instance storage is the tempdb system database files, because those files are recreated each time the SQL Server service is restarted. SQL Server drops all tempdb temporary tables and stored procedures during shutdown. As a best practice, the tempdb files should be stored on a fast volume, separate from user databases. For the best performance, ensure that the tempdb data files within the same filegroup are the same size and stored on striped volumes.

Another use for EC2 instance storage is the buffer pool extension, which is available on both the Enterprise and Standard editions of Microsoft SQL Server. This feature uses fast random-access disks (SSD) as a secondary cache between RAM and persistent disk storage, striking a balance between cost and performance when running workloads on SQL Server.

Although instance storage disks are the fastest available to EC2 instances, their performance is capped at the speed of the physical disk. You can go beyond the single-disk maximum by striping across several disks. You could also use instance storage disks as the cache layer in Storage Spaces (for single Windows instances) and Storage Spaces Direct (for Windows Server failover clusters) storage pools.

Amazon FSx for Windows File Server
Amazon FSx for Windows File Server is another storage option for SQL Server on Amazon EC2. This option is suitable for three major use cases:
• As shared
storage used by SQL Server nodes participating in a Failover Cluster Instance
• As a file share witness to be used with any SQL Server cluster on top of Windows Server Failover Cluster
• As an option to attain higher throughput levels than available with dedicated EBS optimization

The first two cases were discussed in an earlier section of this document. To better understand the third case, note that EBS throughput depends on EC2 instance size. Smaller EC2 instance sizes provide lower EBS throughput; therefore, to attain higher EBS throughput, you need bigger instance sizes. However, bigger instance sizes cost more. If a workload leaves a big portion of its network bandwidth unused, but requires higher throughput to access underlying storage, using a shared file system over SMB may unlock its required performance while reducing cost through smaller EC2 instance sizes.

Amazon FSx provides fast performance, with baseline throughput up to 2 GB/second per file system, hundreds of thousands of IOPS, and consistent sub-millisecond latencies. To provide the right performance for your SQL instances, you can choose a throughput level that is independent of your file system size. Higher levels of throughput capacity also come with higher levels of IOPS that the file server can serve to the SQL Server instances accessing it. The storage capacity determines not only how much data you can store, but also how many I/O operations per second (IOPS) you can perform on the storage: each GB of storage provides three IOPS. You can provision each file system to be up to 64 TB in size.

Bandwidth and latency
When tuning for performance, it is important to remember the difference between latency and bandwidth. You should find a balance between network latency and availability. To gain the highest bandwidth on AWS, you can leverage enhanced networking and the Elastic Network Adapter (ENA), or the newer Elastic Fabric Adapter (EFA), which, when combined with a new generation of EC2 instances, such as C6gn, C5n, R5n,
I3en, or G4dn, can provide up to 100 Gbps of bandwidth. But this high bandwidth has no effect on latency. Network latency changes in direct correlation with the distance between interconnecting nodes. Clustering nodes is a way to increase availability, but placing cluster nodes too close to each other increases the probability of simultaneous failure, reducing availability. Putting them too far apart yields the highest availability, but at the expense of higher latency. AWS Availability Zones within each AWS Region are engineered to provide a balance that fits most practical cases. Each AZ is engineered to be physically separated from other Availability Zones, while remaining in close geographic proximity to provide low network latency. Therefore, in the overwhelming majority of cases, the best practice is to spread cluster nodes across multiple Availability Zones.

Read replicas
You might determine that many of your database transactions are read-only queries, and that the sheer number of incoming connections is flooding your database. Read replicas are a well-known solution for this situation. You can offload your read-only transactions from your primary SQL Server instance to one or more read replica instances. Read replicas can also be used to perform backup operations, relieving the primary instance of performance hits during backup windows. When using availability group listeners, if you mark your connection strings as read-only, SQL Server routes incoming connections to any available read replicas, and only sends read/write transactions to the primary instance.

Always On Availability Groups, introduced in SQL Server 2012, supported up to four secondary replicas. In more recent versions of SQL
Server (2014, 2016, 2017, and 2019), Always On Availability Groups support one set of primary databases and one to eight sets of corresponding secondary databases.

There might be cases where you have users or applications connecting to your databases from geographically dispersed locations. If latency is a concern, you can locate read replicas close to your users and applications. When you use a secondary database for read-only transactions, you must ensure that the server software is properly licensed.

Security optimization
Cloud security at AWS is the highest priority, and there are many AWS security features available to you. These features can be combined with the built-in security features of Microsoft SQL Server to satisfy even the most stringent requirements and expectations.

Amazon VPC
There are many features in Amazon VPC that can help you secure your data in transit. You can use security groups to restrict access to your EC2 instances and allow only certain endpoints and protocols. You can also use network access control lists to deny known sources of threats. A best practice is to deploy your SQL Server instances in private subnets inside a VPC, and only allow access to the internet through a VPC network address translation (NAT) gateway or a custom NAT instance.

Encryption at rest
If you are using EBS volumes to store your SQL Server database files, you have the option to enable block-level encryption. Amazon EBS transparently handles encryption and decryption for you. This is available through a simple check box, with no further action necessary. Amazon FSx for Windows File Server also includes built-in encryption at rest. Both EBS and Amazon FSx are integrated with AWS Key
Management Service (AWS KMS) for managing encryption keys. This means that, through AWS KMS, you can either use keys provided by AWS or bring your own keys. For more information, see the AWS KMS documentation.

At the database level, you can use SQL Server Transparent Data Encryption (TDE), a feature available in Microsoft SQL Server that provides transparent encryption of your data at rest. TDE is available on Amazon RDS for SQL Server, and you can also enable it on your SQL Server workloads on EC2 instances. Previously, TDE was only available on SQL Server Enterprise Edition. However, SQL Server 2019 has also made it available on Standard Edition. If you want to have encryption at rest for your database files on Standard Edition on an earlier version of SQL Server, you can use EBS encryption instead.

It's important to understand the tradeoffs and differences between EBS encryption and TDE. EBS encryption is done at the block level; that is, data is encrypted when it is stored and decrypted when it is retrieved. With TDE, however, the encryption is done at the file level: database files are encrypted and can only be decrypted using the corresponding certificate. For example, this means that if you use EBS encryption without TDE and copy your database data or log files from your EC2 instance to an S3 bucket that does not have encryption enabled, the files will not be encrypted. Furthermore, if someone gains access to your EC2 instance, the database files will be exposed instantly. However, there is no performance penalty when using EBS encryption, whereas enabling TDE adds additional drag on your server resources.

Encryption in transit
As a best practice, you can enable encryption in transit for your SQL Server
workloads using the SSL/TLS protocol. Microsoft SQL Server supports encrypted connections, and SQL Server workloads in AWS are no exception. When using the SMB protocol for the SQL Server storage layer, Amazon FSx automatically encrypts all data in transit using SMB encryption as you access your file system, without the need for you to modify SQL Server or other applications' configurations.

Encryption in use

Microsoft SQL Server offers Always Encrypted to protect sensitive data using client certificates. This provides a separation between those who own the data and can view it, and those who manage the data but should have no access. This feature is available on both Amazon RDS for SQL Server and SQL Server workloads on Amazon EC2.

AWS Key Management Service (AWS KMS)

AWS KMS is a fully managed service to create and store encryption keys. You can use KMS-generated keys or bring your own keys. In either case, keys never leave AWS KMS and are protected from any unauthorized access. You can use KMS keys to encrypt your SQL Server backup files when you store them on Amazon S3, Amazon S3 Glacier, or any other storage service. Amazon EBS encryption also integrates with AWS KMS.

Security patches

One of the most common security requirements is the regular deployment of security patches and updates. In AWS, you can use AWS Systems Manager Patch Manager to automate this process. Note that use cases for Patch Manager are not restricted to security patches. For more details, refer to the Patch management section of this whitepaper.

Cost optimization

SQL Server can be hosted on AWS through License Included (LI) and Bring Your Own License (BYOL) licensing models. With LI, you run SQL Server on AWS and
pay for the licenses as a component of your AWS hourly usage bill. The advantage of this model is that you do not need any long-term commitments and can stop using the product, and stop paying for it, at any time. However, many businesses already have considerable investments in SQL Server licenses and might want to reuse their existing licenses on AWS. This is possible using BYOL:

• If you have Software Assurance (SA), one of its benefits is the Microsoft License Mobility through Software Assurance program. This program enables you to use your licenses on server instances running anywhere, including on Amazon EC2 instances.

• If you don't have SA, you may still be able to use your own licenses on AWS using Amazon EC2 Dedicated Hosts. For more details, consult the licensing section of the FAQ for Microsoft workloads on AWS to ensure license compliance.

The BYOL option on EC2 Dedicated Hosts can significantly reduce costs, because the number of physical cores on an EC2 host is about half the total number of vCPUs available on that host. However, one of the common challenges with this option is the difficulty of tracking license usage and compliance. AWS License Manager helps you solve this problem by tracking license usage and optionally enforcing license compliance based on your defined license terms and conditions. AWS License Manager is available to AWS customers at no additional cost.

Using SQL Server Developer Edition for non-production

One of the easiest ways to save licensing costs is to use SQL Server Developer Edition for environments that are not going to be used by application end users. These are typically Dev, Staging, Test, and User Acceptance Testing (UAT) environments. For this, you can download SQL Server Developer Edition installation media from the Microsoft website and install it on your EC2 instances. SQL Server Developer Edition is equivalent to SQL Server Enterprise Edition, with full features and functionality.

Amazon EC2 CPU optimization

The z1d instance types provide maximum CPU power, enabling you to reduce the number of CPU cores for compute-intensive SQL Server deployments. However, your SQL Server deployments might not be compute intensive, and might instead require an EC2 instance type that emphasizes other resources, such as memory or storage. Because EC2 instance types that provide these resources also provide a fixed number of cores, which might be more than you require, the end result can be a higher licensing cost for cores that are not used at all. AWS offers a solution for these situations: you can use EC2 CPU optimization to reduce the number of cores available to an EC2 instance and avoid unnecessary licensing costs.

Switch to SQL Server Standard Edition

Enterprise-grade features of SQL Server were once exclusively available in the Enterprise edition. However, many of these features have since been made available in the Standard edition, enabling you to switch to the Standard edition if you've been using Enterprise edition only for those features. One example is encryption at rest using Transparent Data Encryption (TDE), which is available in the Standard edition as of SQL Server 2019.

One of the most common reasons for using Enterprise edition has always been its mission-critical HA capabilities. However, there are alternative options that enable switching to Standard edition without degrading availability. You can use these options to cost-optimize your SQL Server deployments.

One option is using Always On Basic Availability Groups. This option is similar to Always On Availability Groups but comes with a number of limitations. The most important limitation is that you can have only one database in a basic availability group. This limitation rules this option out for applications that rely on multiple databases.

The other option is using an Always On Failover Cluster Instance (FCI). Because FCI provides HA at the instance level, it does not matter how many databases are hosted on your SQL Server instance. Traditionally, this option was restricted to HA within a single data center. However, as discussed earlier, you can use Amazon FSx for Windows File Server to overcome that limitation. See the High availability and disaster recovery section of this document.

You can simplify the complexity and cost of running Microsoft SQL Server FCI deployments using Amazon FSx in the following scenarios:

• Due to the complexity and cost of implementing a shared storage solution for FCI, you might have opted to use availability groups and SQL Server Enterprise Edition. You can now switch to Standard edition, significantly reduce your licensing costs, and simplify the overall complexity of your SQL Server deployment and its ongoing management.

• You might already use SQL Server FCI with shared storage using a third-party storage replication software solution. That implies that you purchased a license for the storage replication solution and then deployed, administered, and maintained the shared storage solution yourself. You can now switch to a fully managed shared storage solution with Amazon FSx, simplifying and reducing costs for your SQL Server FCI deployment.

• You ran your SQL Server Always On deployment on premises using a combination of FCI and AG: FCI to provide high availability within your primary data center site, and AG to provide a DR solution across sites. The combination of
Availability Zones and the support in Amazon FSx for highly available shared storage deployed across multiple Availability Zones now makes it possible for you to eliminate the need for separate HA and DR solutions, reducing costs as well as simplifying deployment complexities. For a more detailed discussion of Microsoft SQL Server FCI deployments using Amazon FSx, see the blog post Simplify your Microsoft SQL Server high availability deployments using Amazon FSx for Windows File Server.

Z1d and R5b EC2 instance types

The high-performance z1d and R5b instance types are optimized for workloads that carry high licensing costs, such as Microsoft SQL Server and Oracle databases. The z1d instance type is built with a custom Intel Xeon Scalable Processor that delivers a sustained all-core Turbo frequency of up to 4.0 GHz, which is significantly faster than other instances. The R5b uses similar technology, with 3.1 to 3.5 GHz. For workloads that need faster sequential processing, you can run fewer cores with a z1d instance and get the same or better performance as running other instances with more cores. For example, moving an SQL Server Enterprise workload running on an r4.4xlarge to a z1d.3xlarge can deliver up to 24% in savings because of licensing fewer cores. The same savings are realized when moving a workload from an r4.16xlarge to a z1d.12xlarge, as it is the same 4:3 ratio.

Figure 4: TCO comparison between SQL Server on r4.4xlarge and z1d.3xlarge

Eliminating active replica licenses

Another opportunity for cost optimization in the cloud is through applying a combination of BYOL and LI models. A common use case is SQL Server Always On Availability Groups with active replicas. Active replicas are used primarily for:

• Reporting
• Backup
• OLAP batch jobs
• HA

Of these four operations, the first three are often performed intermittently. This means you would not need an instance continuously up and dedicated to running those operations. In a traditional on-premises environment, you would have to create an active replica that is continuously synchronized with the primary instance, which means you need to obtain an additional license for the active replica.

Figure 5: SQL Server active replication on premises

In AWS, there is an opportunity to optimize this architecture by replacing the active replica with a passive replica, therefore relegating its role solely to HA. Other operations can be performed on a separate instance using License Included, which could run for a few hours and then be shut down or terminated. The data can be restored through an EBS snapshot of the primary instance. This snapshot can be taken using VSS-enabled EBS snapshots, ensuring no performance impact or downtime on the primary.

Figure 6: Eliminating active replica licenses in AWS

This solution is applicable when jobs on the active replica run with a low frequency. However, if you need a replica for jobs that run continuously or at a high frequency, consider using AWS Database Migration Service (AWS DMS) to continuously replicate data from your primary instance into a secondary. The
primary benefit of this method is that, because you can do it using SQL Server Standard edition, it avoids the high cost of SQL Server Enterprise edition licensing. Refer to the AWS Microsoft licensing page for more details on ways you can optimize licensing costs on AWS.

SQL Server on Linux

Deploying SQL Server on Linux is a way to eliminate Windows license costs. Installation and configuration of SQL Server on Linux can be non-trivial. However, as discussed earlier in this document, AWS Launch Wizard helps simplify that by taking care of most of the heavy lifting for you.

Operational excellence

Most of the discussions in this whitepaper pertain to the best practices available for deploying Microsoft SQL Server in AWS. However, another crucial dimension is operating and maintaining these workloads post-deployment. As a general principle, the best practice is to assume that failures and incidents happen all the time, so it's important to be prepared and equipped to respond to them. This objective is composed of three sub-objectives:

• Observe and detect anomalies
• Detect the root cause
• Act to resolve the problem

AWS provides tools and services for each of these purposes.

Observability and root cause analysis

Amazon CloudWatch is a service that enables real-time monitoring of AWS resources and other applications. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. Amazon CloudWatch Application Insights for .NET and SQL Server is a feature of Amazon CloudWatch designed to enable operational excellence for Microsoft SQL Server and .NET applications. Once enabled, it identifies and sets up key metrics and logs across your application resources and technology stack. It continuously monitors the metrics and logs to detect anomalies and errors, while using artificial intelligence and machine learning (AI/ML) to correlate detected errors and anomalies. When errors and anomalies are detected, Application Insights generates CloudWatch Events. To aid with troubleshooting, it creates automated dashboards for the detected problems, which include correlated metric anomalies and log errors, along with additional insights to point you to the potential root cause. Using AWS Launch Wizard, you can choose to enable Amazon CloudWatch Application Insights with a single click. AWS Launch Wizard handles all the configuration necessary to make your SQL Server instance observable through Amazon CloudWatch Application Insights.

Reducing mean time to resolution (MTTR)

The automated dashboards generated by Amazon CloudWatch Application Insights help you take swift remedial actions to keep your applications healthy and to prevent impact to the end users of your application. It also creates OpsItems so you can resolve problems using AWS Systems Manager OpsCenter. AWS Systems Manager is a service that enables you to view and control your infrastructure in AWS, on premises, and in other clouds. OpsCenter is a capability of AWS Systems Manager designed to reduce the mean time to resolution. OpsCenter also provides Systems Manager Automation documents (runbooks) that you can use to fully or partially automate the resolution of issues.

Patch management

AWS Systems Manager Patch Manager is a comprehensive patch management solution, fully integrated with native Windows APIs and supporting Windows Server and Linux operating systems, as well as Microsoft applications including Microsoft SQL Server. Systems Manager Patch Manager integrates with AWS Systems Manager Maintenance Windows, allowing you to define a predictable schedule to prevent potential disruption of business operations. You can also use AWS Systems Manager Configuration Compliance dashboards to quickly see patch compliance state or other configuration inconsistencies across your fleet.

Conclusion

This whitepaper described a number of best practices for deploying Microsoft SQL Server workloads on AWS. It discussed how AWS services can be used to complement Microsoft SQL Server features to address different requirements. AWS offers the greatest breadth and depth of services in the cloud, and Amazon EC2 is the most flexible option for deploying Microsoft SQL Server workloads. Each solution and its associated trade-offs may be embraced according to particular business requirements. The five pillars of the AWS Well-Architected Framework (reliability, security, performance, cost optimization, and operational excellence) were explored as applicable to SQL Server workloads, and the AWS services supporting each requirement were introduced.

Contributors

The following individuals and organizations contributed to this document:

• Sepehr Samiei, Solutions Architect, Amazon Web Services

Document revisions

July 28, 2021: Updated for new AWS services and capabilities supporting Microsoft SQL Server workloads
May 2020: Updated for new AWS services and capabilities supporting Microsoft SQL Server workloads
March 2019: Updated for Total Cost of Ownership (TCO) using z1d instance types and EC2 CPU Optimization
September 2018: First
publication,General,consultant,Best Practices Best_Practices_for_Migrating_from_RDBMS_to_Amazon_DynamoDB,This paper has been archived. For the latest technical content, refer to the HTML version: https://docs.aws.amazon.com/whitepapers/latest/best-practices-for-migrating-from-rdbms-to-dynamodb/welcome.html

Best Practices for Migrating from RDBMS to Amazon DynamoDB
Leverage the Power of NoSQL for Suitable Workloads
Nathaniel Slater
March 2015

This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB, August 2014

Contents
Abstract 2
Introduction 2
Overview of Amazon DynamoDB 4
Suitable Workloads 6
Unsuitable Workloads 7
Key Concepts 8
Migrating to DynamoDB from RDBMS 13
Planning Phase 13
Data Analysis Phase 15
Data Modeling Phase 17
Testing Phase 21
Data Migration Phase 22
Conclusion 23
Cheat Sheet 23
Further Reading 23

Abstract

Today, software architects and developers have an array of choices for data storage and persistence. These include not only traditional relational database management systems (RDBMS) but also NoSQL databases such as Amazon DynamoDB. Certain workloads will scale better, and be more cost-effective to run, using a NoSQL solution. This paper will highlight the best practices for migrating these workloads from an RDBMS to DynamoDB. We will discuss how NoSQL databases like DynamoDB differ from a traditional RDBMS, and propose a framework for analysis, data modeling, and migration of data from an RDBMS into DynamoDB.

Introduction

For decades, the RDBMS was the de facto choice for data storage and persistence. Any data-driven application, be it an e-commerce website or an expense reporting system, was almost certain to use a relational database to retrieve and store the data required by the application. The reasons for this are numerous and include the following:

• RDBMS is a mature and stable technology.
• The query language, SQL, is feature-rich and versatile.
• The servers that run an RDBMS engine are typically some of the most stable and powerful in the IT infrastructure.
• All major programming languages contain support for the drivers used to communicate with an RDBMS, as well as a rich set of tools for simplifying the development of database-driven applications.

These factors and many others have supported this incumbency of the RDBMS. For architects and software developers, there simply wasn't a reasonable alternative for data storage and persistence – until now. The growth of "internet scale" web applications such as e-commerce and social media, the explosion of connected devices like smartphones and tablets, and the rise of big data have resulted in new workloads that traditional relational databases are not well suited to handle. As a system designed for transaction processing, the fundamental properties that all RDBMS must support are defined by the acronym ACID: Atomicity, Consistency, Isolation, and Durability. Atomicity means "all or nothing": a transaction executes completely or not at all. Consistency means that the execution of a transaction causes a valid state transition; once the transaction has been committed, the state of the resulting data must conform to the constraints imposed by the database schema. Isolation requires that concurrent transactions execute separately from one another; the isolation property guarantees that if concurrent transactions were executed in serial, the end state of the data would be the same. Durability requires that the state of the data, once a transaction executes, be preserved; in the event of power or system failure, the database should be able to recover to the last known state.
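The transactional behavior just described can be made concrete with a few lines of code. The sketch below is an illustration only, not part of the original paper: it uses Python's built-in sqlite3 module, and the accounts table, names, and amounts are hypothetical. It shows atomicity's "all or nothing" guarantee: when a constraint fails partway through a transaction, neither update is applied.

```python
import sqlite3

# Illustrative only: a bank-style transfer showing ACID atomicity.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts ("
    " name TEXT PRIMARY KEY,"
    " balance INTEGER NOT NULL CHECK (balance >= 0))"
)
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Debit src and credit dst as one transaction: both happen or neither."""
    try:
        with conn:  # commits on success, rolls back on any exception
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, src),
            )
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst),
            )
        return True
    except sqlite3.IntegrityError:
        return False  # the CHECK constraint rejected the debit; nothing applied

ok = transfer(conn, "alice", "bob", 30)     # succeeds: alice 70, bob 80
bad = transfer(conn, "alice", "bob", 1000)  # fails atomically: balances unchanged
```

Consistency appears here too: the CHECK constraint defines the valid states, and the failed transfer leaves the database in its last consistent state, which is also what durability would preserve across a crash in a disk-backed database.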
These ACID properties are all desirable, but support for all four requires an architecture that poses some challenges for today's data-intensive workloads. For example, consistency requires a well-defined schema and that all data stored in a database conform to that schema. This is great for ad hoc queries and read-heavy workloads. For a workload consisting almost entirely of writes, such as the saving of a player's state in a gaming application, this enforcement of schema is expensive from a storage and compute standpoint. The game developer benefits little by forcing this data into rows and tables that relate to one another through a well-defined set of keys.

Consistency also requires locking some portion of the data until the transaction modifying it completes, and then making the change immediately visible. For a bank transaction, which debits one account and credits another, this is required. This type of transaction is called "strongly consistent." For a social media application, on the other hand, there really is no requirement that all users see an update to a data feed at precisely the same time. In this latter case, the transaction is "eventually consistent." It is far more important that the social media application scale to handle potentially millions of simultaneous users, even if those users see changes to the data at different times. Scaling an RDBMS to handle this level of concurrency while maintaining strong consistency requires upgrading to more powerful (and often proprietary) hardware. This is called "scaling up" or "vertical scaling," and it usually carries an extremely high cost. The more cost-effective way to scale a database to support this level of concurrency is to add server instances running on commodity hardware. This is called "scaling out" or "horizontal scaling," and it is typically far more cost-effective than vertical scaling.

NoSQL databases like Amazon DynamoDB address the scaling and performance challenges found with RDBMS. The term "NoSQL" simply means that the database doesn't follow the relational model espoused by E.F. Codd in his 1970 paper A Relational Model of Data for Large Shared Data Banks,1 which would become the basis for all modern RDBMS. As a result, NoSQL databases vary much more widely in features and functionality than a traditional RDBMS. There is no common query language analogous to SQL, and query flexibility is generally replaced by high I/O performance and horizontal scalability. NoSQL databases don't enforce the notion of schema in the same way as an RDBMS. Some may store semi-structured data like JSON. Others may store related values as column sets. Still others may simply store key/value pairs. The net result is that NoSQL databases trade some of the query capabilities and ACID properties of an RDBMS for a much more flexible data model that scales horizontally. These characteristics make NoSQL databases an excellent choice in situations where use of an RDBMS for non-relational workloads (like the aforementioned game state example) is resulting in some combination of performance bottlenecks, operational complexity, and rising costs. DynamoDB offers solutions to all these problems and is an excellent platform for migrating these workloads off of an RDBMS.

1 http://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf

Overview of Amazon DynamoDB

Amazon DynamoDB is a fully managed NoSQL database service running in the AWS cloud. The complexity of running a massively scalable, distributed NoSQL database is managed by the service itself, allowing software developers to focus on building applications rather than managing infrastructure. NoSQL databases are designed for scale, but their architectures are sophisticated, and there can be significant operational overhead in running a large NoSQL cluster. Instead of having to become experts in advanced distributed computing concepts, developers need only learn DynamoDB's straightforward API using the SDK for the programming language of their choice. In addition to being easy to use, DynamoDB is also cost-effective. With DynamoDB, you pay for the storage you are consuming and the I/O throughput you have provisioned. It is designed to scale elastically. When the storage and throughput requirements of an application are low, only a small amount of capacity needs to be provisioned in the DynamoDB service. As the number of users of an application grows and the required I/O throughput increases, additional capacity can be provisioned on the fly. This enables an application to seamlessly grow to support millions of users making thousands of concurrent requests to the database every second.

Tables are the fundamental construct for organizing and storing data in DynamoDB. A table consists of items. An item is composed of a primary key that uniquely identifies it, and key/value pairs called attributes. While an item is similar to a row in an RDBMS table, all the items in the same DynamoDB table need not share the same set of attributes in the way that all rows in a relational table share the same columns. Figure 1 shows the structure of a DynamoDB table and the items it contains. There is no concept of a column in a DynamoDB table. Each item in the table can be expressed as a tuple containing an arbitrary number of elements, up to a maximum size of 400 KB. This data model is well suited for storing data in the formats commonly used for object serialization and messaging in distributed systems. As we will see in the next section, workloads that involve this type of data are good
candidates to migrate to DynamoDB.

Figure 1: DynamoDB Table Structure

Tables and items are created, updated, and deleted through the DynamoDB API. There is no concept of a standard DML language like there is in the relational database world. Manipulation of data in DynamoDB is done programmatically through object-oriented code. It is possible to query data in a DynamoDB table, but this too is done programmatically through the API. Because there is no generic query language like SQL, it's important to understand your application's data access patterns well in order to make the most effective use of DynamoDB.

Suitable Workloads

DynamoDB is a NoSQL database, which means that it will perform best for workloads involving non-relational data. Some of the more common use cases for non-relational workloads are:

• AdTech
  o Capturing browser cookie state
• Mobile applications
  o Storing application data and session state
• Gaming applications
  o Storing user preferences and application state
  o Storing players' game state
• Consumer "voting" applications
  o Reality TV contests, Super Bowl commercials
• Large-scale websites
  o Session state
  o User data used for personalization
  o Access control
• Application monitoring
  o Storing application log and event data
  o JSON data
• Internet of Things
  o Sensor data and log ingestion

All of these use cases benefit from some combination of the features that make NoSQL databases so powerful. AdTech applications typically require extremely low latency, which is well suited to DynamoDB's low single-digit-millisecond read and write performance. Mobile applications and consumer voting applications often have millions of users and need to handle thousands of requests per second; DynamoDB can scale horizontally to meet this load. Finally, application monitoring solutions typically ingest hundreds of thousands of data points per minute, and DynamoDB's schemaless data model, high I/O performance, and support for a native JSON data type are a great fit for these types of applications.

Another important characteristic to consider when determining if a workload is suitable for a NoSQL database like DynamoDB is whether it requires horizontal scaling. A mobile application may have millions of users, but each installation of the application will only read and write session data for a single user. This means the user session data in the DynamoDB table can be distributed across multiple storage partitions. A read or write of data for a given user will be confined to a single partition. This allows the DynamoDB table to scale horizontally: as more users are added, more partitions are created. As long as requests to read and write this data are uniformly distributed across partitions, DynamoDB will be able to handle a very large amount of concurrent data access. This type of horizontal scaling is difficult to achieve with an RDBMS without the use of "sharding," which can add significant complexity to an application's data access layer. When data in an RDBMS is "sharded," it is split across different database instances. This requires maintaining an index of the instances and the range of data they contain. In order to read and write data, a client application needs to know which shard contains the range of data to be read or written. Sharding also adds administrative overhead and cost: instead of a single database instance, you are now responsible for keeping several up and running.

It's also important to evaluate the data consistency requirement of an application when determining if a workload would be suitable for DynamoDB. There are actually two consistency models supported in DynamoDB: strong and eventual consistency, with the former requiring more provisioned I/O than the latter. This flexibility allows the developer to get the best possible performance from the database while being able to support the consistency requirements of the application. If an application does not require "strongly consistent" reads, meaning that updates made by one client do not need to be immediately visible to others, then use of an RDBMS that forces strong consistency can result in a tax on performance with no net benefit to the application. The reason is that strong consistency usually involves having to lock some portion of the data, which can cause performance bottlenecks.

Unsuitable Workloads

Not all workloads are suitable for a NoSQL database like DynamoDB. While in theory one could implement a classic entity relationship model using DynamoDB tables and items, in practice this would be extremely cumbersome to work with. Transactional systems that require well-defined relationships between entities are still best implemented using a traditional RDBMS. Some other unsuitable workloads include:

• Ad hoc queries
• OLAP
• BLOB storage

Because DynamoDB does not support a standard query language like SQL, and because there is no concept of a table join, constructing ad hoc queries is not as efficient as it is with RDBMS. Running such queries with DynamoDB is possible, but requires the use of Amazon EMR and Hive. Likewise, OLAP applications are difficult to deliver, because the dimensional data model used for analytical processing requires joining fact tables to dimension tables.
Finally, due to the size limitation of a DynamoDB item, storing BLOBs is often not practical. DynamoDB does support a binary data type, but this is not suited for storing large binary objects like images or documents. However, storing a pointer in the DynamoDB table to a large BLOB stored in Amazon S3 easily supports this last use case.

Key Concepts

As described in the previous section, DynamoDB organizes data into tables consisting of items. Each item in a DynamoDB table can define an arbitrary set of attributes, but all items in the table must define a primary key that uniquely identifies the item. This key must contain an attribute known as the "hash key" and, optionally, an attribute called the "range key." Figure 2 shows the structure of a DynamoDB table defining both a hash and a range key (DynamoDB maintains a sorted index on the range key).

Figure 2: DynamoDB Table with Hash and Range Keys

If an item can be uniquely identified by a single attribute value, then this attribute can function as the hash key. In other cases, an item may be uniquely identified by two values. In this case, the primary key will be defined as a composite of the hash key and the range key. Figure 3 demonstrates this concept: an RDBMS table relating media files to the codec used to transcode them can be modeled as a single table in DynamoDB, using a primary key consisting of a hash and a range key. Note how the data is denormalized in the DynamoDB table. This is a common practice when migrating data from an RDBMS to a NoSQL database, and it is discussed in more detail later in this paper.

Figure 3: Example of Hash and Range Keys

The ideal hash key will contain a large number of distinct values uniformly distributed across the items in the table. A user ID is a good example of an attribute that tends to be uniformly distributed across items in a table. Attributes that would be modeled as lookup values or enumerations in an RDBMS tend to make poor hash keys, because certain values may occur much more frequently than others. These concepts are shown in Figure 4. Notice how the counts of user_id are uniform, whereas the counts of status_code are not. If status_code is used as a hash key in a DynamoDB table, the values that occur most frequently will end up stored on the same partition, which means that most reads and writes will hit that single partition. This is called a "hot partition," and it will negatively impact performance.

select user_id, count(*) as total from user_preferences group by user_id

user_id                               total
8a9642f7-5155-4138-bb63-870cd45d7e19  1
31667c72-86c5-4afb-82a1-a988bfe34d49  1
693f8265-b0d2-40f1-add0-bbe2e8650c08  1

select status_code, count(*) as total from status_code sc, log l where l.status_code_id = sc.status_code_id group by status_code

status_code  total
400          125000
403          250
500          10000
505          2

Figure 4: Uniform and Non-Uniform Distribution of Potential Key Values

Items can be fetched from a table using the primary key. Often it is useful to be able to fetch items using a different set of values than the hash and range keys. DynamoDB supports these operations through local and global secondary indexes. A local secondary index uses the same hash key as defined on the table, but a different attribute as the range key. Figure 5 shows how a
local secondary index is defined on a table. A global secondary index can use any scalar attribute as its hash or range key. Fetching items using secondary indexes is done through the query interface defined in the DynamoDB API.

Figure 5: A Local Secondary Index

Because there are limits to the number of local and global secondary indexes that can exist per table, it is important to fully understand the data access requirements of any application that uses DynamoDB for persistent storage. In addition, global secondary indexes require that attribute values be projected into the index. This means that when an index is created, a subset of attributes from the parent table needs to be selected for inclusion in the index. When an item is queried using a global secondary index, the only attributes populated in the returned item are those that have been projected into the index. Figure 6 demonstrates this concept. Note how the original hash and range key attributes are automatically promoted in the global secondary index. Reads on global secondary indexes are always eventually consistent, whereas local secondary indexes support eventual or strong consistency. Finally, both local and global secondary indexes use provisioned IO (discussed in detail below) for reads and writes to the index. This means that each time an item is inserted or updated in the main table, any secondary indexes will consume IO to update the index.

Figure 6: Create a global secondary index on a table

Whenever an item is read from or written to a DynamoDB table or index, the amount of data required to perform the read or write operation is expressed as a "read unit" or a "write unit." A read unit consists of 4 KB of data and a write unit is
1 KB. This means that fetching an 8 KB item will consume 2 read units, and inserting it will consume 8 write units. The number of read and write units allowed per second is known as the "provisioned IO" of the table. If your application requires that 1,000 4 KB items be written per second, then the provisioned write capacity of the table would need to be a minimum of 4,000 write units per second. When an insufficient amount of read or write capacity is provisioned on a table, the DynamoDB service will "throttle" the read and write operations. This can result in poor performance and, in some cases, throttling exceptions in the client application. For this reason, it is important to understand an application's IO requirements when designing the tables. However, both read and write capacity can be altered on an existing table, so if an application suddenly experiences a spike in usage that results in throttling, the provisioned IO can be increased to handle the new workload. Similarly, if load decreases for some reason, the provisioned IO can be reduced. This ability to dynamically alter the IO characteristics of a table is a key differentiator between DynamoDB and a traditional RDBMS, in which IO throughput is fixed by the underlying hardware the database engine runs on.

Migrating to DynamoDB from RDBMS

In the previous section, we discussed some of the key features of DynamoDB, as well as some of the key differences between DynamoDB and a traditional RDBMS. In this section, we propose a strategy for migrating from an RDBMS to DynamoDB that takes these key features and differences into account. Because database migrations tend to be complex and risky, we
advocate taking a phased, iterative approach. As is the case with the adoption of any new technology, it's also good to focus on the easiest use cases first. It's also important to remember, as we propose in this section, that migration to DynamoDB doesn't need to be an "all or nothing" process. For certain migrations, it may be feasible to run the workload on both DynamoDB and the RDBMS in parallel, and switch over to DynamoDB only when it's clear that the migration has succeeded and the application is working properly. The following state diagram expresses our proposed migration strategy:

Figure 7: Migration Phases

It is important to note that this process is iterative. The outcome of certain states can result in a return to a previous state. Oversights in the data analysis and data modeling phases may not become apparent until testing. In most cases, it will be necessary to iterate over these phases multiple times before reaching the final data migration state. Each phase is discussed in detail in the sections that follow.

Planning Phase

The first part of the planning phase is to identify the goals of the data migration. These often include (but are not limited to):

• Increasing application performance
• Lowering costs
• Reducing the load on an RDBMS

In many cases, the goals of a migration may be a combination of all of the above. Once these goals have been defined, they can be used to inform the identification of the RDBMS tables to migrate to DynamoDB. As mentioned previously, tables used by workloads involving non-relational data make excellent choices for migration to DynamoDB. Migration of such tables to DynamoDB can result in significantly improved application performance, as well as lower costs and lower loads on the RDBMS.
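As a minimal illustration (not code from the paper), the iterative flow in Figure 7 can be expressed as a state-transition table, with the testing phase able to send the process back to an earlier phase; the phase names are assumed from the section headings that follow:

```python
# Phases of the proposed migration strategy (Figure 7). Testing can loop
# back to data analysis or data modeling when oversights are discovered.
TRANSITIONS = {
    "planning": {"data_analysis"},
    "data_analysis": {"data_modeling"},
    "data_modeling": {"testing"},
    "testing": {"data_analysis", "data_modeling", "data_migration"},
    "data_migration": set(),  # final state
}

def can_transition(current: str, nxt: str) -> bool:
    """Return True if moving from phase `current` to phase `nxt` is valid."""
    return nxt in TRANSITIONS.get(current, set())

# Testing may send the process back to data modeling, but planning
# cannot jump straight to the final data migration phase.
print(can_transition("testing", "data_modeling"))    # True
print(can_transition("planning", "data_migration"))  # False
```

Encoding the flow this way makes the key property of the strategy explicit: the only path to the data migration phase runs through testing.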
Some good candidates for migration are:

• Entity-Attribute-Value tables
• Application session state tables
• User preference tables
• Logging tables

Once the tables have been identified, any characteristics of the source tables that may make migration challenging should be documented. This information will be essential for choosing a sound migration strategy. Let's take a look at some of the more common challenges that tend to impact the migration strategy:

Challenge: Writes to the RDBMS source table cannot be quiesced before or during the migration.
Impact on migration strategy: Synchronization of the data in the target DynamoDB table with the source will be difficult. Consider a migration strategy that involves writing data to both the source and target tables in parallel.

Challenge: The amount of data in the source table exceeds what can reasonably be transferred with the existing network bandwidth.
Impact on migration strategy: Consider exporting the data from the source table to removable disks and using the AWS Import/Export service to import the data into a bucket in S3, then import this data into DynamoDB directly from S3. Alternatively, reduce the amount of data that needs to be migrated by exporting only those records created after a recent point in time; all data older than that point remains in the legacy table in the RDBMS.

Challenge: The data in the source table needs to be transformed before it can be imported into DynamoDB.
Impact on migration strategy: Export the data from the source table and transfer it to S3. Consider using EMR to perform the data transformation and import the transformed data into DynamoDB.

Challenge: The primary key structure of the source table is not portable to DynamoDB.
Impact on migration strategy: Identify column(s) that will make suitable hash and range keys for the imported items. Alternatively, consider adding a surrogate key (such as a UUID) to the source table to act as a suitable hash key.

Challenge: The data in the source table is encrypted.
Impact on migration strategy: If the encryption is being managed by the RDBMS, then the data will need to be decrypted when exported and re-encrypted upon import using an encryption scheme enforced by the application, not the underlying database engine. The cryptographic keys will need to be managed outside of DynamoDB.

Table 1: Challenges that Impact Migration Strategy

Finally, and perhaps most importantly, the backup and recovery process should be defined and documented in the planning phase. If the migration strategy requires a full cutover from the RDBMS to DynamoDB, defining a process for restoring functionality using the RDBMS in the event the migration fails is essential. To mitigate risk, consider running the workload on DynamoDB and the RDBMS in parallel for some length of time. In this scenario, the legacy RDBMS-based application can be disabled only once the workload has been sufficiently tested in production using DynamoDB.

Data Analysis Phase

The purpose of the data analysis phase is to understand the composition of the source data and to identify the data access patterns used by the application. This information is required input to the data modeling phase. It is also essential for understanding the cost and performance of running a workload on DynamoDB. The analysis of the source data should include:

• An estimate of the number of items to be imported into DynamoDB
• A distribution of the item sizes
• The multiplicity of values to be used as hash or range keys

DynamoDB pricing contains two main components: storage and provisioned IO. By estimating the number of items that
will be imported into a DynamoDB table, and the approximate size of each item, the storage and provisioned IO requirements for the table can be calculated. Common SQL data types map to one of three scalar types in DynamoDB: string, number, and binary. The length of the number data type is variable, and strings are encoded using binary UTF-8. Focus should be placed on the attributes with the largest values when estimating item size, because provisioned IO is given in integral 1 KB increments; there is no concept of a fractional IO in DynamoDB. If an item is estimated to be 3.3 KB in size, it will require four 1 KB write IO units and one 4 KB read IO unit to write and read a single item, respectively. Since the size will be rounded up to the nearest kilobyte, the exact size of the numeric types is unimportant. In most cases, even for large numbers with high precision, the data will be stored using a small number of bytes. Because each item in a table may contain a variable number of attributes, it is useful to compute a distribution of item sizes and use a percentile value to estimate item size. For example, one may choose an item size representing the 95th percentile and use this for estimating the storage and provisioned IO costs. In the event that there are too many rows in the source table to inspect individually, take samples of the source data and use these to compute the item size distribution. At a minimum, a table should have enough provisioned read and write units to read and write a single item per second. For example, if 4 write units are required to write an item with a size at or below the 95th percentile, then the table should have a minimum provisioned IO of 4 write units per second. Anything less than this, and the write of a single item will cause throttling and degrade performance. In practice, the number of provisioned read and write units will be much larger than the required minimum. An application using DynamoDB for data storage will typically need to issue reads and writes
concurrently. Correctly estimating the provisioned IO is key both to guaranteeing the required application performance and to understanding cost. Understanding the distribution frequency of RDBMS column values that could serve as hash or range keys is essential for obtaining maximum performance as well. Columns containing values that are not uniformly distributed (i.e., some values occur in much larger numbers than others) are not good hash or range keys, because accessing items with keys occurring in high frequency will hit the same DynamoDB partitions, and this has negative performance implications.

The second purpose of the data analysis phase is to categorize the data access patterns of the application. Because DynamoDB does not support a generic query language like SQL, it is essential to document the ways in which data will be written to and read from the tables. This information is critical for the data modeling phase, in which the tables, the key structure, and the indexes will be defined. Some common patterns for data access are:

• Write only: items are written to a table and never read by the application.
• Fetches by distinct value: items are fetched individually by a value that uniquely identifies the item in the table.
• Queries across a range of values: this is seen frequently with temporal data.

As we will see in the next section, documentation of an application's data access patterns using categories such as those described above will drive much of the data modeling decisions.

Data Modeling Phase

In this phase, the tables, hash and range keys, and secondary indexes will be defined. The data model produced in this phase must support the data access patterns described in the data analysis phase. The first step in data modeling is to
determine the hash and range keys for a table. The primary key, whether consisting only of the hash key or a composite of the hash and range key, must be unique for all items in the table. When migrating data from an RDBMS, it is tempting to use the primary key of the source table as the hash key, but in reality this key is often semantically meaningless to the application. For example, a User table in an RDBMS may define a numeric primary key, but an application responsible for logging in a user will ask for an email address, not the numeric user ID. In this case, the email address is the "natural key" and would be better suited as the hash key in the DynamoDB table, since items can easily be fetched by their hash key values. Modeling the hash key in this way is appropriate for data access patterns that fetch items by distinct value. For other data access patterns, like "write only," using a randomly generated numeric ID will work well for the hash key. In this case, the items will never be fetched from the table by the application, so the key is used only to uniquely identify the items, not as a means of fetching data.

RDBMS tables that contain a unique index on two key values are good candidates for defining a primary key using both a hash key and a range key. Intersection tables used to define many-to-many relationships in an RDBMS are typically modeled using a unique index on the key values of both sides of the relationship. Because fetching data in a many-to-many relationship requires a series of table joins, migrating such a table to DynamoDB would also involve denormalizing the data (discussed in more detail below). Date values are also often used as range keys. A table counting the number of times a URL was visited on any given day could define the URL as the hash key and the date as the range key. As with primary keys consisting solely of a hash key, fetching items with a composite primary key requires the application to specify both the hash and range key values. This needs to be considered when evaluating whether a surrogate key or a natural key would make the better choice for the hash and/or range key.

Because non-key attributes can be added to an item arbitrarily, the only attributes that must be specified in a DynamoDB table definition are the hash key and (optionally) the range key. However, if secondary indexes are going to be defined on any non-key attributes, then these must be included in the table definition. Inclusion of non-key attributes in the table definition does not impose any sort of schema on all the items in the table. Aside from the primary key, each item in the table can have an arbitrary list of attributes.

The support for SQL in an RDBMS means that records can be fetched using any of the column values in the table. These queries may not always be efficient: if no index exists on the column used to fetch the data, a full table scan may be required to locate the matching rows. The query interface exposed by the DynamoDB API does not support fetching items from a table in this way. It is possible to do a full table scan, but this is inefficient and will consume substantial read units if the table is large. Instead, items can be fetched from a DynamoDB table by the primary key of the table, or by the key of a local or global secondary index defined on the table. Because an index on a non-key column of an RDBMS table suggests that the application commonly queries for data on this value, these attributes make good candidates for local or global secondary indexes in a DynamoDB table. There are limits to the number of secondary indexes allowed on a DynamoDB table,2 so it is important to define keys for these indexes using attribute values that the application will use most frequently for
fetching data. DynamoDB does not support the concept of a table join, so migrating data from an RDBMS table will often require denormalizing the data. To those used to working with an RDBMS, this will be a foreign and perhaps uncomfortable concept. Since the workloads most suitable for migrating to DynamoDB from an RDBMS tend to involve non-relational data, denormalization rarely poses the same issues as it would in a relational data model. For example, if a relational database contains a User and a UserAddress table related through the UserID, one would combine the User attributes with the Address attributes into a single DynamoDB table. In the relational database, normalizing the user address information allows multiple addresses to be specified for a given user. This is a requirement for a contact management or CRM system. But in DynamoDB, a User table would likely serve a different purpose, perhaps keeping track of a mobile application's registered users. In this use case, the multiplicity of Users to Addresses is less important than scalability and fast retrieval of user records.

2 http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html

Data Modeling Example

Let's walk through an example that combines the concepts described in this section and the previous one. This example demonstrates how to use secondary indexes for efficient data access, and how to estimate both the item size and the required amount of provisioned IO for a DynamoDB table. Figure 8 contains an ER diagram for a schema used to track events when processing orders placed online through an e-commerce portal. Both the RDBMS and DynamoDB table structures are shown.

Figure 8: RDBMS and DynamoDB schema for tracking events

The number of rows that will be migrated
is around 10^8, so computing the 95th percentile of item size iteratively is not practical. Instead, we will perform simple random sampling with replacement of 10^5 rows. This will give us adequate precision for the purposes of estimating item size. To do this, we construct a SQL view that contains the fields that will be inserted into the DynamoDB table. Our sampling routine then randomly selects 10^5 rows from this view and computes the size representing the 95th percentile. This statistical sampling yields a 95th percentile size of 6.6 KB, most of which is consumed by the "Data" attribute (which can be as large as 6 KB in size). The minimum number of write units required to write a single item is:

ceiling(6.6 KB per item / 1 KB per write unit) = 7 write units per item

The minimum number of read units required to read a single item is computed similarly:

ceiling(6.6 KB per item / 4 KB per read unit) = 2 read units per item

This particular workload is write heavy, and we need enough IO to write 1,000 events for each of 500 orders per day. This is computed as follows:

500 orders per day × 1,000 events per order = 5×10^5 events per day

5×10^5 events per day ÷ 86,400 seconds per day = 5.78 events per second

ceiling(5.78 events per second × 7 write units per item) = 41 write units per second

Reads on the table only happen once per hour, when the previous hour's data is imported into an Amazon Elastic MapReduce cluster for ETL. This operation uses a query that selects items from a given date range (which is why the EventDate attribute is both a range key and a global secondary index). The number of read units (which will be provisioned on the global secondary index) required to retrieve the results of a query is based on the size of the results returned by the query:

5.78 events per second × 3,600 seconds per hour = 20,808 events per hour

20,808 events per hour × 6.6 KB per item / 1,024 KB per MB = 134.11 MB per hour

The maximum amount of data returned in a single query operation is 1 MB, so pagination will be required. Each hourly read query will require reading 135 pages of data. For strongly consistent reads, 256 read units are required to read a full page at a time (the number is half as much for eventually consistent reads). So, to support this particular workload, 256 read units and 41 write units will be required. From a practical standpoint, the write units would likely be expressed as an even number like 48. We now have all the data we need to estimate the DynamoDB cost for this workload:

1. Number of items (10^8)
2. Item size (7 KB)
3. Write units (48)
4. Read units (256)

These can be run through the Amazon Simple Monthly Calculator3 to derive a cost estimate.

3 http://calculator.s3.amazonaws.com/index.html

Testing Phase

The testing phase is the most important part of the migration strategy. It is during this phase that the entire migration process will be tested end to end. A comprehensive test plan should minimally contain the following:

Basic Acceptance Tests: These tests should be automatically executed upon completion of the data migration routines. Their primary purpose is to verify whether the data migration was successful. Some common outputs from these tests include:

• Total # items processed
• Total # items imported
• Total # items skipped
• Total # warnings
• Total # errors

If any of the totals reported by the tests deviate from the expected values, then the migration was not successful, and the issues need to be resolved before moving to the next step in the process or the next round of testing.

Functional Tests: These tests exercise the functionality of the application(s) using DynamoDB for data storage. They include a combination of automated and manual tests. The primary purpose of the functional tests is to identify problems in the application caused by the migration of the RDBMS data to DynamoDB. It is during this round of testing that gaps in the data model are often revealed.

Non-Functional Tests: These tests assess the non-functional characteristics of the application, such as performance under varying levels of load and resiliency to failure of any portion of the application stack. They can also include boundary or edge cases that are low probability but could negatively impact the application (for example, a large number of clients trying to update the same record at the exact same time). The backup and recovery process defined in the planning phase should also be included in non-functional testing.

User Acceptance Tests: These tests should be executed by the end users of the application(s) once the final data migration has completed. Their purpose is for the end users to decide whether the application is sufficiently usable to meet its primary function in the organization.

Table 2: Data Migration Test Plan

Because the migration strategy is iterative, these tests will be executed numerous times. For maximum efficiency, consider testing the data migration routines using a sampling of the production data if the total amount of data to migrate is large. The outcome of the testing phase will often require revisiting a previous phase in the process. The overall migration strategy will become more refined through each iteration, and once all the tests have executed successfully, it is a good indication that it is time for the next and final phase: data migration.

Data Migration Phase

In the data migration phase, the full set of production data from the source RDBMS tables will be migrated into DynamoDB. By the time this phase is reached, the end-to-end data migration process will have been tested and vetted thoroughly. All the steps of the process will have been carefully documented, so running it on the production data set should be as simple as following a procedure that has been executed numerous times before. In preparation for this final phase, a notification should be sent to the application users, alerting them that the application will be
undergoing maintenance and (if required) downtime. Once the data migration has completed, the user acceptance tests defined in the previous phase should be executed one final time to ensure that the application is in a usable state. In the event that the migration fails for any reason, the backup and recovery procedure, which will also have been thoroughly tested and vetted at this point, can be executed. When the system is back to a stable state, a root cause analysis of the failure should be conducted, and the data migration rescheduled once the root cause has been resolved. If all goes well, the application should be closely monitored over the next several days until there is sufficient data indicating that it is functioning normally.

Conclusion

Leveraging DynamoDB for suitable workloads can result in lower costs, a reduction in operational overhead, and an increase in performance, availability, and reliability when compared to a traditional RDBMS. In this paper, we proposed a strategy for identifying and migrating suitable workloads from an RDBMS to DynamoDB. While implementing such a strategy will require careful planning and engineering effort, we are confident that the ROI of migrating to a fully managed NoSQL solution like DynamoDB will greatly exceed the upfront cost associated with the effort.

Cheat Sheet

The following is a "cheat sheet" summarizing some of the key concepts discussed in this paper and the sections where those concepts are detailed:

• Determining suitable workloads: Suitable Workloads
• Choosing the right key structure: Key Concepts
• Table indexing: Data Modeling Phase
• Provisioning read and write throughput: Data Modeling Example
• Choosing a migration strategy: Planning Phase

Further Reading

For
additional help please consult the following sources: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 24 of 24 • DynamoDB Developer Guide4 • DynamoDB Website5 4 http://docsawsamazoncom/amazondynamodb/latest/developerguide/GettingStartedDynamoDBhtml 5 http://awsamazoncom/dynamodb,General,consultant,Best Practices Best_Practices_for_Migrating_MySQL_Databases_to_Amazon_Aurora,"ArchivedBest Practices for Migrating MySQL Databases to Amazon Aurora October 2016 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 1 Basic Performance Considerations 1 Client Location 1 Client Capacity 3 Client Configuration 4 Server Capacity 4 Tools and Procedures 5 Advanced Performance Concepts 6 Client Topics 6 Server Topics 7 Tools 8 Procedure Optimizations 12 Conclusion 18 Contributors 18 Archived Abstract This whitepaper discusses some 
of the important factors affecting the performance of self-managed export/import operations in Amazon Relational Database Service (Amazon RDS) for MySQL and Amazon Aurora. Although many of the topics are discussed in the context of Amazon RDS, the performance principles presented here also apply to the MySQL Community Edition found in self-managed MySQL installations.

Target Audience

The target audience of this paper includes:

• Database and system administrators performing migrations from MySQL-compatible databases into Aurora where AWS-managed migration tools cannot be used
• Software developers working on bulk data import tools for MySQL-compatible databases

Introduction

Migrations are among the most time-consuming tasks handled by database administrators (DBAs). Although the task becomes easier with the advent of managed migration services such as the AWS Database Migration Service (AWS DMS), many large-scale database migrations still require a custom approach due to performance, manageability, and compatibility requirements.

The total time required to export data from the source repository and import it into the target database is one of the most important factors determining the success of all migration projects. This paper discusses the following major contributors to migration performance:

• Client and server performance characteristics
• The choice of migration tools; without the right tools, even the most powerful client and server machines cannot reach their full potential
• Optimized migration procedures to fully utilize the available client/server resources and leverage performance-optimized tooling

Basic Performance Considerations

The following are basic considerations for client and server performance. Tooling and procedure optimizations are described in more detail in "Tools and Procedures" later in this document.

Client Location

Perform export/import operations from a client machine that is launched in the same location as the database server:

• For on-premises database servers, the client machine should be in the same on-premises network.
• For Amazon RDS or Amazon Elastic Compute Cloud (Amazon EC2) database instances, the client instance should exist in the same Amazon Virtual Private Cloud (Amazon VPC) and Availability Zone as the server.
• For EC2-Classic (non-VPC) servers, the client should be located in the same AWS Region and Availability Zone.

Figure 1: Logical migration between AWS Cloud databases

To follow the preceding recommendations during migrations between distant databases, you might need to use two client machines:

• One in the source network, so that it's close to the server you're migrating from
• Another in the target network, so that it's close to the server you're migrating to

In this case, you can move dump files between client machines using file transfer protocols (such as FTP or SFTP) or upload them to Amazon Simple Storage Service (Amazon S3). To further reduce the total migration time, you can compress files prior to transferring them.

Figure 2: Data flow in a self-managed migration from on-premises to an AWS Cloud database

Client Capacity

Regardless of its location, the client machine should have adequate CPU, I/O, and network capacity to perform the requested operations. Although the definition of adequate varies depending on use cases, the general recommendations are as follows:

• If the export or import involves real-time processing of data, for example compression or decompression, choose an instance class with at least one CPU core per export/import thread.
• Ensure that there is enough network bandwidth available to the client instance. We recommend using instance types that support enhanced networking. For more information, see the Enhanced Networking section in the Amazon EC2 User Guide [1].
• Ensure that the client's storage layer provides the expected read/write capacity. For example, if you expect to dump data at 100 megabytes per second, the instance and its underlying Amazon Elastic Block Store (Amazon EBS) volume must be capable of sustaining at least 100 MB/s (800 Mbps) of I/O throughput.

Client Configuration

For best performance on Linux client instances, we recommend that you enable the receive packet steering (RPS) and receive flow steering (RFS) features.

To enable RPS, use the following code:

sudo sh -c 'for x in /sys/class/net/eth0/queues/rx-*; do echo ffffffff > $x/rps_cpus; done'
sudo sh -c "echo 4096 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt"
sudo sh -c "echo 4096 > /sys/class/net/eth0/queues/rx-1/rps_flow_cnt"

To enable RFS, use the following code:

sudo sh -c "echo 32768 > /proc/sys/net/core/rps_sock_flow_entries"

Server Capacity

To dump or ingest data at optimal speed, the database server should have enough I/O and CPU capacity. In traditional databases, I/O performance often becomes the ultimate bottleneck during migrations. Aurora addresses this challenge by using a custom distributed storage layer designed to provide low latency and high throughput under multithreaded workloads. In Aurora, you don't have to choose between storage types or provision storage specifically for export/import purposes.

For Aurora instances, we recommend one CPU core per thread for exports and two CPU cores per thread for imports. If you've chosen an instance class with enough CPU cores to handle your export/import, the instance should already offer adequate network bandwidth.

For more information, see "Server Topics" later in this document.

Tools and Procedures

Whenever possible, perform export and import operations in multithreaded fashion. On modern systems equipped with multicore CPUs and distributed storage, this approach ensures that all available client/server resources are consumed efficiently. Engineer export/import procedures to avoid unnecessary overhead.

The following table lists common export/import performance challenges and provides sample solutions. You can use it to drive your tooling and procedure choices.

Import technique: Single-row INSERT statements
Challenge: Storage and SQL processing overhead
Potential solution: Use multi-row SQL statements, or use a non-SQL format (e.g., CSV flat files)
Examples: Import 1 MB of data per statement; use a set of flat files (chunks) of 1 GB each

Import technique: Single-row or multi-row statements with small transaction size
Challenge: Transactional overhead; each statement is committed separately
Potential solution: Increase transaction size
Example: Commit once per 1,000 statements

Import technique: Flat-file imports with very large transaction size
Challenge: Undo management overhead
Potential solution: Reduce transaction size
Example: Commit once per 1 GB of data imported

Import technique: Single-threaded export/import
Challenge: Underutilization of server resources; only one table is accessed at a time
Potential solution: Export/import multiple tables in parallel
Example: Export from or load into 8 tables in parallel

If you are exporting data from an active production database, you have to find a balance between the performance of production queries and that of the export itself. Execute export operations carefully so that you don't compromise the performance of the production workload. This is discussed in more detail in the following section.

Advanced Performance Concepts

Client Topics

Contrary to the popular opinion that total migration time depends exclusively on server performance, data migrations can often be constrained by client-side factors. It is important that you identify, understand, and finally address client-side bottlenecks; otherwise you may not achieve the goal of reaching optimal
import/export performance.

Client Location

The location of the client machine is an important factor affecting data migrations, performance benchmarks, and day-to-day database operations alike. Remote clients can experience network latency ranging from dozens to hundreds of milliseconds. Communication latency introduces unnecessary overhead to every database operation and can result in substantial performance degradation.

The performance impact of network latency is particularly visible during single-threaded operations involving large numbers of short database statements. With all statements executed on a single thread, the statement throughput becomes the inverse of network latency, yielding very low overall performance.

We strongly recommend that you perform all types of database activities from an Amazon EC2 instance located in the same VPC and Availability Zone as the database server. For EC2-Classic (non-VPC) servers, the client should be located in the same AWS Region and Availability Zone.

The reason we recommend that you launch client instances not only in the same AWS Region but also in the same VPC is that cross-VPC traffic is treated as public and thus uses publicly routable IP addresses. Because the traffic must travel through a public network segment, the network path becomes longer, resulting in higher communication latency.

Client Capacity

It is a common misconception that the specifications of client machines have little or no impact on export/import operations. Although it is often true that resource utilization is higher on the server side, it is still important to remember the following:

• On small client instances, multithreaded exports and imports can become CPU-bound, especially if data is compressed or decompressed on the fly, e.g., when the data stream is piped through a compression tool like gzip.
• Multithreaded data migrations can consume substantial network and I/O bandwidth. Choose the instance class, and the size and type of the underlying Amazon EBS storage volume, carefully. For more information, see the Amazon EBS Volume Performance section in the Amazon EC2 User Guide [2].

All operating systems provide diagnostic tools that can help you detect CPU, network, and I/O bottlenecks. When investigating export/import performance issues, we recommend that you use these tools and rule out client-side problems before digging deeper into server configuration.

Server Topics

Server-side storage performance, CPU power, and network throughput are among the most important server characteristics affecting batch export/import operations. Aurora supports point-and-click instance scaling that enables you to modify the compute and network capacity of your database cluster for the duration of the batch operations.

Storage Performance

Aurora leverages a purpose-built distributed storage layer designed to provide low latency and high throughput under multithreaded workloads. You don't need to choose between storage volume types or provision storage specifically for export/import purposes.

CPU Power

Multithreaded exports/imports can become CPU-bound when executed against smaller instance types. We recommend using a server instance class with one CPU core per thread for exports and two CPU cores per thread for imports. CPU capacity can be consumed efficiently only if the export/import is realized in multithreaded fashion; using an instance type with more CPU cores is unlikely to improve the performance of a dump or import that is executed in a single thread.

Network Throughput

Aurora does not use Amazon EBS volumes for storage. As a result, it is not constrained by the bandwidth of EBS network links or the throughput limits of EBS volumes. However, the theoretical peak I/O throughput of Aurora instances still depends on the instance class. As a rule of thumb, if you choose an instance class with enough CPU cores to handle the export/import (as discussed earlier), the instance should already offer adequate network performance.

Temporary Scaling

In many cases, export/import tasks can require significantly more compute capacity than day-to-day database operations. Thanks to the point-and-click compute scaling feature of Amazon RDS for MySQL and Aurora, you can temporarily overprovision your instance and then scale it back down when you no longer need the additional capacity.

Note: Due to the benefits of the Aurora custom storage layer, storage scaling is not needed before, during, or after exporting/importing data.

Tools

With client and server machines located close to each other and sized adequately, let's look at the different methods and tools you can use to actually move the data.

Percona XtraBackup

Aurora supports migration from Percona XtraBackup files stored in Amazon S3. Migrating from backup files can be significantly faster than migrating from logical schema and data dumps using tools such as mysqldump. Logical imports work by executing SQL commands to recreate the schema and data from your source database, which carries considerable processing overhead. However, Percona XtraBackup files can be ingested directly into an Aurora storage volume, which removes the additional SQL execution cost.

A migration from Percona XtraBackup files involves three main steps:

1. Using the innobackupex tool to create a backup of the source database
2. Copying the backup to Amazon S3
3. Restoring the backup through the AWS RDS console

You can use this migration method for source servers using MySQL versions 5.5 and 5.6. For more information and step-by-step instructions for migrating from Percona XtraBackup files, see the Amazon Relational Database Service User Guide [3].

mysqldump

The mysqldump tool is perhaps the most popular export/import tool for MySQL-compatible database engines. The tool produces dumps in the form of SQL files that contain data definition language (DDL), data control language (DCL), and data manipulation language (DML) statements. The statements carry information about data structures, data access rules, and the actual data, respectively. In the context of this whitepaper, two types of statements are of interest:

• CREATE TABLE statements to create relevant table structures before data can be inserted
• INSERT statements to populate tables with data; each INSERT typically contains data from multiple rows, but the dataset for each table is essentially represented as a series of INSERT statements

The mysqldump-based approach introduces certain issues related to performance:

• When used against managed database servers such as Amazon RDS instances, the tool's functionality is limited. Due to privilege restrictions, it cannot dump data in multiple threads or produce flat-file dumps suitable for parallel loading.
• The SQL files do not include any transaction control statements by default. Consequently, you have very little control over the size of the database transactions used to import data. This lack of control can lead to poor performance, for example:
  o With autocommit mode enabled (the default), each individual INSERT statement runs inside its own transaction. The database must COMMIT frequently, which increases the overall execution overhead.
  o With autocommit mode disabled, each table is populated using one massive transaction. This approach removes COMMIT overhead but leads to side effects such as tablespace bloat and long rollback times if the import operation is interrupted.

Note: Work is in progress to provide a modern replacement for the legacy mysqldump tool. The new tool, called mysqlpump, is expected to check most of the boxes on a MySQL DBA's performance checklist. For more information, see the MySQL Reference Manual [4].

Flat Files

As opposed to SQL-format dumps that contain data encapsulated in SQL statements, flat-file dumps come with very little overhead. The only control characters are the delimiters used to separate individual rows and columns. Files in comma-separated value (CSV) or tab-separated value (TSV) format are both examples of the flat-file approach.

Flat files are most commonly produced using:

• The SELECT … INTO OUTFILE statement, which dumps table contents (but not table structure) into a file located in the server's local file system
• The mysqldump command with the --tab parameter, which also dumps table contents to a file and creates the relevant metadata files with CREATE TABLE statements; the command uses SELECT … INTO OUTFILE internally, so it also creates dump files on the server's local file system

Note: Due to privilege restrictions, you cannot use the methods mentioned previously with managed database servers such as Amazon RDS. However, you can import flat files dumped from self-managed servers into managed instances with no issues.

Flat files have two major benefits:

• The lack of SQL encapsulation results in much smaller dump files and removes SQL processing overhead during import.
• Flat files are always created in file-per-table fashion, which makes it easy to import them in parallel.

Flat files also have their disadvantages. For example, the server uses a single transaction to import data from each dump file. To have more control over the size of import transactions, you need to manually split very large dump files into chunks and then import one chunk at a time.

Third-Party Tools and Alternative Solutions

The mydumper and myloader tools are two popular open-source MySQL export/import tools designed to address the performance issues associated with the legacy mysqldump program. They operate on SQL-format dumps and
offer advanced features such as:  Dumping and loading data in multiple threads  Creating dump files in fileper table fashion  Creating chunked dumps that is multiple files per table  Dumping data and metadata into separate files  Ability to configure transaction size during import  Ability to schedule dumps in regular intervals For more information about mydumper and myloader see the project home page5 Efficient exports and imports are possible even without the help of thirdparty tools With enough effort you can solve issues associated with SQLformat or flat file dumps manually as follows:  Solve singlethreaded mode of operations in legacy tools by running multiple instances of the tool in parallel However this does not allow you to create consistent databasewide dumps without temporarily suspending database writes  Control transaction size by manually splitting large dump files into smaller chunks Procedure Optimizations This section describes ways that you can handle some of the common export/import challenges ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 13 Choosing the Right Number of Threads for Multithreaded Operations As mentioned earlier a rule of thumb is to use one thread per server CPU core for exports and one thread per two CPU cores for imports For example you should use 16 concurrent threads to dump data from a 16core dbr34xlarge instance and 8 concurrent threads to import data into the same instance type Exporting and Importing Multiple Large Tables If the dataset is spread fairly evenly across multiple tables export/import operations are relatively easy to parallelize To achieve optimal performance follow these guidelines:  Perform export and import operations using multiple parallel threads To achieve this use a modern export tool such as mydumper described in “ThirdParty Tools and Alternative Solutions ”  Never use singlerow INSERT statements for batch imports Instead use multi row 
INSERT statements or import data from flat files  Avoid using small transactions but also don’t let each transaction become too heavy As a rule of thumb split large dumps into 500MB chunks and import one chunk per transaction Exporting and Importing Individual Large Tables In many databases data is not distributed equally across tables It is not uncommon for the majority of the data set to be stored in just a few tables or even a single table In this case the common approach of one export/import thread per table can result in suboptimal performance This is because the total export/import time depends on the slowest thread which is the thread that is processing the largest table To mitigate this you must leverage multithreading at the table level The following ideas can help you achieve better performance in similar situations ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 14 Large Table Approach for Exports On the source server you can perform a multithreaded dump of table data using a custom export script or a modern thirdparty export tool such as mydumper described in “ThirdParty Tools and Alternative Solutions ” When using custom scripts you can leverage multithreading by exporting multiple ranges (segments) of rows in parallel For best results you can produce segments by dumping ranges of values in an indexed table column preferably the primary key For performance reasons you should not produce segments using pagination ( LIMIT … OFFSET clause) When using mydumper know that the tool uses multiple threads across multiple tables but it does not parallelize operations against individual tables To use multiple threads per table you must explicitly provide the rows parameter when invoking the mydumper tool as follows rows : Split table into chunks of this many rows default unlimited You can choose the parameter value so that the total size of each chunk doesn’t exceed 100 MB For example if the average row length in 
the table is 1 KB you can choose a chunk size of 100000 rows for the total chunk size of ~100 MB Large Table Approach for Imports Once the dump is completed you can import it into the target server using custom scripts or the myloader tool Note : Both mydumper and myloader default to using four parallel threads which may not be enough to achieve optimal performance on Aurora dbr32xlarge instances or larger You can change the default level of parallelism using the threads parameter ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 15 Splitting Dump Files into Chunks You can import data from flat files using a single data chunk (for small tables) or a contiguous sequence of data chunks (for larger tables) Use the following guidelines to decide how to split table dumps into multiple chunks:  Avoid generating very small chunks (<1 MB) so that you can avoid protocol and transactional overhead Alternatively very large chunks can put unnecessary pressure on server resources without bringing performance benefits As a rule of thumb you might use a 500MB chunk size for large batch imports  For partitioned InnoDB tables use one chunk per partition and don’t mix data from different partitions in one chunk If individual partitions are very large split partition data further using one of the following solutions  For tables or table partitions with an autoincremented PRIMARY key: o If PRIMARY key values are provided in the dump it is good practice not to split data in a random fashion Instead use rangebased splitting so that each chunk contains monotonically increasing primary key values For example if a table has a PRIMARY key column called id data can be sorted by id in ascending order and then sliced into chunks This approach reduces page fragmentation and lock contention during import o If PRIMARY key values are not provided in the dump the engine generates them automatically for each inserted row In such cases you don't need 
to chunk the data in any particular way and you can choose the method that’s easiest for you to implement  If the table or table partition has a PRIMARY or NOT NULL UNIQUE key that is not autoincremented split the data so that each chunk contains monotonically increasing key values for that PRIMARY or NOT NULL UNIQUE KEY as described previously ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 16  If the table does not have a PRIMARY or NOT NULL UNIQUE key the engine creates an implicit internal clustered index and fills it with monotonically increasing values regardless of how the input data is split For more information about InnoDB index types see the MySQL Reference Manual 6 Avoiding Secondary Index Maintenance Overhead CREATE TABLE statements found in a typical SQLformat dump include the definition of the table primary key and all secondary keys Consequently the cost of secondary index management has to be paid for every row inserted during the import You can observe the index management cost as a gradual decrease in import performance as the table grows The negative effects of index management overhead are particularly visible if the table is large or if there are multiple secondary indexes defined on it In extreme cases importing data into a table with secondary indexes can be several times slower than importing the same data into a table with no secondary indexes Unfortunately none of the tools mentioned in this document support builtin secondary index optimization You can however implement the optimization using this simple technique:  Modify the dump files so that CREATE TABLE statements do not include secondary key or foreign key definitions  Import data  Recreate secondary and foreign keys using ALTER TABLE statements or third party online schema manipulation tools such as “pt onlineschema change” from Percona Toolkit When using ALTER TABLE: o Avoid using separate ALTER TABLE statements for each index 
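The dump-modification step above (removing secondary key definitions before the import) can be sketched with standard text tools. The following is a minimal, hypothetical example: the table definition is invented for illustration, and a real mysqldump file may need a more robust rewrite than this sed pass. It drops secondary KEY and FOREIGN KEY CONSTRAINT lines from a CREATE TABLE while keeping the PRIMARY KEY.

```shell
# Hypothetical stand-in for a mysqldump-generated schema file; a real dump
# comes from mysqldump and may format index definitions differently.
cat > dump.sql <<'EOF'
CREATE TABLE `orders` (
  `id` int NOT NULL AUTO_INCREMENT,
  `customer_id` int NOT NULL,
  `created_at` datetime NOT NULL,
  PRIMARY KEY (`id`),
  KEY `idx_customer_id` (`customer_id`),
  KEY `idx_created_at` (`created_at`),
  CONSTRAINT `fk_orders_customer` FOREIGN KEY (`customer_id`) REFERENCES `customers` (`id`)
);
EOF

# Drop secondary KEY and CONSTRAINT lines, then remove the comma left
# dangling before the closing parenthesis (sed -z is GNU-specific).
sed -e '/^  KEY /d' -e '/^  CONSTRAINT /d' dump.sql \
  | sed -z 's/,\n)/\n)/' > dump_noindex.sql

cat dump_noindex.sql
```

Once the data import completes against this index-free schema, the dropped keys are recreated with ALTER TABLE, ideally one combined statement per table.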
Instead use one ALTER TABLE statement per table to recreate all indexes for that table in a single operation ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 17 o You may run multiple ALTER TABLE statements in parallel (one per table) to reduce the total time required to process all tables ALTER TABLE operations can consume a significant amount of temporary storage space depending on the table size and the number and type of indexes defined on the table Aurora instances use local (perinstance) temporary storage volumes If you observe that ALTER TABLE operations on large tables are failing to complete it can be due to lack of free space on the instan ce’s temporary volume If this occurs you can apply one of the following solutions:  Scale the Aurora instance to a larger type  If altering multiple tables in parallel reduce the number of ALTER statements running concurrently or try running only one ALTER at a time  Consider using a thirdparty online schema manipulation tool such as ptonlineschemachange from Percona Toolkit To learn more about monitoring the local temporary storage on Aurora instances see the Amazon Relational Database Service User Guide 7 Reducing the Impact of LongRunning Data Dumps Data dumps are often performed from active database servers that are part of a missioncritical production environment If severe performance impact of a massive dump is not acceptable in your environment consider one of the following ideas:  If the source server has replicas you can dump data from one of the replicas  If the source server is covered by regular backup procedures: o Use backup data as input for the import process if backup format allows for that ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 18 o If backup format is not suitable for direct importing into the target database use the backup to provision a temporary database and dump data from it  If neither 
replicas nor backups are available: o Perform dumps during offpeak hours when production traffic is at its lowest o Reduce the concurrency of dump operations so that the server has enough spare capacity to handle production traffic Conclusion This paper discussed important factors affecting the performance of self managed export/import operations in Amazon Relational Database Service (Amazon RDS) for MySQL and Amazon Aurora:  The location and sizing of client and server machines  The ability to consume client and server resources efficiently which is mostly achieved through multithreading  The ability to identify and avoid unnecessary overhead at all stages of the migration process We hope that the ideas and observations we provide will contribute to creating a better overall experience for data migrations in your MySQLcompatible database environments Contributors The following individuals and organizations contributed to this document:  Szymon Komendera Database Engineer Amazon Web Services ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 19 1 http://docsawsamazoncom/AWSEC2/latest/UserGuide/enhanced networkinghtml 2 http://docsawsamazoncom/AWSEC2/latest/UserGuide/EBSPerformanceh tml 3 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/AuroraMigrate MySQLhtml#AuroraMigrateMySQLS3 4 https://devmysqlcom/doc/refman/57/en/mysqlpumphtml 5 https://launchpadnet/mydumper/ 6 https://devmysqlcom/doc/refman/56/en/innodbindextypeshtml 7 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/AuroraMonitor inghtml Notes",General,consultant,Best Practices Best_Practices_for_Running_Oracle_Siebel_CRM_on_AWS,ArchivedBest Practices for Running Oracle Siebel CRM on AWS March 2018 This paper has been archived For the latest technical content about the AWS Clou d see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is 
provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments cond itions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 1 Benefits of Running Siebel Applications on AWS 1 Key Benefits of AWS versus On Premises 1 Key Benefits over SaaS 4 AWS Concepts and Services 5 Regions and Availability Zones 5 Amazon EC2 7 Amazon RDS 7 AWS DMS 7 Elastic Load Balancing 7 Amazon EBS 8 Amazon Machine Images 8 Amazon S3 8 Amazon Route 53 8 Amazon VPC 9 AWS Direct Connect 9 Siebel CRM Architecture on AWS and Deployment Best Practices 9 Traffic Distribution and Load Balancing 10 Scalability 11 Architecting for High Availability and Disaster Recovery 12 VPC and Connectivity Options 16 Securing Your Siebel Application on AWS 17 Siebel and Oracle Licensing on AWS 19 Siebel and Oracle Database License Portability 19 Amazon RDS for Oracle Licensing Models 20 Siebel on AWS Use Cases 20 Archived Monitoring Your Infrastructure 21 AWS and Oracle Support 22 AWS Support 22 Oracle Support 22 Conclusion 23 Contributors 23 Further Reading 23 Archived Abstract Oracle's Siebel Customer Relationship Management ( CRM ) is a widely used and popular CRM application This whitepaper is intended for AWS customers and partners who want to learn about the benefits and options for running Oracle Siebel CRM on AWS This whitepaper provides 
architectural guidance and outlines best practices for high availability, security, scalability, performance, and disaster recovery for running Oracle Siebel CRM systems on AWS.

Amazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS – Page 1

Introduction

Companies are increasingly adopting a "cloud first, mobile first" strategy, and migrating Oracle's Siebel Customer Relationship Management (CRM) applications to a cloud platform is becoming a necessity. This paper is intended to help you understand Amazon Web Services (AWS) and how to leverage AWS to run Oracle Siebel CRM applications. The paper also discusses key benefits and best practices for running Oracle Siebel CRM workloads on AWS.

Benefits of Running Siebel Applications on AWS

Migrating your Siebel applications to AWS is relatively simple and straightforward. However, it's important that you don't view this as merely a physical-to-virtual conversion or as just a "lift and shift" migration. Understanding and using the AWS services and capabilities will help you make the most of running your Siebel systems on AWS.

Key Benefits of AWS versus On-Premises

Migrating your on-premises Siebel environment to AWS offers you the following benefits:

• Eliminate long procurement cycles – Traditional deployment, as shown in the following figure, involves a long procurement process. Each stage is time intensive and requires large capital outlay and multiple approvals.

Figure 1: A typical IT procurement cycle

This process has to be repeated for the various environments, for example development, testing, training, break fix, and production, which compounds the costs and causes significant delays. With AWS, you can provision new infrastructure and Siebel environments in minutes, compared to waiting weeks or months to procure and deploy traditional infrastructure.

• Have Moore's law work for you instead of against you – In an on-premises
environment, you end up owning hardware that depreciates in value every year. You're locked in to the price and capacity of the hardware once you acquire it, and you have ongoing hardware support costs. With AWS, you can switch your underlying Siebel instances to newer AWS instance types as they become available.

• Right size anytime – Often, customers oversize environments for initial phases and then can't cope with growth in later phases. With AWS, you can scale the usage up or down at any time. You pay only for the computing capacity you use, for the duration you use it. You can change instance sizes in minutes through the AWS Management Console, the AWS API, or the AWS Command Line Interface (CLI).

(Figure 1 stages: 1. Capacity Planning, 2. Capital Allocation, 3. Provisioning, 4. Maintenance, 5. Hardware Refresh)

• Resilience and ability to keep recovering from multiple failures – On-premises failures have to be handled on a case-by-case basis. Failed parts have to be procured and replaced. Key components such as the Siebel gateway name server have to be clustered using expensive clustering software, and the deployment is still limited by the ability to handle only one failure in the primary gateway. With AWS, clustering of the Siebel gateway isn't required. The gateway can recover from multiple failures using the instance recovery feature of Amazon EC2.

• Disaster recovery – You can build extremely low-cost standby disaster recovery (DR) environments for existing deployments and incur costs only for the duration of the outage.

• Lower total cost of ownership (TCO) – Siebel customers with on-premises data centers typically pay hardware support costs, virtualization licensing and support costs, data center costs, etc. You can eliminate or reduce all of these by moving to AWS. You benefit from the economies of scale and efficiencies that AWS provides, and pay for only the compute, storage, and other
resources you use.

• Ability to test application performance – Performance testing is recommended before any major change to a Siebel environment. However, most customers performance test their Siebel CRM applications only during the initial launch, on the yet-to-be-deployed production hardware. Later releases are usually never performance tested, due to the expense and the lack of the environment required for performance testing. With AWS, you can minimize the risk of discovering performance issues later in production. An AWS Cloud environment can be created easily and quickly, just for the duration of the performance test, and used only when needed. Again, you're charged only for the hours the environment is used.

• No end-of-life (EOL) for hardware/platform – All hardware platforms have EOL dates, when the existing hardware is no longer supported and you are forced to buy new hardware. In the AWS Cloud, you can simply upgrade your Amazon EC2 instances to new AWS instance types in a single click, at no cost for the upgrade.

• No need for clustering – The Siebel gateway name server is a single point of failure. On-premises implementations require you to cluster the gateway, and clustering is complicated and expensive to implement. With AWS, no clustering is needed. In addition, you can automatically recover a failed gateway name server using the instance recovery feature of Amazon EC2.

• Unlimited environments – Customers with on-premises data centers face the issue of limited environments. For example, a test environment will have a newer release compared to a production environment. This means that if a performance issue is found in production, you have no way to suddenly provision a performance debugging environment. On AWS, you can do this easily.

Key Benefits over SaaS

The following are some of the benefits of deploying Siebel CRM on AWS compared to moving to a CRM offering based on a Software-as-a-Service (
SaaS) model:

• Lower total cost of ownership (TCO) – Existing Siebel customers don't have to purchase new CRM licenses or risk a reimplementation of their CRM; they can just move their existing Siebel CRM implementation to AWS. For new customers, the TCO is still low because they don't have to pay monthly SaaS license fees, and Siebel is a proven CRM with rich verticals.

• Unlimited usage – SaaS applications have governor/platform limits to accommodate the underlying multi-tenant architecture. Governor limits restrict usage, including the number of API calls, transaction times, datasets, and file sizes. With AWS, you can self-provision and use as much or as little capacity as you need. You pay only for what you use.

• Multi-tenant versus elastic architecture – SaaS products typically use a multi-tenant architecture, which ties you to a specific instance and the limits of that instance. With AWS, you have complete control over the computing capacity you provision: you can provision as much or as little as you need.

• Single application management – With Siebel CRM, you can manage everything, including marketing, sales, service, CPQ, and orders, in one application. On SaaS, this requires multiple applications that you have to buy and integrate. The cost of integration with SaaS applications is easy to overlook in the buy decision, but these costs can add up significantly later.

AWS Concepts and Services

In this section, we introduce you to some AWS concepts and services that help you understand how Siebel CRM is deployed on AWS.

Regions and Availability Zones

The AWS Cloud infrastructure is built around AWS Regions and Availability Zones. AWS Regions provide multiple physically separated and isolated Availability Zones that are connected with low-latency, high-throughput, and highly redundant networking. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and
connectivity, and housed in separate facilities. These Availability Zones enable you to operate production applications and databases that are more highly available, fault tolerant, and scalable than is possible from a single data center.

Each Region is a separate geographic area, isolated from the other Regions. Regions enable you to place resources, such as Amazon EC2 instances and data, in multiple locations. Resources aren't replicated across Regions unless you do so specifically. An AWS account provides multiple Regions so that you can launch your application in locations that meet your requirements. For example, you might want to launch in Europe to be closer to your European customers or to meet legal requirements. The following diagram illustrates the relationship between Regions and Availability Zones.

Figure 2: Relationship between AWS Regions and Availability Zones

The following figure shows the Regions, and the number of Availability Zones in each Region, provided by an AWS account at the time of this publication. For the most current list of Regions and Availability Zones, see https://aws.amazon.com/about-aws/global-infrastructure/

Figure 3: Map of AWS Regions and Availability Zones

Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud, billed by the hour. You can run virtual machines with various compute and memory capacities. You have a choice of operating systems, including different versions of Windows Server and Linux.

Amazon RDS

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks. This allows you to focus on your applications and business. For
Siebel, both Microsoft SQL Server and Oracle databases are available.

AWS DMS

AWS Database Migration Service (AWS DMS) helps you migrate databases to AWS quickly and securely. AWS DMS can also be used for continuous data replication with high availability, and it supports most widely used commercial and open-source databases, such as Oracle, SQL Server, PostgreSQL, and SAP ASE.

Elastic Load Balancing

Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple Amazon EC2 instances in the cloud. It enables you to achieve greater levels of fault tolerance in your applications, seamlessly providing the required amount of load balancing capacity needed to distribute application traffic. You can use ELB to load balance web server traffic.

Amazon EBS

Amazon Elastic Block Store (Amazon EBS) provides persistent block-level storage volumes for use with EC2 instances in the AWS Cloud. Each EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. EBS volumes offer the consistent and low-latency performance needed to run your workloads.

Amazon Machine Images

An Amazon Machine Image (AMI) is simply a packaged-up environment that includes all the necessary bits to set up and boot your EC2 instance. AMIs are your unit of deployment. Amazon EC2 uses Amazon EBS and Amazon S3 to provide reliable, scalable storage of your AMIs so that AWS can boot them when you ask AWS to do so.

Amazon S3

Amazon Simple Storage Service (Amazon S3) provides developers and IT teams with secure, durable, and highly scalable object storage. Amazon S3 is easy to use, with a simple web service interface to store and retrieve any amount of data from anywhere on the web. With Amazon S3, you pay only for the storage you actually use. There is no minimum fee and no setup cost.

Amazon Route 53

Amazon Route 53 is a highly
available and scalable cloud Domain Name System (DNS) web service. It's designed to give developers and businesses an extremely reliable and cost-effective way to route end users to internet applications.

Amazon VPC

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own private IP address range, creation of subnets, and configuration of route tables and network gateways. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to EC2 instances in each subnet. Additionally, you can create a hardware virtual private network (VPN) connection between your corporate data center and your VPC and use the AWS Cloud as an extension of your corporate data center.

AWS Direct Connect

AWS Direct Connect is a network service that provides an alternative to using the internet to utilize AWS Cloud services. Using Direct Connect, you can establish private, dedicated network connectivity between AWS and your data center, office, or colocation environment. In many cases, this can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections.

Siebel CRM Architecture on AWS and Deployment Best Practices

The following architecture diagram illustrates how you can deploy Oracle Siebel CRM on AWS. Three required components of your Siebel CRM application (the Siebel gateway name server, Siebel application server, and Siebel web server) can be deployed to multiple EC2 instances behind an Elastic Load Balancing load balancer. The fourth required Siebel component (the Siebel database) can be set up on Amazon RDS for Oracle. You can deploy your Siebel web,
application, and gateway name servers, and the Siebel database, across multiple Availability Zones for high availability of your application.

Figure 4: Architecture for deploying Siebel CRM on AWS

The following sections describe the elements of this architecture in detail.

Traffic Distribution and Load Balancing

Amazon Route 53 DNS is used to direct users to Siebel CRM hosted on AWS. Elastic Load Balancing (ELB) is used to distribute incoming application traffic across the Siebel web servers deployed in multiple Availability Zones. The load balancer serves as a single point of contact for client requests, which enables you to increase the availability of your application. You can add and remove Siebel web server instances from your load balancer as your needs change, without disrupting the overall flow of information. ELB ensures that only healthy Siebel web server instances receive traffic by detecting unhealthy instances and rerouting traffic across the remaining healthy instances. If a Siebel web server instance fails, ELB automatically reroutes the traffic to the remaining running Siebel web server instances. If a failed Siebel web server instance is restored, ELB restores the traffic to that instance.

Scalability

When using AWS, you can scale your application easily because of the elastic nature of the cloud. You can scale up the Siebel web and application servers simply by changing the instance type to a larger instance type. For example, you can start with an r4.large instance with 2 vCPUs and 15 GiB RAM and scale up all the way to an x1e.32xlarge instance with 128 vCPUs and 3,904 GiB RAM. After selecting a new instance type, you only need a restart for the changes to take effect. Typically, the resizing operation is completed in a few minutes; the Amazon EBS volumes remain attached to the instances, and no data migration is required.
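The scale-up flow described above (stop the instance, change its type, start it again) can be sketched with the AWS SDK for Python. This is a minimal illustration rather than a production script: the instance ID and target type are hypothetical, and the boto3 calls are shown commented so the parameter-building logic stands on its own.

```python
def resize_request(instance_id: str, new_type: str) -> dict:
    """Build the kwargs for ec2.modify_instance_attribute() to change the instance type."""
    return {"InstanceId": instance_id, "InstanceType": {"Value": new_type}}

# Hypothetical Siebel application server being scaled from r4.large to r4.4xlarge
req = resize_request("i-0123456789abcdef0", "r4.4xlarge")

# The actual sequence with boto3 (requires AWS credentials and the boto3 package):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.stop_instances(InstanceIds=[req["InstanceId"]])
# ec2.get_waiter("instance_stopped").wait(InstanceIds=[req["InstanceId"]])
# ec2.modify_instance_attribute(**req)   # EBS volumes stay attached throughout
# ec2.start_instances(InstanceIds=[req["InstanceId"]])
```

An EBS-backed instance must be stopped before its type can be changed, which is why the waiter precedes the modification in the sketch.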
For your Siebel database deployed on Amazon RDS, you can scale the compute and storage resources independently. You can scale up the compute resources simply by changing the DB instance class to a larger one. This modification typically takes only a few minutes, and the database will be temporarily unavailable during this period. You can increase the storage capacity and IOPS provisioned for your database without any impact on database availability.

You can scale out the web and application tier by adding and configuring more instances when you need them. The Siebel gateway name server keeps track of the available application and web servers; these are registered with the Siebel gateway name server when the Siebel application server or Siebel web server is installed. To meet extra capacity requirements, additional instances of Siebel web servers and application servers should be preinstalled and configured on EC2 instances. These "standby" instances can be shut down until extra capacity is required. You don't incur charges when instances are shut down; you incur only Amazon Elastic Block Store (Amazon EBS) storage charges. At the time of this publication, EBS General Purpose volumes are priced at $0.10 per GB per month in the US East (Ohio) Region. Therefore, for an instance with 120 GB of hard disk drive (HDD) space, the storage charge is only $12 per month. These preinstalled standby instances provide you the flexibility to meet additional capacity needs as and when you need them.

Architecting for High Availability and Disaster Recovery

In this section, we discuss best practices and options for deploying Siebel CRM on AWS for high availability of your Siebel application and for disaster recovery.

Multi-Availability-Zone Deployment for High Availability of a Siebel Database on Amazon RDS

As described earlier, each Availability Zone is isolated from other zones and runs on its own physically distinct, independent infrastructure.
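A Multi-AZ RDS for Oracle instance of the kind discussed in this section can be provisioned with a single API call. The sketch below only assembles the request parameters; the identifiers, instance class, and sizes are hypothetical, and the commented boto3 call shows where they would be used.

```python
def multi_az_db_request(identifier: str, instance_class: str, storage_gb: int) -> dict:
    """Build kwargs for rds.create_db_instance() with a synchronous standby in a second AZ."""
    return {
        "DBInstanceIdentifier": identifier,
        "DBInstanceClass": instance_class,
        "Engine": "oracle-ee",                    # Oracle Enterprise Edition
        "LicenseModel": "bring-your-own-license", # BYOL model (see licensing section)
        "AllocatedStorage": storage_gb,
        "MultiAZ": True,   # RDS creates a standby in another AZ and replicates synchronously
        "MasterUsername": "siebeladmin",   # hypothetical
        "MasterUserPassword": "CHANGE_ME", # placeholder only
    }

params = multi_az_db_request("siebel-db", "db.r4.2xlarge", 500)
# import boto3
# boto3.client("rds").create_db_instance(**params)
```

Because failover keeps the same DB endpoint, the Siebel connect string does not need to change when the standby takes over.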
The likelihood of two Availability Zones experiencing a failure at the same time is very small. Like the Siebel web and application servers, you can deploy the Siebel database on Amazon RDS in a Multi-AZ configuration. Multi-AZ deployments provide enhanced availability and durability for Amazon RDS DB instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby instance. Because the endpoint for your DB instance remains the same after a failover, your application can resume database operations as soon as the failover is complete, without manual administrative intervention. To learn how to set up Amazon RDS for Oracle as the database backend of your Siebel CRM application, see this documentation.1

Configuring the Siebel Gateway Name Server for High Availability

With bare-metal implementations, you can deploy Siebel gateway name servers in an active/passive cluster to ensure availability in case of underlying host failure. When deploying on AWS, you have several options for configuring Siebel gateway name servers to ensure high availability.

You can use the EC2 automatic instance recovery feature to recover the Siebel gateway if the underlying host fails. Instance recovery performs several system status checks of the Siebel gateway name server instance and the other components that need to be running for the instance to function as expected. Among other things, instance recovery checks for loss of network connectivity, loss of system power, and software and hardware issues on the physical host.
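Automatic recovery of the gateway instance can be wired up with a CloudWatch alarm on the EC2 system status check, using the built-in `ec2:recover` action. A minimal sketch, in which the Region and instance ID are hypothetical and the boto3 call is shown commented:

```python
def recovery_alarm_request(instance_id: str, region: str = "us-east-2") -> dict:
    """Build kwargs for cloudwatch.put_metric_alarm() that recovers the instance
    when the EC2 system status check fails for two consecutive minutes."""
    return {
        "AlarmName": f"recover-siebel-gateway-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "StatusCheckFailed_System",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Minimum",
        "Period": 60,
        "EvaluationPeriods": 2,
        "Threshold": 1.0,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        # Built-in recover action: reboots the instance, on new hardware if necessary
        "AlarmActions": [f"arn:aws:automate:{region}:ec2:recover"],
    }

params = recovery_alarm_request("i-0123456789abcdef0")
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**params)
```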
If a system status check of the underlying hardware fails, the instance is rebooted (on new hardware if necessary) but retains its instance ID, IP address, Elastic IP addresses, EBS volume attachments, and other configuration details.

Another option is to put the Siebel gateway name servers in an Auto Scaling group that spans multiple Availability Zones and set the minimum and maximum size of the group to one. Auto Scaling ensures that an instance of the Siebel gateway name server is running in the selected Availability Zones. This solution ensures high availability of the Siebel gateway name server in the unlikely event of an Availability Zone failure.

Note: You should back up the siebns.dat configuration file to Amazon S3 before and after making any configuration changes, especially when creating new component definitions and adding or deleting Siebel servers. When the Siebel gateway name server is restored after a failure, it should update itself with the latest copy of siebns.dat from Amazon S3.

You don't have to buy additional software or run additional passive instances while using instance recovery or a fixed-size Auto Scaling group for high availability. Finally, you can configure high-availability clusters of the Siebel gateway name servers. There are several third-party products, such as SIOS2 and SoftNAS,3 that offer a shared storage solution on AWS for clustering the Siebel gateway name servers.

Multi-Region Deployment for Disaster Recovery

Although a single AWS Region architecture with Multi-AZ deployment might suffice for most use cases, you might want to consider a multi-Region deployment for disaster recovery (DR), depending on business requirements. For example, you might have regulatory requirements or a business policy that mandates that the DR site be located a certain distance away from the primary site. Cross-Region deployments for DR should be designed and validated for specific use cases, based on your uptime needs and budget.
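The fixed-size Auto Scaling group described above, one gateway instance kept alive across two Availability Zones, can be sketched as follows. The group name, launch configuration, subnet IDs, bucket, and file path are all hypothetical:

```python
def gateway_asg_request(launch_config: str, subnet_ids: list) -> dict:
    """Build kwargs for autoscaling.create_auto_scaling_group() so that exactly one
    gateway name server instance is kept running across the given subnets/AZs."""
    return {
        "AutoScalingGroupName": "siebel-gateway-asg",
        "LaunchConfigurationName": launch_config,
        "MinSize": 1,   # min = max = 1: Auto Scaling replaces failed instances, never scales out
        "MaxSize": 1,
        "VPCZoneIdentifier": ",".join(subnet_ids),  # subnets in different Availability Zones
    }

params = gateway_asg_request("siebel-gateway-lc", ["subnet-aaaa1111", "subnet-bbbb2222"])
# import boto3
# boto3.client("autoscaling").create_auto_scaling_group(**params)
#
# Per the note above, siebns.dat should be copied to S3 around configuration changes:
# boto3.client("s3").upload_file("/siebel/gtwysrvr/admin/siebns.dat",  # hypothetical path
#                                "my-siebel-backups", "gateway/siebns.dat")
```

A replacement instance launched by the group must then pull the latest siebns.dat from S3 before starting the gateway service.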
The following diagram shows a typical Siebel deployment across Regions that addresses both high availability and DR requirements.

Figure 5: Multi-Region deployment of Siebel on Amazon RDS for Oracle

In this scenario, users are directed to the Siebel application server in the primary Region using Amazon Route 53. If the primary Region is unavailable due to a disaster, failover is initiated and users are redirected to the Siebel application server deployed in the DR Region. The primary database is deployed on Amazon RDS for Oracle in a Multi-AZ configuration. AWS DMS is used to replicate the data from the RDS DB instance in the primary Region to another RDS DB instance in the DR Region.

Note: AWS DMS can replicate only the data, not the database schema changes. The database schema changes in the RDS DB instance in the primary Region should be applied separately to the RDS DB instance in the DR Region. You can do this while updating the applications in the DR Region.

Multi-Region Deployment of Siebel on Oracle Running on Amazon EC2 Instances

Although Amazon RDS for Oracle is the recommended option for deploying the Siebel database, there could be scenarios where Amazon RDS might not be suitable, for example, in the unlikely scenario that the database size is close to or greater than the Amazon RDS for Oracle storage limit. In such scenarios, you can install the Siebel database on Oracle on EC2 instances and configure Oracle Data Guard replication for high availability and DR, as shown in the following figure.

Figure 6: Multi-Region deployment of Siebel on Oracle on Amazon EC2

In this DR scenario, the database is deployed on Oracle running on EC2 instances. Oracle Data Guard replication is configured between the primary database and two standby databases.
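The Route 53-based failover described above can be expressed as a pair of failover record sets, primary and secondary, with a health check attached to the primary. A sketch of the record construction, in which the hosted zone ID, domain name, load balancer endpoints, and health check ID are all hypothetical:

```python
def failover_record(dns_name: str, target: str, role: str, health_check_id: str = None) -> dict:
    """Build one failover resource record set for route53.change_resource_record_sets()."""
    record = {
        "Name": dns_name,
        "Type": "CNAME",
        "TTL": 60,                      # short TTL so clients re-resolve quickly after failover
        "SetIdentifier": f"siebel-{role.lower()}",
        "Failover": role,               # "PRIMARY" or "SECONDARY"
        "ResourceRecords": [{"Value": target}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return record

primary = failover_record("crm.example.com", "siebel-elb-primary.us-east-1.elb.amazonaws.com",
                          "PRIMARY", health_check_id="hc-1234")
secondary = failover_record("crm.example.com", "siebel-elb-dr.us-west-2.elb.amazonaws.com",
                            "SECONDARY")
# import boto3
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z0000000000",  # hypothetical
#     ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": r}
#                              for r in (primary, secondary)]})
```

When the health check on the primary record fails, Route 53 starts answering queries with the secondary (DR) endpoint.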
One of the two standby databases is "local" (for synchronous replication), in another Availability Zone in the primary Region. The other is a "remote" standby database (for asynchronous replication), in the DR Region. If the primary database fails, the local standby database is promoted as the primary database, and the Siebel application server connects to it. In the extremely unlikely event of a Region failure or unavailability, the remote standby database is promoted as the primary database, and users are redirected to the Siebel application server in the DR Region using Route 53. For more details on deploying Oracle Database with Data Guard replication on AWS, see the Oracle Database on the AWS Cloud Quick Start.4 Refer to this AWS whitepaper to learn more about using AWS for disaster recovery.5

Using AWS as a DR Site for an On-Premises Production Environment

You can also deploy a DR environment on AWS for your Siebel applications running in an on-premises production environment. If the production environment fails, a failover is initiated and users are redirected to the Siebel application server deployed on AWS. The process is fairly simple and involves the following major steps:

• Setting up connectivity between your on-premises data center and AWS, using a VPN connection or AWS Direct Connect
• Installing Siebel web, application, and gateway name servers on AWS
• Backing up siebns.dat to an Amazon S3 bucket
• Installing the standby database on AWS and configuring Oracle Data Guard replication, or replication using AWS DMS, between the on-premises production database and the standby database on AWS

In this scenario, if the on-premises production environment fails, you can initiate a failover and redirect users to the Siebel application server on AWS.

VPC and Connectivity Options

Amazon VPC lets you provision a secure, private, isolated
section of the AWS Cloud where you can launch AWS resources in a virtual network using IP address ranges that you define. Amazon VPC provides you with several options for securely connecting your AWS virtual networks with other remote networks. (Network security is discussed in greater detail in the section Amazon VPC and Network Security.)

If users are accessing the Siebel application primarily from an office or on premises (e.g., a call center), you can use a hardware IPsec VPN connection or AWS Direct Connect to connect your on-premises network and Amazon VPC. If users are accessing the Siebel application from outside the office (e.g., a sales rep or customer accessing Siebel from the field or from home), you can use a software appliance-based VPN connection over the internet. For detailed information about various connectivity options, see the whitepaper Amazon Virtual Private Cloud Connectivity Options.6

Securing Your Siebel Application on AWS

The AWS infrastructure is architected to provide an extremely scalable, highly reliable platform that enables you to deploy applications and data quickly and securely. Security in the cloud is slightly different from security in your on-premises data centers. When you move computer systems and data to the cloud, security responsibilities become shared between you and your cloud service provider. In this case, AWS is responsible for securing the underlying infrastructure that supports the cloud, and you are responsible for securing the workloads you deploy in AWS. This shared security responsibility model can reduce your operational burden in many ways. It also gives you the flexibility to implement the most applicable security controls for your business functions in the AWS environment.

Figure 7: AWS shared responsibility model

We recommend that you take advantage of the various security features that AWS offers when deploying Siebel CRM on AWS. You
can use the following AWS security features to control and monitor access to the infrastructure components of your Siebel deployment (e.g., OS-level access to your Siebel application servers, network-level security, and limiting access to AWS services such as Amazon EC2, Amazon RDS, and Amazon S3). The Siebel application security architecture (user authentication, authorization, field-level encryption, etc.) does not change when you deploy your Siebel application on AWS; you configure and manage it the same way as you would on premises.7

IAM

When you deploy your Siebel application on AWS, you can use AWS Identity and Access Management (IAM) to control access to the AWS environment in which your Siebel servers are deployed. With IAM, you can centrally manage users and security credentials (such as passwords, access keys, and permissions policies) that control which AWS services and resources users can access. IAM supports multi-factor authentication for privileged accounts, including options for hardware-based authenticators, and supports integration and federation with corporate directories to reduce administrative overhead and improve the end-user experience.

Monitoring and Logging

You can use AWS CloudTrail for resource change tracking and compliance auditing of the AWS infrastructure components of your Siebel environment (such as Amazon EC2, Amazon RDS, and Amazon S3). For Siebel application-level auditing, you can continue to use the Siebel Audit Trail feature.8 AWS CloudTrail is a web service that records AWS API calls for your AWS account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. This provides deep visibility into API calls, including who, what, when, and from where calls were made. The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.
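Once CloudTrail is enabled, the recorded API history can be queried programmatically as well as from the console. The sketch below builds a lookup for recent calls touching a specific instance; the instance ID is hypothetical, and the boto3 call is shown commented:

```python
from datetime import datetime, timedelta

def trail_lookup_request(instance_id: str, hours: int = 24) -> dict:
    """Build kwargs for cloudtrail.lookup_events() covering the last `hours` hours."""
    now = datetime.utcnow()
    return {
        "LookupAttributes": [
            # Filter events by the resource they touched, e.g. a gateway instance
            {"AttributeKey": "ResourceName", "AttributeValue": instance_id},
        ],
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
    }

params = trail_lookup_request("i-0123456789abcdef0")
# import boto3
# for event in boto3.client("cloudtrail").lookup_events(**params)["Events"]:
#     print(event["EventName"], event["EventTime"])
```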
Amazon VPC and Network Security

Amazon VPC enables you to provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. It offers you an IPsec VPN device to provide an encrypted tunnel between the Amazon VPC and your data center. You create one or more subnets within each Amazon VPC; each instance launched in the Amazon VPC is connected to one subnet. Traditional Layer 2 security attacks, including MAC spoofing and ARP spoofing, are blocked.

You can configure network access control lists (network ACLs), which are stateless traffic filters that apply to all traffic inbound or outbound from a subnet within Amazon VPC. These network ACLs can contain ordered rules to allow or deny traffic based on IP protocol, by service port, as well as by source/destination IP address. Security groups are a complete firewall solution, enabling filtering on both ingress and egress traffic from an instance. Traffic can be restricted by any IP protocol, by service port, and by source/destination IP address (individual IP or CIDR block).

Data Encryption

AWS offers you the ability to add a layer of security to your data at rest in the cloud by providing scalable and efficient encryption features. Data encryption capabilities are available in AWS storage services such as Amazon EBS, Amazon S3, and Amazon Glacier, and in database services such as Amazon RDS for Oracle and Amazon RDS for SQL Server, for use with the Siebel database. You can choose whether to have AWS manage your encryption keys using AWS Key Management Service (AWS KMS), or you can maintain complete control over your keys. Dedicated hardware-based cryptographic key storage options (AWS CloudHSM) are available to help you satisfy compliance requirements. For more information on AWS security, see the Introduction to AWS Security9 and AWS Security Best Practices10 whitepapers.
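Encryption at rest for the EBS volumes backing the Siebel servers can be requested at volume-creation time; KMS manages the key unless you supply your own. A minimal sketch, in which the Availability Zone, size, and key ID are hypothetical:

```python
def encrypted_volume_request(az: str, size_gb: int, kms_key_id: str = None) -> dict:
    """Build kwargs for ec2.create_volume() with encryption at rest enabled."""
    params = {
        "AvailabilityZone": az,
        "Size": size_gb,
        "VolumeType": "gp2",
        "Encrypted": True,   # data at rest, disk I/O, and snapshots are encrypted
    }
    if kms_key_id:
        params["KmsKeyId"] = kms_key_id  # otherwise the account's default EBS KMS key is used
    return params

params = encrypted_volume_request("us-east-1a", 200)
# import boto3
# boto3.client("ec2").create_volume(**params)
```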
Siebel and Oracle Licensing on AWS

In this section, we briefly discuss Siebel CRM and Oracle Database license portability and the Amazon RDS for Oracle licensing models.

Siebel and Oracle Database License Portability

Most Oracle software licenses are fully portable to AWS, including the Enterprise License Agreement (ELA), Unlimited License Agreement (ULA), Business Process Outsourcing (BPO), and Oracle Partner Network (OPN) agreements. You can use your existing Siebel licenses and Oracle Database licenses on AWS. However, you should consult your own Oracle license agreement for specific information.

Amazon RDS for Oracle Licensing Models

You can deploy your Siebel CRM applications on Amazon RDS for Oracle under two different licensing models: "License Included" and "Bring Your Own License" (BYOL). In the License Included service model (available only for Oracle Standard Edition One and Oracle Standard Edition Two), you don't need to separately purchase Oracle licenses; the Oracle Database software has been licensed by AWS. If you already own Oracle Database licenses, you can use the BYOL model to run Oracle databases on Amazon RDS. The BYOL model is designed for customers who prefer to use existing Oracle Database licenses or purchase new licenses directly from Oracle.

Siebel on AWS Use Cases

The following are some of the common use cases for Siebel on AWS:

• Migrate existing Siebel environments to AWS – This is most suitable if you are on a recent release of Siebel. You should design your AWS deployments based on the best practices in this whitepaper. For migrating large databases to Amazon RDS within a small downtime window, we recommend that you take a point-in-time export of your database, transfer it to AWS, import it into Amazon RDS, and then apply the delta changes from on premises. You can use AWS Direct Connect or AWS
Snowball to transfer the export dump to AWS You can use AWS DMS to apply the delta changes and sync the on premises database with the Amazon RDS instance • Siebel upgrade – You can leverage AWS as the upgrade environment to keep the costs of upgrade to a minimum You can either use this new environment only for test and development or you can migrate your entire Siebel environment to AWS Either way you can reduce your overall TCO • Performance testing – Most customers only do performance testing for Siebel changes either on initial implementation or when they have Siebel upgrades to put in place Performance testing for customer ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 21 enhancements is almost never continually done AWS enables you to do performance testing at minimal cost because you are only charged for the resources you use when you us e them This minimal cost enable s more realistic testing both for Siebel upgrades and for your own enhancements You can budget for this on an annual basis depending on your needs for example when Siebel repository file ( SRF ) changes are put in place With additional real world testing of your own planned changes or enhancements you can reduce performance issues and avoid business critical downtimes • Siebel test and development environments on AWS – You might want to set up test and d evelopment environment s on AWS just to try AWS or if the move of the production environment is n’t urgent • Disaster recovery on AWS – You might want to set up a DR environment for your existing Siebel CRM on AWS This can be done at a much lower cost than setting up traditi onal DR Monitoring Your Infrastructure You can continue to use the existing tools that you are familiar with for monitoring your Siebel application such as the Siebel Web Server Extension (SWSE) statistics page the Server Manager GUI or the Server Manager (srvrmgr) command line interface Optionally you can use Oracle Enterprise 
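Whichever tool you use, many monitoring checks reduce to the same primitive: evaluating recent metric datapoints against a threshold for a number of consecutive periods, which is also how alarm evaluation works in services discussed below. The sketch is a toy illustration of that evaluation only; the function name and rule are our own, not a Siebel or AWS API:

```python
def breaches_threshold(datapoints, threshold, periods):
    """Return True if the last `periods` datapoints all exceed `threshold`,
    mimicking an alarm that requires N consecutive breaching periods."""
    recent = datapoints[-periods:]
    return len(recent) == periods and all(v > threshold for v in recent)

# Example: CPU utilization samples; alarm on 3 consecutive periods above 80%.
cpu = [42.0, 85.5, 91.0, 96.2]
```

For example, `breaches_threshold(cpu, 80, 3)` returns True because the last three samples all exceed 80 percent, while a single high sample would not trip the alarm.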
You can also use Amazon CloudWatch to monitor AWS Cloud resources and the applications you're running on AWS. Amazon CloudWatch enables you to monitor your AWS resources in near real time, including EC2 instances, EBS volumes, load balancers, and RDS DB instances. Metrics such as CPU utilization, latency, and request counts are provided automatically for these AWS resources. You can also supply your own logs or custom application and system metrics, such as memory usage, transaction volumes, or error rates, and Amazon CloudWatch will monitor these as well.

You can use the Enhanced Monitoring feature of Amazon RDS to monitor your Siebel database. Enhanced Monitoring gives you access to over 50 metrics, including CPU, memory, file system, and disk I/O. You can also view the processes running on the DB instance and their related metrics, including percentage of CPU usage and memory usage.

AWS and Oracle Support

In this section we discuss the support model when you deploy your Siebel CRM applications on AWS.

AWS Support

AWS Support is a one-on-one, fast-response support channel that is staffed around the clock with technical support engineers and experienced customer service professionals who can help you get the most from the products and features provided by AWS. [11] All AWS Support tiers offer an unlimited number of support cases, with pay-by-the-month pricing and no long-term contracts. The four tiers give developers and businesses the flexibility to choose the support tier that meets their specific needs. The AWS Support Business and Enterprise levels include support for common operating systems and common application stack components; AWS Support engineers can assist with the setup, configuration, and troubleshooting of certain third-party platforms and applications, including Red Hat Enterprise Linux, SUSE Linux, Windows Server 2008, Windows Server 2012, Windows Server 2016, OpenVPN, RRAS, and others.
Oracle Support

Siebel CRM versions 15.0 and 16.0 are certified to run on AWS. Oracle's certification details for Siebel on AWS are available in the certification section of the Oracle Support site. [12] You can use the existing licenses for Siebel applications that you had with your on-premises implementations, and you will have the same level of Oracle Support that you had with your on-premises implementation. Oracle's only requirement for Infrastructure as a Service (IaaS) clouds is that you use platforms and databases that are certified with Siebel; certified versions of both Siebel and the platforms and databases are documented on the Oracle Support site.

You can submit issues in the same manner and provide information about your environments as before. When you contact Oracle Support, the fact that you are running your Siebel CRM application in the cloud might not even enter the discussion, because there is nothing unique about using IaaS that would require any change to the application. This is the same approach that Oracle Support has followed with virtualization technology for Siebel for many years. Escalations would continue to go through the customer support site.

Conclusion

By deploying Siebel in the AWS Cloud, you can simultaneously reduce cost and enable capabilities that might not be possible or cost-effective if deployed in an on-premises data center. Some benefits of deploying Siebel on AWS include:

• Low cost – resources are billed by the hour, and only for the duration they are used.
• Changing from CapEx to OpEx eliminates the need for a large capital outlay.
• Higher availability of 99.99% by deploying Siebel in a Multi-AZ configuration.
• Flexibility to add capacity elastically to cope with demand, which enables you to perform application upgrades faster.
• Flexibility to add environments and use them for short durations, such as for performance testing and training.
Contributors

The following individuals and organizations contributed to this document:

• Ashok Sundaram, Solutions Architect, Amazon Web Services
• Yoav Eilat, Sr. Product Marketing Manager, Amazon Web Services
• Mark Farrier, Director, Product Management – Siebel CRM, Oracle
• Milind Waikul, CEO, Enterprise Beacon Inc.

Further Reading

For additional information, see the following sources:

• Test drive Siebel running on Amazon EC2 and Amazon RDS: http://www.enterprisebeacon.com/testdrives.html
• Amazon EC2: https://aws.amazon.com/ec2/
• Amazon RDS: https://aws.amazon.com/rds/
• Amazon CloudWatch: https://aws.amazon.com/cloudwatch/
• AWS DMS: https://aws.amazon.com/dms/
• Elastic Load Balancing: https://aws.amazon.com/elasticloadbalancing/
• Amazon EBS: https://aws.amazon.com/ebs/
• Amazon S3: https://aws.amazon.com/s3/
• Amazon Route 53: https://aws.amazon.com/route53/
• Amazon VPC: https://aws.amazon.com/vpc/
• AWS Direct Connect: https://aws.amazon.com/directconnect/
• AWS CloudTrail: https://aws.amazon.com/cloudtrail/
• AWS CloudHSM: https://aws.amazon.com/cloudhsm/
• Amazon Glacier: https://aws.amazon.com/glacier/
• AWS KMS: https://aws.amazon.com/kms/
• AWS Cost Estimator: http://calculator.s3.amazonaws.com/index.html
• AWS Trusted Advisor: https://aws.amazon.com/premiumsupport/trustedadvisor/
• Oracle cloud licensing: http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf
• Oracle Processor Core Factor Table: http://www.oracle.com/us/corporate/contracts/processor-core-factor-table-070634.pdf
• Amazon EC2 virtual cores by instance type: https://aws.amazon.com/ec2/virtualcores/
• Oracle Database on the AWS Cloud Quick Start (with Data Guard replication): https://s3.amazonaws.com/quickstart-reference/oracle/database/latest/doc/oracle-database-on-the-aws-cloud.pdf
1. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Resources.Siebel.html
2. http://us.sios.com/clustersyourway/products/windows/datakeeper-cluster
3. https://aws.amazon.com/whitepapers/softnas-architecture-on-aws/
4. https://s3.amazonaws.com/quickstart-reference/oracle/database/latest/doc/oracle-database-on-the-aws-cloud.pdf
5. https://d0.awsstatic.com/whitepapers/aws-disaster-recovery.pdf
6. https://d0.awsstatic.com/whitepapers/aws-amazon-vpc-connectivity-options.pdf
7. https://docs.oracle.com/cd/E74890_01/books/Secur/secur_aboutsec005.htm
8. https://docs.oracle.com/cd/E74890_01/books/AppsAdmin/AppsAdminAuditTrail2.html
9. http://d0.awsstatic.com/whitepapers/Security/Intro_to_AWS_Security.pdf
10. https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf
11. https://aws.amazon.com/premiumsupport/
12. http://support.oracle.com/

Notes,General,consultant,Best Practices
Big_Data_Analytics_Options_on_AWS,"Archived: Big Data Analytics Options on AWS

December 2018

This paper has been archived. For the latest technical information, see: https://docs.aws.amazon.com/whitepapers/latest/big-data-analytics-options/welcome.html

© 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
Contents

Introduction 5
The AWS Advantage in Big Data Analytics 5
Amazon Kinesis 7
AWS Lambda 11
Amazon EMR 14
AWS Glue 20
Amazon Machine Learning 22
Amazon DynamoDB 25
Amazon Redshift 29
Amazon Elasticsearch Service 33
Amazon QuickSight 37
Amazon EC2 40
Amazon Athena 42
Solving Big Data Problems on AWS 45
Example 1: Queries against an Amazon S3 Data Lake 47
Example 2: Capturing and Analyzing Sensor Data 49
Example 3: Sentiment Analysis of Social Media 52
Conclusion 54
Contributors 55
Further Reading 55
Document Revisions 56

Abstract

This whitepaper helps architects, data scientists, and developers understand the big data analytics options available in the AWS Cloud by providing an overview of services, with the following information for each:

• Ideal usage patterns
• Cost model
• Performance
• Durability and availability
• Scalability and elasticity
• Interfaces
• Anti-patterns

This paper concludes with scenarios that showcase the analytics options in use, as well as additional resources for getting started with big data analytics on AWS.

Amazon Web Services – Big Data Analytics Options on AWS – Page 5 of 56

Introduction

As we become a more digital society, the amount of data being created and collected is growing and accelerating significantly. Analysis of this ever-growing data becomes a challenge with traditional analytical tools. We require innovation to bridge the gap between data being generated and data that can be analyzed effectively.

Big data tools and technologies offer opportunities and challenges in being able to analyze data efficiently to better understand customer preferences, gain a competitive advantage in the marketplace, and grow your business. Data management architectures have evolved from the traditional data warehousing model to more complex architectures that address more requirements, such as real-time and batch processing; structured and unstructured data; high-velocity transactions; and so on.
Amazon Web Services (AWS) provides a broad platform of managed services to help you build, secure, and seamlessly scale end-to-end big data applications quickly and with ease. Whether your applications require real-time streaming or batch data processing, AWS provides the infrastructure and tools to tackle your next big data project. There is no hardware to procure and no infrastructure to maintain and scale; you use only what you need to collect, store, process, and analyze big data. AWS has an ecosystem of analytical solutions specifically designed to handle this growing amount of data and provide insight into your business.

The AWS Advantage in Big Data Analytics

Analyzing large data sets requires significant compute capacity that can vary in size based on the amount of input data and the type of analysis. This characteristic of big data workloads is ideally suited to the pay-as-you-go cloud computing model, where applications can easily scale up and down based on demand. As requirements change, you can easily resize your environment (horizontally or vertically) on AWS to meet your needs, without having to wait for additional hardware or being required to over-invest to provision enough capacity.

For mission-critical applications on a more traditional infrastructure, system designers have no choice but to over-provision, because the system must be able to handle a surge in additional data driven by an increase in business need. By contrast, on AWS you can provision more capacity and compute in a matter of minutes, meaning that your big data applications grow and shrink as demand dictates, and your system runs as close to optimal efficiency as possible.

In addition, you get flexible computing on a global infrastructure with access to the many different geographic regions that AWS offers, along with the ability to use other scalable services to build sophisticated big data applications.
These other services include Amazon Simple Storage Service (Amazon S3) to store data; AWS Glue to orchestrate jobs that move and transform that data easily; and AWS IoT, which lets connected devices interact with cloud applications and other connected devices.

As the amount of data being generated continues to grow, AWS has many options to get that data to the cloud, including secure devices like AWS Snowball to accelerate petabyte-scale data transfers, delivery streams with Amazon Kinesis Data Firehose to load streaming data continuously, database migration using AWS Database Migration Service, and scalable private connections through AWS Direct Connect.

AWS recently added AWS Snowball Edge, a 100 TB data transfer device with on-board storage and compute capabilities. You can use Snowball Edge to move large amounts of data into and out of AWS, as a temporary storage tier for large local datasets, or to support local workloads in remote or offline locations. Additionally, you can deploy AWS Lambda code on Snowball Edge to perform tasks such as analyzing data streams or processing data locally.

As mobile usage continues to grow rapidly, you can use the suite of services within the AWS Mobile Hub to collect and measure app usage and data, or export that data to another service for further custom analysis.

These capabilities of the AWS platform make it an ideal fit for solving big data problems, and many customers have implemented successful big data analytics workloads on AWS. For more information about case studies, see Big Data Customer Success Stories.

The following services for collecting, processing, storing, and analyzing big data are described in order:

• Amazon Kinesis
• AWS Lambda
• Amazon Elastic MapReduce
• AWS Glue
• Amazon Machine Learning
• Amazon DynamoDB
• Amazon Redshift
• Amazon Athena
• Amazon Elasticsearch Service
• Amazon QuickSight

In addition to these services, Amazon EC2 instances are available for self-managed big data applications.
Amazon Kinesis

Amazon Kinesis is a platform for streaming data on AWS, making it easy to load and analyze streaming data and providing the ability for you to build custom streaming data applications for specialized needs. With Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes, and data warehouses, or build your own real-time applications using this data. Amazon Kinesis enables you to process and analyze data as it arrives and respond in real time, instead of having to wait until all your data is collected before the processing can begin.

Currently there are four pieces of the Kinesis platform that can be utilized based on your use case:

• Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data.
• Amazon Kinesis Video Streams enables you to build custom applications that process or analyze streaming video.
• Amazon Kinesis Data Firehose enables you to deliver real-time streaming data to AWS destinations such as Amazon S3, Amazon Redshift, Amazon Kinesis Analytics, and Amazon Elasticsearch Service.
• Amazon Kinesis Data Analytics enables you to process and analyze streaming data with standard SQL.

Kinesis Data Streams and Kinesis Video Streams enable you to build custom applications that process or analyze streaming data in real time. Kinesis Data Streams can continuously capture and store terabytes of data per hour from hundreds of thousands of sources, such as website clickstreams, financial transactions, social media feeds, IT logs, and location-tracking events. Kinesis Video Streams can continuously capture video data from smartphones, security cameras, drones, satellites, dashcams, and other edge devices.

With the Amazon Kinesis Client Library (KCL), you can build Amazon Kinesis applications that use streaming data to power real-time dashboards, generate alerts, and implement dynamic pricing and advertising.
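Processing data as it arrives, rather than batching it first, can be illustrated with a minimal in-memory sketch. This is only a stand-in for the kind of logic a KCL record processor would host; the class and its sliding-window rule are our own invention, not part of the KCL:

```python
from collections import deque

class ClickRateMonitor:
    """Maintain a count of click events over a sliding time window,
    updated incrementally as each record arrives."""

    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.events = deque()

    def process_record(self, timestamp):
        """Ingest one event; return how many events fall in the window."""
        self.events.append(timestamp)
        # Evict events that have aged out of the window.
        while self.events and self.events[0] <= timestamp - self.window:
            self.events.popleft()
        return len(self.events)
```

A dashboard could read the returned count after every record to display a live click rate, with no intermediate batch step; that incremental-update pattern is the essence of stream processing.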
You can also emit data from Kinesis Data Streams and Kinesis Video Streams to other AWS services, such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elastic MapReduce (Amazon EMR), and AWS Lambda.

Provision the level of input and output required for your data stream in blocks of 1 megabyte per second (MB/sec) using the AWS Management Console, API, or SDKs. The size of your stream can be adjusted up or down at any time without restarting the stream and without any impact on the data sources pushing data to the stream. Within seconds, data put into a stream is available for analysis.

With Kinesis Data Firehose, you do not need to write applications or manage resources. You configure your data producers to send data to Kinesis Data Firehose, and it automatically delivers the data to the AWS destination that you specified. You can also configure Kinesis Data Firehose to transform your data before delivery. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.

Amazon Kinesis Data Analytics is the easiest way to process and analyze real-time streaming data. With Kinesis Data Analytics, you just use standard SQL to process your data streams, so you don't have to learn any new programming languages. Simply point Kinesis Data Analytics at an incoming data stream, write your SQL queries, and specify where you want to load the results. Kinesis Data Analytics takes care of running your SQL queries continuously on data while it's in transit, and of sending the results to the destinations.

In the subsequent sections we focus primarily on Amazon Kinesis Data Streams.

Ideal Usage Patterns
Amazon Kinesis Data Streams is useful wherever there is a need to move data rapidly off producers (data sources) and continuously process it. That processing can transform the data before emitting it into another data store, drive real-time metrics and analytics, or derive and aggregate multiple streams into more complex streams for downstream processing. The following are typical scenarios for using Kinesis Data Streams for analytics:

• Real-time data analytics – Kinesis Data Streams enables real-time data analytics on streaming data, such as analyzing website clickstream data and customer engagement analytics.

• Log and data feed intake and processing – With Kinesis Data Streams, you can have producers push data directly into an Amazon Kinesis stream. For example, you can submit system and application logs to Kinesis Data Streams and access the stream for processing within seconds. This prevents the log data from being lost if the front end or application server fails, and reduces local log storage on the source. Kinesis Data Streams provides accelerated data intake because you are not batching up the data on the servers before you submit it for intake.

• Real-time metrics and reporting – You can use data ingested into Kinesis Data Streams to extract metrics and generate KPIs that power reports and dashboards at real-time speeds. This enables data-processing application logic to work on data as it streams in continuously, rather than waiting for data batches to arrive.

Cost Model

Amazon Kinesis Data Streams has simple pay-as-you-go pricing, with no upfront costs or minimum fees; you pay only for the resources you consume. An Amazon Kinesis stream is made up of one or more shards. Each shard gives you a capacity of 5 read transactions per second, up to a maximum total of 2 MB of data read per second. Each shard can support up to 1,000 write transactions per second and up to a maximum total of 1 MB of data written per second. The data capacity of your stream is a function of the number of shards that you specify for the stream; the total capacity of the stream is the sum of the capacities of its shards.
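The per-shard limits above translate directly into a sizing rule: a stream needs enough shards to cover its write bandwidth, its write transaction rate, and its read bandwidth, whichever requirement is largest. A small sketch of that arithmetic (the helper function is our own; actual shard provisioning is done through the Kinesis console or API):

```python
import math

def required_shards(ingress_mb_per_sec, records_per_sec, egress_mb_per_sec):
    """Smallest shard count satisfying the per-shard limits quoted above:
    1 MB/s and 1,000 records/s written, 2 MB/s read, per shard."""
    return max(
        math.ceil(ingress_mb_per_sec / 1.0),   # write bandwidth
        math.ceil(records_per_sec / 1000.0),   # write transaction rate
        math.ceil(egress_mb_per_sec / 2.0),    # read bandwidth
        1,  # a stream always has at least one shard
    )
```

For a 1 GB/s (1,024 MB/s) ingest with matching read throughput, `required_shards(1024, 500_000, 2048)` returns 1,024, consistent with the shard count quoted in the Performance discussion later in this section.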
There are just two pricing components: an hourly charge per shard, and a charge for each 1 million PUT transactions. For more information, see Amazon Kinesis Data Streams Pricing. Applications that run on Amazon EC2 and process Amazon Kinesis streams also incur standard Amazon EC2 costs.

Performance

Amazon Kinesis Data Streams allows you to choose the throughput capacity you require in terms of shards. With each shard in an Amazon Kinesis stream, you can capture up to 1 megabyte per second of data at 1,000 write transactions per second. Your Amazon Kinesis applications can read data from each shard at up to 2 megabytes per second. You can provision as many shards as you need to get the throughput capacity you want; for instance, a 1 gigabyte per second data stream would require 1,024 shards.

Durability and Availability

Amazon Kinesis Data Streams synchronously replicates data across three Availability Zones in an AWS Region, providing high availability and data durability. Additionally, you can store a cursor in DynamoDB to durably track what has been read from an Amazon Kinesis stream. In the event that your application fails in the middle of reading data from the stream, you can restart your application and use the cursor to pick up from the exact spot where the failed application left off.

Scalability and Elasticity

You can increase or decrease the capacity of the stream at any time, according to your business or operational needs, without any interruption to ongoing stream processing. By using API calls or development tools, you can automate scaling of your Amazon Kinesis Data Streams environment to meet demand and ensure you pay only for what you need.

Interfaces

There are two interfaces to Kinesis Data Streams: input, which data producers use to put data into Kinesis Data Streams, and output, used to process and analyze the data that comes in.
Producers can write data using the Amazon Kinesis PUT API, an AWS Software Development Kit (SDK) or toolkit abstraction, the Amazon Kinesis Producer Library (KPL), or the Amazon Kinesis Agent.

For processing data that has already been put into an Amazon Kinesis stream, there are client libraries provided to build and operate real-time streaming data processing applications. The KCL acts as an intermediary between Amazon Kinesis Data Streams and your business applications, which contain the specific processing logic. There is also integration to read from an Amazon Kinesis stream into Apache Storm, via the Amazon Kinesis Storm Spout.

Anti-Patterns

Amazon Kinesis Data Streams has the following anti-patterns:

• Small-scale consistent throughput – Even though Kinesis Data Streams works for streaming data at 200 KB/sec or less, it is designed and optimized for larger data throughputs.

• Long-term data storage and analytics – Kinesis Data Streams is not suited for long-term data storage. By default, data is retained for 24 hours, and you can extend the retention period by up to 7 days. You can move any data that needs to be stored for longer than 7 days into another durable storage service, such as Amazon S3, Amazon Glacier, Amazon Redshift, or DynamoDB.

AWS Lambda

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume; there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services, or call it directly from any web or mobile app.

Ideal Usage Pattern

AWS Lambda enables you to execute code in response to triggers such as changes in data, shifts in system state, or actions by users.
Lambda can be directly triggered by AWS services such as Amazon S3, DynamoDB, Amazon Kinesis Data Streams, Amazon Simple Notification Service (Amazon SNS), and CloudWatch, allowing you to build a variety of real-time data processing systems:

• Real-time file processing – You can trigger Lambda to invoke a process when a file has been uploaded to Amazon S3 or modified; for example, to change an image from color to grayscale after it has been uploaded to Amazon S3.

• Real-time stream processing – You can use Kinesis Data Streams and Lambda to process streaming data for clickstream analysis, log filtering, and social media analysis.

• Extract, transform, load – You can use Lambda to run code that transforms data and loads it from one data repository to another.

• Replace cron – Use schedule expressions to run a Lambda function at regular intervals, as a cheaper and more available solution than running cron on an EC2 instance.

• Process AWS events – Many other services, such as AWS CloudTrail, can act as event sources simply by logging to Amazon S3 and using S3 bucket notifications to trigger Lambda functions.

Cost Model

With AWS Lambda you only pay for what you use. You are charged based on the number of requests for your functions and the time your code executes. The Lambda free tier includes 1 million free requests per month and 400,000 GB-seconds of compute time per month. You are charged $0.20 per 1 million requests thereafter ($0.0000002 per request). Additionally, the duration of your code's execution is priced in relation to the memory allocated: you are charged $0.00001667 for every GB-second used. See Lambda pricing for more details.
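The pricing figures above are easy to turn into a monthly estimate. The helper below is our own simplification: it applies the free tier but ignores the rounding-up of billed duration that real Lambda billing performs:

```python
def lambda_monthly_cost(requests, avg_duration_ms, memory_mb):
    """Estimate monthly AWS Lambda cost from the rates quoted above:
    $0.20 per 1M requests and $0.00001667 per GB-second, after a free
    tier of 1M requests and 400,000 GB-seconds."""
    gb_seconds = requests * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    billable_requests = max(0, requests - 1_000_000)
    billable_gb_seconds = max(0.0, gb_seconds - 400_000)
    return (billable_requests / 1_000_000 * 0.20
            + billable_gb_seconds * 0.00001667)
```

For example, 3 million requests per month at 200 ms and 512 MB consume 300,000 GB-seconds, which stays inside the compute free tier, so only the 2 million non-free requests are billed, at $0.40.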
Performance

After deploying your code into Lambda for the first time, your functions are typically ready to call within seconds of upload. Lambda is designed to process events within milliseconds. Latency will be higher immediately after a Lambda function is created or updated, or if it has not been used recently. To improve performance, Lambda may choose to retain an instance of your function and reuse it to serve a subsequent request, rather than creating a new copy. To learn more about how Lambda reuses function instances, see the documentation. Your code should not assume that this reuse will always happen.

Durability and Availability

AWS Lambda is designed to use replication and redundancy to provide high availability, both for the service itself and for the Lambda functions it operates. There are no maintenance windows or scheduled downtimes for either. On failure, Lambda functions being invoked synchronously respond with an exception. Lambda functions being invoked asynchronously are retried at least 3 times, after which the event may be rejected.

Scalability and Elasticity

There is no limit on the number of Lambda functions that you can run. However, Lambda has a default safety throttle of 1,000 concurrent executions per account per region; a member of the AWS support team can increase this limit. Lambda is designed to scale automatically on your behalf. There are no fundamental limits to scaling a function: Lambda dynamically allocates capacity to match the rate of incoming events.

Interfaces

Lambda functions can be managed in a variety of ways. You can easily list, delete, update, and monitor your Lambda functions using the dashboard in the Lambda console. You can also use the AWS CLI and AWS SDK to manage your Lambda functions.

You can trigger a Lambda function from an AWS event, such as Amazon S3 bucket notifications, Amazon DynamoDB Streams, Amazon CloudWatch Logs, Amazon Simple Email Service (Amazon SES), Amazon Kinesis Data Streams, Amazon SNS, Amazon Cognito, and more. Any API call in any service that supports AWS CloudTrail can be processed as an event in Lambda by responding to CloudTrail audit logs. For more information about event sources, see Core Components: AWS Lambda Function and Event Sources.
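A minimal handler for the S3-notification event shape gives a feel for the programming model. This sketch only parses the event; a real function would go on to fetch and process each object, as indicated in the comments:

```python
import urllib.parse

def lambda_handler(event, context=None):
    """Extract (bucket, key) pairs from an S3 notification event."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 URL-encodes object keys in notifications (spaces become '+').
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # A real function would process the object here, e.g. download it
        # with the AWS SDK and convert an uploaded image to grayscale.
        results.append((bucket, key))
    return results
```

Wired to an S3 bucket notification, Lambda would invoke this handler once per event, scaling out automatically as uploads arrive.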
AWS Lambda supports code written in Node.js (JavaScript), Python, Java (Java 8 compatible), C# (.NET Core), Go, PowerShell, and Ruby. Your code can include existing libraries, even native ones. Please read the documentation on using Node.js, Python, Java, C#, Go, PowerShell, and Ruby.

Anti-Patterns

• Long-running applications – Each Lambda function must complete within 900 seconds. For long-running applications that may require jobs to run longer than fifteen minutes, Amazon EC2 is recommended. Alternatively, create a chain of Lambda functions, where function 1 calls function 2, which calls function 3, and so on, until the process is completed. See Creating a Lambda State Machine for more information.

• Dynamic websites – While it is possible to run a static website with AWS Lambda, running a highly dynamic and large-volume website can be performance-prohibitive. Utilizing Amazon EC2 and Amazon CloudFront would be a recommended use case.

• Stateful applications – Lambda code must be written in a "stateless" style, i.e., it should assume there is no affinity to the underlying compute infrastructure. Local file system access, child processes, and similar artifacts may not extend beyond the lifetime of the request, and any persistent state should be stored in Amazon S3, DynamoDB, or another Internet-available storage service.

Amazon EMR

Amazon EMR is a highly distributed computing framework that lets you process and store data quickly and cost-effectively. Amazon EMR uses Apache Hadoop, an open-source framework, to distribute your data and processing across a resizable cluster of Amazon EC2 instances, and allows you to use the most common Hadoop tools, such as Hive, Pig, and Spark. Hadoop provides a framework to run big data processing and analytics; Amazon EMR does all the work involved with provisioning, managing, and maintaining the infrastructure and software of a Hadoop cluster.
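The MapReduce model that Hadoop implements can be shown in miniature with word count, its canonical example. This pure-Python sketch runs the map and reduce phases serially; on EMR the same two phases would be distributed across the cluster's nodes:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in one input line.
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    # Shuffle + reduce: group pairs by word and sum their counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

def word_count(lines):
    return reduce_phase(chain.from_iterable(map_phase(line) for line in lines))
```

Here `word_count(["big data", "Big deal"])` yields `{"big": 2, "data": 1, "deal": 1}`. Hadoop scales exactly this pattern to terabytes by partitioning the map input across nodes and routing each shuffle key to a single reducer.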
Ideal Usage Patterns

Amazon EMR's flexible framework reduces large processing problems and data sets into smaller jobs and distributes them across many compute nodes in a Hadoop cluster. This capability lends itself to many usage patterns with big data analytics. Here are a few examples:

• Log processing and analytics
• Large extract, transform, and load (ETL) data movement
• Risk modeling and threat analytics
• Ad targeting and clickstream analytics
• Genomics
• Predictive analytics
• Ad hoc data mining and analytics

For more information, see the documentation for Amazon EMR.

Cost Model

With Amazon EMR, you can launch a persistent cluster that stays up indefinitely, or a temporary cluster that terminates after the analysis is complete. In either scenario, you pay only for the hours the cluster is up. Amazon EMR supports a variety of Amazon EC2 instance types (standard, high CPU, high memory, high I/O, and so on) and all Amazon EC2 pricing options (On-Demand, Reserved, and Spot). When you launch an Amazon EMR cluster (also called a "job flow"), you choose how many and what type of Amazon EC2 instances to provision. The Amazon EMR price is in addition to the Amazon EC2 price. For more information, see Amazon EMR Pricing.

Performance

Amazon EMR performance is driven by the type of EC2 instances you choose to run your cluster on, and by how many of them you run for your analytics. You should choose an instance type suitable for your processing requirements, with sufficient memory, storage, and processing power. For more information about EC2 instance specifications, see Amazon EC2 Instance Types.

Durability and Availability

By default, Amazon EMR is fault tolerant for core node failures and continues job execution if a slave node goes down. Amazon EMR will also provision a new node when a core node fails. However, Amazon EMR will not replace nodes if all nodes in the cluster are lost. Customers can monitor the health of nodes and replace failed nodes with CloudWatch.
health of nodes and replace failed nodes with CloudWatch.

Scalability and Elasticity

With Amazon EMR, it is easy to resize a running cluster. You can add core nodes, which hold the Hadoop Distributed File System (HDFS), at any time to increase your processing power and increase the HDFS storage capacity (and throughput). Additionally, you can use Amazon S3 natively or via EMRFS, along with or instead of local HDFS, which allows you to decouple your memory and compute from your storage, providing greater flexibility and cost efficiency. You can also add and remove task nodes at any time; these can process Hadoop jobs but do not maintain HDFS. Some customers add hundreds of instances to their clusters when their batch processing occurs, and remove the extra instances when processing completes. For example, you may not know how much data your clusters will be handling in six months, or you may have spiky processing needs. With Amazon EMR you don't need to guess your future requirements or provision for peak demand, because you can easily add or remove capacity at any time. Additionally, you can add all new clusters of various sizes and remove them at any time with a few clicks in the console or by a programmatic API call.

Interfaces

Amazon EMR supports many tools on top of Hadoop that can be used for big data analytics, and each has its own interfaces. Here is a brief summary of the most popular options:

Hive

Hive is an open-source data warehouse and analytics package that runs on top of Hadoop. Hive is operated by Hive QL, a SQL-based language which allows users to structure, summarize, and query data. Hive QL goes beyond standard SQL, adding first-class support for
map/reduce functions and complex, extensible, user-defined data types like JSON and Thrift. This capability allows processing of complex and unstructured data sources such as text documents and log files.

Hive allows user extensions via user-defined functions written in Java. Amazon EMR has made numerous improvements to Hive, including direct integration with DynamoDB and Amazon S3. For example, with Amazon EMR you can load table partitions automatically from Amazon S3, you can write data to tables in Amazon S3 without using temporary files, and you can access resources in Amazon S3, such as scripts for custom map and/or reduce operations and additional libraries. For more information, see Apache Hive in the Amazon EMR Release Guide.

Pig

Pig is an open-source analytics package that runs on top of Hadoop. Pig is operated by Pig Latin, a SQL-like language which allows users to structure, summarize, and query data. As well as SQL-like operations, Pig Latin also adds first-class support for map and reduce functions and complex, extensible, user-defined data types. This capability allows processing of complex and unstructured data sources such as text documents and log files.

Pig allows user extensions via user-defined functions written in Java. Amazon EMR has made numerous improvements to Pig, including the ability to use multiple file systems (normally, Pig can only access one remote file system), the ability to load customer JARs and scripts from Amazon S3 (such as "REGISTER s3://mybucket/piggybank.jar"), and additional functionality for String and DateTime processing. For more information, see Apache Pig in the Amazon EMR Release Guide.

Spark

Spark is an open-source data analytics engine built on Hadoop with the fundamentals for in-memory MapReduce. Spark provides additional speed for certain analytics and is the foundation for other powerful tools such as Shark (SQL-driven data warehousing), Spark Streaming (streaming
applications), GraphX (graph systems), and MLlib (machine learning). For more information, see Apache Spark on Amazon EMR.

HBase

HBase is an open-source, non-relational, distributed database modeled after Google's BigTable. It was developed as part of the Apache Software Foundation's Hadoop project and runs on top of the Hadoop Distributed File System (HDFS) to provide BigTable-like capabilities for Hadoop. HBase provides you a fault-tolerant, efficient way of storing large quantities of sparse data using column-based compression and storage. In addition, HBase provides fast lookup of data because data is stored in-memory instead of on disk. HBase is optimized for sequential write operations, and it is highly efficient for batch inserts, updates, and deletes. HBase works seamlessly with Hadoop, sharing its file system and serving as a direct input and output to Hadoop jobs. HBase also integrates with Apache Hive, enabling SQL-like queries over HBase tables, joins with Hive-based tables, and support for Java Database Connectivity (JDBC). With Amazon EMR you can back up HBase to Amazon S3 (full or incremental, manual or automated), and you can restore from a previously created backup. For more information, see Apache HBase in the Amazon EMR Release Guide.

Hunk

Hunk was developed by Splunk to make machine data accessible, usable, and valuable to everyone. With Hunk, you can interactively explore, analyze, and visualize data stored in Amazon EMR and Amazon S3, harnessing Splunk analytics on Hadoop. For more information, see Amazon EMR with Hunk: Splunk Analytics for Hadoop and NoSQL.

Presto

Presto is an open-source distributed SQL query engine optimized for low-latency, ad hoc analysis of data. It supports the ANSI SQL standard, including complex queries, aggregations, joins, and window functions. Presto can process data from multiple data sources, including the Hadoop Distributed File System (HDFS) and Amazon S3.

Kinesis Connector

The Kinesis
Connector enables EMR to directly read and query data from Kinesis Data Streams. You can perform batch processing of Kinesis streams using existing Hadoop ecosystem tools such as Hive, Pig, MapReduce, Hadoop Streaming, and Cascading. Some use cases enabled by this integration are:

• Streaming log analysis: You can analyze streaming web logs to generate a list of the top 10 error types every few minutes by region, browser, and access domains.

• Complex data processing workflows: You can join Kinesis streams with data stored in Amazon S3, DynamoDB tables, and HDFS. You can write queries that join clickstream data from Kinesis with advertising campaign information stored in a DynamoDB table to identify the most effective categories of ads that are displayed on particular websites.

• Ad hoc queries: You can periodically load data from Kinesis into HDFS and make it available as a local Impala table for fast, interactive analytic queries.

Other Third-Party Tools

Amazon EMR also supports a variety of other popular applications and tools in the Hadoop ecosystem, such as R (statistics), Mahout (machine learning), Ganglia (monitoring), Accumulo (secure NoSQL database), Hue (user interface to analyze Hadoop data), Sqoop (relational database connector), HCatalog (table and storage management), and more. Additionally, you can install your own software on top of Amazon EMR to help solve your business needs.

AWS provides the ability to quickly move large amounts of data from Amazon S3 to HDFS, from HDFS to Amazon S3, and between Amazon S3 buckets, using Amazon EMR's S3DistCp, an extension of the open-source tool DistCp that uses MapReduce to efficiently move large amounts of data.

You can optionally use the EMR File System (EMRFS), an implementation of HDFS which allows Amazon EMR clusters to store data on Amazon S3. You can enable Amazon S3 server-side and client-side encryption. When you use EMRFS, a metadata store is transparently built
in DynamoDB to help manage the interactions with Amazon S3, and allows you to have multiple EMR clusters easily use the same EMRFS metadata and storage on Amazon S3.

Anti-Patterns

Amazon EMR has the following anti-patterns:

• Small data sets – Amazon EMR is built for massively parallel processing; if your data set is small enough to run quickly on a single machine in a single thread, the added overhead of map and reduce jobs may not be worth it for small data sets that can easily be processed in memory on a single system.

• ACID transaction requirements – While there are ways to achieve ACID (atomicity, consistency, isolation, durability) or limited ACID on Hadoop, using another database, such as Amazon Relational Database Service (Amazon RDS) or a relational database running on Amazon EC2, may be a better option for workloads with stringent requirements.

AWS Glue

AWS Glue is a fully managed extract, transform, and load (ETL) service that you can use to catalog your data, clean it, enrich it, and move it reliably between data stores. With AWS Glue, you can significantly reduce the cost, complexity, and time spent creating ETL jobs. AWS Glue is serverless, so there is no infrastructure to set up or manage. You pay only for the resources consumed while your jobs are running.

Ideal Usage Patterns

AWS Glue is designed to easily prepare data for extract, transform, and load (ETL) jobs. Using AWS Glue gives you the following benefits:

• AWS Glue can automatically crawl your data and generate code to execute your data transformations and loading processes.

• Integration with services like Amazon Athena, Amazon EMR, and Amazon Redshift.

• Serverless, with no infrastructure to provision or manage.

• AWS Glue generates ETL code that is customizable, reusable, and portable, using familiar technology – Python and Spark.

Cost Model

With AWS Glue, you pay an hourly rate, billed by the minute, for crawler jobs (discovering data) and ETL jobs
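The hourly, per-minute-billed pricing model can be sketched as a simple calculation over Data Processing Units (DPUs). The DPU-hour rate below is a hypothetical placeholder for illustration; see AWS Glue Pricing for the actual rate.

```python
def glue_job_cost(dpus, runtime_minutes, rate_per_dpu_hour):
    """Estimate an ETL job's cost: DPUs x hours, with time billed
    by the minute (hence the fractional-hour conversion)."""
    return dpus * (runtime_minutes / 60.0) * rate_per_dpu_hour

# A 10-DPU job running for 30 minutes, at a hypothetical
# $0.44 per DPU-hour:
print(round(glue_job_cost(dpus=10, runtime_minutes=30,
                          rate_per_dpu_hour=0.44), 2))
```

Per-minute billing means a job that finishes in 12 minutes costs one fifth of what the same job would cost if it ran a full hour, which rewards tuning jobs to complete quickly.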
(processing and loading data). For the AWS Glue Data Catalog, you pay a simple monthly fee for storing and accessing the metadata. The first million objects stored are free, and the first million accesses are free. If you provision a development endpoint to interactively develop your ETL code, you pay an hourly rate, billed per minute. See AWS Glue Pricing for more details.

Performance

AWS Glue uses a scale-out Apache Spark environment to load your data into its destination. You simply specify the number of Data Processing Units (DPUs) that you want to allocate to your ETL job. An AWS Glue ETL job requires a minimum of 2 DPUs. By default, AWS Glue allocates 10 DPUs to each ETL job. Additional DPUs can be added to increase the performance of your ETL job. Multiple jobs can be triggered in parallel or sequentially by triggering them on a job completion event. You can also trigger one or more AWS Glue jobs from an external source, such as an AWS Lambda function.

Durability and Availability

AWS Glue connects to the data source of your preference, whether it is in an Amazon S3 file, an Amazon RDS table, or another set of data. As a result, all your data is stored and available as it pertains to that data store's durability characteristics. The AWS Glue service provides status of each job and pushes all notifications to Amazon CloudWatch Events. You can set up SNS notifications using CloudWatch actions to be informed of job failures or completions.

Scalability and Elasticity

AWS Glue provides a managed ETL service that runs on a serverless Apache Spark environment. This allows you to focus on your ETL job and not worry about configuring and managing the underlying compute resources. AWS Glue works on top of the Apache Spark environment to provide a scale-out execution environment for your data transformation jobs.

Interfaces

AWS Glue provides a number of ways to populate metadata into the AWS Glue Data Catalog. AWS
Glue crawlers scan various data stores you own to automatically infer schemas and partition structure, and populate the AWS Glue Data Catalog with corresponding table definitions and statistics. You can also schedule crawlers to run periodically so that your metadata is always up to date and in sync with the underlying data. Alternately, you can add and update table details manually by using the AWS Glue Console or by calling the API. You can also run Hive DDL statements via the Amazon Athena Console or a Hive client on an Amazon EMR cluster. Finally, if you already have a persistent Apache Hive Metastore, you can perform a bulk import of that metadata into the AWS Glue Data Catalog by using our import script.

Anti-Patterns

AWS Glue has the following anti-patterns:

• Streaming data – AWS Glue ETL is batch oriented, and you can schedule your ETL jobs at a minimum of 5-minute intervals. While it can process micro-batches, it does not handle streaming data. If your use case requires you to ETL data while you stream it in, you can perform the first leg of your ETL using Amazon Kinesis, Amazon Kinesis Data Firehose, or Amazon Kinesis Analytics. Then store the data in either Amazon S3 or Amazon Redshift, and trigger an AWS Glue ETL job to pick up that dataset and continue applying additional transformations to that data.

• Multiple ETL engines – AWS Glue ETL jobs are PySpark based. If your use case requires you to use an engine other than Apache Spark, or if you want to run a heterogeneous set of jobs that run on a variety of engines like Hive, Pig, etc., then AWS Data Pipeline or Amazon EMR would be a better choice.

• NoSQL databases – Currently, AWS Glue does not support data sources like NoSQL databases or Amazon DynamoDB. Since NoSQL databases do not require a rigid schema like traditional relational databases, most common ETL jobs would not apply.

Amazon Machine Learning

Amazon Machine Learning (Amazon ML) is a
service that makes it easy for anyone to use predictive analytics and machine learning technology. Amazon ML provides visualization tools and wizards to guide you through the process of creating machine learning (ML) models without having to learn complex ML algorithms and technology. After your models are ready, Amazon ML makes it easy to obtain predictions for your application using API operations, without having to implement custom prediction-generation code or manage any infrastructure.

Amazon ML can create ML models based on data stored in Amazon S3, Amazon Redshift, or Amazon RDS. Built-in wizards guide you through the steps of interactively exploring your data, training the ML model, evaluating the model quality, and adjusting outputs to align with business goals. After a model is ready, you can request predictions either in batches or using the low-latency real-time API.

Ideal Usage Patterns

Amazon ML is ideal for discovering patterns in your data and using these patterns to create ML models that can generate predictions on new, unseen data points. For example, you can:

• Enable applications to flag suspicious transactions – Build an ML model that predicts whether a new transaction is legitimate or fraudulent.

• Forecast product demand – Input historical order information to predict future order quantities.

• Personalize application content – Predict which items a user will be most interested in, and retrieve these predictions from your application in real time.

• Predict user activity – Analyze user behavior to customize your website and provide a better user experience.

• Listen to social media – Ingest and analyze social media feeds that potentially impact business decisions.

Cost Model

With Amazon ML, you pay only for what you use. There are no minimum fees and no upfront commitments. Amazon ML charges an hourly rate for the compute time used to build predictive models, and then you pay for the
number of predictions generated for your application. For real-time predictions, you also pay an hourly reserved capacity charge based on the amount of memory required to run your model.

The charge for data analysis, model training, and evaluation is based on the number of compute hours required to perform them, and depends on the size of the input data, the number of attributes within it, and the number and types of transformations applied. Data analysis and model building fees are priced at $0.42 per hour. Prediction fees are categorized as batch and real-time. Batch predictions are $0.10 per 1,000 predictions, rounded up to the next 1,000, while real-time predictions are $0.0001 per prediction, rounded up to the nearest penny. For real-time predictions, there is also a reserved capacity charge of $0.001 per hour for each 10 MB of memory provisioned for your model.

During model creation, you specify the maximum memory size of each model to manage cost and control predictive performance. You pay the reserved capacity charge only while your model is enabled for real-time predictions. Charges for data stored in Amazon S3, Amazon RDS, or Amazon Redshift are billed separately. For more information, see Amazon Machine Learning Pricing.

Performance

The time it takes to create models, or to request batch predictions from these models, depends on the number of input data records, the types and distribution of attributes within these records, and the complexity of the data-processing "recipe" that you specify.

Most real-time prediction requests return a response within 100 ms, making them fast enough for interactive web, mobile, or desktop applications. The exact time it takes for the real-time API to generate a prediction varies depending on the size of the input data record and the complexity of the data-processing "recipe" associated with the ML model that is generating the predictions. Each ML model that is enabled for real-time
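The pricing components above can be combined into a rough monthly estimate. This is a sketch using the rates quoted in this section; it ignores the penny rounding on real-time predictions, and the input figures are hypothetical.

```python
import math

def ml_monthly_cost(build_hours, batch_predictions, realtime_predictions,
                    model_memory_mb, realtime_hours):
    """Rough monthly estimate from the rates quoted above:
    $0.42/hr for analysis and model building, $0.10 per 1,000 batch
    predictions (rounded up to the next 1,000), $0.0001 per real-time
    prediction, and $0.001/hr reserved capacity per 10 MB of memory."""
    build = build_hours * 0.42
    batch = math.ceil(batch_predictions / 1000) * 0.10
    realtime = realtime_predictions * 0.0001
    reserved = realtime_hours * 0.001 * (model_memory_mb / 10)
    return build + batch + realtime + reserved

# Hypothetical month: 20 build hours, 1.5M batch predictions, 100k
# real-time predictions, and a 100 MB model enabled for 730 hours.
print(round(ml_monthly_cost(20, 1_500_000, 100_000, 100, 730), 2))
```

Note how the reserved capacity term accrues whenever the model is enabled for real-time predictions, even if no predictions are requested, which is why disabling idle real-time endpoints reduces cost.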
predictions can be used to request up to 200 transactions per second by default, and this number can be increased by contacting customer support. You can monitor the number of predictions requested by your ML models by using CloudWatch metrics.

Durability and Availability

Amazon ML is designed for high availability. There are no maintenance windows or scheduled downtimes. The service runs in Amazon's proven, high-availability data centers, with service stack replication configured across three facilities in each AWS Region to provide fault tolerance in the event of a server failure or Availability Zone outage.

Scalability and Elasticity

By default, you can process data sets that are up to 100 GB in size (this can be increased with a support ticket) to create ML models or to request batch predictions. For large volumes of batch predictions, you can split your input data records into separate chunks to enable the processing of larger prediction data volumes.

By default, you can run up to five simultaneous jobs; by contacting customer service, you can have this limit raised. Because Amazon ML is a managed service, there are no servers to provision, and as a result you are able to scale as your application grows without having to over-provision or pay for resources not being used.

Interfaces

Creating a data source is as simple as adding your data to Amazon S3, or you can pull data directly from Amazon Redshift or MySQL databases managed by Amazon RDS. After your data source is defined, you can interact with Amazon ML using the console. Programmatic access to Amazon ML is enabled by the AWS SDKs and the Amazon ML API. You can also create and manage Amazon ML entities using the AWS CLI, available on Windows, Mac, and Linux/UNIX systems.

Anti-Patterns

Amazon ML has the following anti-patterns:

• Very large data sets – While Amazon ML can support up to a default of 100 GB of data (this can be increased with a support ticket),
terabyte-scale ingestion of data is not currently supported. Using Amazon EMR to run Spark's Machine Learning Library (MLlib) is a common tool for such a use case.

• Unsupported learning tasks – Amazon ML can be used to create ML models that perform binary classification (choose one of two choices and provide a measure of confidence), multiclass classification (extend choices to beyond two options), or numeric regression (predict a number directly). Unsupported ML tasks, such as sequence prediction or unsupervised clustering, can be approached by using Amazon EMR to run Spark and MLlib.

Amazon DynamoDB

Amazon DynamoDB is a fast, fully managed NoSQL database service that makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic. DynamoDB helps offload the administrative burden of operating and scaling a highly available, distributed database cluster. This storage alternative meets the latency and throughput requirements of highly demanding applications by providing single-digit millisecond latency and predictable performance with seamless throughput and storage scalability.

DynamoDB stores structured data in tables indexed by primary key, and allows low-latency read and write access to items ranging from 1 byte up to 400 KB. DynamoDB supports three data types (number, string, and binary) in both scalar and multi-valued sets. It supports document stores such as JSON, XML, or HTML in these data types. Tables do not have a fixed schema, so each data item can have a different number of attributes. The primary key can be either a single-attribute hash key or a composite hash-range key. DynamoDB offers both global and local secondary indexes, which provide additional flexibility for querying against attributes other than the primary key. DynamoDB provides both eventually consistent reads (by default) and strongly consistent reads (optional), as well as implicit item-level
transactions for item put, update, and delete; conditional operations; and increment/decrement. DynamoDB is integrated with other services, such as Amazon EMR, Amazon Redshift, AWS Data Pipeline, and Amazon S3, for analytics, data warehousing, data import/export, backup, and archive.

Ideal Usage Patterns

DynamoDB is ideal for existing or new applications that need a flexible NoSQL database with low read and write latencies, and the ability to scale storage and throughput up or down as needed without code changes or downtime. Common use cases include:

• Mobile apps
• Gaming
• Digital ad serving
• Live voting
• Audience interaction for live events
• Sensor networks
• Log ingestion
• Access control for web-based content
• Metadata storage for Amazon S3 objects
• E-commerce shopping carts
• Web session management

Many of these use cases require a highly available and scalable database, because downtime or performance degradation has an immediate negative impact on an organization's business.

Cost Model

With DynamoDB, you pay only for what you use, and there is no minimum fee. DynamoDB has three pricing components: provisioned throughput capacity (per hour), indexed data storage (per GB per month), and data transfer in or out (per GB per month). New customers can start using DynamoDB for free as part of the AWS Free Usage Tier. For more information, see Amazon DynamoDB Pricing.

Performance

SSDs and limiting indexing on attributes provide high throughput and low latency and drastically reduce the cost of read and write operations. As data sets grow, predictable performance is required so that low latency for the workloads can be maintained. This predictable performance can be achieved by defining the provisioned throughput capacity required for a given table. Behind the scenes, the service handles the provisioning of resources to achieve the requested throughput rate; you don't need to think about instances, hardware, memory, and other
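Provisioned throughput is expressed in read and write capacity units, and a rough sizing sketch follows. It assumes the commonly documented unit sizes (one write unit per 1 KB item written per second; one read unit per strongly consistent 4 KB read per second, with eventually consistent reads using half the capacity); verify against the current DynamoDB documentation before relying on it.

```python
import math

def required_capacity_units(item_size_kb, reads_per_sec, writes_per_sec,
                            strongly_consistent=True):
    """Rough provisioned-throughput sizing for a table.
    Assumes 4 KB per read capacity unit and 1 KB per write capacity
    unit; eventually consistent reads cost half the read capacity."""
    rcu_per_read = math.ceil(item_size_kb / 4)   # round up to 4 KB blocks
    wcu_per_write = math.ceil(item_size_kb / 1)  # round up to 1 KB blocks
    rcu = reads_per_sec * rcu_per_read
    if not strongly_consistent:
        rcu = math.ceil(rcu / 2)
    return {"read_units": rcu, "write_units": writes_per_sec * wcu_per_write}

# 6 KB items, 100 strongly consistent reads/sec, 10 writes/sec:
print(required_capacity_units(6, 100, 10))
```

Because capacity reservations are elastic, an application could run this kind of estimate against observed traffic and dial the table's throughput up or down as demand changes.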
factors that can affect an application's throughput rate. Provisioned throughput capacity reservations are elastic and can be increased or decreased on demand.

Durability and Availability

DynamoDB has built-in fault tolerance that automatically and synchronously replicates data across three data centers in a Region for high availability, and to help protect data against individual machine, or even facility, failures. Amazon DynamoDB Streams captures all data activity that happens on your table and allows you to set up regional replication from one geographic region to another to provide even greater availability.

Scalability and Elasticity

DynamoDB is both highly scalable and elastic. There is no limit to the amount of data that you can store in a DynamoDB table, and the service automatically allocates more storage as you store more data using the DynamoDB write API operations. Data is automatically partitioned and repartitioned as needed, while the use of SSDs provides predictable low-latency response times at any scale. The service is also elastic, in that you can simply "dial up" or "dial down" the read and write capacity of a table as your needs change.

Interfaces

DynamoDB provides a low-level REST API, as well as higher-level SDKs for Java, .NET, and PHP that wrap the low-level REST API and provide some object-relational mapping (ORM) functions. These APIs provide both a management and a data interface for DynamoDB. The API currently offers operations that enable table management (creating, listing, deleting, and obtaining metadata) and working with attributes (getting, writing, and deleting attributes; querying using an index; and full scan). While standard SQL isn't available, you can use the DynamoDB select operation to create SQL-like queries that retrieve a set of attributes based on criteria that you provide. You can also work with DynamoDB using the console.

Anti-Patterns

DynamoDB has the following
anti-patterns:

• Prewritten application tied to a traditional relational database – If you are attempting to port an existing application to the AWS Cloud and need to continue using a relational database, you can use either Amazon RDS (Amazon Aurora, MySQL, PostgreSQL, Oracle, or SQL Server) or one of the many preconfigured Amazon EC2 database AMIs. You can also install your choice of database software on an EC2 instance that you manage.

• Joins or complex transactions – While many solutions are able to leverage DynamoDB to support their users, it's possible that your application may require joins, complex transactions, and other relational infrastructure provided by traditional database platforms. If this is the case, you may want to explore Amazon Redshift, Amazon RDS, or Amazon EC2 with a self-managed database.

• Binary large object (BLOB) data – If you plan on storing large (greater than 400 KB) BLOB data, such as digital video, images, or music, you'll want to consider Amazon S3. However, DynamoDB can be used in this scenario for keeping track of metadata (e.g., item name, size, date created, owner, location, etc.) about your binary objects.

• Large data with low I/O rate – DynamoDB uses SSD drives and is optimized for workloads with a high I/O rate per GB stored. If you plan to store very large amounts of data that are infrequently accessed, other storage options, such as Amazon S3, may be a better choice.

Amazon Redshift

Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it simple and cost-effective to analyze all your data efficiently using your existing business intelligence tools. It is optimized for data sets ranging from a few hundred gigabytes to a petabyte or more, and is designed to cost less than a tenth of the cost of most traditional data warehousing solutions. Amazon Redshift delivers fast query and I/O performance for virtually any size dataset by using columnar storage
technology while parallelizing and distributing queries across multiple nodes. It automates most of the common administrative tasks associated with provisioning, configuring, monitoring, backing up, and securing a data warehouse, making it easy and inexpensive to manage and maintain. This automation allows you to build petabyte-scale data warehouses in minutes, instead of the weeks or months taken by traditional on-premises implementations.

Amazon Redshift Spectrum is a feature that enables you to run queries against exabytes of unstructured data in Amazon S3 with no loading or ETL required. When you issue a query, it goes to the Amazon Redshift SQL endpoint, which generates and optimizes a query plan. Amazon Redshift determines what data is local and what is in Amazon S3, generates a plan to minimize the amount of Amazon S3 data that needs to be read, and then requests Redshift Spectrum workers out of a shared resource pool to read and process the data from Amazon S3.

Ideal Usage Patterns

Amazon Redshift is ideal for online analytical processing (OLAP) using your existing business intelligence tools. Organizations are using Amazon Redshift to:

• Analyze global sales data for multiple products
• Store historical stock trade data
• Analyze ad impressions and clicks
• Aggregate gaming data
• Analyze social trends
• Measure clinical quality, operational efficiency, and financial performance in health care

Cost Model

An Amazon Redshift data warehouse cluster requires no long-term commitments or upfront costs. This frees you from the capital expense and complexity of planning and purchasing data warehouse capacity ahead of your needs. Charges are based on the size and number of nodes in your cluster. There is no additional charge for backup storage up to 100% of your provisioned storage. For example, if you have an active cluster with two XL nodes for a total of 4 TB of storage, AWS provides up to 4 TB of backup storage on
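The free backup allowance described above, where backup storage up to 100% of provisioned storage is free and only the excess is billed at standard Amazon S3 rates, can be sketched as:

```python
def billable_backup_gb(provisioned_gb, backup_gb):
    """Backup storage up to 100% of provisioned storage is free;
    only the excess is billed at standard Amazon S3 rates."""
    return max(0, backup_gb - provisioned_gb)

# A 4 TB (4096 GB) cluster holding 5 TB (5120 GB) of backups
# pays S3 rates for only the 1 TB (1024 GB) overage:
print(billable_backup_gb(4096, 5120))
```

In other words, a cluster whose snapshots never exceed its provisioned storage incurs no backup storage charge at all.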
Amazon S3 at no additional charge. Backup storage beyond the provisioned storage size, and backups stored after your cluster is terminated, are billed at standard Amazon S3 rates. There is no data transfer charge for communication between Amazon S3 and Amazon Redshift. For more information, see Amazon Redshift Pricing.

Performance

Amazon Redshift uses a variety of innovations to obtain very high performance on data sets ranging in size from hundreds of gigabytes to a petabyte or more. It uses columnar storage, data compression, and zone maps to reduce the amount of I/O needed to perform queries. Amazon Redshift has a massively parallel processing (MPP) architecture, parallelizing and distributing SQL operations to take advantage of all available resources. The underlying hardware is designed for high-performance data processing, using locally attached storage to maximize throughput between the CPUs and drives, and a 10 GigE mesh network to maximize throughput between nodes. Performance can be tuned based on your data warehousing needs: AWS offers Dense Compute (DC) nodes with SSD drives, as well as Dense Storage (DS) options.

Durability and Availability

Amazon Redshift automatically detects and replaces a failed node in your data warehouse cluster. The data warehouse cluster is read-only until a replacement node is provisioned and added to the DB, which typically takes only a few minutes. Amazon Redshift makes your replacement node available immediately and streams your most frequently accessed data from Amazon S3 first, to allow you to resume querying your data as quickly as possible. Additionally, your data warehouse cluster remains available in the event of a drive failure; because Amazon Redshift mirrors your data across the cluster, it uses the data from another node to rebuild failed drives. Amazon Redshift clusters reside within one Availability Zone, but if you wish to have a multi-AZ setup for Amazon Redshift, you
can set up a mirror and then self-manage replication and failover.

Scalability and Elasticity

With a few clicks in the console or an API call, you can easily change the number or type of nodes in your data warehouse as your performance or capacity needs change. Amazon Redshift enables you to start with a single 160 GB node and scale up to a petabyte or more of compressed user data using many nodes. For more information, see Clusters and Nodes in Amazon Redshift in the Amazon Redshift Management Guide.

While resizing, Amazon Redshift places your existing cluster into read-only mode, provisions a new cluster of your chosen size, and then copies data from your old cluster to your new one in parallel. During this process, you pay only for the active Amazon Redshift cluster. You can continue running queries against your old cluster while the new one is being provisioned. After your data has been copied to your new cluster, Amazon Redshift automatically redirects queries to your new cluster and removes the old cluster.

Interfaces

Amazon Redshift has custom JDBC and ODBC drivers that you can download from the Connect Client tab of the console, allowing you to use a wide range of familiar SQL clients. You can also use standard PostgreSQL JDBC and ODBC drivers. For more information about Amazon Redshift drivers, see Amazon Redshift and PostgreSQL. There are numerous examples of validated integrations with many popular BI and ETL vendors.

Loads and unloads are attempted in parallel into each compute node to maximize the rate at which you can ingest data into your data warehouse cluster, as well as to and from Amazon S3 and DynamoDB. You can easily load streaming data into Amazon Redshift using Amazon Kinesis Data Firehose, enabling near real-time analytics with the existing business intelligence tools and dashboards you're already using today. Metrics for compute utilization, memory utilization, storage utilization, and read/write
traffic to your Amazon Redshift data warehouse cluster are available free of charge via the console or CloudWatch API operations AntiPatterns Amazon Redshift has the following antipatterns: • Small data sets – Amazon Redshift is built for parallel processing across a cluster If your data set is less than a hundred gigabytes you are not going to get all the benefits that Amazon Redshift has to offer and Amazon RDS may be a better solution • Online transaction processing (OLTP) – Amazon Redshift is designed for data warehouse workloads producing extremely fast and inexpensive analytic capabilities If you require a fast transactional system you may want to choose a traditional relational database system built on Amazon RDS or a NoSQL database offering such as DynamoDB ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 33 of 56 • Unstructured data – Data in Amazon Redshift must be structured by a defined schema rather than supporting arbitrary schema structure for each row If your data is unstructured you can perform extract transform and load (ETL) on Amazon EMR to get the data ready for loading into Amazon Redshift • BLOB data – If you plan on storing large binary files (such as digital video images or music) you may want to consider storing the data in Amazon S3 and referencing its location in Amazon Redshift In this scenario Amazon Redshift keeps track of metadata (such as item name size date created owner location and so on) about your binary objects but the large objects themselves are stored in Amazon S3 Amazon Elasticsearch Service Amazon Elasticsearch Service (Amazon ES) makes it easy to deploy operate and scale Elasticsearch for log analytics full text search application monitoring and more Amazon ES is a fully manag ed service that delivers Elasticsearch’s easy touse APIs and real time capabilities along with the availability scalability and security required by production workloads The service offers built in integrations with Kibana 
Logstash, and AWS services including Amazon Kinesis Data Firehose, AWS Lambda, and Amazon CloudWatch, so that you can go from raw data to actionable insights quickly.

It's easy to get started with Amazon ES. You can set up and configure your Amazon ES domain in minutes from the AWS Management Console. Amazon ES provisions all the resources for your domain and launches it. The service automatically detects and replaces failed Elasticsearch nodes, reducing the overhead associated with self-managed infrastructure and Elasticsearch software. Amazon ES allows you to easily scale your cluster via a single API call or a few clicks in the console. With Amazon ES, you get direct access to the Elasticsearch open-source API, so code and applications you're already using with your existing Elasticsearch environments will work seamlessly.

Ideal Usage Pattern
Amazon Elasticsearch Service is ideal for querying and searching large amounts of data. Organizations can use Amazon ES to do the following:
• Analyze activity logs, e.g., logs for customer-facing applications or websites
• Analyze CloudWatch logs with Elasticsearch
• Analyze product usage data coming from various services and systems
• Analyze social media sentiments and CRM data, and find trends for your brand and products
• Analyze data stream updates from other AWS services, e.g., Amazon Kinesis Data Streams and Amazon DynamoDB
• Provide customers a rich search and navigation experience
• Monitor usage for mobile applications

Cost Model
With Amazon Elasticsearch Service, you pay only for what you use. There are no minimum fees or upfront commitments. You are charged for Amazon ES instance hours, Amazon EBS storage (if you choose this option), and standard data transfer fees. You can get started with our free tier, which provides free usage of up to 750 hours per month of a single-AZ t2.micro.elasticsearch or t2.small.elasticsearch instance and 10 GB per month of optional Amazon EBS storage (Magnetic or General Purpose).

Amazon ES allows you to add data durability through automated and manual snapshots of your cluster. Amazon ES provides storage space for automated snapshots free of charge for each Amazon Elasticsearch domain. Automated snapshots are retained for a period of 14 days. Manual snapshots are charged according to Amazon S3 storage rates. Data transfer for using the snapshots is free of charge. For more information, see Amazon Elasticsearch Service Pricing.

Performance
Performance of Amazon ES depends on multiple factors, including instance type, workload, index, number of shards used, read replicas, and storage configuration (instance storage or EBS storage, such as General Purpose SSD). Indexes are made up of shards of data, which can be distributed on different instances in multiple Availability Zones. Read replicas of the shards are maintained by Amazon ES in a different Availability Zone if zone awareness is enabled. Amazon ES can use either the fast SSD instance storage for storing indexes or multiple EBS volumes. A search engine makes heavy use of storage devices, and making disks faster results in faster query and search performance.

Durability and Availability
You can configure your Amazon ES domains for high availability by enabling the Zone Awareness option, either at domain creation time or by modifying a live domain. When Zone Awareness is enabled, Amazon ES distributes the instances supporting the domain across two different Availability Zones. Then, if you enable replicas in Elasticsearch, the instances are automatically distributed in such a way as to deliver cross-zone replication.

You can build data durability for your Amazon ES domain through automated and manual snapshots. You can use snapshots to recover your domain with preloaded data or to create a new domain with preloaded data. Snapshots are stored in Amazon S3, which is secure, durable, highly scalable object storage. By default, Amazon ES automatically creates daily snapshots of each domain. In addition, you can use the Amazon ES snapshot APIs to create additional manual snapshots. The manual snapshots are stored in Amazon S3 and can be used for cross-region disaster recovery and to provide additional durability.

Scalability and Elasticity
You can add or remove instances, and easily modify Amazon EBS volumes to accommodate data growth. You can write a few lines of code that will monitor the state of your domain through Amazon CloudWatch metrics and call the Amazon Elasticsearch Service API to scale your domain up or down based on thresholds you set. The service will execute the scaling without any downtime. Amazon Elasticsearch Service supports one EBS volume (max size of 1.5 TB) per instance associated with a domain. With the default maximum of 20 data nodes allowed per Amazon ES domain, you can allocate about 30 TB of EBS storage to a single domain. You can request a service limit increase up to 100 instances per domain by creating a case with the AWS Support Center. With 100 instances, you can allocate about 150 TB of EBS storage to a single domain.

Interfaces
Amazon ES supports many of the commonly used Elasticsearch APIs, so code, applications, and popular tools that you're already using with your current Elasticsearch environments will work seamlessly. For a full list of supported Elasticsearch operations, see our documentation. The AWS CLI, API, or the AWS Management Console can be used for creating and managing your domains as well.

Amazon ES supports integration with several AWS services, including streaming data from S3 buckets, Amazon Kinesis Data Streams, and DynamoDB Streams. These integrations use a Lambda function as an event handler in the cloud that responds to new data by processing it and streaming the data to your Amazon ES domain. Amazon ES also integrates with Amazon CloudWatch for monitoring Amazon ES domain metrics, and with AWS CloudTrail for auditing configuration API calls to Amazon ES domains. Amazon ES includes built-in integration with Kibana, an open-source analytics and visualization platform, and supports integration with Logstash, an open-source data pipeline that helps you process logs and other event data. You can set up your Amazon ES domain as the backend store for all logs coming through your Logstash implementation to easily ingest structured and unstructured data from a variety of sources.

Anti-Patterns
Amazon ES has the following anti-patterns:
• Online transaction processing (OLTP) – Amazon ES is a real-time, distributed search and analytics engine. There is no support for transactions or processing on data manipulation. If your requirement is for a fast transactional system, then a traditional relational database system built on Amazon RDS, or a NoSQL database offering functionality such as DynamoDB, is a better choice.
• Ad hoc data querying – While Amazon ES takes care of the operational overhead of building a highly scalable Elasticsearch cluster, if running ad hoc or one-off queries against your data set is your use case, Amazon Athena is a better choice. Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL, without provisioning servers.

Amazon QuickSight
Amazon QuickSight is a very fast, easy-to-use, cloud-powered business analytics service that makes it easy for all employees within an organization to build visualizations, perform ad hoc analysis, and quickly get business insights from their data, anytime, on any device. It can connect to a wide variety of data sources, including flat files (e.g., CSV and Excel), on-premises databases (including SQL Server, MySQL, and PostgreSQL), and AWS resources like Amazon RDS databases, Amazon Redshift, Amazon Athena, and Amazon S3. Amazon QuickSight enables organizations to scale
their business analytics capabilities to hundreds of thousands of users, and delivers fast and responsive query performance by using a robust in-memory engine (SPICE).

Amazon QuickSight is built with "SPICE" – a Super-fast, Parallel, In-memory Calculation Engine. Built from the ground up for the cloud, SPICE uses a combination of columnar storage, in-memory technologies enabled through the latest hardware innovations, and machine code generation to run interactive queries on large datasets and get rapid responses. SPICE supports rich calculations to help you derive valuable insights from your analysis without worrying about provisioning or managing infrastructure. Data in SPICE is persisted until it is explicitly deleted by the user. SPICE also automatically replicates data for high availability and enables Amazon QuickSight to scale to hundreds of thousands of users, who can all simultaneously perform fast, interactive analysis across a wide variety of AWS data sources.

Ideal Usage Patterns
Amazon QuickSight is an ideal Business Intelligence tool, allowing end users to create visualizations that provide insight into their data to help them make better business decisions. Amazon QuickSight can be used to do the following:
• Quick, interactive ad hoc exploration and optimized visualization of data
• Create and share dashboards and KPIs to provide insight into your data
• Create Stories, which are guided tours through specific views of an analysis; they allow you to share insights and collaborate with others, and are used to convey key points, a thought process, or the evolution of an analysis
• Analyze and visualize data coming from logs and stored in S3
• Analyze and visualize data from on-premises databases like SQL Server, Oracle, PostgreSQL, and MySQL
• Analyze and visualize data in various AWS resources, e.g., Amazon RDS databases, Amazon Redshift, Amazon Athena, and Amazon S3
• Analyze and visualize data in software as a service (SaaS) applications like Salesforce
• Analyze and visualize data in data sources that can be connected to using a JDBC/ODBC connection

Cost Model
Amazon QuickSight has two different editions for pricing: Standard edition and Enterprise edition. With an annual subscription, it is $9/user/month for Standard edition and $18/user/month for Enterprise edition, both with 10 GB of SPICE capacity included. You can get additional SPICE capacity for $0.25/GB/month for Standard edition and $0.38/GB/month for Enterprise edition. There is also a month-to-month option for both editions: $12/user/month for Standard edition and $24/user/month for Enterprise edition. Additional information on pricing can be found at Amazon QuickSight Pricing.

Both editions offer a full set of features for creating and sharing data visualizations. Enterprise edition also offers encryption at rest and Microsoft Active Directory (AD) integration. In Enterprise edition, you select a Microsoft AD directory in AWS Directory Service, and you use that directory to identify and manage your Amazon QuickSight users and administrators.

Performance
Amazon QuickSight is built with "SPICE", a Super-fast, Parallel, In-memory Calculation Engine. Built from the ground up for the cloud, SPICE uses a combination of columnar storage, in-memory technologies enabled through the latest hardware innovations, and machine code generation to run interactive queries on large datasets and get rapid responses.

Durability and Availability
SPICE automatically replicates data for high availability and enables Amazon QuickSight to scale to hundreds of thousands of users, who can all simultaneously perform fast, interactive analysis across a wide variety of AWS data sources.

Scalability and Elasticity
Amazon QuickSight is a fully managed service, and it internally takes care of scaling to meet the demands of your end users. With Amazon QuickSight, you don't need to worry about scale. You can seamlessly grow your data from a few hundred megabytes to many terabytes without managing any infrastructure.

Interfaces
Amazon QuickSight can connect to a wide variety of data sources, including flat files (CSV, TSV, CLF, ELF); on-premises databases like SQL Server, MySQL, and PostgreSQL; AWS data sources including Amazon RDS, Amazon Aurora, Amazon Redshift, Amazon Athena, and Amazon S3; and SaaS applications like Salesforce. You can also export analyses from a visual to a file in CSV format. You can share an analysis, dashboard, or story using the share icon from the Amazon QuickSight service interface. You will be able to select the recipients (email address, username, or group name), permission levels, and other options before sharing the content with others.

Anti-Patterns
• Highly formatted canned reports – Amazon QuickSight is much more suited for ad hoc query, analysis, and visualization of data. For highly formatted reports, e.g., formatted financial statements, consider using a different tool.
• ETL – While Amazon QuickSight can perform some transformations, it is not a full-fledged ETL tool. AWS offers AWS Glue, a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.

Amazon EC2
Amazon EC2, with instances acting as AWS virtual machines, provides an ideal platform for operating your own self-managed big data analytics applications on AWS infrastructure. Almost any software you can install on Linux or Windows virtualized environments can be run on Amazon EC2, with a pay-as-you-go pricing model. What you don't get are the application-level managed services that come with the other services mentioned in this whitepaper. There are many options for self-managed big data analytics; here are some examples:
• A NoSQL offering such as MongoDB
• A data warehouse or columnar store like Vertica
• A Hadoop cluster
• An Apache Storm cluster
• An Apache Kafka environment

Ideal Usage Patterns
• Specialized environment – When running a custom application, a variation of a standard Hadoop set, or an application not covered by one of our other offerings, Amazon EC2 provides the flexibility and scalability to meet your computing needs.
• Compliance requirements – Certain compliance requirements may require you to run applications yourself on Amazon EC2 instead of using a managed service offering.

Cost Model
Amazon EC2 has a variety of instance types in a number of instance families (standard, high CPU, high memory, high I/O, etc.) and different pricing options (On-Demand, Reserved, and Spot). Depending on your application requirements, you may want to use additional services along with Amazon EC2, such as Amazon Elastic Block Store (Amazon EBS) for directly attached persistent storage, or Amazon S3 as a durable object store; each comes with its own pricing model. If you do run your big data application on Amazon EC2, you are responsible for any license fees, just as you would be in your own data center. The AWS Marketplace offers many different third-party big data software packages preconfigured to launch with a simple click of a button.

Performance
Performance in Amazon EC2 is driven by the instance type that you choose for your big data platform. Each instance type has a different amount of CPU, RAM, storage, IOPS, and networking capability, so that you can pick the right performance level for your application requirements.

Durability and Availability
Critical applications should be run in a cluster across multiple Availability Zones within an AWS Region, so that any instance or data center failure does not affect application users. For applications that are not uptime-critical, you can back up your application to Amazon S3 and restore it to any Availability Zone in the region if an instance or zone failure occurs. Other options exist, depending on which application you are running and your requirements, such as mirroring your application.

Scalability and Elasticity
Auto Scaling is a service that allows you to automatically scale your Amazon EC2 capacity up or down according to conditions that you define. With Auto Scaling, you can ensure that the number of EC2 instances you're using scales up seamlessly during demand spikes to maintain performance, and scales down automatically during demand lulls to minimize costs. Auto Scaling is particularly well suited for applications that experience hourly, daily, or weekly variability in usage. Auto Scaling is enabled by CloudWatch and available at no additional charge beyond CloudWatch fees.

Interfaces
Amazon EC2 can be managed programmatically via API, SDK, or the console. Metrics for compute utilization, memory utilization, storage utilization, network consumption, and read/write traffic to your instances are free of charge using the console or CloudWatch API operations. The interfaces for the big data analytics software that you run on top of Amazon EC2 vary based on the characteristics of the software you choose.

Anti-Patterns
Amazon EC2 has the following anti-patterns:
• Managed service – If your requirement is a managed service offering where you abstract the infrastructure layer and administration from the big data analytics, then this "do it yourself" model of managing your own analytics software on Amazon EC2 may not be the correct choice.
• Lack of expertise or resources – If your organization does not have, or does not want to expend, the resources or expertise to install and manage a high-availability installation for the system in question, you should consider using the AWS equivalent, such as Amazon EMR, DynamoDB, Amazon Kinesis Data Streams, or Amazon Redshift.

Amazon Athena
Amazon Athena is an interactive query service that makes it easy
to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to set up or manage, and you can start analyzing data immediately. You don't need to load your data into Athena, as it works directly with data stored in S3. Just log in to the Athena console, define your table schema, and start querying. Amazon Athena uses Presto with full ANSI SQL support and works with a variety of standard data formats, including CSV, JSON, ORC, Apache Parquet, and Apache Avro.

Ideal Usage Patterns
• Interactive ad hoc querying for web logs – Athena is a good tool for interactive, one-time SQL queries against data on Amazon S3. For example, you could use Athena to run a query on web and application logs to troubleshoot a performance issue. You simply define a table for your data and start querying using standard SQL. Athena integrates with Amazon QuickSight for easy visualization.
• Querying staging data before loading into Redshift – You can stage your raw data in S3 before processing and loading it into Redshift, and then use Athena to query that data.
• Sending AWS service logs to S3 for analysis with Athena – CloudTrail, CloudFront, ELB/ALB, and VPC Flow Logs can be analyzed with Athena. AWS CloudTrail logs include details about any API calls made to your AWS services, including from the console. CloudFront logs can be used to explore users' surfing patterns across web properties served by CloudFront. Querying ELB/ALB logs allows you to see the source of traffic, latency, and bytes transferred to and from Elastic Load Balancing instances and backend applications. VPC Flow Logs capture information about the IP traffic going to and from network interfaces in VPCs in the Amazon VPC service; the logs allow you to investigate network traffic patterns and identify threats and risks across your VPC estate.
• Building interactive analytical solutions with notebook-based tools, e.g., RStudio, Jupyter, or Zeppelin – Data scientists and analysts are often concerned about managing the infrastructure behind big data platforms while running notebook-based solutions such as RStudio, Jupyter, and Zeppelin. Amazon Athena makes it easy to analyze data using standard SQL without the need to manage infrastructure. Integrating these notebook-based solutions with Amazon Athena gives data scientists a powerful platform for building interactive analytical solutions.

Cost Model
Amazon Athena has simple pay-as-you-go pricing, with no upfront costs or minimum fees; you pay only for the resources you consume. It is priced per query, at $5 per TB of data scanned, charged based on the amount of data scanned by the query. You can save from 30% to 90% on your per-query costs, and get better performance, by compressing, partitioning, and converting your data into columnar formats. Converting data to a columnar format allows Athena to read only the columns it needs to process the query. You are charged for the number of bytes scanned by Amazon Athena, rounded up to the nearest megabyte, with a 10 MB minimum per query. There are no charges for Data Definition Language (DDL) statements like CREATE/ALTER/DROP TABLE, statements for managing partitions, or failed queries. Cancelled queries are charged based on the amount of data scanned.

Performance
You can improve the performance of your query by compressing, partitioning, and converting your data into columnar formats. Amazon Athena supports open-source columnar data formats such as Apache Parquet and Apache ORC. Converting your data into a compressed, columnar format lowers your cost and improves query performance by enabling Athena to scan less data from S3 when executing your query.

Durability and Availability
Amazon Athena is highly available and executes queries using compute resources across multiple facilities, automatically routing queries appropriately if a particular facility is unreachable. Athena uses Amazon S3 as its underlying data store, making your data highly available and durable. Amazon S3 provides durable infrastructure to store important data and is designed for durability of 99.999999999% of objects. Your data is redundantly stored across multiple facilities and multiple devices in each facility.

Scalability and Elasticity
Athena is serverless, so there is no infrastructure to set up or manage, and you can start analyzing data immediately. Since it is serverless, it can scale automatically, as needed.

Security, Authorization, and Encryption
Amazon Athena allows you to control access to your data by using AWS Identity and Access Management (IAM) policies, Access Control Lists (ACLs), and Amazon S3 bucket policies. With IAM policies, you can grant IAM users fine-grained control over your S3 buckets. By controlling access to data in S3, you can restrict users from querying it using Athena. You can query data that's been protected by:
• Server-side encryption with an Amazon S3-managed key
• Server-side encryption with an AWS KMS-managed key
• Client-side encryption with an AWS KMS-managed key
Amazon Athena can also directly integrate with AWS Key Management Service (KMS) to encrypt your result sets, if desired.

Interfaces
Querying can be done by using the Athena console. Athena also supports the CLI, API via SDK, and JDBC. Athena also integrates with Amazon QuickSight for creating visualizations based on Athena queries.

Anti-Patterns
Amazon Athena has the following anti-patterns:
• Enterprise reporting and business intelligence workloads – Amazon Redshift is a better tool for enterprise reporting and business intelligence workloads involving iceberg queries or cached data at the nodes. Data warehouses pull data from many sources, format and organize it, store it, and support complex, high-speed queries that produce business reports. The query engine in Amazon Redshift has been optimized to perform especially well on data warehouse workloads.
• ETL workloads – You should use Amazon EMR or AWS Glue if you are looking for an ETL tool to process extremely large datasets and analyze them with the latest big data processing frameworks, such as Spark, Hadoop, Presto, or HBase.
• RDBMS – Athena is not a relational/transactional database. It is not meant to be a replacement for SQL engines like MySQL.

Solving Big Data Problems on AWS
In this whitepaper, we have examined some tools available on AWS for big data analytics. This paper provides a good reference point when starting to design your big data applications. However, there are additional aspects you should consider when selecting the right tools for your specific use case. In general, each analytical workload has certain characteristics and requirements that dictate which tool to use, such as:
• How quickly do you need analytic results: in real time, in seconds, or is an hour a more appropriate time frame?
• How much value will these analytics provide your organization, and what budget constraints exist?
• How large is the data, and what is its growth rate?
• How is the data structured?
• What integration capabilities do the producers and consumers have?
• How much latency is acceptable between the producers and consumers?
• What is the cost of downtime, or how available and durable does the solution need to be?
• Is the analytic workload consistent or elastic?
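As a purely illustrative sketch (not an official AWS decision tree), the questions above can be encoded as a small triage helper. The `Workload` fields and the service suggestions below are assumptions chosen to mirror the anti-pattern guidance in this paper:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    latency: str          # "realtime", "seconds", or "batch"
    data_size_gb: float   # approximate working-set size
    structured: bool      # defined schema vs. free-form data
    transactional: bool   # OLTP-style updates required

def suggest_service(w: Workload) -> str:
    """Map workload characteristics to a candidate AWS service,
    following the anti-pattern guidance in this paper."""
    if w.transactional:
        # Redshift, Athena, and Amazon ES are anti-patterns for OLTP.
        return "Amazon RDS or DynamoDB"
    if w.latency == "realtime":
        return "Amazon Kinesis Data Streams + Amazon ES"
    if not w.structured:
        # Unstructured data needs ETL (e.g., on EMR) before warehousing.
        return "Amazon EMR (ETL) then load into Amazon Redshift"
    if w.data_size_gb < 100:
        # Small data sets don't benefit from Redshift's parallelism.
        return "Amazon RDS"
    if w.latency == "seconds":
        return "Amazon Athena (ad hoc) or Amazon Redshift (reporting)"
    return "Amazon Redshift"

small_set = suggest_service(Workload("seconds", 50, True, False))   # → "Amazon RDS"
warehouse = suggest_service(Workload("batch", 5000, True, False))   # → "Amazon Redshift"
```

In practice, of course, the answers interact (as the following paragraphs discuss), so a real design review weighs these factors together rather than applying one rule at a time.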
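The budget question can also be made concrete. As a sketch, the Athena cost rules stated earlier in this paper (priced per query at $5 per TB scanned, rounded up to the nearest megabyte, with a 10 MB minimum) reduce to a few lines of arithmetic; the function name is hypothetical:

```python
import math

TB = 1024**4  # Athena bills per TB of data scanned

def athena_query_cost(bytes_scanned: int, price_per_tb: float = 5.0) -> float:
    """Estimate the cost of one Athena query: bytes scanned are rounded
    up to the nearest megabyte, with a 10 MB minimum per query."""
    mb_scanned = max(10, math.ceil(bytes_scanned / 1024**2))
    return mb_scanned * 1024**2 / TB * price_per_tb

# Scanning a full 1 TB data set costs $5.00, while a columnar layout that
# lets Athena read only ~10% of the bytes cuts the cost proportionally.
full_scan = athena_query_cost(TB)
columnar_scan = athena_query_cost(TB // 10)
```

This is why the paper's advice to compress, partition, and convert data to columnar formats translates directly into per-query savings.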
Each one of these questions helps guide you to the right tool In some cases you can simply map your big data analytics workload into one of the services based on a set of requirements However in most realworld big data analytic workloads there are many different and sometimes conflicting characteristics and requirements on the same data set For example some result sets may have realtime requirements as a user interacts with a system while other analytics could be batched and run on a daily basis These different requirements over the same data set should be decoupled and solved by using more than one tool If you try to solve both of these examples using the same toolset you end up either over provisioning or therefore overpaying for unnecessary response time or you have a solution that does not respond fast enough to your users in real time Matching the best suited tool to each analytical problem results in the most cost effective use of your compute and storage resources Big data doesn’t need to mean “big costs” So when designing your applications it’s important to make sure that your design is cost efficient If it’s not relative to the alternatives then it’s probably not the right design Another common misconception is that using multiple tool sets to solve a big data problem is more expensive or harder to manage than using one big tool If you take the same example of two different requirements on the same data set the realtime request may be low on CPU but high on I/O while the slower processing request may be very compute intensive Decoupling can end up being much less expensive and easier to manage because you can build each tool to exact specification s and not overprovision With the AWS payasyougo model this equates to a much better value because you could run the batch analytics in just one hour and therefore only pay for the compute resources for that hour Also you may find this approach easier to manage rather than leveraging a single system that tries to 
meet all of the requirements Solving for different requirements with one tool results in ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 47 of 56 attempting to fit a square peg (real time requests) into a round hole (a large data warehouse) The AWS platform makes it easy to decouple your architecture by having different tools analyze the same data set AWS services have built in integration so that moving a subset of data from one tool to another can be done very easily and quickl y using parallelization Let’s put this into practice by exploring a few real world big data analytics problem scenarios and walk ing through an AWS architectural solution Example 1: Queries against an Amazon S3 Data Lake Data lakes are an increasingly popular way to store and analyze both structured and unstructured data If you use an Amazon S3 data lake AWS Glue can make all your data immediately available for analytics without moving the data AWS Glue crawlers can sca n your data lake and keep the AWS Glue Data Catalog in sync with the underlying data You can then directly query your data lake with Amazon Athena and Amazon Redshift Spectrum You can also use the AWS Glue Data Catalog as your external Apache Hive Metast ore for big data applications running on Amazon EMR ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 48 of 56 1 An AWS Glue crawler connects to a data store progresses through a prioritized list of classifiers to extract the schema of your data and other statistics and then populates the AWS Glue Data Catalog with this metadata Crawlers can run periodically to de tect the availability of new data as well as changes to existing data including table definition changes Crawlers automatically add new tables new partitions to existing table and new versions of table definitions You can customize AWS Glue crawlers t o classify your own file types 2 The AWS Glue Data Catalog is a central repository to store structural and operational 
metadata for all your data assets For a given data set you can store its table definition physical location add business relevant attributes as well as track how this data has changed over time The AWS Glue Data Catalog is Apache Hive Metastore compatible and is a drop in replacement for the Apache Hive Metastore for Big Data applications running on Amazon EMR For more information on setting up your EMR cluster to use AWS Glue Data Catalog as an Apache Hive Metastore click here 3 The AWS Glue Data Catalog also provides out ofbox integration with Amazon Athena Amazon EMR and Amazon Redshift Spectrum Once you add your table definitions to the AWS Glue Data Catalog they are available for ETL and also readily available for querying in Amazon ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 49 of 56 Athena Amazon EMR and Amazon Redshift Spectrum so that you can have a common view of your data between these services 4 Using a BI tool like Amazon QuickSight enables you to easily build visualizations perform ad hoc analysis and quickly get business insights from your data Amazon QuickSight supports data so urces like: Amazon Athena Amazon Redshift Spectrum Amazon S3 and many others see here: Supported Data Sources Example 2: Capturing and Analyzing Sensor Data An international air conditioner manufacturer has many large air conditioners that it sells to various commercial and industrial companies Not only do they sell the air conditioner units but to better position themselves against their competitors they also offer addon services where you can see realtime dashboards in a mobile app or a web browser Each unit sends its sensor information for processing and analysis This data is used by the manufacturer and its customers With this capability the manufacturer can visualize the dataset and spot trends Currently they have a few thousand prepurchased air conditioning (A/C) units with this capability They expect to deliver these to customers in the next 
couple of months, and are hoping that in time thousands of units throughout the world will be using this platform. If successful, they would like to expand this offering to their consumer line as well, with a much larger volume and a greater market share. The solution needs to be able to handle massive amounts of data and scale as they grow their business without interruption.

How should you design such a system? First, break it up into two work streams, both originating from the same data:

• The A/C unit's current information, with near real-time requirements and a large number of customers consuming this information
• All historical information on the A/C units, to run trending and analytics for internal use

The data flow architecture in the following illustration shows how to solve this big data problem.

Capturing and Analyzing Sensor Data

1. The process begins with each A/C unit providing a constant data stream to Amazon Kinesis Data Streams. This provides an elastic and durable interface the units can talk to that can be scaled seamlessly as more and more A/C units are sold and brought online.

2. Using the Amazon Kinesis Data Streams-provided tools, such as the Kinesis Client Library or SDK, a simple application is built on Amazon EC2 to read data as it comes into Amazon Kinesis Data Streams, analyze it, and determine if the data warrants an update to the real-time dashboard. It looks for changes in system operation, temperature fluctuations, and any errors that the units encounter.

3. This data flow needs to occur in near real time so that customers and maintenance teams can be alerted as quickly as possible if there is an issue with the unit. The data in the dashboard does have some aggregated trend information, but it is mainly the current state as well as any system errors, so the data needed to populate the dashboard is relatively small. Additionally, there will be lots of potential access to this data from the following sources:

o Customers checking on their system via a mobile device or browser
o Maintenance teams checking the status of their fleet
o Data and intelligence algorithms and analytics in the reporting platform that spot trends, which can then be sent out as alerts, such as if the A/C fan has been running unusually long with the building temperature not going down

DynamoDB was chosen to store this near real-time data set because it is both highly available and scalable; throughput to this data can be easily scaled up or down to meet the needs of its consumers as the platform is adopted and usage grows.

4. The reporting dashboard is a custom web application that is built on top of this data set and run on Amazon EC2. It provides content based on the system status and trends, as well as alerting customers and maintenance crews of any issues that may come up with the unit.

5. The customer accesses the data from a mobile device or a web browser to get the current status of the system and visualize historical trends.

The data flow (steps 2–5) just described is built for near real-time reporting of information to human consumers. It is built and designed for low latency and can scale very quickly to meet demand. The data flow (steps 6–9) depicted in the lower part of the diagram does not have such stringent speed and latency requirements. This allows the architect to design a different solution stack that can hold larger amounts of data at a much smaller cost per byte of information and choose less expensive compute and storage resources.

6. To read from the Amazon Kinesis stream, there is a separate Kinesis-enabled application that probably runs on a smaller EC2 instance that scales at a slower rate. While this application is going to analyze the same data set as the upper data flow, the ultimate purpose of this data is to store it for long-term record and to host the data set in a data warehouse. This data set ends up being all data sent from the systems and allows a much broader set of analytics to be performed without the near real-time requirements.

7. The data is transformed by the Kinesis-enabled application into a format that is suitable for long-term storage, for loading into its data warehouse, and for storing on Amazon S3. The data on Amazon S3 not only serves as a parallel ingestion point to Amazon Redshift, but is durable storage that will hold all data that ever runs through this system; it can be the single source of truth. It can be used to load other analytical tools if additional requirements arise. Amazon S3 also comes with native integration with Amazon Glacier if any data needs to be cycled into long-term, low-cost storage.

8. Amazon Redshift is again used as the data warehouse for the larger data set. It can scale easily when the data set grows larger by adding another node to the cluster.

9. For visualizing the analytics, one of the many partner visualization platforms can be used via an ODBC/JDBC connection to Amazon Redshift. This is where the reports, graphs, and ad hoc analytics can be performed on the data set to find certain variables and trends that can lead to A/C units underperforming or breaking.

This architecture can start off small and grow as needed. Additionally, by decoupling the two different work streams from each other, they can grow at their own rate without upfront commitment, allowing the manufacturer to assess the viability of this new offering without a large initial investment. You could easily imagine further additions, such as adding Amazon ML to predict how long an A/C unit will last and preemptively sending out maintenance teams based on its prediction algorithms to give their customers the best possible service and experience. This level of service would be a differentiator to the competition and lead to increased future sales.
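The fan-out between the two work streams (steps 2 and 6) can be made concrete with a minimal, illustrative Python sketch. An in-memory dict stands in for the DynamoDB current-state table, a list stands in for the S3 archive, and all record fields and thresholds are assumptions, not part of the reference architecture:

```python
# Minimal sketch of the two work streams: every record goes to the long-term
# archive (S3 stand-in), while the dashboard store (DynamoDB stand-in) keeps
# only the latest state per unit and flags conditions that warrant an alert.
from typing import Dict, List

def process_stream(records: List[dict],
                   dashboard: Dict[str, dict],
                   archive: List[dict],
                   error_threshold_f: float = 90.0) -> List[str]:
    """Route each sensor record to both work streams; return alerted unit IDs."""
    alerts = []
    for rec in records:
        archive.append(rec)  # lower data flow: keep everything for the warehouse
        unit = rec["unit_id"]
        dashboard[unit] = {"temp_f": rec["temp_f"], "error": rec.get("error")}
        # Alert when the unit reports an error or an abnormal temperature
        if rec.get("error") or rec["temp_f"] > error_threshold_f:
            alerts.append(unit)
    return alerts

dashboard, archive = {}, []
readings = [
    {"unit_id": "ac-001", "temp_f": 72.5},
    {"unit_id": "ac-002", "temp_f": 95.2},             # too hot -> alert
    {"unit_id": "ac-001", "temp_f": 73.0, "error": "FAN_STALL"},
]
alerts = process_stream(readings, dashboard, archive)
print(alerts)            # ['ac-002', 'ac-001']
print(len(archive))      # 3: every record is retained for batch analytics
```

The point of the sketch is the decoupling: the dashboard path holds a small, current-state data set, while the archive path retains every record for later warehouse loading.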
Example 3: Sentiment Analysis of Social Media

A large toy maker has been growing very quickly and expanding their product line. After each new toy release, the company wants to understand how consumers are enjoying and using their products. Additionally, the company wants to ensure that their consumers are having a good experience with their products. As the toy ecosystem grows, the company wants to ensure that their products are still relevant to their customers and that they can plan future roadmap items based on customer feedback.

The company wants to capture the following insights from social media:

• Understand how consumers are using their products
• Ensure customer satisfaction
• Plan future roadmaps

Capturing the data from various social networks is relatively easy, but the challenge is building the intelligence programmatically. After the data is ingested, the company wants to be able to analyze and classify the data in a cost-effective and programmatic way. To do this, you can use the architecture in the following illustration.

Sentiment Analysis of Social Media

The first step is to decide which social media sites to listen to. Then create an application on Amazon EC2 that polls those sites using their corresponding APIs. Next, create an Amazon Kinesis stream, because we might have multiple data sources: Twitter, Tumblr, and so on. This way, a new stream can be created each time a new data source is added, and you can take advantage of the existing application code and architecture. In this example, a new Amazon Kinesis stream is created to copy the raw data to Amazon S3 as well.

For archival, long-term analysis, and historical reference, raw data is stored in Amazon S3. Additional Amazon ML batch models can be run on the data in Amazon S3 to perform predictive analysis and track consumer buying trends.
As noted in the architecture diagram, Lambda is used for processing and normalizing the data and requesting predictions from Amazon ML. After the Amazon ML prediction is returned, the Lambda function can take actions based on the prediction – for example, to route a social media post to the customer service team for further review.

Amazon ML is used to make predictions on the input data. For example, an ML model can be built to analyze a social media comment to determine whether the customer expressed negative sentiment about a product. To get accurate predictions with Amazon ML, start with training data and ensure that your ML models are working properly. If you are creating ML models for the first time, see Tutorial: Using Amazon ML to Predict Responses to a Marketing Offer. As mentioned earlier, if multiple social network data sources are used, then a different ML model for each one is suggested to ensure prediction accuracy.

Finally, actionable data is sent to Amazon SNS using Lambda, and delivered to the proper resources by text message or email for further investigation.

As part of the sentiment analysis, creating an Amazon ML model that is updated regularly is imperative for accurate results. Additional metrics about a specific model can be graphically displayed via the console, such as accuracy, false positive rate, precision, and recall. For more information, see Step 4: Review the ML Model Predictive Performance and Set a Cut-Off.

By using a combination of Amazon Kinesis Data Streams, Lambda, Amazon ML, and Amazon SNS, we have created a scalable and easily customizable social listening platform. Note that this scenario does not describe creating an Amazon ML model. You would create the model initially and then need to update it periodically, or as workloads change, to keep it accurate.

Conclusion

As more and more data is generated and collected, data analysis requires scalable, flexible, and high-performing tools to provide insights in a timely fashion. However, organizations are facing a growing big data ecosystem where
new tools emerge and “die” very quickly. Therefore, it can be very difficult to keep pace and choose the right tools. This whitepaper offers a first step to help you solve this challenge. With a broad set of managed services to collect, process, and analyze big data, the AWS platform makes it easier to build, deploy, and scale big data applications. This allows you to focus on business problems instead of updating and managing these tools.

AWS provides many solutions to address your big data analytic requirements. Most big data architecture solutions use multiple AWS tools to build a complete solution. This approach helps meet stringent business requirements in the most cost-optimized, performant, and resilient way possible. The result is a flexible big data architecture that is able to scale along with your business.

Contributors

The following individuals and organizations contributed to this document:

• Erik Swensson, Manager, Solutions Architecture, Amazon Web Services
• Erick Dame, Solutions Architect, Amazon Web Services
• Shree Kenghe, Solutions Architect, Amazon Web Services

Further Reading

The following resources can help you get started in running big data analytics on AWS:

• Big Data on AWS – View the comprehensive portfolio of big data services, as well as links to other resources such as AWS big data partners, tutorials, articles, and AWS Marketplace offerings on big data solutions. Contact us if you need any help.
• Read the AWS Big Data Blog – The blog features real-life examples and ideas, updated regularly, to help you collect, store, clean, process, and visualize big data.
• Try one of the Big Data Test Drives – Explore the rich ecosystem of products designed to address big data challenges using AWS. Test Drives are developed by AWS Partner Network (APN) Consulting and Technology partners and are provided free of charge for education, demonstration, and evaluation purposes.
• Take an AWS training course on big data – The
Big Data on AWS course introduces you to cloud-based big data solutions and Amazon EMR. We show you how to use Amazon EMR to process data using the broad ecosystem of Hadoop tools like Pig and Hive. We also teach you how to create big data environments, work with DynamoDB and Amazon Redshift, understand the benefits of Amazon Kinesis Streams, and leverage best practices to design big data environments for security and cost-effectiveness.
• View the Big Data Customer Case Studies – Learn from the experience of other customers who have built powerful and successful big data platforms on the AWS cloud.

Document Revisions

Date | Description
December 2018 | Revised to add information on Amazon Athena, AWS QuickSight, AWS Glue, and general updates throughout
January 2016 | Revised to add information on Amazon Machine Learning, AWS Lambda, Amazon Elasticsearch Service; general update
December 2014 | First publication

Blue/Green Deployments on AWS

First Published August 1, 2016
Updated September 29, 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Abstract 1
Introduction 2
Blue/Green deployment methodology 2
Benefits of blue/green 3
Define the environment boundary 4
Services for blue/green deployments 5
Amazon Route 53 5
Elastic Load Balancing 5
Auto Scaling 6
AWS Elastic Beanstalk 6
AWS OpsWorks 6
AWS CloudFormation 6
Amazon CloudWatch 7
AWS CodeDeploy 7
Implementation techniques 8
Update DNS Routing with Amazon Route 53 8
Swap the Auto Scaling Group behind the Elastic Load Balancer 10
Update Auto Scaling Group launch configurations 13
Swap the environment of an Elastic Beanstalk application 16
Clone a Stack in AWS OpsWorks and Update DNS 19
Best Practices for Managing Data Synchronization and Schema Changes 22
Decoupling Schema Changes from Code Changes 22
When blue/green deployments are not recommended 23
Conclusion 26
Contributors 27
Document revisions 27
Appendix 28
Comparison of Blue Green Deployment Techniques 28

Abstract

The blue/green deployment technique enables you to release applications by shifting traffic between two identical environments that are running different versions of the application. Blue/green deployments can mitigate common risks associated with deploying software, such as downtime and rollback capability. This whitepaper provides an overview of the blue/green deployment methodology and describes techniques customers can implement using Amazon Web Services (AWS) services and tools. It also addresses considerations around the data tier, which is an important component of most applications.

Introduction

In a traditional approach to application deployment, you typically fix a failed deployment by redeploying an earlier, stable version of the application. Redeployment in traditional data centers is typically done on the same set of resources due to the cost and effort of provisioning additional resources. Although this approach works, it has many shortcomings. Rollback isn't easy, because it's implemented by redeployment of an earlier version from scratch. This process takes
time, making the application potentially unavailable for long periods. Even in situations where the application is only impaired, a rollback is required, which overwrites the faulty version. As a result, you have no opportunity to debug the faulty application in place.

Applying the principles of agility, scalability, and utility consumption, as well as the automation capabilities of Amazon Web Services, can shift the paradigm of application deployment. This enables a better deployment technique called blue/green deployment.

Blue/Green deployment methodology

Blue/green deployments provide releases with near-zero downtime and rollback capabilities. The fundamental idea behind blue/green deployment is to shift traffic between two identical environments that are running different versions of your application. The blue environment represents the current application version serving production traffic. In parallel, the green environment is staged, running a different version of your application. After the green environment is ready and tested, production traffic is redirected from blue to green. If any problems are identified, you can roll back by reverting traffic back to the blue environment.

Blue/green example

Although blue/green deployment isn't a new concept, you don't commonly see it used in traditional on-premises hosted environments due to the cost and effort required to provision additional resources. The advent of cloud computing dramatically changes how easy and cost effective it is to adopt the blue/green approach for deploying software.

Benefits of blue/green

Traditional deployments with in-place upgrades make it difficult to validate your new application version in a production deployment while also continuing to run the earlier version of the application. Blue/green deployments provide a level of isolation between your blue and green application environments. This helps ensure spinning up a parallel green environment does not affect resources underpinning your blue environment. This isolation reduces your deployment risk.

After you deploy the green environment, you have the opportunity to validate it. You might do that with test traffic before sending production traffic to the green environment, or by using a very small fraction of production traffic to better reflect real user traffic. This is called canary analysis or canary testing. If you discover the green environment is not operating as expected, there is no impact on the blue environment. You can route traffic back to it, minimizing impaired operation or downtime and limiting the blast radius of impact.

This ability to simply roll traffic back to the operational environment is a key benefit of blue/green deployments. You can roll back to the blue environment at any time during the deployment process. Impaired operation or downtime is minimized, because impact is limited to the window of time between green environment issue detection and shift of traffic back to the blue environment. Additionally, impact is limited to the portion of traffic going to the green environment, not all traffic. If the blast radius of deployment errors is reduced, so is the overall deployment risk.

Blue/green deployments also work well with continuous integration and continuous deployment (CI/CD) workflows, in many cases limiting their complexity. Your deployment automation has to consider fewer dependencies on an existing environment state or configuration, as your new green environment gets launched onto an entirely new set of resources.

Blue/green deployments conducted in AWS also provide cost optimization benefits. You're not tied to the same underlying resources, so if the performance envelope of the application changes from one version to another, you simply launch the new environment with optimized resources, whether that means fewer resources or just different compute resources.
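The cost point above can be sketched with a small, illustrative calculation: as the traffic weight moves from blue to green, each fleet only needs capacity proportional to the traffic it actually serves. The requests-per-second figures and per-instance capacity are assumptions for the sketch, not AWS guidance:

```python
# Sketch: size each fleet to the share of traffic it serves during a gradual
# blue-to-green cutover, instead of running two full-size fleets in parallel.
import math

def fleet_sizes(total_rps: float, green_weight: float,
                rps_per_instance: float = 100.0) -> tuple:
    """Return (blue_instances, green_instances) for a given traffic split."""
    blue = math.ceil(total_rps * (1.0 - green_weight) / rps_per_instance)
    green = math.ceil(total_rps * green_weight / rps_per_instance)
    return blue, green

# Gradual cutover at 1,000 requests/second: green scales out, blue scales in
for w in (0.0, 0.1, 0.5, 1.0):
    print(w, fleet_sizes(1000, w))
# 0.0 -> (10, 0), 0.1 -> (9, 1), 0.5 -> (5, 5), 1.0 -> (0, 10)
```

In practice Auto Scaling performs this adjustment from observed demand; the sketch only shows why the combined fleet never needs to be double-sized for the whole deployment.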
You also don't have to run an overprovisioned architecture for an extended period of time. During the deployment, you can scale out the green environment as more traffic gets sent to it, and scale the blue environment back in as it receives less traffic. Once the deployment succeeds, you decommission the blue environment and stop paying for the resources it was using.

Define the environment boundary

When planning for blue/green deployments, you have to think about your environment boundary: where have things changed, and what needs to be deployed to make those changes live? The scope of your environment is influenced by a number of factors, as described in the following table.

Table 1 – Factors that affect environment boundary

Factors | Criteria
Application architecture | Dependencies, loosely/tightly coupled
Organization | Speed and number of iterations
Risk and complexity | Blast radius and impact of failed deployment
People | Expertise of teams
Process | Testing/QA, rollback capability
Cost | Operating budgets, additional resources

For example, organizations operating applications that are based on the microservices architecture pattern could have smaller environment boundaries because of the loose coupling and well-defined interfaces between the individual services. Organizations running legacy monolithic apps can still utilize blue/green deployments, but the environment scope can be wider and the testing more extensive. Regardless of the environment boundary, you should make use of automation wherever you can to streamline the process, reduce human error, and control your costs.

Services for blue/green deployments

AWS provides a number of tools and services to help you automate and streamline your deployments and infrastructure. You can access these tools using the web console, CLI tools, SDKs, and IDEs.

Amazon Route 53

Amazon Route 53 is a highly available and scalable authoritative DNS service that routes user requests for Internet-based resources to the appropriate
destination. Route 53 runs on a global network of DNS servers, providing customers with added features such as routing based on health checks, geography, and latency. DNS is a classic approach to blue/green deployments, allowing administrators to direct traffic by simply updating DNS records in the hosted zone. Also, time to live (TTL) can be adjusted for resource records; this is important for an effective DNS pattern, because a shorter TTL allows record changes to propagate faster to clients.

Elastic Load Balancing

Another common approach to routing traffic for a blue/green deployment is through the use of load balancing technologies. Amazon Elastic Load Balancing (ELB) distributes incoming application traffic across designated Amazon Elastic Compute Cloud (Amazon EC2) instances. ELB scales in response to incoming requests, performs health checking against Amazon EC2 resources, and naturally integrates with other services such as Auto Scaling. This makes it a great option for customers who want to increase application fault tolerance.

Auto Scaling

AWS Auto Scaling helps maintain application availability and lets you scale EC2 capacity up or down automatically according to defined conditions. The templates used to launch EC2 instances in an Auto Scaling group are called launch configurations. You can attach different versions of launch configurations to an Auto Scaling group to enable blue/green deployment. You can also configure Auto Scaling for use with an ELB. In this configuration, the ELB balances the traffic across the EC2 instances running in an Auto Scaling group. You define termination policies in Auto Scaling groups to determine which EC2 instances to remove during a scaling action; Auto Scaling also allows instances to be placed in Standby state instead of being terminated, which helps with quick rollback when required. Both Auto Scaling's termination policies and Standby state allow for blue/green deployment.
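The Standby-based rollback just described can be sketched in illustrative Python. A plain dict stands in for the Auto Scaling group's instance states; the instance IDs and helper names are assumptions, not an AWS API:

```python
# Sketch: blue instances move to Standby instead of being terminated, so a
# rollback is just returning them to service -- no relaunch required.

def enter_standby(group: dict, instance_ids: list) -> None:
    """Take instances out of traffic while keeping them running."""
    for i in instance_ids:
        if group[i] == "InService":
            group[i] = "Standby"

def rollback(group: dict) -> list:
    """Return Standby instances to service; list the ones restored."""
    restored = [i for i, state in group.items() if state == "Standby"]
    for i in restored:
        group[i] = "InService"
    return restored

blue = {"i-blue1": "InService", "i-blue2": "InService"}
enter_standby(blue, ["i-blue1", "i-blue2"])   # green now serves all traffic
print(rollback(blue))                          # ['i-blue1', 'i-blue2']
```

The trade-off mirrored here is the one in the text: Standby instances still incur cost, but they make the rollback window as short as re-registering them with the load balancer.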
AWS Elastic Beanstalk

AWS Elastic Beanstalk is a fast and simple way to get an application up and running on AWS. It's perfect for developers who want to deploy code without worrying about managing the underlying infrastructure. Elastic Beanstalk supports Auto Scaling and ELB, both of which allow for blue/green deployment. Elastic Beanstalk helps you run multiple versions of your application and provides capabilities to swap the environment URLs, facilitating blue/green deployment.

AWS OpsWorks

AWS OpsWorks is a configuration management service based on Chef that allows customers to deploy and manage application stacks on AWS. Customers can specify resource and application configuration, and deploy and monitor running resources. OpsWorks simplifies cloning entire stacks when you're preparing blue/green environments.

AWS CloudFormation

AWS CloudFormation provides customers with the ability to describe the AWS resources they need through JSON- or YAML-formatted templates. This service provides very powerful automation capabilities for provisioning blue/green environments and facilitating updates to switch traffic, whether through Route 53 DNS, ELB, or similar tools. The service can be used as part of a larger infrastructure-as-code strategy, where the infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration, in a manner similar to how application code is treated.

Amazon CloudWatch

Amazon CloudWatch is a monitoring service for AWS resources and applications. CloudWatch collects and visualizes metrics, ingests and monitors log files, and defines alarms. It provides system-wide visibility into resource utilization, application performance, and operational health, which are key to early detection of application health in blue/green deployments.

AWS CodeDeploy

AWS CodeDeploy is a deployment service that automates deployments to various compute types, such as
EC2 instances, on-premises instances, Lambda functions, or Amazon ECS services. Blue/green deployment is a feature of CodeDeploy. CodeDeploy can also roll back a deployment in case of failure. You can also use CloudWatch alarms to monitor the state of a deployment, and utilize CloudWatch Events to process the deployment or instance state change events.

Amazon Elastic Container Service

There are three ways traffic can be shifted during a deployment on Amazon Elastic Container Service (Amazon ECS):

• Canary – Traffic is shifted in two increments
• Linear – Traffic is shifted in equal increments
• All-at-once – All traffic is shifted to the updated tasks

AWS Lambda Hooks

With AWS Lambda hooks, CodeDeploy can call a Lambda function during the various lifecycle events, including deployment of ECS, Lambda function deployment, and EC2/on-premises deployment. The hooks are helpful in creating a deployment workflow for your apps.

Implementation techniques

The following techniques are examples of how you can implement blue/green on AWS. While AWS highlights specific services in each technique, you may have other services or tools to implement the same pattern. Choose the appropriate technique based on the existing architecture, the nature of the application, and the goals for software deployment in your organization. Experiment as much as possible to gain experience for your environment and to understand how the different deployment risk factors affect your specific workload.

Update DNS Routing with Amazon Route 53

DNS routing through record updates is a common approach to blue/green deployments. DNS is used as a mechanism for switching traffic from the blue environment to the green, and vice versa when rollback is necessary. This approach works with a wide variety of environment configurations, as long as you can express the endpoint into the environment as a DNS name or IP address. Within AWS, this technique applies to environments that are:

• Single instances with a public or Elastic IP address
• Groups of instances behind an Elastic Load Balancing load balancer or third-party load balancer
• Instances in an Auto Scaling group with an ELB load balancer as the front end
• Services running on an Amazon Elastic Container Service (Amazon ECS) cluster fronted by an ELB load balancer
• Elastic Beanstalk environment web tiers
• Other configurations that expose an IP or DNS endpoint

The following figure shows how Amazon Route 53 manages the DNS hosted zone. By updating the alias record, you can route traffic from the blue environment to the green environment.

Classic DNS pattern

You can shift traffic all at once, or you can do a weighted distribution. For weighted distribution with Amazon Route 53, you can define a percentage of traffic to go to the green environment and gradually update the weights until the green environment carries the full production traffic. This provides the ability to perform canary analysis, where a small percentage of production traffic is introduced to a new environment. You can test the new code and monitor for errors, limiting the blast radius if any issues are encountered. It also allows the green environment to scale out to support the full production load if you're using Elastic Load Balancing (ELB). ELB automatically scales its request handling capacity to meet the inbound application traffic; the process of scaling isn't instant, so we recommend that you test, observe, and understand your traffic patterns. Load balancers can also be pre-warmed (configured for optimum capacity) through a support request.

Classic DNS weighted distribution

If issues arise during the deployment, you can roll back by updating the DNS record to shift traffic back to the blue environment.
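Weighted routing can be made concrete with a small, illustrative sketch of how Route 53 apportions answers: the share of DNS responses pointing at an endpoint is its record weight divided by the sum of all weights. The weight values below are assumptions chosen to show a gradual cutover:

```python
# Sketch of weighted DNS distribution during a blue/green cutover: the green
# environment's share of answers is weight("green") / total weight.
from fractions import Fraction

def green_share(records: dict) -> Fraction:
    """Fraction of DNS answers that point at the green environment."""
    total = sum(records.values())
    return Fraction(records["green"], total)

# Gradual shift: 10% canary, then an even split, then full cutover
print(green_share({"blue": 90, "green": 10}))   # 1/10
print(green_share({"blue": 50, "green": 50}))   # 1/2
print(green_share({"blue": 0, "green": 100}))   # 1
```

Note that this models the authoritative answers only; as the surrounding text explains, client- and resolver-side TTL caching means observed traffic lags behind each weight change.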
Although DNS routing is simple to implement for blue/green, you should take into consideration how quickly you can complete a rollback. DNS Time to Live (TTL) determines how long clients cache query results. However, with earlier clients, and potentially clients that aggressively cache DNS records, certain sessions may still be tied to the previous environment. Although rollback can be challenging, this technique has the benefit of enabling a granular transition at your own pace, allowing for more substantial testing and for scaling activities.

To help manage costs, consider using Auto Scaling instances to scale out the resources based on actual demand. This works well with the gradual shift using Amazon Route 53 weighted distribution. For a full cutover, be sure to tune your Auto Scaling policy to scale as expected, and remember that the new ELB endpoint may need time to scale up as well.

Swap the Auto Scaling Group behind the Elastic Load Balancer

If DNS complexities are prohibitive, consider using load balancing for traffic management to your blue and green environments. This technique uses Auto Scaling to
to be OutofService With Auto Scaling an instance that is OutofService could be replaced if the Auto Scaling policy dictates Conv ersely for scale ddown activities the load balancer removes the EC2 instance from the pool and drains current connections before they terminate The following figure shows the environment boundary reduced to the Auto Scaling group A blue group carries the production load while a green group is staged and deployed with the new code When it’s time to deploy you simply attach the green group to the existing load balancer to introduce traffic to the new environment For HTTP/HTTPS listeners the load bala ncer favors the green Auto Scaling group because it uses a least outstanding requests routing algorithm For more information see How Elastic Load Balancin g works You can also control how much traffic is introduced by adjusting the size of your green group up or down Amazon Web Services Blue/Green Deployments on AWS 12 Swap Auto Scaling group pattern s As you scale up the green Auto Scaling group you can take the blue Auto Scaling group instances out of service by either terminating them or putting them in Standby state For more information see Temporarily removing instances from your Auto Scaling group Standby is a good option because if you need to roll back to the blue environment you only have to put your blue server instances back in service and they're ready to go As soon as the green group is scaled up without issues you can decommission the blue group by adjusting the group size to zero If you need to roll back detach the load balancer from the green group or reduce the group size of the green group to zero Amazon Web Services Blue/Green Deployments on AWS 13 Blue Auto Scaling group nodes in standby and decommission This pattern’s traffic management capabilities aren’t as granular as the classic DNS but you could still exercise control through the configuration of the Auto Scaling groups For example you could have a larger fleet of 
smaller instances with finer scaling policies, which would also help control the costs of scaling. Because the complexities of DNS are removed, the traffic shift itself is more expedient. In addition, with an already warm load balancer, you can be confident that you'll have the capacity to support production load.

Update Auto Scaling Group launch configurations

A launch configuration contains information like the Amazon Machine Image (AMI) ID, instance type, key pair, one or more security groups, and a block device mapping. Auto Scaling groups have their own launch configurations. You can associate only one launch configuration with an Auto Scaling group at a time, and it can't be modified after you create it. To change the launch configuration associated with an Auto Scaling group, replace the existing launch configuration with a new one. After a new launch configuration is in place, any new instances that are launched use the new launch configuration parameters, but existing instances are not affected.

When Auto Scaling removes instances (referred to as scaling in) from the group, the default termination policy is to remove instances with the earliest launch configuration. However, you should know that if the Availability Zones were unbalanced to begin with, then Auto Scaling could remove an instance with a new launch configuration to balance the zones. In such situations, you should have processes in place to compensate for this effect.

To implement this technique, start with an Auto Scaling group and an ELB load balancer. The current launch configuration has the blue environment, as shown in the following figure.

Launch configuration update pattern

To deploy the new version of the application in the green environment, update the Auto Scaling group with the new launch configuration, and then scale the Auto Scaling group to twice its original size.

Scale up green launch configuration

The next
step is to shrink the Auto Scaling group back to the original size. By default, instances with the old launch configuration are removed first. You can also use a group's Standby state to temporarily remove instances from an Auto Scaling group. Having instances in Standby state helps enable quick rollbacks, if required. As soon as you're confident in the newly deployed version of the application, you can permanently remove the instances in Standby state.

Scale down blue launch configuration

To perform a rollback, update the Auto Scaling group with the old launch configuration, then perform the preceding steps in reverse. Or, if the instances are in Standby state, bring them back online.

Swap the environment of an Elastic Beanstalk application

Elastic Beanstalk enables quick and easy deployment and management of applications, without having to worry about the infrastructure that runs those applications. To deploy an application using Elastic Beanstalk, upload an application version in the form of an application bundle (for example, a Java .war file or a .zip file), and then provide some information about your application. Based on that information, Elastic Beanstalk deploys the application in the blue environment and provides a URL to access the environment (typically for web server environments).

Elastic Beanstalk provides several deployment policies that you can configure, ranging from policies that perform an in-place update on existing instances to immutable deployments using a set of new instances. Because Elastic Beanstalk performs an in-place update when you update your application versions, your application may become unavailable to users for a short period of time. However, you can avoid this downtime by deploying the new version to a separate environment. The existing environment's configuration is copied and used to launch the green environment with the new
version of the application. The new, green environment will have its own URL. When it's time to promote the green environment to serve production traffic, you can use Elastic Beanstalk's Swap Environment URLs feature.

To implement this technique, use Elastic Beanstalk to spin up the blue environment.

Elastic Beanstalk environment

Elastic Beanstalk provides an environment URL when the application is up and running. The green environment is then spun up with its own environment URL. At this time, two environments are up and running, but only the blue environment is serving production traffic.

Prepare green Elastic Beanstalk environment

Use the following procedure to promote the green environment to serve production traffic:

1. Navigate to the environment's dashboard in the Elastic Beanstalk console.
2. In the Actions menu, choose Swap Environment URL. Elastic Beanstalk performs a DNS switch, which typically takes a few minutes. See the Update DNS Routing with Amazon Route 53 section for the factors to consider when performing a DNS switch.
3. Once the DNS changes have propagated, you can terminate the blue environment.

To perform a rollback, select Swap Environment URL again.

Decommission blue Elastic Beanstalk environment

Clone a Stack in AWS OpsWorks and Update DNS

AWS OpsWorks uses the concept of stacks, which are logical groupings of AWS resources (EC2 instances, Amazon RDS instances, ELB load balancers, and so on) that have a common purpose and should be logically managed together. Stacks are made of one or more layers. A layer represents a set of EC2 instances that serve a particular purpose, such as serving applications or hosting a database server. When a data store is part of the stack, you should be aware of certain data management challenges, such as those discussed in the next section.

To implement this technique in AWS OpsWorks, bring up the blue environment/stack with the current version of the
application.

AWS OpsWorks stack

Next, create the green environment/stack with the newer version of the application. At this point, the green environment is not receiving any traffic. If Elastic Load Balancing needs to be initialized, you can do that at this time.

Clone stack to create green environment

When it's time to promote the green environment/stack into production, update the DNS records to point to the green environment/stack's load balancer. You can also do this DNS flip gradually by using the Amazon Route 53 weighted routing policy. This process involves updating DNS, so be aware of the DNS issues discussed in the Update DNS Routing with Amazon Route 53 section.

Decommission blue stack

Best Practices for Managing Data Synchronization and Schema Changes

The complexity of managing data synchronization across two distinct environments depends on the number of data stores in use, the intricacy of the data model, and the data consistency requirements. Both the blue and green environments need up-to-date data:

• The green environment needs up-to-date data access because it's becoming the new production environment.
• The blue environment needs up-to-date data in the event of a rollback, when production either shifts back to or remains on the blue environment.

Broadly, you accomplish this by having both the green and blue environments share the same data stores. Unstructured data stores, such as Amazon Simple Storage Service (Amazon S3) object storage, NoSQL databases, and shared file systems, are often easier to share between the two environments. Structured data stores, such as relational database management systems (RDBMS), where the data schema can diverge between the environments, typically require additional considerations.

Decoupling Schema Changes from Code Changes

A general recommendation is to decouple schema changes from the code
changes. This way, the relational database is outside of the environment boundary defined for the blue/green deployment and is shared between the blue and green environments. Two approaches for performing the schema changes are often used in tandem:

• The schema is changed first, before the blue/green code deployment. Database updates must be backward compatible, so the old version of the application can still interact with the data.
• The schema is changed last, after the blue/green code deployment. Code changes in the new version of the application must be backward compatible with the old schema.

Schema modifications in the first approach are often additive: you can add fields to tables, as well as new entities and relationships. If needed, you can use triggers or asynchronous processes to populate these new constructs with data based on data changes performed by the old application version.

It's important to follow coding best practices when developing applications to ensure that your application can tolerate the presence of additional fields in existing tables, even if they are not used. When table row values are read and mapped into source code structures (for example, objects and array hashes), your code should ignore fields it can't map, to avoid causing application runtime errors.

Schema modifications in the second approach are often deletive: you can remove unneeded fields, entities, and relationships, or merge and consolidate them. After this removal, the earlier application version is no longer operational.

Decoupled schema and code changes

There's an increased risk involved when managing schema with a deletive approach: failures in the schema modification process can impact your production environment. Deletive changes can bring down the earlier application because of an undocumented issue where best practices weren't followed, or the new application version may still have a dependency on a deleted field somewhere in the code. To
mitigate risk appropriately, this pattern places a heavy emphasis on your pre-deployment software lifecycle steps. Be sure to have a strong testing phase and framework, and a strong QA phase. Performing the deployment in a test environment can help identify these sorts of issues early, before the push to production.

When blue/green deployments are not recommended

As blue/green deployments become more popular, developers and companies are constantly applying the methodology to new and innovative use cases. However, in some common use case patterns, applying this methodology, even if possible, isn't recommended. In these cases, implementing blue/green deployment introduces too much risk, whether due to workarounds or additional moving parts in the deployment process. These complexities can introduce additional points of failure, or opportunities for the process to break down, that may negate any risk mitigation benefits blue/green deployments bring in the first place. The following scenarios highlight patterns that may not be well suited for blue/green deployments.

Are your schema changes too complex to decouple from the code changes? Is sharing of data stores not feasible?
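The additive-first approach described earlier can be sketched with a toy example. This is a minimal illustration, not taken from the whitepaper: the table and column names are hypothetical, and sqlite3 stands in for a production RDBMS. The point is that the schema change ships before the new code, and the old (blue) application version keeps working against the updated schema.

```python
import sqlite3

def old_app_insert(conn, name):
    # The blue (old) application version only knows the original columns.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

def migrate_additive(conn):
    # Additive, backward-compatible change: a new nullable column with a
    # default, so existing INSERTs from the old application still succeed.
    conn.execute("ALTER TABLE users ADD COLUMN loyalty_tier TEXT DEFAULT NULL")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

old_app_insert(conn, "alice")   # blue version works before the migration
migrate_additive(conn)          # schema is changed first...
old_app_insert(conn, "bob")     # ...and the blue version still works after

rows = conn.execute("SELECT name, loyalty_tier FROM users ORDER BY id").fetchall()
```

A deletive change (dropping a column the blue version still inserts into) would break `old_app_insert`, which is why the deletive approach carries the higher risk described above.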
In some scenarios, sharing a data store isn't desired or feasible: schema changes may be too complex to decouple, or data locality may introduce too much performance degradation to the application, as when the blue and green environments are in geographically disparate regions. All of these situations require a solution where the data store is inside of the deployment environment boundary and tightly coupled to the blue and green applications, respectively. This requires data changes to be synchronized: propagated from the blue environment to the green one, and vice versa. The systems and processes to accomplish this are generally complex and limited by the data consistency requirements of your application. This means that, during the deployment itself, you also have to manage the reliability, scalability, and performance of that synchronization workload, adding risk to the deployment.

Does your application need to be deployment aware?

You should consider using feature flags in your application to make it deployment aware. This helps you control the enabling and disabling of application features in a blue/green deployment. Your application code would run additional or alternate subroutines during the deployment to keep data in sync or to perform other deployment-related duties. These routines are enabled or disabled during the deployment by using configuration flags. Making your applications deployment aware introduces additional risk and complexity, and typically isn't recommended with blue/green deployments. The goal of blue/green deployments is to achieve immutable infrastructure, where you don't make changes to your application after it's deployed, but redeploy altogether. That way, you ensure the same code is operating in the production setting and in the deployment setting, reducing overall risk factors.

Does your commercial off-the-shelf (COTS) application come with a predefined update/upgrade process that isn't blue/green deployment
friendly?

Many commercial software vendors provide their own update and upgrade process for applications, which they have tested and validated for distribution. While vendors are increasingly adopting the principles of immutable infrastructure and automated deployment, not all software products currently have those capabilities. Working around the vendor's recommended update and deployment practices to try to implement or simulate a blue/green deployment process may also introduce unnecessary risk that can potentially negate the benefits of this methodology.

Conclusion

Application deployment has associated risks. However, advancements such as the advent of cloud computing, deployment and automation frameworks, and new deployment techniques (blue/green, for example) help mitigate risks such as human error and process downtime, and improve rollback capability. The AWS utility billing model and wide range of automation tools make it much easier for customers to move fast and cost-effectively implement blue/green deployments at scale.

Contributors

The following individuals and organizations contributed to this document:

• George John, Solutions Architect, Amazon Web Services
• Andy Mui, Solutions Architect, Amazon Web Services
• Vlad Vlasceanu, Solutions Architect, Amazon Web Services
• Muhammad Mansoor, Solutions Architect, Amazon Web Services

Document revisions

September 21, 2021 — Updated for technical accuracy
June 1, 2015 — Initial publication

Appendix: Comparison of Blue/Green Deployment Techniques

The following comparison summarizes the different blue/green deployment techniques discussed in this paper. The risk potential is evaluated from desirable, lower risk (X) to less desirable, higher risk (X X X).

Update DNS Routing with Amazon Route 53
• Application Issues: X. Facilitates canary analysis.
• Application Performance: X. Gradual switch, traffic split management.
• People/Process Errors: X X. Depends on automation framework; overall simple process.
• Infrastructure Failures: X X. Depends on automation framework.
• Rollback: X X X. DNS TTL complexities (reaction time, flip/flop).
• Cost: X. Optimized via Auto Scaling.

Swap the Auto Scaling group behind Elastic Load Balancer
• Application Issues: X. Facilitates canary analysis.
• Application Performance: X X. Less granular traffic split management; already warm load balancer.
• People/Process Errors: X X. Depends on automation framework.
• Infrastructure Failures: X. Auto Scaling.
• Rollback: X. No DNS complexities.
• Cost: X. Optimized via Auto Scaling.

Update Auto Scaling Group launch configurations
• Application Issues: X X X. Detection of errors/issues in a heterogeneous fleet is complex.
• Application Performance: X X X. Less granular traffic split; initial traffic load.
• People/Process Errors: X X. Depends on automation framework.
• Infrastructure Failures: X. Auto Scaling.
• Rollback: X. No DNS complexities.
• Cost: X X. Optimized via Auto Scaling, but initial scale-out overprovisions.

Swap the environment of an Elastic Beanstalk application
• Application Issues: X X. Ability to do canary analysis ahead of cutover, but not with production traffic.
• Application Performance: X X X. Full cutover.
• People/Process Errors: X. Simple, automated process.
• Infrastructure Failures: X. Auto Scaling, CloudWatch monitoring, Elastic Beanstalk health reporting.
• Rollback: X X X. DNS TTL complexities.
• Cost: X X. Optimized via Auto Scaling, but initial scale-out may overprovision.

Clone a stack in OpsWorks and update DNS
• Application Issues: X. Facilitates canary analysis.
• Application Performance: X. Gradual switch, traffic split management.
• People/Process Errors: X. Highly automated.
• Infrastructure Failures: X. Auto-healing capability.
• Rollback: X X X. DNS TTL complexities.
• Cost: X X X. Dual stack of resources.

Building a Real-Time Bidding Platform on AWS
February 2016

This paper has been archived. For the latest technical guidance about the AWS Cloud, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers/

© 2016, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents: Abstract; Introduction; Real-Time Bidding Explained; Elastic Nature of Advertising and Ad Tech; Why Speed Matters; Advertising Is Global; The Economics of RTB; Components of a RTB Platform; RTB Platform Diagram; Real-Time Bidding on AWS; Elasticity on AWS; Low Latency Networking on AWS; AWS Global Footprint; The Economics of RTB on AWS; Components of an RTB Platform on AWS; Reference Architecture Example; Citations; Conclusion; Contributors; Further Reading; Notes
Abstract

Amazon Web Services (AWS) is a flexible, cost-effective, easy-to-use global cloud computing platform. The AWS cloud delivers a comprehensive portfolio of secure and scalable cloud computing services in a self-service, pay-as-you-go model, with zero capital expense needed to manage your real-time bidding platform. This whitepaper helps architects, engineers, advertisers, and developers understand real-time bidding (RTB) and the services available in AWS that can be used for RTB. This paper showcases the RTB platform reference architecture used by customers today, and provides additional resources to get started with building an RTB platform on AWS.

Introduction

Online advertising is a growing industry, and its share of total advertising spending is increasing every year; it is projected to surpass TV advertising spend in 2016. A significant area of growth is real-time bidding (RTB), the auction-based approach for transacting digital display ads in real time at the most granular impression level. RTB was the dominant transaction method in 2015, accounting for 74.0 percent of programmatically purchased advertising, or 11 billion dollars in the US.1 RTB transactions are projected to grow over 30 percent in 2016, according to industry research.2 Real-time bidding is also gaining popularity in mobile display advertising, as mobile advertising spend is anticipated to grow in excess of 60 percent in 2016.3

As the amount of data being created and collected grows, organizations need to use it to make better decisions in determining the value of each ad impression. AWS has an ecosystem of solutions specifically designed to handle the real-time, low-latency analytics that allow you to make the best possible and most efficient ad impressions to drive your business.

Real-Time Bidding Explained

When you go to a website and are served an advertisement, the process to serve you that advertisement
involves the website or publisher contacting an ad exchange, which then accepts real-time bids from many different parties. The bidders use the information about the user that they know (for example, the website and the ad location/size, plus demographic information such as user location, browsing history, and time of day) to determine how much they are willing to pay to deliver an advertisement to the user. The data may come directly from publishers (mobile applications or websites) or from third-party data providers. Whichever bidder bids the most within a time period set by the exchange, usually under 100 milliseconds, gets to serve the ad and pays the bid price. This process is depicted at a high level in Figure 1. RTB is the process of accepting data from Step 2 and performing the action in Step 3.

Figure 1: Real-Time Bidding Process
1. User goes to a web page.
2. Ad impression is sent to an ad exchange.
3. Ad exchange invites bidders.
4. Highest bidder wins the impression.
5. Advertiser delivers the winning ad creative.

Elastic Nature of Advertising and Ad Tech

Web traffic is the engine that drives the advertising industry. Daily web traffic volume can vary by 200 percent or more, based on time of day. In Figure 2, you can see a typical pattern of load on an RTB platform in a single day. With elasticity, you can achieve greater infrastructure savings by turning off resources as traffic decreases. In addition, Figure 3 illustrates the typical pattern that an RTB platform will see for seasonal events (such as the Christmas holiday in December and the spring tax season in the United States) that create very large, consistent spikes that might account for more than half of all traffic for the whole year. These peak times are the most important time to serve the right ad to the right potential
customers. To accomplish this, you can either build an RTB platform that always has the capacity to handle peak and spiked loads, or you can build your platform to grow and shrink based on the required need. Building elasticity into your platform can dramatically reduce your operating costs. You don't need to maintain peak capacity year-round just to avoid performance issues during important holidays and busy traffic times each day.

Figure 2: Daily Load Pattern for RTB
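One way to act on a predictable daily pattern like the one in Figure 2 is to pre-schedule capacity changes. The sketch below is a hypothetical illustration, not an official AWS example: the group name, capacities, and cron schedule are invented. The resulting dicts match the parameter shape accepted by boto3's Auto Scaling client (`put_scheduled_update_group_action`), shown only as a comment so the sketch stays self-contained.

```python
# Build scheduled-action parameters that scale a bidding fleet up before the
# daily traffic peak and back down off-peak. In practice each dict would be
# passed to boto3:
#   boto3.client("autoscaling").put_scheduled_update_group_action(**action)

def scheduled_actions(group, peak_capacity, offpeak_capacity):
    return [
        {
            "AutoScalingGroupName": group,
            "ScheduledActionName": "scale-up-for-daily-peak",
            "Recurrence": "0 8 * * *",   # cron: every day at 08:00 UTC
            "MinSize": peak_capacity // 2,
            "DesiredCapacity": peak_capacity,
        },
        {
            "AutoScalingGroupName": group,
            "ScheduledActionName": "scale-down-off-peak",
            "Recurrence": "0 22 * * *",  # cron: every day at 22:00 UTC
            "MinSize": offpeak_capacity,
            "DesiredCapacity": offpeak_capacity,
        },
    ]

# Hypothetical fleet: 40 instances at peak, 10 off-peak.
actions = scheduled_actions("rtb-bidders", peak_capacity=40, offpeak_capacity=10)
```

Scheduled actions handle the predictable daily curve; demand-based scaling policies can then absorb the unpredictable spikes on top of it.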
and still meet the 100 ms requirement Therefore when you plan for building an RTB platform you need to make sure you are able to deploy your platform throughout the world to be as effective as possible The Economics of RTB The digital advertising business is extremely competitive with ever decreasing margins Many technological solutions might be able to deliver the required business functionality however few can deliver it at the very low cost needed to achieve the desired profitability Costs of RTB can be broken down into two broad categories: costs associated with listening to traffic and recording it and additional costs of executing the bidding logic and populating and maintaining the data repositories related to the bidding process When you use AWS these costs can be spread across AWS services with different economics and can be effectively monitored controlled and projected through the AWS budgets and forecasts capability Cost optimization of RTB on AWS is a critical part of a successful solution with numerous strategies available Components of a RTB Platform This section discusses the components that make up a functioning RTB platform Bid Traffic Ingestion and Processing As a user goes to a website that website will contact an ad exchange that will then send out bid traffic to RTB platforms for bids on this impression The bid traffic includes just the website URL that is being browsed ad/size and location on that website and demographic information about the user that the publisher knows This data must be ingest ed in real time and a decision must be made on whether you want to bid on this impression and the amount you’re willing to bid Each ad request comes with some form of user identification (ID) from the ad exchange At this point the bidder needs to be able to leverage this user ID and all available ArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 9 of 21 data for that user (if this is an existing user that 
the system has seen previously) The bidder must map this user ID to another source of information (eg a cookie store) to match the user calculate the value of the bid and probability of winning the auction Then the bidder sends the bid along with the ad link tied to that bid so that the ad creative can be displayed to the end user in the case of an auction win To make this decision the solution must utilize a lowlatency data store along with a campaign management system which will be described in more detail below Analysis Traffic Ingestion and Processing Analysis traffic can come from ad exchanges and directly from content publishers through tracking pixels Analysis traffic is usually not as timesensitive as bidding traffic but it provides valuable information which can be used to make the real time bidding decision on future bid traffic It is important to capture all or as much analysis traffic as possible and not just sample it because analysis traffic improves the system’s ability to understand data patterns and learn from them This data is critical to making an intelligent decision on how much any given impression is worth to the advertiser and how likely it is that this impression will stick with the website user or lead to a direct action like a clickthrough Low Latency Data Repository The primary purpose of a low latency data repository is to look up and make decisions very quickly on not only if you wish to bid on an impression but also how much you are willing to pay for that impression This decision is based on three key factors: knowledge about the user (user profile) how well the user match es a set of predetermined advertising campaigns with specific budget objectives and how often the user has a specific ad The key capabilities of this data store are to provide data very fast (preferably in a single millisecond) to scale to peak traffic and to have regional replication abilities Regional replication is critical for targeting users who connect from 
different geographic locations and who can be targeted through advertising exchanges worldwide The data that is stored in the low latency data repository is an index for fast retrieval set of aggregated data from the durable data repository Durable Data Repository for LongTerm Storage The durable data repository is a storage platform built to hold large amounts of data inexpensively It will hold all historical data for the analytical pipelines for data transformation enrichment and preparation for rich analytics It ’s ArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 10 of 21 important to have as much historical data as possible to best be able to predict user behavior and have a good impression bidding strategy For example shopping behavior may be very different in December around the Christmas holiday than in April If you have data from December of last year or Decembers over multiple years you can make better predictions about the behavior patterns and demographics that lead to the most valuable impressions In addition the advertising customers may have their own “first party” data about the customers they want to target with RTB or they might use data from other data providers ’ thirdparty data to enhance the RTB process Analytics Platform An analytics platform is used to run computation models such as machine learning to calculate the likelihood of specific campaigns getting the desired result from specific demographics and users This platform will keep track of users across multiple devices record their activities and update user profiles and audience segments It will run the analytics off the different data feeds and the long term durable data repository It will take the analytical results and store them an indexed manner in the lowlatency data store so that bid processing can quickly find the data it needs to make its bidding decisions Campaign Management Campaign management is typically a multitenant web 
application that manages the advertising campaigns and controls the budgets for different advertisers This web application provides detailed statistics of the bids that have already taken place in the campaigns and the audiences that have provided the best response In some cases the advertising campaign can be manually or automatically adjusted “on the fly ” and the information can be pumped back into the low latency data store so that new bidding traffic can incorp orate new or updated campaigns ArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 11 of 21 RTB Platform Diagram The diagram in Figure 4 displays a generic infrastructure provider independent data flow and each component involved in a generic RTB platform This illustrates not only the components of an RTB platform but also the interactions with a website from outside sources such as ad exchanges advertisers user tracking systems publishers and end users Figure 3: RTB Platform Components Real Time Bidding on AWS We will now explore the specific advantages that AWS offers to RTB systems We’ll show how AWS help s RTB providers implement all of the components discussed earlier for their platforms AWS provides many services and features so customers can focus on analytics models and your own customers instead of spending a significant amount of time on infrastructure networking availability and the platform ArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 12 of 21 Elasticity on AWS The AWS platform is built with elasticity in mind; at any time you can utilize compute databases and storage You only pay for what you use For example Amazon Elastic Compute Cloud (EC2) reduces the time required to obtain and boot new server instances to minutes This allows you to quickly scale capacity both up and down as your computing requirements change Y ou can build your RTB platform to scale up and down in size as more traffic comes in You 
also can do computational analytics on your data set in batches and then release the resource back when the analytics are done so you’re not continuing to pay for it This elasticity not only gives you the assurance that you can handle very large unpredictable spikes in traffic that may occur but also that you are not tied to architectural or software choices You can freely change because there is no long term commitment or investment to your existing infrastructure Low Latency Networking on AWS In the simplest case both the RTB solution and the exchange are located in the same AWS Region This is an increasingly popular scenario among the rapidly growing mobile and video exchanges In some cases however the exchange is not located on AWS so the traffic between the RTB solution and the ad exchange goes over the public Internet To reduce the latency and jitter caused by the Internet a private connectivity path via AWS Direct Connect can be established between your Amazon Virtual Private Cloud (VPC) that hosts the RTB solution and the provider that hosts the exchange Some hosting providers may require a public Autonomous System Number (ASN) in order to connect to the exchanges in the most efficient way If a company does not own a public ASN this can be accomplished by leasing an ASN from AWS Direct Connect Partners Additionally when choosing the EC2 instance type you want to make sure to pick instances with enhanced networking with SRIOV to get the best possible network performance In some cases customers may take advantage of Placement Groups that ensure nonblocking low latency connections between instances In addition different networking stacks can be deployed to further reduce latency for connections inside the VPC and outside of the VPC AWS Global Footprint AWS offers many different regions around the world where you can deploy your RTB platform to be as close as possible to the different exchanges around the ArchivedAmazon Web Services – Building a RealTime 
Bidding Platform on AWS February 2016 Page 13 of 21 world To see a full list of current locations click here One of the big advantages of the AWS platform is that you can use deployment services like AWS CloudFormation AWS OpsWorks and AWS Elastic Beanstalk to easily deploy the exact same architecture to any region you want with a simple click in the AWS Management Console or a service API call This allows you to easily meet the demands of new campaig ns If you no longer have a campaign tied to a specific geographic location you can shut down operations at that location until there is demand Due to the AWS pay asyougo model you will pay nothing once operations cease When a new campaign starts that requir es this geographic location again just spin it up in minutes with your deployment tool of choice The Economics of RTB on AWS There are several ways of improving the economics of RTB on AWS Some of the common methods include the following: 1 Elastically scale your compute and memory resources using Auto Scaling to maximize your resources and to ensure that you are paying for peak load when only when you need the resources 2 Use Spot Instances especially with latest EC2 Spot Fleet API and Spot Bid Advisor 3 Use Reserved Instances 4 Reduce the costs of outbound network traffic with Direct Connect to exchanges outside of the AWS network 5 Dynamically scale Amazon DynamoDB These methods will typically lead to significant savings over building it yourself or using other providers without sacrificing performance or availability Components of an RTB Platform on AWS Now that you have a solid understanding of what RTB platforms are and what their generic components are let’s look at how customers have implemented this successfully on the AWS Platform The AWS platform offers a rich ecosystem of selfmanaged servers via Amazon EC2 third party products via the AWS Marketplace and managed services offerings such as Amazon DynamoDB and Amazon ElastiCache so there are multiple ways 
to architect your platform on AWS. We will explore how each component of an RTB platform could be deployed on AWS.

Bid Traffic Ingestion and Processing on AWS

To build an elastic bid traffic ingestion and processing platform, you need to front all traffic with a load-balancing tier. The load balancing can be done in AWS using an Elastic Load Balancing (ELB) load balancer, a fully managed software load balancer that scales with traffic at a very attractive price point. You can also run your own load balancing software, such as HAProxy, NetScaler, or F5, on Amazon EC2 instances in a self-managed implementation. However, running your own load balancer requires you to ensure scalability and availability across Availability Zones. Typically, DNS with a health check is used to monitor your load balancers and move new traffic around if any of the instances running your load balancer has an issue or is overloaded. You will also want to scale your web and application tier up and down independently as traffic fluctuates, not only to ensure that you can handle traffic demand but also to reduce your infrastructure cost when you do not need maximum server capacity for the current traffic. You can scale your servers yourself using the AWS API or Command Line Interface (CLI), or you can use Auto Scaling to automatically manage your fleet. A best practice is to use the smallest possible instance type that can manage your web and application tier without sacrificing network throughput. This leads to the lowest possible price when running at your minimum capacity. It also reduces cost by allowing you to scale up and down in small increments that best match your compute and memory resources to your actual needs as the bid traffic varies throughout the day. For more details on best practices for building and managing scalable architectures, see the AWS whitepaper Managing Your Infrastructure at Scale. An example of launching an open source bidder (RTBkit) on AWS can be found in the RTBkit GitHub repository.

Analysis Traffic Ingestion and Processing on AWS

Analysis traffic can flow into Amazon Kinesis directly from users, or it might require some preprocessing. In the second scenario, it goes through a load balancer to a fleet of scalable EC2 instances that preprocess the data. After data arrives at the EC2 instances (Kinesis producers) and is forwarded to Amazon Kinesis (likely with some batching to reduce costs), it can be picked up by a number of applications directly from the Amazon Kinesis stream using the Kinesis Client Library (KCL). The Kinesis Producer Library (KPL) can be used to simplify the process of putting records into an Amazon Kinesis stream. Kinesis is a convenient data store for the multiplexed stream data from several EC2 instances. This data can be used to compute metrics and perform time-window calculations to understand the patterns in the web traffic. To optimize the costs of this additional processing step, the data can be flushed in small batches by concatenating the logs up to the Amazon Kinesis 1 MB record size, minimizing the costs associated with put-record requests. From Amazon Kinesis, data is typically moved into a durable repository like Amazon S3 and processed with frameworks like Apache Spark (using the Spark Streaming and Kinesis integration). In addition, the Amazon Kinesis Firehose service significantly simplifies the process of large-volume data capture.

Low Latency Data Repository on AWS

For a low-latency data repository on AWS, you can use AWS managed services like Amazon DynamoDB and Amazon ElastiCache, or a multitude of do-it-yourself options that you would run on Amazon EC2, such as Aerospike, Cassandra, and Couchbase. Amazon DynamoDB offers the simplicity of managing very large tables with low
administrative overhead and human intervention, while providing single-digit millisecond latency and utilizing multiple data centers for high durability and availability. Amazon DynamoDB can be combined with DynamoDB Streams, which captures all activity that happens on a table; this simplifies development and administration of cross-region, multi-master replication scenarios. Amazon DynamoDB is a convenient repository for user profile, audience, and cookie data, as well as for keeping track of advertising served (frequency capping) and advertising budgets. Amazon DynamoDB also allows you to easily scale up and down the amount of transaction requests the system can handle on a per-table basis, so you can scale your data tier up and down as your transaction load changes throughout the year. Each table in Amazon DynamoDB has its own provisioned amount of throughput that can be scaled. This makes administration of your database easy: you don't need to turn a clustered set of servers into a set of tables with different performance characteristics, and you avoid a poorly written scan or an unexpected spike in traffic on one table affecting your other tables. This also allows you to deploy the concept of hot and cold tables very easily. For example, in the typical time-series pattern, new data is examined often and older data is rarely needed. In this case you can create a unique table for each day, week, or month and give the new tables very high throughput. You can also programmatically dial back the throughput on your older tables over time to further save money, since older data is accessed less often. This simple per-table throughput administration reduces the performance variation and uncertainty found in clusters trying to manage many tables with varying, unpredictable loads.

One of the popular use cases for Amazon DynamoDB is a distributed, low-latency user profile store. The user store contains the categories (or segments) a specific user belongs to, as well as the times that the user was assigned a given segment. This user-segment information can be used as input for bidding decision logic. Amazon DynamoDB can be very flexible in terms of schema design, and there are several best practices for data modeling. One example of a best practice is to use hash and range keys for retrieval and modification of multiple items (segments) belonging to the same or different hash keys. In this scenario, the hash key is the user ID and the range key is the segment the user belongs to:

User ID (Hash Key)   User Segment (Range Key)   Timestamp (Attribute)
1234                 Segment1                   1448895406
1234                 Segment2                   1448895322
1235                 Segment1                   1448895201

Durable Data Repository for Long-Term Storage on AWS

Amazon Simple Storage Service (Amazon S3) provides a scalable, secure, highly available, and durable repository for analytical data. Amazon S3 uses a pay-as-you-go model, so you are charged only for what you use. Amazon S3 also offers different storage classes: S3 Standard for general-purpose storage, S3 Infrequent Access (S3 IA) for data that is long-lived but infrequently accessed, and Amazon Glacier for long-term archival. You can also set up Object Lifecycle Management policies, which move your data between these different storage options on a schedule at no additional cost. For example, a policy might move data older than a year to S3 IA, then after three years to Amazon Glacier, and then delete the data after seven years. Amazon S3 is a durable, scalable, and inexpensive option for RTB long-term storage that can then serve as the data source for analytical pipelines that transform, enrich, and prepare data for rich analytics. AWS has several technologies you can use for distributed data transformation. Amazon Elastic MapReduce (Amazon EMR) is a managed cluster compute
framework that can natively read directly from Amazon S3 using open source tools such as Apache Spark. In addition, AWS Data Pipeline is a highly available managed service that allows easy data movement; processing jobs can be implemented to manage workflows, including those run by Amazon EMR and other processing and database technologies. Finally, you can take advantage of event-driven processing when objects are written to Amazon S3: an event can automatically trigger an AWS Lambda function, simplifying processing at scale without requiring batch-based architectures.

RTB Analytics Platform on AWS

AWS has a wide variety of analytics platforms that RTB platforms can use to make bidding decisions as effective as possible. In the machine learning space, for very large data sets, a common pattern is to use the machine learning library that comes with Spark (MLlib) on Amazon EMR. You can also utilize other tools that run on Amazon EMR, or you can use a managed service such as Amazon Machine Learning (Amazon ML). All of these options integrate fully with Amazon S3 storage for your long-term data set. This allows the same data to be analyzed with many different tools so you can achieve your predictive analytics goals. You can read about the different options and benefits AWS provides for large-scale analytics in the Big Data Analytics Options on AWS whitepaper. Typically, an analytical workload requires a workflow component, which can be implemented using Amazon Simple Workflow Service (Amazon SWF), AWS Data Pipeline, or AWS Lambda.

Campaign Management on AWS

Campaign management system architectures on AWS look like typical well-architected web applications, similar to the bid-processing system described earlier, but this time with a full-scale persistent data tier. Campaign management applications should run in Auto Scaling groups, sit behind ELB load balancers and security groups, and deploy in multiple Availability Zones for high availability. You can use Amazon Relational Database Service (Amazon RDS) for your campaign management database. Amazon RDS is a managed RDBMS service that supports the Oracle, SQL Server, Aurora, MySQL, PostgreSQL, and MariaDB engines; it will install, patch, and maintain your database, perform multi-AZ synchronous replication, and back up your data. You could also run your own database technology on Amazon EC2, but you would need to take ownership of managing and maintaining that database yourself. Your application will typically tie into your low-latency data tier to provide real-time information on the success of your campaigns back to your customers. We recommend using a content delivery network such as Amazon CloudFront, a managed content delivery network that helps speed up and securely deliver dynamic and static content (e.g., JavaScript, ad images) as close to your users as possible.

Reference Architecture Example

Figure 5 is an example of a reference architecture that customers have successfully deployed. It uses Auto Scaling groups for scalability and spans multiple Availability Zones so that any localized failure would not stop its ability to respond to bids.

Citations: US Programmatic Ad Spend, AdRoll re:Invent 2014, AdRoll Kinesis data processing, Automating Analytic Workflows on AWS

Figure 5: Example Reference Architecture

Conclusion

Real-time bidding is a growing trend with many different components required to effectively deliver intelligent, real-time purchasing of media. The AWS platform is a strong fit for each component of the RTB platform due to its global reach and breadth of services. An RTB architecture on AWS allows you to get the real-time performance necessary for RTB while reducing the overall cost and complexity involved in running an RTB platform. The result is a flexible big data architecture that is able to scale along with your business on the AWS global infrastructure. Deploying on AWS offloads a significant amount of the complexity of operating a scalable real-time infrastructure so that you can focus on what differentiates you from your competitors and on making the best possible bidding strategies for your customers.

Contributors

The following individuals and organizations contributed to this document:

• Steve Boltuch, solutions architect, Amazon Web Services
• Chris Marshall, solutions architect, Amazon Web Services
• Marco Pedroso, software engineer, A9
• Erik Swensson, solutions architect manager, Amazon Web Services
• Dmitri Tchikatilov, business development manager, Amazon Web Services
• Vlad Vlasceanu, solutions architect, Amazon Web Services

Further Reading

For additional help, please consult the following sources:

• IAB Real Time Bidding Project
• Beating the Speed of Light with Your Infrastructure on AWS
• Deploying an RTBkit on AWS with a CloudFormation Template

Notes

1. US Programmatic ad spend to double by 2016, eMarketer analysis
2. US Programmatic digital display ad spending 2014–2017, eMarketer analysis
3. US Programmatic ad spend to double by 2016, eMarketer analysis

Building a Secure Approved AMI Factory Process Using Amazon EC2 Systems Manager (SSM), AWS Marketplace, and AWS Service Catalog

November 2017

This paper has been archived. For the latest technical content about the AWS Cloud, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices: This document is provided for
informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document, and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

• Introduction
• Building the Approved AMI
• Considerations for AWS Marketplace AMIs
• Distributing the Approved AMI
• Distributing and Updating AWS Service Catalog
• Continuously Scanning Published AMIs
• Conclusion
• Document Revisions

Abstract

Customers require that the AMIs used in AWS meet general and customer-specific security standards. Customers may also need to install software agents, such as logging or anti-malware agents. To meet these requirements, customers often build approved AMIs that are then shared across many teams. The responsibility for building and maintaining these AMIs can fall to a central cloud or security team or to the individual development teams. This paper outlines a process that uses best practices for building and maintaining approved AMIs through Amazon EC2 Systems Manager and delivering them to your teams using AWS Service Catalog.

Introduction

As your organization moves more and more of its workloads to Amazon Web Services (AWS), your IT team needs to ensure that it can meet the security requirements defined by your internal Information Security team. The Amazon Machine Images (AMIs) used by different customer business units must be hardened, patched, and scanned for vulnerabilities regularly. Like most companies, your organization is probably looking for ways to reduce the time required to provide approved AMIs. Often, evidence of compliance and approval is required before you can use AMIs in your production environments. It can be difficult for your development teams to determine which AMIs are approved and how to integrate AMIs into their own applications. Organization-wide cloud teams need to ensure compliance and enforce that development teams use the hardened AMIs and not just any off-the-shelf AMI. It isn't uncommon for organizations to build fragile internal tool chains that are dependent on one or two skilled people whose departure introduces risk.

This whitepaper presents the challenges faced by customer cloud teams. It describes a method for providing a repeatable, scalable, and approved application stack factory that increases innovation velocity, reduces effort, and increases the chief information security officer's (CISO's) confidence that teams are compliant.

In a typical enterprise scenario, a cloud team is responsible for providing the core infrastructure services. This team owns providing the appropriate AWS environment for the many development teams, along with approved AMIs that include the latest operating system updates, hardening requirements, and required third-party software agents. The team needs to provide these approved images to teams across the organization in a seamless way. In a more decentralized model, organizations typically use this same method. Development teams want to consume the latest approved AMI in the simplest way possible, often through automation. They want to customize these approved AMIs with the required software components, but also ensure that the images continue to meet the organization's InfoSec requirements.

This solution uses Amazon EC2 Systems Manager Automation to drive the workflow. Automation defines a sequence of steps and is composable. The solution is broken down into a set of logical building blocks, where the master workflow invokes the following individual components:

1. Build the AMI
2. Validate the AMI
3. Publish the AMI to AWS Service Catalog

The master Automation invokes all the steps, as illustrated in the following figure.

Figure 1: Solution overview

The development teams can repeat this process. Each team can add its own software and produce a new AMI that is scanned, distributed, and consumed as necessary. The extended flow across the teams is as follows:

• The central cloud engineering team is responsible for the following:
  o Setting policy on the specified operating systems, the variants, and the frequency-of-change policy
  o Building the approved AMIs that include the latest operating system updates, hardening requirements, and approved software agents
  o Running Amazon EC2 Systems Manager Automation to build the approved AMI
  o Making the AMI available to teams for further automation with EC2 Systems Manager, and making the product available through AWS Service Catalog
  o Optional: Setting up Amazon EC2 Systems Manager to automate scheduled scanning of approved AMIs for vulnerabilities using Amazon Inspector
• Development teams are responsible for the following:
  o Building the application stacks used in production and meeting any hardening requirements; you can use Amazon EC2 Systems Manager or AWS CodePipeline to build the required AMIs or AWS CloudFormation stacks
  o Optional: Completing any steps that require authorized approval
  o Optional: Providing the resulting approved application stack for deployment via automation or AWS Service Catalog

The solution uses the following AWS services:

• AWS Service Catalog (1)
• Amazon EC2 Systems Manager (2)
• Amazon Inspector (3)
• AWS Marketplace (4)
• AWS CodePipeline (5)
• AWS CodeCommit (6)
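The master workflow described above is started through the SSM StartAutomationExecution API. As a minimal sketch (the Automation document name ApprovedAmiFactory-Master and the parameter keys are hypothetical placeholders, not names defined in this paper), the request a cloud team might assemble could look like this:

```python
import json

def build_automation_request(source_ami, subnet_id, script_bucket):
    """Assemble the StartAutomationExecution request for the master
    AMI-factory workflow. The document name and parameter keys are
    hypothetical; substitute those defined in your own Automation document."""
    return {
        "DocumentName": "ApprovedAmiFactory-Master",  # hypothetical name
        "Parameters": {
            # SSM Automation parameters are passed as lists of strings
            "SourceAmiId": [source_ami],
            "SubnetId": [subnet_id],
            "ScriptBucket": [script_bucket],
        },
    }

request = build_automation_request(
    "ami-0123456789abcdef0", "subnet-0abc1234", "my-hardening-scripts")
print(json.dumps(request, indent=2))

# With AWS credentials configured, the request would be passed to SSM:
#   import boto3
#   ssm = boto3.client("ssm")
#   execution_id = ssm.start_automation_execution(**request)["AutomationExecutionId"]
```

The same payload shape works whether the trigger is a human operator or a scheduled CloudWatch event that invokes the API on the team's behalf.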
Building the Approved AMI

The key to the entire process is generating an AMI that meets all your hardening requirements. The following diagram illustrates the high-level process.

Figure 2: AMI hardening process

Automation Trigger – You can configure Amazon EC2 Systems Manager Automation to be triggered by a user or an event.
1. You can set up an event using Amazon CloudWatch (for example, a monthly timed event) or some other customer event (for example, when code is checked into AWS CodeCommit).

Build Phase – The build phase takes a source AMI as the input and generates a hardened AMI ready for testing.
2. Create instance – An instance is created from the latest available base AMI. This could be an Amazon, AWS Marketplace, or customer-provided AMI. As part of the instance launch, you install the Amazon EC2 Systems Manager (SSM) Agent using user data.
3. Run command – When the instance is up and running, packages and scripts are securely downloaded from an Amazon S3 bucket and executed. This could include operating system updates, operating system hardening scripts, and the installation of new software and configuration changes. These packages and scripts could be anything from custom bash scripts to Ansible playbooks.
4. Build AMI – After the instance has been updated, a new hardened AMI is created.

Validation Phase – Depending on your requirements, you can use custom scripts, third-party security software, or Amazon Inspector to verify that your instances meet your security requirements. Regardless of your choice, the process is the same. If you have implemented a custom scanning solution:
5. Create instance – A new instance is created from the hardened AMI.
6. Run command – When the instance is up and running, validation scripts and tools can be securely downloaded from an S3 bucket and then executed to validate the instance. Alternatively, you can use Qualys, Nessus, or Amazon Inspector to validate the AMI.

Approval Phase – After the scanning is complete, you can inspect the reports before approving the new hardened AMI.
7. You can store the new hardened AMI ID in a data store such as the SSM Parameter Store, which can be used by other automations later in the pipeline.

Notifications – After the Automation job is complete, you can notify your teams.
8. You can use CloudWatch Events to generate email alerts to teams and Amazon Simple Notification Service (Amazon SNS) notifications to trigger other automations.

Considerations for AWS Marketplace AMIs

AWS Marketplace AMIs have a Marketplace product code attached to the AMI. When you create your version of the AMI, this product code is copied across to the new AMI. You need to confirm that any changes you make to the AMI don't affect the stability or performance of the product. Some Marketplace offers come with vendor-designed CloudFormation templates that reduce the effort of establishing clusters and HA configurations. If the product can only be launched from AWS Marketplace using an AWS CloudFormation template, you must update the AMI ID in the template to customize and harden the instance to create a new AMI. You can download and change the template from the AWS Marketplace product page. If the template launch requires any scripting, test the template to ensure that these scripts work as expected.

Distributing the Approved AMI

After you have an approved AMI, you can distribute the AMI across AWS Regions and then share it with any other AWS accounts. To do this, you use an Amazon EC2 Systems Manager Automation document that uses one AWS Lambda function to copy the AMIs across a specified list of Regions, and then another Lambda function to share the copied AMI with the other accounts. The resulting AMI IDs can be stored in the SSM Parameter Store or Amazon DynamoDB for later consumption.
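The copy-and-share step can be sketched as follows. This is a minimal illustration only (the Region list, account IDs, and AMI naming scheme are placeholders), showing the request shapes that the Lambda functions would pass to the EC2 CopyImage and ModifyImageAttribute APIs:

```python
def build_distribution_plan(source_ami_id, source_region,
                            target_regions, share_account_ids):
    """Build per-Region CopyImage requests plus the launch-permission grant
    used to share each copied AMI with consuming accounts. Region names and
    account IDs below are placeholders."""
    copy_requests = [
        {
            "Name": f"approved-ami-{region}",       # hypothetical naming scheme
            "SourceImageId": source_ami_id,
            "SourceRegion": source_region,
            "Region": region,                        # destination Region for the client
        }
        for region in target_regions
    ]
    # Grant launch permission on the copied AMI to each consuming account
    share_request = {
        "Attribute": "launchPermission",
        "LaunchPermission": {
            "Add": [{"UserId": account_id} for account_id in share_account_ids]
        },
    }
    return copy_requests, share_request

copies, share = build_distribution_plan(
    "ami-0123456789abcdef0", "us-east-1",
    ["eu-west-1", "ap-southeast-2"], ["111122223333"])

# Each copy request would be issued against an EC2 client in its target Region:
#   import boto3
#   for req in copies:
#       ec2 = boto3.client("ec2", region_name=req.pop("Region"))
#       new_ami = ec2.copy_image(**req)["ImageId"]
#       ec2.modify_image_attribute(ImageId=new_ami, **share)
```

Separating the "plan" from the API calls keeps the Lambda logic easy to test and makes it straightforward to record the resulting AMI IDs in the Parameter Store afterward.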
Figure 3: Copying and sharing across AWS Regions and accounts

After the AMI is shared with the specified accounts, you can trigger another notification using email or SNS, which could start further automations. If there is a requirement to encrypt the AMIs, the process is similar, except that instead of sharing the AMI with accounts, the AMI must be copied to each account and then encrypted. This increases the number of AMIs to manage, but you can still automate it using the same process.

Note: If you have sourced the AMI from AWS Marketplace, make sure that any accounts you share this new AMI with subscribe to the product in Marketplace.

Distributing and Updating AWS Service Catalog

AWS Service Catalog has two important components: products and portfolios (a collection of products). Both components use JSON/YAML CloudFormation templates. You can apply constraints, tags, and policies to a product or portfolio. AWS Service Catalog supports up to 50 versions per product, and it provides a TagOption library that enables you to create and apply repeatable, consistent tags to a product.

After you build and distribute the AMIs, you can update AWS Service Catalog portfolios across the AWS Regions and accounts. When managing multiple AWS Service Catalog product portfolios across AWS Regions and your organization's AWS accounts, it is good practice to use a script to create portfolios and products. You can store portfolio definitions in a JSON or YAML file and then create portfolios using scripts that target specific accounts and Regions, as shown in the following figure.

Figure 4: Distributing AWS Service Catalog portfolios and products

When the AMI is updated, you can create a new version of an AWS Service Catalog product. To do this, you need to generate a new AWS CloudFormation template for the product containing the updated AMIs. You can handle AWS Regions using the standard CloudFormation mappings sections. You can standardize the template and use a parameter for the AMI ID, and you can enforce the AMI ID by defining a template constraint. Regardless of how you choose to set it up, the process for deploying portfolios and products remains the same.

Continuously Scanning Published AMIs

You need to regularly scan approved AMIs to ensure that they don't contain any newly discovered Common Vulnerabilities and Exposures (CVEs). You can schedule daily inspections of the AMI, as shown in the following architecture diagram. To kick-start the continuous scanning process, you set up a CloudWatch Event that is triggered on a schedule. The event starts a new Automation document execution, as illustrated in the following figure.

Figure 5: Continuous scanning architecture overview

1. Read AMI ID – The SSM Automation document reads the AMI IDs from the Parameter Store.
2. Launch AMI – The SSM Automation document launches EC2 instances with a user-data script and installs the Amazon Inspector agent.
3. Trigger Amazon Inspector assessment – The Automation document starts the Amazon Inspector assessment on the instance.
4. Update assessment execution status – The results of the Amazon Inspector assessment are sent from the agent on the instance back to Amazon Inspector.
5. Update Amazon Inspector assessment result – The Amazon Inspector results are stored in an S3 bucket for later retrieval.
6. Notification of any high/medium/low CVEs – A notification is sent via SNS if any CVEs are found.
7. Terminate the instance – The SSM Automation document terminates the instance.
8. Send notification – After the Amazon Inspector assessment is complete, a message
containing the CVE details is published to an SNS topic You can also set up CloudWatch Events to identify Automation document execution failures Conclusion Setting up an efficient tool chain for a large enterprise can require substantial effort and often hinge s on a few people in a big company Many companies build internal tools and processes us ing code written by one or two developers This approach creates problems as companies grow because it doesn’t scale and usually doesn’t include automation AWS provides a consistent template model which ensures consistency and reduces the risk of failure You can source many AMIs from the Amazon EC2 Console or AWS Marketplace By building and verifying approved hardened AMIs using the solution described in this whitepaper you can tag catalog apply polic ies and distribute AMIs across your organization ArchivedAmazon Web Services – Building a Secure Approved AMI Factory Process Page 12 Document Revisions Date Description November 2017 First publication 1 https://awsamazoncom/servicecatalog/ 2 https://awsamazoncom/ec2/systems manager/ 3 https://awsamazoncom/inspector/ 4 https://awsamazoncom/marketplace/ 5 https://awsamazoncom/codepipeline/ 6 https://awsamazoncom/codecommit/ Notes,General,consultant,Best Practices Building_Big_Data_Storage_Solutions_Data_Lakes_for_Maximum_Flexibility,Building Big Data Storage Solutions (Data Lakes) for Maximum Flexibility July 2017 Archived This document has been archived For the most recent version refer to : https://docsawsamazoncom/whitepapers/latest/ buildingdatalakes/buildingdatalakeawshtml© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s 
products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Introduction 1
Amazon S3 as the Data Lake Storage Platform 2
Data Ingestion Methods 3
Amazon Kinesis Firehose 4
AWS Snowball 5
AWS Storage Gateway 5
Data Cataloging 6
Comprehensive Data Catalog 6
HCatalog with AWS Glue 7
Securing, Protecting, and Managing Data 8
Access Policy Options and AWS IAM 9
Data Encryption with Amazon S3 and AWS KMS 10
Protecting Data with Amazon S3 11
Managing Data with Object Tagging 12
Monitoring and Optimizing the Data Lake Environment 13
Data Lake Monitoring 13
Data Lake Optimization 15
Transforming Data Assets 18
In-Place Querying 19
Amazon Athena 20
Amazon Redshift Spectrum 20
The Broader Analytics Portfolio 21
Amazon EMR 21
Amazon Machine Learning 22
Amazon QuickSight 22
Amazon Rekognition 23
Future Proofing the Data Lake 23
Contributors 24
Document Revisions 24

Abstract

Organizations are collecting and analyzing increasing amounts of data, making it difficult for traditional on-premises solutions for data storage, data management, and analytics to keep pace. Amazon S3 and Amazon Glacier provide an ideal storage solution for data lakes. They provide options such as a breadth and depth of integration with traditional big data analytics tools, as well as innovative query-in-place analytics tools, that help you eliminate costly and complex extract, transform, and load processes. This guide explains each of these options and provides best practices for building your Amazon S3-based data lake.
Introduction

As organizations are collecting and analyzing increasing amounts of data, traditional on-premises solutions for data storage, data management, and analytics can no longer keep pace. Data silos that aren't built to work well together make storage consolidation for more comprehensive and efficient analytics difficult. This, in turn, limits an organization's agility, its ability to derive more insights and value from its data, and its capability to seamlessly adopt more sophisticated analytics tools and processes as its skills and needs evolve.

A data lake, which is a single platform combining storage, data governance, and analytics, is designed to address these challenges. It's a centralized, secure, and durable cloud-based storage platform that allows you to ingest and store structured and unstructured data, and transform these raw data assets as needed. You don't need an innovation-limiting predefined schema. You can use a complete portfolio of data exploration, reporting, analytics, machine learning, and visualization tools on the data. A data lake makes data and the optimal analytics tools available to more users, across more lines of business, allowing them to get all of the business insights they need, whenever they need them.

Until recently, the data lake had been more concept than reality. However, Amazon Web Services (AWS) has developed a data lake architecture that allows you to build data lake solutions cost-effectively using Amazon Simple Storage Service (Amazon S3) and other services. Using the Amazon S3-based data lake architecture capabilities, you can do the following:

• Ingest and store data from a wide variety of sources into a centralized platform
• Build a comprehensive data catalog to find and use data assets stored in the data lake
• Secure, protect, and manage all of the data stored in the data lake
• Use tools and policies to monitor, analyze, and optimize infrastructure and data
• Transform raw data assets in place into optimized, usable formats
• Query data assets in place
• Use a broad and deep portfolio of data analytics, data science, machine learning, and visualization tools
• Quickly integrate current and future third-party data processing tools
• Easily and securely share processed datasets and results

The remainder of this paper provides more information about each of these capabilities. Figure 1 illustrates a sample AWS data lake platform.

Figure 1: Sample AWS data lake platform

Amazon S3 as the Data Lake Storage Platform

The Amazon S3-based data lake solution uses Amazon S3 as its primary storage platform. Amazon S3 provides an optimal foundation for a data lake because of its virtually unlimited scalability. You can seamlessly and nondisruptively increase storage from gigabytes to petabytes of content, paying only for what you use. Amazon S3 is designed to provide 99.999999999% durability. It has scalable performance, ease-of-use features, and native encryption and access control capabilities. Amazon S3 integrates with a broad portfolio of AWS and third-party ISV data processing tools. Key data lake-enabling features of Amazon S3 include the following:

• Decoupling of storage from compute and data processing. In traditional Hadoop and data warehouse solutions, storage and compute are tightly coupled, making it difficult to optimize costs and data processing workflows. With Amazon S3, you can cost-effectively store all data types in their native formats. You can then launch as many or as few virtual servers as you need using Amazon Elastic Compute Cloud (EC2), and you can use AWS analytics tools to process your data. You can optimize your EC2 instances to provide the right ratios of CPU, memory, and bandwidth for best performance.
• Centralized data architecture. Amazon S3 makes it easy to build a multi-tenant environment where many users can bring
their own data analytics tools to a common set of data. This improves both cost and data governance over that of traditional solutions, which require multiple copies of data to be distributed across multiple processing platforms.
• Integration with clusterless and serverless AWS services. Use Amazon S3 with Amazon Athena, Amazon Redshift Spectrum, Amazon Rekognition, and AWS Glue to query and process data. Amazon S3 also integrates with AWS Lambda serverless computing to run code without provisioning or managing servers. With all of these capabilities, you pay only for the actual amounts of data you process or for the compute time that you consume.
• Standardized APIs. Amazon S3 RESTful APIs are simple, easy to use, and supported by most major third-party independent software vendors (ISVs), including leading Apache Hadoop and analytics tool vendors. This allows customers to bring the tools they are most comfortable with and knowledgeable about to help them perform analytics on data in Amazon S3.

Data Ingestion Methods

One of the core capabilities of a data lake architecture is the ability to quickly and easily ingest multiple types of data, such as real-time streaming data and bulk data assets from on-premises storage platforms, as well as data generated and processed by legacy on-premises platforms such as mainframes and data warehouses. AWS provides services and capabilities to cover all of these scenarios.

Amazon Kinesis Firehose

Amazon Kinesis Firehose is a fully managed service for delivering real-time streaming data directly to Amazon S3. Kinesis Firehose automatically scales to match the volume and throughput of streaming data, and requires no ongoing administration. Kinesis Firehose can also be configured to transform streaming data before it's stored in Amazon S3. Its transformation capabilities include compression, encryption, data batching, and Lambda functions.
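The delivery behavior described here maps to the S3 destination configuration of a Firehose delivery stream. The following is a minimal sketch of that request shape as you might pass it to boto3's `create_delivery_stream`; the bucket ARN, role ARN, KMS key ARN, prefix, and buffering values are hypothetical choices, not prescribed settings.

```python
def build_firehose_s3_destination(bucket_arn, role_arn, kms_key_arn):
    """Return an S3 destination configuration for a Firehose delivery stream."""
    return {
        "RoleARN": role_arn,                  # role Firehose assumes to write to S3
        "BucketARN": bucket_arn,
        "Prefix": "raw/",                     # key prefix for delivered objects
        "BufferingHints": {                   # batch records before delivery
            "SizeInMBs": 128,
            "IntervalInSeconds": 300,
        },
        "CompressionFormat": "GZIP",          # queryable by Athena, EMR, Redshift
        "EncryptionConfiguration": {
            "KMSEncryptionConfig": {"AWSKMSKeyARN": kms_key_arn}
        },
    }

destination = build_firehose_s3_destination(
    "arn:aws:s3:::example-datalake-bucket",   # hypothetical bucket
    "arn:aws:iam::111122223333:role/firehose-delivery-role",
    "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
)
```

The buffering hints implement the record concatenation described below: Firehose accumulates up to 128 MB or 5 minutes of records, then delivers them as a single compressed S3 object, reducing transaction costs.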
Kinesis Firehose can compress data before it's stored in Amazon S3. It currently supports GZIP, ZIP, and SNAPPY compression formats. GZIP is the preferred format because it can be used by Amazon Athena, Amazon EMR, and Amazon Redshift. Kinesis Firehose encryption supports Amazon S3 server-side encryption with AWS Key Management Service (AWS KMS) for encrypting delivered data in Amazon S3. You can choose not to encrypt the data, or to encrypt with a key from the list of AWS KMS keys that you own (see the section Data Encryption with Amazon S3 and AWS KMS). Kinesis Firehose can concatenate multiple incoming records and then deliver them to Amazon S3 as a single S3 object. This is an important capability because it reduces Amazon S3 transaction costs and transactions-per-second load. Finally, Kinesis Firehose can invoke Lambda functions to transform incoming source data and deliver it to Amazon S3. Common transformation functions include transforming Apache Log and Syslog formats to standardized JSON and/or CSV formats. The JSON and CSV formats can then be directly queried using Amazon Athena. If using a Lambda data transformation, you can optionally back up raw source data to another S3 bucket, as Figure 2 illustrates.

Figure 2: Delivering real-time streaming data with Amazon Kinesis Firehose to Amazon S3, with optional backup

AWS Snowball

You can use AWS Snowball to securely and efficiently migrate bulk data from on-premises storage platforms and Hadoop clusters to S3 buckets. After you create a job in the AWS Management Console, a Snowball appliance will be automatically shipped to you. After a Snowball arrives, connect it to your local network, install the Snowball client on your on-premises data source, and then use the Snowball client to select and transfer the file directories to the Snowball device. The Snowball client uses AES-256 encryption. Encryption keys are never shipped with the Snowball device, so the data transfer process is highly secure.
After the data transfer is complete, the Snowball's E Ink shipping label will automatically update. Ship the device back to AWS. Upon receipt at AWS, your data is transferred from the Snowball device to your S3 bucket and stored as S3 objects in their original/native format. Snowball also has an HDFS client, so data may be migrated directly from Hadoop clusters into an S3 bucket in its native format.

AWS Storage Gateway

AWS Storage Gateway can be used to integrate legacy on-premises data processing platforms with an Amazon S3-based data lake. The File Gateway configuration of Storage Gateway offers on-premises devices and applications a network file share via an NFS connection. Files written to this mount point are converted to objects stored in Amazon S3 in their original format, without any proprietary modification. This means that you can easily integrate applications and platforms that don't have native Amazon S3 capabilities (such as on-premises lab equipment, mainframe computers, databases, and data warehouses) with S3 buckets, and then use tools such as Amazon EMR or Amazon Athena to process this data.

Additionally, Amazon S3 natively supports DistCp, which is a standard Apache Hadoop data transfer mechanism. This allows you to run DistCp jobs to transfer data from an on-premises Hadoop cluster to an S3 bucket. The command to transfer data typically looks like the following:

hadoop distcp hdfs://source-folder s3a://destination-bucket

Data Cataloging

The earliest challenges that inhibited building a data lake were keeping track of all of the raw assets as they were loaded into the data lake, and then tracking all of the new data assets and versions that were created by data transformation, data processing, and analytics. Thus, an essential component of an Amazon S3-based data lake is the data catalog. The data catalog provides a queryable interface of all assets stored in
the data lake's S3 buckets. The data catalog is designed to provide a single source of truth about the contents of the data lake. There are two general forms of a data catalog: a comprehensive data catalog that contains information about all assets that have been ingested into the S3 data lake, and a Hive Metastore Catalog (HCatalog) that contains information about data assets that have been transformed into formats and table definitions that are usable by analytics tools like Amazon Athena, Amazon Redshift, Amazon Redshift Spectrum, and Amazon EMR. The two catalogs are not mutually exclusive, and both may exist. The comprehensive data catalog can be used to search for all assets in the data lake, and the HCatalog can be used to discover and query data assets in the data lake.

Comprehensive Data Catalog

The comprehensive data catalog can be created by using standard AWS services like AWS Lambda, Amazon DynamoDB, and Amazon Elasticsearch Service (Amazon ES). At a high level, Lambda triggers are used to populate DynamoDB tables with object names and metadata when those objects are put into Amazon S3; then Amazon ES is used to search for specific assets, related metadata, and data classifications. Figure 3 shows a high-level architectural overview of this solution.

Figure 3: Comprehensive data catalog using AWS Lambda, Amazon DynamoDB, and Amazon Elasticsearch Service

HCatalog with AWS Glue

AWS Glue can be used to create a Hive-compatible Metastore Catalog of data stored in an Amazon S3-based data lake. To use AWS Glue to build your data catalog, register your data sources with AWS Glue in the AWS Management Console. AWS Glue will then crawl your S3 buckets for data sources and construct a data catalog using prebuilt classifiers for many popular source formats and data types, including JSON, CSV, Parquet, and more. You may also add your own classifiers, or choose classifiers from the AWS Glue community, to add to your crawls to recognize and catalog other data formats.
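The comprehensive-catalog pattern shown in Figure 3 can be sketched as a Lambda handler that maps S3 put events to catalog records for DynamoDB. This is a minimal illustration, not the solution's actual code; the attribute names are hypothetical, and a real handler would also index each record into Amazon ES for search.

```python
def catalog_items_from_s3_event(event):
    """Map an S3 put-event payload to catalog items suitable for DynamoDB."""
    items = []
    for record in event.get("Records", []):
        s3_part = record["s3"]
        obj = s3_part["object"]
        items.append({
            "bucket": s3_part["bucket"]["name"],   # partition-key candidate
            "key": obj["key"],                     # object name
            "size_bytes": obj.get("size", 0),      # basic metadata
            "event_time": record.get("eventTime", ""),
        })
    return items

# Example S3 event payload (abbreviated to the fields used above)
sample_event = {
    "Records": [{
        "eventTime": "2017-07-01T12:00:00.000Z",
        "s3": {
            "bucket": {"name": "example-datalake-bucket"},
            "object": {"key": "raw/logs/2017-07-01.gz", "size": 1048576},
        },
    }]
}
items = catalog_items_from_s3_event(sample_event)
```

Each returned dict is the item you would pass to a DynamoDB `put_item` call inside the Lambda function.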
The AWS Glue-generated catalog can be used by Amazon Athena, Amazon Redshift, Amazon Redshift Spectrum, and Amazon EMR, as well as by third-party analytics tools that use a standard Hive Metastore Catalog. Figure 4 shows a sample screenshot of the AWS Glue data catalog interface.

Figure 4: Sample AWS Glue data catalog interface

Securing, Protecting, and Managing Data

Building a data lake and making it the centralized repository for assets that were previously duplicated and placed across many silos of smaller platforms and groups of users requires implementing stringent and fine-grained security and access controls, along with methods to protect and manage the data assets. A data lake solution on AWS, with Amazon S3 as its core, provides a robust set of features and services to secure and protect your data against both internal and external threats, even in large, multi-tenant environments. Additionally, innovative Amazon S3 data management features enable automation and scaling of data lake storage management, even when it contains billions of objects and petabytes of data assets.

Securing your data lake begins with implementing very fine-grained controls that allow authorized users to see, access, process, and modify particular assets, and that ensure unauthorized users are blocked from taking any actions that would compromise data confidentiality and security. A complicating factor is that access roles may evolve over various stages of a data asset's processing and lifecycle. Fortunately, Amazon has a comprehensive and well-integrated set of security features to secure an Amazon S3-based data lake.

Access Policy Options and AWS IAM

You can manage access to your Amazon S3 resources using access policy options. By default, all Amazon S3 resources (buckets, objects, and related subresources) are private: only the resource owner, the AWS account that created them, can access the resources.
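For illustration, the resource owner might grant read-only access to data lake assets with a policy like the following, shown here as a Python dict in the standard IAM policy document format. This is a minimal sketch; the bucket name is a hypothetical placeholder.

```python
# Read-only access policy for a hypothetical data lake bucket
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],                          # read objects
            "Resource": "arn:aws:s3:::example-datalake-bucket/*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],                         # list the bucket
            "Resource": "arn:aws:s3:::example-datalake-bucket",
        },
    ],
}
```

Note that `s3:GetObject` applies to object ARNs (`bucket/*`) while `s3:ListBucket` applies to the bucket ARN itself, which is why the two permissions are split into separate statements.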
The resource owner can then grant access permissions to others by writing an access policy. Amazon S3 access policy options are broadly categorized as resource-based policies and user policies. Access policies that are attached to resources are referred to as resource-based policies; example resource-based policies include bucket policies and access control lists (ACLs). Access policies that are attached to users in an account are called user policies. Typically, a combination of resource-based and user policies is used to manage permissions to S3 buckets, objects, and other resources.

For most data lake environments, we recommend using user policies, so that permissions to access data assets can also be tied to user roles and permissions for the data processing and analytics services and tools that your data lake users will use. User policies are associated with the AWS Identity and Access Management (IAM) service, which allows you to securely control access to AWS services and resources. With IAM, you can create IAM users, groups, and roles in accounts, and then attach access policies to them that grant access to AWS resources, including Amazon S3. The model for user policies is shown in Figure 5. For more details and information on securing Amazon S3 with user policies and AWS IAM, see the Amazon Simple Storage Service Developer Guide and the AWS Identity and Access Management User Guide.

Figure 5: Model for user policies

Data Encryption with Amazon S3 and AWS KMS

Although user policies and IAM control who can see and access data in your Amazon S3-based data lake, it's also important to ensure that users who might inadvertently or maliciously manage to gain access to those data assets can't see and use them. This is accomplished by using encryption keys to encrypt and decrypt data assets. Amazon S3 supports multiple encryption options. Additionally, AWS KMS helps scale and simplify management of encryption keys. AWS KMS gives you centralized control over the encryption keys used to protect your data assets. You can create, import, rotate, disable, delete, define usage policies for, and audit the use of encryption keys used to encrypt your data. AWS KMS is integrated with several other AWS services, making it easy to encrypt the data stored in these services with encryption keys. AWS KMS is also integrated with AWS CloudTrail, which provides you with the ability to audit who used which keys, on which resources, and when.

Data lakes built on AWS primarily use two types of encryption: server-side encryption (SSE) and client-side encryption. SSE provides data-at-rest encryption for data written to Amazon S3. With SSE, Amazon S3 encrypts user data assets at the object level, stores the encrypted objects, and then decrypts them as they are accessed and retrieved. With client-side encryption, data objects are encrypted before they are written into Amazon S3. For example, a data lake user could specify client-side encryption before transferring data assets into Amazon S3 from the Internet, or could specify that services like Amazon EMR, Amazon Athena, or Amazon Redshift use client-side encryption with Amazon S3. SSE and client-side encryption can be combined for the highest levels of protection. Given the intricacies of coordinating encryption key management in a complex environment like a data lake, we strongly recommend using AWS KMS to coordinate keys across client- and server-side encryption and across multiple data processing and analytics services.

For even greater levels of data lake data protection, other services like Amazon API Gateway, Amazon Cognito, and IAM can be combined to create a "shopping cart" model for users to check in and check out data lake data assets. This architecture has been created for the Amazon S3-based data lake solution reference architecture, which can be found, downloaded, and deployed at https://aws.amazon.com/answers/big-data/data-lake-solution/.
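The server-side encryption options described above surface as parameters on the S3 PutObject call. The following sketch builds a `put_object` request that asks S3 to encrypt the object with a customer-managed AWS KMS key; the bucket, key, and KMS key ID are hypothetical placeholders.

```python
def build_sse_kms_put(bucket, key, body, kms_key_id):
    """Return a put_object request that stores an object with SSE-KMS."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",  # S3 encrypts at rest using AWS KMS
        "SSEKMSKeyId": kms_key_id,          # customer-managed key to use
    }

request = build_sse_kms_put(
    "example-datalake-bucket",
    "curated/sales/2017-07.parquet",
    b"example object bytes",
    "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
)
```

In a real account you would call `s3.put_object(**request)`; omitting `SSEKMSKeyId` while keeping `ServerSideEncryption="aws:kms"` falls back to the account's default S3 service key in KMS.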
Protecting Data with Amazon S3

A vital function of a centralized data lake is data asset protection, primarily protection against corruption, loss, and accidental or malicious overwrites, modifications, or deletions. Amazon S3 has several intrinsic features and capabilities to provide the highest levels of data protection when it is used as the core platform for a data lake.

Data protection rests on the inherent durability of the storage platform used. Durability is defined as the ability to protect data assets against corruption and loss. Amazon S3 provides 99.999999999% data durability, which is 4 to 6 orders of magnitude greater than that which most on-premises, single-site storage platforms can provide. Put another way, the durability of Amazon S3 is designed so that 10,000,000 data assets can be reliably stored for 10,000 years.

Amazon S3 achieves this durability in all 16 of its global Regions by using multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. Availability Zones offer the ability to operate production applications and analytics services that are more highly available, fault tolerant, and scalable than would be possible from a single data center. Data written to Amazon S3 is redundantly stored across three Availability Zones, and across multiple devices within each Availability Zone, to achieve 99.999999999% durability. This means that even in the event of an entire data center failure, data would not be lost.

Beyond core data protection, another key element is to protect data assets against unintentional and malicious deletion and corruption, whether through users accidentally deleting data assets, applications inadvertently deleting or corrupting data, or rogue actors trying to
tamper with data. This becomes especially important in a large, multi-tenant data lake, which will have a large number of users, many applications, and constant ad hoc data processing and application development. Amazon S3 provides versioning to protect data assets against these scenarios. When enabled, Amazon S3 versioning will keep multiple copies of a data asset. When an asset is updated, prior versions of the asset will be retained and can be retrieved at any time. If an asset is deleted, the last version of it can be retrieved. Data asset versioning can be managed by policies to automate management at large scale, and can be combined with other Amazon S3 capabilities such as lifecycle management, for long-term retention of versions on lower-cost storage tiers such as Amazon Glacier, and Multi-Factor Authentication (MFA) Delete, which requires a second layer of authentication (typically via an approved external authentication device) to delete data asset versions.

Even though Amazon S3 provides 99.999999999% data durability within an AWS Region, many enterprise organizations may have compliance and risk models that require them to replicate their data assets to a second, geographically distant location and build disaster recovery (DR) architectures in that second location. Amazon S3 cross-region replication (CRR) is an integral S3 capability that automatically and asynchronously copies data assets from a data lake in one AWS Region to a data lake in a different AWS Region. The data assets in the second Region are exact replicas of the source data assets that they were copied from, including their names, metadata, versions, and access controls. All data assets are encrypted during transit with SSL to ensure the highest levels of data security.

All of these Amazon S3 features and capabilities, when combined with other AWS services like IAM, AWS KMS, Amazon Cognito, and Amazon API Gateway, ensure that a data lake
using Amazon S3 as its core storage platform will be able to meet the most stringent data security, compliance, privacy, and protection requirements. Amazon S3 includes a broad range of certifications, including PCI DSS, HIPAA/HITECH, FedRAMP, SEC Rule 17a-4, FISMA, the EU Data Protection Directive, and many other global agency certifications. These levels of compliance and protection allow organizations to build a data lake on AWS that operates more securely and with less risk than one built in their on-premises data centers.

Managing Data with Object Tagging

Because data lake solutions are inherently multi-tenant, with many organizations, lines of business, users, and applications using and processing data assets, it becomes very important to associate data assets with all of these entities and to set policies to manage these assets coherently. Amazon S3 has introduced a new capability, object tagging, to assist with categorizing and managing S3 data assets. An object tag is a mutable key-value pair. Each S3 object can have up to 10 object tags. Each tag key can be up to 128 Unicode characters in length, and each tag value can be up to 256 Unicode characters in length. For an example of object tagging, suppose an object contains protected health information (PHI) data; a user, administrator, or application that uses object tags might tag the object using the key-value pair PHI=True or Classification=PHI.

In addition to being used for data classification, object tagging offers other important capabilities. Object tags can be used in conjunction with IAM to enable fine-grained control of access permissions. For example, a particular data lake user can be granted permissions to only read objects with specific tags. Object tags can also be used to manage Amazon S3 data lifecycle policies, which are discussed in the next section of this whitepaper; a data lifecycle policy can contain tag-based filters.
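The PHI example above corresponds to the tag set you would send with an S3 `put_object_tagging` call. A minimal sketch of that request follows; the bucket and key names are hypothetical.

```python
# Request shape for tagging an existing S3 object with the PHI classification
tagging_request = {
    "Bucket": "example-datalake-bucket",       # hypothetical bucket
    "Key": "clinical/records-2017.csv",        # hypothetical object key
    "Tagging": {
        "TagSet": [
            {"Key": "PHI", "Value": "True"},
            {"Key": "Classification", "Value": "PHI"},
        ]
    },
}

# S3 allows at most 10 tags per object, a limit worth checking client-side
assert len(tagging_request["Tagging"]["TagSet"]) <= 10
```

In a real account you would call `s3.put_object_tagging(**tagging_request)`; the same `TagSet` structure can also be supplied at upload time via the `Tagging` parameter of `put_object` (as a URL-encoded string).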
Finally, object tags can be combined with Amazon CloudWatch metrics and AWS CloudTrail logs (also discussed in the next section of this paper) to display monitoring and action-audit data filtered by specific data asset tags.

Monitoring and Optimizing the Data Lake Environment

Beyond the efforts required to architect and build a data lake, your organization must also consider the operational aspects of a data lake, and how to cost-effectively and efficiently operate a production data lake at large scale. Key elements you must consider are monitoring the operations of the data lake, making sure that it meets performance expectations and SLAs, analyzing utilization patterns, and using this information to optimize the cost and performance of your data lake. AWS provides multiple features and services to help optimize a data lake that is built on AWS, including Amazon S3 storage analytics, Amazon CloudWatch metrics, AWS CloudTrail, and Amazon Glacier.

Data Lake Monitoring

A key aspect of operating a data lake environment is understanding how all of the components that comprise the data lake are operating and performing, and generating notifications when issues occur or operational performance falls below predefined thresholds.

Amazon CloudWatch

As an administrator, you need to look at the complete data lake environment holistically. This can be achieved using Amazon CloudWatch. CloudWatch is a monitoring service for AWS Cloud resources and the applications that run on AWS. You can use CloudWatch to collect and track metrics, collect and monitor log files, set thresholds, and trigger alarms. This allows you to automatically react to changes in your AWS resources. CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon S3, Amazon EMR, Amazon Redshift, Amazon DynamoDB, and Amazon Relational Database Service (RDS) database instances, as well as custom metrics generated by other data lake applications and services. CloudWatch provides system-wide visibility into resource utilization, application performance, and operational health. You can use these insights to proactively react to issues and keep your data lake applications and workflows running smoothly.

AWS CloudTrail

An operational data lake has many users and multiple administrators, and may be subject to compliance and audit requirements, so it's important to have a complete audit trail of actions taken and who has performed these actions. AWS CloudTrail is an AWS service that enables governance, compliance, operational auditing, and risk auditing of AWS accounts. CloudTrail continuously monitors and retains events related to API calls across the AWS services that comprise a data lake. CloudTrail provides a history of AWS API calls for an account, including API calls made through the AWS Management Console, AWS SDKs, command-line tools, and most Amazon S3-based data lake services. You can identify which users and accounts made requests or took actions against AWS services that support CloudTrail, the source IP address the actions were made from, and when the actions occurred.

CloudTrail can be used to simplify data lake compliance audits by automatically recording and storing activity logs for actions made within AWS accounts. Integration with Amazon CloudWatch Logs provides a convenient way to search through log data, identify out-of-compliance events, accelerate incident investigations, and expedite responses to auditor requests. CloudTrail logs are stored in an S3 bucket for durability and deeper analysis.

Data Lake Optimization

Optimizing a data lake environment includes minimizing operational costs. By building a data lake on Amazon S3, you pay only for the data storage and data processing services that you actually use, as you use them. You can reduce costs by optimizing how you use these services.
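As a sketch of the threshold-based alerting described in the monitoring section above, the following builds the request shape for a CloudWatch `put_metric_alarm` call on an S3 storage metric. The bucket name, SNS topic, and threshold are hypothetical choices.

```python
def build_bucket_size_alarm(bucket_name, sns_topic_arn, threshold_bytes):
    """Return a put_metric_alarm request that alerts when a bucket grows too large."""
    return {
        "AlarmName": bucket_name + "-size-threshold",
        "Namespace": "AWS/S3",
        "MetricName": "BucketSizeBytes",        # daily S3 storage metric
        "Dimensions": [
            {"Name": "BucketName", "Value": bucket_name},
            {"Name": "StorageType", "Value": "StandardStorage"},
        ],
        "Statistic": "Average",
        "Period": 86400,                        # metric is reported once per day
        "EvaluationPeriods": 1,
        "Threshold": threshold_bytes,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],        # notify operators via SNS
    }

alarm = build_bucket_size_alarm(
    "example-datalake-bucket",
    "arn:aws:sns:us-east-1:111122223333:datalake-ops",
    50 * 1024**4,  # alert past 50 TiB of Standard storage
)
```

A real deployment would pass this dict to `cloudwatch.put_metric_alarm(**alarm)`; crossing the threshold then publishes to the SNS topic so operators can react, for example by tightening lifecycle rules.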
Data asset storage is often a significant portion of the costs associated with a data lake. Fortunately, AWS has several features that can be used to optimize and reduce costs; these include S3 lifecycle management, S3 storage class analysis, and Amazon Glacier.

Amazon S3 Lifecycle Management

Amazon S3 lifecycle management allows you to create lifecycle rules, which can be used to automatically migrate data assets to a lower-cost tier of storage, such as S3 Standard-Infrequent Access or Amazon Glacier, or let them expire when they are no longer needed. A lifecycle configuration, which consists of an XML file, comprises a set of rules with predefined actions that you want Amazon S3 to perform on data assets during their lifetime. Lifecycle configurations can perform actions based on data asset age and data asset names, but can also be combined with S3 object tagging to perform very granular management of data assets.

Amazon S3 Storage Class Analysis

One of the challenges of developing and configuring lifecycle rules for the data lake is gaining an understanding of how data assets are accessed over time. It only makes economic sense to transition data assets to a more cost-effective storage or archive tier if those objects are infrequently accessed. Otherwise, data access charges associated with these more cost-effective storage classes could negate any potential savings. Amazon S3 provides S3 storage class analysis to help you understand how data lake data assets are used. Amazon S3 storage class analysis uses machine learning algorithms on collected access data to help you develop lifecycle rules that will optimize costs.

Seamlessly tiering to lower-cost storage tiers is an important capability for a data lake, particularly as its users plan for, and move to, more advanced analytics and machine learning capabilities. Data lake users will typically ingest raw data assets from many sources and transform those assets into harmonized formats that they can use for ad hoc querying and ongoing business intelligence (BI) querying via SQL.
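A lifecycle rule of the kind described above can be sketched as the request body for `put_bucket_lifecycle_configuration` (boto3 accepts this dict form and serializes it to the XML configuration). The tag filter, day counts, and rule ID below are hypothetical choices, not recommended values.

```python
# Lifecycle configuration: tier tagged raw assets down, then expire them
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-raw-assets",
            "Status": "Enabled",
            "Filter": {"Tag": {"Key": "tier", "Value": "raw"}},  # tag-based filter
            "Transitions": [
                {"Days": 90, "StorageClass": "STANDARD_IA"},  # infrequent access
                {"Days": 365, "StorageClass": "GLACIER"},     # archive tier
            ],
            "Expiration": {"Days": 2555},  # delete after roughly seven years
        }
    ]
}
```

In a real account you would apply it with `s3.put_bucket_lifecycle_configuration(Bucket="example-datalake-bucket", LifecycleConfiguration=lifecycle_configuration)`; the tag filter ties the rule to the object-tagging scheme discussed earlier.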
However, they will also want to perform more advanced analytics using streaming analytics, machine learning, and artificial intelligence. These more advanced analytics capabilities consist of building data models, validating these data models with data assets, and then training and refining these models with historical data. Keeping more historical data assets, particularly raw data assets, allows for better training and refinement of models. Additionally, as your organization's analytics sophistication grows, you may want to go back and reprocess historical data to look for new insights and value. These historical data assets are infrequently accessed and consume a lot of capacity, so they are often well suited to be stored on an archival storage layer.

Another long-term data storage need for the data lake is to keep processed data assets and results in long-term retention for compliance and audit purposes, to be accessed by auditors when needed. Both of these use cases are well served by Amazon Glacier, which is an AWS storage service optimized for infrequently used, cold data and for storing write-once, read-many (WORM) data.

Amazon Glacier

Amazon Glacier is an extremely low-cost storage service that provides durable storage, with security features, for data archiving and backup. Amazon Glacier has the same data durability (99.999999999%) as Amazon S3 and the same integration with AWS security features, and it can be integrated with S3 by using S3 lifecycle management on data assets stored in S3, so that data assets can be seamlessly migrated from S3 to Glacier. Amazon Glacier is a great storage choice when low storage cost is paramount, data assets are rarely retrieved, and retrieval latency of several minutes to several hours is acceptable.

Different types of data lake assets may have different retrieval needs. For example, compliance data may be infrequently accessed and relatively small in
Amazon Glacier allows data lake users to specify retrieval times when the data retrieval request is created, with longer retrieval times leading to lower retrieval costs. For processed data and records that need to be securely retained, Amazon Glacier Vault Lock allows data lake administrators to easily deploy and enforce compliance controls on individual Glacier vaults via a lockable policy. Administrators can specify controls such as write once, read many (WORM) in a Vault Lock policy and lock the policy from future edits. Once locked, the policy becomes immutable, and Amazon Glacier will enforce the prescribed controls to help achieve your compliance objectives and provide an audit trail for these assets using AWS CloudTrail.

Cost and Performance Optimization
You can optimize your data lake for both cost and performance. Amazon S3 provides a very performant foundation for the data lake because its enormous scale provides virtually limitless throughput and extremely high transaction rates. Using Amazon S3 best practices for data asset naming ensures high levels of performance. These best practices can be found in the Amazon Simple Storage Service Developer Guide.

Another area of optimization is to use optimal data formats when transforming raw data assets into normalized formats in preparation for querying and analytics. These optimal data formats can compress data, reducing the capacity needed for storage, and can substantially increase query performance for common Amazon S3-based data lake analytic services.

Data lake environments are designed to ingest and process many types of data, and to store raw data assets for future archival and reprocessing purposes, as well as to store processed and normalized data assets for active querying, analytics, and reporting.
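The Vault Lock controls described above are expressed as a policy document. The sketch below follows the policy grammar shown in the Vault Lock documentation (deny `glacier:DeleteArchive` while `glacier:ArchiveAgeInDays` is below a retention threshold); the account ID, Region, vault name, and retention period are placeholders.

```python
import json

# Sketch: a Vault Lock policy enforcing WORM-style retention by denying
# archive deletion until an archive is retention_days old. The action and
# condition keys follow the Amazon Glacier Vault Lock documentation; the
# account ID, Region, and vault name are illustrative placeholders.

def build_vault_lock_policy(account_id, vault_name, retention_days=365):
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "deny-delete-before-retention",
            "Principal": "*",
            "Effect": "Deny",
            "Action": "glacier:DeleteArchive",
            "Resource": "arn:aws:glacier:us-east-1:%s:vaults/%s"
                        % (account_id, vault_name),
            "Condition": {
                "NumericLessThan": {
                    "glacier:ArchiveAgeInDays": str(retention_days)
                }
            },
        }],
    })

policy = build_vault_lock_policy("123456789012", "compliance-records")
# The policy is attached with initiate_vault_lock and made immutable with
# complete_vault_lock; after that, Glacier enforces it on every request.
```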
One of the key best practices to reduce storage and analytics processing costs, as well as to improve analytics query performance, is to use an optimized data format, particularly a format like Apache Parquet. Parquet is a columnar, compressed storage file format that is designed for querying large amounts of data, regardless of the data processing framework, data model, or programming language. Compared to common raw data log formats like CSV, JSON, or TXT, Parquet can reduce the required storage footprint, improve query performance significantly, and greatly reduce querying costs for AWS services that charge by the amount of data scanned. Amazon tests comparing 1 TB of log data stored in CSV format to the same data in Parquet format showed the following:

• Space savings of 87% with Parquet (1 TB of log data stored in CSV format compressed to 130 GB with Parquet)
• A representative Athena query was 34x faster with Parquet (237 seconds for CSV versus 5.13 seconds for Parquet), and the amount of data scanned for that Athena query was 99% less (1.15 TB scanned for CSV versus 2.69 GB for Parquet)
• The cost to run that Athena query was 99.7% less ($5.75 for CSV versus $0.013 for Parquet)

Parquet has the additional benefit of being an open data format that can be used by multiple querying and analytics tools in an Amazon S3-based data lake, particularly Amazon Athena, Amazon EMR, Amazon Redshift, and Amazon Redshift Spectrum.

Transforming Data Assets
One of the core values of a data lake is that it is the collection point and repository for all of an organization's data assets, in whatever their native formats are. This enables quick ingestion, elimination of data duplication and data sprawl, and centralized governance and management. After the data assets are collected, they need to be transformed into normalized formats to be used by a variety of data analytics and processing tools.
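The percentage figures quoted in the comparison above follow directly from the raw measurements; a quick arithmetic check using only the numbers cited in the text:

```python
# Arithmetic check of the quoted CSV-versus-Parquet comparison for 1 TB of
# log data. Only figures cited in the text are used.

csv_storage_gb, parquet_storage_gb = 1000, 130    # 1 TB vs 130 GB stored
csv_scanned_gb, parquet_scanned_gb = 1150, 2.69   # 1.15 TB vs 2.69 GB scanned
csv_cost, parquet_cost = 5.75, 0.013              # USD per query

space_savings = 1 - parquet_storage_gb / csv_storage_gb
scan_savings = 1 - parquet_scanned_gb / csv_scanned_gb
cost_savings = 1 - parquet_cost / csv_cost

print(f"space saved:  {space_savings:.0%}")       # 87%
print(f"data scanned: {scan_savings:.1%} less")   # 99.8% (quoted as "99% less")
print(f"query cost:   {cost_savings:.1%} less")   # 99.8% (quoted as "99.7% less")
```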
The key to "democratizing" the data, and making the data lake available to the widest number of users of varying skill sets and responsibilities, is to transform data assets into a format that allows for efficient ad hoc SQL querying. As discussed earlier, when a data lake is built on AWS, we recommend transforming log-based data assets into Parquet format. AWS provides multiple services to quickly and efficiently achieve this.

There are a multitude of ways to transform data assets, and the "best" way often comes down to individual preference, skill sets, and the tools available. When a data lake is built on AWS services, there is a wide variety of tools and services available for data transformation, so you can pick the methods and tools that you are most comfortable with. Since the data lake is inherently multitenant, multiple data transformation jobs using different tools can be run concurrently.

The two most common and straightforward methods to transform data assets into Parquet in an Amazon S3-based data lake use Amazon EMR clusters. The first method involves creating an EMR cluster with Hive installed, using the raw data assets in Amazon S3 as input, transforming those data assets into Hive tables, and then writing those Hive tables back out to Amazon S3 in Parquet format. The second, related method is to use Spark on Amazon EMR, with which a typical transformation can be achieved with only about 20 lines of PySpark code.

A third, simpler data transformation method for an Amazon S3-based data lake is to use AWS Glue. AWS Glue is a fully managed extract, transform, and load (ETL) service that can be used directly with data stored in Amazon S3. AWS Glue simplifies and automates difficult and time-consuming data discovery, conversion, mapping, and job scheduling tasks. AWS Glue guides you through the process of transforming and moving your data assets with an easy-to-use console that helps you understand your data sources, transform and prepare these data assets for analytics, and load them reliably from S3 data sources back into S3 destinations.
AWS Glue automatically crawls raw data assets in your data lake's S3 buckets, identifies data formats, and then suggests schemas and transformations, so that you don't have to spend time hand-coding data flows. You can then edit these transformations, if necessary, using the tools and technologies you already know, such as Python, Spark, Git, and your favorite integrated development environment (IDE), and then share them with other AWS Glue users of the data lake. AWS Glue's flexible job scheduler can be set up to run data transformation flows on a recurring basis, in response to triggers, or even in response to AWS Lambda events.

AWS Glue automatically and transparently provisions hardware resources and distributes ETL jobs on Apache Spark nodes, so that ETL run times remain consistent as data volume grows. AWS Glue coordinates the execution of data lake jobs in the right sequence and automatically retries failed jobs. With AWS Glue, there are no servers or clusters to manage, and you pay only for the resources consumed by your ETL jobs.

In-Place Querying
One of the most important capabilities of a data lake built on AWS is the ability to do in-place transformation and querying of data assets without having to provision and manage clusters. This allows you to run sophisticated analytic queries directly on your data assets stored in Amazon S3, without having to copy and load data into separate analytics platforms or data warehouses. You can query S3 data without any additional infrastructure, and you only pay for the queries that you run. This makes the ability to analyze vast amounts of unstructured data accessible to any data lake user who can use SQL, and it is far more cost effective than the traditional method of performing an ETL process, creating a Hadoop cluster or data warehouse, loading the transformed data into these environments, and then running query jobs.
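The recurring Glue transformation flows described above are driven by triggers. Below is a minimal sketch of a nightly scheduled trigger, using the parameter shape of boto3's `glue.create_trigger`; the job name and schedule are assumptions for illustration.

```python
# Sketch: parameters for a recurring AWS Glue trigger that runs a
# transformation job nightly. The dict matches the shape accepted by
# boto3's glue.create_trigger(); the job/trigger names and the hour are
# illustrative placeholders.

def nightly_glue_trigger(job_name, hour_utc=2):
    return {
        "Name": job_name + "-nightly",
        "Type": "SCHEDULED",
        # Glue schedules use cron expressions:
        # minute hour day-of-month month day-of-week year
        "Schedule": "cron(0 %d * * ? *)" % hour_utc,
        "Actions": [{"JobName": job_name}],
        "StartOnCreation": True,
    }

params = nightly_glue_trigger("raw-logs-to-parquet")
# To apply (requires boto3 and credentials):
#   boto3.client("glue").create_trigger(**params)
```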
AWS Glue, as described in the previous sections, provides the data discovery and ETL capabilities, and Amazon Athena and Amazon Redshift Spectrum provide the in-place querying capabilities.

Amazon Athena
Amazon Athena is an interactive query service that makes it easy for you to analyze data directly in Amazon S3 using standard SQL. With a few actions in the AWS Management Console, you can use Athena directly against data assets stored in the data lake and begin using standard SQL to run ad hoc queries and get results in a matter of seconds. Athena is serverless, so there is no infrastructure to set up or manage, and you only pay for the volume of data assets scanned by the queries you run. Athena scales automatically, executing queries in parallel, so results are fast even with large datasets and complex queries. You can use Athena to process unstructured, semistructured, and structured data sets; supported data asset formats include CSV, JSON, and columnar formats such as Apache Parquet and Apache ORC. Athena integrates with Amazon QuickSight for easy visualization. It can also be used with third-party reporting and business intelligence tools by connecting those tools to Athena with a JDBC driver.

Amazon Redshift Spectrum
A second way to perform in-place querying of data assets in an Amazon S3-based data lake is to use Amazon Redshift Spectrum. Amazon Redshift is a large-scale, managed data warehouse service that can be used with data assets in Amazon S3; however, data assets must be loaded into Amazon Redshift before queries can be run. By contrast, Amazon Redshift Spectrum enables you to run Amazon Redshift SQL queries directly against massive amounts of data (up to exabytes) stored in an Amazon S3-based data lake. Amazon Redshift Spectrum applies sophisticated query optimization, scaling processing across thousands of nodes, so results are fast, even with large data sets and complex queries. Redshift Spectrum can directly query a wide variety of data assets stored in the data lake, including CSV, TSV, Parquet, Sequence, and RCFile. Since Redshift Spectrum supports the SQL syntax of Amazon Redshift, you can run sophisticated queries using the same BI tools that you use today. You also have the flexibility to run queries that span both the frequently accessed data assets stored locally in Amazon Redshift and your full data sets stored in Amazon S3. Because Amazon Athena and Amazon Redshift share a common data catalog and common data formats, you can use both Athena and Redshift Spectrum against the same data assets. You would typically use Athena for ad hoc data discovery and SQL querying, and then use Redshift Spectrum for more complex queries and for scenarios where a large number of data lake users want to run concurrent BI and reporting workloads.

The Broader Analytics Portfolio
The power of a data lake built on AWS is that data assets get ingested and stored in one massively scalable, low-cost, performant platform, and that data discovery, transformation, and SQL querying can all be done in place using innovative AWS services like AWS Glue, Amazon Athena, and Amazon Redshift Spectrum. In addition, there is a wide variety of other AWS services that can be directly integrated with Amazon S3 to create any number of sophisticated analytics, machine learning, and artificial intelligence (AI) data processing pipelines. This allows you to quickly solve a wide range of analytics business challenges on a single platform, against common data assets, without having to worry about provisioning hardware and installing and configuring complex software packages before loading data and performing analytics. Plus, you only pay for what you consume. Some of the most common AWS services that can be used with data assets in an Amazon S3-based data lake are described next.
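An ad hoc Athena session over the data lake, as described above, typically involves registering an external table and then submitting SQL. A sketch follows, using Athena's documented DDL style and the parameter shape of boto3's `start_query_execution`; the bucket, database, table, and column names are hypothetical.

```python
# Sketch: an ad hoc Athena query over Parquet data assets in S3. The DDL
# string and the request-parameter shape follow Athena's documented
# interfaces; bucket, database, table, and column names are placeholders.

DDL = """
CREATE EXTERNAL TABLE IF NOT EXISTS weblogs (
    request_time string,
    status int,
    bytes_sent bigint
)
STORED AS PARQUET
LOCATION 's3://my-data-lake/curated/weblogs/'
"""

def athena_request(sql, database, output_s3):
    """Build the parameters for athena.start_query_execution()."""
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {"OutputLocation": output_s3},
    }

params = athena_request(
    "SELECT status, count(*) AS hits FROM weblogs GROUP BY status",
    database="datalake",
    output_s3="s3://my-data-lake/athena-results/",
)
# boto3.client("athena").start_query_execution(**params)
```

Because you pay per byte scanned, pointing the table at Parquet assets (rather than raw CSV) directly lowers the cost of every such query.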
Amazon EMR
Amazon EMR is a highly distributed computing framework used to quickly and easily process data in a cost-effective manner. Amazon EMR uses Apache Hadoop, an open-source framework, to distribute data and processing across an elastically resizable cluster of EC2 instances, and it allows you to use all the common Hadoop tools, such as Hive, Pig, Spark, and HBase. Amazon EMR does all the heavy lifting involved with provisioning, managing, and maintaining the infrastructure and software of a Hadoop cluster, and it is integrated directly with Amazon S3. With Amazon EMR, you can launch a persistent cluster that stays up indefinitely, or a temporary cluster that terminates after the analysis is complete. In either scenario, you only pay for the hours the cluster is up.

Amazon EMR supports a variety of EC2 instance types, encompassing general purpose, compute optimized, memory optimized, and storage/I/O optimized instances (e.g., T2, C4, X1, and I3), and all Amazon EC2 pricing options (On-Demand, Reserved, and Spot). When you launch an EMR cluster (also called a job flow), you choose how many and what type of EC2 instances to provision. Companies with many different lines of business and a large number of users can build a single data lake solution, store their data assets in Amazon S3, and then spin up multiple EMR clusters to share data assets in a multitenant fashion.

Amazon Machine Learning
Machine learning is another important data lake use case. Amazon Machine Learning (Amazon ML) is a service that makes it easy for anyone to use predictive analytics and machine learning technology. Amazon ML provides visualization tools and wizards to guide you through the process of creating ML models without having to learn complex algorithms and technology. After the models are ready, Amazon ML makes it easy to obtain predictions for your application using API operations.
You don't have to implement custom prediction-generation code or manage any infrastructure. Amazon ML can create ML models based on data stored in Amazon S3, Amazon Redshift, or Amazon RDS. Built-in wizards guide you through the steps of interactively exploring your data, training the ML model, evaluating the model quality, and adjusting outputs to align with business goals. After a model is ready, you can request predictions either in batches or by using the low-latency real-time API. As discussed earlier in this paper, a data lake built on AWS greatly enhances machine learning capabilities by combining Amazon ML with large historical data sets that can be cost-effectively stored on Amazon Glacier, but easily recalled when needed to train new ML models.

Amazon QuickSight
Amazon QuickSight is a very fast, easy-to-use business analytics service that makes it easy for you to build visualizations, perform ad hoc analysis, and quickly get business insights from your data assets stored in the data lake, anytime, on any device. You can use Amazon QuickSight to seamlessly discover AWS data sources such as Amazon Redshift, Amazon RDS, Amazon Aurora, Amazon Athena, and Amazon S3, connect to any or all of these data sources and data assets, and get insights from this data in minutes. Amazon QuickSight enables organizations using the data lake to seamlessly scale their business analytics capabilities to hundreds of thousands of users. It delivers fast and responsive query performance by using a robust in-memory engine (SPICE).

Amazon Rekognition
Another innovative data lake service is Amazon Rekognition, a fully managed image recognition service, powered by deep learning, that runs against image data assets stored in Amazon S3. Amazon Rekognition has been built by Amazon's computer vision teams over many years and already analyzes billions of images every day. The easy-to-use Amazon Rekognition API detects thousands of objects and scenes, analyzes faces, compares two faces to measure similarity, and verifies faces in a collection of faces.
With Amazon Rekognition, you can easily build applications that search based on visual content in images, analyze face attributes to identify demographics, implement secure face-based verification, and more. Amazon Rekognition is built to analyze images at scale and integrates seamlessly with data assets stored in Amazon S3, as well as with AWS Lambda and other key AWS services.

These are just a few examples of the powerful data processing and analytics tools that can be integrated with a data lake built on AWS. See the AWS website for more examples and for the latest list of innovative AWS services available for data lake users.

Future-Proofing the Data Lake
A data lake built on AWS can immediately solve a broad range of business analytics challenges and quickly provide value to your business. However, business needs are constantly evolving, AWS and the analytics partner ecosystem are rapidly evolving and adding new services and capabilities, and businesses and their data lake users achieve more experience and analytics sophistication over time. Therefore, it's important that the data lake can evolve seamlessly and nondisruptively as needed.

AWS future-proofs your data lake with a standardized storage solution that grows with your organization by ingesting and storing all of your business's data assets on a platform with virtually unlimited scalability and well-defined APIs, and that integrates with a wide variety of data processing tools. This allows you to add new capabilities to your data lake as you need them, without infrastructure limitations or barriers. Additionally, you can perform agile analytics experiments against data lake assets to quickly explore new processing methods and tools, and then scale the promising ones into production without the need to build new infrastructure, duplicate and/or migrate data, or have users migrate to a new platform.
In closing, a data lake built on AWS allows you to evolve your business around your data assets, and to use these data assets to quickly and agilely drive more business value and competitive differentiation, without limits.

Contributors
The following individuals and organizations contributed to this document:
• John Mallory, Business Development Manager, AWS Storage
• Robbie Wright, Product Marketing Manager, AWS Storage

Document Revisions
July 2017 – First publication

Archived,General,consultant,Best Practices Building_FaultTolerant_Applications_on_AWS,Fault Tolerant Components on AWS November 2019

This paper has been archived. For the latest technical information, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
Failures Shouldn't Be THAT Interesting
Amazon Elastic Compute Cloud
Elastic Block Store
Auto Scaling
Failures Can Be Useful
AWS Global Infrastructure
AWS Regions and Availability Zones
High Availability Through Multiple Availability Zones
Building Architectures to Achieve High Availability
Improving Continuity with Replication Between Regions
High Availability Building Blocks
Elastic IP Addresses
Elastic Load Balancing
Amazon Simple Queue Service
Amazon Simple Storage Service
Amazon Elastic File System and Amazon FSx for Windows File Server
Amazon Relational Database Service
Amazon DynamoDB
Using Serverless Architectures for High Availability
What is Serverless?
Using Continuous Integration and Continuous Deployment/Delivery to Roll out Application Changes
What is Continuous Integration?
What is Continuous Deployment/Delivery?
How Does This Help?
Utilize Immutable Environment Updates
Leverage AWS Elastic Beanstalk
Amazon CloudWatch
Conclusion
Contributors
Further Reading
Document Revisions

Abstract
This whitepaper provides an introduction to building fault-tolerant software systems using Amazon Web Services (AWS). You will learn about the diverse array of AWS services at your disposal, including compute, storage, networking, and database solutions. By leveraging these solutions, you can set up an infrastructure that refreshes automatically, helping you to avoid degradations and points of failure. The AWS platform can be operated with minimal human interaction and up-front financial investment. In addition, you will learn about the AWS Global Infrastructure, an architecture that provides high availability using AWS Regions and Availability Zones. This paper is intended for IT managers and system architects looking to deploy or migrate their solutions to the cloud, using a platform that provides highly available, reliable, and fault-tolerant systems.

Introduction
Fault tolerance is the ability of a system to remain in operation even if some of the components used to build the system fail. Even with very conservative assumptions, a busy e-commerce site may lose thousands of dollars for every minute it is unavailable. This is just one reason why businesses and organizations strive to develop software systems that can survive faults.
Amazon Web Services (AWS) provides a platform that is ideally suited for building fault-tolerant software systems. The AWS platform enables you to build fault-tolerant systems that operate with a minimal amount of human interaction and up-front financial investment.

Failures Shouldn't Be THAT Interesting
The ideal state in a traditional on-premises data center environment tends to be one where failure notifications are delivered reliably to a staff of administrators who are ready to take quick and decisive actions to solve the problem. Many organizations are able to reach this state of IT nirvana; however, doing so typically requires extensive experience, up-front financial investment, and significant human resources.

Amazon Web Services provides services and infrastructure to build reliable, fault-tolerant, and highly available systems in the cloud. As a result, potential failures can be dealt with automatically by the system itself, and are therefore fairly uninteresting events. AWS gives you access to a vast amount of IT infrastructure: compute, storage, networking, and databases, to name a few (for example, Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Block Store (Amazon EBS), and Auto Scaling), that you can allocate automatically (or nearly automatically) to account for almost any kind of failure. You are charged only for resources that you actually use, so there is no up-front financial investment.

Amazon Elastic Compute Cloud
Amazon Elastic Compute Cloud (Amazon EC2) provides computing resources, literally server instances, that you use to build and host your software systems. Amazon EC2 is a natural entry point to AWS for your application development. You can build a highly reliable and fault-tolerant system using multiple EC2 instances and ancillary services such as Auto Scaling and Elastic Load Balancing.
Amazon Web Services – Fault Tolerant Components on AWS (Archived)

On the surface, EC2 instances are very similar to traditional hardware servers. EC2 instances use familiar operating systems like Linux or Windows, and as such, they can accommodate nearly any kind of software that runs on those operating systems. EC2 instances have IP addresses, so the usual methods of interacting with a remote machine (for example, SSH or RDP) can be used.

The template that you use to define your service instances is called an Amazon Machine Image (AMI), which contains a defined software configuration (that is, operating system, application server, and applications). From an AMI, you launch an instance, which is a copy of the AMI running as a virtual server in the cloud. You can launch multiple instances of an AMI, as shown in the following figure.

Instance types in Amazon EC2 are essentially hardware archetypes: you choose an instance type that matches the amount of memory (RAM) and computing power (number of CPUs) that you need for your application. Your instances keep running until you stop or terminate them, or until they fail. If an instance fails, you can launch a new one from the AMI.

Amazon publishes many AMIs that contain common software configurations for public use. In addition, members of the AWS developer community have published their own custom AMIs. You can also create your own custom AMI, enabling you to quickly and easily start new instances that contain the software configuration you need.

The first step towards building fault-tolerant applications on AWS is to decide how the AMIs will be configured. There are two distinct mechanisms to do this: dynamic and static. A dynamic configuration starts with a base AMI and, on launch, deploys the software and data required by the application. A static configuration deploys the required software and data to the base AMI, and then uses this to create an application-specific AMI that is used for application deployment. Take the following factors into account when deciding to use either a dynamic or static configuration:
• The frequency of application changes: a dynamic configuration offers greater flexibility for frequent application changes.
• Speed of launch: an application installed on the AMI reduces the time between launch and when the instance becomes available. If this is important, then a static configuration minimizes the launch time.
• Audit: when an audit trail of the application configuration is required, a static configuration combined with a retention policy for AMIs allows past configurations to be recreated.

It is possible to mix dynamic and static configurations. A common pattern is for the application software to be deployed on the AMI, while data is deployed once the instance is launched.

Your application should be comprised of at least one AMI that you have configured. To start your application, launch the required number of instances from your AMI. For example, if your application is a website or a web service, your AMI could include a web server, the associated static content, and the code for the dynamic pages. As a result, after you launch an instance from this AMI, your web server starts and your application is ready to accept requests.

When the required fleet of instances from the AMI is launched, an instance failure can be addressed by launching a replacement instance that uses the same AMI. This can be done through an API invocation, scriptable command line tools, or the AWS Management Console. Additionally, an Auto Scaling group can be configured to automatically replace failed or degraded instances. The ability to quickly replace a problematic instance is just the first step towards fault tolerance. With AWS, an AMI lets you launch a new instance based on the same template, allowing you to quickly recover from failures or problematic behaviors. To minimize downtime, you have the option to keep a spare instance running, ready to take over in the event of a failure. This can be done efficiently using elastic IP addresses.
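The failover recipe above (launch a replacement from the same AMI, then remap the Elastic IP) can be sketched as two API call payloads, following the parameter shapes of boto3's `run_instances` and `associate_address`; all IDs below are placeholders.

```python
# Sketch: replace a failed instance and redirect traffic to it. Parameter
# shapes follow boto3's ec2.run_instances() and ec2.associate_address();
# the AMI ID, allocation ID, instance type, and instance IDs are placeholders.

def replacement_launch_params(ami_id, instance_type="m5.large"):
    """Launch one replacement instance from the same template (AMI)."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
    }

def remap_eip_params(allocation_id, new_instance_id):
    """Point the existing Elastic IP at the replacement instance."""
    return {
        "AllocationId": allocation_id,   # the Elastic IP stays allocated to you
        "InstanceId": new_instance_id,   # traffic now reaches the replacement
    }

launch = replacement_launch_params("ami-0123456789abcdef0")
remap = remap_eip_params("eipalloc-0abc", "i-0def")
# ec2 = boto3.client("ec2")
# new_id = ec2.run_instances(**launch)["Instances"][0]["InstanceId"]
# ec2.associate_address(AllocationId=remap["AllocationId"], InstanceId=new_id)
```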
Failover to a replacement instance, or to a running spare instance, by remapping your elastic IP address to the new instance.

Elastic Block Store
Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances. EBS volumes can be attached to a running EC2 instance and can persist independently from the instance. EBS volumes are automatically replicated within an Availability Zone, providing high durability and availability, along with protection from component failure.

Amazon EBS is especially suited for applications that require a database, a file system, or access to raw block storage. Typical use cases include big data analytics, relational or NoSQL databases, stream or log processing applications, and data warehousing applications. Amazon EBS and Amazon EC2 are often used in conjunction with one another when building a fault-tolerant application on the AWS platform. Any application data that needs to be persisted should be stored on EBS volumes. If the EC2 instance fails and needs to be replaced, the EBS volume can simply be attached to the new EC2 instance. Since this new instance is essentially a duplicate of the original, there should be no loss of data or functionality.

Amazon EBS volumes are highly reliable, but to further mitigate the possibility of a failure, backups of these volumes can be created using a feature called snapshots. A robust backup strategy will include an interval (the time between backups, generally daily, but perhaps more frequent for certain applications), a retention period (dependent on the application and the business requirements for rollback), and a recovery plan. To ensure high durability for backups of EBS volumes, snapshots are stored in Amazon Simple Storage Service (Amazon S3). EBS snapshots are used to create new Amazon EBS volumes, which are an exact replica of the original volume at the time the snapshot was taken.
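The interval and retention parameters of the backup strategy above can be sketched as a small retention check. This is illustrative logic only; in practice, Amazon Data Lifecycle Manager or AWS Backup automates exactly this kind of pruning.

```python
from datetime import date, timedelta

# Sketch of the backup-strategy parameters described above: a daily snapshot
# interval and a fixed retention period. Given a list of snapshot dates,
# report which snapshots fall outside the retention window. Illustrative
# only; Amazon DLM or AWS Backup automates this in practice.

def expired_snapshots(snapshot_dates, retention_days, today):
    """Return snapshots older than the retention window, oldest first."""
    cutoff = today - timedelta(days=retention_days)
    return sorted(d for d in snapshot_dates if d < cutoff)

today = date(2019, 11, 30)
daily = [today - timedelta(days=n) for n in range(10)]  # ten daily snapshots
old = expired_snapshots(daily, retention_days=7, today=today)
print(len(old))  # 2 snapshots fall strictly outside the 7-day window
```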
Because snapshots represent the on-disk state of the application, care must be taken to flush in-memory data to disk before initiating a snapshot. EBS snapshots are created and managed using the API, the AWS Management Console, Amazon Data Lifecycle Manager (DLM), or AWS Backup.

Auto Scaling
An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management. In the context of a highly available solution, using an Auto Scaling group ensures that an EC2 fleet provides the required capacity. Continuous monitoring of the fleet's instance health metrics allows failures to be automatically detected and replacement instances to be launched when required. Where the required size of the EC2 fleet varies, Auto Scaling can adjust the capacity using a number of criteria, including schedules and target tracking against the value of a specific metric. Multiple scaling criteria can be applied, providing a flexible mechanism to manage EC2 capacity.

The requirements of your application and high availability (HA) strategy determine the number of Auto Scaling groups needed. For an application that uses EC2 capacity spread across one or more Availability Zones (AZs), a single Auto Scaling group suffices: capacity launches where available, and the Auto Scaling group replaces instances as required, but the placement within the selected AZs is arbitrary. If the HA strategy requires more precise control of the distribution of EC2 capacity, then using an Auto Scaling group per AZ is the appropriate solution. An example is an application with two instances, production and failover, that need to be deployed in separate Availability Zones. Using two Auto Scaling groups to manage the capacity of each application instance separately ensures that they do not both have capacity in the same Availability Zone.
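The two-group pattern just described can be sketched as one Auto Scaling group definition per AZ. The dictionary shape loosely follows boto3's `create_auto_scaling_group` (which additionally requires a launch template or launch configuration); the application name and AZs are placeholders.

```python
# Sketch: the one-Auto-Scaling-group-per-AZ pattern described above, so that
# production and failover capacity never land in the same AZ. The dict shape
# loosely follows boto3's autoscaling.create_auto_scaling_group(); a launch
# template or launch configuration would also be required in a real call.
# Names and AZs are placeholders.

def asg_per_az(app_name, azs):
    groups = []
    for az in azs:
        groups.append({
            "AutoScalingGroupName": "%s-%s" % (app_name, az),
            "AvailabilityZones": [az],   # pin this group to a single AZ
            "MinSize": 1,
            "MaxSize": 1,
            "DesiredCapacity": 1,        # one instance, auto-replaced on failure
            "HealthCheckType": "EC2",
        })
    return groups

groups = asg_per_az("payments", ["us-east-1a", "us-east-1b"])
# Each group independently detects failures and relaunches in its own AZ.
```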
Zone Failures Can Be Useful Software systems degrade over time This is due in part to : • Softwar e leak ing memory and/or resources includ ing software that you wrote and software that you depend on ( such as application frameworks operating systems and device drivers) • File systems fragment ing over time which impact s performance • Hardware ( particular ly storage) devices physically degrad ing over time Disciplined software engineering can mitigate some of these problems but ultimately even the most sophisticated software system depends on a number of components that are out of its control ( such as the operating system firmware and hardware) Eventually some combination of hardware system software and your software will cause a failure and interrupt the availability of your application In a traditional IT environment hardware can be regularly mai ntained and serviced but there are practical and financial limits to how aggressively this can be done However with Amazon EC2 you can terminate and recreate the resources you need at will An application that takes full advantage of the AWS platform c an be refreshed periodically with new server instances This ensures that any potential degradation does not adversely affect your system as a whole Essentially y ou are using what would be considered a failure ( such as a server termination) as a forcing function to refresh this resource ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 6 Using this approach an AWS application is more accurately defined as the service it provides to its clients rather than the server instance(s) it is comprised of With this mindset server instances become immaterial an d even disposable AWS Global Infrastructure To build fault tolerant applications in AWS it is important to understand the architecture of the AWS Global Infrastructure The AWS Global Infrastructure is built around Regions and Availability Zones AWS Regions and Availability Zones An AWS Region is a 
geographical area of the world. Each AWS Region is a collection of data centers that are logically grouped into what we call Availability Zones. AWS Regions provide multiple (typically three) physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking.1 Each AZ consists of one or more physical data centers. Availability Zones are designed for physical redundancy and provide resilience, enabling uninterrupted performance even in the event of power outages, internet downtime, floods, and other natural disasters.

Note: Refer to the Global Infrastructure page for current information about AWS Regions and Availability Zones, or our interactive map.

High Availability Through Multiple Availability Zones

Availability Zones are connected to each other with fast, private fiber-optic networking, enabling you to architect applications that automatically fail over between AZs without interruption. These AZs offer AWS customers an easier and more effective way to design and operate applications and databases, making them more highly available, fault tolerant, and scalable than traditional single data center or multi data center infrastructures.

1 Asia Pacific (Osaka) is a Local Region with a single AZ that is available to select AWS customers to provide regional redundancy in Japan.

Building Architectures to Achieve High Availability

You can achieve high availability by deploying your applications to span multiple Availability Zones. For each application tier (that is, web, application, and database), placing multiple redundant instances in distinct AZs creates a multi-site solution. Using Elastic Load Balancing (ELB), you get improved fault tolerance, as the ELB service automatically balances traffic across multiple instances in multiple Availability Zones, ensuring that only healthy instances receive traffic. The desired goal is to
have an independent copy of each application stack in two or more AZs, with automated traffic routing to healthy resources.

Improving Continuity with Replication Between Regions

In addition to replicating applications and data across multiple data centers in the same Region using Availability Zones, you can also choose to increase redundancy and fault tolerance further by replicating data between geographic Regions. You can do so using both private, high-speed networking and public internet connections to provide an additional layer of business continuity, or to provide low-latency access across the globe.

High Availability Building Blocks

Amazon EC2 and its related services provide a powerful yet economical platform upon which to deploy and build your applications. However, they are just one aspect of the Amazon Web Services platform. AWS offers a number of other services that can be incorporated into your application development and deployments to increase the availability of your applications.

Elastic IP Addresses

An Elastic IP address is a static public IPv4 address allocated to your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. Elastic IPs do not change and remain allocated to your account until you delete them. An Elastic IP address is allocated from the public AWS IPv4 network ranges in a specific region. If your instance does not have a public IPv4 address, you can associate an Elastic IP address with your instance to enable communication with the internet; for example, to connect to your instance from your local computer. Elastic IP addresses are mapped via an Internet Gateway to the private address of the instance. Once you associate an Elastic IP address with an instance, it remains associated until you remove the association or associate the address with another resource. Elastic IP
addresses are one method for handling failover, especially for legacy-type applications that cannot be scaled horizontally. In the event of a failure of a single server with an associated Elastic IP address, the failover mechanism can re-associate the Elastic IP address to a replacement instance, ideally in an automated fashion. While this scenario may involve downtime for the application, the downtime may be limited to the time it takes to detect the failure and re-associate the Elastic IP address to the replacement resource. Where higher availability levels are required, you can use multiple instances and an Elastic Load Balancer.

Elastic Load Balancing

Elastic Load Balancing is an AWS service that automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions, and ensures only healthy targets receive traffic. It can handle the varying load of your application traffic in a single Availability Zone or across multiple AZs, and supports the ability to load balance across AWS and on-premises resources in the same load balancer. Elastic Load Balancing offers three types of load balancers that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault tolerant.

Application Load Balancer

The Application Load Balancer is best suited for load balancing HTTP and HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers. Operating at the individual request level (Layer 7), the Application Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request.

Network Load Balancer

Network Load Balancer is best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Transport Layer
Security (TLS) traffic where extreme performance is required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon VPC and is capable of handling millions of requests per second while maintaining ultra-low latencies. Network Load Balancer is also optimized to handle sudden and volatile traffic patterns.

Benefits of Using Elastic Load Balancing

• Highly available – Elastic Load Balancing automatically distributes incoming traffic across multiple targets (Amazon EC2 instances, containers, IP addresses, and Lambda functions) in multiple AZs and ensures only healthy targets receive traffic. The Amazon Elastic Load Balancing Service Level Agreement commitment is 99.99% availability for a load balancer.
• Secure – Elastic Load Balancing works with Amazon VPC to provide robust security features, including integrated certificate management, user authentication, and SSL/TLS decryption. Together they give you the flexibility to centrally manage TLS settings and offload CPU-intensive workloads from your applications.
• Elastic – Elastic Load Balancing is capable of handling rapid changes in network traffic patterns. Additionally, deep integration with Auto Scaling ensures sufficient application capacity to meet varying levels of application load without requiring manual intervention.
• Flexible – Elastic Load Balancing also allows you to use IP addresses to route requests to application targets. This offers you flexibility in how you virtualize your application targets, allowing you to host more applications on the same instance. It also enables these applications to have individual security groups and use the same network port, further simplifying inter-application communication in microservice-based architectures.
• Robust monitoring and auditing – Elastic Load Balancing allows you to monitor your applications and their performance in real time with Amazon CloudWatch metrics, logging, and request tracing. This improves visibility into the behavior
of your applications, uncovering issues and identifying performance bottlenecks in your application stack at the granularity of an individual request.
• Hybrid load balancing – Elastic Load Balancing offers the ability to load balance across AWS and on-premises resources using the same load balancer. This makes it easy for you to migrate, burst, or fail over on-premises applications to the cloud.

Amazon Simple Queue Service

Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work. Using Amazon SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

Messages are stored in queues that you create. Each queue is defined as a URL, so it can be accessed by any server that has access to the internet, subject to the Access Control List (ACL) of that queue. Use Amazon SQS to ensure that your queue is always available; any messages that you send to a queue are retained for up to 14 days.

SQS offers two types of message queues. Standard queues offer maximum throughput and at-least-once delivery with best-effort ordering. SQS FIFO queues offer high throughput and are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.

Using Amazon SQS with Other AWS Infrastructure Web Services

Amazon SQS message queuing can be used with other AWS services such as Amazon Redshift, Amazon DynamoDB, Amazon Relational Database Service (Amazon RDS), Amazon EC2, Amazon Elastic Container Service (Amazon ECS), AWS Lambda, and Amazon S3 to make distributed applications more scalable and
reliable. Common design patterns include:

• Work Queues – Decouple components of a distributed application that may not all process the same amount of work simultaneously
• Buffer and Batch Operations – Add scalability and reliability to your architecture and smooth out temporary volume spikes without losing messages or increasing latency
• Request Offloading – Move slow operations off of interactive request paths by enqueuing the request
• Fanout – Combine SQS with Simple Notification Service (SNS) to send identical copies of a message to multiple queues in parallel
• Priority – Use separate queues to provide prioritization of work
• Scalability – Scale up the send or receive rate of messages by adding another process, since message queues decouple your processes
• Resiliency – Continue adding messages to the queue even if a process that is reading messages from the queue fails; once the system recovers, the queue can be processed, since message queues decouple components of your system

Amazon Simple Storage Service

Amazon S3 is an object storage service that provides highly durable, secure, fault tolerant data storage. AWS is responsible for maintaining availability and fault tolerance; you simply pay for the storage that you use. Data is stored as objects within resources called buckets, and a single object can be up to 5 terabytes in size. Behind the scenes, Amazon S3 stores objects redundantly on multiple devices across multiple facilities in an AWS Region, so even in the rare case of a failure in an AWS data center, you will still have access to your data. Amazon S3 is designed for 99.999999999% (11 9's) of durability and stores data for millions of applications for companies globally. Amazon S3 is ideal for any kind of object data storage requirements that your application might have. Amazon S3 can be accessed using the AWS Management Console, by a URL, through a Command Line Interface (CLI), or via API using an
SDK with your programming language of choice.

The versioning feature in Amazon S3 allows you to retain prior versions of objects stored in S3, and also protects against accidental deletions initiated by a misbehaving application. Versioning can be enabled for any of your S3 buckets. You can also use either S3 Cross-Region Replication (CRR) to replicate objects to another Region, or Same-Region Replication (SRR) to replicate objects within the same AWS Region, for reduced latency, security, disaster recovery, and other use cases.

In addition to providing highly available storage, Amazon S3 provides multiple storage classes to help reduce storage costs while still providing high availability and durability. Using S3 Lifecycle policies, objects can be transferred to lower cost storage. If you are unsure of your data access patterns, you can select S3 Intelligent-Tiering, which automatically moves your data based on changing access patterns.

By using Amazon S3, you can delegate the responsibility of one critical aspect of fault tolerance, data storage, to AWS.

Amazon Elastic File System and Amazon FSx for Windows File Server

While Amazon S3 is ideal for applications that can access data as objects, many applications store and access data as files. Amazon Elastic File System (Amazon EFS) and Amazon FSx for Windows File Server (Amazon FSx) are fully managed AWS services that provide file-based storage for applications.

Amazon EFS provides a simple, scalable, elastic file system for Linux-based workloads. File systems grow and shrink on demand and can scale to petabytes of capacity. Amazon EFS is a regional service, storing data within and across multiple Availability Zones for high availability and durability. Applications that need access to shared storage from multiple EC2 instances can store data reliably and securely on Amazon EFS.

Amazon FSx provides a fully managed, native Microsoft Windows file system so you can move your
Windows-based applications that require file storage to AWS. With Amazon FSx, you can launch highly durable and available Windows file systems that can be accessed from up to thousands of application instances. Amazon FSx is highly available within a single AZ. For applications that require additional levels of availability, Amazon FSx supports the use of Distributed File System (DFS) Replication to enable multi-AZ deployments. Using either Amazon EFS or Amazon FSx, you can provide highly available, fault tolerant file storage to your applications running in AWS.

Amazon Relational Database Service

Amazon Relational Database Service guides you through the setup, operation, and scaling of a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. It frees you to focus on your applications, so you can give them the fast performance, high availability, security, and compatibility they need. Amazon RDS is available on several database instance types, optimized for memory, performance, or I/O. You can choose from six familiar database engines, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. Amazon RDS has many features that enhance reliability for critical production databases, including automated backups, database snapshots, and automatic host replacement.

• Administration – Go from project conception to deployment using the Amazon RDS Management Console, the AWS RDS CLI, or API calls to access the capabilities of a production-ready relational database in minutes. There is no need for infrastructure provisioning or for installing and maintaining database software.
• Scalability – Scale your database compute and storage resources using the console or an API call, often with no downtime. Many Amazon RDS engine types allow you to launch one or more Read Replicas to
offload read traffic from your primary database instance.
• Availability – Run on the same highly reliable infrastructure used by other Amazon Web Services. Use Amazon RDS replication to enhance availability and reliability for production workloads across Availability Zones. Use the Multi-AZ deployment option to run mission-critical workloads with high availability and built-in automated failover from your primary database to a synchronously replicated secondary database.
• Security – Control network access to your database by running your database instances in an Amazon VPC, which enables you to isolate your database instances and to connect to your existing IT infrastructure through an industry-standard encrypted IPsec VPN. Many Amazon RDS engine types offer encryption at rest and encryption in transit.
• Cost – Pay for only the resources you actually consume, with no up-front or long-term commitments. You have the flexibility to use on-demand resources or utilize Reserved Instance pricing to further reduce your costs.

Amazon DynamoDB

Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-region, multi-master database with built-in security, backup and restore, and in-memory caching for internet-scale applications.

Amazon DynamoDB is purpose-built for mission-critical workloads. DynamoDB helps secure your data with encryption at rest by default, and continuously backs up your data for protection, with guaranteed reliability through a service level agreement. Point-in-time recovery (PITR) helps protect DynamoDB tables from accidental write or delete operations. PITR provides continuous backups of your DynamoDB table data, and you can restore that table to any point in time, up to the second, during the preceding 35 days. With Amazon DynamoDB, there are no servers to provision, patch, or manage, and no software to install,
maintain, or operate. DynamoDB automatically scales tables to adjust for capacity and maintains performance with zero administration. Availability and fault tolerance are built in, eliminating the need to architect your applications for these capabilities.

Using Serverless Architectures for High Availability

What is Serverless?

Serverless is the native architecture of the cloud that enables you to shift more of your operational responsibilities to AWS, increasing your agility and innovation. Serverless allows you to build and run applications and services without thinking about servers. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning. Serverless provides built-in availability and fault tolerance; you don't need to architect for these capabilities, since the services running the application provide them by default.

Central to many serverless designs is AWS Lambda. AWS Lambda automatically runs your code on highly available, fault tolerant infrastructure spread across multiple Availability Zones in a single region, without requiring you to provision or manage servers. With Lambda, you can run code for virtually any type of application or backend service with no administration. Upload your code, and Lambda will run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services, or call it directly from any web or mobile app. AWS Lambda automatically scales your application by running code in response to each trigger. Your code runs in parallel and processes each trigger individually, scaling precisely with the size of the workload.

In addition to AWS Lambda, other AWS serverless technologies include:

• AWS Fargate – a serverless compute engine for containers
• Amazon DynamoDB – a fast and flexible NoSQL database
• Amazon Aurora Serverless – a MySQL-compatible relational
database
• Amazon API Gateway – a service to create, publish, monitor, and secure APIs
• Amazon S3 – secure, durable, and highly scalable object storage
• Amazon Elastic File System – simple, scalable, elastic file storage
• Amazon SNS – a fully managed pub/sub messaging service
• Amazon SQS – a fully managed message queuing service

Note: While a full discussion of serverless capabilities is outside the scope of this paper, you may find additional information about Serverless Computing on our website.

Using Continuous Integration and Continuous Deployment/Delivery to Roll Out Application Changes

What is Continuous Integration?

Continuous integration (CI) is a software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run.

What is Continuous Deployment/Delivery?

Continuous deployment/delivery (CD) is a software development practice where code changes are automatically built, tested, and prepared for production release. It expands on continuous integration by deploying all code changes to a testing environment, a production environment, or both after the build stage has been completed. Continuous delivery can be fully automated with a workflow process, or partially automated with manual steps at critical points.

How Does This Help?
Continuous integration and continuous deployment/delivery tools remove the human factor from rolling out application changes and instead add as much automation as possible. Prior to CI/CD tools, scripts were used that required manual intervention or a manual kick-off process. Many deployments occurred during weekends to minimize potential disruptions to the business, and could be quickly rolled back if issues arose. Deployment steps were usually documented in runbooks.

AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy are part of the CI/CD services that DevOps teams use to deploy applications or application changes in their environment. For example, a single pipeline can roll out application changes in one Region and, if successful, the same pipeline rolls out the changes in other Regions. With a streamlined CI/CD pipeline, developers can deploy application changes that are transparent to end users. These pipelines can be leveraged to perform multi-region deployments or to quickly deploy a bug fix. If a fault occurs in one environment, users can be redirected to another environment (or Region), and updates can be rolled out to the faulty environment. Once the fault has been addressed, you can redirect users back to the original environment.

Utilize Immutable Environment Updates

An immutable environment is a type of infrastructure in which resources (that is, servers) are never modified once they have been deployed. Typically, these servers are built from a common image (such as an Amazon Machine Image). The benefits of this type of environment are increased reliability, consistency, and a more predictable environment. In AWS, this can be achieved by creating the infrastructure using AWS CloudFormation or the AWS Cloud Development Kit (CDK).

Leverage AWS Elastic Beanstalk

With AWS Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having to learn about the infrastructure that runs those applications. Elastic Beanstalk reduces management complexity without
restricting choice or control. After you upload your application, Elastic Beanstalk will automatically handle the details of capacity provisioning, load balancing, scaling, and application health monitoring.

Amazon CloudWatch

Amazon CloudWatch is a fully managed monitoring service for AWS resources and the applications that you run on top of them. You can use Amazon CloudWatch to collect and store metrics on a durable platform that is separate and independent from your own infrastructure. You can use these metrics to measure performance and response times, and also capture custom metrics for your applications. These metrics can be used to drive additional actions, such as triggering Auto Scaling, notification fan-out, or automated tasks. To capture custom metrics, you can publish your own metrics to CloudWatch through a simple API request.

Conclusion

Amazon Web Services provides services and infrastructure to build reliable, fault tolerant, and highly available systems in the cloud. Services that provide basic infrastructure, such as Amazon EC2 and Amazon EBS, provide specific features such as Availability Zones, Elastic IP addresses, and snapshots. In particular, Amazon EBS provides durable block storage for applications running on EC2, and an Auto Scaling group ensures your Amazon EC2 fleet operates at the required capacity, automatically detects failures, and replaces instances as needed. Higher level building blocks, such as Amazon S3, provide highly scalable, globally accessible object storage with 11 9's of durability. For durable, fault tolerant file storage for applications running in AWS, you can use Amazon EFS and Amazon FSx for Windows. The wide spectrum of building blocks available gives you the flexibility and capability to set up the reliable and highly available environment you need, and to pay only for the services you consume.

Contributors

Contributors to this document include:

• Jeff Bartley,
Solutions Architect, Amazon Web Services
• Lewis Foti, Solutions Architect, Amazon Web Services
• Bert Zahniser, Solutions Architect, Amazon Web Services
• Muhammad Mansoor, Solutions Architect, Amazon Web Services

Further Reading

Amazon API Gateway, Amazon Aurora, Amazon Aurora Serverless, Amazon CloudWatch, Amazon DynamoDB, Amazon Elastic Block Store, Amazon Elastic Compute Cloud, Amazon Elastic Container Service, Amazon Elastic File System, Amazon FSx for Windows File Server, Amazon Machine Image, Amazon Redshift, Amazon Relational Database Service, Amazon Simple Notification Service, Amazon Simple Queue Service, Amazon Simple Storage Service, Amazon Virtual Private Cloud, AWS Auto Scaling, AWS Cloud Development Kit, AWS CloudFormation, AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline, AWS Command Line Interface, AWS Elastic Beanstalk, AWS Fargate, AWS Global Infrastructure (Regions and Availability Zones), AWS Lambda, AWS Management Console, Elastic IP Addresses, Elastic Load Balancing, Serverless

Document Revisions

November 2019 – Refreshed the paper, removing outdated references and adding newer AWS services not previously available
October 2011 – First publication

Building Media & Entertainment Predictive Analytics Solutions on AWS

First published December 2016
Updated March 30, 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and
liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
Overview of AWS Enabled M&E Workloads
Overview of the Predictive Analytics Process Flow
Common M&E Predictive Analytics Use Cases
Predictive Analytics Architecture on AWS
Data Sources and Data Ingestion
Data Store
Processing by Data Scientists
Prediction Processing and Serving
AWS Services and Benefits
Amazon S3
Amazon Kinesis
Amazon EMR
Amazon Machine Learning (Amazon ML)
AWS Data Pipeline
Amazon Elastic Compute Cloud (Amazon EC2)
Amazon CloudSearch
AWS Lambda
Amazon Relational Database Service (Amazon RDS)
Amazon DynamoDB
Conclusion
Contributors

Abstract

This whitepaper is intended for data scientists, data architects, and data engineers who want to design and build Media and Entertainment (M&E) predictive analytics solutions on AWS. Specifically, this paper provides an introduction to common cloud-enabled M&E workloads and describes how a predictive analytics workload fits into the overall M&E workflows in the cloud. The paper provides an overview of the main phases of the predictive analytics business process, as well as an overview of common M&E predictive analytics use cases. The paper then describes the technical reference architecture and tool options for implementing predictive analytics solutions on AWS.

Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions

Introduction

The world of Media and Entertainment (M&E) has shifted from treating customers as mass audiences to forming connections with individuals. This progression was enabled by unlocking insights from data generated through new distribution platforms and web and social networks. M&E companies are aggressively moving from a traditional mass
broadcasting business model to an Over The Top (OTT) model, where relevant data can be gathered. In this new model, they are embracing the challenge of acquiring, enriching, and retaining customers through big data and predictive analytics solutions.

As cloud technology adoption becomes mainstream, M&E companies are moving many analytics workloads to AWS to achieve agility, scale, lower cost, rapid innovation, and operational efficiency. As these companies start their journey to the cloud, they have questions about common M&E use cases and how to design, build, and operate these solutions. AWS provides many services in the data and analytics space that are well suited for all M&E analytics workloads, including traditional BI reporting, real-time analytics, and predictive analytics. In this paper, we discuss the approach to architecture and tools. We'll cover the design, build, and operate aspects of predictive analytics in subsequent papers.

Overview of AWS Enabled M&E Workloads

M&E content producers have traditionally relied heavily on systems located on premises for production and post-production workloads. Content producers are increasingly looking to the AWS Cloud to run workloads. This is due to the huge increase in the volume of content from new business models, such as on-demand and other online delivery, as well as new content formats such as 4K and high dynamic range (HDR). M&E customers deliver live, linear, on-demand, and OTT content with the AWS Cloud. AWS services also enable media partners to build solutions across M&E lines of business. Examples include:

• Managing digital assets
• Publishing digital content
• Automating media supply chains
• Broadcast master control and playout
• Streamlining content distribution to licensees
• Affiliates (business to business, B2B)
• Direct to consumer (business to consumer, B2C) channels
• Solutions for content and customer analytics using real
time data and machine learning

Figure 1 is a diagram that shows a typical M&E workflow, with a brief description of each area.

Figure 1 – Cloud-enabled M&E workflow

Acquisition – Workloads that capture and ingest media content, such as videos, audio, and images, into AWS

VFX & NLE – Visual effects (VFX) and nonlinear editing system (NLE) workloads that allow editing of images for visual effects or nondestructive editing of video and audio source files

DAM & Archive – Digital asset management (DAM) and archive solutions for the management of media assets

Media Supply Chain – Workloads that manage the process to deliver digital assets, such as video or music, from the point of origin to the destination

Publishing – Solutions for media content publishing

OTT – Systems that allow the delivery of audio content and video content over the internet

Playout & Distribution – Systems that support the transmission of media content and channels into the broadcast network

Analytics – Solutions that provide business intelligence and predictive analytics capabilities on M&E data. Some typical domain questions to be answered by the analytics solutions are: How do I segment my customers for an email campaign? What videos should I be promoting at the top of audiences' OTT/VOD watchlists? Who is at risk of cancelling a subscription? What ads can I target mid-roll to maximize audience engagement? What is the aggregate trending sentiment regarding titles, brands, properties, and talents across social media, and where is it headed?
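As a simplified, hypothetical illustration of the last question above, the following Python sketch aggregates per-title sentiment from already-scored social posts (the scores are assumed to come from an upstream sentiment-analysis model; the function name and data shapes are illustrative, not part of any AWS service):

```python
from collections import defaultdict

def trending_sentiment(posts):
    """Aggregate per-title sentiment from scored social posts.

    Each post is a (title, sentiment_score) pair, with scores in
    [-1.0, 1.0] as might be produced by an upstream ML model.
    Returns {title: average_score}; a rising average over successive
    time windows would suggest positive trending sentiment.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for title, score in posts:
        totals[title] += score
        counts[title] += 1
    return {title: totals[title] / counts[title] for title in totals}

# Hypothetical scored posts for two titles
posts = [("Show A", 0.9), ("Show A", 0.5), ("Show B", -0.4), ("Show B", 0.2)]
print(trending_sentiment(posts))
```

In a production pipeline, this aggregation step would typically run over a streaming or batch window of scored posts rather than an in-memory list.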
Overview of the Predictive Analytics Process Flow

There are two main categories of analytics: business and predictive. Business analytics focus on reporting metrics for historical and real-time data. Predictive analytics help predict future events and provide estimations by applying predictive modeling that is based on historical and real-time data. This paper covers only predictive analytics.

A predictive analytics initiative involves many phases and is a highly iterative process. Figure 2 shows some of the main phases in a predictive analytics project with a brief description of each phase.

Figure 2 — Cross-industry standard process for data mining

1. Business Understanding — The main objective of this phase is to develop an understanding of the business goals and then translate the goals into predictive analytics objectives. For the M&E industry, examples of business goals could include increasing content consumption by existing customers or understanding social sentiment toward content and talent to assist with new content development. The associated predictive analytics goals could include personalized content recommendations and sentiment analysis of social data regarding content and talent.

2. Data Understanding — The goal of this phase is to consider the data required for predictive analytics. Initial data collection, exploration, and quality assessment take place during this phase. To develop high-quality models, the dataset needs to be relevant, complete, and large enough to support model training. Model training is the process of training a machine learning model by providing a machine learning algorithm with training data to learn from. Some relevant datasets for M&E use cases are customer information/profile data, content viewing history data, content rating data, social engagement data, customer behavioral data, content subscription data, and purchase data.

3. Data Preparation — Data preparation is a critical step to ensure that high-quality predictive models can be generated. In this phase, the data required for each modeling process is selected and data acquisition mechanisms are created. Data is integrated, formatted, transformed, and enriched for the modeling purpose. Supervised machine learning algorithms require a labeled training dataset to generate predictive models. A labeled training dataset has a target prediction variable and other dependent data attributes, or features. The quality of the training data is often considered more important than the choice of machine learning algorithm for performance improvement.

4. Modeling — In this phase, the appropriate modeling techniques are selected for different modeling and business objectives. For example:

o A clustering technique could be employed for customer segmentation
o A binary classification technique could be used to analyze customer churn
o A collaborative filtering technique could be applied to content recommendation

The performance of the model can be evaluated and tweaked using technical measures such as Area Under Curve (AUC) for binary classification (logistic regression), Root Mean Square Error (RMSE) for collaborative filtering (alternating least squares), and Sum of Squared Errors (SSE) for clustering (k-means). Based on the initial evaluation of the model result, the model settings can be revised and fine-tuned by going back to the data preparation stage.

5. Model Evaluation — The generated models are formally evaluated in this phase, not only in terms of technical measures but also in the context of the business success criteria set out during the business understanding phase. If the model properly addresses the initial business objectives, it can be approved and prepared for deployment.

6. Deployment — In this phase, the model is deployed into an environment to generate predictions for future events.
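The shape of a labeled training dataset described above can be illustrated with a short Python sketch. The field names and values below are hypothetical, chosen only to show the separation between feature attributes and the target prediction variable:

```python
# Minimal sketch of a labeled training dataset for churn prediction.
# Field names and values are hypothetical, for illustration only.
labeled_rows = [
    # features: monthly_views, days_since_last_login; label: churned
    {"monthly_views": 42, "days_since_last_login": 1,  "churned": 0},
    {"monthly_views": 3,  "days_since_last_login": 40, "churned": 1},
]

def split_features_and_label(rows, label="churned"):
    """Separate the dependent feature attributes from the target variable."""
    X = [{k: v for k, v in r.items() if k != label} for r in rows]
    y = [r[label] for r in rows]
    return X, y

X, y = split_features_and_label(labeled_rows)
```

Real training sets follow the same structure at much larger scale, typically stored as files in Amazon S3 rather than in-memory lists.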
Common M&E Predictive Analytics Use Cases

To a certain extent, some of the predictive analytics use cases for the M&E industry do not differ much from other industries. The following are common use cases that apply to the M&E industry.

Customer segmentation — As the engagement between customers and M&E companies becomes more direct across different channels, and as more data is collected on those engagements, appropriate segmentation of customers becomes increasingly important. Customer relationship management (CRM) strategies, including customer acquisition, customer development, and customer retention, rely greatly upon such segmentation. Although customer segmentation can be achieved using basic business rules, that approach can only efficiently handle a few attributes and dimensions. A data-driven segmentation with a predictive modeling approach is more objective and can handle more complex datasets and volumes. Customer segmentation solutions can be implemented by leveraging clustering algorithms such as k-means, which is a type of unsupervised learning algorithm. A clustering algorithm is used to find natural clusters of customers based on a list of attributes from the raw customer data.

Content recommendation — One of the most widely adopted types of predictive analytics among M&E companies, recommendation is an important technique to maintain customer engagement and increase content consumption. Due to the huge volume of available content, customers need to be guided to the content they might find most interesting. Two common algorithms leveraged in recommendation solutions are content-based filtering and collaborative filtering.

• Content-based filtering is based on how similar a particular item is to other items, based on usage and rating. The model uses the content attributes of items (categories, tags, descriptions, and other data) to generate a matrix relating each item to the other items and calculates similarity based on the ratings provided. The most similar items are then listed together with a similarity score; items with the highest score are most similar.

• Collaborative filtering makes predictions about a specific item or user based on similarity with other items or users. The filter applies weights based on peer user preferences, on the assumption that users who display similar profiles or behavior have similar preferences for items.

More advanced recommendation solutions can leverage deep learning techniques for better performance. One example is using recurrent neural networks (RNNs) with collaborative filtering to predict the sequence of items based on previous streams, such as past purchases.

Sentiment analysis — This is the process of categorizing words, phrases, and other contextual information into subjective feelings. A common outcome of sentiment analysis is a positive, negative, or neutral sentiment. Impressions publicized by consumers can be a valuable source of insight into the opinions of broader audiences. These insights, when employed in real time, can be used to significantly enhance audience engagement. Insights can also be used with other analytic learnings, such as customer segmentation, to identify a positive match between an audience segment and associated content.

There are many tools to analyze and identify sentiment, and many of them rely on linguistic analysis that is optimized for a specific context. From a machine learning perspective, one traditional approach is to treat sentiment analysis as a classification problem: the sentiment of a document, sentence, or word is classified with positive, negative, or neutral labels. In general, the algorithm consists of tokenization of the text, feature extraction, and classification using different classifiers, such as linear classifiers (e.g., Support Vector Machines, logistic regression) or probabilistic classifiers (e.g., Naïve Bayes, Bayesian networks).
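The tokenize, extract features, classify steps can be sketched with a minimal multinomial Naïve Bayes classifier in pure Python. The training posts below are invented, and a production solution would use a library such as Spark ML rather than hand-rolled code:

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    """Tokenization step: lowercase word features."""
    return text.lower().split()

def train_nb(samples):
    """Train a multinomial Naive Bayes model with Laplace smoothing."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in samples:
        class_counts[label] += 1
        for tok in tokenize(text):
            word_counts[label][tok] += 1
            vocab.add(tok)
    return class_counts, word_counts, vocab

def predict(model, text):
    """Classify text by the highest log-probability class."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label, count in class_counts.items():
        lp = math.log(count / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in tokenize(text):
            lp += math.log((word_counts[label][tok] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical labeled social posts (real systems need far more data).
model = train_nb([
    ("loved the new episode great cast", "positive"),
    ("amazing show great story", "positive"),
    ("boring plot terrible pacing", "negative"),
    ("terrible episode awful acting", "negative"),
])
```

With this toy model, `predict(model, "great story and great acting")` scores the positive class higher than the negative one.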
However, this traditional approach lacks recognition of the structure and subtleties of written language. A more advanced approach is to use deep learning algorithms for sentiment analysis. You don't need to provide these models with predefined features, as the model can learn sophisticated features from the dataset: the words are represented as high-dimensional vectors, and features are extracted by the neural network. Examples of deep learning algorithms that can be used for sentiment analysis are recurrent neural networks (RNNs) and convolutional neural networks (CNNs). MXNet, TensorFlow, and Caffe are some deep learning frameworks that are well suited for RNN and CNN model training. AWS makes it easy to get started with these frameworks by providing an Amazon Machine Image (AMI) that includes these frameworks preinstalled. This AMI can be run on a large number of instance types, including the P2 instances that provide general-purpose GPU processing for deep learning applications. The Deep Learning AMI is available in the AWS Marketplace.

Churn prediction — This is the identification of customers who are at risk of no longer being customers. Churn prediction helps to identify where to deploy retention resources most effectively. The data used in churn prediction is generally user activity data related to a specific service or content offering. This type of analysis is generally solved using logistic regression with a binary classification, where the two classes are customer leave predicted and customer retention predicted. Weightings and cutoff values can be used with predictive models to tweak the sensitivity of predictions, minimizing false positives or false negatives to optimize for business objectives. For example, Amazon Machine Learning (Amazon ML) has an input for the cutoff and sliders for precision, recall, false positive rate, and accuracy.
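The effect of moving a cutoff can be sketched in a few lines of Python. The model scores and true labels below are invented for illustration:

```python
def classify(scores, cutoff):
    """Binary churn prediction from model scores with an adjustable cutoff."""
    return [1 if s >= cutoff else 0 for s in scores]

def precision_recall(predicted, actual):
    """Compute precision and recall from predicted and actual labels."""
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical model scores (probability of churn) and true labels.
scores = [0.9, 0.7, 0.4, 0.2]
actual = [1, 1, 0, 0]

# A lower cutoff catches more churners (higher recall) at the risk of more
# false positives (lower precision); a higher cutoff does the reverse.
p_low, r_low = precision_recall(classify(scores, 0.3), actual)
p_high, r_high = precision_recall(classify(scores, 0.8), actual)
```

Here the low cutoff yields perfect recall but imperfect precision, and the high cutoff yields perfect precision but misses a churner, which is the trade-off the Amazon ML sliders expose.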
Predictive Analytics Architecture on AWS

AWS includes the components needed to enable pipelines for predictive analytics workflows. There are many viable architectural patterns for effectively computing predictive analytics. In this section we discuss some of the technology options for building a predictive analytics architecture on AWS. Figure 3 shows one such conceptual architecture.

Figure 3 — Conceptual reference architecture

Data Sources and Data Ingestion

Data collection and ingestion is the first step and one of the most important technical components of the overall predictive analytics architecture. At a high level, the main source data required for M&E analytics can be classified into the following categories:

• Dimension data — Provides structured labeling information for numeric measures. Dimension data is mainly used for grouping, filtering, and labeling of information. Examples of dimension data are customer master data, demographics data, transaction or subscription data, content metadata, and other reference data. These are mostly structured data stored in relational databases, such as CRM, Master Data Management (MDM), or Digital Asset Management (DAM) databases.

• Social media data — Can be used for sentiment analysis. Some of the main social data sources for M&E are Twitter, YouTube, and Facebook. The data could encompass content ratings, reviews, social sharing, tagging, and bookmarking events.

• Event data — In OTT and online media, examples of event data are audience engagement behaviors with streaming videos, such as web browsing patterns, search events for content, video play/watch/stop events, and device data. These are mostly real-time clickstream data from websites, mobile apps, and OTT players.

• Other relevant data — Includes data from aggregators (Nielsen, comScore, etc.), advertising response data, customer contacts, and service case data.

There are two main modes of data ingestion into AWS: batch and streaming.
Batch Ingestion

In this mode, data is ingested as files (e.g., database extracts) following a specified schedule. Data ingestion approaches include the following:

• Third-party applications — These applications have connector integration with Amazon Simple Storage Service (Amazon S3) object storage and can ingest data into Amazon S3 buckets. The applications can either take source files or extract data from the source database directly and store it in Amazon S3. There are commercial products (e.g., Informatica, Talend) and open source utilities (e.g., Embulk) that can extract data from databases and export the data into an Amazon S3 bucket directly.

• Custom applications using AWS SDKs/APIs — Custom applications can use the AWS SDKs and the Amazon S3 application programming interface (API) to ingest data into target Amazon S3 buckets. The SDKs and API also support multipart upload for faster data transfer to Amazon S3 buckets.

• AWS Data Pipeline — This service facilitates moving data between different sources, including AWS services. AWS Data Pipeline launches a task runner, a Linux-based Amazon Elastic Compute Cloud (Amazon EC2) instance, which can run scripts and commands to move data on an event-based or scheduled basis.

• Command line interface (CLI) — Amazon S3 also provides a CLI for interacting with and ingesting data into Amazon S3 buckets.

• File synchronization utilities — Utilities such as rsync and s3 sync can keep source data directories in sync with Amazon S3 buckets as a way to move files from source locations to Amazon S3 buckets.

Streaming Ingestion

In this mode, data is ingested in streams (e.g., clickstream data). Architecturally, there must be a streaming store that accepts and stores streaming data at scale and in real time. Additionally, data collectors are needed to collect data at the sources and send it to the streaming store.

• Streaming stores — There are various options for the streaming store. Amazon Kinesis Streams and Amazon Kinesis Firehose are fully managed streaming stores. Streams and Firehose also provide SDKs and APIs for programmatic integration. Alternatively, open source platforms such as Kafka can be installed and configured on EC2 clusters to manage streaming data ingestion and storage.

• Data collectors — These can be web, mobile, or OTT applications that send data directly to the streaming store, or collector agents running next to the data sources (e.g., clickstream logs) that send data to the streaming store in real time. There are several options for the data collectors. Flume and Fluentd are two open source data collectors that can collect log data and send it to streaming stores. The Amazon Kinesis Agent can be used as the data collector for Streams and Firehose.

One common practice is to ingest all the input data into staging Amazon S3 buckets or folders first, perform further data processing, and then store the data in target Amazon S3 buckets or folders. Any data processing related to data quality (e.g., data completeness, invalid data) should be handled at the sources when possible and is not discussed in this document. During this stage, the following data processing might be needed:

• Data transformation — This could be transformation of source data to defined common standards, for example, breaking up a single name field into first name, middle name, and last name fields.

• Metadata extraction and persistence — Any metadata associated with input files should be extracted and stored in a persistent store. This could include file name, file or record size, content description, data source information, and date or time information.

• Data enrichment — Raw data can be enhanced and refined with additional information. For example, you can enrich source IP addresses with geographic data.

• Table schema creation and maintenance — Once the data is processed into a target structure, you can create the schemas for the target systems.
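The name-field transformation described above can be sketched as follows. This is a deliberately simplified example; real-world names come in many shapes and need far more careful handling:

```python
def split_name(record):
    """Transform a single 'name' field into first/middle/last name fields.
    Simplified sketch; real pipelines must handle many more name shapes."""
    parts = record.pop("name").split()
    record["first_name"] = parts[0]
    record["last_name"] = parts[-1]
    record["middle_name"] = " ".join(parts[1:-1])  # may be empty
    return record

# Hypothetical source record with a single combined name field.
row = split_name({"name": "Ada Byron Lovelace", "customer_id": 7})
```

In practice this kind of transformation would run over staged files in Amazon S3 as part of the processing step before loading the target buckets.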
File Formats

The various file formats have tradeoffs regarding compatibility, storage efficiency, read performance, write performance, and schema extensibility. In the Hadoop ecosystem there are many variations of file-based data stores. The following are some of the more common ones in use:

• Comma-Separated Values (CSV) — CSV, typically the lowest common denominator of file formats, excels at providing tremendous compatibility between platforms. It's a common format for moving data into and out of the Hadoop ecosystem. This file type can be easily inspected and edited with a text editor, which provides flexibility for ad hoc usage. One drawback is poor support for compression, so the files tend to take up more storage space than some other available formats. You should also note that CSV sometimes has a header row with column names. Avoid using headers with machine learning tools because they inhibit the ability to arbitrarily split files.

• JavaScript Object Notation (JSON) — JSON is similar to CSV in that text editors can consume this format easily. JSON records can be stored using a delimiter, such as a newline character, as a demarcation to split large datasets across multiple files. However, JSON files include some additional metadata, whereas CSV files typically do not when used in Hadoop. JSON files with one record per file should be avoided because this generally results in too many small files.

• Apache Parquet — A columnar storage format that is integrated into much of the Hadoop ecosystem. Parquet allows compression schemes to be specified at a per-column level. This provides the flexibility to take advantage of compression in the right places without the penalty of wasted CPU cycles compressing and decompressing data that doesn't need compression. Parquet is also flexible for encoding columns; selecting the right encoding mechanism is important to maximize CPU utilization when reading and writing data. Because of the columnar format, Parquet can be very efficient for processing jobs that only require reading a subset of columns. However, this columnar format also comes with a write penalty if your processing includes writes.

• Apache Avro — Avro can be used as a file format or as an object format within another file format, such as Parquet. Avro uses a binary data format, requiring less space than representing the same data in a text format. This results in lower processing demands in terms of I/O and memory. Avro also has the advantage of being compressible, further reducing the storage size and increasing disk read performance. Avro includes schema data, defined in JSON, alongside data that is persisted in a binary format. The Avro data format is flexible and expressive, allowing for schema evolution and support for more complex data structures such as nested types.

• Apache ORC — Another column-based file format designed for high speed within Hadoop. For flat data structures, ORC has the advantage of being optimized for reads that use predicates in WHERE clauses in Hadoop ecosystem queries. It also compresses quite efficiently with compression schemes such as Snappy, Zlib, or GZip.

• Sequence files — Hadoop often uses sequence files as temporary files during the processing steps of a MapReduce job. Sequence files are binary and can be compressed to improve performance and reduce required storage volume. Sequence files are stored row-based with sync markers, enabling splitting. However, any edits require the entire file to be rewritten.

Data Store

For the data store portion of your solution, you need storage for the data, derived data, data lake schemas, and a metadata data catalogue. As part of that, a critical decision to make is the type or types of data file formats you will process.
Many types of object models and storage formats are used for machine learning. Common storage locations include databases and files. From a storage perspective, Amazon S3 is the preferred storage option for data science processing on AWS. Amazon S3 provides highly durable storage and seamless integration with various data processing services and machine learning platforms on AWS.

Data Lake Schemas

Data lake schemas are Apache Hive tables that support SQL-like data querying using Hadoop-based query tools such as Apache Hive, Spark SQL, and Presto. Data lake schemas are based on the schema-on-read design, which means table schemas can be created after the source data is already loaded into the data store. A data lake schema uses a Hive metastore as the schema repository, which can be accessed by different query engines. In addition, the tables can be created and managed using the Hive engine directly.

Metadata Data Catalogue

A metadata data catalogue contains information about the data in the data store. It can be loosely categorized into three areas: technical, operational, and business.

• Technical metadata refers to the forms and structure of the data. In addition to data types, technical metadata can also contain information about what data is valid and the data's sensitivity.

• Operational metadata captures information such as the source of the data, the time of ingestion, and what ingested the data. Operational metadata can show data lineage, movement, and transformation.

• Business metadata provides labels and tags for data with business-level attributes to make it easier for someone to search and browse data in the data store.

There are different options to process and store metadata on AWS. One way is to trigger AWS Lambda functions by using Amazon S3 events to extract or derive metadata from the input files and store the metadata in Amazon DynamoDB.
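A sketch of such a Lambda function is shown below. It parses the standard S3 event notification structure and derives operational metadata; the DynamoDB write is indicated only by a comment so the sketch has no AWS dependency, and the bucket and key names are hypothetical:

```python
def lambda_handler(event, context):
    """Sketch of an S3-event-triggered Lambda deriving operational metadata
    for each ingested file. The DynamoDB write is intentionally omitted."""
    items = []
    for record in event["Records"]:
        s3 = record["s3"]
        items.append({
            "object_key": s3["object"]["key"],       # what was ingested
            "size_bytes": s3["object"]["size"],
            "source_bucket": s3["bucket"]["name"],   # where it came from
            "ingested_at": record["eventTime"],      # when it arrived
        })
    # A real handler would now call DynamoDB via boto3 (table.put_item)
    # for each item; omitted here to keep the sketch self-contained.
    return items

# Trimmed example of the S3 event notification shape (hypothetical names).
sample_event = {"Records": [{
    "eventTime": "2017-01-01T00:00:00Z",
    "s3": {"bucket": {"name": "ingest-staging"},
           "object": {"key": "raw/views.csv", "size": 1024}}}]}
```

Invoking `lambda_handler(sample_event, None)` returns one metadata item per ingested object, ready to be persisted as operational metadata.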
Processing by Data Scientists

When all relevant data is available in the data store, data scientists can perform offline data exploration and model selection, data preparation, and model training and generation based on the defined business objectives. The following solutions were selected because they are well suited to handling the large amounts of data that M&E use cases generate.

Interactive Data Exploration

To develop the data understanding needed to support the modeling process, data scientists often must explore the available datasets and determine their usefulness. This is normally an interactive and iterative process and requires tools that can query data quickly across massive datasets. It is also useful to be able to visualize the data with graphs, charts, and maps. Table 1 provides a list of data exploration tools available on AWS, followed by some specific examples that can be used to explore the data interactively.

Table 1: Data exploration tool options on AWS

Query Style | Query Engine | User Interface Tools | AWS Services
SQL | Presto | AirPal, JDBC/ODBC clients, Presto CLI | EMR
SQL | Spark SQL | Zeppelin, Spark interactive shell | EMR
SQL | Apache Hive | Apache Hue, Hive interactive shell | EMR
Programmatic | R/SparkR (R) | RStudio, R interactive shell | EMR
Programmatic | Spark (PySpark, Scala) | Zeppelin, Spark interactive shell | EMR

Presto on Amazon EMR

The M&E datasets can be stored in Amazon S3 and accessed as external Hive tables. An external Amazon RDS database can be deployed for the Hive metastore data. Presto running in an Amazon EMR cluster can be used to run interactive SQL queries against the datasets. Presto supports ANSI SQL, so you can run complex queries and aggregations against datasets of any size, from gigabytes to petabytes. Java Database Connectivity (JDBC) and Open Database Connectivity (ODBC) drivers support connections from data visualization tools such as QlikView and Tableau for rich data visualization. Web tools such as AirPal provide an easy-to-use web front end to run Presto queries directly.

Figure 4 — Data exploration with Presto

Apache Zeppelin with Spark on EMR

Another tool for data exploration is the Apache Zeppelin notebook with Spark. Spark is a general-purpose cluster computing system. It provides high-level APIs for Java, Python, Scala, and R. Spark SQL, an in-memory SQL engine, can integrate with Hive external tables using HiveContext to query the dataset. Zeppelin provides a friendly user interface to interact with Spark and to visualize data using a range of charts and tables. Spark SQL can also support JDBC/ODBC connectivity through a server running Thrift.

Figure 5 — Data exploration with Zeppelin

R/SparkR on EMR

Some data scientists like to use R/RStudio as the tool for data exploration and analysis but feel constrained by the limitations of R, such as single-threaded execution and small supported data sizes. SparkR provides the interactive environment, rich statistical libraries, and visualization of R, together with the scalable, fast, distributed storage and processing capability of Spark. SparkR uses the DataFrame as its data structure, which is a distributed collection of data organized into named columns. DataFrames can be constructed from a wide array of data sources, including Hive tables.

Figure 6 — Data exploration with Spark + R

Training Data Preparation

Data scientists will need to prepare training data to support supervised and unsupervised model training.
Data is formatted, transformed, and enriched for the modeling purpose. As only the relevant data variables should be included in model training, feature selection is often performed to remove unneeded and irrelevant attributes that do not contribute to the accuracy of the predictive model. Amazon ML provides feature transformation and feature selection capabilities that simplify this process. Labeled training datasets can be stored in Amazon S3 for easy access by machine learning services and frameworks.

Interactive Model Training

To generate and select the right models for the target business use cases, data scientists must perform interactive model training against the training data. Table 2 provides a list of use cases with potential products that you can use to create your solution, followed by several example architectures for interactive model training.

Table 2 — Machine learning options on AWS

M&E Use Case | ML Algorithms | ML Software | AWS Services
Segmentation | Clustering (e.g., k-means) | Spark ML, Mahout, R | EMR
Recommendation | Collaborative filtering (e.g., alternating least squares) | Spark ML, Apache Mahout | EMR
Recommendation | Neural network | MXNet | Amazon EC2/GPU
Customer Churn | Classification (e.g., logistic regression) | Managed service; Spark ML, Apache Mahout, R | Amazon Machine Learning; EMR
Sentiment Analysis | Classification (e.g., logistic regression) | Managed service | Amazon Machine Learning
Sentiment Analysis | Classification (e.g., Support Vector Machines, Naïve Bayes) | Spark ML, Mahout, R | EMR
Sentiment Analysis | Neural network | MXNet, Caffe, TensorFlow, Torch, Theano | Amazon EC2/GPU

Amazon ML Architecture

Amazon ML is a fully managed machine learning service that provides the quickest way to get started with model training. Amazon ML can support long-tail use cases, such as churn and sentiment analysis, where logistic regression (for classification) or linear regression (for the prediction of a numeric value) algorithms can be applied.
The following are the main steps of model training using Amazon ML:

1. Data source creation — Labeled training data is loaded directly from the Amazon S3 bucket where the data is stored. A target column indicating the prediction field must be selected as part of the data source creation.

2. Feature processing — Certain variables can be transformed to improve the predictive power of the model.

3. ML model generation — After the data source is created, it can be used to train the machine learning model. Amazon ML automatically splits the labeled training set into a training set (70%) and an evaluation set (30%). Depending on the selected target column, Amazon ML automatically picks one of three algorithms (binary logistic regression, multinomial logistic regression, or linear regression) for the training.

4. Performance evaluation — Amazon ML provides model evaluation features for model performance assessment and allows for adjustment of the error tolerance threshold.

All trained models are stored and managed directly within the Amazon ML service and can be used for both batch and real-time prediction.

Spark ML/Spark MLlib on Amazon EMR Architecture

For use cases that require other machine learning algorithms, such as clustering (for segmentation) and collaborative filtering (for recommendation), Amazon EMR provides cluster management support for running Spark ML. To use Spark ML and Spark MLlib for interactive data modeling, data scientists have two choices: they can use the Spark shell by SSHing onto the master node of the EMR cluster, or use the Zeppelin data science notebook running on the EMR cluster master node. Spark ML and Spark MLlib support a range of machine learning algorithms for classification, regression, collaborative filtering, clustering, decomposition, and optimization. Another key benefit of Spark is that the same engine can perform data extraction, model training, and interactive query.
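The 70/30 training/evaluation split that Amazon ML applies in step 3 above can be illustrated with a local sketch. This is not the service's implementation, only a demonstration of the holdout idea:

```python
import random

def train_eval_split(rows, train_fraction=0.7, seed=42):
    """Shuffle and split rows into a training set and an evaluation set,
    illustrating the kind of 70/30 holdout split Amazon ML performs
    automatically (local sketch, not the service's code)."""
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

train, evaluation = train_eval_split(list(range(10)))
```

Training on one subset and evaluating on the held-out remainder is what allows the performance-evaluation step to estimate how the model behaves on unseen data.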
A data scientist will need to programmatically train the model using languages such as Java, Python, or Scala. Spark ML provides a set of APIs for creating and tuning machine learning pipelines. The following are the main concepts to understand for pipelines:

• DataFrame — Spark ML uses the DataFrame from Spark SQL as an ML dataset. For example, a DataFrame can have different columns corresponding to different columns in the training dataset that is stored in Amazon S3.

• Transformer — An algorithm that can transform one DataFrame into another DataFrame. For instance, an ML model is a Transformer that transforms a DataFrame with features into a DataFrame with predictions.

• Estimator — An algorithm that can be fit on a DataFrame to produce a Transformer.

• Parameter — All Transformers and Estimators share a common API for specifying parameters.

• Pipeline — Chains multiple Transformers and Estimators together to specify an ML workflow.

Spark ML provides two approaches for model selection: cross-validation and validation split. With cross-validation, the dataset is split into multiple folds that are used as separate training and test datasets; two-thirds of the data is used for training and one-third for testing in each fold. This approach is a well-established method for choosing parameters and is more statistically sound than heuristic hand tuning. However, it can be very expensive because it cross-validates over a grid of parameters. With validation split, the dataset is split into a training data asset and a test data asset. This approach is less expensive, but when the training data is not sufficiently large it won't produce results that are as reliable as cross-validation.

Spark ML supports a method to export models in the Predictive Model Markup Language (PMML) format. The trained model can be exported and persisted into an Amazon S3 bucket using the model save function. The saved models can then be deployed into other environments and loaded to generate predictions.
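The fold mechanics behind the cross-validation approach described above can be sketched as follows. This uses a simple interleaved split and is not Spark ML's implementation:

```python
def k_folds(rows, k):
    """Split a dataset into k folds; each fold serves once as the test set
    while the remaining folds together form the training set."""
    folds = [rows[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [r for j, fold in enumerate(folds) if j != i for r in fold]
        yield train, test

data = list(range(9))
splits = list(k_folds(data, 3))
```

Every record appears in exactly one test set across the k rounds, which is why averaging the per-fold scores gives a more statistically sound estimate than a single validation split, at k times the training cost.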
save function The saved models can then be deployed into other environment s and loaded for generating prediction Machine Learning on EC2 /GPU /EMR Architecture s For use case s that require dif ferent ma chine learning frameworks that are not supported by A mazon ML or Amazon EMR these frameworks can be installed and run on EC2 fleet s An AMI is available with preinstalled machine learning Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 22 packages including MXNet CNTK Caffe Tensorflow Theano and Torch Additional machine learning packages can be added easily to EC2 instances Other machine learning frameworks can also be installed on Amazon EMR via bootstrap actions to take advantage of the EMR cluster management Examples include Vowpal Wabbit Skytree and H2O Prediction Processing and Serving One architecture pattern for serving predictions quickly using both historic and new data is the lambda architecture The components for this architecture include a batch layer speed layer and serving layer all working together to enable up todate predictions as new data flows into the system Despite its name this pattern is not related to the AWS Lambda service The following is a brief description for each portion of the pattern shown in Figure 7 • Event data — Eventlevel data is typically log data based on user activity This could be data captured on websites mobile devices or social media activities Amazon Mobile Analytics provides an easy way to capture user activity for mobile devices The Amazon Kinesis Agent makes it easy to ingest log data such as web logs Also the Amazon Kinesis Producer Library (KPL) makes it easy to programmatically ingest data int o a stream • Streaming — The streaming layer ingests data as it flows into the system A popular choice for processing streams is Amazon Kinesis Streams because it is a managed service that minimiz es administration and maintenance Amazon Kinesis Firehose c an be used as a stream that 
stores all the records to a data lake such as an Amazon S3 bucket.

Figure 7 — Lambda architecture components (event data, streaming, speed layer, serving layer, data lake, batch layer)

• Data lake — The data lake is the storage layer for the big data associated with event-level data generated by M&E users. The popular choice on AWS is Amazon S3, for highly durable and scalable storage.
• Speed layer — The speed layer continually updates predictive results as new data arrives. This layer processes less data than the batch layer, so the results may not be as accurate as the batch layer's. However, the results are more readily available. This layer can be implemented in Amazon EMR using Spark Streaming.
• Batch layer — The batch layer processes machine learning models using the full set of event-level data available. This processing can take longer but can produce higher fidelity predictions. This layer can be implemented using Spark ML in Amazon EMR.
• Serving layer — The serving layer responds to predictions on an ongoing basis. This layer arbitrates between the results generated by the batch and speed layers. One way to accomplish this is by storing predictive results in a NoSQL database such as DynamoDB. With this approach, predictions are stored on an ongoing basis by both the batch and speed layers as they are processed.

AWS Services and Benefits
Machine learning solutions come in many shapes and sizes. Some of the AWS services commonly used to build machine learning solutions are described in the following sections. During the predictive analytics process workflow, different resources are needed throughout different parts of the lifecycle. AWS services work well in this scenario because resources can run on demand and you pay only for the services you consume. Once you stop using them, there are no additional costs or termination fees.

Amazon S3
In the context of machine learning, Amazon S3 is an
excellent choice for storing training and evaluation data. Reasons for this choice include its highly parallelized, low-latency access; its ability to store vast amounts of structured and unstructured data; and its low cost. Amazon S3 is also integrated into a useful ecosystem of tools and other services, extending the functionality of Amazon S3 for ingestion and processing of new data. For example, Amazon Kinesis Firehose can be used to capture streaming data, and AWS Lambda event-based triggers enable serverless compute processing when data arrives in an Amazon S3 bucket. Amazon ML uses Amazon S3 as input for training and evaluation datasets, as well as for batch predictions. Amazon EMR, with its ecosystem of machine learning tools, also benefits from using Amazon S3 buckets for storage. By using Amazon S3, EMR clusters can decouple storage and compute, which has the advantage of scaling each independently. It also facilitates using transient clusters, or multiple clusters reading the same data at the same time.

Amazon Kinesis
Amazon Kinesis is a platform for streaming data on AWS, offering powerful services to make it easy to load and analyze streaming data. The Amazon Kinesis suite of services also provides the ability for you to build custom streaming data applications for specialized needs. One such use case is applying machine learning to streaming data. There are three Amazon Kinesis services that fit different needs:
• Amazon Kinesis Firehose accepts streaming data and persists it to persistent storage, including Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service.
• Amazon Kinesis Analytics lets you gain insights from streaming data in real time using standard SQL. Analytics also includes advanced functions such as Random Cut Forest, which calculates anomalies on streaming datasets.
• Amazon Kinesis Streams is a streaming service that can be used to create custom
streaming applications, or to integrate into other applications such as Spark Streaming in Amazon EMR for real-time Machine Learning Library (MLlib) workloads.

Amazon EMR
Amazon EMR simplifies big data processing, providing a managed Hadoop framework. This approach makes it easy, fast, and cost-effective for you to distribute and process vast amounts of data across dynamically scalable Amazon EC2 instances. You can also run other popular distributed frameworks such as Apache Spark and Presto in Amazon EMR, and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB. The large ecosystem of Hadoop-based machine learning tools can be used in Amazon EMR.

Amazon Machine Learning (Amazon ML)
Amazon ML is a service that makes it easy for developers of all skill levels to use machine learning technology. Amazon ML provides visualization tools and wizards that guide you through the process of creating machine learning models without having to learn complex machine learning algorithms and technology. Once your models are ready, Amazon ML makes it easy to obtain predictions for your application using simple APIs, without having to implement custom prediction generation code or manage any infrastructure. Amazon ML is based on the same proven, highly scalable machine learning technology used for years by Amazon's internal data scientist community. The service uses powerful algorithms to create machine learning models by finding patterns in your existing data. Then Amazon ML uses these models to process new data and generate predictions for your application. Amazon ML is highly scalable and can generate billions of predictions daily, serving those predictions in real time and at high throughput. With Amazon ML there is no upfront hardware or software investment, and you pay as you go, so you can start small and scale as your application grows.

AWS Data Pipeline
AWS Data Pipeline is a
web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. With Data Pipeline, you can regularly access your data where it's stored, transform and process it at scale, and efficiently transfer the results to AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon EMR. Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. You don't have to worry about ensuring resource availability, managing inter-task dependencies, retrying transient failures or timeouts in individual tasks, or creating a failure notification system. Data Pipeline also enables you to move and process data that was previously locked up in on-premises data silos, unlocking new predictive analytics workloads.

Amazon Elastic Compute Cloud (Amazon EC2)
Amazon EC2 is a simple yet powerful compute service that provides complete control of server instances that can be used to run many machine learning packages. The EC2 instance type options include a wide variety of options to meet the various needs of machine learning packages. These include compute-optimized instances with relatively more CPU cores, memory-optimized instances for packages that use lots of RAM, and massively powerful GPU-optimized instances for packages that can take advantage of GPU processing power.

Amazon CloudSearch
Amazon CloudSearch is a managed service in the AWS Cloud that makes it simple and cost-effective to set up, manage, and scale a search solution for your website or application. In the context of the predictive analytics architecture, CloudSearch can be used to serve prediction outputs for the various use cases.

AWS Lambda
AWS Lambda lets you run code without provisioning or managing servers. With Lambda you can run code for virtually any type of application or
backend service, all with zero administration. In the predictive analytics architecture, Lambda can be used for tasks such as data processing triggered by events, machine learning batch job scheduling, or as the back end for microservices that serve prediction results.

Amazon Relational Database Service (Amazon RDS)
Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. In the predictive analytics architecture, Amazon RDS can be used as the data store for Hive metastores and as the database for serving prediction results.

Amazon DynamoDB
Amazon DynamoDB is a fast and flexible NoSQL database service, ideal for any application that needs consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. In the predictive analytics architecture, DynamoDB can be used to store data processing status or metadata, or as a database to serve prediction results.

Conclusion
In this paper we provided an overview of common Media and Entertainment (M&E) predictive analytics use cases. We presented an architecture that uses a broad set of services and capabilities of the AWS Cloud to enable both the data scientist workflow and the predictive analytics generation workflow in production.

Contributors
The following individuals and organizations contributed to this document:
• David Ping, solutions architect, Amazon Web Services
• Chris Marshall, solutions architect, Amazon Web Services

Document revisions
March 30, 2021: Reviewed for technical accuracy
February 24, 2017: Corrected broken links, added links to libraries, and incorporated minor text updates throughout
December 2016: First
publication,General,consultant,Best Practices Choosing_the_Operating_System_for_Oracle_Workloads_on_Amazon_EC2,Choosing the Operating System for Oracle Workloads on Amazon EC2

Published June 2014; updated July 19, 2021

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
Oracle AMIs
Operating systems and Oracle licensing
Oracle certified operating systems
  Red Hat Enterprise Linux
  SUSE Linux Enterprise Server
  Oracle Linux
  Microsoft Windows Server
Conclusion
Contributors
Further reading
Document history

Abstract
Amazon Web Services (AWS) provides a comprehensive set of services and tools for deploying enterprise applications in a highly secure, reliable, available, and cost-effective manner. The AWS Cloud is an excellent platform to run business-critical Oracle workloads in an efficient way. This whitepaper discusses the operating system choices that are best suited for running Oracle workloads on AWS. The target audience for this whitepaper includes enterprise architects, database administrators, IT managers, and developers who want to migrate Oracle workloads to AWS.

Introduction
Oracle software works well on Amazon Web Services (AWS), and
many enterprises run their critical Oracle workloads on AWS, for both production and non-production systems. These applications can benefit from the many features of the AWS Cloud, like scriptable infrastructure, instant provisioning and de-provisioning, scalability, elasticity, usage-based billing, and the ability to support a wide variety of operating systems.

Whether you are migrating your existing Oracle environments to AWS or implementing new Oracle applications on AWS, choosing the operating system on which these applications will run is a crucial decision. We highly recommend that you choose an Oracle certified operating system to run Oracle software on AWS, whether you are running Oracle Database, Oracle enterprise applications, or Oracle middleware. You can use the following Oracle certified operating systems on AWS:
• Red Hat Enterprise Linux (RHEL)
• SUSE Linux Enterprise Server
• Oracle Linux
• Microsoft Windows Server

Note: Only Oracle can make definitive statements about which products are considered certified. For details, see Oracle's My Oracle Support website or ask your Oracle sales representative.

You can use any one of the four operating systems for all of your Oracle workloads, or you can use a combination of them as needed. For example, to implement Oracle Siebel, you can run Oracle Database on RHEL while running web servers and application servers on Microsoft Windows Server. All four of these operating systems are well suited for enterprise workloads, but each of them has features and capabilities that the others do not have. Knowing the differences will help you make the right decision about what is best for your environments.

If you migrate an existing Oracle environment on an Intel platform to AWS, and that environment currently uses one of the four operating systems in the preceding list, then it might be best to choose the same operating system on AWS to keep any compatibility risks to a minimum. However, it also might be worthwhile to evaluate other
options. If you migrate from a non-Intel platform or implement a completely new environment on AWS, then you should carefully evaluate the operating systems before you choose the one that is best for your environment.

Oracle AMIs
An Amazon Machine Image (AMI) is a special type of pre-configured operating system and virtual application software that is used to create a virtual machine on Amazon Elastic Compute Cloud (Amazon EC2). The AMI serves as the basic unit of deployment for services delivered using Amazon EC2. The AMI provides the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. An AMI includes the following:
• A template for the root volume for the instance
• Launch permissions that control which AWS accounts can use the AMI to launch instances
• A block device mapping that specifies the volumes to attach to the instance when it's launched

There are no official AMIs available for most Oracle products. In addition, the AMIs that are available might not always be the latest version. Even when the latest versions of the AMIs are available, they will be based on the Oracle Linux operating system, so depending on your operating system of choice, this might not be the best option. You do not need an Oracle-provided AMI to install and use Oracle products on AWS. You can start an Amazon EC2 instance with an operating system AMI and then download and install Oracle software from the Oracle website, just as you would do in the case of a physical server. You can use any one of the four operating systems discussed in the preceding section for this purpose. Once you have the first environment set up with all the necessary Oracle software, you can create your own custom AMI for subsequent installations. You can also directly launch AMIs from AWS Marketplace.
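The launch-and-customize workflow described above (start an instance from an operating system AMI, install Oracle software, then capture your own custom AMI) can be sketched with the AWS SDK for Python. This is an illustrative sketch only: the AMI ID, image name, and instance type below are hypothetical placeholders, and the boto3 calls are shown as comments because they require AWS credentials and a live account.

```python
# Sketch: launching an EC2 instance from an operating system AMI, then
# capturing it as a custom AMI for subsequent installations.
# The AMI ID and instance type are hypothetical placeholders.

def run_instances_params(ami_id, instance_type, count=1):
    """Assemble the keyword arguments for the EC2 RunInstances call."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
    }

params = run_instances_params("ami-0123456789abcdef0", "m5.xlarge")
print("Launch parameters:", params)

# With boto3 installed and AWS credentials configured, the actual calls
# would look like this (not executed here):
#
#   import boto3
#   ec2 = boto3.client("ec2")
#   reservation = ec2.run_instances(**params)
#   instance_id = reservation["Instances"][0]["InstanceId"]
#
#   # After downloading and installing Oracle software on the instance,
#   # capture it as a reusable custom AMI:
#   ec2.create_image(InstanceId=instance_id, Name="oracle-base-image")
```

From that point on, subsequent environments can be launched from the custom AMI instead of repeating the Oracle installation.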
You should closely scrutinize any community AMIs provided by third parties for security and reliability before using them. AWS is not responsible or liable for their security or reliability. AMIs use one of two types of virtualization:
• Paravirtual (PV)
and supportability we recommend that you use an Oracle certified operating system for use on AWS Red Hat Enterprise Linux A large number of enterprises of all sizes use Red Hat Enterprise Linux (RHEL) to deploy Oracle workloads RHEL is a great choice for any Oracle workloads on AWS Amazon Web Services Choosing the Operating System for Oracle Workloads o n Amazon EC2 4 AWS and Red Hat have teamed to offer RHEL on Amazon EC2 providing a complete enterprise class computing environment with the simplicity and scalability of AWS Red Hat maintains the base RHEL images for Amazon EC2 As an AWS customer you will receive updates at the same time that updates are made available from Red Hat so your computing environment remains reliable and your RHEL certified applications maintain their supportability For additional information about RHEL on AWS see Red Hat on AWS RHEL is available for all Amazon EC2 instance types on AWS including HVM instances On HVM instances RHEL supports HugePages which can especially enhance the performance of Oracle Database HugePages is a Linux feature that makes it possible for the operating system to support very large memory pages On AWS you can use HugePages only on HVM instances For more information about HVM instances on AWS see Linux AMI virtualization types Important: A special feature in RHEL named Transparent HugePages (THP) is not compatible with O racle Database and should be disabl ed for best performance RHEL on AWS Pricing AWS customers can quickly deploy and scale compute resources according to their business needs with flexible purchase options for RHEL and RHEL with High availability: • Payasyougo Provision resources on demand as computing needs grow without long term commitments or upfront costs • Reserved Instances Lower cost further by purchasing compute resources with a one time upfront payment • Bring existing subscription Customers with Re d Hat Enterprise Linux Premium subscriptions can use Red Hat Cloud Access to move 
subscriptions to Amazon EC2.

SUSE Linux Enterprise Server
SUSE Linux Enterprise Server (SLES) is an operating system of choice for Oracle workloads in many large Oracle deployments. SLES is a great choice to run Oracle workloads on AWS as well. SUSE maintains the base SLES images for Amazon EC2, and as an AWS customer you will receive updates at the same time that updates are made available from SUSE. SLES also is available for all Amazon EC2 instance types on AWS, including HVM instances. On HVM instances, SLES supports HugePages, which can especially enhance the performance of Oracle Database. You can launch an SLES-based Amazon EC2 instance directly from the AWS console or from the AWS Marketplace. For additional information about SLES on AWS, see SUSE and AWS.

SUSE on AWS pricing
SUSE on AWS is available with the on-demand and Bring Your Own Subscription (BYOS) subscription models. AWS on-demand SUSE subscriptions are offered at either a flat hourly rate with no commitment or through a one-time upfront payment. Both purchase options include Amazon EC2 compute charges and SUSE subscription charges. Amazon tracks and bills customers who purchase SUSE Linux Enterprise Server (SLES) or SUSE Linux Enterprise Server for SAP Applications (SLES for SAP) subscriptions through AWS. In the BYOS model, customers use existing products purchased from SUSE on a BYOS basis, with images available as a Community AMI.

Oracle Linux
As the operating system Oracle uses to build and test their products, Oracle Linux is an excellent choice for running Oracle workloads on AWS. Oracle Linux EC2 instances can be launched using an Amazon Machine Image (AMI) available in the AWS Marketplace or as a Community AMI. You can also bring your own Oracle Linux AMI or existing Oracle Linux license to AWS. Unlike the other three operating systems discussed here, Oracle Linux has no cost for licensing, making it the lowest cost option.
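Oracle license cost, by contrast, does not vary by operating system: only the vCPU counting rule described earlier under Operating systems and Oracle licensing matters. That rule can be made concrete with a short calculation. This is an illustrative sketch of the arithmetic only, not licensing guidance; confirm actual requirements against Oracle's cloud licensing policy.

```python
def oracle_processor_licenses(vcpus, hyperthreading=True):
    """Count Oracle Processor licenses for an EC2 instance.

    Per the rule described in this paper: with hyper-threading enabled,
    two vCPUs count as one Oracle Processor license; without it, each
    vCPU counts as one license. The Oracle Processor Core Factor Table
    does not apply on AWS.
    """
    if hyperthreading:
        # Round up so an odd vCPU count still requires a whole license.
        return (vcpus + 1) // 2
    return vcpus

# Example: an instance with 8 vCPUs
print(oracle_processor_licenses(8))         # 4 licenses (hyper-threading on)
print(oracle_processor_licenses(8, False))  # 8 licenses (hyper-threading off)
```

Since the count depends only on vCPUs, running the workload on RHEL, SLES, Oracle Linux, or Windows Server on the same instance type yields the same license requirement.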
You can purchase support directly from Oracle, but support is not necessary to get updates and patches. Oracle provides public yum repositories for downloading updates and patches, even for customers who have not subscribed to support. For customers who have subscribed to support, Oracle Linux allows zero-downtime updates, which can be useful for mission-critical applications.

Oracle Linux has a special feature named Database Smart Flash Cache that is not available in any of the other operating systems discussed here. Database Smart Flash Cache allows the database buffer cache to expand beyond the system global area (SGA) in main memory to a second-level cache on flash memory. Making use of Database Smart Flash Cache for Oracle Database can substantially increase database performance. This is a good feature to use with Amazon EC2 instances that have a large amount of SSD instance storage.

Microsoft Windows Server
Microsoft Windows Server versions 2012, 2012 R2, 2016, and 2019 are available on Amazon EC2 as Oracle certified operating systems to run Oracle workloads. Microsoft Windows Server is an excellent choice for many Oracle workloads, especially for running enterprise applications like PeopleSoft, Siebel, and JD Edwards. Microsoft Windows is available on all types of Amazon EC2 instances, including HVM, making it a good choice for Amazon EC2 instance types with large memory configurations. To access and launch all Microsoft Windows AMIs, see Windows AMIs. Microsoft Windows on Amazon EC2 is available with the managed service model, where AWS takes on all the burdens of acquiring Microsoft Windows licenses to use in the Amazon EC2 service. The Microsoft Windows license tends to be more expensive than the other three Oracle certified operating systems for the same instance type.

Conclusion
We recommend that you choose one of the four operating systems discussed in this whitepaper for any of your
Oracle environments on AWS, so that your Oracle workloads run on an Oracle certified operating system. You can use any one of the four operating systems for all your Oracle workloads, or you can use a combination of them as needed. Your choice typically will depend on familiarity, type of workload, instance choice, and cost preference.

Contributors
Contributors to this document include:
• Vuyisa Maswana, Solutions Architect, Amazon Web Services
• Abdul Sathar Sait, Amazon Web Services

Further reading
For additional information about running Oracle workloads on AWS, consult the following resources:
Oracle Database on AWS:
• Advanced Architectures for Oracle Database on Amazon EC2
• Strategies for Migrating Oracle Database to AWS
• Determining the IOPS Needs for Oracle Database on AWS
• Best Practices for Running Oracle Database on AWS
Oracle on AWS:
• Oracle and Amazon Web Services
• Amazon RDS for Oracle
AWS service details:
• AWS Cloud Products
• AWS Documentation
• AWS Whitepapers & Guides
AWS pricing information:
• AWS Pricing
• AWS Pricing Calculator

Document history
July 19, 2021: Updated for latest service changes and technologies
December 2014: First publication,General,consultant,Best Practices Comparing_the_Use_of_Amazon_DynamoDB_and_Apache_HBase_for_NoSQL,"Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL

January 2020

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities
of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
Amazon DynamoDB Overview
Apache HBase Overview
Apache HBase Deployment Options
  Managed Apache HBase on Amazon EMR (Amazon S3 Storage Mode)
  Managed Apache HBase on Amazon EMR (HDFS Storage Mode)
  Self-Managed Apache HBase Deployment Model on Amazon EC2
Feature Summary
Use Cases
Data Models
Data Types
Indexing
Data Processing
Throughput Model
Consistency Model
Transaction Model
Table Operations
Architecture
  Amazon DynamoDB Architecture Overview
  Apache HBase Architecture Overview
Partitioning
Performance Optimizations
  Amazon DynamoDB Performance Considerations
  Apache HBase Performance Considerations
Conclusion
Contributors
Further Reading
Document Revisions

Abstract
One challenge that architects and developers face today is how to process large volumes of data in a timely, cost-effective, and reliable manner. There are several NoSQL solutions in the market, and choosing the most appropriate one for your particular use case can be difficult. This paper compares two popular NoSQL data stores: Amazon DynamoDB, a fully managed NoSQL cloud database service, and Apache HBase, an open-source, column-oriented, distributed big data store. Both Amazon DynamoDB and Apache HBase are available in the Amazon Web Services (AWS) Cloud.

Introduction
The AWS Cloud accelerates big data analytics. With access to instant scalability and elasticity on AWS, you can focus on analytics instead of infrastructure. Whether you are indexing large data sets, analyzing massive amounts of scientific data, or processing clickstream logs, AWS provides a range of big data products and services that you can leverage
for virtually any data-intensive project.

There is wide adoption of NoSQL databases in the growing industry of big data and real-time web applications. Amazon DynamoDB and Apache HBase are examples of NoSQL databases, which are highly optimized to yield significant performance benefits over a traditional relational database management system (RDBMS). Both Amazon DynamoDB and Apache HBase can process large volumes of data with high performance and throughput.

Amazon DynamoDB provides a fast, fully managed NoSQL database service. It lets you offload operating and scaling a highly available, distributed database cluster. Apache HBase is an open-source, column-oriented, distributed big data store that runs on the Apache Hadoop framework and is typically deployed on top of the Hadoop Distributed File System (HDFS), which provides a scalable, persistent storage layer.

In the AWS Cloud, you can choose to deploy Apache HBase on Amazon Elastic Compute Cloud (Amazon EC2) and manage it yourself. Alternatively, you can leverage Apache HBase as a managed service on Amazon EMR, a fully managed, hosted Hadoop framework on top of Amazon EC2. With Apache HBase on Amazon EMR, you can use Amazon Simple Storage Service (Amazon S3) as a data store, using the EMR File System (EMRFS), an implementation of HDFS that all Amazon EMR clusters use for reading and writing regular files from Amazon EMR directly to Amazon S3.

The following figure shows the relationship between Amazon DynamoDB, Amazon EC2, Amazon EMR, Amazon S3, and Apache HBase in the AWS Cloud. Both Amazon DynamoDB and Apache HBase have tight integration with popular open-source processing frameworks like Apache Hive and Apache Spark to enhance querying capabilities, as illustrated in the diagram.

Figure 1: Relation between Amazon DynamoDB, Amazon EC2, Amazon EMR, and Apache HBase in the AWS Cloud

Amazon DynamoDB Overview
Amazon DynamoDB is a fully managed NoSQL
database service that provides fast and predictable performance with seamless scalability. Amazon DynamoDB offers the following benefits:
• Zero administrative overhead — Amazon DynamoDB manages the burdens of hardware provisioning, setup and configuration, replication, cluster scaling, hardware and software updates, and monitoring and handling of hardware failures.
• Virtually unlimited throughput and scale — The provisioned throughput model of Amazon DynamoDB allows you to specify throughput capacity to serve nearly any level of request traffic. With Amazon DynamoDB, there is virtually no limit to the amount of data that can be stored and retrieved.
• Elasticity and flexibility — Amazon DynamoDB can handle unpredictable workloads with predictable performance, and still maintain a stable latency profile that shows no latency increase or throughput decrease as the data volume rises with increased usage. Amazon DynamoDB lets you increase or decrease capacity as needed to handle variable workloads.
• Automatic scaling — Amazon DynamoDB can scale automatically within user-defined lower and upper bounds for read and write capacity, in response to changes in application traffic. These qualities render Amazon DynamoDB a suitable choice for online applications with spiky traffic patterns or the potential to go viral anytime.
• Integration with other AWS services — Amazon DynamoDB integrates seamlessly with other AWS services for logging and monitoring, security, analytics, and more.
For more information, see the Amazon DynamoDB Developer Guide.

Apache HBase Overview
Apache HBase, a Hadoop NoSQL database, offers the following benefits:
• Efficient storage of sparse data — Apache HBase provides fault-tolerant storage for large quantities of sparse data using column-based compression. Apache HBase is capable of storing and processing billions of rows and millions of columns per row.
• Store for high-frequency
counters — Apache HBase is suitable for tasks such as high-speed counter aggregation because of its consistent reads and writes.
• High write throughput and update rates — Apache HBase supports low-latency lookups and range scans, efficient updates and deletions of individual records, and high write throughput.
• Support for multiple Hadoop jobs — The Apache HBase data store allows data to be used by one or more Hadoop jobs, on a single cluster or across multiple Hadoop clusters.

Apache HBase Deployment Options
The following section describes the Apache HBase deployment options in the AWS Cloud.

Managed Apache HBase on Amazon EMR (Amazon S3 Storage Mode)
Amazon EMR enables you to use Amazon S3 as a data store for Apache HBase using the EMR File System, and offers the following benefits:
• Separation of compute from storage — You can size your Amazon EMR cluster for compute instead of data requirements, allowing you to avoid the need for the customary 3x replication in HDFS.
• Transient clusters — You can scale compute nodes without impacting your underlying storage, and terminate your cluster to save costs and quickly restore it.
• Built-in availability and durability — You get the availability and durability of Amazon S3 storage by default.
• Easy to provision read replicas — You can create and configure a read-replica cluster in another Amazon EC2 Availability Zone that provides read-only access to the same data as the primary cluster, ensuring uninterrupted access to your data even if the primary cluster becomes unavailable.

Managed Apache HBase on Amazon EMR (HDFS Storage Mode)
Apache HBase on Amazon EMR is optimized to run on AWS, and offers the following benefits:
• Minimal administrative overhead — Amazon EMR handles provisioning of Amazon EC2 instances, security settings, Apache HBase configuration, log collection, health monitoring, and replacement of faulty instances. You still
have the flexibility to access the underlying infrastructure and customize Apache HBase furthe r if desired • Easy and flexible deployment options —You can deploy Apache HBase on Amazon EMR using the AWS Management Console or by using the AWS Command Line Interface (AWS CLI) Once launched resizing an Apache HBase cluster is easily accomplished with a single API call Activities such as modifying the Apache HBase configuration at launch time or i nstalling third party tools such as Ganglia for monitoring performance metrics are feasible with custom or predefined scripts Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 5 • Unlimited scale —With Apache HBase running on Amazon EMR you can gain significant cloud benefits such as easy scaling low cost pay only for what you use and ease of use as opposed to the self managed deployment model on Amazon EC2 • Integration with other AWS services —Amazon EMR is designed to seamlessly integrate with other AWS serv ices such as Amazon S3 Amazon DynamoDB Amazon EC2 and Amazon CloudWatch • Built in backup feature —A key benefit of Apache HBase running on Amazon EMR is the built in mechanism available for backing up Apache HBase data durably in Amazon S3 Using this f eature you can schedule full or incremental backups and roll back or even restore backups to existing or newly launched clusters anytime SelfManaged Apache HBase Deployment Model on Amazon EC2 The Apache HBase self managed model offers the most flexibi lity in terms of cluster management but also presents the following challenges: • Administrative overhead —You must deal with the administrative burden of provisioning and managing your Apache HBase clusters • Capacity planning —As with any traditional infrast ructure capacity planning is difficult and often prone to significant costly error For example you could over invest and end up paying for unused capacity or under invest and risk performance or availability issues • Memory 
management —Apache HBase is mai nly memory driven Memory can become a limiting factor as the cluster grows It is important to determine how much memory is needed to run diverse applications on your Apache HBase cluster to prevent nodes from swapping data too often to the disk The numb er of Apache HBase nodes and memory requirements should be planned well in advance • Compute storage and network planning —Other key considerations for effectively operating an Apache HBase cluster include compute storage and network These infrastructur e components often require dedicated Apache Hadoop/Apache HBase administrators with specialized skills Amazon Web Services Comparing the Use of Ama zon DynamoDB and Apache HBase for NoSQL Page 6 Feature Summary Amazon DynamoDB and Apache HBase both possess characteristics that are critical for successfully processing massive amounts of data The following table provides a summary of key features of Amazon DynamoDB and Apache HBase that can help you understand key similarities and differences between the two databases These features are discussed in later sections Table 1: Amazon DynamoDB and Apache HBase Feature Summary Feature Amazon DynamoDB Apache HBase Description Hosted scalable database service by Amazon Column store based on Apache Hadoop and on concepts of BigTable Implementation Language Java Server Operating Systems Hosted Linux Unix Windows Database Model Keyvalue & Document store Wide column store Data Scheme Schema free Schema free Typing Yes No APIs and Other Access Methods Flexible Flexible Supported Programming Languages Multiple Multiple Server side Scripts No Yes Triggers Yes Yes Partitioning Methods Sharding Sharding Throughput Model User provisions throughput Limited to hardware configuration Auto matic Scaling Yes No Partitioning Automatic partitioning Automatic sharding Replication Yes Yes Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 7 Feature Amazon DynamoDB Apache HBase 
Durability Yes Yes Administration No administration overhead High administration overhead in self managed and minimal on Amazon EMR User Concepts Yes Yes Data Model Row Item – 1 or more attributes Columns/column families Row Size Item size restriction No row size restrictions Primary Key Simple/Composite Row key Foreign Key No No Indexes Optional No built in index model implemented as secondary tables or coprocessors Transactions Row Transactions Itemlevel transactions Single row transactions Multi row Transactions Yes Yes Cross table Transactions Yes Yes Consistency Model Eventually consistent and strongly consistent reads Strongly consistent reads and writes Concurrency Yes Yes Updates Conditional updates Atomic read modify write Integrated Cache Yes Yes Time ToLive (TTL) Yes Yes Encryption at Rest Yes Yes Backup and Restore Yes Yes Point intime Recovery Yes Yes Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 8 Feature Amazon DynamoDB Apache HBase Multiregion Multi master Yes No Use Cases Amazon DynamoDB and Apache HBase are optimized to process massive amounts of data Popular use cases for Amazon DynamoDB and Apache HBase include the following: • Serverless applications —Amazon DynamoDB provides a durable backend for storing data at any scale and has become the de facto database for powering Web and mobile backend s for ecomm erce/retail education and m edia verticals • High volume special events —Special events and seasonal events such as national electoral campaigns are of relatively short duration and have variable workloads with the potential to consume large amounts of resources Amazon DynamoDB lets you increase capacity when you need it and decrease as needed to handle variable workloads This quality renders Amazon DynamoDB a suitable choice for such high volume special events • Social media applications —Community based applications such as online gaming photo sharing location aware applications and so on have 
unpredictable usage patterns with the potential to go viral anytime The elasticity and flexibility of Amazon DynamoDB make it suitable for such high volume variable workloads • Regulatory and complianc e requirements —Both Amazon DynamoDB and Amazon EMR are in scope of the AWS compliance efforts and therefore suitable for healthcare and financial services workloads as described in AWS Se rvices in Scope by Compliance Program • Batch oriented processing —For large datasets such as log data weather data product catalogs and so on you m ay already have large amounts of historical data that you want to maintain for historical trend analysis but need to ingest and batch process current data for predictive purposes For these types of workloads Apache HBase is a good choice because of its high read and write throughput and efficient storage of sparse data Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 9 • Reporting —To process and report on hi gh volume transactional data such as daily stock market trades Apache HBase is a good choice because it supports high throughput writes and update rates which make it suitable for storage of high frequency counters and complex aggregations • Real time analytics —The payload or message size in event data such as tweets E commerce and so on is relatively small when compared with application logs If you want to ingest streaming event data in real time for sentiment analysis ad serving trend ing analysis and so on Amazon DynamoDB lets you increase throughout capacity when you need it and decrease it when you are done with no downtime Apache HBase can handle realtime ingestion of data such as application logs with ease due to its high write throughput and efficient storage of sparse data Combining this capability with Hadoop's ability to handle sequential reads and scans in a highly optimized way renders Apache HBase a powerful tool for real time data analytics Data Models Amazon Dynam oDB is a 
key/value as well as a document store and Apac he HBase is a key/value store For a meaningful comparison of Amazon DynamoDB with Apache HBase as a NoSQL data store this document focus es on the key/value data model for Amazon DynamoDB Amazon DynamoDB and Apache HBase are designed with the goal to deliver significant performance benefits with low latency and high throughput To achieve this goal key/value stores and document stores have simpler and less constrained data models than trad itional relational databases Although the fundamental data model building blocks are similar in both Amazon DynamoDB and Apache HBase each database uses a distinct terminology to describe its specific data model At a high level a database is a collecti on of tables and each table is a collection of rows A row can contain one or more columns In most cases NoSQL database tables typically do not require a formal schema except for a mandatory primary key that uniquely identifies each row The following t able illustrates the high level concept of a NoSQL database Table 2: High Level NoSQL Database Table Representation Amazon Web Services Comparing the Use of Amazon Dyna moDB and Apache HBase for NoSQL Page 10 Table Row Primary Key Column 1 Columnar databases are devised to store each column separately so that aggregate operations for one column of the entire table are significantly quicker than the traditional row storage model From a comparative standpoint a row in Amazon DynamoDB is referred to as an item and each item can have any number of attributes An attribute comprises a key and a value and commonly referred to as a name value pair An Amazon DynamoDB table can have unlimited items indexed by primary key as shown in the following example Table 3: High Level Representation of Amazon DynamoDB Table Table Item 1 Primary Key Attribute 1 Attribute 2 Attribute 3 Attribute …n Item 2 Primary Key Attribute 1 Attribute 3 Item n Primary Key Attribute 2 Attribute 3 Amazon DynamoDB defines two 
types of primary keys: a simple primary key with one attribute called a partition key (Table 4) and a composite primary key with two attributes (Table 5) Table 4: Amazon DynamoDB Simple Primary Key (Partition Key) Table Item Partition Key Attribute 1 Attribute 2 Attribute 3 Attribute …n Table 5: Amazon DynamoDB Composite Primary Key (Partition & Sort Key) Table Item Partition Key Sort Key Attribute 1 Attribute 2 Attribute 3 attribute …n Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 11 A JSON representation of the item in the Table 5 with additional nested attributes is given below: { ""Partition Key"": ""Value"" ""Sort Key"": ""Value"" ""Attribute 1"": ""Value"" ""Attribute 2"": ""Value"" ""Attribute 3"": [ { ""Attribute 4"": ""Value"" ""Attribute 5"": ""Value"" } { ""Attribute 4"": ""Value"" ""Attribute 5"": ""Value"" } ] } In Amazon DynamoDB a single attribute primary key or partition key is useful for quick reads and writes of data For example PersonID serves as the partition key in the following Person table Table 6: Example Person Amazon DynamoDB Table Person Table Item PersonId (Partition Key) FirstName LastName Zipcode Gender Item 1 1001 Fname 1 Lname 1 00000 Item 2 1002 Fname 2 Lname 2 M Item 3 2002 Fname 3 Lname 3 10000 F A composite key in Amazon DynamoDB is indexed as a partition key and a sort key This multi part key maintains a hierarchy between the first and second element values Holding the partition key element constant facilitates searches across the sort key elem ent to retrieve items quickly for a given partition key In the following GameScores table the composite partition sort key is a combination of PersonId (partition key) and GameId (sort key ) Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 12 Table 7: Example GameScores Amazon Dyna moDB Table GameScores Table PersonId (Partition Key) GameId (Sort Key) TopScore TopScoreDate Wins Losses item1 1001 Game01 
67453 201312 09:17:24:31 73 21 item2 1001 Game02 98567 2013 12 11:14:14:37 98 27 Item3 1002 Game01 43876 2013 12 15:19:24:39 12 23 Item4 2002 Game02 65689 2013 10 01:17:14:41 23 54 The partition key of an item is also known as its hash attribute and sort key as its range attribute The term hash attribute arises from the use of an internal hash function that takes the value of the partition key as input and the output of that hash funct ion determines the partition or physical storage node where the item will be stored The term range attribute derives from the way DynamoDB stores items with the same partition key together in sorted order by the sort key value Although there is no expli cit limit on the number of attributes associated with an individual item in an Amazon DynamoDB table there are restrictions on the aggregate size of an item or payload including all attribute names and values A small payload can potentially improve perf ormance and reduce costs because it requires fewer resources to process For information on how to handle items that exceed the maximum item size see Best Practices for Storing Large Items and Attributes In Apache HBase the most basic unit is a column One or more columns form a row Each row is addressed uniquely by a primary key referred to as a row key A row in Apache HBase can have millions of columns Each column can have multiple versions with each distinct value contained in a separate cell One fundamental modeling concept in Apache HBase is that of a column family A column family is a container for grouping sets of related data together within on e table as shown in the following example Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 13 Table 8: Apache HBase Row Representation Table Column Family 1 Column Family 2 Column Family 3 row row key Column 1 Column 2 Column 3 Column 4 Column 5 Column 6 Apache HBase groups columns with the same general access patterns and size characteristics into 
column families to form a basic unit of separation For example in the following Person table you can group personal data into one column family called personal_info and the statistical data into a demographic column family Any other columns in the table would be grouped accordingly as well as shown in the following example Table 9: Example Person Table in Apache HBase Person Table personal_info demographic row key firstname lastname zipcode gender row 1 1001 Fname 1 Lname 1 00000 row 2 1002 Fname 2 Lname 2 M row 3 2002 Fname 3 Lname 3 10000 F Columns are addressed as a combination of the column family name and the column qualifier expressed as family:qualifier All members of a column family have the same prefix In the preceding example the firstname and lastname column qualifiers can be refe renced as personal_info:firstname and personal_info:lastname respectively Column families allow you to fetch only those columns that are required by a query All members of a column family are physically stored together on a disk This means that optimiz ation features such as performance tunings compression encodings and so on can be scoped at the column family level Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 14 The row key is a combination of user and game identifiers in the following Apache HBase GameScores table A row key can consist of mult iple parts concatenated to provide an immutable way of referring to entities From an Apache HBase modeling perspective the resulting table is tallnarrow This is because the table has few columns relative to the number of rows as shown in the following example Table 10: TallNarrow GameScores Apache HBase Table GameScores Table top_scores metrics row key score date wins loses row 1 1001 game01 67453 2013 12 09:17:24:31 73 21 row 2 1001 game02 98567 2013 12 11:14:14:37 98 27 row 3 1002 game01 43876 2013 12 15:19:24:39 12 23 row 4 2002 game02 65689 2013 10 01:17:14:41 23 54 Alternatively you can 
model the game identifier as a column qualifier in Apache HBase This approach facilitates precise column lookups and supports usage of filters to read data The result is a flatwide table with few rows relative to the number of col umns This concept of a flat wide Apache HBase table is shown in the following table Table 11: Flat Wide GameScores Apache HBase Table GameScores Table top_scores metrics row key gameId score top_score_date gameId wins loses row 1 1001 game01 98567 2013 12 11:14:14:37 game01 98 27 game02 43876 2013 12 15:19:24:39 game02 12 23 Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 15 GameScores Table row 2 1002 game01 67453 2013 12 09:17:24:31 game01 73 21 row 3 2002 game02 65689 2013 10 01:17:14:41 game02 23 54 For performance reasons it is important to keep the number of column families in your Apache HBase schema low Anything above three column families can potentially degrade performance The recommended best practice is to maintain a one column family in your s chemas and introduce a two column family and three column family only if data access is limited to a one column family at a time Note that Apache HBase does not impose any restrictions on row size Data Types Both Amazon DynamoDB and Apache HBase support unstructured datasets with a wide range of data types Amazon DynamoDB supports the data types shown in the following table: Table 12: Amazon DynamoDB Data Types Type Description Example (JSON Format) Scalar String Unicode with UTF8 binary encoding {""S"": ""Game01""} Number Positive or negative exact value decimals and integers {""N"": ""67453""} Binary Encoded sequence of bytes {""B"": ""dGhpcyB0ZXh0IGlzIGJhc2U2NC1l""} Boolean True or false {""BOOL"": true} Null Unknown or undefined state {""NULL"": true} Document List Ordered collection of values {""L"": [""Game01"" 67453]} Amazon Web Services Comparing the Use of Amazon DynamoDB and Apa che HBase for NoSQL Page 16 Type Description Example (JSON 
Format) Map Unordered collection of name value pairs {""M"": {""GameId"": {""S"": ""Game01""} ""TopScore"": {""N"": ""67453""}}} Multi valued String Set Unique set of strings {""SS"": [""Black""""Green] } Number Set Unique set of numbers {""NS"": [""422"""" 1987""] } Binary Set Unique set of binary values {""BS"": [""U3Vubnk=""""UmFpbnk=] } Each Amazon DynamoDB attribute can be a name value pair with exactly one value (scalar type) a complex data structure with nested attributes (document type) or a unique set of values (multi valued set type) Individual items in an Amazon DynamoDB table c an have any number of attributes Primary key attributes can only be scalar types with a single value and the only data types allowed are string number or binary Binary type attributes can store any binary data for example compressed data encrypted data or even images Map is ideal for storing JSON documents in Amazon DynamoDB For example in Table 6 Person could be represented as a map of person id that maps to detailed information about the person: name gender and a list of their previous a ddresses also represented as a map This is illustrated in the following script : { ""PersonId"": 1001 ""FirstName"": ""Fname 1"" ""LastName"": ""Lname 1"" ""Gender"": ""M"" ""Addresses"": [ { ""Street"": ""Main S t"" ""City"": ""Seattle"" ""Zipcode"": 98005 ""Type"": ""current"" } { ""Street"": ""9th S t"" ""City"": Seattle ""Zipcode"": 98005 ""Type"": ""past"" Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 17 } ] } In summary Apache HBase defines the following concepts: • Row —An atomic byte array or key/value container • Column—A key with in the key/value container inside a row • Column Family —Divides columns into related subsets of data that are stored together on disk • Timestamp —Apache HBase adds the concept of a fourth dimension column that is expressed as an explicit or implicit timestam p A timestamp is usually represented as a long integer in 
milliseconds • Value—A time versioned value in the key/value container This means that a cell can contain multiple versions of a value that can change over time Versions are stored in decreasing t imestamp with the most recent first Apache HBase supports a bytes in/bytes out interface This means that anything that can be converted into an array of bytes can be stored as a value Input could be strings numbers complex objects or even images as long as they can be rendered as bytes Consequently key/value pairs in Apache HBase are arbitrary arrays of bytes Because row keys and column qualifiers are also arbitrary arrays of bytes almost anything can serve as a row key or column qualifier from strings to binary representations of longs or even serialized data structures Column family names must comprise printable characters in human readable format This is because column family names are used as part of the directory name in the file system Furthermore column families must be declared up front at the time of schema definition Column qualifiers are not subjected to this restriction and can comprise any arbitrary binary characters and be created at runtime Indexing In general data i s indexed using a primary key for fast retrieval in both Amazon DynamoDB and Apache HBase Secondary indexes extend the basic indexing functionality and provide an alternate query path in addition to queries against the primary key Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 18 Amazon DynamoDB support s two kinds of secondary indexes on a table that already implements a partition and sort key : • Global secondary index —An index with a partition and optional sort key that can be different from those on the table • Local secondary index —An index that has the same partition key as the table but a different sort key You can define one or more global secondary indexes and one or more local secondary indexes per table For documents you can create a local 
secondary index or global secondary index on any top level JSON element In the example GameScores table introduced in the preceding section you can define LeaderBoardIndex as a global secondary index as follows: Table 13: Example Global Secondary Index in Amazon DynamoDB LeaderBoardIndex Index Key Attribute 1 GameId (Partition Key) TopScore (Sort Key) PersonId Game01 98567 1001 Game02 43876 1001 Game01 65689 1002 Game02 67453 2002 The LeaderBoardIndex shown in Table 13 defines GameId as its primary key and TopScore as its sort key It is not necessary for the index key to contain any of the key attributes from the source table However the table’s primary key attributes are always present in the global secondary index In this example PersonId is automatically projected or copied into the index With LeaderBoardIndex defined you can easily obtain a list of top scores for a specific game by simply querying it The output is ordered by TopScore the sort key You can choose to project additional attributes from the source table into the index A local secondary index on the other hand organizes data by the index sort key It provides an alternate query pat h for efficiently accessing data using a different sort key Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 19 You can define PersonTopScoresIndex as a local secondary index for the example GameScores table introduced in the preceding section The index contains the same partition key PersonId as the source table and defines TopScoreDate as its new sort key The old sort key value from the source table (in this example GameId ) is automatically projected or copied into the index but it is not a part of the index key as shown in the following table Table 14: Local Secondary Index in Amazon Dynamo DB PersonTopScoresIndex Index Key Attribute1 Attribute2 PersonId (Partition Key) TopScoreDate (New Sort Key) GameId (Old Sort Key as attribute) TopScore (Optional projected attribute) 1001 2013 
12 09:17:24:31 Game01 67453 1001 2013 12 11:14:14:37 Game02 98567 1002 2013 12 15:19:24:39 Game01 43876 2002 2013 10 01:17:14:41 Game02 65689 A local secondary index is a sparse index An index will only have an item if the index sort key attribute has a value With local secondary indexes any group of items that have the same partition key value in a table and all their associated local secondary indexes form an item collection There is a size restriction on item collections in a DynamoDB table For more infor mation see Item Collection Size Limit The main difference between a global secondary index and a local secondary index is that a global secondary index def ines a completely new partition key and optional sort index on a table You can define any attribute as the partition key for the global secondary index as long as its data type is scalar rather than a multi value set Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBas e for NoSQL Page 20 Additional highlights between global and local secondary indexes are captured in the following table Table 15: Global and secondary indexes Global Secondary Indexes Local Secondary Indexes Creation Can be created for existing tables (Online indexing supported) Only at table creation time (Online indexing not supported) Primary Key Values Need not be unique Must be unique Partition Key Different from primary table Same as primary table Sort Key Optional Required (different from Primary table) Provisioned Throughput Independent from primary table Dependent on primary table Writes Asynchronous Synchronous For more information on global and local secondary indexes in Amazon DynamoDB see Improving Data Access with Secondary Indexes In Apache HBase all row s are always sorted lexicographically by row key The sort is byteordered This means that each row key is compared on a binary level byte by byte from left to right Row keys are always unique and act as the primary index in Apache HBase Although Apac he HBase 
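The sparse-index behavior described above can be illustrated with a small in-memory sketch. This is a toy model of the GameScores table and its PersonTopScoresIndex, not the DynamoDB API: the helper `build_local_index` and the data layout are hypothetical, chosen only to show that items missing the index sort key never appear in a local secondary index, and that the index orders items by the new sort key.

```python
# Toy in-memory model of the GameScores table and a local secondary index.
# Illustrative only: build_local_index is a hypothetical helper, not a
# DynamoDB API. The item for person 2002 deliberately lacks TopScoreDate.
game_scores = [
    {"PersonId": "1001", "GameId": "Game01", "TopScore": 67453,
     "TopScoreDate": "2013-12-09:17:24:31"},
    {"PersonId": "1001", "GameId": "Game02", "TopScore": 98567,
     "TopScoreDate": "2013-12-11:14:14:37"},
    {"PersonId": "1002", "GameId": "Game01", "TopScore": 43876,
     "TopScoreDate": "2013-12-15:19:24:39"},
    {"PersonId": "2002", "GameId": "Game02", "TopScore": 65689},
]

def build_local_index(items, partition_key, index_sort_key):
    """A local secondary index is sparse: items that lack the index
    sort key attribute are simply absent from the index."""
    index = {}
    for item in items:
        if index_sort_key in item:
            index.setdefault(item[partition_key], []).append(item)
    for entries in index.values():
        # Within a partition, the index orders items by the new sort key.
        entries.sort(key=lambda i: i[index_sort_key])
    return index

idx = build_local_index(game_scores, "PersonId", "TopScoreDate")
# Items for person 1001, ordered by TopScoreDate rather than GameId:
print([i["GameId"] for i in idx["1001"]])   # ['Game01', 'Game02']
# The item without a TopScoreDate never entered the index:
print("2002" in idx)                        # False
```

The same toy shows why a query against this index, holding the partition key constant, returns a person's scores in date order without touching items that never set the date attribute.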
does not have native support for built in indexing models such as Amazon DynamoDB you can implement custom secondary indexes to serve as alternate query paths by using these techniques: • Create an index in another table —You can maintain a secondary table that is periodically updated However depending on the load strategy the risk with this method is that the secondary index can potentially become out of sync with the main table You can mitigate this risk if you build the secondary index while publishing data to the cluster and perform concurrent writes into the index table • Use the coprocessor framework —You can leverage the coprocessor framework to implement custom secondary indexes Coprocessors act like triggers that are similar to sto red procedures in RDBMS Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 21 • Use Apache Phoenix —Acts as a front end to Apache HBase to convert standard SQL into native HBase scans and queries and for secondary indexing In summary both Amazon DynamoDB and Apache HBase define data models that allow efficient storage of data to optimize query performance Amazon DynamoDB imposes a restriction on its item size to allow efficient processing and reduce costs Apache HBase uses the concept of column families to provide data locality for more efficient read operations Amazon DynamoDB supports both scalar and multi valued sets to accommodate a wide range of unstructured datasets Similarly Apache HBase stores its key/value pairs as arbitrary arrays of bytes giving it the flexibility to store any data type Amazon DynamoDB supports built in secondary indexes and automatically updates and synchronizes all indexes with their parent tables With Apache HBase you can implement and ma nage custom secondary indexes yourself From a data model perspective you can choose Amazon DynamoDB if your item size is relatively small Although Amazon DynamoDB provides a number of options to overcome row size restrictions 
Apache HBase is better equ ipped to handle large complex payloads with minimal restrictions Data Processing This section highlights foundational elements for processing and querying data within Amazon DynamoDB and Apache HBase Throughput Model Amazon DynamoDB uses a provisioned th roughput model to process data With this model you can specify your read and write capacity needs in terms of number of input operations per second that a table is expected to achieve During table creation time Amazon DynamoDB automatically partitions and reserves the appropriate amount of resources to meet your specified throughput requirements Automatic scaling for Amazon DynamoD B automate s capacity management and eliminates the guesswork involved in provisioning adequate capacity when creating new tables and global secondary indexes With automatic scaling enabled you can specify percent target utilization and DynamoDB will scale the provisioned capacity for reads and writes within the bounds to meet the target utilization percent For more information see Managing Throughput Capacity Automatically with DynamoDB Auto Scaling Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase fo r NoSQL Page 22 To decide on the required read and write throughput values for a table without auto scaling feature enabled consider the following factors: • Item size —The read and write capacity units that you specify are based on a predefined data item size per read or per write operation For more information about provisioned throughput data item size restrictions see Provisioned Throughput in Am azon DynamoDB • Expected read and write request rates —You must also determine the expected number of read and write operations your application will perform against the table per second • Consistency —Whether your application requires strongly consistent or eventually consistent reads is a factor in determining how many read capacity units you need to provision for your table For more 
information about consistency and Amazon DynamoDB see the Consistency Model section in this document • Global secondary indexes —The provisioned throughput settings of a global secondary index are separate from those of its parent table Therefore you must also consider the expe cted workload on the global secondary index when specifying the read and write capacity at index creation time • Local secondary indexes —Queries against indexes consume provisioned read throughput For more information see Provisioned Throughput Considerations for Local Secondary Indexes Although read and write requirements are specified at table creation time Amazon DynamoDB lets yo u increase or decrease the provisioned throughput to accommodate load with no downtime With Apache HBase the number of nodes in a cluster can be driven by the required throughput for reads and/or writes The available throughput on a given node can vary depending on the data specifically: • Key/value sizes • Data access patterns • Cache hit rates • Node and system configuration Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 23 You should plan for peak load if load will likely be the primary factor that increases node count within an Apache HBase cluster Consistency Model A database consistency model determines the manner and timing in which a successful write or update is reflected in a subsequent read operation of that same value Amazon DynamoDB lets you specify the desired consistency characteristics f or each read request within an application You can specify whether a read is eventually consistent or strongly consistent The eventual consistency option is the default in Amazon DynamoDB and maximizes the read throughput However an eventually consiste nt read might not always reflect the results of a recently completed write Consistency across all copies of data is usually reached within a second A strongly consistent read in Amazon DynamoDB returns a result that 
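The DynamoDB sizing factors listed above (item size, request rates, and consistency) can be combined into a rough back-of-envelope estimator. The sketch below assumes DynamoDB's published unit sizes: one write capacity unit covers a write of up to 1 KB per second, and one read capacity unit covers a strongly consistent read of up to 4 KB per second, with an eventually consistent read costing half. The helper name is illustrative, not an AWS API.

```python
import math

# Rough estimator for provisioned capacity units, assuming DynamoDB's
# published unit sizes: 1 KB per write capacity unit, 4 KB per strongly
# consistent read capacity unit (eventually consistent reads cost half).
# capacity_units is a hypothetical helper for illustration only.
def capacity_units(item_size_kb, reads_per_sec, writes_per_sec,
                   strongly_consistent=True):
    read_units_per_item = math.ceil(item_size_kb / 4)   # round up to 4 KB units
    write_units_per_item = math.ceil(item_size_kb / 1)  # round up to 1 KB units
    rcu = reads_per_sec * read_units_per_item
    if not strongly_consistent:
        rcu = math.ceil(rcu / 2)  # eventually consistent reads cost half
    wcu = writes_per_sec * write_units_per_item
    return rcu, wcu

# 3 KB items at 80 reads/s and 20 writes/s:
print(capacity_units(3, 80, 20))                             # (80, 60)
print(capacity_units(3, 80, 20, strongly_consistent=False))  # (40, 60)
```

The same arithmetic shows why relaxing reads to eventual consistency halves the read capacity a table needs, while write capacity is unaffected by the consistency choice.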
reflects all writes that received a su ccessful response prior to the read To get a strongly consistent read result you can specify optional parameters in a request It takes more resources to process a strongly consistent read than an eventually consistent read For more information about re ad consistency see Data Read and Consistency Considerations Apache HBase reads and writes are strongly consistent This means that all reads and writes to a single row in Apache HBase are atomic Each concurrent reader and writer can make safe assumptions about the state of a row Multi versioning and time stamping in Apache HBase contribute to its strongly consistent model Transaction Model Unlike RDBMS NoSQL databases typically have no domain specific language such as SQL to query data Amazon DynamoDB and Apache HBase provide simple application programming interfaces (APIs) to perform the standard create read update and delete (CRUD) o perations Amazon DynamoDB Transactions support coordinated all ornothing changes to multiple items both within and across tables Transactions provide atomicity consistency isolation and durability (ACID) in DynamoDB helping you to maintain data correctness in your applications Apache HBase integrates with Apache Phoenix to add cross row and cross table transaction support with full ACID semantics Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for No SQL Page 24 Amazon DynamoDB provides atomic item and attribute operations for adding updating or deleting data Further item level transactions can specify a condition that must be satisfied before that transaction is fulfilled For example you can choose to update an item only if it already has a ce rtain value Conditional operations allow you to implement optimistic concurrency control systems on Amazon DynamoDB For conditional updates Amazon DynamoDB allows atomic increment and decrement operations on existing scalar values without interfering wi th other write requests For 
For more information about conditional operations, see Conditional Writes.
Apache HBase also supports atomic high update rates (the classic read-modify-write) within a single row key, enabling storage for high-frequency counters. Unlike Amazon DynamoDB, Apache HBase uses multi-version concurrency control to implement updates: an existing piece of data is not overwritten with a new one; instead, it becomes obsolete when a newer version is added. Row data access in Apache HBase is atomic and can include any number of columns, but there are no further guarantees or transactional features spanning multiple rows. Similar to Amazon DynamoDB, Apache HBase supports only single-row transactions.
Amazon DynamoDB has an optional feature, DynamoDB Streams, to capture table activity. Data modification events, such as add, update, or delete, can be captured in near real time in a time-ordered sequence. If a stream is enabled on a DynamoDB table, each event is recorded as a stream record along with the name of the table, the event timestamp, and other metadata. For more information, see the section on Capturing Table Activity with DynamoDB Streams.
Amazon DynamoDB Streams can be used with AWS Lambda to create trigger code that executes automatically whenever an event of interest (add, update, delete) appears in a stream. This pattern enables powerful solutions such as data replication within and across AWS Regions, materialized views of data in DynamoDB tables, data analysis using Amazon Kinesis, and notifications via Amazon Simple Notification Service (Amazon SNS) or Amazon Simple Email Service (Amazon SES), among others. For more information, see DynamoDB Streams and AWS Lambda Triggers.
Table Operations
Amazon DynamoDB and Apache HBase provide scan operations to support large-scale analytical processing. A scan operation is similar to cursors in an RDBMS. By taking advantage of the underlying sequential, sorted storage layout, a scan operation can
easily iterate over wide ranges of records or entire tables. Applying filters to a scan operation can effectively narrow the result set and optimize performance.
Amazon DynamoDB uses parallel scanning to improve the performance of a scan operation. A parallel scan logically subdivides an Amazon DynamoDB table into multiple segments and then processes each segment in parallel. Rather than using the default scan operation in Apache HBase, you can implement a custom parallel scan by means of the API to read rows in parallel.
Both Amazon DynamoDB and Apache HBase provide a Query API for complex query processing in addition to the scan operation. The Query API in Amazon DynamoDB is accessible only in tables that define a composite primary key. In Apache HBase, bloom filters improve Get operations, and the potential performance gain increases with the number of parallel reads.
In summary, Amazon DynamoDB and Apache HBase have similar data processing models in that they both support only atomic single-row transactions. Both databases also provide batch operations for bulk data processing across multiple rows and tables.
One key difference between the two databases is the flexible provisioned throughput model of Amazon DynamoDB. The ability to increase capacity when you need it and decrease it when you are done is useful for processing variable workloads with unpredictable peaks. For workloads that need high update rates to perform data aggregations or maintain counters, Apache HBase is a good choice, because its multi-version concurrency control mechanism contributes to its strongly consistent reads and writes. Amazon DynamoDB gives you the flexibility to specify whether you want your read requests to be eventually consistent or strongly consistent, depending on your specific workload.
Architecture
This section summarizes key architectural components of Amazon DynamoDB and Apache HBase.
Amazon DynamoDB Architecture Overview
At a
high level, Amazon DynamoDB is designed for high availability, durability, and consistently low latency (typically in the single-digit milliseconds).
Amazon DynamoDB runs on a fleet of AWS-managed servers that leverage solid state drives (SSDs) to create an optimized, high-density storage platform. This platform decouples performance from table size and eliminates the need for the working set of data to fit in memory, while still returning consistent, low-latency responses to queries. As a managed service, Amazon DynamoDB abstracts its underlying architectural details from the user.
Apache HBase Architecture Overview
Apache HBase is typically deployed on top of HDFS. Apache ZooKeeper is a critical component for maintaining configuration information and managing the entire Apache HBase cluster. The three major Apache HBase components are the following:
• Client API — Provides programmatic access to Data Manipulation Language (DML) for performing CRUD operations on HBase tables.
• Region servers — HBase tables are split into regions and are served by region servers.
• Master server — Responsible for monitoring all region server instances in the cluster and is the interface for all metadata changes.
Apache HBase stores data in indexed store files called HFiles on HDFS. The store files are sequences of blocks with a block index stored at the end for fast lookups. The store files provide an API to access specific values as well as to scan ranges of values given a start and end key.
During a write operation, data is first written to a commit log called a write-ahead log (WAL) and then moved into memory in a structure called a MemStore. When the size of the MemStore exceeds a given maximum value, it is flushed as an HFile to disk. Each time data is flushed from MemStores to disk, new HFiles must be created. As the number of HFiles builds up, a compaction process merges the files into fewer, larger files.
A read operation is essentially a merge of data stored in the MemStores and in the HFiles. The WAL is never used in the read operation; it is meant only for recovery purposes if a server crashes before writing the in-memory data to disk.
A region in Apache HBase acts as a store per column family. Each region contains contiguous ranges of rows stored together. Regions can be merged to reduce the number of store files. A large store file that exceeds the configured maximum store file size can trigger a region split. A region server can serve multiple regions, but each region is mapped to exactly one region server. Region servers handle reads and writes, as well as keeping data in memory until enough is collected to warrant a flush. Clients communicate directly with region servers to handle all data-related operations.
The master server is responsible for monitoring and assigning regions to region servers and uses Apache ZooKeeper to facilitate this task. Apache ZooKeeper also serves as a registry for region servers and a bootstrap location for region discovery. The master server is also responsible for handling critical functions such as load balancing of regions across region servers, region server failover, and completing region splits, but it is not part of the actual data storage or retrieval path.
You can run Apache HBase in a multi-master environment. All masters compete to run the cluster in multi-master mode. However, if the active master shuts down, the remaining masters contend to take over the master role.
Apache HBase on Amazon EMR Architecture Overview
Amazon EMR defines the concept of instance groups, which are collections of Amazon EC2 instances. The Amazon EC2 virtual servers perform roles analogous to the master and slave nodes of Hadoop. For best performance, Apache HBase clusters should run on at least two Amazon EC2 instances. There are three types of instance
groups in an Amazon EMR cluster:
• Master — Contains one master node that manages the cluster. You can use the Secure Shell (SSH) protocol to access the master node if you want to view logs or administer the cluster yourself. The master node runs the Apache HBase master server and Apache ZooKeeper.
• Core — Contains one or more core nodes that run HDFS and store data. The core nodes run the Apache HBase region servers.
• Task — (Optional) Contains any number of task nodes.
Managed Apache HBase on Amazon EMR (Amazon S3 Storage Mode)
When you run Apache HBase on Amazon EMR with Amazon S3 storage mode enabled, the HBase root directory is stored in Amazon S3, including HBase store files and table metadata. For more information, see HBase on Amazon S3 (Amazon S3 Storage Mode).
For production workloads, EMRFS consistent view is recommended when you enable HBase on Amazon S3. Not using consistent view may result in performance impacts for specific operations.
Partitioning
Amazon DynamoDB stores three geographically distributed replicas of each table to enable high availability and data durability within a region. Data is auto-partitioned, primarily using the partition key. As throughput and data size increase, Amazon DynamoDB will automatically repartition and reallocate data across more nodes. Partitions in Amazon DynamoDB are fully independent, resulting in a shared-nothing cluster; however, provisioned throughput is divided evenly across the partitions.
A region is the basic unit of scalability and load balancing in Apache HBase. Region splitting and subsequent load balancing follow this sequence of events:
1. Initially, there is only one region for a table, and as more data is added to it, the system monitors the load to ensure that the configured maximum size is not exceeded.
2. If the region size exceeds the configured limit, the system dynamically splits the region into two at the row key in the middle of
the region, creating two roughly equal halves.
3. The master then schedules the new regions to be moved off to other servers for load balancing, if required.
Behind the scenes, Apache ZooKeeper tracks all activities that take place during a region split and maintains the state of the region in case of server failure. Apache HBase regions are equivalent to the range partitions used in RDBMS sharding. Regions can be spread across many physical servers, which distributes the load and results in scalability.
In summary, as a managed service, the architectural details of Amazon DynamoDB are abstracted from you to let you focus on your application details. With the self-managed Apache HBase deployment model, it is crucial to understand the underlying architectural details to maximize scalability and performance. AWS gives you the option to offload Apache HBase administrative overhead if you opt to launch your cluster on Amazon EMR.
Performance Optimizations
Amazon DynamoDB and Apache HBase are inherently optimized to process large volumes of data with high performance. NoSQL databases typically use an on-disk, column-oriented storage format for fast data access and reduced I/O when fulfilling queries. This performance characteristic is evident in both Amazon DynamoDB and Apache HBase.
Amazon DynamoDB stores items with the same partition key contiguously on disk to optimize fast data retrieval. Similarly, Apache HBase regions contain contiguous ranges of rows stored together to improve read operations. You can enhance performance even further by applying techniques that maximize throughput at reduced cost, both at the infrastructure and application tiers.
Tip: A recommended best practice is to monitor Amazon DynamoDB and Apache HBase performance metrics to proactively detect and diagnose performance bottlenecks.
The following section focuses on several common performance optimizations
that are specific to each database or deployment model.
Amazon DynamoDB Performance Considerations
Performance considerations for Amazon DynamoDB focus on how to define an appropriate read and write throughput and how to design a suitable schema for an application. These performance considerations span both the infrastructure level and the application tier.
On-Demand Mode – No Capacity Planning
Amazon DynamoDB on-demand is a flexible billing option capable of serving thousands of requests per second without capacity planning. For on-demand mode tables, you don't need to specify how much read and write throughput you expect your application to perform. DynamoDB tables using on-demand capacity mode automatically adapt to your application's traffic volume. On-demand capacity mode instantly accommodates up to double the previous peak traffic on a table. For more information, see On-Demand Mode.
Tip: DynamoDB recommends spacing your traffic growth over at least 30 minutes before driving more than 100,000 reads per second.
Provisioned Throughput Considerations
Factors that must be taken into consideration when determining the appropriate throughput requirements for an application are item size, expected read and write rates, consistency, and secondary indexes, as discussed in the Throughput Model section of this whitepaper.
If an application performs more reads per second or writes per second than a table's provisioned throughput capacity allows, requests above the provisioned capacity will be throttled. For instance, if a table's write capacity is 1,000 units and an application performs 1,500 writes per second at the maximum data item size, Amazon DynamoDB will allow only 1,000 writes per second to go through, and the extra requests will be throttled.
Tip: For applications where capacity requirements increase or decrease gradually and the traffic stays at the elevated or depressed level for at least
several minutes, manage read and write throughput capacity automatically using the auto scaling feature. With any change in traffic pattern, DynamoDB will scale the provisioned capacity up or down, within a specified range, to match the target capacity utilization you enter for a table or a global secondary index.
Read Performance Considerations
With the launch of Amazon DynamoDB Accelerator (DAX), you can now get microsecond access to data that lives in Amazon DynamoDB. DAX is an in-memory cache in front of DynamoDB with an API identical to DynamoDB's. Because reads can be served from the DAX layer on a cache hit, and the table serves reads only on a cache miss, the provisioned read capacity units can be lowered for cost savings.
Tip: Based on the size of your tables and your data access pattern, consider provisioning a single DAX cluster for multiple smaller tables, multiple DAX clusters for a single bigger table, or whatever hybrid caching strategy works best for your application.
Primary Key Design Considerations
Primary key design is critical to the performance of Amazon DynamoDB. When storing data, Amazon DynamoDB divides a table's items into multiple partitions and distributes the data primarily based on the partition key element. The provisioned throughput associated with a table is also divided evenly among the partitions, with no sharing of provisioned throughput across partitions.
Tip: To efficiently use the overall provisioned throughput, spread the workload across partition key values.
For example, if a table has a very small number of heavily accessed partition key elements, possibly even a single very heavily used partition key element, traffic can become concentrated on a single partition and create "hot spots" of read and write activity within a single item collection. In extreme cases, throttling can occur if a single partition exceeds its maximum capacity.
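The effect of partition-key cardinality on hot spots can be sketched without any AWS dependency. The hash below is a toy stand-in for DynamoDB's internal partitioning scheme (which is not public), but it shows why many distinct partition key values spread traffic across partitions while a single heavily used key concentrates every request on one partition:

```python
import hashlib
from collections import Counter

def partition_for(key, num_partitions):
    """Toy stand-in for hash-based partition placement: a uniform hash of
    the partition key maps each item to one of num_partitions partitions."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

# Many distinct keys fan requests out across all partitions...
spread = Counter(partition_for(f"user-{i}", 4) for i in range(10_000))
# ...while one heavily used key sends every request to the same partition.
hot = Counter(partition_for("user-42", 4) for _ in range(10_000))
assert len(hot) == 1      # a single hot key lands on exactly one partition
assert len(spread) == 4   # distinct keys reach every partition
```

Because each partition receives only its share of the table's provisioned throughput, the `hot` workload above would be limited to a single partition's capacity regardless of how much throughput the table as a whole has.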
To better accommodate uneven access patterns, Amazon DynamoDB adaptive capacity enables your application to continue reading and writing to hot partitions without being throttled, provided that traffic does not exceed your table's total provisioned capacity or the partition maximum capacity. Adaptive capacity works by automatically and instantly increasing throughput capacity for partitions that receive more traffic.
To get the most out of Amazon DynamoDB throughput, build tables where the partition key element has a large number of distinct values, and ensure that values are requested fairly uniformly and as randomly as possible. The same guidance applies to global secondary indexes: choose partition and sort keys that produce uniform workloads to achieve the overall provisioned throughput.
Local Secondary Index Considerations
When querying a local secondary index, the number of read capacity units consumed depends on how the data is accessed. For example, when you create a local secondary index and project non-key attributes into the index from the parent table, Amazon DynamoDB can retrieve these projected attributes efficiently.
In addition, a query against a local secondary index can also retrieve attributes that are not projected into the index. Avoid index queries that read attributes not projected into the local secondary index: fetching attributes from the parent table that are not specified in the local secondary index causes additional latency in query responses and incurs a higher provisioned throughput cost.
Tip: Project frequently accessed non-key attributes into a local secondary index to avoid fetches and improve query performance.
Maintain multiple local secondary indexes in tables that are updated infrequently but are queried using many different criteria to improve query performance. This guidance does not apply to tables that
experience heavy write activity. If very high write activity to the table is expected, one option to consider is to minimize interference from reads by not reading from the table at all. Instead, create a global secondary index with a structure identical to that of the table, and then direct all queries to the index rather than to the table.
Global Secondary Index Considerations
If a query exceeds the provisioned read capacity of a global secondary index, that request will be throttled. Similarly, if a request performs heavy write activity on the table, but a global secondary index on that table has insufficient write capacity, then the write activity on the table will be throttled.
Tip: For a table write to succeed, the provisioned throughput settings for the table and its global secondary indexes must have enough write capacity to accommodate the write; otherwise, the write will be throttled.
Global secondary indexes support eventually consistent reads, each of which consumes one half of a read capacity unit. The number of read capacity units is calculated from the sum of all projected attribute sizes across all of the items returned in the index query results. For write activity, the total provisioned throughput cost of a write is the sum of the write capacity units consumed by writing to the table and those consumed by updating the global secondary indexes.
Apache HBase Performance Considerations
Apache HBase performance tuning spans hardware, network, Apache HBase configurations, Hadoop configurations, and the Java Virtual Machine garbage collection settings. It also includes applying best practices when using the client API. To optimize performance, it is worthwhile to monitor Apache HBase workloads with tools such as Ganglia to identify performance problems early and apply recommended best practices based on observed performance metrics.
Memory Considerations
Memory is the most restrictive
element in Apache HBase, and performance tuning techniques focus on optimizing memory consumption.
From a schema design perspective, it is important to bear in mind that every cell stores its value fully qualified, with its full row key, column family, column name, and timestamp on disk. If row and column names are long, the cell value coordinates can become very large and take up more of the memory allotted to Apache HBase. This can have severe performance implications, especially if the dataset is large.
Tip: Keep the number of column families small to improve performance and reduce the costs associated with maintaining HFiles on disk.
Apache HBase Configurations
Apache HBase supports built-in mechanisms to handle region splits and compactions. Split/compaction storms can occur when multiple regions grow at roughly the same rate and eventually split at about the same time. This can cause a large spike in disk I/O because of the compactions needed to rewrite the split regions.
Tip: Rather than relying on Apache HBase to automatically split and compact the growing regions, you can perform these tasks manually. If you handle the splits and compactions manually, you can perform them in a time-controlled manner and stagger them across all regions to spread the I/O load as much as possible, avoiding potential split/compaction storms. With the manual option, you can further alleviate any problematic split/compaction storms and gain additional performance.
Schema Design
A region can run hot when a write pattern does not distribute load across all servers evenly. This is a common scenario when processing streams of events with time-series data: the gradually increasing nature of time-series data can cause all incoming data to be written to the same region. This concentrated write activity on a single server can slow down the overall performance of the cluster. This is
because inserting data is now bound to the performance of a single machine. This problem is easily overcome by employing key design strategies such as the following:
• Applying salting prefixes to keys; in other words, prepending a random number to a row
• Randomizing the key with a hash function
• Promoting another field to prefix the row key
These techniques can achieve a more evenly distributed load across all servers.
Client API Considerations
There are a number of optimizations to take into consideration when reading or writing data from a client using the Apache HBase API. For example, when performing a large number of PUT operations, you can disable the auto-flush feature; otherwise, the PUT operations will be sent one at a time to the region server.
Whenever you use a scan operation to process large numbers of rows, use filters to limit the scan scope. Using filters can potentially improve performance, because column over-selection can incur a nontrivial performance penalty, especially over large data sets.
Tip: As a recommended best practice, set the scanner caching to a value greater than the default of 1, especially if Apache HBase serves as an input source for a MapReduce job. Setting the scanner caching value to 500, for example, will transfer 500 rows at a time to the client to be processed, but this might cost more in memory consumption.
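The key-design strategies listed under Schema Design above (salting, hashing, field promotion) can be sketched in a few lines. The snippet below is an illustrative sketch, not HBase code; the two-digit prefix width and bucket count are arbitrary choices:

```python
import hashlib

def salted_key(row_key, buckets=8):
    """Prefix a monotonically increasing row key (such as a timestamp) with
    a small, hash-derived salt so that sequential keys fan out across
    regions instead of piling onto one hot region."""
    salt = int(hashlib.md5(row_key.encode()).hexdigest(), 16) % buckets
    return f"{salt:02d}-{row_key}"

# Sequential timestamps now sort into different salt buckets:
keys = [salted_key(f"2024-01-01T00:00:{s:02d}") for s in range(60)]
prefixes = {k.split("-", 1)[0] for k in keys}
assert len(prefixes) > 1  # writes are no longer confined to one key range
```

The trade-off is on the read side: because a logical time range is now scattered across salt buckets, a reader must issue one scan per bucket and merge the results.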
Compression Techniques
Data compression is an important consideration in Apache HBase production workloads. Apache HBase natively supports a number of compression algorithms that you can enable at the column family level.
Tip: Enabling compression yields better performance. In general, the compute cost of performing compression and decompression tasks is less than the overhead of reading more data from disk.
Apache HBase on Amazon EMR (HDFS Mode)
Apache HBase on Amazon EMR is optimized to run on AWS with minimal administration overhead. You can still access the underlying infrastructure and manually configure Apache HBase settings, if desired.
Cluster Considerations
You can resize an Amazon EMR cluster using core and task nodes. You can add more core nodes, if desired. Task nodes are useful for managing the Amazon EC2 instance capacity of a cluster: you can increase capacity to handle peak loads and decrease it later during demand lulls.
Tip: As a recommended best practice in production workloads, launch Apache HBase on one cluster and any analysis tools, such as Apache Hive, on a separate cluster to improve performance. Managing two separate clusters ensures that Apache HBase has ready access to the infrastructure resources it requires.
Amazon EMR provides a feature to back up Apache HBase data to Amazon S3. You can perform either manual or automated backups, with options for full or incremental backups as needed.
Tip: As a best practice, every production cluster should always take advantage of the backup feature available on Amazon EMR.
Hadoop and Apache HBase Configurations
You can use a bootstrap action to install additional software or change Apache HBase or Apache Hadoop configuration settings on Amazon EMR. Bootstrap actions are scripts that run on the cluster nodes when Amazon EMR launches the cluster; they run before Hadoop starts and before the node begins processing data. You can write custom bootstrap actions or use predefined bootstrap actions provided by Amazon EMR. For example, you can install Ganglia to monitor Apache HBase performance metrics using a predefined bootstrap action on Amazon EMR.
Apache HBase on Amazon EMR (Amazon S3 Storage Mode)
When you run Apache HBase on Amazon EMR with Amazon S3 storage mode enabled, keep in mind the recommended best practices discussed in this section.
Read Performance Considerations
With Amazon S3 storage mode enabled, Apache HBase region servers use the MemStore to store data writes in memory and use write-ahead logs to store data writes in HDFS before the data is written to HBase StoreFiles in Amazon S3. Reading records directly from the StoreFile in Amazon S3 results in significantly higher latency and higher standard deviation than reading from HDFS.
Amazon S3 scales to support very high request rates. If your request rate grows steadily, Amazon S3 automatically partitions your buckets as needed to support higher request rates. However, the maximum request rates for Amazon S3 are lower than what can be achieved from the local cache. For more information about Amazon S3 performance, see Performance Optimization.
For read-heavy workloads, caching data in memory or in on-disk caches in Amazon EC2 instance storage is recommended. Because Apache HBase region servers use the BlockCache to store data reads in memory and the BucketCache to store data reads on EC2 instance storage, you can choose an EC2 instance type with sufficient instance store. In addition, you can add Amazon Elastic Block Store (Amazon EBS) storage to accommodate your required cache size. You can increase the BucketCache size on attached instance stores and EBS volumes using the hbase.bucketcache.size property.
Write Performance Considerations
As discussed in the preceding section, the frequency of MemStore flushes and the number of StoreFiles present during minor and major compactions can contribute significantly to an increase in region server response times and, consequently, impact write performance. Consider increasing the size of the MemStore flush and the HRegion block multiplier, which increases the elapsed time between major compactions, for optimal write performance. Apache HBase compactions and region servers perform optimally when fewer StoreFiles need to be compacted. You may get better performance using larger file block sizes (but less than 5 GB) to trigger the Amazon S3 multipart upload functionality in EMRFS.
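The write path this section keeps referring to (WAL, then MemStore, then a flushed store file, with compaction later merging small files) can be illustrated with a toy model. All names and the flush threshold below are invented for illustration; this is not HBase code:

```python
class ToyRegionServer:
    """Minimal sketch of the HBase write path: every write goes to a
    write-ahead log for durability, then into an in-memory MemStore;
    when the MemStore reaches its flush size, it is written out as a
    sorted, immutable store file."""

    def __init__(self, flush_size=3):
        self.wal = []          # write-ahead log, used only for crash recovery
        self.memstore = {}     # in-memory buffer (a dict here for brevity)
        self.store_files = []  # flushed, immutable store files
        self.flush_size = flush_size

    def put(self, row_key, value):
        self.wal.append((row_key, value))  # durability first
        self.memstore[row_key] = value
        if len(self.memstore) >= self.flush_size:
            self.flush()

    def flush(self):
        # Each flush emits a new sorted store file; a compaction process
        # would later merge many small files into fewer, larger ones.
        self.store_files.append(dict(sorted(self.memstore.items())))
        self.memstore = {}


rs = ToyRegionServer(flush_size=3)
for i in range(7):
    rs.put(f"row-{i}", i)
# Seven puts with a flush size of 3 leave two store files on "disk" and
# one unflushed row in the MemStore.
```

The model makes the tuning advice above concrete: a larger flush size means fewer, larger store files per unit of write traffic, and therefore less frequent compaction work.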
In summary, whether you are running a managed NoSQL database, such as Amazon DynamoDB or Apache HBase on Amazon EMR, or managing your Apache HBase cluster yourself on Amazon EC2 or on premises, you should take performance optimizations into consideration if you want to maximize performance at reduced cost. The key difference between a hosted NoSQL solution and managing it yourself is that a managed solution, such as Amazon DynamoDB or Apache HBase on Amazon EMR, lets you offload the bulk of the administration overhead so that you can focus on optimizing your application.
If you are a developer who is getting started with NoSQL, Amazon DynamoDB or the hosted Apache HBase on Amazon EMR solution are suitable options, depending on your use case. For developers with in-depth Apache Hadoop/Apache HBase knowledge who need full control of their Apache HBase clusters, the self-managed Apache HBase deployment model offers the most flexibility from a cluster management standpoint.
Conclusion
Amazon DynamoDB lets you offload operating and scaling a highly available, distributed database cluster, making it a suitable choice for today's real-time, web-based applications. As a managed service, Apache HBase on Amazon EMR is optimized to run on AWS with minimal administration overhead. For advanced users who want to retain full control of their Apache HBase clusters, the self-managed Apache HBase deployment model is a good fit.
Amazon DynamoDB and Apache HBase exhibit inherent characteristics that are critical for successfully processing massive amounts of data. With use cases ranging from batch-oriented processing to real-time data serving, Amazon DynamoDB and Apache HBase are both optimized to handle large datasets. However, knowing your dataset and access patterns is key to choosing the right NoSQL database for your workload.
Contributors
Contributors to
this document include:
• Wangechi Doble, Principal Solutions Architect, Amazon Web Services
• Ruchika Abbi, Solutions Architect, Amazon Web Services
Further Reading
For additional information, see:
• Amazon DynamoDB Developer Guide
• Amazon EC2 User Guide
• Amazon EMR Management Guide
• Amazon EMR Migration Guide
• Amazon S3 Developer Guide
• HBase: The Definitive Guide, by Lars George
• The Apache HBase™ Reference Guide
• Dynamo: Amazon's Highly Available Key-value Store
Document Revisions
• January 2020: Amazon DynamoDB foundational features and transaction model updates
• November 2018: Amazon DynamoDB, Apache HBase on EMR, and template updates
• September 2014: First publication

Configuring Amazon RDS as an Oracle PeopleSoft Database
July 2019
Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2019 Amazon Web Services, Inc. and DLZP Group. All rights reserved.
Contents
About this Guide 4
Introduction 1
Prerequisites 2
Decide Which AWS Region to Use 2
Identify VPC and Subnets 2
Validate IAM Permissions 2
Determine the Size of the Database 3
Set Up the AWS Command Line Interface (Optional) 3
Certification, Licensing, and Availability 4
PeopleSoft Certification 4
Oracle Licensing 4
Amazon RDS for Oracle Availability 4
Configuring the
Database Instance 6
Create Security Groups 6
Create a DB Subnet Group 9
Create an Option Group 11
Create a Parameter Group 13
Modifying Parameters 15
Create the Database Instance 17
Create a DNS Alias for the Database Instance 27
Running the PeopleSoft DB Creation Scripts 30
Editing the Database Scripts 30
Conclusion 35
References 35
Contributors 35
Document Revisions 35
About this Guide
Amazon Web Services (AWS) provides a comprehensive set of services and tools for deploying enterprise-grade solutions in a rapid, reliable, and cost-effective manner. Oracle Database is a widely used relational database management system that is deployed with many Oracle applications of all sizes to manage various forms of data in many phases of business transactions. In this guide, we describe the preferred method for configuring an Amazon Relational Database Service (Amazon RDS) for Oracle Database as a back-end database for Oracle PeopleSoft Enterprise, a widely used enterprise resource planning (ERP) application.
Amazon Web Services – Configuring Amazon RDS as an Oracle PeopleSoft Database – Page 1
Introduction
An Amazon Relational Database Service (Amazon RDS) for Oracle Database provides scalability, performance monitoring, and backup and restore support. Deploying an Amazon RDS for Oracle Database in multiple Availability Zones (AZs) simplifies creating a highly available architecture, because a Multi-AZ deployment contains built-in support for automated failover from your primary database to a synchronously replicated secondary database in an alternative AZ. Amazon RDS for Oracle always provides the latest version of Oracle Database with the latest patch set updates (PSUs) and manages the database upgrade process on your schedule, eliminating manual database upgrade and patching tasks.
You can use Oracle PeopleSoft Enterprise with Amazon RDS and the preferred Oracle Database edition (using your own license or a license managed by AWS) to create a production Amazon RDS for
Oracle Database instance or the Standard Edition /Standard Edition One/Standard Edition Two to create Amazon RDS for Oracle preproduction environments Before you can use the PeopleSoft components you must create and populate schemas for them in your Amazon RDS for Oracle Database To do so use the Amazon RDS console or AWS C ommand Line Interface (AWS CLI) to launch your database (DB) instance After the instance is created you need to modify the delivered PeopleSoft Database Creation Scripts and run them against the Amazon RDS for Oracle Database instance After completing the procedure s described in this guide you can leverage the manageability feature s of Amazon RDS for Oracle —such as multiple Availability Zones for high availability hourly pricing of an Oracle D atabase and a virtual private cloud ( VPC) for network security —while operating the PeopleSoft Enterprise application on AWS Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 2 Prerequisites Before you create an Amazon RDS for Oracle Database instance you need to make some decisions about your configuration and complete some basic tasks You will use the decisions you make in this section later to configure your Amazon RDS for Oracle Database instance Decide Which AWS Region to Use Decide which of the available AWS Regions you want to use for your workload When choosing a Region consid er the following factors: • Latency between the end users and the AWS Region 1 • Latency between your data center and the AWS Region This is one of the most critical factors when you have PeopleSoft running in the cloud and backends running on premises • AWS cost: The AWS service cost varies depending on the Region • Legislation and compliance : There might be restrictions on which count ry your customers ' data can be stored in Identify VPC and Subnets Determine which VPC and subnets you will be using to deploy your resources If you don’t have a VPC you can create an Amazon Virtual Private 
Cloud (Amazon VPC) by referring to the Amazon Virtual Private Cloud User Guide 2 NOTE: If creating an Amazon VPC follow Step 1: Create the VPC from the Amazon Virtual Private Cloud User Guide You will be creating a security group using this guide Validate IAM Permissions You must have AWS Identity and Access Management (IAM) permissions to perform the actions described in this guide You will need permissions to configure the following AWS services : • Amazon Virtual Private Cloud3 • Amazon Elastic Compute Cloud (Amazon EC2)4 Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 3 • Amazon Relational Database Service5 • Amazon Route 536 Determine the Size of the D atabase Determine the database (DB) size you will require for the installation Table 1 lists various DB instance classes by size of the PeopleSoft Environment Note that this table is provided only as a guideline You should validate your individual class size requirements against your actual usage For a current listing of available instance classes refer to Amazon RDS for Oracle Pricing Table 1: DB instan ce classes by size of the PeopleSoft environment DB Instance Class Notes medium Ideal for a small PeopleSoft demo/dev environment large Ideal for a medium PeopleSoft environment: <100 users xlarge Ideal for a medium PeopleSoft environment: <1000 users 2xlarge Ideal for a medium PeopleSoft environment: <10000 users 4xlarge Ideal for a large PeopleSoft environment: <50000 users 8xlarge Ideal for a very large PeopleSoft environment: <250000 users Set Up the AWS C ommand Line Interface (Optional) You can use either the AWS Management Console or the AWS CLI to perform the tasks described in this guide To use the AWS CLI ensure that you have installed AWS CLI and that you have either an Amazon EC2 instance that has an AWS IAM role associated with it (recommended) or an access key ID and secret key Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 4 
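To make Table 1's guideline concrete, the sketch below maps an expected concurrent user count to an instance class size. The `size_for_users` function name is ours, not part of any AWS tooling, and the thresholds are taken directly from the table, which remains a guideline only.

```shell
#!/bin/sh
# size_for_users: hypothetical helper mapping an expected PeopleSoft user
# count to a DB instance class size, per the Table 1 guideline.
# The "medium" class is reserved for demo/dev environments, so the smallest
# size this returns is "large".
size_for_users() {
  users=$1
  if   [ "$users" -lt 100 ]; then   echo large
  elif [ "$users" -lt 1000 ]; then  echo xlarge
  elif [ "$users" -lt 10000 ]; then echo 2xlarge
  elif [ "$users" -lt 50000 ]; then echo 4xlarge
  else                              echo 8xlarge
  fi
}

size_for_users 500     # -> xlarge
size_for_users 20000   # -> 4xlarge
```

Validate any class chosen this way against your actual usage, as the section above advises.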
Certification, Licensing, and Availability

Before getting started with installing a PeopleSoft application on Amazon RDS for Oracle, check for certification, make licensing considerations, and verify general availability.

PeopleSoft Certification

Oracle certification for PeopleSoft software is controlled by the PeopleTools version that is being used. Use your My Oracle Support account to check that your PeopleTools version is currently certified to run on the Oracle Database release you plan to use with Amazon RDS, and review any PeopleSoft application certification notes that may apply.

NOTE: Oracle has numerous documents on My Oracle Support regarding support for Oracle applications in the cloud. Documents regarding issues with deploying PeopleSoft on Amazon RDS for Oracle are resolved by the steps in this guide. In addition, there are features that are specific to a database release that may or may not be available based on the database edition that you own.

Oracle Licensing

When creating an Amazon RDS for Oracle database, you can select either Bring Your Own License (BYOL) or License Included (LI). Not all editions are available for License Included. Before creating the database instance, verify which license(s) your organization holds, if any. For more details, refer to the Amazon RDS for Oracle FAQs.

Amazon RDS for Oracle Availability

After reviewing certification and licensing, refer to Table 2 to identify the Oracle Database release and corresponding PeopleTools release, along with other details, that are available on Amazon RDS for Oracle. Refer to the Amazon RDS User Guide section on Oracle on Amazon RDS for an up-to-date list of available RDS Oracle releases.

Table 2: Certified Oracle releases for PeopleTools available on Amazon RDS

PeopleTools Release | Amazon RDS Oracle DB Release | Amazon RDS Oracle DB Edition | Amazon RDS Oracle DB License Model
8.57, 8.56 | 12.2.0.1 | Enterprise Edition; Standard Edition Two | LI; BYOL
8.57, 8.56 | 12.1.0.2 | Enterprise Edition; Standard Edition Two | LI; BYOL
8.55 | 12.2.0.1 | Enterprise Edition; Standard Edition Two | LI; BYOL
8.55 | 12.1.0.2 | Enterprise Edition; Standard Edition Two | LI; BYOL
8.55 | 11.2.0.4 | Enterprise Edition; Standard Edition; Standard Edition One | LI; BYOL
8.54, 8.53 | 12.1.0.2 | Enterprise Edition; Standard Edition Two | LI; BYOL
8.54, 8.53 | 11.2.0.4 | Enterprise Edition; Standard Edition; Standard Edition One | LI; BYOL

Configuring the Database Instance

You can configure AWS resources by using either the AWS Management Console or the AWS CLI. Steps for both options are provided in this guide. If you plan to use the AWS CLI, follow the console procedure first, because it provides context for the steps. The AWS CLI commands provided in this guide map directly to the tasks executed using the console.

NOTE: This guide provides the steps for creating a demo PeopleSoft environment. As such, the settings and configurations provided apply to this smaller environment, where performance is not a requirement.

Create Security Groups

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. For more information on security groups, reference Security Groups for Your VPC. You will create two security groups to define network-level traffic to the Amazon RDS database: one for the Amazon RDS database and one for the Amazon EC2 application servers that will access the database. Using security groups to define database network access allows you to be more restrictive and intentional about security. For example, by having separate security groups for production and development environments, you can prevent development servers from communicating with the production database.

Table 3: Security groups inbound rules

Security Group Name | Protocol | Port Range | Source | Notes
peoplesoft-demo-app | None | None | None | Attach this security group to your PeopleSoft application server EC2 instances
peoplesoft-demo-db | TCP | 1521 | peoplesoft-demo-app | Allow 1521 traffic from peoplesoft-demo-app

Using the AWS Management Console

Create peoplesoft-demo-app:

1. In the console, choose Services > VPC > Security Groups > Create Security Group.
2. Enter peoplesoft-demo-app for the Security group name and PeopleSoft Demo Application Server for the Description. Select the appropriate VPC for your account.
3. Choose Create.
4. Note the Security Group ID for later use.

Create peoplesoft-demo-db:

1. In the console, choose Services > VPC > Security Groups > Create Security Group.
2. For the Security group name, enter peoplesoft-demo-db, and for the Description, enter PeopleSoft Demo RDS Database.
3. Select the appropriate VPC for your account and choose Create.
4. To update the inbound rules:
   • Select peoplesoft-demo-db from the security group list.
   • Choose Actions > Edit inbound rules.
   • For Type, select Oracle-RDS.
   • For Source, select Custom.
   • Enter the Security Group ID from the previous step.
   • For Description, enter PeopleSoft Demo Application Server.
5. Choose Save rules.

Using the AWS CLI

Update the VPC_ID variable below with your VPC and execute in your CLI environment:

VPC_ID=<Replace with your VPC ID>
PS_APP_SG=$(aws ec2 create-security-group --group-name peoplesoft-demo-app \
  --description "PeopleSoft Demo Application Server" --vpc-id $VPC_ID --output text)
aws ec2 create-tags --resources $PS_APP_SG \
  --tags "Key=Name,Value=PeopleSoft Demo Application Server"
PS_RDS_SG=$(aws ec2 create-security-group --group-name peoplesoft-demo-db \
  --description "PeopleSoft Demo RDS Database" --vpc-id $VPC_ID --output text)
aws ec2 create-tags --resources $PS_RDS_SG \
  --tags "Key=Name,Value=PeopleSoft Demo RDS Database"
aws ec2 authorize-security-group-ingress --group-id $PS_RDS_SG \
  --protocol tcp --port 1521 --source-group $PS_APP_SG

Create a DB Subnet Group

Before you can create an Amazon RDS for Oracle Database instance, you must define a subnet group. A subnet group is a collection of subnets (typically private) that you create in a VPC and designate for your DB instances. For an Amazon RDS for Oracle Database, you must select two subnets, each in a different Availability Zone.

Using the AWS Management Console

1. In the console, choose Services > RDS > Subnet Groups > Create DB Subnet Group.
2. For the Subnet Group details section, specify the following:
   • For Name, enter peoplesoft-demo-subnet-group.
   • For Description, enter PeopleSoft Demo Subnet Group.
   • For VPC, choose your VPC.
3. For the Add Subnets section, use the Availability Zone drop-down to choose an AZ, select a subnet designated for databases, and choose Add subnet. Choose a minimum of two subnets, each from a different Availability Zone.
4. Choose Create.

Using the AWS CLI

Update the PS_RDS_SN_1 and PS_RDS_SN_2 variables below with the subnets you created in the previous step and execute in your CLI environment:

PS_RDS_SN_1=
PS_RDS_SN_2=
aws rds create-db-subnet-group --db-subnet-group-name peoplesoft-demo-subnet-group \
  --db-subnet-group-description "PeopleSoft Demo Subnet Group" \
  --subnet-ids $PS_RDS_SN_1 $PS_RDS_SN_2

Create an Option Group

An option group provides additional feature options that you might want to add to your Amazon RDS for Oracle DB instance. Amazon RDS provides default option groups, but they cannot be modified. For this reason, create a new option group so feature options can be added or modified later. You can assign an option group to multiple Amazon RDS for Oracle DB instances. For a production database, always review your current Oracle licensing agreement; licensing may be required. For more information, reference Options for Oracle DB Instances.

IMPORTANT: The only option required for PeopleSoft to run correctly is Timezone. This must be set to have the desired timestamps inside PeopleSoft.

Using the AWS Management Console

1. In the AWS Management Console, choose Services > RDS > Option Groups > Create Group.
2. Specify values for the following fields:
   • For Name, enter peoplesoft-demo-oracle-ee-12-2.
   • For Description, enter PeopleSoft Demo Option Group.
   • For Engine, choose the engine that correlates with the Oracle Database edition chosen in the Certification, Licensing, and Availability section of this paper.
   • For Major Engine version, choose the major engine version that correlates with the Oracle Database release chosen in the Certification, Licensing, and Availability section of this paper.
3. Choose Create.
4. To update the option group, select peoplesoft-demo-oracle-ee-12-2 from the list of option groups and choose Add option.
5. Select Timezone from the Option list and choose the local time zone you want to be reflected in PeopleSoft.
6. Select whether or not you want the DB instance option to be applied immediately and choose Add Option.

Using the AWS CLI

Update PS_TZ with a valid time zone, such as US/Pacific, and execute in your CLI environment:

PS_TZ=
aws rds create-option-group --option-group-name peoplesoft-demo-oracle-ee-12-2 \
  --engine-name oracle-ee --major-engine-version 12.2 \
  --option-group-description "PeopleSoft Demo Option Group"
aws rds add-option-to-option-group --option-group-name peoplesoft-demo-oracle-ee-12-2 \
  --apply-immediately \
  --options "OptionName=Timezone,OptionSettings=[{Name=TIME_ZONE,Value=$PS_TZ}]"

Create a Parameter Group

A DB parameter group acts as a container for engine configuration values that are applied to one or more DB instances. If you create a DB instance without specifying a DB parameter group, the DB instance uses a default DB parameter group. It is not possible to update the default parameter group; therefore, it is recommended that you create a new parameter group even if you don't need to customize any parameters at this point. It is also recommended to consider how you will reuse parameter groups among multiple Amazon RDS for Oracle DB instances. For a PeopleSoft deployment, it is recommended that you use a unique parameter group for each environment (DEV, TEST, PROD), since parameters may need to be modified to suit a particular use case, and a separate group gives you the ability to test a configuration change before applying it to another environment. For more information, refer to the Amazon RDS User Guide, Working with DB Parameter Groups.

Using the AWS Management Console

1. In the AWS Management Console, choose Services > RDS > Parameter Groups > Create Parameter Group.
2. Specify values for the following fields:
   • For Parameter group family, choose the database edition you want to use in your RDS for Oracle DB instances. In this example, oracle-ee-12.2 is used.
   • For Group name, enter peoplesoft-demo-oracle-ee-12-2.
   • For Description, enter PeopleSoft Demo Parameter Group.
3. Choose Create.

Using the AWS CLI

Execute the following command in your CLI environment:

aws rds create-db-parameter-group --db-parameter-group-name peoplesoft-demo-oracle-ee-12-2 \
  --db-parameter-group-family oracle-ee-12.2 \
  --description "PeopleSoft Demo Parameter Group"

Modifying Parameters

For PeopleSoft, there are recommended parameters for Oracle databases, as shown in Table 4.

Table 4: List of parameters to customize

Parameter | Value | Notes
open_cursors | 1000 |
db_block_size | | This parameter is automatically set by Amazon RDS (although the creation of nonstandard block size tablespaces and setting DB_nK_CACHE_SIZE parameters is supported)
db_files | 1021 | Optionally, you can leave the default setting provided by Amazon RDS
nls_length_semantics | CHAR for Unicode; BYTE for non-Unicode |
memory_target | {DBInstanceClassMemory*3/4} | The default may be used; change if you have a specific requirement
_gby_hash_aggregation_enabled | false | This hash scheme enables group by and aggregation
_unnest_subquery | false | Controls unnesting of complex subqueries
optimizer_adaptive_features | false | This parameter is for version 12.1.x. You can either enable or disable the adaptive optimizer features
optimizer_adaptive_plans | true (default) | This parameter is for version 12.2.x. You can either enable or disable the adaptive optimizer features
optimizer_adaptive_statistics | false (default) | This parameter is for version 12.2.x. You can either enable or disable the adaptive optimizer features
_fix_control | 14033181:0 | This parameter is for Oracle 12.1.x databases ONLY and is an interim patch resolution that you can enable. This patch is not required for the 12.2.x release

NOTE: This is an example of known settings as of the publication date. For more details about specific PeopleSoft-Oracle parameter settings, refer to the following Oracle Support document: E-ORA Advice for the PeopleSoft Oracle DBA (Doc ID 1445965.1). For instance classes with at least 100 GiB of memory, use sga_target and enable HugePages.

Using the AWS Management Console

1. Select the parameter group you created.
2. Choose Parameter group actions > Edit.
3. Type the name of the parameter you want to edit (as listed in Table 4). For example, enter open_cursors into the filter, change the Values field to 1000, and then choose Save Changes.
4. Repeat step 3 for each parameter you want to edit.

Using the AWS CLI

Update the command if you need to customize other parameters. Execute the following command in your CLI environment:

aws rds modify-db-parameter-group --db-parameter-group-name peoplesoft-demo-oracle-ee-12-2 \
  --parameters "ParameterName=open_cursors,ParameterValue=1000,ApplyMethod=pending-reboot"

Create the Database Instance

Next, you are ready to create a highly available Oracle Database across two Availability Zones. Keep in mind that running a database in multiple Availability Zones increases cost. Depending on your SLA requirements, you can consider running the database in a single Availability Zone instead.

Using the AWS Management Console

1. In the Amazon RDS console, select Create database, choose Oracle and your edition, and choose Next.
2. For our example, choose the Dev/Test template, then choose Next.
3. Specify the DB details:
   • License model: Choose the license model, which depends on your Oracle edition. Reference the Certification, Licensing, and Availability section.
   • DB engine version: Choose the most recent engine version. The most recent version will have all Oracle patches available to Amazon RDS. Reference the Certification, Licensing, and Availability section.
   • DB instance class: Because this is for demo purposes, choose a relatively small DB instance class, such as db.t3.medium. You can change the DB instance class at any point, which requires restarting the DB instance.
   • Multi-AZ deployment: Choose Yes so that you can have a second standby instance running in a second Availability Zone.
   • Storage type: For a Dev/Test environment, choose General Purpose (SSD). For a high-performance production environment, Provisioned IOPS (SSD) should be used.
   • Allocated storage: Allocate 200 GB. Note that baseline I/O performance for General Purpose SSD storage is 3 IOPS for each GiB. This will give you a 600 IOPS baseline, with bursts up to 3,000 IOPS using credits. For more information on credits, refer to the Amazon RDS User Guide, I/O Credits and Burst Performance.
4. Review the monthly costs.
5. Specify the settings:
   • DB instance identifier: Create a unique name for the DB instance identifier. Amazon RDS uses this identifier to define the database hostname. Our example uses psfdmo.
   • Master username: This is similar to the SYS user but with fewer privileges, because Amazon RDS does not allow you to use either the SYS user or the SYSDBA role. Our example uses psftadmin.
   • Master password: Create a password for the master user. The master password must be at least eight characters long and can include any printable ASCII character except for the following: / (slash), " (double quote), or @ (at sign).
6. Choose Next.
7. Specify the DB instance advanced settings:
   • VPC: Choose the VPC where the database will be deployed.
   • Subnet group: Choose the subnet group you created in Create a DB Subnet Group. The DB instance is deployed against the subnets associated with the subnet group.
   • Public accessibility: Allows external access to the database, provided it is deployed in a public subnet. In most cases you would choose No, restricting access to within your VPC and controlling it with security groups.
   • Availability zone: Use the default, No preference.
   • VPC security groups: Choose the security group that will be associated with your DB instance. This security group provides access to the database listener. You created this security group, peoplesoft-demo-db, in Create Security Groups. Remove the default security group.
8. Specify your database options:
   • Database name: Choose the database service name, which your database clients will use to connect. Our example uses PSFDMO. This follows Oracle Database DB_NAME naming conventions; reference Oracle's documentation, Selecting a Database Name, for more information.
   • Port: Choose the TCP port that the database listener listens on. In our example we chose 1521, which is the default port for Oracle.
   • DB parameter group: The database engine parameters. Choose the peoplesoft-demo-oracle-ee-12-2 DB parameter group that you created in Create a Parameter Group.
   • Option group: The features that will be enabled in the database. Choose the peoplesoft-demo-oracle-ee-12-2 option group that you created in Create an Option Group.
   • Character set name: Choose the character set for your database. In our example we use WE8ISO8859P15, a non-Unicode character set. Use the character set that is required for your PeopleSoft installation.
9. Choose whether to enable or disable encryption. If you require your database data to be encrypted, choose Enable encryption. For more information, see the Amazon RDS User Guide, Encrypting Amazon RDS Resources. For our example, we chose No.
10. Specify the backup options:
   • Backup retention period: Set the backup retention period, in days (maximum of 35), for your database. Set to 0 to disable automatic backups. Our example uses the default of 7 days.
   • Backup window: Set the backup window for your daily backup. Choose the default, No Preference.
   • Copy tags to snapshots: When this option is enabled, Amazon RDS copies any tag associated with your DB instance to the database snapshots, which is useful for tracking usage and cost. Select the checkbox to enable this option.
11. Choose whether to enable or disable the monitoring option. Enhanced Monitoring provides Amazon RDS metrics in real time for the operating system (OS) that your DB instance runs on. For more information, see the Amazon RDS User Guide, Enhanced Monitoring. Our example disables this option.
12. Choose whether to enable or disable the Performance Insights option. Amazon RDS Performance Insights monitors your Amazon RDS DB instance load so that you can analyze and troubleshoot your database performance. For more information, see the Amazon RDS User Guide, Using Amazon RDS Performance Insights. In our example, we chose Disable Performance Insights.
13. Specify the log(s) to export, if any. Database logs can be exported to Amazon CloudWatch; this can be useful in a production environment. In our example, we chose not to export any logs.
14. Specify the maintenance settings:
   • Auto minor version upgrade: If you want to manage when database maintenance runs, select Disable auto minor version upgrade.
   • Maintenance window: Set the timing for the minor maintenance window. For our example, we chose No Preference.
15. Determine whether to enable the delete protection option. For most use cases this should be checked, to prevent accidental deletion of the instance. For our example, the option is unchecked.
16. Choose Create database to launch the DB instance.

Using the AWS CLI

Update the RDS_MASTER_PWD and PS_RDS_SG variables, modify as necessary, and execute in your CLI environment:

RDS_MASTER_PWD=
PS_RDS_SG=
aws rds create-db-instance \
  --db-name PSFDMO \
  --db-instance-identifier psfdmo \
  --allocated-storage 200 \
  --db-instance-class db.t3.medium \
  --engine oracle-ee \
  --master-username psftadmin \
  --master-user-password $RDS_MASTER_PWD \
  --vpc-security-group-ids $PS_RDS_SG \
  --db-subnet-group-name peoplesoft-demo-subnet-group \
  --db-parameter-group-name peoplesoft-demo-oracle-ee-12-2 \
  --port 1521 \
  --multi-az \
  --engine-version 12.2.0.1.ru-2019-04.rur-2019-04.r1 \
  --no-auto-minor-version-upgrade \
  --license-model bring-your-own-license \
  --option-group-name peoplesoft-demo-oracle-ee-12-2 \
  --character-set-name WE8ISO8859P15 \
  --no-storage-encrypted \
  --no-publicly-accessible \
  --no-enable-performance-insights \
  --no-deletion-protection \
  --storage-type gp2 \
  --copy-tags-to-snapshot
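Once `create-db-instance` has been issued, the instance takes several minutes to become available. The `aws rds wait` and `aws rds describe-db-instances` commands shown in the comments below are standard AWS CLI; because they need live credentials, the runnable part of this sketch only parses a canned response (the hostname is a made-up placeholder) to show where the endpoint lives in the returned JSON.

```shell
#!/bin/sh
# Real-world flow (requires AWS credentials; shown as comments only):
#   aws rds wait db-instance-available --db-instance-identifier psfdmo
#   aws rds describe-db-instances --db-instance-identifier psfdmo \
#     --query "DBInstances[0].Endpoint.Address" --output text
#
# The describe call returns JSON shaped like this canned sample; the
# endpoint address and port sit under DBInstances[0].Endpoint.
cat > response.json <<'EOF'
{"DBInstances": [{"DBInstanceIdentifier": "psfdmo",
  "Endpoint": {"Address": "psfdmo.example.us-east-1.rds.amazonaws.com",
               "Port": 1521}}]}
EOF
python3 -c "import json; d = json.load(open('response.json')); print(d['DBInstances'][0]['Endpoint']['Address'])"
# -> psfdmo.example.us-east-1.rds.amazonaws.com
```

The address printed this way is the hostname you will alias in the next section.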
Create a DNS Alias for the Database Instance

When you create an RDS for Oracle DB instance, Amazon RDS creates a unique DNS hostname for your instance (for example, psdmo.c6jc3rya3ntd.us-east-1.rds.amazonaws.com:1521). You can use that hostname to connect to the database. However, you have no control over the hostname, so you end up with a database URL that is not easy to remember.

In addition, you might at some point need to restore the database from a snapshot. For example, you would need to do so if an operator makes a mistake in manipulating the data, or if a bug in your application corrupts the data. But you can't restore a snapshot to an existing RDS DB instance: when you restore a database from a snapshot, Amazon RDS creates a new DB instance and generates a new hostname. To avoid affecting existing applications and having to update their database endpoints, create a DNS alias for your RDS DB instance.

Depending on your architecture, you may register the DNS alias either in your corporate DNS server running on premises or in a DNS server running in AWS. We will show you how to register a DNS alias in AWS as a private hosted zone using Amazon Route 53. When you use a private hosted zone, only hosts in your VPC can resolve the DNS names for your database. (There is a way to extend the name resolution outside of the VPC, but that's beyond the scope of this guide.)

Create an Amazon Route 53 Private Hosted Zone

Create a private hosted zone using either the AWS Management Console or the AWS CLI.

Using the AWS Management Console

1. In the AWS Management Console, choose Services > Route 53 > Hosted zones, and then choose Create Hosted Zone.
2. Provide the details of your hosted zone:
   • Domain Name: Type the private domain name. Our example uses peoplesoft.local.
   • Type: Choose Private Hosted Zone for Amazon VPC.
   • VPC ID: Choose the ID of the VPC used by your PeopleSoft infrastructure in AWS.

Using the AWS CLI

Execute the following command:

aws route53 create-hosted-zone --name peoplesoft.local \
  --vpc '{"VPCRegion": "us-east-1", "VPCId": "vpc-******21"}' \
  --hosted-zone-config '{"PrivateZone": true}' \
  --caller-reference 112017

NOTE: Retain the HostedZone ID, because you will need it to create the record sets in the next section. For the command provided above, the HostedZone ID is Z234334ABCDEF.

Create a DNS Alias

By creating a DNS alias, you can manage your database's endpoint and avoid the need to change the code in your existing application.

Using the AWS Management Console

1. Choose Create Record Set, and then provide the following details:
   • Name: The host portion of the fully qualified domain name, which in our example is psfdmo. The value you type is prepended to the domain name; in our example it becomes psfdmo.peoplesoft.local.
   • Type: Choose CNAME.
   • Alias: Choose No.
   • TTL (Seconds): Enter 300.
   • Value: The RDS instance hostname (do not add the port information, 1521).

Using the AWS CLI

Execute the following command:

aws route53 change-resource-record-sets --hosted-zone-id Z234334ABCDEF \
  --change-batch '{"Changes": [{"Action": "CREATE", "ResourceRecordSet": {"Name": "psfdmo.peoplesoft.local", "Type": "CNAME", "TTL": 300, "ResourceRecords": [{"Value": "psdmo.ak34e3k.rds.amazonaws.com"}]}}]}'

After creating the DNS alias, connect to our demo database using the following URL: psfdmo.peoplesoft.local:1521/psfdmo

Running the PeopleSoft DB Creation Scripts

With the database created, you are ready to create the PeopleSoft DB. The following steps illustrate the procedure detailed in the PeopleTools 8.5x Installation for Oracle guide. Ensure that you use the appropriate PeopleTools install guide for your installation, and note the following changes in the manual installation steps.

Editing the Database Scripts

To start, modify the delivered database creation scripts. There are two types of changes that we need to make to these scripts: (1) tablespace creation and (2) SYSDBA SQL commands.

Tablespace Creation

For creating tablespaces, Amazon RDS supports only Oracle Managed Files (OMF) for data files, log files, and control files. When you create data files and log files, you cannot specify the physical file names. By default, Oracle delivers these scripts using physical file paths, so they must be updated to the OMF format. Reference the Amazon RDS User Guide, Creating and Sizing Tablespaces, for more information.

SYSDBA SQL Commands

When you create a DB instance in Amazon RDS, the master account used to create the instance gets DBA user privileges (with some limitations). Use this account for any administrative tasks, such as creating additional user accounts in the database. The SYS user, SYSTEM user, and other administrative accounts cannot be used. The affected commands have been identified below, along with the RDS procedure to run them properly. Reference the Amazon RDS User Guide, Granting SELECT or EXECUTE Privileges to SYS Objects, for more information.

Create DB SQL [createdb.sql]

Skip this script. It does not need to be modified nor run, because you already created the database.

UTL Space Script [utlspace.sql]

Modify the create tablespace commands for the PSTEMP and PSDEFAULT tablespaces to the OMF format.

Delivered:

CREATE TEMPORARY TABLESPACE PSTEMP
TEMPFILE '/u03/oradata//pstemp01.dbf' SIZE 300M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

CREATE TABLESPACE PSDEFAULT
DATAFILE '/u03/oradata//psdefault.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

Modified:

CREATE TEMPORARY TABLESPACE PSTEMP
TEMPFILE EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

CREATE TABLESPACE PSDEFAULT
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;
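The utlspace.sql edit above is mechanical, so it can be sketched as a single sed substitution that drops the physical DATAFILE/TEMPFILE clause. This assumes the clause fits on one line, as in the delivered scripts; the file path shown is illustrative only.

```shell
#!/bin/sh
# Strip the physical DATAFILE/TEMPFILE clause (file name plus SIZE) from a
# delivered CREATE TABLESPACE statement, leaving the OMF-compatible remainder.
line="CREATE TABLESPACE PSDEFAULT DATAFILE '/u03/oradata/psdefault.dbf' SIZE 100M EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;"
echo "$line" | sed -E "s/(DATA|TEMP)FILE '[^']*' SIZE [0-9]+M //"
# -> CREATE TABLESPACE PSDEFAULT EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;
```

Always diff the edited script against the delivered one before running it against the database.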
Application-Specific Tablespace Creation [xxddl.sql]

Modify the application-specific tablespace creation script for the PeopleSoft application you are installing, for example epddl.sql for FSCM or hcddl.sql for HCM. Refer to the PeopleTools Installation for Oracle guide for details on the DDL scripts that are appropriate for your application. Modify all CREATE TABLESPACE commands as shown below.

Delivered:

CREATE TABLESPACE AMAPP DATAFILE
'/u04/oradata//amapp.dbf' SIZE 2M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO
/

Modified:

CREATE TABLESPACE AMAPP
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO
/

DB Owner Script [dbowner.sql]

Replace system/manager with the Amazon RDS master username and master password.

Delivered:

CONNECT system/manager;

Modified (replace with your credentials):

CONNECT <master username>/<master password>;

PS Roles Script [psroles.sql]

Modify all of the following grants, if present.

Delivered:

GRANT SELECT ON V_$MYSTAT TO PSADMIN;
GRANT SELECT ON USER_AUDIT_POLICIES TO PSADMIN;
GRANT SELECT ON FGACOL$ TO PSADMIN;
GRANT EXECUTE ON DBMS_REFRESH TO PSADMIN;
GRANT SELECT ON ALL_DEPENDENCIES TO PSADMIN;

Modified:

exec rdsadmin.rdsadmin_util.grant_sys_object('V_$MYSTAT','PSADMIN','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('USER_AUDIT_POLICIES','PSADMIN','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('FGACOL$','PSADMIN','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBMS_REFRESH','PSADMIN','EXECUTE');
exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_DEPENDENCIES','PSADMIN','SELECT');

PS Admin Script [psadmin.sql]

Replace system/manager with the Amazon RDS master username and master password.

Delivered:

connect system/manager

Modified (replace with your credentials):

connect <master username>/<master password>

Connect Script [connect.sql]

No changes are necessary.

Execute the Database Creation Scripts

After all the scripts have been updated, execute them on the database as usual, per the PeopleSoft installation guide.

Create and Run Data Mover Import Scripts

Follow the PeopleSoft installation guide for creating and running the Data Mover import scripts for your PeopleSoft application.

Conclusion

We have now completed the setup of a PeopleSoft demo database that is fully managed on Amazon RDS and ready to perform. This guide described how to configure Amazon RDS for Oracle as a backend database for an Oracle PeopleSoft Enterprise demo application. By using these procedures, you can set up and operate many different PeopleSoft application databases on Amazon RDS for Oracle.

References

• PeopleTools 8.57 Installation for Oracle
• Amazon Web Services API Reference

Contributors

The following individuals and organizations contributed to this document:

• David Brunet, VP Research and Development, DLZP Group
• Nick Sefiddashti, AWS Solutions Architect, DLZP Group
• Muhammed Sajeed, PeopleSoft Architect, DLZP Group
• Yoav Eilat, Senior Product Marketing Manager, AWS
• Tsachi Cohen, Software Development Manager, AWS
• Michael Barras, Senior Database Engineer, AWS

Document Revisions

March 2017: First publication
July 2019: Updated with the latest features of Amazon RDS, PeopleSoft release versions, and AWS Management Console updates

Notes

1. https://docs.aws.amazon.com/general/latest/gr/rande.html
2. https://docs.aws.amazon.com/AmazonVPC/latest/GettingStartedGuide/GetStarted.html
3. https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_IAM.html
4. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingIAM.html
5.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAM.AccessControl.IdentityBased.html
6. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/auth-and-access-control.html
",General,consultant,Best Practices
Considerations_for_Using_AWS_Products_in_GxP_Systems,"GxP Systems on AWS

Published March 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 1
About AWS 1
AWS Healthcare and Life Sciences 2
AWS Services 2
AWS Cloud Security 4
Shared Security Responsibility Model 6
AWS Certifications and Attestations 8
Infrastructure Description and Controls 13
AWS Quality Management System 17
Quality Infrastructure and Support Processes 18
Software Development 25
AWS Products in GxP Systems 30
Qualification Strategy for Life Science Organizations 32
Supplier Assessment and Cloud Management 38
Cloud Platform/Landing Zone Qualification 42
Qualifying Building Blocks 48
Computer Systems Validation (CSV) 54
Conclusion 55
Contributors 55
Further Reading 55
Document Revisions 56
Appendix: 21 CFR 11 Controls – Shared Responsibility for use with AWS services 57

Abstract

This whitepaper provides information on how AWS approaches GxP-related compliance and security, and provides customers guidance on using AWS Products in the context of
GxP. The content has been developed based on experience with, and feedback from, AWS pharmaceutical and medical device customers, as well as software partners who are currently using AWS Products in their validated GxP systems.

Amazon Web Services – GxP Systems on AWS

Introduction

According to a recent publication by Deloitte on the outlook of Global Life Sciences in 2020, prioritization of cloud technologies in the life sciences sector has steadily increased as customers seek out highly reliable, scalable, and secure solutions to operate their regulated IT systems. Amazon Web Services (AWS) provides cloud services designed to help customers run their most sensitive workloads in the cloud, including the computerized systems that support Good Manufacturing Practice, Good Laboratory Practice, and Good Clinical Practice (GxP). GxP guidelines are established by the US Food and Drug Administration (FDA) and exist to ensure safe development and manufacturing of medical devices, pharmaceuticals, biologics, and other food and medical product industries.

The first section of this whitepaper outlines the AWS services and organizational approach to security, along with compliance that supports GxP requirements, as part of the Shared Responsibility Model and as it relates to the AWS Quality System for Information Security Management. After establishing this information, the whitepaper provides information to assist you in using AWS services to implement GxP-compliant environments.

Many customers already leverage industry guidance to influence their regulatory interpretation of GxP requirements. Therefore, the primary industry guidance used to form the basis of this whitepaper is the GAMP (Good Automated Manufacturing Practice) guidance from ISPE (International Society for Pharmaceutical Engineering), in effect a type of Good Cloud Computing Practice. While the following content provides information on the use of AWS services in GxP environments, you should ultimately consult with your own counsel to
ensure that your GxP policies and procedures satisfy regulatory compliance requirements. Whitepapers containing more specific information about AWS products, privacy, and data protection considerations are available at https://aws.amazon.com/compliance/.

About AWS

In 2006, Amazon Web Services (AWS) began offering on-demand IT infrastructure services to businesses in the form of web services with pay-as-you-go pricing. Today, AWS provides a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers hundreds of thousands of businesses in countries around the world. Using AWS, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.

Offering over 175 fully featured services from data centers globally, AWS gives you the ability to take advantage of a broad set of global cloud-based products, including compute, storage, databases, networking, security, analytics, mobile, developer tools, management tools, IoT, and enterprise applications. The rapid pace of innovation at AWS allows you to focus on what is most important to you and your end users, without the undifferentiated heavy lifting.

AWS Healthcare and Life Sciences

AWS started its dedicated Genomics and Life Sciences Practice in 2014 in response to the growing demand for an experienced and reliable life sciences cloud industry leader. Today, the AWS Life Sciences Practice team consists of members that have been in the industry on average for over 17 years and had previous titles such as Chief Medical Officer, Chief Digital Officer, Physician, Radiologist, and Researcher, among many others. The AWS Genomics and Life Sciences practice serves a large ecosystem of life sciences customers, including pharmaceutical, biotechnology, medical device, and genomics start-ups, university and government institutions, as well as healthcare payers
and providers. A full list of customer case studies can be found at https://aws.amazon.com/health/customer-stories.

In addition to the resources available within the Genomics and Life Sciences practice at AWS, you can also work with AWS Life Sciences Competency Partners to drive innovation and improve efficiency across the life sciences value chain, including cost-effective storage and compute capabilities, advanced analytics, and patient personalization mechanisms. AWS Life Sciences Competency Partners have demonstrated technical expertise and customer success in building life science solutions on AWS. A full list of AWS Life Sciences Competency Partners can be found at https://aws.amazon.com/health/lifesciences-partner-solutions.

AWS Services

Amazon Web Services (AWS) delivers a scalable cloud computing platform with high availability and dependability, providing the tools that enable you to run a wide range of applications. Helping to protect the confidentiality, integrity, and availability of our customers’ systems and data is of the utmost importance to AWS, as is maintaining customer trust and confidence.

Similar to other general-purpose IT products, such as operating systems and database engines, AWS offers commercial off-the-shelf (COTS) IT services according to IT quality and security standards such as ISO, NIST, SOC, and many others. For the purposes of this paper, we will use the definition of COTS in accordance with the definition established by FedRAMP, a United States government-wide program for procurement and security assessment. FedRAMP references the US Federal Acquisition Regulation (FAR) for its definition of COTS, which outlines COTS items as:

• Products or services that are offered and sold competitively, in substantial quantities, in the commercial marketplace, based on an established catalog
• Offered without modification or customization
• Offered under standard commercial terms and conditions

Under GAMP guidelines (such as GAMP 5:
A Risk-Based Approach to Compliant GxP Computerized Systems), organizations implementing GxP-compliant environments will need to categorize AWS services using the respective GAMP software and hardware categories (e.g., Software Category 1 for Infrastructure Software, including operating systems, database managers, and security software, or Category 5 for custom or bespoke software). Most often, organizations utilizing AWS services for validated applications will categorize them under Software Category 1.

AWS offers products falling into several categories. Below is a subset of those AWS offerings, spanning Compute, Storage, Database, Networking & Content Delivery, and Security, Identity, and Compliance. A later section of this whitepaper, AWS Products in GxP Systems, will provide information to assist you in using AWS services to implement your GxP-compliant environments.

Table 1: Subset of AWS offerings by group

Compute: Amazon EC2, Amazon EC2 Auto Scaling, Amazon Elastic Container Registry, Amazon Elastic Container Service, Amazon Elastic Kubernetes Service, Amazon Lightsail, AWS Batch, AWS Elastic Beanstalk, AWS Fargate, AWS Lambda, AWS Outposts, AWS Serverless Application Repository, AWS Wavelength, VMware Cloud on AWS

Storage: Amazon Simple Storage Service (Amazon S3), Amazon Elastic Block Store (Amazon EBS), Amazon Elastic File System (Amazon EFS), Amazon FSx for Lustre, Amazon FSx for Windows File Server, Amazon S3 Glacier, AWS Backup, AWS Snow Family, AWS Storage Gateway, CloudEndure Disaster Recovery

Database: Amazon Aurora, Amazon DynamoDB, Amazon DocumentDB, Amazon ElastiCache, Amazon Keyspaces, Amazon Neptune, Amazon Quantum Ledger Database (Amazon QLDB), Amazon RDS, Amazon RDS on VMware, Amazon Redshift, Amazon Timestream, AWS Database Migration Service

Networking & Content Delivery: Amazon VPC, Amazon API Gateway, Amazon CloudFront, Amazon Route 53, AWS PrivateLink, AWS App Mesh, AWS Cloud Map, AWS Direct Connect, AWS Global Accelerator, AWS
Transit Gateway, Elastic Load Balancing

Security, Identity, and Compliance: AWS Identity & Access Management (IAM), Amazon Cognito, Amazon Detective, Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Artifact, AWS Certificate Manager, AWS CloudHSM, AWS Directory Service, AWS Firewall Manager, AWS Key Management Service, AWS Resource Access Manager, AWS Secrets Manager, AWS Security Hub, AWS Shield, AWS Single Sign-On, AWS WAF

Details and specifications for the full portfolio of AWS products are available online at https://aws.amazon.com/.

AWS Cloud Security

AWS infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today. It is designed to provide an extremely scalable, highly reliable platform that enables customers to deploy applications and data quickly and securely. This infrastructure is built and managed not only according to security best practices and standards, but also with the unique needs of the cloud in mind. AWS uses redundant and layered controls, continuous validation and testing, and a substantial amount of automation to ensure that the underlying infrastructure is monitored and protected 24x7.

We have many customer testimonials that highlight the security benefits of using the AWS Cloud, in that the security capabilities provided by AWS far exceed the customer’s own on-premises capabilities.

“We had heard urban legends about ‘security issues in the cloud,’ but the more we looked into AWS, the more it was obvious to us that AWS is a secure environment and we would be able to use it with peace of mind.”
Yoshihiro Moriya, Certified Information System Auditor at Hoya

“There was no way we could achieve the security certification levels that AWS has. We have great confidence in the logical separation of customers in the AWS Cloud, particularly through Amazon VPC, which allows us to customize our virtual networking environment to meet our specific requirements.”
Michael Lockhart, IT
Infrastructure Manager at GPT

“When you’re in telehealth and you touch protected health information, security is paramount. AWS is absolutely critical to do what we do today. Security and compliance are table stakes. If you don’t have those, the rest doesn’t matter.”
Cory Costley, Chief Product Officer, Avizia

Many more customer testimonials, including those from health and life science companies, can be found here: https://aws.amazon.com/compliance/testimonials/.

IT security is often not the core business of our customers. IT departments operate on limited budgets and do a good job of securing their data centers and software given limited resources. In the case of AWS, security is foundational to our core business, and so significant resources are applied to ensuring the security of the cloud and helping our customers ensure security in the cloud, as described further below.

Shared Security Responsibility Model

Security and compliance is a shared responsibility between AWS and the customer. This shared model can help relieve your operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. Customers assume responsibility for and management of the guest operating system (including updates and security patches) and other associated application software, as well as the configuration of the AWS-provided security group firewall. You should carefully consider the services you choose, as your responsibilities vary depending on the services used, the integration of those services into your IT environment, and applicable laws and regulations. The following figure provides an overview of the shared responsibility model. This differentiation of responsibility is commonly referred to as Security “of” the Cloud versus Security “in” the Cloud, which is explained in more detail below.

Figure 1: AWS Shared Responsibility
Model

AWS is responsible for the security and compliance of the Cloud: the infrastructure that runs all of the services offered in the AWS Cloud. Cloud security at AWS is the highest priority. AWS customers benefit from a data center and network architecture that are built to meet the requirements of the most security-sensitive organizations. This infrastructure consists of the hardware, software, networking, and facilities that run AWS Cloud services.

Customers are responsible for security and compliance in the Cloud, which consists of customer-configured systems and services provisioned on AWS. Responsibility within the AWS Cloud is determined by the AWS Cloud services that you select and, ultimately, the amount of configuration work you must perform as part of your security responsibilities. For example, a service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and, as such, requires you to perform all of the necessary security configuration and management tasks. Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by you on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance. For abstracted services such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. You are responsible for managing your data and component configuration (including encryption options), classifying your assets, and using IAM tools to apply the appropriate permissions.

The AWS Shared Security Responsibility model also extends to IT controls. Just as the responsibility to operate the IT environment is shared between you and AWS, so is the management, operation, and verification of IT controls shared. AWS
can help relieve your burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that may previously have been managed by you. As every customer is deployed differently in AWS, you can take advantage of shifting management of certain IT controls to AWS, which results in a (new) distributed control environment. You can then use the AWS control and compliance documentation available to you, as well as techniques discussed later in this whitepaper, to perform your control evaluation and verification procedures as required.

Below are examples of controls that are managed by AWS, by AWS customers, or by both.

Inherited Controls – Controls which you fully inherit from AWS:

• Physical and Environmental controls

Shared Controls – Controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure, and you must provide your own control implementation within your use of AWS services. Examples include:

• Patch Management – AWS is responsible for patching and fixing flaws within the infrastructure, but you are responsible for patching your guest OS and applications.
• Configuration Management – AWS maintains the configuration of its infrastructure devices, but you are responsible for configuring your own guest operating systems, databases, and applications.
• Awareness & Training – AWS trains AWS employees, but you must train your own employees.

Customer Specific – Controls which are ultimately your responsibility, based on the application you are deploying within AWS services. Examples include:

• Data Management – for instance, placement of data on Amazon S3, where you activate encryption.

While certain controls are customer-specific, AWS strives to provide you with the tools and resources to make implementation easier. For further information about AWS physical
and operational security processes for the network and server infrastructure under the management of AWS, see the AWS Cloud Security site. For customers who are designing the security infrastructure and configuration for applications running in Amazon Web Services (AWS), see the Best Practices for Security, Identity, & Compliance.

AWS Certifications and Attestations

The AWS global infrastructure is designed and managed according to security best practices as well as a variety of security compliance standards. With AWS, you can be assured that you are building web architectures on top of some of the most secure computing infrastructure in the world. The IT infrastructure that AWS provides to you is designed and managed in alignment with security best practices and a variety of IT security standards, including the following, which life science customers may find most relevant:

• SOC 1, 2, 3
• ISO 9001 / ISO 27001 / ISO 27017 / ISO 27018
• HITRUST
• FedRAMP
• CSA Security Trust & Assurance Registry (STAR)

There are no specific certifications for GxP compliance for cloud services to date; however, the controls and guidance described by this whitepaper, in conjunction with additional resources supplied by AWS, provide information on AWS service GxP compatibility, which will assist you in designing and building your own GxP-compliant solutions.

AWS provides on-demand access to security and compliance reports and select online agreements through AWS Artifact, with reports accessible via AWS customer accounts under NDA. AWS Artifact is a go-to central resource for compliance-related information, and is a place that you can go to find additional information on the AWS compliance programs described further below.

SOC 1, 2, 3

AWS System and Organization Controls (SOC) Reports are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives. The purpose of these reports is to help you and your auditors
understand the AWS controls established to support operations and compliance. The SOC 1 reports are designed to focus on controls at a service organization that are likely to be relevant to an audit of a user entity’s financial statements. The AWS SOC 1 report is designed to cover specific key controls likely to be required during a financial audit, as well as a broad range of IT general controls, to accommodate a wide range of usage and audit scenarios. The AWS SOC 1 control objectives include security organization, employee user access, logical security, secure data handling, physical security and environmental protection, change management, data integrity, availability and redundancy, and incident handling.

The SOC 2 report is an attestation report that expands the evaluation of controls to the criteria set forth by the American Institute of Certified Public Accountants (AICPA) Trust Services Principles. These principles define leading-practice controls relevant to security, availability, processing integrity, confidentiality, and privacy applicable to service organizations such as AWS. The AWS SOC 2 is an evaluation of the design and operating effectiveness of controls that meet the criteria for the security and availability principles set forth in the AICPA’s Trust Services Principles criteria. This report provides additional transparency into AWS security and availability based on a predefined industry standard of leading practices, and further demonstrates the commitment of AWS to protecting customer data. The SOC 2 report includes an outline of AWS controls, a description of AWS services relevant to security, availability, and confidentiality, as well as test results against controls. You will likely find the SOC 2 report to be the most detailed and relevant SOC report as it relates to GxP compliance.

AWS publishes a Service Organization Controls 3 (SOC 3) report. The SOC 3 report is a publicly available summary of the AWS
SOC 2 report. The report includes the external auditor’s assessment of the operation of controls (based on the AICPA’s Security Trust Principles included in the SOC 2 report), the assertion from AWS management regarding the effectiveness of controls, and an overview of AWS infrastructure and services.

FedRAMP

The Federal Risk and Authorization Management Program (FedRAMP) is a US government-wide program that delivers a standard approach to the security assessment, authorization, and continuous monitoring of cloud products and services. FedRAMP uses the NIST Special Publication 800 series and requires cloud service providers to receive an independent security assessment, conducted by a third-party assessment organization (3PAO), to ensure that authorizations are compliant with the Federal Information Security Management Act (FISMA). For AWS services in scope for FedRAMP assessment and authorization, see https://aws.amazon.com/compliance/services-in-scope/.

ISO 9001

ISO 9001:2015 outlines a process-oriented approach to documenting and reviewing the structure, responsibilities, and procedures required to achieve effective quality management within an organization. Specific sections of the standard contain information on topics such as:

• Requirements for a quality management system (QMS), including documentation of a quality manual, document control, and determining process interactions
• Responsibilities of management
• Management of resources, including human resources and an organization’s work environment
• Service development, including the steps from design to delivery
• Customer satisfaction
• Measurement, analysis, and improvement of the QMS through activities like internal audits and corrective and preventive actions

The AWS ISO 9001:2015 certification directly supports customers who develop, migrate, and operate their quality-controlled IT systems in the AWS Cloud. You can leverage AWS compliance reports as evidence for your own ISO
9001:2015 programs and industry-specific quality programs, such as GxP in life sciences and ISO 13485 in medical devices.

ISO/IEC 27001

ISO/IEC 27001:2013 is a widely adopted global security standard that sets out requirements and best practices for a systematic approach to managing company and customer information, based on periodic risk assessments appropriate to ever-changing threat scenarios. In order to achieve the certification, a company must show it has a systematic and ongoing approach to managing information security risks that affect the confidentiality, integrity, and availability of company and customer information. This widely recognized international security standard specifies that AWS do the following:

• We systematically evaluate AWS information security risks, taking into account the impact of threats and vulnerabilities
• We design and implement a comprehensive suite of information security controls and other forms of risk management to address customer and architecture security risks
• We have an overarching management process to ensure that the information security controls meet our needs on an ongoing basis

AWS has achieved ISO 27001 certification of the Information Security Management System (ISMS) covering AWS infrastructure, data centers, and services.

ISO/IEC 27017

ISO/IEC 27017:2015 provides guidance on the information security aspects of cloud computing, recommending the implementation of cloud-specific information security controls that supplement the guidance of the ISO/IEC 27002 and ISO/IEC 27001 standards. This code of practice provides additional information security controls implementation guidance specific to cloud service providers. The AWS attestation to the ISO/IEC 27017:2015 standard not only demonstrates an ongoing commitment to align with globally recognized best practices, but also verifies that AWS has a system of highly precise controls in place that are specific to cloud services.
ISO/IEC 27018

ISO 27018 is the first international code of practice that focuses on protection of personal data in the cloud. It is based on ISO information security standard 27002 and provides implementation guidance on ISO 27002 controls applicable to public cloud Personally Identifiable Information (PII). It also provides a set of additional controls, and associated guidance, intended to address public cloud PII protection requirements not addressed by the existing ISO 27002 control set. AWS has achieved ISO 27018 certification, an internationally recognized code of practice, which demonstrates the commitment of AWS to the privacy and protection of your content.

HITRUST

The Health Information Trust Alliance Common Security Framework (HITRUST CSF) leverages nationally and internationally accepted standards and regulations, such as GDPR, ISO, NIST, PCI, and HIPAA, to create a comprehensive set of baseline security and privacy controls. HITRUST has developed the HITRUST CSF Assurance Program, which incorporates the common requirements, methodology, and tools that enable an organization and its business partners to take a consistent and incremental approach to managing compliance. Further, it allows business partners and vendors to assess and report against multiple sets of requirements. Certain AWS services have been assessed under the HITRUST CSF Assurance Program by an approved HITRUST CSF Assessor as meeting the HITRUST CSF Certification Criteria. The certification is valid for two years, describes the AWS services that have been validated, and can be accessed at https://aws.amazon.com/compliance/hitrust/. You may look to leverage the AWS HITRUST CSF certification of AWS services to support your own HITRUST CSF certification, in complement to your GxP compliance programs.

CSA Security Trust & Assurance Registry (STAR)

In 2011, the Cloud Security Alliance (CSA) launched STAR, an initiative to encourage transparency of security practices within cloud providers. The CSA Security Trust &
Assurance Registry (STAR) is a free, publicly accessible registry that documents the security controls provided by various cloud computing offerings, thereby helping users assess the security of cloud providers they currently use or are considering. AWS participates in the voluntary CSA Security Trust & Assurance Registry (STAR) Self-Assessment to document AWS compliance with CSA-published best practices. AWS publishes the completed CSA Consensus Assessments Initiative Questionnaire (CAIQ) on the AWS website.

Infrastructure Description and Controls

Cloud Models (Nature of the Cloud)

Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the Internet, with pay-as-you-go pricing. As cloud computing has grown in popularity, several different models and deployment strategies have emerged to help meet the specific needs of different users. Each type of cloud service and deployment method provides you with different levels of control, flexibility, and management.

Cloud Computing Models

Infrastructure as a Service (IaaS)

Infrastructure as a Service (IaaS) contains the basic building blocks for cloud IT and typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. IaaS provides you with the highest level of flexibility and management control over your IT resources, and is most similar to the existing IT resources that many IT departments and developers are familiar with today (e.g., Amazon Elastic Compute Cloud (Amazon EC2)).

Platform as a Service (PaaS)

Platform as a Service (PaaS) removes the need for organizations to manage the underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications (e.g., AWS Elastic Beanstalk). This helps you be more efficient, as you don’t need to worry about resource procurement, capacity
planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application.

Software as a Service (SaaS)

Software as a Service (SaaS) provides you with a completed product that is run and managed by the service provider. In most cases, people referring to Software as a Service are referring to end-user applications (e.g., Amazon Connect). With a SaaS offering, you do not have to think about how the service is maintained or how the underlying infrastructure is managed; you only need to think about how you will use that particular piece of software. A common example of a SaaS application is web-based email, which can be used to send and receive email without having to manage feature additions to the email product or maintain the servers and operating systems on which the email program is running.

Cloud Computing Deployment Models

Cloud

A cloud-based application is fully deployed in the cloud, and all parts of the application run in the cloud. Applications in the cloud have either been created in the cloud or have been migrated from an existing infrastructure to take advantage of the benefits of cloud computing (https://aws.amazon.com/what-is-cloud-computing/). Cloud-based applications can be built on low-level infrastructure pieces or can use higher-level services that provide abstraction from the management, architecting, and scaling requirements of core infrastructure.

Hybrid

A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud. The most common method of hybrid deployment is between the cloud and existing on-premises infrastructure, to extend and grow an organization's infrastructure into the cloud while connecting cloud resources to the internal system. For more information on how AWS can help you with hybrid deployment, visit the AWS hybrid page (https://aws.amazon.com/hybrid/).
On-premises

The deployment of resources on-premises, using virtualization and resource management tools, is sometimes sought for its ability to provide dedicated resources (https://aws.amazon.com/hybrid/). In most cases this deployment model is the same as legacy IT infrastructure, while using application management and virtualization technologies to try to increase resource utilization.

Security

Physical Security

Amazon has many years of experience in designing, constructing, and operating large-scale data centers. This experience has been applied to the AWS platform and infrastructure. AWS data centers are housed in facilities that are not branded as AWS facilities. Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. All visitors are required to present identification, are signed in, and are continually escorted by authorized staff.

AWS only provides data center access and information to employees and contractors who have a legitimate business need for such privileges. When an employee no longer has a business need for these privileges, his or her access is immediately revoked, even if they continue to be an employee of Amazon or Amazon Web Services. All physical access to data centers by AWS employees is logged and audited routinely. Additional information on infrastructure security may be found on the AWS Data Center Controls webpage.

Single- or Multi-Tenant Environments

As cloud technology has rapidly evolved over the past decade, one fundamental technique used to maximize physical resources as well as lower customer costs has been to offer multi-tenant services to cloud customers. To facilitate this architecture, AWS has developed and implemented powerful and flexible logical security controls to create strong isolation boundaries between customers. Security is job zero at AWS, and you will find a rich history of AWS steadily enhancing its features and controls to help customers achieve their security posture requirements, such as GxP. Coming from operating an on-premises environment, you will often find that CSPs like AWS enable you to optimize your security configurations in the cloud more effectively than in your on-premises solutions. The AWS logical security capabilities, as well as the security controls in place, address the concerns that drive physical separation to protect your data. The provided isolation, combined with the added automation and flexibility, offers a security posture that matches or bests the security controls seen in traditional, physically separated environments. Additional detailed information on logical separation on AWS may be found in the Logical Separation on AWS whitepaper.

Cloud Infrastructure Qualification Activities

Geography

AWS serves over a million active customers in more than 200 countries. As customers grow their businesses, AWS will continue to provide infrastructure that meets their global requirements. The AWS Cloud infrastructure is built around AWS Regions and Availability Zones. An AWS Region is a physical location in the world that has multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. These Availability Zones offer you the ability to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center. The AWS Cloud operates in over 70 Availability Zones within over 20 geographic Regions around the world, with announced plans for more Availability Zones and Regions. For more information on the AWS Cloud, Availability Zones, and AWS Regions, see AWS Global Infrastructure.

Each AWS Region is designed to be completely isolated from the other AWS Regions. This achieves the greatest possible fault tolerance and stability. Each Availability Zone is isolated, but the Availability Zones in a Region are connected through low-latency links. AWS provides customers with the flexibility to place instances and store data within multiple geographic Regions as well as across multiple Availability Zones within each AWS Region. Each Availability Zone is designed as an independent failure zone. This means that Availability Zones are physically separated within a typical metropolitan region and are located in lower-risk flood plains (specific flood zone categorization varies by AWS Region). In addition to having discrete uninterruptible power supply (UPS) and on-site backup generation facilities, they are each fed via different grids from independent utilities to further reduce single points of failure. Availability Zones are all redundantly connected to multiple tier-1 transit providers.

Data Locations

Where geographic limitations apply, the multiple Availability Zone (AZ) design of every AWS Region offers you advantages, unlike other cloud providers who often define a region as a single data center. If you are focused on high availability, you can design your applications to run in multiple AZs to achieve even greater fault tolerance. AWS infrastructure Regions meet the highest levels of security, compliance, and data protection. If you have data residency requirements, you can choose the AWS Region that is in close proximity to your desired location. You retain complete control and ownership over the Region in which your data is physically located, making it easy to meet regional compliance and data residency requirements. In addition, for moving on-premises data to AWS for migrations or ongoing workflows, the AWS website on Cloud Data Migration describes the various tools and services that you may use to ensure data onshoring compliance, including:

• Hybrid cloud storage (AWS Storage Gateway, AWS Direct Connect)
• Online data transfer (AWS DataSync, AWS Transfer Family, Amazon S3 Transfer Acceleration, AWS Snowcone, Amazon Kinesis Data Firehose, APN Partner Products)
• Offline data transfer (AWS Snowcone, AWS Snowball, AWS Snowmobile)

Capacity

When it comes to capacity planning, AWS examines capacity at both a service and a rack usage level. The AWS capacity planning process also automatically triggers the procurement process for approval, so that AWS doesn't have additional lag time to account for, and AWS relies on capacity planning models, informed in part by customer demand, to trigger new data center builds. AWS enables you to reserve instances so that capacity is guaranteed in the Region(s) of your choice. AWS uses the number of Reserved Instances to inform planning for FOOB (future out of bound).

Uptime

AWS maintains Service Level Agreements (SLAs) for various services across the platform, which at the time of this writing include a guaranteed monthly uptime percentage of at least 99.99% for Amazon EC2 and Amazon EBS within a Region. A full list of AWS SLAs can be found at https://aws.amazon.com/legal/service-level-agreements/. In addition, Amazon Web Services publishes the most up-to-the-minute information on service availability in the AWS Service Health Dashboard (https://status.aws.amazon.com/). It is important to note that, as part of the shared security responsibility model, it is your responsibility to architect your application for resilience based on your organization's requirements.

AWS Quality Management System

Life science customers with obligations under GxP requirements need to ensure that quality is part of manufacturing and controls during the design, development, and deployment of their GxP-regulated product. This quality assurance includes an appropriate assessment of cloud service suppliers like AWS to meet the obligations of your quality system. For a deeper description of the AWS Quality Management System, you may use AWS Artifact to access additional documents under NDA. Below, AWS provides information on some of the concepts and components of the AWS Quality System of most interest to GxP customers like you.

Quality Infrastructure and Support Processes

Quality Management System Certification

AWS has undergone a systematic, independent examination of our quality system to determine whether the activities and activity outputs comply with ISO 9001:2015 requirements. A certifying agent found our quality management system (QMS) to comply with the requirements of ISO 9001:2015 for the activities described in the scope of registration. The AWS quality management system has been certified to ISO 9001 since 2014. The reports cover six-month periods each year (April-September / October-March). New reports are released in mid-May and mid-November. To see the AWS ISO 9001 registration certification, certification body information, and date of issuance and renewal, see the ISO 9001 AWS compliance program website: https://aws.amazon.com/compliance/iso-9001-faqs/. The certification covers the QMS over a specified scope of AWS services and Regions of operation. If you are pursuing ISO 9001:2015 certification while operating all or part of your IT systems in the AWS Cloud, you are not automatically certified by association; however, using an ISO 9001:2015-certified provider like AWS can make your certification process easier. AWS provides additional detailed information on the quality management system, accessible within AWS Artifact via customer accounts in the AWS console (https://aws.amazon.com/artifact/).

Software Development Approach

AWS's strategy for the design and development of AWS services is to clearly define services in terms of customer use cases, service performance, marketing and distribution requirements, production and testing, and legal and regulatory requirements. The design of all new services, and any significant changes to current services, is controlled through a project management system with multi-disciplinary participation. Requirements and service specifications are established during service development, taking into account legal and regulatory requirements, customer contractual commitments, and requirements to meet the confidentiality, integrity, and availability of the service, in alignment with the quality objectives established within the quality management system. Service reviews are completed as part of the development process, and these reviews include evaluation of security, legal, and regulatory impacts and customer contractual commitments. Prior to launch, each of the following requirements must be complete:

• Security risk assessment
• Threat modeling
• Security design reviews
• Secure code reviews
• Security testing
• Vulnerability/penetration testing

AWS implements open source software or custom code within its services. All open source software, including binary or machine-executable code from third parties, is reviewed and approved by the Open Source Group prior to implementation and has source code that is publicly accessible. AWS service teams are prohibited from implementing code from third parties unless it has been approved through the open source review. All code developed by AWS is available for review by the applicable service team as well as AWS Security. By its nature, open source code is available for review by the Open Source Group prior to granting authorization for use within Amazon.

Quality Procedures

In addition to the software, hardware, human resource, and real estate assets that are encompassed in the scope of the AWS quality management system supporting the development and operations of AWS services, it also includes documented information including, but not limited to, source code, system documentation, and operational policies and procedures. AWS implements formal, documented policies and procedures that provide guidance for operations and information security within the organization and the supporting AWS environments. Policies address purpose, scope, roles, responsibilities, and management commitment. All policies are maintained in a centralized location that is accessible by employees.

Project Management Processes

The design of new services, or any significant changes to current services, follows secure software development practices and is controlled through a project management system with multi-disciplinary participation.

Quality Organization Roles

AWS Security Assurance is responsible for familiarizing employees with the AWS security policies. AWS has established information security functions that are aligned with a defined structure, reporting lines, and responsibilities. Leadership involvement provides clear direction and visible support for security initiatives. AWS has established a formal audit program that includes continual, independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment. AWS maintains a documented audit schedule of internal and external assessments. The needs and expectations of internal and external parties are considered throughout the development, implementation, and auditing of the AWS control environment. Parties include, but are not limited to:

• AWS customers, including current customers and potential customers
• External parties to AWS, including regulatory bodies such as the external auditors and certifying agents
• Internal parties, such as AWS services and infrastructure teams, security, and overarching administrative and corporate teams

Quality Project Planning and Reporting

The AWS planning process defines service requirements and requirements for projects and contracts, and ensures customer needs and expectations are met or exceeded. Planning is achieved through a combination of business and service planning, project teams, quality improvement plans, review of service-related metrics and documentation, self-assessments and supplier audits, and employee training. The AWS quality system is documented to ensure that planning is consistent with all other requirements. AWS continuously monitors service usage to project infrastructure needs to support availability commitments and requirements. AWS maintains a capacity planning model to assess infrastructure usage and demands at least monthly, and usually more frequently. In addition, the AWS capacity planning model supports the planning of future demands to acquire and implement additional resources based upon current resources and forecasted requirements.

Electronic Records and Electronic Signatures

In the United States (US), GxP regulations are enforced by the US Food and Drug Administration (FDA) and are contained in Title 21 of the Code of Federal Regulations (21 CFR). Within 21 CFR, Part 11 contains the requirements for computer systems that create, modify, maintain, archive, retrieve, or distribute electronic records and electronic signatures in support of GxP-regulated activities (in the EU, the corresponding requirements are in EudraLex Volume 4, Good Manufacturing Practice (GMP) guidelines, Annex 11: Computerised Systems). Part 11 was created to permit the adoption of new information technologies by FDA-regulated life sciences organizations, while simultaneously providing a framework to ensure that electronic GxP data is trustworthy and reliable.

There is no GxP certification for a commercial cloud provider such as AWS. AWS offers commercial off-the-shelf (COTS) IT services according to IT quality and security standards such as ISO 27001, ISO 27017, ISO 27018, ISO 9001, NIST 800-53, and many others. GxP-regulated life sciences customers like you are responsible for purchasing and using AWS services to develop and operate your GxP systems, and for verifying your own GxP compliance and compliance with 21 CFR Part 11.
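Part 11's requirement that electronic GxP records be trustworthy and reliable is often addressed in application design with tamper-evident audit trails. The sketch below is a generic, hypothetical illustration of that pattern (it is not an AWS service and does not by itself confer compliance): each record version carries a SHA-256 hash chained to its predecessor, so any later alteration of an entry is detectable.

```python
import hashlib
import json

def append_record(trail, record):
    """Append a record version to a tamper-evident audit trail.

    Each entry's hash covers the record content plus the previous
    entry's hash, so modifying any earlier entry breaks the chain.
    """
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    payload = json.dumps(record, sort_keys=True) + prev_hash
    entry = {"record": record,
             "prev_hash": prev_hash,
             "hash": hashlib.sha256(payload.encode()).hexdigest()}
    trail.append(entry)
    return trail

def verify_trail(trail):
    """Recompute every hash; return True only if the chain is intact."""
    prev_hash = "0" * 64
    for entry in trail:
        payload = json.dumps(entry["record"], sort_keys=True) + prev_hash
        if (entry["prev_hash"] != prev_hash or
                entry["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

trail = []
append_record(trail, {"batch": "A-1", "result": "pass", "signed_by": "qa1"})
append_record(trail, {"batch": "A-1", "result": "pass", "signed_by": "qa2"})
print(verify_trail(trail))             # True
trail[0]["record"]["result"] = "fail"  # simulate tampering
print(verify_trail(trail))             # False
```

In a real GxP system this kind of integrity control would sit alongside access controls, signature binding, and retention procedures; the sketch only shows the detectability property.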
This document, used in conjunction with other AWS resources noted throughout, may be used to support your electronic records and electronic signatures requirements. A further description of the shared responsibility model as it relates to your use of AWS services in alignment with 21 CFR Part 11 can be found in the Appendix.

Company Self-Assessments

AWS Security Assurance monitors the implementation and maintenance of the quality management system by performing verification activities through the AWS audit program to ensure compliance, suitability, and effectiveness of the quality management system. The AWS audit program includes self-assessments, third-party accreditation audits, and supplier audits. The objective of these audits is to evaluate the operating effectiveness of the AWS quality management system. Self-assessments are performed periodically. Audits by third parties for accreditation are conducted to review the continued performance of AWS against standards-based criteria and to identify general improvement opportunities. Supplier audits are performed to assess the supplier's potential for providing services or material that conform to AWS supply requirements. AWS maintains a documented schedule of all assessments to ensure implementation and operating effectiveness of the AWS control environment to meet various objectives.

Contract Reviews

AWS offers services for sale under a standardized customer agreement that has been reviewed to ensure the services are accurately represented, properly promoted, and fairly priced. Please contact your account team if you have questions about AWS service terms.

Corrective and Preventive Actions

AWS takes action to eliminate the cause of nonconformities within the scope of the quality management system in order to prevent recurrence. The following procedure is followed when taking corrective and preventive actions:

1. Identify the specific nonconformities;
2. Determine the causes of the nonconformities;
3. Evaluate the need for actions to ensure that nonconformities do not recur;
4. Determine and implement the corrective action(s) needed;
5. Record results of action(s) taken;
6. Review the corrective action(s) taken;
7. Determine and implement the preventive action needed;
8. Record results of action taken; and
9. Review the preventive action.

The records of corrective actions may be reviewed during regularly scheduled AWS management meetings.

Customer Complaints

AWS relies on procedures and specific metrics to support you. Customer reports and complaints are investigated and, where required, actions are taken to resolve them. You can contact AWS at https://aws.amazon.com/contact-us/ or speak directly with your account team for support.

Third-Party Management

AWS maintains a supplier management team to foster third-party relationships and monitor third-party performance. SLAs and SLOs are implemented to monitor performance. AWS creates and maintains written agreements with third parties (for example, contractors or vendors) in accordance with the work or service to be provided (for example, network services, service delivery, or information exchange) and implements appropriate relationship management mechanisms in line with their relationship to the business. AWS monitors the performance of third parties through periodic reviews, using a risk-based approach, which evaluate performance against contractual obligations.

Training Records

Personnel at all levels of AWS are experienced and receive training in the skill areas of their jobs, as well as other assigned training. Training needs are identified to ensure that training is continuously provided and is appropriate for each operation (process) affecting quality. Personnel required to work under special conditions or requiring specialized skills are trained to ensure their competency. Records of training and certification are maintained to verify that individuals have appropriate training.
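A training-records control of the kind just described is straightforward to automate. The sketch below is a hypothetical illustration (the role names, training names, and helper functions are invented for this example, not taken from any AWS system): it compares each person's completed trainings against the requirements of their role and reports any gaps that would need remediation before task assignment.

```python
# Hypothetical required trainings per role (not an AWS catalog).
REQUIRED = {
    "operator": {"security_awareness", "data_handling"},
    "developer": {"security_awareness", "secure_coding"},
}

def training_gaps(role, completed):
    """Return the set of required trainings the person has not completed."""
    return REQUIRED.get(role, set()) - set(completed)

def is_qualified(role, completed):
    """A person is qualified for a role only when no gaps remain."""
    return not training_gaps(role, completed)

print(training_gaps("developer", {"security_awareness"}))  # {'secure_coding'}
print(is_qualified("operator", {"security_awareness", "data_handling"}))  # True
```

Running a check like this against the maintained training records produces exactly the verification the text describes: evidence that each individual holds the training appropriate to their assigned work.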
AWS has developed, documented, and disseminated role-based security awareness training for employees responsible for designing, developing, implementing, operating, maintaining, and monitoring systems affecting security and availability, and provides the resources necessary for employees to fulfill their responsibilities. Training includes, but is not limited to, the following information (when relevant to the employee's role):

• Workforce conduct standards
• Candidate background screening procedures
• Clear desk policy and procedures
• Social engineering, phishing, and malware
• Data handling and protection
• Compliance commitments
• Use of AWS security tools
• Security precautions while traveling
• How to report security and availability failures, incidents, concerns, and other complaints to appropriate personnel
• How to recognize suspicious communications and anomalous behavior in organizational information systems
• Practical exercises that reinforce training objectives
• HIPAA responsibilities

Personnel Records

AWS performs periodic formal evaluations of resourcing and staffing, including an assessment of employee qualification alignment with entity objectives. Personnel records are managed through an internal Amazon system.

Infrastructure Management

The Infrastructure team maintains and operates a configuration management framework to address hardware scalability, availability, auditing, and security management. By centrally managing hosts through the use of automated processes that manage change, Amazon is able to achieve its goals of high availability, repeatability, scalability, security, and disaster recovery. Systems and network engineers monitor the status of these automated tools on a continuous basis, reviewing reports to respond to hosts that fail to obtain or update their configuration and software. Internally developed configuration management software is installed when new hardware is provisioned. These tools are run on all UNIX hosts to validate that they are configured, and that software is installed, in compliance with standards determined by the role assigned to the host. This configuration management software also helps to regularly update packages that are already installed on the host. Only approved personnel, enabled through the permissions service, may log in to the central configuration management servers.

AWS notifies you of certain changes to the AWS service offerings where appropriate. AWS continuously evolves and improves its existing services, frequently adding new services or features to existing services. Further, as AWS services are controlled using APIs, if AWS changes or discontinues any API used to make calls to the services, AWS continues to offer the existing API for 12 months (as of this publication) to give you time to adjust accordingly. Additionally, AWS provides you with a Personal Health Dashboard with service health and status information specific to your account, as well as a public Service Health Dashboard to provide all customers with the real-time operational status of AWS services at the regional level, at https://status.aws.amazon.com/.

Software Development

Software Development Processes

The Project and Operation stages of the life cycle approach in GAMP, for instance, are reflected in the AWS information and activities surrounding the organizational mechanisms that guide the development and configuration of the information system, including software development lifecycles and software change management. Elements of the organizational mechanisms include policies and standards, the code pathway, deployment, a change management tool, ongoing monitoring, security reviews, emergency changes, management of outsourced and unauthorized development, and communication of changes to customers. The software development lifecycle activities at AWS include the code development and change management processes, which are centralized across AWS teams developing externally and internally facing code, with processes applying to both internal and external service teams. Code deployed at AWS is developed and managed in a consistent process regardless of its ultimate destination. There are several systems utilized in this process, including:

• A code management system used to assemble a code package as part of development
• An internal source code repository
• The hosting system in which AWS code pipelines are staged
• The tool utilized for automating the testing, approval, deployment, and ongoing monitoring of code
• A change management tool, which breaks change workflows down into discrete, easy-to-manage steps and tracks change details
• A monitoring service to detect unapproved changes to code or configurations in production systems; any variances are escalated to the service owner/team

Code Pathway

The AWS Code Pathway steps to development and deployment are outlined below. This process is executed regardless of whether the code is net new or represents a change to an existing codebase.

1. A developer writes the code in an approved integrated development environment running on an AWS-managed developer desktop environment. The developer typically does an initial build and integration test prior to the next step.
2. The developer checks in the code for review to an internal source code repository.
3. The code goes through a code review verification in which at least one additional person reviews the code and approves it. The list of approvals is stored in an immutable log that is retained within the code review tool.
4. The code is then built from source code into the appropriate type of deployable code package (which varies from language to language) in an internal build system.
5. After a successful build, including successful passing of all integration tests, the code gets pushed to a test environment.
6. The code goes through automated integration and verification tests in the pre-production environments, and upon successful testing, the code is pushed to production.

AWS may implement open source code within its services, but any such use of open source code is still subject to the approval, packaging, review, deployment, and monitoring processes described above. Open source software, including binary or machine-executable code and open source licenses, is additionally reviewed and approved prior to implementation. AWS maintains a list of approved open source software, as well as open source software that is prohibited.

Deployment and Testing

A pipeline represents the path approved code packages take from initial check-in, through a series of automated (and potentially manual) steps, to execution in production. The pipeline is where automation, testing, and approvals happen. At AWS, the deployment tool is used to create, view, and enforce code pipelines. This tool is utilized to promote the latest approved revision of built code to the production environment. A major factor in ensuring safe code deployment is deploying in controlled stages and requiring continuous approvals prior to pushing code to production. As part of the deployment process, pipelines are configured to release to test environments (e.g., "beta", "gamma", and others as defined by the team) prior to pushing the code to the production environment. Automated quality testing (e.g., integration testing, structural testing, behavioral testing) is performed in these environments to ensure code is performing as anticipated. If code is found to deviate from standards, the release is halted and the team is notified of the need to review. These development and test environments emulate the production environment and are used to properly assess and prepare for the impact of a change to the production environment. In order to reduce the risks of unauthorized access or change to the production environment, the development, test, and production environments are all logically separated. The tool additionally enforces phased deployment if the code is to be deployed across multiple Regions. Should a package include deployment for more than one AWS Region, the pipeline will enforce deployment on a single-Region basis. If the package were to fail integration tests in any Region, the pipeline is halted and the team is notified of the need to review.

Configuration and Change Management

Configuration management is performed during information system design, development, implementation, and operation through the use of the AWS Change Management process. Routine, emergency, and configuration changes to existing AWS infrastructure are authorized, logged, tested, approved, and documented in accordance with industry norms for similar systems. Updates to the AWS infrastructure are done to minimize any impact on you and your use of the services.

Software

AWS applies a systematic approach to managing change so that changes to customer-impacting services are thoroughly reviewed, tested, approved, and well communicated. The AWS change management process is designed to avoid unintended service disruptions and to maintain the integrity of service to you. Changes deployed into production environments are:

• Prepared: This includes scheduling, determining resources, creating notification lists, scoping dependencies, and minimizing concurrent changes, as well as a special process for emergent or long-running changes.
• Submitted: This includes utilizing a change management tool to document and request the change, determine potential impact, conduct a code review, create a detailed timeline and activity plan, and develop a detailed rollback procedure.
• Reviewed and Approved: Peer reviews of the technical aspects of a change are required. Changes must be authorized in order to provide appropriate oversight and understanding of business and security impact. The configuration management process includes key organizational personnel who are responsible for reviewing and approving proposed changes to the information system.
• Tested: Changes being applied are tested to help ensure they will behave as expected and not adversely impact performance.
• Performed: This includes pre- and post-change notification, managing the timeline, monitoring service health and metrics, and closing out the change.

AWS service teams maintain a current, authoritative baseline configuration for systems and devices. Change management tickets are submitted before changes are deployed (unless it is an emergency change) and include impact analysis, security considerations, description, timeframe, and approvals. Changes are pushed into production in a phased deployment, starting with the lowest-impact areas. Deployments are tested on a single system and closely monitored so impacts can be evaluated. Service owners have a number of configurable metrics that measure the health of the service's upstream dependencies. These metrics are closely monitored, with thresholds and alarming in place. Rollback procedures are documented in the Change Management (CM) ticket. AWS service teams retain older versions of AWS baseline packages and configurations necessary to support rollback, and previous versions are stored in the repository systems. Integration testing and the validation process are performed before rollbacks are implemented. When possible, changes are scheduled during regular change windows.

In addition to the preventative controls that are part of the pipeline (e.g., code review verifications, test environments), AWS also uses detective controls configured to alert and notify personnel when a change is detected that may have been made outside standard procedure. AWS checks deployments to ensure that they have the appropriate reviews and approvals before the code is committed to production. Exceptions to reviews and approvals for production lead to automatic ticketing and notification of the service team. After code is deployed to the production environment, AWS performs ongoing monitoring of performance through a variety of monitoring processes. AWS host configuration settings are also monitored as part of vulnerability monitoring to validate compliance with AWS security standards. Audit trails of the changes are maintained. Emergency changes to production systems that require deviations from standard change management procedures are associated with an incident and are logged and approved as appropriate. Periodically, AWS performs self-audits of changes to key services to monitor quality, maintain high standards, and facilitate continuous improvement of the change management process. Any exceptions are analyzed to determine the root cause, and appropriate actions are taken to bring the change into compliance or roll back the change if necessary. Actions are then taken to address and remediate the process or people issue.

Reviews

AWS performs internal security reviews, against Amazon security standards, of externally launched products, services, and significant feature additions prior to launch to ensure security risks are identified and mitigated before deployment to a customer environment. AWS security reviews include evaluating the service's design, threat model, and impact to AWS's risk profile. A typical security review starts with a service team initiating a review request to the dedicated team and submitting detailed information about the artifacts being reviewed. Based on this information, AWS reviews the design and identifies security considerations; these considerations include, but are not limited to, appropriate use of encryption, analysis of data handling, regulatory considerations, and adherence to secure coding practices. Hardware, firmware, and virtualization software also undergo security reviews, including a security review of the hardware design, the actual implementation, and final hardware samples. Code package changes are subject to the following security activities:

• Full security assessment
• Threat modeling
• Security design reviews
• Secure code reviews (manual and automated methods)
• Security testing
• Vulnerability/penetration testing

Successful completion of the above-mentioned activities is a prerequisite for service launch. Development teams are responsible for the security of the features they develop, which must meet the security engineering principles. Infrastructure teams incorporate security principles into the configuration of servers and network devices, with least privilege enforced throughout. Findings identified by AWS are categorized in terms of risk and are tracked in an automated workflow tool.

Product Release

For all AWS services, information can be found on the associated service website, which describes the key attributes of the service and product details, as well as pricing information, developer resources (including release notes and developer tools), FAQs, blogs, presentations, and additional documentation such as developer guides, API references, and use cases where relevant (https://aws.amazon.com/products/).

Customer Training

AWS has implemented various methods of external communication to support its customer base and the community. Mechanisms are in place to allow the customer support team to be notified of operational issues that impact your experience. A Service Health Dashboard is available and maintained by the customer support team to alert you to any issues that may be of broad impact. The AWS Cloud Security Center (https://aws.amazon.com/security/) and the Healthcare and Life Sciences Center (https://aws.amazon.com/health/) are available to provide you with security and compliance details and Life Sciences-related enablement information about AWS. You can also subscribe to AWS Support offerings that include direct communication with the customer support team and proactive alerts to any customer-impacting issues. AWS also has a series of training and certification programs (https://www.aws.training/) on a number of cloud-related
topics, in addition to a series of service and support offerings available through your AWS account team.

AWS Products in GxP Systems

With limited technical guidance from regulatory and industry bodies, this section aims to describe some of the best practices we have seen customers adopt when using cloud services to meet their regulatory compliance needs.

The final FDA guidance document "Data Integrity and Compliance With Drug CGMP" explicitly brings cloud infrastructure into scope through the revised definition of "computer or related systems":

"The American National Standards Institute (ANSI) defines systems as people, machines, and methods organized to accomplish a set of specific functions. Computer or related systems can refer to computer hardware, software, peripheral devices, networks, cloud infrastructure, personnel, and associated documents (e.g., user manuals and standard operating procedures)."

Further, industry organizations like ISPE are increasingly dedicating publications to cloud usage in the life sciences (Getting Ready For Pharma 4.0: Data Integrity in Cloud and Big Data Applications).

As described throughout this whitepaper, there is no unique certification for GxP regulations, so each customer defines their own risk profile. Therefore, it is important to note that although this whitepaper is based on AWS experience with life science customers, you must take final accountability and determine your own regulatory obligations.

To begin with, even when deployed in the cloud, GxP applications still need to be validated and their underlying infrastructure still needs qualifying. The basic principles governing on-premises infrastructure qualification still apply to virtualized cloud infrastructure. Therefore, the current industry guidance should still be leveraged. Traditionally, a regulated company was accountable and responsible for all aspects of their infrastructure qualification and application validation. With the introduction of
public cloud providers, part of that responsibility has shifted to a cloud supplier. The regulated company is still accountable, but the cloud supplier is now responsible for the qualification of the physical infrastructure, virtualization, and service layers, and for completely managing the services they provide. That is, the big difference now is that there is a shared compliance responsibility model, which is similar to the shared security responsibility model described earlier in this whitepaper.

Previous sections of this whitepaper described how AWS takes care of its part of the shared responsibility model. This section provides recommended strategies on how to cover your part of the shared responsibility model for GxP environments.

Involving AWS

Achieving GxP compliance when adopting cloud technology is a journey. AWS has helped many customers along this journey, and there is no compression algorithm for experience. For example, Core Informatics states:

"Using AWS, we can help organizations accelerate discovery while maintaining GxP compliance. It's transforming our business and, more importantly, helping our customers transform their businesses."
Richard Duffy, Vice President of Engineering, Core Informatics

For the complete case study, see Core Informatics Case Study. For a selection of other customer case studies, see AWS Customer Success.

Industry guidance recommends that companies should try to maximize supplier involvement and leverage our knowledge, experience, and even our documentation as much as possible, as we provide in the following sections and throughout this whitepaper. Please contact us to discuss starting your journey to the cloud.

Qualification Strategy for Life Science Organizations

One of the concerns for regulated enterprise customers becomes how to qualify and demonstrate control over a system when so much of the responsibility is now shared with a supplier. The purpose of a Qualification Strategy is to answer this
question. Some customers view a Qualification Strategy as an overarching Validation Plan. The strategy will employ various tactics to address the regulatory needs of the customer.

To better scope the Qualification Strategy, the architecture should be viewed in its entirety. Enterprise-scale customers typically define the architecture similar to the following:

Figure 2: Layered architecture (AWS services, regulated landing zone, building blocks, and applications, showing AWS responsibility versus customer responsibility and accountability)

The diagram illustrates a layered architecture where a large part is delegated to AWS. From this approach, a Qualification Strategy can be defined to address four main areas:

1. How to work with AWS as a supplier of services
2. The qualification of the regulated landing zone
3. The qualification of building blocks
4. Supporting the development of GxP applications

The situation also changes slightly if the customer leverages a service provider like AWS Managed Services, where the build, operation, and maintenance of the landing zone is done by the service provider. Conversely, for workloads that must remain on premises, AWS Outposts extends AWS services, including compute, storage, and networking, to customer sites. Data can be configured to be stored locally, and customers are responsible for controlling access around Outposts equipment. Data that is processed and stored on premises is accessible over the customer's local network. In this case, customer responsibility extends into the AWS Services box (Figure 3).

Figure 3: Layered architecture with service provider

In this situation, even more responsibility is delegated by the customer, and so the controls that are typically put in place by the customer to control their own operations
now need adaptations to check that similar controls are implemented by the service provider. The controls that are inherited from AWS, shared, or that remain with the customer were covered previously in the Shared Security Responsibility Model section of this whitepaper.

This section describes these layers at a high level. These layers are expanded upon in later sections of this whitepaper.

Industry Guidance

The following guidance is, at a minimum, a best practice for your environment. You should still work with your professionals to ensure you comply with applicable regulatory requirements.

The same basic principles that govern on-premises infrastructure qualification also apply to cloud-based systems. Therefore, this strategy uses a tactic of leveraging and building upon that same industry guidance, from a cloud perspective, based on the following ISPE GAMP Good Practice Guides (Figure 4):

• GAMP Good Practice Guide: IT Infrastructure Control and Compliance, 2nd Edition
• GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems

Figure 4: Mapping industry guidance to architecture layers (GAMP 5 maps to the applications layer; the GAMP Good Practice Guide on IT Infrastructure Control and Compliance maps to the landing zone and building block layers)

Supplier Assessment and Management

Industry guidance suggests you leverage a supplier's experience, knowledge, and documentation as much as possible. However, with so much responsibility now delegated to a supplier, the supplier assessment becomes even more important. A regulated company is still ultimately accountable for demonstrating that a GxP system is compliant, even if a supplier is responsible for parts of that system, so the regulated customer needs to establish enough trust in their supplier. The cloud service provider must be assessed to first determine if
they can deliver the services offered, but also to determine the suitability of their quality system and that it is systematically followed. The supplier needs to show that they have a QMS and follow a documented set of procedures and standards governing activities such as:

• Infrastructure Qualification and Operation
• Software Development
• Change Management
• Release Management
• Configuration Management
• Supplier Management
• Training
• System Security

Details of the AWS QMS are covered in the software section of this whitepaper. The capabilities of AWS to satisfy these areas may be reassessed on a periodic basis, typically by reviewing the latest materials available through AWS Artifact (i.e., AWS certifications and audit reports).

It is also important to consider and plan how operational processes that span the shared responsibility model will operate: for example, how to manage changes made by AWS to services used as part of your landing zone or applications, incident response management in cases of outages, or portability requirements should there be a need to change cloud service provider.

Regulated Landing Zone

One of the main functions of the landing zone is to provide a solid foundation for development teams to build on and to address as many regulatory requirements as possible, thus removing that responsibility from the development teams. The GAMP IT Infrastructure Control and Compliance guidance document follows a platform-based approach to the qualification of IT infrastructure, which aligns perfectly with a customer's need to qualify their landing zone. AWS Control Tower provides the easiest way to set up and govern a new, secure, multi-account AWS environment based on best practices established through AWS' experience working with thousands of enterprises as they move to the cloud. See AWS Control Tower features for further details of what is included in a typical landing zone.

GAMP also describes two scenarios for approaching
platform qualification:

1. The first scenario is independent of any specific application and instead considers generic requirements for the platform or landing zone.
2. The second scenario is where the requirements of the platform are derived directly from the applications that will run on the platform.

For many customers first building their landing zone, the exact nature of the applications that will run on it is unclear. Therefore, this paper follows scenario 1 and approaches the qualification independent of any specific application. The objective of the landing zone is to provide application teams with a solid foundation upon which to build, while addressing as many regulatory requirements as possible, so the regulatory burden on the application team is reduced.

Tooling and Automation

Many customers include common tooling and automation as part of the landing zone, so it can be qualified and validated once and used by all development teams. This common tooling is often placed within the shared services account of the landing zone. For example, standard tooling around requirements management, test management, CI/CD, and so on needs to be qualified and validated. Similarly, any automation of IT processes also needs to be validated. For example, it is possible to automate the Installation Qualification (IQ) step of your Computer Systems Validation process.

Leveraging Managed Services

Instead of building and operating a landing zone yourself, you have the option of delegating this responsibility. This delegation could be to AWS, by making use of AWS Managed Services, or to a partner within the AWS Partner Network (APN). This means the service provider is responsible for building a landing zone based on AWS best practices, operating it in accordance with industry best practices, and providing sufficient evidence to you that it meets your expectations.

Building Blocks

When it comes to the virtualized infrastructure and service instances supporting an application,
there are two approaches to take:

1. Commission service instances for a specific application. Each application team then takes care of its own qualification activities, possibly duplicating qualification effort across application/product teams.
2. Define 'building blocks' to be used across all applications. Create standard, reusable building blocks that can be qualified once and used many times.

To reduce the overall effort and increase developer productivity, this paper assumes the use of option 2. A 'building block' could be a single AWS service, such as Amazon EC2 or Amazon RDS; a combination of AWS services, such as Amazon VPC and NAT Gateway; or a complete stack, such as a three-tier web app or an MLOps stack. The qualification of 'building blocks' follows a process based on the GAMP IT Infrastructure Control and Compliance guidance document's '9.2 Infrastructure Building Block Concept'. To accelerate application development, you could create a library of these standardized and pre-qualified building blocks, made available for development teams to easily consume.

Computer System Validation

With a solid and regulatory-compliant foundation from the supplier assessment and landing zone, you can look at improving your existing Computer Systems Validation (CSV) standard operating procedure (SOP). Most customers already have existing SOPs around Computer Systems Validation. Many customers also state that their processes are old, slow, and very manual in nature, and they view moving to the cloud as an opportunity to improve these processes and automate as much as possible.

The 'building block' approach described earlier is already a great accelerator for development teams, enabling them to stitch together pre-qualified building blocks to form the basis of their application. However, the application team is still responsible for the validation of their application, including Installation Qualification (IQ). Again, this is another area where customer approach varies.
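To make the automated end of that spectrum concrete, the IQ step can be reduced to comparing each deployed configuration item against an approved baseline and emitting one evidence record per checked parameter. The following is a minimal sketch, not an AWS-provided tool; the resource names and parameters are hypothetical, and in practice the deployed values would come from an API such as AWS Config or CloudFormation drift detection:

```python
from dataclasses import dataclass


@dataclass
class IqCheck:
    """One piece of IQ evidence: a single parameter comparison."""
    resource_id: str
    parameter: str
    expected: str
    actual: str

    @property
    def passed(self) -> bool:
        return self.expected == self.actual


def run_iq(baseline: dict, deployed: dict) -> list:
    """Compare every deployed resource against the approved baseline,
    returning one IqCheck record per parameter in the baseline."""
    results = []
    for resource_id, expected_params in baseline.items():
        actual_params = deployed.get(resource_id, {})
        for name, expected in expected_params.items():
            results.append(IqCheck(
                resource_id=resource_id,
                parameter=name,
                expected=str(expected),
                actual=str(actual_params.get(name, "<missing>")),
            ))
    return results


# Hypothetical approved baseline for a qualified database 'building block'.
baseline = {
    "app-db": {"Engine": "postgres", "StorageEncrypted": True, "MultiAZ": True},
}
# Hypothetical configuration as actually deployed (MultiAZ has drifted).
deployed = {
    "app-db": {"Engine": "postgres", "StorageEncrypted": True, "MultiAZ": False},
}

report = run_iq(baseline, deployed)
failures = [check for check in report if not check.passed]
print(f"IQ checks: {len(report)}, failures: {len(failures)}")
# prints: IQ checks: 3, failures: 1
```

Each record, once timestamped and stored in a validated tool, can serve as IQ evidence in place of a manually authored document.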
Some customers follow existing processes and still generate documentation, which is stored in their Enterprise Document Management System. Other customers have fully adopted automation and achieved 'near-zero documentation' by validating their tool chain and relying on the data stored in those tools as evidence.

Validation During Cloud Migration

One important point that may be covered in a Qualification Strategy is the overarching approach to Computer System Validation (CSV) during migration. If you are embarking on a migration effort, part of the analysis of the application portfolio will be to identify archetypes, or groups of applications with similar architectures. A single runbook can be developed and then repeated for each of the applications in the group, speeding up migration. At this point, if the applications are GxP relevant, the CSV/migration strategy can also be defined for the archetype and repeated for each application.

Supplier Assessment and Cloud Management

As mentioned earlier, gaining trust in a Cloud Service Provider is critical, as you will be inheriting certain cloud infrastructure and security controls from the Cloud Service Provider. The approach described by industry guidance involves several steps, which we cover here.

Basic Supplier Assessment

The first (optional) step is to perform a basic supplier assessment to check the supplier's market reputation, knowledge and experience working in regulated industries, prior experience working with other regulated companies, and what certifications they hold. You can leverage industry assessments, such as Gartner's assessment in the AWS News Blog post AWS Named as a Cloud Leader for the 10th Consecutive Year in Gartner's Infrastructure & Platform Services Magic Quadrant, and customer testimonials.

Documentation Review

A supplier assessment often includes a deep dive into the assets available from the supplier describing their QMS and operations. This includes
reviewing certifications, audit reports, and whitepapers. For more information, see the AWS Risk and Compliance whitepaper.

AWS and its customers share control over the IT environment, and both parties have responsibility for managing it. The AWS part in this shared responsibility includes providing services on a highly secure and controlled platform and providing a wide array of security features customers can use. The customer's responsibility includes configuring their IT environments in a secure and controlled manner for their purposes. While customers don't communicate their use and configurations to AWS, AWS does communicate its security and control environment relevant to customers. AWS does this by doing the following:

• Obtaining industry certifications and independent third-party attestations
• Publishing information about the AWS security and control practices in whitepapers and website content
• Providing certificates, reports, and other documentation directly to AWS customers under NDA (as required)

For a more detailed description of AWS security, see AWS Cloud Security.

AWS Artifact provides on-demand access to AWS security and compliance reports and select online agreements. Reports available in AWS Artifact include our Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies across geographies and compliance verticals that validate the implementation and operating effectiveness of AWS security controls. Agreements available in AWS Artifact include the Business Associate Addendum (BAA) and the Nondisclosure Agreement (NDA). For a more detailed description of AWS Compliance, see AWS Compliance. If you have additional questions about the AWS certifications or the compliance documentation AWS makes available, please bring those questions to your account team.

Review Service Level Agreements (SLA)

AWS offers service level agreements for certain
AWS services. Further information can be found under Service Level Agreements (SLAs).

Audit

Mail Audit – To supplement the AWS documentation you have gathered, a mail audit questionnaire (sometimes referred to as a supplier questionnaire) may be submitted to AWS to gather additional information or to ask clarifying questions. You should work with your account team to request a mail audit.

Onsite Audit – AWS regularly undergoes independent third-party attestation audits to provide assurance that control activities are operating as intended. Currently, AWS participates in over 50 different audit programs. The results of these audits are documented by the assessing body and made available for all AWS customers through AWS Artifact. These third-party attestations and certifications of AWS provide you with visibility and independent validation of the control environment, eliminating the need for customers to perform individual onsite audits. Such attestations and certifications may also help relieve you of the requirement to perform certain validation work yourself for your IT environment in the AWS Cloud. For details, see the AWS Quality Management System section of this whitepaper.

Contractual Agreement

Once you have completed a supplier assessment of AWS, the next step is to set up a contractual agreement for using AWS services. The AWS Customer Agreement is available at https://aws.amazon.com/agreement/. You are responsible for interpreting regulations and determining whether the appropriate requirements are included in a contract with standard terms. If you have any questions about entering into a service agreement with AWS, please contact your account team.

Cloud Management Processes

Certain processes span the shared responsibility model and typically must be captured in your QMS in the form of SOPs and work instructions.

Change Management

Change Management is a bidirectional process when dealing with a cloud service provider.
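A QMS often encodes the handling of inbound provider change notifications as a simple, risk-based triage table. The sketch below is purely illustrative; the change categories and follow-up actions are assumptions for demonstration, not an AWS-mandated procedure:

```python
from enum import Enum


class ChangeType(Enum):
    """Broad categories of provider-side changes (illustrative)."""
    NEW_FEATURE = "new feature"
    DEPRECATION = "deprecation"
    STANDARD = "enhancement / bug fix"


# Hypothetical triage table mapping each change type to the follow-up
# actions a QMS might require. Contents are examples, not prescriptions.
TRIAGE = {
    ChangeType.NEW_FEATURE: [
        "assess impact on the service's risk profile",
        "re-qualify affected building blocks if mandated by the QMS",
    ],
    ChangeType.DEPRECATION: [
        "trigger impact assessment",
        "plan remediation with affected application teams",
    ],
    ChangeType.STANDARD: [
        "treat as a pre-authorized 'standard' change (ITIL)",
        "optionally run a regression test bed against the service",
    ],
}


def triage_change(change_type: ChangeType) -> list:
    """Return the follow-up actions for a provider change notification."""
    return TRIAGE[change_type]


print(triage_change(ChangeType.DEPRECATION)[0])
# prints: trigger impact assessment
```

Keeping the table in code means the triage rules themselves fall under the same change management process as any other configuration item.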
On the one hand, AWS is continually making changes to improve its services, as mentioned earlier in this paper. On the other hand, you can make feature requests, which is highly encouraged, as 90% of AWS service features are the result of direct customer feedback.

Customers typically use a risk-based approach, appropriate for the type of change, to determine the subsequent actions. Changes to AWS services which add functionality are not usually a concern, because no application will be using that new functionality yet. However, new functionality may trigger an internal assessment to determine if it affects the risk profile of the service and should be allowed for use. If mandated by your QMS, this may trigger a re-qualification of building blocks prior to allowing the new functionality.

Deprecations are considered more critical, because they could break an application. A deprecation may include a third-party library, utility, or version of a language such as Python. The deprecation of a service or feature is rare. Once you receive the notification of a deprecation, you should trigger an impact assessment. If an impact is found, the application teams should plan changes to remediate it. The notice period for a deprecation should allow time for assessment and remediation. AWS will also help you understand the impact of the change.

There are other changes, such as enhancements and bug fixes, which do not change the functionality of the service and do not trigger notifications to customers. These types of changes are synonymous with "standard" changes in ITIL, which are usually pre-authorized, low risk, relatively common, and follow a specific procedure. If you want to generate evidence showing no regression is introduced by this class of change, you could create a test bed which repeatedly tests the AWS services to detect regression. Should a problem be uncovered, it should immediately be reported to AWS for resolution.

Incident Management
The Amazon Security Operations team employs industry-standard diagnostic procedures to drive resolution during business-impacting events. Staff operators provide 24x7x365 coverage to detect incidents and to manage the impact and resolution. As part of the process, potential breaches of customer content are investigated and escalated to AWS Security and AWS Legal. Affected customers and regulators are notified of breaches and incidents where legally required.

You can subscribe to the AWS Security Bulletins page (https://aws.amazon.com/security/security-bulletins/), which provides information regarding identified security issues, and to the Security Bulletin RSS feed to keep abreast of security announcements on that page. You are responsible for reporting incidents involving your storage, virtual machines, and applications, unless the incident is caused by AWS. For more information, refer to the AWS Vulnerability Reporting webpage: https://aws.amazon.com/security/vulnerability-reporting/

Customer Support

AWS develops and maintains customer support procedures that include metrics to verify performance. When you contact AWS to report that AWS services do not meet their quality objectives, your issue is investigated and, where required, commercially reasonable actions are taken to resolve it. Where AWS is the first to become aware of a customer-impacting issue, procedures exist for notifying impacted customers according to their contract requirements and/or via the AWS Service Health Dashboard (http://status.aws.amazon.com/). You should ensure that your policies and procedures align to the customer support options provided by AWS. Additional details may be found in the Customer Complaints and Customer Training sections of this document.

Cloud Platform/Landing Zone Qualification

A landing zone, such as the one created by AWS Control Tower, is a well-architected, multi-account AWS environment that is based on security and
compliance best practices. The landing zone includes capabilities for centralized logging, security, account vending, and core network connectivity. We recommend that you then build features into the landing zone to satisfy as many regulatory requirements as possible and to effectively remove that burden from the development teams which build on it. The objective of the landing zone, and of the team owning it, should be to provide the guardrails and features that free the developers to use the 'right tools for the job' and to focus on delivering differentiated business value rather than on compliance.

For example, account vending could be extended to include account bootstrapping: automatically directing logs to the central logging account, dropping default VPCs and instantiating an approved VPC (if one is needed at all), deploying baseline stack sets, and establishing standard roles to support things like automated installation qualification (IQ). The Shared Services account would house centralized capabilities and automations, such as the aforementioned automation of IQ. The centralized logging account could satisfy regulatory requirements around audit trails, including, for example, record retention through the use of lifecycle policies. The addition of a backup and archive account could provide standard backup and restore, along with archiving services, for application teams to use. Similarly, a standardized approach to disaster recovery (DR) can be provided by the landing zone using tools like CloudEndure Disaster Recovery.

If you follow AWS guidance, implement a Cloud Center of Excellence (CCoE), and consider the landing zone as a product, the CCoE team takes on the responsibility of building these capabilities into the landing zone to satisfy regulatory requirements.

The number of capabilities built into the landing zone is often influenced by the organizational structure around it. If you have a traditional structure, with a divide between development teams and
infrastructure, tasks like server and network management are centralized, and these capabilities are built into the platform. If you adopt a product-centric operating model, the development teams become more autonomous and responsible for more of the stack, perhaps even the entire stack, from the VPC and everything built on it. Also consider that with serverless architectures you may not need a VPC at all, because there are no servers to manage.

This underlying cloud platform, when supporting GxP applications, should be qualified to demonstrate proper configuration and to ensure that a state of control and compliance is maintained. The qualification of the cloud platform can follow a traditional infrastructure qualification project, which includes planning, specification and design, risk assessment, qualification test planning, installation qualification (IQ), operational qualification (OQ), and handover (as described in Section 5, Qualification of Platforms, of the GAMP IT Infrastructure guidance). The components (configuration items) that make up the landing zone should all be deployed through automated means, i.e., an automated pipeline. This approach supports better change management going forward. After the completion of the infrastructure project and the creation of the operations and maintenance SOPs, you have a qualified cloud platform upon which GxP workloads can run. The SOPs cover topics such as account provisioning, access management, change management, and so on.

Maintaining the Landing Zone's Qualified State

Once the landing zone is live, it must be maintained in a qualified state. Unless operations are delegated to a partner, you typically create a Cloud Platform Operations and Maintenance SOP based on Section 6 of GAMP IT Infrastructure Control and Compliance. According to GAMP, there are several areas where control must be shown, such as change management, configuration management, security management, and others. GAMP guidance also suggests that 'automatic tools' should be used whenever possible. The following sections cover
these control areas and how AWS services can help with automation.

Change Management

Change Management processes control how changes to configuration items are made. These processes should include an assessment of the potential impact on the GxP applications supported by the landing zone. As mentioned earlier, all of the landing zone components are deployed using an automated pipeline. Therefore, once a change has been approved and committed in a source code repository tool like AWS CodeCommit, the pipeline is triggered and the change deployed. There will likely be multiple pipelines for the various parts that make up the landing zone. The landing zone is made up of infrastructure and automation components; through the use of infrastructure as code, there is no real difference in how these different components are deployed.

We recommend a continuous deployment methodology, because it ensures changes are automatically built, tested, and deployed, with the goal of eliminating as many manual steps as possible. Continuous deployment seeks to eliminate the manual nature of this process and automate each step, allowing development teams to standardize the process and increase the efficiency with which they deploy code. In continuous deployment, an entire release process is a pipeline containing stages. AWS CodePipeline can be used along with AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy. For customers needing additional approval steps, AWS CodePipeline also supports the inclusion of manual steps.

All changes to AWS services, whether manual or automated, are logged by AWS CloudTrail. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management
Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. In addition, you can use CloudTrail to detect unusual activity in your AWS accounts. These capabilities help simplify operational analysis and troubleshooting.

Of course, customers also want to be alerted about any unauthorized and unintended changes. You can use a combination of AWS CloudTrail and Amazon CloudWatch to detect unauthorized changes made to the production environment, and even automate immediate remediation. Amazon CloudWatch is a monitoring service for AWS Cloud resources and can be used to trigger responses to AWS CloudTrail events (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html).

Configuration Management

Going hand in hand with change management is configuration management. Configuration items (CIs) are the components that make up a system, and CIs should only be modified through the change management process.

Infrastructure as Code brings automation to the provisioning process through tools like AWS CloudFormation. Rather than relying on manually performed steps, both administrators and developers can instantiate infrastructure using configuration files. Infrastructure as Code treats these configuration files as software code. These files can be used to produce a set of artifacts, namely the compute, storage, network, and application services that comprise an operating environment. Infrastructure as Code eliminates configuration drift through automation, thereby increasing the speed and agility of infrastructure deployments.

AWS Tagging and Resource Groups let you organize your AWS landscape by applying tags at different levels of granularity. Tags allow you to label, collect, and organize resources and components within services. The Tag Editor lets you manage tags across services and AWS Regions. Using this approach, you can globally
manage all the application, business, data, and technology components of your target landscape. A Resource Group is a collection of resources that share one or more tags. It can be used to create an enterprise architecture view of your IT landscape, consolidating AWS resources into a per-project (that is, the ongoing programs that realize your target landscape), per-entity (that is, capabilities, roles, processes), and per-domain (that is, Business, Application, Data, Technology) view.

AWS Config is a service that lets you assess, audit, and evaluate the configurations of AWS resources. AWS Config continuously monitors and records your AWS resource configurations and lets you automate the evaluation of recorded configurations against desired configurations. With AWS Config, you can review changes in configurations and determine their overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting. In addition, AWS provides conformance packs for AWS Config to provide a general-purpose compliance framework designed to enable you to create security, operational, or cost-optimization governance checks using managed or custom AWS Config rules and AWS Config remediation actions, including a conformance pack for 21 CFR 11.

You can use AWS CloudFormation, AWS Config, and Tagging and Resource Groups to see exactly what cloud assets your company is using at any moment. These services also make it easier to detect when a rogue server or shadow application appears in your target production landscape.

Security Management

AWS has defined a set of best practices for customers who are designing the security infrastructure and configuration for applications running in Amazon Web Services (AWS). These AWS resources provide security best practices that will help you define your Information Security Management System (ISMS) and build a set
of security policies and processes for your organization so you can protect your data and assets in the AWS Cloud. These AWS resources also provide an overview of different security topics, such as identifying, categorizing, and protecting your assets on AWS; managing access to AWS resources using accounts, users, and groups; and suggesting ways you can secure your data, operating systems, applications, and overall infrastructure in the cloud.

AWS provides you with an extensive set of tools to secure workloads in the cloud. If you implement full automation, it could negate the need for anyone to have direct access to any environment beyond development. However, if a situation occurs that requires someone to access a production environment, they must explicitly request access, have the access reviewed and approved by the appropriate owner, and, upon approval, obtain temporary access with the least privilege needed and only for the duration required. You should then track their activities through logging while they have access. You can refer to this AWS resource for further information.

Problem and Incident Management

With AWS, you get access to many tools and features to help you meet your problem and incident management objectives. These capabilities help you establish a configuration and security baseline that meets your objectives for your applications running in the cloud. When a deviation from your baseline does occur (such as by a misconfiguration), you may need to respond and investigate. To successfully do so, you must understand the basic concepts of security incident response within your AWS environment, as well as the issues you need to consider to prepare, educate, and train your cloud teams before security issues occur. It is important to know which controls and capabilities you can use, to review topical examples for resolving potential concerns, and to identify remediation methods that can leverage automation and improve response speed.
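The detect-and-remediate pattern described above can be sketched as a small AWS Lambda handler. This is a minimal, illustrative example only: it assumes an Amazon EventBridge rule forwards CloudTrail `PutBucketAcl` events to the function, the event shape shown is abbreviated, and the chosen remediation (blocking all public access on the offending bucket) is one option among several.

```python
"""Sketch: automated remediation of a CloudTrail-detected S3 misconfiguration.

Assumption: an EventBridge rule matching CloudTrail 'PutBucketAcl' events
invokes this Lambda handler. Names and event fields are illustrative.
"""

# ACL grantee URIs that make a bucket publicly readable/writable.
PUBLIC_GRANTEES = (
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
)


def is_public_acl_event(detail: dict) -> bool:
    """Return True if the CloudTrail event detail granted a bucket ACL to a public group."""
    grants = (
        detail.get("requestParameters", {})
        .get("AccessControlPolicy", {})
        .get("AccessControlList", {})
        .get("Grant", [])
    )
    return any(g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES for g in grants)


def handler(event, context):
    """Lambda entry point: inspect the event and remediate if needed."""
    detail = event.get("detail", {})
    if not is_public_acl_event(detail):
        return {"action": "none"}

    # Remediate: block all public access on the offending bucket,
    # then leave an audit trail via the returned summary (and CloudWatch Logs).
    import boto3  # imported lazily so the pure decision logic above is testable offline

    bucket = detail["requestParameters"]["bucketName"]
    boto3.client("s3").put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    return {"action": "blocked_public_access", "bucket": bucket}
```

In practice, you would scope the EventBridge rule to event source `aws.s3` and event name `PutBucketAcl`, and record each remediation in your audit trail so the response itself remains under control.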
Because security incident response can be a complex topic, we encourage you to start small, develop runbooks, leverage basic capabilities, and create an initial library of incident response mechanisms to iterate from and improve upon. This initial work should include teams that are not involved with security, and should include your legal department, so that they are better able to understand the impact that incident response (IR), and the choices made during it, have on your corporate goals. For a comprehensive guide, see the AWS Security Incident Response Guide.

Backup, Restore, and Archiving

The ability to back up and restore is required for all validated applications. It is therefore a common capability that can be centralized as part of the regulated landing zone. Backup and restore should not be confused with archiving and retrieval, but the two areas can be combined into a centralized capability.

For a cloud-based backup and restore capability, consider AWS Backup. AWS Backup is a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services. Using AWS Backup, you can centrally configure backup policies and monitor backup activity for AWS resources such as Amazon EBS volumes, Amazon EC2 instances, Amazon RDS databases, Amazon DynamoDB tables, Amazon EFS file systems, Amazon FSx file systems, and AWS Storage Gateway volumes. AWS Backup automates and consolidates backup tasks previously performed service by service, removing the need to create custom scripts and manual processes. With just a few clicks in the AWS Backup console, you can create backup policies that automate backup schedules and retention management. AWS Backup provides a fully managed, policy-based backup solution, simplifying your backup management and enabling you to meet your business and regulatory backup compliance requirements.

Disaster Recovery

In traditional on-premises situations, Disaster Recovery (DR) involves a separate data center located a certain distance from
the primary data center. This separate data center only exists in case of a complete disaster impacting the primary data center. Often, the infrastructure at the DR site sits idle or, at best, hosts preproduction instances of applications, thus running the risk of it being out of sync with production.

With the advent of cloud, DR is now much easier and cheaper. The AWS global infrastructure is built around AWS Regions and Availability Zones (AZs). AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking. With Availability Zones, you can design and operate applications and databases that automatically fail over between Availability Zones without interruption. Availability Zones are more highly available, fault tolerant, and scalable than traditional single or multiple data center infrastructures. With AWS Availability Zones, it is very easy to create a multi-AZ architecture capable of withstanding a complete failure of one or more zones. For even more resilience, multiple AWS Regions can be used.

With the use of Infrastructure as Code, the infrastructure and applications in a DR Region do not need to run all of the time. In case of a disaster, the entire application stack can be deployed into another Region. The only components that must run all the time are those keeping the data repositories in sync. With tooling like CloudEndure Disaster Recovery, you can now automate disaster recovery.

Performance Monitoring

Amazon CloudWatch is a monitoring service for AWS Cloud resources and the applications you run on AWS. You can use CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. CloudWatch monitors and logs the behavior of your application landscape. CloudWatch can also trigger events based on the behavior of your application.

Qualifying Building Blocks

Customers frequently want to know how AWS gives developers freedom to use any AWS service while still maintaining regulatory compliance and fast development. To address this problem, you can leverage technology, but this also involves changes in process design to move away from blocking steps and towards guardrails. The changes required to your processes and IT operating model are beyond the scope of this whitepaper. However, we cover the core steps of a supporting process to qualify building blocks, which is one tactic for maintaining regulatory compliance more efficiently.

The infrastructure building block concept, as defined by GAMP, is an approach to qualify individual components, or combinations of components, which can then be put together to build out the IT infrastructure. The approach is applicable to AWS services. The benefit of this approach is that you can qualify one instance of a building block once and assume all the other instances will perform the same way, reducing the overall effort across applications. The approach also enables customers to change a building block without needing to re-qualify all of the others or revalidate the applications dependent upon the infrastructure.

Service Approval

Service approval is a technique used by many customers as part of architecture governance; that is, it is used across regulated and non-regulated workloads. Customers often consider multiple regulations when approving a service for use by development teams. For example, you may allow all services to be used in sandbox accounts, but may restrict the services in an account to only HIPAA eligible services if the application is subject to HIPAA regulations. Service approval is implemented through the use of AWS Organizations and Service Control Policies. You could take this approach to allow services to be used as part of GxP-relevant applications. For example, a combination of ISO, PCI, SOC, and HIPAA eligibility may provide sufficient
confidence. Sometimes customers want to implement automated controls over the approved service, as described in Approving AWS services for GxP workloads. You may prefer to follow a more rigorous qualification process, like the following building block qualification.

Building Block Qualification

The qualification of AWS service building blocks follows a process based on the GAMP IT Infrastructure Control and Compliance guidance documents’ ‘Infrastructure Building Block Concept’ (Section 9 / Appendix 2 of GAMP IT). According to EU GMP, the definition of qualification is: “Action of proving that any equipment works correctly and actually leads to the expected results.” The equipment also needs to continue to lead to the expected results over its lifetime. In other words, your process should show that the building block works as intended and is kept under control throughout its operational life. There will be written procedures in place, and, when executed, records will show that the activities actually occurred. Also, the staff operating the services need to be appropriately trained. This process is often described in an SOP covering the overall qualification and commissioning strategy, the scope, roles and responsibilities, a deliverables list, and any good engineering practices that will be followed to satisfy qualification and commissioning requirements.

With the number of AWS services, it can be difficult for you to qualify all AWS services at once. An iterative and risk-based approach is recommended, where services are qualified in priority order. Initial prioritization will take into account the needs of the first applications moving to cloud, and then the prioritization can be reassessed as demand for cloud services increases.

Design Stage

Requirements

The first activity is to consider the requirements for the building block. One approach is to look at the service API definition. Each AWS service has a clearly documented API describing the
entire functionality of that service. Many service APIs are extensive and support some advanced functionality. However, not all of this advanced functionality may be required initially, so any existing business use cases can be considered to help refine the scope. For example, when noting Amazon S3 requirements, you include the core functionality of creating/deleting buckets and the ability to put/get/delete objects. However, you may not include the lifecycle policy functionality because this functionality is not yet needed. These requirements are captured in the building block requirements specification / requirements repository. It is also important to consider non-functional requirements. To ensure suitability of a service, you can look at the service’s SLA and limits.

Gap Analysis

Where application requirements already exist, in the same way you can restrict the scope, you can also identify any gaps. Either the gap can be addressed by including more functionality for the building block, like bringing the Amazon S3 Bucket Lifecycle functionality into scope, or the service is not suitable for satisfying the requirements and an alternate building block should be used. If no other service seems to meet the requirements, you can custom develop a service or make a feature request to AWS for service enhancement.

Risk Assessment

Infrastructure is qualified to ensure reliability, security, and business continuity for the validated applications running on it. These three dimensions are usually included as part of any risk assessment. The published AWS SLA provides confidence in AWS service reliability. Data regarding the current status of the service, plus historical adherence to SLAs, is available from https://status.aws.amazon.com. For confidence in security, the AWS certifications can be checked for the relevant service. For business continuity, AWS builds to guard against outages and incidents and accounts for them in the design of AWS services, so when
disruptions do occur, their impact on customers and the continuity of services is as minimal as possible.

This step is also not only for GxP qualification purposes. The risk assessment should include any additional checks for other regulations, such as HIPAA.

When assessing the risks for a cloud service, it is important to consider the relationship to other building blocks. For example, an Amazon RDS database may have a relationship to the Amazon VPC building block because you decided a database is only allowed to exist within the private subnet of a VPC. Therefore, the VPC is taking care of many of the risks around access control. These dependencies will be captured in the risk assessment, which can then focus on additional risks specific to the service, or residual risks which cannot be catered for by the surrounding production environment.

Each cloud service building block goes through a risk assessment that identifies a list of risks. For each identified risk, a mitigation plan is created. The mitigation plan can influence one or more of the following components:

• Service Control Policy
• Technical Design/Infrastructure as Code Template
• Monitoring & Alerting of Automated Compliance Controls

A risk can be mitigated through the use of Service Control Policies (SCPs), where a service or specific operation is deemed too risky and its use explicitly denied through such a policy. For example, you can use an SCP to restrict the deletion of an Amazon S3 object through the AWS Management Console. Another option is to control service usage through the technical design of an approved Infrastructure as Code (IaC) template, where certain configuration parameters are restricted or parameterized. For example, you may use an AWS CloudFormation template to always configure an Amazon S3 bucket as private. Finally, you can define rules that feed into monitoring and alerting. For example, if the policy states Amazon S3 buckets cannot be public, but this configuration is not enforced in the infrastructure template, then the infrastructure can be monitored for any public Amazon S3 buckets. When an S3 bucket is configured as public, an alert triggers remediation, such as immediately changing a bucket to private.

Technical Design

In response to the specified requirements and risks, an architecture design specification will be created by a Cloud Infrastructure Architect, describing the logical service building block design and traceability from risk or requirement to the design. This design specification will, among other things, describe the capabilities of the building block to the end users and application development teams.

Design Review

To verify that the proposed design is suitable for the intended purpose within the surrounding IT infrastructure design, a design review can be performed by a suitably trained person as a final check.

Construction Stage

The logical design may be captured in a document, but the physical design is captured in an Infrastructure as Code (IaC) template, like an AWS CloudFormation template. This IaC template is always used to deploy an instance of the building block, ensuring consistency. For one approach, see the Automating GxP compliance in the cloud: Best practices and architecture guidelines blog post.

The IaC template will use parameters to deal with workload variances. As part of the design effort, it will be determined, often by IT Quality and Security, which parameters affect the risk profile of the service and so should be controlled, and which parameters can be set by the user. For example, the name of a database can be set by the template user and generally does not affect the risk profile of a database service. However, any parameter controlling encryption does affect the risk profile and therefore is fixed in the template and not changeable by the template user.

The template is a text file that can be edited. However, the rules expressed in the template are also automated within the surrounding monitoring and
alerting. For example, the rule stating that the encryption setting on a database must be set can be checked by automated rules. Therefore, a developer may override the encryption setting in the development environment, but that change is not allowed to progress to a validated environment or beyond.

At this point, automated test scripts can be prepared for executing during the qualification step to generate test evidence. The author of the automated tests must be suitably trained, and a separate, suitably trained person performs a code review and/or random testing of the automated tests to ensure the quality level. The automated tests ensure the building block initially functions as expected. These tests can be run again to ensure the building block continues to function as expected, especially after any change. However, to ensure nothing has changed once in production, you should identify and create automated controls. Using the Amazon S3 example again, all buckets should be private. If a public bucket is detected, it can be switched back to private, an alert raised, and a notification sent. You can also determine the individual that created the S3 bucket and revoke their permissions.

The final part of construction is the authoring and approval of any needed additional guidance and operations manuals. For example, how to recover a database would be included in the operations manual of an Amazon RDS building block.

Qualification and Commissioning Stage

It is important to note that infrastructure is deployed in the same way for every building block, i.e., through AWS CloudFormation using an Infrastructure as Code template. Therefore, there is usually no need for building block specific installation instructions. Also, you are confident that every deployment is done according to specification and has the correct configuration.

Automated Testing

If you want to generate test evidence, you can demonstrate that the functional requirements are fulfilled and
that all identified risks have been mitigated, thus indicating the building block is fit for its intended use, through the execution of the automated tests created during construction. The output of these automated tests is captured into a secure repository and can be used as test evidence. This automation deploys the building block template into a test environment, executes the automated tests, captures the evidence, and then destroys the stack again, avoiding any ongoing costs.

Testing may only make sense in combination with other building blocks. For example, the testing of a NAT gateway can only be done within an existing VPC. One alternative is to test within the context of standard archetypes, i.e., a complete stack for a typical application architecture.

Handover to Operations Stage

The handover stage ensures that the cloud operations team is familiar with the new building block and is trained in any service-specific operations. Once the operations team approves the new building block, the service can be approved by changing a Service Control Policy (SCP). The Infrastructure as Code template can be made available for use by adding it into the AWS Service Catalog or another secure template repository. If the response to a risk was an SCP or Monitoring Rule change, then the process to deploy those changes is triggered at this stage.

Computer Systems Validation (CSV)

You must still perform computer systems validation activities even if an application is running in the cloud. In fact, the overarching qualification strategy we have laid out in this paper has ensured that this CSV process can fundamentally be the same as before and has not become more difficult for the application development teams through the introduction of cloud technologies. However, with the solid foundation provided by AWS and the regulated landing zone, we can shift the focus to improving a traditional CSV process. You typically have a Standard Operating Procedure (SOP)
describing your Software Development Lifecycle (SDLC), which is often based on GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems. Many SOPs we have seen involve a lot of manual work and approvals, which slow down the process. The more automation that can be introduced, the quicker the process and the lower the chances of human error. The automation of IT processes is nothing new, and customers have been implementing automated toolchains for years for on-premises development. The move to cloud provides all those same capabilities, but also introduces some additional opportunities, especially in the virtualized infrastructure areas. In this section, we will focus primarily on those additional capabilities now available through the cloud.

Automating Installation Qualification (IQ)

It is important to note that even though we are qualifying the underlying building blocks, the application teams still need to validate their application, including performing the installation qualification (IQ) as part of their normal CSV activities, in order to demonstrate their application-specific combination of infrastructure building blocks was deployed and is functioning as expected. However, they can focus on testing the interaction between building blocks, rather than the functionality of each building block itself.

As mentioned, the automation of the development toolchain is nothing new to any high-performing engineering team. The use of CI/CD and automated testing tools has been around for a long time. What has not been possible before is the fully automated deployment of infrastructure and execution of the Installation Qualification (IQ) step. The use of Infrastructure as Code opens up the possibility to automate the IQ step, as described in this blog post. The controlled infrastructure template acts as the pre-approved specification, which can be compared against the stacks deployed by AWS CloudFormation. Summary reports and test evidence can
be created, or, if a deviation is found, the stack can be rolled back to the last known good state. Assuming the IQ step completes successfully, the automation can continue to the automation of Operational Qualification (OQ) and Performance Qualification (PQ).

Maintaining an Application’s Qualified State

Of course, once an application has been deployed, it needs to be maintained under a state of control. However, a lot of the heavy lifting for things like change management, configuration management, security management, and backup and restore has been built into the regulated landing zone for the benefit of all application teams.

Conclusion

If you are a Life Science customer with GxP obligations, you retain accountability and responsibility for your use of AWS products, including the applications and virtualized infrastructure you develop, validate, and operate using AWS products. Using the recommendations in this whitepaper, you can evaluate your use of AWS products within the context of your quality system and consider strategies for implementing the controls required for GxP compliance as a component of your regulated products and systems.

Contributors

Contributors to this document include:

• Sylva Krizan, PhD, Security Assurance, AWS Global Healthcare and Life Sciences
• Rye Robinson, Solutions Architect, AWS Global Healthcare and Life Sciences
• Ian Sutcliffe, Senior Solutions Architect, AWS Global Healthcare and Life Sciences

Further Reading

For additional information, see:

• AWS Compliance
• Healthcare & Life Sciences on AWS

Document Revisions

March 2021 — Updated to include more elements of AWS Quality System Information and updated guidance on customer approach to GxP compliance on AWS
January 2016 — First publication

Appendix: 21 CFR 11 Controls – Shared Responsibility for Use with AWS Services

Applicability of 21 CFR 11 to regulated medical products and GxP systems is the
responsibility of the customer as determined by the intended use of the system(s) or product(s) AWS has mapped some of these requirements based on the AWS Shared Responsibility Model ; however customers are responsible for meeting their own regulatory obligations Below we have identified each subpart of 21 CFR 11 and clarified areas where AWS services and operations and the customer share responsibility in order to meet 21 CFR 11 requirements 21 CFR Subpart AWS Responsibility Customer Responsibility 1110 Controls for closed systems Persons who use closed systems to create modify maintain or transmit electronic records shall employ procedures and controls designed to ensure the authenticity integrity and when appropriate the confidentiality of electronic records and to ensure that the signer cannot readily repudiate the signe d record as not genuine Such procedures and controls shall include the following: Amazon Web Services GxP Systems on AWS 58 1110(a) Validation of systems to ensure accuracy reliability consistent intended performance and the ability to discern invalid or altered records AWS services are b uilt and tested to conform to IT industry standards including SOC ISO PCI and others https://awsamazoncom/compliance/programs/ AWS compliance programs and reports provide objective evidenc e that AWS has implemented several key controls including but not limited to: Control over the installation and operation of AWS product components including both software components and hardware components; Control over product changes and configuratio n management; Risk management program; Management review planning and operational monitoring; Security management of information availability integrity and confidentiality; and Data protection controls including mechanisms for data backup restore and archiving All purchased materials and services intended for use in production processes are documented and documentation is reviewed and approved prior to use and verified to be 
in conformance with the specifications Final inspection and testing is perf ormed on AWS services prior to their release to general availability The final service release review procedure includes a verification that all acceptance data is present and that all product requirements were met Once in production AWS services underg o continuous performance monitoring In addition AWS’s significant customer base authorization for use by government agencies AWS products are basic building blocks that allow you to create private virtualized infrastructure environments for your custom software applications and commercial offthe shelf applications In this way you remain responsible for enabling (ie installing) configuring and operating AWS products to meet your data application and industry specific needs like GxP software validation and GxP infrastructure qualification as well as validation to support 21 CFR Part 11 requirements AWS products are however unlike traditional infrastructure software products in that they are highly automatable allowing you to programmatically create qualified infrastructure via version controlled JSON[1] scripts instead of manually executed paper p rotocols where applicable This automation capability not only reduces effort it increases control and consistency of the infrastructure environment such that continuous qualification [2] is possible Installation qualification of AWS services into your environment operational and performance qualification (IQ/OQ/PQ) are your responsibility as are the validation activities to demonstrate that systems with GxP workloads managing electronic records are appropriate for the intended use and meet regulatory requirements Amazon Web Services GxP Systems on AWS 59 21 CFR Subpart AWS Responsibility Customer Responsibility and recognition by industry analysts as a leading cloud services provider are further evidence of AWS products delivering their documented functionality https://awsamazoncom/documentation/ 
Relevant SOC2 Common Criteria: CC12 CC14 CC32 CC71 CC72 CC73 CC74 1110(b) The ability to generate accurate and complete copies of records in both human readable and electronic form suitable for inspection review and copying by the agency Persons should contact the agency if there are any questions reg arding the ability of the agency to perform such review and copying of the electronic records Controls are implemented subject to industry best practices in order to ensure services provide complete and accurate outputs with expected performance committed to in SLAs; Relevant SOC2 Common Criteria: A11 AWS has a series of Security Best Practices (https://awsamazoncom/security/security resources/ ) and additional resources you may referen ce to help protect data hosted within AWS You ultimately will verify that electronic records are accurate and complete within your AWS environment and determine the format by which data is human and/or machine readable and is suitable for inspection by regulators per the regulatory requirements Amazon Web Services GxP Systems on AWS 60 (c) Protection of records to enable their accurate and ready retrieval throughout the records retention period Controls are implemented subject to industry best practices in order to ensure services provide com plete and accurate outputs with expected performance committed to in SLAs; Relevant SOC2 Common Criteria: A11 AWS has identified critical system components required to maintain the availability of our system and recover service in the event of outage Critical system components are backed up across multiple isolated locations known as Availability Zones and back ups are maintained Each Availability Zone is engineered to operate independently with high reliability Backups of critical AWS system components are monitored for successful replication across multiple Availability Zones Refer to the AWS SOC 2 Report C C A12 The AWS Resiliency Program encompasses the processes and procedures by which AWS 
identifies, responds to, and recovers from a major event or incident within our environment. This program builds upon the traditional approach of addressing contingency management, which incorporates elements of business continuity and disaster recovery plans, and expands this to consider critical elements of proactive risk mitigation strategies, such as engineering physically separate Availability Zones (AZs) and continuous infrastructure capacity planning. AWS service resiliency plans are periodically reviewed by members of the Senior Executive management team and the Audit Committee of the Board of Directors. The AWS Business Continuity Plan outlines measures to avoid and lessen environmental disruptions. It includes operational details about steps to take before, during, and after an event. The Business Continuity Plan is supported by testing that includes simulations of different scenarios. During and after testing, AWS documents people and process performance, corrective actions, and lessons learned with the aim of continuous improvement. AWS data centers are designed to anticipate and tolerate failure while maintaining service levels. In case of failure, automated processes move traffic away from the affected area. Core applications are deployed to an N+1 standard so that, in the event of a data center failure, there is sufficient capacity to enable traffic to be load balanced to the remaining sites. Refer to the AWS SOC 2 Report, CC3.1, CC3.2, A1.2, A1.3. AWS has a series of Security Best Practices (https://aws.amazon.com/security/security-resources/) and additional resources you may reference to help protect your data hosted within AWS. You are responsible for implementation of appropriate security configurations for your environment to protect data integrity, as well as to ensure data and resources are only retrieved with appropriate permission. You are also responsible for creating and testing record retention policies, as well as backup and recovery processes. You are responsible for properly configuring and using the Service Offerings and taking your own steps to maintain appropriate security protection and backup of your Customer Content, which may include the use of encryption technology (to protect your content from unauthorized access) and routine archiving. Use Service Offerings such as Amazon S3, Amazon Glacier, and Amazon RDS in combination with replication and high-availability configurations. AWS's broad range of storage solutions for backup and recovery are designed for many customer workloads: https://aws.amazon.com/backup-recovery/. AWS services provide you with capabilities to design for resiliency and maintain business continuity, including the utilization of frequent server instance backups, data redundancy, replication, and the flexibility to place instances and store data within multiple geographic regions, as well as across multiple Availability Zones within each region. You need to architect your AWS usage to take advantage of multiple regions and Availability Zones. Distributing applications across multiple Availability Zones provides the ability to remain resilient in the face of most failure modes, including natural disasters or system failures. The AWS cloud supports many popular disaster recovery (DR) architectures, from "pilot light" environments that are ready to scale up at a moment's notice to "hot standby" environments that enable rapid failover. You are responsible for DR planning and testing. 21 CFR Subpart AWS Responsibility Customer Responsibility (d) Limiting system access to authorized individuals. AWS implements both physical and logical security controls. Physical access to all AWS data centers housing IT infrastructure components is restricted to authorized data center employees, vendors, and contractors who require access in order to execute their jobs. Employees requiring data center access must first apply for access and provide a valid business
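The backup and replication responsibilities described above can be scripted rather than configured by hand. As a minimal sketch (the bucket names, role ARN, and key prefix are hypothetical placeholders, not taken from this paper), the following builds a replication configuration document in the shape S3's PutBucketReplication API expects, without calling AWS:

```python
# Sketch: build an S3 cross-region replication configuration document.
# The role ARN, bucket ARN, and prefix below are hypothetical placeholders.

def build_replication_config(role_arn, destination_bucket_arn, prefix=""):
    """Return a replication configuration (one enabled rule) in the
    dictionary shape expected by S3's PutBucketReplication API."""
    return {
        "Role": role_arn,
        "Rules": [
            {
                "ID": "gxp-record-backup",
                "Status": "Enabled",
                "Prefix": prefix,  # replicate only objects under this prefix
                "Destination": {"Bucket": destination_bucket_arn},
            }
        ],
    }

config = build_replication_config(
    role_arn="arn:aws:iam::123456789012:role/replication-role",
    destination_bucket_arn="arn:aws:s3:::records-backup-us-west-2",
    prefix="records/",
)
print(config["Rules"][0]["Status"])  # Enabled
```

In a real environment this document would be passed to `boto3`'s `put_bucket_replication` call; building it as data first makes the retention configuration itself reviewable and versionable, which fits the change-control expectations discussed later in the table.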
justification. These requests are granted based on the principle of least privilege, where requests must specify to which layer of the data center the individual needs access, and are time bound. Requests are reviewed and approved by authorized personnel, and access is revoked after the requested time expires. Once granted admittance, individuals are restricted to areas specified in their permissions. Access to data centers is regularly reviewed. Access is automatically revoked when an employee's record is terminated in Amazon's HR system. In addition, when an employee or contractor's access expires in accordance with the approved request duration, his or her access is revoked even if he or she continues to be an employee of Amazon. AWS restricts logical user access privileges to the internal Amazon network based on business need and job responsibilities. AWS employs the concept of least privilege, allowing only the necessary access for users to accomplish their job function. New user accounts are created to have minimal access. User access to AWS systems requires approval from the authorized personnel and validation of the active user. Access privileges to AWS systems are reviewed on a regular basis. When an employee no longer requires these privileges, his or her access is revoked. Refer to the AWS SOC 2 Report, C1.2, C1.3, and CC6.1–6.6, to verify the AWS physical and logical security controls. AWS provides you with the ability to configure and use the AWS service offerings in order to maintain appropriate security protection and backup of content, which may include the use of encryption technology to protect your content from unauthorized access. You maintain full control and responsibility for establishing and verifying configuration of access to your data and AWS accounts, as well as periodic review of access to data and resources. Using AWS Identity and Access Management (IAM), a web service that allows you to securely control access to AWS resources, you must control who can access and use your data and AWS resources (authentication) and what data and resources they can use, and in what ways (authorization). IAM is a feature of all AWS accounts offered at no additional charge. You will be charged only for use of other AWS services by your users: https://aws.amazon.com/iam/. IAM Best Practices can be found here: http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html. Maintaining physical access to your facilities and assets is solely your responsibility. (e) Use of secure, computer-generated, time-stamped audit trails to independently record the date and time of operator entries and actions that create, modify, or delete electronic records. Record changes shall not obscure previously recorded information. Such audit trail documentation shall be retained for a period at least as long as that required for the subject electronic records and shall be available for agency review and copying. AWS maintains centralized repositories that provide core log archival functionality, available for internal use by AWS service teams. Leveraging S3 for high scalability, durability, and availability, it allows service teams to collect, archive, and view service logs in a central log service. Production hosts at AWS are equipped with logging for security purposes. This service logs all human actions on hosts, including logons, failed logon attempts, and logoffs. These logs are stored and accessible by AWS security teams for root cause analysis in the event of a suspected security incident. Logs for a given host are also available to the team that owns that host. A frontend log analysis tool is available to service teams to search their logs for operational and security analysis. Processes are implemented to protect logs and audit tools from unauthorized access, modification, and deletion. Refer to the AWS SOC 2 Report, CC5.1, CC7.1. Verification and
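The authentication/authorization split that IAM handles, as discussed above, is ultimately expressed as a JSON policy document. The following is a hedged sketch of a least-privilege policy; the bucket name and statement ID are hypothetical, and a real policy would be attached to a user or role through IAM rather than merely printed:

```python
import json

# Sketch: a least-privilege IAM policy granting read-only access to a
# single (hypothetical) records bucket. Bucket and Sid are placeholders.

def read_only_bucket_policy(bucket):
    """Return an IAM policy document allowing only read actions on
    the named S3 bucket and its objects."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ReadOnlyRecords",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",       # the bucket itself
                    f"arn:aws:s3:::{bucket}/*",     # objects in the bucket
                ],
            }
        ],
    }

policy = read_only_bucket_policy("gxp-electronic-records")
print(json.dumps(policy, indent=2))
```

Granting only the specific actions a role needs, as sketched here, is the "least privilege" concept the AWS responsibility column describes for its own internal access model.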
implementation of audit trails, as well as backup and retention procedures for your electronic records, are your responsibility. AWS provides you with the ability to properly configure and use the Service Offerings in order to maintain an appropriate audit trail and logging of data access, use, and modification (including prohibiting disablement of audit trail functionality). Logs within your control (described below) can be used for monitoring and detection of unauthorized changes to your data. Using Service Offerings such as AWS CloudTrail, Amazon CloudWatch Logs, and VPC Flow Logs, you can monitor your AWS data operations in the cloud by getting a history of AWS API calls for your account, including API calls made via the AWS Management Console, the AWS SDKs, the command line tools, and higher-level AWS services. You can also identify which users and accounts called AWS APIs for services that support AWS CloudTrail, the source IP address the calls were made from, and when the calls occurred. You can integrate AWS CloudTrail into applications using the API, automate trail creation for your organization, check the status of your trails, and control how administrators turn logging services on and off. AWS CloudTrail records two types of events: (1) Management Events: Represent standard API activity for AWS services. For example, AWS CloudTrail delivers management events for API calls such as launching EC2 instances or creating S3 buckets. (2) Data Events: Represent S3 object-level API activity, such as Get, Put, Delete, and List actions. https://aws.amazon.com/cloudtrail/ https://aws.amazon.com/documentation/cloudtrail/ http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html (f) Use of operational system checks to enforce permitted sequencing of steps and events, as appropriate. Not applicable to AWS – this requirement only applies to the customer's system. You are responsible for configuring,
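A CloudTrail management event already carries the fields §11.10(e) cares about: who acted, what they did, when, and from where. The sketch below reduces a hand-written, abbreviated event record (illustrative, not real CloudTrail output) to that audit summary; the field names used (`eventTime`, `eventName`, `sourceIPAddress`, `userIdentity`) are standard CloudTrail record fields:

```python
import json

# Sketch: extract the operator-action fields an audit trail needs from a
# CloudTrail management event. The record below is a hand-written,
# abbreviated example, not actual CloudTrail output.

sample_record = json.loads("""
{
  "eventTime": "2017-03-01T12:00:00Z",
  "eventName": "RunInstances",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"type": "IAMUser", "userName": "validation-analyst"}
}
""")

def audit_summary(record):
    """Reduce a CloudTrail record to a time-stamped operator action."""
    return {
        "who": record["userIdentity"]["userName"],
        "what": record["eventName"],
        "when": record["eventTime"],
        "where": record["sourceIPAddress"],
    }

print(audit_summary(sample_record)["what"])  # RunInstances
```

In practice, records like this are delivered by CloudTrail to an S3 bucket you control, so retention of the audit trail for the required period becomes an S3 lifecycle question rather than an application one.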
establishing, and verifying enforcement of permitted sequencing of steps and events within the regulated environment. (g) Use of authority checks to ensure that only authorized individuals can use the system, electronically sign a record, access the operation or computer system input or output device, alter a record, or perform the operation at hand. Not applicable to AWS – this requirement only applies to the customer's system. AWS provides you with the ability to configure and use the AWS service offerings in order to maintain appropriate security protection and backup of content, which may include the use of encryption technology to protect your content from unauthorized access. You maintain full control and responsibility for establishing and verifying configuration of access to your data and AWS accounts, as well as periodic review of access to data and resources. Using AWS Identity and Access Management (IAM), a web service that allows you to securely control access to AWS resources, you must control who can access and use your data and AWS resources (authentication) and what data and resources they can use, and in what ways (authorization). IAM is a feature of all AWS accounts offered at no additional charge. You will be charged only for use of other AWS services by your users: https://aws.amazon.com/iam/. IAM Best Practices can be found here: http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html (h) Use of device (e.g., terminal) checks to determine, as appropriate, the validity of the source of data input or operational instruction. Not applicable to AWS – this requirement only applies to the customer's system. You are responsible for establishing and verifying that the source of the data input into your system is valid, whether manually or, for example, by enforcing that only certain input devices or sources are utilized. (i) Determination that persons who develop, maintain, or use
electronic record/electronic signature systems have the education, training, and experience to perform their assigned tasks. AWS has implemented formal, documented training policies and procedures that address purpose, scope, roles, responsibilities, and management commitment. AWS maintains and provides security awareness training to all information system users on an annual basis. The policy is disseminated through the internal Amazon communication portal to all employees. Relevant SOC 2 Common Criteria: CC1.3, CC1.4, CC2.2, CC2.3. You are responsible for ensuring your AWS users (including IT staff, developers, validation specialists, and IT auditors) review the AWS product documentation and complete the product training programs you have determined are appropriate for your personnel. AWS products are extensively documented online (https://aws.amazon.com/documentation/), and a wide range of user training and certification resources are available, including introductory labs, videos, self-paced online courses, instructor-led training, and AWS Certification: https://aws.amazon.com/training/. Adequacy of training programs for your personnel, as well as maintenance of documentation of personnel training and qualifications (such as training records, job descriptions, and resumes), are your responsibility. (j) The establishment of, and adherence to, written policies that hold individuals accountable and responsible for actions initiated under their electronic signatures, in order to deter record and signature falsification. Not applicable to AWS – this requirement only applies to the customer's system. Establishment and enforcement of policies to hold personnel accountable and responsible for actions initiated under their electronic signatures is your responsibility, including training and associated documentation. (k) Use of appropriate controls over systems documentation, including: (1) Adequate controls over the
distribution of, access to, and use of documentation for system operation and maintenance. AWS maintains formal, documented policies and procedures that provide guidance for operations and information security within the organization and the supporting AWS environments. Policies are maintained in a centralized location that is only accessible by employees. Security policies are reviewed and approved on an annual basis by Security Leadership and are assessed by third-party auditors as part of our audits. Refer to SOC 2 Common Criteria CC2.2, CC2.3, CC5.3. You are responsible for establishing and maintaining your own controls over the distribution, access, and use of documentation and documentation systems for system operation and maintenance. (2) Revision and change control procedures to maintain an audit trail that documents time-sequenced development and modification of systems documentation. AWS policies and procedures go through processes for approval, version control, and distribution by the appropriate personnel and/or members of management. These documents are reviewed periodically and, when necessary, supporting data is evaluated to ensure the document fulfills its intended use. Revisions are reviewed and approved by the team that owns the document, unless otherwise specified. Invalid or obsolete documents are identified and removed from use. Internal policies are reviewed and approved by AWS leadership at least annually, or following a significant change to the AWS environment. Where applicable, AWS Security leverages the information system framework and policies established and maintained by Amazon Corporate Information Security. AWS service documentation is maintained in a publicly accessible online location so that the most current version is available by default: https://aws.amazon.com/documentation/. Refer to the AWS SOC 2 Report, CC2.3, CC3.4, CC6.7, CC8.1. You are responsible for changes to your
computerized systems running within your AWS accounts. System components must be authorized, designed, developed, configured, documented, tested, approved, and implemented according to your security and availability commitments and system requirements. Using Service Offerings such as AWS Config, you can manage and record your AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. AWS Config Rules also enables you to create rules that automatically check the configuration of AWS resources recorded by AWS Config: https://aws.amazon.com/documentation/config/. Change records and associated logs within your environment may be retained according to your record retention schedule. You are responsible for storing, managing, and tracking electronic documents in your AWS account and as part of your overall quality management system, including maintaining an audit trail that documents time-sequenced development and modification of systems documentation. §11.30 Controls for open systems. Persons who use open systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, as appropriate, the confidentiality of electronic records from the point of their creation to the point of their receipt. Such procedures and controls shall include those identified in §11.10, as appropriate, and additional measures such as document encryption and use of appropriate digital signature standards to ensure, as necessary under the circumstances, record authenticity, integrity, and confidentiality. Industry-standard controls and procedures are in place to protect and maintain the authenticity, integrity, and confidentiality of customer data. Refer to the AWS SOC 2 Report, C1.1, C1.2. You are responsible for determining whether your use of AWS services within your environment meets the
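The AWS Config Rules capability mentioned above amounts to evaluating a recorded resource configuration against a compliance predicate. The sketch below performs that kind of check locally against a hypothetical configuration item; the key names loosely mirror a Config configuration item, and the rule itself (requiring bucket versioning, which preserves a time-sequenced change history) is an illustrative choice, not a rule prescribed by this paper:

```python
# Sketch: the kind of check an AWS Config custom rule might perform,
# evaluated locally. Keys loosely mirror a Config configuration item;
# resource names are illustrative placeholders.

def evaluate_versioning(configuration_item):
    """Flag S3 buckets whose versioning is not enabled, since versioned
    buckets preserve a time-sequenced history of object changes."""
    versioning = (configuration_item
                  .get("supplementaryConfiguration", {})
                  .get("BucketVersioningConfiguration", {}))
    compliant = versioning.get("status") == "Enabled"
    return "COMPLIANT" if compliant else "NON_COMPLIANT"

item = {
    "resourceType": "AWS::S3::Bucket",
    "resourceId": "records-bucket",
    "supplementaryConfiguration": {
        "BucketVersioningConfiguration": {"status": "Enabled"}
    },
}
print(evaluate_versioning(item))  # COMPLIANT
```

In a deployed Config rule this predicate would run automatically whenever the recorded configuration changes, producing exactly the kind of continuous change record the customer-responsibility column calls for.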
definition of an open or closed system and whether these requirements apply. Refer to the responsibilities in §11.10 above for more information on recommended procedures and controls. Additional measures, such as document encryption and use of appropriate digital signature standards, are your responsibility to maintain data integrity, authenticity, and confidentiality. §11.50 Signature manifestations. (a) Signed electronic records shall contain information associated with the signing that clearly indicates all of the following: (1) The printed name of the signer; (2) The date and time when the signature was executed; and (3) The meaning (such as review, approval, responsibility, or authorship) associated with the signature. (b) The items identified in paragraphs (a)(1), (a)(2), and (a)(3) of this section shall be subject to the same controls as for electronic records and shall be included as part of any human readable form of the electronic record (such as electronic display or printout). Not applicable to AWS – this requirement only applies to the customer's applications. You are responsible for establishing and verifying that your applications meet the signed electronic records requirements identified. §11.70 Signature/record linking. Electronic signatures and handwritten signatures executed to electronic records shall be linked to their respective electronic records to ensure that the signatures cannot be excised, copied, or otherwise transferred to falsify an electronic record by ordinary means. Not applicable to AWS – this requirement only applies to the customer's applications. You are responsible for establishing and verifying that your applications/systems meet the signature/record linking requirements identified, including any required policies and procedures. Subpart C – Electronic Signatures. §11.100 General requirements. (a) Each electronic signature shall be unique to one
individual and shall not be reused by, or reassigned to, anyone else. Not applicable to AWS – this requirement only applies to the customer's applications. You are responsible for establishing and verifying that your applications/systems meet the general electronic signature requirements identified, including any required policies and procedures to enforce electronic signature governance. (b) Before an organization establishes, assigns, certifies, or otherwise sanctions an individual's electronic signature, or any element of such electronic signature, the organization shall verify the identity of the individual. Not applicable to AWS – this requirement only applies to the customer's applications. You are responsible for establishing and verifying that your applications/systems meet the general electronic signature requirements identified, including any required policies and procedures to verify individual identity prior to use of an electronic signature. (c) Persons using electronic signatures shall, prior to or at the time of such use, certify to the agency that the electronic signatures in their system, used on or after August 20, 1997, are intended to be the legally binding equivalent of traditional handwritten signatures. (1) The certification shall be submitted in paper form and signed with a traditional handwritten signature to the Office of Regional Operations (HFC-100), 5600 Fishers Lane, Rockville, MD 20857. (2) Persons using electronic signatures shall, upon agency request, provide additional certification or testimony that a specific electronic signature is the legally binding equivalent of the signer's handwritten signature. Not applicable to AWS – this requirement only applies to the customer's applications. You are responsible for establishing and verifying that your applications/systems meet the general electronic signature requirements identified, including determining
whether any notification to the agency is required, and documenting accordingly. §11.200 Electronic signature components and controls. (a) Electronic signatures that are not based upon biometrics shall: (1) Employ at least two distinct identification components, such as an identification code and password. (i) When an individual executes a series of signings during a single, continuous period of controlled system access, the first signing shall be executed using all electronic signature components; subsequent signings shall be executed using at least one electronic signature component that is only executable by, and designed to be used only by, the individual. (ii) When an individual executes one or more signings not performed during a single, continuous period of controlled system access, each signing shall be executed using all of the electronic signature components. (2) Be used only by their genuine owners; and (3) Be administered and executed to ensure that attempted use of an individual's electronic signature by anyone other than its genuine owner requires collaboration of two or more individuals. Not applicable to AWS – this requirement only applies to the customer's applications. You are responsible for establishing and verifying that your applications/systems meet the electronic signature components and controls identified, including establishing the procedures for use of identifying components and use by genuine owners. (b) Electronic signatures based upon biometrics shall be designed to ensure that they cannot be used by anyone other than their genuine owners. Not applicable to AWS – this requirement only applies to the customer's applications. You are responsible for establishing and verifying that your applications/systems meet the electronic signature components and controls identified, including establishing the procedures for use by genuine owners.
§11.300 Controls for identification codes/passwords. Persons who use electronic signatures based upon use of identification codes in combination with passwords shall employ controls to ensure their security and integrity. Such controls shall include: (a) Maintaining the uniqueness of each combined identification code and password, such that no two individuals have the same combination of identification code and password. Not applicable to AWS – this requirement only applies to the customer's applications. You are responsible for establishing and verifying that your applications/systems meet the electronic signature controls identified, including establishing the procedures and controls for uniqueness of password and ID code combinations. (b) Ensuring that identification code and password issuances are periodically checked, recalled, or revised (e.g., to cover such events as password aging). Not applicable to AWS – this requirement only applies to the customer's applications. You are responsible for establishing and verifying that your applications/systems meet the electronic signature controls identified, including establishing the procedures and controls for periodic review of password issuance. (c) Following loss management procedures to electronically deauthorize lost, stolen, missing, or otherwise potentially compromised tokens, cards, and other devices that bear or generate identification code or password information, and to issue temporary or permanent replacements using suitable, rigorous controls. Not applicable to AWS – this requirement only applies to the customer's applications. You are responsible for establishing and verifying that your applications/systems meet the electronic signature controls identified, including establishing the procedures and controls for loss management of compromised devices that generate ID codes or passwords.
(d) Use of transaction safeguards to prevent unauthorized use of passwords and/or identification codes, and to detect and report, in an immediate and urgent manner, any attempts at their unauthorized use to the system security unit and, as appropriate, to organizational management. Not applicable to AWS – this requirement only applies to the customer's applications. You are responsible for establishing and verifying that your applications/systems meet the electronic signature controls identified, including establishing the procedures and controls to prevent, detect, and report unauthorized use of ID codes and/or passwords. (e) Initial and periodic testing of devices, such as tokens or cards, that bear or generate identification code or password information to ensure that they function properly and have not been altered in an unauthorized manner. Not applicable to AWS – this requirement only applies to the customer's applications. You are responsible for establishing and verifying that your applications/systems meet the electronic signature controls identified, including establishing the procedures and controls to periodically test devices that generate ID codes or passwords for proper functionality. [1] In computing, JSON (JavaScript Object Notation) is the open standard syntax used for AWS CloudFormation templates: https://aws.amazon.com/documentation/cloudformation/ [2] https://www.continuousvalidation.com/what-is-continuous-validation/",General,consultant,Best Practices Core_Tenets_of_IoT,ArchivedCore Tenets of IoT July 2017 This paper has been archived. For the latest technical content about the AWS Cloud, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers © 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved. Notices This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject
to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. Contents: Overview; Core Tenets of IoT; Agility; Scalability and Global Footprint; Cost; Security; AWS Services for IoT Solutions; AWS IoT; Event Driven Services; Automation and DevOps; Administration and Security; Bringing Services and Solutions Together; Pragma Architecture; Summary; Contributors; Further Reading. Abstract This paper outlines core tenets that should be considered when developing a strategy for the Internet of Things (IoT). The paper helps customers understand the benefits of Amazon Web Services (AWS) and how the AWS cloud platform can be the critical component supporting the core tenets of an IoT solution. The paper also provides an overview of AWS services that should be part of an overall IoT strategy. This paper is intended for decision makers who are learning about Internet of Things platforms. Amazon Web Services – Core Tenets of IoT Overview One of the value propositions of an Internet of Things (IoT) strategy is the ability to provide insight into context that was previously invisible to the business. But before a business can develop a strategy for IoT, it needs a platform that meets the foundational principles of an IoT solution. AWS believes in some basic freedoms that are driving organizational and economic benefits of the cloud into businesses. These freedoms are why more than a million
customers already use the AWS platform to support virtually any cloud workload. These freedoms are also why the AWS platform is proving itself as the primary catalyst of any Internet of Things strategy across commercial, consumer, and industrial solutions. AWS customers working across such a spectrum of solutions have identified core tenets vital to the success of any IoT platform. These core tenets are agility, scale, cost, and security, which have been shown to be essential to the long-term success of any IoT strategy. This whitepaper defines these tenets as: Agility – The freedom to quickly analyze, execute, and build business and technical initiatives in an unfettered fashion. Scale – Seamlessly expand infrastructure regionally or globally to meet operational demands. Cost – Understand and control the costs of operating an IoT platform. Security – Secure communication from device through cloud, while maintaining compliance and iterating rapidly. By using the AWS platform, companies are able to build agile solutions that can scale to meet exponential device growth, with an ability to manage cost, while building on top of some of the most secure computing infrastructure in the world. A company that selects a platform that has these freedoms and promotes these core tenets will improve organizational focus on the differentiators of its business and the strategic value of implementing solutions within the Internet of Things. Core Tenets of IoT Agility A leading benefit companies seek when creating an IoT solution is the ability to efficiently quantify opportunities. These opportunities are derived from reliable sensor data, remote diagnostics, and remote command and control between users and devices. Companies that can effectively collect these metrics open the door to explore different business hypotheses based on their IoT data. For example, manufacturers can build predictive analytics solutions to measure, test, and tune
the ideal maintenance cycle for their products over time. The IoT lifecycle comprises multiple stages that are required to procure, manufacture, onboard, test, deploy, and manage large fleets of physical devices. When developing physical devices, the waterfall-like process introduces challenges and friction that can slow down business agility. This friction, coupled with the upfront hardware costs of developing and deploying physical assets at scale, often results in the requirement to keep devices in the field for long periods of time to achieve the necessary return on investment (ROI). With the ever-growing challenges and opportunities that face companies today, a company's IT division is a competitive differentiator that supports business performance, product development, and operations. In order for a company's IoT strategy to be a competitive advantage, the IT organization relies on having a broad set of tools that promote interoperability throughout the IoT solution and among a heterogeneous mix of devices. Companies that can achieve a successful balance between the waterfall processes of hardware releases and the agile methodologies of software development can continuously optimize the value that's derived from their IoT strategy. Scalability and Global Footprint Along with an exponential growth of connected devices, each thing in the Internet of Things communicates packets of data that require reliable connectivity and durable storage. Prior to cloud platforms, IT departments would procure additional hardware and maintain underutilized, overprovisioned capacity in order to handle the increasing growth of data emitted by devices, also known as telemetry. With IoT, an organization is challenged with managing, monitoring, and securing the immense number of network connections from these dispersed connected devices. In addition to scaling and growing a solution in one regional location, IoT solutions require the ability to
scale globally and across different physical locations. IoT solutions should be deployed in multiple physical locations to meet the business objectives of a global enterprise solution, such as data compliance, data sovereignty, and lower communication latency for better responsiveness from devices in the field. Cost Often the greatest value of an IoT solution is in the telemetric and contextual data that is generated and sent from devices. Building on-premises infrastructure requires an upfront capital purchase of hardware; it can be a large fixed expense that does not directly correlate to the value of the telemetry that a device will produce sometime in the future. To balance the need to receive telemetry today with an uncertain value derived from telemetric data in the future, an IoT strategy should leverage an
loss of life, for example from a compromised control system for gasoline pipelines or power grids. A competing dynamic for IoT security is the lifecycle of a physical device and the constrained hardware for sensors, microcontrollers, actuators, and embedded libraries. These constrained factors may limit the security capabilities each device can perform. With these additional dynamics, IoT solutions must continuously adapt their architecture, firmware, and software to stay ahead of the changing security landscape. Although the constrained factors of devices can present increased risks, hurdles, and potential tradeoffs between security and cost, building a secure IoT solution must be the primary objective for any organization.

AWS Services for IoT Solutions

The AWS platform provides a foundation for executing an agile, scalable, secure, and cost-effective IoT strategy. In order to achieve the business value that IoT can bring to an organization, customers should evaluate the breadth and depth of AWS services that are commonly used in large-scale, distributed IoT deployments. AWS provides a range of services to accelerate time to market: from device SDKs for embedded software to real-time data processing and event-driven compute services. In these sections we will cover the most common AWS services used in IoT applications and how these services correspond to the core tenets of an IoT solution.

AWS IoT

The Internet of Things cannot exist without things. Every IoT solution must first establish connectivity in order to begin interacting with devices. AWS IoT is an AWS managed service that addresses the challenges of connecting, managing, and operating large fleets of devices for an application. The combination of scalability of connectivity and security mechanisms for data transmission within AWS IoT provides a foundation for IoT communication as part of an IoT solution. Once data has been sent to AWS IoT, a solution is able to leverage an
ecosystem of AWS services spanning databases, mobile services, big data, analytics, machine learning, and more.

Device Gateway

A device gateway is responsible for maintaining the sessions and subscriptions for all connected devices in an IoT solution. The AWS IoT Device Gateway enables secure, bidirectional communication between connected devices and the AWS platform over MQTT, WebSockets, and HTTP. Communication protocols such as MQTT and HTTP enable a company to utilize industry-standard protocols instead of using a proprietary protocol that would limit future interoperability. As a publish-and-subscribe protocol, MQTT inherently encourages scalable, fault-tolerant communication patterns and fosters a wide range of communication options among devices and the Device Gateway. These message patterns range from communication between two devices to broadcast patterns where one device can send a message to a large field of devices over a shared topic. In addition, the MQTT protocol exposes different levels of Quality of Service (QoS) to control the retransmission and delivery of messages as they are published to subscribers. The combination of publish and subscribe with QoS not only opens the possibilities for IoT solutions to control how devices interact in a solution, but also drives more predictability in how messages are delivered, acknowledged, and retried in the event of network or device failures.

Shadows, Device Registry, and Rules Engine

AWS IoT consists of additional features that are essential to building a robust IoT application. The AWS IoT service includes the Rules Engine, which is capable of filtering, transforming, and forwarding device messages as they are received by the Device Gateway. The Rules Engine utilizes a SQL-based syntax that selects data from message payloads and triggers actions based on the characteristics of the IoT data.
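To make the SQL-based syntax concrete, the sketch below builds a topic rule payload in Python. This is a hedged illustration, not an official AWS example: the rule name, topic layout, field names, threshold, and SNS target ARN are all assumptions invented for this sketch.

```python
# Hedged sketch of an AWS IoT topic rule payload. SELECT picks fields out
# of the message payload, FROM names an MQTT topic filter, and WHERE
# filters on a characteristic of the data. All names are hypothetical.
rule = {
    "ruleName": "overheat_alert",
    "topicRulePayload": {
        "sql": (
            "SELECT device_id, temperature_c "
            "FROM 'fleet/+/telemetry' "
            "WHERE temperature_c > 80"
        ),
        "actions": [
            # forward matching messages to a hypothetical SNS topic
            {"sns": {"targetArn": "arn:aws:sns:us-east-1:123456789012:overheat"}}
        ],
    },
}
```

With an AWS SDK, a payload along these lines could be passed to the IoT `CreateTopicRule` API; the exact required fields (for example, an IAM role ARN on the action) should be checked against the current API reference.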
AWS IoT also provides a Device Shadow that maintains a virtual representation of a device. The Device Shadow acts as a message channel to send commands reliably to a device and to store the last known state of a device in the AWS platform. For managing the lifecycle of a fleet of devices, AWS IoT has a Device Registry. The Device Registry is the central location for storing and querying a predefined set of attributes related to each thing. The Device Registry supports the creation of a holistic management view for an IoT solution to control the associations between things, shadows, permissions, and identities.

Security and Identity

For connected devices, an IoT platform should utilize concepts of identity, least privilege, encryption, and authorization throughout the hardware and software development lifecycle. AWS IoT encrypts traffic to and from the service over Transport Layer Security (TLS), with support for most major cipher suites. For identification, AWS IoT requires a connected device to authenticate using an X.509 certificate. Each certificate must be provisioned, activated, and then installed on a device before it can be used as a valid identity with AWS IoT. In order to support this separation of identity and access for devices, AWS IoT provides IoT policies for device identities. AWS IoT also utilizes AWS Identity and Access Management (AWS IAM) policies for AWS users, groups, and roles. By using IoT policies, an organization has control over allowing and denying communications on IoT topics for each specific device's identity. AWS IoT policies, certificates, and AWS IAM are designed for explicit whitelist configuration of the communication channels of every device in a company's AWS IoT ecosystem.
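To make the whitelist model concrete, here is a hedged sketch of what a narrowly scoped IoT policy document might look like. The account ID, Region, thing name, and topic layout are assumptions for illustration, not values from this paper.

```python
import json

# Hypothetical sketch of a least-privilege AWS IoT policy for one device:
# the device may connect only with a client ID equal to its thing name,
# and may publish only to its own telemetry topic.
def device_policy(account: str, region: str, thing: str) -> str:
    arn = f"arn:aws:iot:{region}:{account}"
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # allow connecting only as this specific client
                "Effect": "Allow",
                "Action": ["iot:Connect"],
                "Resource": [f"{arn}:client/{thing}"],
            },
            {   # allow publishing only to this device's own topic
                "Effect": "Allow",
                "Action": ["iot:Publish"],
                "Resource": [f"{arn}:topic/fleet/{thing}/telemetry"],
            },
        ],
    }
    return json.dumps(policy)

doc = device_policy("123456789012", "us-east-1", "device42")
```

A document like this would be attached to the device's certificate, so that each identity is explicitly limited to the communication channels it needs.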
Event-Driven Services

In order to achieve the tenets of scalability and flexibility in an IoT solution, an organization should incorporate the techniques of an event-driven architecture. An event-driven architecture fosters scalable and decoupled communication through the creation, storage, consumption of, and reaction to events of interest that occur in an IoT solution. Messages that are generated in an IoT solution should first be categorized and mapped to a series of events. An IoT solution should then associate these events with business logic that executes commands and possibly generates additional events in the IoT system. The AWS platform provides several application services for building a distributed, event-driven IoT architecture. Foundationally, event-driven architectures rely on the ability to durably store and transfer events through an ecosystem of interested subscribers. In order to support decoupled event orchestration, the AWS platform has several application services that are designed for reliable event storage and highly scalable event-driven computation. An event-driven IoT solution should utilize Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), and AWS Lambda as foundational application components for creating simple and complex event workflows. Amazon SQS is a fast, durable, scalable, and fully managed message queuing service. Amazon SNS is a web service that publishes messages from an application and immediately delivers them to subscribers or other applications. AWS Lambda is designed to run code in response to events while the underlying compute resources are automatically managed. AWS Lambda can receive and respond to notifications directly from other AWS services. In an event-driven IoT architecture, AWS Lambda is where the business logic is executed to determine when events of interest have occurred in the context of an IoT ecosystem. AWS services such as Amazon SQS, Amazon SNS, and AWS Lambda can separate the consuming of events from the processing and business logic applied to those events. This separation of responsibilities creates flexibility and agility in an end-to-end solution. This separation enables the rapid modification of event trigger logic or the logic used to aggregate contextual data between parts of a system.
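A minimal sketch of the Lambda side of this pattern, assuming (hypothetically) that device messages arrive as a batch of SQS records whose bodies carry JSON telemetry; the event shape, field names, and threshold are illustrative assumptions.

```python
import json

# Hedged sketch of an AWS Lambda handler in an event-driven IoT solution.
# The handler classifies each telemetry record and, for interesting ones,
# generates follow-up events for the rest of the system.
TEMP_ALARM_C = 80.0

def handler(event, context):
    """Map incoming records to events and apply business logic."""
    follow_ups = []
    for record in event.get("Records", []):
        reading = json.loads(record["body"])
        if reading["temperature_c"] > TEMP_ALARM_C:
            # an event of interest generates an additional event
            follow_ups.append(
                {"type": "overheat", "device": reading["device_id"]}
            )
    return {"events": follow_ups}

# In production the result might be published to an SNS topic; returning
# it keeps the business logic decoupled from event delivery.
```

Because consuming, processing, and delivery are separated, the trigger logic here can be modified without touching the queue or the subscribers downstream.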
Finally, this separation allows changes to be introduced in an IoT solution without blocking the continuous stream of data being sent between end devices and the AWS platform.

Automation and DevOps

In IoT solutions, the initial release of an application is the beginning of a long-term approach to constantly refine the business advantages of an IoT strategy. After the first release of an application, a majority of time and effort will be spent adding new features to the current IoT solution. With the tenet of remaining agile throughout the solution lifecycle, customers should evaluate services that enable rapid development and deployment as business needs change. Unlike traditional web architectures, where DevOps technologies only apply to the backend servers, an IoT application will also require the ability to incrementally roll out changes to disparate, globally connected devices. With the AWS platform, a company can implement server-side and device-side DevOps practices to automate operations. Applications deployed in the AWS cloud platform can take advantage of several DevOps technologies on AWS. For an overview of AWS DevOps, we recommend reviewing the document Introduction to DevOps on AWS.1 Although most solutions will differ in deployment and operations requirements, IoT solutions can utilize AWS CloudFormation to define their server-side infrastructure as code. Infrastructure treated as code has the benefits of being reproducible, testable, and more easily deployable across other AWS Regions. Enterprise organizations that utilize AWS CloudFormation, in addition to other DevOps tools, greatly increase their agility and pace of application changes. In order to design an IoT solution that adheres to the tenets of security and agility, organizations must also update their connected devices after they have been deployed into the environment. Firmware updates provide a company a mechanism to add new features to a
device and are a critical path for delivering security patches during the lifetime of a device. To implement firmware updates to connected devices, an IoT solution should first store the firmware in a globally accessible service such as Amazon Simple Storage Service (Amazon S3) for secure, durable, highly scalable cloud storage. Then the IoT solution can implement Amazon CloudFront, a global content delivery network (CDN) service, to bring the firmware stored in Amazon S3 to the lower-latency points of presence for connected devices. Finally, a customer can leverage the AWS IoT shadow to push a command to a device to request that it download the new version of firmware from a pre-signed Amazon CloudFront URL that restricts access to the firmware objects available through the CDN. Once the upgrade is complete, the device should acknowledge success by sending a message back into the IoT solution. By orchestrating this small set of services for firmware updates, customers control their device DevOps approach and can scale it in a way that aligns with their overall IoT strategy. In IoT, automation and DevOps procedures expand beyond the application services that are deployed in the AWS platform and include the connected devices that have been deployed as part of the overall IoT architecture. By designing a system that can easily perform regular and global updates for new software changes and firmware changes, organizations can iterate on ways to increase value from their IoT solution and continuously innovate as new market opportunities arise.
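The firmware-update flow described above can be sketched as a shadow update written by the backend. This is a hedged illustration: the shadow field names and the download URL are assumptions invented here, not an official AWS schema.

```python
import json

# Hypothetical sketch: write a desired firmware version and download URL
# into a device's shadow document. The device compares desired vs.
# reported state, fetches the image, and reports the new version back
# to acknowledge success.
def firmware_update_shadow(version: str, signed_url: str) -> str:
    shadow = {
        "state": {
            "desired": {
                "firmware": {
                    "version": version,
                    # in practice this would be a pre-signed CloudFront URL
                    "url": signed_url,
                }
            }
        }
    }
    return json.dumps(shadow)

doc = firmware_update_shadow(
    "2.1.0", "https://example.cloudfront.net/fw/2.1.0.bin"
)
```

The shadow's store-and-forward behavior is what makes this flow robust against devices that are intermittently offline when the command is issued.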
Administration and Security

Security in IoT is more than data anonymization; it is the ability to have insight, auditability, and control throughout a system. IoT security includes the capability to monitor events throughout the solution and react to those events to achieve the desired compliance and governance. Security at AWS is our number one priority. Through the AWS Shared Responsibility Model, an organization has the flexibility, agility, and control to implement its security requirements.2 AWS manages the security of the cloud, while customers are responsible for security in the cloud. Customers maintain control over which security mechanisms they implement to protect their data, applications, devices, systems, and networks. In addition, companies can leverage the broad set of security and administrative tools that AWS and AWS partners provide to create a strong, logically isolated, and secure IoT solution for a fleet of devices. The first service that should be enabled for monitoring and visibility is AWS CloudTrail. AWS CloudTrail is a web service that records AWS API calls for an account and delivers log files to Amazon S3. After enabling AWS CloudTrail, a solution should build security and governance processes that are based on the real-time input from API calls made across an AWS account. AWS CloudTrail provides an additional level of visibility and flexibility in creating and iterating on operational openness in a system. In addition to logging API calls, customers should enable Amazon CloudWatch for all AWS services used in the system. Amazon CloudWatch allows applications to monitor AWS metrics and create custom metrics generated by an application. These metrics can then trigger alerts based on those events. Along with Amazon CloudWatch metrics, there are Amazon CloudWatch Logs, which store additional logs from AWS services or customer applications and can then trigger events based on those additional metrics. AWS services such as AWS IoT integrate directly with Amazon CloudWatch Logs; these logs can be dynamically read as a stream of data and processed using the business logic and context of the system for real-time detection of anomalies or security threats. By pairing services like Amazon CloudWatch and AWS CloudTrail with the capabilities of AWS IoT identities and policies, a
company can immediately collect valuable data around security practices at the start of the IoT strategy and meet the needs for a proactive implementation of security within their IoT solution.

Bringing Services and Solutions Together

To better understand customer usage, predict future trends, or run an IoT fleet more efficiently, an organization needs to collect and process the potentially vast amount of data gathered from connected devices, in addition to connecting with and managing large fleets of things. AWS provides a breadth of services for collecting and analyzing large-scale datasets, often called big data. These services may be integrated tightly within an IoT solution to support collecting, processing, and analyzing the solution's data, as well as proving or disproving hypotheses based upon IoT data. The ability to formulate and answer questions with the same platform one is using to manage fleets of things ultimately empowers an organization to avoid undifferentiated work and to unlock business innovations in an agile fashion. The high-level, cohesive architectural perspective of an IoT solution that brings IoT, big data, and other services together is called the Pragma Architecture. The Pragma Architecture is comprised of layers of solutions:

• Things – The device and fleet of devices
• Control Layer – The control point for access to the Speed Layer and the nexus for fleet management
• Speed Layer – The inbound, high-bandwidth device telemetry data bus and the outbound device command bus
• Serving Layer – The access point for systems and humans to interact with the devices in a fleet, to perform analysis, to archive and correlate data, and to use real-time views of the fleet

Pragma Architecture

The Pragma Architecture is a single, cohesive perspective of how the core tenets of IoT manifest as an IoT solution when using AWS services. One scenario of a Pragma Architecture-based IoT solution is around processing of data emitted
by devices; data also known as telemetry In the diagram above after a device authenticates using a device certificate obtained from the AWS IoT service in the control layer the device regularly sends telemetry data to the AWS IoT Device G ateway in the Speed Layer That telemetry data is then processed by the IoT Ru les Engine as an event to be output by Amazon Kinesis or AWS Lambda for use by web users interacting with the serving layer ArchivedAmazon Web Services – Core Tenets of IoT Page 11 Another scenario of a Pragma Architecture based IoT Solution is to send a command to a device In the diagram above the user’s application would write the desired command value to the target device’s IoT Shadow Then the AWS IoT Shadow and the Device Gate way work together to overcome an intermittent network to convey the command to the specific device These are just two device focused scenarios from a broad tapestry of solutions that fit the Pragma Architecture Neither of these scenarios address the nee d to process the potentially vast amount of data gathered from connected devices this is where having an integrated Big Data Backend starts to become important The Big Data Backend in this diagram is congruent with the entire ecosystem of real time and b atch mode big data solutions that customers already leverage the AWS platform to create Simply put from the big data perspective IoT telemetry equals “ingest ed data” in big data solutions If you’d like to learn more about big data solutions on AWS plea se check below for a link to further reading There is a colorful and broad tapestry of big data solutions that companies have already created using the AWS platform The Pragma Architecture shows that by building an IoT solution on that same platform the entire ecosystem of big data solutions is available Summary Defining your Internet of Things strategy can be a truly transformational endeavor that opens the door for unique business innovations As organizations start striving for 
their own IoT innovations, it is critical to select a platform that promotes the core tenets: business and technical agility, scalability, cost, and security. The AWS platform over-delivers on the core tenets of an IoT solution by not just providing IoT services, but offering those services alongside a broad, deep, and highly regarded set of platform services across a global footprint. This over-delivery also brings freedoms that increase your business's control over its own destiny and enables your business's IoT solutions to more rapidly iterate toward the outcomes sought in your IoT strategy. As next steps in evaluating IoT platforms, we recommend the further reading section below to learn more about AWS IoT, big data solutions on AWS, and customer case studies on AWS.

Contributors

The following individuals authored this document:

• Olawale Oladehin, Solutions Architect, Amazon Web Services
• Brett Francis, Principal Solutions Architect, Amazon Web Services

Further Reading

For additional reading, please consult the following sources:

• AWS IoT Service3
• Getting Started with AWS IoT4
• AWS Case Studies5
• Big Data Analytics Options on AWS6

Notes

1 https://d0.awsstatic.com/whitepapers/AWS_DevOps.pdf
2 https://aws.amazon.com/compliance/shared-responsibility-model/
3 https://aws.amazon.com/iot/
4 https://aws.amazon.com/iot/getting-started/
5 https://aws.amazon.com/solutions/case-studies/
6 https://d0.awsstatic.com/whitepapers/Big_Data_Analytics_Options_on_AWS.pdf

,General,consultant,Best Practices
Cost_Management_in_the_AWS_Cloud,Cost Management in the AWS Cloud

March 2018

This paper has been archived. For the latest technical guidance on Cost Management, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers/

© 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of
the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Cost Management in the Cloud
Creating a Cost-Conscious Culture
Cost Governance Best Practices
Getting Started with Cost Management
AWS Cost Explorer
AWS Cost and Usage Report
AWS Budgets
Other Cost-Related Metrics
Conclusion

Abstract

This is the second in a series of whitepapers designed to support your cloud journey. This paper seeks to empower you to maximize value from your investments, improve forecasting accuracy and cost predictability, create a culture of ownership and cost transparency, and continuously measure your optimization status. Amazon Web Services (AWS) provides a suite of cost management tools out of the box to help you get the most value from your AWS investment. This paper provides an overview of many of these tools, as well as organizational best practices for creating a cost-conscious mindset.

Amazon Web Services – Cost Management

Cost Management in the Cloud

Migrating to the cloud enhances your business's ability to scale and flex to the demands of your company's workloads. Historically, computing costs were tied to a quarterly or yearly hardware procurement investment. With cloud technology, you now have the flexibility to initialize resources and services at any time; you pay only for what you use. This has shifted the way that costs are
understood, managed, and optimized. In the past, hardware costs were treated as a capital expense, which led to predictable resource procurement and cost patterns. You had to purchase enough servers to support your company's most highly trafficked day, which resulted in waste because many of these servers would lie idle for much of the year. Because the cloud lets you scale on demand, you pay only for the resources you use, which minimizes waste but can result in variable cost patterns. The ability to scale up and down on demand has allowed resource procurement to transition from sole ownership of the finance team to stakeholders across IT, engineering, finance, and other teams. This democratization of resource procurement has initiated an ever-growing group of cost-conscious stakeholders who are now responsible for understanding, managing, and ultimately optimizing costs.

Creating a Cost-Conscious Culture

One of the first steps on your company's cloud journey is to establish best practices for cloud cost management. Your organization should create a Cloud Center of Excellence and designate key stakeholders to oversee technical and architectural quality and advance a cost-conscious agenda. This group often starts small and grows over time. A typical journey might look something like this:

• Cost awareness – An individual from the finance or engineering team allocates a few hours per week to learn the basics of cloud cost management using AWS training resources, helps establish basic governance best practices, and participates in organization-wide cloud direction discussions. This individual also tends to evangelize using out-of-the-box AWS reports and tools.
• Cost management and optimization – Over time, this individual or small group expands to a larger team whose members define custom metrics, adopt and disseminate advanced reporting methodologies, and enforce cost allocation strategies (often via AWS resource tags).
• Evangelism and
process optimization – As financial and cost management needs become more complex, a larger dedicated team with advanced skills supports cost management across the organization and establishes internal communities of interest to support education and collaboration on key cloud topics.

Cost Governance Best Practices

To scale increasingly complex workloads that are run on AWS, your organization should emphasize the creation of clear, effective policies and governance mechanisms around cloud deployment, usage, and cost responsibility. Keep in mind that executive support for cost management processes is critical.

• Resource controls (policy-based and automated) govern who can deploy resources and the process for identifying, monitoring, and categorizing these new resources. These controls can use tools such as AWS Service Catalog, AWS Identity and Access Management (IAM) roles and permissions, and AWS Organizations, as well as third-party tools such as ServiceNow.
• Cost allocation applies to teams using resources, shifting the emphasis from the IT-as-cost-center mentality to one of shared responsibility.
• Budgeting processes include reviewing budgets and realized costs and then acting on them.
• Architecture optimization focuses on the need to continually refine workloads to be more cost-conscious, to create better-architected systems.
• Tagging and tagging enforcement ensure cost tracking and visibility across organization lines.

Establishing effective processes ensures that the right information and controls are available to the right people. This reinforces channels of communication for cost-related inquiries, which strengthens your cost-conscious culture.

Getting Started with Cost Management

The best place to start with gaining insight and taking action on your costs is the monthly AWS bill, which is accessible via the AWS Billing and Cost Management console. Your AWS bill breaks down costs by service, AWS Region, and linked account.
Although this is a great place to start for high-level cost information, the AWS Management Console also comprises a suite of billing and cost management tools that give you fine-grained access, understanding, and control over your AWS costs and usage. These tools include AWS Cost Explorer, the AWS Cost and Usage Report, and AWS Budgets.

AWS Cost Explorer

AWS Cost Explorer helps you visualize, understand, and manage your AWS costs and usage over time. This is done via an intuitive interface that enables you to quickly create custom reports that include charts and tabular data. You can analyze your cost and usage data in aggregate (such as total costs and usage across all accounts) down to granular details (for example, m2.2xlarge costs within the Dev account tagged "project: Blackthorn"). Cost Explorer equips you with data exploration functionality, such as the ability to group and filter your cost and usage information, to help you quickly and easily get to the data you need to make data-driven decisions. You can also change the chart type and time frame, as well as access advanced filters. When you sign up for Cost Explorer, AWS prepares the data about your costs for the current month and the last 3 months, and then calculates the forecast for the next 3 months. Cost Explorer can display up to 12 months of historical data, data for the current month, and the forecasted costs for the next 3 months. To help you get started, Cost Explorer provides a selection of default reports to help you pinpoint cost and usage trends. These reports include:

• Monthly costs by AWS service – Visualize the costs and usage associated with the top five cost-accruing AWS services, and get a detailed breakdown on all services in a table view.
• Amazon EC2 monthly cost and usage – View all Amazon Elastic Compute Cloud (Amazon EC2) costs over the past three months, as well as current month-to-date costs.
• Monthly costs by linked account – View the distribution
of costs across your organization. To recreate this chart, add Linked Account as the grouping dimension in Cost Explorer.
• Monthly running costs – See all running costs over the past three months, and view forecasted costs for the coming month with a corresponding confidence interval.
• Reserved Instance (RI) reports – To learn more about the RI Utilization and Coverage reports, see Reserved Instance (RI) Reporting.

To create and save personalized reports, you can use the following functionality:

• Set time interval and granularity – Set a custom time interval, and determine whether you would like to view your data monthly or daily.
• Filter/group your data – Dig deeper into your data by taking advantage of filtering and grouping functionality, using a variety of available dimensions.
• Forecast future costs and usage – Use forecasting to get a better idea of what your costs and usage may look like in the future.

Available filters in Cost Explorer include:

• API Operation – Requests made to and tasks performed by a service
• AWS Services – Individual AWS services, such as Amazon EC2 or Amazon Simple Storage Service (Amazon S3)
• AWS Regions – Geographic areas in which AWS hosts your resources
• Availability Zones – Distinct locations within an AWS Region
• Usage Types – The units that each service employs to measure the usage of a specific type of resource
• Usage Type Groups – Predefined filters that collect specific categories of usage into a single filter (e.g., EC2: ELB – Running Hours)
• Cost Allocation Tags – AWS resource tags that have been activated for cost allocation
• Instance Types – The type you specified when launching an EC2 host
• Linked Accounts – Members of a consolidated billing family
• Purchase Option – Identify On-Demand, Spot, and Reserved Instance usage

Once you arrive at a helpful view, you can save your progress as a new report that you can refer to in the future. To learn more about AWS Cost
Explorer, see AWS Cost Explorer.

AWS Cost and Usage Report

The AWS Cost and Usage Report tracks your AWS usage and provides estimated charges associated with that usage. You can configure this report to present the data hourly or daily. It is updated at least once a day, until it is finalized at the end of the billing period. The AWS Cost and Usage Report gives you the most granular insight possible into your costs and usage, and it is the source of truth for the billing pipeline. It can be used to develop advanced custom metrics using business intelligence, data analytics, and third-party cost optimization tools. The AWS Cost and Usage Report is delivered automatically to an S3 bucket that you specify, and it can be downloaded directly from there (standard S3 storage rates apply). It can also be ingested into Amazon Redshift or uploaded to Amazon QuickSight. To learn more about this report, see AWS Cost and Usage Report.

AWS Budgets

AWS Budgets lets you set custom cost and usage budgets and receive alerts when you approach or exceed your budgeted amount. You can create budgets from the AWS Budgets Dashboard or programmatically via the AWS Budgets API. Budgets can track cost or usage monthly, quarterly, or yearly. You can create a budget by using the same filters available in Cost Explorer, and you can monitor budgets via the Budgets Dashboard in the AWS Management Console. For both cost and usage budgets, alerts can be set against actual or forecasted budgeted values. From there, you can further specify the percent accrual toward the cost or usage threshold. For example, specifying 100% of the actual costs of a $1,000 budget will alert you when the $1,000 threshold is exceeded. Creating a second alert that notifies you when 90% of your $1,000 budget has been reached will give you more time to take proactive action. You can also supplement these alerts by setting one against forecasted cost or usage values (e.g., 105% of your budgeted value), which will alert you to possible anomalies or changes in behavior.
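The alert examples above amount to simple percentage thresholds against a budgeted amount. As a small sketch using the paper's $1,000 example (the alert names are illustrative, not AWS Budgets terminology):

```python
# Compute the dollar amounts at which each alert would fire, given a
# budget and a set of percentage triggers (90% actual, 100% actual,
# 105% forecasted, mirroring the examples in the text).
def alert_amounts(budget, triggers):
    """Map each trigger name to budget * percentage / 100."""
    return {name: round(budget * pct / 100, 2) for name, pct in triggers.items()}

alerts = alert_amounts(
    1000.0,
    {"early-warning (actual)": 90,
     "exceeded (actual)": 100,
     "anomaly (forecasted)": 105},
)
# alerts == {'early-warning (actual)': 900.0,
#            'exceeded (actual)': 1000.0,
#            'anomaly (forecasted)': 1050.0}
```

Staggering thresholds this way is what gives you time to react before the hard limit is crossed.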
Each budget can have up to five associated alerts. Each alert can have up to 10 email subscribers and can optionally be published to an SNS topic.

Other Cost-Related Metrics

Creating cost-related metrics and then tracking them supports a data-driven decision-making culture. This makes it easy to understand and manage your costs and identify opportunities for savings. Some examples of cost-related metrics that you can implement include percentage of:

• Resource utilization
• Instances turned off daily
• Instances tagged
• Amazon EC2 instances that have undergone EC2 Right Sizing

Organizations taking advantage of AWS cost optimization offerings such as Reserved Instances and Spot Instances should develop metrics around them, such as percentage of:

• Reserved Instance coverage of key workloads
• Aggregate utilization of EC2 Reserved Instances
• Application of EC2 Spot Instances and any associated discounts

As organizational needs evolve, cost management requirements tend to evolve as well, toward quantifying savings. Savings can be realized as a result of cost optimization efforts:

• Workload management – Gain elasticity by turning off development, test, and staging workloads when not in use. A common approach is to mandate on/off for all such instances, except those flagged manually as exceptions: On/off savings = (highest hourly cost × hours per month) − actual monthly cost.
• Reserved Instance utilization – Maximize Reserved Instance utilization using the EC2 Reserved Instance reports in AWS Cost Explorer. A typical utilization target is 70% of always-on workloads.
• Reserved Instance right sizing – Apply a benchmark to a point in time and measure savings potential by right sizing your EC2 instances. Over time, you can measure savings achieved through right sizing and compare that to your initial benchmark.
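The on/off savings formula can be checked with a small worked example; the hourly rate, hours per month, and monthly bill below are illustrative values, not figures from this paper.

```python
# On/off savings = (highest hourly cost x hours per month) - actual monthly cost
def on_off_savings(highest_hourly_cost, hours_per_month, actual_monthly_cost):
    """Savings from scheduling an instance off versus leaving it always on."""
    always_on_cost = highest_hourly_cost * hours_per_month
    return always_on_cost - actual_monthly_cost

# A dev instance at $0.25/hour left on all month (720 h) would cost $180;
# if turning it off nights and weekends brings the bill to $60:
savings = on_off_savings(0.25, 720, 60.0)   # 120.0, i.e. $120 saved
```

Tracking this number per environment is one concrete way to turn the "instances turned off daily" metric into a dollar figure.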
metrics that you can implement in your cost optimization journey. You can further refine your metrics to track unit costs along the following dimensions: • Number of customers or active subscribers • Revenue generated • Product or business unit • Internal user • Experiment Using the cost metrics outlined above, you can link your cloud computing costs and usage to your business objectives. Conclusion AWS provides a set of cost management tools out of the box to help you manage, monitor, and ultimately optimize your costs. To get started, identify someone to set the standard for cloud excellence at your organization, get started using cost management tools for your needs, and define and track against a set of cost-related benchmarks for cost optimization. As your cost management capabilities grow, you can begin to use more advanced metrics, set budgets and alerts, and use advanced analytics to identify additional savings opportunities. To learn more about the tools that AWS provides to help you access, understand, allocate, control, and optimize your AWS costs and usage, see AWS Cost Management,General,consultant,Best Practices Creating_a_Culture_of_Cost_Transparency_and_Accountability,ArchivedCreating a Culture of Cost Transparency and Accountability, March 2018. This paper has been archived. For the latest technical guidance on Cost Management, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers/ Archived © 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved. Notices This document is provided for informational purposes only. It represents AWS’s current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services, each of which is provided “as is” without warranty of any kind, whether express or
implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. Archived Contents Abstract 4 From Cloud Cost to Cloud Value 1 Speed and Cost Tradeoffs 3 Cost is Everyone’s Responsibility 3 Promoting Visibility, Transparency, and Accountability 4 Determining Cost Allocation 5 Evangelizing Best Practices 6 Conclusion 6 Archived Abstract This is the fifth in a series of whitepapers designed to support your cloud journey. This paper seeks to empower you to maximize value from your investments, improve forecasting accuracy and cost predictability, create a culture of ownership and cost transparency, and continuously measure your optimization status. This paper discusses the tools, best practices, and tips that your organization can use to create a lean cost culture and maximize the benefits of the cloud. ArchivedAmazon Web Services – Creating a Culture of Cost Transparency and Accountability Page 1 From Cloud Cost to Cloud Value Migrating to the cloud is an iterative process that evolves as your organization develops new skills, processes, tools, and capabilities. These skills build momentum and accelerate your migration efforts. The prospect of moving to the cloud does not need to be a daunting or arduous proposition. Establishing the right cultural foundation to build on is key to a successful migration. Because cloud services are purchased, deployed, and managed in fundamentally different ways from traditional IT, adopting them successfully requires a cultural shift as well as a technological one inside organizations. Culture consists of the attitudes and behaviors that define how a business operates. Organizations can improve cost optimization by promoting a culture where employees view change as
normal and welcome responsibility in the interest of following best practices and adapting to new technology. This is what lean cost culture means. In traditional environments, IT infrastructure requires significant upfront investment and labor. The decision to incur these costs typically must go through multiple layers of approval. In legacy IT models, IT purchases are ordered and managed through a central services model at significant expense. What’s more, the sources of these costs are difficult to identify and allocate, in part because of limited transparency. The cloud presents an entirely different situation. IT infrastructure requires more limited capital investment, and labor can focus on differentiated work as opposed to managing infrastructure. You can easily spin up cloud services without IT intervention using a departmental credit card. Specialist teams are not always required to get infrastructure to a functioning state, and business units can more easily deploy their own technology needs. While the initial costs might be lower, they are also easier to incur. Without the right infrastructure and processes in place, costs are not always easy to manage. There’s also a major difference in how cloud services and data center infrastructure are paid for. If you create a virtual machine on a physical server in a data center, there’s no inherent way to measure the cost of that action. If you create this machine in the cloud, costs immediately begin to accrue. Cloud costs are tightly coupled with usage, often down to the second. Most actions have a hard dollar cost implication. Because cloud resources are easier to deploy and incur usage-based costs, organizations must rely on good governance and user behavior to manage costs —in other words, they need to create a lean cost culture. This is especially important because, with the cloud and modern agile DevOps practices, implementation is a
continuous cycle, with new resources, services, and projects being adopted regularly. A lean cost culture is essential when architecting cloud-based solutions and should be part of planning, design, and development. Cost management should not be left until after the technology has been developed. Fortunately, in many ways, creating a lean cost culture is much easier to do in the Amazon Web Services (AWS) Cloud than in the data center environment. You can closely track the costs incurred by specific individuals, groups, projects, or functions. Your teams can share information through consoles and reports. Rich cost analytics and management tools are built into the platform, and cost-saving management automation is relatively easy to implement. By using the tools, best practices, and tips detailed in this paper, your organization can maximize the benefits of the cloud while keeping costs under control. Ultimately, the goal is to move from thinking about cloud costs to understanding cloud value —the return on investment (ROI) your organization obtains from various initiatives and workloads that leverage the cloud. It’s important to understand not just what you’re spending but the value you’re getting in return. A bigger bill doesn’t necessarily indicate a problem if it means you’re growing your business, your margins, or your capabilities. Therefore, your organization needs to clearly identify key performance indicators and success factors that are impacted by cloud adoption. In the absence of well-identified metrics, determining success is complicated, and it can be difficult to derive value. Examples of categories that can help define success are business agility, operational resiliency, and total cost of ownership. One example of how to evaluate cloud value is by looking at unit cost. The unit can be any object of value in your organization, such as subscribers, API calls, or page views. The unit cost is the total cost of a service divided by the number of units. By focusing on reducing unit
cost over time and understanding how spending and margins are related, you can concentrate on getting more for your money. Arriving at this level of understanding can be an incremental process. Best practices that can help get you there are discussed below. Speed and Cost Tradeoffs With cost optimization, as with the other pillars in the AWS Well-Architected Framework, there are tradeoffs to consider, for example, whether to optimize for speed to market or for cost. In some cases it’s best to optimize for speed — going to market quickly, shipping new features, or simply meeting a deadline — rather than investing in upfront cost optimization. Sometimes design decisions are directed by haste rather than data, and the temptation always exists to overcompensate just in case, rather than spend time benchmarking for the most cost-optimal deployment. This might lead to overprovisioned and under-optimized deployments. However, this is a reasonable choice when you need to lift and shift resources from your on-premises environment to the cloud and then optimize afterward. Investing in a cost optimization strategy upfront allows you to realize the economic benefits of the cloud more readily by ensuring consistent adherence to best practices and avoiding unnecessary overprovisioning. Cost is Everyone’s Responsibility All teams can help manage cloud costs, and cost optimization is everyone’s responsibility. Many variables affect cost, and different levers can be pulled to drive operational excellence. The following are examples of different teams that need to consider cost optimization: • Engineering needs to know the cost of deploying resources and how to architect for cost optimization • Finance needs cost data for accounting, reporting, and decision making • Operations makes large-scale decisions that affect IT costs • Business decision makers must track costs against budgets and understand ROI
• Executives need to understand the impact of cloud spending to help with divestitures, acquisitions, and organizational strategy. In the past, few of these roles were tasked with the responsibility of understanding, let alone managing, IT costs. Now, stakeholders need training, policies, and tools to do this effectively. The best starting point is to create visibility into cloud costs. Promoting Visibility, Transparency, and Accountability In the cloud, it’s easy to get into a situation where the people watching costs are not the same people incurring them. One of the goals of creating a lean cost culture is turning everybody into a cost watcher. By providing alerts, dashboards, and reports relevant to each stakeholder, you reduce the feedback loop between the data and the action that is required to make corrections. In addition to giving stakeholders visibility, it’s a good idea to encourage transparency —in other words, let teams see how others are spending— showcasing trends, best practices, and opportunities for improvement. This can help create a shared sense of ownership over cloud costs and incentivize people to minimize them. You can even go so far as to encourage friendly rivalries between teams to achieve higher levels of optimization through gamification. To achieve true success, cost optimization must become a cultural norm in your organization. Get everyone involved. Encourage everyone to track their cost optimization daily so they can establish a habit of efficiency and see the daily impact of their cost savings over time. Although everyone shares the ownership of cost optimization, best practices call for someone to take primary responsibility for it. Typically, this is someone from either the finance or IT department who is responsible for ensuring that cost controls are monitored so that business goals can be met. The cost-optimization engineer makes sure that the
organization is positioned to derive optimal value from the decision to adopt AWS. As the organization matures, this role can become a Cloud Center of Excellence, responsible for continually driving cost optimization best practices. For more on developing a Cloud Center of Excellence, see the second whitepaper in this series. Determining Cost Allocation To help you understand your responsibility for cloud costs, use AWS tools for resource allocation. The two main mechanisms of cost allocation in AWS are linked accounts and tags. Linked Accounts Linked accounts let you split the AWS bill by cost center or business unit while centralizing payment through the organizational account. Linked accounts are managed through the consolidated billing feature in AWS Organizations. With consolidated billing, you can see a combined view of AWS charges incurred by all your accounts. You also can get a cost report for each member account that is associated with your master account. Tags To help you manage your instances, images, and other Amazon EC2 resources, you can optionally assign your own metadata to each resource in the form of tags. Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. You can use tags for many purposes, and they are an especially powerful way to create a lean cost culture. AWS Cost Explorer and detailed billing reports let you analyze your AWS costs by tag. Typically, you use business tags such as cost center/business unit, customer, or project to associate AWS costs with traditional cost allocation dimensions. However, a cost allocation report can include any tag, which means you can easily associate costs with technical or security dimensions, such as specific applications, environments, or compliance programs. Using tags can make it easy to create usage reports specific to role, business function, application, project, and more.
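To make tag-based allocation concrete, the following Python sketch sums costs per value of a chosen tag key and groups anything untagged into an "untagged" bucket. The resource list, tag names, and dollar figures are hypothetical; in practice this data would come from AWS Cost Explorer or the AWS Cost and Usage Report rather than a hand-written list.

```python
# Minimal sketch of tag-based cost allocation over a hypothetical resource list.
# Real cost and tag data would come from the AWS Cost and Usage Report.
from collections import defaultdict

resources = [
    {"id": "i-01", "cost": 120.0, "tags": {"CostCenter": "marketing", "Project": "web"}},
    {"id": "i-02", "cost": 300.0, "tags": {"CostCenter": "engineering", "Project": "api"}},
    {"id": "vol-03", "cost": 45.0, "tags": {}},  # untagged: shows up as unallocated cost
]

def allocate_costs(resources, tag_key):
    """Sum cost per value of tag_key; resources missing the tag go to 'untagged'."""
    totals = defaultdict(float)
    for r in resources:
        totals[r["tags"].get(tag_key, "untagged")] += r["cost"]
    return dict(totals)

by_cost_center = allocate_costs(resources, "CostCenter")
print(by_cost_center)
# {'marketing': 120.0, 'engineering': 300.0, 'untagged': 45.0}
```

Grouping by a different tag key (for example, "Project") reuses the same function, which is the appeal of a common tagging taxonomy: one report mechanism serves every allocation dimension, and the "untagged" bucket makes unallocated spend immediately visible.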
Your organization should create a common taxonomy as early as possible —one that embodies the organizational structure and enables easy accountability for costs. It is also important to track untagged resources, because these can represent unallocated costs. Many organizations enforce tagging programmatically and even implement a tag-or-terminate rule. With proper tagging, people can easily see which costs they are responsible for. Evangelizing Best Practices As with all cloud activities, the key to developing best practices stems from infusing a business culture into everything you do. When a culture of accountability and transparency becomes intrinsic to the way you conduct business, you can see benefits quickly. A cost-conscious cloud culture does not come about on its own. Changing processes and behaviors takes time and effort. Clear policies around cost ownership, deployment processes, reporting, and other best practices should be developed and evangelized across your organization. Training can help staff understand how cloud costs work and steps they can take to eliminate waste. Some fundamental policies to consider include: • Turning off unused resources • Using Amazon EC2 Spot Instances, Amazon EC2 Reserved Instances, and other service reservation types where appropriate • Using alerts, notifications, and AWS Budgets to help teams stay on track • Reporting waste on a team and company level • Applying showbacks and chargebacks to enable cost accountability • Setting up dashboards to enable widespread monitoring of cloud usage • Setting up communication cadences to ensure visibility of cost management issues to the right people Conclusion Every organization is different. Some organizations are used to rapid change and will adopt a lean cost culture quickly. Others have more entrenched processes and approaches and will require more time to get there. The key is to understand that
cultural change is required and that it should be addressed early in the cloud adoption journey. More than any specific tool or approach, getting your people on board is the foundation of cost management success,General,consultant,Best Practices Criminal_Justice_Information_Service_Compliance_on_AWS,"ArchivedCriminal Justice Information Service Compliance on AWS (This document is part of the CJIS Workbook package, which also includes CJIS Security Policy Requirements, CJIS Security Policy Template, and CJIS Security Policy Workbook.) March 2017. This paper has been archived. For the latest compliance content, see https://aws.amazon.com/compliance/resources/ Archived © 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved. Notices This document is provided for informational purposes only. It represents AWS’s current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. Archived Contents Introduction 1 What is Criminal Justice Information?
1 What is the CJIS Security Policy 2 CJIS Security Addendums (Agreements) 2 AWS Approach on CJIS 3 CJIS and Relationship to FedRAMP 3 AWS Shared Responsibility Model 4 Service Categories 4 AWS Regions, Availability Zones, and Endpoints 6 Security & Compliance OF the Cloud 7 Security & Compliance IN the Cloud 8 Creating a CJIS Environment on AWS 9 Auditing and Accountability 10 Identification and Authentication 11 Configuration Management 12 Media Protection & Information Integrity 13 System and Communication Protection and Information Integrity 14 Conclusion 15 Further Reading 16 Document Revisions 17 Archived Abstract There is a long and successful track record of AWS customers using the AWS cloud for a wide range of sensitive federal and state government workloads, including Criminal Justice Information (CJI) data. Law enforcement customers (and partners who manage CJI) are taking advantage of AWS services to dramatically improve the security and protection of CJI data, using the advanced security services and features of AWS such as activity logging (AWS CloudTrail), encryption of data in motion and at rest (Amazon S3’s Server-Side Encryption, with the option to bring your own key), comprehensive key management and protection (AWS Key Management Service and AWS CloudHSM), along with integrated permission management (IAM, federated identity management, multi-factor authentication). To enable this, AWS complies with Criminal Justice Information Services Division (CJIS) Security Policy requirements where applicable, such as providing states with fingerprint cards for GovCloud administrators and signing CJIS security addendum agreements with our customers. ArchivedAmazon Web Services – CJIS Compliance on AWS Page 1 Introduction Amazon Web Services (AWS) delivers a scalable cloud computing platform with high availability and dependability, providing the tools that enable customers to run a wide range of applications. Because AWS designed their cloud implementation with security
in mind, you can use AWS services to satisfy a wide range of regulatory requirements, including the Criminal Justice Information Services (CJIS) Security Policy. The CJIS Security Policy provides Criminal Justice Agencies (CJA) and Noncriminal Justice Agencies (NCJA) with a minimum set of security requirements for access to FBI CJIS systems and information, and for the protection and safeguarding of CJI. The essential premise of the CJIS Security Policy is to provide the appropriate controls to protect CJI, from creation through dissemination, whether at rest or in transit. This minimum standard of security requirements ensures continuity of information protection. What is Criminal Justice Information? Criminal Justice Information (CJI) refers to the FBI CJIS-provided data necessary for law enforcement agencies to perform their mission and enforce the laws, such as biometric, identity history, person, organization, property, and case/incident history data. CJI also refers to data necessary for civil agencies to perform their mission, including data used to make hiring decisions. CJIS Security Policy 5.2 A.3 defines CJI as: Criminal Justice Information is the abstract term used to refer to all of the FBI CJIS provided data necessary for law enforcement agencies to perform their mission and enforce the laws, including but not limited to: biometric, identity history, person, organization, property, and case/incident history data. In addition, CJI refers to the FBI CJIS provided data necessary for civil agencies to perform their mission; including but not limited to data used to make hiring decisions. — CJIS Security Policy 5.2 A.3 Law enforcement must be able to access CJI wherever and whenever necessary, in a timely and secure manner, in order to reduce and stop crime. What is the CJIS Security Policy The intent of the CJIS Security Policy is to ensure the protection of the CJI until the information is 1) released to the public via
authorized dissemination (e.g., within a court system, presented in crime reports data, or released in the interest of public safety) and 2) purged or destroyed in accordance with applicable record retention rules. The Criminal Justice Information Services Division (CJIS) is a division of the United States Federal Bureau of Investigation (FBI) and is responsible for publishing the Criminal Justice Information Services (CJIS) Security Policy, which is currently on version 5.5. The CJIS Security Policy outlines a minimum set of security requirements that create security controls for managing and maintaining Criminal Justice Information (CJI) data. The CJIS Advisory Policy Board (APB) manages the policy with national oversight from the CJIS division of the FBI. There is no centralized adjudication body for determining what is or isn’t compliant with the Security Policy in the way that FedRAMP has standardized security assessments across the federal government. That means vendors/CSPs wanting to provide CJIS-compliant solutions to multiple law enforcement agencies must gain formal CJIS authorizations from city, county, or state-level authority. CJIS Security Addendums (Agreements) Unlike many of the compliance frameworks that AWS supports, there is no central CJIS authorization body, no accredited pool of independent assessors, nor a standardized assessment approach to determining whether a particular solution is considered “CJIS compliant.” Simply put, a standardized “CJIS compliant” solution which works across all law enforcement agencies does not exist. It is a common misunderstanding, often miscommunicated, that a cloud service provider can be “CJIS certified.” It is imperative to understand that delivering a CJIS-compliant solution relies on a Shared Responsibility Model between the cloud service provider and the CJA. Each law enforcement organization granting CJIS authorizations interprets solutions according to its own risk acceptance standard of what can be construed as compliant
within the CJIS requirements. Authorizations from one state do not necessarily find reciprocity within another state (or even necessarily within the same state). Providers must submit solutions for review with each agency’s authorizing official(s), possibly to include duplicate fingerprint and background checks and other state/jurisdiction-specific requirements. Each authorization is an agreement with that particular organization; something that must be repeated locally at each law enforcement agency. Thus, be wary of vendors that may represent themselves as having a nationally recognized or 50-state-compliant CJIS service. AWS Approach on CJIS AWS has evaluated the 13 Policy Areas along with the 131 security requirements and has determined that 10 controls can be directly inherited from AWS, both AWS and the CJIS customer share 78, and 43 are customer-specific controls. AWS has documented these requirements with a detailed workbook, which can be downloaded at CJIS Security Policy Workbook. The AWS CJIS Security Policy Workbook outlines the shared responsibility between AWS and the CJIS customer and how AWS directly supports the requirements within our FedRAMP accreditation. (Note: the CJIS Advisory Policy Board (APB) also has a mapping of CJIS to NIST 800-53 Rev. 4 requirements, which are the base controls for the Federal Risk and Authorization Management Program (FedRAMP), dated 6/1/2016.) This document and our approach have been reviewed by the CJIS APB subcommittee chairmen, partners in the CJIS space, with favorable support on the efficacy of our workbook and approach. CJIS and Relationship to FedRAMP All Federal Agencies, including Criminal Justice Agencies (CJAs), may leverage the AWS package completed as part of the Federal Risk and Authorization Management Program (FedRAMP). FedRAMP is a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud service providers
(CSPs). This approach utilizes a “do once, use many times” model to ensure cloud-based services have adequate information security, eliminate duplication of effort, reduce risk management costs, and accelerate cloud adoption. FedRAMP conforms to the National Institute of Standards and Technology (NIST) 800 Series Publications to verify that all authorizations are compliant with the Federal Information Security Management Act (FISMA). The CJIS Security Policy integrates presidential directives, federal laws, FBI directives, and the criminal justice community’s APB decisions, along with nationally recognized guidance from the National Institute of Standards and Technology (NIST) and the National Crime Prevention and Privacy Compact Council (Compact Council). AWS Shared Responsibility Model AWS offers a variety of different infrastructure and platform services. For the purpose of understanding security and shared responsibility of these AWS services, consider the following three main categories: • Infrastructure • Platform • Software Each category comes with a slightly different security ownership model based on how you interact with and access the functionality. The main focus of this document, the CJIS Security Policy Template document, the CJIS Security Policy Requirements document, and the CJIS Security Policy Workbook is on the Infrastructure services. The other categories are highlighted for awareness and can also be addressed by AWS services, as outlined in the following sections. Service Categories Infrastructure Services This category includes compute services such as Amazon EC2 and related services such as Amazon Elastic Block Store (Amazon EBS), AWS Auto Scaling, and Amazon Virtual Private Cloud (Amazon VPC). With these services, you can architect and build a cloud infrastructure using technologies similar to, and largely compatible with, on-premises solutions. You control the operating system, and you configure and operate any identity management system that provides access to the user layer of the virtualization stack. Platform as a Service Services in this category typically run on separate Amazon EC2 or other infrastructure instances, but sometimes you don’t manage the operating system or the platform layer. AWS provides a managed service for these application “containers.” You are responsible for setting up and managing network controls, such as firewall rules, and for managing platform-level identity and access management separately from Identity and Access Management (IAM). Examples of container services include Amazon Relational Database Service (Amazon RDS), Amazon Elastic MapReduce (Amazon EMR), and AWS Elastic Beanstalk. Software as a Service This category includes high-level storage, database, and messaging services such as Amazon Simple Storage Service (Amazon S3), Amazon Glacier, Amazon DynamoDB, Amazon Simple Queue Service (Amazon SQS), and Amazon Simple Email Service (Amazon SES). These services abstract the platform or management layer on which you can build and operate cloud applications. You access the endpoints of these abstracted services using AWS APIs, and AWS manages the underlying service components or the operating system on which they reside. You share the underlying infrastructure, and abstracted services provide a multi-tenant platform which isolates your data in a secure fashion and provides for powerful integration with IAM. AWS Regions, Availability Zones, and Endpoints AWS has data centers in multiple locations around the world. The recommended region for CJIS workloads is the AWS GovCloud region. Regions are designed with availability in mind and consist of at least two, often more, Availability Zones. Availability Zones are designed for fault isolation. They are connected to multiple Internet Service Providers (ISPs) and different power grids. They are interconnected
using high-speed links, so applications can rely on Local Area Network (LAN) connectivity for communication between Availability Zones within the same region. You are responsible for carefully selecting the Availability Zone(s) where your systems will reside. Systems can span multiple Availability Zones, and we recommend that you design your systems to survive temporary or prolonged failure of an Availability Zone in the case of a disaster. AWS provides web access to services through the AWS Management Console. AWS provides programmatic access to services through Application Programming Interfaces (APIs) and command line interfaces (CLIs). Service endpoints, which are managed by AWS, provide management (“backplane”) access. Security & Compliance OF the Cloud One of the tenets within the CJIS Security Policy is the risk-versus-realism approach of applying risk-based approaches that can be used to mitigate risks: “Every ‘shall’ statement contained within the CJIS Security Policy has been scrutinized for risk versus the reality of resource constraints and real-world application. The purpose of the CJIS Security Policy is to establish the minimum security requirements; therefore, individual agencies are encouraged to implement additional controls to address agency-specific risks. Each agency faces risk unique to that agency. It is quite possible that several agencies could encounter the same type of risk; however, depending on resources, each would mitigate that risk differently. In that light, a risk-based approach can be used when implementing requirements.” — 2.3 Risk Versus Realism In order to manage risk and security within the cloud, a variety of processes and guidelines have been created to differentiate between the security of a cloud service provider and the responsibilities of a customer consuming the cloud services. One of the primary concepts that have emerged is the increased understanding and documentation of
shared, inherited, or dual (AWS & Customer) security controls in a cloud environment. A common question for AWS is: “how does leveraging AWS make my security and compliance activities easier?” This question can be answered by demonstrating the security controls that are met by approaching the AWS Cloud in two distinct ways: first, reviewing compliance of the AWS infrastructure gives an idea of “Security & Compliance OF the cloud”; and second, reviewing the security of workloads running on top of the AWS infrastructure gives an idea of “Security & Compliance IN the cloud.” AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the AWS services operate. Customers running workloads on the AWS infrastructure depend on AWS for a number of security controls. AWS has several additional whitepapers which provide additional information to assist AWS customers with integrating AWS into their existing security frameworks and to help design and execute security assessments of an organization’s use of AWS. For more information, see the AWS Compliance Whitepapers. Security & Compliance IN the Cloud Security & Compliance IN the Cloud refers to how the customer manages the security of their workloads through the use of various applications and architecture (virtual private clouds, security groups, operating systems, databases, authentication, etc.) • Cross-service security controls – are security controls which a customer needs to implement across all services within their AWS customer instance. While each customer’s use of AWS services may vary, along with their own risk posture and security control interpretation, cross-service controls will need to be documented within the customer’s use of AWS services. Example: Multi-factor authentication can be used to help secure Identity and Access Management (IAM) users, groups, and roles
within the customer environment in order to meet CJIS access management, authentication, and authorization requirements for the particular agency or CJIS organization.

• Service-specific security controls – Service-specific security implementations, such as Amazon S3 security access permission settings, logging, event notification, and/or encryption. A customer may need to document service-specific controls within their use of Amazon S3 in order to meet a specific security control objective related to criminal justice data and/or investigative records. Example: Server-Side Encryption (SSE) can be enabled for all objects classified as CJI and/or directory information related to CJIS security.

• Optimized network, operating system (OS), and application controls – Controls a customer may need to document in order to meet specific control elements related to the use of an operating system and/or application deployed within AWS. Example: customer server secure-hardening rules, or an optimized private Amazon Machine Image (AMI), in order to meet specific security controls within change management.

Creating a CJIS Environment on AWS

AWS has several partner solutions that collect, transfer, manage, and share digital evidence (e.g., video and audio files) related to law enforcement interactions. AWS is also working with several partners who are delivering electronic warrant services, as well as other unique CJIS law enforcement applications and services, directly or indirectly to CJIS customers, as illustrated above. [Figure: CJIS Agency/Customer and CJIS Technology]

Similar to other AWS compliance frameworks, the CJIS Security Policy takes advantage of the shared responsibility model between you and AWS. Using a cloud service that aligns to CJIS security requirements doesn't mean that your environment automatically adheres to applicable CJIS requirements. It's up
to you (or your AWS partner/systems integrator) to architect a solution that meets the applicable CJIS requirements outlined in the CJIS Security Policy. One advantage of using AWS for CJIS workloads is that you inherit a significant portion of the security control implementation from AWS and the partner solutions that address and meet CJIS Security Policy elements. You and your partners should enable the applicable security features and functions, and follow leading practices, in order to create a CJIS-compliant environment within your use of AWS. The following section provides a high-level overview of services and tools you and your partners should consider as part of your AWS CJIS implementation.

Auditing and Accountability (Ref. CJIS Policy Area 4)

• AWS CloudTrail – A service that records AWS API calls for your account and delivers log files to you. AWS CloudTrail logs all user activity within your AWS account, so you can see who performed what actions on each of your AWS resources. The AWS API call history produced by AWS CloudTrail enables security analysis, resource change tracking, and compliance auditing. For more information, go here.

• Amazon CloudWatch – A service that monitors AWS Cloud resources and the applications that you run on AWS. You can use Amazon CloudWatch to monitor your AWS resources in near real time, including Amazon EC2 instances, Amazon EBS volumes, Elastic Load Balancing load balancers, and Amazon RDS DB instances. For more information, go here.

• AWS Trusted Advisor – This online resource provides best practices (or checks) in four categories: cost optimization, security, fault tolerance, and performance improvement. For each check, you can review a detailed description of the recommended best practice, a set of alert criteria, guidelines for action, and a list of useful resources on the topic. For more information, go here.

• Amazon SNS – You can use this service to send email or SMS
based notifications to administrative and security staff. Within an AWS account, you can create Amazon SNS topics to which applications and AWS CloudFormation deployments can publish. These push notifications can automatically be sent to individuals or groups within the organization who need to be notified of Amazon CloudWatch alarms, resource deployments, or other activity published by applications to Amazon SNS. For more information, go here.

Identification and Authentication (Ref. CJIS Policy Area 6)

• Access control – IAM is central to securely controlling access to AWS resources. Administrators can create users, groups, and roles with specific access policies to control the actions that users and applications can perform through the AWS Management Console or AWS API. Federation allows IAM roles to be mapped to permissions from central directory services.

• AWS Identity and Access Management (IAM) configuration – Creating user groups and assigning rights, including creating groups for internal auditors, an IAM super user, and application administrative groups segregated by functionality (e.g., database and Unix administrators). For more information, go here.

• AWS Multi-Factor Authentication (MFA) – A simple best practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password (the first factor, what they know) as well as for an authentication code from their AWS MFA device (the second factor, what they have). For more information, go here.

• AWS account password policy settings – Within the IAM console, under account settings, a password policy can be set that supports the password policy requirements outlined in the CJIS Security Policy. For more information, go here.

Configuration Management (Ref. CJIS Policy Area 7)

• Amazon EC2 – A web service that provides resizable compute
capacity in the cloud. It provides you with complete control of your computing resources and lets you run Amazon Machine Images (AMIs). For more information, go here.

• Amazon Machine Image (AMI) – An AMI provides the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need. For more information, go here.

• AMI management – Organizations commonly ensure security and compliance by centrally providing workload owners with pre-built AMIs. These "golden" AMIs can be preconfigured with host-based security software and hardened based on predetermined security guidelines. Workload owners and developers can then use the AMIs as starting images on which to install their own software and configuration, knowing the images are already compliant. For more information, go here.

• Choosing an AMI – While AWS does provide images that can be used for deployment of host operating systems, you need to develop and implement system configuration and hardening standards to align with all applicable CJIS requirements for your operating systems. For more information, go here.

• Amazon EC2 security groups – You can control how accessible your virtual instances in EC2 are by configuring built-in firewall rules (security groups), from totally public to completely private, or somewhere in between. For more information, go here.

• Resource tagging – Almost all AWS resources allow the addition of user-defined tags. These tags are metadata: they are irrelevant to the functionality of the resource, but critical for cost management and access control. When multiple groups of users or multiple workload owners exist within the same AWS account, it is important to restrict access to resources based on tagging. Regardless of account structure, you can use tag-based IAM policies to place extra security restrictions on critical resources. For more information, go here.

• AWS Config – A fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. With the AWS Config service you can immediately discover all of your AWS resources and view the configuration of each. You can receive notifications each time a configuration changes, as well as dig into the configuration history to perform incident analysis. For more information, go here.

• AWS CloudFormation templates – Creating pre-approved AWS CloudFormation templates for common use cases. Using templates allows CJI workload owners to inherit the security implementation of the approved template, thereby limiting their authorization documentation to the features that are unique to their application. Templates can be reused to shorten the time required to approve and deploy new applications. For more information, go here.

• AWS Service Catalog – Allows CJIS IT administrators to create, manage, and distribute portfolios of approved products to end users, who can then access the products they need in a personalized portal. Typical products include servers, databases, websites, or applications that are deployed using AWS resources (for example, an Amazon EC2 instance or an Amazon RDS database). For more information, go here.

Media Protection & Information Integrity (Ref. CJIS Policy Areas 8 & 10)

• AWS Storage Gateway – A service that connects an on-premises software appliance to cloud-based storage, providing seamless and secure integration between your on-premises IT environment and AWS's storage infrastructure. For more information, go here.

• Storage – AWS provides various options for storage of information, including Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), and Amazon Relational Database Service (Amazon RDS), to allow you to make
data easily accessible to your applications or for backup purposes. Before you store sensitive data, you should use CJIS requirements for restricting direct inbound and outbound data to select the correct storage option. For example, Amazon S3 can be configured to encrypt your data at rest with server-side encryption (SSE). In this scenario, Amazon S3 will automatically encrypt your data on write and decrypt your data on retrieval. When Amazon S3 SSE encrypts data at rest, it uses Advanced Encryption Standard (AES) 256-bit symmetric keys. If you choose server-side encryption with Amazon S3, you can use one of the following methods:

o AWS Key Management Service (KMS) – A service that makes it easy for you to create and control the encryption keys used to encrypt your data. AWS KMS uses Hardware Security Modules (HSMs) to protect the security of your keys. For customers who use encryption extensively and require strict control of their keys, AWS KMS provides a convenient management option for creating and administering the keys used to encrypt your data at rest. For more information, go here.

o KMS service integration – AWS KMS seamlessly integrates with Amazon EBS, Amazon S3, Amazon RDS, Amazon Redshift, Amazon Elastic Transcoder, Amazon WorkMail, and Amazon EMR. This integration means that you can use AWS KMS master encryption keys to encrypt the data you store with these services by simply selecting a check box in the AWS Management Console. For more information, go here.

o AWS CloudHSM service – A service that helps you meet corporate, contractual, and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) appliances within the AWS Cloud. AWS CloudHSM supports a variety of use cases and applications, such as database encryption, Digital Rights Management (DRM), and Public Key Infrastructure (PKI), including authentication and authorization, document signing, and transaction processing.
For more information, go here.

System and Communication Protection and Information Integrity (Ref. CJIS Policy Area 10)

• Amazon Virtual Private Cloud (VPC) – You can use VPC to connect existing infrastructure to a set of logically isolated AWS compute resources via a Virtual Private Network (VPN) connection, and to extend existing management capabilities, such as security services, firewalls, and intrusion detection systems, to include virtual resources built on AWS. For more information, go here.

• AWS Direct Connect (DX) – AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. For more information, go here.

• Perfect Forward Secrecy – For even greater communication privacy, several AWS services, such as Elastic Load Balancing and Amazon CloudFront, offer newer, stronger cipher suites. SSL/TLS clients can use these cipher suites to use Perfect Forward Secrecy, a technique that uses session keys that are ephemeral and not stored anywhere. This prevents the decoding of captured data, even if the secret long-term key itself is compromised.

• Protect data in transit – You should implement SSL encryption on your server instances. You will need a certificate from an external certification authority like VeriSign or Entrust. The public key included in the certificate authenticates each session and serves as the basis for creating the shared session key used to encrypt the data. AWS security engineers and solution architects have developed whitepapers and operational checklists to help you select the best options for your needs and recommend security best practices; for example, guidance on securely storing and rotating or changing secret keys and passwords.

Conclusion

There are a few key points to remember in supporting CJIS workloads:

Security is a shared responsibility. Because AWS doesn't manage the customer environment or data, you are responsible for implementing the applicable
CJIS Security Policy requirements in your AWS environment, over and above the AWS implementation of security requirements within the infrastructure.

Encryption of data in transit and at rest is critical. AWS provides several "key" resources to help you achieve this important solution: from Solutions Architect personnel available to assist you, to our Encrypting Data at Rest whitepaper, as well as multiple encryption leading practices, AWS strives to provide the resources you need to implement secure solutions.

AWS directly addresses the relevant CJIS Security Policy requirements applicable to the AWS infrastructure. Because AWS provides a self-provisioned platform that customers wholly manage, AWS isn't directly subject to the CJIS Security Policy. However, we are absolutely committed to maintaining world-class cloud security and compliance programs in support of our customer needs. AWS demonstrates compliance with applicable CJIS requirements as supported by our third-party assessed frameworks (such as FedRAMP), incorporating on-site data center audits by our FedRAMP-accredited 3PAO. In the spirit of a shared responsibility philosophy, the AWS CJIS Requirements Matrix and the CJIS Security Policy Workbook (in a system security plan template) have been developed, aligned to the CJIS Policy Areas. The Workbook is intended to support customers in systematically documenting their implementation of CJIS requirements alongside the AWS approach to each requirement (along with guidance on submitting the document for review and authorization).

AWS provides multiple built-in security features in support of CJIS workloads, such as:

• Secure access using AWS Identity and Access Management (IAM) with multi-factor authentication

• Encrypted data storage with either AWS-provided options or customer-maintained options

• Logging and monitoring with Amazon S3 logging, AWS CloudTrail, Amazon CloudWatch, and AWS Trusted Advisor

•
Centralized, customer-controlled key management with AWS CloudHSM and AWS Key Management Service (KMS)

Further Reading

For additional help, see the following sources:

• AWS Compliance Center: http://aws.amazon.com/compliance
• AWS Security Center: http://aws.amazon.com/security
• AWS Security Resources: http://aws.amazon.com/security/security-resources
• FedRAMP FAQ: http://aws.amazon.com/compliance/fedramp-faqs/
• Risk and Compliance whitepaper: https://d0.awsstatic.com/whitepapers/compliance/AWS_Risk_and_Compliance_Whitepaper.pdf
• Cloud Architecture Best Practices whitepaper: http://media.amazonwebservices.com/AWS_Cloud_Best_Practices.pdf
• AWS Products Overview: http://aws.amazon.com/products/
• AWS Sales and Business Development: https://aws.amazon.com/compliance/public-sector-contact/

Document Revisions

Date | Description
March 2017 | Revised for 5.5; combined CJIS 5.4 Workbook and CJIS whitepaper
July 2015 | First publication",General,consultant,Best Practices
CrossDomain_Solutions_on_AWS,Archived

Cross-Domain Solutions on AWS

December 2016

This paper has been archived. For the latest technical content, see https://docs.aws.amazon.com/whitepapers/latest/cross-domain-solutions/welcome.html

© 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled
by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Introduction 1
What is a Cross-Domain Solution? 1
One-Way Transfer Device 1
Multidomain Data Guard 2
Traditional Deployment 2
How Is a Cross-Domain Solution Different from Other Security Appliances? 3
When is a Cross-Domain Solution Required? 4
Connecting On-Premises Infrastructure 4
Amazon VPC 4
AWS Direct Connect 5
Amazon EC2 5
Amazon S3 5
AWS Advantages for Secure Workloads 6
Cost 6
Elasticity 6
Purpose-Built Infrastructure 6
Auditability 6
Security and Governance 7
Sample Architectures 7
Deploying a CDS via the Internet 7
Deploying a CDS via AWS Direct Connect 8
Deploying a CDS across Multiple Regions 9
Deploying a CDS in a Colocation Environment 11
Conclusion 11
Contributors 12
Further Reading 12
Notes 12

Abstract

Many corporations, government entities, and institutions maintain multiple security domains as part of their information technology (IT) infrastructure. For the purposes of this document, a security domain is an environment with a set of resources accessible only by users or entities who have permitted access to those resources. The resources are likely to include the resource's network fabric, as defined by the security domain's policy. Some organizations' users need to interact with multiple domains simultaneously, or a system or user within one security domain needs to communicate directly with, or obtain data from, a system or user in a separate security domain. For security domains with highly sensitive data, a cross-domain solution (CDS) can be deployed to allow data transfer between security domains while ensuring the integrity of each domain's security perimeter.

Amazon Web Services – Cross-Domain Solutions: On-Premises to AWS

Introduction

To control access across security domains, it's common to employ a specialized hardware solution, such as a cross-domain solution (CDS), to manage and control the interactions
between two security boundaries. When security domains extend across data centers or expand into the cloud, you can encounter additional challenges when including the hardware solution you want in your architecture. You are not limited to any particular vendor solution to deploy a CDS on the AWS Cloud. However, one challenge is that you cannot place your own hardware within an AWS data center; this restriction is part of the AWS commitment to maintain security within AWS data centers. This whitepaper provides best practices for designing hybrid architectures where AWS services are incorporated as one or more security domains within a multidomain environment.

What is a Cross-Domain Solution?

The Committee on National Security Systems (CNSS) defines a CDS as a form of controlled interface that enables manual or automatic access or transfer of information between different security domains. Two types of CDS are discussed in this whitepaper: a one-way transfer (OWT) device and a multidomain data guard.

One-Way Transfer Device

An OWT device allows data to flow in a single direction from one security domain to another. A common implementation of an OWT device uses a fiber optic cable. To ensure data flows only in one direction, the OWT uses a single optical transmitter. The optical transmitter is placed on only one end of the fiber optic cable (e.g., the data producer), and the optical receiver is placed on the opposite end (e.g., the data consumer). OWT devices are often referred to as diodes due to their ability to transfer data only in one direction, similar to the semiconductor of the same name.

Multidomain Data Guard

A multidomain data guard enables bidirectional data flow between security domains. A common implementation of a multidomain data guard is a single server running a trusted, hardened operating system with multiple network interface cards (NICs). Each NIC provides a physical demarcation for a single security domain.
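The guard's release decision, described in the next paragraph, can be pictured as a filter applied to every message before it crosses the boundary. The sketch below is purely illustrative: the deny patterns and size limit are invented for the example, and a real guard is a dedicated, accredited appliance rather than application code.

```python
import re

# Hypothetical rule set for illustration only; a real guard's rules are
# unique to its accredited deployment.
DENY_PATTERNS = [
    re.compile(r"(?i)\bsecret\b"),         # invented dirty-word rule
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-shaped strings
]
MAX_MESSAGE_BYTES = 1024                   # invented size limit


def guard_release(message: str) -> bool:
    """Return True only if the message passes every rule-set check."""
    if len(message.encode("utf-8")) > MAX_MESSAGE_BYTES:
        return False
    return not any(p.search(message) for p in DENY_PATTERNS)


print(guard_release("routine status report"))     # True
print(guard_release("contains SECRET material"))  # False
```

An OWT device, by contrast, removes the return path entirely; rule-based inspection like this is what makes controlled bidirectional flow possible.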
The multidomain data guard inspects all data transmitted between domains to ensure the data remains in compliance with a unique rule set that is specific to the guard's deployment.

Traditional Deployment

Figure 1 shows a traditional cross-domain solution deployment between two security domains. Security Domain "A" is connected to Security Domain "B" using a CDS. If the CDS is an OWT device, resources deployed in Network "A" can communicate with resources deployed in Network "B" by sending data via the CDS. If instead the CDS is a multidomain data guard, resources in either security domain can communicate with the other security domain by sending data via the CDS. In the following example, the CDS is administered, and also physically located, within the protections of Security Domain "B".

Figure 1: Traditional CDS deployment

How Is a Cross-Domain Solution Different from Other Security Appliances?

A CDS differs from other security appliances such as firewalls, web application firewalls (WAFs), and intrusion detection or prevention systems. In addition to providing physical network and logical isolation between domains, cross-domain solutions offer additional security mechanisms, such as virus scanning, auditing and logging, and deep content inspection, in a single solution. In combination, when the CDS is included in a larger security program, these capabilities help prevent both exploitation and data leakage.

When is a Cross-Domain Solution Required?
A business decision to employ a CDS should evaluate the high cost of ownership involved with integration, procurement, and maintenance. Be aware that a high degree of customization is often required for each individual CDS deployment. You would often deploy a CDS due to regulatory or policy requirements, or in situations where a data breach would be catastrophic to your organization. For these reasons, the CDS is an integral component of the architecture and may even be required to achieve an Authority to Operate (ATO) from your organization's security and compliance program. Once an ATO is achieved, it can be cumbersome to make changes to a CDS configuration (e.g., alter the message rule set) without affecting the ATO's approval. If these drawbacks outweigh the additional security provided by a CDS, you should consider other options, such as WAFs.

Connecting On-Premises Infrastructure

AWS provides service offerings to help you connect your existing on-premises infrastructure. The following sections describe some of the key services that AWS offers, including Amazon Virtual Private Cloud (Amazon VPC), AWS Direct Connect, Amazon Elastic Compute Cloud (Amazon EC2), and Amazon Simple Storage Service (Amazon S3).

Amazon VPC

Amazon VPC lets you provision a logically isolated section of your AWS environment so that you can launch resources in a virtual network you define. You have complete control over your virtual networking environment, including the selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. The network configuration for a VPC is easily customized using multiple layers of security, including security groups and network access control lists. The security layers control access to Amazon EC2 instances in each subnet. Additionally, you can create a hardware Virtual Private Network (VPN) connection between your corporate data center and your
VPC and leverage AWS as an extension of your corporate data center.

AWS Direct Connect

Using Direct Connect, you can establish private connectivity between AWS and your data center, office, or colocation environment. Direct Connect enables you to establish a dedicated network connection between your network and one of the Direct Connect locations. Using industry-standard 802.1Q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. This enables you to use the same connection to access public resources, such as objects stored in Amazon S3 using public IP address space, and private resources, such as Amazon EC2 instances running within Amazon VPC using private IP address space, while maintaining network separation between the public and private environments. You can reconfigure virtual interfaces at any time to meet your changing needs.

Amazon EC2

Amazon EC2 is a web service that provides resizable compute capacity in the cloud. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment.

Amazon S3

Amazon S3 provides cost-effective object storage for a wide variety of use cases, including cloud applications, content distribution, backup and archiving, disaster recovery, and big data analytics. Objects stored in Amazon S3 can be protected in transit by using SSL or client-side encryption. Data at rest in Amazon S3 can be protected by using server-side encryption (you request Amazon S3 to encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects) and/or by using client-side encryption (you encrypt data client-side and then upload the data to Amazon S3). Using client-side encryption, you manage the encryption process, the encryption keys, and related tools.

AWS Advantages for Secure Workloads

The AWS Cloud provides several advantages if you want to deploy secure workloads using a CDS.
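As a concrete example of the Amazon S3 server-side encryption option described above, an S3 bucket policy can deny `PutObject` requests that do not carry the server-side encryption header, using the `s3:x-amz-server-side-encryption` condition key. This is a sketch only; the bucket name "example-cds-bucket" is a placeholder, and the statement should be adapted to your own compliance requirements.

```python
import json

# Sketch of a bucket policy that rejects unencrypted uploads.
# "example-cds-bucket" is a hypothetical bucket name for illustration.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-cds-bucket/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            },
        }
    ],
}

# Emit the policy document as JSON, ready to attach to the bucket.
print(json.dumps(bucket_policy, indent=2))
```

The same Deny pattern can enforce SSE with KMS-managed keys by matching `aws:kms` instead of `AES256` in the condition.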
Cost

Pay only for the storage and compute consumed by your workloads. Amazon S3 offers multiple storage classes you can use to control the cost of stored objects based on the frequency of access and availability required at the object level. Eliminate the costs associated with data duplication, data fragmentation, system maintenance, and upgrades. Provision compute resources for specific jobs, and stop paying for the compute resources when the jobs are complete.

Elasticity

Scale as workload volumes increase and decrease, paying only for what you use. Eliminate large capital expenditures by no longer guessing what levels of storage and compute are required for your workloads. Scaling resources is not limited to just meeting demand: workload owners can also leverage the scalability of AWS by scaling up compute resources for time-sensitive jobs.

Purpose-Built Infrastructure

You can tailor AWS purpose-built tools to your requirements and scaling and audit objectives, in addition to supporting real-time verification and reporting through tools such as AWS CloudTrail, AWS Config, and Amazon CloudWatch. These tools are built to help you maximize the protection of your services, data, and applications. This means that, as an AWS customer, you can spend less time on routine security and audit tasks and focus on proactive measures that can continue to enhance the security and audit capabilities of your AWS environment.

Auditability

AWS manages the underlying infrastructure, and you manage the security of anything you deploy in AWS. As a modern platform, AWS enables you to formalize the design of security as well as audit controls through reliable, automated, and verifiable technical and operational processes that are built into every AWS customer account. The cloud simplifies system use for administrators and those running IT, and makes your AWS environment much simpler to audit, as AWS can shift audits
toward a 100 percent verification versus traditional sample testing.

Security and Governance

AWS Compliance enables you to understand the robust controls in place at AWS to maintain security and data protection in the cloud. As systems are built on top of AWS Cloud infrastructure, compliance responsibilities are shared. By tying together governance-focused, audit-friendly service features with applicable compliance or audit standards, AWS Compliance enablers build on traditional programs; this helps you establish and operate in an AWS security control environment. The IT infrastructure that AWS provides is designed and managed in alignment with security best practices and numerous security accreditations.

Sample Architectures

You can set up your CDS in many ways. The following examples describe some of the more common architectures in use.

Deploying a CDS via the Internet

Figure 2 shows two on-premises customer networks that are connected by a CDS using the traditional deployment shown earlier in Figure 1. In this configuration, Security Domain "A" is extended to provide connectivity to an Amazon VPC in the AWS Cloud, while Security Domain "B" exists solely within the customer's data center.

Figure 2: Deploying a CDS via the Internet

The customer is using the Internet as a WAN to connect to the Amazon VPC. A secure IPsec tunnel encapsulates data crossing the Internet between on-premises infrastructure and the customer's VPC. Additional security mechanisms, such as a WAF or an intrusion detection system (IDS), can be deployed within Security Domain "A" for added protection from Internet-facing systems. Because Amazon VPC is an extension of Security Domain "A", Amazon EC2 instances launched within Amazon VPC can communicate with resources in Security Domain "B" via the CDS.

Deploying a CDS via AWS Direct Connect

Figure 3 shows a similar deployment to Figure 2, but Direct Connect is used instead of the
Internet to provide the WAN connectivity for extending Security Domain "A" to Amazon VPC.

Figure 3: Deploying a CDS via Direct Connect

Direct Connect gives you greater control and visibility of the WAN network path required to connect to Amazon VPC. Using Direct Connect also reduces the threat vector posed by the Internet: all data flowing between your data center and AWS Regions does so across your procured communication links.

Deploying a CDS across Multiple Regions

Figure 4 shows two individual security domains connected to two separate AWS Regions. As shown earlier in Figure 3, the security domains are extended by using a combination of Direct Connect and a secure IPsec VPN tunnel. All data flowing between the security domains flows from AWS to the customer's data center first, where it is inspected by the CDS before flowing back to AWS.

Figure 4: Deploying a CDS across multiple regions

You should implement a multi-region deployment when the unique capabilities of an individual AWS Region apply to only a single security domain. For example, an entity might choose to
provision an Amazon Redshift data warehouse in one of the AWS Regions in the European Union (EU) to comply with data locality requirements, while also maintaining a production data processing cluster in a US-based Region to comply with FedRAMP requirements. Even though these two systems are deployed in separate geographic locations to comply with separate compliance programs and regulations, they might still have a requirement to communicate and share an approved subset of data. Deploying a CDS between these two security domains might be an acceptable way to share data while maintaining the integrity of the security domain's boundary.

Deploying a CDS in a Colocation Environment

Figure 5 depicts an additional potential configuration using space at colocation environments. In Figure 5, the CDS is still deployed in a customer-controlled area that is leased from the colocation facility provider. Figure 5 shows a fully off-premises implementation that includes a CDS.

Figure 5: Deploying a CDS in a colocation environment

Conclusion

Organizations with workloads across multiple security domains can leverage all the benefits that AWS services offer by using Direct Connect, VPN, cross-domain hardware, and a colocation facility. Organizations can select the hardware needed to meet their security domain transfer requirements and extend resources that live in other
AWS Regions or on-premises locations. In addition to the ability to connect resources across security domains, AWS offers a wide variety of tools that you and your organization can leverage to meet the security and compliance requirements of workloads hosted within AWS.

Contributors

The following individuals and organizations contributed to this document:

• Andrew Lieberthal, Solutions Architect, AWS Public Sector SalesVar

Further Reading

For additional help, please consult the following sources:

• Amazon VPC Network Connectivity Options4
• AWS Security Best Practices5
• Intro to AWS Security6
• Overview of AWS7

Notes

1 https://aws.amazon.com/cloudtrail/
2 http://aws.amazon.com/config
3 http://aws.amazon.com/cloudwatch
4 http://media.amazonwebservices.com/AWS_Amazon_VPC_Connectivity_Options.pdf
5 http://d0.awsstatic.com/whitepapers/aws-security-best-practices.pdf
6 https://d0.awsstatic.com/whitepapers/Security/Intro_to_AWS_Security.pdf
7 http://d0.awsstatic.com/whitepapers/aws-overview.pdf,General,consultant,Best Practices
CSA_Consensus_Assessments_Initiative_Questionnaire,"CSA Consensus Assessments Initiative Questionnaire (CAIQ)

May 2022

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product
offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2022 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 4
CSA Consensus Assessments Initiative Questionnaire 5
Further Reading 100
Document Revisions 100

Abstract

The CSA Consensus Assessments Initiative Questionnaire provides a set of questions the CSA anticipates a cloud consumer and/or a cloud auditor would ask of a cloud provider. It provides a series of security control and process questions that can then be used for a wide range of purposes, including cloud provider selection and security evaluation. AWS has completed this questionnaire with the answers below. The questionnaire has been completed using the current CSA CAIQ standard v4.0.2 (06.07.2021 update).

Introduction

The Cloud Security Alliance (CSA) is a "not-for-profit organization with a mission to promote the use of best practices for providing security assurance within Cloud Computing, and to provide education on the uses of Cloud Computing to help secure all other forms of computing." For more information, see https://cloudsecurityalliance.org/about/. A wide range of industry security practitioners, corporations, and associations participate in this organization to achieve its mission.

CSA Consensus Assessments Initiative Questionnaire

Each entry below reproduces the columns of the questionnaire: Question ID, Question, CSP CAIQ Answer, SSRM Control Ownership, CSP Implementation Description (Optional/Recommended), CSC Responsibilities (Optional/Recommended), CCM Control ID, CCM Control Specification, CCM Control Title, and CCM Domain Title.

A&A-01.1: Are audit and assurance
policies, procedures, and standards established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
CSP Implementation Description: AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity, and availability of customers' systems and content. Maintaining customer trust and confidence is of the utmost importance to AWS. AWS works to comply with applicable federal, state, and local laws, statutes, ordinances, and regulations concerning security, privacy, and data protection of AWS services in order to minimize the risk of accidental or unauthorized access or disclosure of customer content.
CCM Control A&A-01 (Audit and Assurance Policy and Procedures; Audit & Assurance domain): Establish, document, approve, communicate, apply, evaluate, and maintain audit and assurance policies, procedures, and standards. Review and update the policies and procedures at least annually.

A&A-01.2: Are audit and assurance policies, procedures, and standards reviewed and updated at least annually?
Answer: Yes (CSP-owned)
CSP Implementation Description: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control A&A-01 (Audit and Assurance Policy and Procedures; Audit & Assurance domain): Establish, document, approve, communicate, apply, evaluate, and maintain audit and assurance policies, procedures, and standards. Review and update the policies and procedures at least annually.

A&A-02.1: Are independent audit and assurance assessments conducted according to relevant standards at least annually?
Answer: Yes (CSP-owned)
CSP Implementation Description: AWS has established a formal audit program that includes continual independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment. Internal and external audits are planned and performed according to a documented audit schedule to review the continued performance of AWS against standards-based criteria, such as ISO/IEC 27001, and to identify improvement opportunities. Compliance reports from these assessments are made available to customers, enabling them to evaluate AWS. You can access assessments in AWS Artifact: https://aws.amazon.com/artifact. The AWS Compliance reports identify the scope of AWS services and Regions assessed, as well as the assessor's attestation of compliance. Customers can perform vendor or supplier evaluations by leveraging these reports and certifications.
CCM Control A&A-02 (Independent Assessments; Audit & Assurance domain): Conduct independent audit and assurance assessments according to relevant standards at least annually.

A&A-03.1: Are independent audit and assurance assessments performed according to risk-based plans and policies?
Answer: Yes (CSP-owned)
CSP Implementation Description: AWS internal and external audit and assurance uses risk-based plans and approaches to conduct assessments at least annually. The AWS Compliance program covers sections including, but not limited to, assessment methodology, security assessment and results, and non-conforming controls.
CCM Control A&A-03 (Risk Based Planning Assessment; Audit & Assurance domain): Perform independent audit and assurance assessments according to risk-based plans and policies.

A&A-04.1: Is compliance verified regarding all relevant standards, regulations, legal/contractual, and statutory requirements applicable to the audit?
Answer: Yes (CSP-owned)
CSP Implementation Description: AWS maintains Security, Governance, Risk, and Compliance relationships with internal and external parties to verify and monitor legal, regulatory, and contractual requirements. Should a new security directive be issued, AWS has documented plans in place to implement that directive within designated timeframes.
CCM Control A&A-04 (Requirements Compliance; Audit & Assurance domain): Verify compliance with all relevant standards, regulations, legal/contractual, and statutory requirements applicable to the audit.

A&A-05.1: Is an audit management process defined and implemented to support audit planning, risk analysis, security control assessments, conclusions, remediation schedules, report generation, and reviews of past reports and supporting evidence?
Answer: Yes (CSP-owned)
CSP Implementation Description: Internal and external audits are planned and performed according to the documented audit schedule to review the continued performance of AWS against standards-based criteria and to identify general improvement opportunities. Standards-based criteria include, but are not limited to, ISO/IEC 27001, the Federal Risk and Authorization Management Program (FedRAMP), the American Institute of Certified Public Accountants (AICPA) AT 801 (formerly Statement on Standards for Attestation Engagements [SSAE] 16), and the International Standards for Assurance Engagements No. 3402 (ISAE 3402) professional standards.
CCM Control A&A-05 (Audit Management Process; Audit & Assurance domain): Define and implement an audit management process to support audit planning, risk analysis, security control assessment, conclusion, remediation schedules, report generation, and review of past reports and supporting evidence.

A&A-06.1: Is a risk-based corrective action plan to remediate audit findings established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
CSP Implementation Description: In alignment with ISO 27001, AWS maintains a Risk Management program to mitigate and manage risk. AWS management has a strategic business plan that includes risk identification and the implementation of controls to mitigate or manage risks. AWS management re-evaluates the strategic business plan at least biannually. This process requires management to identify risks within its areas of responsibility and to implement appropriate measures designed to address those risks.
CCM Control A&A-06 (Remediation; Audit & Assurance domain): Establish, document, approve, communicate, apply, evaluate, and maintain a risk-based corrective action plan to remediate audit findings; review and report remediation status to relevant stakeholders.

A&A-06.2: Is the remediation status of audit findings reviewed and reported to relevant stakeholders?
Answer: Yes (CSP-owned)
CSP Implementation Description: AWS has established a formal audit program that includes continual independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment. Internal and external audits are planned and performed according to a documented audit schedule to review the continued performance of AWS against standards-based criteria, such as ISO/IEC 27001, and to identify improvement opportunities. External audits are planned and performed according to a documented audit schedule to review the continued performance of AWS against standards-based criteria and to identify improvement opportunities. Standards-based criteria include, but are not limited to, the Federal Risk and Authorization Management Program (FedRAMP), the American Institute of Certified Public Accountants (AICPA) AT 801 (formerly Statement on Standards for Attestation Engagements [SSAE] 18), the International Standards for Assurance Engagements No. 3402 (ISAE 3402) professional standards, and the Payment Card Industry Data Security Standard (PCI DSS) 3.2.1. Compliance reports from these assessments are made available to customers, enabling them to evaluate AWS. You can access assessments in AWS Artifact: https://aws.amazon.com/artifact. The AWS Compliance reports identify the scope of AWS services and Regions assessed, as well as the assessor's attestation of compliance. Customers can perform vendor or supplier evaluations by leveraging these reports and certifications.
CCM Control A&A-06 (Remediation; Audit & Assurance domain): Establish, document, approve, communicate, apply, evaluate, and maintain a risk-based corrective action plan to remediate audit findings; review and report remediation status to relevant stakeholders.

AIS-01.1: Are application security policies and procedures
established, documented, approved, communicated, applied, evaluated, and maintained to guide appropriate planning, delivery, and support of the organization's application security capabilities?
Answer: Yes (CSP-owned)
CSP Implementation Description: AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity, and availability of customers' systems and content. Maintaining customer trust and confidence is of the utmost importance to AWS. AWS works to comply with applicable federal, state, and local laws, statutes, ordinances, and regulations concerning security, privacy, and data protection of AWS services in order to minimize the risk of accidental or unauthorized access or disclosure of customer content.
CCM Control AIS-01 (Application and Interface Security Policy and Procedures; Application & Interface Security domain): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for application security to provide guidance to the appropriate planning, delivery, and support of the organization's application security capabilities. Review and update the policies and procedures at least annually.

AIS-01.2: Are application security policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned)
CSP Implementation Description: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control AIS-01 (Application and Interface Security Policy and Procedures; Application & Interface Security domain): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for application security to provide guidance to the appropriate planning, delivery, and support of the organization's application security capabilities. Review and update the policies and procedures at least annually.

AIS-02.1: Are baseline requirements to secure different applications established, documented, and maintained?
Answer: Yes (CSP-owned)
CSP Implementation Description: AWS maintains a systematic approach to planning and developing new services for the AWS environment to ensure that quality and security requirements are met with each release. The design of new services, or any significant changes to current services, follows secure software development practices and is controlled through a project management system with multidisciplinary participation. Prior to launch, each of the following requirements must be reviewed:
• Security risk assessment
• Threat modeling
• Security design reviews
• Secure code reviews
• Security testing
• Vulnerability/penetration testing
CCM Control AIS-02 (Application Security Baseline Requirements; Application & Interface Security domain): Establish, document, and maintain baseline requirements for securing different applications.

AIS-03.1: Are technical and operational metrics defined and implemented according to business objectives, security requirements, and compliance obligations?
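The pre-launch review list above can be thought of as a release gate: a launch proceeds only when every required review is complete. The sketch below is a minimal, hypothetical illustration of that idea; the requirement names mirror the whitepaper's list, but the gate logic is not AWS's actual release tooling.

```python
# Illustrative only: a release gate modeled on the pre-launch review
# requirements listed above. Names are taken from the whitepaper's list;
# the gating logic itself is a hypothetical sketch.
REQUIRED_REVIEWS = {
    "security_risk_assessment",
    "threat_modeling",
    "security_design_review",
    "secure_code_review",
    "security_testing",
    "penetration_testing",
}

def launch_gate(completed_reviews):
    """Return (approved, missing): approve launch only when every
    required review has been completed."""
    missing = REQUIRED_REVIEWS - set(completed_reviews)
    return (not missing, sorted(missing))

# A launch with outstanding reviews is rejected, and the gap is reported.
approved, missing = launch_gate({"threat_modeling", "secure_code_review"})
```

In practice such a gate would be backed by a tracking system rather than an in-memory set, but the invariant is the same: the set difference between required and completed reviews must be empty before release.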
Answer: Yes (CSC-owned)
CSP Implementation Description: See the response to Question ID AIS-02.1.
CCM Control AIS-03 (Application Security Metrics; Application & Interface Security domain): Define and implement technical and operational metrics in alignment with business objectives, security requirements, and compliance obligations.

AIS-04.1: Is an SDLC process defined and implemented for application design, development, deployment, and operation per organizationally designed security requirements?
Answer: Yes (CSP-owned)
CSP Implementation Description: See the response to Question ID AIS-02.1.
CCM Control AIS-04 (Secure Application Design and Development; Application & Interface Security domain): Define and implement an SDLC process for application design, development, deployment, and operation in accordance with security requirements defined by the organization.

AIS-05.1: Does the testing strategy outline criteria to accept new information systems, upgrades, and new versions while ensuring application security, compliance adherence, and organizational speed of delivery goals?
Answer: Yes (CSP-owned)
CSP Implementation Description: See the response to Question ID AIS-02.1.
CCM Control AIS-05 (Automated Application Security Testing; Application & Interface Security domain): Implement a testing strategy, including criteria for acceptance of new information systems, upgrades, and new versions, which provides application security assurance and maintains compliance while enabling organizational speed of delivery goals. Automate when applicable and possible.

AIS-05.2: Is testing automated when applicable and possible?
Answer: Yes (CSP-owned)
CSP Implementation Description: Where appropriate, a continuous deployment methodology is used to ensure changes are automatically built, tested, and pushed to production, with the goal of eliminating as many manual steps as possible. Continuous deployment seeks to eliminate the manual nature of this process and automate each step, allowing service teams to standardize the process and increase the efficiency with which they deploy code. In continuous deployment, an entire release process is a "pipeline" containing "stages".
CCM Control AIS-05 (Automated Application Security Testing; Application & Interface Security domain): Implement a testing strategy, including criteria for acceptance of new information systems, upgrades, and new versions, which provides application security assurance and maintains compliance while enabling organizational speed of delivery goals. Automate when applicable and possible.

AIS-06.1: Are strategies and capabilities established and implemented to deploy application code in a secure, standardized, and compliant manner?
Answer: Yes (CSP-owned)
CSP Implementation Description: Where appropriate, a continuous deployment methodology is used to ensure changes are automatically built, tested, and pushed to production, with the goal of eliminating as many manual steps as possible. Continuous deployment seeks to eliminate the manual nature of this process and automate each step, allowing service teams to standardize the process and increase the efficiency with which they deploy code. In continuous deployment, an entire release process is a "pipeline" containing "stages".
CCM Control AIS-06 (Automated Secure Application Deployment; Application & Interface Security domain): Establish and implement strategies and capabilities for secure, standardized, and compliant application deployment. Automate where possible.

AIS-06.2: Is the deployment and integration of application code automated where possible?
Answer: Yes (CSP-owned)
CSP Implementation Description: Automated code analysis tools are run as a part of the AWS Software Development Lifecycle, and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts. Our security risk assessment reviews begin during the design phase, and the engagement lasts through launch to ongoing operations. Refer to the AWS Overview of Security Processes whitepaper for further details: https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Whitepaper.pdf
CCM Control AIS-06 (Automated Secure Application Deployment; Application & Interface Security domain): Establish and implement strategies and capabilities for secure, standardized, and compliant application deployment. Automate where possible.

AIS-07.1: Are application security vulnerabilities remediated following defined processes?
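The "pipeline containing stages" idea described above can be sketched in a few lines: each change passes through ordered, automated checks, and a failing stage stops promotion to production. This is a hypothetical illustration of the concept, not AWS's deployment tooling; the stage names and check functions are invented for the example.

```python
# Hypothetical sketch of a continuous-deployment pipeline: ordered stages,
# each an automated check; a failure at any stage halts the release.
def run_pipeline(change, stages):
    """Run each (name, check) stage in order. Return the name of the
    first failing stage, or None if the change reaches production."""
    for name, check in stages:
        if not check(change):
            return name
    return None

# Illustrative stages: 'change' is a dict describing the candidate release.
stages = [
    ("build", lambda c: c.get("compiles", False)),
    ("test", lambda c: c.get("tests_pass", False)),
    ("deploy", lambda c: True),  # promotion itself, gated by the stages above
]

assert run_pipeline({"compiles": True, "tests_pass": True}, stages) is None
assert run_pipeline({"compiles": True, "tests_pass": False}, stages) == "test"
```

The design point the questionnaire answer makes is that every stage is automated: standardizing the checks in a pipeline removes manual steps and makes each deployment repeatable.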
Answer: Yes (CSP-owned)
CSP Implementation Description: Static code analysis tools are run as a part of the standard build process, and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts. Our security risk assessment reviews begin during the design phase, and the engagement lasts through launch to ongoing operations. Refer to the Best Practices for Security, Identity, & Compliance website for further details: https://aws.amazon.com/architecture/security-identity-compliance/
CCM Control AIS-07 (Application Vulnerability Remediation; Application & Interface Security domain): Define and implement a process to remediate application security vulnerabilities, automating remediation when possible.

AIS-07.2: Is the remediation of application security vulnerabilities automated when possible?
Answer: Yes (CSP-owned)
CSP Implementation Description: Automated code analysis tools are run as a part of the AWS Software Development Lifecycle, and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts. Our security risk assessment reviews begin during the design phase, and the engagement lasts through launch to ongoing operations. Refer to the Best Practices for Security, Identity, & Compliance website for further details: https://aws.amazon.com/architecture/security-identity-compliance/
CCM Control AIS-07 (Application Vulnerability Remediation; Application & Interface Security domain): Define and implement a process to remediate application security vulnerabilities, automating remediation when possible.

BCR-01.1: Are business continuity management and operational resilience
policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
CSP Implementation Description: The AWS business continuity policy is designed to ensure minimum outage time and maximum effectiveness of the recovery and reconstitution efforts, which include the Activation and Notification, Recovery, and Reconstitution phases. AWS business continuity mechanisms are designed to ensure minimum outage time and maximum effectiveness of the recovery and reconstitution efforts. AWS resiliency encompasses the processes and procedures to identify, respond to, and recover from a major event or incident within our environment.
CCM Control BCR-01 (Business Continuity Management Policy and Procedures; Business Continuity Management and Operational Resilience domain): Establish, document, approve, communicate, apply, evaluate, and maintain business continuity management and operational resilience policies and procedures. Review and update the policies and procedures at least annually.

BCR-01.2: Are the policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned)
CSP Implementation Description: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control BCR-01 (Business Continuity Management Policy and Procedures; Business Continuity Management and Operational Resilience domain): Establish, document, approve, communicate, apply, evaluate, and maintain business continuity management and operational resilience policies and procedures. Review and update the policies and procedures at least annually.

BCR-02.1: Are criteria for developing business continuity and operational resiliency strategies and capabilities established based on business disruption and risk impacts?
Answer: Yes (Shared CSP and CSC)
CSP Implementation Description: AWS Business Continuity Policies and Plans have been developed and tested in alignment with ISO 27001 standards. Refer to the ISO 27001 standard, Annex A, domain 17 for further details on AWS and business continuity.
CCM Control BCR-02 (Risk Assessment and Impact Analysis; Business Continuity Management and Operational Resilience domain): Determine the impact of business disruptions and risks to establish criteria for developing business continuity and operational resilience strategies and capabilities.

BCR-03.1: Are strategies developed to reduce the impact of, withstand, and recover from business disruptions in accordance with risk appetite?
Answer: Yes (Shared CSP and CSC)
CSP Implementation Description: AWS Business Continuity Policies and Plans have been developed and tested in alignment with ISO 27001 standards. Refer to the ISO 27001 standard, Annex A, domain 17 for further details on AWS and business continuity.
CCM Control BCR-03 (Business Continuity Strategy; Business Continuity Management and Operational Resilience domain): Establish strategies to reduce the impact of, withstand, and recover from business disruptions within risk appetite.

BCR-04.1: Are operational resilience strategies and capability results incorporated to establish, document, approve, communicate, apply, evaluate, and maintain a business continuity plan?
Answer: Yes (Shared CSP and CSC)
CSP Implementation Description: AWS Business Continuity Policies and Plans have been developed and tested in alignment with ISO 27001 standards. Refer to the ISO 27001 standard, Annex A, domain 17 for further details on AWS and business continuity.
CCM Control BCR-04 (Business Continuity Planning; Business Continuity Management and Operational Resilience domain): Establish, document, approve, communicate, apply, evaluate, and maintain a business continuity plan based on the results of the operational resilience strategies and capabilities.

BCR-05.1: Is relevant documentation developed, identified, and acquired to support business continuity and operational resilience plans?
Answer: Yes (CSP-owned)
CSP Implementation Description: The AWS business continuity plan details the three-phased approach that AWS has developed to recover and reconstitute the AWS infrastructure:
• Activation and Notification Phase
• Recovery Phase
• Reconstitution Phase
This approach ensures that AWS performs system recovery and reconstitution efforts in a methodical sequence, maximizing the effectiveness of the recovery and reconstitution efforts and minimizing system outage time due to errors and omissions.
CCM Control BCR-05 (Documentation; Business Continuity Management and Operational Resilience domain): Develop, identify, and acquire documentation that is relevant to support the business continuity and operational resilience programs. Make the documentation available to authorized stakeholders and review periodically.

BCR-05.2: Is business continuity and operational resilience documentation available to authorized stakeholders?
Answer: Yes (CSP-owned)
CSP Implementation Description: Information system documentation is made available internally to AWS personnel through the use of Amazon's intranet site. Refer to ISO 27001, Annex A, domain 12.
CCM Control BCR-05 (Documentation; Business Continuity Management and Operational Resilience domain): Develop, identify, and acquire documentation that is relevant to support the business continuity and operational resilience programs. Make the documentation available to authorized stakeholders and review periodically.

BCR-05.3: Is business continuity and operational resilience documentation reviewed periodically?
Answer: Yes (CSP-owned)
CSP Implementation Description: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control BCR-05 (Documentation; Business Continuity Management and Operational Resilience domain): Develop, identify, and acquire documentation that is relevant to support the business continuity and operational resilience programs. Make the documentation available to authorized stakeholders and review periodically.

BCR-06.1: Are the business continuity and operational resilience plans exercised and tested at least annually and when significant changes occur?
Answer: Yes (CSP-owned)
CSP Implementation Description: AWS Business Continuity Policies and Plans have been developed and tested at least annually in alignment with ISO 27001 standards. Refer to the ISO 27001 standard, Annex A, domain 17 for further details on AWS and business continuity.
CCM Control BCR-06 (Business Continuity Exercises; Business Continuity Management and Operational Resilience domain): Exercise and test business continuity and operational resilience plans at least annually or upon significant changes.

BCR-07.1: Do business continuity and resilience procedures establish communication with stakeholders and participants?
Answer: Yes (CSP-owned)
CSP Implementation Description: The AWS Business Continuity policy provides a complete discussion of AWS services, roles and responsibilities, and AWS processes for managing an outage, from detection to deactivation. AWS service teams create administrator documentation for their services and store the documents in internal AWS document repositories. Using these documents, teams provide initial training to new team members that covers their job duties, on-call responsibilities, and service-specific monitoring, metrics, and alarms, along with the intricacies of the service they are supporting. Once trained, service team members can assume on-call duties and be paged into an engagement as a resolver. In addition to the documentation stored in the repository, AWS also uses GameDay exercises to train coordinators and service teams in their roles and responsibilities.
CCM Control BCR-07 (Communication; Business Continuity Management and Operational Resilience domain): Establish communication with stakeholders and participants in the course of business continuity and resilience procedures.

BCR-08.1: Is cloud data periodically backed up?
Answer: Yes (shared, CSP and CSC).
CSP implementation: AWS maintains a retention policy applicable to AWS internal data and system components in order to continue operations of AWS business and services. Critical AWS system components, including audit evidence and logging records, are replicated across multiple Availability Zones, and backups are maintained and monitored.
CSC responsibilities: Customers retain control and ownership of their content. When customers store content in a specific Region, it is not replicated outside that Region; it is the customer's responsibility to replicate content across Regions if business needs require that. Backup and retention policies are the responsibility of the customer. AWS offers best-practice resources to customers, including guidance and alignment to the Well-Architected Framework. Snapshots are AWS objects to which IAM users, groups, and roles can be assigned permissions, so that only authorized users can access Amazon backups. AWS Backup allows customers to centrally manage and automate backups across AWS services; the service enables customers to centralize and automate data protection across AWS services. For additional details, refer to https://aws.amazon.com/backup
CCM control BCR-08, "Backup" (Business Continuity Management and Operational Resilience): Periodically backup data stored in the cloud. Ensure the confidentiality, integrity, and availability of the backup, and verify data restoration from backup for resiliency.

BCR-08.2: Is the confidentiality, integrity, and availability of backup data ensured?
Answer: Yes (shared, CSP and CSC).
CSP implementation: See response to Question ID BCR-08.1.
CCM control BCR-08, "Backup" (Business Continuity Management and Operational Resilience): Periodically backup data stored in the cloud. Ensure the confidentiality, integrity, and availability of the backup, and verify data restoration from backup for resiliency.

BCR-08.3: Can backups be restored appropriately for resiliency?
Answer: Yes (CSC-owned).
CSC responsibilities: AWS Backup allows customers to centrally manage and automate backups across AWS services. For additional details, refer to https://aws.amazon.com/backup
CCM control BCR-08, "Backup" (Business Continuity Management and Operational Resilience): Periodically backup data stored in the cloud. Ensure the confidentiality, integrity, and availability of the backup, and verify data restoration from backup for resiliency.

BCR-09.1: Is a disaster response plan established, documented, approved, applied, evaluated, and maintained to ensure recovery from natural and man-made disasters?
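The BCR-08 answers above note that AWS Backup lets customers centrally manage and automate backups. As an illustration only, the sketch below builds a backup-plan document in the general shape accepted by the AWS Backup CreateBackupPlan API and sanity-checks it locally; the plan name, vault name, schedule, and retention values are hypothetical, and no AWS call is made.

```python
# Sketch: build and sanity-check an AWS Backup plan document.
# The plan shape follows the AWS Backup CreateBackupPlan request;
# all names, schedules, and retention values here are hypothetical.

def make_backup_plan(vault: str, schedule: str, retention_days: int) -> dict:
    """Return a one-rule backup-plan document after validating retention."""
    if retention_days < 1:
        raise ValueError("retention must be at least one day")
    return {
        "BackupPlanName": "example-daily-plan",  # hypothetical name
        "Rules": [
            {
                "RuleName": "daily",
                "TargetBackupVaultName": vault,
                "ScheduleExpression": schedule,  # cron syntax used by AWS Backup
                "Lifecycle": {"DeleteAfterDays": retention_days},
            }
        ],
    }

plan = make_backup_plan("example-vault", "cron(0 5 * * ? *)", 35)
print(plan["Rules"][0]["Lifecycle"]["DeleteAfterDays"])  # 35
```

In a real account, the resulting document would be passed to the AWS Backup service (for example via an SDK) rather than printed; the local validation step mirrors the "verify" language in BCR-08.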
Answer: Yes (shared, CSP and CSC).
CSP implementation: The AWS business continuity policy is designed to ensure minimum outage time and maximum effectiveness of the recovery and reconstitution efforts, which include:
• Activation and Notification
• Recovery
• Reconstitution
AWS business continuity mechanisms are designed to ensure minimum outage time and maximum effectiveness of the recovery and reconstitution efforts. AWS resiliency encompasses the processes and procedures to identify, respond to, and recover from a major event or incident within our environment. AWS maintains a ubiquitous security control environment across its infrastructure. Each data center is built to physical, environmental, and security standards in an active-active configuration, employing an N+1 redundancy model to ensure system availability in the event of component failure. Components (N) have at least one independent backup component (+1), so the backup component is active in the operation even if all other components are fully functional. In order to eliminate single points of failure, this model is applied throughout AWS, including network and data center implementation. Data centers are online and serving traffic; no data center is "cold." In case of failure, there is sufficient capacity to enable traffic to be load-balanced to the remaining sites. AWS provides customers with the capability to implement a robust continuity plan, including the utilization of frequent server instance backups, data redundancy replication, and the flexibility to place instances and store data within multiple geographic Regions as well as across multiple Availability Zones within each Region.
CSC responsibilities: Customers are responsible for properly implementing contingency planning, training, and testing for their systems hosted on AWS.
CCM control BCR-09, "Disaster Response Plan" (Business Continuity Management and Operational Resilience): Establish, document, approve, communicate, apply, evaluate, and maintain a disaster response plan to recover from natural and man-made disasters. Update the plan at least annually or upon significant changes.

BCR-09.2: Is the disaster response plan updated at least annually and when significant changes occur?
Answer: Yes (CSP-owned).
CSP implementation: Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM control BCR-09, "Disaster Response Plan" (Business Continuity Management and Operational Resilience): Establish, document, approve, communicate, apply, evaluate, and maintain a disaster response plan to recover from natural and man-made disasters. Update the plan at least annually or upon significant changes.

BCR-10.1: Is the disaster response plan exercised annually or when significant changes occur?
Answer: Yes (CSP-owned).
CSP implementation: AWS tests business continuity at least annually to ensure the effectiveness of the associated procedures and the organization's readiness. Testing consists of GameDay exercises that execute the activities that would be performed in an actual outage. AWS documents the results, including lessons learned and any corrective actions that were completed.
CCM control BCR-10, "Response Plan Exercise" (Business Continuity Management and Operational Resilience): Exercise the disaster response plan annually or upon significant changes, including, if possible, local emergency authorities.

BCR-10.2: Are local emergency authorities included, if possible, in the exercise?
Answer: No (CSP-owned).
CCM control BCR-10, "Response Plan Exercise" (Business Continuity Management and Operational Resilience): Exercise the disaster response plan annually or upon significant changes, including, if possible, local emergency authorities.

BCR-11.1: Is business-critical equipment supplemented with redundant equipment independently located at a reasonable minimum distance in accordance with applicable industry standards?
Answer: Yes (CSP-owned).
CSP implementation: AWS maintains a ubiquitous security control environment across its infrastructure. Each data center is built to physical, environmental, and security standards in an active-active configuration, employing an N+1 redundancy model to ensure system availability in the event of component failure. Components (N) have at least one independent backup component (+1), so the backup component is active in the operation even if all other components are fully functional. In order to eliminate single points of failure, this model is applied throughout AWS, including network and data center implementation. Data centers are online and serving traffic; no data center is "cold." In case of failure, there is sufficient capacity to enable traffic to be load-balanced to the remaining sites.
CCM control BCR-11, "Equipment Redundancy" (Business Continuity Management and Operational Resilience): Supplement business-critical equipment with redundant equipment independently located at a reasonable minimum distance in accordance with applicable industry standards.

CCC-01.1: Are risk management policies and procedures associated with changing organizational assets, including applications, systems, infrastructure, configuration, etc., established, documented, approved, communicated, applied, evaluated, and maintained (regardless of whether asset management is internal or external)?
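The N+1 redundancy model described above reduces to a simple capacity check: with the backup component active alongside the N it protects, the system must still serve peak load after any single component fails. A minimal sketch, with hypothetical capacity numbers:

```python
# Sketch: verify an N+1 deployment survives any single component failure.
# Capacities and load are hypothetical illustration values.

def survives_single_failure(capacities: list, peak_load: float) -> bool:
    """True if peak load still fits after removing the largest component."""
    if len(capacities) < 2:
        return False  # N+1 requires at least one independent backup component
    return sum(capacities) - max(capacities) >= peak_load

# Four sites at 30 units each (120 total) serving a 90-unit peak:
print(survives_single_failure([30, 30, 30, 30], 90))  # True: 90 units remain
print(survives_single_failure([30, 30, 30], 90))      # False: only 60 remain
```

Removing the largest component is the conservative worst case; real capacity planning would also account for failover time and correlated failures.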
Answer: Yes (CSP-owned).
CSP implementation: AWS applies a systematic approach to managing change to ensure that all changes to a production environment are reviewed, tested, and approved. The AWS change management approach requires that the following steps be completed before a change is deployed to the production environment:
1. Document and communicate the change via the appropriate AWS change management tool.
2. Plan the implementation of the change, and rollback procedures, to minimize disruption.
3. Test the change in a logically segregated, non-production environment.
4. Complete a peer review of the change with a focus on business impact and technical rigor. The review should include a code review.
5. Attain approval for the change by an authorized individual.
Where appropriate, a continuous deployment methodology is used to ensure changes are automatically built, tested, and pushed to production, with the goal of eliminating as many manual steps as possible. Continuous deployment seeks to eliminate the manual nature of this process and automate each step, allowing service teams to standardize the process and increase the efficiency with which they deploy code. In continuous deployment, an entire release process is a "pipeline" containing "stages."
CCM control CCC-01, "Change Management Policy and Procedures" (Change Control and Configuration Management): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for managing the risks associated with applying changes to organization assets, including application, systems, infrastructure, configuration, etc., regardless of whether the assets are managed internally or externally (i.e., outsourced). Review and update the policies and procedures at least annually.

CCC-01.2: Are the policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned).
CSP implementation: Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM control CCC-01, "Change Management Policy and Procedures" (Change Control and Configuration Management): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for managing the risks associated with applying changes to organization assets, including application, systems, infrastructure, configuration, etc., regardless of whether the assets are managed internally or externally (i.e., outsourced). Review and update the policies and procedures at least annually.

CCC-02.1: Is a defined quality change control, approval, and testing process (with established baselines, testing, and release standards) followed?
Answer: Yes (CSP-owned).
CSP implementation: See response to Question ID CCC-01.1.
CCM control CCC-02, "Quality Testing" (Change Control and Configuration Management): Follow a defined quality change control, approval, and testing process with established baselines, testing, and release standards.

CCC-03.1: Are risks associated with changing organizational assets (including applications, systems, infrastructure, configuration, etc.) managed, regardless of whether asset management occurs internally or externally (i.e., outsourced)?
Answer: Yes (CSP-owned).
CSP implementation: See response to Question ID CCC-01.1.
CCM control CCC-03, "Change Management Technology" (Change Control and Configuration Management): Manage the risks associated with applying changes to organization assets, including application, systems, infrastructure, configuration, etc., regardless of whether the assets are managed internally or externally (i.e., outsourced).

CCC-04.1: Is the unauthorized addition, removal, update, and management of organization assets restricted?
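The five pre-deployment steps in the CCC-01.1 answer above amount to a set of gates a change must pass before it reaches production. A toy sketch of that gating logic (the record fields and function names are invented for illustration, not an AWS tool):

```python
# Sketch: gate a change on the five pre-deployment steps described above.
# The Change record and the gate names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Change:
    documented: bool        # 1. documented and communicated
    rollback_planned: bool  # 2. implementation and rollback planned
    tested_nonprod: bool    # 3. tested in a non-production environment
    peer_reviewed: bool     # 4. peer review, including code review
    approved: bool          # 5. approved by an authorized individual

def deployable(change: Change) -> bool:
    """A change may reach production only if every gate has passed."""
    return all((change.documented, change.rollback_planned,
                change.tested_nonprod, change.peer_reviewed, change.approved))

ready = Change(True, True, True, True, True)
unreviewed = Change(True, True, True, False, True)
print(deployable(ready))       # True
print(deployable(unreviewed))  # False
```

A continuous-deployment pipeline automates exactly this kind of check: each "stage" either produces evidence that a gate passed or halts the release.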
Answer: Yes (CSP-owned).
CSP implementation: Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. Physical access points to server locations are recorded by closed-circuit television cameras (CCTV), as defined in the AWS Data Center Physical Security Policy.
CCM control CCC-04, "Unauthorized Change Protection" (Change Control and Configuration Management): Restrict the unauthorized addition, removal, update, and management of organization assets.

CCC-05.1: Are provisions to limit changes that directly impact CSC-owned environments, and to require tenants to authorize requests explicitly, included within the service level agreements (SLAs) between CSPs and CSCs?
Answer: No (CSP-owned).
CSP implementation: AWS notifies customers of changes to the AWS service offering in accordance with the commitment set forth in the AWS Customer Agreement. AWS continuously evolves and improves our existing services and frequently adds new services. Our services are controlled using APIs. If we change or discontinue any API used to make calls to the services, we will continue to offer the existing API for 12 months. Additionally, AWS maintains a public Service Health Dashboard to provide customers with the real-time operational status of our services, at http://status.aws.amazon.com/
CCM control CCC-05, "Change Agreements" (Change Control and Configuration Management): Include provisions limiting changes directly impacting CSC-owned environments/tenants to explicitly authorized requests within service level agreements between CSPs and CSCs.

CCC-06.1: Are change management baselines established for all relevant authorized changes on organizational assets?
Answer: Yes (CSP-owned).
CSP implementation: See response to Question ID CCC-01.1.
CCM control CCC-06, "Change Management Baseline" (Change Control and Configuration Management): Establish change management baselines for all relevant authorized changes on organization assets.

CCC-07.1: Are detection measures implemented with proactive notification if changes deviate from established baselines?
Answer: Yes (CSP-owned).
CSP implementation: See response to Question ID CCC-08.1.
CCM control CCC-07, "Detection of Baseline Deviation" (Change Control and Configuration Management): Implement detection measures with proactive notification in case of changes deviating from the established baseline.

CCC-08.1: Is a procedure implemented to manage exceptions, including emergencies, in the change and configuration process?
Answer: Yes (CSP-owned).
CSP implementation: Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM control CCC-08, "Exception Management" (Change Control and Configuration Management): Implement a procedure for the management of exceptions, including emergencies, in the change and configuration process. Align the procedure with the requirements of GRC-04: Policy Exception Process.

CCC-08.2: Is the procedure aligned with the requirements of the GRC-04: Policy Exception Process?
Answer: Yes (CSP-owned).
CSP implementation: See response to Question ID CCC-08.1.
CCM control CCC-08, "Exception Management" (Change Control and Configuration Management): Implement a procedure for the management of exceptions, including emergencies, in the change and configuration process. Align the procedure with the requirements of GRC-04: Policy Exception Process.

CCC-09.1: Is a process to proactively roll back changes to a previously known "good state" defined and implemented in case of errors or security concerns?
Answer: Yes (CSP-owned).
CSP implementation: See response to Question ID CCC-01.1.
CCM control CCC-09, "Change Restoration" (Change Control and Configuration Management): Define and implement a process to proactively roll back changes to a previous known good state in case of errors or security concerns.

CEK-01.1: Are cryptography, encryption, and key management policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (shared, CSP and CSC).
CSP implementation: Internally, AWS establishes and manages cryptographic keys for required cryptography employed within the AWS infrastructure. AWS produces, controls, and distributes symmetric cryptographic keys using NIST-approved key management technology and processes in the AWS information system. An AWS-developed secure key and credential manager is used to create, protect, and distribute symmetric keys, AWS credentials needed on hosts, RSA public/private keys, and X.509 certificates.
CSC responsibilities: AWS customers are responsible for managing encryption keys within their AWS environments. Customers can leverage AWS services such as AWS KMS and AWS CloudHSM to manage the lifecycle of their keys according to internal policy requirements. See the following: AWS KMS, https://aws.amazon.com/kms/; AWS CloudHSM, https://aws.amazon.com/cloudhsm/
CCM control CEK-01, "Encryption and Key Management Policy and Procedures" (Cryptography, Encryption & Key Management): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for cryptography, encryption, and key management. Review and update the policies and procedures at least annually.

CEK-01.2: Are cryptography, encryption, and key management policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned).
CSP implementation: Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM control CEK-01, "Encryption and Key Management Policy and Procedures" (Cryptography, Encryption & Key Management): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for cryptography, encryption, and key management. Review and update the policies and procedures at least annually.

CEK-02.1: Are cryptography, encryption, and key management roles and responsibilities defined and implemented?
Answer: Yes (CSC-owned).
CSC responsibilities: See response to CEK-01.1.
CCM control CEK-02, "CEK Roles and Responsibilities" (Cryptography, Encryption & Key Management): Define and implement cryptographic, encryption, and key management roles and responsibilities.

CEK-03.1: Are data at-rest and in-transit cryptographically protected using cryptographic libraries certified to approved standards?
Answer: NA (CSC-owned).
CSC responsibilities: AWS allows customers to use their own encryption mechanisms (for storage and in-transit) for nearly all the services, including S3, EBS, and EC2. IPsec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. Refer to the AWS Overview of Security Processes whitepaper for additional details, available at http://aws.amazon.com/security/security-learning/
CCM control CEK-03, "Data Encryption" (Cryptography, Encryption & Key Management): Provide cryptographic protection to data at-rest and in-transit, using cryptographic libraries certified to approved standards.

CEK-04.1: Are appropriate data protection encryption algorithms used that consider data classification, associated risks, and encryption technology usability?
Answer: NA (CSC-owned).
CSC responsibilities: This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, and where it is stored, used, and protected from disclosure.
CCM control CEK-04, "Encryption Algorithm" (Cryptography, Encryption & Key Management): Use encryption algorithms that are appropriate for data protection, considering the classification of data, associated risks, and usability of the encryption technology.

CEK-05.1: Are standard change management procedures established to review, approve, implement, and communicate cryptography, encryption, and key management technology changes that accommodate internal and external sources?
Answer: Yes (shared, CSP and CSC).
CSP implementation: See response to CEK-01.1.
CSC responsibilities: AWS customers are responsible for managing encryption keys within their AWS environments according to their internal policy requirements.
CCM control CEK-05, "Encryption Change Management" (Cryptography, Encryption & Key Management): Establish a standard change management procedure to accommodate changes from internal and external sources, for review, approval, implementation, and communication of cryptographic, encryption, and key management technology changes.

CEK-06.1: Are changes to cryptography, encryption, and key management-related systems, policies, and procedures managed and adopted in a manner that fully accounts for downstream effects of proposed changes, including residual risk, cost, and benefits analysis?
Answer: Yes (shared, CSP and CSC).
CSP implementation: See response to CEK-01.1.
CSC responsibilities: AWS allows customers to use their own encryption mechanisms for nearly all the services, including S3, EBS, and EC2. IPsec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. Refer to the AWS Overview of Security Processes whitepaper for additional details, available at http://aws.amazon.com/security/security-learning/
CCM control CEK-06, "Encryption Change Cost Benefit Analysis" (Cryptography, Encryption & Key Management): Manage and adopt changes to cryptography-, encryption-, and key management-related systems (including policies and procedures) that fully account for downstream effects of proposed changes, including residual risk, cost, and benefits analysis.

CEK-07.1: Is a cryptography, encryption, and key management risk program established and maintained that includes risk assessment, risk treatment, risk context, monitoring, and feedback provisions?
Answer: Yes (CSP-owned).
CSP implementation: AWS has established an information security management program with designated roles and responsibilities that are appropriately aligned within the organization. AWS management reviews and evaluates the risks identified in the risk management program at least annually. The risk management program encompasses the following phases:
• Discovery: the discovery phase includes listing out risks (threats and vulnerabilities) that exist in the environment. This phase provides a basis for all other risk management activities.
• Research: the research phase considers the potential impact(s) of identified risks to the business and their likelihood of occurrence, and includes an evaluation of internal control effectiveness.
• Evaluate: the evaluate phase includes ensuring that controls, processes, and other physical and virtual safeguards are in place to prevent and detect identified and assessed risks.
• Resolve: the resolve phase results in risk reports that provide managers with the data they need to make effective business decisions and to comply with internal policies and applicable regulations.
• Monitor: the monitor phase includes performing monitoring activities to evaluate whether processes, initiatives, functions, and/or activities are mitigating the risk as designed.
CCM control CEK-07, "Encryption Risk Management" (Cryptography, Encryption & Key Management): Establish and maintain an encryption and key management risk program that includes provisions for risk assessment, risk treatment, risk context, monitoring, and feedback.

CEK-08.1: Are CSPs providing CSCs with the capacity to manage their own data encryption keys?
Answer: Yes (CSC-owned).
CSP implementation: AWS allows customers to use their own encryption mechanisms for nearly all the services, including S3, EBS, and EC2. IPsec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. In addition, refer to the AWS Cloud Security whitepaper for additional details, available at http://aws.amazon.com/security
CCM control CEK-08, "CSC Key Management Capability" (Cryptography, Encryption & Key Management): CSPs must provide the capability for CSCs to manage their own data encryption keys.

CEK-09.1: Are encryption and key management systems, policies, and processes audited with a frequency proportional to the system's risk exposure, and after any security event?
Answer: Yes (CSP-owned).
CSP implementation: AWS has established a formal periodic audit program that includes continual independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment.
CCM control CEK-09, "Encryption and Key Management Audit" (Cryptography, Encryption & Key Management): Audit encryption and key management systems, policies, and processes with a frequency that is proportional to the risk exposure of the system, with audit occurring preferably continuously but at least annually, and after any security event(s).

CEK-09.2: Are encryption and key management systems, policies, and processes audited (preferably continuously but at least annually)?
Answer: Yes (CSP-owned).
CSP implementation: AWS has established a formal periodic audit program that includes continual independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment.
CCM control CEK-09, "Encryption and Key Management Audit" (Cryptography, Encryption & Key Management): Audit encryption and key management systems, policies, and processes with a frequency that is proportional to the risk exposure of the system, with audit occurring preferably continuously but at least annually, and after any security event(s).

CEK-10.1: Are cryptographic keys generated using industry-accepted and approved cryptographic libraries that specify algorithm strength and random number generator specifications?
Answer: Yes (shared, CSP and CSC).
CSP implementation: AWS allows customers to use their own encryption mechanisms for nearly all the services, including S3, EBS, and EC2. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. AWS establishes and manages cryptographic keys for required cryptography employed within the AWS infrastructure. AWS produces, controls, and distributes symmetric cryptographic keys using NIST-approved key management technology and processes in the AWS information system. An AWS-developed secure key and credential manager is used to create, protect, and distribute symmetric keys, and is used to secure and distribute AWS credentials needed on hosts, RSA public/private keys, and X.509 certificates. AWS cryptographic processes are reviewed by independent third-party auditors for our continued compliance with SOC, PCI DSS, and ISO 27001.
CSC responsibilities: AWS customers are responsible for managing encryption keys within their AWS environments according to their internal policy requirements.
CCM control CEK-10, "Key Generation" (Cryptography, Encryption & Key Management): Generate cryptographic keys using industry-accepted cryptographic libraries, specifying the algorithm strength and the random number generator used.

CEK-11.1: Are cryptographic secret and private keys that are provisioned for a unique purpose managed?
Answer: NA (CSC-owned).
CSC responsibilities: Customers determine whether they want to leverage AWS KMS to store encryption keys in the cloud, or use other mechanisms (on-premises HSMs or other key management technologies) to store keys within their on-premises environments.
CCM control CEK-11, "Key Purpose" (Cryptography, Encryption & Key Management): Manage cryptographic secret and private keys that are provisioned for a unique purpose.

CEK-12.1: Are cryptographic keys rotated based on a cryptoperiod calculated while considering information disclosure risks and legal and regulatory requirements?
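CEK-10 above calls for keys generated with industry-accepted libraries and a specified random number generator. As a customer-side illustration, Python's standard-library `secrets` module draws from the operating system's cryptographically strong RNG, so a 256-bit symmetric key is simply 32 such bytes (the key length here is an illustrative choice, not a policy recommendation):

```python
# Sketch: generate a 256-bit symmetric key from the OS CSPRNG.
# secrets wraps the operating system's cryptographically strong RNG;
# never use the general-purpose `random` module for key material.

import secrets

KEY_BITS = 256  # illustrative strength choice

key = secrets.token_bytes(KEY_BITS // 8)
print(len(key))  # 32 bytes = 256 bits
```

In practice, customers who want the generation, storage, and audit trail handled for them would use AWS KMS or CloudHSM instead of generating raw key bytes in application code.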
Answer: NA (CSC-owned).
CSC responsibilities: AWS allows customers to use their own encryption mechanisms for nearly all the services, including S3, EBS, and EC2. IPsec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. In addition, refer to the AWS Cloud Security whitepaper for additional details, available at http://aws.amazon.com/security
CCM control CEK-12, "Key Rotation" (Cryptography, Encryption & Key Management): Rotate cryptographic keys in accordance with the calculated cryptoperiod, which includes provisions for considering the risk of information disclosure and legal and regulatory requirements.

CEK-13.1: Are cryptographic keys revoked and removed before the end of the established cryptoperiod (when a key is compromised, or an entity is no longer part of the organization) per defined, implemented, and evaluated processes, procedures, and technical measures that include legal and regulatory requirement provisions?
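The cryptoperiod in CEK-12 reduces to date arithmetic: a key is due for rotation once its age reaches the chosen period. A minimal sketch, where the 365-day period is a hypothetical policy value, not a recommendation:

```python
# Sketch: decide whether a key has exceeded its cryptoperiod.
# The cryptoperiod length is a hypothetical policy value.

from datetime import date, timedelta

def rotation_due(created: date, cryptoperiod_days: int, today: date) -> bool:
    """True once the key's age reaches or exceeds its cryptoperiod."""
    return today >= created + timedelta(days=cryptoperiod_days)

created = date(2023, 1, 1)
print(rotation_due(created, 365, date(2023, 12, 31)))  # False: one day short
print(rotation_due(created, 365, date(2024, 1, 1)))    # True: period reached
```

A real policy would also shorten the period when disclosure risk rises (e.g., suspected compromise), which is the revocation case CEK-13 covers.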
Answer: NA (CSC-owned).
CSC responsibilities: AWS allows customers to use their own encryption mechanisms for nearly all the services, including S3, EBS, and EC2. IPsec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. In addition, refer to the AWS Cloud Security whitepaper for additional details, available at http://aws.amazon.com/security
CCM control CEK-13, "Key Revocation" (Cryptography, Encryption & Key Management): Define, implement, and evaluate processes, procedures, and technical measures to revoke and remove cryptographic keys prior to the end of the established cryptoperiod, when a key is compromised, or when an entity is no longer part of the organization, which include provisions for legal and regulatory requirements.

CEK-14.1: Are processes, procedures, and technical measures to destroy unneeded keys defined, implemented, and evaluated to address key destruction outside secure environments and revocation of keys stored in hardware security modules (HSMs), and to include applicable legal and regulatory requirement provisions?
Answer: NA (CSC-owned).
CSC responsibilities: AWS allows customers to use their own encryption mechanisms for nearly all the services, including S3, EBS, and EC2. IPsec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. In addition, refer to the AWS Cloud Security whitepaper for additional details, available at http://aws.amazon.com/security
CCM control CEK-14, "Key Destruction" (Cryptography, Encryption & Key Management): Define, implement, and evaluate processes, procedures, and technical measures to destroy keys stored outside a secure environment and revoke keys stored in hardware security modules (HSMs) when they are no longer needed, which include provisions for legal and regulatory requirements.

CEK-15.1: Are processes, procedures, and technical measures to create keys in a pre-activated state (i.e., when they have been generated but not authorized for use) defined, implemented, and evaluated to include legal and regulatory requirement provisions?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSP Implementation Description: AWS allows customers to use their own encryption mechanisms for nearly all services, including S3, EBS, and EC2. IPsec tunnels to VPC are also encrypted. In addition, customers can leverage the AWS Key Management Service (KMS) to create and control encryption keys (see https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS, and to the AWS Cloud Security whitepaper, available at https://aws.amazon.com/security/, for additional details.
CCM Control CEK-15 (Key Activation; Cryptography, Encryption & Key Management): Define, implement and evaluate processes, procedures and technical measures to create keys in a pre-activated state (when they have been generated but not authorized for use), which include provisions for legal and regulatory requirements.

CEK 16.1: Are processes, procedures and technical measures to monitor, review and approve key transitions (e.g., from any state to/from suspension) defined, implemented and evaluated to include legal and regulatory requirement provisions?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSP Implementation Description: AWS allows customers to use their own encryption mechanisms for nearly all services, including S3, EBS, and EC2. IPsec tunnels to VPC are also encrypted. In addition, customers can leverage the AWS Key Management Service (KMS) to create and control encryption keys (see https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS, and to the AWS Cloud Security whitepaper, available at https://aws.amazon.com/security/, for additional details.
CCM Control CEK-16 (Key Suspension; Cryptography, Encryption & Key Management): Define, implement and evaluate processes, procedures and technical measures to monitor, review and approve key transitions from any state to/from suspension, which include provisions for legal and regulatory requirements.

CEK 17.1: Are processes, procedures and technical measures to deactivate keys (at the time of their expiration date) defined, implemented and evaluated to include legal and regulatory requirement provisions?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSP Implementation Description: AWS allows customers to use their own encryption mechanisms for nearly all services, including S3, EBS, and EC2. IPsec tunnels to VPC are also encrypted. In addition, customers can leverage the AWS Key Management Service (KMS) to create and control encryption keys (see https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS, and to the AWS Cloud Security whitepaper, available at https://aws.amazon.com/security/, for additional details.
CCM Control CEK-17 (Key Deactivation; Cryptography, Encryption & Key Management): Define, implement and evaluate processes, procedures and technical measures to deactivate keys at the time of their expiration date, which include provisions for legal and regulatory requirements.

CEK 18.1: Are processes, procedures and technical measures to manage archived keys in a secure repository (requiring least-privilege access) defined, implemented and evaluated to include legal and regulatory requirement provisions?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSP Implementation Description: AWS allows customers to use their own encryption mechanisms for nearly all services, including S3, EBS, and EC2. IPsec tunnels to VPC are also encrypted. In addition, customers can leverage the AWS Key Management Service (KMS) to create and control encryption keys (see https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS, and to the AWS Cloud Security whitepaper, available at https://aws.amazon.com/security/, for additional details.
CCM Control CEK-18 (Key Archival; Cryptography, Encryption & Key Management): Define, implement and evaluate processes, procedures and technical measures to manage archived keys in a secure repository requiring least-privilege access, which include provisions for legal and regulatory requirements.

CEK 19.1: Are processes, procedures and technical measures to encrypt information in specific scenarios (e.g., only in controlled circumstances, and thereafter only for data decryption and never for encryption) defined, implemented and evaluated to include legal and regulatory requirement provisions?
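The CEK-15 through CEK-17 controls together describe a key lifecycle: pre-activated keys must be explicitly authorized before use, suspension is reversible, and deactivation at expiry is terminal. A customer-side sketch of enforcing those transitions (the state names are taken from the controls; KMS exposes a comparable lifecycle through its Enabled/Disabled/PendingDeletion key states):

```python
# Allowed transitions per the CEK-15/16/17 lifecycle sketch.
ALLOWED = {
    "pre-activated": {"active"},
    "active": {"suspended", "deactivated"},
    "suspended": {"active", "deactivated"},
    "deactivated": set(),  # terminal: expired keys are never reused
}

def transition(state, new_state):
    """Approve a key state transition or reject it as illegal."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = transition("pre-activated", "active")
s = transition(s, "suspended")
s = transition(s, "active")
print(s)  # active
```

The monitoring and approval workflow required by CEK-16 would wrap `transition` with logging and a reviewer sign-off; the sketch shows only the state constraint itself.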
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSP Implementation Description: This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight into what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content and where it is stored, used, and protected from disclosure.
CCM Control CEK-19 (Key Compromise; Cryptography, Encryption & Key Management): Define, implement and evaluate processes, procedures and technical measures to use compromised keys to encrypt information only in controlled circumstances, and thereafter exclusively for decrypting data and never for encrypting data, which include provisions for legal and regulatory requirements.

CEK 20.1: Are processes, procedures and technical measures to assess operational continuity risks (versus the risk of losing control of keying material and exposing protected data) defined, implemented and evaluated to include legal and regulatory requirement provisions?
CAIQ Answer: Yes. SSRM Control Ownership: shared CSP and CSC.
CSP Implementation Description: AWS establishes and manages cryptographic keys for the required cryptography employed within the AWS infrastructure. AWS produces, controls and distributes symmetric cryptographic keys using NIST-approved key management technology and processes in the AWS information system. An AWS-developed secure key and credential manager is used to create, protect and distribute symmetric keys, and is used to secure and distribute AWS credentials needed on hosts, RSA public/private keys, and X.509 certificates. AWS cryptographic processes are reviewed by independent third-party auditors for continued compliance with SOC, PCI DSS and ISO 27001. AWS allows customers to use their own encryption mechanisms for nearly all services, including S3, EBS, and EC2. In addition, customers can leverage the AWS Key Management Service (KMS) to create and control encryption keys (see https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS.
CCM Control CEK-20 (Key Recovery; Cryptography, Encryption & Key Management): Define, implement and evaluate processes, procedures and technical measures to assess the risk to operational continuity versus the risk of the keying material, and the information it protects, being exposed if control of the keying material is lost, which include provisions for legal and regulatory requirements.

CEK 21.1: Are key management system processes, procedures and technical measures defined, implemented and evaluated to track and report all cryptographic materials and status changes, and do they include legal and regulatory requirement provisions?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSP Implementation Description: AWS allows customers to use their own encryption mechanisms for nearly all services, including S3, EBS, and EC2. IPsec tunnels to VPC are also encrypted. In addition, customers can leverage the AWS Key Management Service (KMS) to create and control encryption keys (see https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS, and to the AWS Cloud Security whitepaper, available at https://aws.amazon.com/security/, for additional details.
CCM Control CEK-21 (Key Inventory Management; Cryptography, Encryption & Key Management): Define, implement and evaluate processes, procedures and technical measures in order for the key management system to track and report all cryptographic materials and changes in status, which include provisions for legal and regulatory requirements.

DCS 01.1: Are policies and procedures for the secure disposal of equipment used outside the organization's premises established, documented, approved, communicated, enforced and maintained?
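The inventory-management control above (CEK-21) asks for tracking and reporting of all cryptographic materials and their status changes. On the customer side, a minimal ledger-style sketch (the event schema is illustrative, not a KMS API) could look like:

```python
def current_status(events):
    """Reduce an ordered (oldest-first) list of key status-change events
    to the current status per key, for a CEK-21 style inventory report."""
    status = {}
    for e in events:
        status[e["key_id"]] = e["status"]
    return status

events = [
    {"key_id": "key-a", "status": "active"},
    {"key_id": "key-b", "status": "active"},
    {"key_id": "key-a", "status": "suspended"},
]
print(current_status(events))  # {'key-a': 'suspended', 'key-b': 'active'}
```

Keeping the full event list, rather than only the reduced view, is what makes the status *changes* reportable as the control requires.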
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Environments used for the delivery of AWS services are managed by authorized personnel and are located in AWS-managed data centers. Media handling controls for the data centers are managed by AWS in alignment with the AWS Media Protection Policy, which includes procedures for access, marking, storage, transport, and sanitization. Live media transported outside of data center secure zones is escorted by authorized personnel.
CCM Control DCS-01 (Off-Site Equipment Disposal Policy and Procedures; Datacenter Security): Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for the secure disposal of equipment used outside the organization's premises. If the equipment is not physically destroyed, a data destruction procedure that renders recovery of information impossible must be applied. Review and update the policies and procedures at least annually.

DCS 01.2: Is a data destruction procedure applied that renders information recovery impossible if equipment is not physically destroyed?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process designed to prevent customer data from being exposed to unauthorized individuals. AWS uses the techniques detailed in NIST 800-88 ("Guidelines for Media Sanitization") as part of the decommissioning process. Refer to the AWS Overview of Security Processes whitepaper, available at https://aws.amazon.com/security/security-learning/, for additional details.
CCM Control DCS-01 (Off-Site Equipment Disposal Policy and Procedures; Datacenter Security): Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for the secure disposal of equipment used outside the organization's premises. If the equipment is not physically destroyed, a data destruction procedure that renders recovery of information impossible must be applied. Review and update the policies and procedures at least annually.

DCS 01.3: Are policies and procedures for the secure disposal of equipment used outside the organization's premises reviewed and updated at least annually?
CAIQ Answer: Yes.
CSP Implementation Description: Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM Control DCS-01 (Off-Site Equipment Disposal Policy and Procedures; Datacenter Security): Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for the secure disposal of equipment used outside the organization's premises. If the equipment is not physically destroyed, a data destruction procedure that renders recovery of information impossible must be applied. Review and update the policies and procedures at least annually.

DCS 02.1: Are policies and procedures for the relocation or transfer of hardware, software, or data/information to an offsite or alternate location established, documented, approved, communicated, implemented, enforced and maintained?
CAIQ Answer: Yes.
CSP Implementation Description: AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity and availability of customers' systems and content. Maintaining customer trust and confidence is of the utmost importance to AWS. AWS works to comply with applicable federal, state and local laws, statutes, ordinances and regulations concerning security, privacy and data protection of AWS services, in order to minimize the risk of accidental or unauthorized access or disclosure of customer content.
CCM Control DCS-02 (Off-Site Transfer Authorization Policy and Procedures; Datacenter Security): Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for the relocation or transfer of hardware, software, or data/information to an offsite or alternate location. The relocation or transfer request requires written or cryptographically verifiable authorization. Review and update the policies and procedures at least annually.

DCS 02.2: Does a relocation or transfer request require written or cryptographically verifiable authorization?
CAIQ Answer: Yes.
CSP Implementation Description: Environments used for the delivery of AWS services are managed by authorized personnel and are located in AWS-managed data centers. Media handling controls for the data centers are managed by AWS in alignment with the AWS Media Protection Policy, which includes procedures for access, marking, storage, transport, and sanitization. Live media transported outside of data center secure zones is escorted by authorized personnel.
CCM Control DCS-02 (Off-Site Transfer Authorization Policy and Procedures; Datacenter Security): Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for the relocation or transfer of hardware, software, or data/information to an offsite or alternate location. The relocation or transfer request requires written or cryptographically verifiable authorization. Review and update the policies and procedures at least annually.

DCS 02.3: Are policies and procedures for the relocation or transfer of hardware, software, or data/information to an offsite or alternate location reviewed and updated at least annually?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM Control DCS-02 (Off-Site Transfer Authorization Policy and Procedures; Datacenter Security): Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for the relocation or transfer of hardware, software, or data/information to an offsite or alternate location. The relocation or transfer request requires written or cryptographically verifiable authorization. Review and update the policies and procedures at least annually.

DCS 03.1: Are policies and procedures for maintaining a safe and secure working environment (in offices, rooms and facilities) established, documented, approved, communicated, enforced and maintained?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: AWS engages with external certifying bodies and independent auditors to review and validate our compliance with compliance frameworks. The AWS SOC reports provide additional details on the specific physical security control activities executed by AWS. Refer to ISO 27001, Annex A, domain 11 for additional details; AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM Control DCS-03 (Secure Area Policy and Procedures; Datacenter Security): Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for maintaining a safe and secure working environment in offices, rooms and facilities. Review and update the policies and procedures at least annually.

DCS 03.2: Are policies and procedures for maintaining safe, secure working environments (e.g., offices, rooms) reviewed and updated at least annually?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM Control DCS-03 (Secure Area Policy and Procedures; Datacenter Security): Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for maintaining a safe and secure working environment in offices, rooms and facilities. Review and update the policies and procedures at least annually.

DCS 04.1: Are policies and procedures for the secure transportation of physical media established, documented, approved, communicated, enforced, evaluated and maintained?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Environments used for the delivery of AWS services are managed by authorized personnel and are located in AWS-managed data centers. Media handling controls for the data centers are managed by AWS in alignment with the AWS Media Protection Policy, which includes procedures for access, marking, storage, transport, and sanitization. Live media transported outside of data center secure zones is escorted by authorized personnel.
CCM Control DCS-04 (Secure Media Transportation Policy and Procedures; Datacenter Security): Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for the secure transportation of physical media. Review and update the policies and procedures at least annually.

DCS 04.2: Are policies and procedures for the secure transportation of physical media reviewed and updated at least annually?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM Control DCS-04 (Secure Media Transportation Policy and Procedures; Datacenter Security): Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for the secure transportation of physical media. Review and update the policies and procedures at least annually.

DCS 05.1: Is the classification and documentation of physical and logical assets based on the organizational business risk?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: In alignment with ISO 27001 standards, AWS assets are assigned an owner, and are tracked and monitored by AWS personnel with AWS proprietary inventory management tools.
CCM Control DCS-05 (Assets Classification; Datacenter Security): Classify and document the physical and logical assets (e.g., applications) based on the organizational business risk.

DCS 06.1: Are all relevant physical and logical assets at all CSP sites cataloged and tracked within a secured system?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: In alignment with ISO 27001 standards, AWS hardware assets are assigned an owner, and are tracked and monitored by AWS personnel with AWS proprietary inventory management tools.
CCM Control DCS-06 (Assets Cataloguing and Tracking; Datacenter Security): Catalogue and track all relevant physical and logical assets located at all of the CSP's sites within a secured system.

DCS 07.1: Are physical security perimeters implemented to safeguard personnel, data and information systems?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Physical security controls include, but are not limited to, perimeter controls such as fencing, walls, security staff, video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. The AWS SOC reports provide additional details on the specific control activities executed by AWS. Refer to ISO 27001, Annex A, domain 11 for further information; AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. For more information on the design, layout, and operations of our data centers, see the AWS Data Center Overview.
CCM Control DCS-07 (Controlled Access Points; Datacenter Security): Implement physical security perimeters to safeguard personnel, data and information systems. Establish physical security perimeters between the administrative and business areas and the data storage and processing facilities areas.

DCS 07.2: Are physical security perimeters established between administrative and business areas and data storage and processing facilities?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Physical security controls include, but are not limited to, perimeter controls such as fencing, walls, security staff, video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. The AWS SOC reports provide additional details on the specific control activities executed by AWS. Refer to ISO 27001, Annex A, domain 11 for further information; AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. For more information on the design, layout, and operations of our data centers, see the AWS Data Center Overview.
CCM Control DCS-07 (Controlled Access Points; Datacenter Security): Implement physical security perimeters to safeguard personnel, data and information systems. Establish physical security perimeters between the administrative and business areas and the data storage and processing facilities areas.

DCS 08.1: Is equipment identification used as a method for connection authentication?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: AWS manages equipment identification in alignment with the ISO 27001 standard. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM Control DCS-08 (Equipment Identification; Datacenter Security): Use equipment identification as a method for connection authentication.

DCS 09.1: Are solely authorized personnel able to access secure areas, with all ingress and egress areas restricted, documented and monitored by physical access control mechanisms?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Physical access is strictly controlled both at the perimeter and at building ingress points, and includes, but is not limited to, professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. Physical access points to server locations are recorded by closed-circuit television cameras (CCTV), as defined in the AWS Data Center Physical Security Policy.
CCM Control DCS-09 (Secure Area Authorization; Datacenter Security): Allow only authorized personnel access to secure areas, with all ingress and egress points restricted, documented and monitored by physical access control mechanisms. Retain access control records on a periodic basis, as deemed appropriate by the organization.

DCS 09.2: Are access control records retained periodically, as deemed appropriate by the organization?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Authentication logging aggregates sensitive logs from EC2 hosts and stores them in S3. The log integrity checker inspects logs to ensure they were uploaded to S3 unchanged, by comparing them with local manifest files. Access and privileged-command auditing logs record every automated and interactive login to the systems, as well as every privileged command executed. External access to data stored in Amazon S3 is logged, and the logs are retained for at least 90 days, including relevant access request information such as the data accessor, IP address, object and operation.
CCM Control DCS-09 (Secure Area Authorization; Datacenter Security): Allow only authorized personnel access to secure areas, with all ingress and egress points restricted, documented and monitored by physical access control mechanisms. Retain access control records on a periodic basis, as deemed appropriate by the organization.

DCS 10.1: Are external-perimeter datacenter surveillance systems, and surveillance systems at all ingress and egress points, implemented, maintained and operated?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Physical access is strictly controlled both at the perimeter and at building ingress points, and includes, but is not limited to, professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. Physical access points to server locations are recorded by closed-circuit television cameras (CCTV), as defined in the AWS Data Center Physical Security Policy.
CCM Control DCS-10 (Surveillance System; Datacenter Security): Implement, maintain and operate datacenter surveillance systems at the external perimeter and at all ingress and egress points to detect unauthorized ingress and egress attempts.

DCS 11.1: Are datacenter personnel trained to respond to unauthorized access or egress attempts?
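The DCS 09.2 answer states a 90-day retention floor for S3 access logs. A customer applying the same floor to their own log archive might check it with a small helper like this (the record layout is hypothetical; only the retention arithmetic is taken from the text):

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=90)  # floor stated in the answer above

def expired(log_dates, now):
    """Return log dates past the retention floor, i.e. eligible for purge.
    Anything within the last 90 days must be kept."""
    return [d for d in log_dates if now - d > RETENTION]

logs = [datetime(2024, 1, 1), datetime(2024, 5, 1)]
print(expired(logs, now=datetime(2024, 5, 15)))  # [datetime.datetime(2024, 1, 1, 0, 0)]
```

Note the asymmetry: the control is a retention *minimum*, so a purge job must only ever delete entries strictly older than the floor.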
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Physical access is strictly controlled both at the perimeter and at building ingress points, and includes, but is not limited to, professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. Physical access points to server locations are recorded by closed-circuit television cameras (CCTV), as defined in the AWS Data Center Physical Security Policy.
CCM Control DCS-11 (Unauthorized Access Response Training; Datacenter Security): Train datacenter personnel to respond to unauthorized ingress or egress attempts.

DCS 12.1: Are processes, procedures and technical measures defined, implemented and evaluated to ensure risk-based protection of power and telecommunication cables from interception, interference or damage threats at all facilities, offices and rooms?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: AWS equipment is protected from utility service outages in alignment with the ISO 27001 standard. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. The AWS SOC reports provide additional details on controls in place to minimize the effect of a malfunction or physical disaster on the computer and data center facilities.
CCM Control DCS-12 (Cabling Security; Datacenter Security): Define, implement and evaluate processes, procedures and technical measures that ensure risk-based protection of power and telecommunication cables from a threat of interception, interference or damage at all facilities, offices and rooms.

DCS 13.1: Are data center environmental control systems, designed to monitor, maintain and test that on-site temperature and humidity conditions fall within accepted industry standards, effectively implemented and maintained?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: AWS data centers incorporate physical protection against environmental risks. AWS' physical protection against environmental risks has been validated by an independent auditor and certified as being in alignment with ISO 27002 best practices. Refer to ISO 27001, Annex A, domain 11, and to the data center controls overview at https://aws.amazon.com/compliance/data-center/controls/.
CCM Control DCS-13 (Environmental Systems; Datacenter Security): Implement and maintain data center environmental control systems that monitor, maintain and test, for continual effectiveness, the temperature and humidity conditions within accepted industry standards.

DCS 14.1: Are utility services secured, monitored, maintained and tested at planned intervals for continual effectiveness?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. The AWS SOC reports provide additional details on controls in place to minimize the effect of a malfunction or physical disaster on the computer and data center facilities. See the data center controls overview at https://aws.amazon.com/compliance/data-center/controls/.
CCM Control DCS-14 (Secure Utilities; Datacenter Security): Secure, monitor, maintain and test utility services for continual effectiveness at planned intervals.

DCS 15.1: Is business-critical equipment segregated from locations subject to a high probability of environmental risk events?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: The AWS Security Operations Center performs quarterly threat and vulnerability reviews of data centers and colocation sites. These reviews are in addition to an initial environmental and geographic assessment of a site, performed prior to building or leasing. The quarterly reviews are validated by third parties during our SOC, PCI and ISO assessments.
CCM Control DCS-15 (Equipment Location; Datacenter Security): Keep business-critical equipment away from locations subject to a high probability of environmental risk events.

DSP 01.1: Are policies and procedures established, documented, approved, communicated, enforced, evaluated and maintained for the classification, protection and handling of data throughout its lifecycle, according to all applicable laws and regulations, standards, and risk level?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: AWS has implemented data handling and classification requirements, which provide specifications around:
• Data encryption
• Content in transit and during storage
• Access
• Retention
• Physical controls
• Mobile devices
• Handling requirements
AWS services are content-agnostic, in that they offer the same high level of security to customers regardless of the type of content being stored. We are vigilant about our customers' security and have implemented sophisticated technical and physical measures against unauthorized access. AWS has no insight into what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content and where it is stored, used, and protected from disclosure.
CCM Control DSP-01 (Security and Privacy Policy and Procedures; Data Security and Privacy Lifecycle Management): Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for the classification, protection and handling of data throughout its lifecycle, according to all applicable laws and regulations, standards, and risk level. Review and update the policies and procedures at least annually.

DSP 01.2: Are data security and privacy policies and procedures reviewed and updated at least annually?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM Control DSP-01 (Security and Privacy Policy and Procedures; Data Security and Privacy Lifecycle Management): Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for the classification, protection and handling of data throughout its lifecycle, according to all applicable laws and regulations, standards, and risk level. Review and update the policies and procedures at least annually.

DSP 02.1: Are industry-accepted methods applied for secure data disposal from storage media, so that information is not recoverable by any forensic means?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process designed to prevent customer data from being exposed to unauthorized individuals. AWS uses the techniques detailed in NIST 800-88 ("Guidelines for Media Sanitization") as part of the decommissioning process. Refer to the AWS Overview of Security Processes whitepaper, available at https://aws.amazon.com/security/security-learning/, for additional details.
CCM Control DSP-02 (Secure Disposal; Data Security and Privacy Lifecycle Management): Apply industry-accepted methods for the secure disposal of data from storage media, such that data is not recoverable by any forensic means.

DSP 03.1: Is a data inventory created and maintained for sensitive and personal information (at a minimum)?
NA (CSC-owned). This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored and used, and how it is protected from disclosure. CCM DSP-03 (Data Inventory; Data Security and Privacy Lifecycle Management): Create and maintain a data inventory, at least for any sensitive data and personal data. DSP-04.1: Is data classified according to type and sensitivity levels? NA (CSC-owned). This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored and used, and how it is protected from disclosure. CCM DSP-04 (Data Classification; Data Security and Privacy Lifecycle Management): Classify data according to its type and sensitivity level. DSP-05.1: Is data flow documentation created to identify what data is processed and where it is stored and transmitted?
NA (CSC-owned). This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored and used, and how it is protected from disclosure. CCM DSP-05 (Data Flow Documentation; Data Security and Privacy Lifecycle Management): Create data flow documentation to identify what data is processed, stored, or transmitted where. Review data flow documentation at defined intervals, at least annually, and after any change. DSP-05.2: Is data flow documentation reviewed at defined intervals, at least annually, and after any change? NA (CSC-owned). This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored and used, and how it is protected from disclosure. CCM DSP-05 (Data Flow Documentation; Data Security and Privacy Lifecycle Management): Create data flow documentation to identify what data is processed, stored, or transmitted where. Review data flow documentation at defined intervals, at least annually, and after any change. DSP-06.1: Is the ownership and stewardship of all relevant personal and sensitive data documented?
NA (CSC-owned). This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored and used, and how it is protected from disclosure. CCM DSP-06 (Data Ownership and Stewardship; Data Security and Privacy Lifecycle Management): Document ownership and stewardship of all relevant documented personal and sensitive data. Perform review at least annually. DSP-06.2: Is data ownership and stewardship documentation reviewed at least annually? NA (CSC-owned). This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored and used, and how it is protected from disclosure. CCM DSP-06 (Data Ownership and Stewardship; Data Security and Privacy Lifecycle Management): Document ownership and stewardship of all relevant documented personal and sensitive data. Perform review at least annually. DSP-07.1: Are systems, products, and business practices based on security principles by design and per industry best practices?
Yes (CSP-owned). AWS maintains a systematic approach to planning and developing new services for the AWS environment to ensure that quality and security requirements are met with each release. The design of new services, or any significant changes to current services, follows secure software development practices and is controlled through a project management system with multidisciplinary participation. Prior to launch, each of the following requirements must be reviewed: • Security risk assessment • Threat modeling • Security design reviews • Secure code reviews • Security testing • Vulnerability/penetration testing. CCM DSP-07 (Data Protection by Design and Default; Data Security and Privacy Lifecycle Management): Develop systems, products, and business practices based upon a principle of security by design and industry best practices. DSP-08.1: Are systems, products, and business practices based on privacy principles by design and according to industry best practices? NA (CSC-owned). This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored and used, and how it is protected from disclosure. CCM DSP-08 (Data Privacy by Design and Default; Data Security and Privacy Lifecycle Management): Develop systems, products, and business practices based upon a principle of privacy by design and industry best practices. Ensure that systems' privacy settings are configured by default according to all applicable laws and regulations. DSP-08.2: Are systems' privacy settings configured by default and according to all applicable laws and regulations?
NA (CSC-owned). This is a customer responsibility. AWS customers are responsible for adhering to regulatory requirements in the jurisdictions in which their businesses are active. CCM DSP-08 (Data Privacy by Design and Default; Data Security and Privacy Lifecycle Management): Develop systems, products, and business practices based upon a principle of privacy by design and industry best practices. Ensure that systems' privacy settings are configured by default according to all applicable laws and regulations. DSP-09.1: Is a data protection impact assessment (DPIA) conducted when processing personal data, evaluating the origin, nature, particularity, and severity of risks according to any applicable laws, regulations, and industry best practices? NA (CSC-owned). This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored and used, and how it is protected from disclosure. CCM DSP-09 (Data Protection Impact Assessment; Data Security and Privacy Lifecycle Management): Conduct a Data Protection Impact Assessment (DPIA) to evaluate the origin, nature, particularity, and severity of the risks upon the processing of personal data, according to any applicable laws, regulations, and industry best practices. DSP-10.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to ensure any transfer of personal or sensitive data is protected from unauthorized access and only processed within scope (as permitted by respective laws and regulations)?
NA (CSC-owned). This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored and used, and how it is protected from disclosure. CCM DSP-10 (Sensitive Data Transfer; Data Security and Privacy Lifecycle Management): Define, implement, and evaluate processes, procedures, and technical measures that ensure any transfer of personal or sensitive data is protected from unauthorized access and only processed within scope, as permitted by the respective laws and regulations. DSP-11.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to enable data subjects to request access to, modify, or delete personal data (per applicable laws and regulations)? NA (CSC-owned). This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored and used, and how it is protected from disclosure. CCM DSP-11 (Personal Data Access, Reversal, Rectification and Deletion; Data Security and Privacy Lifecycle Management): Define and implement processes, procedures, and technical measures to enable data subjects to request access to, modification of, or deletion of their personal data, according to any applicable laws and regulations. DSP-12.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to ensure personal data is processed (per applicable laws and regulations, and for the purposes declared to the data subject)? Yes (shared CSP and CSC). AWS has established a formal Data Subject Access Request (DSAR) process in accordance with the General Data Protection Regulation (GDPR): customers contact AWS, and a CS Team Manager works with Legal to open a Harbinger ticket. The process is supported by continual independent internal and external assessments that validate the implementation and operating effectiveness of the AWS control environment. AWS customers are responsible for the management of the data (including adhering to applicable laws and regulations) they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored and used, and how it is protected from disclosure. CCM DSP-12 (Limitation of Purpose in Personal Data Processing; Data Security and Privacy Lifecycle Management): Define, implement, and evaluate processes, procedures, and technical measures to ensure that personal data is processed according to any applicable laws and regulations and for the purposes declared to the data subject. DSP-13.1: Are processes, procedures, and technical measures defined, implemented, and evaluated for the transfer and sub-processing of personal data within the service supply chain (according to any applicable laws and regulations)?
NA. Note: AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored and used, and how it is protected from disclosure. AWS does not utilize third parties to provide services to customers. There are no subcontractors authorized by AWS to access any customer-owned content that you upload onto AWS. To monitor subcontractor access year-round, please refer to https://aws.amazon.com/compliance/sub-processors/. CCM DSP-13 (Personal Data Sub-processing; Data Security and Privacy Lifecycle Management): Define, implement, and evaluate processes, procedures, and technical measures for the transfer and sub-processing of personal data within the service supply chain, according to any applicable laws and regulations. DSP-14.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to disclose details to the data owner of any personal or sensitive data access by sub-processors before processing initiation?
NA. AWS does not utilize third parties to provide services to customers. There are no subcontractors authorized by AWS to access any customer-owned content that you upload onto AWS. To monitor subcontractor access year-round, please refer to https://aws.amazon.com/compliance/third-party-access/. CCM DSP-14 (Disclosure of Data Sub-processors; Data Security and Privacy Lifecycle Management): Define, implement, and evaluate processes, procedures, and technical measures to disclose the details of any personal or sensitive data access by sub-processors to the data owner prior to initiation of that processing. DSP-15.1: Is authorization from data owners obtained, and the associated risk managed, before replicating or using production data in non-production environments? NA. Customer data is not used for testing. CCM DSP-15 (Limitation of Production Data Use; Data Security and Privacy Lifecycle Management): Obtain authorization from data owners, and manage associated risk, before replicating or using production data in non-production environments. DSP-16.1: Do data retention, archiving, and deletion practices follow business requirements, applicable laws, and regulations?
Yes (shared CSP and CSC). AWS maintains a retention policy applicable to AWS internal data and system components in order to continue operations of AWS business and services. Critical AWS system components, including audit evidence and logging records, are replicated across multiple Availability Zones, and backups are maintained and monitored. AWS customers are responsible for the management of the data they place into AWS services, including retention, archiving, and deletion policies and practices. CCM DSP-16 (Data Retention and Deletion; Data Security and Privacy Lifecycle Management): Data retention, archiving, and deletion is managed in accordance with business requirements, applicable laws, and regulations. DSP-17.1: Are processes, procedures, and technical measures defined and implemented to protect sensitive data throughout its lifecycle?
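DSP-17.1 concerns customer-side (CSC-owned) measures: choosing a storage region, encrypting content at rest, and restricting access. As a non-authoritative sketch of how such measures might be expressed against the AWS APIs, the dictionaries below mirror the request-parameter shapes accepted by boto3's S3 client (`create_bucket`, `put_bucket_encryption`, `put_public_access_block`); the region, bucket name, and KMS key alias are invented for illustration, and no AWS call is made:

```python
# Sketch only: DSP-17-style customer controls expressed as AWS API parameters.
# Region, bucket name, and key alias below are illustrative assumptions.

REGION = "eu-central-1"                    # customer-chosen storage region
BUCKET = "example-sensitive-data"          # hypothetical bucket name
KMS_KEY_ALIAS = "alias/example-data-key"   # customer-managed encryption key


def bucket_request_params(region: str, bucket: str, kms_alias: str) -> dict:
    """Build the parameter sets a customer could pass to boto3's S3 client."""
    return {
        # Pin the bucket to the chosen region; content is not moved outside it
        # unless the customer configures replication.
        "create_bucket": {
            "Bucket": bucket,
            "CreateBucketConfiguration": {"LocationConstraint": region},
        },
        # Default server-side encryption at rest with a customer-managed KMS key.
        "put_bucket_encryption": {
            "Bucket": bucket,
            "ServerSideEncryptionConfiguration": {
                "Rules": [{
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": kms_alias,
                    }
                }]
            },
        },
        # Manage access: block all public access to the bucket.
        "put_public_access_block": {
            "Bucket": bucket,
            "PublicAccessBlockConfiguration": {
                "BlockPublicAcls": True,
                "IgnorePublicAcls": True,
                "BlockPublicPolicy": True,
                "RestrictPublicBuckets": True,
            },
        },
    }


params = bucket_request_params(REGION, BUCKET, KMS_KEY_ALIAS)
algo = params["put_bucket_encryption"][
    "ServerSideEncryptionConfiguration"]["Rules"][0][
    "ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"]
print(algo)  # prints "aws:kms"
```

In a real deployment these dictionaries would be passed as keyword arguments to a `boto3.client("s3", region_name=...)`, e.g. `s3.create_bucket(**params["create_bucket"])`; treat this as a sketch to adapt, not a complete security configuration.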
NA (CSC-owned). Customers control their customer content. With AWS, customers: • Determine where their customer content will be stored, including the type of storage and geographic region of that storage • Can replicate and back up their customer content in more than one region; we will not move or replicate customer content outside of the customer's chosen region(s) except as legally required and as necessary to maintain the AWS services and provide them to our customers and their end users • Choose the secured state of their customer content; we offer customers strong encryption for customer content in transit or at rest, and we provide customers with the option to manage their own encryption keys • Manage access to their customer content and AWS services and resources through users, groups, permissions, and credentials that customers control. CCM DSP-17 (Sensitive Data Protection; Data Security and Privacy Lifecycle Management): Define and implement processes, procedures, and technical measures to protect sensitive data throughout its lifecycle. DSP-18.1: Does the CSP have in place, and describe to CSCs, the procedure to manage and respond to requests for disclosure of Personal Data by Law Enforcement Authorities according to applicable laws and regulations?
Yes (CSP-owned). We are vigilant about our customers' privacy. AWS policy prohibits the disclosure of customer content unless we're required to do so to comply with the law or with a valid and binding order of a governmental or regulatory body. Unless we are prohibited from doing so, or there is clear indication of illegal conduct in connection with the use of Amazon products or services, Amazon notifies customers before disclosing customer content so they can seek protection from disclosure. It's also important to point out that our customers can encrypt their customer content, and we provide customers with the option to manage their own encryption keys. We know transparency matters to our customers, so we regularly publish a report about the types and volume of information requests we receive here: https://aws.amazon.com/compliance/amazon-information-requests/. CCM DSP-18 (Disclosure Notification; Data Security and Privacy Lifecycle Management): The CSP must have in place, and describe to CSCs, the procedure to manage and respond to requests for disclosure of Personal Data by Law Enforcement Authorities according to applicable laws and regulations. The CSP must give special attention to the notification procedure to interested CSCs, unless otherwise prohibited, such as a prohibition under criminal law to preserve confidentiality of a law enforcement investigation. DSP-18.2: Does the CSP give special attention to the notification procedure to interested CSCs, unless otherwise prohibited, such as a prohibition under criminal law to preserve confidentiality of a law enforcement investigation?
Yes (shared CSP and CSC). See the response to Question ID DSP-18.1. CCM DSP-18 (Disclosure Notification; Data Security and Privacy Lifecycle Management): The CSP must have in place, and describe to CSCs, the procedure to manage and respond to requests for disclosure of Personal Data by Law Enforcement Authorities according to applicable laws and regulations. The CSP must give special attention to the notification procedure to interested CSCs, unless otherwise prohibited, such as a prohibition under criminal law to preserve confidentiality of a law enforcement investigation. DSP-19.1: Are processes, procedures, and technical measures defined and implemented to specify and document physical data locations, including locales where data is processed or backed up? NA (CSC-owned). This is a customer responsibility. Customers manage access to their customer content and AWS services and resources. We provide an advanced set of access, encryption, and logging features to help you do this effectively (such as AWS CloudTrail). We do not access or use customer content for any purpose other than as legally required and for maintaining the AWS services and providing them to our customers and their end users. Customers choose the region(s) in which their customer content will be stored; we will not move or replicate customer content outside of the customer's chosen region(s) except as legally required and as necessary to maintain the AWS services and provide them to our customers and their end users. Customers choose how their customer content is secured; we offer our customers strong encryption for customer content in transit or at rest, and we provide customers with the option to manage their own encryption keys. CCM DSP-19 (Data Location; Data Security and Privacy Lifecycle Management): Define and implement processes, procedures, and technical measures to specify and document the physical locations of data, including any locations in which data is processed or backed up. GRC-01.1: Are information governance program policies and procedures, sponsored by organizational leadership, established, documented, approved, communicated, applied, evaluated, and maintained? Yes (CSP-owned). AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity, and availability of customers' systems and content. Maintaining customer trust and confidence is of the utmost importance to AWS. AWS works to comply with applicable federal, state, and local laws, statutes, ordinances, and regulations concerning security, privacy, and data protection of AWS services, in order to minimize the risk of accidental or unauthorized access or disclosure of customer content. CCM GRC-01 (Governance Program Policy and Procedures; Governance, Risk and Compliance): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for an information governance program, which is sponsored by the leadership of the organization. Review and update the policies and procedures at least annually. GRC-01.2: Are the policies and procedures reviewed and updated at least annually?
Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis. CCM GRC-01 (Governance Program Policy and Procedures; Governance, Risk and Compliance): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for an information governance program, which is sponsored by the leadership of the organization. Review and update the policies and procedures at least annually. GRC-02.1: Is there an established, formal, documented, and leadership-sponsored enterprise risk management (ERM) program that includes policies and procedures for identification, evaluation, ownership, treatment, and acceptance of cloud security and privacy risks? Yes (CSP-owned). AWS has established an information security management program with designated roles and responsibilities that are appropriately aligned within the organization. AWS management reviews and evaluates the risks identified in the risk management program at least annually. The risk management program encompasses the following phases: Discovery – The discovery phase includes listing out risks (threats and vulnerabilities) that exist in the environment. This phase provides a basis for all other risk management activities. Research – The research phase considers the potential impact(s) of identified risks to the business and the likelihood of their occurrence, and includes an evaluation of internal control effectiveness. Evaluate – The evaluate phase includes ensuring that controls, processes, and other physical and virtual safeguards are in place to prevent and detect identified and assessed risks. Resolve – The resolve phase results in risk reports that provide managers with the data they need to make effective business decisions and to comply with internal policies and applicable regulations. Monitor – The monitor phase includes performing monitoring activities to evaluate whether processes, initiatives, functions, and/or activities are mitigating the risk as designed. CCM GRC-02 (Risk Management Program; Governance, Risk and Compliance): Establish a formal, documented, and leadership-sponsored Enterprise Risk Management (ERM) program that includes policies and procedures for identification, evaluation, ownership, treatment, and acceptance of cloud security and privacy risks. GRC-03.1: Are all relevant organizational policies and associated procedures reviewed at least annually, or when a substantial organizational change occurs? Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis. CCM GRC-03 (Organizational Policy Reviews; Governance, Risk and Compliance): Review all relevant organizational policies and associated procedures at least annually, or when a substantial change occurs within the organization. GRC-04.1: Is an approved exception process, mandated by the governance program, established and followed whenever a deviation from an established policy occurs?
Yes (CSP-owned). Management reviews exceptions to security policies to assess and mitigate risks. AWS Security maintains a documented procedure describing the policy exception workflow on an internal AWS website. Policy exceptions are tracked and maintained with the policy tool, and exceptions are approved, rejected, or denied based on the procedures outlined within the procedure document. CCM GRC-04 (Policy Exception Process; Governance, Risk and Compliance): Establish and follow an approved exception process, as mandated by the governance program, whenever a deviation from an established policy occurs. GRC-05.1: Has an information security program (including programs of all relevant CCM domains) been developed and implemented? Yes (CSP-owned). AWS has established an information security management program with designated roles and responsibilities that are appropriately aligned within the organization. AWS management reviews and evaluates the risks identified in the risk management program at least annually. The risk management program encompasses the following phases: Discovery – The discovery phase includes listing out risks (threats and vulnerabilities) that exist in the environment. This phase provides a basis for all other risk management activities. Research – The research phase considers the potential impact(s) of identified risks to the business and the likelihood of their occurrence, and includes an evaluation of internal control effectiveness. Evaluate – The evaluate phase includes ensuring that controls, processes, and other physical and virtual safeguards are in place to prevent and detect identified and assessed risks. Resolve – The resolve phase results in risk reports that provide managers with the data they need to make effective business decisions and to comply with internal policies and applicable regulations. Monitor – The monitor phase includes performing monitoring activities to evaluate whether processes, initiatives, functions, and/or activities are mitigating the risk as designed. CCM GRC-05 (Information Security Program; Governance, Risk and Compliance): Develop and implement an Information Security Program, which includes programs for all the relevant domains of the CCM. GRC-06.1: Are roles and responsibilities for planning, implementing, operating, assessing, and improving governance programs defined and documented? Yes (CSP-owned). See the response to Question ID GRC-05.1. CCM GRC-06 (Governance Responsibility Model; Governance, Risk and Compliance): Define and document roles and responsibilities for planning, implementing, operating, assessing, and improving governance programs. GRC-07.1: Are all relevant standards, regulations, legal/contractual, and statutory requirements applicable to your organization identified and documented?
Yes (CSP-owned). AWS documents, tracks, and monitors its legal, regulatory, and contractual agreements and obligations. In order to do so, AWS performs and maintains the following activities: 1) Identifies and evaluates applicable laws and regulations for each of the jurisdictions in which AWS operates. 2) Documents and implements controls to help ensure its conformity with statutory, regulatory, and contractual requirements relevant to AWS. 3) Categorizes the sensitivity of information according to the AWS information security policies, to help protect it from loss, destruction, falsification, unauthorized access, and unauthorized release. 4) Informs and continually trains personnel who must be made aware of information security policies, to help protect sensitive AWS information. 5) Monitors for nonconformities to the information security policies, with a process in place to take corrective actions and enforce appropriate disciplinary action. AWS maintains relationships with internal and external parties to monitor legal, regulatory, and contractual requirements. Should a new security directive be issued, AWS creates and documents plans to implement the directive within a designated timeframe. AWS provides customers with evidence of its compliance with applicable legal, regulatory, and contractual requirements through audit reports, attestations, certifications, and other compliance enablers. Visit aws.amazon.com/artifact for information on how to review the AWS external attestation and assurance documentation. CCM GRC-07 (Information System Regulatory Mapping; Governance, Risk and Compliance): Identify and document all relevant standards, regulations, legal/contractual, and statutory requirements which are applicable to your organization. GRC-08.1: Is contact established and maintained with cloud-related special interest groups and other relevant entities?
Yes (CSP-owned). AWS personnel are part of special interest groups, including relevant external parties such as security groups. AWS personnel use these groups to improve their knowledge about security best practices and to stay up to date with relevant security information. CCM GRC-08 (Special Interest Groups; Governance, Risk and Compliance): Establish and maintain contact with cloud-related special interest groups and other relevant entities, in line with business context. HRS-01.1: Are background verification policies and procedures for all new employees (including but not limited to remote employees, contractors, and third parties) established, documented, approved, communicated, applied, evaluated, and maintained? Yes (CSP-owned). Where permitted by law, AWS requires that employees undergo a background screening at hiring, commensurate with their position and level of access (Control AWSCA92). AWS has a process to assess whether AWS employees who have access to resources that store or process customer data via permission groups are subject to a post-hire background check, as applicable with local law. AWS employees who have access to resources that store or process customer data will have a background check no less than once a year (Control AWSCA99). CCM HRS-01 (Background Screening Policy and Procedures; Human Resources): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for background verification of all new employees (including but not limited to remote employees, contractors, and third parties) according to local laws, regulations, ethics, and contractual constraints, and proportional to the data classification to be accessed, the business requirements, and acceptable risk. Review and update the policies and procedures at least annually. HRS-01.2: Are background verification policies and procedures designed according to local laws, regulations, ethics, and contractual constraints, and proportional to the data classification to be accessed, business requirements, and acceptable risk? Yes (CSP-owned). AWS conducts criminal background checks, as permitted by applicable law, as part of pre-employment screening practices for employees, commensurate with the employee's position and level of access to AWS facilities. The AWS SOC reports provide additional details regarding the controls in place for background verification. CCM HRS-01 (Background Screening Policy and Procedures; Human Resources): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for background verification of all new employees (including but not limited to remote employees, contractors, and third parties) according to local laws, regulations, ethics, and contractual constraints, and proportional to the data classification to be accessed, the business requirements, and acceptable risk. Review and update the policies and procedures at least annually. HRS-01.3: Are background verification policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM mapping: HRS-01 (see above).

HRS-02.1: Are policies and procedures for defining allowances and conditions for the acceptable use of organizationally owned or managed assets established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). AWS has implemented data handling and classification requirements that provide specifications around data encryption; content in transit and during storage; access; retention; physical controls; mobile devices; and data handling requirements. Employees are required to review and sign off on an employment contract, which acknowledges their responsibilities to overall company standards and information security.
CCM mapping: HRS-02, "Acceptable Use of Technology Policy and Procedures" (Human Resources) — Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for defining allowances and conditions for the acceptable use of organizationally owned or managed assets. Review and update the policies and procedures at least annually.

HRS-02.2: Are the policies and procedures for defining allowances and conditions for the acceptable use of organizationally owned or managed assets reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM mapping: HRS-02 (see above).

HRS-03.1: Are policies and procedures requiring unattended workspaces to conceal confidential data established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). AWS roles and responsibilities for maintaining a safe and secure working environment are reviewed by independent external auditors during audits for our SOC, PCI DSS, and ISO 27001 compliance.
CCM mapping: HRS-03, "Clean Desk Policy and Procedures" (Human Resources) — Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures that require unattended workspaces to not have openly visible confidential data. Review and update the policies and procedures at least annually.

HRS-03.2: Are policies and procedures requiring unattended workspaces to conceal confidential data reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM mapping: HRS-03 (see above).

HRS-04.1: Are policies and procedures to protect information accessed, processed, or stored at remote sites and locations established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (shared CSP and CSC). AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities, and management commitment. AWS employs the concept of least privilege, allowing only the necessary access for users to accomplish their job function. All access from remote devices to the AWS corporate environment is managed via VPN and MFA. The AWS production network is separated from the corporate network by multiple layers of security, documented in various control documents discussed in other sections of this response.
CCM mapping: HRS-04, "Remote and Home Working Policy and Procedures" (Human Resources) — Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures to protect information accessed, processed, or stored at remote sites and locations. Review and update the policies and procedures at least annually.

HRS-04.2: Are policies and procedures to protect information accessed, processed, or stored at remote sites and locations reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM mapping: HRS-04 (see above).

HRS-05.1: Are return procedures of organizationally owned assets by terminated employees established and documented?
Answer: Yes (CSP-owned). Upon termination of employment or contracts, AWS assets in the individual's possession are retrieved on the date of termination. In the case of immediate termination, the employee's or contractor's manager retrieves all AWS assets (e.g., authentication tokens, keys, badges) and escorts them out of the AWS facility.
CCM mapping: HRS-05, "Asset Returns" (Human Resources) — Establish and document procedures for the return of organization-owned assets by terminated employees.

HRS-06.1: Are procedures outlining the roles and responsibilities concerning changes in employment established, documented, and communicated to all personnel?
Answer: Yes (CSP-owned). The AWS Human Resources team defines internal management responsibilities to be followed for termination and role change of employees and vendors. The AWS SOC reports provide additional details.
CCM mapping: HRS-06, "Employment Termination" (Human Resources) — Establish, document, and communicate to all personnel the procedures outlining the roles and responsibilities concerning changes in employment.

HRS-07.1: Are employees required to sign an employment agreement before gaining access to organizational information systems, resources, and assets?
Answer: Yes (CSP-owned). Personnel supporting AWS systems and devices must sign a nondisclosure agreement prior to being granted access. Additionally, upon hire, personnel are required to read and accept the Acceptable Use Policy and the Amazon Code of Business Conduct and Ethics (Code of Conduct) Policy.
CCM mapping: HRS-07, "Employment Agreement Process" (Human Resources) — Employees sign the employee agreement prior to being granted access to organizational information systems, resources, and assets.

HRS-08.1: Are provisions and/or terms for adherence to established information governance and security policies included within employment agreements?
Answer: Yes (CSP-owned). In alignment with the ISO 27001 standard, AWS employees complete periodic role-based training that includes AWS security training and requires an acknowledgement to complete. Compliance audits are periodically performed to validate that employees understand and follow the established policies. Refer to the SOC reports for additional details.
CCM mapping: HRS-08, "Employment Agreement Content" (Human Resources) — The organization includes within the employment agreements provisions and/or terms for adherence to established information governance and security policies.

HRS-09.1: Are employee roles and responsibilities relating to information assets and security documented and communicated?
Answer: Yes (CSP-owned). AWS implements formal, documented policies and procedures that provide guidance for operations and information security within the organization and the supporting AWS environments. Policies address purpose, scope, roles, responsibilities, and management commitment. All policies are maintained in a centralized location that is accessible by employees.
CCM mapping: HRS-09, "Personnel Roles and Responsibilities" (Human Resources) — Document and communicate roles and responsibilities of employees as they relate to information assets and security.

HRS-10.1: Are requirements for non-disclosure/confidentiality agreements reflecting organizational data protection needs and operational details identified, documented, and reviewed at planned intervals?
Answer: Yes (CSP-owned). Amazon Legal Counsel manages and periodically revises the Amazon NDA to reflect AWS business needs.
CCM mapping: HRS-10, "Non-Disclosure Agreements" (Human Resources) — Identify, document, and review at planned intervals requirements for non-disclosure/confidentiality agreements reflecting the organization's needs for the protection of data and operational details.

HRS-11.1: Is a security awareness training program for all employees of the organization established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). In alignment with the ISO 27001 standard, all AWS employees complete periodic information security training, which requires an acknowledgement to complete. Compliance audits are periodically performed to validate that employees understand and follow the established policies. AWS roles and responsibilities are reviewed by independent external auditors during audits for our SOC, PCI DSS, and ISO 27001 compliance.
CCM mapping: HRS-11, "Security Awareness Training" (Human Resources) — Establish, document, approve, communicate, apply, evaluate, and maintain a security awareness training program for all employees of the organization, and provide regular training updates.

HRS-11.2: Are regular security awareness training updates provided?
Answer: Yes (CSP-owned). See the response to Question ID HRS-11.1.
CCM mapping: HRS-11 (see above).

HRS-12.1: Are all employees granted access to sensitive organizational and personal data provided with appropriate security awareness training?
Answer: Yes (CSP-owned). In alignment with the ISO 27001 standard, all AWS employees complete periodic information security training, which requires an acknowledgement to complete. Compliance audits are periodically performed to validate that employees understand and follow the established policies. AWS roles and responsibilities are reviewed by independent external auditors during audits for our SOC, PCI DSS, and ISO 27001 compliance.
CCM mapping: HRS-12, "Personal and Sensitive Data Awareness and Training" (Human Resources) — Provide all employees with access to sensitive organizational and personal data with appropriate security awareness training and regular updates in organizational procedures, processes, and policies relating to their professional function relative to the organization.

HRS-12.2: Are all employees granted access to sensitive organizational and personal data provided with regular updates in procedures, processes, and policies relating to their professional function?
Answer: Yes (CSP-owned). AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities, and management commitment. AWS employs the concept of least privilege, allowing only the necessary access for users to accomplish their job function. All access from remote devices to the AWS corporate environment is managed via VPN and MFA. The AWS production network is separated from the corporate network by multiple layers of security, documented in various control documents discussed in other sections of this response. Customers retain the control of and responsibility for their data and associated media assets. It is the responsibility of the customer to manage mobile security devices and the access to the customer's content.
CCM mapping: HRS-12 (see above).

HRS-13.1: Are employees notified of their roles and responsibilities to maintain awareness of and compliance with established policies, procedures, and applicable legal, statutory, or regulatory compliance obligations?
Answer: Yes (CSP-owned). AWS has implemented various methods of internal communication at a global level to help employees understand their individual roles and responsibilities and to communicate significant events in a timely manner. These methods include orientation and training programs for newly hired employees, as well as electronic mail messages and the posting of information via the Amazon intranet. Refer to the ISO 27001 standard, Annex A, domains 7 and 8. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM mapping: HRS-13, "Compliance User Responsibility" (Human Resources) — Make employees aware of their roles and responsibilities for maintaining awareness and compliance with established policies and procedures and applicable legal, statutory, or regulatory compliance obligations.

IAM-01.1: Are identity and access management policies and procedures established, documented, approved, communicated, implemented, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). In alignment with ISO 27001, AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities, and management commitment. Access control procedures are systematically enforced through proprietary tools. Refer to ISO 27001 Annex A, domain 9, for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM mapping: IAM-01, "Identity and Access Management Policy and Procedures" (Identity & Access Management) — Establish, document, approve, communicate, implement, apply, evaluate, and maintain policies and procedures for identity and access management. Review and update the policies and procedures at least annually.
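The access control responses above center on least privilege: grant only the access a role needs and let everything else fall through to IAM's implicit deny. As an illustrative customer-side (CSC) sketch — the bucket name, prefix, and helper function are hypothetical and not part of the CAIQ response — the snippet below builds an IAM-style policy document granting read-only access to a single S3 prefix:

```python
import json

def least_privilege_policy(bucket: str, prefix: str) -> dict:
    """Build an IAM-style identity policy that allows read-only access to
    one S3 prefix; any action not listed is implicitly denied by IAM."""
    return {
        "Version": "2012-10-17",  # IAM policy language version
        "Statement": [
            {
                "Sid": "ReadOnlySinglePrefix",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",            # for ListBucket
                    f"arn:aws:s3:::{bucket}/{prefix}/*",  # for GetObject
                ],
            }
        ],
    }

# Hypothetical bucket/prefix, used only to show the rendered document.
policy = least_privilege_policy("example-reports", "quarterly")
print(json.dumps(policy, indent=2))
```

Because IAM denies anything not explicitly allowed, omitting write actions such as s3:PutObject from the Action list is what makes this policy read-only; there is no need for an explicit Deny statement.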
IAM-01.2: Are identity and access management policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM mapping: IAM-01 (see above).

IAM-02.1: Are strong password policies and procedures established, documented, approved, communicated, implemented, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). AWS internal password policies and guidelines outline requirements for password strength and handling for passwords used to access internal systems. AWS Identity and Access Management (IAM) enables customers to securely control access to AWS services and resources for their users. Additional information about IAM can be found at https://aws.amazon.com/iam/. The AWS SOC reports provide details on the specific control activities executed by AWS.
CCM mapping: IAM-02, "Strong Password Policy and Procedures" (Identity & Access Management) — Establish, document, approve, communicate, implement, apply, evaluate, and maintain strong password policies and procedures. Review and update the policies and procedures at least annually.

IAM-02.2: Are strong password policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM mapping: IAM-02 (see above).

IAM-03.1: Is system identity information and levels of access managed, stored, and reviewed?
Answer: Yes (shared CSP and CSC). Amazon personnel with a business need to access the management plane are required to first use multifactor authentication, distinct from their normal corporate Amazon credentials, to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured, and hardened to protect the management plane. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems are revoked. AWS customers are responsible for access management within their AWS environments.
CCM mapping: IAM-03, "Identity Inventory" (Identity & Access Management) — Manage, store, and review the information of system identities and level of access.

IAM-04.1: Is the separation of duties principle employed when implementing information system access?
Answer: Yes (shared CSP and CSC). AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities, and management commitment. AWS employs the concept of least privilege, allowing only the necessary access for users to accomplish their job function. All access from remote devices to the AWS corporate environment is managed via VPN and MFA. The AWS production network is separated from the corporate network by multiple layers of security, documented in various control documents discussed in other sections of this response. Customers retain the ability to manage segregation of duties for their AWS resources. AWS best practices for identity and access management can be found at https://docs.aws.amazon.com/IAM/ (search for "AWS best practices for Identity & Access Management").
CCM mapping: IAM-04, "Separation of Duties" (Identity & Access Management) — Employ the separation of duties principle when implementing information system access.

IAM-05.1: Is the least privilege principle employed when implementing information system access?
Answer: Yes (CSP-owned). See the response to Question ID IAM-04.1.
CCM mapping: IAM-05, "Least Privilege" (Identity & Access Management) — Employ the least privilege principle when implementing information system access.

IAM-06.1: Is a user access provisioning process defined and implemented which authorizes, records, and communicates data and asset access changes?
Answer: Yes (CSP-owned). In alignment with ISO 27001, AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities, and management commitment. Access control procedures are systematically enforced through proprietary tools. Refer to ISO 27001 Annex A, domain 9, for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM mapping: IAM-06, "User Access Provisioning" (Identity & Access Management) — Define and implement a user access provisioning process which authorizes, records, and communicates access changes to data and assets.

IAM-07.1: Is a process in place to de-provision or modify, in a timely manner, the access of movers/leavers or system identity changes, to effectively adopt and communicate identity and access management policies?
Answer: Yes (CSP-owned). Access privilege reviews are triggered upon job and/or role transfers initiated from the HR system. IT access privileges are reviewed on a quarterly basis by appropriate personnel on a regular cadence. IT access from AWS systems is terminated within 24 hours of termination or deactivation. The AWS SOC reports provide further details on user access revocation. In addition, the AWS Security Whitepaper section "AWS Access" provides additional information. Refer to ISO 27001 Annex A, domain 9, for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM mapping: IAM-07, "User Access Changes and Revocation" (Identity & Access Management) — De-provision or respectively modify access of movers/leavers or system identity changes in a timely manner in order to effectively adopt and communicate identity and access management policies.
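The deprovisioning response above commits to terminating IT access within 24 hours of termination or deactivation. As a hedged illustration of how a customer might audit the same rule in their own environment — the record layout, function name, and sample data are invented for this sketch and are not an AWS API — the following flags identities whose access outlived the window:

```python
from datetime import datetime, timedelta

REVOCATION_WINDOW = timedelta(hours=24)  # per the stated 24-hour practice

def overdue_revocations(records):
    """Given (user, terminated_at, revoked_at) tuples, return users whose
    access was revoked later than the 24-hour window, or never revoked."""
    overdue = []
    for user, terminated_at, revoked_at in records:
        if revoked_at is None or revoked_at - terminated_at > REVOCATION_WINDOW:
            overdue.append(user)
    return overdue

# Hypothetical offboarding records for illustration.
records = [
    ("alice", datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 17, 0)),  # 8 h: fine
    ("bob",   datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 3, 9, 0)),   # 48 h: overdue
    ("carol", datetime(2024, 1, 1, 9, 0), None),                          # never revoked
]
print(overdue_revocations(records))  # -> ['bob', 'carol']
```

In practice the termination events would come from an HR system and the revocation timestamps from an access log; the check itself is just a deadline comparison, which is why it is easy to automate in a quarterly access review.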
IAM-08.1: Are reviews and revalidation of user access for least privilege and separation of duties completed with a frequency commensurate with organizational risk tolerance?
Answer: Yes (CSP-owned). Access privilege reviews are triggered upon job and/or role transfers initiated from the HR system. IT access privileges are reviewed on a quarterly basis by appropriate personnel on a regular cadence. IT access from AWS systems is terminated within 24 hours of termination or deactivation. The AWS SOC reports provide further details on user access revocation. In addition, the AWS Security Whitepaper section "AWS Access" provides additional information. Refer to ISO 27001 Annex A, domain 9, for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM mapping: IAM-08, "User Access Review" (Identity & Access Management) — Review and revalidate user access for least privilege and separation of duties with a frequency that is commensurate with organizational risk tolerance.

IAM-09.1: Are processes, procedures, and technical measures for the segregation of privileged access roles defined, implemented, and evaluated such that administrative data access, encryption key management capabilities, and logging capabilities are distinct and separate?
Answer: Yes (CSP-owned). AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities, and management commitment. AWS employs the concept of least privilege, allowing only the necessary access for users to accomplish their job function. All access from remote devices to the AWS corporate environment is managed via VPN and MFA. The AWS production network is separated from the corporate network by multiple layers of security, documented in various control documents discussed in other sections of this response. Customers retain the control of and responsibility for their data and associated media assets. It is the responsibility of the customer to manage mobile security devices and the access to the customer's content.
CCM mapping: IAM-09, "Segregation of Privileged Access Roles" (Identity & Access Management) — Define, implement, and evaluate processes, procedures, and technical measures for the segregation of privileged access roles such that administrative access to data, encryption and key management capabilities, and logging capabilities are distinct and separated.

IAM-10.1: Is an access process defined and implemented to ensure privileged access roles and rights are granted for a limited period?
Answer: Yes (CSP-owned). Amazon personnel with a business need to access the management plane are required to first use multifactor authentication, distinct from their normal corporate Amazon credentials, to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured, and hardened to protect the management plane. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems are revoked. Refer to the SOC 2 report for additional details.
CCM mapping: IAM-10, "Management of Privileged Access Roles" (Identity & Access Management) — Define and implement an access process to ensure privileged access roles and rights are granted for a time-limited period, and implement procedures to prevent the culmination of segregated privileged access.

IAM-10.2: Are procedures implemented to prevent the culmination of segregated privileged access?
Answer: Yes (CSP-owned). Access to AWS systems is allocated based on least privilege and approved by an authorized individual prior to access provisioning. Duties and areas of responsibility (for example, access request and approval; change management request and approval; change development, testing, and deployment; etc.) are segregated across different individuals to reduce opportunities for an unauthorized or unintentional modification or misuse of AWS systems. Group or shared accounts are not permitted within the system boundary.
CCM mapping: IAM-10 (see above).

IAM-11.1: Are processes and procedures for customers to participate, where applicable, in granting access for agreed, high-risk (as defined by the organizational risk assessment) privileged access roles defined, implemented, and evaluated?
Answer: No.
CCM mapping: IAM-11, "CSCs Approval for Agreed Privileged Access Roles" (Identity & Access Management) — Define, implement, and evaluate processes and procedures for customers to participate, where applicable, in the granting of access for agreed, high-risk (as defined by the organizational risk assessment) privileged access roles.

IAM-12.1: Are processes, procedures, and technical measures to ensure the logging infrastructure is "read-only" for all with write access (including privileged access roles) defined, implemented, and evaluated?
Answer: Yes (CSP-owned). AWS has identified auditable event categories across systems and devices within the AWS system. Service teams configure the auditing features to record the security-related events continuously, in accordance with requirements. The log storage system is designed to provide a highly scalable, highly available service that automatically increases capacity as the ensuing need for log storage grows. Audit records contain a set of data elements in order to support necessary analysis requirements. In addition, audit records are available for the AWS Security team or other appropriate teams to perform inspection or analysis on demand and in response to security-related or business-impacting events. Designated personnel on AWS teams receive automated alerts in the event of an audit processing failure. Audit processing failures include, for example, software/hardware errors. When alerted, on-call personnel issue a trouble ticket and track the event until it is resolved. AWS logging and monitoring processes are reviewed by independent third-party auditors for our continued compliance with SOC, PCI DSS, and ISO 27001.
CCM mapping: IAM-12, "Safeguard Logs Integrity" (Identity & Access Management) — Define, implement, and evaluate processes, procedures, and technical measures to ensure the logging infrastructure is read-only for all with write access, including privileged access roles, and that the ability to disable it is controlled through a procedure that ensures the segregation of duties and break-glass procedures.

IAM-12.2: Is the ability to disable the "read-only" configuration of logging infrastructure controlled through a procedure that ensures the segregation of duties and break-glass procedures?
Answer: Yes (CSP-owned). See the response to Question ID IAM-12.1.
CCM mapping: IAM-12 (see above).

IAM-13.1: Are processes, procedures, and technical measures that ensure users are identifiable through unique identification (or can associate individuals with user identification usage) defined, implemented, and evaluated?
Answer: Yes (CSP-owned). AWS controls access to systems through authentication that requires a unique user ID and password. AWS systems do not allow actions to be performed on the information system without identification or authentication. User access privileges are restricted based on business need and job responsibilities. AWS employs the concept of least privilege, allowing only the access necessary for users to accomplish their job function; new user accounts are created with minimal access. User access to AWS systems (for example, network, applications, tools) requires documented approval from authorized personnel (for example, the user's manager and/or the system owner) and validation of the active user in the HR system. Refer to the SOC 2 report for additional details.
CCM IAM-13 (Uniquely Identifiable Users; Identity & Access Management): Define, implement, and evaluate processes, procedures, and technical measures that ensure users are identifiable through unique IDs, or that can associate individuals with the usage of user IDs.

IAM 14.1: Are processes, procedures, and technical measures for authenticating access to systems, applications, and data assets (including multifactor authentication for at least privileged user and sensitive data access) defined, implemented, and evaluated?
Answer: Yes (shared, CSP and CSC). Amazon personnel with a business need to access the management plane are required to first use multifactor authentication, distinct from their normal corporate Amazon credentials, to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured, and hardened to protect the management plane. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems are revoked. Refer to the SOC 2 report for additional details.
CCM IAM-14 (Strong Authentication; Identity & Access Management): Define, implement, and evaluate processes, procedures, and technical measures for authenticating access to systems, applications, and data assets, including multifactor authentication for at least privileged user and sensitive data access. Adopt digital certificates or alternatives that achieve an equivalent level of security for system identities.

IAM 14.2: Are digital certificates, or alternatives that achieve an equivalent security level for system identities, adopted?
Answer: Yes (CSP-owned). AWS identity, directory, and access services enable you to add multifactor authentication (MFA) to your applications. (CCM IAM-14, as above.)

IAM 15.1: Are processes, procedures, and technical measures for the secure management of passwords defined, implemented, and evaluated?
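The multifactor authentication referenced in IAM 14.x is most often a time-based one-time password (TOTP) as standardized in RFC 6238. As a point of reference, the mechanism can be sketched in a few lines of standard-library Python; this is the generic RFC algorithm, not a description of any AWS-internal system:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((at if at is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890",
# time T=59 seconds, 8 digits -> "94287082"
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59, digits=8))  # → 94287082
```

A server that stores only the shared secret can verify a short-lived code generated on a separate device, which is what makes the factor "distinct from normal credentials" in the answer above.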
Answer: Yes (CSP-owned). AWS Identity and Access Management (IAM) enables customers to securely control access to AWS services and resources for their users. Additional information about IAM is available at https://aws.amazon.com/iam/. AWS SOC reports provide details on the specific control activities executed by AWS.
CCM IAM-15 (Passwords Management; Identity & Access Management): Define, implement, and evaluate processes, procedures, and technical measures for the secure management of passwords.

IAM 16.1: Are processes, procedures, and technical measures to verify that access to data and system functions is authorized defined, implemented, and evaluated?
Answer: Yes (shared, CSP and CSC). Controls in place limit access to systems and data, and provide that access to systems or data is restricted and monitored. In addition, customer data and server instances are logically isolated from other customers by default. Privileged user access controls are reviewed by an independent auditor during the AWS SOC, ISO 27001, and PCI audits. AWS customers retain control and ownership of their data. AWS has no insight into what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content and where it is stored, used, and protected from disclosure.
CCM IAM-16 (Authorization Mechanisms; Identity & Access Management): Define, implement, and evaluate processes, procedures, and technical measures to verify that access to data and system functions is authorized.

IPY 01.1: Are policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained for communications between application services (e.g., APIs)?
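The authorization posture described for IAM-16 (deny by default, explicit allows, explicit denies winning) can be illustrated with a deliberately simplified evaluator. This is a hypothetical teaching sketch; real IAM policy evaluation involves conditions, wildcards, principals, and cross-account logic far beyond this:

```python
# Simplified deny-by-default authorization check. Policy statements,
# field names, and the evaluation order shown here are illustrative only.
def is_allowed(policies, action, resource):
    allowed = False
    for stmt in policies:
        if action in stmt["actions"] and resource in stmt["resources"]:
            if stmt["effect"] == "Deny":
                return False   # an explicit deny always wins
            allowed = True     # remember the explicit allow
    return allowed             # no matching allow: default deny

policy = [
    {"effect": "Allow",
     "actions": ["s3:GetObject"],
     "resources": ["arn:aws:s3:::reports/summary.csv"]},
]
assert is_allowed(policy, "s3:GetObject", "arn:aws:s3:::reports/summary.csv")
assert not is_allowed(policy, "s3:PutObject", "arn:aws:s3:::reports/summary.csv")
```

The two properties worth noting are that nothing is permitted unless a statement grants it, and that a deny statement overrides any allow, which together implement the least-privilege stance the answers above describe.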
Answer: Yes (CSP-owned). Details regarding AWS APIs can be found at https://aws.amazon.com/documentation/.
CCM IPY-01 (Interoperability and Portability Policy and Procedures; Interoperability & Portability): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for interoperability and portability, including requirements for: (a) communications between application interfaces; (b) information processing interoperability; (c) application development portability; (d) information/data exchange, usage, portability, integrity, and persistence. Review and update the policies and procedures at least annually.

IPY 01.2: Are policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained for information processing interoperability?
Answer: Yes (CSP-owned). Details regarding the interoperability of each AWS service can be found at https://aws.amazon.com/documentation/. (CCM IPY-01, as above.)

IPY 01.3: Are policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained for application development portability?
Answer: Yes (CSP-owned). Details regarding the interoperability of each AWS service can be found at https://aws.amazon.com/documentation/. (CCM IPY-01, Interoperability and Portability Policy and Procedures; Interoperability & Portability, as above.)

IPY 01.4: Are policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained for information/data exchange, usage, portability, integrity, and persistence?
Answer: Yes (CSP-owned). Details regarding the interoperability of each AWS service can be found at https://aws.amazon.com/documentation/. (CCM IPY-01, as above.)

IPY 01.5: Are interoperability and portability policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis. (CCM IPY-01, Interoperability and Portability Policy and Procedures; Interoperability & Portability, as above.)

IPY 02.1: Are CSCs able to programmatically retrieve their data via an application interface(s) to enable interoperability and portability?
Answer: Yes (CSC-owned). Details regarding the interoperability of each AWS service can be found at https://aws.amazon.com/documentation/.
CCM IPY-02 (Application Interface Availability; Interoperability & Portability): Provide application interface(s) to CSCs so that they can programmatically retrieve their data, to enable interoperability and portability.

IPY 03.1: Are cryptographically secure and standardized network protocols implemented for the management, import, and export of data?
Answer: Yes (CSP-owned). AWS APIs and the AWS Management Console are available via TLS-protected endpoints, which provide server authentication. Customers can use TLS for all of their interactions with AWS. AWS recommends that customers use secure protocols that offer authentication and confidentiality, such as TLS or IPsec, to reduce the risk of data tampering or loss. AWS enables customers to open a secure, encrypted session to AWS servers using HTTPS (Transport Layer Security [TLS]).
CCM IPY-03 (Secure Interoperability and Portability Management; Interoperability & Portability): Implement cryptographically secure and standardized network protocols for the management, import, and export of data.

IPY 04.1: Do agreements include provisions specifying CSC data access upon contract termination, and do they include the following? (a) data format; (b) duration data will be stored; (c) scope of the data retained and made available to the CSCs; (d) data deletion policy.
Answer: Yes (shared, CSP and CSC). AWS customer agreements include data-related provisions upon termination. Details regarding contract termination can be found in the example customer agreement (see Section 7, "Term; Termination") at https://aws.amazon.com/agreement/.
CCM IPY-04 (Data Portability Contractual Obligations; Interoperability & Portability): Agreements must include provisions specifying CSCs' access to data upon contract termination, and will include: (a) data format; (b) length of time the data will be stored; (c) scope of the data retained and made available to the CSCs; (d) data deletion policy.

IVS 01.1: Are infrastructure and virtualization security policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained?
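On the client side, the TLS posture recommended in the IPY 03.1 answer (server authentication, certificate and hostname verification, no legacy protocol versions) maps directly onto Python's standard-library `ssl` defaults. A minimal sketch, with no AWS-specific code involved:

```python
import ssl

# A client-side TLS context with certificate and hostname verification
# enabled by default, which is the posture the answer above recommends.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols

# An HTTPS client would then wrap its socket (hostname is illustrative):
#   with socket.create_connection(("example.com", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#           ...  # authenticated, encrypted channel
```

`create_default_context()` already requires a valid certificate chain and a matching hostname; pinning a minimum protocol version is the one extra step shown here.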
Answer: Yes (CSP-owned). AWS implements formal, documented policies and procedures that provide guidance for operations and information security within the organization and the supporting AWS environments. Policies address purpose, scope, roles, responsibilities, and management commitment. All policies are maintained in a centralized location that is accessible by employees.
CCM IVS-01 (Infrastructure and Virtualization Security Policy and Procedures; Infrastructure & Virtualization Security): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for infrastructure and virtualization security. Review and update the policies and procedures at least annually.

IVS 01.2: Are infrastructure and virtualization security policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis. (CCM IVS-01, as above.)

IVS 02.1: Is resource availability, quality, and capacity planned and monitored in a way that delivers required system performance as determined by the business?
Answer: Yes (shared, CSP and CSC). AWS maintains a capacity planning model to assess infrastructure usage and demands at least monthly, and usually more frequently (e.g., weekly). In addition, the AWS capacity planning model supports the planning of future demands, in order to acquire and implement additional resources based upon current resources and forecasted requirements.
CCM IVS-02 (Capacity and Resource Planning; Infrastructure & Virtualization Security): Plan and monitor the availability, quality, and adequate capacity of resources in order to deliver the required system performance as determined by the business.

IVS 03.1: Are communications between environments monitored?
Answer: Yes (shared, CSP and CSC). Monitoring and alarming are configured by service owners to identify and notify operational and management personnel of incidents when early-warning thresholds are crossed on key operational metrics.
CCM IVS-03 (Network Security; Infrastructure & Virtualization Security): Monitor, encrypt, and restrict communications between environments to only authenticated and authorized connections, as justified by the business. Review these configurations at least annually, and support them by a documented justification of all allowed services, protocols, ports, and compensating controls.

IVS 03.2: Are communications between environments encrypted?
Answer: NA (CSC-owned). AWS APIs are available via TLS-protected endpoints, which provide server authentication. Customers can use TLS for all of their interactions with AWS and within their multiple environments. AWS provides open encryption methodologies and enables customers to encrypt and authenticate all traffic, and to enforce the latest standards and ciphers. (CCM IVS-03, Network Security; Infrastructure & Virtualization Security, as above.)

IVS 03.3: Are communications between environments restricted to only authenticated and authorized connections, as justified by the business?
Answer: Yes (shared, CSP and CSC). AWS implements least privilege throughout its infrastructure components. AWS prohibits all ports and protocols that do not have a specific business purpose, and follows a rigorous approach of minimally implementing only those features and functions that are essential to the use of the device. Network scanning is performed, and any unnecessary ports or protocols in use are corrected. Customers maintain information related to their data and individual architecture; customers retain control of, and responsibility for, their data and associated media assets. It is the responsibility of the customer to manage their AWS environments and associated access. (CCM IVS-03, as above.)

IVS 03.4: Are network configurations reviewed at least annually?
Answer: Yes (shared, CSP and CSC). Regular internal and external vulnerability scans are performed on the host operating system, web applications, and databases in the AWS environment, utilizing a variety of tools. Vulnerability scanning and remediation practices are regularly reviewed as part of AWS's continued compliance with PCI DSS and ISO 27001. AWS customers are responsible for configuration management within their AWS environments. (CCM IVS-03, as above.)

IVS 03.5: Are network configurations supported by the documented justification of all allowed services, protocols, ports, and compensating controls?
Answer: Yes (shared, CSP and CSC). AWS implements least privilege throughout its infrastructure components and prohibits all ports and protocols that do not have a specific business purpose, minimally implementing only those features and functions essential to the use of the device. Network scanning is performed, and any unnecessary ports or protocols in use are corrected. Customers maintain information related to their data and individual architecture, and are responsible for network management within their AWS environments. (CCM IVS-03, Network Security; Infrastructure & Virtualization Security, as above.)

IVS 04.1: Is every host and guest OS, hypervisor, or infrastructure control plane hardened (according to their respective best practices) and supported by technical controls as part of a security baseline?
Answer: Yes (shared, CSP and CSC). Regular internal and external vulnerability scans are performed on the host operating system, web applications, and databases in the AWS environment, utilizing a variety of tools. Vulnerability scanning and remediation practices are regularly reviewed as part of AWS's continued compliance with PCI DSS and ISO 27001. AWS customers are responsible for server and system management within their AWS environments.
CCM IVS-04 (OS Hardening and Base Controls; Infrastructure & Virtualization Security): Harden host and guest OS, hypervisor, or infrastructure control plane according to their respective best practices, supported by technical controls, as part of a security baseline.

IVS 05.1: Are production and non-production environments separated?
Answer: Yes (CSP-owned). The development, test, and production environments emulate the production system environment and are used to properly assess and prepare for the impact of a change to the production system environment. In order to reduce the risks of unauthorized access to, or change of, the production environment, the development, test, and production environments are logically separated.
CCM IVS-05 (Production and Non-Production Environments; Infrastructure & Virtualization Security): Separate production and non-production environments.

IVS 06.1: Are applications and infrastructures designed, developed, deployed, and configured such that CSP and CSC (tenant) user access and intra-tenant access is appropriately segmented, segregated, monitored, and restricted from other tenants?
Answer: Yes (CSP-owned). Customer environments are logically segregated to prevent users and customers from accessing resources not assigned to them. Customers maintain full control over who has access to their data. Services that provide virtualized operational environments to customers (i.e., Amazon EC2) ensure that customers are segregated from one another, and prevent cross-tenant privilege escalation and information disclosure, via hypervisors and instance isolation. Different instances running on the same physical machine are isolated from each other via the hypervisor. In addition, the Amazon EC2 firewall resides within the hypervisor layer, between the physical network interface and the instance's virtual interface. All packets must pass through this layer; thus, an instance's neighbors have no more access to that instance than any other host on the Internet, and can be treated as if they are on separate physical hosts. The physical random-access memory (RAM) is separated using similar mechanisms.
CCM IVS-06 (Segmentation and Segregation; Infrastructure & Virtualization Security): Design, develop, deploy, and configure applications and infrastructures such that CSP and CSC (tenant) user access and intra-tenant access is appropriately segmented, segregated, monitored, and restricted from other tenants.

IVS 07.1: Are secure and encrypted communication channels, including only up-to-date and approved protocols, used when migrating servers, services, applications, or data to cloud environments?
Answer: Yes (CSC-owned). AWS offers a wide variety of services and partner tools to help customers migrate data securely. AWS migration services such as AWS Database Migration Service and AWS Snowmobile are integrated with AWS KMS for encryption. Learn more about AWS cloud migration services at https://aws.amazon.com/cloud-data-migration/.
CCM IVS-07 (Migration to Cloud Environments; Infrastructure & Virtualization Security): Use secure and encrypted communication channels when migrating servers, services, applications, or data to cloud environments. Such channels must include only up-to-date and approved protocols.

IVS 08.1: Are high-risk environments identified and documented?
Answer: NA (CSC-owned). AWS customers retain responsibility for managing their own network segmentation in adherence with their defined requirements. Internally, AWS network segmentation is aligned with the ISO 27001 standard; AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM IVS-08 (Network Architecture Documentation; Infrastructure & Virtualization Security): Identify and document high-risk environments.

IVS 09.1: Are processes, procedures, and defense-in-depth techniques defined, implemented, and evaluated for protection, detection, and timely response to network-based attacks?
Answer: Yes (CSP-owned). AWS Security regularly scans all Internet-facing service endpoint IP addresses for vulnerabilities (these scans do not include customer instances) and notifies the appropriate parties to remediate any identified vulnerabilities. In addition, external vulnerability threat assessments are performed regularly by independent security firms; findings and recommendations resulting from these assessments are categorized and delivered to AWS leadership. The AWS control environment is also subject to regular internal and external risk assessments. AWS engages with external certifying bodies and independent auditors to review and test the AWS overall control environment, and AWS security controls are reviewed by independent external auditors during audits for our SOC, PCI DSS, and ISO 27001 compliance.
CCM IVS-09 (Network Defense; Infrastructure & Virtualization Security): Define, implement, and evaluate processes, procedures, and defense-in-depth techniques for protection, detection, and timely response to network-based attacks.

LOG 01.1: Are logging and monitoring policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). AWS implements formal, documented policies and procedures that provide guidance for operations and information security within the organization and the supporting AWS environments. Policies address purpose, scope, roles, responsibilities, and management commitment. All policies are maintained in a centralized location that is accessible by employees.
CCM LOG-01 (Logging and Monitoring Policy and Procedures; Logging and Monitoring): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for logging and monitoring. Review and update the policies and procedures at least annually.

LOG 01.2: Are policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis. (CCM LOG-01, as above.)

LOG 02.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to ensure audit log security and retention?
Answer: Yes (CSP-owned). In alignment with ISO 27001 standards, audit logs are appropriately restricted and monitored. AWS SOC reports provide details on the specific control activities executed by AWS. Refer to "AWS: Overview of Security Processes" for additional details, available at http://aws.amazon.com/security/security-learning/.
CCM LOG-02 (Audit Logs Protection; Logging and Monitoring): Define, implement, and evaluate processes, procedures, and technical measures to ensure the security and retention of audit logs.

LOG 03.1: Are security-related events identified and monitored within applications and the underlying infrastructure?
Answer: NA (CSC-owned). This is a customer responsibility; AWS customers are responsible for the applications within their AWS environment.
CCM LOG-03 (Security Monitoring and Alerting; Logging and Monitoring): Identify and monitor security-related events within applications and the underlying infrastructure. Define and implement a system to generate alerts to responsible stakeholders based on such events and corresponding metrics.

LOG 03.2: Is a system defined and implemented to generate alerts to responsible stakeholders based on security events and their corresponding metrics?
Answer: Yes (shared, CSP and CSC). AWS security metrics are monitored and analyzed in accordance with the ISO 27001 standard; refer to ISO 27001 Annex A, domain 16 for further details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. AWS customers are responsible for incident management within their AWS environments. (CCM LOG-03, Security Monitoring and Alerting; Logging and Monitoring, as above.)

LOG 04.1: Is access to audit logs restricted to authorized personnel, and are records maintained to provide unique access accountability?
Answer: Yes (CSP-owned). In alignment with ISO 27001 standards, audit logs are appropriately restricted and monitored. AWS SOC reports provide details on the specific control activities executed by AWS. Refer to "AWS: Overview of Security Processes," available at http://aws.amazon.com/security/security-learning/.
CCM LOG-04 (Audit Logs Access and Accountability; Logging and Monitoring): Restrict audit log access to authorized personnel, and maintain records that provide unique access accountability.

LOG 05.1: Are security audit logs monitored to detect activity outside of typical or expected patterns?
Answer: Yes (CSP-owned). AWS provides near-real-time alerts when AWS monitoring tools show indications of compromise or potential compromise, based upon threshold alarming mechanisms determined by AWS service and security teams. AWS correlates information gained from logical and physical monitoring systems to enhance security on an as-needed basis. Upon assessment and discovery of risk, Amazon disables accounts that display atypical usage matching the characteristics of bad actors. The AWS Security team extracts all log messages related to system access and provides reports to designated officials. Log analysis is performed to identify events based on defined risk-management parameters.
CCM LOG-05 (Audit Logs Monitoring and Response; Logging and Monitoring): Monitor security audit logs to detect activity outside of typical or expected patterns. Establish and follow a defined process to review and take appropriate and timely actions on detected anomalies.

LOG 05.2: Is a process established and followed to review and take appropriate and timely actions on detected anomalies?
Answer: Yes (CSP-owned). See the response to question LOG 05.1. (CCM LOG-05, as above.)

LOG 06.1: Is a reliable time source being used across all relevant information processing systems?
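The threshold-alarming pattern described for LOG 05.1 (alert when an event count crosses a limit defined per event type) can be sketched generically. The event types, thresholds, and field names below are hypothetical, chosen only to illustrate the technique:

```python
from collections import Counter

# Hypothetical per-event-type alert thresholds within one analysis window.
THRESHOLDS = {"auth_failure": 5, "privilege_escalation": 1}

def detect_anomalies(events):
    """Flag (event type, principal) pairs whose count crosses a threshold."""
    counts = Counter((e["type"], e["principal"]) for e in events)
    return [{"type": t, "principal": p, "count": n}
            for (t, p), n in counts.items()
            if n >= THRESHOLDS.get(t, float("inf"))]  # unknown types never alert

events = ([{"type": "auth_failure", "principal": "bob"}] * 6
          + [{"type": "auth_failure", "principal": "alice"}])
alerts = detect_anomalies(events)
assert alerts == [{"type": "auth_failure", "principal": "bob", "count": 6}]
```

A production pipeline would add time windows, baselining, and routing of alerts to on-call responders, but the core logic of comparing observed counts against expected patterns is the same.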
Answer: Yes (CSP-owned). In alignment with ISO 27001 standards, AWS information systems utilize internal system clocks synchronized via NTP (Network Time Protocol). AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM LOG-06 (Clock Synchronization; Logging and Monitoring): Use a reliable time source across all relevant information processing systems.

LOG 07.1: Are logging requirements for information (meta/data, system events) established, documented, and implemented?
Answer: Yes (CSP-owned). AWS has identified auditable event categories across systems and devices within the AWS system. Service teams configure auditing features to continuously record security-related events in accordance with requirements. The log storage system is designed to provide a highly scalable, highly available service that automatically increases capacity as the need for log storage grows. Audit records contain a set of data elements that support the necessary analysis requirements, and are available to the AWS Security team and other appropriate teams for inspection or analysis on demand and in response to security-related or business-impacting events. Designated personnel on AWS teams receive automated alerts in the event of an audit processing failure (for example, software or hardware errors); when alerted, on-call personnel issue a trouble ticket and track the event until it is resolved. AWS logging and monitoring processes are reviewed by independent third-party auditors for our continued compliance with SOC, PCI DSS, and ISO 27001.
CCM LOG-07 (Logging Scope; Logging and Monitoring): Establish, document, and implement which information (meta/data, system events) should be logged. Review and update the scope at least annually, or whenever there is a change in the threat environment.

LOG 07.2: Is the scope reviewed and updated at least annually, or whenever there is a change in the threat environment?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis. (CCM LOG-07, as above.)

LOG 08.1: Are audit records generated, and do they contain relevant security information?
Answer: Yes (CSP-owned). As noted for LOG 07.1 above, AWS has identified auditable event categories across systems and devices; service teams continuously record security-related events, the log storage system scales automatically, and audit records contain the data elements needed to support analysis and are available to the AWS Security team and other appropriate teams on demand and in response to security-related or business-impacting events.
CCM LOG-08 (Log Records; Logging and Monitoring): Generate audit records containing relevant security information.

LOG 09.1: Does the information system protect audit records from unauthorized access, modification, and deletion?
Answer: Yes (CSP-owned). In alignment with ISO 27001 standards, audit logs are appropriately restricted and monitored. AWS SOC reports provide details on the specific control activities executed by AWS. For additional details, refer to AWS: Overview of Security Processes, available at http://aws.amazon.com/security/security-learning/.
CCM LOG-09, Log Protection (Logging and Monitoring): The information system protects audit records from unauthorized access, modification, and deletion.

LOG-10.1: Are monitoring and internal reporting capabilities established to report on cryptographic operations, encryption, and key management policies, processes, procedures, and controls?
Answer: Yes (shared, CSP and CSC). AWS has identified auditable event categories across systems and devices within the AWS system. Service teams configure the auditing features to continuously record security-related events in accordance with requirements. The log storage system is designed to provide a highly scalable, highly available service that automatically increases capacity as the need for log storage grows. Audit records contain a set of data elements in order to support necessary analysis requirements. In addition, audit records are available for the AWS Security team or other appropriate teams to inspect or analyze on demand and in response to security-related or business-impacting events. Designated personnel on AWS teams receive automated alerts in the event of an audit processing failure; audit processing failures include, for example, software or hardware errors. When alerted, on-call personnel issue a trouble ticket and track the event until it is resolved. AWS logging and monitoring processes are reviewed by independent third-party auditors for continued compliance with SOC, PCI DSS, and ISO 27001. AWS customers are responsible for key management within their AWS environments.
CCM LOG-10, Encryption Monitoring and Reporting (Logging and Monitoring): Establish and maintain a monitoring and internal reporting capability over the operations of cryptographic, encryption, and key management policies, processes, procedures, and controls.

LOG-11.1: Are key lifecycle management events logged and monitored to enable auditing and reporting on cryptographic keys' usage?
Answer: NA (CSC-owned). This is a customer responsibility.
CCM LOG-11, Transaction/Activity Logging (Logging and Monitoring): Log and monitor key lifecycle management events to enable auditing and reporting on usage of cryptographic keys.

LOG-12.1: Is physical access logged and monitored using an auditable access control system?
Answer: Yes (CSP-owned). Access to data centers is logged. Only authorized users are allowed into data centers. Visitors follow the visitor access process, and their relevant details, along with the business purpose, are logged in the data center access log system. The access log is retained for 90 days unless longer retention is legally required.
CCM LOG-12, Access Control Logs (Logging and Monitoring): Monitor and log physical access using an auditable access control system.

LOG-13.1: Are processes and technical measures for reporting monitoring system anomalies and failures defined, implemented, and evaluated?
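Because key-lifecycle logging is a customer (CSC) responsibility, a customer might filter its own audit trail for key-management events. The sketch below runs over hypothetical CloudTrail-style records; the field names and values are illustrative only, and a real deployment would query the customer's actual audit-log service for key-management events.

```python
def key_lifecycle_events(records):
    """Return audit records for key-management operations, keeping
    only the fields needed for a usage report."""
    lifecycle_ops = {"CreateKey", "RotateKey", "ScheduleKeyDeletion", "DisableKey"}
    return [
        {"time": r["eventTime"], "op": r["eventName"], "user": r["userIdentity"]}
        for r in records
        if r.get("eventSource") == "kms.amazonaws.com"
        and r.get("eventName") in lifecycle_ops
    ]

# Hypothetical audit records (not real CloudTrail output).
records = [
    {"eventTime": "2023-01-01T00:00:00Z", "eventSource": "kms.amazonaws.com",
     "eventName": "CreateKey", "userIdentity": "alice"},
    {"eventTime": "2023-01-02T00:00:00Z", "eventSource": "ec2.amazonaws.com",
     "eventName": "RunInstances", "userIdentity": "bob"},
]

report = key_lifecycle_events(records)  # keeps only the CreateKey record
```

Feeding such a report into periodic review satisfies the "enable auditing and reporting" intent of LOG-11, whatever logging backend the customer actually uses.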
Answer: Yes (CSP-owned). In alignment with ISO 27001 standards, audit logs are appropriately restricted and monitored. AWS SOC reports provide details on the specific control activities executed by AWS. For additional details, refer to AWS: Overview of Security Processes, available at http://aws.amazon.com/security/security-learning/.
CCM LOG-13, Failures and Anomalies Reporting (Logging and Monitoring): Define, implement, and evaluate processes, procedures, and technical measures for the reporting of anomalies and failures of the monitoring system, and provide immediate notification to the accountable party.

LOG-13.2: Are accountable parties immediately notified about anomalies and failures?
Answer: Yes (CSP-owned). AWS provides near-real-time alerts when AWS monitoring tools show indications of compromise or potential compromise, based on threshold alarming mechanisms determined by AWS service and security teams. AWS correlates information gained from logical and physical monitoring systems to enhance security on an as-needed basis. Upon assessment and discovery of risk, Amazon disables accounts that display atypical usage matching the characteristics of bad actors. The AWS Security team extracts all log messages related to system access and provides reports to designated officials. Log analysis is performed to identify events based on defined risk-management parameters.
CCM LOG-13, Failures and Anomalies Reporting (Logging and Monitoring): Define, implement, and evaluate processes, procedures, and technical measures for the reporting of anomalies and failures of the monitoring system, and provide immediate notification to the accountable party.

SEF-01.1: Are policies and procedures for security incident management, e-discovery, and cloud forensics established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). AWS's incident response program, plans, and procedures have been developed in alignment with the ISO 27001 standard. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. In addition, the AWS: Overview of Security Processes whitepaper provides further details, available at http://aws.amazon.com/security/security-learning/.
CCM SEF-01, Security Incident Management Policy and Procedures (Security Incident Management, E-Discovery, & Cloud Forensics): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for security incident management, e-discovery, and cloud forensics. Review and update the policies and procedures at least annually.

SEF-01.2: Are policies and procedures reviewed and updated annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM SEF-01, Security Incident Management Policy and Procedures (Security Incident Management, E-Discovery, & Cloud Forensics): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for security incident management, e-discovery, and cloud forensics. Review and update the policies and procedures at least annually.

SEF-02.1: Are policies and procedures for timely management of security incidents established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). See the response to question SEF-01.1.
CCM SEF-02, Service Management Policy and Procedures (Security Incident Management, E-Discovery, & Cloud Forensics): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the timely management of security incidents. Review and update the policies and procedures at least annually.

SEF-02.2: Are policies and procedures for timely management of security incidents reviewed and updated at least annually?
Answer: Yes (CSP-owned). See the response to question SEF-01.2.
CCM SEF-02, Service Management Policy and Procedures (Security Incident Management, E-Discovery, & Cloud Forensics): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the timely management of security incidents. Review and update the policies and procedures at least annually.

SEF-03.1: Is a security incident response plan that includes relevant internal departments, impacted CSCs, and other business-critical relationships (such as supply chain) established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). See the response to question SEF-01.1.
CCM SEF-03, Incident Response Plans (Security Incident Management, E-Discovery, & Cloud Forensics): Establish, document, approve, communicate, apply, evaluate, and maintain a security incident response plan which includes, but is not limited to, relevant internal departments, impacted CSCs, and other business-critical relationships (such as supply chain) that may be impacted.

SEF-04.1: Is the security incident response plan tested and updated for effectiveness, as necessary, at planned intervals or upon significant organizational or environmental changes?
Answer: Yes (CSP-owned). AWS incident response plans are tested at least annually.
CCM SEF-04, Incident Response Testing (Security Incident Management, E-Discovery, & Cloud Forensics): Test and update as necessary incident response plans at planned intervals or upon significant organizational or environmental changes for effectiveness.

SEF-05.1: Are information security incident metrics established and monitored?
Answer: Yes (CSP-owned). AWS security metrics are monitored and analyzed in accordance with the ISO 27001 standard. Refer to ISO 27001 Annex A, domain 16 for further details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM SEF-05, Incident Response Metrics (Security Incident Management, E-Discovery, & Cloud Forensics): Establish and monitor information security incident metrics.

SEF-06.1: Are processes, procedures, and technical measures supporting business processes to triage security-related events defined, implemented, and evaluated?
Answer: Yes (CSP-owned). AWS's incident response program, plans, and procedures have been developed in alignment with the ISO 27001 standard. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. In addition, the AWS: Overview of Security Processes whitepaper provides further details, available at http://aws.amazon.com/security/security-learning/.
CCM SEF-06, Event Triage Processes (Security Incident Management, E-Discovery, & Cloud Forensics): Define, implement, and evaluate processes, procedures, and technical measures supporting business processes to triage security-related events.

SEF-07.1: Are processes, procedures, and technical measures for security breach notifications defined and implemented?
Answer: Yes (CSP-owned). AWS employees are trained on how to recognize suspected security incidents and where to report them. When appropriate, incidents are reported to relevant authorities. AWS maintains the AWS Security Bulletin webpage, located at https://aws.amazon.com/security/security-bulletins/, to notify customers of security and privacy events affecting AWS services. Customers can subscribe to the Security Bulletin RSS feed to keep abreast of security announcements on the Security Bulletin webpage. The customer support team maintains a Service Health Dashboard webpage, located at http://status.aws.amazon.com/, to alert customers to any broadly impacting availability issues.
CCM SEF-07, Security Breach Notification (Security Incident Management, E-Discovery, & Cloud Forensics): Define and implement processes, procedures, and technical measures for security breach notifications. Report security breaches and assumed security breaches, including any relevant supply chain breaches, as per applicable SLAs, laws, and regulations.

SEF-07.2: Are security breaches and assumed security breaches reported (including any relevant supply chain breaches) as per applicable SLAs, laws, and regulations?
Answer: Yes (CSP-owned). AWS maintains the AWS Security Bulletin webpage, located at https://aws.amazon.com/security/security-bulletins/, to notify customers of security and privacy events affecting AWS services. Customers can subscribe to the Security Bulletin RSS feed to keep abreast of security announcements on the Security Bulletin webpage. The customer support team maintains a Service Health Dashboard webpage, located at http://status.aws.amazon.com/, to alert customers to any broadly impacting availability issues.
CCM SEF-07, Security Breach Notification (Security Incident Management, E-Discovery, & Cloud Forensics): Define and implement processes, procedures, and technical measures for security breach notifications. Report security breaches and assumed security breaches, including any relevant supply chain breaches, as per applicable SLAs, laws, and regulations.

SEF-08.1: Are points of contact maintained for applicable regulation authorities, national and local law enforcement, and other legal jurisdictional authorities?
Answer: Yes (CSP-owned). AWS maintains contacts with industry bodies, risk and compliance organizations, local authorities, and regulatory bodies as required by the ISO 27001 standard. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM SEF-08, Points of Contact Maintenance (Security Incident Management, E-Discovery, & Cloud Forensics): Maintain points of contact for applicable regulation authorities, national and local law enforcement, and other legal jurisdictional authorities.

STA-01.1: Are policies and procedures implementing the shared security responsibility model (SSRM) within the organization established, documented, approved, communicated, applied, evaluated, and maintained?
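Customers who want to consume the Security Bulletin RSS feed programmatically can parse it as an ordinary RSS 2.0 document with the standard library. The feed content below is a made-up sample for illustration, not an actual AWS bulletin; a real consumer would fetch the feed URL over HTTP first.

```python
import xml.etree.ElementTree as ET

# Hypothetical RSS 2.0 feed body, stands in for a fetched bulletin feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Security Bulletins</title>
  <item><title>Example advisory</title><link>https://example.com/a</link></item>
</channel></rss>"""

def bulletin_titles(feed_xml):
    """Extract the title of each bulletin item from an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

titles = bulletin_titles(SAMPLE_FEED)  # ["Example advisory"]
```

Polling the feed and diffing titles against a stored list is a simple way to turn bulletin publication into an internal notification.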
Answer: Yes (CSP-owned). Security and compliance is a shared responsibility between AWS and the customer. The shared model can help relieve the customer's operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. Refer to the shared responsibility model at https://aws.amazon.com/compliance/shared-responsibility-model/.
CCM STA-01, SSRM Policy and Procedures (Supply Chain Management, Transparency, and Accountability): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the application of the Shared Security Responsibility Model (SSRM) within the organization. Review and update the policies and procedures at least annually.

STA-01.2: Are the policies and procedures that apply the SSRM reviewed and updated annually?
Answer: Yes (CSP-owned). Security and compliance is a shared responsibility between AWS and the customer. AWS Information Security Management System policies that are in scope for the SSRM are reviewed and updated annually and as necessary. The shared model can help relieve the customer's operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. Refer to the shared responsibility model at https://aws.amazon.com/compliance/shared-responsibility-model/.
CCM STA-01, SSRM Policy and Procedures (Supply Chain Management, Transparency, and Accountability): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the application of the Shared Security Responsibility Model (SSRM) within the organization. Review and update the policies and procedures at least annually.

STA-02.1: Is the SSRM applied, documented, implemented, and managed throughout the supply chain for the cloud service offering?
Answer: NA (CSP-owned). AWS proactively informs customers of any subcontractors who have access to customer-owned content uploaded onto AWS, including content that may contain personal data. There are no subcontractors authorized by AWS to access any customer-owned content that customers upload onto AWS. To monitor subcontractor access year-round, refer to https://aws.amazon.com/compliance/third-party-access/.
CCM STA-02, SSRM Supply Chain (Supply Chain Management, Transparency, and Accountability): Apply, document, implement, and manage the SSRM throughout the supply chain for the cloud service offering.

STA-03.1: Is the CSC given SSRM guidance detailing information about SSRM applicability throughout the supply chain?
Answer: NA (CSP-owned). AWS proactively informs customers of any subcontractors who have access to customer-owned content uploaded onto AWS, including content that may contain personal data. There are no subcontractors authorized by AWS to access any customer-owned content that customers upload onto AWS. To monitor subcontractor access year-round, refer to https://aws.amazon.com/compliance/third-party-access/.
CCM STA-03, SSRM Guidance (Supply Chain Management, Transparency, and Accountability): Provide SSRM guidance to the CSC detailing information about the SSRM applicability throughout the supply chain.

STA-04.1: Is the shared ownership and applicability of all CSA CCM controls delineated according to the SSRM for the cloud service offering?
Answer: Yes (CSP-owned). Security and compliance is a shared responsibility between AWS and the customer, and the delineation varies by the cloud services used. The shared model can help relieve the customer's operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. Refer to the shared responsibility model at https://aws.amazon.com/compliance/shared-responsibility-model/.
CCM STA-04, SSRM Control Ownership (Supply Chain Management, Transparency, and Accountability): Delineate the shared ownership and applicability of all CSA CCM controls according to the SSRM for the cloud service offering.

STA-05.1: Is SSRM documentation for all cloud services the organization uses reviewed and validated?
Answer: Yes (CSP-owned). Security and compliance is a shared responsibility between AWS and the customer. The shared model can help relieve the customer's operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. Refer to the shared responsibility model at https://aws.amazon.com/compliance/shared-responsibility-model/.
CCM STA-05, SSRM Documentation Review (Supply Chain Management, Transparency, and Accountability): Review and validate SSRM documentation for all cloud services offerings the organization uses.

STA-06.1: Are the portions of the SSRM the organization is responsible for implemented, operated, audited, or assessed?
Answer: Yes (CSP-owned). AWS has established a formal periodic audit program that includes continual independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment.
CCM STA-06, SSRM Control Implementation (Supply Chain Management, Transparency, and Accountability): Implement, operate, and audit or assess the portions of the SSRM which the organization is responsible for.

STA-07.1: Is an inventory of all supply chain relationships developed and maintained?
Answer: NA (CSP-owned). AWS performs periodic reviews of SSRM service and colocation providers to validate adherence with AWS security and operational standards. AWS maintains standard contract review and signature processes that include legal reviews with consideration of protecting AWS resources. AWS proactively informs customers of any subcontractors who have access to customer-owned content uploaded onto AWS, including content that may contain personal data. There are no subcontractors authorized by AWS to access any customer-owned content that customers upload onto AWS.
CCM STA-07, Supply Chain Inventory (Supply Chain Management, Transparency, and Accountability): Develop and maintain an inventory of all supply chain relationships.

STA-08.1: Are risk factors associated with all organizations within the supply chain periodically reviewed by CSPs?
Answer: NA (CSP-owned). AWS performs periodic reviews of SSRM service and colocation providers to validate adherence with AWS security and operational standards. AWS maintains standard contract review and signature processes that include legal reviews with consideration of protecting AWS resources. AWS proactively informs customers of any subcontractors who have access to customer-owned content uploaded onto AWS, including content that may contain personal data. There are no subcontractors authorized by AWS to access any customer-owned content that customers upload onto AWS.
CCM STA-08, Supply Chain Risk Management (Supply Chain Management, Transparency, and Accountability): CSPs periodically review risk factors associated with all organizations within their supply chain.

STA-09.1: Do service agreements between CSPs and CSCs (tenants) incorporate at least the following mutually agreed upon provisions and/or terms?
• Scope, characteristics, and location of business relationship and services offered
• Information security requirements (including SSRM)
• Change management process
• Logging and monitoring capability
• Incident management and communication procedures
• Right to audit and third-party assessment
• Service termination
• Interoperability and portability requirements
• Data privacy
Answer: Yes (shared, CSP and CSC). AWS service agreements include multiple provisions and terms. For additional details, refer to the sample AWS Customer Agreement online at https://aws.amazon.com/agreement/.
CCM STA-09, Primary Service and Contractual Agreement (Supply Chain Management, Transparency, and Accountability): Service agreements between CSPs and CSCs (tenants) must incorporate at least the following mutually agreed upon provisions and/or terms:
• Scope, characteristics, and location of business relationship and services offered
• Information security requirements (including SSRM)
• Change management process
• Logging and monitoring capability
• Incident management and communication procedures
• Right to audit and third-party assessment
• Service termination
• Interoperability and portability requirements
• Data privacy

STA-10.1: Are supply chain agreements between CSPs and CSCs reviewed at least annually?
Answer: Yes (CSP-owned). AWS's third-party agreement processes include periodic review and reporting and are reviewed by independent auditors.
CCM STA-10, Supply Chain Agreement Review (Supply Chain Management, Transparency, and Accountability): Review supply chain agreements between CSPs and CSCs at least annually.

STA-11.1: Is there a process for conducting internal assessments at least annually to confirm the conformance and effectiveness of standards, policies, procedures, and SLA activities?
Answer: Yes (CSP-owned). AWS has established a formal periodic audit program that includes continual independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment.
CCM STA-11, Internal Compliance Testing (Supply Chain Management, Transparency, and Accountability): Define and implement a process for conducting internal assessments to confirm conformance and effectiveness of standards, policies, procedures, and service level agreement activities at least annually.

STA-12.1: Are policies that require all supply chain CSPs to comply with information security, confidentiality, access control, privacy, audit, personnel policy, and service level requirements and standards implemented?
Answer: Yes (CSP-owned). AWS's third-party agreement processes include periodic review and reporting and are reviewed by independent auditors.
CCM STA-12, Supply Chain Service Agreement Compliance (Supply Chain Management, Transparency, and Accountability): Implement policies requiring all CSPs throughout the supply chain to comply with information security, confidentiality, access control, privacy, audit, personnel policy, and service level requirements and standards.

STA-13.1: Are supply chain partner IT governance policies and procedures reviewed periodically?
Answer: NA (CSP-owned). AWS does not utilize third parties to provide services to customers, but does utilize colocation providers in a limited capacity to house some AWS data centers. These controls are audited twice annually in our SOC 1/2 audits and annually in our ISO 27001/17/18 audits. There are no subcontractors authorized by AWS to access any customer-owned content that customers upload onto AWS. To monitor subcontractor access year-round, refer to https://aws.amazon.com/compliance/third-party-access/.
CCM STA-13, Supply Chain Governance Review (Supply Chain Management, Transparency, and Accountability): Periodically review the organization's supply chain partners' IT governance policies and procedures.

STA-14.1: Is a process to conduct periodic security assessments for all supply chain organizations defined and implemented?
Answer: NA (CSP-owned). AWS does not utilize third parties to provide services to customers, but does utilize colocation providers in a limited capacity to house some AWS data centers. These controls are audited twice annually in our SOC 1/2 audits and annually in our ISO 27001/17/18 audits. There are no subcontractors authorized by AWS to access any customer-owned content that customers upload onto AWS. To monitor subcontractor access year-round, refer to https://aws.amazon.com/compliance/third-party-access/.
CCM STA-14, Supply Chain Data Security Assessment (Supply Chain Management, Transparency, and Accountability): Define and implement a process for conducting security assessments periodically for all organizations within the supply chain.

TVM-01.1: Are policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained to identify, report, and prioritize the remediation of vulnerabilities to protect systems against vulnerability exploitation?
Answer: Yes (CSP-owned). The AWS Security team notifies and coordinates with the appropriate service teams when conducting security-related activities within the system boundary. Activities include vulnerability scanning, contingency testing, and incident response exercises. AWS performs external vulnerability assessments at least quarterly, and identified issues are investigated and tracked to resolution. Additionally, AWS performs unannounced penetration tests by engaging independent third parties to probe the defenses and device configuration settings within the system.
CCM TVM-01, Threat and Vulnerability Management Policy and Procedures (Threat & Vulnerability Management): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures to identify, report, and prioritize the remediation of vulnerabilities in order to protect systems against vulnerability exploitation. Review and update the policies and procedures at least annually.

TVM-01.2: Are threat and vulnerability management policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM TVM-01, Threat and Vulnerability Management Policy and Procedures (Threat & Vulnerability Management): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures to identify, report, and prioritize the remediation of vulnerabilities in order to protect systems against vulnerability exploitation. Review and update the policies and procedures at least annually.

TVM-02.1: Are policies and procedures to protect against malware on managed assets established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). AWS's program, processes, and procedures for managing antivirus and malicious software are in alignment with ISO 27001 standards. AWS SOC reports provide further details. In addition, refer to the ISO 27001 standard, Annex A, domain 12 for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM TVM-02, Malware Protection Policy and Procedures (Threat & Vulnerability Management): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures to protect against malware on managed assets. Review and update the policies and procedures at least annually.

TVM-02.2: Are asset management and malware protection policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM TVM-02, Malware Protection Policy and Procedures (Threat & Vulnerability Management): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures to protect against malware on managed assets. Review and update the policies and procedures at least annually.

TVM-03.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to enable scheduled and emergency responses to vulnerability identifications (based on the identified risk)?
Answer: Yes (CSP-owned). See the response to question TVM-01.1.
CCM TVM-03, Vulnerability Remediation Schedule (Threat & Vulnerability Management): Define, implement, and evaluate processes, procedures, and technical measures to enable both scheduled and emergency responses to vulnerability identifications, based on the identified risk.

TVM-04.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to update detection tools, threat signatures, and compromise indicators on a weekly (or more frequent) basis?
Answer: Yes (CSP-owned). AWS's program, processes, and procedures for managing antivirus/malicious software are in alignment with ISO 27001 standards. Refer to the AWS SOC reports for further details; in addition, refer to ISO 27001 standard Annex A, domain 12. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM Control TVM-04, Detection Updates (Threat & Vulnerability Management): Define, implement, and evaluate processes, procedures, and technical measures to update detection tools, threat signatures, and indicators of compromise on a weekly, or more frequent, basis.

TVM 05.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to identify updates for applications that use third-party or open source libraries (according to the organization's vulnerability management policy)?
Answer: Yes (CSP-owned). AWS implements open source software or custom code within its services. All open source software, including binary or machine-executable code from third parties, is reviewed and approved by the Open Source Group prior to implementation and has source code that is publicly accessible. AWS service teams are prohibited from implementing code from third parties unless it has been approved through the open source review. All code developed by AWS is available for review by the applicable service team as well as AWS Security. By its nature, open source code is available for review by the Open Source Group prior to granting authorization for use within Amazon.
CCM Control TVM-05, External Library Vulnerabilities (Threat & Vulnerability Management): Define, implement, and evaluate processes, procedures, and technical measures to identify updates for applications which use third-party or open source libraries, according to the organization's vulnerability management policy.

TVM 06.1: Are processes, procedures, and technical measures defined, implemented, and evaluated for periodic independent third-party penetration testing?
Answer: Yes (CSP-owned). AWS Security regularly performs penetration testing. These engagements may include carefully selected industry experts and independent security firms. AWS does not share the results directly with customers; AWS third-party auditors review the results to verify the frequency of penetration testing and the remediation of findings.
CCM Control TVM-06, Penetration Testing (Threat & Vulnerability Management): Define, implement, and evaluate processes, procedures, and technical measures for the periodic performance of penetration testing by independent third parties.

TVM 07.1: Are processes, procedures, and technical measures defined, implemented, and evaluated for vulnerability detection on organizationally managed assets at least monthly?
Answer: No (CSP-owned). AWS Security performs regular vulnerability scans on the host operating system, web application, and databases in the AWS environment using a variety of tools. External vulnerability assessments are conducted by an AWS-approved third-party vendor at least quarterly.
CCM Control TVM-07, Vulnerability Identification (Threat & Vulnerability Management): Define, implement, and evaluate processes, procedures, and technical measures for the detection of vulnerabilities on organizationally managed assets at least monthly.

TVM 08.1: Is vulnerability remediation prioritized using a risk-based model from an industry-recognized framework?
Answer: Yes (CSP-owned). AWS Security performs regular vulnerability scans on the host operating system, web application, and databases in the AWS environment using a variety of tools.
CCM Control TVM-08, Vulnerability Prioritization (Threat & Vulnerability Management): Use a risk-based model for effective prioritization of vulnerability remediation, using an industry-recognized framework.

TVM 09.1: Is a process defined and implemented to track and report vulnerability identification and remediation activities that include stakeholder notification?
Answer: Yes (CSP-owned). The AWS Security team notifies and coordinates with the appropriate service teams when conducting security-related activities within the system boundary. Activities include vulnerability scanning, contingency testing, and incident response exercises. AWS performs external vulnerability assessments at least quarterly, and identified issues are investigated and tracked to resolution. Additionally, AWS performs unannounced penetration tests by engaging independent third parties to probe the defenses and device configuration settings within the system.
CCM Control TVM-09, Vulnerability Management Reporting (Threat & Vulnerability Management): Define and implement a process for tracking and reporting vulnerability identification and remediation activities that includes stakeholder notification.

TVM 10.1: Are metrics for vulnerability identification and remediation established, monitored, and reported at defined intervals?
Answer: Yes (shared, CSP and CSC). AWS tracks metrics for internal process measurements and improvements that align with our policies and standards. AWS customers are responsible for vulnerability management within their AWS environments.
CCM Control TVM-10, Vulnerability Management Metrics (Threat & Vulnerability Management): Establish, monitor, and report metrics for vulnerability identification and remediation at defined intervals.

UEM 01.1: Are policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained for all endpoints?
Answer: Yes (CSP-owned). AWS implements formal, documented policies and procedures that provide guidance for operations and information security within the organization and the supporting AWS environments. Policies address purpose, scope, roles, responsibilities, and management commitment. All policies are maintained in a centralized location that is accessible by employees.
CCM Control UEM-01, Endpoint Devices Policy and Procedures (Universal Endpoint Management): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for all endpoints. Review and update the policies and procedures at least annually.

UEM 01.2: Are universal endpoint management policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM Control UEM-01, Endpoint Devices Policy and Procedures (Universal Endpoint Management): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for all endpoints. Review and update the policies and procedures at least annually.

UEM 02.1: Is there a defined, documented, applicable, and evaluated list containing approved services, applications, and the sources of applications (stores) acceptable for use by endpoints when accessing or storing organization-managed data?
Answer: Yes (CSP-owned). Amazon has established baseline infrastructure standards in alignment with industry best practices. All software installations are monitored by AWS Security, and mandatory security controls and software are always required. Users cannot continue to use their laptop or desktop if the required software is not installed; their device will be quarantined from network access until the nonconformance is resolved.
CCM Control UEM-02, Application and Service Approval (Universal Endpoint Management): Define, document, apply, and evaluate a list of approved services, applications, and sources of applications (stores) acceptable for use by endpoints when accessing or storing organization-managed data.

UEM 03.1: Is a process defined and implemented to validate endpoint device compatibility with operating systems and applications?
Answer: Yes (CSP-owned). Amazon has established baseline infrastructure standards in alignment with industry best practices. This includes endpoint compatibility with operating systems and applications.
CCM Control UEM-03, Compatibility (Universal Endpoint Management): Define and implement a process for the validation of the endpoint device's compatibility with operating systems and applications.

UEM 04.1: Is an inventory maintained of all endpoints used to store and access company data?
Answer: Yes (CSP-owned). Amazon has established baseline infrastructure standards in alignment with industry best practices. This includes endpoint inventory management.
CCM Control UEM-04, Endpoint Inventory (Universal Endpoint Management): Maintain an inventory of all endpoints used to store and access company data.

UEM 05.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to enforce policies and controls for all endpoints permitted to access systems and/or store, transmit, or process organizational data?
Answer: NA. AWS employees do not access, process, or change customer data in the course of providing our services. AWS has separate CORP and PROD environments, which are separated from each other via physical and logical controls. Only approved users have the ability to be granted access from CORP to PROD. That access is then managed by a separate permission system, requires an approved ticket, requires MFA, is time-limited, and all activities are tracked.
CCM Control UEM-05, Endpoint Management (Universal Endpoint Management): Define, implement, and evaluate processes, procedures, and technical measures to enforce policies and controls for all endpoints permitted to access systems and/or store, transmit, or process organizational data.

UEM 06.1: Are all relevant interactive-use endpoints configured to require an automatic lock screen?
Answer: Yes (CSP-owned). Amazon has established baseline infrastructure standards in alignment with industry best practices. These include automatic lockout after a defined period of inactivity.
CCM Control UEM-06, Automatic Lock Screen (Universal Endpoint Management): Configure all relevant interactive-use endpoints to require an automatic lock screen.

UEM 07.1: Are changes to endpoint operating systems, patch levels, and/or applications managed through the organizational change management process?
Answer: Yes (CSP-owned). Amazon has established baseline infrastructure standards in alignment with industry best practices. All software installations are monitored by AWS Security, and mandatory security controls and software are always required. Users cannot continue to use their laptop or desktop if the required software is not installed; their device will be quarantined from network access until the nonconformance is resolved.
CCM Control UEM-07, Operating Systems (Universal Endpoint Management): Manage changes to endpoint operating systems, patch levels, and/or applications through the company's change management processes.

UEM 08.1: Is information protected from unauthorized disclosure on managed endpoints with storage encryption?
Answer: NA (CSP-owned). AWS employees do not access, process, or change customer data in the course of providing our services. AWS has separate CORP and PROD environments, which are separated from each other via physical and logical controls. Only approved users have the ability to be granted access from CORP to PROD. That access is then managed by a separate permission system, requires an approved ticket, requires MFA, is time-limited, and all activities are tracked. Additionally, customers are provided tools to encrypt data within the AWS environment to add additional layers of security; the encrypted data can only be accessed by authorized customer personnel with access to the encryption keys.
CCM Control UEM-08, Storage Encryption (Universal Endpoint Management): Protect information from unauthorized disclosure on managed endpoint devices with storage encryption.

UEM 09.1: Are anti-malware detection and prevention technology services configured on managed endpoints?
Answer: Yes (CSP-owned). AWS's program, processes, and procedures for managing antivirus/malicious software are in alignment with ISO 27001 standards. Refer to the AWS SOC reports for further details; in addition, refer to ISO 27001 standard Annex A, domain 12. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM Control UEM-09, Anti-Malware Detection and Prevention (Universal Endpoint Management): Configure managed endpoints with anti-malware detection and prevention technology and services.

UEM 10.1: Are software firewalls configured on managed endpoints?
Answer: Yes (CSP-owned). Amazon assets (e.g., laptops) are configured with antivirus software that includes email filtering, software firewalls, and malware detection.
CCM Control UEM-10, Software Firewall (Universal Endpoint Management): Configure managed endpoints with properly configured software firewalls.

UEM 11.1: Are managed endpoints configured with data loss prevention (DLP) technologies and rules per a risk assessment?
Answer: NA. AWS employees do not access, process, or change customer data in the course of providing our services. AWS has separate CORP and PROD environments, which are separated from each other via physical and logical controls. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored and used, and how it is protected from disclosure.
CCM Control UEM-11, Data Loss Prevention (Universal Endpoint Management): Configure managed endpoints with Data Loss Prevention (DLP) technologies and rules in accordance with a risk assessment.

UEM 12.1: Are remote geolocation capabilities enabled for all managed mobile endpoints?
Answer: No (CSP-owned). No response is required, as we have indicated no.
CCM Control UEM-12, Remote Locate (Universal Endpoint Management): Enable remote geolocation capabilities for all managed mobile endpoints.

UEM 13.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to enable remote company data deletion on managed endpoint devices?
Answer: Yes (CSP-owned). The AWS scope for mobile devices is iOS- and Android-based mobile phones and tablets. AWS maintains a formal mobile device policy and associated procedures. Specifically, AWS mobile devices are only allowed access to AWS corporate fabric resources and cannot access the AWS production fabric where customer content is stored. The AWS production fabric is separated from the corporate fabric by boundary protection devices that control the flow of information between fabrics. Approved firewall rule sets and access control lists between network fabrics restrict the flow of information to specific information system services. Access control lists and rule sets are reviewed and approved, and are automatically pushed to boundary protection devices on a periodic basis (at least every 24 hours) to ensure rule sets and access control lists are up to date. Consequently, mobile devices are not relevant to AWS customer content access.
CCM Control UEM-13, Remote Wipe (Universal Endpoint Management): Define, implement, and evaluate processes, procedures, and technical measures to enable the deletion of company data remotely on managed endpoint devices.

UEM 14.1: Are processes, procedures, and technical and/or contractual measures defined, implemented, and evaluated to maintain proper security of third-party endpoints with access to organizational assets?
Answer: NA. AWS does not utilize third parties to provide services to customers, but does utilize co-location providers in a limited capacity to house some AWS data centers. These controls are audited twice annually in our SOC 1/2 audits and annually in our ISO 27001/17/18 audits. There are no subcontractors authorized by AWS to access any customer-owned content that customers upload onto AWS. To monitor subcontractor access year-round, refer to: https://aws.amazon.com/compliance/third-party-access/
CCM Control UEM-14, Third-Party Endpoint Security Posture (Universal Endpoint Management): Define, implement, and evaluate processes, procedures, and technical and/or contractual measures to maintain proper security of third-party endpoints with access to organizational assets.

End of Standard

Further Reading
For additional information, see the following sources:
• AWS Compliance Quick Reference Guide
• AWS Answers to Key Compliance Questions
• AWS Cloud Security Alliance (CSA) Overview

Document Revisions
Date | Description
April 2022 | Updated CAIQ template and updated responses to individual questions based on CAIQ v4.0.2
July 2018 | 2018 validation and update
January 2018 | Migrated to new template
January 2016 | First publication

Data Warehousing on AWS
January 2021

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only; (b) represents current AWS product offerings and practices, which are subject to change without notice; and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
Introducing Amazon Redshift
Modern Analytics and Data Warehousing Architecture
AWS Analytics Services
Analytics Architecture
Data Warehouse Technology Options
Row-Oriented Databases
Column-Oriented Databases
Massively Parallel Processing (MPP) Architectures
Amazon Redshift Deep Dive
Integration with Data Lake
Performance
Durability and Availability
Elasticity and Scalability
Operations
Redshift Advisor
Interfaces
Security
Cost Model
Ideal Usage Patterns
Anti-Patterns
Migrating to Amazon Redshift
One-Step Migration
Two-Step Migration
Wave-Based Migration
Tools and Additional Help for Database Migration
Designing Data Warehousing Workflows
Conclusion
Contributors
Further Reading
Document Revisions

Abstract
Enterprises across the globe want to migrate data warehousing to the cloud to improve performance and lower costs. This whitepaper discusses a modern approach to analytics and data warehousing architecture. It outlines services available on Amazon Web Services (AWS) to implement this architecture, and provides common design patterns to build data warehousing solutions using these services. This whitepaper is aimed at data engineers, data analysts, business analysts, and developers.

Introduction
Data is an enterprise's most valuable asset. To fuel innovation, which fuels growth, an enterprise must:
• Store every relevant data point about their business
• Give data access to everyone who needs it
• Have the ability to analyze the data in different ways
• Distill the data down to insights

Most large enterprises have data warehouses for reporting and analytics purposes. They use data from a variety of sources, including their own transaction processing systems and other databases. In the past, building and running a data warehouse —a central repository of
information coming from one or more data sources —was complicated and expensive. Data warehousing systems were complex to set up, cost millions of dollars in upfront software and hardware expenses, and took months of planning, procurement, implementation, and deployment processes. After making the initial investments and setting up the data warehouse, enterprises had to hire a team of database administrators to keep their queries running fast and protect against data loss.

Traditional data warehouse architectures and on-premises data warehousing pose many challenges:
• They are difficult to scale and have long lead times for hardware procurement and upgrades.
• They have high overhead costs for administration.
• Proprietary formats and siloed data make it costly and complex to access, refine, and join data from different sources.
• They cannot separate cold (infrequently used) and warm (frequently used) data, which results in bloated costs and wasted capacity.
• They limit the number of users and the amount of accessible data, which leads to anti-democratization of data.
• They inspire other legacy architecture patterns, such as retrofitting use cases to accommodate the wrong tools for the job instead of using the correct tool for each use case.

In this whitepaper, we provide the information you need to take advantage of the strategic shift happening in the data warehousing space from on-premises to the cloud:
1. Modern analytics architecture
2. Data warehousing technology choices available within that architecture
3. A deep dive on Amazon Redshift and its differentiating features
4. A blueprint for building a complete data warehousing system on AWS with Amazon Redshift and other AWS services
5. Practical tips for migrating from other data warehousing solutions and tapping into our partner ecosystem

Introducing Amazon Redshift
In the past, when data volumes grew or an enterprise wanted to make analytics and reports available to more users,
they had to choose between accepting slow query performance or investing time and effort on an expensive upgrade process. In fact, some IT teams discourage augmenting data or adding queries to protect existing service level agreements. Many enterprises struggled with maintaining a healthy relationship with traditional database vendors. They were often forced to either upgrade hardware for a managed system or enter a protracted negotiation cycle for an expired term license. When they hit the scaling limit on one data warehouse engine, they were forced to migrate to another engine from the same vendor with different SQL semantics.

Cloud data warehouses like Amazon Redshift changed how enterprises think about data warehousing by dramatically lowering the cost and effort associated with deploying data warehouse systems, without compromising on features, scale, and performance.

Amazon Redshift is a fast, fully managed, petabyte-scale data warehousing solution that makes it simple and cost-effective to analyze large volumes of data using existing business intelligence (BI) tools. With Amazon Redshift, you can get the performance of columnar data warehousing engines that perform massively parallel processing (MPP) at a tenth of the cost. You can start small for $0.25 per hour with no commitments, and scale to petabytes for $1,000 per terabyte per year. You can grow to exabyte-scale storage by storing data in an Amazon Simple Storage Service (Amazon S3) data lake and taking a lake house approach to data warehousing with the Amazon Redshift Spectrum feature. With this setup, you can query data directly from files on Amazon S3 for as low as $5 per terabyte of data scanned. Since launching in February 2013, Amazon Redshift has been one of the fastest growing AWS services, with tens of thousands of customers across many industries and company sizes. Enterprises such as NTT DOCOMO, FINRA, Johnson & Johnson, McDonald's, Equinox, Fannie Mae, Hearst, Amgen, and NASDAQ have migrated to Amazon Redshift.

Modern Analytics and Data Warehousing Architecture
Data typically flows into a data warehouse from transactional systems and other relational databases, and typically includes structured, semi-structured, and unstructured data. This data is processed, transformed, and ingested at a regular cadence. Users, including data scientists, business analysts, and decision makers, access the data through BI tools, SQL clients, and other tools.

So why build a data warehouse at all? Why not just run analytics queries directly on an online transaction processing (OLTP) database, where the transactions are recorded? To answer the question, let's look at the differences between data warehouses and OLTP databases:
• Data warehouses are optimized for batched write operations and reading high volumes of data.
• OLTP databases are optimized for continuous write operations and high volumes of small read operations.

Data warehouses generally employ denormalized schemas, like the Star schema and Snowflake schema, because of high data throughput requirements, whereas OLTP databases employ highly normalized schemas, which are more suited for high transaction throughput requirements. To get the benefits of using a data warehouse managed as a separate data store with your source OLTP or other source system, we recommend that you build an efficient data pipeline. Such a pipeline extracts the data from the source system, converts it into a schema suitable for data warehousing, and then loads it into the data warehouse. In the next section, we discuss the building blocks of an analytics pipeline and the different AWS services you can use to architect the pipeline.

AWS Analytics Services
AWS analytics services help enterprises quickly convert their data to answers by providing mature and integrated analytics services, ranging from cloud data warehouses to serverless data lakes. Getting answers quickly means less time building plumbing and configuring cloud analytics services to work
together. AWS helps you do exactly that by giving you:
1. An easy path to build data lakes and data warehouses, and start running diverse analytics workloads
2. A secure cloud storage, compute, and network infrastructure that meets the specific needs of analytic workloads
3. A fully integrated analytics stack with a mature set of analytics tools, covering all common use cases and leveraging open file formats, standard SQL language, open source engines, and platforms
4. The best performance, the most scalability, and the lowest cost for analytics

Many enterprises choose cloud data lakes and cloud data warehouses as the foundation for their data and analytics architectures. AWS is focused on helping customers build and secure data lakes and data warehouses in the cloud within days, not months. AWS Lake Formation enables secured, self-service discovery and access for users. Lake Formation provides easy, on-demand access to specific resources that fit the requirements of each analytics workload. The data is curated and cataloged, already prepared for any type of analytics. Related records are matched and de-duplicated with machine learning.

AWS provides a diverse set of analytics services that are deeply integrated with the infrastructure layers. This enables you to take advantage of features like intelligent tiering and Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances to reduce cost and run analytics faster. When you're ready for more advanced analytic approaches, use our broad collection of machine learning (ML) and artificial intelligence (AI) services against that same data in S3 to gain even more insight, without the delays and costs of moving or transforming your data.

Analytics Architecture
Analytics pipelines are designed to handle large volumes of incoming streams of data from heterogeneous sources such as databases, applications, and devices. A typical analytics pipeline has the following stages:
1. Collect data
2. Store the data
3. Process the data
4. Analyze and visualize the data

Figure 1: Analytics Pipeline

Data Collection
At the data collection stage, consider that you probably have different types of data, such as transactional data, log data, streaming data, and Internet of Things (IoT) data. AWS provides solutions for data storage for each of these types of data.

Transactional Data
Transactional data, such as e-commerce purchase transactions and financial transactions, is typically stored in relational database management systems (RDBMS) or NoSQL database systems. The choice of database solution depends on the use case and application characteristics:
• A NoSQL database is suitable when the data is not well structured to fit into a defined schema, or when the schema changes often.
• An RDBMS solution is suitable when transactions happen across multiple table rows and the queries require complex joins.

Amazon DynamoDB is a fully managed NoSQL database service that you can use as an OLTP store for your applications. Amazon Aurora and Amazon Relational Database Service (Amazon RDS) enable you to implement an SQL-based relational database solution for your application:
• Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database built for the cloud.
• Amazon RDS is a service that enables you to easily set up, operate, and scale relational databases on the cloud.

For more information about the different AWS database services, see Databases on AWS.

Log Data
Reliably capturing system-generated logs helps you troubleshoot issues, conduct audits, and perform analytics using the information stored in the logs. Amazon S3 is a popular storage solution for non-transactional data, such as log data, that is used for analytics. Because it provides 99.999999999 percent durability, S3 is also a popular archival solution.

Streaming Data
Web applications, mobile devices, and many software applications and services can generate staggering amounts of streaming data
—sometimes terabytes per hour—that need to be collected, stored, and processed continuously. Using Amazon Kinesis services, you can do that simply and at a low cost. Alternatively, you can use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to run applications that use Apache Kafka to process streaming data. With Amazon MSK, you can use native Apache Kafka application programming interfaces (APIs) to populate data lakes, stream changes to and from databases, and power ML and analytics applications.

IoT Data

Devices and sensors around the world send messages continuously, and enterprises today need to capture this data and derive intelligence from it. Using AWS IoT, connected devices interact easily and securely with the AWS Cloud. Use AWS IoT to leverage AWS services like AWS Lambda, Amazon Kinesis services, Amazon S3, Amazon Machine Learning, and Amazon DynamoDB to build applications that gather, process, analyze, and act on IoT data without having to manage any infrastructure.

Data Processing

The collection process provides data that potentially has useful information. You can analyze the extracted information for intelligence that will help you grow your business. This intelligence might, for example, tell you about your user behavior and the relative popularity of your products. The best practice to gather this intelligence is to load your raw data into a data warehouse to perform further analysis.

There are two types of processing workflows to accomplish this: batch processing and real-time processing. The most common forms of processing, online analytical processing (OLAP) and OLTP, each use one of these types. OLAP processing is generally batch based, whereas OLTP systems are oriented toward real-time processing and are generally not well suited for batch processing. If you decouple data processing from your OLTP system, you keep the data processing from affecting your OLTP workload. First, let's look at what is involved in batch processing.

Batch Processing

• Extract Transform Load (ETL) — ETL is the process of pulling data from multiple sources to load into data warehousing systems. ETL is normally a continuous, ongoing process with a well-defined workflow. During this process, data is initially extracted from one or more sources. The extracted data is then cleansed, enriched, transformed, and loaded into a data warehouse. For batch ETL, use AWS Glue or Amazon EMR. AWS Glue is a fully managed ETL service; you can create and run an ETL job with a few clicks in the AWS Management Console. Amazon EMR is for big data processing and analysis, and offers an expandable, low-configuration service as an easier alternative to running in-house cluster computing.

• Extract Load Transform (ELT) — ELT is a variant of ETL in which the extracted data is loaded into the target system first, and transformations are performed after the data is loaded into the data warehouse. ELT typically works well when your target system is powerful enough to handle transformations. Amazon Redshift is often used in ELT pipelines because it is highly efficient in performing transformations.

• Online Analytical Processing (OLAP) — OLAP systems store aggregated historical data in multidimensional schemas. Used widely for query, reporting, and analytics, OLAP systems enable you to extract data and spot trends on multiple dimensions. Because it is optimized for fast joins, Amazon Redshift is often used to build OLAP systems.

Now let's look at what's involved in real-time processing of data.

Real-Time Processing

We talked about streaming data earlier and mentioned Amazon Kinesis services and Amazon MSK as solutions to capture and store streaming data. You can process this data sequentially and incrementally, either on a record-by-record basis or over sliding time windows, and use the processed data for a wide variety of analytics, including correlations, aggregations, filtering, and sampling. This type of processing is called real-time processing.
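The sliding-time-window idea can be sketched in a few lines of pure Python. This is only an illustration of the windowing logic, not an AWS API: the stream, window size, and click-rate values below are hypothetical, and in practice this kind of aggregation would run inside an AWS Lambda function or a Kinesis Client Library consumer.

```python
from collections import deque

def sliding_window_average(stream, window_seconds):
    """Yield (timestamp, average) pairs over a sliding time window.

    `stream` is an iterable of (timestamp, value) pairs in ascending
    timestamp order, as a Kinesis or Kafka consumer would deliver them.
    """
    window = deque()   # (timestamp, value) pairs currently inside the window
    total = 0.0
    for ts, value in stream:
        window.append((ts, value))
        total += value
        # Evict records that have fallen out of the window.
        while window and window[0][0] <= ts - window_seconds:
            _, old_value = window.popleft()
            total -= old_value
        yield ts, total / len(window)

# Simulated click-rate stream: one record per second (hypothetical values).
records = list(enumerate([4, 6, 5, 50, 3, 2]))
averages = dict(sliding_window_average(records, window_seconds=3))
```

The spike at t=3 is smoothed by the surrounding window, which is exactly the kind of correlation or aggregation described above.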
Information derived from real-time processing gives companies visibility into many aspects of their business and customer activity, such as service usage (for metering or billing), server activity, website clicks, and the geolocation of devices, people, and physical goods. This enables them to respond promptly to emerging situations. Real-time processing requires a highly concurrent and scalable processing layer.

To process streaming data in real time, use AWS Lambda. Lambda can process the data directly from AWS IoT or Amazon Kinesis Data Streams, and enables you to run code without provisioning or managing servers.

The Amazon Kinesis Client Library (KCL) is another way to process data from Amazon Kinesis Data Streams. KCL gives you more flexibility than Lambda to batch your incoming data for further processing. You can also use KCL to apply extensive transformations and customizations in your processing logic.

Amazon Kinesis Data Firehose is the easiest way to load streaming data into AWS. It can capture streaming data and automatically load it into Amazon Redshift, enabling near-real-time analytics with the existing BI tools and dashboards you're already using today. Define batching rules with Kinesis Data Firehose, and it takes care of reliably batching the data and delivering it to Amazon Redshift.

Amazon MSK is an easy way to build and run applications that use Apache Kafka to process streaming data. Apache Kafka is an open-source platform for building real-time streaming data pipelines and applications. With Amazon MSK, you can use native Apache Kafka APIs to populate data lakes, stream changes to and from databases, and power machine learning and analytics applications.

AWS Glue streaming jobs enable you to perform complex ETL on streaming data. Streaming ETL jobs in AWS Glue can consume data from streaming sources like Amazon Kinesis Data Streams and Amazon MSK, clean and transform those data streams in flight, and continuously load the results into
S3 data lakes, data warehouses, or other data stores. As you process streaming data in an AWS Glue job, you have access to the full capabilities of Spark Structured Streaming to implement data transformations, such as aggregating, partitioning, and formatting, as well as joining with other data sets to enrich or cleanse the data for easier analysis.

Data Storage

You can store your data in a lake house, data warehouse, or data mart.

• Lake house — A lake house is an architectural pattern that combines the best elements of data warehouses and data lakes. Lake houses enable you to query data across your data warehouse, data lake, and operational databases to gain faster and deeper insights that are not possible otherwise. With a lake house architecture, you can store data in open file formats in your data lake and query it in place while joining with data warehouse data. This enables you to make the data easily available to other analytics and machine learning tools, rather than locking it in a new silo.

• Data warehouse — Using data warehouses, you can run fast analytics on large volumes of data and unearth patterns hidden in your data by leveraging BI tools. Data scientists query a data warehouse to perform offline analytics and spot trends. Users across the enterprise consume the data using SQL queries, periodic reports, and dashboards, as needed, to make critical business decisions.

• Data mart — A data mart is a simple form of data warehouse focused on a specific functional area or subject matter. For example, you can have specific data marts for each division in your enterprise, or segment data marts based on regions. You can build data marts from a large data warehouse, from operational stores, or as a hybrid of the two. Data marts are simple to design, build, and administer. However, because data marts are focused on specific functional areas, querying across functional areas can become complex because of the distribution.

You can use Amazon Redshift to build lake houses, data marts, and data warehouses.
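To make the data-mart idea concrete, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for a warehouse engine; the sales table, divisions, and figures are hypothetical. In Amazon Redshift, the same subject-focused GROUP BY query would be expressed in ANSI SQL against a much larger fact table.

```python
import sqlite3

# Hypothetical sales table; sqlite3 stands in for the warehouse engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, division TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("EMEA", "apparel", 120.0), ("EMEA", "footwear", 80.0),
     ("APAC", "apparel", 200.0), ("APAC", "apparel", 50.0)],
)

# A data mart is effectively a subject-focused slice of the warehouse:
# here, apparel-division revenue aggregated by region.
rows = conn.execute(
    """SELECT region, SUM(amount) AS total
       FROM sales
       WHERE division = 'apparel'
       GROUP BY region
       ORDER BY region"""
).fetchall()
```

The result contains only the apparel subject area, which is why querying across functional areas requires joining several such marts back together.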
Redshift enables you to easily query data in your data lake and write data back to your data lake in open formats. You can use familiar SQL statements to combine and process data across all your data stores, and execute queries on live data in your operational databases without requiring any data loading and ETL pipelines.

Analysis and Visualization

After processing the data and making it available for further analysis, you need the right tools to analyze and visualize the processed data. In many cases, you can perform data analysis using the same tools you use for processing data. You can use tools such as MySQL Workbench to analyze your data in Amazon Redshift with ANSI SQL. Amazon Redshift also works well with popular third-party BI solutions available on the market, such as Tableau and MicroStrategy.

Amazon QuickSight is a fast, cloud-powered BI service that enables you to create visualizations, perform analysis as needed, and quickly get business insights from your data. Amazon QuickSight offers native integration with AWS data sources such as Amazon Redshift, Amazon S3, and Amazon RDS. Amazon Redshift sources can be autodetected by Amazon QuickSight and can be queried using either direct query or SPICE mode. SPICE is the in-memory optimized calculation engine for Amazon QuickSight, designed specifically for fast, as-needed data visualization. You can improve the performance of database datasets by importing the data into SPICE instead of using a direct query to the database.

If you are using S3 as your primary storage, you can use the Amazon Athena/QuickSight integration to perform analysis and visualization. Amazon Athena is an interactive query service that makes it easy to analyze data in S3 using standard SQL. You can run SQL queries using Athena on data stored in S3 and build business dashboards within QuickSight.

For another visualization approach, Apache Zeppelin is an open-source BI solution that you can run on Amazon EMR to
visualize data in S3 using Spark SQL. You can also use Apache Zeppelin to visualize data in Amazon Redshift.

Analytics Pipeline with AWS Services

AWS offers a broad set of services to implement an end-to-end analytics platform. Figure 2 shows the services we discussed and where they fit within the analytics pipeline.

Figure 2: Analytics pipeline with AWS services

Data Warehouse Technology Options

In this section, we discuss the options available for building a data warehouse: row-oriented databases, column-oriented databases, and massively parallel processing architectures.

Row-Oriented Databases

Row-oriented databases typically store whole rows in a physical block. High performance for read operations is achieved through secondary indexes. Databases such as Oracle Database Server, Microsoft SQL Server, MySQL, and PostgreSQL are row-oriented database systems. These systems have traditionally been used for data warehousing, but they are better suited for transactional processing (OLTP) than for analytics.

To optimize the performance of a row-based system used as a data warehouse, developers use a number of techniques, including:

• Building materialized views
• Creating pre-aggregated rollup tables
• Building indexes on every possible predicate combination
• Implementing data partitioning to leverage partition pruning by the query optimizer
• Performing index-based joins

Traditional row-based data stores are limited by the resources available on a single machine. Data marts alleviate the problem to an extent by using functional sharding. You can split your data warehouse into multiple data marts, each satisfying a specific functional area. However, when data marts grow large over time, data processing slows down.

In a row-based data warehouse, every query has to read through all of the columns for all of the rows in the blocks that satisfy the query predicate, including columns you didn't choose. This approach
creates a significant performance bottleneck in data warehouses where your tables have many columns but your queries use only a few.

Column-Oriented Databases

Column-oriented databases organize each column in its own set of physical blocks instead of packing whole rows into a block. This functionality allows them to be more input/output (I/O) efficient for read-only queries, because they have to read from disk (or from memory) only those columns accessed by a query. This approach makes column-oriented databases a better choice than row-oriented databases for data warehousing.

Figure 3 illustrates the primary difference between row-oriented and column-oriented databases: rows are packed into their own blocks in a row-oriented database, and columns are packed into their own blocks in a column-oriented database.

Figure 3: Row-oriented vs. column-oriented databases

After faster I/O, the next biggest benefit of using a column-oriented database is improved compression. Because every column is packed into its own set of blocks, every physical block contains the same data type. When all the data is the same data type, the database can use extremely efficient compression algorithms. As a result, you need less storage compared to a row-oriented database. This approach also results in significantly less I/O, because the same data is stored in fewer blocks. Some column-oriented databases that are used for data warehousing include Amazon Redshift, Vertica, Greenplum, Teradata Aster, Netezza, and Druid.

Massively Parallel Processing (MPP) Architectures

An MPP architecture enables you to use all the resources available in the cluster for processing data, which dramatically increases the performance of petabyte-scale data warehouses. MPP data warehouses allow you to improve performance by simply adding more nodes to the cluster. Amazon Redshift, Druid, Vertica, Greenplum, and Teradata Aster are some of the data warehouses built on an MPP architecture. Open-source
frameworks such as Hadoop and Spark also support MPP.

Amazon Redshift Deep Dive

As a columnar MPP technology, Amazon Redshift offers key benefits for performant, cost-effective data warehousing, including efficient compression, reduced I/O, and lower storage requirements. It is based on ANSI SQL, so you can run existing queries with little or no modification. As a result, it is a popular choice for enterprise data warehouses.

Amazon Redshift delivers fast query and I/O performance for virtually any data size by using columnar storage and by parallelizing and distributing queries across multiple nodes. It automates most of the common administrative tasks associated with provisioning, configuring, monitoring, backing up, and securing a data warehouse, making it easy and inexpensive to manage. Using this automation, you can build petabyte-scale data warehouses in minutes, instead of the weeks or months taken by traditional on-premises implementations. You can also run exabyte-scale queries by storing data on S3 and querying it using Amazon Redshift Spectrum.

Amazon Redshift also enables you to scale compute and storage separately using Amazon Redshift RA3 nodes. RA3 nodes come with Redshift Managed Storage (RMS), which leverages your workload patterns and advanced data management techniques, such as automatic fine-grained data eviction and intelligent data pre-fetching. You can size your cluster based on your compute needs only, and pay only for the storage used.

Integration with Data Lake

Amazon Redshift provides a feature called Redshift Spectrum that makes it easier to both query data and write data back to your data lake in open file formats. With Spectrum, you can query open file formats such as Parquet, ORC, JSON, Avro, CSV, and more directly in S3 using familiar ANSI SQL. To export data to your data lake, you simply use the Redshift UNLOAD command in your SQL code and specify Parquet as the file format; Redshift automatically takes care of
data formatting and data movement into S3. To query data in S3, you create an external schema if the S3 object is already cataloged, or create an external table. You can write data to external tables by running CREATE EXTERNAL TABLE AS SELECT, or by using INSERT INTO an external table. This gives you the flexibility to store highly structured, frequently accessed data in a Redshift data warehouse, while also keeping up to exabytes of structured, semi-structured, and unstructured data in S3. Exporting data from Amazon Redshift back to your data lake enables you to analyze the data further with AWS services like Amazon Athena, Amazon EMR, and Amazon SageMaker.

Performance

Amazon Redshift offers fast, industry-leading performance with flexibility. Amazon Redshift offers multiple features to achieve this superior performance, including:

• High-performing hardware — The Amazon Redshift service offers multiple node types to choose from based on your requirements. The latest-generation RA3 instances are built on the AWS Nitro System and feature high-bandwidth networking and performance indistinguishable from bare metal. These Amazon Redshift instances maximize speed for performance-intensive workloads that require large amounts of compute capacity, with the flexibility to pay by usage for storage and pay separately for compute by specifying the number of instances you need.

• AQUA (preview) — AQUA (Advanced Query Accelerator) is a distributed and hardware-accelerated cache that enables Amazon Redshift to run up to ten times faster than any other cloud data warehouse. AQUA accelerates Amazon Redshift queries by running data-intensive tasks, such as filtering and aggregation, closer to the storage layer. This avoids networking bandwidth limitations by eliminating unnecessary data movement between where data is stored and compute clusters. AQUA uses AWS-designed processors to accelerate queries, including AWS Nitro chips adapted to speed up data encryption and compression, and custom analytics processors
implemented in field-programmable gate arrays (FPGAs) to accelerate operations such as filtering and aggregation. AQUA can process large amounts of data in parallel across multiple nodes, and automatically scales out to add more capacity as your storage needs grow over time.

• Efficient storage and high-performance query processing — Amazon Redshift delivers fast query performance on datasets ranging in size from gigabytes to petabytes. Columnar storage, data compression, and zone maps reduce the amount of I/O needed to perform queries. Along with industry-standard encodings such as LZO and Zstandard, Amazon Redshift also offers the purpose-built compression encoding AZ64 for numeric and date/time types, to provide both storage savings and optimized query performance.

• Materialized views — Amazon Redshift materialized views enable you to achieve significantly faster query performance for analytical workloads, such as dashboarding, queries from BI tools, and ELT data processing jobs. You can use materialized views to store frequently used precomputations to speed up slow-running queries. Amazon Redshift can efficiently maintain the materialized views incrementally, to speed up ELT and provide low-latency performance benefits. For more information, see Creating materialized views in Amazon Redshift.

• Auto workload management to maximize throughput and performance — Amazon Redshift uses machine learning to tune its configuration to achieve high throughput and performance, even with varying workloads or concurrent user activity. Amazon Redshift utilizes sophisticated algorithms to predict and classify incoming queries based on their run times and resource requirements, dynamically managing resources and concurrency while also enabling you to prioritize your business-critical workloads. Short query acceleration (SQA) sends short queries to an express queue for immediate processing, rather than waiting behind long-running queries. You can set
the priority of your most important queries, even when hundreds of queries are being submitted. Amazon Redshift is also a self-learning system that observes the user workload continuously, detecting opportunities to improve performance as usage grows, applying optimizations seamlessly, and making recommendations via Redshift Advisor when an explicit user action is needed to further turbocharge Amazon Redshift performance.

• Result caching — Amazon Redshift uses result caching to deliver sub-second response times for repeated queries. Dashboard, visualization, and business intelligence tools that execute repeated queries experience a significant performance boost. When a query executes, Amazon Redshift searches the cache to see if there is a cached result from a prior run. If a cached result is found and the data has not changed, the cached result is returned immediately instead of re-running the query.

Durability and Availability

To provide the best possible data durability and availability, Amazon Redshift automatically detects and replaces any failed node in your data warehouse cluster. It makes your replacement node available immediately, and loads your most frequently accessed data first so that you can resume querying your data as quickly as possible. Amazon Redshift attempts to maintain at least three copies of your data: the original and a replica on the compute nodes, and a backup in S3. The cluster is in read-only mode until a replacement node is provisioned and added to the cluster, which typically takes only a few minutes.

Amazon Redshift clusters reside within one Availability Zone. However, if you want a Multi-AZ setup for Amazon Redshift, you can create a mirror and then self-manage replication and failover. With just a few clicks in the Amazon Redshift Management Console, you can set up a robust disaster recovery (DR) environment with Amazon Redshift.

Amazon Redshift automatically takes incremental snapshots (backups) of your data
every eight hours, or every five gigabytes (GB) per node of data change. You can get more information about, and control over, a snapshot, including the ability to control the automatic snapshot schedule. You can keep copies of your backups in multiple AWS Regions. In case of a service interruption in one AWS Region, you can restore your cluster from the backup in a different AWS Region. You can gain read/write access to your cluster within a few minutes of initiating the restore operation.

Elasticity and Scalability

With Amazon Redshift, you get the elasticity and scalability you need for your data warehousing workloads. You can scale compute and storage independently and pay only for what you use. With the elasticity and scalability that Amazon Redshift offers, you can easily run non-uniform and unpredictable data warehousing workloads. Amazon Redshift provides two forms of compute elasticity:

• Elastic resize — With the elastic resize feature, you can quickly resize your Amazon Redshift cluster, adding nodes to get the resources needed for demanding workloads and removing nodes when the job is complete to save cost. Additional nodes are added or removed in minutes, with minimal disruption to ongoing read and write queries. Elastic resize can be automated using a schedule you define to accommodate changes in workload that occur on a regular basis. A resize can be scheduled with a few clicks in the console, or programmatically using the AWS Command Line Interface (AWS CLI) or an API call.

• Concurrency Scaling — With the Concurrency Scaling feature, you can support virtually unlimited concurrent users and concurrent queries with consistently fast query performance. When concurrency scaling is enabled, Amazon Redshift automatically adds additional compute capacity when you need it to process an increase in concurrent read queries. Write operations continue as normal on your main cluster. Users always see the most current data, whether the queries run on the main cluster or on a concurrency scaling
cluster.

Amazon Redshift enables you to start with as little as a single 160 GB node and scale up all the way to multiple petabytes of compressed user data using many nodes. For more information, see About Clusters and Nodes in the Amazon Redshift Cluster Management Guide.

Amazon Redshift Managed Storage

Amazon Redshift managed storage enables you to scale and pay for compute and storage independently, so you can size your cluster based only on your compute needs. It automatically uses high-performance, solid-state drive (SSD)-based local storage as a tier-1 cache, and takes advantage of optimizations such as data block temperature, data block age, and workload patterns to deliver high performance, scaling storage automatically when needed without requiring any action on your part.

Operations

As a managed service, Amazon Redshift completely automates many operational tasks, including:

• Cluster performance — Amazon Redshift performs Auto ANALYZE to maintain accurate table statistics. It also performs Auto VACUUM to ensure that the database storage is efficient and that deleted data blocks are reclaimed.

• Cost optimization — Amazon Redshift enables you to pause and resume clusters that need to be available only at specific times, enabling you to suspend on-demand billing while the cluster is not being used. Pause and resume can also be automated using a schedule you define to match your operational needs. Cost controls can be defined on Amazon Redshift clusters to monitor and control your usage and the associated cost for the Amazon Redshift Spectrum and Concurrency Scaling features.

Redshift Advisor

To help you improve performance and decrease the operating costs for your cluster, Amazon Redshift has a feature called Amazon Redshift Advisor. Amazon Redshift Advisor offers you specific recommendations about changes to make. Advisor develops its customized recommendations by analyzing workload and usage metrics for your cluster. These tailored
recommendations relate to operations and cluster settings. To help you prioritize your optimizations, Advisor ranks recommendations by order of impact. You can view Amazon Redshift Advisor analysis results and recommendations in the AWS Management Console.

Interfaces

Amazon Redshift has custom Java Database Connectivity (JDBC) and Open Database Connectivity (ODBC) drivers that you can download from the Connect Client tab of the console, which means you can use a wide range of familiar SQL clients. You can also use standard PostgreSQL JDBC and ODBC drivers. For more information about Amazon Redshift drivers, see Amazon Redshift and PostgreSQL in the Amazon Redshift Database Developer Guide.

Amazon Redshift provides a built-in Query Editor in the web console. The Query Editor is an in-browser interface for running SQL queries on Amazon Redshift clusters directly from the AWS Management Console. It's a convenient way for a database administrator (DBA) or a user to run queries as needed or to diagnose queries.

You can also find numerous examples of validated integrations with many popular BI and ETL vendors. In these integrations, loads and unloads execute in parallel on each compute node to maximize the rate at which you can ingest or export data to and from multiple resources, including S3, Amazon EMR, and DynamoDB. You can easily load streaming data into Amazon Redshift using Amazon Kinesis Data Firehose, enabling near-real-time analytics with existing BI tools and dashboards. You can locate metrics for compute utilization, memory utilization, storage utilization, and read/write traffic to your Amazon Redshift data warehouse cluster by using the console or Amazon CloudWatch API operations.

Security

To help provide data security, you can run Amazon Redshift inside a virtual private cloud based on the Amazon Virtual Private Cloud (Amazon VPC) service. You can use the software-defined networking model of the VPC to define firewall rules that restrict
traffic based on the rules you configure. Amazon Redshift supports SSL-enabled connections between your client application and your Amazon Redshift data warehouse cluster, which enables data to be encrypted in transit. You can also leverage Enhanced VPC Routing to manage data flow between your Amazon Redshift cluster and other data sources; data traffic is then routed within the AWS network instead of over the public internet. The Amazon Redshift compute nodes store your data, but the data can be accessed only from the cluster's leader node. This isolation provides another layer of security. Amazon Redshift integrates with AWS CloudTrail to enable you to audit all Amazon Redshift API calls.

To help keep your data secure at rest, Amazon Redshift supports encryption and can encrypt each block using hardware-accelerated Advanced Encryption Standard (AES)-256 encryption as the block is written to disk. This encryption takes place at a low level in the I/O subsystem; the I/O subsystem encrypts everything written to disk, including intermediate query results. The blocks are backed up as is, which means that backups are also encrypted. By default, Amazon Redshift takes care of key management, but you can choose to manage your keys using your own hardware security modules or through AWS Key Management Service (AWS KMS).

Database security management is controlled by managing user access, granting the proper privileges on tables and views to user accounts or groups, and leveraging column-level grant and revoke to meet your security and compliance needs at a finer granularity. In addition, Amazon Redshift provides multiple means of authentication to secure and simplify data warehouse access. You can use AWS Identity and Access Management (AWS IAM) within your AWS account. If you already manage user identities outside of AWS via SAML 2.0-compatible identity providers, use federated authentication to enable your users to access the data warehouse
without managing database users and passwords. Amazon Redshift also supports multi-factor authentication (MFA) to provide additional security.

Cost Model

Amazon Redshift requires no long-term commitments or upfront costs. This pricing approach frees you from the capital expense and complexity of planning and purchasing data warehouse capacity ahead of your needs. Charges are based on the size and number of nodes in your cluster. If you use Amazon Redshift managed storage (RMS) with an RA3 instance, you pay separately for the amount of compute and RMS that you use.

If you need additional compute power to handle workload spikes, you can enable concurrency scaling. For every 24 hours that your main cluster runs, you accumulate one hour of credit to use this feature for free. Beyond that, you are charged the per-second on-demand rate.

There is no additional charge for backup storage up to 100 percent of your provisioned storage. For example, if you have an active cluster with two XL nodes for a total of four terabytes (TB) of storage, AWS provides up to four TB of backup storage on S3 at no additional charge. Backup storage beyond the provisioned storage size, and backups stored after your cluster is terminated, are billed at standard Amazon S3 rates. There is no data transfer charge for communication between S3 and Amazon Redshift. If you use Redshift Spectrum to access data stored in your data lake, you pay for the query cost based on how much data the query scans. For more information, see Amazon Redshift Pricing.

Ideal Usage Patterns

Amazon Redshift is ideal for OLAP using your existing BI tools. Enterprises use Amazon Redshift to do the following:

• Run enterprise BI and reporting
• Analyze global sales data for multiple products
• Store historical stock trade data
• Analyze ad impressions and clicks
• Aggregate gaming data
• Analyze social trends
• Measure clinical quality, operational efficiency, and financial performance in healthcare

With the Amazon Redshift Spectrum feature, Amazon Redshift supports semi-structured data and extends your data warehouse to your data lake. This enables you to:

• Run as-needed analysis on large-volume event data, such as log analysis and social media
• Offload infrequently accessed historical data out of the data warehouse
• Join external datasets with the data warehouse directly, without loading them into the data warehouse

Anti-Patterns

Amazon Redshift is not ideally suited for the following usage patterns:

• OLTP — Amazon Redshift is designed for data warehousing workloads, delivering extremely fast and inexpensive analytic capabilities. If you require a fast transactional system, you might want to choose a relational database system, such as Amazon Aurora or Amazon RDS, or a NoSQL database, such as Amazon DynamoDB.

• Unstructured data — Data in Amazon Redshift must be structured by a defined schema; Amazon Redshift doesn't support an arbitrary schema structure for each row. If your data is unstructured, you can perform ETL on Amazon EMR to get the data ready for loading into Amazon Redshift. For JSON data, you can store key-value pairs and use the native JSON functions in your queries.

• BLOB data — If you plan to store binary large object (BLOB) files, such as digital video, images, or music, you might want to store the data in S3 and reference its location in Amazon Redshift. In this scenario, Amazon Redshift keeps track of metadata about your binary objects (such as item name, size, date created, owner, location, and so on), but the large objects themselves are stored in S3.

Migrating to Amazon Redshift

If you decide to migrate from an existing data warehouse to Amazon Redshift, the migration strategy you choose depends on several factors:

• The size of the database and its tables and objects
• Network bandwidth between the source server and AWS
• Whether the migration and switchover to AWS will be done in one step or a
sequence of steps over time
• The data change rate in the source system
• Transformations during migration
• The partner tool that you plan to use for migration and ETL

One-Step Migration

One-step migration is a good option for small databases that don't require continuous operation. Customers can extract existing databases as comma-separated value (CSV) files, or in a columnar format like Parquet, then use services such as AWS Snowball to deliver the datasets to S3 for loading into Amazon Redshift. Customers then test the destination Amazon Redshift database for data consistency with the source. After all validations have passed, the database is switched over to AWS.

Two-Step Migration

Two-step migration is commonly used for databases of any size:

1. Initial data migration — The data is extracted from the source database, preferably during non-peak usage to minimize the impact. The data is then migrated to Amazon Redshift by following the one-step migration approach described previously.

2. Changed data migration — Data that changed in the source database after the initial data migration is propagated to the destination before switchover. This step synchronizes the source and destination databases. After all the changed data is migrated, you can validate the data in the destination database, perform the necessary tests, and, if all tests pass, switch over to the Amazon Redshift data warehouse.

Wave-Based Migration

Large-scale MPP data warehouse migration presents a challenge in terms of project complexity, and is riskier. Taking precautions to break a complex migration project into multiple logical and systematic waves can significantly reduce the complexity and risk. Start with a workload that covers a good number of data sources and subject areas with medium complexity, then add more data sources and subject areas in each subsequent wave. See Develop an application migration methodology to modernize your data warehouse with Amazon Redshift for
a description of how to migrate from the source MPP data warehouse to Amazon Redshift using the wave-based migration approach.

Tools and Additional Help for Database Migration
Several tools and technologies for data migration are available. You can use some of these tools interchangeably, or you can use other third-party or open-source tools available in the market.
1. AWS Database Migration Service supports both the one-step and the two-step migration processes. To follow the two-step migration process, you enable supplemental logging to capture changes to the source system. You can enable supplemental logging at the table or database level.
2. AWS Schema Conversion Tool (SCT) is a free tool that can convert the source database schema and a majority of the database code objects, including views, stored procedures, and functions, to a format compatible with the target database. SCT can scan your application source code for embedded SQL statements and convert them as part of a database schema conversion project. After schema conversion is complete, SCT can help migrate a range of data warehouses to Amazon Redshift using built-in data migration agents.
3. Additional data integration partner tools include:
• Informatica
• Matillion
• SnapLogic
• Talend
• BryteFlow Ingest
• SQL Server Integration Services (SSIS)
For more information on data integration and consulting partners, see Amazon Redshift Partners.
We provide technical advice, migration support, and financial assistance to help eligible customers quickly and cost-effectively migrate from legacy data warehouses to Amazon Redshift, the most popular and fastest cloud data warehouse. Qualifying customers receive advice on application architecture, migration strategies, program management, proof of concept, and employee training, customized for their technology landscape and migration goals. We offer migration assistance through Amazon Database Migration Accelerator, AWS Professional
Services, or our network of Partners. These teams and organizations specialize in a range of data warehouse and analytics technologies and bring a wealth of experience acquired by migrating thousands of data warehouses and applications to AWS. We also offer service credits to minimize the financial impact of the migration. For more information, see Migrate to Amazon Redshift.

Designing Data Warehousing Workflows
In the previous sections, we discussed the features of Amazon Redshift that make it ideally suited for data warehousing. To understand how to design data warehousing workflows with Amazon Redshift, let's look at the most common design pattern, along with an example use case.
Suppose that a multinational clothing maker has more than a thousand retail stores, sells certain clothing lines through department and discount stores, and has an online presence. From a technical standpoint, these three channels currently operate independently. They have different management, point-of-sale systems, and accounting departments. No single system merges all the related datasets together to provide the CEO with a 360-degree view across the entire business.
Suppose the CEO wants to get a company-wide picture of these channels and perform analytics such as the following:
• What trends exist across channels?
• Which geographic regions do better across channels?
• How effective are the company's advertisements and promotions?
• What trends exist across each clothing line?
• Which external forces have impacts on the company's sales; for example, the unemployment rate and weather conditions?
• What online ads are most effective?
• How do store attributes affect sales; for example, tenure of employees and management, strip mall versus enclosed mall, location of merchandise in the store, promotion endcaps, sales circulars, and in-store displays?
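Every one of these questions can only be answered once the three channels' records share a common structure. As a minimal illustration of why that reformatting step matters, the sketch below normalizes three hypothetical channel feeds into one record type and answers the regions question with plain Java collections. All class and field names here are illustrative, not from this paper; in the actual architecture the reformatting is the ETL step and the aggregation would run as SQL in Amazon Redshift.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: three differently shaped channel feeds are normalized
// into one Sale record so a single aggregation can span all channels.
public class ChannelRollup {
    static class Sale {
        final String channel, region;
        final double revenue;
        Sale(String channel, String region, double revenue) {
            this.channel = channel; this.region = region; this.revenue = revenue;
        }
    }

    // Each source system reports sales differently; "ETL" maps them to Sale.
    static Sale fromRetail(String storeRegion, double amountDollars) {
        return new Sale("retail", storeRegion, amountDollars);
    }
    static Sale fromWholesale(String partnerRegion, long amountCents) {
        return new Sale("wholesale", partnerRegion, amountCents / 100.0);
    }
    static Sale fromOnline(String shipToRegion, double amountDollars) {
        return new Sale("online", shipToRegion, amountDollars);
    }

    // "Which geographic regions do better across channels?"
    static Map<String, Double> revenueByRegion(List<Sale> warehouse) {
        Map<String, Double> totals = new TreeMap<>();
        for (Sale s : warehouse) totals.merge(s.region, s.revenue, Double::sum);
        return totals;
    }

    public static void main(String[] args) {
        List<Sale> warehouse = new ArrayList<>();
        warehouse.add(fromRetail("US-East", 1200.0));
        warehouse.add(fromWholesale("US-East", 55000));  // this source reports cents
        warehouse.add(fromOnline("EU-West", 800.0));
        System.out.println(revenueByRegion(warehouse));  // {EU-West=800.0, US-East=1750.0}
    }
}
```

The point of the sketch is only that once every record carries the same fields, a cross-channel question collapses to one aggregation; the sections that follow describe where that normalization actually happens on AWS.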
An enterprise data warehouse solves this problem. It collects data from each of the three channels' various systems and from publicly available data, such as weather and economic reports. Each data source sends data daily for consumption by the data warehouse. Clickstream data is streamed continuously and stored on S3. Because each data source might be structured differently, an ETL process is performed to reformat the data into a common structure. Then analytics can be performed across data from all sources simultaneously. To do this, we use the following data flow architecture:

Figure 4: Enterprise data warehouse workflow

1. The first step is getting the data from different sources into S3. S3 provides a highly durable, inexpensive, and scalable storage platform that can be written to in parallel from many different sources at a low cost.
2. For batch ETL, you can use either Amazon EMR or AWS Glue. AWS Glue is a fully managed ETL service that simplifies ETL job creation and eliminates the need to provision and manage infrastructure. You pay only for the resources used while your jobs are running. AWS Glue also provides a centralized metadata repository. Simply point AWS Glue to your data stored in AWS, and AWS Glue discovers your data and stores the associated table definition and schema in the AWS Glue Data Catalog. Once cataloged, your data is immediately searchable, can be queried, and is available for ETL.
3. Amazon EMR can transform and cleanse the data from the source format into the destination format. Amazon EMR has built-in integration with S3, which allows parallel threads of throughput from each node in your Amazon EMR cluster to and from S3. Typically, a data warehouse gets new data on a nightly basis. Because there is usually no need for analytics in the middle of the night, the only requirement around this transformation process is that it finishes by the morning, when the CEO and other business users need to access
reports and dashboards. You can use the Amazon EC2 Spot market to further bring down the cost of ETL. A good Spot strategy is to start bidding at a low price at midnight and continually increase your price over time until capacity is granted. As you get closer to the deadline, if Spot bids have not succeeded, you can fall back to On-Demand prices to ensure that you still meet your completion time requirements. Each source might have a different transformation process on Amazon EMR, but with the AWS pay-as-you-go model, you can create a separate Amazon EMR cluster for each transformation and tune each cluster to be exactly the right capacity to complete all data transformation jobs, without contending for resources with the other jobs.
4. Each transformation job loads formatted, cleaned data into S3. We use S3 here again because Amazon Redshift can load the data in parallel from S3, using multiple threads from each cluster node. S3 also provides a historical record and serves as the formatted source of truth between systems. Data on S3 is cataloged by AWS Glue. The metadata is stored in the AWS Glue Data Catalog, which allows it to be consumed by other tools for analytics or machine learning if additional requirements are introduced over time.
5. Amazon Redshift loads, sorts, distributes, and compresses the data into its tables so that analytical queries can execute efficiently and in parallel. If you use an RA3 instance with Amazon Redshift managed storage, Amazon Redshift can automatically scale storage as your data increases. As the business expands, you can enable Amazon Redshift concurrency scaling to handle more and more user requests and keep near-linear performance. As new workloads are added, you can increase data warehouse capacity in minutes by adding more nodes via Amazon Redshift elastic resize.
6. Clickstream data is stored on S3 via Kinesis Data Firehose hourly, or even more frequently. Because Amazon Redshift can query S3 external data via Spectrum without having to
load it into a data warehouse, you can track the customer online journey in near real time and join it with sales data in your data warehouse to understand customer behavior better. This provides a more complete picture of customers and enables business users to get insight sooner and take action.
7. To visualize the analytics, you can use Amazon QuickSight or one of the many partner visualization platforms that connect to Amazon Redshift using ODBC or JDBC. This is where the CEO and their staff view reports, dashboards, and charts. Now executives can use the data to make better decisions about company resources, which ultimately increases earnings and value for shareholders.
You can easily expand this flexible architecture when your business expands, opens new channels, launches additional customer-specific mobile applications, and brings in more data sources. It takes just a few clicks in the Amazon Redshift Management Console or a few API calls.

Conclusion
There is a strategic shift in data warehousing as enterprises migrate their analytics databases and solutions from on-premises solutions to the cloud, to take advantage of the cloud's simplicity, performance, elasticity, and cost-effectiveness. This whitepaper offers a comprehensive account of the current state of data warehousing on AWS. AWS provides a broad set of services and a strong partner ecosystem that enable customers to easily build and run enterprise data warehousing in the cloud. The result is a highly performant, cost-effective analytics architecture that can scale with your business on the AWS global infrastructure.

Contributors
Contributors to this document include:
• Anusha Challa, Sr. Analytics SSA, Amazon Web Services
• Corina Radovanovich, Sr. Product Marketing Manager, Amazon Web Services
• Juan Yu, Sr. Analytics SSA, Amazon Web Services
• Lucy Friedmann, Product Marketing Manager, Amazon Web Services
• Manan Goel, Principal Product Manager, Amazon Web Services
Further Reading
For additional information, see:
• Amazon Redshift FAQs
• Amazon Redshift lake house architecture
• Amazon Redshift customer success
• Amazon Redshift best practices
• Implementing workload management
• Querying external data using Amazon Redshift Spectrum
• Amazon Redshift Documentation
• Amazon Redshift system overview
• What is Amazon Redshift?
• AWS Key Management Service (KMS)
• Amazon Redshift JSON functions
• Amazon Redshift pricing
• Amazon Redshift Partners
• AWS Database Migration Service
• Develop an application migration methodology to modernize your data warehouse with Amazon Redshift (blog entry)
• What is Streaming Data?
• Column-oriented DBMS

Document Revisions
January 2021: Updated to include latest features and capabilities
March 2016: First publication

Database Caching Strategies Using Redis
May 2017
This paper has been archived. For the latest technical content, see https://docs.aws.amazon.com/whitepapers/latest/database-caching-strategies-using-redis/welcome.html

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Database Challenges
Types of Database Caching
Caching Patterns
Cache Aside (Lazy Loading)
Write Through
Cache Validity
Evictions
Amazon ElastiCache and Self-Managed Redis
Relational Database Caching Techniques
Cache the Database SQL ResultSet
Cache Select Fields and Values in a Custom Format
Cache Select Fields and Values into an Aggregate Redis Data Structure
Cache Serialized Application Object Entities
Conclusion
Contributors
Further Reading

Abstract
In-memory data caching can be one of the most effective strategies for improving your overall application performance and reducing your database costs. You can apply caching to any type of database, including relational databases such as Amazon Relational Database Service (Amazon RDS), or NoSQL databases such as Amazon DynamoDB, MongoDB, and Apache Cassandra. The best part of caching is that it's easy to implement, and it dramatically improves the speed and scalability of your application. This whitepaper describes some of the caching strategies and implementation approaches that address the limitations and challenges associated with disk-based databases.

Amazon Web Services – Database Caching Strategies using Redis Page 1

Database Challenges
When you're building distributed applications that require low latency and scalability, disk-based databases can pose a number of challenges. A few common ones include the following:
• Slow processing queries: There are a number of query optimization techniques and schema designs that help boost query performance. However, the data retrieval speed from disk, plus the added query processing time, generally puts your query response times in double-digit millisecond speeds, at best. This assumes that you have a steady load and your database is performing optimally.
• Cost to scale: Whether the data is distributed in a disk-based NoSQL database or vertically scaled up in a relational database, scaling for extremely high reads can be costly. It also can require several database read replicas to
match what a single in-memory cache node can deliver in terms of requests per second.
• The need to simplify data access: Although relational databases provide an excellent means to model data relationships, they aren't optimal for data access. There are instances where your applications may want to access the data in a particular structure or view to simplify data retrieval and increase application performance.
Before implementing database caching, many architects and engineers spend great effort trying to extract as much performance as they can from their databases. However, there is a limit to the performance that you can achieve with a disk-based database, and it's counterproductive to try to solve a problem with the wrong tools. For example, a large portion of the latency of your database query is dictated by the physics of retrieving data from disk.

Types of Database Caching
A database cache supplements your primary database by removing unnecessary pressure on it, typically in the form of frequently accessed read data. The cache itself can live in several areas, including in your database, in the application, or as a standalone layer. The following are the three most common types of database caches:
• Database-integrated caches: Some databases, such as Amazon Aurora, offer an integrated cache that is managed within the database engine and has built-in write-through capabilities. The database updates its cache automatically when the underlying data changes. Nothing in the application tier is required to use this cache. The downside of integrated caches is their size and capabilities. Integrated caches are typically limited to the available memory that is allocated to the cache by the database instance and can't be used for other purposes, such as sharing data with other instances.
• Local caches: A local cache stores your frequently used data within your application. This makes data retrieval faster than
other caching architectures, because it removes network traffic that is associated with retrieving data. A major disadvantage is that among your applications, each node has its own resident cache working in a disconnected manner. The information that is stored in an individual cache node, whether it's cached database rows, web content, or session data, can't be shared with other local caches. This creates challenges in a distributed environment where information sharing is critical to support scalable, dynamic environments. Because most applications use multiple application servers, coordinating the values across them becomes a major challenge if each server has its own cache. In addition, when outages occur, the data in the local cache is lost and must be rehydrated, which effectively negates the cache. The majority of these disadvantages are mitigated with remote caches.
• Remote caches: A remote cache (or "side cache") is a separate instance (or instances) dedicated to storing the cached data in memory. Remote caches are stored on dedicated servers and are typically built on key/value NoSQL stores, such as Redis and Memcached. They provide hundreds of thousands, and up to a million, requests per second per cache node. Many solutions, such as Amazon ElastiCache for Redis, also provide the high availability needed for critical workloads.
The average latency of a request to a remote cache is on the sub-millisecond timescale, which is orders of magnitude faster than a request to a disk-based database. At these speeds, local caches are seldom necessary. Remote caches are ideal for distributed environments because they work as a connected cluster that all your disparate systems can use. However, when network latency is a concern, you can apply a two-tier caching strategy that uses a local and remote cache together. This paper doesn't describe this strategy in detail, but it's typically used only when needed
because of the complexity it adds.
With remote caches, the orchestration between caching the data and managing the validity of the data is managed by your applications and/or the processes that use it. The cache itself is not directly connected to the database but is used adjacently to it. The remainder of this paper focuses on using remote caches, and specifically Amazon ElastiCache for Redis, for caching relational database data.

Caching Patterns
When you are caching data from your database, there are caching patterns for Redis and Memcached that you can implement, including proactive and reactive approaches. The patterns you choose should be directly related to your caching and application objectives. Two common approaches are cache aside, or lazy loading (a reactive approach), and write through (a proactive approach). A cache-aside cache is updated after the data is requested. A write-through cache is updated immediately when the primary database is updated. With both approaches, the application is essentially managing what data is being cached and for how long. The following diagram is a typical representation of an architecture that uses a remote distributed cache.

Figure 1: Architecture using remote distributed cache

Cache Aside (Lazy Loading)
A cache-aside cache is the most common caching strategy available. The fundamental data retrieval logic can be summarized as follows:
1. When your application needs to read data from the database, it checks the cache first to determine whether the data is available.
2. If the data is available (a cache hit), the cached data is returned and the response is issued to the caller.
3. If the data isn't available (a cache miss), the database is queried for the data. The cache is then populated with the data that is retrieved from the database, and the data is returned to the caller.

Figure 2: A cache-aside cache

This approach has a couple of advantages:
• The cache contains only data that the application actually requests, which helps keep the cache size cost-effective.
• Implementing this approach is straightforward and produces immediate performance gains, whether you use an application framework that encapsulates lazy caching or your own custom application logic.
A disadvantage of using cache aside as the only caching pattern is that, because the data is loaded into the cache only after a cache miss, some overhead is added to the initial response time, because additional round trips to the cache and database are needed.

Write Through
A write-through cache reverses the order of how the cache is populated. Instead of lazy-loading the data into the cache after a cache miss, the cache is proactively updated immediately following the primary database update. The fundamental logic can be summarized as follows:
1. The application, batch job, or backend process updates the primary database.
2. Immediately afterward, the data is also updated in the cache.

Figure 3: A write-through cache

The write-through pattern is almost always implemented along with lazy loading: if the application gets a cache miss because the data is not present or has expired, the lazy loading pattern is performed to update the cache. The write-through approach has a couple of advantages:
• Because the cache is up to date with the primary database, there is a much greater likelihood that the data will be found in the cache. This, in turn, results in better overall application performance and user experience.
• The performance of your database is optimal because fewer database reads are performed.
A disadvantage of the write-through approach is that infrequently requested data is also written to the cache, resulting in a larger and more expensive cache. A proper caching strategy includes effective use of both
write through and lazy loading of your data, and setting an appropriate expiration for the data to keep it relevant and lean.

Cache Validity
You can control the freshness of your cached data by applying a time to live (TTL), or "expiration," to your cached keys. After the set time has passed, the key is deleted from the cache, and access to the origin data store is required to retrieve the updated data.
Two principles can help you determine the appropriate TTLs to apply and the type of caching patterns to implement. First, it's important that you understand the rate of change of the underlying data. Second, it's important that you evaluate the risk of outdated data being returned to your application instead of its updated counterpart. For example, it might make sense to keep static or reference data (that is, data that is seldom updated) valid for longer periods of time, with write-throughs to the cache when the underlying data gets updated. With dynamic data that changes often, you might want to apply lower TTLs that expire the data at a rate of change that matches that of the primary database. This lowers the risk of returning outdated data while still providing a buffer to offload database requests.
It's also important to recognize that even if you are only caching data for minutes or seconds, rather than for longer durations, appropriately applying TTLs to your cached keys can result in a huge performance boost and an overall better user experience with your application.
Another best practice when applying TTLs to your cache keys is to add some time jitter to your TTLs. This reduces the possibility of heavy database load occurring when your cached data expires. Take, for example, the scenario of caching product information. If all your product data expires at the same time and your application is under heavy load, then your backend database has to fulfill all the product requests. Depending on the load,
that could generate too much pressure on your database, resulting in poor performance. By adding slight jitter to your TTLs, a randomly generated time value (for example, TTL = your initial TTL value in seconds + jitter), you reduce the pressure on your backend database and also reduce the CPU use on your cache engine as a result of deleting expired keys.

Evictions
Evictions occur when cache memory is overfilled, or exceeds the maxmemory setting for the cache, causing the engine to select keys to evict in order to manage its memory. The keys that are chosen are based on the eviction policy you select. By default, Amazon ElastiCache for Redis sets the volatile-lru eviction policy for your Redis cluster. When this policy is selected, the least recently used keys that have an expiration (TTL) value set are evicted. Other eviction policies are available and can be applied in the configurable maxmemory-policy parameter. The following summarizes the eviction policies:

allkeys-lru: The cache evicts the least recently used (LRU) keys, regardless of TTL set.
allkeys-lfu: The cache evicts the least frequently used (LFU) keys, regardless of TTL set.
volatile-lru: The cache evicts the least recently used (LRU) keys from those that have a TTL set.
volatile-lfu: The cache evicts the least frequently used (LFU) keys from those that have a TTL set.
volatile-ttl: The cache evicts the keys with the shortest TTL set.
volatile-random: The cache randomly evicts keys with a TTL set.
allkeys-random: The cache randomly evicts keys, regardless of TTL set.
noeviction: The cache doesn't evict keys at all. This blocks future writes until memory frees up.

A good strategy for selecting an appropriate eviction policy is to consider the data stored in your cluster and the outcome of keys being evicted. Generally, least recently used (LRU)-based policies are more common for basic caching use
cases. However, depending on your objectives, you might want to use a TTL-based or random-based eviction policy that better suits your requirements. Also, if you are experiencing evictions with your cluster, it is usually a sign that you should scale up (that is, use a node with a larger memory footprint) or scale out (that is, add more nodes to your cluster) to accommodate the additional data. An exception to this rule is if you are purposefully relying on the cache engine to manage your keys by means of eviction, also referred to as an LRU cache.

Amazon ElastiCache and Self-Managed Redis
Redis is an open-source, in-memory data store that has become the most popular key/value engine in the market. Much of its popularity is due to its support for a variety of data structures, as well as other features, including Lua scripting support and Pub/Sub messaging capability. Other added benefits include high availability topologies with support for read replicas and the ability to persist data.
Amazon ElastiCache offers a fully managed service for Redis. This means that all the administrative tasks associated with managing your Redis cluster, including monitoring, patching, backups, and automatic failover, are managed by Amazon. This lets you focus on your business and your data instead of your operations.
Other benefits of using Amazon ElastiCache for Redis over self-managing your cache environment include the following:
• An enhanced Redis engine that is fully compatible with the open-source version, but that also provides added stability and robustness
• Easily modifiable parameters, such as eviction policies, buffer limits, and so on
• The ability to scale and resize your cluster to terabytes of data
• Hardened security that lets you isolate your cluster within Amazon Virtual Private Cloud (Amazon VPC)
For more information about Redis or Amazon ElastiCache, see the Further Reading section at the end of this whitepaper.

Relational Database Caching Techniques
Many of the caching techniques that are described in this section can be applied to any type of database. However, this paper focuses on relational databases because they are the most common database caching use case.
The basic paradigm when you query data from a relational database includes executing SQL statements and iterating over the returned ResultSet object cursor to retrieve the database rows. There are several techniques you can apply when you want to cache the returned data. However, it's best to choose a method that simplifies your data access pattern and/or optimizes the architectural goals that you have for your application. To visualize this, we'll examine snippets of Java code to explain the logic. You can find additional information on the AWS caching site. The examples use the Jedis Redis client library for connecting to Redis, although you can use any Java Redis library, including Lettuce and Redisson.
Assume that you issued the following SQL statement against a customer database for CUSTOMER_ID 1001. We'll examine the various caching strategies that you can use.

SELECT FIRST_NAME, LAST_NAME, EMAIL, CITY, STATE, ADDRESS, COUNTRY
FROM CUSTOMERS
WHERE CUSTOMER_ID = "1001";

The query returns this record, which the application reads by iterating over the cursor:

…
Statement stmt = connection.createStatement();
ResultSet rs = stmt.executeQuery(query);
while (rs.next()) {
    Customer customer = new Customer();
    customer.setFirstName(rs.getString("FIRST_NAME"));
    customer.setLastName(rs.getString("LAST_NAME"));
    // and so on ...
}
…

Iterating over the ResultSet cursor lets you retrieve the fields and values from the database rows. From that point, the application can choose where and how to use that data. Let's also assume that your application framework can't be used to abstract your caching implementation. How do you best cache the returned database data?
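Whichever of the following options you choose, the surrounding cache-aside flow is the same. The sketch below illustrates that flow in isolation, with a plain HashMap standing in for Redis and another map standing in for the CUSTOMERS table; the class and field names are hypothetical, and in a real application the two maps would be a Jedis client and a JDBC query.

```java
import java.util.HashMap;
import java.util.Map;

// Cache-aside (lazy loading) sketch. In-memory maps stand in for Redis and
// for the database so the control flow is visible on its own.
public class CacheAsideDemo {
    private final Map<String, String> cache = new HashMap<>();  // stands in for Redis
    private final Map<String, String> database;                 // stands in for the DB
    int databaseReads = 0;                                      // exposed for the demo

    public CacheAsideDemo(Map<String, String> database) {
        this.database = database;
    }

    public String getCustomer(String customerId) {
        String key = "customer:id:" + customerId;  // naming convention used later
        String cached = cache.get(key);            // 1. check the cache first
        if (cached != null) {
            return cached;                         // 2. cache hit: return immediately
        }
        databaseReads++;                           // 3. cache miss: query the database,
        String row = database.get(customerId);     //    populate the cache, and return
        if (row != null) {
            cache.put(key, row);                   // a real client would also set a TTL
        }
        return row;
    }

    public static void main(String[] args) {
        CacheAsideDemo dao = new CacheAsideDemo(Map.of("1001", "Jane Doe"));
        dao.getCustomer("1001");                   // miss: loaded from the database
        dao.getCustomer("1001");                   // hit: served from the cache
        System.out.println("database reads: " + dao.databaseReads);  // prints 1
    }
}
```

With Jedis, the `cache.get` and `cache.put` calls become `jedis.get(key)` and a `set` followed by `expire` (or `setex`), which is exactly the write path the following sections flesh out for different value formats.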
Given this scenario, you have many options. The following sections evaluate some of them, with a focus on the caching logic.

Cache the Database SQL ResultSet
Cache a serialized ResultSet object that contains the fetched database row.
• Pro: When data retrieval logic is abstracted (for example, as in a Data Access Object, or DAO, layer), the consuming code expects only a ResultSet object and does not need to be made aware of its origination. A ResultSet object can be iterated over regardless of whether it originated from the database or was deserialized from the cache, which greatly reduces integration logic. This pattern can be applied to any relational database.
• Con: Data retrieval still requires extracting values from the ResultSet object cursor and does not further simplify data access; it only reduces data retrieval latency.

Note: When you cache the row, it's important that it's serializable. The following example uses a CachedRowSet implementation for this purpose. When you are using Redis, this is stored as a byte array value.
The following code converts the CachedRowSet object into a byte array, and then stores that byte array as a Redis byte array value. The actual SQL statement is stored as the key and converted into bytes.

…
// rs contains the ResultSet, key contains the SQL statement
if (rs != null) {
    // let's write through to the cache
    CachedRowSet cachedRowSet = new CachedRowSetImpl();
    cachedRowSet.populate(rs, 1);
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    ObjectOutput out = new ObjectOutputStream(bos);
    out.writeObject(cachedRowSet);
    byte[] redisRSValue = bos.toByteArray();
    jedis.set(key.getBytes(), redisRSValue);
    jedis.expire(key.getBytes(), ttl);
}
…

The nice thing about storing the SQL statement as the key is that it enables a transparent caching abstraction layer that hides the implementation details. The other added benefit is that you don't need to create any additional mappings
between a custom key ID and the executed SQL statement. The last statement executes an expire command to apply a TTL to the stored key. This code follows our write-through logic, in that upon querying the database, the cached value is stored immediately afterward. For lazy caching, you would initially query the cache before executing the query against the database.
To hide the implementation details, use the DAO pattern and expose a generic method for your application to retrieve the data. For example, because your key is the actual SQL statement, your method signature could look like the following:

public ResultSet getResultSet(String key); // key is sql statement

The code that calls (consumes) this method expects only a ResultSet object, regardless of what the underlying implementation details are for the interface. Under the hood, the getResultSet method executes a GET command for the SQL key, which, if present, is deserialized and converted into a ResultSet object.

public ResultSet getResultSet(String key) {
    byte[] redisResultSet = null;
    redisResultSet = jedis.get(key.getBytes());
    ResultSet rs = null;
    if (redisResultSet != null) {
        // if cached value exists, deserialize it and return it
        try {
            cachedRowSet = new CachedRowSetImpl();
            ByteArrayInputStream bis = new ByteArrayInputStream(redisResultSet);
            ObjectInput in = new ObjectInputStream(bis);
            cachedRowSet.populate((CachedRowSet) in.readObject());
            rs = cachedRowSet;
        }
        …
    } else {
        // get the ResultSet from the database, store it in the rs object, then cache it
        …
    }
    …
    return rs;
}

If the data is not present in the cache, query the database for it and cache it before returning. As mentioned earlier, a best practice would be to apply an appropriate TTL on the keys as well.
For all other caching techniques that we'll review, you should establish a naming convention for your Redis keys. A good naming convention is one that is easily predictable to applications and
developers. A hierarchical structure separated by colons is a common naming convention for keys, such as object:type:id.

Cache Select Fields and Values in a Custom Format

Cache a subset of a fetched database row into a custom structure that can be consumed by your applications.

• Pro: This approach is easy to implement. You essentially store specific retrieved fields and values into a structure such as JSON or XML and then SET that structure into a Redis string. The format you choose should be something that conforms to your application's data access pattern.

• Con: Your application is using different types of objects when querying for particular data (e.g., a Redis string and database results). In addition, you are required to parse through the entire structure to retrieve the individual attributes associated with it.

The following code stores specific customer attributes in a customer JSON object and caches that JSON object into a Redis string:

…
// rs contains the ResultSet
while (rs.next()) {
  Customer customer = new Customer();
  Gson gson = new Gson();
  JsonObject customerJSON = new JsonObject();
  customer.setFirstName(rs.getString("FIRST_NAME"));
  customerJSON.add("first_name", gson.toJsonTree(customer.getFirstName()));
  customer.setLastName(rs.getString("LAST_NAME"));
  customerJSON.add("last_name", gson.toJsonTree(customer.getLastName()));
  // and so on
  …
  jedis.set("customer:id:" + customer.getCustomerID(), customerJSON.toString());
}
…

For data retrieval, you can implement a generic method through an interface that accepts a customer key (e.g., customer:id:1001) and an SQL statement string argument. It will also return whatever structure your application requires (e.g., JSON, XML) and abstract the underlying details. Upon the initial request, the application executes a GET command on the customer key and, if the value is present, returns it and
completes the call. If the value is not present, it queries the database for the record, writes through a JSON representation of the data to the cache, and returns it.

Cache Select Fields and Values into an Aggregate Redis Data Structure

Cache the fetched database row into a specific data structure that can simplify the application's data access.

• Pro: When converting the ResultSet object into a format that simplifies access, such as a Redis Hash, your application is able to use that data more effectively. This technique simplifies your data access pattern by reducing the need to iterate over a ResultSet object or to parse a structure like a JSON object stored in a string. In addition, aggregate data structures, such as Redis Lists, Sets, and Hashes, provide various attribute-level commands for setting and getting data, eliminating the overhead associated with processing the data before being able to leverage it.

• Con: Your application is using different types of objects when querying for particular data (e.g., a Redis Hash and database results).

The following code creates a HashMap object that is used to store the customer data. The map is populated with the database data and stored in a Redis Hash:

…
// rs contains the ResultSet
while (rs.next()) {
  Customer customer = new Customer();
  Map<String, String> map = new HashMap<>();
  customer.setFirstName(rs.getString("FIRST_NAME"));
  map.put("firstName", customer.getFirstName());
  customer.setLastName(rs.getString("LAST_NAME"));
  map.put("lastName", customer.getLastName());
  // and so on
  …
  jedis.hmset("customer:id:" + customer.getCustomerID(), map);
}
…

For data retrieval, you can implement a generic method through an interface that accepts a customer ID (the key) and an SQL statement argument. It returns a HashMap to the caller. Just as in the other examples, you can hide the details of where the map is originating from. First, your application can query the cache for the
customer data using the customer ID key. If the data is not present, the SQL statement executes and retrieves the data from the database. Upon retrieval, you may also store a hash representation of that customer record to lazy load.

Unlike with JSON, the added benefit of storing your data as a hash in Redis is that you can query for individual attributes within it. Say that, for a given request, you only want to respond with specific attributes associated with the customer Hash, such as the customer name and address. This flexibility is supported in Redis, along with various other features, such as adding and deleting individual attributes in a map.

Cache Serialized Application Object Entities

Cache a subset of a fetched database row into a custom structure that can be consumed by your applications.

• Pro: Use application objects in their native application state with simple serializing and deserializing techniques. This can rapidly accelerate application performance by minimizing data transformation logic.

• Con: Advanced application development use case.

The following code converts the customer object into a byte array and then stores that value in Redis:

…
// key contains the customer ID
Customer customer = (Customer) object;
ByteArrayOutputStream bos = new ByteArrayOutputStream();
ObjectOutput out = null;
try {
  out = new ObjectOutputStream(bos);
  out.writeObject(customer);
  out.flush();
  byte[] objectValue = bos.toByteArray();
  jedis.set(key.getBytes(), objectValue);
  jedis.expire(key.getBytes(), ttl);
}
…

The key identifier is also stored as a byte representation and can be represented in the customer:id:1001 format. As the other examples show, you can create a generic method through an application interface that hides the underlying method details. In this example, when instantiating an object or hydrating one with state, the method accepts the customer ID (the key) and either returns a customer object from the
cache or constructs one after querying the backend database. First, your application queries the cache for the serialized customer object using the customer ID. If the data is not present, the SQL statement executes, and the application consumes the data, hydrates the customer entity object, and then lazy loads the serialized representation of it into the cache.

public Customer getObject(String key) {
  Customer customer = null;
  byte[] redisObject = null;
  redisObject = jedis.get(key.getBytes());
  if (redisObject != null) {
    try {
      ByteArrayInputStream in = new ByteArrayInputStream(redisObject);
      ObjectInputStream is = new ObjectInputStream(in);
      customer = (Customer) is.readObject();
    }
    …
  }
  …
  return customer;
}

Conclusion

Modern applications can't afford poor performance. Today's users have a low tolerance for slow-running applications and poor user experiences. When low latency and database scalability are critical to the success of your applications, it's imperative that you use database caching. Amazon ElastiCache provides two managed in-memory key/value stores that you can use for database caching. A managed service further simplifies using a cache in that it removes the administrative tasks associated with supporting it.

Contributors

The following individuals and organizations contributed to this document:

• Michael Labib, Specialist Solutions Architect, AWS

Further Reading

For more information, see the following resources:

• Performance at Scale with Amazon ElastiCache (AWS whitepaper)15
• Full Redis command list16

Notes

1 https://aws.amazon.com/rds/aurora/
2 https://redis.io/download
3 https://memcached.org/
4 https://aws.amazon.com/elasticache/redis/
5 https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Strategies.html
6 https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html
7 https://redis.io/topics/lru-cache
8
https://www.lua.org/
9 https://aws.amazon.com/vpc/
10 https://aws.amazon.com/caching/
11 https://github.com/xetorthio/jedis
12 https://github.com/wg/lettuce
13 https://github.com/redisson/redisson
14 http://www.oracle.com/technetwork/java/dataaccessobject-138824.html
15 https://d0.awsstatic.com/whitepapers/performance-at-scale-with-amazon-elasticache.pdf
16 https://redis.io/commands",General,consultant,Best Practices Demystifying_the_Number_of_vCPUs_for_Optimal_Workload_Performance,"Demystifying the Number of vCPUs for Optimal Workload Performance

September 2018

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Abstract
Introduction
Methodology
Discussion by Example
Best Practices
Conclusion
Contributors

Abstract
Following industry-standard rules of thumb when migrating physical servers or desktops into a virtual environment doesn't ensure optimal CPU performance after consolidation, especially for CPU-intensive workloads. This paper describes a proven, scientific methodology for benchmarking CPU performance across different CPU generations, with detailed examples, to achieve optimal performance. Learn how to choose Amazon EC2 instance types based on CPU resources and apply best practices for CPU selection with Amazon EC2.

Introduction

When you migrate physical servers or desktops to a virtual environment using a hypervisor (such as ESX, Hyper-V, KVM, or Xen), you're typically advised to follow industry-standard rules of thumb for high workload consolidation. For example, you might be advised to use 1 CPU core for every 2 virtual machines (VMs). However, this ratio might not provide a realistic estimate for CPUs with high clock speeds, such as those running at 1.6 GHz to 3.3 GHz; you should use a higher consolidation ratio with faster CPUs. New-generation CPUs provide better performance than prior-generation CPUs, even when running at the same clock speed or with the same number of CPU cores, and their price/performance ratio is better as well. So how do we benchmark the CPU performance of different CPU generations to get the optimal performance after VM consolidation?
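The methodology in the next section reduces this question to simple arithmetic. As a rough preview, here is a minimal sketch of that calculation in Java; the class, the method names, and the sample figures are illustrative assumptions, not taken from the paper:

```java
// Sketch of the normalization arithmetic developed in the Methodology
// section (Equations 1-4). All names and sample figures are illustrative.
public class CpuSizingSketch {

    // Eq. (1): performance index, doubling roughly every 18 months
    // (Moore's Law); t is months since the reference CPU was first sold.
    static double perfIndex(double monthsSinceReference) {
        return Math.pow(2.0, 0.05556 * monthsSinceReference);
    }

    // Eq. (2): current workload utilization normalized to GHz.
    static double normalizedUtilGhz(int sockets, int coresPerSocket,
                                    double freqGhz, double utilization,
                                    double perfIndex) {
        return sockets * coresPerSocket * freqGhz * utilization * perfIndex;
    }

    public static void main(String[] args) {
        // A CPU first sold 18 months after the reference has index ~2.
        double newerCpuIndex = perfIndex(18);

        // Example source server: 2 sockets x 4 cores at 2.0 GHz,
        // 60% utilized, reference-generation CPU (index 1.0).
        double normGhz = normalizedUtilGhz(2, 4, 2.0, 0.60, 1.0);

        // Eq. (3): reserve 15% headroom for workload spikes.
        double estGhz = normGhz * 1.15;

        // Eq. (4): candidate instance capacity -- 8 vCPUs (divided by 2
        // for hyper-threading) at 2.3 GHz, per-core index 1.0.
        double capacityGhz = (8 / 2.0) * 2.3 * 1.0;

        // The instance fits only if estimated utilization <= capacity.
        System.out.println("index=" + newerCpuIndex
                + " normGHz=" + normGhz
                + " estGHz=" + estGhz
                + " capacityGHz=" + capacityGhz
                + " fits=" + (estGhz <= capacityGhz));
    }
}
```

In this illustrative run, the estimated demand (about 11 GHz) exceeds the candidate's normalized capacity (9.2 GHz), so the next instance size up would be evaluated.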
As part of the answer, and to ensure predictable results, we should take a scientific approach to determining the most appropriate CPU sizing. Remember that undersizing a CPU resource can cause a poor user experience, and oversizing a CPU resource can cause wasted resources and higher Operating Expenses (OPEX), yielding a higher Total Cost of Ownership (TCO). This paper examines a proven methodology for choosing the right Amazon Elastic Compute Cloud (EC2) instance types based on CPU resources and includes detailed examples. In addition, some best practices for CPU selection with Amazon EC2 are discussed.

Methodology

Step 1: Normalize the CPU performance index (Pi) for different generation CPUs using the Moore's Law equation:1

Pi(t) = 2^(0.05556 × t)     (1)

where Pi(t) is the CPU performance index t months after the reference month t = 0, with Pi(0) = 1. In other words, if we're trying to migrate a system with CPU A, first sold in January 2015, to CPU B, first sold in June 2016, then the performance index for CPU A is Pi(0) = 1, and for CPU B it is Pi(18) = 2.

Step 2: Determine the normalized CPU utilization, in terms of clock speed (GHz), of the current workload utilization by inserting Equation (1) into Equation (2). The normalized CPU utilization (CPU Utilization (Norm)) equation is:

CPU Utilization (Norm) = #CPU × #Core × CPU Freq × CPU Utilization × Pi(t)     (2)

where

▪ #CPU = Current number of CPU sockets per physical server. For a VM, it should be equivalent to 1.
▪ #Core = Current number of CPU cores per physical server. For a VM, it should be equivalent to the number of currently deployed vCPUs. (We are assuming that there is no oversubscription in this case.) If hyper-threading is enabled, the number of CPU cores or vCPUs should be doubled.

1 In the mid-1960s, Gordon Moore, the co-founder of Intel, made the observation that computer power, measured by the number of transistors
that could be fit onto a chip, doubled every 18 months. This law has performed extremely well over the intervening years.

▪ CPU Freq = Current CPU clock speed in GHz.
▪ CPU Utilization = Current CPU utilization as a percentage.
▪ Pi(t) = Performance index of the vCPUs at month t.

Step 3: Determine the estimated CPU utilization by reserving a sufficient buffer for workload spikes. This is calculated by inserting the required headroom, as a percentage, into Equation (3). It gives a conservative estimate of the CPU sizing to avoid suboptimal performance. The estimated CPU utilization (CPU Utilization (Est)) equation is:

CPU Utilization (Est) = CPU Utilization (Norm) × (1 + Headroom)     (3)

where

▪ Headroom = Percentage of CPU resource reserved as a buffer for workload spikes.

Step 4: Refer to Amazon EC2 Instance Types to find the most appropriate CPU type for particular instance classes by using Equation (4):

CPU Utilization (Est) ≤ CPU Capacity (new) = (#vCPU (new) / 2) × CPU Freq (new) × Pi(new)(t)     (4)

where

▪ #vCPU (new) = Newly selected number of vCPUs for the Amazon EC2 instance. It is divided by 2 because hyper-threading is used on the Amazon EC2 instance.
▪ CPU Freq (new) = Newly designated CPU clock speed (GHz) for the Amazon EC2 instance.
▪ Pi(new)(t) = Performance index of the new vCPUs at month t.

Discussion by Example

Step 1: Table 1 shows the performance index, which is calculated by using Equation (1) for various CPU models. The oldest CPU model, the Xeon E5640, is used as the benchmark. Both the Xeon E5640 and E5647 models represent the current state of usage.

Table 1: CPU performance index for various CPU models

Step 2: Table 2 shows the total CPU utilization in GHz after using Equation (2) for all the physical servers' workloads that will be migrated
to Amazon EC2.

Table 2: Normalized CPU utilization in GHz

Step 3: Table 3 shows the estimated CPU utilization in GHz after we include the buffer using Equation (3).

Table 3: Estimated CPU utilization in GHz

Step 4: After reviewing Amazon EC2 Instance Types, we decided to deploy M4 instances. Table 4 shows the performance index that is calculated using Equation (1) by taking the CPU model Xeon E5-2686 v4 as the reference, t = 0.

CPU Model | CPU Frequency (GHz) | # Cores | First Sold | Performance Index | Performance Index Per Core
Xeon E5-2686 v4 | 2.30 | 18 | Jun-16 | 17.96 | 1.00

Table 4: Performance index for M4-class instances

Table 5 illustrates the CPU capacity of M4 instances after normalization.

Model | vCPU* | CPU Freq (GHz) | Mem (GiB) | SSD Storage (GB) | Perf Index Per Core | CPU Capacity new (GHz)
m4.large | 2/2 | 2.3 | 8 | EBS-only | 1.00 | 2.30
m4.xlarge | 4/2 | 2.3 | 16 | EBS-only | 1.00 | 4.60
m4.2xlarge | 8/2 | 2.3 | 32 | EBS-only | 1.00 | 9.20
m4.4xlarge | 16/2 | 2.3 | 64 | EBS-only | 1.00 | 18.40
m4.10xlarge | 40/2 | 2.3 | 160 | EBS-only | 1.00 | 46.00
m4.16xlarge | 64/2 | 2.3 | 256 | EBS-only | 1.00 | 73.60

Table 5: M4-class instances' CPU capacity after normalization

* The number of vCPUs is divided by 2 because each vCPU in an Amazon EC2 instance is a hyperthread of an Intel Xeon CPU core.

By comparing the results that you obtain from Steps 3 and 4, Table 6 demonstrates the CPU selection mapping against each source machine that is being migrated to Amazon EC2.

Host Name | CPU Model | Recommended AWS Instance Type
Server01 | Xeon E5640 | m4.large
Server02 | Xeon E5640 | m4.xlarge
Server03 | Xeon E5647 | m4.xlarge
Server04 | Xeon E5647 | m4.2xlarge

Table 6: Recommended instance type

This example didn't take into account memory, storage, or I/O factors. For actual scenarios, we should take a more holistic view to optimally balance performance and TCO savings. Amazon EC2 has many different classes of instance types, such as Compute Optimized, Memory Optimized, Storage Optimized, IO Optimized, and GPU Optimized; see https://aws.amazon.com/ec2/instance-types for more detailed information. These different classes of instance types are optimized to deliver the best performance and TCO savings depending on your application's behavior and usage characteristics.

Best Practices

1. Assess the requirements of your applications and select the appropriate Amazon EC2 instance family as a starting point for application performance testing. Amazon EC2 provides you with a variety of instance types, each with one or more size options, organized into distinct instance families that are optimized for different types of applications. You should start evaluating the performance of your applications by:

a) Identifying how your application performs on different instance families (for example, is the application compute-bound, memory-bound, or I/O-bound?).

b) Sizing your workload to identify the appropriate instance size.

There is no substitute for measuring the performance of your entire application, because application performance can be impacted by the underlying infrastructure or by software and architectural limitations. We recommend application-level testing, including the use of application profiling and load testing tools and services.

2. Normalize generations of CPUs by using Moore's Law. Processing performance is usually bound to the number of CPU cores, the clock speed, and the type of CPU hardware that an application runs on. A new CPU model will usually outperform the models that precede it, even with the same number of cores and clock speed. Therefore, you should normalize different generations of CPUs by using Moore's Law, as shown earlier in Methodology, to obtain more realistic comparison results.

3. Have a data collection period that is long enough to capture the workload utilization pattern. Workloads change over time. For analysis, your data collection period should be long
enough to show you the peak and trough utilization across your business cycle (for example, monthly or quarterly). You should use peak utilization instead of average utilization for the purposes of CPU sizing. This will ensure that you provide a consistent user experience when workloads are at peak utilization.

4. Deploy discovery tools. For large-scale environments (more than a few hundred machines), deploy automated discovery tools, such as the AWS Application Discovery Service, to perform data collection. It's critical to ensure that the discovery tools include basic inventory capabilities to collect the required CPU inventory and utilization (maximum, average, and minimum) specified in Methodology. Determine whether the discovery tool requires specific user permissions or secure/compliant ports to be opened. Also investigate whether the discovery tool requires the source machines to be rebooted to install agents; in many critical production environments, server rebooting is not permissible.

5. Allocate enough buffer for spikes. When you perform CPU sizing and capacity planning, always include a reasonable buffer of 10–15% of the total required capacity. This buffer is crucial to absorb any overlap of scheduled and unscheduled processing that may cause unexpected spikes.

6. Monitor continuously. Carry out performance benchmarks before and after migration to verify user experience acceptance levels. Deploy a cloud monitoring tool, such as Amazon CloudWatch, to monitor CPU performance. The monitoring tool should send alerts if CPU utilization exceeds a predefined threshold level. The tool should also provide reporting capability that generates relevant reports for short- and long-term capacity planning purposes.

7. Determine the right VM sizing. A VM is considered undersized, or stressed, when the amount of CPU demand peaks above 70%
for more than 1% of any one-hour period. A VM is considered oversized when the amount of CPU demand is below 30% for more than 1% of the entire 30-day range. Figure 1 and Figure 2 illustrate the stress analysis for the undersized and oversized conditions.

Figure 1: CPU undersized condition

Figure 2: CPU oversized condition

8. Deploy single-threaded applications on uniprocessor virtual machines, instead of on SMP virtual machines, for the best performance and resource use. A single-threaded application can take advantage of only a single CPU, so deploying it on a dual-processor virtual machine does not speed up the application. Instead, it causes the second virtual CPU to unnecessarily hold physical resources that other VMs could otherwise use. Uniprocessor operating system versions are for single-core machines; if used on a multi-core machine, a uniprocessor operating system will recognize and use only one of the cores. The SMP versions, while required to fully utilize multi-core machines, can also be used on single-core machines. However, due to their extra synchronization code, SMP operating systems used on single-core machines run slightly slower than a uniprocessor operating system on the same machine.

9. Consider using Amazon EC2 Dedicated Instances and Dedicated Hosts if you have compliance requirements. Dedicated Instances and Dedicated Hosts don't share hardware with other AWS accounts. To learn more about the differences between them, see aws.amazon.com/ec2/dedicated-hosts.

Conclusion

The methodology and best practices discussed in this paper give a pragmatic result for optimal performance from the selected CPU resources. This methodology has been applied to many enterprises' cloud transformation projects and has delivered more predictable performance with
significant TCO savings. Additionally, this methodology can be adopted for capacity planning and helps enterprises establish strong business justifications for platform expansion. Actual performance sizing in a cloud environment should include memory, storage, I/O, and network traffic performance metrics to give a holistic performance sizing overview.

Contributors

The following individuals and organizations contributed to this document:

• Tan Chin Khoon, Enterprise Migration Architect – APAC

For a more comprehensive and holistic example and discussion of cloud environment consolidation, please contact Tan Chin Khoon.

Document Revisions

Date | Description
September 2018 | Updated formulas and instructions
August 2016 | First publication",General,consultant,Best Practices Deploying_Microsoft_SQL_Server_on_AWS,"Deploying Microsoft SQL Server on Amazon Web Services

November 2019

This paper has been archived. For the latest technical content about the AWS Cloud, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
Amazon RDS for SQL Server
SQL Server on Amazon EC2
Hybrid Scenarios
Choosing Between Microsoft SQL Server Solutions on AWS
Amazon RDS for Microsoft
SQL Server
Starting an Amazon RDS for SQL Server Instance
Security
Performance Management
High Availability
Monitoring and Management
Managing Cost
Microsoft SQL Server on Amazon EC2
Starting a SQL Server Instance on Amazon EC2
Amazon EC2 Security
Performance Management
High Availability
Monitoring and Management
Managing Cost
Caching
Hybrid Scenarios and Data Migration
Backups to the Cloud
SQL Server Log Shipping Between On-Premises and Amazon EC2
SQL Server Always On Availability Groups Between On-Premises and Amazon EC2
AWS Database Migration Service
Comparison of Microsoft SQL Server Feature Availability on AWS
Conclusion
Contributors
Further Reading
Document Revisions

Abstract

This whitepaper explains how you can run SQL Server databases on either Amazon Relational Database Service (Amazon RDS) or Amazon Elastic Compute Cloud (Amazon EC2), and the advantages of each approach. We review in detail how to provision and monitor your SQL Server database, and how to manage scalability, performance, backup and recovery, high availability, and security in both Amazon RDS and Amazon EC2. We also describe how you can set up a disaster recovery solution between an on-premises SQL Server environment and AWS, using native SQL Server features like log shipping, replication, and Always On Availability Groups. This whitepaper helps you make an educated decision and choose the solution that best fits your needs.

Introduction

AWS offers a rich set of features to enable you to run Microsoft SQL Server–based workloads in the cloud. These features offer a variety of controls to effectively manage, scale, and tune SQL Server deployments to match your needs. This whitepaper discusses these features and controls in greater detail in the following pages.

You can run Microsoft SQL Server versions on AWS using the following
services:

• Amazon RDS
• Amazon EC2

Note: Some versions of SQL Server are dependent on Microsoft licensing. For the current supported versions, see Amazon RDS for SQL Server and Microsoft SQL Server on AWS.

Amazon RDS for SQL Server

Amazon RDS is a service that makes it easy to set up, operate, and scale a relational database in the cloud. Amazon RDS automates installation, disk provisioning and management, patching, minor and major version upgrades, failed instance replacement, and backup and recovery of your SQL Server databases. Amazon RDS also offers automated Multi-AZ (Availability Zone) synchronous replication, allowing you to set up a highly available and scalable environment fully managed by AWS.

Amazon RDS is a fully managed service, and your databases run on their own SQL Server instance with the compute and storage resources you specify. Backups, high availability, and failover are fully automated. Because of these advantages, we recommend that customers consider Amazon RDS for SQL Server first.

SQL Server on Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) is a service that provides computing capacity in the cloud. Using Amazon EC2 is similar to running a SQL Server database on-premises. You are responsible for administering the database, including backups and recovery, patching the operating system and the database, tuning the operating system and database parameters, managing security, and configuring high availability
same or different Regions Running your own relational database on Amazon EC2 is the ideal scenario if you require a maximum level of control and configurability Hybrid Scenarios You can also run SQL Server workloads in a hybrid environment For example you might have pre existing commitments on hardware or data center space that makes it impractical to be all in on cloud all at once Such commitments don’t mean you can’t take advantage of the scalability availability and cost benefits of running a portion of your workload on AWS Hybrid designs make this possible and can take many forms from leveraging AWS for long term SQL Server backups to running a secondary replica in a SQL Server Always On Availability Group Choosing Between Microsoft SQL Server Solutions on AWS For SQL Server databases both Amazon RDS and Amazon EC2 have advantages and certain limitations Amazon RDS for SQL Server is easier to set up manage and maintain Using Amazon RDS can be more cost effective than running SQL Server in Amazon EC2 and lets you focus on more important tasks such as schema and index maintenance rather than the day today administration of SQL Server and the underlying operating system Alternatively running SQL Server in Amazon EC2 gives you more control flexibility and choice Depending on your application and your requirements you might prefer one over the other Start by considering the capabilities and limitations of your proposed solution as follows: ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 3 • Does your workload fit within the features and capabilities offered by Amazon RDS for SQL Server? We will discuss these in greater detail later in this whitepaper • Do you need high availability and automated failover capabilities? If you are running a production workload high availability is a recommended best practice • Do you have the resources to manage a cluster on an ongoing basis? 
These activities include backups restores software updates availability data durability optimization a nd scaling Are the same resources better allocated to other business growth activities? Based on your answers to the preceding considerations Amazon RDS might be a better choice if the following is true: • You want to focus on business growth tasks such a s performance tuning and schema optimization and outsource the following tasks to AWS: provisioning of the database management of backup and recovery management of security patches upgrades of minor SQL Server versions and storage management • You need a highly available database solution and want to take advantage of the push button synchronous Multi AZ replication offered by Amazon RDS without having to manually set up and maintain database mirroring failover clusters or Always On Availability Gro ups • You don’t want to manage backups and most importantly point intime recoveries of your database and prefer that AWS automates and manages these processes However running SQL Server on Amazon EC2 might be the better choice if the following is true : • You need full control over the SQL Server instance including access to the operating system and software stack • Install third party agents on the host • You want your own experienced database administrators managing the databases including backups repli cation and clustering • Your database size and performance needs exceed the current maximums or other limits of Amazon RDS for SQL Server • You need to use SQL Server features or options not currently supported by Amazon RDS ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 4 • You want to run SQL Server 2017 on the Linux operating system For a detailed side byside comparison of SQL Server features available in the AWS environment see the Comparison of Microsoft SQL Server Feature Availability on AWS section Amaz on RDS for Microsoft SQL Server For the list of currently Amazon RDS 
currently supported versions and features, see Microsoft SQL Server on Amazon RDS.

Amazon RDS for SQL Server supports the following editions of Microsoft SQL Server:
• Express Edition: This edition is available at no additional licensing cost and is suitable for small workloads or proof-of-concept deployments. Microsoft limits the amount of memory and the size of the individual databases that can run on the Express Edition. This edition is not available in a Multi-AZ deployment.
• Web Edition: This edition is suitable for public, internet-accessible web workloads. This edition is not available in a Multi-AZ deployment.
• Standard Edition: This edition is suitable for most SQL Server workloads and can be deployed in Multi-AZ mode.
• Enterprise Edition: This edition is the most feature-rich edition of SQL Server, is suitable for most workloads, and can be deployed in Multi-AZ mode.

For a detailed feature comparison between the different SQL Server editions, see Editions and supported features of SQL Server on the Microsoft Developer Network (MSDN) website.

In Amazon RDS for SQL Server, the following features and options are supported, depending on the edition of SQL Server (for the most current list, see Amazon RDS for SQL Server features):
• Core database engine features
• SQL Server development tools: Visual Studio integration and IntelliSense
• SQL Server management tools: SQL Server Management Studio (SSMS), sqlcmd, SQL Server Profiler (for client-side traces), SQL Server Migration Assistant (SSMA), Database Engine Tuning Advisor, and SQL Server Agent
• Safe Common Language Runtime (CLR) for SQL Server 2016 and earlier versions
• Service Broker
• Full-text search (except semantic search)
• Secure Sockets Layer (SSL) connection support
• Transparent Data Encryption (TDE)
• Encryption of storage at rest using the AWS Key Management Service (AWS KMS) for all SQL Server license types
•
Spatial and location features
• Change tracking
• Change Data Capture
• Always On or database mirroring (used to provide the Multi-AZ capability)
• The ability to use an Amazon RDS SQL Server DB instance as a data source for reporting, analysis, and integration services
• Local time zone support
• Custom server collations

AWS frequently improves the capabilities of Amazon RDS for SQL Server. For the latest information on supported versions, features, and options, see Version and Feature Support on Amazon RDS.

Starting an Amazon RDS for SQL Server Instance
You can start a SQL Server instance on Amazon RDS in several ways:
• Interactively, using the AWS Management Console
• Programmatically, using AWS CloudFormation templates
• Using the AWS SDKs and the AWS Command Line Interface (AWS CLI)
• Using the AWS Tools for PowerShell

After the instance has been deployed, you can connect to it using standard SQL Server tools. Amazon RDS provides you with a Domain Name System (DNS) endpoint for the server, as shown in the following figure. To connect to the database, use this DNS endpoint as the SQL Server host name, along with the master user name and password configured for the instance. Always use the DNS endpoint to connect to the instance, because the underlying IP address might change.

Amazon RDS exposes the Always On availability group listener endpoint for SQL Server Multi-AZ deployments. The endpoint is visible in the console and is returned by the DescribeDBInstances API as an entry in the endpoints field. You can connect to the listener endpoint in order to have faster failover times.

Figure 1: Amazon RDS DB instance properties

Security
You can use several features and sets of controls to manage the security of your Amazon RDS DB instance:
• Network controls, which determine the network configuration underlying your DB instance
• DB instance access controls, which determine administrative and management access to your RDS resources
• Data access controls, which determine access to the data stored in the databases on your RDS DB instance
• Data at rest protection, which affects the security of the data stored in your RDS DB instance
• Data in transit protection, which affects the security of data connections to and from your RDS DB instance
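The provisioning flow described under "Starting an Amazon RDS for SQL Server Instance" can be sketched programmatically. The following builds the parameter set you might pass to boto3's `rds.create_db_instance()`; this is a minimal sketch, not a production template — every identifier, password, class, and size is an illustrative placeholder, and the engine version string is hypothetical.

```python
# Sketch: parameters for launching a SQL Server Standard Edition Multi-AZ
# instance, as you might pass them to boto3's rds.create_db_instance().
# All identifiers (names, passwords, sizes) are illustrative placeholders.

def build_create_db_instance_params(instance_id, master_password):
    """Assemble a create_db_instance parameter set for RDS for SQL Server."""
    return {
        "DBInstanceIdentifier": instance_id,
        "Engine": "sqlserver-se",           # Standard Edition
        "EngineVersion": "14.00",           # hypothetical SQL Server 2017 version string
        "LicenseModel": "license-included",
        "DBInstanceClass": "db.m5.xlarge",  # choose based on workload sizing
        "AllocatedStorage": 100,            # GiB of General Purpose (SSD) storage
        "StorageType": "gp2",
        "MultiAZ": True,                    # synchronous standby in a second AZ
        "MasterUsername": "admin",
        "MasterUserPassword": master_password,
        "PubliclyAccessible": False,        # keep the instance off the public internet
        "BackupRetentionPeriod": 7,         # days of automated backups
    }

params = build_create_db_instance_params("sqlserver-demo", "example-password")
# With boto3 installed and credentials configured, you would then call:
#   boto3.client("rds").create_db_instance(**params)
```

After the instance becomes available, you would read its DNS endpoint from `describe_db_instances` and use it as the SQL Server host name, as described above.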
Network Controls
At the network layer, controls operate at the EC2-VPC level of the deployed instance. EC2-VPC allows you to define a private, isolated section of the AWS Cloud and launch resources within it. You define the network topology, the IP addressing scheme, and the routing and traffic access control patterns. Newer AWS accounts have access only to this networking platform.

In EC2-VPC, DB subnet groups are also a security control. They allow you to narrowly control the subnets in which Amazon RDS is allowed to deploy your DB instance. You can control the flow of network traffic between subnets using route tables and network access control lists (NACLs) for stateless filtering. You can designate certain subnets specifically for database workloads, without default routes to the internet. You can also deny non-database traffic at the subnet level to reduce the exposure footprint for these instances.

Security groups are used to filter traffic at the instance level. Security groups act like a stateful firewall, similar in effect to host-based firewalls such as the Microsoft Windows Server Firewall. The rules of a security group define what traffic is allowed to enter the instance (inbound) and what traffic is allowed to exit the instance (outbound). VPC security groups are used for DB instances deployed in a VPC. They can be changed and reassigned without restarting the instances associated with them. For improved security, we recommend restricting inbound traffic to only database-related traffic (port 1433, unless a custom port number is used) and only traffic from known sources.
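The recommendation above — allow only SQL Server traffic, and only from known sources — can be sketched as a single ingress rule. This is a minimal sketch assuming boto3's `ec2.authorize_security_group_ingress()`; both security group IDs are placeholders.

```python
# Sketch: an inbound rule permitting SQL Server traffic (TCP 1433) only from
# members of an application-tier security group, as you might pass it to
# boto3's ec2.authorize_security_group_ingress(). The group IDs are placeholders.

def sql_server_ingress_rule(db_sg_id, app_sg_id, port=1433):
    """Build an ingress rule that references a source security group."""
    return {
        "GroupId": db_sg_id,  # security group attached to the DB instance
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            # Referencing a source group means any instance in that group may
            # connect, with no per-host CIDR entries to maintain.
            "UserIdGroupPairs": [{"GroupId": app_sg_id}],
        }],
    }

rule = sql_server_ingress_rule("sg-0db1111111111111a", "sg-0app222222222222b")
# With boto3: boto3.client("ec2").authorize_security_group_ingress(**rule)
```

Using a source security group rather than CIDR ranges anticipates the scalability point made in the next paragraph: new application servers gain database access simply by joining the source group.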
Security groups can also accept the ID of a different security group (called the source security group) as the source for traffic. This approach makes it easier to manage the sources of traffic to your RDS DB instance in a scalable way. In this case, you don't have to update the security group every time a new server needs to connect to your DB instance; you just have to assign the source security group to it.

Amazon RDS for SQL Server can make DB instances publicly accessible by assigning internet-routable IP addresses to them. In most use cases this is neither needed nor desired, and we recommend setting this option to No to limit the potential threat. In cases where direct access to the database over the public internet is needed, we recommend limiting the sources that can connect to the DB instance to known hosts, by using their IP addresses. For this option to be effective, the instance must be launched in a subnet that permits public access, and the security groups and NACLs must permit inbound traffic from those sources.

DB instances that are exposed publicly over the internet and have open security groups accepting traffic from any source might be subject to more frequent patching. Such instances can be force-patched when security patches are made available by the vendors involved. This patching can occur even outside the defined instance maintenance window, to ensure the safety and integrity of customer resources and our infrastructure. Although there are many ways to secure your databases, we recommend using private subnets within a VPC, with no possible direct internet access.

DB Instance Access Controls
Using AWS Identity and Access Management (IAM), you can manage access to your Amazon RDS for SQL Server instances. For example, you can authorize administrators under your AWS account (or deny them the ability) to create, describe, modify, or
delete an Amazon RDS database. You can also enforce multi-factor authentication (MFA). For more information on using IAM to manage administrative access to Amazon RDS, see Authentication and Access Control for Amazon RDS in the Amazon RDS User Guide.

Data Access Controls
Amazon RDS for SQL Server supports both SQL Authentication and Windows Authentication, and access control for authenticated users should be configured using the principle of least privilege.

A master account is created automatically when an instance is launched. This master user is granted several permissions; for details, see Master User Account Privileges. This login is typically used for administrative purposes only, and is granted the roles of processadmin, setupadmin, SQLAgentUser, Alter on SQLAgentOperator, and public at the server level. Amazon RDS manages the master user as a login and creates a user linked to the login in each customer database with the db_owner permission.

You can create additional users and databases after launch by connecting to the SQL Server instance using the tool of your choice (for example, SQL Server Management Studio). These users should be assigned only the permissions needed for the workload or application that they are supporting to operate correctly. For example, if you as the master user create a user X, who then creates a database, user X will be a member of the db_owner role for this new database, not the master user. Later on, if you reset the master password, the master user will be added to db_owner for this new database.

You can also integrate with your existing identity infrastructure based on Microsoft Active Directory and authenticate against Amazon RDS for SQL Server databases using the Windows Authentication method. Using Windows Authentication allows you to keep a single set of credentials for all your users, and saves time and effort by not having to update these credentials in multiple places.
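The IAM-based DB instance access controls described above can be sketched as a policy document. This is a minimal, hypothetical example: the account ID, Region, and instance name are placeholders, and the action list would be tailored to your administrators' actual duties.

```python
# Sketch: an IAM policy implementing the "DB Instance Access Controls" idea --
# administrators may inspect, create, and modify a DB instance, but deletion
# is explicitly denied. The ARN components are placeholders.
import json

POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowRdsAdministration",
            "Effect": "Allow",
            "Action": [
                "rds:DescribeDBInstances",
                "rds:CreateDBInstance",
                "rds:ModifyDBInstance",
            ],
            "Resource": "arn:aws:rds:us-east-1:111122223333:db:sqlserver-demo",
        },
        {
            "Sid": "DenyDeletion",
            "Effect": "Deny",
            "Action": "rds:DeleteDBInstance",
            "Resource": "*",
        },
    ],
}

policy_json = json.dumps(POLICY, indent=2)
# Attach the resulting document to the administrators' IAM group or role
# (via the console, the CLI, or an SDK).
```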
To use the Windows Authentication method with your Amazon RDS for SQL Server instance, sign up for AWS Directory Service for Microsoft Active Directory. If you don't already have a directory running, you can create a new one. You can then associate directories with both new and existing DB instances. You can use Active Directory to manage users and groups with access privileges to your SQL Server DB instance, and also join other EC2 instances to that domain.

You can also establish a one-way forest trust from an external, existing Active Directory deployment to the directory managed by AWS Directory Service. Doing so gives you the ability to authenticate the Active Directory users and groups you have already established in your organization with Amazon RDS for SQL Server instances.

You can also create SQL Server Windows logins on domain-joined DB instances for users and groups in your directory domain (or the trusted domain, if applicable). Logins can be added using a SQL client tool such as SQL Server Management Studio, using the following command (where the bracketed name is a placeholder for the domain user or group):

CREATE LOGIN [domain\username] FROM WINDOWS WITH DEFAULT_DATABASE = [master], DEFAULT_LANGUAGE = [us_english];

More information on configuring Windows Authentication with Amazon RDS for SQL Server can be found in the Using Windows Authentication topic in the Amazon RDS User Guide.

Unsupported SQL Server Roles and Permissions in Amazon RDS
The following server-level roles are not currently available in Amazon RDS: bulkadmin, dbcreator, diskadmin, securityadmin, serveradmin, and sysadmin. See Features Not Supported and Features with Limited Support.

Also, the following server-level permissions are not available on a SQL Server DB instance:
• ADMINISTER BULK OPERATIONS
• ALTER ANY CREDENTIAL
• ALTER ANY EVENT NOTIFICATION
• ALTER RESOURCES
• ALTER SETTINGS (you can use the DB parameter group API actions to modify parameters)
• AUTHENTICATE SERVER
•
CREATE DDL EVENT NOTIFICATION
• CREATE ENDPOINT
• CREATE TRACE EVENT NOTIFICATION
• EXTERNAL ACCESS ASSEMBLY
• SHUTDOWN (you can use the RDS reboot option instead)
• UNSAFE ASSEMBLY
• ALTER ANY AVAILABILITY GROUP
• CREATE ANY AVAILABILITY GROUP

Data at Rest Protection
Amazon RDS for SQL Server supports the encryption of DB instances with encryption keys managed in AWS KMS. Data that is encrypted at rest includes the underlying storage for a DB instance, its automated backups, and snapshots. You can also encrypt existing DB instances and share encrypted snapshots with other accounts within the same Region.

Amazon RDS encrypted instances use the open standard AES-256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS instance. Once your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently, with a minimal impact on performance. You don't need to modify your database client applications to use encryption. Amazon RDS encrypted instances also help secure your data from unauthorized access to the underlying storage. You can use Amazon RDS encryption to increase data protection of your applications deployed in the cloud, and to fulfill compliance requirements for data-at-rest encryption. To manage the keys used for encrypting and decrypting your Amazon RDS resources, use AWS KMS.

Amazon RDS also supports encryption of data at rest using the Transparent Data Encryption (TDE) feature of SQL Server. This feature is only available in the Enterprise Edition. You can enable TDE by setting up a custom option group with the TDE option enabled (if such a group doesn't already exist), and then associating the DB instance with that group. You can find more details on Amazon RDS support for TDE in the Options for the Microsoft SQL Server Database Engine topic in the Amazon RDS User Guide.

If full data encryption is not feasible or not desired for your workload, you can selectively encrypt table data using SQL Server column-level encryption, or by encrypting data in the application before it is saved to the DB instance.
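Enabling TDE as described above involves two steps against the RDS API. The following is a minimal sketch of the parameter sets involved, assuming boto3's `rds.modify_option_group()` and `rds.modify_db_instance()`; the group and instance names are placeholders.

```python
# Sketch: the two parameter sets involved in enabling TDE -- add the TDE
# option to a custom option group, then associate the group with the
# Enterprise Edition DB instance. Names are placeholders.

def tde_option_group_params(group_name):
    """Parameters for rds.modify_option_group() to enable the TDE option."""
    return {
        "OptionGroupName": group_name,
        "OptionsToInclude": [{"OptionName": "TDE"}],
        "ApplyImmediately": True,
    }

def attach_option_group_params(instance_id, group_name):
    """Parameters for rds.modify_db_instance() to use the custom option group."""
    return {
        "DBInstanceIdentifier": instance_id,
        "OptionGroupName": group_name,
        "ApplyImmediately": True,
    }

opts = tde_option_group_params("sqlserver-ee-tde")
attach = attach_option_group_params("sqlserver-demo", "sqlserver-ee-tde")
# With boto3:
#   rds = boto3.client("rds")
#   rds.modify_option_group(**opts)
#   rds.modify_db_instance(**attach)
```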
Data in Transit Protection
Amazon RDS for SQL Server fully supports encrypted connections to the instances using SSL. SSL support is available in all AWS Regions, for all supported SQL Server editions. Amazon RDS creates an SSL certificate for your SQL Server DB instance when the instance is created. The SSL certificate includes the DB instance endpoint as the Common Name (CN) for the SSL certificate, to help guard against spoofing attacks. You can find more details on how to use SSL encryption in Using SSL with a Microsoft SQL Server DB Instance in the Amazon RDS User Guide.

Performance Management
The performance of your SQL Server DB instance is determined primarily by your workload. Depending on your workload, you need to select the right instance type, which determines the compute capacity, amount of memory, and network capacity available to your database. Performance is also determined by the storage size and type you select when you provision the database.

Instance Sizing
The amount of memory and compute capacity available to your Amazon RDS for SQL Server instance is determined by its instance class. Amazon RDS for SQL Server offers a range of DB instance classes, from 1 vCPU and 1 GB of memory to 96 vCPUs and 488 GB of memory. Not all instance classes are available for all SQL Server editions, however, and instance class availability also varies by version. For the most up-to-date list of supported instance classes, see Amazon RDS for SQL Server instance types.

Previous-generation DB instance classes are superseded, in terms of both cost-effectiveness and performance, by the current
generation classes. For details on previous-generation instance types, see Previous Generation Instances.

Understanding the performance characteristics of your workload is important when identifying the proper instance class. If you are unsure how much CPU you need, we recommend that you start with the smallest appropriate instance class and then monitor CPU utilization using Amazon CloudWatch. You can modify the instance class for an existing Amazon RDS for SQL Server instance, giving you the flexibility to scale the instance up or down depending on the performance characteristics required. If you are in a Multi-AZ high availability configuration, making the change involves a server reboot or a failover.

To modify a SQL Server instance, see Modifying a DB Instance Running the Microsoft SQL Server Database Engine; for the list of modification settings, see Settings for Microsoft SQL Server DB Instances. The settings are similar to the ones you configure when launching a new DB instance. By default, changes (including a change to the DB instance class) are applied during the next specified maintenance window. Alternatively, you can use the apply-immediately flag to apply the changes immediately.

Disk I/O Management
Amazon RDS for SQL Server simplifies the allocation and management of database storage for instances. You decide the type and amount of storage to use, and also the level of provisioned I/O performance, if applicable. You can change the amount of storage or provisioned I/O on an RDS for SQL Server instance after the instance has been deployed. You can also enable storage auto scaling, which allows Amazon RDS to automatically increase the storage when needed so that your instance does not run out of storage space. We recommend that you enable storage auto scaling to handle growth from the onset.

Amazon RDS for SQL Server supports two types of storage, each having different characteristics and recommended use cases:
• General Purpose (SSD) storage (also called GP2)
is an SSD-backed storage solution with predictable performance and burst capabilities. This option is suitable for workloads that run in larger batches, such as nightly report processing. Credits are replenished while the instance is largely idle and are then available for bursts of batch jobs.
• Provisioned IOPS storage (or PIOPS storage) is designed to meet the needs of I/O-intensive workloads that are sensitive to storage performance and consistency in random access I/O throughput.

The following table compares the performance characteristics of the two Amazon RDS storage types.

Table 1: Amazon RDS storage performance characteristics

General Purpose (SSD):
• Minimum volume size: 20 GiB (100 GiB recommended)
• Maximum volume size: 16 TiB*
• Baseline performance: 3 IOPS/GiB
• Burst capability: Yes; up to 3,000 IOPS per volume, subject to accrued credits
• Storage technology: SSD
• Pricing criteria: Allocated storage

Provisioned IOPS:
• Minimum volume size: 20 GiB for Enterprise and Standard Editions; 100 GiB for Web and Express Editions
• Maximum volume size: 16 TiB*
• Baseline performance: 10 IOPS/GiB, up to a maximum of 64,000 IOPS
• Burst capability: No; fixed allocation
• Storage technology: SSD
• Pricing criteria: Allocated storage and provisioned IOPS

* The maximum of 64,000 IOPS is guaranteed only on Nitro-based instances on the m5 instance types.

Although the performance characteristics of instances change over time as technology and capabilities improve, there are several metrics that can be used to assess performance and help plan deployments. Different workloads and query patterns affect these metrics in different ways, making it difficult to establish a practical baseline reference in a typical environment. We recommend that you test your own workload to determine how these metrics behave in your specific use case.

For Amazon RDS, we provision and measure I/O performance in units of input/output operations per second (IOPS). We count each I/O operation per second that is 256 KiB or smaller as one IOPS.

The average queue depth, a metric available through Amazon CloudWatch, tracks the number of I/O requests in the queue that are waiting to be serviced. These requests have been submitted by the application but haven't been sent to the storage device, because the device is busy servicing other I/O requests. Time spent in the queue increases I/O latency, and large queue sizes can indicate an overloaded system from a storage perspective.

Depending on the storage configuration selected, your overall storage subsystem throughput will be limited at any time either by the maximum IOPS or by the maximum channel bandwidth. If your workload generates many small I/O operations (for example, 8 KiB), you are likely to reach the maximum IOPS before the overall bandwidth reaches the channel maximum. However, if I/O operations are large (for example, 256 KiB), you might reach the maximum channel bandwidth before the maximum IOPS.

As specified in the Microsoft documentation, SQL Server stores data in 8 KiB pages, but uses a complex set of techniques to optimize I/O patterns, with the general effect of reducing the number of I/O requests and increasing the I/O request size. This approach results in better performance by reading and writing multiple pages at the same time. Because IOPS are counted in variable-size units, Amazon RDS accommodates these multipage operations by counting every read or write operation of up to 32 pages as a single I/O operation to the storage system. SQL Server also attempts to optimize I/O by reading ahead and attempting to keep the queue length nonzero. Therefore, queue depth values that are very low or zero indicate that the storage subsystem is underutilized, and potentially overprovisioned from an I/O capacity perspective.

Using small storage sizes (less than 1 TB) with General Purpose (GP2) SSD storage can also have a detrimental impact on instance performance. If your storage size needs are low, you must ensure that the storage subsystem provides enough I/O performance to match your workload needs. Because IOPS are allocated at a ratio of 3 IOPS for each GiB of allocated GP2 storage, small storage sizes also provide only small amounts of baseline IOPS.
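The GP2 sizing arithmetic above can be sketched directly. This is back-of-the-envelope math only — the burst-credit mechanism is simplified, and the real service applies additional floors and caps not modeled here.

```python
# Sketch: GP2 sizing math from the text above -- baseline IOPS scale at
# 3 IOPS per GiB, with bursting up to 3,000 IOPS while credits last.
# This ignores the service's additional floors/caps and credit accounting.

GP2_IOPS_PER_GIB = 3
GP2_BURST_IOPS = 3000

def gp2_baseline_iops(allocated_gib):
    """Baseline IOPS for a GP2 volume of the given size."""
    return GP2_IOPS_PER_GIB * allocated_gib

def min_gib_for_sustained_iops(required_iops):
    """Smallest GP2 allocation whose baseline covers a sustained IOPS need."""
    return -(-required_iops // GP2_IOPS_PER_GIB)  # ceiling division

# A 100 GiB volume sustains 300 IOPS and can burst toward 3,000 while
# credits last; sustaining 3,000 IOPS without credits needs 1,000 GiB.
assert gp2_baseline_iops(100) == 300
assert min_gib_for_sustained_iops(GP2_BURST_IOPS) == 1000
```

The second function makes the small-volume caveat concrete: a workload that must sustain its burst-level throughput needs a much larger allocation than its capacity requirements alone would suggest.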
When created, each instance comes with an initial allocation of I/O credits, which provides for burst capabilities of up to 3,000 IOPS from the start. Once the initial burst credit allocation is exhausted, you must ensure that your ongoing workload needs fit within the baseline I/O performance of the storage size selected.

High Availability
Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Multi-AZ deployments provide increased availability, data durability, and fault tolerance for DB instances. The Multi-AZ high availability option uses SQL Server database mirroring or Always On availability groups, with additional improvements to meet the requirements of enterprise-grade production workloads running on SQL Server. The Multi-AZ deployment option provides enhanced availability and data durability by automatically replicating database updates between two AWS Availability Zones. Availability Zones are physically separate locations with independent infrastructure, engineered to be insulated from failures in other Availability Zones.

When you set up SQL Server Multi-AZ, RDS automatically configures all databases on the instance to use database mirroring or availability groups. Amazon RDS handles the primary, the witness, and the secondary DB instance for you. Because configuration is automatic, RDS selects database mirroring or Always On availability groups based on the version of SQL Server that you deploy.

Amazon RDS supports Multi-AZ with Always On availability groups for the following SQL Server versions and editions (see Multi-AZ Deployments for Microsoft
SQL Server for more information):
• SQL Server 2017: Enterprise Edition (Always On availability groups are supported in Enterprise Edition 14.00.3049.1 or later)
• SQL Server 2016: Enterprise Edition (Always On availability groups are supported in Enterprise Edition 13.00.5216.0 or later)

Amazon RDS supports Multi-AZ with database mirroring for the following SQL Server versions and editions, except for the versions of Enterprise Edition noted previously:
• SQL Server 2017: Standard and Enterprise Editions
• SQL Server 2016: Standard and Enterprise Editions
• SQL Server 2014: Standard and Enterprise Editions
• SQL Server 2012: Standard and Enterprise Editions

Amazon RDS supports Multi-AZ for SQL Server in all AWS Regions, with the following exceptions:
• US West (N. California): Neither database mirroring nor Always On availability groups are supported
• South America (São Paulo): Supported on all DB instance classes except m1 and m2
• EU (Stockholm): Neither database mirroring nor Always On availability groups are supported

When you create or modify your SQL Server DB instance to run using Multi-AZ, Amazon RDS automatically provisions a primary database in one Availability Zone and maintains a synchronous secondary replica in a different Availability Zone. In the event of planned database maintenance or an unplanned service disruption, Amazon RDS automatically fails over the SQL Server databases to the up-to-date secondary so that database operations can resume quickly without any manual intervention. If an Availability Zone failure or instance failure occurs, your availability impact is limited to the time that automatic failover takes to complete: typically 60-120 seconds for database mirroring, and 10-15 seconds for availability groups.

When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point to the secondary, which is in turn promoted to become the new primary. The canonical name record (or endpoint name) is an entry in DNS. We recommend that you implement retry logic for database connection errors in your application layer by using the canonical name, rather than attempting to connect directly to the IP address of the DB instance. We recommend this approach because, during a failover, the underlying IP address will change to reflect the new primary DB instance.
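The retry-on-failover recommendation above can be sketched as follows. This is a minimal, driver-agnostic sketch: the connect function is injected (it could wrap any SQL Server client), and the endpoint name is a placeholder.

```python
# Sketch: retry database connections against the DNS endpoint (never a cached
# IP address), so that after a Multi-AZ failover the retry naturally lands on
# the newly promoted primary. The connect function is injected to keep the
# sketch driver-agnostic.
import time

def connect_with_retry(connect, endpoint, attempts=5, base_delay=0.5):
    """Call connect(endpoint), retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return connect(endpoint)  # resolve the CNAME afresh on every try
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying

# Demonstration with a fake driver that fails twice (as during a failover),
# then succeeds once DNS points at the promoted secondary.
state = {"calls": 0}

def fake_connect(endpoint):
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("instance not reachable")
    return f"connected to {endpoint}"

result = connect_with_retry(
    fake_connect, "mydb.example.us-east-1.rds.amazonaws.com", base_delay=0.01
)
```

In a real application, the same wrapper would surround your connection-pool acquisition so that transient errors during the 10-120 second failover window are absorbed rather than surfaced to users.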
Amazon RDS automatically performs a failover in the event of any of the following:
• Loss of availability in the primary Availability Zone
• Loss of network connectivity to the primary DB node
• Compute unit failure on the primary DB node
• Storage failure on the primary DB node

Amazon RDS Multi-AZ deployments don't fail over automatically in response to database operations such as long-running queries, deadlocks, or database corruption errors. For example, suppose that a customer workload causes high resource usage on an instance, and that SQL Server times out and triggers failover of individual databases. In this case, RDS recovers the failed databases back to the primary instance.

When operations such as instance scaling, or system upgrades like OS patching, are initiated for Multi-AZ deployments, they are applied first on the secondary instance, prior to the automatic failover of the primary instance, for enhanced availability. Due to the failover optimization of SQL Server, certain workloads can generate greater I/O load on the mirror than on the principal, particularly for database mirroring deployments. This can result in higher IOPS on the secondary instance. We recommend that you consider the maximum IOPS needs of both the primary and the secondary when provisioning the storage type and IOPS of your RDS for SQL Server instance.

Monitoring and Management
Amazon CloudWatch collects many Amazon RDS-specific metrics. You can look at these metrics using the AWS Management Console, the AWS CLI (using the mon-get-stats command), the AWS API, or the AWS Tools for PowerShell (using the Get-CWMetricStatistics cmdlet).
In addition to the system-level metrics collected for Amazon EC2 instances (such as CPU usage, disk I/O, and network I/O), the Amazon RDS metrics include many database-specific metrics, such as database connections, free storage space, read and write I/O per second, read and write latency, read and write throughput, and available RAM. For a full, up-to-date list, see Amazon RDS Dimensions and Metrics in the Amazon CloudWatch Developer Guide.

In Amazon CloudWatch, you can also configure alarms on these metrics to trigger notifications when their state changes. An alarm watches a single metric over a time period you specify, and performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods. Notifications are sent to Amazon Simple Notification Service (Amazon SNS) topics or AWS Auto Scaling policies. You can configure these alarms to notify database administrators by email or SMS text message when they are triggered. You can also use notifications as triggers for custom automated response mechanisms or workflows that react to alarm events; however, you need to implement such event handlers separately.

Amazon RDS for SQL Server also supports Enhanced Monitoring, in which Amazon RDS provides metrics in near-real time for the operating system (OS) that your DB instance runs on. You can view the metrics for your instance using the console, or consume the Enhanced Monitoring JSON output from Amazon CloudWatch Logs in a monitoring system of your choice. Enhanced Monitoring gathers its metrics from an agent on the instance.

Enhanced Monitoring gives you deeper visibility into the health of your Amazon RDS instances in near-real time, providing a comprehensive set of 26 new system metrics and aggregated process information at a detail level of up to 1 second.
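The alarm mechanism described above can be sketched as the parameter set you might pass to CloudWatch's `put_metric_alarm`. This is a minimal sketch: the instance name, SNS topic ARN, and threshold are placeholders chosen for illustration.

```python
# Sketch: parameters for a CloudWatch alarm on an RDS metric -- notify an SNS
# topic when free storage drops below a threshold for three consecutive
# 5-minute periods. Names and ARNs are placeholders.

def low_storage_alarm_params(instance_id, topic_arn, threshold_bytes):
    """Parameters for cloudwatch.put_metric_alarm()."""
    return {
        "AlarmName": f"{instance_id}-low-free-storage",
        "Namespace": "AWS/RDS",
        "MetricName": "FreeStorageSpace",
        "Dimensions": [{"Name": "DBInstanceIdentifier", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,                # evaluate in 5-minute windows
        "EvaluationPeriods": 3,       # three consecutive breaches required
        "Threshold": threshold_bytes,
        "ComparisonOperator": "LessThanThreshold",
        "AlarmActions": [topic_arn],  # notify an SNS topic
    }

alarm = low_storage_alarm_params(
    "sqlserver-demo",
    "arn:aws:sns:us-east-1:111122223333:dba-alerts",
    10 * 1024 ** 3,  # alarm when free space falls below 10 GiB
)
# With boto3: boto3.client("cloudwatch").put_metric_alarm(**alarm)
```

Requiring three consecutive breach periods filters out transient dips, which matters for metrics such as free storage that fluctuate during backups and index maintenance.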
These monitoring metrics cover a wide range of instance aspects, such as the following:
• General metrics, like uptime and instance and engine version
• CPU utilization, such as idle, kernel, or user time percentage
• Disk subsystem metrics, including utilization, read and write bytes, and number of I/O operations
• Network metrics, like interface throughput and read and write bytes
• Memory utilization and availability, including physical, kernel, commit charge, system cache, and SQL Server footprint
• System metrics, consisting of the number of handles, processes, and threads
• Process list information, grouped by OS processes, RDS processes (management, monitoring, and diagnostics agents), and RDS child processes (SQL Server workloads)

Because Enhanced Monitoring delivers metrics to CloudWatch Logs, this feature incurs standard CloudWatch Logs charges. These charges depend on a number of factors:
• The number of DB instances sending metrics to CloudWatch Logs
• The level of detail of metrics sampling (finer detail results in more metrics being delivered to CloudWatch Logs)
• The workload running on the DB instance (more compute-intensive workloads have more OS process activity to report)

More information and instructions on how to enable the feature can be found in Viewing DB Instance Metrics in the Amazon RDS User Guide.

In addition to CloudWatch metrics, you can use Performance Insights and native SQL Server performance monitoring tools, such as dynamic management views, the SQL Server error log, and both client-side and server-side SQL Server Profiler traces. Performance Insights expands on existing Amazon RDS monitoring features to illustrate your database's performance and help you analyze any issues that affect it. With the Performance Insights dashboard, you can visualize the database load and filter the load by waits, SQL statements, hosts, or users. More information can be found in Using Amazon RDS Performance Insights in the Amazon Relational Database Service User Guide.

Amazon RDS for SQL Server provides two administrative windows of time, described following, designed for effective management. The service assigns default time windows to each DB instance if these aren't customized.
• Backup window: The backup window is the period of time during which your instance is backed up. Because backups might have a small performance impact on the operation of the instance, we recommend you set the window for a time when this has minimal impact on your workload.
• Maintenance window: The maintenance window is the period of time during which instance modifications (such as implementing pending changes to storage or the CPU class for the instance) and software patching occur. Your instance might be restarted during this window if there is a scheduled activity pending that requires a restart, but that is not always the case. We recommend scheduling the maintenance window for a time when your instance has the least traffic, or when a potential restart is least disruptive.

Amazon RDS for SQL Server comes with several built-in management features:
• Automated backup and recovery. Amazon RDS automatically backs up all databases of your instances. You can set the backup retention period when you create an instance; if you don't set it, Amazon RDS uses a default retention period of one day. You can modify the backup retention period; valid values are 0 (for no backup retention) to a maximum of 35 days. Automated backups occur daily during the backup window. If you select zero days of backup retention, point-in-time log backups are not taken. Amazon RDS uses these periodic data backups in conjunction with your transaction logs (backed up every 5 minutes) to enable you to restore your DB instance to any second during your retention period, up to the LatestRestorableTime, which is typically within the last 5 minutes.
• Push-button scaling.
• Push-button scaling: With a few clicks, you can change the instance class to increase or decrease the size of your instance's compute capacity, network capacity, and memory. You can choose to make the change immediately or schedule it for your next maintenance window.
• Automatic host replacement: Amazon RDS automatically replaces the compute instance powering your deployment in the event of a hardware failure.
• Automatic minor version upgrade: Amazon RDS keeps your database software up to date. You have full control over whether Amazon RDS deploys such patching automatically, and you can disable this option to prevent it. Regardless of this setting, publicly accessible instances with open security groups might be force-patched when security patches are made available by vendors, to ensure the safety and integrity of customer resources and AWS infrastructure. The patching activity occurs during the weekly 30-minute maintenance window that you specify when you provision your database (and that you can alter at any time). Such patching occurs infrequently, and your database might become unavailable during part of your maintenance window when a patch is applied. You can minimize the downtime associated with automatic patching if you run in Multi-AZ mode. In this case, the maintenance is generally performed on the secondary instance. When it is complete, the secondary instance is promoted to primary. The maintenance is then performed on the old primary, which becomes the secondary.
• Preconfigured parameters and options: Amazon RDS provides a default set of DB parameter groups, and also option groups, for each SQL Server edition and version. These groups contain configuration parameters and options, respectively, which allow you to tune the performance and features of your instance. By default, Amazon RDS provides an optimal configuration set suitable for most workloads, based on the class of the instance that you selected. You can create your own parameter and option groups to further tune the performance and features of your instance.

You can administer Amazon RDS for SQL Server databases using the same tools you use with on-premises SQL Server instances, such as SQL Server Management Studio. However, to provide you with a more secure and stable managed database experience, Amazon RDS doesn't provide desktop or administrator access to instances, and it restricts access to certain system procedures and tables that require advanced privileges, such as those granted to sa. Commands to create users, rename users, grant and revoke permissions, and set passwords work as they do in Amazon EC2 (or on-premises) databases. The administrative commands that RDS doesn't support are listed in Unsupported SQL Server Roles and Permissions in Amazon RDS.

Even though direct file-system-level access to the RDS SQL Server instance is not available, you can always migrate your data out of RDS instances. You can use tools like the Microsoft SQL Server Database Publishing Wizard to download the contents of your databases into flat T-SQL files. You can then load these files into any other SQL Server instances, store them as backups in Amazon Simple Storage Service (Amazon S3) or Amazon S3 Glacier, or keep them on-premises. In addition, you can use the AWS Database Migration Service to move data to and from Amazon RDS.

You can also use native backup and restore through S3. You can use native backups to migrate databases to Amazon RDS for SQL Server instances, or back up your RDS for SQL Server instances to S3 to copy to another SQL Server instance or to retain offline. For more details on how this works and the permissions required, see Importing and Exporting SQL Server Databases.

Managing Cost

Managing the cost of the IT infrastructure is often an important driver for cloud adoption. AWS makes running SQL Server on Amazon a cost-effective proposition
by providing a flexible, scalable environment and pricing models that allow you to pay for only the capacity you consume at any given time. Amazon RDS further reduces your costs by reducing the management and administration tasks that you have to perform.

Generally, the cost of operating an Amazon RDS instance depends on the following factors:
• The AWS Region the instance is deployed in
• The instance class and storage type selected for the instance
• The Multi-AZ mode of the instance
• The pricing model
• How long the instance is running during a given billing period

You can optimize the operating costs of your RDS workloads by controlling the factors listed above.

AWS services are available in multiple Regions across the world. In Regions where our costs of operating our services are lower, we pass the savings on to you. Thus, Amazon RDS hourly prices for the different instance classes vary by Region. If you have the flexibility to deploy your SQL Server workloads in multiple Regions, the potential savings from operating in one Region as compared to another can be an important factor in choosing the right Region.

Amazon RDS also offers different pricing models to match different customer needs:
• On-Demand Instance pricing allows you to pay for Amazon RDS DB instances by the hour with no term commitments. You incur a charge for each hour a given DB instance is running. If your workload doesn't need to run 24/7, or you are deploying temporary databases for staging, testing, or development purposes, On-Demand Instance pricing can offer significant advantages.
• Reserved Instances (RIs) allow you to lower costs and reserve capacity. Reserved Instances can save you up to 60 percent over On-Demand rates when used in steady state, which tends to be the case for many databases. They can be purchased for 1- or 3-year terms. If your SQL Server database is going to be running more than 25 percent of the time each month, you will most likely benefit financially from using a Reserved Instance. Overall savings are greater when committing to a 3-year term compared to running the same workload using On-Demand Instance pricing for the same period of time. However, the length of the term needs to be balanced against projections of growth, because the commitment is for a specific instance class. If you expect that your compute and memory needs are going to grow over time for a given DB instance, you might want to opt for a shorter 1-year term and weigh the savings from the Reserved Instance against the overhead of being over-provisioned for some part of that term.

The following pricing options are available for RDS Reserved Instances:
• With All Upfront Reserved Instances, you pay for the entire Reserved Instance with one upfront payment. This option provides you with the largest discount compared to On-Demand Instance pricing.
• With Partial Upfront Reserved Instances, you make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term.
• With No Upfront Reserved Instances, you don't make any upfront payments, but are charged a discounted hourly rate for the instance for the duration of the Reserved Instance term. This option still provides you with a significant discount compared to On-Demand Instance pricing, but the discount is usually less than for the other two Reserved Instance pricing options.

Note that, like in Amazon EC2, in Amazon RDS you can issue a stop command to a standalone DB instance and keep the instance in a stopped state to avoid incurring compute charges. You can't stop an Amazon RDS for SQL Server DB instance in a Multi-AZ configuration; instead, you can terminate the instance, take a final snapshot prior to termination, and re-create a new Amazon RDS instance from the snapshot when you need it, or remove the Multi-AZ configuration first and then stop the instance.
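The Reserved Instance break-even logic described above reduces to a simple comparison. The hourly rates in this sketch are hypothetical placeholders, not actual AWS prices; substitute current Amazon RDS pricing for your Region and instance class:

```python
# Compare On-Demand vs. Reserved Instance cost for a month, using
# HYPOTHETICAL hourly rates (not real AWS prices).

ON_DEMAND_HOURLY = 1.00       # assumed On-Demand rate, $/hour
RI_EFFECTIVE_HOURLY = 0.40    # assumed effective RI rate (60% savings)
HOURS_PER_MONTH = 730

def monthly_cost(utilization):
    """Return (on_demand, reserved) monthly cost at a given utilization.

    On-Demand is billed only for running hours; a Reserved Instance is
    effectively paid for the full term regardless of utilization.
    """
    on_demand = ON_DEMAND_HOURLY * HOURS_PER_MONTH * utilization
    reserved = RI_EFFECTIVE_HOURLY * HOURS_PER_MONTH
    return on_demand, reserved

def break_even_utilization():
    """Utilization above which the RI is cheaper than On-Demand.

    Equal to the ratio of the effective RI rate to the On-Demand rate.
    """
    return RI_EFFECTIVE_HOURLY / ON_DEMAND_HOURLY
```

With these placeholder rates the break-even point is 40 percent utilization; a deeper effective discount lowers the bar, which is how running more than roughly a quarter of the time can already favor a Reserved Instance at typical discount levels.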
Note that after 7 days, your stopped instance restarts so that any pending maintenance can be applied.

Additionally, you can use several other strategies to help optimize costs:
• Terminate DB instances with a last snapshot when they are not needed, then reprovision them from that snapshot when they need to be used again. For example, some development and test databases can be terminated at night and on weekends, and reprovisioned on weekday mornings. Alternatively, use the stop feature mentioned above to turn off the database for the weekend.
• Scale down the size of your DB instance during off-peak times by using a smaller instance class.

See the Amazon RDS for SQL Server Pricing webpage for up-to-date pricing information for all pricing models and instance classes.

Microsoft SQL Server on Amazon EC2

You can also choose to run Microsoft SQL Server on Amazon EC2, as described in the following sections.

Starting a SQL Server Instance on Amazon EC2

You can start a SQL Server DB instance on Amazon EC2 in several ways:
• Interactively, using the AWS Management Console
• Programmatically, using AWS CloudFormation templates
• Using the AWS SDKs and the AWS Command Line Interface (AWS CLI)
• Using PowerShell

For the procedure to launch an Amazon EC2 instance using the AWS Management Console, see Launch an Instance. The following points are useful when launching an Amazon EC2 instance to run SQL Server:
• You can deploy a SQL Server instance on Amazon EC2 using an Amazon Machine Image (AMI). An AMI is simply a packaged environment that includes all the necessary software to set up and boot your instance. Some AMIs have just the operating system (for example, Windows Server 2019), and others have the operating system and a version and edition of SQL Server (Windows Server 2019 and SQL Server 2017 Standard Edition, SQL Server 2017 on Ubuntu, and so on).
We recommend that you use the AMIs available at Windows AMIs. These are available in all AWS Regions. Some AMIs include an installation of a specific version and edition of SQL Server. When running an Amazon EC2 instance based on one of these AMIs, the SQL Server licensing costs are included in the hourly price to run the Amazon EC2 instance.
• Other AMIs install just the Microsoft Windows operating system. This type of AMI allows you the flexibility to perform a separate custom installation of SQL Server on the Amazon EC2 instance and bring your own license (BYOL) of Microsoft SQL Server if you have qualifying licenses. For additional information on BYOL qualification criteria, see License Mobility.
• Consider all five performance characteristics (vCPU, memory, instance storage, network bandwidth, and EBS bandwidth) of Amazon EC2 instances when selecting the EC2 instance. See Amazon EC2 Instance Types for more information.
• Depending on the type of SQL Server deployment (for example, standalone, Windows Failover Clustering and Always On Availability Groups, SQL Server on Linux, and so on), you might decide to assign one or multiple static IP addresses to your Amazon EC2 instance. You can do this assignment in the Network interface section of Configure Instance Details.
• Add the appropriate storage volumes, depending on your workload needs. For more details on selecting the appropriate volume types, see the Disk I/O Management section.
• Assign the appropriate tags to the Amazon EC2 instance. We recommend that you also assign tags to other Amazon resources (for example, Amazon Elastic Block Store (Amazon EBS) volumes) to allow for more control over resource-level permissions and cost allocation. For best practices on tagging AWS resources, see Tagging Your Amazon EC2 Resources in the Amazon EC2 User Guide.

Amazon EC2 Security

When you run SQL Server on Amazon EC2 instances, you have the responsibility to effectively protect network access to your instances with security groups, adequate operating system settings, and best practices such as limiting access to open ports and using strong passwords. In addition, you can also configure a host-based firewall or an intrusion detection and prevention system (IDS/IPS) on your instances. As with Amazon RDS, in EC2 security controls start at the network layer, with the network design itself in the VPC, along with subnets, security groups, and network access control lists, as applicable. For a more detailed discussion of these features, review the preceding Amazon RDS Security section.

Using AWS Identity and Access Management (IAM), you can control access to your Amazon EC2 resources and authorize (or deny) users the ability to manage your instances running the SQL Server database and the corresponding EBS volumes. For example, you can restrict the ability to start or stop your Amazon EC2 instances to a subset of your administrators. You can also assign IAM roles to your instances, giving them privileges to access other AWS resources that you control. For more information on how to use IAM to manage administrative access to your instances, see Controlling Access to Amazon EC2 Resources in the Amazon EC2 User Guide.

In an Amazon EC2 deployment of SQL Server, you are also responsible for patching the OS and application stack of your instances when Microsoft or other third-party vendors release new security or functional patches. This patching includes work for additional support services and instances, such as Active Directory servers.

You can encrypt the EBS data volumes of your SQL Server instances in Amazon EC2. This option is available to all editions of SQL Server deployed on Amazon EC2, and is not limited to the Enterprise Edition, unlike transparent data encryption (TDE). When you create an encrypted EBS volume and attach it to a supported instance type, data stored at rest on the volume, disk I/O, and snapshots created from the volume are
all encrypted. The encryption occurs on the servers that host Amazon EC2 instances, transparently to your instance, providing encryption of data in transit from EC2 instances to EBS storage as well. Note that encryption of boot volumes is not supported yet. Your data and associated keys are encrypted using the open standard AES-256 algorithm.

EBS volume encryption integrates with AWS KMS. This integration allows you to use your own customer master key (CMK) for volume encryption. Creating and leveraging your own CMK gives you more flexibility, including the ability to create, rotate, disable, define access controls for, and audit the encryption keys used to protect your data.

Performance Management

The performance of a relational DB instance on AWS depends on many factors, including the Amazon EC2 instance type, the configuration of the database software, the application workload, and the storage configuration. The following sections describe various options that are available to you to tune the performance of the AWS infrastructure on which your SQL Server instance is running.

Instance Sizing

AWS has many different Amazon EC2 instance types available, so you can choose the instance type that best fits your needs. These instance types vary in size, ranging from the smallest instance, the t2.micro, with 1 vCPU, 1 GB of memory, and EBS-only storage, to the largest instances, such as the d2.8xlarge, with 36 vCPUs, 244 GB of memory, 48 TB of local storage, and 10-gigabit network performance. We recommend that you choose Amazon EC2 instances that best fit your workload requirements and have a good balance of CPU, memory, and I/O performance.

SQL Server workloads are typically memory-bound, so look at the r5 or r5d instances, also referred to as memory-optimized instances. If your workload is more CPU-bound, look at the latest compute-optimized instances of the c5 instance family. See Amazon EC2 Instance Types for more information.

You can customize the number of CPU cores for the instance. You might do this to potentially optimize the licensing costs of your software, with an instance that has sufficient amounts of RAM for memory-intensive workloads but fewer CPU cores. See Optimizing CPU Options for more information.

One of the differentiators among all these instance types is that the m5, r5, and c5 instance types are EBS-optimized by default, whereas older instance types, such as the r3 family, can be optionally EBS-optimized. You can find a detailed explanation of EBS-optimized instances in the Disk I/O Management section following.

If your workload is network-bound, look at instance families that support 25-gigabit network performance, because these instance families also support Enhanced Networking. These include the r5, z1d, m5, and c5 instance families. The i3en and c5n instance types even support 100-gigabit network performance. Enhanced Networking enables you to get significantly higher packet-per-second (PPS) performance, lower network jitter, and lower latencies by using single root I/O virtualization (SR-IOV). This feature uses a new network virtualization stack that provides higher I/O performance and lower CPU utilization compared to traditional implementations. See Enhanced Networking on Windows in the Amazon EC2 User Guide.

Disk I/O Management

The same storage types available for Amazon RDS are also available when deploying SQL Server on Amazon EC2. Additionally, you also have access to instance storage. Because you have fine-grained control over the storage volumes and strategy to use, you can deploy workloads that require more than 4 TiB in size or 64,000 IOPS in Amazon EC2. Multiple EBS volumes or instance storage disks can even be striped together in a software RAID configuration to aggregate both the storage size and usable IOPS beyond the capabilities of a single volume. The two main
Amazon EC2 storage options are as follows:
• Instance store volumes: Several Amazon EC2 instance types come with a certain amount of local (directly attached) storage, which is ephemeral. These include the r5d, m5d, i3, i3en, and x1e instance types. Any data saved on instance storage is no longer available after you stop and restart that instance, or if the underlying hardware fails, which causes an instance restart to happen on a different host server. This characteristic makes instance storage a challenging option for database persistent storage. However, instance store volumes can have the following benefits:
o Instance store volumes offer good performance for sequential disk access and don't have a negative impact on your network connectivity. Some customers have found it useful to use these disks to store temporary files to conserve network bandwidth.
o Instance types with large amounts of instance storage offer unmatched I/O performance, and are recommended for database workloads as long as you implement a backup or replication strategy that addresses the ephemeral nature of this storage.
• EBS volumes: Similar to Amazon RDS, you can use EBS for persistent block-level storage volumes. Amazon EBS volumes are off-instance storage that persists independently from the life of an instance. Amazon EBS volume data is mirrored across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component. You can back them up to Amazon S3 by using snapshots. These attributes make EBS volumes suitable for data files, log files, and the flash recovery area. Although the maximum size of an EBS volume is 16 TB, you can address larger database sizes by striping your data across multiple volumes. See EBS volume characteristics for more information.

EBS-optimized instances enable Amazon EC2 instances to fully utilize the Provisioned IOPS on an EBS volume. These instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, depending on the instance type. When attached to EBS-optimized instances, Provisioned IOPS volumes are designed to deliver within 10 percent of their provisioned performance 99.9 percent of the time. The combination of EBS-optimized instances and Provisioned IOPS volumes helps to ensure that instances are capable of consistent and high EBS I/O performance. See EBS optimized by default for more information. Most databases with high I/O requirements should benefit from this feature. You can also use EBS-optimized instances with standard EBS volumes if you need predictable bandwidth between your instances and EBS. For up-to-date information about the availability of EBS-optimized instances, see Amazon EC2 Instance Types.

To scale up random I/O performance, you can increase the number of EBS volumes your data resides on, for example by using eight 100 GB EBS volumes instead of one 800 GB EBS volume. However, remember that striping generally reduces the operational durability of the logical volume by a degree inversely proportional to the number of EBS volumes in the stripe set. The more volumes you include in a stripe, the larger the pool of data that can get corrupted if a single volume fails, because the data on all other volumes of the stripe is invalidated as well. EBS volume data is natively replicated, so using RAID 0 (striping) might provide you with sufficient redundancy and availability. No other RAID mechanism is supported for EBS volumes.

Data, logs, and temporary files benefit from being stored on independent EBS volumes or volume aggregates, because they present different I/O patterns. To take advantage of additional EBS volumes, be sure to evaluate the network load to help ensure that your instance size is sufficient to provide the network bandwidth required.

i3 and i3en instances with instance storage are optimized to deliver tens of thousands of low-latency random I/O operations per second (IOPS) to applications from directly attached SSD drives.
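The striping trade-off described above can be quantified with a simple model: capacity and IOPS aggregate linearly across the stripe set, while the chance that the logical volume survives intact falls with each added member. The per-volume annual failure rate below is a made-up illustrative number, not an AWS figure:

```python
# Model the effect of striping N EBS volumes (RAID 0): IOPS and size
# aggregate linearly, but a single volume failure invalidates the whole
# stripe set, so the logical volume's failure rate grows with N.
# The annual failure rate used here is a HYPOTHETICAL placeholder.

def raid0_profile(n_volumes, iops_per_volume, gb_per_volume,
                  annual_failure_rate=0.002):
    """Return aggregate (iops, gb, probability the stripe survives a year).

    Assumes independent volume failures, so the stripe survives only if
    every member volume survives: p_survive = (1 - AFR) ** n.
    """
    survive = (1 - annual_failure_rate) ** n_volumes
    return (n_volumes * iops_per_volume,
            n_volumes * gb_per_volume,
            survive)

# Eight 100 GB volumes vs. one 800 GB volume: same capacity, 8x the
# member IOPS, but a lower probability of surviving the year intact.
single = raid0_profile(1, 3000, 800)
striped = raid0_profile(8, 3000, 100)
```

This is why the text above suggests relying on EBS's native replication when using RAID 0, and keeping backups regardless of the stripe width you choose.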
These instances provide an alternative to EBS volumes for the most I/O-demanding workloads.

Amazon EC2 offers many options to optimize and tune your I/O subsystem. We encourage you to benchmark your application on several instance types and storage configurations to select the most appropriate configuration. For EBS volumes, we recommend that you monitor the CloudWatch average queue length metric of a given volume, and target an average queue length of 1 for every 500 IOPS for volumes up to 2,000 IOPS, and a length between 4 and 8 for volumes with 2,000 to 4,000 IOPS. Lower metrics indicate overprovisioning, and higher numbers usually indicate that your storage system is overloaded.

High Availability

High availability is a design and configuration principle to help protect services or applications from single points of failure. The goal is for services and applications to continue to function even if underlying physical hardware fails or is removed or replaced. We will review three native SQL Server features that improve database high availability, and ways to deploy these features on AWS.

Log Shipping

Log shipping provides a mechanism to automatically send transaction log backups from a primary database on one DB instance to one or more secondary databases on separate DB instances. Although log shipping is typically considered a disaster recovery feature, it can also provide high availability by allowing secondary DB instances to be promoted to primary in the event of a failure of the primary DB instance.

Log shipping offers you many benefits to increase the availability of log-shipped databases. Besides the benefits of disaster recovery and high availability already mentioned, log shipping also provides access to secondary databases to use as read-only copies of the database (this access is available between restore jobs). It can also allow you to configure a lag delay, or a longer delay time, which can allow you to recover accidentally changed data on the primary database before these changes are shipped to the secondary database.

We recommend running the primary and secondary DB instances in separate Availability Zones, and optionally deploying a monitor instance to track all the details of log shipping. Backup, copy, restore, and failure events for a log shipping group are available from the monitor instance.

Database Mirroring

Database mirroring is a feature that provides a complete or almost complete mirror of a database, depending on the operating mode, on a separate DB instance. Database mirroring is the technology used by Amazon RDS to provide Multi-AZ support for Amazon RDS for SQL Server. This feature increases the availability and protection of mirrored databases, and provides a mechanism to keep mirrored databases available during upgrades.

In database mirroring, SQL Servers can take one of three roles: the principal server, which hosts the read/write principal version of the database; the mirror server, which hosts the mirror copy of the principal database; and an optional witness server. The witness server is only available in high-safety mode; it monitors the state of the database mirror and automates the failover from the principal database to the mirror database. A mirroring session is established between the principal and mirror servers, which act as partners. They perform complementary roles, as one partner assumes the principal role while the other partner assumes the mirror role. Mirroring performs all inserts, updates, and deletes that are executed against the principal database on the mirror database.

Database mirroring can be either a synchronous or an asynchronous operation. These operations are performed in the two mirroring operating modes:
• High-safety mode uses synchronous operation. In this mode, the database mirror session synchronizes the inserts, updates, and deletes from the principal database to the mirror database as quickly as possible using a synchronous operation. As soon as the database is synchronized, the transaction is committed on both partners. This mode has increased transaction latency, as each transaction needs to be committed on both the principal and mirror databases. Because of this higher latency, we recommend that partners be in the same or different Availability Zones hosted within the same AWS Region when you use this operating mode.
• High-performance mode uses asynchronous operation. Using this mode, the database mirror session synchronizes the inserts, updates, and deletes from the principal database to the mirror database using an asynchronous process. Unlike a synchronous operation, this mode can result in a lag between the time the principal database commits the transactions and the time the mirror database commits the transactions. This mode has minimum transaction latency, and is recommended when partners are in different AWS Regions.

SQL Server Always On Availability Groups

Always On availability groups is an enterprise-level feature that provides high availability and disaster recovery to SQL Server databases. Always On availability groups uses advanced features of Windows Failover Cluster and the Enterprise Edition of all versions of SQL Server from SQL Server 2012. Starting in SQL Server 2016 SP1, basic availability groups are available for Standard Edition SQL Server as well (as a replacement for database mirroring).

These availability groups support the failover of a set of user databases as one distinct unit, or group. User databases defined within an availability group consist of primary read/write databases, along with multiple sets of related secondary databases. These secondary databases can be made available to the application tier as read-only copies of the primary databases, thus providing a scale-out architecture for read workloads. You can also use the secondary databases for backup operations.

You can implement SQL Server Always On availability groups on Amazon Web Services using services like Windows Server Failover Clustering (WSFC), Amazon EC2, Amazon VPC, Active Directory, and DNS. Always On clusters require multiple subnets and need the MultiSubnetFailover=True parameter in the connection string to work correctly. See How do I create a SQL Server Always On availability group cluster in the AWS Cloud? for how to deploy SQL Server Always On availability groups. For details on how to deploy SQL Server Always On availability groups in AWS using CloudFormation, see the SQL Server on the AWS Cloud: Quick Start Reference Deployment.

Figure 2: SQL Server Always On availability group

Monitoring and Management

Amazon CloudWatch is an AWS instance monitoring service that provides detailed CPU, disk, and network utilization metrics for each Amazon EC2 instance and EBS volume. Using these metrics, you can perform detailed reporting and management. This data is available in the AWS Management Console and also through the API. Using the API allows for infrastructure automation and orchestration based on load metrics. Additionally, Amazon CloudWatch supports custom metrics, such as memory utilization or disk utilization, which are metrics visible only from within the instance. You can publish your own relevant metrics to the service to consolidate monitoring information.

You can also push custom logs to CloudWatch Logs to monitor, store, and access your log files for Amazon EC2 SQL Server instances. You can then retrieve the associated log data from CloudWatch Logs using the Amazon CloudWatch console, the CloudWatch Logs commands in the AWS CLI, or the CloudWatch Logs SDK. This approach allows you to track log events in real time for your SQL Server instances.
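As a quick illustration of the MultiSubnetFailover requirement mentioned earlier for Always On listeners, the snippet below assembles an ODBC-style connection string. The listener name, database, and driver are hypothetical placeholders; note that the ODBC form of the keyword is MultiSubnetFailover=Yes, while the ADO.NET form cited above is MultiSubnetFailover=True:

```python
# Build a connection string for an Always On availability group listener.
# The listener name, database, and driver below are hypothetical examples.

def ag_connection_string(listener, database,
                         driver="ODBC Driver 17 for SQL Server"):
    """Connection string targeting an AG listener across multiple subnets.

    MultiSubnetFailover=Yes makes the client try all listener IP addresses
    in parallel, which is required for fast failover in multi-subnet
    (multi-AZ) deployments.
    """
    parts = {
        "Driver": "{%s}" % driver,
        "Server": listener,
        "Database": database,
        "Trusted_Connection": "Yes",
        "MultiSubnetFailover": "Yes",  # ODBC spelling; ADO.NET uses True
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())

conn_str = ag_connection_string("aglistener.example.corp,1433", "SalesDB")
```

Without this flag, clients try listener addresses sequentially and can time out during a cross-subnet failover even though the new primary is already online.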
As with Amazon RDS, you can configure alarms on Amazon EC2, Amazon EBS, and custom metrics to trigger notifications when the state changes. An alarm tracks a single metric over a time period you specify, and performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods. Notifications are sent to Amazon SNS topics or AWS Auto Scaling policies. You can configure these alarms to notify database administrators by email or SMS text message when they are triggered.

In addition, you can use Microsoft and third-party monitoring tools that have built-in SQL Server monitoring capabilities. Amazon EC2 SQL Server monitoring can be integrated with System Center Operations Manager (SCOM). Open-source monitoring frameworks, such as Nagios, can also be run on Amazon EC2 to monitor your whole AWS environment, including your SQL Server databases.

The management of a SQL Server database on Amazon EC2 is similar to the management of an on-premises database. You can use SQL Server Management Studio, SQL Server Configuration Manager, SQL Server Profiler, and other Microsoft and third-party tools to perform administration or tuning tasks. AWS also offers the AWS Add-ins for Microsoft System Center to extend the functionality of your existing Microsoft System Center implementation, so you can monitor and control AWS resources from the same interface as your on-premises resources. These add-ins are currently available at no additional cost for SCOM versions 2007 and 2012 and System Center Virtual Machine Manager (SCVMM).

Although you can use Amazon EBS snapshots as a mechanism to back up and restore EBS volumes, the service does not integrate with the Volume Shadow Copy Service (VSS). You can take a snapshot of an attached volume that is in use. However, VSS integration is required to ensure that the disk I/O of SQL Server is temporarily paused during the snapshot process. Any data that has not been persisted to disk by SQL Server or the operating system at the time of the EBS snapshot is excluded from the snapshot.
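The alarm semantics described earlier in this section (a metric evaluated against a threshold over a number of periods) can be sketched as a small evaluator. The metric values and threshold are illustrative only, and the real service supports more comparison operators and missing-data handling than this minimal version:

```python
# Minimal sketch of CloudWatch-style alarm evaluation: an alarm fires
# when the metric breaches the threshold for N consecutive periods.
# The CPU samples and threshold below are illustrative only.

def alarm_state(datapoints, threshold, evaluation_periods):
    """Return 'ALARM' if the last `evaluation_periods` datapoints all
    exceed `threshold`, otherwise 'OK'.

    Mirrors a basic greater-than comparison; the real CloudWatch service
    also supports other operators and missing-data treatments.
    """
    if len(datapoints) < evaluation_periods:
        return "OK"  # not enough data to evaluate yet
    recent = datapoints[-evaluation_periods:]
    return "ALARM" if all(v > threshold for v in recent) else "OK"

# CPU utilization samples (percent), one per period:
cpu = [42, 55, 91, 93, 95]
state = alarm_state(cpu, threshold=90, evaluation_periods=3)
```

Requiring several consecutive breaching periods, rather than reacting to a single datapoint, is what keeps a brief CPU spike on a busy SQL Server instance from paging an administrator.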
data that has not been per sisted to disk by SQL Server or the operating system at the time of the EBS snapshot is excluded from the snapshot Lacking coordination with VSS there is a risk that the snapshot will not be consistent and the database files can potentially get corrupte d For this reason we recommend using third party backup solutions that are designed for SQL Server workloads ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 34 Managing Cost AWS elastic and scalable infrastructure and services make running SQL Server on Amazon a cost effective proposition by tracking d emand more closely and reducing overprovisioning As with Amazon RDS the costs of running SQL Server on Amazon EC2 depend on several factors Because you have more control over your infrastructure and resources when deploying SQL Server on Amazon EC2 the re are a few additional dimensions to optimize cost on compared to Amazon RDS: • The AWS Region the instance is deployed in • Instance type and EBS optimization • The type of instance tenancy selected • The high availability solution selected • The storage type and size selected for the EC2 instance • The Multi AZ mode of the instance • The pricing model • How long it is running during a given billing period • Underlying Operating system (Windows or Linux) As with Amazon RDS Amazon EC2 hourly instance costs vary by the Region If you have flexibility about where you can deploy your workloads geographically we recommend deploying your workload in the Region with the cheapest EC2 costs for your particular us e case Different instance types have different hourly charges Generally current generation instance types have lower hourly charges compared to previous generation instance types along with better performance due to newer hardware architectures We recommend that you test your workloads on new instance types as these become available and plan to migrate your workloads to new instance types if the c ost vs 
performance ratio makes sense for your use case.

Many EC2 instance types are available with the EBS-optimized option. This option is available for an additional hourly surcharge and provides additional dedicated networking capacity for EBS I/O. This dedicated capacity ensures a predictable amount of networking capacity to sustain predictable EBS I/O. Some current generation instance types, such as the C4, M4, and D2 instance types, are EBS-optimized by default and don't carry an additional surcharge for the optimization.

Dedicated Instances are Amazon EC2 instances that run in a VPC on hardware that's dedicated to a single customer. Your Dedicated Instances are physically isolated at the host hardware level from your instances that aren't Dedicated Instances, and from instances that belong to other AWS accounts. We recommend deploying EC2 SQL Server instances in dedicated tenancy if you have certain regulatory needs. Dedicated tenancy has a per-Region surcharge for each hour a customer runs at least one instance in dedicated tenancy. The hourly cost for instance types operating in dedicated tenancy is different from standard tenancy. Up-to-date pricing information is available on the Amazon EC2 Dedicated Instances pricing page.

You also have the option to provision EC2 Dedicated Hosts. These are physical servers with EC2 instance capacity fully dedicated to your use. Dedicated Hosts can help you address compliance requirements and reduce costs by allowing you to use your existing server-bound software licenses. For more information, see Amazon EC2 Dedicated Hosts and Bring license to AWS.

Amazon EC2 Reserved Instances allow you to lower costs and reserve capacity. Reserved Instances can save you up to 70 percent over On-Demand rates when used in steady state. They can be purchased for one- or three-year terms. If your SQL Server database is going to be running more than 60 percent of the time, you will
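most likely come out ahead with a Reserved Instance.

To see why, compare the Reserved Instance's amortized hourly cost (its total cost spread over every hour of the term, owed whether the instance runs or not) against the On-Demand rate for the hours you actually run. The sketch below uses hypothetical placeholder rates, not real AWS prices, and the function name is illustrative:

```python
# Sketch: break-even utilization for a Reserved Instance vs. On-Demand.
# The rates used here are hypothetical placeholders, not real AWS prices.

def breakeven_utilization(on_demand_hourly: float, ri_amortized_hourly: float) -> float:
    """Fraction of the term the instance must run for the RI to cost less.

    ri_amortized_hourly is the RI's total cost (upfront payment plus any
    recurring hourly charge) divided by the number of hours in the term.
    """
    return ri_amortized_hourly / on_demand_hourly

# Placeholder rates: On-Demand $1.00/hour vs. an RI that amortizes
# to $0.60 for every hour of the term.
u = breakeven_utilization(on_demand_hourly=1.00, ri_amortized_hourly=0.60)
print(f"Break-even utilization: {u:.0%}")  # prints "Break-even utilization: 60%"
```

With rates like these, the break-even point sits at 60 percent utilization; above it, you will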
most likely financially benefit from using a Reserved Instance. Unlike with On-Demand pricing, the capacity reservation is made for the entire duration of the term, whether a specific instance is using the reserved capacity or not. The following pricing options are available for EC2 Reserved Instances:
• All Upfront Reserved Instances: You pay for the entire Reserved Instance with one upfront payment. This option provides you with the largest discount compared to On-Demand Instance pricing.
• Partial Upfront Reserved Instances: You make a low upfront payment, and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term.
• No Upfront Reserved Instances: You don't make any upfront payments, but are charged a discounted hourly rate for the instance for the duration of the Reserved Instance term. This option still provides you with a significant discount compared to On-Demand Instance pricing, but the discount is usually less than for the other two Reserved Instance pricing options.

Additionally, the following options can be combined to reduce your cost of operating SQL Server on EC2:
• Use the Windows Server with SQL Server AMIs, where licensing is included. The cost of the SQL Server license is included in the hourly cost of the instance, so you are only paying for the SQL Server license when the instance is running. This approach is especially effective for databases that are not running 24/7 and for short projects.
• Shut down DB instances when they are not needed. For example, some development and test databases can be shut down at night and on weekends, and restarted on weekday mornings.
• Scale down the size of your databases during off-peak times.
• Use the Optimize CPU options.

Caching

Whether using SQL Server on Amazon EC2 or Amazon RDS, SQL Server users confronted with heavy workloads should look into reducing this load by
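caching.

As a minimal illustration of the cache-aside pattern described here, consider the sketch below. A plain dict stands in for a real cache such as Memcached or Redis, and fetch_from_database is a hypothetical placeholder for an actual SQL Server query:

```python
# Cache-aside sketch: the dict stands in for Memcached/Redis, and
# fetch_from_database is a hypothetical stand-in for a real query.

cache = {}

def fetch_from_database(key: str) -> str:
    # Placeholder: a real application would run a query against
    # SQL Server here and return the result.
    return f"row-for-{key}"

def get(key: str) -> str:
    if key in cache:                      # 1. check the cache first
        return cache[key]
    value = fetch_from_database(key)      # 2. on a miss, hit the database
    cache[key] = value                    # 3. populate the cache for repeat reads
    return value

get("customer:42")         # miss: queries the database, fills the cache
print(get("customer:42"))  # hit: served from the cache
```

In practice, this means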
caching data so that the web and application servers don't have to repeatedly access the database for common or repeat datasets. Deploying a caching layer between the business logic layer and the database is a common architectural design pattern to reduce the amount of read traffic and connections to the database itself. The effectiveness of the cache depends largely on the following aspects:
• Generally, the more read-heavy the query patterns of the application are on the database, the more effective caching can be.
• Commonly, the more repetitive query patterns are, with queries returning infrequently changing datasets, the more you can benefit from caching.

Leveraging caching usually requires changes to applications. The logic of checking, populating, and updating a cache is normally implemented in the application data and database abstraction layer, or in an Object Relational Mapper (ORM).

Several tools can address your caching needs. You have the option to use a managed service, similar to Amazon RDS but for caching engines. You can also choose from different caching engines that have slightly different feature sets:
• Amazon ElastiCache: In a similar fashion to Amazon RDS, ElastiCache allows you to provision fully managed caching clusters supporting both Memcached and Redis. ElastiCache simplifies and offloads the management, monitoring, and operation of a Memcached or Redis environment, enabling you to focus on the differentiating parts of your applications.
• Memcached: An open-source, high-performance, distributed, in-memory object caching system. Memcached is an in-memory object store for small chunks of arbitrary data (strings, objects), such as results of database calls. Memcached is widely adopted and mostly used to speed up dynamic web applications by alleviating database load.
• Redis: An open-source, high-performance, in-memory key-value NoSQL data engine. Redis stores structured key-value data
and provides rich query capabilities over your data. The contents of the data store can also be persisted to disk. Redis is widely adopted to speed up a variety of analytics workloads by storing and querying more complex or aggregate datasets in memory, relieving some of the load on backend SQL databases.

Hybrid Scenarios and Data Migration

Some AWS customers already have SQL Server running in their on-premises or colocated data center, but want to use the AWS Cloud to enhance their architecture to provide a more highly available solution, or one that offers disaster recovery. Other customers are looking to migrate workloads to AWS without incurring significant downtime. These efforts often can stretch over a significant amount of time. AWS offers several services and tools to assist customers in these use cases, and SQL Server has several replication technologies that offer high availability and disaster recovery solutions. These features differ depending on the SQL Server version and edition.

Amazon RDS on VMware lets you deploy managed databases in on-premises VMware environments using the Amazon RDS technology enjoyed by hundreds of thousands of AWS customers. Amazon RDS provides cost-efficient and resizable capacity while automating time-consuming administration tasks, including hardware provisioning, database setup, patching, and backups, freeing you to focus on your applications. RDS on VMware brings these same benefits to your on-premises deployments, making it easy to set up, operate, and scale databases in VMware vSphere private data centers, or to migrate them to AWS. RDS on VMware allows you to utilize the same simple interface for managing databases in on-premises VMware environments as you would use in AWS. You can easily replicate RDS on VMware databases to RDS instances in AWS, enabling low-cost hybrid deployments for disaster recovery, read replica bursting, and optional long-term
backup retention in Amazon Simple Storage Service (Amazon S3). Amazon RDS on VMware supports Microsoft SQL Server, PostgreSQL, MySQL, and MariaDB databases, with Oracle to follow in the future.

Backups to the Cloud

AWS storage solutions allow you to pay for only what you need. AWS doesn't require capacity planning, purchasing capacity in advance, or any large upfront payments. You get the benefits of AWS storage solutions without the upfront investment and hassle of setting up and maintaining an on-premises system.

Amazon Simple Storage Service (Amazon S3)

Using Amazon S3, you can take advantage of the flexibility and pricing of cloud storage. S3 gives you the ability to back up SQL Server databases to a highly secure, available, durable, and reliable storage solution. Many third-party backup solutions are designed to securely store SQL Server backups in Amazon S3. You can also design and develop a SQL Server backup solution yourself by using AWS tools like the AWS CLI, AWS Tools for Windows PowerShell, a wide variety of SDKs for .NET or Java, and the AWS Toolkit for Visual Studio.

AWS Storage Gateway

AWS Storage Gateway is a service connecting an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization's on-premises IT environment and AWS's storage infrastructure. The service allows you to securely store data in the AWS Cloud for scalable and cost-effective storage. AWS Storage Gateway supports open standard storage protocols that work with your existing applications. It provides low-latency performance by maintaining frequently accessed data on-premises, while securely storing all of your data encrypted in Amazon S3. AWS Storage Gateway enables your existing on-premises-to-cloud backup applications to store primary backups on Amazon S3's scalable, reliable, secure, and cost-effective storage service.

SQL Server Log Shipping Between
On-Premises and Amazon EC2

Some AWS customers have already deployed SQL Server using a Windows Server Failover Cluster design in an on-premises or colocated facility. This approach provides high availability in the event of component failure within a data center, but doesn't protect against a significant outage impacting multiple components or the entire data center. Other AWS customers have been using SQL Server synchronous mirroring to provide a high availability solution in their on-premises data center. Again, this provides high availability in the event of component failure within the data center, but doesn't protect against a significant outage impacting multiple components or the entire data center.

You can extend your existing on-premises high availability solution and provide a disaster recovery solution with AWS by using the native SQL Server feature of log shipping. SQL Server transaction logs can ship from on-premises or colocated data centers to a SQL Server instance running on an Amazon EC2 instance within a VPC. This data can be securely transmitted over a dedicated network connection using AWS Direct Connect, or over a secure VPN tunnel. Once shipped to the Amazon EC2 instance, these transaction log backups are applied to secondary DB instances. You can configure one or multiple databases as secondary databases. An optional third Amazon EC2 instance can be configured to act as a monitor: an instance that monitors the status of backup and restore operations and raises events if these operations fail.

Figure 3: Hybrid SQL Server Log Shipping

SQL Server Always On Availability Groups Between On-Premises and Amazon EC2

SQL Server Always On availability groups is an advanced, enterprise-level feature that provides high availability and disaster recovery solutions. This feature is available when deploying the Enterprise Edition of SQL Server 2012, 2014, 2016, or 2017, within the AWS Cloud
on Amazon EC2, or on physical or virtual machines deployed in on-premises or colocated data centers. SQL Server 2016 and SQL Server 2017 Standard Edition provide basic high availability: two-node, single-database failover with a non-readable secondary. You can also set up Always On availability groups on Linux-based SQL Server by using Pacemaker for clustering instead of Windows Server Failover Clustering (WSFC).

If you have existing on-premises deployments of SQL Server Always On availability groups, you might want to use the AWS Cloud to provide an even higher level of availability and disaster recovery. To do so, you can extend your data center into a VPC by using a dedicated network connection like AWS Direct Connect, or by setting up secure VPN tunnels between these two environments.

Consider the following points when planning a hybrid implementation of SQL Server Always On availability groups:
• Establish a secure, reliable, and consistent network connection between on-premises and AWS (using AWS Direct Connect or VPN).
• Create a VPC based on the Amazon VPC service.
• Use Amazon VPC route tables and security groups to enable the appropriate communication between the new environments.
• Extend Active Directory domains into the VPC by deploying domain controllers as Amazon EC2 instances, or by using the AWS Directory Service AD Connector service.
• Use synchronous mode between SQL Server instances within the same environment (for example, all instances on-premises or all instances in AWS).
• Use asynchronous mode between SQL Server instances in different environments (for example, an instance in AWS and an instance on-premises).

Figure 4: Always On availability groups

You can also use distributed availability groups. This type of availability group is supported in SQL Server 2016 and later versions. Distributed availability groups span two separate availability groups, and you can use them for AWS as a DR
solution, or for migrating on-premises databases to Amazon EC2.

Figure 5: Hybrid Windows Server Failover Cluster

AWS Database Migration Service

AWS Database Migration Service helps you migrate databases to AWS easily and securely. When you use the AWS Database Migration Service, the source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. You can begin a database migration with just a few clicks in the AWS Management Console. Once the migration has started, AWS manages many of the complexities of the migration process, like data type transformation, compression, and parallel transfer (for faster data transfer), while ensuring that data changes to the source database that occur during the migration process are automatically replicated to the target. The service supports migrations to and from AWS-hosted databases, both where the source and destination engines are the same and between heterogeneous data sources.

Comparison of Microsoft SQL Server Feature Availability on AWS

The following table shows a side-by-side comparison of available features of SQL Server in the AWS environment.

Table 2: SQL Server features on AWS

SQL Server Editions | Amazon RDS (Supported Versions) | Amazon EC2 (Supported Versions)
Express | 2012, 2014, 2016, 2017 | 2012, 2014, 2016, 2017
Web | 2012, 2014, 2016, 2017 | 2012, 2014, 2016, 2017
Standard | 2012, 2014, 2016, 2017 | 2012, 2014, 2016, 2017
Enterprise | 2012, 2014, 2016, 2017 | 2012, 2014, 2016, 2017

SQL Server Editions | Amazon RDS (Installation Method) | Amazon EC2 (Installation Method)
Express | N/A | AMI, Manual install
Web | N/A | AMI, Manual install
Standard | N/A | AMI, Manual install
Enterprise | N/A | AMI, Manual install

Manageability Benefits | Amazon RDS | Amazon EC2
Managed Automated Backups | Yes | No (need to configure and manage maintenance plans or use third-party solutions)
Multi-AZ with Automated Failover | Yes | Enterprise Edition only (with manual configuration of Always On Availability Groups)
Built-in Instance and Database Monitoring and Metrics | Yes | No (push your own metrics to CloudWatch or use a third-party solution)
Automatic Software Patching | Yes | No
Preconfigured Parameters | Yes | No (default SQL Server installation only)
DB Event Notifications | Yes | No (manually track and manage DB events)

SQL Server Feature | Amazon RDS | Amazon EC2
SQL Authentication | Yes | Yes
Windows Authentication | Yes | Yes
TDE (encryption at rest) | Yes (Enterprise Edition only) | Yes (Enterprise Edition only)
Encrypted Storage using AWS KMS | Yes (all editions except Express) | Yes
SSL (encryption in transit) | Yes | Yes
Database Replication | No (Limited Push Subscription) | Yes
Log Shipping | No | Yes
Database Mirroring | Yes (Multi-AZ) | Yes
Always On Availability Groups | Yes | Yes
Max Number of DBs per Instance | Depends on the instance size and Multi-AZ configuration | None
Rename Existing Databases | Yes (Single-AZ only) | Yes (not available for databases in Availability Groups or enabled for mirroring)
Max Size of DB Instance | 16 TiB | None
Min Size of DB Instance | 20 GB (Web, Express); 200 GB (Standard, Enterprise) | None
Increase Storage Size | Yes | Yes
BACKUP Command | Yes | Yes
RESTORE Command | Yes | Yes
SQL Server Analysis Services | Data source only* | Yes
SQL Server Integration Services | Data source only* | Yes
SQL Server Reporting Services | Data source only* | Yes
Data Quality Services | No | Yes
Master Data Services | No | Yes
Custom Set Time Zones | Yes | Yes
SQL Server Mgmt Studio | Yes | Yes
Sqlcmd | Yes | Yes
SQL Server Profiler | Yes (client-side traces) | Yes
SQL Server Migration Assistant | Yes | Yes
DB Engine Tuning Advisor | Yes | Yes
SQL Server Agent | Yes | Yes
Safe CLR | Yes | Yes
Full-text Search | Yes (except semantic search) | Yes
Spatial and Location Features | Yes | Yes
Change Data Capture | Yes (Enterprise Edition all versions; 2016/2017 Standard Edition) | Yes
Change Tracking | Yes | Yes
Columnstore Indexes | 2012 and later (Enterprise) | 2012 and later (Standard, Enterprise)
Flexible Server Roles | 2012 and later | 2012 and later
Partially Contained Databases | 2012 and later | 2012 and later
Sequences | 2012 and later | 2012 and later
THROW statement | 2012 and later | 2012 and later
UTF-16 Support | 2012 and later | 2012 and later
New Query Optimizer | 2014 and later | 2014 and later
Delayed Transaction Durability (lazy commit) | 2014 and later | 2014 and later
Maintenance Plans | No** | Yes
Database Mail | Yes | Yes
Linked Servers | Yes | Yes
MSDTC | No | Yes
Service Broker | Yes (except Endpoints) | Yes
Performance Data Collector | No | Yes
WCF Data Services | No | Yes
FILESTREAM | No | Yes
Policy-Based Management | No | Yes
SQL Server Audit | Yes | Yes
BULK INSERT | No | Yes
OPENROWSET | Yes | Yes
Buffer Pool Extensions | No | Yes
Stretch Database | No | Yes
Resource Governor | No | Yes
PolyBase | No | Yes
Machine Learning & R Services | No | Yes
File Tables | No | Yes

* Amazon RDS SQL Server DB instances can be used as data sources for SSRS.
** Amazon RDS provides a separate set of features to facilitate backup and recovery of databases.
*** We encourage customers to use Amazon Simple Email Service (Amazon SES) to send outbound emails originating from AWS resources and to ensure a high degree of deliverability.

For a detailed list of features supported by the editions of SQL Server, see High Availability in the Microsoft documentation.

Conclusion

AWS provides two deployment platforms for your SQL Server databases: Amazon RDS and Amazon EC2. Each platform provides unique benefits that might be beneficial to your specific use case, and you have the flexibility to use one or both, depending on your needs. Understanding how to manage performance, high availability, security, and monitoring
in these environments, outlined in this whitepaper, is key to choosing the best approach for your use case.

Contributors

Contributors to this document include:
• Jugal Shah, Solutions Architect, Amazon Web Services
• Richard Waymire, Outbound Principal Architect, Amazon Web Services
• Russell Day, Solutions Architect, Amazon Web Services
• Darryl Osborne, Solutions Architect, Amazon Web Services
• Vlad Vlasceanu, Solutions Architect, Amazon Web Services

Further Reading

For additional information, see:
• Microsoft Products on AWS
• Active Directory Reference Architecture: Implementing Active Directory Domain Services on AWS
• Remote Desktop Gateway on AWS
• Securing the Microsoft Platform on AWS
• Implementing Microsoft Windows Server Failover Clustering and SQL Server Always On Availability Groups in the AWS Cloud
• AWS Directory Service
• SQL Server Database Restore to Amazon EC2 Linux

Document Revisions

November 2019: Updated with information on new features and changes: release of SQL Server 2016 and 2017 in RDS; RDS backup and SQL Server on EC2 Linux; new instance classes; updated screen captures, architecture diagrams, Optimize CPU, Hybrid Scenarios; and other minor corrections and content updates.
June 2016: Updated with information on new features and changes: release of Amazon RDS SQL Server Windows Authentication; availability of SQL Server 2014 in Amazon RDS; new RDS Reserved DB Instance pricing model; availability of the AWS Database Migration Service; other minor corrections and content updates.
May 2015: First publication.
,General,consultant,Best Practices
Designing_MQTT_Topics_for_AWS_IoT_Core,This version has been archived. For the latest version, refer to: https://docs.aws.amazon.com/whitepapers/latest/designing-mqtt-topics-aws-iot-core/designing-mqtt-topics-aws-iot-core.html?did=wp_card&trk=wp_card

Designing MQTT Topics for AWS IoT Core

May 2019
Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only; (b) represents current AWS product offerings and practices, which are subject to change without notice; and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved.
,General,consultant,Best Practices
Determining_the_IOPS_Needs_for_Oracle_Database_on_AWS,This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/determining-iops-needs-oracle-db-on-aws/determining-iops-needs-oracle-db-on-aws.html

Determining the IOPS Needs for Oracle Database on AWS

First Published December 2018
Updated November 17, 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only; (b) represents current AWS product offerings and practices, which are subject to change without notice; and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any
kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 1
Storage options for Oracle Database 1
IOPS basics 3
Estimating IOPS for an existing database 4
Estimating IOPS for a new database 6
Considering throughput 7
Verifying your configuration 7
Conclusion 7
Contributors 8
Further reading 8
Document revisions 9

Abstract

Amazon Web Services (AWS) provides a comprehensive set of services and tools for deploying Oracle Database on the AWS Cloud infrastructure, one of the most reliable and secure cloud computing services available today. Many businesses of all sizes use Oracle Database to handle their data needs. Oracle Database performance relies heavily on the performance of the storage subsystem, but storage performance always comes at a price. This whitepaper includes information to help you determine the input/output operations per second (IOPS) necessary for your database storage system to have the best performance at optimal cost.

Amazon Web Services – Determining the IOPS Needs for Oracle Database on AWS, Page 1

Introduction

AWS offers customers the flexibility to run Oracle Database on either Amazon Relational Database Service (Amazon RDS),
which is a managed database service in the cloud, or on Amazon Elastic Compute Cloud (Amazon EC2). Many customers prefer to use Amazon RDS for Oracle Database because it provides an easy managed option to run Oracle Database on AWS without having to think about infrastructure provisioning or installing and maintaining database software. You can also run Oracle Database directly on Amazon EC2, which allows you full control over the setup of the entire infrastructure and database environment.

To get the best performance from your database, you must configure the storage tier to provide the IOPS and throughput that the database needs. This is a requirement for both Oracle Database on Amazon RDS and Oracle Database on Amazon EC2. If the storage system does not provide enough IOPS to support the database workload, you will have sluggish database performance and transaction backlog. However, if you provision much higher IOPS than your database actually needs, you will have unused capacity. The elastic nature of the AWS infrastructure allows you to increase or decrease the total IOPS available for Oracle Database on Amazon EC2, but doing this could have a performance impact on the database, requires extra effort, and might require database downtime.

Storage options for Oracle Database

For Oracle Database storage on AWS, you must use Amazon Elastic Block Store (Amazon EBS) volumes, which offer the consistent, low-latency performance required to run your Oracle Database. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, which provides high availability and durability. Amazon EBS provides these volume types:
• General Purpose solid state drive (SSD) (gp2)
• General Purpose SSD (gp3)
• Provisioned IOPS SSD (io1)
• Provisioned IOPS SSD (io2)
• Throughput Optimized hard disk drive (HDD) (st1)
• Cold HDD (sc1)

Volume types differ in performance characteristics and cost. For the high and consistent IOPS required for Oracle Database, Amazon EBS General Purpose SSD or Amazon EBS Provisioned IOPS SSD volumes are the best fit.

For gp2 volumes, IOPS performance is directly related to the provisioned capacity: gp2 volumes can deliver a consistent baseline of 3 IOPS/GB, up to a maximum of 16,000 IOPS (based on 16 KB I/O) for a 16 TB volume. Input/output (I/O) is included in the price of gp2 volumes, so you pay only for each gigabyte of storage that you provision. gp2 volumes also have the ability to burst to 3,000 IOPS per volume, independent of volume size, to meet the periodic spikes in performance that most applications need. This is useful for databases whose normal IOPS needs you can predict well, but that might still experience an occasional higher spike under specific workloads. Currently, gp3 is available for Oracle databases running on Amazon EC2. It has the same qualities as gp2, but also increases the throughput from 250 MiB/s to 1,000 MiB/s.

gp2 volumes are sufficient for most Oracle Database workloads. If you need more IOPS and throughput than gp2 can provide, Provisioned IOPS (PIOPS) is the best choice. io2 volumes can provide up to 64,000 IOPS per volume for AWS Nitro-based instances, and 32,000 IOPS per volume for other instance families.

Throughput Optimized HDD volumes (st1) offer low-cost HDD volumes designed for intensive workloads that require fewer IOPS but high throughput. Oracle databases used for data warehouse and data analytics purposes can use st1 volumes. Any log processing or data staging areas that require high throughput, such as Oracle external tables or external BLOB storage, can use st1 volumes. st1 volumes can handle a maximum of 500 IOPS per volume. Cold
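HDD (sc1) is the coldest tier.

The gp2 sizing rule quoted above (a baseline of 3 IOPS/GB capped at 16,000 IOPS, with bursts to 3,000 IOPS regardless of size) is easy to sanity-check in a few lines. This is only a sketch of the rule as stated in the text, with illustrative helper names, not an official AWS calculator:

```python
# Sketch of the gp2 rule quoted in the text: baseline 3 IOPS/GB capped
# at 16,000 IOPS, with bursts to 3,000 IOPS regardless of volume size.
# Function names are illustrative, not part of any AWS SDK.

def gp2_baseline_iops(size_gb: int) -> int:
    return min(3 * size_gb, 16_000)

def gp2_peak_iops(size_gb: int) -> int:
    # Even small volumes can burst to 3,000 IOPS.
    return max(gp2_baseline_iops(size_gb), 3_000)

print(gp2_baseline_iops(1_000))  # 3000: 3 IOPS/GB
print(gp2_baseline_iops(6_000))  # 16000: capped at the maximum
print(gp2_peak_iops(200))        # 3000: burst floor for small volumes
```

For example, a 500 GB gp2 volume has a 1,500 IOPS baseline but can still burst to 3,000 IOPS. Returning to the HDD tiers: Cold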
HDD volumes ( sc1) are suitable for legacy systems that you retain for occasional reference or archive purposes These systems are a ccessed less frequently and only a few scans are performed each day on the volume You can create s triped volumes (areas of free space on multiple volumes) for more IOPS and larger capacity The maximum IOPS an EC2 instance can support across all EBS volum es is 260000 The maximum IOPS an RDS instance can support is 256000 Use only Amazon EBSoptimized instances with gp2 and PIOPS You can use multiple This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/determiningiops needsoracledbonaws/determiningiopsneedsoracledbonawshtmlAmazon Web Services Determining the IOPS Needs for Oracle Database on AWS Page 3 EBS volumes individually for different data files but striped volumes allow better throughput balancing scalability and burstable performance (for gp2) IOPS basics IOPS is the standard measure of I/O operations per second on a storage device It includes both read and write operations The amount of I/O u sed by Oracle Database can vary great ly in a time period based on the server load and the specific queries running If you are migrating an existing Oracle Database to AWS to ensure that you get the best performance regardless of load you must determine the peak IOPS used by your database and provision Amazon EBS volumes on AWS accordingly If you choose an IOPS number based on the average IOPS used by your existing database you should have sufficient IOPS for the database in most cases but database performance will suffer at peak load You can mitigate this issue to some extent by using Amazon EBS gp2 volumes which have the ability to burst to higher IOPS for small periods of tim e Customers sometimes assume that they need much more IOPS than they actually do This assumption occurs if customers confuse storage system IOPS with database IOPS Most enterprises use storage 
area network ( SAN) systems that can provide 100000 –200000 or more IOPS for storage The same SAN storage is usually shared by multiple databases and file systems which means the total IOPS provided by the storage system is used by many more applications than a single dat abase Most Oracle Database production systems in domains such as enterprise resource planning ( ERP) and customer relationship management ( CRM) are in the range of 3000–30000 IOPS Your individual application might have different IOPS requirements A per formance test environment’s IOPS needs are generally identical to those of production environments but for other test and development environments the range is usually 200 –2000 IOPS Some online transaction processing ( OLTP ) systems use up to 60000 IO PS There are Oracle databases that use more than 60000 IOPS but that is unusual If your environment shows numbers outside these parameters you should complete further analysis to confirm your numbers This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/determiningiops needsoracledbonaws/determiningiopsneedsoracledbonawshtmlAmazon Web Services Determining the IOPS Needs for Oracle Database on AWS Page 4 Estimating IOPS for an existing database The best way to estimate the actual IOPS that is necessary for your database is to query the system tables over a period of time and find the peak IOPS usage of your existing database To do this you measur e IOPS over a period of time and select the highe st value You can get this information from the GV$SYSSTAT dynamic performance view which is a special view in Oracle Database that provides database performance information This view is continuously updated while the database is open and in use Oracle Enterprise Manager and Automatic Workload Repository (AWR ) report s also use these views to gather data There is a GV$ view for almost all V$ views GV$ views contain data for all nodes in a 
Real Application Cluster (RAC), identified by an instance ID. You can also use GV$ views for non-RAC systems; in that case, they have only one row for each performance criterion.

To determine IOPS, you can modify the following sample Oracle PL/SQL script for your needs and run the script during peak database load in your environment. For better accuracy, run it during the same peak period for a few days, and then choose the highest value as the peak IOPS. Because the sample script captures data and stores it in the PEAK_IOPS_MEASUREMENT table, you must first create the table with this code:

CREATE TABLE peak_iops_measurement (
  capture_timestamp  DATE,
  total_read_io      NUMBER,
  total_write_io     NUMBER,
  total_io           NUMBER,
  total_read_bytes   NUMBER,
  total_write_bytes  NUMBER,
  total_bytes        NUMBER
);

The following script runs for an hour (run_duration := 3600) and captures data every five seconds (capture_gap := 5). It then calculates the average I/O and throughput per second over each interval and stores this information in the table. To best fit your needs, you can modify the run_duration and capture_gap values to change the number of seconds that the script runs and the frequency, in seconds, at which data is captured.

DECLARE
  run_duration NUMBER := 3600;
  capture_gap  NUMBER := 5;
  loop_count   NUMBER := run_duration / capture_gap;
  rdio         NUMBER;
  wtio         NUMBER;
  prev_rdio    NUMBER := 0;
  prev_wtio    NUMBER := 0;
  rdbt         NUMBER;
  wtbt         NUMBER;
  prev_rdbt    NUMBER;
  prev_wtbt    NUMBER;
BEGIN
  FOR i IN 1..loop_count LOOP
    SELECT SUM(value) INTO rdio FROM gv$sysstat
      WHERE name = 'physical read total IO requests';
    SELECT SUM(value) INTO wtio FROM gv$sysstat
      WHERE name = 'physical write total IO requests';
    SELECT SUM(value) * 0.000008 INTO rdbt FROM gv$sysstat
      WHERE name = 'physical
read total bytes';
    SELECT SUM(value) * 0.000008 INTO wtbt FROM gv$sysstat
      WHERE name = 'physical write total bytes';
    IF i > 1 THEN
      INSERT INTO peak_iops_measurement
        (capture_timestamp, total_read_io, total_write_io, total_io,
         total_read_bytes, total_write_bytes, total_bytes)
      VALUES
        (SYSDATE,
         (rdio - prev_rdio) / capture_gap,
         (wtio - prev_wtio) / capture_gap,
         ((rdio - prev_rdio) / capture_gap) + ((wtio - prev_wtio) / capture_gap),
         (rdbt - prev_rdbt) / capture_gap,
         (wtbt - prev_wtbt) / capture_gap,
         ((rdbt - prev_rdbt) / capture_gap) + ((wtbt - prev_wtbt) / capture_gap));
    END IF;
    prev_rdio := rdio;
    prev_wtio := wtio;
    prev_rdbt := rdbt;
    prev_wtbt := wtbt;
    DBMS_LOCK.SLEEP(capture_gap);
  END LOOP;
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;
END;
/

The important values are total_io and total_bytes. The script also captures the split between read and write operations, which you can use for comparison later. After you have collected data for a sufficient amount of time, you can find the peak IOPS used by your database by running the following query, which takes the highest value from the column total_io:

SELECT MAX(total_io) peak_iops FROM peak_iops_measurement;

To prepare for any unforeseen performance spikes, we recommend that you add an additional 10 percent to this peak IOPS number to arrive at the actual IOPS that your database needs. This actual IOPS value is the total number of IOPS you should provision for your Amazon EBS volume (gp2 or io1).

Estimating IOPS for a new database

If you are setting up a database for the first time on AWS and you don't have any existing statistics, you can use an IOPS number based on the expected number of application transactions per second. Though the IOPS necessary per transaction can vary widely—based on the amount of data involved, the number of queries in a transaction, and the query complexity
—generally, 30 IOPS per transaction is a good number to consider. For example, if you are expecting 100 transactions per second, you can start with 3,000 IOPS for your Amazon EBS volumes. Because the amount of data in a new database is usually small, changing the IOPS associated with Amazon EBS will be relatively simple, whether your database is on Amazon RDS or Amazon EC2.

Considering throughput

In addition to determining the right IOPS, it is also important to make sure that your instance configuration can handle the throughput needs of your database. Throughput is the measure of the transfer of bits across the network between the EC2 instance running your database and the Amazon EBS volumes that store the data. The amount of available throughput relates directly to the network bandwidth available to the EC2 instance and the capability of Amazon EBS to receive data. Amazon EBS-optimized instances consistently achieve the given level of performance. For more information, refer to Instance Types that Support EBS Optimization in the Amazon EC2 User Guide for Linux Instances. You can find more about Amazon EC2 and Amazon EBS configuration in the Amazon EC2 User Guide.

In addition to bandwidth availability, there are other considerations that affect which EC2 instance you should choose for your Oracle Database. These considerations include your database license, the virtual CPUs available, and the memory size.

Verifying your configuration

After you configure your environment based on the IOPS and throughput numbers necessary for your environment, you can verify your configuration before you install the database by using the Oracle Orion tool, which is available from Oracle. Oracle Orion simulates Oracle Database I/O workloads using the same
I/O software stack as Oracle Database, which provides a measurement of IOPS and throughput that simulates what your database will experience. For more details about this tool, and to download it, refer to the Oracle website.

Conclusion

AWS provides the option to run Oracle in Amazon RDS or Amazon EC2. Choose RDS for a fully managed service, or EC2 if you prefer full control. AWS offers various storage services that allow the workload to be optimized for cost or performance. As workloads and requirements change, the solution can scale up or down elastically.

Contributors

The following individuals and organizations contributed to this document:

• Jayaraman Vellore Sampathkumar, Amazon Web Services
• Abdul Sathar Sait, Amazon Web Services
• Jinyoung Jung, Amazon Web Services
• Jason Massie, Amazon Web Services

Further reading

For additional information about using Oracle Database with AWS services, refer to the following resources.

Oracle Database on AWS
• Advanced Architectures for Oracle Database on Amazon EC2 whitepaper
• Strategies for Migrating Oracle Database to AWS whitepaper
• Choosing the Operating System for Oracle Workloads on Amazon EC2 whitepaper
• Best Practices for Running Oracle Database on AWS whitepaper

Oracle on AWS
• Oracle and Amazon Web Services
• Amazon RDS for Oracle Database
• Oracle in the Amazon Web Services Cloud FAQ

Oracle Reference Architecture
• Oracle quick start on AWS

Oracle licensing on AWS
• Licensing Oracle Software in the Cloud Computing Environment

Getting started with Oracle RMAN backups and Amazon S3
• Getting Started: Backup Oracle databases directly to AWS with Oracle RMAN
AWS service details and pricing
• AWS Cloud Products
• AWS Documentation
• AWS Whitepapers
• AWS Pricing
• AWS Pricing Calculator

Document revisions

November 17, 2021: Updates for technical accuracy
December 2018: First publication

This paper has been archived. For the latest technical content, refer to the HTML version: https://docs.aws.amazon.com/whitepapers/latest/development-and-test-on-aws/development-and-test-on-aws.html

Development and Test on Amazon Web Services

First Published November 2, 2012
Updated June 29, 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
Development phase
  Source code repository
  Project management tools
  On-demand development environments
  Integrating with
AWS APIs and IDE enhancements
Build phase
  Schedule builds
  On-demand builds
  Storing build artifacts
Testing phase
  Automating test environments
  Load testing
  User acceptance testing
  Side-by-side testing
  Fault tolerance testing
Resource management
  Cost allocation and multiple AWS accounts
Conclusion
Contributors
Further reading
Document revisions

Abstract

This whitepaper describes how Amazon Web Services (AWS) adds value in the various phases of the software development cycle, with a specific focus on development and test. For the development phase, this whitepaper:

• Shows you how to use AWS for managing version control
• Describes project management tools, the build process, and environments hosted on AWS
• Illustrates best practices

For the test phase, this whitepaper describes how to manage test environments and run various kinds of tests, including load testing, acceptance testing, fault tolerance testing, and so on. AWS provides unique advantages in each of these scenarios and phases, enabling you to choose the ones most appropriate for your software development project. The intended audiences for this paper are project managers, developers, testers, systems architects, or anyone involved in software production activities.

Introduction

Organizations write software for various reasons, ranging from core business needs (when the organization is a software vendor) to customizing or integrating software. Organizations also create different types of software: web applications, standalone applications, automated agents, and so on. In all such cases, development teams are pushed to deliver software of
high quality as quickly as possible, to reduce the time to market or time to production. In this document, "development and test" refers to the various tools and practices applied when producing software. Regardless of the type of software to be developed, a proper set of development and test practices is key to success. However, producing applications not only requires software engineers, but also IT resources, which are subject to constraints like time, money, and expertise.

The software lifecycle typically consists of the following main elements:

[Figure: Elements of the software lifecycle]

This whitepaper covers aspects of the development, build, and test phases. For each of these phases, you need different types of IT infrastructure. AWS provides multiple benefits to software development teams. AWS offers on-demand access to a wide range of cloud infrastructure services, charging only for the resources that are used, and helps eliminate both the need for costly hardware and the administrative pain that goes with owning and operating it.

Owning hardware and IT infrastructure usually involves a capital expenditure over a 3-5 year period, whereas most development and test teams need compute or storage for hours, days, weeks, or months. This difference in timescales can cause friction, because it is difficult for IT operations to satisfy simultaneous requests from project teams while constrained by a fixed set of resources. The result is that project teams spend a lot of time justifying, sourcing, and holding on to resources; this time could be spent focusing on the main job. By provisioning only the resources needed for the duration of development phases, test runs, or complete test campaigns, your company can achieve important savings compared to investing up front in traditional hardware.
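The capital-expenditure comparison above can be sketched with simple arithmetic. The figures below are hypothetical placeholders (not AWS rates) and only illustrate how a pay-as-you-go bill for intermittent usage compares with owning hardware over a multi-year period:

```python
# Rough cost comparison between owning build/test hardware over its
# depreciation period and paying only for the hours actually used.
# All prices are hypothetical placeholders, not AWS rates.

def owned_hardware_cost(purchase_price: float, annual_upkeep: float, years: int) -> float:
    """Up-front purchase plus ongoing upkeep over the ownership period."""
    return purchase_price + annual_upkeep * years

def on_demand_cost(hourly_rate: float, hours_used: float) -> float:
    """Pay-as-you-go cost for the hours an environment actually runs."""
    return hourly_rate * hours_used

# A team that runs a test environment 10 hours a week for 4 years:
hours = 10 * 52 * 4
owned = owned_hardware_cost(purchase_price=10_000.0, annual_upkeep=1_000.0, years=4)
rented = on_demand_cost(hourly_rate=0.50, hours_used=hours)
print(owned, rented)  # 14000.0 1040.0
```

A real comparison would also account for utilization outside test runs, storage, and data transfer, but the structure of the calculation stays the same: the lower the utilization, the more the on-demand model wins.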
With the right level of granularity, you can allocate resources depending on each project's needs and budget. In addition to those economic benefits, AWS also offers significant operational advantages, such as the ability to set up a development and test infrastructure in a matter of minutes rather than weeks or months, and to scale capacity up and down to provide the IT resources needed, only when they are needed.

This document highlights some of the best practices and recommendations around development and test on AWS. For example, for the development phase, this document discusses how to securely and durably set up tools and processes, such as version control, collaboration environments, and automated build processes. For the testing phase, this document discusses how to set up test environments in an automated fashion and how to run various types of tests, including side-by-side tests, load tests, stress tests, resilience tests, and more.

Development phase

Regardless of team size, the type of software being developed, or project duration, development tools are mandatory to rationalize the process, coordinate efforts, and centralize production. Like any IT system, development tools require proper administration and maintenance. Operating such tools on AWS not only relieves your development team of low-level system maintenance tasks, such as network configuration, hardware setup, and so on, but also facilitates the completion of more
complex tasks. The following sections describe how to operate the main components of development tools on AWS.

Source code repository

The source code repository is a key tool for development teams. As such, it needs to be available, and the data it contains (source files under version control) needs to be durably stored with proper backup policies. Ensuring these two characteristics (availability and durability) requires resources, expertise, and time investment that typically aren't a core competency of a software development team.

Building a source code repository on AWS involves creating an AWS CodeCommit repository. AWS CodeCommit is a secure, highly scalable, managed source control service that hosts private Git repositories. It eliminates the need for you to operate your own source control system: there is no hardware to provision and scale, and no software to install, configure, and operate. You can use CodeCommit to store anything from code to binaries, and it supports the standard functionality of Git, allowing it to work seamlessly with your existing Git-based tools. Your team can also use CodeCommit's online code tools to browse, edit, and collaborate on projects. CodeCommit enables you to store any number of files, and there are no repository size limits. In a few simple steps, you can find information about a repository and clone it to your computer, creating a local repository where you can make changes and then push them to the CodeCommit repository. You can work from the command line on your local machine or use a GUI-based editor.

Project management tools

In addition to the source code repository, teams often use additional tools for issue tracking, project tracking, code quality analysis, collaboration, content sharing, and so on. Most of the time, those tools are provided as web applications. Like any other classic web application, they require a server to run on and, frequently, a relational database. The web components can be installed on Amazon Elastic Compute Cloud (Amazon EC2), with the database using Amazon Relational Database Service (Amazon RDS) for data storage. Within minutes, you can create Amazon EC2 instances, which are virtual machines over which you have complete control. A variety of different operating systems and distributions are available as Amazon Machine Images (AMIs). An AMI is a template
that contains a software configuration (operating system, application server, and applications) that you can run on Amazon EC2. After you've properly installed and configured the project management tool, AWS recommends that you create an AMI from this setup so that you can quickly recreate the instance without having to reinstall and reconfigure the software.

Project management tools have the same needs as source code repositories: they need to be available, and their data has to be durably stored. While you can mitigate the loss of code analysis reports by recreating them against the desired repository version, losing project or issue tracking information might have more serious consequences. You can address the availability of the project management web application by using AMIs to create replacement Amazon EC2 instances in case of failure. You can store the application's data separately from the host system to simplify maintenance or migration operations. Amazon Elastic Block Store (Amazon EBS) provides off-instance storage volumes that persist independently of the life of an instance. After you create a volume, you can attach it to a running Amazon EC2 instance. As such, an Amazon EBS volume is provisioned and attached to the instance to store the data of the version control repository.

You achieve durability by taking point-in-time snapshots of the EBS volume containing the repository data. EBS snapshots are stored in Amazon Simple Storage Service (Amazon S3), a highly durable and scalable data store. Objects in Amazon S3 are redundantly stored on multiple devices across multiple facilities in an AWS Region. You can automate the creation and management of snapshots using Amazon Data Lifecycle Manager. These snapshots can be used as the starting point for new Amazon EBS volumes and can
protect your data for long-term durability. In case of a failure, you can recreate the application data volume from the snapshots and recreate the application instance from an AMI.

To facilitate proper durability and restoration, Amazon Relational Database Service (Amazon RDS) offers an easy way to set up, operate, and scale a relational database in AWS. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing the project team from this responsibility. Amazon RDS database instances (DB instances) can be provisioned in a matter of minutes. Optionally, Amazon RDS will ensure that the relational database software stays up to date with the latest patches. The automated backup feature of Amazon RDS enables point-in-time recovery for DB instances, allowing restoration of a DB instance to any point in time within the backup retention period.

An Elastic IP address provides a static endpoint to an Amazon EC2 instance and can be used in combination with DNS (for example, behind a DNS CNAME). This helps teams access their hosted services, such as the project management tool, in a consistent way even if the infrastructure changes underneath; for example, when scaling up or down, or when a replacement instance is provisioned.

[Figure: An Elastic IP address provides a static endpoint to an Amazon EC2 instance]

Note: For even quicker and easier deployment, many project management tools are available from the AWS Marketplace or as Amazon Machine Images.

As your development team grows or adds more tools to the project management instance, you might require extra capacity for both the web application instance and the DB instance. In AWS, scaling instances vertically is an easy and straightforward operation: you simply stop the EC2 instance, change
the instance type, and start the instance again. Alternatively, you can create a new web application server from the AMI on a more powerful Amazon EC2 instance type and replace the previous server. You can also scale horizontally, adding more instances to the system with AWS Auto Scaling; in this case, because you have more than one node, you use Elastic Load Balancing to distribute the load across all application nodes. Amazon RDS DB instances can scale compute and memory resources with a few clicks on the AWS Management Console.

[Figure: Use Elastic Load Balancing to distribute the load across all application nodes]

When you want to quickly set up a software development project on AWS and don't want to configure custom project management tools on EC2, you can use AWS CodeStar. AWS CodeStar comes with a unified project dashboard and integration with Atlassian JIRA Software, a third-party issue tracking and project management tool. With the AWS CodeStar project dashboard, you can easily track your entire software development process, from a backlog work item to production code deployment.

On-demand development environments

Developers primarily use their local laptops or desktops to run their development environments. This is typically where the integrated development environment (IDE) is installed, where unit tests are run, where source code is checked in, and so on. However, there are a few cases where on-demand development environments hosted in AWS are helpful.

AWS Cloud9 is a cloud-based IDE that enables you to write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal. AWS Cloud9 comes prepackaged with essential tools for popular programming languages, including JavaScript, Python, PHP, Ruby, Go, C++, and more, so you don't
need to install files or configure your development machine to start new projects. Because your AWS Cloud9 IDE is cloud-based, you can work on your projects from your office, home, or anywhere, using an internet-connected machine. With AWS Cloud9, you can quickly share your development environment with your team, enabling you to pair program and track each other's inputs in real time.

Some development projects may use specialized sets of tools that would be cumbersome or resource-intensive to install and maintain on local machines, especially if the tools are used infrequently. For such cases, you can prepare and configure development environments with the required tools (development tools, source control, unit test suites, IDEs, and so on) and then bundle them as AMIs. You can easily start the right environment and have it up and running in minimal time and with minimal effort. When you no longer need the environment, you can shut it down to free up resources. This can also be helpful if you need to switch context in the middle of having code checked out and work in progress: instead of managing branches or dealing with partial check-ins, you can spin up a new temporary environment.

On AWS, you have access to a variety of different instance types, some with very specific hardware configurations. If you are developing specifically for a given configuration, it may be helpful to have a development environment on the same platform where the system is going to run.

Amazon WorkSpaces enables you to provision virtual, cloud-based Microsoft Windows or Amazon Linux desktops for your users to run IDEs and your favorite applications, such as Visual Studio, IntelliJ, Eclipse, Visual Studio Code, Atom, the AWS CLI, the AWS SDK tools, and many more. The concept of hosted desktops is not limited to
development environments; it can apply to other roles or functions as well.

For more complex working environments, AWS CloudFormation makes it easy to set up collections of AWS resources. This topic is discussed further in the Testing phase section of this document. In many cases, such environments are set up within Amazon Virtual Private Cloud (Amazon VPC), which enables you to extend your on-premises private network to the cloud. You can then provision the development environments as if they were on the local network, even though they are running in AWS. This can be helpful if such environments require any on-premises resources, such as Lightweight Directory Access Protocol (LDAP). The following diagram shows a deployment where development environments are running on Amazon EC2 instances within an Amazon VPC. Those instances are remotely accessed from an enterprise network through a secure VPN connection.

[Figure: Development environments running on Amazon EC2 instances within an Amazon VPC]

Stopping vs. terminating Amazon EC2 instances

Whenever development environments are not used (for example, during the hours when you are not working, or when a specific project is on hold), you can easily shut them down to save resources and cost. There are two possibilities:

• Stopping the instance, which is roughly equivalent to hibernating the operating system
• Terminating the instance, which is roughly equivalent to discarding the operating system

When you stop an instance (possible for Amazon EBS-backed AMIs), the compute resources are released and no further hourly charges for the instance apply. The Amazon EBS volume stores the state, and the next time you start the instance, it will have the same working data it did before you stopped it. Note that any data stored on ephemeral drives will not be available
after a stop/start sequence. When you terminate an instance, the root device and any other devices attached during the instance launch are automatically deleted (unless the DeleteOnTermination flag of a volume is set to "false"), meaning that data may be lost if there is no backup or snapshot available for the deleted volumes. A terminated instance no longer exists and must be recreated from an AMI if needed. You would typically terminate the instance of a development environment when all work has been completed and/or the specific environment will not be used anymore.

If you use the AWS Cloud9 IDE, the EC2 instance that AWS Cloud9 connects to stops, by default, 30 minutes after you close the IDE, and restarts automatically when you open the IDE again. As a result, you typically only incur EC2 instance charges for when you are actively working. If you choose to run your development environments on EC2 instances, you can use AWS Instance Scheduler to automatically stop your instances during weekends or outside working hours. This can help reduce instance utilization and overall spend.

Integrating with AWS APIs and IDE enhancements

With AWS, you can code against and control IT infrastructure, whether the target platform of your project is AWS or the project is about orchestrating resources in AWS. For such cases, you can use the various AWS SDKs to easily integrate your applications with AWS APIs, taking the complexity out of coding directly against a web service interface and dealing with details around authentication, retries, error handling, and so on. The AWS SDK tools are available for multiple languages (C++, Go, JavaScript, Node.js, Python, Java, .NET, PHP, and Ruby) and for the mobile platforms Android and iOS. AWS also offers IDE tools that make it easier for you to interact with AWS from within your
IDEs, such as:

• AWS Toolkit for Visual Studio
• AWS Toolkit for VS Code
• AWS Toolkit for Eclipse
• AWS Toolkit for IntelliJ IDEA
• AWS Toolkit for PyCharm
• AWS Toolkit for Azure DevOps
• AWS Toolkit for Rider
• AWS Toolkit for WebStorm

For developing and building serverless applications, AWS offers the AWS Serverless Application Model (AWS SAM) open-source framework, which can be used with the AWS toolkits mentioned previously.

Build phase

The process of building an application involves many steps, including compilation, resource generation, and packaging. For large applications, each step involves multiple dependencies, such as building internal libraries, using helper applications, generating resources in different formats, generating the documentation, and so on. Some projects might require building the deliverables for multiple CPU architectures, platforms, or operating systems. The complete build process can take many hours, which has a direct impact on the agility of the software development team. This impact is even stronger for teams adopting approaches like continuous integration, where every commit to the source repository triggers an automated build, followed by test suites.

Schedule builds

To mitigate this problem, teams working on projects with lengthy build times often adopt the "nightly build" (or neutral build) approach, break the project into smaller sub-projects, or do a combination of both. Doing nightly builds involves a build machine checking out the latest source code from the repository and building the project deliverables overnight. Development teams may not build as many versions as they would like, and the build should be completed in time for testing to begin the next day. Breaking down a project into smaller, more manageable parts might be a solution if
each sub-project builds faster independently. However, an integration step combining all the different sub-projects is still often necessary for the team to keep an eye on the overall project and to ensure that the different parts still work well together.

On-demand builds

A more practical solution is to use more computational power for the build process. In traditional environments, where the build server runs on hardware acquired by the organization, this option might not be viable due to economic constraints or provisioning delays. A build server running on an Amazon EC2 instance can be scaled up vertically in a matter of minutes, reducing build time by providing more CPU or memory capacity when needed.

For teams with multiple builds triggered within the same day, a single Amazon EC2 instance might not be able to produce the builds quickly enough. A solution would be to take advantage of the on-demand and pay-as-you-go nature of AWS CodeBuild to run multiple builds in parallel. Every time a new build is requested by the development team, or triggered by a new commit to the source code repository, AWS CodeBuild creates a temporary compute container of the class defined in the build project and immediately processes each build as submitted. You can run separate builds concurrently without waiting in a queue. This also enables you to schedule automated builds in a specific time window.

If you use a build tool on EC2 instances running as a fleet of worker nodes, the task distribution to the worker nodes can be done using a queue holding all the builds to process. Worker nodes pick up the next build to process as they become free. To implement this system, Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable, hosted queue service. Amazon SQS makes it easy to create an
automated build workflow, working in close conjunction with Amazon EC2 and other AWS infrastructure services. In this setup, developers commit code to the source code repository, which in turn pushes a build message into an Amazon SQS queue. The worker nodes poll this queue to pull a message and run the build locally, according to the parameters contained in the message (for example, the branch or source version to use).

You can further enhance this setup by dynamically adjusting the pool of worker nodes consuming the queue. Auto Scaling is a service that makes it easy to scale the number of worker nodes up or down automatically, according to predefined conditions. With Auto Scaling, worker node capacity can increase seamlessly during demand spikes to maintain quick build generation, and decrease automatically during demand lulls to minimize costs. You can define scaling conditions using Amazon CloudWatch, a monitoring service for AWS Cloud resources. For example, Amazon CloudWatch can monitor the number of messages in the build queue and notify Auto Scaling that more or less capacity is needed, depending on the number of messages in the queue. The following diagram summarizes this scenario:

[Figure: Amazon CloudWatch monitors the number of messages in the build queue and notifies Auto Scaling that more or less capacity is needed.]

Storing build artifacts

Every time you produce a build, you need to store the output somewhere. Amazon S3 is an appropriate service for this. Initially, the amount of data to be stored for a given project is small, but it grows over time as you produce more builds. Here, the pay-as-you-go and capacity characteristics of S3 are particularly attractive. When you no longer need the build output, you can delete it, or use S3 lifecycle policies to delete or archive
the objects to the Amazon S3 Glacier storage class. By default, AWS CodeBuild uses S3 buckets to store the build outputs.

To distribute the build output (for example, to be deployed in test, staging, or production, or to be downloaded to clients), AWS offers several options. You can distribute build output packages directly out of S3, by configuring bucket policies and/or ACLs to restrict the distribution. You can also share an output object using an S3 presigned URL. Another option is to use Amazon CloudFront, a web service for content delivery, which makes it easy to distribute packages to end users with low latency and high data transfer speeds, thereby improving the end-user experience. This can be helpful, for example, when a large number of clients are downloading install packages or updates. Amazon CloudFront offers several options, for example, to authorize and/or restrict access, though a full discussion of this is out of scope for this document.

Testing phase

Tests are a critical part of software development. They ensure software quality, but more importantly, they help find issues early in the development phase, lowering the cost of fixing them later during the project. Tests come in many forms: unit tests, performance tests, user acceptance tests, integration tests, and so on, and all require IT resources to run. Test teams face the same challenges as development teams: the need for enough IT resources, but only during the limited duration of the test runs. Test environments change frequently, are different from project to project, and may require different IT infrastructure or have varying capacity needs. The AWS on-demand and pay-as-you-go value propositions are well adapted to those constraints. AWS enables your test teams to eliminate both the need for costly hardware and the
administrative pain that goes along with owning and operating it. AWS also offers significant operational advantages for testers. Test environments can be set up in minutes rather than weeks or months, and a variety of resources, including different instance types, are available to run tests whenever they are needed.

Automating test environments

There are many software tools and frameworks available for automating the process of running tests, but proper infrastructure must be in place. This involves provisioning infrastructure resources, initializing the resources with a sample dataset, deploying the software to be tested, orchestrating the test runs, and collecting results. The challenge is not only to have enough resources to deploy the complete application with all the different servers or services it might require, but to be able to initialize the test environment with the right software and the right data, over and over. Test environments should be identical between test runs; otherwise, it is more difficult to compare results.

Another important benefit of running tests on AWS is the ability to automate them in various ways. You can create and manage test environments programmatically, using the AWS APIs, CLI tools, or AWS SDKs. Tasks that require human intervention in classic environments (allocating a new server, allocating and attaching storage, allocating a database, and so on) can be fully automated on AWS using AWS CodePipeline and AWS CloudFormation. For testers, designing test suites on AWS means being able to automate a test down to the operation of components that are traditionally static hardware devices. Automation makes test teams more efficient, by removing the effort of creating and initializing test environments, and less error prone, by limiting human intervention during the creation of those environments. An automated test environment can be linked to the build process, following continuous integration principles. Every time a successful build is produced, a test environment can be provisioned and automated tests run on it. The following sections describe how to automatically provision Amazon EC2 instances, databases, and complete environments.

Provisioning instances

You can easily provision Amazon EC2 instances from AMIs. An AMI encapsulates the operating system and any other software or configuration files preinstalled on the instance. When you launch the instance, all the applications are already loaded from the AMI and ready to run. For information about creating AMIs, refer to the Amazon EC2 documentation.

The challenge with AMI-based deployments is that each time you need to upgrade software, you have to create a new AMI. Although the process of creating a new AMI (and deleting an old one) can be completely automated using EC2 Image Builder, you must define a strategy for managing and maintaining multiple versions of AMIs. An alternative approach is to include in the AMI only components that don't change often (operating system, language platform and low-level libraries, application server, and so on). More volatile components, like the application under development, can be fetched and deployed to the instance at runtime. For more details on how to create self-bootstrapped instances, see Bootstrapping.

Provisioning databases

Test databases can be efficiently implemented as Amazon RDS database instances. Your test teams can instantiate a fully operational database easily and load a test dataset from a snapshot. To create this test dataset, you first provision an Amazon RDS instance. After injecting the dataset, you create a snapshot of the instance. From that time on, every time you need a test database for a test environment, you can create one as an Amazon RDS instance from that initial snapshot. See Restoring from a DB snapshot.
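The snapshot-based workflow above can be sketched in a few lines of Python. This is a minimal illustration, not the whitepaper's own code: the snapshot and instance identifiers are hypothetical, and the actual AWS calls (which require boto3 and configured credentials) are shown in comments.

```python
# Sketch: provision a throwaway test database from a "golden" snapshot,
# run tests, then delete it. Identifiers below are hypothetical examples.

def restore_params(snapshot_id: str, run_id: str) -> dict:
    """Build the request for rds.restore_db_instance_from_db_snapshot."""
    return {
        "DBInstanceIdentifier": f"test-db-{run_id}",  # fresh instance per test run
        "DBSnapshotIdentifier": snapshot_id,          # snapshot holding the test dataset
        "DBInstanceClass": "db.t3.micro",             # a small class is enough for tests
        "Tags": [{"Key": "purpose", "Value": "test"}],
    }

params = restore_params("golden-testdata-snapshot", "run42")
print(params["DBInstanceIdentifier"])  # → test-db-run42
# With boto3, the same parameters drive the actual restore:
#   import boto3
#   rds = boto3.client("rds")
#   rds.restore_db_instance_from_db_snapshot(**params)
#   ... run the test suite against the new instance ...
#   rds.delete_db_instance(DBInstanceIdentifier=params["DBInstanceIdentifier"],
#                          SkipFinalSnapshot=True)
```

Because every run restores from the same snapshot, each test database starts from an identical dataset, which is what makes the test results comparable.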
Each Amazon RDS instance started from the same snapshot will contain the same dataset, which helps ensure that your tests are consistent.

Provisioning complete environments

While you can create complex test environments containing multiple instances using the AWS APIs, command line tools, or the AWS Management Console, AWS CloudFormation makes it even easier to create a collection of related AWS resources and provision them in an orderly and predictable fashion. AWS CloudFormation uses templates to create and delete a collection of resources together as a single unit (a stack). A complete test environment running on AWS can be described in a template, which is a text file in JSON or YAML format. Because templates are just text files, you can edit and manage them in the same source code repository you use for your software development project. That way, the template will mirror the status of the project, and test environments matching older source versions can be easily provisioned. This is particularly useful when dealing with regression bugs. In just a few steps, you can provision the full test environment, enabling developers and testers to simulate a bug detected in older versions of the software. AWS CloudFormation templates also support parameters that can be used to specify the software version to be loaded, the Amazon EC2 instance sizes for the test environment, the dataset to be used for the databases, and so on.

Provisioning cloud applications can be a challenging process that requires you to perform manual actions, write custom scripts, maintain templates, or learn domain-specific languages. You can now use the AWS Cloud Development Kit (AWS CDK), an open-source software development framework for defining cloud infrastructure as code with modern programming
languages and deploying it through AWS CloudFormation. AWS CDK uses familiar programming languages, such as TypeScript, JavaScript, Python, Java, C#/.NET, and Go, for modeling your applications. For more information about how to create and automate deployments on AWS using AWS CloudFormation, see AWS CloudFormation Resources.

Load testing

Functionality tests running in controlled environments are valuable tools to ensure software quality, but they give little information on how an application or a complete deployment will perform under heavy load. For example, some websites are specifically created to provide a service for a limited time: ticket sales for sports events, special sales, limited edition launches, and so on. Such websites must be developed and architected to perform efficiently during peak usage periods. In some cases, the project requirements clearly state the minimum performance metrics to be met under heavy load conditions (for example, search results must be returned in under 100 milliseconds (ms) for up to 10,000 concurrent requests), and load tests are exercised to ensure that the system can sustain the load within those limits.

For other cases, it is not possible or practical to specify the load a system should sustain. In such cases, load tests are performed to measure the behavior under heavy load conditions. The objective is to gradually increase the load on a system to determine the point where the performance degrades so much that the system cannot operate anymore. Load tests simulate heavy inputs that exercise and stress a system. Depending on the project, inputs can be a large number of concurrent incoming requests, a huge dataset to process, and so on. One of the main difficulties in load testing is generating large enough amounts of inputs to push
the tested system to its limits. Typically, you need large amounts of IT resources to deploy the system under test and to generate the test input, which requires further infrastructure. Because load tests generally don't run for more than a couple of hours, the AWS pay-as-you-go model nicely fits this use case. You can also automate load tests using the techniques described in the previous section, enabling your testers to exercise them more frequently, to ensure that each major change to the project doesn't adversely affect system performance and efficiency. Conversely, by launching automated load tests, you can discover whether a new algorithm, caching layer, or architecture design is more efficient and benefits the project.

Note: For quick and easy setup, testing tools and solutions are also available from the AWS Marketplace.

In serverless architectures using AWS services such as AWS Lambda, Amazon API Gateway, AWS Step Functions, and so on, load testing can help identify custom code in Lambda functions that may not run efficiently as traffic scales up. It also helps to determine an optimum timeout value, by analyzing your functions' running duration to identify problems with a dependency service. One of the most popular tools to perform this task is Artillery Community Edition, an open-source tool for testing serverless APIs. You can also use Distributed Load Testing on AWS to automate application testing, understand how it performs at scale, and fix bottlenecks before releasing your application.

Network load testing

Testing an application or service for network load involves sending large numbers of requests to the system being tested. There are many software solutions available to simulate request scenarios, but using multiple Amazon EC2 instances may be necessary to generate enough traffic. Amazon EC2 instances are available on demand and are charged by the hour, which makes them well suited for network load testing scenarios. Keep in mind the characteristics of different instance types: generally, larger instance types provide more input/output (I/O) network capacity, the primary resource consumed during network load tests.

With AWS, test teams can also perform network load testing on applications that run outside of AWS. Having load test agents dispersed in different AWS Regions enables testing from different geographies, for example, to get a better understanding of the end-user experience. In that scenario, it makes sense to collect log information from the instances that simulate the load. Those logs contain important information, such as response times from the tested system. By running the load agents from different Regions, the response time of the tested application can be measured for different geographies. This can help you understand the worldwide user experience. Because you can end load-testing Amazon EC2 instances right after the test, you should transfer log data to S3 for storage and later analysis. When you plan to run high-volume network load tests directly from your EC2 instances to other EC2 instances, follow the Amazon EC2 Testing Policy.

Load testing for AWS

Load testing an application running on AWS is useful to make sure that elasticity features are correctly implemented. Testing a system for network load is important to make sure that, for web front ends, Auto Scaling and Elastic Load Balancing configurations are correct. Auto Scaling offers many parameters and can use multiple conditions, defined with Amazon CloudWatch, to scale the number of front-end instances up or down. These parameters and conditions influence how fast an Auto Scaling group will add or remove instances. An Amazon EC2 instance's post-provisioning time might also affect an application's ability to scale up quickly enough. After initialization of the
operating system running on Amazon EC2 instances, additional services are initialized, such as web servers, application servers, memory caches, middleware services, and so on. The initialization time of these different services affects the scale-up delay, especially when additional software packages need to be pulled down from a repository. Load testing provides valuable metrics on how fast additional capacity can be added to a particular system. Auto Scaling is not only used for front-end systems. You might also use it for scaling internal groups of instances, such as consumers polling an Amazon SQS queue, or workers and deciders participating in an Amazon Simple Workflow Service (Amazon SWF) workflow. In both cases, load testing the system can help ensure you've correctly implemented and configured Auto Scaling groups or other automated scaling techniques, to make your final application as cost-effective and scalable as possible.

Cost optimization with Spot Instances

Load testing can require many instances, especially when exercising systems that are designed to support a high amount of load. While you can provision Amazon EC2 instances on demand, discard them when the test is completed, and pay only by the hour, there is an even more cost-effective way to perform those tests: using Amazon EC2 Spot Instances. Spot Instances enable customers to bid for unused Amazon EC2 capacity. Instances are charged the Spot Price set by Amazon EC2, which fluctuates depending on the supply of and demand for Spot Instance capacity. To use Spot Instances, place a Spot Instance request, specifying the instance type, the desired Availability Zone, the number of Spot Instances to run, and the maximum price to pay per instance hour. The Spot Price history for the past 90 days is available via the
Amazon EC2 API or the AWS Management Console. If the maximum price bid exceeds the current Spot Price, the request is fulfilled and instances are started. The instances run until either they are ended or the Spot Price increases above the maximum price, whichever is sooner. See Testimonials and Case Studies to read other customers' case studies and testimonials on EC2 Spot Instances.

User acceptance testing

The objective of user acceptance testing is to present the current release to a testing team representing the final user base, to determine if the project requirements and specification are met. When users can test the software earlier, they can spot conceptual weaknesses that have been introduced during the analysis phase, or clarify gray areas in the project requirements. By testing the software more frequently, users can identify functional implementation errors and user interface or application flow misconceptions earlier, lowering the cost and impact of correcting them. Flaws detected by user acceptance testing may be very difficult to detect by other means. The more often you conduct acceptance tests, the better for the project, because end users provide valuable feedback to development teams as requirements evolve.

However, like any other test practice, acceptance tests require resources to run the environment where the application to be tested will be deployed. As described in previous sections, AWS provides on-demand capacity as needed, in a cost-effective way, which is also appropriate for acceptance testing. Using some of the techniques described previously, AWS enables complete automation of the process of provisioning new test environments and of disposing of environments no longer needed. Test environments can be provided for certain times only, or continuously
from the latest source code version, or for every major release. By deploying the acceptance test environment within Amazon VPC, internal users can transparently access the application to be tested. Such an application can also be integrated with other production services inside the company, such as LDAP, email servers, and so on, offering a test environment to the end users that is even closer to the real, final production environment.

Side-by-side testing

Side-by-side testing is a method used to compare a control system to a test system. The goal is to assess whether changes applied to the test system improve a desired metric compared to the control system. You can use this technique to optimize the performance of complex systems, where a multitude of different parameters can potentially affect the overall efficiency. Knowing which parameter will have the desired effect is not always obvious, especially when multiple components are used together and influence each other's performance. You can also use this technique when introducing important changes to a project, such as new algorithms, caches, different database engines, or third-party software. In such cases, the objective is to ensure your changes positively affect the global performance of the system.

After you've deployed the test and control systems, send the same input to both, using load-testing techniques or simple test inputs. Finally, collect performance metrics and logs from both systems and compare them, to determine whether the changes you introduced in the test system present an improvement over the control system. By provisioning complete test environments on demand, you can perform side-by-side tests efficiently. While you can do side-by-side testing without automated environment provisioning, using the automation techniques described above makes it easier to perform those tests whenever needed, taking advantage of the pay-as-you-go model of AWS. In contrast, with traditional hardware it may not be possible to run multiple test
environments for multiple projects simultaneously.

Side-by-side tests are also valuable from a cost optimization point of view. By comparing two environments in different AWS accounts, you can easily come up with cost/performance ratios to compare the environments. By continuously testing architecture changes for cost performance, you can optimize your architectures for efficiency.

Fault tolerance testing

When AWS is the target production environment for the application you've developed, some specific test practices provide insights into how the system will handle corner cases, such as component failures. AWS offers many options for building fault-tolerant systems. Some services are inherently fault tolerant, for example, Amazon S3, Amazon DynamoDB, Amazon SimpleDB, Amazon SQS, Amazon Route 53, Amazon CloudFront, and so on. Other services, such as Amazon EC2, Amazon EBS, and Amazon RDS, provide features that help you architect fault-tolerant and highly available systems. For example, Amazon RDS offers the Multi-Availability Zone option, which enhances database availability by automatically provisioning and managing a replica in a different Availability Zone. For more information on how to build fault-tolerant architectures running on AWS, read Building Fault-Tolerant Applications on AWS and see the resources available in the AWS Architecture Center.

Many AWS customers run mission-critical applications on AWS, and they need to make sure their architecture is fault tolerant. As a result, an important practice for all systems is to test their fault tolerance capability. While a test scenario exercises the system (using techniques similar to load testing), some components are taken down on purpose, to check whether the system is able to recover from such simulated failure. You can use the AWS
Management Console or the CLI to interact with the test environment. For example, you might end Amazon EC2 instances and then test whether an Auto Scaling group is working as expected and a replacement instance is automatically provisioned. You can also automate this kind of test by integrating AWS Fault Injection Simulator with your CI/CD pipeline. It is a best practice to use automated tools that, for example, occasionally and randomly disrupt Amazon EC2 instances. With Fault Injection Simulator, you can stress an application by creating disruptive events, such as a sudden increase in CPU or memory consumption, to observe how the system responds, and implement improvements.

Resource management

With AWS, your development and test teams can have their own resources, scaled according to their own needs. Provisioning complex environments or platforms composed of multiple resources can be done using AWS CloudFormation stacks or some of the other automation techniques described in this whitepaper. In large organizations comprising multiple teams, it is a good practice to create an internal role or service responsible for centralizing and managing IT resources running on AWS. This role typically consists of:
• Promoting the internal development and test practices described here
• Developing and maintaining template AMIs and template AWS CloudFormation stacks with the different tools and platforms used in your organization
• Collecting resource requests from project teams and provisioning resources on AWS according to your organization's policies, including network configuration (such as Amazon VPC) and security configurations (such as security groups and IAM credentials)
• Monitoring resource usage and charges using AWS Cost Explorer, and allocating these to team budgets

You can
use AWS Service Catalog to achieve the tasks above, or you might want to develop your own internal provisioning and management portal for tighter integration with internal processes. You can do this by using one of the AWS SDKs, which allow programmatic access to resources running on AWS.

Cost allocation and multiple AWS accounts

Some customers have found it helpful to create specific accounts for development and test activities. This can be important when your production environment also runs on AWS and you need to separate teams and responsibilities. Separate accounts are isolated from each other by default, so that, for example, development and test users do not interfere with production resources. To enable collaboration, AWS offers a number of features that enable sharing of resources across accounts, such as Amazon S3 objects, AMIs, and Amazon EBS snapshots.

To separate out and allocate the cost of the various activities and phases of the development and test cycle, AWS offers various options. One option is to use separate accounts (for example, for development, testing, staging, and production), and each account will have its own bill. You can also consolidate multiple accounts using consolidated billing for AWS Organizations, to simplify costs and take advantage of quantity discounts with a single bill. Another option is to make use of the monthly cost allocation report, which enables you to organize and track your AWS costs by using resource tagging. In the context of development and test, tags can represent the various stages or teams of the development cycle, though you are free to choose the dimensions you find most helpful.

Conclusion

Development and test practices require certain resources at certain times in the development cycle. In traditional environments,
those resources might not be available at all, or not in the necessary timeframe. When those resources are available, they provide a fixed amount of capacity that is either insufficient (especially in variable activities like testing) or wasted, but paid for, when the resources are not used. For more information, see the Auto Scaling documentation.

AWS offers a cost-effective alternative to traditional development and test infrastructures. Instead of waiting weeks or even months for hardware, you can instantly provision resources, scale up as the workload grows, and release resources when they are no longer needed. Whether development and test environments consist of a few instances or hundreds, and whether they are needed for a few hours or 24/7, you still pay only for what you use. AWS is a programming language and operating system-agnostic platform, and you can choose the development platform or programming model used in your business. This flexibility enables you to focus on your project, not on operating and maintaining your infrastructure.

AWS also enables possibilities that are difficult to realize with traditional hardware. You can fully automate resources on AWS, so that environments can be provisioned and decommissioned without human intervention. You can start development environments on demand; kick off builds when needed, unconstrained by the availability of resources; provision test resources; and automatically orchestrate entire test runs or campaigns. AWS offers you the ability to experiment and iterate with a rapidly changeable infrastructure. Your project teams are free to use inexpensive capacity to perform any kind of test, or to experiment with new ideas, with no upfront expenses or long-term commitments, making AWS a platform of choice for development and test.
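The fully automated provision-and-decommission cycle described above can be sketched with AWS CloudFormation. This is a minimal illustration under stated assumptions: the stack name, template body, and parameter key are hypothetical, and the actual boto3 calls (which require credentials) appear only in comments.

```python
# Sketch: create a complete test environment as a CloudFormation stack,
# run tests, then delete the stack. Names and template are hypothetical.

def stack_request(stack_name: str, template_body: str, app_version: str) -> dict:
    """Build the request for cloudformation.create_stack."""
    return {
        "StackName": stack_name,
        "TemplateBody": template_body,  # the JSON/YAML template kept in source control
        "Parameters": [
            # The template can parameterize the software version under test.
            {"ParameterKey": "AppVersion", "ParameterValue": app_version},
        ],
    }

req = stack_request("test-env-run42", '{"Resources": {}}', "1.4.2")
# With boto3:
#   import boto3
#   cfn = boto3.client("cloudformation")
#   cfn.create_stack(**req)
#   cfn.get_waiter("stack_create_complete").wait(StackName=req["StackName"])
#   ... run the automated test campaign ...
#   cfn.delete_stack(StackName=req["StackName"])  # decommission when done
```

Because the entire environment is created and deleted as one unit, no human intervention is needed between test campaigns.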
Contributors

The following individuals and organizations contributed to this document:
• Rakesh Singh, Sr. Technical Account Manager, AMER World Wide Public Sector
• Carlos Conde
• Attila Narin

Further reading
• Developer Tools on AWS
• How AWS Pricing Works (whitepaper)
• AWS Architecture Center
• AWS Technical Whitepapers

Document revisions
Date: June 29, 2021 – Description: Updates
Date: November 2, 2012 – Description: First publication,General,consultant,Best Practices

Digital_Transformation_Checklist_Using_Technology_to_Break_Down_Innovation_Barriers_in_Government,Digital Transformation Checklist: Using Technology to Break Down Innovation Barriers in Government

December 2017

This paper has been archived. For the latest technical guidance on Public Sector Digital Transformation, refer to https://aws.amazon.com/government-education/digital-transformation/

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
• Introduction
• Transforming Vision
• Checklist
• Shifting Culture
• Checklist
• Change the Cost Model
• Go Cloud Native
• Track Progress
• Data-Driven Civic Innovation
• Create the Environment for Digital Transformation
• Deliver an Exceptional User Experience
• Collaborate for Improved Worker Productivity
• Expedite New Service Delivery
• Global Reach
• Key Takeaway
• Contributors
• Further Reading

Abstract

Innovation requires many ingredients: a great idea, creativity, persistence, the right data, and technology. Governments around the world are taking advantage of the cloud to reduce cost and transform the way they deliver on their mission. The expectations of an increasingly digital citizenry are high, yet all levels of government face budgetary and human resource constraints. Cloud computing (on-demand delivery of IT resources via the Internet, with pay-as-you-go pricing) can help government organizations increase innovation, agility, and resiliency, all while reducing costs. This whitepaper provides guidelines that governments can use to break down innovation barriers and achieve a digital transformation that helps them engage and serve citizens.

Amazon Web Services – Digital Transformation Checklist Page 1

Introduction

Digital transformation is more than simply digitizing data. It requires evolving from rigid legacy platforms to an IT environment that is designed to adapt to the changing needs of an organization. It calls for innovation, in addition to changes in policy, procurement, talent, and culture, to take full advantage of the new opportunities that come with new breakthrough technologies. Governments around the world are embracing the cloud to deliver services faster to citizens and to spur economic development. At the same time, this transformation can help them better cope with budgetary and human resource constraints. This whitepaper offers a checklist of strategies and tactics governments worldwide are using to break down innovation barriers and tackle mission-critical operations with the cloud.

Transforming Vision

True digital transformation employs an innovative approach, one that combines technology and organizational processes for developing and delivering new services. This requires a clear vision of where to
start. Active participation in the definition of a cloud strategy makes it easier to implement new ideas on an ongoing basis. Establishing a new mindset is also critical in the digital transformation process; updating technologies is not enough. To improve citizen engagement and staff productivity and accelerate service delivery, this change is essential across all levels of the organization. It's about rethinking the approach and how new technology can help it materialize. An agile development environment, a cultural shift, and the right technology model can help governments further their modernization efforts.

Checklist
 Communicate a vision for what success looks like
 Define a clear governance strategy, including the framework for achieving goals and the decision makers responsible for creating them
 Build a cross-functional team to execute activities that support the strategy and goals
 Identify technology partners with the expertise to help meet these goals
 Move to a flexible IT system that supports rapid change

Shifting Culture

The idea of change can be daunting. To successfully navigate a digital transformation, it is imperative to reshape the culture accordingly. This starts with shifting the organizational structure from traditional hierarchies and silos to smaller teams that are empowered to make decisions. Collaboration between development staff, IT, and other strategic units eliminates the "throw it over the wall" mentality and can ultimately translate to improved public service.

Note: To keep up with the changes in technology, it's important to build an IT workforce that understands the latest trends and to help staff stay ahead of inherent learning curves.

Innovation works best with a bottom-up approach where incentives are structured to recognize teams rather than individuals. And by rewarding experimentation, you can remove barriers and eliminate the fear of failure. To drive cultural change, do
the following:

Checklist
 Reorganize staff into smaller teams to empower decision making
 Train staff on new policies and best practices
 Give permission to deviate from traditional rules
 Build a cloud development environment that exists as a place to play and build confidence with new skills
 Shift to a short-term planning mindset and continuously iterate on the plan (agile project management)
 Consider hiring consultants to help with initial projects

Change the Cost Model

Small budgets can drive innovation, because teams will take creative steps to build new processes to address problems. Cloud services can positively impact cost with the ability to modernize infrastructures without substantial capital investments. Circumventing the long, up-front procurement process makes it possible to undertake more projects through immediate access to compute resources. In addition, cloud computing provides the option to spin up and spin down instances to accommodate seasonal services and dev/test cycles, while only paying for the compute resources that you use.

Approach the cost model incrementally: start with cost containment, shift to cost avoidance, and then focus on cost reduction. With a pay-per-use model, it's possible to return long-term budget back to the organization and reallocate funds to new projects.

Go Cloud Native

While some organizations prefer to initially move individual licenses and projects to the cloud, others opt for a cloud-native approach. Developing and running applications in this manner takes full advantage of the cloud computing model. And by using DevOps processes that promote collaboration across small teams, it's possible to accelerate the delivery of new services with greater reliability. DevOps tools provide sustainable processes through infrastructure automation, continuous integration and delivery, monitoring, and auto-remediation. With a DevOps model, it is possible to eliminate
disparate development stovepipes and drive efficiencies.

Checklist
 Adopt the philosophy of a cohesive unit across developer, operations, quality assurance, and security functions
 Encourage an ownership mindset throughout the entire development and infrastructure lifecycle, irrespective of roles
 Provide your team with standardized DevOps tools and training
 Build a unified code repository
 Add built-in security
 Perform frequent but small updates to remain agile and make deployment less risky
 Create an automated solution (drives consistency regardless of workflow or service)

By adopting a DevOps model, organizations have more flexibility to experiment and develop solutions to long-standing challenges, creating a culture that enables future innovation.

Track Progress

During the digital transformation journey, it is essential to establish metrics to track progress. With early indicators in place, it's possible to take immediate action if something goes wrong or needs to be corrected.

Checklist
 Create a data-driven metrics system
 Evaluate improvements and progress toward goals
 Assess whether the organization is planning and delivering consistently on goals within specified timeframes

Data-Driven Civic Innovation

The AWS engine of innovation has long been embraced by the startup community. Startups are now joined by governments that seek to power innovative solutions for large societal problems. As government data becomes more widely available, more people can use AWS compute and big data analysis services to tackle problems that were, until recently, exclusively the domain of government projects. Scientists, developers, and curious citizens are more equipped than ever to find forward-thinking new solutions to some of the world's biggest challenges. These opportunities for innovation are improving lives and creating opportunities for a new class of civic tech entrepreneurs.
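The kind of data-driven metric such analysis produces can be sketched in a few lines. The following is a minimal, illustrative Python sketch using synthetic records; the record layout (open and close dates of citizen service requests) and the median-days-to-close metric are assumptions chosen for illustration, not a prescribed AWS workflow.

```python
# Sketch: a simple data-driven progress metric for a civic service.
# The records below are synthetic; in practice they might come from an
# open-data portal export of citizen service requests.
from datetime import datetime
from statistics import median

# Hypothetical (opened, closed) dates for service requests.
requests = [
    ("2023-01-02", "2023-01-05"),
    ("2023-01-03", "2023-01-10"),
    ("2023-01-04", "2023-01-06"),
]

def days_to_close(opened: str, closed: str) -> int:
    """Whole days between the open and close dates of one request."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(closed, fmt) - datetime.strptime(opened, fmt)).days

durations = [days_to_close(opened, closed) for opened, closed in requests]
print(median(durations))  # prints 3: median days from open to close
```

A team tracking progress against its goals might publish a metric like this on a dashboard and watch it trend downward as new digital services roll out.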
Create the Environment for Digital Transformation

Drawing from Amazon's own experience as an innovator, AWS helps guide organizations toward techniques and tools to create a forward-leaning digital enterprise. But cloud computing is only half of the answer; the other half comes from an organization's commitment to making a change. So what else should governments be thinking about on the road to digital transformation? The following sections provide a framework for leveraging AWS in your organization.

Deliver an Exceptional User Experience

High user satisfaction results from ready access to information when and wherever needed. However, an agency's user experience should not just focus on citizens; it has to start with its own staff. Self-service web applications enable your users to find information without human intervention, regardless of time zone or operating hours. For example:
• Citizens can conduct business on their time, removing dependence on service centers with long waiting periods to reach representatives
• Employees gain access to convenient, on-demand information from any location, which makes it easy to share data with coworkers
• Governments can leverage expertise from private companies and other governments to accelerate innovation with new services
• Organizations can collect, analyze, and predict trends based on how web services are used

With a flexible system, it's no longer a hassle to modify services to better meet the demands of users.

How AWS Delivers

Governments are leading the way in driving innovation for citizens. The cloud offers not only cost savings and agility but also the opportunity to develop breakthroughs in citizen engagement. Whether through open data initiatives, public safety modernization, education reform, citizen service improvements, or infrastructure programs, more government organizations are
increasingly turning to AWS to provide the cost-effective, scalable, secure, and flexible infrastructure necessary to transform. With a focus on delivering value from taxpayer dollars, all levels of government look to manage costs while maintaining the performance and capacity citizens require.

In a cloud computing environment, new IT resources are just a click away. This reduces the time it takes to make those resources available to developers from weeks to just minutes. Trimming cost and time for experimentation and development results in a dramatic increase in agility for the organization. With cloud computing, it's not necessary to make large upfront investments in hardware or in time spent managing it. Instead, it's possible to provision exactly the right type and size of computing resources necessary to test new ideas or operate the IT department. You can access as many resources as needed, almost instantly, and only pay for what gets used.

Collaborate for Improved Worker Productivity

Agencies can quickly achieve business goals by leveraging experience across multiple organizations. By facilitating real-time communication to share information between teams, efficiency increases. In addition, the sharing of information fosters a culture of trust and innovative thinking. And with improved access to information, workers are able to make better-informed decisions to achieve business results.

Checklist
 Pool limited resources to reduce cost and redundant efforts
 Evaluate whether incremental changes produce higher quality results
 Be specific about how to improve communication

How AWS Delivers

AWS provides a host of services that can integrate into your existing processes and help transform the workplace into a collaborative environment.

Amazon WorkDocs

Amazon WorkDocs is a fully managed, secure enterprise storage and sharing service offering strong administrative controls and feedback capabilities. Users
can comment on files, share them with others, and seamlessly upload new versions. Users have access from any place or device, including PCs, Macs, tablets, and mobile devices. IT administrators can integrate with existing corporate directories, enjoy flexible sharing policies, and control where data is stored.

Identity and Access Management

AWS Identity and Access Management (IAM) enables secure, controlled access to AWS services and resources for users. IAM creates and manages AWS users and groups, and provides permissions to give them access to AWS resources.

DevOps and AWS

Rapidly and reliably build and deliver citizen services using AWS and DevOps practices. These services simplify provisioning and managing infrastructure, deploying application code, automating software release processes, and monitoring application and infrastructure performance. Running development and test workloads on AWS enables the elimination of hardware-based resource constraints to quickly create developer environments and expand the testing machine fleet. It offers instant access to machines with flexible configuration while only charging for what is used. This enables faster onboarding of new developers, the ability to try out configuration changes in parallel, and running as large a test pass as needed.

Built-in Security

Government agencies are stewards of citizens' data, and it is imperative to have the right controls in place to maintain availability and integrity of that data. Cloud security at AWS is the highest priority. AWS customers can benefit from a data center and network architecture built to meet the requirements of the most security-sensitive organizations. With built-in security, it's possible to:
• React to incidents quickly
• Run security scans daily
• Monitor and track systems
• Receive alerts if any changes are made to systems or services

Data Protection

Highly resilient disaster recovery is often viewed as complex and cost-prohibitive, but it's affordable and easy to use in the cloud. Agencies are using the AWS Cloud to enable faster disaster recovery of their critical IT systems without incurring the infrastructure expense of a second physical site. If an incident occurs, AWS provides rapid recovery of IT infrastructure and data to ensure business continuity.

Expedite New Service Delivery

Speed and agility have become basic requirements for conducting business. Today, agencies must design flexibility into new services from the start to make it easy to adapt as the mission evolves. This is also paramount for transforming IT infrastructure. Moving to an on-demand computing environment delivers the requisite flexibility and scalability to support a collaborative work environment. This approach minimizes costs and reliably adapts resources to meet the needs of the business.

How AWS Delivers

The AWS Cloud Adoption Framework offers structure to help agencies develop an efficient and effective plan for their digital transformation. Guidance and best practices prescribed within the framework offer a comprehensive approach to cloud computing across the organization, throughout the IT lifecycle. Agencies no longer need to plan for and procure IT infrastructure (that is, network, data storage, system resources, data centers, and supporting hardware and software) weeks or months in advance. Instead, it's possible to instantly configure and launch hundreds or thousands of servers in minutes and deliver results faster.

Global Reach

By combining expertise across agencies to work on common problems, organizations around the globe can share best practices, take advantage of economies of scale to reduce costs, provide better quality, deliver more effective services, and reduce risk.

How AWS Delivers

AWS is organized into AWS Regions and Availability Zones that allow for high-throughput and low-latency communication. This design also enables fault
isolation: an outage of one AWS Region or local Availability Zone does not affect the remaining AWS infrastructure. Each Availability Zone has an identical IaaS cloud services system that enables mission owners to cost-effectively deploy applications and services with great flexibility, scalability, and reliability.

Key Takeaway

Digital transformation requires strong leadership to drive change, as well as a clear vision. Organizations are experimenting with and benefiting from cloud technology to achieve digital transformation. The result of this transformation is a more resilient and innovative government that can deliver services to citizens through the medium they now demand, and it can help retain innovative talent within agencies. As an added bonus, this creates job opportunities, because new talent is needed to solve new problems, and the entrepreneurship this brings can spur economic development. Whether it is transforming how individuals collaborate or the way in which organizations execute large-scale processes, digital transformation offers significant upside for all agencies, regardless of their size or mission.

Contributors

The following individuals and organizations contributed to this document:
• Carina Veksler, Public Sector Solutions, AWS Public Sector
• Doug VanDyke, General Manager, Federal Government, AWS Public Sector

Further Reading

For additional information, see the following:
• How Cities Can Stop Wasting Money, Move Faster, and Innovate
• AWS Cloud Adoption Framework
• 10 Considerations for Cloud Procurement
• Maximizing Value with AWS,General,consultant,Best Practices

Docker_on_AWS_Running_Containers_in_the_Cloud,

Docker on AWS: Running Containers in the Cloud
First Published April 1, 2015
Updated July 26, 2021

This version has been archived. For the latest version of this document, refer to https://docs.aws.amazon.com/whitepapers/latest/docker-on-aws/docker-on-aws.html

Notices

Customers
are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 1
Container Benefits 2
Speed 2
Consistency 2
Density and Resource Efficiency 3
Portability 3
Container orchestration on AWS 4
Key components 9
Container-Enabled AMIs 9
Scheduling 9
Container Repositories 11
Logging and Monitoring 12
Storage 12
Networking 13
Security 14
CI/CD 16
Infrastructure as Code 16
Scaling 17
Conclusion 18
Contributors 18
Further reading 18
Document revisions 19

Abstract

This whitepaper provides guidance and options for running Docker on AWS. Docker is an open platform for developing, shipping, and running applications in a loosely isolated environment called a container. Amazon Web Services (AWS) is a natural complement to containers and offers a wide range of scalable infrastructure services upon which containers can be deployed. You will find various options, such as AWS Elastic Beanstalk, Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, and AWS App Runner. This paper covers details of each option and the key components of container orchestration.

Introduction

Prior to the introduction of containers, developers and administrators were often faced with the challenges of
compatibility restrictions: applications and workloads had to be built specifically for their predetermined environments. If a workload needed to be migrated, for example from bare metal to a virtual machine (VM), from a VM to the cloud, or between service providers, this typically meant rebuilding the application or the workload entirely to ensure compatibility with the new environment. Containers were introduced to overcome these incompatibilities by providing a common interface. With the release of Docker, interest in container technology has rapidly increased.

Docker is an open-source project that uses several resource isolation features of the Linux kernel to sandbox an application, its dependencies, configuration files, and interfaces inside of an atomic unit called a container. This allows a container to run on any host with the appropriate kernel components, while shielding the application from behavioral inconsistencies due to variances in software installed on the host. Containers use operating-system-level virtualization, compared to VMs, which use hardware-level virtualization using a hypervisor (software or firmware that creates and runs VMs). Multiple containers can run on a single host OS without needing a hypervisor, while being isolated from neighboring containers. This layer of isolation allows consistency, flexibility, and portability that enable rapid software deployment and testing.

There are many ways in which using containers on AWS can benefit your organization. Docker has been widely employed in use cases such as distributed applications, batch jobs, and continuous deployment pipelines. The use cases for Docker continue to grow in areas like distributed data processing, machine learning, streaming media delivery, and genomics. The following examples show how AWS services can integrate with Docker:

• Amazon SageMaker provides pre-built Docker images for deep learning through TensorFlow and PyTorch, or lets you bring your custom pre-trained
models through Docker images
• Amazon EMR on Amazon EKS provides a deployment option to run open-source big data frameworks on Amazon EKS
• Bioinformatics applications for genomics within Docker containers on Amazon ECS provide a consistent, reproducible runtime environment
• For many SaaS providers, the profile of Amazon EKS represents a good fit with their multi-tenant microservices development and architectural goals

Container Benefits

The rapid growth of Docker containers is being fueled by the many benefits that they provide. If you have applications that run on VMs or bare metal servers today, you should consider containerizing them to take advantage of the benefits that come from Docker containers. These benefits can be seen across your organization, from developers and operations to Quality Assurance (QA). The primary benefits of Docker are speed, consistency, density, and portability.

Speed

Because of their lightweight and modular nature, containers can enable rapid iteration of your applications. Development speed is improved by the ability to deconstruct applications into smaller units. This reduces shared resources between application components, leading to fewer compatibility issues between required libraries or packages. Operational speed is improved because code built in a container on a developer's local machine can be easily moved to a test server by simply moving the container. The container startup time primarily depends on the size of the container image cache and the time to pull the image and start the container on the host. To improve container startup time, you must keep the size of the image as small as possible, using techniques like multi-stage builds and local caches when applicable. For more information, see Best practices for writing Dockerfiles.

Consistency

The consistency and fidelity of a modular development environment provide predictable results when moving code between development, test, and production
systems. By ensuring that the container encapsulates exact versions of necessary libraries and packages, it is possible to minimize the risk of bugs due to slightly different dependency revisions. This concept easily lends itself to a disposable-system approach, in which patching individual containers is less preferable than building new containers in parallel, testing, and replacing the old. This practice helps avoid drift of packages across a fleet of containers, versions of your application, or dev/test/prod environments; the result is more consistent, predictable, and stable applications.

Density and Resource Efficiency

Containers facilitate enhanced resource efficiency by allowing multiple containers to run on a single system. Resource efficiency is a natural result of the isolation and allocation techniques that containers use. Containers can easily be restricted to a certain number of CPUs and allocated specific amounts of memory. By understanding what resources a container needs and what resources are available to your VM or underlying host server, it's possible to maximize the number of containers running on a single host, resulting in higher density, increased efficiency of compute resources, and less wastage on excess capacity.

Amazon ECS achieves this through placement strategies. The binpack placement strategy tries to optimize the placement of containers to be as cost-efficient as possible. Containers in ECS are part of ECS tasks, which are placed on compute instances so as to leave the least amount of unused CPU or memory. This in turn minimizes the number of compute instances in use, resulting in better resource efficiency. The placement strategies can be supported by placement constraints, which let you place tasks by constraints like the instance type or the Availability Zone. This further enables you to efficiently utilize resources by ensuring that your tasks are running on instance types suitable for your workload and by logically separating your tasks using
task groups.

Amazon EKS uses the native Kubernetes scheduling and placement strategy, which tries to place pods on the nodes that best match the requirements of your workloads, and not to place pods on nodes where there aren't sufficient resources. Kubernetes allows you to limit resources like CPU and memory for Kubernetes namespaces, pods, or containers. For more information, see Scheduling.

Portability

The flexibility of Docker containers is based on their portability, ease of deployment, and smaller size compared to virtual machines. Like Git, Docker provides a simple mechanism for developers to download and install Docker containers and their subsequent applications, using the command docker pull. Because Docker provides a standard interface, it makes containers easy to deploy wherever you like, providing portability among different versions of Linux, your laptop, or the cloud. The images Docker builds are compliant with the OCI (Open Container Initiative), which was created to support fully interoperable container standards. Docker can build images by reading the instructions from a Dockerfile, which is a text-based manifest. You can run the same Docker container on any supported version of Linux if you have the Docker stack installed on the host. Additionally, Docker supports Windows containers, which can run on supported Windows versions.

Containers also provide flexibility by making a microservices architecture possible. In contrast to common infrastructure models in which a virtual machine runs multiple services, packaging services inside their own containers on top of a host OS allows a service to be moved between hosts, isolated from failure of other adjacent services, and protected from errant patches or software upgrades on the host system. Because Docker provides clean, reproducible, and modular environments, it streamlines both code deployment and infrastructure management. Docker offers numerous benefits for a variety of use cases
, whether in development, testing, deployment, or production.

Container orchestration on AWS

Amazon Web Services (AWS) is an elastic, secure, flexible, and developer-centric ecosystem that serves as an ideal platform for Docker deployments. AWS offers the scalable infrastructure, APIs, and SDKs that integrate tightly into a development lifecycle and accentuate the benefits of the lightweight and portable containers that Docker offers to its users. In this section, we will discuss the different possibilities for container deployments using AWS services such as AWS Elastic Beanstalk, Amazon Elastic Container Service, Amazon Elastic Kubernetes Service, AWS Fargate, and other additional services.

• AWS Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can also choose your own platform, programming language, and any application dependencies (such as package managers or tools), which typically aren't supported by other platforms. By using Docker with Elastic Beanstalk, you have an infrastructure that handles all the details of capacity provisioning, load balancing, scaling, and application health monitoring. Elastic Beanstalk can deploy a Docker image and source code to EC2 instances running the Elastic Beanstalk Docker platform. The platform offers multi-container (and single-container) support. You can also leverage the Docker Compose tool on the Docker platform to simplify your application configuration, testing, and deployment. In situations where you want the benefits of containers and the simplicity of deploying applications to AWS by uploading a container image, AWS Elastic Beanstalk may be the right choice. While it is useful for deploying a limited number of containers, the way to run and operate containerized applications with more flexibility at scale is by using Amazon ECS.

• Amazon ECS is a fully managed container orchestration
service, and the easiest way to rapidly launch thousands of containers across AWS's broad range of compute options using your preferred CI/CD and automation tools. Amazon ECS with the EC2 launch mode provides an easy lift for your applications that run on VMs. The powerful simplicity of Amazon ECS enables you to grow from a single Docker container to managing your entire enterprise application portfolio across Availability Zones, in the cloud and on-premises using Amazon ECS Anywhere, without the complexity of managing a control plane, add-ons, and nodes. ECS clusters are made up of container instances, which are Amazon EC2 instances running the Amazon ECS container agent (which communicates instance and container state information to the cluster manager) and a pre-configured dockerd, the Docker daemon. The Amazon ECS container agent is included in the Amazon ECS-optimized AMI, but you can also install it on any EC2 instance that supports the Amazon ECS specification. Your containers are defined in a task definition that you use to run individual tasks, or tasks within an ECS service, which enables you to run and maintain a specified number of tasks simultaneously in a cluster. The task definition can be thought of as a blueprint for your application, in which you can specify various parameters such as the Docker image to use, which ports should be open, the amount of CPU and memory to use with each task or container within a task, and the IAM role the task should use. We will discuss ECS task and service use cases in depth in the scheduling part of the key components section.

• Amazon EKS provides a natural migration path if you are using Kubernetes already and want to continue to make use of those skills on AWS for your container applications. EKS is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. It provides highly available and secure clusters and
automates key tasks such as patching, node provisioning, and updates. EKS runs a single-tenant Kubernetes control plane for each cluster; the control plane infrastructure is not shared across clusters or AWS accounts. Amazon EKS automatically manages the availability and scalability of the Kubernetes control plane nodes that are responsible for scheduling containers, managing the availability of applications, storing cluster data, and other key tasks. EKS runs upstream Kubernetes, certified conformant for a predictable experience. You can easily migrate any standard Kubernetes application to EKS without needing to refactor your code. This allows you to deploy and manage workloads on your Amazon EKS cluster the same way that you would with any other Kubernetes environment.

Amazon EKS Anywhere is a new deployment option (coming in 2021) that enables you to easily create and operate Kubernetes clusters on premises, including on your own virtual machines and bare metal servers. EKS Anywhere provides an installable software package for creating and operating Kubernetes clusters on premises, and automation tooling for cluster lifecycle support. As new Kubernetes versions are released and validated for use with Amazon EKS, we will support three stable Kubernetes versions as part of the update process at any given time. The container runtime used in EKS clusters may change in the future, but your Docker containers will still work and you shouldn't notice it; EKS will eventually move to containerd as the runtime for the EKS-optimized Amazon Linux 2 AMI. You can follow the containers roadmap issue for more details.

• AWS Fargate provides a way to run containers in a serverless manner with both ECS and EKS. AWS Fargate allows you to deliver autonomous container operations, which reduces the time spent on configuration, patching, and security. AWS Fargate runs each task or pod in its own kernel, providing the tasks and pods their own isolated compute
environment. This enables your application to have workload isolation and improved security by design. With AWS Fargate, there is no over-provisioning and paying for additional servers; it allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. When you run your Amazon ECS tasks and services with the Fargate launch type, you package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task. For Amazon EKS, AWS Fargate integrates with Kubernetes using controllers that are built by AWS using the extensible model provided by Kubernetes. These controllers run as part of the Amazon EKS managed Kubernetes control plane and are responsible for scheduling native Kubernetes pods onto Fargate.

• AWS App Runner is a fully managed service that makes it easy to quickly deploy containerized web applications and APIs at scale, without any prior experience of running infrastructure on AWS. You can go from an existing container image, container registry, source code repository, or existing CI/CD workflow to a fully running containerized web application on AWS in minutes. AWS App Runner supports full-stack development, including both front-end and back-end web applications that use HTTP and HTTPS protocols. App Runner automatically builds and deploys the web application and load balances traffic with encryption. It monitors the number of concurrent requests sent to your application and automatically adds additional instances based on request volume. AWS App Runner is ideal if you want to run and scale your application on AWS without configuring or managing infrastructure services. This means you will not have any orchestrators to configure, build pipelines to set up, load balancers to optimize, or TLS
certificates to rotate. This really makes it the simplest way to build and run your containerized web application in AWS.

• Other options: There are additional Docker-specific offerings available on AWS which can be useful based on the nature of your workloads. It is beyond the scope of this whitepaper to look at these offerings in detail, but we have extensive official AWS documentation and blog posts available for each of them.

o AWS App2Container (A2C) is a command line tool which can analyze and build an inventory of all .NET and Java applications running in virtual machines, on premises or in the cloud. A2C packages the application artifact and identified dependencies into container images, configures the network ports, and generates a Dockerfile, ECS task definition, or Kubernetes deployment YAML by integrating with various AWS services.

o Amazon Lightsail is a highly scalable compute and networking resource on which you can deploy, run, and manage containers. When you deploy your images to your Lightsail container service, the service automatically launches and runs your containers in the AWS infrastructure.

o AWS Batch helps you to run batch computing workloads on the AWS Cloud. You can define job definitions that specify which Docker container images to run your jobs, which run as containerized applications on AWS Fargate or Amazon EC2 resources in your compute environment.

o AWS Lambda: You can package and deploy Lambda functions as container images of up to 10 GB in size. This allows you to easily build and deploy larger workloads that rely on sizable dependencies, such as machine learning or data-intensive workloads. Just like functions packaged as ZIP archives, functions deployed as container images benefit from the same operational simplicity, automatic scaling, high availability, and native integrations with many services that you get with Lambda.

o Red Hat OpenShift Service on AWS (ROSA): If you are presently running Docker containers in OpenShift, ROSA can accelerate your application development process by leveraging familiar OpenShift APIs and tools for deployments on AWS. ROSA comes with pay-as-you-go hourly and annual billing, a 99.95% SLA, and joint support from AWS and Red Hat.

o AWS Proton is a fully managed delivery service for container and serverless applications, built for platform engineering teams to connect and coordinate all the different tools needed for infrastructure provisioning, code deployments, monitoring, and updates.

Your choice is usually driven by how much control you want to retain, at the expense of additional management effort, versus how much AWS can manage for you in the environment the containers run in. For most use cases, you may want to consider starting on the fully managed end of the spectrum (App Runner or Fargate) and working backwards towards more of a self-managed experience based on the demands of your workload. The self-managed experience can go to the extent of managing Docker containers on EC2 VMs without the use of any AWS managed services, so you have the flexibility to pick the orchestration solution that works best for your needs.

Key components

Container Enabled AMIs

AWS has developed a streamlined, purpose-built operating system for use with Amazon EC2 Container Service. The Amazon ECS-optimized AMI, built on top of Amazon Linux 2, is pre-configured with the Amazon ECS container agent and a Docker daemon with Docker runtime dependencies, which is the simplest way for you to get started and to get your containers running on AWS quickly. The Amazon EKS-optimized Amazon Linux AMI is also built on top of Amazon Linux 2, configured to work with Amazon EKS; it includes Docker, kubelet, and the AWS IAM Authenticator. Although you can create your own container instance AMI that meets the basic specifications needed to run your containerized workloads, the Amazon ECS and EKS optimized AMIs are pre-configured with
requirements and recommendations tested by AWS engineers. You can also use Bottlerocket, a Linux-based open-source operating system purpose-built by AWS for running containers. It includes only the essential software required to run containers and focuses on security and maintainability, providing a reliable, consistent, and safe platform for container-based workloads.

Scheduling

When applications are scaled out across multiple hosts, the ability to manage each host node and its Docker containers, and to abstract away the complexity of the underlying platform, becomes important. In this environment, scheduling refers to the ability to schedule containers on the most appropriate host in a scalable, automated way. In this section we will review key scheduling aspects of various AWS container orchestration services.

• Amazon ECS provides flexible scheduling capabilities by leveraging the same cluster state information provided by the Amazon ECS APIs to make appropriate placement decisions. Amazon ECS provides two scheduler options: the service scheduler and RunTask.

o The service scheduler is suited for long-running stateless applications; it ensures that an appropriate number of tasks are constantly running (replica) and automatically reschedules if tasks fail. Services also let you deploy updates, such as changing the number of running tasks or the task definition version that should be running. The daemon scheduling strategy deploys exactly one task on each active container instance.

o RunTask is suited for batch jobs, scheduled jobs, or a single job that performs work and stops. You can allow the default task placement strategy to distribute tasks randomly across your cluster, which minimizes the chances that a single instance gets a disproportionate number of tasks. Alternatively, you can customize how the scheduler places tasks using task placement strategies and constraints.

• Amazon EKS: The Kubernetes scheduler (kube-scheduler) is responsible for finding the best node for every newly created pod, or any unscheduled pods that have no node assigned. It assigns the pod to the node with the highest ranking based on its filtering and ranking system; if there is more than one node with equal scores, kube-scheduler selects one of them at random. You can constrain a pod so that it can only run on a set of nodes. The scheduler will automatically make a reasonable placement, but there are some circumstances where you may want to control which node the pod deploys to, for example to ensure that a pod ends up on a machine with SSD storage attached to it, or to co-locate pods from two different services that communicate a lot in the same Availability Zone.

o NodeSelector is the simplest recommended form of node selection constraint. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels.

o Topology spread constraints control how pods are spread across your cluster among failure domains such as Regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization.

o Node affinity is a property of pods that attracts them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite: they allow a node to repel a set of pods. Tolerations are applied to pods and allow (but do not require) the pods to schedule onto nodes with matching taints.

o Pod priority indicates the importance of a pod relative to other pods. If a pod cannot be scheduled, the scheduler tries to preempt (evict) lower-priority pods to make scheduling of the pending pod possible.

• Lambda is serverless, so you don't need to manage where or how to schedule your containers. After you create a container image in Amazon ECR, you can simply create and run the Lambda function.

• Elastic Beanstalk can deploy a Docker image and source code to EC2 instances running the
Elastic Beanstalk Docker platform. Compared to EKS or ECS, Elastic Beanstalk's container scheduling features are more limited, in exchange for managed infrastructure provisioning. For more information on samples and help getting started with a Docker environment, see Using the Docker platform.

Container Repositories

Docker containers are distributed in the form of Docker images. Docker images are a compile-time construct defined by the Dockerfile manifest, a set of instructions to create the containers. Docker images are stored in container registries for delivery to applications that need them. Within a registry, a collection of related images is grouped together as repositories.

Amazon Elastic Container Registry (Amazon ECR) is the AWS native managed container registry for Open Container Initiative (OCI) images, which provides a convenient option with native integration into the AWS ecosystem. With ECR you can share container images privately within your organization using a private repository, by default only accessible within your AWS account by IAM users with the necessary permissions. Public repositories are available worldwide for anyone to discover and download. Amazon ECR comes with features like encryption at rest using AWS Key Management Service (AWS KMS) and in transit using Transport Layer Security (TLS) endpoints. Amazon ECR image scanning helps in identifying software vulnerabilities in your container images by using the CVE database from the Clair project, and provides a list of scan findings. Additionally, you can use VPC interface endpoints for ECR to restrict the network traffic between your VPC and ECR to the Amazon network, without the need for an internet gateway, NAT gateway, or a VPN/Direct Connect connection.

You can also use a registry of your choice, such as Docker Hub or any other cloud or self-hosted container registry, and integrate it seamlessly with AWS container services. For developers starting out with containers, Docker Hub limits anonymous usage to 100 image requests every six hours, but with ECR Public you get one unauthenticated pull every second, providing a less restrictive option to get started. Your limits increase significantly when you authenticate to ECR, and this is the recommended way to work with container registries as your adoption increases.

Logging and Monitoring

Treating logs as a continuous stream of events instead of static files allows you to react to the continuous nature of log generation. You can capture, store, and analyze real-time log data to get meaningful insights into the application's performance, network, and other characteristics. An application must not be required to manage its own log files. You can specify the awslogs log driver for containers in your task definition, under the logConfiguration object, to ship the stdout and stderr I/O streams to a designated log group in Amazon CloudWatch Logs for viewing and archival. Additionally, FireLens for Amazon ECS enables you to use task definition parameters with the awsfirelens log driver to route logs to other AWS services or third-party log aggregation tools for log storage and analytics. FireLens works with Fluentd and Fluent Bit, which are fully compatible with Docker and Kubernetes. Using the Fluent Bit daemonset, you can send container logs from your EKS clusters to CloudWatch Logs.

Amazon CloudWatch is a monitoring service that you can use to collect various system and application-wide metrics and logs and to set alarms. CloudWatch Container Insights helps you explore, aggregate, and summarize your container metrics, application logs, and performance log events at the cluster, node, pod, task, and service level through automated dashboards in the CloudWatch console. Container Insights also provides diagnostic information, such as container restart failures and crash-loop backoffs in an EKS cluster, to help you isolate issues and resolve them quickly. Container Insights is available for Amazon Elastic Container Service (Amazon ECS, including
Fargate), Amazon Elastic Kubernetes Service (Amazon EKS), and Kubernetes platforms on Amazon EC2.

During AWS re:Invent 2020, AWS launched Amazon Managed Service for Prometheus (AMP) and Amazon Managed Service for Grafana (AMG), two new open-source-based managed services providing additional options to choose from. AWS also provides the option to discover and ingest Prometheus custom metrics into CloudWatch Container Insights to reduce the number of monitoring tools. Given the pace at which new services and features are being launched in this space, AWS launched the One Observability Demo Workshop to help customers get hands-on experience with AWS instrumentation options and the latest capabilities of AWS observability services in a self-paced, guided sandbox environment.

Storage

By default, all files created inside a container are stored on a writable container layer. This means the data doesn't persist when that container no longer exists, and is tightly coupled to the host where the container is running. Amazon ECS supports the following data volume options for containers:

• Bind mounts: A file or directory on a host can be mounted into one or more containers. For tasks hosted on Amazon EC2, the data can be tied to the lifecycle of the host by specifying a host and optional sourcePath value in your task definition. Within the container, writes to the containerPath are persisted to the underlying volume defined in the sourcePath, independently from the container's lifecycle. You can also share data from a source container with other containers in the same task. Tasks hosted on AWS Fargate using platform version 1.4.0 or later receive a minimum of 20 GB of ephemeral storage for bind mounts, which can be increased to a maximum of 200 GB.

• Docker volumes: With the support for Docker volumes, you have the flexibility to configure the lifecycle of the Docker volume and specify whether it's a scratch space volume specific to a single instantiation of a task, or a persistent volume that persists beyond the lifecycle of a unique instantiation of the task.

• Amazon EFS: Amazon EFS provides simple, scalable, and persistent file storage for use with your Amazon ECS tasks. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications can have the storage they need, when they need it. Amazon EFS volumes are supported for tasks hosted on Fargate or Amazon EC2 instances.

Kubernetes supports many types of volumes. Ephemeral volume types have the lifetime of a pod, but persistent volumes exist beyond the lifetime of a pod. When a pod ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not destroy persistent volumes. For any kind of volume in a given pod, data is preserved across container restarts. For Amazon EKS, Container Storage Interface (CSI) drivers provide a CSI interface to manage the lifecycle of Amazon EBS, Amazon EFS, and FSx for Lustre persistent volumes. For more information, see Kubernetes Volumes.

Networking

AWS container services take advantage of the native networking features of Amazon Virtual Private Cloud (Amazon VPC). This allows the hosts running your containers to be in different subnets across Availability Zones, providing high availability. Additionally, you can take advantage of VPC features like network access control lists (NACLs) and security groups to ensure that only the network traffic you want to allow enters or leaves your containers.

For ECS, the main networking modes are ones that operate at a task level using the awsvpc network mode, or the traditional bridge network mode, which runs a built-in virtual network inside each Amazon EC2 instance. awsvpc is the only network mode available for AWS Fargate. Amazon EKS uses the Amazon VPC Container Network Interface (CNI) plugin for Kubernetes for its default native VPC networking to attach network interfaces to Amazon EC2 worker nodes.
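As a rough illustration of the awsvpc task networking mode (together with the awslogs log driver covered under Logging and Monitoring), the sketch below builds the payload shape that boto3's ECS `register_task_definition` call accepts. The family name, image URI, Region, log group, and run-time network values are hypothetical placeholders, not values from this paper:

```python
# Sketch only: a minimal ECS task definition payload for the awsvpc network
# mode. Family, image URI, Region, and log group are hypothetical placeholders.
task_definition = {
    "family": "web-app",                      # hypothetical family name
    "networkMode": "awsvpc",                  # each task gets its own elastic network interface
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",
    "memory": "512",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "logConfiguration": {
                "logDriver": "awslogs",       # ship stdout/stderr to CloudWatch Logs
                "options": {
                    "awslogs-group": "/ecs/web-app",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "web",
                },
            },
        }
    ],
}

# With boto3, this payload could then be registered and launched, for example:
#   ecs = boto3.client("ecs")
#   ecs.register_task_definition(**task_definition)
#   ecs.run_task(
#       cluster="demo",                       # hypothetical cluster name
#       taskDefinition="web-app",
#       launchType="FARGATE",
#       networkConfiguration={"awsvpcConfiguration": {
#           "subnets": ["subnet-0abc"],       # hypothetical subnet and security group
#           "securityGroups": ["sg-0abc"],
#       }},
#   )
```

Because the task, rather than the host, owns the network interface in this mode, subnets and security groups are supplied per task at launch time, which is consistent with awsvpc being the only mode Fargate supports.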
Amazon VPC network policies restrict traffic between control plane components to within a single cluster. Control plane components for a cluster can't view or receive communication from other clusters or other AWS accounts, except as authorized with Kubernetes RBAC policies. The pods receive IP addresses from the private IP ranges of your VPC. When the number of pods running on the node exceeds the number of addresses that can be assigned to a single network interface, the plugin starts allocating a new network interface, provided the maximum number of network interfaces for the instance aren't already attached. Using CNI custom networking, you can assign IP addresses from a different CIDR block than the subnet that the primary network interface is connected to. You also have the option to set network policies through third-party libraries such as Calico, so you can control network communication inside your Kubernetes cluster at a very granular level. More details on EKS networking are available in the AWS documentation.

Security

The shared responsibility model for security applies to AWS container services as well. AWS manages the security of the infrastructure that runs your containers; however, controlling access for your users and your container applications is your responsibility as the customer.

AWS Identity and Access Management (IAM) plays an important role in the security of AWS container services. The permissions provided by the IAM policies attached to the different principals in your AWS account determine what capabilities they have. You should avoid using long-lived credentials, like access keys and secret access keys, with your container applications. IAM roles provide you with temporary security credentials for your role session. You can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. There are usually IAM roles at two different levels: the first determines what a user can do within AWS container services, and the second is a role that determines which other AWS services your container applications running in your cluster can interact with. For EKS, IAM roles work together with Kubernetes RBAC to control access at multiple levels. IAM roles for service accounts (IRSA) with EKS enables you to associate an IAM role with a Kubernetes service account. This service account can then provide AWS permissions to the containers in any pod that uses that service account. With this feature, you no longer need to over-provision permissions in the IAM role associated with the Amazon EKS node so that pods on that node can call AWS APIs.

Other aspects of security are network security, audit capability, and secrets management. The container services take advantage of different constructs provided by Amazon VPC. By applying the right controls for IP addresses and ports at different levels, you can ensure that only desired traffic enters and leaves your container applications. For your audit needs, you can use AWS CloudTrail, a service that provides a record of actions taken by a user, role, or another AWS service in AWS container services. Using the information collected by CloudTrail, you can determine the request made to Amazon ECS, the IP address from which the request was made, who made the request, when it was made, and additional details.

AWS Secrets Manager and AWS Systems Manager Parameter Store are two services that can be used to secure sensitive data used within container applications. Systems Manager Parameter Store provides secure, hierarchical storage of data with no servers to manage. Secrets Manager provides additional capabilities that include random password generation and automatic password rotation. Data stored within Systems Manager Parameter Store can be encrypted using AWS KMS, and Secrets Manager uses AWS KMS to encrypt the protected text of a secret as well. AWS container services can integrate with either Systems Manager Parameter Store or Secrets Manager to process sensitive data securely.

Kubernetes secrets enable you to store and manage sensitive information, such as passwords, Docker registry credentials, and TLS keys, using the Kubernetes API. Kubernetes secrets are by default stored as unencrypted base64-encoded strings, and can be retrieved in plain text by anyone with API access, or anyone with access to Kubernetes' underlying data store. You can apply the native encryption-at-rest configuration provided by Kubernetes to encrypt the secrets at rest; however, this involves storing the raw encryption key in the encryption configuration, which is not the most secure way of storing encryption keys. Amazon EKS stores all secret object data within etcd, encrypted at the disk level using AWS-managed encryption keys. You can further encrypt Kubernetes secrets using a unique data encryption key (DEK). You are responsible for applying the necessary RBAC-based controls to ensure that only the right roles in your Kubernetes cluster have access to the secrets, and that the IAM permissions for the AWS KMS key are restricted to authorized principals.

CI/CD

Containers have become a feature component of continuous integration (CI) and continuous deployment (CD) workflows. Because containers can be built programmatically using Dockerfiles, containers can be automatically rebuilt anytime a new code revision is committed. Immutable deployments are natural with Docker: each deployment is a new set of containers, and it's easy to roll back by deploying containers that reference previous images. AWS container services provide APIs that make deployments easy by providing the complete state of the cluster and the ability to deploy containers using one of the built-in schedulers or a custom scheduler.

AWS Code Services in AWS Developer Tools provide a convenient AWS-native stack to perform CI/CD for your container applications. It provides tooling to pull the source code from the source code
repository, build the container image, push the container image to the container registry, and deploy the image as a running container in one of the container services. AWS CodeBuild uses Docker images to provision its build environments, which makes it flexible enough to adapt to the needs of the application you are building. A build environment represents a combination of operating system, programming language runtime, and tools that CodeBuild uses to run a build. Non-AWS tooling for CI/CD, like GitHub, Jenkins, Docker Hub, and many others, can also integrate with the AWS container services using the APIs.

Infrastructure as Code

You should define your cloud resources as code so that you can spend less time creating and managing the infrastructure. As with other AWS services, AWS CloudFormation provides you a way to model and set up your container resources using formatted text files in JSON or YAML describing the resources that you want to provision. If you're unfamiliar with JSON or YAML, AWS also provides other options to script your container environments.

AWS Copilot CLI is a tool for developers to build, release, and operate production-ready containerized applications on Amazon ECS and AWS Fargate. Copilot takes best practices, from infrastructure to continuous delivery, and makes them available to customers from the comfort of their command line. You can also monitor the health of your service by viewing your service's status or logs, scale production services up or down, and spin up a new environment for automated testing.

For EKS, eksctl is a simple CLI tool for creating and managing clusters on EKS. It uses CloudFormation under the covers, but allows you to specify your cluster configuration information using a config file, with sensible defaults for configuration that is not specified. If you prefer to use a familiar programming language to define cloud resources, you can use the AWS Cloud Development Kit (CDK). CDK is a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation. Today, CDK supports TypeScript, JavaScript, Python, Java, C#/.NET, and (in developer preview) Go. Alternately, if your organization already uses Terraform or similar tools that have modules for AWS container services, you can use them to define your infrastructure as code too.

Scaling

Amazon ECS is a fully managed container orchestration service with no control plane for you to manage or scale at all. Amazon ECS provides options to auto scale container instances and ECS services. Amazon ECS cluster auto scaling (CAS) enables you to have more control over how you scale the Amazon EC2 instances within a cluster. The core responsibility of CAS is to ensure that the right number of instances are running in an Auto Scaling group to meet the needs of the tasks, including tasks already running as well as tasks the customer is trying to run that don't fit on the existing instances. Amazon ECS Service Auto Scaling is the ability to automatically increase or decrease the desired count of tasks in your Amazon ECS service, for both EC2- and Fargate-based clusters. You can use services' CPU and memory utilization or other CloudWatch metrics. Amazon ECS Service Auto Scaling supports target tracking, step scaling, and scheduled scaling policies. For more information, see Service auto scaling.

Amazon EKS automatically manages the availability and scalability of the Kubernetes control plane nodes. Amazon EKS supports the following Kubernetes auto scaling options:

• Cluster Autoscaler automatically adjusts the number of worker nodes in your cluster when pods fail or are rescheduled onto other nodes. Amazon EKS node groups are provisioned as part of an Amazon EC2 Auto Scaling group, which is compatible with the Cluster Autoscaler.

• Horizontal Pod Autoscaler automatically scales the number of pods in a deployment, replication controller, or stateful set based on CPU utilization or custom metrics. This can help your applications scale out to meet increased demand, or scale in when resources are not needed, thus freeing up your nodes for other applications, similar to Amazon ECS Service Auto Scaling.

• Vertical Pod Autoscaler frees users from the necessity of setting up-to-date resource limits and requests for the containers in their pods. By default, it provides calculated recommendations without automatically changing resource requirements of the pods, but when auto mode is configured, it will set the requests automatically based on usage, allowing proper scheduling onto nodes so that the appropriate resource amount is available for each pod. It will also maintain the ratios between limits and requests that were specified in the initial container configuration. For more information on large clusters, see considerations for large clusters.

Conclusion

Using Docker containers in conjunction with AWS can accelerate your software development by creating synergy between your development and operations teams. The efficient and rapid provisioning, the promise of build once, run anywhere, the separation of duties via a common standard, and the flexibility of portability that containers provide offer advantages to organizations of all sizes. By providing a range of services that support containers, along with an ecosystem of complementary services, AWS makes it easy to get started with containers while providing the necessary tools to run containers at scale.

Contributors

Contributors to this document include:

• Chance Lee, Solutions Architect, Amazon Web Services
• Sushanth Mangalore, Solutions Architect, Amazon Web Services

Further reading

For additional information, see:

• Container Migration Methodology
• Best Practices for writing Dockerfiles
• Deploying AWS Elastic Beanstalk Applications from Docker Containers
• Introducing AWS App Runner
• Twelve-Factor Apps using Amazon ECS and AWS Fargate
• Blue/Green deployment with CodeDeploy
• IAM
roles for Kubernetes service accounts
• Amazon EKS Networking
• Amazon ECS using AWS Copilot
• Amazon EKS Best Practices Guides
• Amazon ECS Workshop
• Amazon EKS Workshop

Document revisions

July 26, 2021: Whitepaper updated for technical accuracy
April 2015: First publication

DoD-Compliant Implementations in AWS

First Published April 2015
Updated November 3, 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Overview
Getting started
Shared responsibilities and governance
Shared responsibility model
Compliance and governance
AWS global infrastructure
Architecture
Traditional DoD data center
DoD compliant cloud environment
AWS services
Compute
Networking
Storage
Management
Services in scope
Reference architecture
Impact level 2
Impact level 4
Impact level 5
Conclusion
Contributors
Further reading
Document revisions

Abstract

This whitepaper is intended for Department of Defense (DoD) mission owners who are designing the security infrastructure and configuration for applications running in Amazon Web Services (AWS). It provides security best practices and architectural recommendations that can
help you properly design and deploy DoD-compliant infrastructure to host your mission applications and protect your data and assets in the AWS Cloud.

The paper is designed for Information Technology (IT) decision makers and security personnel, and assumes that mission owners are familiar with basic security concepts in the areas of networking, operating systems, data encryption, and operational controls. AWS provides a secure hosting environment for mission owners in which to deploy their applications. Mission owners retain the responsibility to securely deploy, manage, and monitor their systems and applications in accordance with DoD security and compliance policies. When operating an application or system on AWS, the mission owner is responsible for network configuration and security of their AWS environment, including Amazon Elastic Compute Cloud (Amazon EC2) guest operating systems and management of user access.

Overview

In January 2015, the Defense Information Systems Agency (DISA) released the DoD Cloud Computing (CC) Security Requirements Guide (SRG), which provided guidance for cloud service providers and for DoD mission owners in support of running workloads in cloud environments. The DoD CC SRG is the primary guidance for cloud computing in the DoD community. This whitepaper provides high-level guidance for DoD mission owners and partners in designing and deploying solutions in the AWS Cloud that are able to be accredited at Impact Level (IL) 2, IL 4, and IL 5. Although there are many design permutations that can meet CC SRG requirements on AWS, this document presents sample reference architectures to consider that will address many of the common use cases for IL2, IL4, and IL5.

Getting started

When considering an application deployment or migration to the AWS Cloud, DoD mission owners must first make sure that their IT plans align with their organization's business model. A solid understanding of the mission and core competencies of your organization will help you identify opportunities for modernization and innovation by migrating to the AWS Cloud. You must think through key technology questions, including:

• How can the AWS Cloud advance your mission objectives?
• Do you have legacy applications and systems that need greater scalability, reliability, or security than you can afford to maintain in your own environment?
• What are your compute, storage, and network capacity requirements?
• How will you be prepared to scale up (and down) to support the mission?

As you answer each question, apply the lenses of flexibility, cost effectiveness, scalability, elasticity, and security. Taking advantage of AWS services allows you to focus on your core competencies and leverage the resources and experience that AWS provides.

Shared responsibilities and governance

As mission owners build systems on top of AWS Cloud infrastructure, the responsibility for implementing operational, maintenance, and security measures is shared: mission owners provide operational, maintenance, and security support for their software-defined cloud components, and AWS provides operational, maintenance, and security support for its infrastructure. Mission owners can also inherit or use security controls provided by AWS.

Shared responsibility model

Security and compliance are shared responsibilities between AWS and mission owners. This shared model can help relieve your operational burden, because AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The mission owner assumes responsibility for and management of the guest operating system (including updates and security patches) and other associated application software, as well as the configuration of the AWS-provided security group firewall. Mission owners should carefully consider the services they
choose, as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations.¹

Security responsibilities in the cloud and of the cloud

Mission owners can enhance security and meet more stringent compliance requirements by leveraging AWS services such as Amazon GuardDuty, AWS Key Management Service (AWS KMS), and encrypted Amazon Simple Storage Service (Amazon S3) buckets, as well as network firewalls and centralized log aggregation. The nature of this shared responsibility also provides the flexibility and mission owner control that permit the deployment of solutions that meet industry-specific certification requirements.

This shared responsibility model between the mission owner and AWS also extends to compliance controls. Just as the responsibility to operate the IT environment is shared between AWS and its mission owners, so is the management, operation, maintenance, and verification of shared compliance controls. AWS manages security controls associated with the AWS physical infrastructure. Mission owners can then use the AWS control and compliance documentation available to them in AWS Artifact to perform their control evaluation and verification procedures. AWS offers services and features that can ease management of the customer's portion of the shared responsibility model. Refer to AWS Cloud Security.

Mission owner responsibilities

Service instance management

Mission owners are responsible for managing their instantiations of Amazon S3 bucket storage and objects, Amazon Relational Database Service (Amazon RDS) database instances, EC2 compute instances and their associated storage, and Virtual Private Cloud (VPC) network environments. This includes mission owner-installed operating systems, databases, and applications running on EC2 instances that are within their authorization boundary. Mission owners are also responsible for managing specific controls relating to shared interfaces and services within their security authorization boundary, such as customized security control solutions. Examples include, but are not limited to, configuration and patch management, vulnerability scanning, disaster recovery, protecting data in transit and at rest, host firewall management, credential management, identity and access management, and VPC network configurations.

Mission owners provision and configure their AWS compute, storage, and network resources using API calls to AWS API endpoints or by using the AWS Management Console. Using these methods, the mission owner is able to launch and shut down EC2 and RDS instances, change firewall parameters, and perform other management functions.

Application management

Applications that run on AWS services are the responsibility of each mission owner to configure and maintain. Mission owners should address the controls relevant to each application in the applicable System Security Plan (SSP).

Operating system maintenance

AWS provides Amazon Machine Images (AMIs) for standard OS releases, including Amazon Linux 2, Microsoft Windows Server, Red Hat Enterprise Linux, SUSE Linux, and Ubuntu Linux, with no additional configuration applied to the image. An AMI provides the information required to launch an EC2 instance, which is a virtual server in the cloud. The mission owner specifies the AMI used to launch an instance, and the mission owner can launch as many instances from the AMI as needed. An AMI includes the following:

• A template for the root volume for the instance. The root volume of an instance is either an Amazon Elastic Block Store (Amazon EBS) volume or an instance store volume.
• Launch permissions that control which AWS accounts can use the AMI to launch instances.
• A block device mapping that specifies the volumes to attach to the instance when it's launched.

The OS that is installed on an AMI provided by
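Provisioning through API calls, as described under service instance management, can be sketched as follows. This is a minimal sketch: the function only assembles request parameters in the shape of the EC2 RunInstances API, and the AMI, subnet, and security group IDs are placeholders. With the boto3 SDK installed and credentials configured, the result could be passed to an EC2 client's run_instances call.

```python
def build_run_instances_params(ami_id: str, subnet_id: str,
                               security_group_ids: list,
                               instance_type: str = "t3.micro",
                               count: int = 1) -> dict:
    """Assemble parameters in the shape of the EC2 RunInstances API.

    With boto3, the result could be passed as
    ec2_client.run_instances(**params). IDs used below are placeholders.
    """
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
        "SubnetId": subnet_id,
        "SecurityGroupIds": security_group_ids,
        # Encrypt the root EBS volume, consistent with the data-at-rest
        # guidance discussed later in this paper.
        "BlockDeviceMappings": [
            {"DeviceName": "/dev/xvda",
             "Ebs": {"Encrypted": True, "VolumeType": "gp3"}}
        ],
    }

# Placeholder IDs for illustration only.
params = build_run_instances_params("ami-0123456789abcdef0",
                                    "subnet-11111111", ["sg-22222222"])
```

Capturing launch parameters in a function like this keeps instance configuration reviewable and repeatable, which supports the configuration management responsibilities listed above.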
AWS is patched to a point in time. In general, AMIs include a minimal install of a guest operating system. AWS does not perform any systems administration operations or maintenance duties, such as patching. DoD mission owners are responsible for properly hardening, patching, and maintaining their AMIs in accordance with DoD Security Technical Implementation Guides (STIGs) and the Information Assurance Vulnerability Management process.

To aid mission owners in compliance and configuration management, consider implementing AWS Systems Manager. AWS Systems Manager can scan your instances against your patch configuration and custom policies. You can define patch baselines, maintain up-to-date antivirus definitions, and enforce firewall policies. You can also remotely manage your servers at scale without manually logging in to each server. Systems Manager also provides a centralized store to manage your configuration data, whether in plaintext, such as database strings, or secrets, such as passwords. This allows you to separate your secrets and configuration data from code.

Amazon EC2 provides an AWS Systems Manager (SSM) document, AWSEC2-ConfigureSTIG, to apply Security Technical Implementation Guide (STIG) controls to an instance and help you quickly build compliant images following STIG standards. The STIG SSM document scans for misconfigurations and runs a remediation script. On Windows AMIs, the STIG SSM document installs InstallRoot, a utility produced by the Department of Defense (DoD) designed to install and update DoD certificates and remove unnecessary certificates to maintain STIG compliance. There are no additional charges for using the STIG SSM document. For more information, refer to AWSEC2-ConfigureSTIG.

In 2019, AWS released new AMIs for Microsoft Windows Server to help you meet STIG compliance standards. Amazon EC2 Windows Server AMIs for STIG Compliance are preconfigured with more than 160 required security
settings. STIG-compliant operating systems include Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019. The STIG-compliant AMIs include updated DoD certificates to help you get started and achieve STIG compliance. For instructions on how to deploy these AMIs, consult the Amazon EC2 documentation or search the AWS Marketplace.

AWS does not guarantee a specific patch level or control configuration settings. Mission owner responsibility includes updating any EC2 instance to a recent patch level and configuring the instance to suit specific mission needs. Upon deployment of EC2 instances, the mission owner can assume full administrator access and is responsible for performing additional configuration, patching, security hardening, vulnerability scanning, and application installation. AWS does not maintain administrator access to mission owner EC2 instances.

Mission owners can customize an instance launched from a public AMI and then save that configuration as a custom AMI for the mission owner's own use. After mission owners create and register an AMI, they can use it to launch new instances. This concept is analogous to creating virtual machine templates in a traditional data center environment. Instances launched from this customized AMI contain all of the customizations that the mission owner has made. The mission owner can deregister the AMI when finished. After the AMI is deregistered, mission owners cannot use it to launch new instances.

Creating custom Amazon Machine Images (AMIs)

Workload migration

Mission owners also have several options to assist in bulk virtual machine migration to AWS, commonly referred to as lift-and-shift. One such option is the AWS Server Migration Service (SMS). AWS SMS is an agentless service that makes it easier and faster for mission owners to migrate thousands of on-premises workloads to AWS. AWS SMS lets mission owners automate, schedule, and track incremental replications of live server
volumes, making it easier to coordinate large-scale server migrations. Although agentless, SMS does require privileged access to the source servers' hypervisor.

A second option is AWS Application Migration Service, formerly known as CloudEndure Migration, which is an agent-based approach. AWS Application Migration Service simplifies, expedites, and reduces the cost of cloud migration by offering a highly automated lift-and-shift solution. With AWS Application Migration Service, you can maintain normal business operations throughout the replication process. It replicates source servers nearly continuously, which means little to no performance impact. When you're ready to launch the production machines, your machines are automatically converted from their source infrastructure into the AWS infrastructure so they can boot and run natively in AWS.

Security group configuration

Mission owners are responsible for properly configuring their security groups in accordance with their organization's networking policies. A security group acts as a virtual firewall for an instance to control inbound and outbound traffic. As part of ongoing operations and maintenance, mission owners must regularly review their security group configuration and instance assignment to maintain a secure baseline. Security groups are not a solution that can be deployed using a one-size-fits-all approach. They should be carefully tailored to the intended functionality of each class of instance deployed within the mission owner's AWS environment.

VPC configuration

Amazon Virtual Private Cloud (VPC) provides enhanced capabilities that AWS mission owners can use to secure their AWS environment through the deployment of traditional networking constructs such as demilitarized zones (DMZs), Virtual Local Area Networks (VLANs), and subnets that are segregated by functionality. Network Access Control Lists (NACLs) provide stateless filtering that can be used similarly
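The stateless NACL filtering discussed in this section can be illustrated with a toy rule evaluator: rules are checked in ascending rule-number order, the first match decides the outcome, and unmatched traffic falls through to an implicit deny. The rule contents here are illustrative, and real network ACLs also match on protocol, port ranges, and direction, which this sketch omits.

```python
import ipaddress

# Toy stateless rule set. Rules are evaluated lowest rule number first,
# and the first matching rule decides the outcome, mirroring how VPC
# network ACLs behave. Rule contents are illustrative only.
RULES = [
    {"number": 100, "cidr": "10.0.0.0/16", "port": 443, "action": "allow"},
    {"number": 200, "cidr": "0.0.0.0/0",  "port": 22,  "action": "deny"},
]

def evaluate(src_ip: str, port: int, rules=RULES) -> str:
    """Return 'allow' or 'deny' for a packet, first match wins."""
    for rule in sorted(rules, key=lambda r: r["number"]):
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["cidr"])
                and port == rule["port"]):
            return rule["action"]
    return "deny"  # the implicit final '*' rule denies unmatched traffic
```

Because evaluation is stateless, return traffic is not automatically permitted, which is why NACLs complement, rather than replace, the stateful security groups described above.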
to a firewall to defend against malicious traffic at the subnet level. This adds another layer of network security in addition to the mission owner's security group implementation.

Inbound traffic into the VPC

Backups

Mission owners are responsible for establishing a backup strategy, using AWS services or third-party tools, that meets the retention goals identified for their application. Through the use of Amazon EBS snapshots, mission owners can ensure their data is backed up to Amazon S3 on a regular basis. Mission owners are responsible for setting and maintaining proper access permissions to their Amazon EBS volumes and Amazon S3 buckets and objects. Amazon S3 objects and Amazon EBS snapshots can also be configured with lifecycle policies to meet retention requirements, and can be aged off to Amazon S3 Glacier for lower-cost, long-term deep storage.

Host-based security tools

Mission owners should install and manage anti-malware and host-based intrusion detection systems in accordance with their organization's security policies. Host-based security tools can be included within the mission owner's AMI, installed via bootstrapping services when the instance is launched, or deployed using configuration management and automation tools like AWS Systems Manager.

Vulnerability scanning and penetration testing

Mission owners are responsible for conducting regular vulnerability scanning and penetration testing of their systems in accordance with their organization's security policies. All vulnerability and penetration testing must be properly coordinated with AWS Security in accordance with AWS policy. For more information, refer to the AWS Penetration Testing page. Vulnerability scanning of EC2 instances can be accomplished using third-party tools or via Amazon Inspector. Amazon Inspector is an automated security assessment service that assesses applications for exposure, vulnerabilities, and deviations from best practices. After performing
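The lifecycle-based aging of backups to Amazon S3 Glacier described in this section can be sketched as follows. The dictionary structure mirrors the shape of the S3 PutBucketLifecycleConfiguration API; the prefix and day counts are illustrative assumptions, not recommended retention values.

```python
def glacier_lifecycle(prefix: str, to_glacier_days: int,
                      expire_days: int) -> dict:
    """Build an S3 lifecycle configuration that transitions backups to
    S3 Glacier and later expires them. The structure mirrors the S3
    PutBucketLifecycleConfiguration API; day counts are illustrative."""
    if expire_days <= to_glacier_days:
        raise ValueError("expiration must come after the Glacier transition")
    return {
        "Rules": [{
            "ID": f"archive-{prefix}",
            "Filter": {"Prefix": prefix},
            "Status": "Enabled",
            "Transitions": [{"Days": to_glacier_days,
                             "StorageClass": "GLACIER"}],
            "Expiration": {"Days": expire_days},
        }]
    }
```

For example, glacier_lifecycle("backups/", 30, 365) ages objects under a hypothetical backups/ prefix to Glacier after 30 days and expires them after a year, matching a simple retention schedule.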
an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of detailed assessment reports, which are available via the Amazon Inspector console or API.

High availability and disaster recovery

Mission owners also have the responsibility to architect their applications and systems so they are highly available and routinely backed up. Applications and systems should use multiple Availability Zones within an AWS Region for fault tolerance. Distributing applications across multiple Availability Zones provides the ability to remain resilient in the face of most failure modes, including natural disasters or system failures. Mission owners also have the option of automating recovery in case of failures of systems or processes. With APIs and automation in place, mission owners can launch and test the disaster recovery (DR) solution on a recurring, periodic basis to ensure proper functionality of the solution and be prepared ahead of time. Mission owners can reduce recovery times by quickly provisioning preconfigured resources (such as AMIs) when they are needed, or cut over to an already provisioned DR site (and then scale gradually as needed). Security best practices can be enumerated within an AWS CloudFormation template, which can provision resources within a VPC.²

AWS Identity and Access Management

Mission owners are responsible for properly managing their AWS accounts, including AWS account credentials as well as any IAM users, groups, or roles that they have associated with their accounts. This includes configuring multi-factor authentication (MFA), password complexity, and password retention requirements as applicable by accreditation policy. Through the use of the AWS Identity and Access Management (IAM) service, mission owners can implement role-based access control that properly separates users by their identified roles and
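Role-based, least-privilege access of the kind described in this section is expressed in IAM as a JSON policy document. The sketch below builds a read-only policy for a single S3 bucket; the bucket name passed in is hypothetical, and real policies would be tailored to each role's assigned tasks.

```python
import json

def read_only_s3_policy(bucket: str) -> str:
    """Return a least-privilege IAM policy document (as JSON) granting
    read-only access to a single S3 bucket. Bucket name is hypothetical."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            # Object reads are scoped to objects inside the bucket.
            {"Effect": "Allow",
             "Action": ["s3:GetObject"],
             "Resource": f"arn:aws:s3:::{bucket}/*"},
            # Listing applies to the bucket resource itself.
            {"Effect": "Allow",
             "Action": ["s3:ListBucket"],
             "Resource": f"arn:aws:s3:::{bucket}"},
        ],
    }
    return json.dumps(policy, indent=2)
```

Granting only the two actions a read-only role actually needs, against only the resources it needs, is the mechanical form of the least-privilege principle described above.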
responsibilities, thereby establishing least privilege and helping to ensure that users have only the permissions necessary to perform their assigned tasks.

To manage multiple AWS accounts, mission owners should leverage AWS Organizations. AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. AWS Organizations includes account management and consolidated billing capabilities that enable you to better meet your mission's budgetary, security, and compliance needs.

Identity federation

AWS offers multiple options for federating your identities in AWS. You can use IAM to enable users to sign in to their AWS accounts with their existing corporate credentials. AWS supports identity federation with on-premises authentication stores such as Lightweight Directory Access Protocol (LDAP) and Active Directory. With federation, you can use single sign-on (SSO) to access your AWS accounts using credentials from your organization's directory. Federation uses open standards, such as Security Assertion Markup Language 2.0 (SAML), to exchange identity and security information between an identity provider (IdP) and an application.

Multi-factor and CAC authentication

At a minimum, AWS mission owners should implement multi-factor authentication (MFA) for their AWS account credentials as well as any privileged IAM accounts associated with AWS accounts. MFA can be used to add an additional layer of security in Amazon S3 through activation of the MFA delete feature.

The DoD has standardized on MFA through the use of the Common Access Card (CAC) or US Government Personal Identity Verification (PIV) token. You can require your AWS users to authenticate to the AWS Management Console with a smart card by implementing SAML identity federation. You can also implement a RADIUS server to handle authentication requests to an AWS Managed Microsoft
Active Directory instance. Mission owner applications that are migrated to AWS and currently require CAC authentication at the application layer operate exactly the same as they do within an on-premises data center environment.

Privileged remote access

Mission owners should implement privileged remote access for application and systems administrators to manage their AWS environments. There are several options for privileged remote access:

• Amazon WorkSpaces
• Amazon AppStream 2.0
• AWS Systems Manager Session Manager
• EC2-based bastion hosts

Amazon WorkSpaces is a managed, secure Desktop-as-a-Service (DaaS) solution that helps you decrease the complexity of managing hardware inventory, OS versions and patches, and Virtual Desktop Infrastructure (VDI). Amazon WorkSpaces can be configured to restrict access to specific resources within designated VPC subnets and to specific AWS services controlled by the user's IAM policy or role. Users are able to access their WorkSpaces through an installable desktop client or using the Remote Desktop client. Both of these deployment options can be configured for CAC/PIV authentication. Amazon WorkSpaces is accredited at IL2, IL4, and IL5.

Amazon AppStream 2.0 is a fully managed application streaming service. You centrally manage your desktop applications on AppStream 2.0 and securely deliver them to any computer. For example, you are able to stream database management tools such as SQL Server Management Studio, web browsers such as Firefox and Chrome (restricted to certain URLs if desired), as well as common office software. Applications and data are not stored on users' computers. Your applications are streamed as encrypted pixels, and they access data secured within your network. Users are able to authenticate with their CAC/PIV tokens through the use of identity federation. The Amazon AppStream 2.0 service is accredited at IL2, IL4, and IL5.

AWS Systems Manager Session Manager is a fully managed AWS
Systems Manager capability that lets you manage your EC2 instances, on-premises instances, and virtual machines (VMs) through an interactive, one-click, browser-based shell or through the AWS CLI. Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. Session Manager also allows you to comply with security policies that require controlled access to instances, strict security practices, and fully auditable logs with instance access details, while still providing end users with simple, one-click access to your managed instances across multiple operating systems. Users are able to authenticate with their CAC/PIV tokens through the use of AWS Management Console identity federation. Session Manager is a feature of AWS Systems Manager, which is accredited at IL2, IL4, IL5, and IL6.

EC2-based bastion hosts are hardened instances used for administrative tasks within the AWS environment. Rather than allowing shell access to all EC2 instances from the public internet, access can be restricted to a single EC2 instance, thereby limiting the attack surface from possible compromise. Access to the bastion host should be through whitelisted IP addresses within the mission owner's organization, require valid SSH keys, and require multi-factor authentication. Auditing capabilities within the OS of the bastion host should be configured to record all administrative activity. These bastion hosts must be patched, hardened, and scanned in the same way as all other EC2 instances deployed within the mission environment.

Auditing

Mission owners are responsible for properly configuring their AWS services to ensure that required audit logs are generated. Audit logs should be forwarded to a dedicated log server instance or tool located within the mission owner's VPC management subnet, or written to a secured and encrypted Amazon S3 bucket, ensuring that sensitive data is properly protected. The mission owner should enable the
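Audit records of the kind discussed in this section are JSON documents, and triage often starts by reducing each record to a one-line summary for review. The sample record below is hand-written for illustration (its field names follow the CloudTrail record schema), not real log output.

```python
import json

# Illustrative, hand-written CloudTrail-style event record (not real
# log output). Field names follow the CloudTrail record schema.
SAMPLE_EVENT = json.dumps({
    "eventTime": "2021-06-01T12:00:00Z",
    "eventSource": "ec2.amazonaws.com",
    "eventName": "AuthorizeSecurityGroupIngress",
    "userIdentity": {"type": "IAMUser", "userName": "ops-admin"},
    "sourceIPAddress": "203.0.113.10",
})

def summarize(event_json: str) -> str:
    """Reduce an audit record to a one-line summary: who did what, when,
    and from where."""
    e = json.loads(event_json)
    return (f"{e['eventTime']} {e['userIdentity'].get('userName', 'unknown')} "
            f"{e['eventName']} from {e['sourceIPAddress']}")
```

A summary line like this, generated for every security-group change, gives reviewers a compact trail of administrative activity to compare against approved change records.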
use of AWS CloudTrail, a managed service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, nearly continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. In addition, you can use CloudTrail to detect unusual activity in your AWS accounts.

Data protection and spillage

Following the Shared Responsibility Model, AWS customers are responsible for encryption and access control of their data within their AWS environments. According to published DISA guidance, all data at rest must also be encrypted. You can use AWS Key Management Service (AWS KMS) to help ensure that your data is encrypted at rest. For more information, refer to AWS KMS Keys.

To provide protection against data spills, all mission owner data stored on Amazon EBS volumes and Amazon S3 must be encrypted using AES-256 encryption in accordance with DoD guidance. The mission owner is responsible for implementing FIPS 140-2 validated encryption for data at rest, with customer-managed encryption keys, in accordance with DoD policy. The combination of the mission owner's encryption and the automated wipe functionality that AWS provides can ensure that any spilled data is illegible ciphertext, greatly limiting the risk of accidental disclosure. AWS degausses and destroys all decommissioned media in accordance with National Institute of Standards and Technology (NIST) and National Security Agency (NSA) standards.

Intrusion detection

Mission owners are responsible for properly implementing host-based intrusion detection systems on their instances, as well as any required network-based intrusion
detection. To assist mission owners in this endeavor, AWS provides native services like Amazon GuardDuty. Amazon GuardDuty is a threat detection service that nearly continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads. With the cloud, the collection and aggregation of account and network activities is simplified, but it can be time-consuming for security teams to nearly continuously analyze event log data for potential threats. With GuardDuty, you now have an intelligent and cost-effective option for nearly continuous threat detection in the AWS Cloud. GuardDuty is accredited at IL2, IL4, and IL5.

Mission owners are responsible for coordinating deployment of their intrusion detection capabilities with their Cyber Security Service Provider (CSSP). Mission owners can implement the Secure Cloud Computing Architecture (SCCA) within AWS to help them meet their compliance and security requirements. More information about the SCCA architecture can be found in the IL2, IL4, and IL5 sample reference architecture section of this document.

Compliance and governance

Mission owners are required to maintain strong governance over the entire IT environment, regardless of whether it is deployed in a traditional data center or in the AWS Cloud. Best practices include:

• Understanding your workload's required compliance objectives
• Establishing a control environment that meets those objectives and requirements
• Understanding the requirements for validation based on the organization's risk tolerance
• Verifying the operating effectiveness of the control environment

Deployment of workloads in the AWS Cloud gives you options to apply various types of controls and utilize multiple verification methods. To help meet DoD compliance and governance requirements, a mission owner must perform the following basic steps:

1. Review information from AWS and other sources to
understand how their cloud environment is architected and configured.
2. Document all relevant DoD compliance requirements that may be in scope for their workloads in the cloud.
3. Design and implement control objectives to meet the organization's security and compliance requirements.
4. Identify and document controls owned by outside or third parties.
5. Verify that all control objectives are met and all key controls are designed and operating effectively.

Approaching compliance and governance in this manner will help mission owners gain a better understanding of their environment and will help clearly delineate any verification activities that need to be performed.

FedRAMP

The Federal Risk and Authorization Management Program (FedRAMP) is a US government-wide program that provides a standardized approach to security assessment, authorization, and nearly continuous monitoring for cloud products and services.³

The DoD SRG uses the FedRAMP program to establish a standardized approach for DoD entities that are utilizing commercial cloud services. AWS has been assessed and approved under FedRAMP and has been issued two Agency Authority to Operate (ATO) authorizations covering all 48 contiguous states and the District of Columbia (CONUS) Regions, which include AWS GovCloud (US), US East, and US West. For more information on FedRAMP compliance of the AWS Cloud, visit our FedRAMP FAQ page. All cloud service providers must demonstrate compliance with FedRAMP standards before they can be considered by DoD for a provisional authorization under the CC SRG.

Cloud Computing Security Requirements Guide

The DoD CC SRG provides a formalized assessment and authorization process for cloud service providers to obtain a DoD Provisional Authorization (PA), which can then be leveraged by mission owners. These provisional authorizations provide reusable certifications that attest to the compliance of specific AWS Regions and services in
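For planning scripts, the CC SRG impact level categorization can be captured as a small lookup. The descriptions below paraphrase the SRG requirements summarized in this paper and are illustrative; consult the current CC SRG for the authoritative mapping.

```python
# Simplified lookup of the CC SRG impact levels discussed in this paper.
# Descriptions paraphrase the SRG; consult the current CC SRG for the
# authoritative requirements.
IMPACT_LEVELS = {
    2: {"data": "public or non-critical mission information",
        "baseline": "FedRAMP Moderate",
        "connectivity": "Internet"},
    4: {"data": "controlled unclassified information (CUI)",
        "baseline": "Level 2 + CUI-specific tailored set",
        "connectivity": "NIPRNet via CAP"},
    5: {"data": "higher-sensitivity CUI / National Security Systems",
        "baseline": "Level 4 + NSS and CUI-specific tailored set",
        "connectivity": "NIPRNet via CAP"},
    6: {"data": "classified SECRET",
        "baseline": "Level 5 + Classified Overlay",
        "connectivity": "SIPRNet direct"},
}

def required_baseline(level: int) -> str:
    """Return the security-control baseline for a CC SRG impact level."""
    if level not in IMPACT_LEVELS:
        raise ValueError("CC SRG defines Impact Levels 2, 4, 5, and 6")
    return IMPACT_LEVELS[level]["baseline"]
```

Because the levels jump from 2 to 4, the lookup rejects undefined levels rather than interpolating, mirroring the fact that categorization must be made explicitly and in advance.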
alignment with DoD standards, reducing the time necessary for a DoD mission owner to assess and authorize their workloads for migration to AWS. The CC SRG supports the overall goal of the US federal government to increase the utilization of commercial cloud computing, and it provides a means for the DoD to support this goal.

The CC SRG requires the categorization of mission systems and their workloads at one of four Impact Levels. Each level represents a determination of the data sensitivity of a particular system and the controls required to protect it, starting at Level 2 (lowest) through Level 6 (highest). The following table summarizes the impact levels, with a description of a typical workload, connectivity restrictions, Boundary Cloud Access Point (BCAP) requirements, and Computer and Network Defense (CND) requirements.

Table 1 – Security requirements

Impact Level 2
• Information sensitivity: PUBLIC or non-critical mission information
• Security controls: FedRAMP Moderate
• Location: US / US outlying areas or DoD on-premises
• Off-premises connectivity: Internet
• Separation: Virtual / logical; public community
• Personnel requirements: National Agency Check and Inquiries (NACI)

Impact Level 4
• Information sensitivity: CUI or non-CUI; non-critical mission information; non-National Security Systems
• Security controls: Level 2 + CUI-specific tailored set
• Location: US / US outlying areas or DoD on-premises
• Off-premises connectivity: NIPRNet via CAP
• Separation: Virtual / logical; limited "public" community; strong virtual separation between tenant systems and information
• Personnel requirements: US persons; ADP-1: Single Scope Background Investigation (SSBI); ADP-2: National Agency Check with Law and Credit (NACLC); Non-disclosure Agreement (NDA)

Impact Level 5
• Information sensitivity: Higher-sensitivity CUI; mission-critical information; National Security Systems
• Security controls: Level 4 + NSS and CUI-specific tailored set
• Location: US / US outlying areas or DoD on-premises
• Off-premises connectivity: NIPRNet via CAP
• Separation: Virtual / logical; federal government community; dedicated multi-tenant infrastructure physically separate from non-federal systems; strong virtual separation between tenant systems and information

Impact Level 6
• Information sensitivity: Classified SECRET; National Security Systems
• Security controls: Level 5 + Classified Overlay
• Location: US / US outlying areas or DoD on-premises; cleared/classified facilities
• Off-premises connectivity: SIPRNet direct, with DoD SIPRNet Enclave Connection Approval
• Separation: Virtual / logical; federal government community; dedicated multi-tenant infrastructure physically separate from non-federal and unclassified systems
• Personnel requirements: US citizens with favorably adjudicated SSBI and SECRET clearance; NDA

AWS holds a provisional authorization for Impact Level 2 workloads within US East and US West, which permits mission owners to deploy public, unclassified information in these AWS Regions with both the AWS authorization and the mission application's ATO. AWS GovCloud holds a provisional authorization for Impact Levels 2, 4, and 5, and permits mission owners to deploy the full range of controlled unclassified information categories covered by these levels. The AWS Secret Region holds a provisional authorization for Impact Level 6 and permits workloads up to and including Secret classification.

To begin planning for the deployment of a DoD mission system in AWS, it is critical that the CC SRG impact level categorization be made in advance. Systems designated at Impact Level 2 can begin deployments relatively quickly. Conversely, a designation at Impact Level 4 or 5 requires that the mission application on AWS be connected to the Nonsecure Internet Protocol Router Network (NIPRNet) by means of AWS Direct Connect, Internet Protocol Security (IPsec) virtual private network (VPN), or both. This NIPRNet connection also requires that all inbound and outbound traffic to and from the mission owner's VPC be routed through a BCAP, or equivalent DoD CIO-approved boundary, and its associated CND suite. The
provisioning of circuits for an AWS Direct Connect to NIPRNet connection typically has a substantial lead time, so mission owners should plan accordingly. Mission owners can also take advantage of existing Cloud Access Points that have been set up by various DoD agencies or CSSPs, including DISA. For more information regarding the DoD CC SRG, refer to the DISA cyber.mil website for the latest cloud security announcements and requirements, or the latest CC SRG v1.3 document. For more information on the DISA Cloud Access Point, refer to the DISA Cloud Connection Process Guide.

FedRAMP + CC SRG compliance = the path to AWS

For DoD application owners to obtain an Authority to Operate (ATO) for their cloud-deployed applications from their approving authority, they must select a cloud service provider that has obtained a provisional authorization from DoD. Gaining authorization under FedRAMP is the first step toward gaining authorization from DoD. There are four paths into the FedRAMP repository; the Joint Authorization Board (JAB) and Agency ATO paths are the most common. If a CSP wants to go beyond FedRAMP and become a DoD CSP, the CSP must go through the DoD CC SRG assessment process. Currently, attaining a FedRAMP Moderate authorization enables a CSP to be considered for Impact Level 2 of the CC SRG, while an additional assessment against the FedRAMP+ controls of Impact Levels 4 and 5 is required prior to being granted a provisional authorization at those levels.

Regardless of whether the Designated Accrediting Authority (DAA) is using the DoD Information Assurance Certification and Accreditation Process (DIACAP) or the Risk Management Framework (RMF) process, the DAA has the ability to leverage and inherit the Provisional Authorization package(s) as part of its assessment toward a final ATO, which only it grants (not the Defense Information Systems Agency (DISA)). The RMF process has been formally adopted by the DoD. Mission owners can request the AWS FedRAMP package to get a better understanding of the compliance and security processes that AWS abides by.

Controls inheritance and responsibilities

AWS global infrastructure

AWS provides facilities and hardware in support of mission owners, with security features controlled by AWS at the infrastructure level. In the infrastructure as a service (IaaS) model, AWS is responsible for applicable service delivery layers, including:

• Infrastructure (hardware and software that comprise the infrastructure)
• Service management processes (the operation and management of the infrastructure and the system and software engineering lifecycles)

Mission owners use AWS to manage the cloud infrastructure, including the network, data storage, system resources, data centers, security, reliability, and supporting hardware and software.

Across the globe, the infrastructure of AWS is organized into Regions. Each Region contains Availability Zones, which are located within a particular geographic area that allows for low-latency communication between the zones. Customer data resides within a particular Region and does not move to a different Region unless the customer explicitly takes this action.

Currently, there are seven Regions available within CONUS that are permitted for use by the DoD. They are:

• US East (IL2)
  o us-east-1 (Northern Virginia)
  o us-east-2 (Ohio)
• US West (IL2)
  o us-west-1 (Northern California)
  o us-west-2 (Oregon)
• AWS GovCloud (IL4, IL5, ITAR, and export-controlled workloads)
  o us-gov-west-1 (Oregon)
  o us-gov-east-1 (Ohio)
• AWS Secret Region (IL6)

Each Availability Zone has an identical cloud services offering (compute, storage, and networking, among other functionality) that enables mission owners to deploy applications and services with flexibility, scalability, and reliability. AWS provides mission owners with the option to choose only the services they require and the ability to provision or
Architecture

Traditional DoD data center

Traditional three-tier data center architecture

A typical DoD three-tier data center architecture might consist of the following:

• Two data center locations; one hosting the production environment and one hosting the COOP or DR environment.
• Each system consists of three distinct tiers, or network enclaves. Each enclave is defined by separate subnets or VLANs.
• Network isolation and control between enclaves is maintained by a firewall. This isolation allows the web tier to communicate with the application tier, and the application tier to communicate with the database tier. Direct external access to the application and web tiers is prohibited.
• A load balancer is used to distribute traffic across the web servers and may also provide SSL/TLS offloading.

Because of the distance between these systems and the network connectivity between them, the data replication between databases is asynchronous. In addition to the three-tier web application and database components, additional "shared" or common services are needed to support the infrastructure as a whole. These services may be dedicated to this application, or may be leveraged to support multiple applications.

DoD-compliant cloud environment

Migrating mission workloads to a DoD-compliant environment in the AWS Cloud is achievable through the following high-level steps.

Step 1 – Find a "home" in the AWS Cloud: planning migration to AWS Regions and Availability Zones.

Concepts:
• AWS Region
• AWS Availability Zone

Step 2 – Define your network in AWS: configuring VPC subnets, NACLs, and route tables.

Concepts:
• Virtual Private Cloud (VPC)
• VPC subnet
• Network access control list (network ACL)
• VPC route table
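The subnet planning in Step 2 can be sketched with Python's standard ipaddress module. The VPC CIDR and per-AZ layout below are illustrative assumptions for this sketch, not values prescribed by this paper:

```python
import ipaddress

# Illustrative VPC CIDR drawn from the RFC 1918 private space (an assumption).
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
assert vpc_cidr.is_private  # RFC 1918 check

# Carve the VPC into /24 subnets and assign one private and one public
# subnet per Availability Zone (hypothetical AZ names).
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
subnets = vpc_cidr.subnets(new_prefix=24)
plan = {az: {"private": str(next(subnets)), "public": str(next(subnets))}
        for az in azs}

for az, nets in plan.items():
    print(az, nets)
```

Because each subnet must reside entirely within one Availability Zone, the sketch allocates distinct, non-overlapping /24 blocks per zone.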
Step 3 – Deploy servers (or containers, or serverless infrastructure): deploying Amazon EC2 instances in your subnets.

Concepts:
• Amazon Elastic Compute Cloud (EC2)

Step 4 – Add storage: creating and attaching Amazon EBS volumes to your Amazon EC2 instances.

Concepts:
• Amazon EBS
• Amazon S3
• Amazon S3 Glacier

Step 5 – Add scalability, redundancy, and failover: adding ELB to handle traffic coming to your EC2 instances, and adding an Auto Scaling group to increase and decrease your compute capacity.

Concepts:
• Multi-AZ architecture
• Elastic Load Balancing (ELB)
• Amazon EC2 Auto Scaling

Step 6 – Implement network traffic filtering: adding security groups to your VPC.

Concepts:
• AWS security groups
• Defense in depth

Recap

Comparing AWS Cloud architecture with on-premises data centers: Availability Zones are analogous to data centers, subnets are analogous to layer 3 VLANs, EC2 instances are analogous to servers or virtual machines, and security groups are analogous to stateful firewalls.

Shared services

There are several other components that are required to support a DoD-compliant environment in AWS. The DoD CC SRG stipulates that IL4+ workloads require protection by a web application firewall; network intrusion detection/prevention; full packet capture functionality; vulnerability scanning; endpoint protection; identity and access control (including public key infrastructure (PKI)); common services (DNS/NTP); and log management and patching capabilities.

Additional components required to support a DoD-compliant environment in AWS

AWS services

Compute

Amazon Elastic Compute Cloud (EC2)

Amazon EC2 is a web service that provides virtual server instances that can be used to build and host software systems. Amazon EC2 facilitates web-scale computing by enabling mission owners to deploy virtual machines on demand.
The simple web service interface allows mission owners to obtain and configure capacity with minimal friction, and it provides complete control over computing resources. Amazon EC2 changes the economics of computing by allowing organizations to avoid large capital expenditures and instead pay only for the capacity that is actually used. Amazon EC2 functionality and features include:

• Elastic – Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing mission owners to quickly scale capacity both up and down as computing requirements change.
• Flexible – The mission owner can choose among various options for number of CPUs, memory size, and storage size. A highly reliable and fault-tolerant system can be built using multiple EC2 instances. EC2 instances are very similar to traditional virtual machines or hardware servers. EC2 instances use operating systems such as Windows or Linux, and they can accommodate most software that runs on those operating systems. EC2 instances have IP addresses, so the usual methods of interacting with a remote machine, such as Secure Shell (SSH) and Remote Desktop Protocol (RDP), can be used.
• Amazon Machine Image (AMI) – AMI templates are used to define an EC2 server instance. Each AMI contains a software configuration (including operating system, application server, and applications) applied to an instance type. Instance types in Amazon EC2 are essentially hardware archetypes matched to the amount of memory (RAM) and computing power (number of CPUs) needed for the application.

Using AMI templates to launch Amazon EC2 instances

• Custom AMI – The first step toward building applications in AWS is to create a library of customized AMIs. Starting an application then becomes a matter of launching the AMI. For example, if an application is a website or web service, the AMI should be configured with a web server (for example, Apache, NGINX, or Microsoft Internet Information Services), the associated static content, and the code for all dynamic pages.
Alternatively, the AMI can be configured to install all required software components and content by running a bootstrap script as soon as the instance is launched. As a result, after launching the AMI, the web server starts and the application can begin accepting requests. After an AMI has been created, replacing a failing instance is very simple; a replacement instance can easily be launched that uses the same AMI as its template.

• EC2 local instance store volumes – These volumes provide temporary, block-level storage for EC2 instances. When an EC2 instance is created from an AMI, in most cases it comes with a preconfigured block of preattached disk storage. Unlike Amazon EBS volumes, data on instance store volumes persists only during the life of the associated EC2 instance, and these volumes are not intended to be used as durable disk storage. Data on EC2 local instance store volumes is persistent across orderly instance reboots (following the OS vendor's procedure for rebooting the underlying operating system), but not in situations where the EC2 instance shuts down or goes through a failure/restart cycle. Local instance store volumes should not be used for any data that must persist over time, such as permanent file or database storage. Although local instance store volumes are not persistent, the data can be persisted by periodically copying or backing it up to Amazon EBS or Amazon S3.
• Mission-owner controlled – Mission owners have complete control of their instances. They have root access to each one and can interact with them as they would with any machine. Mission owners can stop an instance while retaining the data on its boot partition, and then subsequently restart the same instance using web service APIs. Instances can also be rebooted remotely using web service APIs. Mission owners also have access to the AWS Management Console to view and control their instances.
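The AMI-as-template pattern described above (launch instances from a golden image, and replace a failed instance by launching the same image again) can be sketched in Python. The class and field names below are illustrative, not an AWS API:

```python
from dataclasses import dataclass, field
from itertools import count

_ids = count(1)

@dataclass(frozen=True)
class AMI:
    """A toy machine image: a fixed software configuration."""
    name: str
    packages: tuple

@dataclass
class Instance:
    image: AMI
    instance_id: str = field(default_factory=lambda: f"i-{next(_ids):08d}")
    healthy: bool = True

def launch(image: AMI) -> Instance:
    return Instance(image=image)

def replace_if_failed(instance: Instance) -> Instance:
    # A replacement is launched from the same AMI, so it carries an
    # identical software configuration to the failed instance.
    return launch(instance.image) if not instance.healthy else instance

web_ami = AMI("web-server", ("httpd", "site-content"))
server = launch(web_ami)
server.healthy = False
replacement = replace_if_failed(server)
print(replacement.image.name)  # same configuration, new instance ID
```

The point of the pattern is that recovery is a launch, not a rebuild: no per-instance state needs to be reconstructed because the AMI carries the whole configuration.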
• API management – Managing instances can be done through an API call, scriptable command line tools, or the AWS Management Console. Being able to quickly launch replacement instances based on a custom AMI is a critical first step toward fault tolerance. The next step is storing the persistent data that these server instances use.
• Multiple Availability Zones – Amazon EC2 provides the ability to place instances in multiple Availability Zones. Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones. They provide inexpensive, low-latency network connectivity to other zones in the same Region. By launching instances in separate Availability Zones, mission owners can protect their applications from the failure of a single location. Regions consist of one or more Availability Zones.
• Reliable – The Amazon EC2 Service Level Agreement (SLA) commitment is 99.95% availability for each EC2 Region.
• Elastic IP addresses – Elastic IP addresses are static IP addresses designed for dynamic cloud computing. An Elastic IP address is associated with a mission owner account, not with a particular instance, so mission owners control that address until they choose to explicitly release it. Unlike traditional static IP addresses, however, Elastic IP addresses can be programmatically remapped to any instance in the account in the event of an instance or Availability Zone failure. Mission owners don't need to wait for a network technician to reconfigure or replace a host, or wait for the Domain Name System (DNS) to propagate. In addition, mission owners can optionally configure the reverse DNS record of any of their Elastic IP addresses.
• Scalable (durability) – AWS Auto Scaling is a web service that enables mission owners to automatically launch or terminate Amazon EC2 instances based on user-defined policies, health status checks, and schedules.
For applications configured to run on a cloud infrastructure, scaling is an important part of cost control and resource management. Scaling is the ability to increase or decrease the compute capacity of an application, either by changing the number of servers (horizontal scaling) or by changing the size of the servers (vertical scaling). In a typical situation, when a web application starts to get more traffic, the mission owner either adds more servers or increases the size of the existing servers to handle the additional load. Similarly, if traffic to the web application starts to slow down, underutilized servers can be shut down, or the size of the existing servers can be decreased. Depending on the infrastructure involved, vertical scaling might involve changes to server configurations every time the application scales. With horizontal scaling, AWS simply increases or decreases the number of servers according to the application's demands.

The decision when to scale vertically and when to scale horizontally depends on factors such as the mission owner's use case, cost, performance, and infrastructure. When using Auto Scaling, mission owners can automatically increase the number of servers in use when user demand goes up, to ensure that performance is maintained, and decrease the number of servers when demand goes down, to minimize costs. Auto Scaling helps make efficient use of compute resources by automatically doing the work of scaling for the mission owner; this automatic scaling is the core value of the Auto Scaling service. Auto Scaling is well suited for applications that experience hourly, daily, or weekly variability in usage and need to scale horizontally to keep up with changes in usage. Auto Scaling frees users from having to predict traffic spikes accurately and plan for provisioning resources in advance.
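The horizontal-scaling behavior described above can be sketched as a toy policy loop in Python. The thresholds, capacity bounds, and traffic values are illustrative assumptions, not AWS defaults:

```python
def scale(servers: int, load_per_server: float,
          scale_out_at: float = 75.0, scale_in_at: float = 25.0,
          min_servers: int = 2, max_servers: int = 10) -> int:
    """One evaluation of a toy scaling policy: add a server when average
    load is high, remove one when it is low, and respect the group's
    minimum and maximum capacity."""
    if load_per_server > scale_out_at and servers < max_servers:
        return servers + 1
    if load_per_server < scale_in_at and servers > min_servers:
        return servers - 1
    return servers

# Simulate a daily traffic curve: total demand in arbitrary load units.
servers = 2
history = []
for total_load in [60, 120, 240, 300, 240, 120, 60, 30]:
    servers = scale(servers, total_load / servers)
    history.append(servers)
print(history)
```

The server count rises as the curve peaks and falls back as traffic drains away, without the operator having predicted the spike in advance.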
With Auto Scaling, mission owners can build a fully scalable and affordable infrastructure in the cloud.

Networking

Amazon Virtual Private Cloud (VPC)

AWS enables a mission owner to create the equivalent of a "virtual private enclave" with the Amazon VPC service. Amazon VPC is used to provision a logically isolated section of the AWS Cloud where a customer can launch AWS resources in a virtual network that is defined by the mission owner. This logically separate space within AWS contains compute and storage resources that can be connected to a mission owner's existing infrastructure through a virtual private network (VPN) connection, an AWS Direct Connect (private) connection, and/or the internet. With Amazon VPC, it is then possible to extend existing DoD directory services, management tools, monitoring/security scanning solutions, and inspection capabilities, thus maintaining a consistent means of protecting information whether it resides on internal DoD IT resources or in AWS. Network isolation, and the ability to demonstrate separation of infrastructure and data, is applicable at Impact Levels 2, 4, and 5, and it is a key requirement of the CC SRG for Impact Levels 4 and 5.

The CC SRG requires Impact Level 4 and 5 mission applications to be connected to NIPRNet without direct internet access from the VPC. Mission owners have complete control over the definition of the virtual networking environment within their VPC, including the selection of a private (RFC 1918) address range of their choice (for example, 10.0.0.0/16), the creation of subnets, the configuration of route tables, and the inclusion or exclusion of network gateways. Further, mission owners can define the subnets within their VPC in a way that enables them to group similar kinds of instances based on IP address range. Mission owners can use VPC functionality and features in the following ways:

• Mission owners can define a VPC on scalable infrastructure and specify its private IP address range from any range they choose.
• Mission owners can subdivide a VPC's private IP address space further into one or more public or private subnets, according to application requirements and security best practices. This can facilitate running applications and services in a customer's VPC.
• Mission owners define inbound and outbound access to and from individual subnets using network access control lists.
• Data can be stored in Amazon S3 with set permissions, ensuring that the data can only be accessed from within a mission owner's VPC.
• An Elastic IP address can be attached to any instance in a mission owner's VPC so it can be reached directly from the internet (Impact Level 2 only).
• A mission partner's VPC can be bridged with their on-site DoD IT infrastructure (encapsulated in an encrypted VPN connection) to extend existing security and management policies to the VPC instances, as if they were running within the mission partner's physical infrastructure.

Amazon VPC provides advanced security features, such as security groups and network access control lists, to enable inbound and outbound filtering at the instance level and the subnet level. When building a VPC, mission owners must define the subnets, routing rules, security groups, and network access control lists (NACLs) that comply with the networking and security requirements of the DoD and their organization.

Subnets

VPCs can span multiple Availability Zones. After creating a VPC, mission owners can add one or more subnets in each Availability Zone. Each subnet must reside entirely within one Availability Zone (it cannot span zones) and is assigned a unique ID by AWS.

Routing

By design, each subnet must be associated with a route table that specifies the allowed routes for outbound traffic leaving the subnet. Every subnet is automatically associated with the main route table for the VPC. By updating the association, mission owners can change which route table a subnet uses.
Mission owners should know the following basic things about VPC route tables:

• The VPC has an implicit router.
• The VPC comes with a main route table that mission owners can modify.
• Mission owners can create additional custom route tables for their VPC.
• Each subnet must be associated with a route table, which controls the routing for the subnet. If a mission owner does not associate a subnet with a particular route table, the subnet uses the main route table.
• Mission owners can replace the main route table with a custom table that they have created, so that this table becomes the default table with which each new subnet is associated.
• Each route in a table specifies a destination Classless Inter-Domain Routing (CIDR) block and a target (for example, traffic destined for 172.16.0.0/12 is targeted for the virtual private gateway). Amazon VPC uses the most specific route that matches the traffic to determine how to route the traffic.

Security groups and network ACLs

AWS provides two features that mission owners can use to increase security in their VPC: security groups and network access control lists (NACLs). Both features enable mission owners to control the inbound and outbound traffic for their instances. Security groups work at the instance level, and access control lists (ACLs) work at the subnet level. Security groups default to deny all and must be configured by the mission owner to permit traffic.

Security groups provide stateful filtering at the instance level and can meet the network security needs of many AWS mission owners. However, VPC users can choose to use both security groups and network ACLs, to take advantage of the additional layer of security that network ACLs provide. An ACL is an optional layer of security that acts as a firewall for controlling traffic in and out of a subnet. Mission owners can set up network ACLs with rules similar to those implemented in security groups, to add a layer of stateless filtering to their VPC.
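The most-specific-route rule for route tables can be sketched with Python's standard ipaddress module. The routes and target names below are illustrative assumptions:

```python
import ipaddress

# A toy route table: destination CIDR -> target (illustrative values).
route_table = {
    "10.0.0.0/16": "local",                    # traffic inside the VPC
    "172.16.0.0/12": "virtual-private-gateway",
    "0.0.0.0/0": "internet-gateway",           # default route
}

def route_for(destination_ip: str) -> str:
    """Pick the target of the most specific (longest-prefix) matching route."""
    ip = ipaddress.ip_address(destination_ip)
    matches = [ipaddress.ip_network(cidr) for cidr in route_table
               if ip in ipaddress.ip_network(cidr)]
    best = max(matches, key=lambda net: net.prefixlen)
    return route_table[str(best)]

print(route_for("10.0.3.7"))      # local: the /16 beats the /0 default
print(route_for("172.16.9.1"))    # virtual-private-gateway
print(route_for("198.51.100.5"))  # internet-gateway (default route)
```

Every destination matches the 0.0.0.0/0 default route, but a narrower prefix always wins, which is why VPC-internal traffic stays local even when an internet gateway route exists.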
Mission owners should know the following basic things about network ACLs:

• A network ACL is a numbered list of rules that is evaluated in order, starting with the lowest numbered rule, to determine whether traffic is allowed in or out of any subnet associated with the network ACL. The highest rule number available for use is 32766. We suggest that mission owners start by creating rules with rule numbers that are multiples of 100, so that new rules can be inserted later on.
• A network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic.
• Each VPC automatically comes with a modifiable default network ACL; by default, it allows all inbound and outbound traffic.
• Each subnet must be associated with a network ACL; if mission owners don't explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL.
• Mission owners can create custom network ACLs; each custom network ACL starts out closed (permits no traffic) until the mission owner adds a rule.
• Network ACLs are stateless; responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).

The following table summarizes the basic differences between security groups and network ACLs. Inbound traffic is first processed according to the rules of the network ACL applied to a subnet, and subsequently by the security group applied at the instance level.

Table 2 – Differences between security groups and network ACLs

• Scope: A security group operates at the instance level (first layer of defense). A network ACL operates at the subnet level (additional layer of defense).
• Rule types: A security group supports allow rules only. A network ACL supports both allow rules and deny rules.
• State: A security group is stateful; return traffic is automatically allowed, regardless of any rules. A network ACL is stateless; return traffic must be explicitly allowed by rules.
• Evaluation: For a security group, all rules are evaluated before deciding whether to allow traffic. For a network ACL, rules are processed in order when deciding whether to allow traffic.
• Application: A security group applies to an instance only if someone specifies the security group when launching the instance, or associates the security group with the instance after launch. A network ACL automatically applies to all instances in the subnets it is associated with (a backup layer of defense, so you don't have to rely on someone specifying the security group).

The following diagram illustrates the layers of security provided by security groups and network ACLs. For example, traffic from an internet gateway is routed to the appropriate subnet using the routes in the routing table. The rules of the network ACL associated with the subnet control which traffic is allowed to the subnet. The rules of the security group associated with an instance control which traffic is allowed to the instance.

Security layers provided by security groups and network ACLs

Storage

There are three common storage options for instances and/or resources that can be utilized in conjunction with a system hosted within an Amazon VPC: Amazon S3, Amazon EBS, and instance storage, each of which has distinct use cases.

Amazon S3

Amazon S3 is a highly durable repository designed for mission-critical and primary data storage of mission owner data. It enables mission owners to store and retrieve any amount of data, at any time, from within Amazon EC2 or anywhere on the web. Amazon S3 stores data objects redundantly on multiple devices across multiple facilities, and allows concurrent read or write access to these data objects by many separate clients or application threads. Amazon S3 is designed to protect data and allow access to it even in the case of the failure of a data center. Additionally, mission owners can use the redundant data stored in Amazon S3 to recover quickly and reliably from instance or application failures.
The Amazon S3 versioning feature allows the retention of prior versions of objects stored in Amazon S3, and also protects against accidental deletions initiated by staff or software error. Versioning can be enabled on any Amazon S3 bucket. Mission owners should know the following basic things about Amazon S3 functionality and features:

• Mission owners can write, read, and delete objects containing from 1 byte to 5 terabytes of data each. The number of objects mission owners can store is unlimited.
• Each object is stored in an Amazon S3 bucket and retrieved via a unique, developer-assigned key.
• Objects stored in an AWS Region never leave the Region unless the mission owner transfers them out.
• Authentication mechanisms are provided to ensure that data is kept secure from unauthorized access. Objects can be made private or public, and rights can be granted to specific users.
• Options for secure data upload and download and encryption of data at rest are provided for additional data protection.
• Amazon S3 uses standards-based REST and SOAP interfaces designed to work with any internet development toolkit.
• Amazon S3 is built to be flexible, so that protocol or functional layers can easily be added.
• Amazon S3 includes options for performing recurring and high-volume deletions. For recurring deletions, rules can be defined to remove sets of objects after a predefined time period. For efficient one-time deletions, up to 1,000 objects can be deleted with a single request.

For more information on these Amazon S3 features, consult the Amazon S3 documentation.

Amazon Elastic Block Store

Amazon EBS provides block-level storage volumes for use with Amazon EC2 instances. Amazon EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone. Amazon EBS volumes that are attached to an Amazon EC2 instance are exposed as storage volumes that persist independently from the life of the instance.
With Amazon EBS, users pay only for what they use. Amazon EBS is recommended when data changes frequently and requires long-term persistence. Amazon EBS volumes are particularly well suited for use as the primary storage for file systems, databases, or any applications that require fine, granular updates and access to raw, unformatted, block-level storage. Amazon EBS is particularly helpful for database-style applications that frequently encounter many random reads and writes across the dataset. Mission owners can attach multiple volumes to the same instance, within the limits specified by their AWS account. Currently, an AWS account is limited to 300 TiB of total storage within EBS volumes.

Amazon EBS volumes store data redundantly, making them more durable than a typical hard drive. The annual failure rate for an Amazon EBS volume is 0.1% to 0.5%, compared to 4% for a commodity hard drive. Amazon EBS and Amazon EC2 are often used in conjunction with one another when building an application on AWS. Any data that needs to persist can be stored on Amazon EBS volumes, not on the temporary storage associated with each EC2 instance. If the EC2 instance fails and needs to be replaced, the Amazon EBS volume can simply be attached to the new EC2 instance. Because this new instance is a duplicate of the original, there is no loss of data or functionality.

EBS volumes are highly reliable, but to further mitigate the possibility of a failure, backups of these volumes can be created using a feature called snapshots. A robust backup strategy includes an interval between backups, a retention period, and a recovery plan. Snapshots are stored for high durability in Amazon S3. Snapshots can be used to create new EBS volumes, which are an exact copy of the original volume at the time the snapshot was taken. These EBS operations can be performed through API calls.
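The annual failure rates quoted above can be put in perspective with a short calculation. The fleet size, one-year horizon, and independence of failures are illustrative simplifying assumptions:

```python
# Chance that at least one volume fails within a year, for a fleet of
# 100 volumes, using the annual failure rates quoted above
# (0.1%-0.5% for EBS vs. 4% for a commodity hard drive).
def p_any_failure(annual_failure_rate: float, volumes: int) -> float:
    """Probability of at least one failure in a year, assuming
    independent failures (a simplifying assumption)."""
    return 1 - (1 - annual_failure_rate) ** volumes

fleet = 100
for label, afr in [("EBS (best case)", 0.001),
                   ("EBS (worst case)", 0.005),
                   ("commodity disk", 0.04)]:
    print(f"{label}: {p_any_failure(afr, fleet):.1%}")
```

At commodity-disk failure rates, a 100-volume fleet is almost certain to see at least one failure per year, which is why the snapshot-based backup strategy above matters even on the more durable EBS volumes.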
Mission owners should know the following basic things about Amazon EBS functionality and features:

• Amazon EBS allows mission owners to create storage volumes from 1 GB to 16 TB that can be mounted as devices by EC2 instances. Multiple volumes can be mounted to the same instance.
• Storage volumes behave like raw, unformatted block devices, with user-supplied device names and a block device interface. Mission owners can create a file system on top of EBS volumes, or use them in any other way they would use a block device (like a hard drive).
• Amazon EBS volumes are placed in a specific Availability Zone, and can then be attached to instances also in that same Availability Zone.
• Each storage volume is automatically replicated within the same Availability Zone. This prevents data loss due to the failure of any single hardware component.
• Amazon EBS also provides the ability to create point-in-time snapshots of volumes, which are persisted to Amazon S3. These snapshots can be used as the starting point for new Amazon EBS volumes, and they protect data for long-term durability. The same snapshot can be used to instantiate as many volumes as desired.

For more information on these Amazon EBS features, refer to the Amazon EBS documentation.

Instance storage

An instance store provides volatile, temporary, block-level storage for use with an EC2 instance, and consists of one or more instance store volumes. Instance store volumes must be configured using block device mapping at launch time and mounted on the running instance before they can be used. Instances launched from an instance store-backed AMI have a mounted instance store volume for the virtual machine's root device volume, and can have other mounted instance store volumes, depending on the instance type. The data in an instance store is temporary and only persists during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data in the instance store persists.
However, data on instance store volumes is lost under the following circumstances:

• Failure of an underlying drive
• Stopping an Amazon EBS-backed instance
• Ending an instance

Therefore, AWS mission owners should not rely on instance store volumes for important, long-term data. Instead, they should keep data safe by using a replication strategy across multiple instances, storing data in Amazon S3, or using Amazon EBS volumes.

Encryption

AWS supports multiple encryption mechanisms for data stored within a mission owner's VPC. The following is a summary of the encryption methods:

• Amazon EBS encryption – For Amazon EBS volumes, encryption is managed by OS-level encryption (for example, BitLocker or Encrypting File System (EFS)), by third-party products, or by Amazon EBS encryption. For Amazon EBS encryption, when customers create an encrypted Amazon EBS volume and attach it to a supported instance type, the data stored at rest on the volume, the disk I/O, and the snapshots created from the volume are all encrypted. The encryption occurs on the servers that host Amazon EC2 instances, providing encryption of data in transit from EC2 instances to Amazon EBS storage.
• Amazon S3 encryption – Provides added security for object data stored in buckets in Amazon S3. Mission owners can encrypt data on the client side and upload the encrypted data to Amazon S3; in this case, mission owners manage the encryption process, the encryption keys, and related tools. Optionally, mission owners can use the Amazon S3 server-side encryption feature: Amazon S3 encrypts object data before saving it on disks in its data centers, and decrypts the object data when objects are downloaded, freeing mission owners from the tasks of managing encryption, encryption keys, and related tools. Mission owners can also use their own encryption keys with the Amazon S3 server-side encryption feature.
• AWS Key Management Service (AWS KMS) – AWS KMS is a managed service that makes it easy for mission owners to create and control the encryption keys used to encrypt their data.
Learn more about AWS KMS in the Management section of this paper.

• AWS CloudHSM – AWS CloudHSM is a cloud-based hardware security module (HSM) that allows you to easily add secure key storage and high-performance cryptographic operations to your AWS applications. CloudHSM has no upfront costs and provides the ability to start and stop HSMs on demand, allowing you to provision capacity quickly and cost-effectively, when and where it is needed. CloudHSM is a managed service that automates time-consuming administrative tasks such as hardware provisioning, software patching, high availability, and backups.

CloudHSM is one of several AWS services, including AWS KMS, that offer a high level of security for your cryptographic keys. AWS KMS provides an easy, cost-effective way to manage encryption keys on AWS that meets the security needs for the majority of customer data, while CloudHSM offers customers the option of single-tenant access to, and control over, their HSMs.

Management

AWS Identity and Access Management (IAM)

IAM is a web service that enables mission owners to manage users and permissions in AWS. The service is targeted at organizations with multiple users or systems that use products such as Amazon EC2, Amazon RDS, and the AWS Management Console. With IAM, mission owners can centrally manage users, security credentials such as access keys, and permissions that control which AWS resources users can access.

Without IAM, organizations with multiple users and systems must either create multiple AWS accounts, each with its own billing and subscriptions to AWS products, or employees must all share the security credentials of a single AWS account. Also, without IAM, mission owners have no control over the tasks that a particular user or system can do and what AWS resources they might use. IAM addresses this issue by enabling organizations to create multiple users (each user being a person, system, or application) who can use AWS products, each with individual security credentials, all controlled by and billed to a single AWS account.
With IAM, each user is allowed to do only what they need to do as part of their job. IAM includes the following features:

• Central control of users and security credentials – Mission owners control the creation, rotation, and revocation of each user's AWS security credentials (such as access keys).
• Central control of user access – Mission owners control what data users can access and how they access it.
• Shared resources – Users can share data for collaborative projects.
• Permissions based on organizational groups – Mission owners can restrict users' AWS access based on their job duties (for example, admin, developer, etc.) or departments. When users move inside the organization, mission owners can easily update their AWS access to reflect the change in their role.
• Central control of AWS resources – A mission owner's organization maintains central control of the data that users create, with no breaks in continuity or lost data as users move around within or leave the organization.
• Control over resource creation – Mission owners can help make sure that users create data only in sanctioned places.
• Networking controls – Mission owners can restrict user access to AWS resources to only from within the organization's corporate network, using SSL.

AWS Key Management Service (AWS KMS)

AWS Key Management Service allows mission owners to create and control the encryption keys used to encrypt their data. It utilizes FIPS 140-2 validated cryptographic modules. AWS KMS integrates with other AWS services, like AWS CloudTrail, to provide mission owners with logs of all key usage, to help meet regulatory and compliance needs. AWS KMS gives mission owners more control over access to data that is encrypted. Mission owners have control over who can use the AWS KMS keys and gain access to encrypted data.
who can use the AWS KMS keys and gain access to encrypted data.

AWS KMS uses envelope encryption to protect data. Envelope encryption is the practice of encrypting plaintext data with a data key and then encrypting the data key with another key. Envelope encryption provides several benefits. For example, when rotating keys, instead of re-encrypting the raw data multiple times with different keys, mission owners can re-encrypt only the data keys that protect the raw data.

AWS KMS includes the following features:

• AWS KMS keys are the primary resource within AWS KMS. AWS KMS keys are used to generate, encrypt, and decrypt the data keys that are used outside of AWS KMS to encrypt data. AWS KMS stores, tracks, and protects AWS KMS keys, and when an individual wants to use an AWS KMS key, the key is accessed through AWS KMS. An AWS KMS key never leaves AWS KMS unencrypted, nor does AWS KMS store, manage, or track data keys.
• There are two types of AWS KMS keys within a mission owner's AWS account:
  o Customer managed AWS KMS keys, which the mission owner creates, manages, and uses. In this case, the mission owner is responsible for enabling and disabling the AWS KMS keys and for establishing the IAM and key policies that grant others permission to use the keys.
  o AWS managed AWS KMS keys. In this case, keys are managed by the AWS service that works with AWS KMS.
• Data keys are encryption keys for encrypting data, including large amounts of data. AWS KMS is used to generate, encrypt, and decrypt data keys.
• Mission owners can import their own key material from their own infrastructure and use it to encrypt their data. They can also use AWS KMS to manage the lifecycle of the key material.
• Key policies are used to control access to AWS KMS keys. Each AWS KMS key has its own policy that defines permissions and enables access to the key. For a user to access a resource, he or she must have access to the key and permission to use the key.
• Mission owners can add an additional layer of security by limiting permissions to AWS KMS keys using encryption context. The encryption context is an additional key-value pair of data that can be associated with the information protected by AWS KMS.
• AWS also offers an Encryption Software Development Kit (SDK), which is a library for implementing encryption and following best practices within an application.

AWS CloudTrail

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of a mission owner's AWS account. CloudTrail near-continuously logs and monitors actions taken by a user, role, or another AWS service within an account. CloudTrail records actions as events in the AWS Management Console, AWS CLI, SDKs, and APIs. Mission owners can use CloudTrail to view, search, download, archive, analyze, and respond to account activity across the mission owner's AWS infrastructure. Mission owners have the granularity to identify who or what took the action, what resources were acted upon, when the event occurred, and other details.

CloudTrail includes the following features:

• Mission owners have the ability to aggregate all logs in Amazon S3 and restrict access to the Amazon S3 buckets to prevent tampering and deletion of log data.
• Mission owners can turn on CloudTrail in all AWS Regions, even the Regions they aren't operating in. This way, suspicious activity in an account is always logged.
• CloudTrail can be enabled to audit usage of AWS KMS keys.

Amazon CloudWatch

Amazon CloudWatch is a monitoring service for AWS Cloud resources and applications running within AWS. CloudWatch provides a near real-time stream of system events and can be used to monitor for specific events and perform actions in an automated manner. Amazon CloudWatch is different from AWS CloudTrail: the latter records API calls for an AWS account and delivers logs.

Amazon CloudWatch includes the following features:

• Mission owners can collect and track metrics
like CPU usage and disk reads/writes of Amazon EC2 instances, or other key performance indicators (KPIs).
• CloudWatch alarms send notifications or automatically make changes to the resources that mission owners are monitoring, based on rules they have defined.
• Mission owners can create custom metrics to monitor application resources to gain visibility into resource utilization, application performance, and operational health.
• The CloudWatch service also includes CloudWatch Logs, which can be used to monitor, store, and access your log files from Amazon EC2 instances, AWS CloudTrail, Route 53, Lambda, and other sources. This can be used for log aggregation and consolidation to support log reduction, auditing, and security operations functions.

AWS Config

AWS Config is a service that lets mission owners assess, audit, and evaluate the configurations of all of their AWS resources. AWS Config monitors AWS resources and captures the configuration of these resources. It can automatically evaluate the configuration against desired configurations, and it helps simplify compliance auditing, security analysis, change management, and operational troubleshooting. Mission owners can use AWS Config to keep an inventory of their AWS resources as well as the software configurations within EC2 instances.

AWS Config includes the following features:

• Mission owners can keep track of all of their AWS resources and determine when a change to a certain resource has been made.
• AWS Config can be used to assess overall compliance. Mission owners can define rules for provisioning AWS resources, for example, only allowing Amazon EBS volumes to be created if they are encrypted.
• Mission owners can also track the relationships among resources and review dependencies prior to making changes.
• AWS Config can also capture a comprehensive history of AWS resource configuration. Mission owners can obtain the details of the event, or API call, that invoked the change.
• AWS Config also allows viewing of compliance status across the enterprise, over multiple accounts and multiple Regions. This makes it easier to identify non-compliant accounts or resources and view the data from the AWS Config console in a central account.

Services in scope

As stated previously, hosting a workload requires classification of the data and determination of the DoD SRG impact level of the data. The impact level may require the mission owner to choose AWS Regions carefully for their workloads. The CONUS Regions (US East/US West) within the US have a provisional authorization to host IL2 data, whereas the AWS GovCloud (US) Regions may be used to host IL4 and IL5 data. Within each Region there are also a variety of services that mission owners can use that have gone through the DoD SRG accreditation process. AWS is constantly working with third-party auditors and with DoD accreditation agencies to get more services accredited at different impact levels. For an updated list of services that are currently undergoing or have already undergone various accreditation processes, refer to the AWS Services in Scope page.

Reference architecture

Impact Level 2

CC SRG Impact Level 2 (IL2) systems are appropriate for hosting public or limited access information. IL2 systems are not required to be fully segregated from internet traffic, and they can connect directly to the internet. The following is a sample reference architecture for an IL2 system with a recovery time objective (RTO) of greater than or equal to one day.

Sample Impact Level 2 architecture with RTO >= 1 day(s)

The following is an IL2 sample reference architecture with a recovery time objective (RTO) of less than or equal to one hour. This reference architecture is an example of how to both meet application RTO requirements and maintain CC SRG
compliance.

Sample Impact Level 2 architecture with RTO <= 1 hour

The following are some key attributes:

• Access to and from the internet traverses an internet gateway.
• A layer 7 reverse web proxy may reside in the DMZ for protection against application-level attacks targeting web infrastructures. Similarly, mission owners have the option of using native AWS services like AWS Web Application Firewall and AWS Shield to protect against web-based attacks.
• Web and application instances are deployed in Auto Scaling groups across multiple Availability Zones.
• Each Impact Level 2 infrastructure should be adequately stratified to limit access to the web/application and database assets to either authorized traffic (by strata) or to administrative traffic initiated from an authorized bastion host contained within the infrastructure.
• Static web-addressable content is stored in secured Amazon S3 buckets (using bucket policies) and is directly addressable from the internet.
• Infrastructure backups, images, and volume snapshots are securely stored in the Amazon S3 infrastructure in separate buckets so they are not publicly addressable from the internet.
• The application database utilizes Amazon RDS, which is a managed offering for many flavors of commercial databases. The Amazon RDS instance is deployed in a multi-AZ configuration, with primary and secondary databases and synchronous replication between the two.

By default, the AWS infrastructure operates in a "zero trust" security model. Access to an instance, regardless of the strata on which it resides, must be explicitly allowed. The enforcement of this model is enabled through the use of security groups (SGs), which are addressable by other security groups. For administrative access to any instance in the infrastructure, a bastion host is defined as the only host instance that is authorized to access infrastructure assets within a designated infrastructure. These hosts are typically Windows Server instances (RDP via port 3389), Remote Desktop Gateway servers, and/or Linux instances for SSH access to Linux hosts. Any instance designated as a bastion host should be included in a bastion security group. This should be the only security group granted access to the reverse web proxy, web/application instances, and database instances (via ports 22 and/or 3389). Additionally, to further bolster the defensive posture of the infrastructure, the bastion host(s) should be powered off when administration activities are not being performed.

The following table is a sample summary of security group behavior by traffic flow:

Table 3 — Security group behavior by traffic flow

Traffic From (Security Group) | Traffic To (Security Group) | Security Group Rule
Internet | Reverse web proxy (reverseproxy-SG) | Allow 80/443 from internet (all)
Reverse web proxy (reverseproxy-SG) | Web/application server(s) (webserver-SG) | Allow 80/443 from reverseproxy-SG
Web/application server(s) (webserver-SG) | Database server(s) (dbserver-SG) | Allow appropriate database port
Administrator (internet, trusted admin IP) | Bastion host (bastionhost-SG) | Allow 3389/22 from trusted remote administration host (host IP address range)
Bastion host (bastionhost-SG) | Proxy, web, application, and database instances (reverseproxy-SG, webserver-SG, dbserver-SG) | Allow 3389/22 from bastionhost-SG

Impact Level 4

DoD systems hosting data categorized at IL4 and IL5 of the CC SRG must attain complete separation from systems hosting non-DoD data and must route traffic entirely through dedicated connections to the DoD Information Network (DoDIN), through a VPN or an AWS Direct Connect connection. To achieve full separation of network traffic, the current approved DoD reference architecture is to establish an AWS Direct Connect connection from the DoDIN to AWS, including a BCAP with a Computer Network Defense (CND) suite hosted in a colocation facility associated with AWS.
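The zero-trust flows summarized in Table 3 above can also be modeled as data and checked programmatically. The sketch below is a minimal illustration of that rule table, not an AWS API: the security group names mirror the table, and the database port (3306) is an assumed example standing in for "appropriate database port".

```python
# Minimal model of the Table 3 security-group rules (illustrative only;
# group names are hypothetical and this does not call any AWS API).
ALLOWED_FLOWS = {
    # (source, destination): set of allowed TCP ports
    ("internet", "reverseproxy-SG"): {80, 443},
    ("reverseproxy-SG", "webserver-SG"): {80, 443},
    ("webserver-SG", "dbserver-SG"): {3306},          # assumed example DB port
    ("trusted-admin-ip", "bastionhost-SG"): {22, 3389},
    ("bastionhost-SG", "reverseproxy-SG"): {22, 3389},
    ("bastionhost-SG", "webserver-SG"): {22, 3389},
    ("bastionhost-SG", "dbserver-SG"): {22, 3389},
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Return True only if an explicit rule permits this flow.

    Zero trust: any flow without an explicit rule is denied.
    """
    return port in ALLOWED_FLOWS.get((src, dst), set())
```

For example, `is_allowed("internet", "webserver-SG", 443)` is False: the internet can reach the web tier only through the reverse proxy, which is exactly the stratification the table enforces.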
The following illustration is a sample reference architecture for an IL4 system. The architecture follows best practices according to the DoD SRG, which provides guidelines for a Secure Cloud Computing Architecture (SCCA). In addition, AWS has published additional SCCA reference architecture guidance.

Sample Impact Level 4 architecture

The following list contains the CC SRG requirements for IL4 that are added to those already defined for IL2:

• No direct access to/from the public internet — All traffic in/out of AWS must traverse the DoDIN through a virtual private gateway.
• Security and management of the environment is separated from the application environment using different VPCs, in accordance with the SCCA architecture.
• A Virtual Data Center Security Stack (VDSS) VPC is utilized for performing security functionality in accordance with the SCCA, and all traffic flows through the VDSS VPC before reaching the mission owner VPC, which contains the application. The VDSS may contain approved third-party security components to meet the security requirements of the mission owner (for example, performing full packet capture or adding intrusion detection or prevention services).
• A Virtual Data Center Management Stack (VDMS) VPC is established for performing management functionality and offering shared services. This VPC may host shared services for multiple mission owner application VPCs. The VDMS may also perform host management via bastion hosts, security scans, and other services deemed necessary by the mission owner.
• Connection to the DoDIN — This can be accomplished through the use of AWS Direct Connect, an IPsec VPN, or a combination of the two. All traffic traversing between the DoDIN and the DoD application must use a BCAP.
• Access to Amazon S3 is restricted to AWS Direct Connect — Amazon S3, while internet addressable by default, is only accessible through a private route introduced as part of the AWS Direct Connect service.
• All traffic to/from the VPC is scanned on the DoDIN — All traffic entering and/or exiting the Amazon VPC is required to pass through a hardware-based Computer Network Defense suite of tools. This infrastructure is both owned and operated by the government (or on behalf of the government by a mission partner organization).
• Host Based Security System (HBSS) servers are deployed in the VDMS VPC — All DoD EC2 instances will have HBSS installed, and they will communicate with an orchestrator server hosted in the VDMS VPC.
• An Assured Compliance Assessment Solution (ACAS) tool is deployed in the VDMS VPC — All DoD instances will be scanned by an ACAS tool that is located in the VDMS with full access to the subnets of the mission owner VPC.

Impact Level 5

Data that is classified at IL5 has additional controls that must be placed on top of the Impact Level 4 controls. One of the controls required for IL5 is that all data must be encrypted in flight and at rest. Any component of the architecture that processes IL5 data requires physical separation from unencrypted data. Within AWS, all IL5 workloads must be deployed in the AWS GovCloud (US) Region within an Amazon VPC, and network traffic must flow through an approved CAP or DoD SCCA-compliant solution.

As previously stated, all data must be encrypted in flight and at rest. Decryption of data at certain points of the traffic flow (for example, decrypting to perform compute operations) requires an Amazon EC2 Dedicated Host or Dedicated Instance to meet the requirements for physical separation. The AWS services that require dedicated tenancy are EC2, Elastic MapReduce, Elastic Beanstalk, Amazon WorkSpaces, Elastic Kubernetes Service without Fargate, and Elastic Container Service without AWS Fargate. If architecting a three-tier web application like in the examples used so far, all three tiers of the application's compute must use Dedicated Hosts or
Instances. It is also possible to have the web tier instances as On-Demand Instances (IL4) if the web servers are only passing encrypted traffic. The application and database tiers will always require Dedicated Instances or Hosts.

Multi-tenant

By default, EC2 instances are multi-tenant. This means that the mission owner pays for the compute capacity by the hour or second. Mission owners can increase or decrease their capacity based on demand.

Dedicated Instances

Dedicated Instances are EC2 instances that run inside a VPC on hardware that is dedicated to a single customer. The Dedicated Instances of a mission owner are physically isolated at the host hardware level from instances that belong to other AWS accounts. Dedicated Instances may share hardware with other instances from the same AWS account that are not Dedicated Instances. More information can be found on the Dedicated Instances pricing page.

Dedicated Hosts

A Dedicated Host is a physical server with Amazon EC2 instance capacity fully reserved for one AWS account. Dedicated Hosts are designed to meet compliance requirements and allow mission owners to utilize their server-bound software licenses. The following diagram is an example of an IL5 architecture hosted in AWS.

Sample Impact Level 5 architecture hosted in AWS

The following are some key attributes:

• All EC2 instances must use Dedicated Instances or Dedicated Hosts if handling unencrypted data.
• Mission owners can use AWS KMS for managing their encryption keys, or they may bring their own encryption keys. Key policies must be utilized to control and grant other resources or individuals access to the encryption keys.
• This architecture also utilizes the SCCA guidelines by incorporating a VDSS and a VDMS, like in the IL4 environment.

Conclusion

AWS provides a number of important benefits to DoD mission owners, including flexibility, elasticity, utility billing, and reduced time to market. It provides a range of security services and features that you can use to manage the security of your assets and data in AWS. Although AWS provides an excellent service management layer for infrastructure or platform services, mission owners are still responsible for protecting the confidentiality, integrity, and availability of their data in the cloud, and for meeting specific mission requirements for information protection. Conventional security and compliance concepts still apply in the cloud. We encourage you to use the best practices highlighted in this whitepaper to build a set of security policies and processes for your organization so you can deploy applications and data securely.

Contributors

The following individuals contributed to this document:

• Paul Bockelman, Lead Architect, AWS Worldwide Public Sector
• Andrew McDermott, Solutions Architect
• Nabil Merchant, Security Consultant, AWS Worldwide Public Sector
• Jim Collins, Principal Consultant, AWS Professional Services
• Michael Alpaugh, Senior Security Architect, AWS Worldwide Public Sector

Further reading

For additional information, refer to the following:

• AWS Whitepapers
• AWS Documentation

Document revisions

Date | Description
November 3, 2021 | Major structural update and additional content; updated diagrams; compliance updates
April 2018 | Updated diagrams; IL5 reference architecture section added; added descriptions of additional services
April 2015 | First publication

Notes

1 Department of Defense Cloud Computing Security Requirements Guide
2 Using AWS for Disaster Recovery
3 FedRAMP: About Us

Encrypting Data at Rest

Ken Beer, Ryan Holland

November 2014

This paper has been archived. For the latest security information, see the AWS Cloud Security Learning page (https://aws.amazon.com/security/security-learning) on the AWS
website.

Amazon Web Services – Encrypting Data at Rest in AWS, November 2014

Contents

Abstract
Introduction
The Key to Encryption: Who Controls the Keys?
Model A: You control the encryption method and the entire KMI
Model B: You control the encryption method; AWS provides the storage component of the KMI, while you provide the management layer of the KMI
Model C: AWS controls the encryption method and the entire KMI
Conclusion
References and Further Reading

Abstract

Organizational policies, or industry or government regulations, might require the use of encryption at rest to protect your data. The flexible nature of Amazon Web Services (AWS) allows you to choose from a variety of different options that meet your needs. This whitepaper provides an overview of the different methods available today for encrypting your data at rest.

Introduction

Amazon Web Services (AWS) delivers a secure, scalable cloud computing platform with high availability, offering the flexibility for you to build a wide range of applications. If you require an additional layer of security for the data you store in the cloud, there are several options for encrypting data at rest, ranging from completely automated AWS encryption solutions to manual client-side options. Choosing the right solution depends on which AWS service you're using and your requirements for key management. This whitepaper provides an overview of various methods for encrypting data at rest in AWS. Links to additional resources are provided for a deeper understanding of how to actually implement the encryption methods discussed.

The Key to Encryption: Who Controls the Keys?

Encryption on any system requires three components: (1) data to encrypt; (2) a method to encrypt the data using a cryptographic algorithm; and (3) encryption keys to be used in conjunction with the data and the algorithm. Most modern programming languages provide libraries with a wide range of available cryptographic algorithms, such as the Advanced Encryption Standard (AES). Choosing the right algorithm involves evaluating security, performance, and compliance requirements specific to your application. Although the selection of an encryption algorithm is important, protecting the keys from unauthorized access is critical.

Managing the security of encryption keys is often performed using a key management infrastructure (KMI). A KMI is composed of two sub-components: the storage layer that protects the plaintext keys, and the management layer that authorizes key usage. A common way to protect keys in a KMI is to use a hardware security module (HSM). An HSM is a dedicated storage and data processing device that performs cryptographic operations using keys on the device. An HSM typically provides tamper evidence, or resistance, to protect keys from unauthorized use. A software-based authorization layer controls who can administer the HSM and which users or applications can use which keys in the HSM.

As you deploy encryption for various data classifications in AWS, it is important to understand exactly who has access to your encryption keys or data, and under what conditions. As shown in Figure 1, there are three different models for how you and/or AWS provide the encryption method and the KMI:

• You control the encryption method and the entire KMI.
• You control the encryption method; AWS provides the storage component of the KMI, and you provide the management layer of the KMI.
• AWS controls the encryption method and the entire KMI.

Figure 1: Encryption models in AWS

Model A: You control the
encryption method and the entire KMI

In this model, you use your own KMI to generate, store, and manage access to keys, as well as control all encryption methods in your applications. The physical location of the KMI and the encryption method can be outside of AWS or in an Amazon Elastic Compute Cloud (Amazon EC2) instance you own. The encryption method can be a combination of open-source tools, AWS SDKs, or third-party software and/or hardware. The important security property of this model is that you have full control over the encryption keys and the execution environment that utilizes those keys in the encryption code. AWS has no access to your keys and cannot perform encryption or decryption on your behalf. You are responsible for the proper storage, management, and use of keys to ensure the confidentiality, integrity, and availability of your data. Data can be encrypted in AWS services as described in the following sections.

Amazon S3

You can encrypt data using any encryption method you want and then upload the encrypted data using the Amazon Simple Storage Service (Amazon S3) API. Most common application languages include cryptographic libraries that allow you to perform encryption in your applications. Two commonly available open-source tools are Bouncy Castle and OpenSSL. After you have encrypted an object and safely stored the key in your KMI, the encrypted object can be uploaded to Amazon S3 directly with a PUT request. To decrypt this data, you issue the GET request in the Amazon S3 API and then pass the encrypted data to your local application for decryption.

AWS provides an alternative to these open-source encryption tools with the Amazon S3 encryption client, which is an open-source set of APIs embedded into the AWS SDKs. This client lets you supply a key from your KMI that can be used to encrypt or decrypt your data as part of the call to Amazon S3. The SDK leverages the Java Cryptography Extension (JCE) in your application to take your symmetric or asymmetric key as input and encrypt the object prior to uploading it to Amazon S3. The process is reversed when the SDK is used to retrieve an object: the downloaded encrypted object from Amazon S3 is passed to the client along with the key from your KMI, and the underlying JCE in your application decrypts the object. The Amazon S3 encryption client is integrated into the AWS SDKs for Java, Ruby, and .NET, and it provides a transparent, drop-in replacement for any cryptographic code you might have used previously with your application that interacts with Amazon S3.

Although AWS provides the encryption method, you control the security of your data because you control the keys for that engine to use. If you're using the Amazon S3 encryption client on-premises, AWS never has access to your keys or unencrypted data. If you're using the client in an application running in Amazon EC2, a best practice is to pass keys to the client using secure transport (e.g., Secure Sockets Layer (SSL) or Secure Shell (SSH)) from your KMI to help ensure confidentiality. For more information, see the AWS SDK for Java documentation and Using Client-Side Encryption in the Amazon S3 Developer Guide. Figure 2 shows how these two methods of client-side encryption work for Amazon S3 data.

Figure 2: Amazon S3 client-side encryption from an on-premises system or from within your Amazon EC2 application

There are third-party solutions available that can simplify the key management process when encrypting data to Amazon S3. CloudBerry Explorer PRO for Amazon S3 and CloudBerry Backup both offer a client-side encryption option that applies a user-defined password to the encryption scheme to protect files stored on Amazon S3. For programmatic encryption needs, SafeNet ProtectApp for Java integrates with the SafeNet KeySecure KMI to provide client-side encryption in your application. The KeySecure KMI provides secure key storage and policy enforcement for keys that are
passed to the ProtectApp Java client, which is compatible with the AWS SDK. The KeySecure KMI can run as an on-premises appliance or as a virtual appliance in Amazon EC2. Figure 3 shows how the SafeNet solution can be used to encrypt data stored on Amazon S3.

Figure 3: Amazon S3 client-side encryption from an on-premises system or from within your application in Amazon EC2 using SafeNet ProtectApp and the SafeNet KeySecure KMI

Amazon EBS

Amazon Elastic Block Store (Amazon EBS) provides block-level storage volumes for use with Amazon EC2 instances. Amazon EBS volumes are network-attached and persist independently from the life of an instance. Because Amazon EBS volumes are presented to an instance as a block device, you can leverage most standard encryption tools for file-system-level or block-level encryption. Some common block-level open-source encryption solutions for Linux are Loop-AES, dm-crypt (with or without LUKS), and TrueCrypt. Each of these operates below the file system layer, using kernel-space device drivers to perform encryption and decryption of data. These tools are useful when you want all data written to a volume to be encrypted, regardless of what directory the data is stored in.

Another option would be to use file-system-level encryption, which works by stacking an encrypted file system on top of an existing file system. This method is typically used to encrypt a specific directory. eCryptfs and EncFS are two Linux-based open-source examples of file-system-level encryption tools. These solutions require you to provide keys, either manually or from your KMI.

An important caveat with both block-level and file-system-level encryption tools is that they can only be used to encrypt data volumes that are not Amazon EBS boot volumes. This is because these tools don't allow you to automatically make a trusted key available to the boot volume at startup.

Encrypting Amazon EBS volumes attached to Windows instances can be done using BitLocker or Encrypting File System (EFS), as well as open-source applications like TrueCrypt. In either case, you still need to provide keys to these encryption methods, and you can only encrypt data volumes.

There are AWS partner solutions that can help automate the process of encrypting Amazon EBS volumes, as well as supplying and protecting the necessary keys. Trend Micro SecureCloud and SafeNet ProtectV are two such partner products that encrypt Amazon EBS volumes and include a KMI. Both products are able to encrypt boot volumes in addition to data volumes. These solutions also support use cases where Amazon EBS volumes attach to auto-scaled Amazon EC2 instances. Figure 4 shows how the SafeNet and Trend Micro solutions can be used to encrypt data stored on Amazon EBS using keys managed on-premises, via software as a service (SaaS), or in software running on Amazon EC2.

Figure 4: Encryption in Amazon EBS using SafeNet ProtectV or Trend Micro SecureCloud

AWS Storage Gateway

AWS Storage Gateway is a service connecting an on-premises software appliance with Amazon S3. It can be exposed to your network as an iSCSI disk to facilitate copying data from other sources. Data on disk volumes attached to the AWS Storage Gateway is automatically uploaded to Amazon S3 based on policy. You can encrypt source data on the disk volumes using any of the file encryption methods described previously (e.g., Bouncy Castle or OpenSSL) before it reaches the disk. You can also use a block-level encryption tool (e.g., BitLocker or dm-crypt/LUKS) on the iSCSI endpoint that AWS Storage Gateway exposes, to encrypt all data on the disk volume. Alternatively, two AWS partner solutions, Trend Micro SecureCloud and SafeNet StorageSecure, can perform
volume exposed by AWS Storage Gateway These partners provid e an easy check box solution to both encrypt data and manage the necessary keys that is similar in design to how their Amazon EBS encryption solutions work Amazon RDS Encryption of data in Amazon Relational Database Service (Amazon RDS) using client side technology requires you to consider how you want data queries to work Because Amazon RDS doesn’t expose the attached disk it uses for data storage transparent disk encryption using techniques described in the previous Amazon EBS section are not available to you However selective encryption of database fields in your application can be done using any of the standard encryption libraries mentioned previously (eg Bouncy Castle OpenSSL) before the data is passed to your Amazon RDS instance While this specific field data would not easily support range queries in the database queries based on unencrypted fields can still return useful results The encrypted fields of the returned results can be decrypted by your local application for presentation To support more efficient querying of encrypted data you can store a keyed hash message authentication code (HMAC) of an encrypted field in your schema and you can supply a key for the hash function Subsequent queries of protected fields that contain the HMAC of the data being sought would not disclose the plaintext values in the query This allows the database to perform a query against the encrypted data in your database without disclosing the plaintext values in the query Any of the encryption methods you choose must be performed on your own application instance before data is sent to the Amazon RDS instance CipherCloud and Voltage Secur ity are two AWS partners with solutions that simplify protecting the confidentiality of data in Amazon RDS Both vendors have the ability to encrypt data using format preserving encryption (FPE) that allows ciphertext to be inserted into the database without bre aking the schema They 
They also support tokenization options with integrated lookup tables. In either case, your data is encrypted or tokenized in your application before being written to the Amazon RDS instance. These partners provide options to index and search against databases with encrypted or tokenized fields. The unencrypted or untokenized data can be read from the database by other applications without needing to distribute keys or mapping tables to those applications to unlock the encrypted or tokenized fields. For example, you could move data from Amazon RDS to the Amazon Redshift data-warehousing solution and run queries against the non-sensitive fields while keeping sensitive fields encrypted or tokenized.

Figure 5 shows how the Voltage solution can be used within Amazon EC2 to encrypt data before it is written to the Amazon RDS instance. The encryption keys are pulled from the Voltage KMI located in your data center by the Voltage Security client running on your applications on Amazon EC2.

Figure 5: Encrypting data in your Amazon EC2 applications before writing to Amazon RDS using Voltage SecureData

CipherCloud for Amazon Web Services is a solution that works in a way similar to the Voltage Security client for applications running in Amazon EC2 that need to send encrypted data to and from Amazon RDS. CipherCloud provides a JDBC driver that can be installed on the application, regardless of whether it's running in Amazon EC2 or in your data center. In addition, the CipherCloud for Any App solution can be deployed as an inline gateway to intercept data as it is being sent to and from your Amazon RDS instance. Figure 6 shows how the CipherCloud solution can be deployed this way to encrypt or tokenize data leaving your data center before it is written to the Amazon RDS instance.

Figure 6: Encrypting
data in your data center before writing to Amazon RDS using CipherCloud Encryption Gateway

Amazon EMR

Amazon Elastic MapReduce (Amazon EMR) provides an easy-to-use Hadoop implementation running on Amazon EC2. Performing encryption throughout the MapReduce operation involves encryption and key management at four distinct points:

1. The source data
2. The Hadoop Distributed File System (HDFS)
3. The shuffle phase
4. The output data

If the source data is not encrypted, this step can be skipped and SSL can be used to help protect data in transit to the Amazon EMR cluster. If the source data is encrypted, then your MapReduce job needs to be able to decrypt the data as it is ingested. If your job flow uses Java and the source data is in Amazon S3, you can use any of the client decryption methods described in the previous Amazon S3 sections.

The storage used for the HDFS mount point is the ephemeral storage of the cluster nodes. Depending on the instance type, there might be more than one mount. Encrypting these mount points requires the use of an Amazon EMR bootstrap script that does the following:

• Stops the Hadoop service
• Installs a file system encryption tool on the instance
• Creates an encrypted directory to mount the encrypted file system on top of the existing mount points
• Restarts the Hadoop service

You could, for example, perform these steps using the open-source eCryptfs package and an ephemeral key generated in your code on each of the HDFS mounts. You don't need to worry about persistent storage of this encryption key, because the data it encrypts does not persist beyond the life of the HDFS instance.

The shuffle phase involves passing data between cluster nodes before the reduce step. To encrypt this data in transit, you can enable SSL with a Configure Hadoop bootstrap option when you create your cluster. Finally, to enable encryption of the output data, your MapReduce job should encrypt the
output using a key sourced from your KMI. This data can be sent to Amazon S3 for storage in encrypted form.

Model B: You control the encryption method, AWS provides the KMI storage component, and you provide the KMI management layer

This model is similar to Model A in that you manage the encryption method, but it differs from Model A in that the keys are stored in an AWS CloudHSM appliance rather than in a key storage system that you manage on premises. While the keys are stored in the AWS environment, they are inaccessible to any employee at AWS, because only you have access to the cryptographic partitions within the dedicated HSM to use the keys.

The AWS CloudHSM appliance has both physical and logical tamper detection and response mechanisms that trigger zeroization of the appliance. Zeroization erases the HSM's volatile memory, where any keys in the process of being decrypted were stored, and destroys the key that encrypts stored objects, effectively causing all keys on the HSM to be inaccessible and unrecoverable.

When you determine whether using AWS CloudHSM is appropriate for your deployment, it is important to understand the role that an HSM plays in encrypting data. An HSM can be used to generate and store key material and can perform encryption and decryption operations, but it does not perform any key lifecycle management functions (e.g., access control policy, key rotation). This means that a compatible KMI might be needed in addition to the AWS CloudHSM appliance before deploying your application. The KMI you provide can be deployed either on premises or within Amazon EC2, and it can communicate with the AWS CloudHSM instance securely over SSL to help protect data and encryption keys. Because the AWS CloudHSM service uses SafeNet Luna appliances, any key management server that supports the SafeNet Luna platform can also be used with AWS CloudHSM. Any of the encryption options described for AWS services in Model A can work with AWS CloudHSM, as long as the solution supports
the SafeNet Luna platform. This allows you to run your KMI within the AWS compute environment while maintaining a root of trust in a hardware appliance to which only you have access.

Applications must be able to access your AWS CloudHSM appliance in an Amazon Virtual Private Cloud (Amazon VPC). The AWS CloudHSM client provided by SafeNet interacts with the AWS CloudHSM appliance to encrypt data from your application. Encrypted data can then be sent to any AWS service for storage. Database, disk volume, and file encryption applications can all be supported with AWS CloudHSM and your custom application. Figure 7 shows how the AWS CloudHSM solution works with your applications running on Amazon EC2 in an Amazon VPC.

Figure 7: AWS CloudHSM deployed in Amazon VPC

To achieve the highest availability and durability of keys in your AWS CloudHSM appliance, we recommend deploying multiple AWS CloudHSM appliances across Availability Zones or in conjunction with an on-premises SafeNet Luna appliance that you manage. The SafeNet Luna solution supports secure replication of keying material across appliances. For more information, see AWS CloudHSM on the AWS website.

Model C: AWS controls the encryption method and the entire KMI

In this model, AWS provides server-side encryption of your data, transparently managing the encryption method and the keys.

AWS Key Management Service (KMS)

AWS Key Management Service (KMS) is a managed encryption service that lets you provision and use keys to encrypt your data in AWS services and your applications. Master keys in AWS KMS are used in a fashion similar to the way master keys in an HSM are used. After master keys are created, they are designed never to be exported from the service. Data can be sent into the service to be encrypted or decrypted under a specific master key under
your account. This design gives you centralized control over who can access your master keys to encrypt and decrypt data, and it gives you the ability to audit this access. AWS KMS is natively integrated with other AWS services, including Amazon EBS, Amazon S3, and Amazon Redshift, to simplify encryption of your data within those services. AWS SDKs are integrated with AWS KMS to let you encrypt data in your custom applications. For applications that need to encrypt data, AWS KMS provides global availability, low latency, and a high level of durability for your keys. Visit https://aws.amazon.com/kms/ or download the KMS Cryptographic Details whitepaper to learn more.

AWS KMS and other services that encrypt your data directly use a method called envelope encryption to provide a balance between performance and security. Figure 8 describes envelope encryption:

1. A data key is generated by the AWS service at the time you request your data to be encrypted.
2. The data key is used to encrypt your data.
3. The data key is then encrypted with a key-encrypting key unique to the service storing your data.
4. The encrypted data key and the encrypted data are then stored by the AWS storage service on your behalf.

Figure 8: Envelope encryption

The key-encrypting keys used to encrypt data keys are stored and managed separately from the data and the data keys. Strict access controls are placed on the encryption keys, designed to prevent unauthorized use by AWS employees. When you need access to your plaintext data, this process is reversed: the encrypted data key is decrypted using the key-encrypting key, and the data key is then used to decrypt your data.

The following AWS services offer a variety of encryption features to choose from.

Amazon S3

There are three ways of encrypting your data in Amazon S3 using server-side encryption:

1. Server-side encryption: You can set an API flag or check a box in the AWS Management Console to have data encrypted before it is written to disk in Amazon S3. Each object is encrypted with a unique data key. As an additional safeguard, this key is encrypted with a periodically rotated master key managed by Amazon S3. Amazon S3 server-side encryption uses 256-bit Advanced Encryption Standard (AES) keys for both object and master keys. This feature is offered at no additional cost beyond what you pay for using Amazon S3.

2. Server-side encryption using customer-provided keys: You can use your own encryption key while uploading an object to Amazon S3. This encryption key is used by Amazon S3 to encrypt your data using AES-256. After the object is encrypted, the encryption key you supplied is deleted from the Amazon S3 system that used it to protect your data. When you retrieve this object from Amazon S3, you must provide the same encryption key in your request. Amazon S3 verifies that the encryption key matches, decrypts the object, and returns the object to you. This feature is offered at no additional cost beyond what you pay for using Amazon S3.

3. Server-side encryption using KMS: You can encrypt your data in Amazon S3 by defining an AWS KMS master key within your account that you want to use to encrypt the unique object key (referred to as a data key in Figure 8) that will ultimately encrypt your object. When you upload your object, a request is sent to AWS KMS to create an object key. AWS KMS generates this object key and encrypts it using the master key that you specified earlier; it then returns this encrypted object key along with the plaintext object key to Amazon S3. The Amazon S3 web server encrypts your object using the plaintext object key, stores the now-encrypted object (with the encrypted object key), and deletes the plaintext object key from memory. To retrieve this encrypted object, Amazon S3 sends the encrypted object key to AWS KMS, which decrypts it using the correct master key and returns the plaintext object key to Amazon S3.
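The envelope-encryption flow in Figure 8 can be illustrated end to end with a short sketch. The cipher below is a toy HMAC-based keystream, used only so the example is self-contained and runnable; the AWS services described here use AES, and every name in this sketch is an assumption for illustration:

```python
import hashlib
import hmac
import os

def keystream_xor(key, data):
    # Toy PRF-in-counter-mode stream cipher so the sketch is self-contained.
    # Illustration only: the AWS services described in this paper use AES.
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hmac.new(key, block.to_bytes(8, "big"), hashlib.sha256).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

plaintext = b"my secret object"
data_key = os.urandom(32)                               # 1. fresh data key per object
ciphertext = keystream_xor(data_key, plaintext)         # 2. data key encrypts the data
key_encrypting_key = os.urandom(32)                     # held by the service, never stored with the data
wrapped_key = keystream_xor(key_encrypting_key, data_key)  # 3. wrap the data key
# 4. the service stores (ciphertext, wrapped_key); reading reverses the steps
recovered_key = keystream_xor(key_encrypting_key, wrapped_key)
recovered = keystream_xor(recovered_key, ciphertext)
print(recovered)  # b'my secret object'
```

The design point is that only the small data key, not the whole object, ever needs to be re-encrypted if the key-encrypting key is rotated.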
With the plaintext object key, Amazon S3 decrypts the encrypted object and returns it to you. For pricing of this option, refer to the AWS Key Management Service pricing page.

Amazon EBS

When creating a volume in Amazon EBS, you can choose to encrypt it using an AWS KMS master key within your account that will encrypt the unique volume key that will ultimately encrypt your EBS volume. After you make your selection, the Amazon EC2 server sends an authenticated request to AWS KMS to create a volume key. AWS KMS generates this volume key, encrypts it using the master key, and returns the plaintext volume key and the encrypted volume key to the Amazon EC2 server. The plaintext volume key is stored in memory to encrypt and decrypt all data going to and from your attached EBS volume. When the encrypted volume (or any encrypted snapshot derived from that volume) needs to be reattached to an instance, a call is made to AWS KMS to decrypt the encrypted volume key. AWS KMS decrypts this encrypted volume key with the correct master key and returns the decrypted volume key to Amazon EC2.

Amazon Glacier

Before it's written to disk, data is always automatically encrypted using 256-bit AES keys unique to the Amazon Glacier service that are stored in separate systems under AWS control. This feature is offered at no additional cost beyond what you pay for using Amazon Glacier.

AWS Storage Gateway

AWS Storage Gateway transfers your data to AWS over SSL and stores data encrypted at rest in Amazon S3 or Amazon Glacier using their respective server-side encryption schemes.

Amazon EMR

S3DistCp is an Amazon EMR feature that moves large amounts of data from Amazon S3 into HDFS, from HDFS to Amazon S3, and between Amazon S3 buckets. S3DistCp supports the ability to request that Amazon S3 use server-side encryption when it writes EMR data to an Amazon S3 bucket you manage. This feature is offered at no additional cost beyond what you pay for using Amazon S3 to store your Amazon EMR data.
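The three Amazon S3 server-side encryption options above differ only in which extra parameters accompany the upload request. A hedged sketch, using boto3-style parameter names (the key alias is hypothetical, and encoding details vary by SDK):

```python
def sse_put_params(mode, kms_key_id=None, customer_key=None):
    """Extra PutObject parameters for each Amazon S3 server-side
    encryption option (boto3-style parameter names, for illustration)."""
    if mode == "sse-s3":
        # Option 1: keys generated, rotated, and stored by Amazon S3
        return {"ServerSideEncryption": "AES256"}
    if mode == "sse-kms":
        # Option 3: an AWS KMS master key that you choose wraps the object key
        return {"ServerSideEncryption": "aws:kms", "SSEKMSKeyId": kms_key_id}
    if mode == "sse-c":
        # Option 2: you supply the 256-bit key with every upload and download
        # request; most SDKs add the base64 encoding and key-MD5 checksum for you
        return {"SSECustomerAlgorithm": "AES256", "SSECustomerKey": customer_key}
    raise ValueError("unknown mode: " + mode)

params = sse_put_params("sse-kms", kms_key_id="alias/my-app-key")  # hypothetical alias
```

Note that with customer-provided keys (option 2), the same key parameters must be sent again on retrieval, matching the paper's description that Amazon S3 does not retain your key.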
Oracle on Amazon RDS

You can choose to license the Oracle Advanced Security option for Oracle on Amazon RDS to leverage the native Transparent Data Encryption (TDE) and Native Network Encryption (NNE) features. The Oracle encryption module creates data- and key-encrypting keys to encrypt the database. The key-encrypting keys specific to your Oracle instance on Amazon RDS are themselves encrypted by a periodically rotated 256-bit AES master key. This master key is unique to the Amazon RDS service and is stored in separate systems under AWS control.

Microsoft SQL Server on Amazon RDS

You can choose to provision Transparent Data Encryption (TDE) for Microsoft SQL Server on Amazon RDS. The SQL Server encryption module creates data- and key-encrypting keys to encrypt the database. The key-encrypting keys specific to your SQL Server instance on Amazon RDS are themselves encrypted by a periodically rotated, regional 256-bit AES master key. This master key is unique to the Amazon RDS service and is stored in separate systems under AWS control. This feature is offered at no additional cost beyond what you pay for using Microsoft SQL Server on Amazon RDS.

Amazon Redshift

When creating an Amazon Redshift cluster, you can optionally choose to encrypt all data in user-created tables. There are three options to choose from for server-side encryption of an Amazon Redshift cluster:

1. In the first option, data blocks (including backups) are encrypted using random 256-bit AES keys. These keys are themselves encrypted using a random 256-bit AES database key. This database key is encrypted by a 256-bit AES cluster master key that is unique to your cluster. The cluster master key is encrypted with a periodically rotated regional master key unique to the Amazon Redshift service that is stored in separate systems under AWS control. This feature is offered at
no additional cost beyond what you pay for using Amazon Redshift.

2. With the second option, the 256-bit AES cluster master key used to encrypt your database keys is generated in your AWS CloudHSM or by using a SafeNet Luna HSM appliance on premises. This cluster master key is then encrypted by a master key that never leaves your HSM. When the Amazon Redshift cluster starts up, the cluster master key is decrypted in your HSM and used to decrypt the database key, which is sent to the Amazon Redshift hosts to reside only in memory for the life of the cluster. If the cluster ever restarts, the cluster master key is again retrieved from your HSM; it is never stored on disk in plaintext. This option lets you more tightly control the hierarchy and lifecycle of the keys used to encrypt your data. This feature is offered at no additional cost beyond what you pay for using Amazon Redshift (and AWS CloudHSM, if you choose that option for storing keys).

3. In the third option, the 256-bit AES cluster master key used to encrypt your database keys is generated in AWS KMS. This cluster master key is then encrypted by a master key within AWS KMS. When the Amazon Redshift cluster starts up, the cluster master key is decrypted in AWS KMS and used to decrypt the database key, which is sent to the Amazon Redshift hosts to reside only in memory for the life of the cluster. If the cluster ever restarts, the cluster master key is again retrieved from the hardened security appliance in AWS KMS; it is never stored on disk in plaintext. This option lets you define fine-grained controls over the access and usage of your master keys and audit these controls through AWS CloudTrail. For pricing of this option, refer to the AWS Key Management Service pricing page.

In addition to encrypting data generated within your Amazon Redshift cluster, you can also load encrypted data into Amazon Redshift from Amazon S3 that was
previously encrypted using the Amazon S3 Encryption Client and keys you provide. Amazon Redshift supports the decryption and re-encryption of data going between Amazon S3 and Amazon Redshift to protect the full lifecycle of your data.

These server-side encryption features across multiple services in AWS enable you to encrypt your data simply by making a configuration setting in the AWS Management Console or by making a CLI or API request for the given AWS service. The authorized use of encryption keys is automatically and securely managed by AWS. Because unauthorized access to those keys could lead to the disclosure of your data, we have built systems and processes with strong access controls that minimize the chance of unauthorized access, and we have had these systems verified by third-party audits to achieve security certifications including SOC 1, 2, and 3, PCI DSS, and FedRAMP.

Conclusion

We have presented three different models for how encryption keys are managed and where they are used. If you take all responsibility for the encryption method and the KMI, you can have granular control over how your applications encrypt data. However, that granular control comes at a cost, both in terms of deployment effort and an inability to have AWS services tightly integrate with your applications' encryption methods. As an alternative, you can choose a managed service that enables easier deployment and tighter integration with AWS cloud services. This option offers check-box encryption for several services that store your data, control over your own keys, secured storage for your keys, and auditability on all data access attempts. Table 1 summarizes the available options for encrypting data at rest across AWS. We recommend that you determine which encryption and key management model is most appropriate for your data classifications in the context of the AWS service you are using.

Table 1: Summary of data-at-rest encryption options, by AWS service and encryption/KMI model

• Amazon S3 — Model A (client-side, customer-managed keys): Bouncy Castle, OpenSSL, Amazon S3 encryption client in the AWS SDK for Java. Model A (client-side partner solutions with KMI): SafeNet ProtectApp for Java. Model B (customer-managed keys in AWS CloudHSM): custom Amazon VPC/EC2 application integrated with the AWS CloudHSM client. Model C (server-side, AWS-managed keys): Amazon S3 server-side encryption, server-side encryption with customer-provided keys, or server-side encryption with AWS Key Management Service.
• Amazon Glacier — Model A: N/A. Partner solutions: N/A. Model B: custom Amazon VPC/EC2 application integrated with the AWS CloudHSM client. Model C: all data is automatically encrypted using server-side encryption.
• AWS Storage Gateway — Model A: Linux block level: Loop-AES, dm-crypt (with or without LUKS), TrueCrypt; Linux file system: eCryptfs, EncFs; Windows block level: TrueCrypt; Windows file system: BitLocker. Partner solutions: Trend Micro SecureCloud, SafeNet StorageSecure. Model B: N/A. Model C: Amazon S3 server-side encryption.
• Amazon EBS — Model A: Linux block level: Loop-AES, dm-crypt+LUKS, TrueCrypt; Linux file system: eCryptfs, EncFs; Windows block level: TrueCrypt; Windows file system: BitLocker, EFS. Partner solutions: Trend Micro SecureCloud, SafeNet ProtectV. Model B: custom Amazon VPC/EC2 application integrated with the AWS CloudHSM client. Model C: Amazon EBS encryption with AWS Key Management Service.
• Oracle on Amazon RDS — Model A: Bouncy Castle, OpenSSL. Partner solutions: CipherCloud Database Gateway, Voltage SecureData. Model B: custom Amazon VPC/EC2 application integrated with the AWS CloudHSM client. Model C: Transparent Data Encryption (TDE) and Native Network Encryption (NNE) with the optional Oracle Advanced Security license.
• Microsoft SQL Server on Amazon RDS — Model A: Bouncy Castle, OpenSSL. Partner solutions: CipherCloud Database Gateway, Voltage SecureData. Model B: custom Amazon VPC/EC2 application integrated with the AWS CloudHSM client. Model C: TDE for Microsoft SQL Server.
• Amazon Redshift — Model A: N/A. Partner solutions: N/A. Model B: encrypted Amazon Redshift clusters with your master key managed in AWS CloudHSM or an on-premises SafeNet Luna HSM appliance. Model C: encrypted Amazon Redshift clusters with an AWS-managed master key.
• Amazon EMR — Model A: eCryptfs. Partner solutions: N/A. Model B: custom Amazon VPC/EC2 application integrated with the AWS CloudHSM client. Model C: S3DistCp using Amazon S3 server-side encryption to protect persistently stored data.

References and Further Reading

• Bouncy Castle Java crypto library: http://www.bouncycastle.org/
• OpenSSL crypto library: http://www.openssl.org/
• CloudBerry Explorer PRO for Amazon S3 encryption: http://www.cloudberrylab.com/amazon-s3-explorer-pro-cloudfront-IAM.aspx
• Client-Side Data Encryption with the AWS SDK for Java and Amazon S3: http://aws.amazon.com/articles/2850096021478074
• SafeNet encryption products for Amazon S3, Amazon EBS, and AWS CloudHSM: http://www.safenet-inc.com/
• Trend Micro SecureCloud: http://www.trendmicro.com/us/enterprise/cloud-solutions/secure-cloud/index.html
• CipherCloud for AWS and CipherCloud for Any App: http://www.ciphercloud.com/
• Voltage Security SecureData Enterprise: http://www.voltage.com/products/securedata-enterprise/
• AWS CloudHSM: https://aws.amazon.com/cloudhsm/
• AWS Key Management Service: https://aws.amazon.com/kms/
• Key Management Service Cryptographic Details whitepaper: https://d0.awsstatic.com/whitepapers/KMS-Cryptographic-Details.pdf
• Amazon EMR S3DistCp to encrypt data in Amazon S3: http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/UsingEMR_s3distcp.html
• Transparent Data Encryption for Oracle on Amazon RDS: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.html#Appendix.Oracle.Options.AdvSecurity
• Transparent Data Encryption for Microsoft SQL Server on Amazon RDS: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html#SQLServer.Concepts.General.Options
• Amazon Redshift encryption: http://aws.amazon.com/redshift/faqs/#0210
• AWS Security Blog:
http://blogs.aws.amazon.com/security

Document Revisions

November 2013: First version.
November 2014:
• Introduced section on AWS Key Management Service (KMS) and Amazon EBS in Model C
• Updated sections in Model C for Amazon S3 and Amazon Redshift,General,consultant,Best Practices
Encrypting_File_Data_with_Amazon_Elastic_File_System,"Encrypting File Data with Amazon Elastic File System: Encryption of Data at Rest and in Transit, April 2018. This paper has been archived; for the most recent version, see https://docs.aws.amazon.com/whitepapers/latest/efs-encrypted-file-systems/efs-encrypted-file-systems.html

© 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
Basic Concepts and Terminology
Encryption of Data at Rest
Managing Keys
Creating an Encrypted File System
Using an Encrypted File System
Enforcing Encryption of Data at Rest
Detecting Unencrypted File Systems
Encryption of Data in Transit
Setting up Encryption of Data in Transit
Using Encryption of Data in Transit
Conclusion
Contributors
Further Reading
Document Revisions

Abstract

In today's world of cybercrime, hacking attacks, and
the occasional security breach, securing data has become increasingly important to organizations. Government regulations and industry or company compliance policies may require data of different classifications to be secured by using proven encryption policies, cryptographic algorithms, and proper key management. This paper outlines best practices for encrypting shared file systems on AWS using Amazon Elastic File System (Amazon EFS).

Introduction

Amazon Elastic File System (Amazon EFS) provides simple, scalable, highly available, and highly durable shared file systems in the cloud. The file systems you create using Amazon EFS are elastic, allowing them to grow and shrink automatically as you add and remove data. They can grow to petabytes in size, distributing data across an unconstrained number of storage servers in multiple Availability Zones. Data stored in these file systems can be encrypted at rest and in transit using Amazon EFS.

For encryption of data at rest, you can create encrypted file systems through the AWS Management Console or the AWS Command Line Interface (AWS CLI). Or you can create encrypted file systems programmatically through the Amazon EFS API or one of the AWS SDKs. Amazon EFS integrates with AWS Key Management Service (AWS KMS) for key management. You can also enable encryption of data in transit by mounting the file system and transferring all NFS traffic over an encrypted Transport Layer Security (TLS) tunnel.

This paper outlines best practices for encrypting shared file systems on AWS using Amazon EFS. It describes how to enable encryption of data in transit at the client connection layer and how to create an encrypted file system in the AWS Management Console and in the AWS CLI. Using the APIs and SDKs to create an encrypted file system is outside the scope of this paper, but you can learn more about how this is done by reading Amazon EFS API in the Amazon
EFS User Guide or the SDK documentation.

Basic Concepts and Terminology

This section defines concepts and terminology referenced in this whitepaper.

• Amazon Elastic File System (Amazon EFS) – A highly available and highly durable service that provides simple, scalable, shared file storage in the AWS Cloud. Amazon EFS provides a standard file system interface and file system semantics. You can store virtually an unlimited amount of data across an unconstrained number of storage servers in multiple Availability Zones.

• AWS Identity and Access Management (IAM) – A service that enables you to securely control fine-grained access to AWS service APIs. Policies are created and used to limit access to individual users, groups, and roles. You can manage your AWS KMS keys through the IAM console.

• AWS KMS – A managed service that makes it easy for you to create and manage the encryption keys used to encrypt your data. It is fully integrated with AWS CloudTrail to provide logs of API calls made by AWS KMS on your behalf to help meet compliance or regulatory requirements.

• Customer master key (CMK) – Represents the top of your key hierarchy. It contains key material to encrypt and decrypt data. AWS KMS can generate this key material, or you can generate it and then import it into AWS KMS. CMKs are specific to an AWS account and AWS Region and can be customer managed or AWS managed.

  o AWS-managed CMK – A CMK that is generated by AWS on your behalf. An AWS-managed CMK is created when you enable encryption for a resource of an integrated AWS service. AWS-managed CMK key policies are managed by AWS, and you cannot change them. There is no charge for the creation or storage of AWS-managed CMKs.

  o Customer-managed CMK – A CMK you create by using the AWS Management Console or API, AWS CLI, or SDKs. You can use a customer-managed CMK when you need more granular control over the CMK.

• KMS permissions –
Permissions that control access to a customer-managed CMK. These permissions are defined using the key policy or a combination of IAM policies and the key policy. For more information, see Overview of Managing Access in the AWS KMS Developer Guide.

• Data keys – Cryptographic keys generated by AWS KMS to encrypt data outside of AWS KMS. AWS KMS allows authorized entities to obtain data keys protected by a CMK.

• Transport Layer Security (TLS, formerly called Secure Sockets Layer [SSL]) – Cryptographic protocols essential for encrypting information that is exchanged over the wire.

• EFS mount helper – A Linux client agent (amazon-efs-utils) used to simplify the mounting of EFS file systems. It can be used to set up, maintain, and route all NFS traffic over a TLS tunnel.

For more information about basic concepts and terminology, see AWS Key Management Service Concepts in the AWS KMS Developer Guide.

Encryption of Data at Rest

You can create an encrypted file system so all your data and metadata is encrypted at rest using an industry-standard AES-256 encryption algorithm. Encryption and decryption are handled automatically and transparently, so you don't have to modify your applications. If your organization is subject to corporate or regulatory policies that require encryption of data and metadata at rest, we recommend creating an encrypted file system.

Managing Keys

Amazon EFS is integrated with AWS KMS, which manages the encryption keys for encrypted file systems. AWS KMS also supports encryption by other AWS services such as Amazon Simple Storage Service (Amazon S3), Amazon Elastic Block Store (Amazon EBS), Amazon Relational Database Service (Amazon RDS), Amazon Aurora, Amazon Redshift, Amazon WorkMail, and Amazon WorkSpaces. To encrypt file system contents, Amazon EFS uses the Advanced Encryption Standard algorithm with XTS mode and a 256-bit key (XTS-AES-256). There are three important questions
to answer when considering how to secure data at rest by adopting any encryption policy. These questions are equally valid for data stored in managed and unmanaged services.

Where are keys stored? AWS KMS stores your master keys in highly durable storage in an encrypted format to help ensure that they can be retrieved when needed.

Where are keys used? Using an encrypted Amazon EFS file system is transparent to clients mounting the file system. All cryptographic operations occur within the EFS service, as data is encrypted before it is written to disk and decrypted after a client issues a read request.

Who can use the keys? AWS KMS key policies control access to encryption keys. You can combine them with IAM policies to provide another layer of control. Each key has a key policy. If the key is an AWS-managed CMK, AWS manages the key policy. If the key is a customer-managed CMK, you manage the key policy. These key policies are the primary way to control access to CMKs. They define the permissions that govern the use and management of keys. When you create an encrypted file system, you grant the EFS service access to use the CMK on your behalf. The calls that Amazon EFS makes to AWS KMS on your behalf appear in your CloudTrail logs as though they originated from your AWS account.

For more information about AWS KMS and how to manage access to encryption keys, see Overview of Managing Access to Your AWS KMS Resources in the AWS KMS Developer Guide. For more information about how AWS KMS manages cryptography, see the AWS KMS Cryptographic Details whitepaper. For more information about how to create an administrator IAM user and group, see Creating Your First IAM Admin User and Group in the IAM User Guide.

Creating an Encrypted File System

You can create an encrypted file system using the AWS Management Console, AWS CLI, Amazon EFS API, or AWS SDKs. You can only enable encryption for a file system when you create it.
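Programmatic creation follows the same shape as the console and CLI flows that this paper covers. The sketch below builds the CreateFileSystem request parameters only, which keeps it runnable without an AWS account; the parameter names are boto3/API-style, and the key alias is hypothetical:

```python
import uuid

def create_fs_params(encrypted=True, kms_key_id=None):
    """Request parameters for CreateFileSystem (boto3/API-style names; a
    hedged sketch, not an example from this paper)."""
    params = {
        "CreationToken": str(uuid.uuid4()),   # idempotency token, like $(uuidgen) in the CLI
        "PerformanceMode": "generalPurpose",
        "Encrypted": encrypted,               # encryption can only be chosen at creation time
    }
    if kms_key_id is not None:
        # omit to use the AWS-managed CMK (alias aws/elasticfilesystem)
        params["KmsKeyId"] = kms_key_id
    return params

params = create_fs_params(kms_key_id="alias/my-efs-key")  # hypothetical CMK alias
```

Leaving out the key ID mirrors the CLI guidance later in this section: supply a key only when you want a customer-managed CMK.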
when you create it. Amazon EFS integrates with AWS KMS for key management and uses a CMK to encrypt the file system. File system metadata, such as file names, directory names, and directory contents, is encrypted and decrypted using an EFS-managed key. The contents of your files (file data) are encrypted and decrypted using a CMK that you choose. The CMK can be one of three types:
• An AWS managed CMK for Amazon EFS
• A customer managed CMK from your AWS account
• A customer managed CMK from a different AWS account

All users have an AWS managed CMK for Amazon EFS whose alias is aws/elasticfilesystem. AWS manages this CMK's key policy and you cannot change it. There is no cost for creating and storing AWS managed CMKs. If you decide to use a customer managed CMK to encrypt your file system, select the key alias of the customer managed CMK that you own, or enter the Amazon Resource Name (ARN) of a customer managed CMK that is owned by a different account. With a customer managed CMK that you own, you control which users and services can use the key through key policies and key grants. You also control the life span and rotation of these keys by choosing when to disable, re-enable, delete, or revoke access to them. AWS KMS charges a fee for creating and storing customer managed CMKs.

For information about managing access to keys in other AWS accounts, see Allowing External AWS Accounts to Access a CMK in the AWS KMS Developer Guide.11 For more information about how to manage customer managed CMKs, see AWS Key Management Service Concepts in the AWS KMS Developer Guide.12

The following sections discuss how to create an encrypted file system using the AWS Management Console and using the AWS CLI.

Creating an Encrypted File System Using the AWS Management Console
To create an encrypted Amazon EFS file system using the AWS Management Console, follow these steps:
1. On the Amazon EFS console, select
Create file system to open the file system creation wizard.
2. For Step 1: Configure file system access, choose your VPC, create your mount targets, and then choose Next Step.
3. For Step 2: Configure optional settings, add any tags, choose your performance mode, select the box to enable encryption for your file system, select a KMS master key, and then choose Next Step.

Figure 1: Enabling encryption through the AWS Management Console

4. For Step 3: Review and create, review your settings and choose Create File System.

Creating an Encrypted File System Using the AWS CLI
When you use the AWS CLI to create an encrypted file system, you use additional parameters to set the encryption status and customer managed CMK. Be sure you are using the latest version of the AWS CLI. For information about how to upgrade your AWS CLI, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.13

In the CreateFileSystem operation, the --encrypted parameter is a Boolean and is required for creating encrypted file systems. The --kms-key-id parameter is required only when you use a customer managed CMK, and you include the key's alias or ARN. Do not include this parameter if you're using the AWS managed CMK.

    $ aws efs create-file-system \
        --creation-token $(uuidgen) \
        --performance-mode generalPurpose \
        --encrypted \
        --kms-key-id customer-managed-cmk-alias-or-arn

For more information about creating Amazon EFS file systems using the AWS Management Console, AWS CLI, AWS SDKs, or Amazon EFS API, see the Amazon EFS User Guide.14

Using an Encrypted File System
Encryption has minimal effect on I/O latency and throughput. Encryption and decryption are transparent to users, applications, and services. All data and metadata is encrypted by Amazon EFS on your behalf before it is written to disk, and is decrypted before it is read by clients. You don't need to change client tools, applications, or services to access an encrypted file system.

Enforcing Encryption of Data at Rest
Your organization might require the encryption of all data that meets a specific classification or is associated with a particular application, workload, or environment. You can enforce data encryption policies for Amazon EFS file systems by using detective controls that detect the creation of a file system and verify that encryption is enabled. If an unencrypted file system is detected, you can respond in a number of ways, ranging from deleting the file system and mount targets to notifying an administrator. Be aware that if you want to delete the unencrypted file system but retain the data, you should first create a new encrypted file system, copy the data over to it, and then delete the unencrypted file system.

Detecting Unencrypted File Systems
You can create an Amazon CloudWatch alarm to monitor CloudTrail logs for the CreateFileSystem event and trigger an alarm to notify an administrator if the file system that was created was unencrypted.

Creating a Metric Filter
To create a CloudWatch alarm that is triggered when an unencrypted Amazon EFS file system is created, follow this procedure. You must have an existing trail created that is sending CloudTrail logs to a CloudWatch Logs log group. For more information, see Sending Events to CloudWatch Logs in the AWS CloudTrail User Guide.15
1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/
2. In the navigation pane, choose Logs.
3. In the list of log groups, choose the log group that you created for CloudTrail log events.
4. Choose Create Metric Filter.
5. On the Define Logs Metric Filter page, choose Filter Pattern, and then type the following:

    { ($.eventName = CreateFileSystem) && ($.responseElements.encrypted IS FALSE) }
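The pattern in step 5 encodes a simple predicate: match CreateFileSystem events whose responseElements.encrypted field is false. Before continuing with the wizard, you can sanity-check that logic locally with ordinary shell tools. This is a minimal sketch: the two sample events are hypothetical, trimmed to the only fields the filter inspects, and the matches_filter function is an illustrative stand-in, not part of CloudWatch.

```shell
# Hypothetical, trimmed CloudTrail events (not real CloudTrail output)
unencrypted_event='{"eventName": "CreateFileSystem", "responseElements": {"encrypted": false}}'
encrypted_event='{"eventName": "CreateFileSystem", "responseElements": {"encrypted": true}}'

# Mirrors { ($.eventName = CreateFileSystem) && ($.responseElements.encrypted IS FALSE) }
# using plain substring matching: succeed only if both conditions hold.
matches_filter() {
  printf '%s' "$1" | grep -q '"eventName": "CreateFileSystem"' &&
    printf '%s' "$1" | grep -q '"encrypted": false'
}

matches_filter "$unencrypted_event" && echo "unencrypted: would increment the metric"
matches_filter "$encrypted_event" || echo "encrypted: would not increment the metric"
```

Note that this substring check only approximates CloudWatch Logs filter-pattern semantics; the real metric filter evaluates the full JSON event that CloudTrail delivers to the log group.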
6. Choose Assign Metric.
7. For Filter Name, type UnencryptedFileSystemCreated.
8. For Metric Namespace, type CloudTrailMetrics.
9. For Metric Name, type UnencryptedFileSystemCreatedEventCount.
10. Choose Show advanced metric settings.
11. For Metric Value, type 1.
12. Choose Create Filter.

Creating an Alarm
After you create the metric filter, follow this procedure to create an alarm.
1. On the Filters for Log_Group_Name page, next to the UnencryptedFileSystemCreated filter name, choose Create Alarm.
2. On the Create Alarm page, set the parameters shown in Figure 2.

Figure 2: Create a CloudWatch alarm

3. Choose Create Alarm.

Testing the Alarm for Unencrypted File System Created
You can test the alarm by creating an unencrypted file system as follows:
1. Open the Amazon EFS console at https://console.aws.amazon.com/efs
2. Choose Create File System.
3. From the VPC list, choose your default VPC.
4. Select the check boxes for all the Availability Zones. Be sure that they all have the default subnets, automatic IP addresses, and the default security groups chosen. These are your mount targets.
5. Choose Next Step.
6. Name your file system, and keep Enable encryption unchecked to create an unencrypted file system.
7. Choose Next Step.
8. Choose Create File System.

Your trail logs the CreateFileSystem operation and delivers the event to your CloudWatch Logs log group. The event triggers your metric alarm, and CloudWatch Logs sends you a notification about the change.

Encryption of Data in Transit
You can mount a file system so all NFS traffic is encrypted in transit using Transport Layer Security 1.2 (TLS, formerly called Secure Sockets Layer [SSL]) with an industry-standard AES-256 cipher. TLS is a set of industry-standard cryptographic protocols used for encrypting information that is exchanged over the wire. AES-256 is a 256-bit encryption
cipher used for data transmission in TLS. If your organization is subject to corporate or regulatory policies that require encryption of data and metadata in transit, we recommend setting up encryption in transit on every client accessing the file system.

Setting up Encryption of Data in Transit
The recommended method to set up encryption of data in transit is to download the EFS mount helper on each client. The EFS mount helper is an open-source utility that AWS provides to simplify using EFS, including setting up encryption of data in transit. The mount helper uses the EFS-recommended mount options by default.

1. Install the EFS mount helper.
• Amazon Linux: sudo yum install -y amazon-efs-utils
• Other Linux distributions: download from GitHub (https://github.com/aws/efs-utils) and install.
• Supported Linux distributions:
  o Amazon Linux 2017.09+
  o Amazon Linux 2+
  o Debian 9+
  o Red Hat Enterprise Linux / CentOS 7+
  o Ubuntu 16.04+
• The amazon-efs-utils package automatically installs the following dependencies:
  o NFS client (nfs-utils)
  o Network relay (stunnel)
  o Python

2. Mount the file system:

    sudo mount -t efs -o tls file-system-id efs-mount-point

• mount -t efs invokes the EFS mount helper.
• Using the DNS name of the file system or the IP address of a mount target is not supported when mounting using the EFS mount helper; use the file system ID instead.
• The EFS mount helper uses the AWS-recommended mount options by default. Overriding these default mount options is not recommended, but we provide the flexibility to do so when the occasion arises. We recommend thoroughly testing any mount option overrides so you understand how these changes impact file system access and performance.
• Below are the default mount options used by the EFS mount helper:
  o nfsvers=4.1
  o rsize=1048576
  o wsize=1048576
  o hard
  o timeo=600
  o retrans=2

3. Use the file /etc/fstab to automatically remount your file system after any
system restart.
• Add the following line to /etc/fstab:

    file-system-id efs-mount-point efs _netdev,tls 0 0

Using Encryption of Data in Transit
If your organization is subject to corporate or regulatory policies that require encryption of data in transit, we recommend using encryption of data in transit on every client accessing the file system. Encryption and decryption are configured at the connection level and add another layer of security.

Mounting the file system using the EFS mount helper sets up and maintains a TLS 1.2 tunnel between the client and the Amazon EFS service, and routes all NFS traffic over this encrypted tunnel. The certificate used to establish the encrypted TLS connection is signed by the Amazon Certificate Authority (CA) and trusted by most modern Linux distributions. The EFS mount helper also spawns a watchdog process that monitors all secure tunnels to each file system and ensures they are running.

After using the EFS mount helper to establish encrypted connections to Amazon EFS, no other user input or configuration is required. Encryption is transparent to user connections and applications accessing the file system. After successfully mounting and establishing an encrypted connection to an EFS file system using the EFS mount helper, the output of a mount command shows the file system is mounted and an encrypted tunnel has been established using the localhost (127.0.0.1) as the network relay. See sample output below:

    127.0.0.1:/ on efs-mount-point type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=20059,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1,local_lock=none,addr=127.0.0.1)

To map an efs-mount-point to an EFS file system, query the mount.log file in /var/log/amazon/efs and find the last successful mount operation. This can be done using a simple grep command like the one below:

    grep -E "Successfully mounted.*efs-mount-point" /var/log/amazon/efs/mount.log | tail -1

The output of this grep command will return the DNS name of the mounted EFS file system. See sample output below:

    2018-03-15 07:03:42,363 - INFO - Successfully mounted file-system-id.efs.region.amazonaws.com at efs-mount-point

Conclusion
Amazon EFS file system data can be encrypted at rest and in transit. You can encrypt data at rest by using CMKs that you can control and manage using AWS KMS. Creating an encrypted file system is as simple as selecting a check box in the Amazon EFS file system creation wizard in the AWS Management Console, or adding a single parameter to the CreateFileSystem operation in the AWS CLI, AWS SDKs, or Amazon EFS API. Using an encrypted file system is also transparent to services, applications, and users, with minimal effect on the file system's performance. You can encrypt data in transit by using the EFS mount helper to establish an encrypted TLS tunnel on each client, encrypting all NFS traffic between the client and the mounted EFS file system. Encryption of both data at rest and in transit is available to you at no additional cost.

Contributors
The following individuals and organizations contributed to this document:
• Darryl S. Osborne, storage specialist solutions architect, AWS
• Joseph Travaglini, sr. product manager, Amazon EFS

Further Reading
For additional information, see the following:
• AWS KMS Cryptographic Details whitepaper16
• Amazon EFS User Guide17

Document Revisions
Date / Description
April 2018: Added encryption of data in transit.
September 2017: First publication.

1 https://aws.amazon.com/efs/
2 https://aws.amazon.com/kms/
3 https://docs.aws.amazon.com/efs/latest/ug/API_CreateFileSystem.html
4 https://aws.amazon.com/tools/#sdk
5 https://aws.amazon.com/iam/
6 https://docs.aws.amazon.com/kms/latest/developerguide/control-access-
overview.html
7 https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html
8 https://docs.aws.amazon.com/kms/latest/developerguide/control-access-overview.html#managing-access
9 https://d0.awsstatic.com/whitepapers/KMS-Cryptographic-Details.pdf
10 https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_create-admin-group.html
11 https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying.html#key-policy-modifying-external-accounts
12 https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#master_keys
13 https://docs.aws.amazon.com/cli/latest/userguide/installing.html
14 https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html
15 https://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html
16 https://aws.amazon.com/whitepapers/
17 https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html
Notes",General,consultant,Best Practices
Establishing_Enterprise_Architecture_on_AWS,Establishing Enterprise Architecture on AWS
March 2018

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Abstract 4
Introduction 1
Enterprise Architecture Tenets 2
Enterprise
Architecture Domains 4
AWS Services that Support Enterprise Architecture Activities 6
Roles and Actors 7
Application Portfolio 8
Governance and Auditability 9
Change Management 10
Enterprise Architecture Repository 10
Conclusion 11
Contributors 12
Document Revisions 12

Abstract
This whitepaper outlines AWS practices and services that support enterprise architecture (EA) activities. It is written for IT leaders and enterprise architects in large organizations. Enterprise architecture guides organizations in the delivery of the target production landscape to realize their business vision in the cloud. There are many established enterprise architecture frameworks and methodologies. In this whitepaper, we will focus on the AWS services and practices that you can use to deliver common enterprise architecture artifacts and tools and provide business benefit to your organization. This whitepaper uses terms and definitions that are familiar to The Open Group Architecture Framework (TOGAF) practitioners, but it is not restricted to TOGAF or any other EA framework.1

Amazon Web Services – Establishing Enterprise Architecture on AWS, Page 1

Introduction
A key challenge facing many organizations is demonstrating the business value of their IT assets. Enterprise architecture aims to define the target IT landscape that realizes the business vision and drives value. The key goals of enterprise architecture are to:
• Analyze and evolve the organization's business vision and strategy
• Describe the business vision and strategy in a common manner (for example, business capabilities, functions, and processes)
• Provide tools, frameworks, and specifications to support governance in all the architectural practices
• Enable traceability across the IT landscape
• Define the programs and architectures needed to realize the target IT state

A key value proposition of a mature enterprise architecture practice is being able to do better "What if?" analysis, or impact analysis. Being able to identify which applications realize which business capabilities lets you make informed decisions on delivering your organization's business vision. For example:
• "What is the impact on our IT landscape if we decide to outsource a certain business service?"
• "What business capabilities and processes are impacted if we retire a certain IT system?"
• "What is the cost of realizing this aspect of our business vision?"

This whitepaper will help you create end-to-end traceability of IT assets, which is one of the main goals of enterprise architecture teams. Traceability, audit, and capture of "current state" is a perpetual challenge in a world of vendor-specific hardware and legacy systems. Often it is simply not possible for enterprises to catalog all of their assets. In this scenario, they cannot determine the business value of their IT landscape. Moving to the cloud gives enterprises an opportunity to achieve traceability of their assets in the cloud.

Enterprise Architecture Tenets
Enterprise architecture tenets are general rules and guidelines that inform and support the way in which an organization sets about fulfilling its mission. They are intended to be enduring and seldom amended. You should use tenets to guide your architecture design and cloud adoption. Tenets can be used through the entire lifecycle of an application in your IT landscape, from conception to delivery, and to support ongoing maintenance and continuous releases. Tenets are used in application design and should guide application governance and architectural reviews. We highly recommend creating cloud-based tenets to guide you in creating applications and workloads that will help you realize and govern your enterprise's target landscape and business vision. Examples of tenets might be:

"Maximize Cost Benefit for the Enterprise"
A cost-centric tenet encourages architects, application teams, IT stakeholders, and
business owners to always consider the cost effectiveness of their workloads. It encourages your enterprise to focus on projects that differentiate the business (value), not the infrastructure. Your enterprise should examine capital expenditure and operational expenditure for each workload. The result is customer-centric solutions that are the most cost-effective. These savings benefit both your organization and your customers.

"Business Continuity"
A business continuity tenet informs and drives the non-functional requirements for all current and future workloads in your enterprise. The geographic footprint and wide range of AWS services support the realization of this tenet. The AWS Cloud infrastructure is built around AWS Regions and Availability Zones. Each AWS Region is a separate geographic area. Each Region has multiple physically separated and isolated locations known as Availability Zones. Availability Zones are connected with low-latency, high-throughput, and highly redundant networking. This tenet guides the architecture and application teams to leverage the reliability and availability of the AWS Cloud.

"Agility and Flexibility"
This tenet enforces the need for all applications to be "future proof." In a cloud computing environment, new IT resources are only ever a click away, which means you reduce the time it takes to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for your organization, since the cost and time it takes to experiment and develop is significantly lower. Being flexible and agile also means that your enterprise responds rapidly to business requirements as customer behaviors evolve. The AWS Cloud enables teams to implement continuous integration and delivery practices across all development stages. DevOps, DevSecOps, and methodologies such as Scrum become easier to set up. Teams can quickly
compare and evaluate architectures and practices (e.g., microservices and serverless) to determine what solution best fits enterprise needs.

"Cloud First Strategy"
Such a tenet is key to an organization that wishes to migrate to the cloud. It prescribes that new applications should be in the cloud. This governance prohibits the deployment of new applications on non-approved infrastructure. Architectural and review boards can closely examine why a workload should be granted an exception and not deployed in the cloud.

"All Users, Services, and Applications Belong in an Organizational Unit"
An enterprise may use this tenet to ensure that its target landscape reflects the enterprise's organizational structure. It mandates that all cloud activities belong in an AWS organizational unit, which lets your enterprise govern the business vision globally but gives autonomy, when necessary, to various local business units.

"Security First"
This tenet describes the security values of the organization. For example: "Data is secured in transit and at rest," or "All infrastructure should be described as code," or "All workloads are approved by the security organization." Using this tenet, your architecture team can determine what level of trust they have in the cloud. Enterprises vary from zero trust to total trust. In a zero trust scenario, the enterprise would control all encryption keys, for example. They would decide to use customer managed keys with AWS Key Management Service.2 They would manage key rotation themselves and store the keys in their own hardware security module (HSM). In a total trust scenario, the enterprise would choose to allow AWS to manage the encryption keys and key rotation. They would also choose to use AWS CloudHSM.3 AWS can support your enterprise in both zero trust and total trust scenarios. The security tenet guides you in deciding where your enterprise is on that scale. Tenets should be
used to guide architectural design and decisions that drive the target landscape in the cloud. They provide a firm foundation for making architecture and planning decisions, for framing policies, procedures, and standards, and for supporting resolution of contradictory situations. Tenets should also be heavily leveraged during the architectural review phases of applications and workloads before they go live, to ensure the correct target landscape is being realized.

Enterprise Architecture Domains
Enterprise architecture guides your organization's business, information, process, and technology decisions to enable it to execute its business strategy and meet customer needs. There are typically four architecture domains:
• Business architecture domain – describes how the enterprise is organizationally structured and what functional capabilities are necessary to deliver the business vision. Business architecture addresses the questions WHAT and WHO: WHAT is the organization's business vision, strategy, and objectives that guide creation of business services or capabilities? WHO is executing defined business services or capabilities?
• Application architecture domain – describes the individual applications, their interactions, and their relationships to the core business processes of the organization. Application architecture addresses the question HOW: HOW are previously defined business services or capabilities implemented?
• Data architecture domain – describes the structure of an organization's logical and physical data assets and data management resources. Knowledge about your customers from data analytics lets you improve and continuously evolve business processes.
• Technology architecture domain – describes the software and hardware needed to implement the business, data, and application services.

Each of these domains has well-known artifacts, diagrams, and practices. Enterprise architects focus on each domain and how they relate to one another to deliver an organization's strategy. In addition, enterprise architecture tries to answer WHERE and WHY as well:
• WHERE are assets located?
• WHY is something being changed?

Figure 1 shows how these domains fit together.

Figure 1: The four domains of an enterprise architecture

AWS Services that Support Enterprise Architecture Activities
Several AWS services can support your enterprise architecture activities:
• AWS Organizations
• AWS Identity & Access Management (IAM)
• AWS Service Catalog
• AWS CloudTrail
• Amazon CloudWatch
• AWS Config
• AWS Tagging and Resource Grouping

Figure 2 shows how these services support your enterprise architecture.

Figure 2: AWS services that support an enterprise architecture

The following sections discuss many of the enterprise architecture activities and AWS services shown in Figure 2.

Roles and Actors
In the business architecture domain, there are actors and roles. An actor can be a person, organization, or system that has a role that initiates or interacts with activities. Actors belong to an enterprise and, in combination with the role, perform the business function. Understanding the actors in your organization enables you to create a definitive listing of all participants that interact with IT, including users and owners of IT systems. Understanding actor-to-role relationships is necessary to enable organizational change management and organizational transformation.

The actors and roles of your enterprise can be modelled on two levels. Typically, an organization has a corporate directory (e.g., Active Directory) that reflects its actors and roles. On a different level, you can enforce these components with AWS Identity and Access Management (IAM).4 IAM achieves the actor-role relationship while complementing AWS Organizations. In IAM, an actor is known as a user. An AWS account within an
actortorole relationships is necessary to enable organizational change management and organiz ational transformation The actors and roles of your enterprise can be modelled on two levels Typically an organization ha s a corporate directory (eg Active Directory) that reflects its actors and roles On a different level you can enforce these comp onents with AWS Identity and Access Management (IAM) 4 IAM achieves the actorrole relationship while complementing AWS Organizations In IAM an actor is known as a user An AWS account within an ArchivedAmazon Web Services – Establishing Enterprise Architecture on AWS Page 8 OU defines the users for that account and the corresponding roles that user s can adopt With IAM you can securely control access to AWS services and resources for your users You can also create and manage AWS users and groups and use permissions to allow and deny their access to AWS resources SCPs put bounds around the permissions that IAM policies can grant to entities in an account such as IAM users and roles The AWS account inherits the SCPs defined in or inherited by the OU Then within the AWS account you can write even more granular policies to define how and what the user or role can access You can apply t hese policies at the user or group level In this manner you can create very granular permissions for the actors and roles of your organization Key business relationships between OUs actors (user s) and roles can be reflected in IAM Application Portfolio Application portfolio management is an important part of the application architecture domain in an e nterprise architecture It covers managing an organization’s collection of software applications and software based services that are used to attain its business goals or objectives An agreed application portfolio allows a standard set of applications to be used in an organization You can use AWS Service Catalog to manage your enterprise’s application portfolio in the cloud 5 and centrally manage 
commonly deployed applications. It helps you achieve consistent governance and meet your compliance requirements. AWS Service Catalog ensures compliance with corporate standards by providing a single location where organizations can centrally manage catalogs of their applications. With AWS Service Catalog, you can control which applications and versions are available, the configuration of the available services, and permission access by an individual, group, department, or cost center. AWS Service Catalog lets you:
• Define your own application catalog. End users of your organization can quickly discover and deploy applications using a self-service portal.
• Centrally manage the lifecycle of applications. You can add new application versions as necessary, as well as control the use of applications by specifying constraints, such as the AWS Region in which a product can be launched.
• Grant access at a granular level. You can grant a user access to a portfolio to let that user browse and launch the products.
• Constrain how your AWS resources are deployed. You can restrict the ways that specific AWS resources can be deployed for a product. You can use constraints to apply limits to products for governance or cost control. For example, you can let your marketing users create campaign websites but restrict their access to provision the underlying databases.

Governance and Auditability
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account.6 With CloudTrail, you can log every API call made. This enables compliance with governance bodies internal and external to your organization. CloudTrail gives your organization transparency across its entire AWS landscape. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event
history simplifies security analysis, resource change tracking, and troubleshooting.

Amazon CloudWatch is a monitoring service for AWS Cloud resources and the applications you run on AWS.7 You can use CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. CloudWatch monitors and logs the behavior of your application landscape, and can also trigger events based on the behavior of your application. While CloudTrail tracks usage of AWS, CloudWatch monitors your application landscape. In combination, these two services help with architecture governance and audit functions.

Change Management
Enterprise architects manage transition architectures. Transition architectures are the incremental releases in production that bring the current state to the target state architecture. The goal of transition architectures is to ensure that the evolving architecture continues to deliver the target business strategy. Therefore, you need to manage changes to the architecture in a cohesive way. AWS Config is a service that lets you assess, audit, and evaluate the configurations of your AWS resources.8 AWS Config continuously monitors and records your AWS resource configurations and lets you automate the evaluation of recorded configurations against desired configurations. With AWS Config, you can review changes in configurations and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting.

Enterprise Architecture Repository
An enterprise architecture repository is a collection of artifacts that describes an organization's current and target IT landscape. The goal of the enterprise architecture repository is to reflect the organization's inventory of technology, data,
applications, and business artifacts, and to show the relationships between these components. Traditionally, in a non-cloud environment, organizations were restricted to choosing expensive off-the-shelf products to meet their enterprise architecture repository needs. You can avoid these expenses with AWS services.

AWS Tagging and Resource Groups let you organize your AWS landscape by applying tags at different levels of granularity.9 Tags allow you to label, collect, and organize resources and components within services. The Tag Editor lets you manage tags across services and AWS Regions.10 Using this approach, you can globally manage all the application, business, data, and technology components of your target landscape. A Resource Group is a collection of resources that share one or more tags.11 It can be used to create an enterprise architecture "view" of your IT landscape, consolidating AWS resources into a per-project (that is, the ongoing programs that realize your target landscape), per-entity (that is, capabilities, roles, processes), and per-domain (that is, Business, Application, Data, Technology) view.

You can use AWS Config, Tagging, and Resource Groups to see exactly what cloud assets your company is using at any moment. These services make it easier to detect when a rogue server or shadow application appears in your target production landscape.

You may wish to continue using a traditional repository tool, perhaps due to existing licensing commitments or legacy processes. In this scenario, the enterprise repository can run natively on an EC2 instance and be maintained as before.12

Conclusion

The role of an enterprise architect is to enable the organization to be innovative and respond rapidly to changing customer behavior. The enterprise architect holds the long-term business vision of the organization and is responsible for the journey it has to take to reach this target landscape. They support an
organization to achieve its objectives by successfully evolving across all domains: Business, Application, Technology, and Data. This is no different when moving to the cloud. The enterprise architect role is key to successful cloud adoption. Enterprise architects can use AWS services as architectural building blocks to guide the technology decisions of the organization and realize the enterprise's business vision.

It has been challenging for enterprise architects to measure their goals and demonstrate their value with on-premises architectures. With AWS Cloud adoption, enterprise architects can use AWS services to create traceability and relationships across the enterprise architecture domains, allowing the architect to correctly track how their organization is changing and improving. AWS lets the enterprise architect address end-to-end traceability, operational modeling, and governance. It is easier to gather data on transition architectures in the cloud as the organization moves to its target state. The wide breadth of AWS services, and their agility, also makes it easier for architects and application teams to respond rapidly when architectural deviations are identified and changes need to take place. Using AWS services, you can more easily execute and realize the value of enterprise architecture practices.

Contributors

The following individuals and organizations contributed to this document:

• Margo Cronin, Solutions Architect, AWS
• Nemanja Kostic, Solutions Architect, AWS

Document Revisions

Date | Description
April 2020 | Removed AWS Organizations section
March 2018 | First publication

Notes

1. http://www.opengroup.org/subjectareas/enterprise/togaf
2. https://aws.amazon.com/kms/
3. https://aws.amazon.com/cloudhsm/
4. https://aws.amazon.com/iam/
5. http://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html
6. https://aws.amazon.com/cloudtrail/
7. https://aws.amazon.com/cloudwatch/
8. http://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html
9. http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/what-are-resource-groups.html
10. http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/tag-editor.html
11. http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/what-are-resource-groups.html
12. https://aws.amazon.com/ec2/

Estimating AWS Deployment Costs for Microsoft SharePoint Server

March 2016

This paper has been archived. For the latest technical content about this subject, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Abstract 4
Introduction 5
AWS Regions and Availability Zones 5
Windows Server in Amazon EC2 6
Amazon EBS 6
Amazon S3 7
Amazon VPC 7
Elastic Load Balancing 7
AWS
Direct Connect 8
AWS Simple Monthly Calculator 8
Reviewing the SharePoint Reference Architecture 9
Licensing and Tenancy Options 10
License Included 10
BYOL 10
Using the Simple Monthly Calculator 12
Process Overview 12
Estimating Compute Costs 13
Estimating Storage Costs 17
Using Elastic IP 19
Estimating Data Transfers 19
Estimating Load Balancing 19
Choosing AWS Direct Connect and Amazon VPC 20
Reviewing the Estimate 21
Money-Saving Ideas 22
AWS Directory Service 22
Reserved Instances and Spot Instances 23
Auto Scaling 23
NAT Alternatives 24
Third-Party Solutions 24
Conclusion 25
Contributors 25
Further Reading 25

Abstract

This whitepaper is intended for IT managers, systems integrators, presales engineers, and Microsoft Windows IT professionals who want to learn how to use the Amazon Web Services (AWS) Simple Monthly Calculator to estimate the cost of their cloud infrastructure on AWS.1 A scalable and highly available Microsoft SharePoint Server 2013 architecture is given as an example, and its various components are plugged into the calculator to estimate the monthly cost. Although SharePoint is highlighted, the techniques described can easily be applied to other Windows workloads on AWS, such as Dynamics CRM or Skype for Business Server. The cost estimates include licenses for Windows Server and SQL Server but exclude licenses for SharePoint Server, as will be explained. A few ways to save money on the SharePoint Server deployment are also described.

This paper focuses on Amazon Elastic Compute Cloud (Amazon EC2) and the AWS storage services that are common to most Microsoft infrastructure deployments on AWS, and briefly mentions how AWS Directory Service and NAT gateways can be beneficial in your architecture.

Introduction

AWS currently offers
over 50 cloud computing services, with new services being added frequently. You won't need to be familiar with all the services to deploy SharePoint Server on AWS, but the key point is that at the end of each month you pay only for what you use, and you can start or stop using a service at any time. There are no minimum commitments or long-term contracts required. This pricing model helps replace upfront capital expenses for your IT projects with a low variable cost. For compute resources, you pay on an hourly basis from the time you launch a resource until the time you terminate it. For storage and data transfer, you pay on a per-gigabyte basis. For additional information on how AWS pricing works, see the following sources:

• How AWS Pricing Works whitepaper2
• AWS Cloud Pricing Principles on the AWS website3

Before we get into the calculator, let's briefly review a few of the key features and services that will come into play in a SharePoint architecture on AWS.

AWS Regions and Availability Zones

Amazon EC2 is hosted in multiple Regions around the world. Each Region is a separate geographic area and has multiple isolated locations known as Availability Zones. You can think of Availability Zones as very large data centers. Using redundant Availability Zones in your architecture enables you to achieve high availability. AWS does not move your data or replicate your resources across Regions unless you do so specifically. Figure 1 shows the relationship between Regions and Availability Zones.

Figure 1: Each AWS Region Contains at Least Two Availability Zones

Windows Server in Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) provides a secure global infrastructure to run Windows Server workloads in the cloud, including Internet Information Services (IIS), SQL Server, Exchange Server, SharePoint Server, Skype for Business Server, Dynamics CRM, System Center, and custom .NET
applications.4 Preconfigured Amazon Machine Images (AMIs) enable you to start running fully supported Windows Server virtual machine instances in minutes. You can choose from a number of server operating system versions and decide whether or not to include preinstalled SQL Server in the hourly cost.

Amazon EBS

Amazon Elastic Block Store (Amazon EBS) provides persistent block-level storage volumes for use with Amazon EC2 instances.5 Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. Amazon EBS volumes provide consistent, low-latency performance. On Windows Server instances, Amazon EBS volumes are mounted to appear as regular drive letters to the operating system and applications. Amazon EBS volumes can be up to 16 TiB in size, and you can mount up to 20 volumes on a single Windows instance.

After writing data to an EBS volume, you can periodically create a snapshot of the volume to use as a baseline for new volumes or for data backup. Snapshots are incremental, so that only the blocks on the device that have changed after your last snapshot are saved in the new snapshot. Snapshots are automatically saved in Amazon Simple Storage Service (Amazon S3), which stores three redundant copies across multiple Availability Zones, so you have peace of mind knowing that your data is immediately backed up "off site."

Amazon S3

Amazon Simple Storage Service (Amazon S3) provides developers and IT teams with secure, durable, highly scalable, cost-effective object storage.6 Amazon S3 is easy to use and includes a simple web services interface to store and retrieve any amount of data from anywhere on the web. Object storage is not appropriate for workloads that require incremental data insertions, such as databases. However, Amazon S3 is an excellent service for storing snapshots of
Amazon EBS volumes. While Amazon EBS duplicates your volume synchronously in the same Availability Zone, snapshots to Amazon S3 are replicated across multiple zones, substantially increasing the durability of your data.

Amazon VPC

Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you define.7 This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS. Your VPC is logically isolated from other virtual networks in the AWS cloud. You can configure your VPC: you can select its IP address range, create subnets, and configure route tables, network gateways, and security settings. With the AWS Direct Connect service, you can effectively make your VPC function as an extension of your own on-premises network.

Elastic Load Balancing

Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances.8 It enables you to achieve greater levels of fault tolerance in your applications, seamlessly providing the required amount of load balancing capacity needed to distribute application traffic. Elastic Load Balancing ensures that only healthy Amazon EC2 instances receive traffic by detecting unhealthy instances and rerouting traffic across the remaining healthy instances. Elastic Load Balancing automatically scales its request-handling capacity to meet the demands of application traffic. Additionally, Elastic Load Balancing offers integration with Auto Scaling to ensure that you have backend capacity to meet varying levels of traffic without requiring manual intervention.9

For SharePoint Server, you can create an internal (non-Internet-facing) load balancer to route traffic between your web tier and your application tier using private IP addresses within your Amazon VPC. You
can also implement a multi-tiered architecture using internal and Internet-facing load balancers to route traffic between application tiers. With this multi-tier architecture, your application infrastructure can use private IP addresses and security groups, allowing you to expose only the Internet-facing tier with public IP addresses.

AWS Direct Connect

AWS Direct Connect makes it easy to establish a dedicated private network connection from your premises to AWS.10 In many cases, this can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. This dedicated connection can be partitioned into multiple virtual interfaces. This enables you to use the same connection to access public resources, such as objects stored in Amazon S3, and private resources, such as Amazon EC2 instances running within an Amazon VPC, while maintaining network separation between the public and private environments.

AWS Simple Monthly Calculator

The AWS Simple Monthly Calculator is an easy-to-use online tool that enables you to estimate the monthly cost of AWS services for your project based on your expected usage. The Simple Monthly Calculator is continuously updated with the latest pricing for all AWS services in all AWS Regions. Before continuing with this guide, please take a few minutes to watch this video for an introduction: Getting Started with the AWS Simple Monthly Calculator.11

Reviewing the SharePoint Reference Architecture

AWS provides several Quick Starts, which consist of detailed deployment guides and deployment code.12 Quick Starts help you understand and quickly deploy reference architectures on AWS. In this whitepaper, we will be using the reference architecture for SharePoint Server 2013 as an example to explore the AWS Simple Monthly Calculator. Figure 2 is
copied from the AWS SharePoint Server 2013 Quick Start.13 It includes several AWS services that we will enter into the calculator.

Figure 2: Reference Architecture for SharePoint Server 2013

Licensing and Tenancy Options

On Amazon EC2, you can choose to run instances that include the relevant license fees in their cost ("license included") or use the Bring Your Own License (BYOL) licensing model.

License Included

When you are launching an EC2 instance, there are two ways to find an AMI for the license-included model:

• Choose a Quick Start AMI that includes Windows Server or SQL Server. The license cost is included in the hourly instance charge. At this time, only Windows Server and SQL Server (excluding SQL Server Enterprise Edition) are available with this option.

• Choose an AMI from the AWS Marketplace. A much wider selection of software is available here, including SQL Server Enterprise Edition, SharePoint Enterprise Edition, and many other Windows-based applications from other vendors.

Windows Server Client Access Licenses (CALs) are not required with any of these AMIs.

BYOL

Many vendors offer cloud licenses for their software. There are three ways you can take advantage of your Microsoft software licenses on AWS:

• BYOL with License Mobility (shared tenancy). This option does not cover Windows Server.

• BYOL with Dedicated Hosts (dedicated tenancy). This option allows you to comply with Microsoft's 90-day rule for Windows Server cloud licenses. With Dedicated Hosts, you can import your own virtual machine images with Windows Server and pay Amazon EC2 Linux rates. AWS has a qwikLAB that demonstrates this process.14

• MSDN with Dedicated Hosts or Dedicated Instances. All Microsoft products covered by MSDN can be run on AWS for dev/test environments, per MSDN terms.
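The billing difference behind the BYOL options can be sketched with simple arithmetic. The following is a minimal sketch, and the hourly rates in it are hypothetical placeholders rather than published AWS prices; use the Simple Monthly Calculator for real figures.

```python
HOURS_PER_MONTH = 730  # approximate hours in one month

def monthly_compute_cost(hourly_rate, instance_count=1):
    """On-Demand monthly cost for identical instances running 100% of the time."""
    return hourly_rate * HOURS_PER_MONTH * instance_count

# Hypothetical rates for one instance size in one Region (illustration only):
# the license-included Windows Server rate vs. the lower Amazon Linux rate
# paid under BYOL, where you supply the Windows Server license yourself.
license_included = monthly_compute_cost(0.50, instance_count=2)
byol_linux_rate = monthly_compute_cost(0.25, instance_count=2)

print(f"License included: ${license_included:.2f}/month")
print(f"BYOL at Linux rate: ${byol_linux_rate:.2f}/month (license managed separately)")
```

Note that under BYOL the AWS bill drops to the Linux rate, but your Windows Server license cost still exists outside the bill, so a fair comparison adds the amortized license cost to the BYOL figure.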
For more information, see the AWS Software Licensing FAQ.15

If you use the BYOL option for Windows Server, the license cost is not included in the instance cost. Instead, you pay the same rate as EC2 instances with Amazon Linux pricing, which is lower than the cost of instances with Windows Server preinstalled. When you use BYOL, you are responsible for managing your own licenses, but AWS has features that help you maintain license compliance throughout the lifecycle of your licenses, such as instance affinity,16 targeted placement available through Amazon EC2 Dedicated Hosts,17 and the AWS Key Management Service (AWS KMS).18

Microsoft License Mobility is a benefit for Microsoft Volume Licensing customers with eligible server applications covered by active Microsoft Software Assurance (SA). Microsoft License Mobility allows you to move eligible Microsoft software to AWS for use on EC2 instances with default tenancy (which means that instances might share server space with instances from other customers). But if you are bringing your own Microsoft licenses into EC2 Dedicated Hosts or EC2 Dedicated Instances (instead of using default tenancy), then Microsoft Software Assurance is not required.

You should use Dedicated Hosts for BYOL license scenarios that are server-bound (e.g., Windows Server, SQL Server) and that require you to license against the number of sockets or physical cores on a dedicated server. If you have SQL Server Enterprise Edition licenses that you want to use on AWS, there are two significant advantages to using Dedicated Hosts:

• Licensing on a Dedicated Host is per physical core (instead of per vCPU). This means that when you use large instances, you can license the entire host instead of licensing the instances separately. For an r3.8xlarge instance (which is well-suited for SQL Server), that means you would consume only 20 of your SQL Server licenses instead of 32.

• For disaster recovery deployments, if a failover instance is dedicated to you, you don't
need licenses for it. For a cluster of two r3.8xlarge instances, that means you would consume only 20 licenses instead of 64.

Using the Simple Monthly Calculator

Process Overview

The following is a suggested process to help you estimate the costs of deploying your IT project on AWS. We'll discuss each step in subsequent sections.

1. The first choice you need to make is usually an easy one: Which AWS Region do you want to run your SharePoint farm in? AWS pricing varies slightly by Region.

2. Now sketch a high-level diagram of your project, including each server you will need, labeling the servers with the software functions they will perform, e.g., Web Front End. For this whitepaper, we'll use the diagram in Figure 2 from the AWS Quick Start reference deployment for SharePoint. After you're happy with your sketch, make a list of each server and load balancer in your diagram. This list will be a key input to the calculator.

3. Think about whether you will use On-Demand Instances or Reserved Instances. On-Demand Instances make it easy to get started, but when you are ready to make a commitment, you can save significantly (up to 75%) by purchasing Reserved Instances.19

4. Determine if you have unused software licenses available, and if you have the appropriate agreements with those software vendors to use those licenses in the cloud (e.g., Microsoft License Mobility through Software Assurance). See the Licensing and Tenancy Options section earlier in this paper for more information.

5. Examine or estimate the volume of your current SharePoint storage that you intend to migrate to the cloud, and estimate your monthly growth (this storage will go to Amazon EBS). Also estimate the volume and growth of your data backups (this storage will go to Amazon S3). One nice thing about the cloud is that you don't need to overprovision capacity in advance to handle demand spikes. You can scale up
almost instantly as you grow, and pay only for what you actually consume.

6. Estimate the monthly data transfers for an average user, and then multiply that by the number of users of your system to determine a ballpark total for data transfers. You'll also need to estimate data transfers between Availability Zones when synchronization or replication is included in your architecture.

7. Determine if you will use AWS Direct Connect or a virtual private network (VPN) to connect your on-premises network to your VPC, or neither option (for example, if you plan to have all employees and customers access your AWS resources over the Internet).

8. Finally, decide what level of AWS Support you will need. For a business-class SharePoint deployment, you should choose the Business Support plan at the minimum. But you should also consider the Enterprise Support plan, which adds 15-minute response times for critical questions and a dedicated technical account manager.

Estimating Compute Costs

Now let's follow the steps outlined previously to begin estimating our AWS monthly costs for the SharePoint farm depicted in Figure 2.

Building Your Server List

Working from the sketch of our architecture, we can create the following list of servers and the Amazon EC2 instance types that we think might be suitable for each server role. We needn't worry about getting the instance type exactly right at this stage, because this is just an estimate. If you have particular service-level agreements that you must deliver, then picking the right instance types may require some experimentation and budget analysis. For additional information about Amazon EC2 instance types, see Amazon EC2 Instance Types on the AWS website.20 At this point, you're just making a list of what you need before you use the calculator. After you enter and save the data in the calculator, you can also go back to edit it anytime.

Server
| Description | Quantity | Operating System | Instance Type | vCPUs | RAM (GiB)
NAT | Network Address Translation | 2 | Amazon Linux | t2.micro | 1 | 1
RDGW | Remote Desktop Gateway | 2 | Windows Server 2012 R2 | t2.medium | 2 | 4
WFE | Web front-end servers | 2 | Windows Server 2012 R2 | c3.2xlarge | 8 | 15
APP | Application servers | 2 | Windows Server 2012 R2 | c3.2xlarge | 8 | 15
SQL | SQL Server | 2 | Windows Server 2012 R2 | r3.2xlarge | 8 | 61
AD | Active Directory | 2 | Windows Server 2012 R2 | m4.large | 2 | 8

We set the quantity to two for each server because we want to use two Availability Zones to deploy a high-availability design. The NAT instance runs Amazon Linux because NAT is a basic function and Amazon Linux is less expensive than Windows. It's simple to set up a Linux NAT instance on AWS, but an even better option is to use the NAT Gateway service.21 This service isn't available in the calculator yet, so for the purposes of this whitepaper we'll stick to the design from the SharePoint Quick Start that's shown in Figure 2.

Licensing Considerations

SQL Server AlwaysOn Availability Groups, which come with SQL Server Enterprise Edition, are a good solution to achieve a highly available deployment across two Availability Zones. So the SharePoint Quick Start recommends using SQL Server Enterprise in your SharePoint deployment on AWS. You have two choices here: you can either purchase the SQL Server Enterprise licenses from AWS (in which case license costs will be included in your hourly charges for those Amazon EC2 instances), or you can use Microsoft License Mobility through Software Assurance to bring your own licenses into the cloud.22

If you choose to purchase SQL Server Enterprise from AWS, when you launch your EC2 instances you will need to select the AMI from AWS Marketplace. (Other editions of SQL Server are offered as Quick Start AMIs, but
Enterprise Edition is currently offered only through AWS Marketplace.) This will save you time, because you won't need to install SQL Server yourself. On the other hand, if you plan to use the BYOL model, you need to install your own bits or import your virtual machine with SQL Server installed (using the VM Import/Export service).23

For BYOL, the first trick to estimating your costs in the calculator is to choose Amazon Linux (not Windows Server!) for each instance for which you plan to bring your own Windows Server license. In the calculator, you can alternatively choose Windows Server without SQL Server if you plan to purchase Windows Server from AWS but use the BYOL model for SQL Server Enterprise; or you can choose Windows Server with SQL Server Enterprise if you don't want to use BYOL for either.

The second trick for entering BYOL in the calculator comes up when you open the dialog box to pick the instance type. In this dialog box, you can choose Show (advanced options) to see check boxes for Detailed Monitoring (for Amazon CloudWatch) and Dedicated Instances. At this time, the calculator doesn't offer Dedicated Hosts. Remember, you might use Dedicated Instances to bring your own license of SQL Server if your license is not based on the number of sockets or physical cores. If you bring your own SQL Server licenses that are based on the number of sockets or physical cores, then you must use Dedicated Hosts, not Dedicated Instances.

For this exercise, we will purchase all the Windows Server and SQL Server Enterprise licenses from AWS, so we won't be using Dedicated Hosts or Dedicated Instances. Just to be clear: if you plan to bring your own licenses, your monthly cost will be significantly lower than the cost estimate that the calculator will give us in this example.

EBS Optimization

There is one more detail to be aware of: for SQL Server instances, we recommend that
you select the EBS-Optimized option. An EBS-optimized instance uses an optimized configuration stack and provides additional dedicated capacity for Amazon EBS I/O. This optimization provides the best performance for your EBS volumes by minimizing contention between Amazon EBS I/O and other traffic from your instance. The hourly price for EBS-optimized instances is added to the hourly usage fee for supported instance types. In the calculator, when you select the r3.2xlarge instance type for SQL Server, be sure to select the EBS-Optimized check box. See the documentation for EBS-optimized instances for more information.24

Entering Your Data

Now we're ready to enter the table above into the calculator. Open your browser to the AWS Simple Monthly Calculator and begin entering the data. Our partial result looks like Figure 3. If you prefer not to enter all the data from scratch, you can use the configuration that I've already shared.25

Note: The prices shown in this whitepaper reflect data from the Simple Monthly Calculator at the time of writing and are provided for illustration purposes only. Depending on pricing changes, regional factors, and special offers, the costs you get from the calculator may be different.

Figure 3: Entering the Amazon EC2 Instances into the Calculator

For now, we've entered all the instances as On-Demand Instances running 100% of the time. Later, we'll discuss saving money by using Auto Scaling to shut down some instances on weekends, for example, or by changing the purchase option from On-Demand to Reserved Instances for 1-year or 3-year terms. Another thing to keep in mind is that you might want to use On-Demand Instances only in development and QA environments, and use Reserved Instances in your production environment.

Now that you've
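As a rough cross-check on the calculator, the On-Demand compute portion of an estimate is just quantity × hourly rate × hours per month, summed over the server list. This sketch uses the instance types from the table, but the hourly rates are placeholders for illustration, not published AWS prices; take real rates from the calculator.

```python
HOURS_PER_MONTH = 730  # instances running 100% of the time

# (role, quantity, hypothetical hourly rate in USD) -- rates are
# illustrative placeholders, not actual AWS pricing.
servers = [
    ("NAT",  2, 0.02),  # t2.micro, Amazon Linux
    ("RDGW", 2, 0.08),  # t2.medium, Windows
    ("WFE",  2, 0.75),  # c3.2xlarge, Windows
    ("APP",  2, 0.75),  # c3.2xlarge, Windows
    ("SQL",  2, 2.50),  # r3.2xlarge, Windows + SQL Server Enterprise
    ("AD",   2, 0.17),  # m4.large, Windows
]

monthly_total = sum(qty * rate * HOURS_PER_MONTH for _, qty, rate in servers)
print(f"Estimated On-Demand compute: ${monthly_total:,.2f}/month")
```

A script like this also makes it easy to see how the total shifts if you change a quantity or swap an instance type before committing the change in the calculator.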
entered all that data, it's a good idea to save it before proceeding. Switch to the Estimate tab at the top of the calculator, and then choose Save and Share. You can optionally give your estimate a name and description; choose OK, and the calculator will generate a hyperlink for you (see Figure 4). Now copy and paste that hyperlink into an email to yourself. That way, you can return to the calculator anytime to continue editing the data for your SharePoint farm.

Figure 4: Saving Your Data in the Calculator

Estimating Storage Costs

The next step in the calculator is to put in the proper size for the boot volume on each instance and enter any additional Amazon EBS volumes that we need to attach to each instance. When launching a Windows instance in Amazon EC2, the default boot volume is 30 GiB, but the SharePoint Quick Start recommends setting it to 100 GiB. That provides extra space for installing SharePoint Server and other applications you may want. We won't add any storage to the Linux NAT instances, and we'll leave the boot volumes for the RDGW and AD instances at the default size of 30 GiB. If you are migrating your existing SharePoint farm to AWS, you can examine your current storage needs to help estimate your future capacity requirements. For the purposes of this whitepaper, let's enter one additional 5 TiB volume for SharePoint storage in each Availability Zone.

You also need to think about I/O throughput. For this basic exercise, we're going to skip over this consideration and simply use General Purpose SSD for all the EBS volumes. AWS also offers Magnetic volumes (which are less expensive than General Purpose) and Provisioned IOPS SSD volumes (for consistent performance). For additional information about Amazon EBS, see Amazon EBS Product Details.26

The final factor for Amazon EBS is the amount of backup storage that you require (backup copies are stored in
Amazon S3). This value depends on the backup method, backup frequency, system size, and backup retention. Accurately calculating the amount of backup storage required can get quite involved and is beyond the scope of this guide. For now, let's take a very simplistic approach and estimate that the snapshot storage for each volume will equal the size of the volume itself. Once you enter the EBS volumes, the calculator should look like Figure 5. Go ahead and save the data in the calculator again.

Figure 5: Entering the Amazon EBS Volumes into the Calculator

Elastic IP addresses, data transfers, and Elastic Load Balancing are three features closely related to Amazon EC2 that are optional in the Simple Monthly Calculator. We'll talk about those next.

Using Elastic IP

Elastic IP addresses are a limited resource but very useful for instances in a public subnet. AWS only charges for Elastic IP addresses that you allocate but don't assign to running instances, and the cost is only a few dollars per month if you allocate one and never utilize it. If you think you will have idle Elastic IP addresses, you can enter them here, but for this example we'll ignore that option in the calculator.

Estimating Data Transfers

Inbound data transfer to Amazon EC2 is free. Charges do apply for data that is transferred out from Amazon EC2 to the Internet, to another AWS Region, or to another Availability Zone. For details on AWS data transfer pricing, see the "Data Transfer" section at https://aws.amazon.com/ec2/pricing/.

Just for illustration, let's say we plan to have 1,000 users on SharePoint, and each user will transfer out 0.5 GB per day (including weekends). So that's 1,000 users * 0.5 GB * 30 days = 15,000 GB/month. Let's enter that in the calculator on the row for Data Transfer Out.

Estimating Load Balancing

The SharePoint reference architecture uses one ELB load balancer. When we enter that in
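The per-user arithmetic is worth keeping in a small script so you can redo it as your user count or usage assumptions change. This sketch reuses the illustrative figures from the example (1,000 users, 0.5 GB/day):

```python
users = 1000
gb_out_per_user_per_day = 0.5  # illustrative assumption from the example
days_per_month = 30

egress_gb_per_month = users * gb_out_per_user_per_day * days_per_month
print(f"Data Transfer Out: {egress_gb_per_month:,.0f} GB/month")

# A rough figure for traffic processed by the load balancer doubles
# egress to account for ingress as well.
elb_processed_gb = 2 * egress_gb_per_month
print(f"ELB processed data: {elb_processed_gb:,.0f} GB/month")
```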
the calculator, we also need to estimate how much traffic will pass through it. We estimated 15,000 GB/month for egress in the previous section, so let's double that to cover ingress and egress. Typically egress may exceed ingress, but this is only an estimate; please see Elastic Load Balancing Pricing for more information.27 You will see that Elastic Load Balancing is typically a very small part of the total cost. At this stage, the section of the calculator below Amazon EBS looks like Figure 6.
Figure 6: Entering Data Transfer and Elastic Load Balancing into the Calculator
Switch to the Estimate tab at the top of the calculator and save your interim data again. You can browse through the detail rows and see the line-item cost of each section.
Choosing AWS Direct Connect and Amazon VPC
Another factor you may want to enter in the calculator is the cost for AWS Direct Connect or Amazon VPC. If you decide to go with either option, you may want to revise your data transfer estimates for Elastic Load Balancing, because these options tend to replace or reduce the ordinary Internet traffic to your VPC. There is no additional charge for using Amazon VPC aside from the standard Amazon EC2 usage charges. If a secure connection is required between your on-premises network and Amazon VPC, you can choose a hardware VPN connection or a private network connection, as described in the following sections.
Hardware VPN Connection
When you use hardware VPN connections to your Amazon VPC, you are charged for each VPN connection-hour for which your VPN connection is provisioned and available. Additional information about hardware VPN connection pricing can be found at https://aws.amazon.com/vpc/pricing/.
Private Network Connection
AWS Direct Connect provides the capability to
establish a dedicated network connection from your on-premises network to AWS. AWS Direct Connect pricing is based on per-port-hour and data transfer out charges. Additional information about AWS Direct Connect pricing can be found at https://aws.amazon.com/directconnect/pricing/. For this example, since we've already entered our Internet data transfer estimates, we'll skip adding AWS Direct Connect or Amazon VPC.
Reviewing the Estimate
The final thing to do is click the AWS Support tab in the navigation bar and then select the Business Support plan, as we recommended earlier. The final cost estimate looks like Figure 7.
Figure 7: Estimate of Your Monthly Bill
This shows that Amazon EC2 is the dominant cost for SharePoint Server on AWS, and if you look in the Services tab, you'll see that the SQL Server instances are the lion's share of that cost. As a reminder, if you have licenses available, you could cut your costs substantially by bringing your own licenses into AWS, as discussed earlier in the Licensing and Tenancy Options section. There are several other cost-saving ideas that we haven't taken advantage of yet in this example. We'll survey those in the next section.
Money-Saving Ideas
AWS Directory Service
AWS Directory Service is a managed service that makes it easy to set up and run Microsoft Active Directory (AD) in the AWS cloud or connect your AWS resources with an existing on-premises Microsoft Active Directory. Once your directory is created, you can use it to manage users and groups, provide single sign-on to applications and services, create and apply group policy, domain-join Amazon EC2 instances, and simplify the deployment and management of cloud-based Linux and Microsoft Windows workloads. If cost and simplified administration are important to
you, you should consider using AWS Directory Service instead of running two EC2 instances with the Active Directory role installed in Windows Server. See AWS Directory Service Product Details for more information.28
Reserved Instances and Spot Instances
Another way to save money in Amazon EC2 is to use Reserved or Spot Instances. Spot Instances work well for intermittent workloads such as high-performance computing and may not be applicable to SharePoint in general, but depending on the size and cost of your compute instances and the nature of your workload, you should consider using Spot Instances to incrementally process and save data computations. Once you get your pilot SharePoint farm up and running on AWS, consider making a 1-year or 3-year commitment to take advantage of Reserved Instance pricing. You can save up to 75%.
Auto Scaling
Auto Scaling helps you maintain application availability and allows you to scale your Amazon EC2 capacity up or down automatically according to conditions you define. You can use Auto Scaling to help ensure that you are running your desired number of Amazon EC2 instances. Auto Scaling can also automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance, and decrease capacity during lulls to reduce costs. Auto Scaling is well suited both to applications that have stable demand patterns and to applications that experience hourly, daily, or weekly variability in usage. If you have dev/test SharePoint farms that aren't used on weekends, or if you anticipate less network traffic to your production SharePoint farm on weekends, you may be able to realize significant cost savings by shutting down certain instances periodically. For example, weekends account for about 33% of the total monthly cost. There may be some complications to auto-scaling your SharePoint farm, but the savings may be worth it. It's
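To gauge what periodic shutdown might be worth, you can compare weekend hours against the whole week. A hedged sketch; it assumes a purely hourly, on-demand billing model in which stopped instances incur no compute charge (it ignores EBS storage, which continues to bill):

```python
# Fraction of a week that falls on the weekend: if dev/test instances
# are stopped from Saturday 00:00 to Monday 00:00, the saved share of
# on-demand compute cost is roughly the saved share of hours.
WEEK_HOURS = 7 * 24     # 168
WEEKEND_HOURS = 2 * 24  # 48

saved_fraction = WEEKEND_HOURS / WEEK_HOURS
print(round(saved_fraction, 3))  # 0.286, i.e. just under a third
```

The exact share depends on your stop/start schedule; stopping for all 48 weekend hours saves a bit under a third of a purely on-demand compute bill, in the same ballpark as the figure cited above.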
beyond the scope of this paper to go into the details, but you'll want to consider how to save, patch, and use your own SharePoint AMI with Auto Scaling, and bear in mind that booting and domain-joining can take a few minutes. See Auto Scaling Product Details for more information.29
NAT Alternatives
Finally, let's talk about alternatives to Network Address Translation (NAT). In the calculator, we chose to deploy two Linux instances dedicated to running NAT. Amazon Linux is a low-cost option, and there are recipes for running NAT in Amazon EC2 that make it pretty simple. But there are other options that might be less costly and even easier to administer. The AWS SharePoint 2013 Quick Start was written before the launch of the NAT Gateway service. This is a managed service that greatly simplifies the task of providing NAT for your VPC, and you should consider it as your first option. See the post Managed NAT (Network Address Translation) Gateway for AWS on the AWS blog for more information.30 If NAT Gateway isn't appropriate for you, there are other options. Notice in our network diagram (Figure 2) that we have an RDGW instance running Windows Server in each public subnet. Since we're already paying for those instances, there's no reason we couldn't install the Windows Routing and Remote Access Service (RRAS) and make the instances dual-use for NAT and RDGW. Finally, we have another NAT option if we choose to add a virtual private network or AWS Direct Connect: we could set up the route tables in the VPC to route all outbound traffic through the on-premises network. This would eliminate the need for NAT instances in the VPC.
Third-Party Solutions
AWS has a vast partner network of consulting and technology partners. A few partners are worthy of mention here. You could use AvePoint31 or Metalogix32 to offload storage of uploaded files (binary large objects, or BLOBs) from
SharePoint (which goes in SQL Server) to Amazon S3. That can substantially reduce the size of the database, which may in turn reduce your software license costs, reduce your backup storage space, and require less maintenance. Additionally, you may consider using SIOS33 or SoftNAS34 shared-storage options to possibly remove the need for SQL Server AlwaysOn Availability Groups.
Conclusion
This paper outlined a process you can follow to estimate the cost of running your IT workloads on AWS. As an example, we entered a SharePoint Server 2013 reference architecture into the AWS Simple Monthly Calculator. We explored various AWS services relevant to an enterprise SharePoint deployment. We also discussed how you can use your existing Microsoft software licenses on AWS. There is often more than one way to design and deploy your architecture in AWS, so we also provided alternative ideas that may help you save money on AWS.
Contributors
The following individuals and organizations contributed to this document:
• Scott Zimmerman, partner solutions architect, AWS
• Bill Timm, partner solutions architect, AWS
• Julien Lepine, solutions architect, AWS
Further Reading
For additional information, please consult the following sources:
• Getting Started with Amazon EC2 Windows Instances, http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/EC2Win_GetStarted.html
• Quick Start: Microsoft SharePoint Server 2013 on AWS, https://docs.aws.amazon.com/quickstart/latest/sharepoint/
Notes
1 http://calculator.s3.amazonaws.com/index.html
2 http://media.amazonwebservices.com/AWS_Pricing_Overview.pdf
3 http://aws.amazon.com/pricing/
4 https://aws.amazon.com/ec2/
5 https://aws.amazon.com/ebs/
6 https://aws.amazon.com/s3/
7 https://aws.amazon.com/vpc/
8 https://aws.amazon.com/elasticloadbalancing/
9 https://aws.amazon.com/autoscaling/
10 https://aws.amazon.com/directconnect/
11 http://bit.ly/1mwA12X
12 http://aws.amazon.com/quickstart/
13 https://docs.aws.amazon.com/quickstart/latest/sharepoint/
14 https://run.qwiklabs.com/
15 https://aws.amazon.com/windows/faq/
16 http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/dedicated-hosts-instance-placement.html#dedicated-hosts-affinity
17 http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/dedicated-hosts-instance-placement.html#dedicated-hosts-targeted-placement
18 http://docs.aws.amazon.com/kms/latest/developerguide/
19 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-options.html
20 http://aws.amazon.com/ec2/instance-types/
21 http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html
22 http://aws.amazon.com/windows/resources/licensemobility/
23 https://aws.amazon.com/ec2/vm-import/
24 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html
25 http://calculator.s3.amazonaws.com/index.html#r=IAD&s=EC2&key=calc-176211163ED74E669A4D86681BBB4462
26 https://aws.amazon.com/ebs/details/
27 https://aws.amazon.com/elasticloadbalancing/pricing/
28 https://aws.amazon.com/directoryservice/details/
29 https://aws.amazon.com/autoscaling/details/
30 https://aws.amazon.com/blogs/aws/new-managed-nat-network-address-translation-gateway-for-aws/
31 http://www.aws-partner-directory.com/PartnerDirectory/PartnerDetail?Name=AvePoint
32 http://www.aws-partner-directory.com/PartnerDirectory/PartnerDetail?Name=metalogix
33 http://www.aws-partner-directory.com/PartnerDirectory/PartnerDetail?Name=SIOS+Technology+Corp
34 http://www.aws-partner-directory.com/PartnerDirectory/PartnerDetail?Name=AvePoint
Archived,General,consultant,Best Practices
Extend_Your_IT_Infrastructure_with_Amazon_Virtual_Private_Cloud,Archived Extend Your IT Infrastructure with Amazon Virtual Private Cloud, December 2018. This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers/ © 2018, Amazon Web Services, Inc. or its affiliates. All
rights reserved.
Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
Contents
Notices; Abstract; Introduction; Understanding Amazon Virtual Private Cloud; Different Levels of Network Isolation; Example Scenarios; Host a PCI-Compliant E-Commerce Website; Build a Development and Test Environment; Plan for Disaster Recovery and Business Continuity; Extend Your Data Center into the Cloud; Create Branch Office and Business Unit Networks; Best Practices for Using Amazon VPC; Automate the Deployment of Your Infrastructure; Use Multi-AZ Deployments in VPC for High Availability; Use Security Groups and Network ACLs; Control Access with IAM Users and Policies; Use Amazon CloudWatch to Monitor the Health of Your VPC Instances and VPN Link; Conclusion; Further Reading; Document Revisions.
Abstract
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS cloud where you can launch AWS resources in a virtual network you define. This paper provides an overview of how you can connect an Amazon VPC to your existing IT infrastructure while meeting security and compliance requirements. This allows you to access AWS resources as though they are a part of
your existing network.
Amazon Web Services – Extend Your IT Infrastructure with Amazon VPC – Page 1
Introduction
With Amazon Virtual Private Cloud (Amazon VPC), you can provision a private, isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual network that you define. With Amazon VPC, you can define a virtual network topology that closely resembles a traditional network that you might operate in your own data center. You have complete control over your virtual networking environment, including selection of your own IPv4 address range, creation of subnets, and configuration of route tables and network gateways. For example, with VPC you can:
• Expand the capacity of existing on-premises infrastructure
• Launch a backup stack of your environment for disaster recovery purposes
• Launch a Payment Card Industry Data Security Standard (PCI DSS) compliant website that accepts secure payments
• Launch isolated development and testing environments
• Serve virtual desktop applications within your corporate network
In a traditional approach to these use cases, you would need a lot of upfront investment to build your own data center, provision the required hardware, acquire the necessary security certifications, hire system administrators, and keep everything running. With VPC on AWS, you have little upfront investment, and you can scale your infrastructure in or out as necessary. You get all of the benefits of a secure environment at no extra cost; AWS security controls, certifications, accreditations, and features meet the security criteria required by some of the most discerning and security-conscious customers in large enterprises as well as governmental agencies. For a full list of certifications and accreditations, see the AWS Compliance Center. This paper highlights common use cases and best practices for Amazon VPC and related services.
Understanding Amazon Virtual Private Cloud
Amazon VPC is a secure, private, and isolated section of the
AWS cloud where you can launch AWS resources in a virtual network topology that you define. When you create a VPC, you provide a set of private IPv4 addresses that you want instances in your VPC to use. You specify this set of addresses in the form of a Classless Inter-Domain Routing (CIDR) block, for example 10.0.0.0/16. You can assign block sizes of between /28 (16 IPv4 addresses) and /16 (65,536 IPv4 addresses). You can also add a set of IPv6 addresses to your VPC. IPv6 addresses are allocated from an Amazon-owned range of addresses, and the VPC receives a /56 (more than 10^21 IPv6 addresses). In Amazon VPC, each Amazon Elastic Compute Cloud (Amazon EC2) instance has a default network interface that is assigned a primary private IP address on your Amazon VPC network. You can create and attach additional elastic network interfaces (ENIs) to any Amazon EC2 instance in your VPC. Each ENI has its own MAC address; it can have multiple IPv6 or private IPv4 addresses, and it can be assigned to a specific security group. The total number of supported ENIs and private IP addresses per instance depends on the instance type. The ENIs can be created in different subnets within the same Availability Zone and attached to a single instance to build, for example, a low-cost management network or network and security appliances. The secondary ENIs and private IP addresses can be moved within the same subnet to other instances for low-cost, high-availability solutions. To each private IPv4 address, you can associate a public Elastic IPv4 address to make the instance reachable from the Internet. IPv6 addresses are the same whether inside the VPC or on the public Internet (if the subnet is public). You can also configure your Amazon EC2 instance to be assigned a public IPv4 address at launch. Public IPv4 addresses are assigned to your instances from the Amazon pool of public IPv4 addresses; they are not associated
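The CIDR arithmetic above can be checked with Python's standard ipaddress module. A quick verification sketch; the 10.0.0.0/16 block is just the example range from the text:

```python
import ipaddress

# A /16 VPC offers 65,536 IPv4 addresses; a /28 offers 16.
vpc = ipaddress.ip_network("10.0.0.0/16")
small = ipaddress.ip_network("10.0.0.0/28")
print(vpc.num_addresses)    # 65536
print(small.num_addresses)  # 16

# An IPv6 /56 allocation spans 2**(128 - 56) addresses -- more than 10**21.
ipv6_count = 2 ** (128 - 56)
print(ipv6_count > 10 ** 21)  # True
```

The same module is handy for validating that a candidate VPC range doesn't overlap your on-premises address space (`vpc.overlaps(other)`).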
with your account. With support for multiple IPv6 addresses, private IPv4 addresses, and Elastic IP addresses, you can, among other things, use multiple SSL certificates on a single server and associate each certificate with a specific IP address. There are some default limits on the number of components you can deploy in your VPC, as documented in Amazon VPC Limits. To request an increase in any of these limits, fill out the Amazon VPC Limits form.
Different Levels of Network Isolation
You can set up your VPC subnets as public, private, or VPN-only. In order to set up a public subnet, you have to configure its routing table so that traffic from that subnet to the Internet is routed through an Internet gateway associated with the VPC, as shown in Figure 1. By assigning EIP addresses to instances in that subnet, you can make them reachable from the Internet over IPv4 as well. It is a best practice to restrict both ingress and egress traffic for these instances by leveraging stateful security group rules for your instances. You can also use network address translation (NAT) gateways (for IPv4 traffic) and egress-only gateways (for IPv6 traffic) on private subnets to enable them to reach Internet addresses without allowing inbound traffic. Stateless network filtering can also be applied for each subnet by setting up network access control lists (ACLs) for the subnet.
Figure 1: Example of a VPC with a public subnet only
For private subnets, traffic to the Internet can be routed through a NAT gateway or NAT instance with a public EIP that resides in a public subnet. This configuration allows your resources in the private subnet to send outbound traffic to the Internet without allocating Elastic IP addresses or accepting direct inbound connections. AWS provides a managed NAT gateway, or you can use your own Amazon EC2-based NAT appliance. Figure 2 shows an example of a VPC with both public and private
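One way to plan the public/private split described above is to carve the VPC range into per-tier, per-AZ subnets up front. A sketch using the standard ipaddress module; the /16 range and the two-AZ, two-tier layout are assumptions for illustration, not AWS defaults:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve /24 subnets (251 usable hosts each, since AWS reserves 5
# addresses per subnet) and assign one public and one private subnet
# per Availability Zone.
subnets = list(vpc.subnets(new_prefix=24))
azs = ["us-east-1a", "us-east-1b"]  # placeholder AZ names
plan = {}
for i, az in enumerate(azs):
    plan[az] = {"public": subnets[2 * i], "private": subnets[2 * i + 1]}

print(plan["us-east-1a"]["public"])   # 10.0.0.0/24
print(plan["us-east-1b"]["private"])  # 10.0.3.0/24
```

Planning the split this way leaves the rest of the /16 free for additional tiers or AZs later without renumbering.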
subnets using an AWS NAT gateway.
Figure 2: Example of a VPC with public and private subnets
By attaching a virtual private gateway to your VPC, you can create a VPN connection between your VPC and your own data center for IPv4 traffic, as shown in Figure 3. The VPN connection uses industry-standard IPsec tunnels (IKEv1 PSK with encryption using AES256 and HMAC-SHA2 with various Diffie-Hellman groups) to mutually authenticate each gateway and to protect against eavesdropping or tampering while your data is in transit. For redundancy, each VPN connection has two tunnels, with each tunnel using a unique virtual private gateway public IPv4 address.
Figure 3: Example of a VPC isolated from the Internet and connected through VPN to a corporate data center
You have two routing options for setting up a VPN connection: dynamic routing using Border Gateway Protocol (BGP) or static routing. For BGP, you need the IPv4 address and the BGP autonomous system number (ASN) of the customer gateway before attaching it to a VPC. Once you have provided this information, you can download a configuration template for a number of different VPN devices and configure both VPN tunnels. For devices that do not support BGP, you may set up one or more static routes back to your on-premises network by providing the corresponding CIDR ranges when you configure your VPN connection. You then configure static routes on your VPN customer gateway and on other internal network devices to route traffic to your VPC via the IPsec tunnel. If you choose to have only a virtual private gateway with a connection to your on-premises network, you can route your Internet-bound traffic over the VPN and control all egress traffic with your existing security policies and network controls. You can also use AWS Direct Connect to establish a private logical
connection from your on-premises network directly to your Amazon VPC. AWS Direct Connect provides a private, high-bandwidth network connection between your network and your VPC. You can use multiple logical connections to establish private connectivity to multiple VPCs while maintaining network isolation. With AWS Direct Connect, you can establish 1 Gbps or 10 Gbps dedicated network connections between AWS and any of the AWS Direct Connect locations. A dedicated connection can be partitioned into multiple logical connections by using industry-standard 802.1Q VLANs. In this way, you can use the same connection to access public resources, such as objects stored in Amazon Simple Storage Service (Amazon S3) that use publicly accessible IPv4 and IPv6 addresses, and private resources, such as Amazon EC2 instances that are running within a VPC using Amazon-owned IPv6 space or private IPv4 space, all while maintaining network separation between the public and private environments. You can choose a partner from the AWS Partner Network (APN) to integrate the AWS Direct Connect endpoint in an AWS Direct Connect location with your remote networks. Figure 4 shows a typical AWS Direct Connect setup.
Figure 4: Example of using VPC and AWS Direct Connect with a customer remote network
Finally, you may combine all of these different options in any combination that makes the most sense for your business and security policies. For example, you could attach a VPC to your existing data center with a virtual private gateway and set up an additional public subnet to connect to other AWS services that do not run within the VPC, such as Amazon S3, Amazon Simple Queue Service (Amazon SQS), or Amazon Simple Notification Service (Amazon SNS). In this situation, you could also leverage IAM roles for Amazon EC2 for accessing these services and configure IAM policies to only allow access from the Elastic IP address of the NAT
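A source-IP restriction of this kind is typically expressed with an IAM policy condition on the caller's address. A hedged sketch that just assembles the policy document as JSON; the Elastic IP 203.0.113.10 and the bucket name are placeholder values, not from the paper:

```python
import json

NAT_EIP = "203.0.113.10"  # placeholder Elastic IP of the NAT instance

# Allow S3 object reads only when requests originate from the NAT
# instance's Elastic IP, using the aws:SourceIp condition key.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {"IpAddress": {"aws:SourceIp": f"{NAT_EIP}/32"}},
    }],
}
print(json.dumps(policy, indent=2))
```

Attached to the EC2 instances' IAM role, a policy shaped like this makes S3 access contingent on traffic leaving through the NAT path.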
server.
Example Scenarios
Because of the inherent flexibility of Amazon VPC, you can design a virtual network topology that meets your business and IT security requirements for a variety of different use cases. To understand the true potential of Amazon VPC, let's take a few of the most common use cases:
• Host a PCI-compliant e-commerce website
• Build a development and test environment
• Plan for disaster recovery and business continuity
• Extend your data center into the cloud
• Create branch office and business unit networks
Host a PCI-Compliant E-Commerce Website
E-commerce websites often handle sensitive data such as credit card information, user profiles, and purchase history. As such, they require a Payment Card Industry Data Security Standard (PCI DSS) compliant infrastructure in order to protect sensitive customer data. Because AWS is accredited as a Level 1 service provider under PCI DSS, you can run your application on PCI-compliant technology infrastructure for storing, processing, and transmitting credit card information in the cloud. As a merchant, you still have to manage your own PCI certification, but by using an accredited infrastructure service provider, you don't need to put additional effort into PCI compliance at the infrastructure level. For more information about PCI compliance, see the AWS Compliance Center. For example, you can create a VPC to host the customer database and manage the checkout process of your e-commerce website. To offer high availability, you set up private subnets in each Availability Zone within the same region and then deploy your customer and order management databases in each Availability Zone. Your checkout servers will be in an Auto Scaling group over several private subnets in different Availability Zones. Those servers will be behind an elastic load balancer that spans public subnets across all used Availability Zones, and the elastic load balancer can
be protected by an AWS web application firewall (WAF). By combining VPC subnets, network ACLs, and security groups, you have fine-grained control over access to your AWS infrastructure. You'll be prepared for the main challenges (scalability, security, elasticity, and availability) for the most sensitive part of e-commerce websites. Figure 5 shows an example of an e-commerce architecture.
Figure 5: Example of an e-commerce architecture
Build a Development and Test Environment
Software environments are in constant flux, with new versions, features, patches, and updates. Software changes must often be deployed rapidly, with little time to carry out regression testing. Your ideal test environment would be an exact replica of your production environment, where you would apply your updates and then test them against a typical workload. When the update or new version passes all tests, you can roll it into production with greater confidence. To build such a test environment in-house, you would have to provision a lot of hardware that would go unused most of the time. Sometimes this unused hardware is subsequently repurposed, leaving you without your test environment when you need it. Amazon VPC can help you build an economical, functional, and isolated test environment that simulates your live production environment and that can be launched when you need it and shut down when you're finished testing. You don't have to buy expensive hardware; you are more flexible and agile when your environment changes; your test environment can transparently interact with your on-premises network by using LDAP, messaging, and monitoring; and you pay AWS only for what you actually use. This process can even be fully automated and integrated into your software development process. Figure 6 shows an example of development, test, and production environments within different VPCs.
Figure 6: Example of development, test, and production environments
The same logic applies to experimental applications. When you are evaluating a new software package that you want to keep isolated from your production environment, you can install it on a few Amazon EC2 instances inside your test environment within a VPC and then give access to a selected set of internal users. If all goes well, you can transition these images into production and terminate unneeded resources.
Plan for Disaster Recovery and Business Continuity
The consequences of a disaster affecting your data center can be devastating for your business if you are not prepared for such an event. It is worth spending time devising a strategy to minimize the impact on your operations when these events happen. Traditional approaches to disaster recovery usually require labor-intensive backups and expensive standby equipment. Instead, consider including Amazon VPC in your disaster recovery plan. The elastic, dynamic nature of AWS is ideal for disaster scenarios where there are sudden spikes in resource requirements. Start by identifying the IT assets that are most critical to your business. As in the test environment described previously in this paper, you can automate the replication of your production environment to duplicate the functionality of your critical assets. Using automated processes, you can back up your production data to Amazon Elastic Block Store (Amazon EBS) volumes or Amazon S3 buckets. Database contents can be continually replicated to your AWS infrastructure using AWS Database Migration Service (AWS DMS). You can write declarative AWS CloudFormation templates to describe your VPC infrastructure stack, which you can launch automatically in any AWS region or Availability Zone. In the event of a disaster, you can quickly launch a replication of your environment in the VPC and then direct your
business traffic to those servers. If a disaster involves only the loss of data from your in-house servers, you can recover it from the Amazon EBS data volumes that you've been using as backup storage. For more information, read Using Amazon Web Services for Disaster Recovery, which is available at the AWS Architecture Center.
Extend Your Data Center into the Cloud
If you have invested in building your own data center, you may be facing challenges in keeping up with constantly changing capacity requirements. Occasional spikes in demand may exceed your total capacity. If your enterprise is successful, even routine operations will eventually reach the capacity limits of your data center, and you'll have to decide how to extend that capacity. Building a new data center is one way, but it is expensive and slow, and the risk of underprovisioning or overprovisioning is high. In both of these cases, Amazon VPC can help you by serving as an extension of your own data center. Amazon VPC allows you to specify your own IP address range, so you can extend your network into AWS in much the same way you would extend an existing network into a new physical data center or branch office. VPN and AWS Direct Connect connectivity options allow these networks to be seamlessly and securely integrated to create a single corporate network capable of supporting your users and applications regardless of where they are physically located. And just like a physical extension of a data center, IT resources hosted in VPC can leverage existing centralized IT systems, like user authentication, monitoring, logging, change management, or deployment services, without the need to change how users or systems administrators access or manage your applications. External connectivity from this extended virtual data center is also completely up to you. You may choose to direct all VPC traffic to traverse your existing network infrastructure
to control which existing internal and external networks your Amazon EC2 instances can access. This approach, for example, allows you to leverage all of your existing Internet-based network controls for your entire network. Figure 7 shows an example of a data center that has been extended into AWS.
Figure 7: Example of a data center extended into AWS that leverages a customer's existing connection to the Internet
Additionally, you could choose to leverage the extensive Internet connectivity of AWS to offload traffic from on-premises firewalls and load balancers, and selectively present IPv6 endpoints even if your on-premises network only supports IPv4. You can deploy an AWS WAF to protect your infrastructure against attacks and leverage an application load balancer in your VPC to direct traffic to a mix of AWS-based and on-premises resources (using a VPN connection) to provide a seamless end-user experience, as shown in Figure 8.
Figure 8: Example of a data center extended into AWS that leverages multiple connections to the Internet
Create Branch Office and Business Unit Networks
If you have branch offices that require separate but interconnected local networks, consider deploying separate VPCs for each branch office. Applications can easily communicate with each other using VPC peering, subject to the VPC security group rules that you apply. The VPCs can even be in different AWS accounts and different regions, which can help reduce latency, enhance resource isolation, and enable cost allocation controls. If you need to limit network communication within or across subnets, you can configure security groups or network ACL rules to define which instances are permitted to communicate with each other. You could also use this same idea to group applications according to business unit functions. Applications specific to particular business units can be installed in separate VPCs, one for each unit. Figure 9
shows an example of using VPC s and VPN s for branch office scenarios ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 13 Figure 9: Example of using VPC and VPN for branch office scenarios The main advantages of using Amazon VPC over provisioning dedicated on premises hardware in a branch office are similar to those described elsewhere: you can elastically scale resources up down in and out to meet demand ensuring that you don’t underprovision or overprovision Adding capacity is easy: launch additional Amazon EC2 instances from your custom Amazon Machine Images (AMIs) When the time comes to decrease capacity simply terminate the unneeded instances manually or automatically using Auto Scaling policies Althou gh the operational tasks may be the same to keep assets running properly you won’t need dedicated remote staff and you’ll save money with the AWS pay asyouuse pricing model Best Practices for Using Amazon VPC When using Amazon VPC there are a few bes t practices you should follow : • Automate the deployment of your infrastructure • Use Multi AZ deployments in VPC for high availability • Use security groups and network ACLs • Control access with IAM users and policies • Use Amazon CloudWatch to monitor the health of your VPC instances and VPN link ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 14 Automate the Deployment of Your Infrastructure Managing your infrastructure manually is tedious error prone slow and expensive For example in the case of a disaster recovery your plan should include only a limited number of manual steps because they slow down the process Even in less critical use cases such as development and test environments we recommend that you ensure that your standby environment is an exact replica of the production environment Manually re plicating your production environment can be very challenging and it increases the risk of introducing or not discovering bugs related to 
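The recommendation that a standby environment exactly replicate production can be sketched as a simple drift check over declarative environment descriptions. The resource maps below are hypothetical illustrations, not an AWS API:

```python
def find_drift(production, standby):
    """Report resources that differ between two environment descriptions.

    Each argument is a mapping of resource name to its specification;
    returns human-readable descriptions of every mismatch found.
    """
    issues = []
    for name, spec in production.items():
        if name not in standby:
            issues.append(f"missing in standby: {name}")
        elif standby[name] != spec:
            issues.append(f"differs: {name}")
    for name in standby:
        if name not in production:
            issues.append(f"unexpected in standby: {name}")
    return issues

# Hypothetical environment descriptions: the standby drifted from production.
prod = {"web": {"ami": "ami-123", "type": "m5.large"}, "db": {"engine": "mysql"}}
stby = {"web": {"ami": "ami-123", "type": "t3.small"}}
print(find_drift(prod, stby))
```

In practice, generating both descriptions from the same automated template, rather than diffing hand-built environments, is what removes the drift in the first place.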
Dependency differences in your deployment are a typical source of such bugs. By automating the deployment with AWS CloudFormation, you can describe your infrastructure in a declarative way by writing a template, and then use the template to deploy predefined stacks within a very short time in any AWS region. The template can fully automate the creation of subnets, routing information, and security groups, and the provisioning of AWS resources: whatever you need. By using AWS CloudFormation helper scripts, you can use standard Amazon Machine Images (AMIs) that will, upon startup of the Amazon EC2 instances, install all of the software at the right version required for your deployment.

Automated infrastructure deployment should be fully integrated into your processes. You should treat your automation scripts like software that needs to be tested and maintained according to your standards and policies. A continuous deployment methodology, using services such as AWS CodePipeline to orchestrate the full process through the build, test, and deploy phases, can help make infrastructure deployment a regular and well-tested business process. Thoroughly tested automated processes are often faster, cheaper, more reliable, and more secure than processes that rely on many manual steps.

Use Multi-AZ Deployments in VPC for High Availability

Architectures designed for high availability typically distribute AWS resources redundantly across multiple Availability Zones within the same region. If a service disruption occurs in one Availability Zone, you can redirect traffic to the other Availability Zone to limit the impact of the disruption. This general best practice also applies to architectures that include Amazon VPC.

Although a VPC can span multiple Availability Zones, each subnet within the VPC is restricted to a single Availability Zone. To deploy a multi-AZ Amazon Relational Database Service (Amazon RDS) instance, for example, you first have to configure VPC subnets in each Availability Zone within the region where the database instances will be launched. Likewise, Auto Scaling groups and elastic load balancers can span multiple Availability Zones by being deployed across VPC subnets that have been created for each zone.

Use Security Groups and Network ACLs

Amazon VPC security groups allow you to control both ingress and egress traffic, and you can define rules for all IP protocols and ports. For a full overview of the features available with Amazon VPC security groups, see Security Groups for Your VPC. Amazon VPC security groups are stateful firewalls, allowing return traffic for permitted TCP connections.

A network access control list (ACL) is an additional layer of security that acts as a firewall to control traffic into and out of a subnet, and you can define access control rules for each of your subnets. Although a VPC security group operates at the instance level, a network ACL operates at the subnet level. For a network ACL, you can specify both allow and deny rules, for both ingress and egress. Network ACLs are stateless firewalls: return traffic for TCP connections must be explicitly allowed on the TCP ephemeral ports (typically 32768-65535).

As a best practice, you should secure your infrastructure with multiple layers of defense. By running your infrastructure in a VPC, you can control which instances are exposed to the Internet in the first place, and you can define both security groups and network ACLs to further protect your infrastructure at the instance and subnet levels. Additionally, you should secure your instances with a firewall at the operating system level and follow other security best practices, as outlined in AWS Security Resources.

Control Access with IAM Users and Policies

With AWS Identity and Access Management (IAM), you can create and manage users in your AWS account. A user can be either a person or an application that needs to interact with AWS. With IAM, you can centrally manage your users and their security credentials.
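Least-privilege IAM policies of the kind discussed here can be sketched as a policy document that grants a hypothetical network-admin group only VPC-related actions. The action list and group purpose are illustrative assumptions, not a complete or recommended policy:

```python
import json

def network_admin_policy():
    """Build a hypothetical least-privilege IAM policy document.

    Grants only a handful of VPC management actions, so that members
    of a network-admin group cannot touch unrelated AWS services.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                # Illustrative subset of VPC-related EC2 actions.
                "ec2:CreateVpc",
                "ec2:CreateSubnet",
                "ec2:CreateRouteTable",
                "ec2:AuthorizeSecurityGroupIngress",
            ],
            "Resource": "*",
        }],
    }

print(json.dumps(network_admin_policy(), indent=2))
```

A real policy would typically scope `Resource` more tightly and add the matching Describe and Delete actions the role actually needs.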
Credentials, and the permissions that control which AWS resources the users can access, are all managed centrally through IAM. You typically create IAM users for people, and you use IAM roles for applications.

We recommend that you use IAM to implement a least-privilege security strategy. For example, you should not use a single AWS IAM user to manage all aspects of your AWS infrastructure. Instead, we recommend that you define user groups (or roles, if using federated logins) for the different tasks that have to be performed on AWS, and restrict each user to exactly the functionality he or she requires to perform that role. For example, you can create a network admin group of users in IAM and then give only that group the rights to create and modify the VPC. For each user group, define restrictive policies that grant each user access only to those services he or she needs. Make sure that only authorized people in your organization have access to these users. Use services such as Amazon GuardDuty to detect anomalous access patterns. Implement strong authentication requirements, such as a minimum password length and complexity, and consider multi-factor authentication, to reduce the risk of compromising your infrastructure. For more information on how to define IAM users and policies, see Controlling Access to Amazon VPC Resources.

Use Amazon CloudWatch to Monitor the Health of Your VPC Instances and VPN Link

Just as you do with public Amazon EC2 instances, you can use Amazon CloudWatch to monitor the performance of the instances running inside your VPC. Amazon CloudWatch provides visibility into resource utilization, operational performance, and overall demand patterns, including CPU utilization, disk reads and writes, and network traffic. The information is displayed on the AWS Management Console and is also available through the Amazon CloudWatch API, so you can integrate it into your existing management tools. You can also view the status of your VPN.
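Threshold-style monitoring of the kind CloudWatch alarms apply to such metrics can be sketched as a check over sampled datapoints. The datapoints, threshold, and evaluation-period count below are made up for illustration:

```python
def breaches_threshold(datapoints, threshold, periods=3):
    """Return True if the metric exceeds the threshold for `periods`
    consecutive samples, the way an alarm on CPU utilization or a
    VPN tunnel metric might be configured."""
    run = 0
    for value in datapoints:
        run = run + 1 if value > threshold else 0
        if run >= periods:
            return True
    return False

# Hypothetical CPU utilization samples (%): three consecutive breaches.
cpu = [55, 82, 91, 93, 96, 70]
print(breaches_threshold(cpu, threshold=80))
```

Requiring several consecutive breaching samples, rather than alarming on a single spike, is the usual way to avoid noisy alerts.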
VPN connections can be checked by using either the AWS Management Console or an API call; the status of each VPN tunnel includes its state (up/down) and the amount of traffic seen across it.

Conclusion

Amazon VPC offers a wide range of tools that give you more control over your AWS infrastructure. Within a VPC, you can define your own network topology by defining subnets and routing tables, and you can restrict access at the subnet level with network ACLs and at the resource level with VPC security groups. You can isolate your resources from the Internet and connect them to your own data center through a VPN. You can assign Elastic IPv4 and public IPv6 addresses to some instances and connect them to the public Internet through an Internet gateway, while keeping the rest of your infrastructure in private subnets. Amazon VPC makes it easier to protect your AWS resources while you keep the benefits of AWS with regard to flexibility, scalability, elasticity, performance, availability, and the pay-as-you-use pricing model.

Further Reading

• Amazon VPC product page: https://aws.amazon.com/vpc/
• Amazon VPC documentation: https://aws.amazon.com/documentation/vpc/
• AWS Direct Connect product page: https://aws.amazon.com/directconnect/
• Getting started with AWS Direct Connect: https://aws.amazon.com/directconnect/getting-started/
• AWS Security Center: https://aws.amazon.com/security/
• Amazon VPC Connectivity Options: https://media.amazonwebservices.com/AWS_Amazon_VPC_Connectivity_Options.pdf
• AWS VPN CloudHub: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPN_CloudHub.html
• AWS Security Best Practices: https://aws.amazon.com/whitepapers/aws-security-best-practices/
• Architecting for the Cloud: Best Practices: http://media.amazonwebservices.com/AWS_Cloud_Best_Practices.pdf

Document Revisions
December 2018 – Added IPv6 features. Removed references to EC2-Classic. Added AWS DMS, AWS CodePipeline, and Amazon GuardDuty. Changed the multiple-subnet strategy to multiple VPCs with VPC peering and CloudHub. Removed the recommendation to change credentials regularly (no longer NIST-recommended); added password complexity and MFA.
December 2013 – Major revision to reflect new functionality of Amazon VPC. Added new use cases for Amazon VPC. Added the sections "Understanding Amazon Virtual Private Cloud" and "Best Practices for Using Amazon VPC".
January 2010 – First publication,General,consultant,Best Practices
Federal_Financial_Institutions_Examination_Council_FFIEC_Audit_Guide,"Archived
Page 1 of 23
Federal Financial Institutions Examination Council (FFIEC) Audit Guide
October 2015

THIS WHITEPAPER HAS BEEN ARCHIVED. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Amazon Web Services – FFIEC Audit Guide, October 2015, Page 2 of 23

© 2015 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Executive Summary 4
Approaches for using AWS Audit Guides 4
Examiners 4
AWS Provided Evidence 4
FFIEC Audit Checklist for AWS 5
1 Governance 5
2 Network Configuration and Management 7
3 Asset Configuration and Management 9
4 Logical Access Control 10
5 Data Encryption 13
6 Security Logging and Monitoring 13
7 Security Incident Response 15
8 Disaster Recovery 15
9 Inherited Controls 17

Executive Summary
This AWS Federal Financial Institutions Examination Council (FFIEC) audit guide has been designed by AWS to guide financial institutions that are subject to audits by members of the FFIEC on the use and security architecture of AWS services. This document is intended for use by AWS financial institution customers, their examiners, and audit advisors to understand the scope of AWS services and to provide guidance for implementation and examination when using AWS services as part of the financial institution's environment for customer data.

Approaches for using AWS Audit Guides

Examiners
When assessing organizations that use AWS services, it is critical to understand the "Shared Responsibility" model between AWS and the customer. This audit guide organizes the requirements into common security program controls and control areas, and each control references the applicable audit requirements. For more detail on each control, the applicable regulatory requirements, examiner activities, and AWS evidence of compliance, please refer to the Coalfire FFIEC Compliance on AWS whitepaper.

In general, AWS services should be treated similarly to the on-premises infrastructure services that customers have traditionally used for their operating services and applications. Policies and processes that apply to on-premises devices and servers should also apply when those capabilities are supplied by AWS services. Controls pertaining solely to policy or procedure are generally entirely the customer's responsibility. Similarly, the management of access to AWS services, whether via the AWS Console or the command line API, should be treated like other privileged administrator access.

AWS Provided Evidence
AWS services are regularly assessed against industry standards and requirements. To support a variety of industries, including federal agencies, retailers, international organizations, health care providers, and financial institutions, AWS elects to have a variety of assessments performed against its services and infrastructure. For a complete list and information on assessments performed by third parties, refer to the AWS Compliance website.

FFIEC Audit Checklist for AWS
The AWS compliance program ensures that AWS services are regularly audited against applicable standards. Some control statements may be satisfied by the customer's use of AWS (for instance, physical access to sensitive data). However, most controls either have responsibilities shared between AWS and the customer or are entirely the customer's responsibility. This audit checklist describes the customer's responsibilities for compliance with the FFIEC IT Handbook when utilizing AWS services.

1 Governance
Definition: Governance includes the elements required to provide senior management assurance that its direction and intent are reflected in the security posture of the customer. This is achieved by utilizing a structured approach to implementing an information security program. For the purposes of this audit plan, it means understanding which AWS services the customer has purchased, what kinds of systems and information the customer plans to use with the AWS service, and what policies, procedures, and plans apply to these services.

Major audit focus: Understand what AWS services and resources are being used by the customer, and ensure that the customer's security or risk management program has taken their use of the public cloud environment into account.

Audit approach: As part of this audit, determine who within the customer's organization is an AWS account owner and
resource owner, and what kinds of AWS services and resources they are using. Verify that the customer's policies, plans, and procedures include cloud concepts, and that cloud is included in the scope of the customer's audit program.

Governance Checklist

IT Security Program and Policy
• Assess the security policy and program related to the use of AWS services. Ensure that the program is properly documented for oversight, changes in service, IT security policies, incident reporting, and security roles.
• Verify that there is appropriate approval for the use of AWS and that the services are appropriately addressed within the information security program.
• Confirm that an employee is assigned as the authority for the use and security of AWS services, and that there are defined roles for those noted key roles.
• Verify that any customer changes in AWS services are reflected in the security program.
• Review the customer's IT security policies and ensure that they cover AWS services and take size and complexity into consideration.
• Review management oversight and ensure that management assesses and approves the use and configuration of AWS services.
• Ensure the customer has integrated AWS services into their SIEM tools and has a process for monitoring and addressing non-compliance.
• Review the customer's use of any AWS reporting tools, such as Amazon CloudWatch and AWS Trusted Advisor.
• Verify that there is a policy in place for the appropriate disclosure of client information within AWS.

Information Security Oversight
• Verify that the customer has conducted oversight and annual IT assessments, including any remediations related to AWS services. Include a review of management and Board of Directors (BOD) oversight.

Risk Assessment
• Assess and review the customer's risk assessment for AWS services, including adherence to the customer's risk assessment policy and procedures, inclusion of AWS-deployed data in the customer's risk assessment, and BOD oversight.
• Verify that AWS services were included in the risk assessment and privacy impact assessment.

Personnel Controls
• Verify that there are proper segregation of duties, background checks, and training conducted for IT operations staff.
• Verify that the level of access to AWS services is comparable to the level of secured information, with comprehensive screening including signed statements of understanding for non-disclosure.

Systems Development Lifecycle
• Verify that the use of AWS development tools is documented and follows the customer's SDLC process, including security requirements and configuration changes.

Service Provider Oversight
• Ensure that the customer documents and follows a defined process to evaluate, onboard, and maintain security safeguards, including for AWS.
• Ensure that internal procedures cover onboarding, shared security responsibility, and the communication process with AWS.
• Verify that the customer's contract with AWS includes a requirement to implement and maintain privacy and security safeguards.
• Verify adherence to appropriate due diligence standards, security program management, and monitoring of service capabilities and reliability.

Documentation and Inventory
• Verify that the customer's AWS network is fully documented and that all AWS critical systems are included in inventory documentation, with limited access to this documentation.
• Review AWS Config reports for AWS resource inventory, configuration history, and configuration change notifications (Example API Call 1).

2 Network Configuration and Management
Definition: Network management in AWS is very similar to network management on premises, except that network components such as firewalls and routers are virtual. Customers must ensure that their network architecture follows the security requirements of their organization, including the use of DMZs to separate public and private (untrusted and trusted) resources, the segregation of resources using subnets and routing tables, the secure configuration of DNS, additional transmission protection in the form of a VPN, and limits on inbound and outbound traffic. Customers who must perform monitoring of their network can do so using host-based intrusion detection and monitoring systems.

Major audit focus: Missing or inappropriately configured security controls related to external access and network security that could result in a security exposure.

Audit approach: Understand the network architecture of the customer's AWS resources and how the resources are configured to allow external access from the public Internet and the customer's private networks. Note: AWS Trusted Advisor can be leveraged to validate and verify AWS configuration settings.

Network Configuration and Management Checklist

Network Controls
• Identify how network segmentation is applied within the customer's AWS environment (Example API Calls 2-5).
• Review the customer's overall infrastructure, including the use of AWS services, to ensure there is no single point of failure.
• Review the AWS Security Group implementation, AWS Direct Connect, and Amazon VPN configuration for proper implementation of network segmentation, and review the ACL and firewall settings for AWS services.
• Ensure that the customer's procedures for governing the daily activities of personnel include the administration of AWS services.
• Confirm the customer has established appropriate logging and monitoring for Amazon EC2 instances to ensure that any possible security-related events are identified.
• Verify that the customer has a procedure for granting employees remote internet or VPN access for AWS Console access and remote access to Amazon EC2 networks and systems.

Malicious Code Controls
• Assess the customer's implementation and management of antimalware for Amazon EC2 instances in a similar manner as with physical systems.

Firewall Controls
• Review the
customer's defined process for firewall rules management within AWS, including Security Group configuration changes, VPN configuration, and management approval, along with maintenance of the documentation of approvals.
• Verify that the host-based or other firewall configuration is properly hardened.
• Verify whether AWS Security Groups are the primary firewall solution. If other firewall technologies are used, the examiner should review the technology to ensure that it is properly configured to hide internal addresses, block malicious code, and has logging enabled.
• Ensure AWS Security Group administration is performed from secure workstations and via HTTPS for either the AWS Console or the command line API. Additionally, ensure that multi-factor authentication is enabled for any user that is assigned general administrative rights or rights to manage security groups within the AWS Console or through command line APIs.
• Verify internal policies for restricting AWS Security Group management to select IT staff.
• Verify that the customer's training records include AWS security topics such as Amazon IAM usage, EC2 Security Groups, and remote access to EC2 instances.

3 Asset Configuration and Management
Definition: AWS customers are responsible for maintaining the security of anything they install on or connect to their AWS resources. Secure management of the customer's AWS resources means knowing what resources the customer is using (asset inventory), securely configuring the guest OS and applications on those resources (secure configuration settings, patching, and antimalware), and controlling changes to those resources (change management).

Major audit focus: Customers must manage their operating system and application security vulnerabilities to protect the security, stability, and integrity of the asset.

Audit approach: Validate that the customer's OS and applications are designed, configured, patched, and hardened in accordance with the customer's policies, procedures, and standards. All OS and application management practices can be common between on-premises and AWS systems and services.

Asset Configuration and Management Checklist

Change Management Controls
• Ensure the customer's use of AWS services follows the same change control processes as internal services, including testing, back-out procedures, training, and logs related to changes.
• Verify that AWS services are included within the customer's internal patch management process.
• Ensure that patch management strategies include establishing version control of all operating systems, Amazon Machine Images, and application software used within the AWS service environment.
• Ensure that policies and procedures related to client information within AWS are secured in accordance with the customer's IT Security Policies.

Operating System Access
• Ensure the customer's internal policies and procedures call for restricting and monitoring privileged access to AWS services and Amazon EC2 instances to designated administrators.
• Review the Amazon EC2 instances in use within the customer's organization.
• If AWS monitoring tools such as Amazon CloudWatch are used, review their use for logical security.

Application Access Controls
• Review controls for applications implemented on Amazon EC2 instances to ensure they are appropriate for the risk of the application and the needs of the customer's users.
• Ensure that authentication and authorization methods, application access controls, and assessment event logging for applications implemented on Amazon EC2 instances are conducted in a similar manner as with physical systems.

Database Security Controls
• Review access and data modification activity for Amazon RDS or customer databases in a similar manner as with internal systems.
• Determine whether production data is utilized in a test environment using AWS database services, and if so,
ensure that the security policies and controls are configured to match production controls.

4 Logical Access Control
Definition: Logical access controls determine not only who or what can have access to a specific system resource, but also the type of actions that can be performed on the resource (read, write, etc.). As part of controlling access to AWS resources, users and processes must present credentials to confirm that they are authorized to perform specific functions or have access to specific resources. The credentials required by AWS vary depending on the type of service and the access method, and include passwords, cryptographic keys, and certificates. Access to AWS resources can be enabled through the AWS account, through individual AWS Identity and Access Management (IAM) user accounts created under the AWS account, or through identity federation with the customer's corporate directory (single sign-on). AWS Identity and Access Management (IAM) enables a customer's users to securely control access to AWS services and resources. Using IAM, a customer can create and manage AWS users and groups, and use permissions to allow and deny access to AWS resources.

Major audit focus: This portion of the audit focuses on identifying how users and permissions are set up in AWS for the services being used by the customer. It is also important to ensure that the credentials associated with all of the customer's AWS accounts are being managed securely by the customer.

Audit approach: Validate that permissions for AWS assets are being managed in accordance with the customer's internal policies, procedures, and processes. Note: AWS Trusted Advisor can be leveraged to validate and verify IAM user, group, and role configurations.

Logical Access Control Checklist

Access Management, Authentication, and Authorization
• Ensure there are internal policies and procedures for managing access to AWS services and Amazon EC2 instances.
• Federated Access Controls: Ensure that the mechanisms properly apply internal role assignments to AWS permissions, and understand the processes and methods used to authorize access levels, to ensure a least-privilege model has been implemented.
• Native AWS Access Controls: Compare Amazon IAM roles and user assignments to functional roles and responsibilities. Temporary credentials should also be considered, to ensure that these credentials are only assigned limited privileges (Example API Calls 6-7).
• Instance Access Controls: For Amazon EC2 instances, review implemented roles and assignments based on the local operating system's access control mechanisms and/or any federation that the customer has established for managing access to the EC2 virtual machines.
• Review the records for granting access, the type of access control in use within the customer's organization as it relates to AWS services, and user account policies and password complexity, and validate that they extend to AWS services.
• Ensure that multi-factor authentication is enabled for users and that no shared accounts exist as they relate to AWS services.

Remote Access
• Ensure internal policies and procedures are followed for managing remote access to AWS services and Amazon EC2 instances. Note: all access to AWS and Amazon EC2 instances is "remote access" by definition, unless Direct Connect has been configured.
• Review access logging and the Amazon IAM configuration. Amazon IAM accounts used for network access should be configured for multi-factor authentication (Example API Call 8).
• Review whether security groups are configured to allow direct access to common management ports for Amazon EC2 instances (Example API Call 9).
• Ensure that multi-factor authentication mechanisms and encryption configurations have been implemented on the system in a similar manner as with physical systems.

Personnel Control and Segregation of Duties
• Ensure that the IT staff are aware of the information security
program applicable to AWS services and how it relates to their job functions.
• Review the customer's type of access control in use within their organization as it relates to AWS services:
  – Federated Access Controls: Review internal role assignments to AWS permissions and understand the processes and methods used to authorize them.
  – Native AWS Access Controls: Compare Amazon IAM roles and user assignments to functional roles and responsibilities (Example API Call 10).
  – Instance Access Controls: Review implemented roles and assignments based on the local operating system's access control mechanisms and/or any federation that the customer has established for managing access to the EC2 virtual machines (Example API Call 11).
• Verify internal policies and procedures for managing access to AWS services and Amazon EC2 instances. Individuals monitoring security administrator logs should function independently from the individuals responsible for operations administration.
• Verify that information security awareness training includes AWS security topics such as Amazon IAM usage, EC2 Security Groups, and remote access to EC2 instances.

5 Data Encryption
Definition: Data stored in AWS is secure by default; only the owners of AWS resources have access to the resources they create. However, some customers with sensitive data may require additional protection, which they can enable by encrypting the data when it is stored on AWS. Only the Amazon S3 service currently provides an automated server-side encryption function, in addition to allowing customers to encrypt data on the client side before it is stored. For other AWS data storage options, the customer must perform the encryption of the data.

Major audit focus: Data at rest should be encrypted in the same way the customer protects on-premises data. Also, many security policies consider the Internet an insecure communications medium and would require the encryption of data in transit. Improper protection of customers' data could create a security exposure for the customer.

Audit approach: Understand where the data resides and validate the methods used to protect the data at rest and in transit (also referred to as "data in flight"). Note: AWS Trusted Advisor can be leveraged to validate and verify permissions and access to data assets.

Data Encryption Checklist

Encryption Controls
• Ensure there are appropriate controls in place to protect confidential customer information in transit while using AWS services.
• Review the methods for connecting to the AWS Console, the management API, S3, RDS, and Amazon EC2 VPN for enforcement of encryption.
• Review internal policies and procedures for key management, including for AWS services and Amazon EC2 instances (Example API Calls 12-14).

6 Security Logging and Monitoring
Definition: Audit logs record a variety of events occurring within a customer's information systems and networks. Audit logs are used to identify activity that may impact the security of those systems, whether in real time or after the fact, so the proper configuration and protection of the logs is important.

Major audit focus: Systems must be logged and monitored just as they are for on-premises systems. If AWS systems are not included in the overall company security plan, critical systems may be omitted from the scope of monitoring efforts.

Audit approach: Validate that audit logging is being performed on the guest OS and critical applications installed on the customer's Amazon EC2 instances, and that the implementation is in alignment with the customer's policies and procedures, especially as it relates to the storage, protection, and analysis of the logs.

Security Logging and Monitoring Checklist

Logging, Assessment Trails, and Monitoring
• Review the customer's logging and monitoring policies and procedures, ensure that they include AWS services, and confirm that they address segregation of duties, security, and access
authority.
- Verify that there is a process to monitor service configuration changes (Example API Call 15).
- Verify that logging mechanisms are configured to send logs to a centralized server, and ensure that for Amazon EC2 instances the proper type and format of logs are retained in a similar manner as with physical systems.
- For customers using Amazon CloudWatch, review the customer's process and record their use of network monitoring. Specifically, review VPC Flow Log events (Example API Call 16).

Checklist Item: Intrusion Detection and Response
- Review host-based IDS on Amazon EC2 instances in a similar manner as with physical systems.
- Review AWS-provided evidence on where information on intrusion detection processes can be reviewed.
- Review the customer's use and configuration of Amazon CloudWatch and how logs are stored and protected.

7. Security Incident Response

Definition: Under a Shared Responsibility Model, security events may be monitored by the interaction of both AWS and AWS customers. AWS detects and responds to events impacting the hypervisor and the underlying infrastructure; customers manage events from the guest operating system up through the application. The customer should understand incident response responsibilities and adapt existing security monitoring, alerting, and audit tools and processes for their AWS resources.

Major audit focus: Security events should be monitored regardless of where the assets reside. The auditor can assess the consistency of deployed incident management controls across all environments and validate full coverage through testing.

Audit approach: Assess the existence and operational effectiveness of the incident management controls for systems in the AWS environment.

Security Incident Response Checklist

Checklist Item: Incident Reporting
Ensure the incident response plan and policy includes appropriate AWS reporting processes as well as communication procedures between the customer and AWS.
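For illustration only, the alarm-coverage review that this checklist's "Example API Call" references point to (Appendix C, items 17 and 18) could be scripted along the following lines. This is a sketch, not a prescribed procedure; the instance ID is a placeholder, and an auditor would run such commands against the account under review using read-only credentials.

```shell
# Illustrative sketch: review CloudWatch alarm coverage with the AWS CLI
# (see Example API Calls 17 and 18 in Appendix C).
# The instance ID below is a placeholder value.

# List every CloudWatch alarm configured in the current region
aws cloudwatch describe-alarms

# Review alarms tied to a specific metric on a specific resource
aws cloudwatch describe-alarms-for-metric \
    --metric-name CPUUtilization \
    --namespace AWS/EC2 \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0
```

The returned alarm set can then be compared against the customer's documented monitoring requirements.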
- Ensure the customer is leveraging existing incident monitoring tools as well as available AWS tools to monitor the use of AWS services (Example API Calls 17–18).
- Verify that the customer's use of AWS services aligns with and can support their internally defined thresholds.
- Verify that the Incident Response Plan undergoes an annual review and that changes related to AWS are made as needed.
- Note whether the Incident Response Plan has a customer notification procedure.

8. Disaster Recovery

Definition: AWS provides a highly available infrastructure that allows customers to architect resilient applications and quickly respond to major incidents or disaster scenarios. However, customers must ensure that they configure systems that require high availability or quick recovery times to take advantage of the multiple Regions and Availability Zones that AWS offers.

Major audit focus: An unidentified single point of failure and/or inadequate planning to address disaster recovery scenarios could result in a significant impact to the customer. While AWS provides service level agreements (SLAs) at the individual instance/service level, these should not be confused with a customer's business continuity (BC) and disaster recovery (DR) objectives, such as the Recovery Time Objective (RTO) and Recovery Point Objective (RPO). The BC/DR parameters are associated with solution design; a more resilient design would often utilize multiple components in different AWS Availability Zones and involve data replication.

Audit approach: Understand the DR strategy for the customer's environment and determine the fault-tolerant architecture employed for the customer's critical assets. Note: AWS Trusted Advisor can be leveraged to validate and verify some aspects of the customer's resiliency capabilities.

Disaster Recovery Checklist

Checklist Item: Business Continuity Planning (BCP)
Ensure the customer has a comprehensive BCP that includes AWS services.
- Within the
Plan, ensure that AWS is included in the emergency preparedness and crisis management elements, senior manager oversight responsibilities, and the testing plan.
- Ensure the customer has a recovery plan that includes the proper use of AWS Availability Zones.
- Review the annual BCP test for AWS services.

Checklist Item: Backup and Storage Controls
Review the use of AWS services for offsite backup and ensure it is consistent with the customer's policies and procedures and follows AWS best practices.
- Review the inventory of data backed up to AWS services as offsite backup.
- Ensure policies and procedures address scalability as it relates to AWS services.
- Conduct a test of backup data stored in AWS services (Example API Calls 19–21).

9. Inherited Controls

Definition: Amazon has many years of experience in designing, constructing, and operating large-scale data centers. This experience has been applied to the AWS platform and infrastructure. AWS data centers are housed in nondescript facilities. Physical access is strictly controlled, both at the perimeter and at building ingress points, by professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff. AWS only provides data center access and information to employees and contractors who have a legitimate business need for such privileges. When an employee no longer has a business need for these privileges, his or her access is immediately revoked, even if he or she continues to be an employee of Amazon or Amazon Web Services. All physical access to data centers by AWS employees is logged and audited routinely.

Major audit focus: The purpose of this audit section is to demonstrate that the customer conducted the
appropriate due diligence in selecting service providers.

Audit approach: Understand how the customer can request and evaluate third-party attestations and certifications in order to gain reasonable assurance of the design and operating effectiveness of control objectives and controls.

Inherited Controls Checklist

Checklist Item: Physical Security & Environmental Controls
Review the AWS-provided evidence for details on where information can be reviewed about the intrusion detection processes that AWS manages as part of its physical security controls.

Conclusion

There are many third-party tools that can assist you with your assessment. Because AWS customers have full control of their operating systems, network settings, and traffic routing, a majority of the tools used in-house can also be used to assess and audit assets in AWS. A useful tool provided by AWS is AWS Trusted Advisor, which draws upon best practices learned from AWS's aggregated operational history of serving hundreds of thousands of AWS customers. AWS Trusted Advisor performs several fundamental checks of your AWS environment and makes recommendations when opportunities exist to save money, improve system performance, or close security gaps. This tool may be leveraged to perform some of the audit checklist items to enhance and support your organization's auditing and assessment processes.

Appendix A: References and Further Reading

1. Amazon Web Services Risk and Compliance Whitepaper – https://d0.awsstatic.com/whitepapers/compliance/AWS_Risk_and_Compliance_Whitepaper.pdf
2. AWS OCIE Cybersecurity Workbook – https://d0.awsstatic.com/whitepapers/compliance/AWS_SEC_Workbook.pdf
3. Using Amazon Web Services for Disaster Recovery – http://d36cz9buwru1tt.cloudfront.net/AWS_Disaster_Recovery.pdf
4. Identity federation sample application for an Active Directory use case – http://aws.amazon.com/code/1288653099190193
5.
Single Sign-On with Windows ADFS to Amazon EC2 .NET Applications – http://aws.amazon.com/articles/3698?_encoding=UTF8&queryArg=searchQuery&x=20&y=25&fromSearch=1&searchPath=all&searchQuery=identity%20federation
6. Authenticating Users of AWS Mobile Applications with a Token Vending Machine – http://aws.amazon.com/articles/4611615499399490?_encoding=UTF8&queryArg=searchQuery&fromSearch=1&searchQuery=Token%20Vending%20machine
7. Client-Side Data Encryption with the AWS SDK for Java and Amazon S3 – http://aws.amazon.com/articles/2850096021478074
8. AWS Command Line Interface – http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html
9. Amazon Web Services Acceptable Use Policy – http://aws.amazon.com/aup/

Appendix B: Glossary of Terms

API: Application Programming Interface (API) in the context of AWS. Customer access points are called API endpoints; they allow secure HTTP access (HTTPS), which lets you establish a secure communication session with your storage or compute instances within AWS. AWS provides SDKs and a CLI reference that allow customers to programmatically manage AWS services via API.

Authentication: Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be.

Availability Zone: Amazon EC2 locations are composed of Regions and Availability Zones. Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive, low-latency network connectivity to other Availability Zones in the same Region.

EC2: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.

Hypervisor: A hypervisor, also called a Virtual Machine Monitor (VMM), is software/hardware platform virtualization software that allows multiple operating systems to run on a host computer
concurrently.

IAM: AWS Identity and Access Management (IAM) enables a customer to create multiple users and manage the permissions for each of these users within their AWS account.

Object: The fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The data portion is opaque to Amazon S3. The metadata is a set of name-value pairs that describe the object. These include some default metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type. The developer can also specify custom metadata at the time the object is stored.

Service: Software or computing ability provided across a network (e.g., EC2, S3, VPC).

Appendix C: API Calls

The AWS Command Line Interface is a unified tool to manage your AWS services. Read more: http://docs.aws.amazon.com/cli/latest/reference/index.html#cli-aws and http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html

1. List all resources with tags:
aws ec2 describe-tags
http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-tags.html

2. Review VPNs:
aws ec2 describe-customer-gateways
aws ec2 describe-vpn-connections

3. Review Direct Connect:
aws directconnect describe-connections
aws directconnect describe-interconnects
aws directconnect describe-connections-on-interconnect
aws directconnect describe-virtual-interfaces

4. Review VPCs, Subnets, and Routing Tables:
aws ec2 describe-vpcs
aws ec2 describe-subnets
aws ec2 describe-route-tables

5. Review Security Groups and Network ACLs:
aws ec2 describe-network-acls
aws ec2 describe-security-groups

6. List IAM Roles/Groups/Users:
aws iam list-roles
aws iam list-groups
aws iam list-users

7. List all IAM Policies:
aws iam list-policies

8. List IAM users with MFA:
aws iam list-mfa-devices

9. List Security Groups:
aws ec2 describe-security-groups

10. List policies assigned to Groups/Roles/Users:
aws iam list-attached-role-policies --role-name XXXX
aws iam
list-attached-group-policies --group-name XXXX
aws iam list-attached-user-policies --user-name XXXX
(where XXXX is a resource name within the customer's AWS account)

11. Review Amazon EC2 instances launched with roles:
a. Identify the Amazon EC2 role ARN: aws iam list-roles
b. Filter Amazon EC2 instances by ARN:
aws ec2 describe-instances --filters "Name=iam-instance-profile.arn,Values=arn:aws:iam::account-id:instance-profile/role-name"

12. List KMS keys:
aws kms list-aliases

13. List key rotation policy:
aws kms get-key-rotation-status --key-id XXX
(where XXX = a key ID in the AWS account)

14. List EBS volumes encrypted with KMS keys:
aws ec2 describe-volumes

15. Confirm the AWS Config service is enabled within a region:
aws configservice get-status --region XXXXX
(where XXXXX = the targeted AWS Region, e.g., us-east-1)

16. Examine Flow Log current status:
aws ec2 describe-flow-logs
a. View VPC Flow Log events in CloudWatch, taking the log group name from the output of the above API call:
aws logs describe-log-streams --log-group-name my-logs
aws logs get-log-events --log-group-name my-logs --log-stream-name 20150601

17. Review all CloudWatch alarms:
aws cloudwatch describe-alarms

18. Review alarms associated with a specific resource and metric:
aws cloudwatch describe-alarms-for-metric --metric-name CPUUtilization --namespace AWS/EC2 --dimensions Name=InstanceId,Value=XXXXX
(where XXXXX = an EC2 instance ID)

19. Create a snapshot/backup of an EBS volume:
aws ec2 create-snapshot --volume-id XXXXXXX
(where XXXXXXX = the ID of a volume within the AWS account)

20. Confirm the snapshot/backup completed:
aws ec2 describe-snapshots --filters "Name=volume-id,Values=XXXXXX"

21. Create a volume from a snapshot (restoring a backup):
aws ec2 create-volume --availability-zone XXXX --snapshot-id YYYY
(where XXXX is the Availability Zone where you want the new volume created, and YYYY is the snapshot ID you want to restore from)

This paper has been archived. For the latest technical content,
refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

File Gateway for Hybrid Cloud Storage Architectures
Overview and Best Practices for the File Gateway Configuration of the AWS Storage Gateway Service
March 2019

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents AWS's current product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS's products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. AWS's responsibilities and liabilities to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
File Gateway Architecture
File to Object Mapping
Read/Write Operations and Local Cache
Choosing the Right Cache Resources
Security and Access Controls Within a Local Area Network
Monitoring Cache and Traffic
File Gateway Bucket Inventory
Amazon S3 and the File Gateway
File Gateway Use Cases
Cloud Tiering
Hybrid Cloud Backup
Conclusion
Contributors
Further Reading
Document Revisions

Abstract

Organizations are looking for ways to reduce their physical data center footprints, particularly for
storage arrays used as secondary file backup or for on-demand workloads. However, providing data services that bridge private data centers and the cloud comes with a unique set of challenges. Traditional data center storage services rely on low-latency network attached storage (NAS) and storage area network (SAN) protocols to access storage locally. Cloud-native applications are generally optimized for API access to data in scalable and durable cloud object storage, such as Amazon Simple Storage Service (Amazon S3). This paper outlines the basic architecture and best practices for building hybrid cloud storage environments using AWS Storage Gateway in a file gateway configuration to address key use cases such as cloud tiering, hybrid cloud backup, and distribution and cloud processing of data generated by on-premises applications.

Introduction

Organizations are looking for ways to reduce their physical data center infrastructure. A great way to start is by moving secondary or tertiary workloads, such as long-term file retention and backup and recovery operations, to the cloud. In addition, organizations want to take advantage of the elasticity of cloud architectures and features to access and use their data in new, on-demand ways that a traditional data center infrastructure can't support.

AWS Storage Gateway has multiple gateway types, including a file gateway that provides low-latency Network File System (NFS) and Server Message Block (SMB) access to Amazon Simple Storage Service (Amazon S3) objects from on-premises applications. At the same time, customers can access that data from any Amazon S3 API-enabled application. Configuring AWS Storage Gateway as a file gateway enables hybrid cloud storage architectures in use cases such as archiving, on-demand bursting of
workloads, and backup to the AWS Cloud. Individual files that are written to Amazon S3 using the file gateway are stored as independent objects. This provides highly durable, low-cost, flexible storage with virtually infinite capacity. Files are stored as objects in Amazon S3 in their original format without any proprietary modification. This means that data is readily available to data analytics and machine learning applications and services that natively integrate with Amazon S3 buckets, such as Amazon EMR, Amazon Athena, or Amazon Transcribe. It also allows for storage management through native Amazon S3 features such as lifecycle policies, analytics, and cross-region replication (CRR).

A file gateway communicates efficiently between private data centers and AWS: traditional NAS protocols (SMB and NFS) are translated to object storage API calls. This makes the file gateway an ideal component for organizations looking for tiered storage of file or backup data with low-latency local access and durable storage in the cloud.

File Gateway Architecture

A file gateway provides a simple solution for presenting one or more Amazon S3 buckets and their objects as a mountable NFS or SMB file share to one or more clients on premises. The file gateway is deployed as a virtual machine in VMware ESXi or Microsoft Hyper-V environments on premises, or in an Amazon Elastic Compute Cloud (Amazon EC2) instance in AWS. A file gateway can also be deployed in data center and remote office locations on a Storage Gateway hardware appliance. When deployed, the file gateway provides a seamless connection between on-premises NFS (v3.0 or v4.1) or SMB (v1 or v2) clients (typically applications) and Amazon S3 buckets hosted in a given AWS Region. The file gateway employs a local read/write cache to provide low-latency
access to data for file share clients in the same local area network (LAN) as the file gateway. A bucket share consists of a file share hosted from a file gateway across a single Amazon S3 bucket. The file gateway virtual machine appliance currently supports up to 10 bucket shares.

Figure 1: Basic file gateway architecture

Here are the components of the file gateway architecture shown in Figure 1:
1. Clients access objects as files using an NFS or SMB file share exported through an AWS Storage Gateway in the file gateway configuration.
2. Expandable read/write cache for the file gateway.
3. File gateway virtual appliance.
4. Amazon S3, which provides persistent object storage for all files that are written using the file gateway.

File to Object Mapping

After deploying, activating, and configuring the file gateway, one or more bucket shares can be presented to clients that support the NFS v3 or v4.1 protocols, or mapped to a share via the SMB v1 or v2 protocols on the local LAN. Each share (or mount point) on the gateway is paired to a single bucket, and the contents of the bucket are available as files and folders in the share. Writing an individual file to a share on the file gateway creates an identically named object in the associated bucket. All newly created objects are written to the Amazon S3 Standard, Amazon S3 Standard – Infrequent Access (S3 Standard – IA), or Amazon S3
Amazon S3 versus a traditional file system is the way in which granular permi ssions and metadata are implemented and stored Access to files stored directly in Amazon S3 is secured by policies stored in Amazon S3 and AWS Identity and Access Management (IAM) All other attributes such as storage class and creation date are stored in a given object’s metadata When a file is accessed over NFS or SMB the file permissions folder permissions and attributes are stored in the file system To reliably persist file permissions and attributes the file gateway stores this information as part of Amazon S3 object metadata If the permissions are changed on a file over NFS or SMB the gateway modifies the metadata of the associated objects that are stored in Amazon S3 to reflect the changes Custom default UNIX permissions are defined for all existing S3 objects within a bucket when a share is created from the AWS Management Console or using the file gateway API This feature lets you create NFS or SMB enabled shares from buckets with existing content without having to manually assign permissions after you create the share The following is an example of a file that is stored in a share bucket and is listed from a Linux based client that is mounting the share bucket over NFS The example shows that the file “file1txt” has a mod ification date and standard UNIX file permissions [e2user@host]$ ls l /media/filegateway1/ total 1 rwrwr 1 ec2user ec2 user 36 Mar 15 22:49 file1txt [e2user@host]$ This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 4 The following example shows the output from the head object on Amazon S3 It shows the same file from the perspective of the object that is stored in Amazon S3 Note that the permissions and time stamp in the previous example are stored durably as metadata for the object [e2user@host]$ 
aws s3api head-object --bucket filegateway1 --key file1.txt
{
    "AcceptRanges": "bytes",
    "ContentType": "application/octet-stream",
    "LastModified": "Wed, 15 Mar 2017 22:49:02 GMT",
    "ContentLength": 36,
    "VersionId": "93XCzHcBUHBSg2yP8yKMHzxUumhovEC",
    "ETag": "\"0a7fb5dbb1ae1f6a13c6b4e4dcf54977-1\"",
    "ServerSideEncryption": "AES256",
    "Metadata": {
        "file-group": "500",
        "user-agent-id": "sgw-7619FB1F",
        "file-owner": "500",
        "aws-sgw": "57c3c3e92a7781f868cb10020b33aa6b2859d58c868190661bcceae87f7b96f1",
        "file-mtime": "1489618141421",
        "file-ctime": "1489618141421",
        "user-agent": "aws-storage-gateway",
        "file-permissions": "0664"
    }
}
[ec2-user@host]$

Read/Write Operations and Local Cache

As part of a file gateway deployment, dedicated local storage is allocated to provide a read/write cache for all hosted share buckets. The read/write cache greatly improves response times for on-premises file (NFS/SMB) operations. The local cache holds both recently written and recently read content, and it does not proactively evict data while the cache disk has free space. However, when the cache is full, AWS Storage Gateway evicts data based on a least recently used (LRU) algorithm, so recently accessed data remains available for reads and write operations are not impeded.

Read Operations (Read-Through Cache)

When an NFS client performs a read request, the file gateway first checks the local cache for the requested data. If the data is not in the cache, the gateway retrieves the data from Amazon S3 using Range GET requests, minimizing the data transferred over the Internet while repopulating the read cache on behalf of the client.
1. The NFS/SMB client performs a read request on part of a given file.
2. The file gateway first checks to see if the required bytes are cached
locally.
3. If the bytes are not in the local cache, the file gateway performs a byte-range GET on the associated S3 object.

Figure 3: File gateway read operations

Write Operations (Write-Back Cache)

When a file is written to the file gateway over NFS/SMB, the gateway first commits the write to the local cache. At this point, the write success is acknowledged to the local NFS/SMB client, taking full advantage of the low latency of the local area network. After the write cache is populated, the file is transferred to the associated Amazon S3 bucket asynchronously, so local performance is not tied to the speed of Internet transfers. When an existing file is modified, the file gateway transfers only the newly written bytes to the associated Amazon S3 bucket, using Amazon S3 API calls to construct a new object from the previous version in combination with the newly uploaded bytes. This reduces the amount of data that must be transferred when clients modify existing files within the file gateway.
1. The file share client performs many parallel writes to a given file.
2. The file gateway appliance acknowledges writes synchronously and aggregates writes locally.
3. The file gateway appliance uses S3 multipart upload to send new writes (bytes) to S3.
4. A new object is constructed in S3 from a combination of new uploads and byte ranges from the previous version of the object.

Figure 4: File gateway write operations

Choosing the Right Cache Resources

When configuring a file gateway VM on a host machine, you can allocate disks for the local cache. Selecting a cache size that can sufficiently hold the active working set (e.g., a database backup file) provides optimal performance for file share clients. Additionally, splitting the cache across multiple disks maximizes throughput by parallelizing access
to storage, resulting in faster reads and writes. When available for your on-premises gateway, we also recommend using SSD or ephemeral disks, which can provide write and read (cache hit) throughputs of up to 500 MB/s.

Security and Access Controls Within a Local Area Network

When you create a mount point (share) on a deployed gateway, you select a single Amazon S3 bucket to be the persistent object storage for files and associated metadata. Default UNIX permissions are defined as part of the configuration of the mount point. These permissions are applied to all existing objects in the Amazon S3 bucket. This process ensures that clients that access the mount point adhere to file- and directory-level security for existing content. In addition, an entire mount point and its associated Amazon S3 content can be protected on the LAN by limiting mount access to individual hosts or a range of hosts. For NFS file shares, this limitation is defined by using a Classless Inter-Domain Routing (CIDR) block or individual IP addresses. For SMB file shares, you can control access using Active Directory (AD) domains or authenticated guest access. You can further limit access to selected AD users and groups, allowing only specified users (or users in the specified groups) to map the file share as a drive on their Microsoft Windows machines.

Monitoring Cache and Traffic

As workloads or architectures evolve, the cache and Internet requirements associated with a given file gateway deployment can change over time. To give visibility into resource use, the file gateway provides statistical information in the form of Amazon CloudWatch metrics. The metrics cover cache consumption, cache hits/misses, data transfer, and read/write activity. For more information, see Monitoring Your File
Share.

File Gateway Bucket Inventory

To reduce both latency and the number of Amazon S3 operations when performing list operations, the file gateway stores a local bucket inventory that contains a record of all recently listed objects. The bucket inventory is populated on demand as the file share clients list parts of the file share for the first time. The file gateway updates inventory records only when the gateway itself modifies, deletes, or creates new objects on behalf of the clients. The file gateway cannot detect changes made to the objects in an NFS or SMB file share's bucket by a secondary gateway associated with the same Amazon S3 bucket, or by any other Amazon S3 API call outside of the file gateway.

When Amazon S3 objects have to be modified outside of the file share and recognized by the file gateway (such as changes made by Amazon EMR or other AWS services), the bucket inventory must be refreshed using either the RefreshCache API call or the RefreshCache AWS Command Line Interface (CLI) command. RefreshCache can be invoked manually, automated using a CloudWatch Event, or triggered through the use of the NotifyWhenUploaded API call once the files have been written to the file share using a secondary gateway. A CloudWatch notification named Storage Gateway Upload Notification Event is triggered once the files written by the secondary gateway have been uploaded to S3. The target of this event could be a Lambda function invoking RefreshCache to inform the primary gateway of the change. RefreshCache re-inventories the existing records in a file gateway's bucket inventory. This communicates changes to known objects to the file share clients that access a given share.
1. Object created by secondary gateway or external source.
2. RefreshCache API called on file
gateway appliance share.
3. Foreign object is reflected in the file gateway bucket inventory and accessible by clients.

Figure 5: RefreshCache API called to re-inventory the Amazon S3 bucket

Bucket Shares with Multiple Contributors

When deploying more complex architectures, such as when more than one file gateway share is associated with a single Amazon S3 bucket, or in scenarios where a single bucket is modified by one or more file gateways in conjunction with other Amazon S3-enabled applications, note that the file gateway does not support object locking or file coherency across file gateways. Since file gateways cannot detect other file gateways, be cautious when designing and deploying solutions that use more than one file gateway share with the same Amazon S3 bucket. File gateways associated with the same Amazon S3 bucket detect new changes to the content in the bucket only in the following circumstances:
1. A file gateway recognizes the changes it makes to the associated Amazon S3 bucket, and it can notify other gateways and applications by invoking the NotifyWhenUploaded API after it is done writing files to the share.
2. A file gateway recognizes changes made to objects by other file gateways when the affected objects are located in folders (or prefixes) that have not been queried by that particular file gateway.
3. A file gateway recognizes changes made in an associated Amazon S3 bucket (bucket share) by other contributors after the RefreshCache API is executed.

We recommend that you use the read-only mount option on a file gateway share when you deploy multiple gateways that have a common Amazon S3 bucket. Designing architectures with only one writer and many readers is the simplest way to avoid write conflicts. If multiple writers are required, the clients accessing each gateway must be
Storage Architectures Page 9 tightly cont rolled to ensure that they don’t write to the same objects in the shared Amazon S3 bucket When multiple file gateways are accessing the same objects in the same Amazon S3 bucket make sure to call the RefreshCache API on file gateway shares that have to recognize changes made by other file gateways To fu rther optimize this operation and reduce the time it takes to run you can invoke the RefreshCache API on specific folders (recursively or not) in your share 1 Client creates a new file and file gateway #1 uploads object to S3 2 Customer invokes NotifyWhenUploaded API on file share of file gateway #1 3 CloudWatch Event (generated upon completion of Step 1 ) initiate s the RefreshCache API call to initiate a re inventory on file gateway #2 4 File gateway #2 presents newly created objects to clients Figure 6: RefreshCache API makes objects created by file gateway #1 visible to file gateway #2 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 10 Amazon S3 and the Fi le Gateway The file gateway uses Amazon S3 buckets to provide storage for each mount point (share) that is created on an individual gateway When you use Amazon S3 buckets mount points provide limitless capacity 99999999999% durability on objects stored and a consumption based pricing model Costs for data stored in Amazon S3 via AWS Storage Gateway are based on the region where the gateway is located and the storage class A given mount point writes data directly to Amazon S3 Standard Amazon S3 Standa rd – IA or Amazon S3 One Zone – IA storage depending on the initial configuration select ed when creating the mount point All of these storage classes provide equal durability However Amazon S3 Standard – IA and Amazon S3 One Zone – IA have a different pricing model and lower availability (ie 999% 
compared with 9999%) which makes them good solution s for less frequently accessed objects The pricing for Amazon S3 Standard – IA and Amazon S3 One Zone – IA is ideal for objects that exist for more than 30 days and are larger than 128 KB per object For details about price differences for Amazon S3 storage classes see the Amazon S3 Pricing page Using Amazon S3 Object Lifecycle Management for Cost Optimization Amazon S3 offers many storage classes Today AWS Storage Gateway file gateway supports S3 Standard S3 Standard – Infrequent Access and S3 One Zone – IA natively Amazon S3 lifecycle policies automate the management of data across storage tiers It’s also possible to expire objects based on the object’s age To transition data between storage classes lifecycle policies are applied to an entire Amazon S3 bucket which reflects a single mount point on a storage gateway Lifecycle policies can also be applied to a specific prefix that reflects a folder within a hosted mount point on a file gateway The lifecycle policy transition condition is based on the creation date or optionally on the object tag key value pair For more information about tagging see Object Tagging in the Amazon S3 Developer Guide As an example a lifecycle policy in its simplest implementation move s all objects in a given Amazon S3 bucket from Amazon S3 Standard to Amazon S3 Standard – IA and finally to Amazon S3 Glacier as the data ages This means that files created by the file gateway are stored as objects in Amazon S3 buckets and can then be automatically transitioned to more economic al storage classes as the content ages This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 11 Figure 7: Example of f ile gateway storing files as objects in Amazon S3 Standard and transitioning to Amazon S3 Standard – IA and Amazon S3 Glacier If you 
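To make the aging policy described above concrete, the following sketch builds such a lifecycle configuration as JSON. This is a minimal illustration: the helper function, rule ID, and transition thresholds are hypothetical, not part of any AWS SDK.

```python
import json

def build_aging_policy(ia_days=30, glacier_days=60):
    """Lifecycle configuration that transitions objects from S3 Standard
    to Standard-IA after ia_days, then to Glacier after glacier_days."""
    # Standard-IA transitions require a minimum object age of 30 days.
    if ia_days < 30:
        raise ValueError("Standard-IA transitions require at least 30 days")
    return {
        "Rules": [{
            "ID": "age-out-file-gateway-objects",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = entire bucket (one mount point)
            "Transitions": [
                {"Days": ia_days, "StorageClass": "STANDARD_IA"},
                {"Days": glacier_days, "StorageClass": "GLACIER"},
            ],
        }]
    }

print(json.dumps(build_aging_policy(), indent=2))
```

The resulting document could then be applied to the share's bucket with the S3 PutBucketLifecycleConfiguration API, for example with `aws s3api put-bucket-lifecycle-configuration --bucket <bucket> --lifecycle-configuration file://policy.json`.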
If you use file gateway to store data in S3 Standard-IA or S3 One Zone-IA, or access data from any of the infrequent access storage classes, see Using Storage Classes in the AWS Storage Gateway User Guide to learn how the gateway mediates between NFS/SMB (file-based) uploads to update or access the object.

Transitioning Objects to Amazon S3 Glacier
Files migrated using lifecycle policies are immediately available for NFS file read/write operations. Objects transitioned to Amazon S3 Glacier are visible when NFS files are listed on the file gateway. However, they are not readable unless they are restored to an S3 storage class using an API or the Amazon S3 console. If you try to read files that are stored as objects in Amazon S3 Glacier, you encounter a read I/O error on the client that attempts the read operation. For this reason, we recommend using lifecycle policies to transition files to Amazon S3 Glacier objects only for file content that does not require immediate access from an NFS/SMB client in an AWS Storage Gateway environment.

Amazon S3 Object Replication Across AWS Regions
Amazon S3 cross-region replication (CRR) can be combined with a file gateway architecture to store objects in two Amazon S3 buckets across two separate AWS Regions. CRR is used for a variety of use cases, such as protection against human error, protection against malicious destruction, or minimizing latency to clients in a remote AWS Region. Adding CRR to the file gateway architecture is just one example of how native Amazon S3 tools and features can be used in conjunction with the file gateway.

Figure 8: File gateway in a private data center with CRR to duplicate objects across AWS Regions

Using Amazon S3 Object Versioning
You can use file gateway with Amazon S3 Object Versioning to store multiple versions of files as they are modified. If you require access to a previous version of the object using the gateway, it first must be restored to the previous version in S3. You must also use the RefreshCache operation for the gateway to be notified of this restore. See Object Versioning Might Affect What You See in Your File System in the AWS Storage Gateway User Guide to learn more about using Amazon S3 versioned buckets for your file share.

Using the File Gateway for Write Once Read Many (WORM) Data
You can also use file gateway to store and access data in environments with regulatory requirements that mandate the use of WORM storage. In this case, select a bucket with S3 Object Lock enabled as the storage for the file share. If there are file modifications or renames through the file share clients, the file gateway creates a new version of the object without affecting prior versions, so the original locked version remains unchanged. See also Using the file gateway with Amazon S3 Object Lock in the AWS Storage Gateway User Guide.

File Gateway Use Cases
The following scenarios demonstrate how a file gateway can be used in both cloud tiering and backup architectures.

Cloud Tiering
In on-premises environments where storage resources are reaching capacity, migrating colder data to the file gateway can extend the life span of existing storage on premises and reduce the need for capital expenditures on additional storage hardware and data center resources. When adding the file gateway to an existing storage environment, on-premises applications can take advantage of Amazon S3 storage durability, consumption-based pricing, and virtually infinite scale, while ensuring low-latency access to recently accessed data over NFS or SMB. Data can be tiered using either native host OS tools or third-party tools that integrate with standard file protocols such as NFS or SMB.

Figure 9: File gateway in a private data center providing Amazon S3 Standard or Amazon S3 Standard-IA as a complement to existing storage deployments

Hybrid Cloud Backup
The file gateway provides a low-latency NFS/SMB interface that creates Amazon S3 objects of up to 5 TiB in size, stored in a supported AWS Region. This makes it an ideal hybrid target for backup solutions that can use NFS or SMB. By using a mixture of Amazon S3 storage classes, data is stored on low-cost, highly durable cloud storage and automatically tiered to progressively lower-cost storage as the likelihood of restoration diminishes. Figure 10 shows an example architecture that assumes backups must be retained for one year. After 30 days, the likelihood of restoration becomes infrequent, and after 60 days it becomes extremely rare. In this solution, you use Amazon S3 Standard as the initial location for backups for the first 30 days. The backup software or scripts write backups to the file share, preferably in the form of multi-megabyte or larger files. Larger files offer better cost optimization in the end-to-end solution, including colder storage costs and lifecycle transition costs, because fewer transitions are required. After another 30 days, the backups are transitioned to Amazon S3 Glacier. Here they are held until a full year has passed since they were first created, at which point they are deleted.

1. Client writes backups to file gateway over NFS or SMB
2. File gateway cache sized greater than expected backup
3. Initial backups stored in S3 Standard
4. Backups are transitioned to S3 Standard-IA after 30 days
5. Backups are transitioned to S3 Glacier after 60 days

Figure 10: Example of file gateway storing files as objects in Amazon S3 Standard and transitioning to Amazon S3 Standard-IA and Amazon S3 Glacier

When sizing the file gateway cache in this type of solution, understand the backup process itself. One approach is to size the cache to be large enough to contain a complete full backup, which allows restores from that backup to come directly from the cache, much more quickly than over a wide area network (WAN) link. If the backup solution uses software that consolidates backup files by reading existing backups before writing ongoing backups, factor this configuration into the sizing of the cache as well. This is because reading from the local cache during these types of operations reduces cost and increases the overall performance of ongoing backup operations. For both cases specified above, you can use AWS DataSync to transfer data to the cloud from an on-premises data store. From there, access to the data can be retained using a file gateway.

Conclusion
The file gateway configuration of AWS Storage Gateway provides a simple way to bridge data between private data centers and Amazon S3 storage. The file gateway can enable hybrid architectures for cloud migration, cloud tiering, and hybrid cloud backup. The file gateway's ability to provide a translation layer between standard file storage protocols and Amazon S3 APIs without obfuscation makes it ideal for architectures in which data must remain in its native format and be available both on premises and in the AWS Cloud. For more information about the AWS Storage Gateway service, see AWS Storage Gateway.

Contributors
The following individuals and organizations contributed to this document:
• Peter Levett, Solutions Architect, AWS
• David Green, Solutions Architect, AWS
• Smitha Sriram, Senior Product
Manager, AWS
• Chris Rogers, Business Development Manager, AWS

Further Reading
For additional information, see the following:
• AWS Storage Services Overview Whitepaper
• AWS Whitepapers Web page
• AWS Storage Gateway Documentation
• AWS Documentation Web page

Document Revisions
Date Description
March 2019 Updated for S3 One Zone-IA storage class
April 2017 Initial document creation",General,consultant,Best Practices
Financial_Services_Grid_Computing_on_AWS,This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/financial-services-grid-computing/financial-services-grid-computing.html

Financial Services Grid Computing on AWS
First Published January 2015
Updated August 24, 2021

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Overview
Introduction
Grid computing on AWS
Compute and networking
Storage and data sharing
Data management and transfer
Operations and management
Task scheduling and infrastructure orchestration
Security and compliance
Migration approaches, patterns, and anti-patterns
Conclusion
Contributors
Further reading
Glossary of terms
Document versions

Abstract
Financial services organizations rely on high performance computing (HPC) infrastructure grids to calculate risk, value portfolios, and provide reports to their internal control functions and external regulators. The scale, cost, and complexity of this infrastructure is an increasing challenge. Amazon Web Services (AWS) provides a number of services that enable these customers to surpass their current capabilities by delivering results quickly and at a lower cost than on-premises resources. The intended audience for this paper includes grid computing managers, architects, and engineers within financial services organizations who want to improve their service. It describes the key AWS services to consider, some best practices, and relevant reference architecture diagrams.

Overview
High performance computing (HPC) in the financial services industry is an ongoing challenge because of the pressures from ever-increasing computational demand across retail, commercial, and investment groups, combined with growing cost and capital constraints. The traditional on-premises approaches to solving these problems have evolved from centralized monolithic solutions, to business-aligned clusters of commodity hardware, to modern multi-tenant grid architectures with centralized schedulers that manage disparate compute capacity.

Regulators and large financial institutions increasingly accept hyperscale cloud providers, which has resulted in significant interest in how to best leverage new capabilities while ensuring good governance and cost controls. Cloud concepts such as capacity on demand and pay-as-you-go pricing models offer new opportunities to teams who run HPC platforms. Historically, the challenge has been to manage a fixed set of on-premises resources while maximizing utilization and minimizing queuing times. In a cloud-based model, with capacity that is effectively unconstrained, the focus shifts away from managing and throttling demand and towards optimizing supply. With this model, decisions become more granular and tailored to each customer, and focus on how fast and at what cost, with the ability to make adjustments as required by the business. With this basically limitless capacity, concepts such as queuing and prioritization become irrelevant, as clients are able to submit calculation requests and have them serviced immediately. This also results in upstream consumers increasingly expecting and demanding near-instantaneous processing of their workloads at any scale.

Initial cloud migrations of HPC platforms are often seen as extensions or evolutions of on-premises grid implementations. However, forward-looking institutions are experimenting with the ever-expanding ecosystem of capabilities enabled by AWS. Some emerging themes include refreshing financial models to run on open source Linux-based operating systems, and exploring the performance benefits of the latest Arm Neoverse N1 central processing units (CPUs) through AWS Graviton2. Amazon SageMaker increasingly democratizes the use of artificial intelligence/machine learning (AI/ML) techniques, and customers are looking to these tools to enable accelerated development of predictive risk models. For data-heavy calculations, Amazon EMR offers a fully managed, industry-leading cloud big data platform based on standard tooling using directed acyclic graph structures. This topic is explored further in the blog post How to improve FRTB's Internal Model Approach implementation using Apache Spark and Amazon EMR.

As HPC environments move to the cloud, the applications that are associated with them start to migrate too. Risk management systems, which drive compute grids, quickly become a bottleneck when the downstream HPC platform is unconstrained. By migrating these applications with the compute grid, the applications benefit from the elasticity that the cloud provides. In turn, data sources such as market and static data are sourced natively from within the cloud, from the same providers that customers work with today, through services such as AWS Data Exchange. Many of the building blocks required for fully serverless risk management and reporting solutions already exist today within AWS, with services like AWS Lambda for serverless compute and AWS Step Functions to coordinate them. As financial institutions become increasingly familiar and comfortable with these services, it's likely that serverless patterns will become the predominant HPC architectures of the future.

Introduction
In general, traditional HPC systems are used to solve complex mathematical problems that require thousands or even millions of CPU hours. These systems are commonly used in academic institutions, biotech, and engineering firms. In banking organizations, HPC systems are used to quantify
the risk of given trades or portfolios, which enables traders to develop effective hedging strategies, price trades, and report positions to their internal control functions and, ultimately, to external regulators. Insurance companies leverage HPC systems in a similar way for actuarial modeling and in support of their own regulatory requirements.

Unpredictable global events, seasonal variation, and regulatory reporting commitments contribute to a mixture of demands on HPC platforms. This includes short, latency-sensitive intraday pricing tasks, near real-time risk measures calculated in response to changing market conditions, or large overnight batch workloads and back-testing to measure the efficacy of new models against historic events. Combined, these workloads can generate hundreds of millions of tasks per day, with a significant proportion running for less than a second. Because of the regulatory landscape, demand for these calculations continues to outpace the progress of Moore's law. Regulations such as the Fundamental Review of the Trading Book (FRTB) and IFRS 17 require even more analysis, with some customers estimating between 40% and 1000% increases in demand as a result. In turn, financial services organizations continue to grow their grid computing platforms and increasingly wrestle with the costs associated with purchasing and managing this infrastructure. The blog post How cloud increases flexibility of trading risk infrastructure for FRTB compliance explores this topic in greater detail, discussing the challenges of data and compute, and the agility benefits achieved by running these workloads in the cloud.

Risk and pricing calculations in financial services are most commonly embarrassingly parallel: they do not require communication between nodes to complete calculations, and broadly benefit from horizontal scalability. Because of this, they are well suited to a shared-nothing architectural approach in which each compute node is independent from the others. For example, a financial model based on the Monte Carlo method can create millions of scenarios to be divided across a large number (often hundreds or thousands) of compute nodes for calculation in parallel. Each scenario reflects a different market condition based on a number of variables. In general, doubling the number of compute nodes allows these tasks to be distributed more widely, which reduces the overall duration of the job by half. Access to increased compute capacity through AWS allows for additional scenarios and greater precision in the results in a given timeframe. Alternatively, you can use the additional capacity to complete the same calculations in less time.

Financial services firms typically use a third-party grid scheduler to coordinate the allocation of compute tasks to available capacity. Grid schedulers have these features in common:
• A central scheduler to coordinate multiple clients and a large number (typically hundreds or thousands) of compute nodes. The scheduler manages the loss of any given component and reschedules the work accordingly.
• Deployment tools to ensure that software binaries and relevant data are reliably distributed to compute nodes that are allocated a specific task.
• An engine that allows rules to be defined to ensure that certain workloads are prioritized over others in the event that the total capacity of the grid is exhausted.
• Brokers, which are typically employed to manage the direct allocation of tasks that are submitted by a client to the compute grid. In some cases, an allocated compute node makes a direct connection back to a client to collect tasks to reduce latency. Brokers are usually horizontally scalable and are well suited to the elasticity of the cloud.

In some cases, the client is another grid node that generates further tasks. Such multi-tier, recursive architectures are not uncommon, but they present further challenges for software engineers and HPC administrators who want to maximize utilization while managing risks such as deadlock, when parent tasks are unable to yield to child tasks.

The key benefit of running HPC workloads on AWS is the ability to allocate large amounts of compute capacity on demand, without the need to commit to the upfront and ongoing costs of a large hardware investment. Capacity can be scaled minute by minute according to your needs at the time. This avoids pre-provisioning of capacity according to some estimate of future peak demand. Because AWS infrastructure is charged by consumption of CPU hours, it's possible to complete the same workload in less time for the same price by simply scaling the capacity. The following figure shows two approaches to provisioning capacity. In the first, two CPUs are provisioned for ten hours. In the second, ten CPUs are provisioned for two hours. In a CPU-hour billing model the overall cost is the same, but the latter produces results in one fifth of the time.

Two approaches to provisioning 20 CPU hours of capacity

Developers of the analytics calculations used in HPC applications can use the latest CPUs, graphics processing units (GPUs), and field-programmable gate arrays (FPGAs) available through the many Amazon EC2 instance types. This drives efficiency per core, and differs from on-premises grids, which tend to be a mixture of infrastructure that reflects historic procurement rather than current needs.
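The equivalence in the figure can be checked with simple arithmetic. The sketch below is illustrative only; the $0.05 per CPU-hour rate is a hypothetical placeholder, not a published EC2 price.

```python
def grid_cost(cpus, hours, rate_per_cpu_hour):
    """Cost of provisioning `cpus` CPUs for `hours` hours at a flat CPU-hour rate."""
    return cpus * hours * rate_per_cpu_hour

# Hypothetical rate of $0.05 per CPU-hour
narrow = grid_cost(cpus=2, hours=10, rate_per_cpu_hour=0.05)  # 20 CPU-hours over 10h
wide = grid_cost(cpus=10, hours=2, rate_per_cpu_hour=0.05)    # 20 CPU-hours over 2h

# Same total cost, but the wide configuration finishes in one fifth of the time.
assert narrow == wide and abs(narrow - 1.0) < 1e-9
print(f"both cost ${narrow:.2f}; wall-clock time: 10h vs 2h")
```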
Diverse pricing models offer flexibility to these customers. For example, Amazon EC2 Spot Instances can reduce compute costs by up to 90%. These instances are occasionally interrupted by AWS, but HPC schedulers with a history of managing scavenged CPU resources can react to these events and reschedule tasks accordingly. This document includes several recommended approaches to building HPC systems in the cloud, and highlights AWS services that are used by financial services organizations to help address their compute, networking, storage, and security requirements.

Grid computing on AWS
A key driver for the migration of HPC workloads from on-premises environments to the cloud is flexibility. AWS offers HPC teams the opportunity to build reliable and cost-efficient solutions for their customers, while retaining the ability to experiment and innovate as new solutions and approaches become available. HPC teams that want to migrate an existing HPC solution to the cloud, or to build a new solution, should review the AWS Well-Architected Framework, which also includes a specific Financial Services Industry Lens with a focus on how to design, deploy, and architect financial services industry (FSI) workloads that promote resiliency, security, and operational performance in line with risk and control objectives. This framework applies to any cloud deployment and seeks to ensure that systems are architected according to best practices. Additionally, the HPC-specific lens document also identifies key elements to help ensure the successful deployment and operation of HPC systems in the cloud. The following sections include information about AWS services that are most relevant to HPC systems, particularly those that support financial services customers.
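As a rough sketch of the Spot trade-off described above, the effective cost of Spot capacity can be estimated by discounting the On-Demand rate and adding a rework penalty for tasks that must be rescheduled after an interruption. The helper function, discount, and interruption rate below are illustrative assumptions, not published AWS figures.

```python
def effective_spot_cost(on_demand_rate, spot_discount, interruption_rate):
    """Estimate the effective per-useful-CPU-hour cost of Spot capacity,
    assuming every interrupted task is rescheduled and its work redone."""
    spot_rate = on_demand_rate * (1 - spot_discount)
    # Rework inflates total CPU-hours consumed per unit of useful work:
    # if 5% of task-hours are lost, only 95% of paid hours are useful.
    rework_factor = 1 / (1 - interruption_rate)
    return spot_rate * rework_factor

# Hypothetical inputs: $0.10/CPU-hour on demand, 90% Spot discount,
# 5% of task-hours lost to interruptions.
cost = effective_spot_cost(0.10, 0.90, 0.05)
print(f"effective Spot cost: ${cost:.4f} per useful CPU-hour")
```

Even with the rework penalty, the estimate stays far below the On-Demand rate, which is why schedulers that tolerate interruptions pair so well with Spot capacity.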
A typical HPC architecture with the key components, including the risk management system (RMS), grid controller, grid brokers, and two compute instance pools

Compute and networking
AWS offers a wide range of Amazon Elastic Compute Cloud (Amazon EC2) instance types, which enable you to select the configuration that is best suited to your needs at any given time. This is a departure from the typical Bill of Materials approach, which limits the configurations available on premises in favor of deployment simplicity. It also offers evergreening, which enables you to take advantage of the latest CPU technologies as they are released, without consideration for any prior investment. HPC customers in financial services should consider the following instance types:
• Amazon EC2 compute optimized instances: C class instances are optimized for compute-intensive workloads and deliver cost-effective high performance at a low price-per-compute ratio.
• Amazon EC2 general purpose instances:
o M class: Commonly used in HPC applications because they offer a good balance of compute, memory, and networking resources.
o Z class: Offers the highest CPU frequencies with a high memory footprint.
o T series: Provides a baseline level of CPU performance with the ability to burst to a higher level when required. The use of these instances can be attractive for some workloads; however, their variable performance profile can result in inconsistent behavior, which might be undesirable.
• Amazon EC2 memory optimized instances:
o R class: These instances offer higher memory-to-CPU ratios, and so may be applied to X-Valuation Adjustment (XVA) calculations, such as Credit Value Adjustments, which typically require additional memory.
• Instances with the suffix a have AMD processors, for example R5a.
• Instances with the suffix g have Arm-based AWS Graviton2 processors, for example C6g.
• Amazon EC2 accelerated computing instances use hardware accelerators, or co-processors, to perform functions such as floating point number calculations, graphics processing, or data pattern matching more efficiently than is possible in software running on CPUs:
o P class instances are intended for general purpose GPU compute applications.
o F class instances offer customizable hardware acceleration with field-programmable gate arrays (FPGAs).

The latest AWS instances are based on the AWS Nitro System. The Nitro System is a collection of AWS-built hardware and software components that enable high performance, high availability, high security, and bare metal capabilities to eliminate virtualization overhead. By selecting Nitro-based instances, HPC applications can expect performance levels that are indistinguishable from a bare-metal system, while retaining all of the benefits of an ephemeral virtual host.

Table 1 – Amazon EC2 instance types that are typically used for HPC workloads
Instance Type | Class | Description
General purpose | T | Burstable general purpose, low cost
General purpose | M | General purpose instances
Compute optimized | C | For compute-intensive workloads
Memory optimized | R | For memory-intensive workloads
Memory optimized | X | For memory-intensive workloads
Memory optimized | Z | High compute capacity and high memory
Accelerated computing | P / F | General purpose GPU (P) or FPGA (F) capabilities

This diverse selection of instance types helps support a wide variety of workloads with optimal hardware, and promotes experimentation. HPC teams can benchmark various sets of instances to optimize their scheduling strategies. Quantitative developers can try new approaches with GPUs, FPGAs, or the latest CPUs without upfront costs or protracted procurement processes. You can immediately deploy your optimal approach at scale, without the traditional hardware lifecycle considerations. When you run experiments, or if a subset of production workloads requires a specific instance type, grid schedulers typically enable tasks to be directed to the appropriate hardware through compute resource groups.

x86-based Amazon EC2 instances support multithreading, which enables multiple threads to run concurrently on a single CPU core. Each thread is represented as a virtual CPU (vCPU) on the instance. An instance has a default number of CPU cores, which varies according to instance type. To ensure that each vCPU is used effectively, it's important to understand the behavior of the calculations running in the HPC environment. If all processes are single-threaded, a good initial strategy is to have the scheduler assign one process per vCPU on each instance. However, if the calculations require multithreading, tuning might be required to maximize the use of vCPUs without introducing excessive CPU context switching. By default, x86-based Amazon EC2 instances have hyperthreading (HT) enabled. You can disable HT either at boot or at runtime if the analytics perform better without it, which you can establish through benchmarking. The Disabling Intel Hyper-Threading Technology on Amazon Linux blog post explains the methods you can use to configure HT on an Amazon Linux instance.
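The slot-sizing arithmetic above can be sketched as a small helper that estimates how many single-threaded task slots to configure per instance. The function, its parameters, and the example instance shape are illustrative assumptions, not part of any particular grid scheduler's API.

```python
def scheduler_slots(cores, threads_per_core=2, oversubscribe_pct=0):
    """Number of single-threaded task slots to configure on one instance.

    cores: physical CPU cores on the instance
    threads_per_core: 2 with hyperthreading enabled, 1 with HT disabled
    oversubscribe_pct: extra slots (as a percent of vCPUs) to cover time
        that tasks spend on non-CPU work, such as data transfer
    """
    vcpus = cores * threads_per_core
    return vcpus + (vcpus * oversubscribe_pct) // 100

# Illustrative 36-core instance with HT enabled: 72 vCPUs, one slot each
assert scheduler_slots(36) == 72
# HT disabled at boot: one slot per physical core
assert scheduler_slots(36, threads_per_core=1) == 36
# 10% oversubscription: roughly one extra task per 10 vCPUs
assert scheduler_slots(36, oversubscribe_pct=10) == 79
```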
your infrastructure to increase processor performance consistency, or to reduce latency. Some Amazon EC2 instances enable control of processor C-states (idle state power saving) and P-states (optimization of voltage and CPU frequency during run). The default settings for C-state and P-state are tuned for maximum performance for most workloads. If an application might benefit from reduced latency in exchange for lower frequencies, or from more consistent performance without the benefit of Turbo Boost, then changes to the C-state and P-state configurations might be worth considering. For information about the instance types that support the adjustment, and how to make these changes to an Amazon Linux 2-based instance, see Processor State Control for Your EC2 Instance in the Amazon Elastic Compute Cloud User Guide for Linux Instances.

Another potential optimization is oversubscription. This approach is useful when you know processes spend time on non-CPU-intensive activities, such as waiting on data transfers or loading binaries into memory. For example, if this overhead is estimated at 10%, you might be able to schedule one additional task on the host for every 10 vCPUs to achieve higher CPU utilization and throughput.

There are many performance benefits of AWS Graviton processors. AWS Graviton processors are custom built by AWS using 64-bit Arm Neoverse cores. AWS Graviton2 processors provide up to 40% better price performance over comparable current-generation x86-based instances for a wide variety of workloads, including application servers, microservices, high performance computing, electronic design automation, gaming, open-source databases, and in-memory caches. Interpreted and bytecode-compiled languages such as Python, Java, Node.js, and .NET Core on Linux may run on AWS Graviton2 without modification. Support for Arm architectures is also increasingly common in third-party numerical libraries, aiding the path to adoption.

Compiler selection is another consideration. The use of a
compiler that is optimized for the target CPU architecture can yield performance improvements. For example, quantitative analysts might see value in developing analytics using the Intel C++ Compiler and running on instances that support AVX-512-capable CPUs. The AVX-512 instruction set allows developers to run twice the number of floating-point operations per second (FLOPS) per clock cycle. Similarly, AMD offers the AMD Optimizing C/C++ Compiler, which optimizes for AMD EPYC architectures.

In addition to the instance types and classes shown in Table 1, there are also options for procuring instances in AWS:
• Amazon EC2 On-Demand Instances offer capacity as required, for as long as it is needed. You are only charged for the time that the instance is active. These are ideal for components that benefit from elasticity and predictable availability, such as brokers, compute instances hosting long-running tasks, or tasks that generate further generations of tasks.
• Amazon EC2 Spot Instances are particularly appropriate for HPC compute instances because they benefit from substantial savings over the equivalent on-demand cost. Spot Instances can occasionally be ended by AWS when capacity is constrained, but grid schedulers can typically accommodate these occasional interruptions.
• Amazon EC2 Reserved Instances provide a significant discount of up to 72% based on a one-year or three-year commitment. Convertible Reserved Instances offer additional flexibility on the instance family, operating system, and tenancy of the reservation. Relatively static hosts, such as HPC grid controller nodes or data caching hosts, might benefit from Reserved Instances.
• Savings Plans is a flexible pricing model that also provides savings of up to 72%
on your AWS compute usage, regardless of instance family, size, operating system (OS), tenancy, or AWS Region. Savings Plans offer significant discounts in exchange for a commitment to use a specific amount of compute power (measured in $/hour) for a one- or three-year period. Just like Amazon EC2 Reserved Instances, Savings Plans are ideal for long-running hosts such as HPC controller nodes.
It's important to note that, regardless of the procurement model selected, the instances delivered by AWS are exactly the same.

Compute instance provisioning and management strategies

Spot Instances are not suitable for workloads that are inflexible, stateful, fault intolerant, or tightly coupled between instance nodes. They are also not recommended for workloads that are intolerant of occasional periods when the target capacity is not completely available. However, many financial services organizations make use of Spot Instances for part of their HPC workloads. A Spot Instance interruption notice is a warning that is issued two minutes before Amazon EC2 interrupts a Spot Instance. You can configure your Spot Instances to be stopped or hibernated, instead of being ended, when they are interrupted. Amazon EC2 then automatically stops or hibernates your Spot Instances on interruption, and automatically resumes the instances when capacity is available.

AWS enables you to minimize the impact of a Spot Instance interruption through instance rebalance recommendations and Spot Instance interruption notices. An EC2 instance rebalance recommendation is a signal that notifies you when a Spot Instance is at elevated risk of interruption. The signal gives you the opportunity to proactively manage the Spot Instance in advance of the two-minute Spot Instance
interruption notice. You can decide to rebalance your workload to new or existing Spot Instances that are not at an elevated risk of interruption. AWS has made it easy for you to use this new signal by using the Capacity Rebalancing feature in EC2 Auto Scaling groups and Spot Fleet. If hibernation is configured, this feature operates like closing and opening the lid on a laptop computer, and saves the memory state to an Amazon Elastic Block Store (Amazon EBS) disk. However, this approach to managing interruptions should be used with caution, because the grid scheduler might not be able to track such quiesced workloads, which could result in timeouts and rescheduling of tasks if the hibernated image is not reactivated quickly.
• Amazon EC2 Spot Fleets enable you to launch a fleet of Spot Instances that span various EC2 instance types and Availability Zones. By defining the target capacity using an appropriate metric (for example, a slot for an HPC application), the fleet sources capacity from EC2 Spot Instances at the best possible price. HPC teams can define Spot Fleet strategies that use diverse instance types to make sure you have the best experience at the lowest cost.
• Amazon EC2 Fleet also enables you to quickly create fleets that are diversified by using EC2 On-Demand Instances, Reserved Instances, and Spot Instances. With this approach, you can optimize your HPC capacity management plan according to the changing demands of your workloads.
Both EC2 Fleet and Spot Fleet integrate with Amazon EventBridge to notify you about important fleet events, state changes, and errors. This enables you to automate actions in response to fleet state changes, and to monitor the state of your fleet from a central place without needing to continuously poll fleet APIs. They both also support the Capacity Optimized allocation strategy, which automatically makes the most efficient use of available spare capacity while still taking advantage of the steep discounts offered by Spot Instances.
• Amazon EC2 Auto Scaling groups contain a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management. An Auto Scaling group enables you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies.
• Amazon EC2 launch templates contain the configuration information used to launch an instance. The template can define the AMI ID (operating system image), instance type, and network settings for the compute instances. You can use launch templates with EC2 Fleet, Spot Fleet, or Amazon EC2 Auto Scaling to make it easier to implement and track configuration standards.
• Launch template versioning can be used with the EC2 Auto Scaling group 'Instance Refresh' feature to update pools of capacity while minimizing interruptions to the workload. All you need to do is specify the percentage of healthy instances to keep in the group while the Auto Scaling group terminates and launches instances. You can also specify the warm-up time, which is the time period that the Auto Scaling group waits between instances that get refreshed via Instance Refresh.
One option to begin an HPC deployment is to use only On-Demand Instances. After you understand the performance of your workloads, you can develop and optimize a strategy to provision instances using Amazon EC2 Auto Scaling groups, Amazon EC2 Fleet, or Amazon EC2 Spot Fleet. For example, you can deploy a number of Reserved Instances or Savings Plans to host core grid services, such as schedulers, that are required to be available at all times. You can provision On-Demand Instances during the intraday period to ensure predictable performance for synchronous pricing calculations. For an overnight batch, you can
use large fleets of Spot Instances to provide massive volumes of capacity at a minimum cost, and supplement them as necessary with On-Demand Instances to ensure predictable performance for the most time-sensitive workloads.

The following figure shows two approaches to provisioning. In each case, ten vCPUs of Reserved Instance capacity remain online for the stateful scheduling components. In the first case, 20 further vCPUs are provisioned using On-Demand Instances for ten hours to accommodate a batch that runs for 200 vCPU-hours with a ten-hour SLA. In the second approach, the 20 vCPUs are also provisioned at the outset using On-Demand Instances to provide confidence in the batch delivery, but 70 vCPUs based on low-cost Spot Instances are also added. Because of the volume of Spot Instances, the batch completes much more quickly (in about three hours) and at a significantly reduced cost. However, if the Spot Instances were not available for any reason, the batch would still complete on time with the On-Demand Instances provisioned.

AWS instance provisioning strategies

One of the key benefits of deploying applications in the AWS Cloud is elasticity. Amazon EC2 Auto Scaling enables HPC managers to configure Amazon EC2 instance provisioning and decommissioning events based on the real-time demands of their platform. The concept of 'instance weightings' allows Auto Scaling groups to start instances from a diverse pool of instance types to meet an overall capacity target for the workload. Though grids were previously provisioned based on predictions of peak demand (with periods of both constraint and idle capacity), Amazon EC2 Auto Scaling has a rich API that enables it to be integrated with schedulers to easily manage scaling events.
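The arithmetic behind the two provisioning approaches above can be sketched as a quick calculation. This is a minimal illustration, not AWS tooling, and it assumes perfect parallelism; a real batch, like the roughly three-hour figure quoted above, also absorbs provisioning delays and occasional Spot interruptions.

```python
# Back-of-the-envelope comparison of the two provisioning approaches,
# using the figures from the example above: a 200 vCPU-hour batch with
# a ten-hour SLA. The ten Reserved Instance vCPUs host the scheduler,
# so they are excluded from the worker count.

def batch_runtime_hours(batch_vcpu_hours: float, worker_vcpus: int) -> float:
    """Ideal completion time for a batch spread evenly across worker vCPUs."""
    return batch_vcpu_hours / worker_vcpus

BATCH_VCPU_HOURS = 200
SLA_HOURS = 10

# Approach 1: 20 On-Demand vCPUs only.
on_demand_only = batch_runtime_hours(BATCH_VCPU_HOURS, 20)

# Approach 2: the same 20 On-Demand vCPUs plus 70 Spot vCPUs.
with_spot = batch_runtime_hours(BATCH_VCPU_HOURS, 20 + 70)

# If all Spot capacity were reclaimed, the On-Demand vCPUs alone
# still deliver the batch within the ten-hour SLA.
fallback = batch_runtime_hours(BATCH_VCPU_HOURS, 20)
```

In the ideal case, approach 1 uses exactly the ten-hour SLA, while approach 2 completes in just over two hours, which is why the diversified fleet both lowers cost and shortens the batch without putting the SLA at risk.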
When you remove hosts from a running cluster, make sure to allow for a drain-down period. During this period, the targeted host stops taking on new work, but is allowed to complete work in progress. When you select nodes for removal, avoid any long-running tasks, so that the shutdown is not delayed and you don't lose progress on those calculations. If the scheduler allows a query of the total runtime of tasks in progress, grouped by instance, you can use this to identify the optimal candidates for removal: specifically, the instances with the lowest aggregate total of runtime by tasks in progress.

Where capacity is managed automatically, Amazon EC2 Auto Scaling groups offer 'scale-in' protection, as well as configurable termination policies, to allow HPC managers to minimize disruption to tasks in flight. Scale-in protection allows an Auto Scaling group, or an individual instance, to be marked as protected, and so ineligible for termination in a scale-in event. You also have the option to build custom termination policies using AWS Lambda to give more control over which instances are ended. These protections can be controlled by an API for integration with the scheduler, to automate the drain-down process.

Paradoxically, adding instances to a cluster can temporarily slow the flow of tasks if those new instances need some time to reach optimal performance, as binaries are loaded into memory and local caches are populated. Amazon EC2 Auto Scaling groups also support warm pools. A warm pool is a pool of pre-initialized EC2 instances that sits alongside the Auto Scaling group. Whenever your application needs to scale out, the Auto Scaling group can draw on the warm pool to meet its new desired capacity. The goal of a warm pool is to
ensure that instances are ready to quickly start serving application traffic, accelerating the response to a scale-out event. This is known as a warm start.

So far, this section has addressed compute instance provisioning at the host level. Increasingly, customers are looking to serverless solutions based either on container technologies, such as Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS), or on AWS Lambda. For both Amazon ECS and Amazon EKS, the AWS Fargate serverless compute engine removes the need to orchestrate infrastructure capacity to support containers. Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You pay only for the resources required to run your containers, so there is no over-provisioning and paying for additional servers. Fargate supports both Spot pricing for Amazon ECS and Compute Savings Plans for Amazon ECS and Amazon EKS.

To illustrate how Amazon EKS might be used in a high throughput compute (HTC) environment, AWS has released the open-source solution 'aws-htc-grid'. This project shows how AWS technologies such as Lambda, Amazon DynamoDB, and Amazon Simple Queue Service (Amazon SQS) can be combined to provide much of the functionality of a traditional HPC scheduler. Note that aws-htc-grid is not a supported AWS service offering.

For customers using AWS Lambda, there are no instances to be scaled; however, there is the concept of concurrency, which is the number of instances of a function that can serve requests at a time. There are default Regional concurrency limits, which can be increased through a request in the Support Center console. Financial services firms have already built completely serverless HPC solutions based on Lambda (similar to the architecture outlined here) that support tens of millions of calculations per day.

In addition to considering alternative CPU architectures and accelerated computing options, customers are increasingly looking at
their existing dependencies on commercial operating systems such as Microsoft Windows. Such dependencies are often historical, stemming from risk management systems built around spreadsheets; however, today the cost premiums can be very material, especially when compared to deeply discounted EC2 capacity under Amazon EC2 Spot. AWS offers a variety of Linux distributions, including Red Hat, SUSE, CentOS, Debian, Kali, Ubuntu, and Amazon Linux. The latter is a supported and maintained Linux image provided by AWS for use on Amazon EC2 (it can also be run on premises for development and testing). It is designed to provide a stable, secure, and high-performance run environment for applications running on Amazon EC2. It supports the latest EC2 instance type features, and includes packages that enable easy integration with AWS. AWS provides ongoing security and maintenance updates to all instances running the Amazon Linux AMI, and it is provided at no additional charge to Amazon EC2 users.

Storage and data sharing

In HPC systems, there are two primary data distribution challenges. The first is the distribution of binaries. In financial services, large and complex analytical packages are common. These packages are often 1 GB or more in size, and often multiple versions are in use at the same time on the same HPC platform to support different businesses or back testing of new models. In a constrained on-premises environment, you can mitigate this challenge through relatively infrequent updates to the package and a fixed set of instances. However, in a cloud-based environment, instances are short lived and the number of instances can be much larger. As a result, multiple packages may be distributed to thousands of instances on an hourly basis as new
instances are provisioned and new packages are deployed.

There are a number of possible approaches to this problem. One is to maintain a build pipeline that incorporates binary packages into the Amazon Machine Images (AMIs). This means that once the machine has started, it can process a workload immediately, because the packages are already in place. The EC2 Image Builder tool simplifies the process of building, testing, and deploying AMIs. A limitation of this approach is that it doesn't accommodate the deployment of new packages to running instances, and it requires them to be ended and replaced to get new versions.

Another approach is to update running instances. There are two different methods for this type of update, which are sometimes combined:
• Pull (or lazy) deployment: In this mode, when a task reaches an instance and it depends on a package that is not in place, the engine pulls it from a central store before it runs the task. This approach minimizes the distribution of packages and saves on local storage, because only the minimum set of packages is deployed. However, these benefits come at the expense of delaying tasks in an unpredictable way, such as on the introduction of a new instance in the middle of a latency-sensitive pricing job. This approach may not be acceptable if large volumes of tasks have to wait for the grid nodes to pull packages from a central store, which could struggle to service very large numbers of requests for data.
• Push deployment: In this mode, you can instruct instance engines to proactively get a specific package before they receive a task that depends on it. This approach allows for rolling upgrades, and ensures tasks are not delayed by a package update. One challenge with this method is the
possibility that new instances (which can be added at any time) might miss a push message, which means you must keep a list of all currently live packages.

In practice, a combination of these approaches is common. Standard analytics packages are pushed, because they're likely to be needed by the majority of tasks. Experimental packages, or incremental 'delta' releases, are then pulled, perhaps to a smaller set of instances. It might also be necessary to purge deprecated packages, especially if you deploy experimental packages. In this case, you can use a list of live packages to enable your compute instances to purge any packages that are not in the list, and thus are not current.

The following figure shows a cloud-native implementation of these approaches. It uses a centralized package store in Amazon Simple Storage Service (Amazon S3), with agents that respond to messages delivered through an Amazon Simple Notification Service (Amazon SNS) topic. After the package is in place on Amazon S3, notifications of new releases can be generated either by an operator or as a final step in an automated build pipeline. Compute instances subscribed to an SNS topic (or to multiple topics for different applications) use these messages as a trigger to retrieve packages from Amazon S3. You can also use the same mechanism to distribute delete messages to remove packages, if required.

Data distribution architecture using Amazon SNS messages and S3 object storage

The second data distribution challenge in HPC is managing data related to the tasks being processed. Typically, this is bidirectional, with data flowing to the engines that support the processing, and resulting data passed back to the clients. There are three common approaches for this
process:
• In the first approach, communications are inbound (see the following figure), with all data passing through the grid scheduler along with task data. This is less common, because it can cause a performance bottleneck as the cluster grows.

An inbound data distribution approach

• In another approach, tasks pass through the scheduler, but the data is handled out of band, through a shared scalable data store or an in-memory data grid (see the following figure). The task data contains a reference to the data's location, and the compute instances can retrieve it as required.

An out-of-band data distribution approach

• Finally, some schedulers support a direct data transfer (DDT) approach. In this model, the scheduler grid broker allocates compute instances, which then communicate directly with the client. This architecture can work well, especially with very short-running tasks with little data. However, in a hybrid model with thousands of engines running on AWS that need to access a single on-premises client, this can present challenges to on-premises firewall rules, or to the availability of ephemeral ports on the client host.

DDT (direct data transfer) data distribution approach

All of these approaches can be enhanced with caches located as close as possible to, or hosted on, the compute instances. Such caches help to minimize the distribution of data, especially if a significantly similar set is required for many calculations. Some schedulers support a form of data-aware
scheduling that tries to ensure that tasks that require a specific dataset are scheduled to instances that already have that dataset. This cannot be guaranteed, but it often provides a significant performance improvement, at the cost of local memory or storage on each compute instance.

Though the combination of grid schedulers and distributed cache technologies used on premises can provide solutions to these challenges, their capabilities vary, and they are not typically engineered for a cloud deployment with highly elastic, ephemeral instances. You can consider the following AWS services as potential solutions to the typical HPC data management use cases.

Amazon Simple Storage Service (Amazon S3)

Amazon S3 provides virtually unlimited object storage designed for 99.999999999% durability and high availability. For binary packages, it offers both versioning and various immutability features, such as S3 Object Lock, which prevents deletion or replacement of objects and has been assessed by Cohasset Associates for use in environments that are subject to SEC 17a-4, CFTC, and FINRA regulations. Binary immutability is a common audit requirement in regulated industries, which require you to demonstrate that the binaries approved in the testing phase are identical to those used to produce reports. You can include this feature in your deployment pipeline to make sure that the analytics binaries you use in production are the same as those that you validated. This service also offers easy-to-implement encryption and granular access controls.

Some HPC architectures use checkpointing (compute instances save a snapshot of their current state to a datastore) to minimize the computational effort that could be lost if a node fails or is
interrupted during processing. For this purpose, a distributed object store (such as Amazon S3) might be an ideal solution. Because the data is likely to be needed only for the life of the batch, you can use S3 lifecycle rules to automatically purge these objects after a small number of days to reduce costs.

Amazon Elastic File System (Amazon EFS)

Amazon EFS offers shared network storage that is elastic, which means it grows and shrinks as required. Thousands of Amazon EC2 instances can mount EFS volumes at the same time, which enables shared access to common data, such as analytics packages. Amazon EFS does not currently support Windows clients.

Amazon FSx for Windows File Server

Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the open-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restores, and Microsoft Active Directory integration. It offers single and Multi-Availability Zone deployment options, fully managed backups, and encryption of data at rest and in transit.

Amazon FSx for Lustre

For transient job data, the Amazon FSx for Lustre service provides a high-performance file system that offers sub-millisecond access to data, and read/write speeds of up to hundreds of gigabytes per second with millions of IOPS. Amazon FSx for Lustre can link to an S3 bucket, which makes it easy for clients to write data objects to the bucket (including clients from an on-premises system) and have those objects available to thousands of compute nodes in the cloud (see the following figure). FSx for Lustre is ideal for HPC workloads because it provides
a file system that's optimized for the performance and costs of high-performance workloads, with file system access across thousands of EC2 instances.

An example of an Amazon FSx for Lustre implementation

Amazon Elastic Block Store (Amazon EBS)

After a compute instance has binary or job data, it might not be possible to keep it in memory, so you might want to keep a copy on a local disk. Amazon EBS offers persistent block storage volumes for Amazon EC2 instances. Though the volumes for compute nodes can be relatively small (10 GB can be sufficient to store a variety of binary package versions and some job data), there might be some benefit to the higher IOPS and throughput offered by the Amazon EBS provisioned input/output operations per second (IOPS) solid state drives (SSDs). These offer up to 64,000 IOPS per volume, and up to 1,000 MB/s of throughput, which can be valuable for workloads that require frequent, high-performance access to these datasets. Because these volumes incur additional cost, you should complete an analysis of whether they provide any additional value over the standard general-purpose volumes.

AWS Cloud-hosted data providers

AWS Data Exchange makes it easy to find, subscribe to, and use third-party data in the cloud. The catalog includes hundreds of financial services datasets from a wide variety of providers. Once subscribed to a data product, you can use the AWS Data Exchange API to load data directly into S3. The Bloomberg Market Data Feed (B-PIPE) is a managed service providing programmatic access to Bloomberg's complete catalog of content (all the same asset classes as the Bloomberg Terminal). Network connectivity with Bloomberg B-PIPE leverages AWS PrivateLink, exposing the services as a set of local IP
addresses within your Amazon Virtual Private Cloud (Amazon VPC) subnet, and eliminating DNS issues. B-PIPE services are presented via Network Load Balancers to further simplify the architecture. Additionally, Refinitiv's Elektron Data Platform provides cost-efficient access to global real-time exchange, 'over the counter' (OTC), and contributed data. The data is also provided using AWS PrivateLink, allowing simple and secure connectivity from your Virtual Private Cloud (VPC).

Data management and transfer

Although HPC systems in financial services are typically loosely coupled, with limited need for east-west communication between compute instances, there are still significant demands for north-south communication bandwidth between layers in the stack. A key consideration for networking is where in the stack any separation between on-premises systems and cloud-based systems occurs. This is because communication within the AWS network is typically of higher bandwidth and lower cost than communication to external networks. As a result, any architecture that causes hundreds or thousands of compute instances to connect to an external network, particularly if they're requesting the same binaries or task data, would create a bottleneck. Ideally, the fan-out point (the point in the architecture at which large numbers of instances are introduced) is in the cloud. This means that the larger volumes of communication stay in the AWS network, with relatively few connections to on-premises systems.

AWS offers networking services that complement financial services HPC systems. A common starting point is to deploy AWS Direct Connect connections between customer data centers and an AWS Region, through a third-party point of presence (PoP)
provider. A Direct Connect link offers a consistent and predictable experience, with speeds of up to 100 Gbps. You can employ multiple, diverse Direct Connect links to provide highly resilient, high-bandwidth connectivity.

Though most HPC applications within financial services are loosely coupled, this isn't universal, and there are times when network bandwidth is a significant component of overall performance. The current AWS Nitro-based instances offer up to 100 Gbps of network bandwidth for the largest instance types, such as the c5n.18xlarge, or up to 400 Gbps in the case of the p4d.24xlarge instance. Additionally, a cluster placement group packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for the tightly coupled node-to-node communication that is typical of HPC applications.

The Elastic Fabric Adapter (EFA) enhances the Elastic Network Adapter (ENA), and is specifically engineered to support tightly coupled HPC workloads that require low-latency communication between instances. An EFA is a virtual network device that can be attached to an Amazon EC2 instance. EFA is suited to workloads using the Message Passing Interface (MPI). EFA may be worthy of consideration for some financial services workloads, such as weather predictions as part of an insurance industry catastrophic event model. EFA traffic that bypasses the operating system (OS bypass) is not routable, so it's limited to a single subnet. As a result, any peers in this network must be in the same subnet and Availability Zone, which could alter resiliency strategies. The OS-bypass capabilities of EFA are also not supported on Windows.

Some Amazon EC2 instance types support jumbo frames, where the network Maximum Transmission Unit (the number of bytes per packet) is increased. AWS supports MTUs of up to 9001 bytes. By using fewer packets to send the same amount of data, end-to-end network performance is improved.

Operations and
management

HPC systems are traditionally highly decoupled and resilient to the failure of any given component with minimal disruption. However, HPC systems in financial services organizations tend to be both mission critical and limited by the capabilities of traditional approaches, such as physical primary and secondary data centers. In this model, HPC teams have to choose between having secondary infrastructure sitting mostly idle in case of the loss of a data center, or using all of the infrastructure on a daily basis but with the possibility of losing up to 50% of that capacity in a disaster event. Some add a third or fourth location to reduce the impact of the loss of a site, but at the cost of an increased likelihood of an outage and network inefficiencies.

When you move to the cloud, you open up not only the availability of new services but also new approaches to solving these problems. AWS operates a model with Regions and Availability Zones that are always active and offer high levels of availability. By architecting HPC systems for multiple AWS Availability Zones, financial services customers can benefit from high levels of resiliency and utilization. In the unlikely event of the loss of an Availability Zone, additional instances can be automatically provisioned in the remaining Availability Zones to enable workloads to continue without any loss of data and with only a brief interruption in service.

A sample HPC architecture for a Multi-AZ deployment

The high level
architecture in the preceding figure shows the use of multiple Availability Zones and separate subnets for the stateful scheduler infrastructure (including schedulers, brokers, and data stores) and the compute instances. You can base your scheduler instances on long-running Reserved Instances with static IP addresses, which helps them communicate with on-premises infrastructure by simplifying firewall rules. Conversely, you can base your compute instances on On-Demand Instances or Spot Instances with dynamically allocated IP addresses. Security groups act as a virtual firewall, which you can configure to allow the compute instances to communicate only with scheduler instances.

Because the compute instances are inherently ephemeral, with potentially limited connectivity needs, it can be beneficial to have them sit within separate private address ranges, avoiding the need for you to manage demand for, and allocate, IPs from your own pools. This can be achieved either through a secondary CIDR on the VPC or with a separate VPC for the compute infrastructure connected through VPC peering. The majority of AWS services relevant to financial services customers are accessible from within the VPC using AWS PrivateLink, which offers private connectivity to those services, to services hosted by other AWS accounts, and to supported AWS Marketplace partner solutions. Traffic between your VPC and the service does not leave the Amazon network and is not exposed to the public internet.

One of the keys to effective HPC operations is the metrics you collect and the tools to explore and manipulate them. A common question from end users is "Why is my job slow?" It's important to set up your HPC operation in a way that enables you to either answer that question or empower users to find the answer for themselves. AWS offers tools you can use to collect metrics and logs at scale. Amazon CloudWatch is a monitoring and management service that not only collects metrics and logs related to AWS services but, through an
agent, it can also be a target for telemetry from HPC systems and the applications running on them. This provides a valuable central store for your data, allows diverse data sources to be presented on a common time series, and helps you to correlate events when you diagnose issues. You can also use CloudWatch as an auditable record of the calculations that were completed, together with the analytics binary versions that were used. You can export these logs to Amazon S3 and protect them with the object lock feature for long-term immutable retention. You may want to use a third-party log analytics tool; many of the most common products have native integrations with Amazon Web Services. Additionally, Amazon Managed Service for Grafana enables you to analyze, monitor, and alarm on metrics, logs, and traces across multiple data sources, including AWS services, third-party independent software vendors (ISVs), databases, and other resources.

Some grid schedulers require a relational database for the retention of statistics data. For this purpose, you can use Amazon Relational Database Service (Amazon RDS), which provides cost-efficient and resizable database capacity while automating administration tasks such as hardware provisioning, patching, and backups.

Another common challenge with shared-tenancy HPC systems is the apportioning of cost. The ability to provide very granular cost metrics according to usage can drive effective business decisions within financial services. The pay-as-you-go pricing model of AWS empowers HPC managers and their end customers to realize the benefits from the optimization of the system or its use. AWS tools such as resource tagging and AWS Cost Explorer can be combined to provide rich cost data and to build reports that
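As a sketch of the kind of tag-based cost report described here, the following plain-Python example groups per-instance costs by a CostCenter tag. The record layout, tag names, and figures are hypothetical illustrations, not an AWS API response.

```python
from collections import defaultdict

# Hypothetical usage records, e.g. exported from AWS Cost Explorer or the
# Cost and Usage Report and enriched with resource tags.
records = [
    {"instance": "i-0a", "tags": {"CostCenter": "rates-desk"}, "cost_usd": 120.50},
    {"instance": "i-0b", "tags": {"CostCenter": "fx-desk"},    "cost_usd": 75.25},
    {"instance": "i-0c", "tags": {"CostCenter": "rates-desk"}, "cost_usd": 42.00},
    {"instance": "i-0d", "tags": {},                           "cost_usd": 10.00},
]

def cost_by_tag(records, tag_key, untagged="(untagged)"):
    """Sum cost per value of tag_key, so HPC managers can see which
    business line is driving spend; untagged resources are bucketed."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["tags"].get(tag_key, untagged)] += rec["cost_usd"]
    return dict(totals)

report = cost_by_tag(records, "CostCenter")
```

In practice the untagged bucket is worth monitoring: a growing "(untagged)" total usually signals a gap in the tagging policy rather than a new workload.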
highlight the sources of cost within the system. Tags can include details of report types, cost centers, or other information pertinent to the client organization. There's also the AWS Budgets tool, which can be used to create reports and alerts according to consumption. When you combine detailed infrastructure costs with usage statistics, you can create granular cost attribution reports. Some trades are particularly demanding of HPC capacity, to the extent that the business might decide to exit the trade instead of continuing to support the cost.

Task scheduling and infrastructure orchestration

A high performance computing system needs to achieve two goals:

• Scheduling – Encompasses the lifecycle of compute tasks, including capturing and prioritizing tasks, allocating them to the appropriate compute resources, and handling failures.
• Orchestration – Making compute capacity available to satisfy those demands.

It's common for financial services organizations to use a third-party grid scheduler to coordinate HPC workloads. Orchestration is often a slow-moving exercise in procurement and physical infrastructure provisioning. Traditional schedulers are therefore highly optimized for making low-latency scheduling decisions to maximize usage of a relatively fixed set of resources.

As customers migrate to the cloud, the dynamics of the problem change. Instead of near-static resource orchestration, capacity can be scaled to meet the demands at that instant. As a result, the scheduler doesn't need to reason about which task to schedule next, but rather just inform the orchestrator that additional capacity is needed.

Table 2: Task scheduling and infrastructure orchestration approaches

HPC hosting | Task scheduling approach | Infrastructure
orchestration approach
On-premises | Rapid task-scheduling decisions to manage prioritization and maximize utilization while minimizing queue times | Static: a procurement and physical provisioning process run over weeks or months
Cloud-based | Focus on managing the task lifecycle; decisions around prioritization and queue times are minimized by dynamic orchestration | Highly dynamic: capacity on demand with pay-as-you-go pricing, optimized for cost and performance through selection of instance type and procurement model

When you plan a migration, a valid option is to migrate the on-premises solution first and then consider optimizations. For example, an initial lift-and-shift implementation might use Amazon EC2 On-Demand Instances to provision capacity, which yields some immediate benefits from elasticity. Some of the commercial schedulers also have integrations with AWS that enable them to add and remove nodes according to demand. When you are comfortable with running critical workloads on AWS, you can further optimize your implementation with options such as using more native services for data management, capacity provisioning, and orchestration. Ultimately, the scheduler might be in scope for replacement, at which point you can consider a few different approaches.

Though financial services workloads are often composed of very large volumes of relatively short-running calculations, there are some cases where longer-running calculations need to be scheduled. In these situations, AWS Batch could be a viable alternative or a complementary service. AWS Batch plans, schedules, and runs batch workloads while dynamically provisioning compute resources using containers. You can configure parallel computation and job dependencies to allow for workloads where the results of one job are used by another. AWS Batch is offered at no additional charge; only the AWS resources it consumes generate costs.
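The job-dependency behavior described above can be modeled as a topological ordering over a job graph. This plain-Python sketch models the concept rather than calling the AWS Batch API, and the job names are hypothetical: a reporting job depends on two aggregations, which in turn depend on a pricing job.

```python
from graphlib import TopologicalSorter

# Hypothetical job graph: each value is the set of jobs the key depends on.
# 'report' needs both risk aggregations, which both need the raw pricing run.
dependencies = {
    "aggregate-var": {"price-portfolio"},
    "aggregate-pnl": {"price-portfolio"},
    "report":        {"aggregate-var", "aggregate-pnl"},
}

# static_order() yields jobs with all of their predecessors first, so
# 'price-portfolio' is runnable immediately and 'report' becomes runnable
# only once both aggregations complete. The two aggregation jobs have no
# dependency on each other and could run in parallel.
order = list(TopologicalSorter(dependencies).static_order())
```

A dependency-aware scheduler (or AWS Batch, via job dependencies) effectively computes this ordering continuously as jobs complete.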
Customers looking to simplify their architecture might consider a queue-based architecture in which clients submit tasks to a stateful queue. The queue can then be serviced by an elastic group of hungry worker processes that take pending workloads, process them, and return results. Amazon Simple Queue Service (Amazon SQS) can be used for this purpose. Amazon SQS is a fully managed message queuing service that is ideal for this type of decoupled architecture. As a serverless offering, it reduces the administrative burden of infrastructure management and offers seamless, elastic scaling.

A simple HPC approach with Amazon SQS

Amazon SQS queues can be serviced by groups of Amazon EC2 instances that are managed by Auto Scaling groups. You can configure the Auto Scaling groups to scale capacity up or down based on metrics such as average CPU load or the depth of the queue. Auto Scaling groups can also incorporate provisioning strategies that combine Amazon EC2 On-Demand Instances and Spot Instances to provide flexible, low-cost capacity.

With serverless queuing provided by Amazon SQS, it's logical to think about serverless compute capacity. With AWS Lambda you can run code without provisioning or managing any servers. This function-as-a-service product allows you to pay only for the computation time you consume. You can also configure Lambda to process workloads from SQS, scaling out horizontally to consume messages in a queue. Lambda attempts to process the items from the queue as quickly as possible, constrained only by the maximum concurrency allowed by the account and by memory and runtime limits. In 2020 these limits were increased significantly: you can now allocate up to 10 GB of memory and six vCPUs to your functions, which also have support for the AVX2 instruction set. This makes Lambda functions suitable for an
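The queue-and-workers pattern described above can be sketched in plain Python. Here an in-memory queue.Queue stands in for an Amazon SQS queue, and the calculation is a toy stand-in for a pricing or risk computation; a real worker would use the SQS ReceiveMessage/DeleteMessage APIs instead.

```python
import queue

def worker_loop(task_queue, results, compute):
    """Drain pending tasks and collect results, mirroring the SQS worker
    pattern: receive a message, process it, then delete it from the queue.
    An in-memory queue.Queue stands in for the Amazon SQS queue here."""
    while True:
        try:
            task = task_queue.get_nowait()   # analogous to SQS ReceiveMessage
        except queue.Empty:
            return                           # backlog drained: worker can scale in
        results.append(compute(task))        # run the pricing/risk calculation
        task_queue.task_done()               # analogous to SQS DeleteMessage

q = queue.Queue()
for scenario in range(5):                    # clients submit five toy tasks
    q.put(scenario)

results = []
worker_loop(q, results, compute=lambda s: s * s)
```

Because each worker runs this same loop independently, elasticity comes for free: adding instances to the Auto Scaling group simply adds more consumers on the same queue.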
even wider range of HPC applications.

A serverless event-driven approach to HPC

Taking these concepts further, the blog post Decoupled Serverless Scheduler To Run HPC Applications At Scale on EC2 describes a decoupled serverless HPC scheduler that can run on hundreds of thousands of cores using EC2 Spot Instances. The following figure shows a cloud-native serverless HPC scheduling architecture.

A cloud-native serverless scheduler architecture

When you explore these alternative cloud-native approaches, especially in comparison to established schedulers, it's important to consider all of the features required to run what can be a critical system. Metrics gathering, data management, and management tooling are only some of the typical requirements that must be addressed and should not be overlooked.

A key benefit of running HPC workloads on AWS is the flexibility of the offerings, which enables you to combine various solutions to meet very specific needs. An HPC architect can use Amazon EC2 Reserved Instances for long-running stateful hosts, Amazon EC2 On-Demand Instances for long-running tasks or to secure capacity at the start of a batch, and Amazon EC2 Spot Instances to try to deliver a batch more quickly and at lower cost. Some workloads can then be directed to alternative platforms, such as GPU-enabled instances or Lambda functions. You can optimize the overall mix of these options on a regular basis to adapt to the changing needs of
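As a rough illustration of optimizing that purchase-option mix, this sketch estimates the cost of a batch split across On-Demand and Spot capacity. The hourly prices are hypothetical placeholders, not real AWS prices, which vary by Region, instance type, and (for Spot) supply and demand.

```python
# Hypothetical hourly prices per instance; real prices vary by Region,
# instance type, and (for Spot) market conditions.
PRICES = {"reserved": 0.40, "on_demand": 1.00, "spot": 0.30}

def batch_cost(hours, mix):
    """Estimated cost of a batch, given a mapping of purchase option to
    instance count, e.g. {'on_demand': 100, 'spot': 900}."""
    return sum(PRICES[option] * count * hours for option, count in mix.items())

# A 2-hour batch on 1,000 instances: all On-Demand vs. a 10/90 mix that
# keeps a small On-Demand core to guarantee baseline capacity.
all_on_demand = batch_cost(2, {"on_demand": 1000})
mixed = batch_cost(2, {"on_demand": 100, "spot": 900})
```

The trade-off the numbers hide is interruption risk: Spot capacity can be reclaimed, so the On-Demand (or Reserved) share should cover the portion of the batch that must finish on schedule.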
your business.

Security and compliance

The approach to security in HPC systems running in the cloud is often different from that of other applications because of the ephemeral and stateless nature of the majority of the resources. Issues of patching, inventory tooling, or human access can be eliminated because of the short-lived nature of the resources.

• Patching – When you use a pre-patched AMI, the host is in a known compliant state at startup. If a relatively short limit is placed on the life of the instance, it's likely that this approach will meet all necessary patching standards. Additionally, AWS Systems Manager Patch Manager can be used to automate the process of patching managed instances if necessary.
• Inventory tooling – On-premises hosts typically interact with compliance and inventory systems. In the AWS Cloud, controls around the instance image and the delivery of binaries mean that instances remain in a known state and can be programmatically audited, so these historic controls might not be necessary. Additionally, because highly scalable and elastic resources can put excessive load on such systems, fully managed cloud-based solutions such as AWS CloudTrail might provide a more suitable alternative.
• Root access – When you enable all debugging through centralized metrics and automated reporting, you can mandate zero access to the compute nodes. Without any root access, you can avoid key rotation and access control issues.

When you consider migrating to the cloud, an important early step is to decide which internal tools and processes (if any) need to be replicated in the cloud. Amazon EC2 instances that are unencumbered by tooling tend to start up more quickly, which is important when additional capacity is
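The instance-lifetime limit mentioned in the patching bullet can be enforced with a small housekeeping check. This sketch is illustrative: the 24-hour limit and the record shape are assumptions, and in practice the launch times would come from the EC2 DescribeInstances API.

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=24)  # illustrative patching-policy limit

def instances_to_recycle(instances, now, max_age=MAX_AGE):
    """Return the IDs of instances older than max_age, so they can be
    terminated and replaced from a freshly patched AMI. 'instances' is a
    list of (instance_id, launch_time) pairs."""
    return [iid for iid, launched in instances if now - launched > max_age]

now = datetime(2021, 8, 24, 12, 0, tzinfo=timezone.utc)
fleet = [
    ("i-0old", now - timedelta(hours=30)),  # exceeds the limit: recycle
    ("i-0new", now - timedelta(hours=2)),   # still within policy
]
stale = instances_to_recycle(fleet, now)
```

Run on a schedule, a check like this turns the patching standard into a property of the fleet rather than a manual process: no instance can drift out of compliance for longer than the configured age.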
required to meet a business need.

Because of the stateless nature of the workloads, there is often little need to store data for long periods, particularly when the job data isn't especially sensitive, doesn't include personally identifiable information (PII), and largely consists of public market datasets. Regardless, encryption by default is easy to implement across a wide range of AWS services. Binary analytics packages often contain proprietary code that has intellectual value, so financial services organizations typically encrypt these binaries while in transit and use built-in AWS tools to ensure they're encrypted while at rest in AWS storage. If compute instances are configured for minimal or no access, the risk of exfiltration while the binaries are in memory is minimized.

AWS has a wide range of certifications and attestations relevant to financial services and other industries. For full details of AWS certifications, see AWS Compliance. Before you design secure systems on AWS, review the Shared Responsibility Model to make sure you understand the respective areas of responsibility for AWS and the customer.

The AWS Shared Responsibility Model

This model is complemented by an extensive suite of tools and services to help you be secure in the cloud. For more detailed information, review the AWS Well-Architected Framework Security Pillar. One service of particular interest to HPC applications is AWS Identity and Access Management (IAM), which provides fine-grained access control across all of the AWS services included in this paper. IAM also offers integration with your existing identity providers through identity federation. Interactions with the AWS APIs can be tracked with AWS CloudTrail, a service that enables
governance and auditing across the AWS account. This event history simplifies security analysis, resource change tracking, and troubleshooting.

Encryption by default is becoming increasingly common within financial services, and many AWS services now offer simple encryption features that integrate with AWS Key Management Service (AWS KMS). This service makes it easy for you to create and manage keys that can be used across a wide variety of AWS services. For HPC applications, keys managed by AWS KMS might be used to encrypt AMIs or S3 buckets that contain analytics binaries, or to encrypt data stored in the Parameter Store. AWS KMS uses FIPS 140-2 validated hardware security modules (HSMs) to generate and protect customer keys; the keys never leave these devices unencrypted. Customers with specific internal or external rules regarding HSMs can choose AWS CloudHSM, which is a fully managed, FIPS 140-2 Level 3 validated HSM cluster with dedicated single-tenant access.

Migration approaches, patterns, and antipatterns

Many financial services organizations already have some form of HPC environment hosted in an on-premises data center. If you're migrating from such an implementation, it's important to consider what might be the best method to complete the migration. The optimal approach depends on the desired outcome, risk appetite, and timescale, but typically begins with one of the 6 Rs: Rehosting, Replatforming, Repurchasing, Refactoring/Rearchitecting, and (to a lesser degree) Retiring or Retaining (revisiting).

HPC cloud migrations typically progress through three stages. The nuances and timings of each stage depend on the individual businesses involved.

The first stage is bursting capacity. In this mode, very little changes with the
existing on-premises HPC environment. However, at times of peak demand, Amazon EC2 instances can be created and added to the system to provide additional capacity. The trigger for the creation of these instances is usually one of the following:

• Scheduled – If workloads are predictable in terms of timing and scale, then a simple schedule to add and remove a fixed number of hosts at predefined times can be effective. The schedule can be managed by an on-premises system or with Amazon EventBridge rules.
• Demand-based – In this mode, a component monitors the performance of workloads and adds or removes capacity based on demand. If a task queue starts to grow, additional instances can be requested through the AWS API; if the queue shrinks, some instances can be removed.
• Predictive – In some cases, especially when the startup time for a new instance is long (perhaps because of very large package dependencies or complex OS builds), it might be desirable to use a simple machine learning model to analyze historic demand and determine when to bring capacity online. This approach is rare but can work well when combined with a demand-based approach.

As customers build confidence in their ability to supplement existing capacity with cloud-based instances, they often make a decision to complete a migration. However, with existing on-premises hardware still available, customers want to keep the value of that infrastructure before it can be decommissioned. In this case it can make sense to provision a new strategic grid, with all of the same scheduler components, into the cloud and retain the existing on-premises grid. It's then left to the upstream clients to direct workloads accordingly, switching to the cloud-based grid as the on-premises capacity is gradually retired. When the migration is complete and all HPC workloads are running in the cloud, the on-premises infrastructure can be removed. At this point you have completed a Rehosting approach. When your infrastructure is in the
cloud, you then have the flexibility to look at Replatforming or Refactoring your environment. The ability to build entirely new architectures in the cloud alongside existing production systems means that new approaches can be fully tested before they're put into production.

One antipattern that's occasionally proposed by customers involves platform stacking. In this approach, solutions such as virtualization and/or container platforms are placed under the HPC platform to try to create portability or parity between cloud-based systems and on-premises systems. This approach can have some disadvantages:

• Computational inefficiency – By adding more layers between the analytics binaries and the CPUs, computational efficiency is inevitably degraded as CPU cycles are consumed by the abstraction layers.
• Licensing costs – HPC environments are large and continue to grow. Though enterprise licenses can keep the upfront costs of using these technologies very low, the large number of CPU cores involved in HPC workloads can mean significant additional costs when the licenses are due for renewal.
• Management overhead – In the simplest approach, an Amazon EC2 instance can be created on demand using an Amazon Linux 2 AMI. This AMI is patched and up to date, and because it exists for just a few hours, it requires no further management. However, by building HPC stacks on top of other abstractions, those long-running layers need patching and upgrading, and when multiple layers are involved, the scope for disruption through planned maintenance or an unplanned outage increases significantly.
• Scaling challenges – Amazon EC2 instances can be available very quickly, on demand. If scaling out involves the creation of a complex stack before
processes can run, this adds to the billing time of the instance before useful work can be done. In worst-case scenarios, there can be a temptation to leave large numbers of instances running so that they're available if additional workloads arise.
• Optimization challenges – HPC systems are already complex, especially when supporting huge volumes of variable workloads with different CPU and memory requirements. Knowing where CPU and memory resources are consumed is vital to identifying bottlenecks or debugging failures. If an HPC platform is based on a series of abstraction layers, this can introduce additional variables that make it difficult to see where inefficiencies exist; as a result, they might never be found.
• Security challenges – Securing a more complex stack can be challenging because there are more components to configure, monitor, and maintain to ensure the integrity of the system.

By defining portability in terms of a virtual machine image or a Docker image, you can find a good balance of portability while offsetting some of the disadvantages through the use of cloud-native virtualization with Amazon EC2 and/or container management solutions such as Amazon ECS and Amazon EKS, especially when combined with AWS Fargate. Keeping HPC systems as simple as possible provides the best performance at the lowest cost. Most HPC solutions are already platforms by design and offer portability through simple deployment patterns to standard operating systems.

Conclusion

AWS has a long history of helping customers from various industries, including financial services, to optimize their HPC workloads. This experience, gathered over many years from customers with diverse requirements, has directly contributed to the products and services
offered today and will continue to do so. AWS regularly accommodates very large-scale requests for Amazon EC2 instances; some of these clusters are large enough to be recognized among the world's largest supercomputers. For example, a group of researchers from Clemson University created a high performance cluster on the AWS Cloud using more than 1.1 million vCPUs on Amazon EC2 Spot Instances running in a single AWS Region. This cluster was used to study how human language is processed by computers, by analyzing over 500,000 documents.

AWS also partnered with TIBCO to demonstrate the creation of a 1.3 million vCPU grid on AWS using Spot Instances. They were able to secure 61,299 instances in total for the test, which ran sample calculations based on Strata, the open-source analytics and market risk library from OpenGamma, and was set up with their assistance. TIBCO now offers their DataSynapse GridServer Manager scheduler via AWS Marketplace as a pay-as-you-go offering.

The PathWise HPC solution from professional services firm Aon allows (re)insurers and pension funds to rapidly solve key insurance challenges. The platform relies upon cloud compute capacity from AWS and recently moved to Amazon EC2 P3 instances powered by NVIDIA V100 Tensor Core GPUs. These GPUs enable PathWise to run immense calculations in parallel, completing in seconds or minutes analysis that can take days or weeks in traditional systems. Standard Chartered cut their grid costs by 60% by leveraging Amazon EC2 Spot Instances, and recently DBS Bank shared their architecture for a scalable serverless compute grid based on AWS technologies.

HPC platforms are crucial enablers for many different types of financial services organizations, including capital markets, insurance, banking, and payments. However, as demands on these platforms increase as a result of regulatory requirements, it's clear
that the traditional approaches to provisioning HPC infrastructure are inefficient and ultimately unsustainable. Constraints on capital and capital expenditure further compound the challenge. By migrating these systems to AWS, customers benefit not only from a wide variety of compute instances and relevant services but also from a fundamental change in the delivery of compute capacity. This new approach offers tremendous flexibility, both in the management of workloads that vary day to day and in the overall approach to cost optimization, security, availability, and operations.

HPC workloads already have much in common with stateless function-as-a-service architectural patterns. Just as financial services moved from local calculations to clusters and on into grids, they are starting to explore decentralized serverless approaches. As scaling becomes transparent, bottlenecks will continue to be removed until processing becomes near real time. If you face challenges with the scale, cost, and capacity of managing a high performance computing system today, AWS has a number of services and partner relationships that can help. To learn more, contact AWS Financial Services through the AWS Financial Services – Contact Sales form.

Contributors

Contributors to this document include:

• Alex Kimber, Solutions Architect, Global Financial Services, Amazon Web Services
• Richard Nicholson, Solutions Architect, Global Financial Services, Amazon Web Services
• Carlos Manzanedo Rueda, Specialist Solutions Architect, Amazon Web Services
• Ian Meyers, Solutions Architect, Head of Technology, Amazon Web Services

Further reading

For additional information, see:

• AWS Well-Architected Framework
• AWS Well-Architected Framework – HPC Lens
• AWS Well-Architected Framework – Financial Services Industry Lens
• AWS HPC Blog

Glossary of terms

The following are definitions for terms that appear throughout this document.

Binary package – A set of binaries that run tasks. A typical HPC environment can support multiple packages of various versions running in parallel. The package and version required are defined by the client or risk system at the point of job submission. These packages typically contain proprietary models that are built by the firm's quantitative analysis teams (quants) and are often the subject of intellectual property concerns, as they can form competitive differentiation.

Broker – A component of a typical HPC/grid platform. The broker is typically responsible for coordinating tasks and/or client connections to compute instances. As grids and task volumes grow, the number of brokers is typically scaled out to ensure throughput can be maintained.

Client – A software system, accessed by a user, that generates job requests and presents results. In financial services this is generally some form of risk management system (RMS).

Engine – A software component responsible for invoking the calculation of a task using a given binary package. A compute instance can run multiple engines in parallel, perhaps one or more within each slot.

Grid controller – A component of a typical HPC/grid platform. The controller is responsible for tracking the state of compute instances and brokers, and for hosting API or GUI interfaces and metrics. The controller host is generally not involved in the scheduling of individual tasks.

Instance – An Amazon EC2 virtual server. Each instance has a number of available virtual CPUs (vCPUs) and an allocation of memory.

Job (or session) – The definition of a series of one or more related tasks. For example, a job might define a series of scenarios and
how they are subdivided into a set of tasks.

Job data – The set of data that is required in addition to the task metadata. Typically, job data is passed to the compute instance as a reference, bypassing the scheduler itself. In investment banking applications, job data is generally a combination of static reference data (such as holiday calendars used to calculate trade expiration dates), market data (used to build the market environment), and trade data (referencing the trade or portfolio of trades that are the focus of the calculation).

Quantitative analysts / quants – The team associated with the development of mathematical models to predict the behavior of financial products.

Risk management system (RMS) – To improve oversight of risk calculations, centralize operations, and improve efficiency, financial services firms are increasingly leveraging risk management systems that sit between the users and the HPC platform.

Scheduler / grid scheduler – A software component responsible for managing the lifecycle of tasks through receipt, allocation to compute instances, collection of results, and metrics and management processes.

Slot – A unit of compute currency used to approximate homogeneity within a heterogeneous compute environment. For example, a slot might be defined as two CPU cores and 8 GB of RAM, and slots would be considered interchangeable regardless of whether the compute instance was able to provide two or 32 slots.

Task – A unit of work to be scheduled to a compute instance. A task can define external dependencies (such as market and reference data). In recursive workload patterns, a parent task can spawn a child job or a series of other child tasks.

Thread – An engine runs either single-threaded or multi-threaded processes. Ideally, each thread runs on a separate vCPU to minimize the overhead of CPU context switching.

User – In financial services, a user is typically a member of the front office: either a trader managing positions or a desk head who wants oversight and ensures successful internal or external reporting is completed.

Document versions

• August 24, 2021 – Updates to reflect AWS service improvements, more modern and inclusive terminology, and new cloud-native architectures.
• September 2019 – Updates to services, diagrams, and topology.
• January 2016 – Updates to services and topology.
• January 2015 – Initial publication.,General,consultant,Best Practices Guidance_for_Trusted_Internet_Connection_TIC_Readiness_on_AWS,"This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Guidance for Trusted Internet Connection (TIC) Readiness on AWS, February 2016

© 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any warranties
representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Abstract 4
Introduction 4
FedRAMP-TIC Overlay Pilot 5
Pilot Objectives, Process, and Methods 7
Pilot Results 8
Customer Implementation Guidance 9
Connection Scenarios 9
AWS Capabilities and Features 13
Conclusion 17
Contributors 17
Appendix A: Control Implementation Summary 18
Appendix B: Implementation Guidance 21
Appendix C: TIC Capabilities Matrix 32
Notes 57

Abstract
The Trusted Internet Connection (TIC) Initiative1 is designed to reduce the number of United States Government (USG) network boundary connections, including Internet points of presence (POPs), to optimize federal network services and improve cyber protection, detection, and response capabilities. In the face of an ever-increasing body of laws and regulations related to information assurance, USG customers wanting to move to the cloud are confronted with security policies, guidelines, and frameworks that assume on-premises infrastructure and that do not align with cloud design principles. Today, TIC capabilities are not available "in the cloud." This document serves as guidance for TIC readiness on the Amazon Web Services (AWS) cloud.

Introduction
USG agencies must route connections for the increasing number of mobile users accessing cloud services via
smartphones and tablets through their agency network.2 In alignment with this trend toward mobile use, USG employees and contractors now want the ability to access cloud-based content anytime, anywhere, and with any device. Agencies want to leverage compliant cloud service providers (CSPs) for agile development and rapid delivery of modern, scalable, and cost-optimized applications without compromising on either their information assurance posture or the capabilities of the cloud.

In its current form, a TIC-compliant architecture precludes direct access to applications running in the cloud. Users are required to access their compliant CSPs through an agency TIC connection: either a TIC Access Provider (TICAP) or a Managed Trusted IP Service (MTIPS) provider. This architecture often results in application latency and might strain existing government infrastructure. In response to these challenges, the TIC program recently proposed a Draft Federal Risk and Authorization Management Program (FedRAMP)–TIC Overlay3 that provides a mapping of National Institute of Standards and Technology (NIST) 800-53 security controls to the required TIC capabilities. Figure 1 below shows the challenge mobile applications face with the current state of the TIC architecture; it also shows a proposed future state of the architecture contemplated by the Department of Homeland Security (DHS) TIC Program Office and the General Services Administration (GSA) FedRAMP Program Office. This new approach enables direct access to applications running in a compliant CSP. Through a pilot program, DHS and GSA sought to understand whether the objectives of the TIC initiative could be achieved in a cloud environment.

Figure 1: TIC Pilot Objective

FedRAMP-TIC Overlay Pilot
In May of 2015, GSA and DHS invited
AWS to participate in a FedRAMP-TIC Overlay pilot. The purpose of the pilot was to determine whether the proposed TIC overlay on the FedRAMP Moderate security control baseline was achievable. In collaboration with GSA and DHS, AWS assessed how remote agency users could use the TIC overlay to access cloud-based resources, and whether existing AWS capabilities would allow an agency to enforce TIC capabilities. The scope of the pilot leveraged the existing AWS FedRAMP Moderate authorization. Participants in the pilot included a USG customer, the DHS TIC Program Management Office (PMO), the GSA FedRAMP PMO, and AWS. The alignment to FedRAMP and TIC control objectives was evaluated and administered by an accredited FedRAMP third-party assessment organization (3PAO). Table 1 below indicates the count of TIC capabilities included in the overlay pilot; Appendix C provides the supporting data for Table 1.

TIC Capabilities Group | Total | Description
Original Capabilities | 74 | Total TIC v2.0 Reference Architecture capabilities.
Excluded Capabilities | 4 | TIC capabilities determined by DHS to be excluded from the Draft FedRAMP-TIC Overlay. These capabilities are not applicable to FedRAMP cloud service provider environments and are not included in the FedRAMP-TIC Overlay baseline.
Mapped Capabilities | 70 | Original Capabilities less Excluded Capabilities. These define the baseline FedRAMP-TIC Overlay, as defined in the Draft FedRAMP-TIC Overlay Control Mapping.
Deferred Capabilities | 13 | Mapped Capabilities determined to be specific to the agency (TIC provider) and removed from the initial scope of the assessment, as directed by the DHS TIC and GSA FedRAMP PMOs.
Included Capabilities | 57 | Mapped Capabilities less Deferred Capabilities. These capabilities represent the evaluation target of the pilot.
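The scope arithmetic in the table above can be sanity-checked in a few lines. This is a minimal sketch; the constants are simply the counts reported for the pilot scope:

```python
# Capability counts reported for the Draft FedRAMP-TIC Overlay pilot scope.
original_capabilities = 74  # TIC v2.0 Reference Architecture capabilities
excluded = 4                # excluded by DHS; not applicable to CSP environments
deferred = 13               # agency-specific (TIC provider) capabilities

# Mapped Capabilities = Original less Excluded (the overlay baseline).
mapped = original_capabilities - excluded
# Included Capabilities = Mapped less Deferred (the pilot's evaluation target).
included = mapped - deferred

print(mapped, included)  # → 70 57
```

The same totals recur later in the paper: the 57 included capabilities are the ones split across customer, shared, and AWS responsibility.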
Table 1: FedRAMP Associated TIC Capabilities Evaluated

The following items were also included in the assessment scope:
• Customer AWS Management Console
• Customer services
• Amazon Simple Storage Service (Amazon S3)
• Amazon Elastic Compute Cloud (Amazon EC2)
• Amazon Elastic Block Store (Amazon EBS)
• Amazon Virtual Private Cloud (Amazon VPC)
• AWS Identity and Access Management (IAM)
• Customer third-party tools and AWS ecosystem providers used to enforce TIC capabilities
• AWS supporting infrastructure
• Control responsibilities shown in Table 2

Responsible Party | Total | Description
Customer | 16 | TIC capabilities determined to be solely the responsibility of the AWS customer.
Shared | 36 | TIC capabilities determined to be a shared responsibility between the customer and AWS.
AWS | 5 | TIC capabilities determined to be solely the responsibility of AWS.
TIC Capabilities Evaluated | 57 | Total number of candidate capabilities evaluated as part of the pilot.

Table 2: Control Responsibilities

Pilot Objectives, Process, and Methods
To test the overlay, AWS worked with a FedRAMP-accredited 3PAO and a USG customer to produce results for the following testing objectives:
• Identify whether and how agencies can use TIC overlay controls, via mapping to the FedRAMP Moderate control baseline, to provide remote agency users access to AWS while enforcing TIC compliance
• Determine whether the required capabilities exist within AWS to implement and enforce TIC compliance
• Determine the allocation of responsibility for implementing and enforcing TIC compliance

An initial analysis of the TIC overlay controls by AWS revealed that over 80 percent of the TIC capability requirements map directly to one or more existing FedRAMP Moderate controls satisfied under the current AWS FedRAMP
Authority to Operate (ATO). With the control mapping in hand, and in collaboration with our 3PAO, AWS developed a TIC security requirements traceability matrix (SRTM) that included control responsibilities. The results from this exercise, shown in Table 2 above, demonstrated that only 16 TIC capabilities would rest solely with the customer. Next, our 3PAO proceeded with the following testing process and methods:

• Leveraged previous write-ups, evidence, security documentation, and interviews from the existing AWS FedRAMP Moderate ATO to determine the satisfaction of security controls that were either the responsibility of AWS or a shared responsibility
• Developed a customer test plan for the controls that were either a customer responsibility or a shared responsibility, using guidance provided by AWS Certified Solutions Architects
• Tested the covered AWS services (IAM, Amazon EC2, Amazon S3, Amazon EBS, and Amazon VPC) and supporting infrastructure, including features, functionality, and underlying components that assist with enforcing TIC capabilities
• Tested implementation of shared and customer responsibilities using a Customer Test Plan and a TIC Pilot SRTM
• Interviewed the USG customer on internal policies, procedures, and security tools used to enforce TIC capabilities as defined by DHS
• Collected evidence from the customer to complete assessment of the customer and shared responsibility controls

Pilot Results
After completion of the assessment phase of the pilot, roughly two dozen of the included TIC capabilities required additional discussion with the DHS TIC PMO. The outstanding items were reviewed sequentially, and final dispositions were recorded based on DHS TIC PMO direction. Table 3 below summarizes the results of the pilot assessment and final
disposition discussion, as synthesized by AWS.

FedRAMP Associated TIC Capabilities, Version 2.0
Disposition | Total | Description
Implemented | 43 | TIC capability determined as satisfied, or able to be satisfied, on AWS.
Gap | 1 | TIC capability determined to require further evaluation on AWS by the FedRAMP PMO and DHS.
Not Assessed | 13 | TIC capability determined to be not applicable to a CSP, or not included in the customer environment.
FedRAMP TIC Capabilities Evaluated | 57 | Total number of candidate capabilities evaluated as part of the pilot.

Table 3: Synthesized FedRAMP TIC Associated Capability Dispositions

Customer Implementation Guidance
Based on the results of the pilot and lessons learned, AWS is providing guidance on both relevant connection scenarios and the use of AWS capabilities and features that align with the FedRAMP-TIC Overlay work described above. Following the conclusion of the overlay pilot, and pending official guidance from the FedRAMP PMO and TIC PMO, AWS designed the next sections to provide USG agencies and contractors with information to assist in the development of "TIC Ready" architectures on AWS. As additional reference, Appendix A contains a Control Implementation Summary (CIS) showing TIC capability to FedRAMP control mappings, including responsible-party information. Appendix B provides per-control guidance for AWS and ecosystem capabilities that enable customer compliance with required TIC capabilities. Finally, Appendix C contains a mapping of TIC capabilities to their AWS-synthesized dispositions.

Connection Scenarios
In this section we highlight common connection scenarios that relate to TIC compliance. For each scenario we provide a brief explanation and a high-level architecture diagram.

Public Web and Mobile Applications (Not Included in Pilot)
This use case covers public, unauthenticated web and mobile applications. These applications are accessible via the Internet, typically over HTTPS, by the general public. Users access these web and mobile applications using their choice of web browser and device. They can access these web and mobile applications from their home or any public Wi-Fi network, or via their mobile devices. These applications are deployed in one or more AWS Regions. Figure 2 below illustrates this connection scenario.

Figure 2: Public Web and Mobile Applications (Unauthenticated)

In this architecture, an Internet gateway (IGW) provides Internet connectivity to two or more customer-defined public subnets across two or more Availability Zones (Multi-AZ) in the VPC. An Elastic Load Balancing (ELB) load balancer is placed in these public subnets. A web tier is configured within an Auto Scaling group, leveraging the load balancer to provide a continuously available web front end. The web tier securely communicates with back-end resources such as databases and other persistent storage. The environment is completely contained within the cloud.

Public Web and Mobile Applications Requiring Authentication: "All In" Deployments
This use case covers authenticated web and mobile applications used in an "all in cloud" deployment. These applications are accessible via the Internet, typically over HTTPS, by agency users. They access these web and mobile applications from their home, any public Wi-Fi network, or agency networks, using either personal or agency-issued electronic devices. These applications are deployed in one or more AWS Regions. These applications leverage role-based authentication
to arbitrate access to application functionality. The following examples are public websites with authentication requirements:
• System for Award Management (SAM)
• GSA Advantage
• OMB Max Portal
• Cloud-based software as a service (SaaS) offerings (e.g., email)

Figure 3 below illustrates this connection scenario. In this architecture, an IGW provides Internet connectivity to two or more customer-defined public subnets across multiple Availability Zones in the VPC. An ELB load balancer is placed in these public subnets. A web tier is configured within an Auto Scaling group, leveraging the ELB load balancer to provide a continuously available web front end. This web tier securely communicates with other back-end resources, most notably the back-end identity store used for role-based authentication. The environment is completely contained within the cloud.

Figure 3: Public Web and Mobile Applications, Authenticated, All In

Public Web and Mobile Applications Requiring Authentication: "Hybrid" Deployments
This use case covers authenticated web and mobile application use where a portion of the environment resides within a customer datacenter. These applications are accessible via the Internet, typically over HTTPS, by agency users. They access these web and mobile applications from their home, any public Wi-Fi network, or agency networks, using either personal or agency-issued electronic devices. These applications are deployed in one or more Amazon Web Services (AWS) Regions and one or more customer datacenters. These applications leverage role-based authentication to arbitrate access to application functionality.

In the hybrid deployment scenario, a portion
of the application architecture, typically the public web presence, resides in the cloud, while another portion, typically sensitive data sources, resides in an agency datacenter. This scenario is most commonly seen when an agency wishes to maintain its identity and/or data stores outside of the cloud environment. Connectivity between the in-cloud portions of the application and the controlled on-premises components is achieved using AWS Direct Connect or a VPN service in conjunction with a TICAP or MTIPS provider. In this way, data flows between the customer's in-cloud and on-premises services are seen by the TIC. Figure 4 below illustrates this connection scenario.

Figure 4: Public Web and Mobile Applications, Authenticated, Hybrid

AWS Capabilities and Features
In order to achieve TIC compliance on AWS, we recommend using the following AWS capabilities and features, and following our published best practices to secure the resources.

AWS Identity and Access Management (IAM) is a web service that enables IT organizations to manage multiple users, groups, roles, and permissions for AWS services such as Amazon EC2, Amazon Relational Database Service (RDS), and Amazon VPC. IT can centrally manage AWS service-related resources through IAM policies, using security credentials such as access keys. These access keys can be applied to users, groups, and roles.

AWS CloudFormation is a web service that uses JSON templates within which customers can describe their IT architecture as code. These templates can then be used to launch or create AWS resources that were
defined within the template. This collection of resources is called a stack. CloudFormation templates allow agencies to programmatically implement controls for new and existing environments. These controls provide comprehensive rule sets that can be systematically enforced.

AWS CloudTrail provides a log of all requests and a history of AWS API calls for AWS resources. This includes calls made by using the AWS Management Console, AWS SDKs, command-line tools (CLI), and higher-level AWS services. For services that support CloudTrail, IT can identify which users and accounts called AWS, the source IP address the calls were made from, and when the calls were made.

Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, and set alarms. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health, and you can use these insights to react and keep your application running smoothly.

CloudWatch Logs can be used to monitor your logs for specific phrases, values, or patterns. For example, you could set an alarm on the number of errors that occur in your system logs, or view graphs of web request latencies from your application logs. You can view the original log data to see the source of the problem if needed. Log data can be stored and accessed for as long as you need using highly durable, low-cost storage, so you don't have to worry about filling up hard drives.

AWS Config is a managed service that provides an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. With AWS Config, IT can discover existing
AWS resources, export a complete inventory of AWS resources with all configuration details, and determine how a resource was configured at any point in time. This facilitates compliance auditing, security analysis, resource change tracking, and troubleshooting. You can use AWS Config Rules to create custom rules used to evaluate controls applied to AWS resources. AWS also provides a list of standard rules that you can evaluate against your AWS resources, such as checking that port 22 is not open in any production security group.

Amazon S3 is storage for the Internet: a highly scalable, durable, and available distributed object store designed for mission-critical and primary data storage. Amazon S3 stores objects redundantly on multiple devices across multiple facilities within an AWS Region, and it is designed to protect data and allow access to it even in the case of a failure of a data center. The versioning feature in Amazon S3 allows the retention of prior versions of objects stored in Amazon S3 and also protects against accidental deletions initiated by staff or software error. Versioning can be enabled on any Amazon S3 bucket.

Amazon EC2 is a web service that provides resizable compute capacity in the cloud; it is essentially server instances used to build and host software systems. Amazon EC2 is designed to make web-scale computing easier by allowing developers and customers to deploy virtual machines on demand. The simple web service interface allows customers to obtain and configure capacity with minimal friction, and it provides complete control of their computing resources. Amazon EC2 changes the economics of computing because it allows enterprises to avoid large capital expenditures by paying only for capacity that is actually used.

Amazon VPC
enables the creation of a logically separate space within AWS that can house compute and storage resources, and that can be connected to a customer's existing infrastructure through a virtual private network (VPN), AWS Direct Connect, or the Internet. With Amazon VPC, it is possible to extend existing management capabilities and security services, such as DNS, LDAP, Active Directory, firewalls, and intrusion detection systems, to include private AWS resources, maintaining a consistent means of protecting information whether it resides on internal IT resources or on AWS.

Amazon Glacier is an extremely low-cost storage service that provides secure, durable, and flexible storage for data backup and archival. With Amazon Glacier, customers can reliably store their data for as little as $0.007 per gigabyte per month. Amazon Glacier enables customers to offload the administrative burdens of operating and scaling storage to AWS, so that they don't have to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and repair, or time-consuming hardware migrations.

Amazon VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs. Flow logs can help you with a number of tasks; for example, you can troubleshoot why specific traffic is not reaching an instance, which in turn can help you diagnose overly restrictive security group rules. You can also use flow logs as a security tool to monitor the traffic that is reaching your instance.

Conclusion
AWS services, features, and our partner ecosystem deliver a suite of capabilities that assist in delivering "TIC Ready" cloud architectures. Through collaboration with a USG customer, the DHS TIC Program Management Office (PMO), the GSA FedRAMP PMO, and our accredited FedRAMP third-party assessment organization (3PAO), AWS has demonstrated how customers might enforce many of the capabilities prescribed by TIC. While the FedRAMP TIC Overlay is being finalized, USG customers can use the evidence resulting from our TIC Mobile assessment to implement the TIC capabilities as part of their virtual perimeter protection solution using functionality provided by AWS, with a clear definition of the customer responsibility for implementation of the additional TIC capabilities.

Contributors
The following individuals and organizations contributed to this document:
• Jennifer Gray, US Public Sector Compliance Architect, AWS Security
• Alan Halachmi, Principal Solutions Architect, Amazon Web Services
• Nandakumar Sreenivasan, Senior Solutions Architect, Amazon Web Services

APPENDIX A: Control Implementation Summary

TIC v2.0 Associated FedRAMP Security Controls
TIC Capability ID | FedRAMP Control Mapping | Responsibility
TMAU01 | AC-6(1), AC-6(2), IA-1, IA-2, IA-2(1), IA-2(2), IA-2(3), IA-2(8), IA-2(11), IA-2(12), IA-3, IA-4, IA-4(4), IA-5, IA-5(1), IA-5(2), IA-5(3), IA-5(6), IA-5(7), IA-5(11), IA-6, IA-7, IA-8 | SHARED
TMCOM02 | AC-8, CA-3, PL-4 | SHARED
TMDS01 | AU-4 | CUSTOMER
TMDS02 | CP-2, CP-10 | CUSTOMER
TMDS03 | AU-1, SI-4, N/A | SHARED
TMDS04 | AU-1 | SHARED
TMDS05 | N/A | CUSTOMER
TMLOG01 | AU-8(1) | SHARED
TMLOG02 | AU-3 | SHARED
TMLOG03 | AU-11 | SHARED
TMLOG04 | AU-11 | SHARED
TMPC06 | N/A | SHARED
TMTC01 | CP-8, CP-8(1), CP-8(2) | AWS
TMTC02 | CM-7 | SHARED
TMTC03 | CP-11 | SHARED
TMTC04 | SC-20, SC-21, SC-22 | CUSTOMER
TMTC05 | IR-8 | SHARED
TMTC06 | IR-1 | SHARED
TMTC07 | CP-2 | SHARED
TOMG01 | CM-8 | SHARED
TOMG02 | CM-3, CM-9 | SHARED
TOMG04 | CP-2 | SHARED
TOMG07 | CM-8 | SHARED
TOMG08 | N/A | AWS
TOMG09 | N/A | AWS
TOMG10 | N/A | AWS
TOMG11 | N/A | AWS
TOMON02 | CA-2 | SHARED
TOMON03 | AU-6(1) | SHARED
TOMON04 | AU-1, AU-2 | SHARED
TOMON05 | IR-3 | CUSTOMER
TOREP01 | CA-7 | SHARED
TOREP02 | CA-7 | SHARED
TOREP03 | CA-7 | SHARED
TOREP04 | IR-6 | SHARED
TORES01 | IR-8 | SHARED
TORES02 | SI-2 | SHARED
TORES03 | SC-5 | SHARED
TSCF01 | SC-7, SC-7(8) | SHARED
TSCF02 | SC-7, SC-7(8) | SHARED
TSCF03 | SC-7, SC-7(8) | SHARED
TSCF04 | SC-7, SI-3, SI-8 | CUSTOMER
TSCF05 | SI-4 | CUSTOMER
TSCF06 | SC-8(1) | CUSTOMER
TSCF07 | SC-8(1) | CUSTOMER
TSCF08 | SI-4 | CUSTOMER
TSCF09 | IA-9 | SHARED
TSCF10 | IA-5 | SHARED
TSCF13 | AU-3(1), SC-7, SC-20, SC-21, SC-22 | CUSTOMER
TSINS01 | AU-1, AU-6, AU-6(1), SC-7 | CUSTOMER
TSPF01 | AC-4, SC-7 | CUSTOMER
TSPF03 | SC-7 | SHARED
TSPF04 | SC-7 | SHARED
TSPF06 | AU-3(1) | SHARED
TSRA01 | AC-17, AC-17(2), IA-2(2), SC-7(7) | CUSTOMER
TSRA02 | AC-20, CA-3, CA-3(3), CA-3(5) | CUSTOMER
TSRA03 | AC-20 | CUSTOMER

APPENDIX B: Implementation Guidance

TIC v2.0 Capability (Responsibility) – AWS Feature Mapping

TMAU01 User Authentication (SHARED) –
Leverage IAM and its multi-factor authentication capabilities.

TMCOM02 TIC and Customer (SHARED) – Leverage IAM policies to control and to restrict access to AWS resources.

TMDS01 Storage Capacity (CUSTOMER) – Leverage AWS Marketplace providers for packet capture and analysis. Leverage VPC Flow Logs to capture data flow metadata. Leverage CloudWatch Logs, with appropriate log retention, for log aggregation. Enable logging with AWS services (e.g., S3 logs, ELB logs).

TMDS02 Backup Data (CUSTOMER) – Leverage AWS CloudFormation to template the environment. Leverage EC2 AMI Copy, S3 versioning, S3 cross-region replication, S3 MFA Delete, and S3 lifecycle policies for backup. Leverage EC2 Auto Scaling to recover from transient hardware failures.

TMDS03 Data Ownership (SHARED) – Administrative control.

TMDS04 Data Attribution & Retrieval (SHARED) – Leverage S3 buckets with IAM policies and S3 bucket policies to segregate access to data. Configure services such as CloudTrail to log to the appropriate bucket. If needed, leverage S3 Events to initiate data processing workflows. Leverage CloudWatch Logs with IAM policies to consolidate or segregate agency data as required. Implement VPC Flow Logs on all VPCs. Enable CloudTrail logs, AWS Config, ELB logs, and S3 logs.

TMDS05 DLP (CUSTOMER) – Leverage AWS Marketplace providers for DLP technologies. Leverage S3 buckets with versioning enabled and MFA Delete. Enable S3 cross-region replication of critical or sensitive data into another AWS account in another region. Leverage Glacier Vault Lock for data retention.

TMLOG01 NTP Server (SHARED) – Configure approved NTP providers within the customer environment.

TMLOG02 Time Stamping (SHARED) – Configure approved NTP providers
within the customer environment.

TMLOG03 Session Traceability (SHARED) – Leverage S3 buckets with appropriate lifecycle policies. Configure services such as CloudTrail to log to the appropriate bucket. If needed, leverage S3 Events to initiate data processing workflows. Leverage CloudWatch Logs to receive AWS-specific and customer service logs, with appropriate retention policies. Configure services such as VPC Flow Logs to log to the appropriate log stream. Leverage AWS Marketplace offerings for log aggregation and analysis.

TMLOG04 Log Retention (SHARED) – Leverage S3 lifecycle policies. Leverage Glacier Vault Lock.

TMPC06 Geographic Diversity (SHARED) – AWS provides geographic diversity within a region. Customers must leverage multiple Availability Zones to achieve this diversity. Customers may also elect to deploy multi-region applications.

TMTC01 Route Diversity (AWS) – AWS provides route diversity intra-region, inter-region, and for Internet access.

TMTC02 Least Functionality (SHARED) – Leverage IAM policies to restrict access to AWS resources. Leverage network access control lists (NACLs) for coarse-grained, stateless packet filtering. Leverage security groups (SGs) for fine-grained, stateful flow filtering. Consider a separation-of-duties approach for management of NACLs and SGs.

TMTC03 IPv6 (SHARED) – Contact an AWS sales representative regarding current IPv6 offerings.

TMTC04 DNS Authoritative Servers
TMTC05 Response Authority (SHARED): Leverage AWS access and flow control capabilities, including IAM, network access control lists, and security groups. Leverage AWS Marketplace providers.

TMTC06 TIC Staffing (SHARED): AWS provides network and security operations continuously. Leverage AWS log sources (e.g., S3 logs, ELB logs, VPC Flow Logs, CloudTrail, Config) and customer-specific logs (e.g., OS logs, application logs) to assess network and security operations. The customer designates security points of contact within a customer account so that AWS may communicate detected anomalies.

TMTC07 Response Access (SHARED): AWS provides network and security operations continuously. Leverage AWS log sources (e.g., S3 logs, ELB logs, VPC Flow Logs, CloudTrail, Config) and customer-specific logs (e.g., OS logs, application logs) to assess network and security operations. The customer designates security points of contact within a customer account so that AWS may communicate detected anomalies. Leverage AWS CloudFormation to template the environment. Leverage EC2 AMI Copy, S3 versioning, S3 cross-region replication, S3 MFA delete, and S3 lifecycle policies for backup. Leverage EC2 Auto Scaling to recover from transient hardware failures.

TOMG01 System Inventory (SHARED): Leverage AWS Config. Leverage resource-level tags.

TOMG02 Change & Configuration Management (SHARED): AWS maintains a formalized change and configuration management system. Customers are responsible for these processes within their AWS environment.

TOMG04 Contingency Planning (SHARED): Leverage AWS CloudFormation to template the environment. Leverage EC2 AMI Copy, S3 versioning, S3 cross-region replication, S3 MFA delete, and S3 lifecycle policies for backup. Leverage EC2 Auto Scaling to recover from transient hardware failures. Plan for alternate-region recovery.
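"Leverage AWS CloudFormation to template the environment" (TOMG04 above) means capturing infrastructure as a declarative document that can be re-created in an alternate region after a disruption. A minimal CloudFormation-style template, expressed as a Python dict, is sketched below; the single versioned-bucket resource is an illustrative assumption, not a resource named in this paper.

```python
import json

# Minimal CloudFormation-style template: templating the environment lets
# it be stood up again in an alternate region for contingency planning.
# The resource shown is a placeholder example.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Sketch: versioned bucket for configuration backups",
    "Resources": {
        "BackupBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "VersioningConfiguration": {"Status": "Enabled"}
            },
        }
    },
}

# The JSON document would be passed to CloudFormation's CreateStack.
print(json.dumps(template, indent=2))
```

Combined with EC2 AMI copies and S3 cross-region replication of data, the template covers the "restore to a clean state" portion of the contingency plan.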
TOMG07 Network Inventory (SHARED): Leverage AWS Config. Leverage resource-level tags.

TOMG08 Service Level Agreement (AWS): AWS provides service-level information through published artifacts, including the AWS website.

TOMG09 Tailored Service Level Agreement (AWS): AWS provides elasticity natively as a cloud service provider. AWS services can expand or contract based on customer configuration and demand.

TOMG10 Tailored Security Policies (AWS): AWS allows customers to customize their cloud environment, including security policies.

TOMG11 Tailored Communications (AWS): AWS provides services and features that enable customers to tailor communication processes. AWS develops new capabilities based on customer demand.

TOMON02 Vulnerability Scanning (SHARED): Leverage pre-authorized products from the AWS Marketplace and/or submit a request to AWS for customer-executed vulnerability scans.

TOMON03 Audit Access (SHARED): Leverage IAM to control and restrict access to AWS resources.

TOMON04 Log Sharing (SHARED): Leverage S3 buckets with IAM policies and S3 bucket policies to segregate access to data. Configure services such as CloudTrail to log to the appropriate bucket. If needed, leverage S3 Events to initiate data processing workflows. Leverage CloudWatch Logs with IAM policies to consolidate or segregate agency data as required. Implement VPC Flow Logs on all VPCs. Enable CloudTrail logs. Enable AWS Config. Enable ELB logs. Enable S3 logs.
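Several mappings in this table repeat "Implement VPC Flow Logs on all VPCs." The sketch below shows the keyword arguments a boto3 EC2 client's create_flow_logs() call accepts to deliver flow records to CloudWatch Logs; the VPC ID, log group name, and role ARN are placeholder assumptions, and no API call is actually made.

```python
# Keyword arguments for boto3's ec2.create_flow_logs(), enabling VPC Flow
# Logs that deliver to CloudWatch Logs. All identifiers are placeholders.
flow_log_params = {
    "ResourceIds": ["vpc-0abc1234def567890"],
    "ResourceType": "VPC",
    "TrafficType": "ALL",  # capture both accepted and rejected traffic
    "LogGroupName": "agency-vpc-flow-logs",
    "DeliverLogsPermissionArn": "arn:aws:iam::111122223333:role/FlowLogsDelivery",
}

# In a live account: boto3.client("ec2").create_flow_logs(**flow_log_params)
print(sorted(flow_log_params))
```

TrafficType="ALL" matters for the TIC use case: rejected flows are often the security-relevant signal.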
TOMON05 Operational Exercises (CUSTOMER): Customer responsibility. Contact your AWS sales representative regarding the Security Incident Response Simulation (SIRS) Game Day offering.

TOREP01 Customer Service Metrics (SHARED): AWS maintains customer service metrics. Customers must provide customer service for their application. The customer secures an AWS Support plan and designates an account point of contact so that AWS customer service may engage as required.

TOREP02 Operational Metrics (SHARED): AWS maintains operational metrics. Customers must provide operational metrics for their application.

TOREP03 Customer Notification (SHARED): AWS provides network and security operations continuously. Leverage AWS log sources (e.g., S3 logs, ELB logs, VPC Flow Logs, CloudTrail, Config) and customer-specific logs (e.g., OS logs, application logs) to assess network and security operations. The customer designates security points of contact within a customer account so that AWS may communicate detected anomalies. Customers provide like capabilities for users of applications they operate on AWS.

TOREP04 Incident Reporting (SHARED): AWS provides network and security operations continuously. Leverage AWS log sources (e.g., S3 logs, ELB logs, VPC Flow Logs, CloudTrail, Config) and customer-specific logs (e.g., OS logs, application logs) to assess network and security operations. The customer designates security points of contact within a customer account so that AWS may communicate detected anomalies. Customers provide like capabilities for
users of applications they operate on AWS.

TORES01 Response Timeframe (SHARED): AWS provides network and security operations continuously. Leverage AWS log sources (e.g., S3 logs, ELB logs, VPC Flow Logs, CloudTrail, Config) and customer-specific logs (e.g., OS logs, application logs) to assess network and security operations. The customer designates security points of contact within a customer account so that AWS may communicate detected anomalies. Customers provide their own incident response plan.

TORES02 Response Guidance (SHARED): AWS provides network and security operations continuously. Leverage AWS log sources (e.g., S3 logs, ELB logs, VPC Flow Logs, CloudTrail, Config) and customer-specific logs (e.g., OS logs, application logs) to assess network and security operations. The customer designates security points of contact within a customer account so that AWS may communicate detected anomalies. Customers provide their own incident response plan.

TORES03 Denial of Service Response (SHARED): Leverage anti-DDoS design patterns described in AWS whitepapers. Leverage Elastic Load Balancing. Leverage Auto Scaling. Leverage network access control lists. Leverage security groups. Leverage AWS Marketplace providers for appropriate tools.

TSCF01 Application Layer Filtering (SHARED): Leverage AWS Marketplace providers for appropriate tools.

TSCF02 Web Session Filtering (SHARED): Leverage AWS Marketplace providers for appropriate tools.
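One mapping in the remainder of this table, TSCF10 (reducing clear-text management protocols), points to the IAM aws:SecureTransport policy condition. A common way to apply it is an S3 bucket policy that denies any request not made over TLS, sketched below; the bucket name is a placeholder assumption.

```python
import json

# Sketch of the aws:SecureTransport condition in an S3 bucket policy:
# deny every request that does not arrive over TLS, pushing clients off
# clear-text access. The bucket name is a placeholder.
deny_cleartext_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonTLSAccess",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

print(json.dumps(deny_cleartext_policy))
```

Because it is a Deny statement, it overrides any Allow elsewhere in the account for non-TLS requests to that bucket.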
TSCF03 Web Firewall (SHARED): Leverage AWS Marketplace providers for appropriate tools.

TSCF04 Mail Filtering (CUSTOMER): Customer responsibility.

TSCF05 Agency Specific Mail Filters (CUSTOMER): Customer responsibility.

TSCF06 Mail Forgery Detection (CUSTOMER): Customer responsibility.

TSCF07 Digitally Signing Mail (CUSTOMER): Customer responsibility.

TSCF08 Mail Quarantine (CUSTOMER): Customer responsibility.

TSCF09 Cryptographically Authenticated Protocols (SHARED): AWS Direct Connect requires customer use of BGP MD5 authentication.

TSCF10 Reduce the Use of Clear-Text Management Protocols (SHARED): Leverage the IAM aws:SecureTransport policy condition.

TSCF13 DNS Filtering (CUSTOMER): Customer responsibility.

TSINS01 NCPS (CUSTOMER): Customer responsibility.

TSPF01 Secure All TIC Traffic (CUSTOMER): Customer responsibility.

TSPF03 Stateless Filtering (SHARED): Leverage network access control lists.

TSPF04 Stateful Filtering (SHARED): Leverage security groups. Leverage AWS Marketplace providers.

TSPF06 Asymmetric Routing (SHARED): Implement symmetric routing into or out from AWS.

TSRA01 Agency User Remote Access (CUSTOMER): Customer responsibility. Leverage a customer gateway, VPN, and virtual private gateway to connect a VPC with a site-to-site VPN.

TSRA02 External Dedicated Access (CUSTOMER): Customer responsibility.

TSRA03 Extranet Dedicated Access (CUSTOMER): Customer responsibility.

APPENDIX C: Mapped FedRAMP TIC Capabilities Matrix

FedRAMP TIC Capabilities Version 2.0. Each entry below lists the capability ID and summary, the capability definition, and the pilot
assessment (INCLUDED, EXCLUDED, or DEFERRED) and assessment status.

TMAU01 User Authentication (INCLUDED / IMPLEMENTED): TIC systems and components comply with NIST SP 800-53 identification and authentication controls for high-impact systems (FIPS 199). Administrative access to TIC access point devices requires multi-factor authentication (OMB M-11-11).

TMCOM01 TIC and US-CERT (TS/SCI) (EXCLUDED / N/A): The TICAP has a minimum of three qualified people with TOP SECRET/SCI clearance available within 2 hours, 24x7x365, with authority to report, acknowledge, and initiate action based on TOP SECRET/SCI-level information, including tear-line information, with US-CERT. Authorized personnel with TOP SECRET/SCI clearances have 24x7x365 access to an ICD 705-accredited Sensitive Compartmented Information Facility (SCIF), including the following TOP SECRET/SCI communications channels: secure telephone (STE/STU) and card authorized for TOP SECRET/SCI, and a secure fax machine. Typically, personnel with appropriate clearances to handle classified information will include at least the senior NOC/SOC manager, the Chief Information Security Officer (CISO), the Chief Information Officer (CIO), and other personnel as determined by the agency. The SCIF may be shared with another agency and should be within 30 minutes of the TIC management location during normal conditions in order for authorized personnel to exchange classified information, evaluate the recommendations, initiate the response, and report operational status with US-CERT within two hours of the notification.

TMCOM02 TIC and Customer: The Multi-Service TICAP secures and authenticates the administrative communications (i.e., customer
service) between the TICAP operator and each TICAP client. (INCLUDED / IMPLEMENTED)

TMCOM03 TIC and US-CERT (SECRET) (EXCLUDED / N/A): The TICAP has a minimum of one qualified person with SECRET or higher clearance immediately available on each shift, 24x7x365, with authority to report, acknowledge, and initiate action based on SECRET-level information, including tear-line information, with US-CERT. Authorized personnel with SECRET clearances or higher have 24x7x365 immediate access at the TIC management location (NOC/SOC) to the following SECRET communications channels: secure telephone (STE/STU) and card authorized for SECRET or higher, a secure fax machine, a SECRET-level email account able to exchange messages with the Homeland Secure Data Network (HSDN), and access to the US-CERT SECRET website. Additionally, authorized personnel with TOP SECRET/SCI clearances have 24x7x365 access, within 2 hours of notification, to an ICD 705-accredited Sensitive Compartmented Information Facility (SCIF), including the following TOP SECRET/SCI communications channels: secure telephone (STE/STU) and card authorized for TOP SECRET/SCI, a secure fax machine, a TOP SECRET/SCI-level email account able to exchange messages with the Joint Worldwide Intelligence Communications System (JWICS), and access to the US-CERT TOP SECRET website.

TMDS01 Storage Capacity (INCLUDED / IMPLEMENTED): Each TIC access point must be able to perform real-time header and content capture of all inbound and outbound traffic for administrative, legal, audit, or other operational purposes. The TICAP has storage capacity to retain at least 24 hours of data generated at full TIC operating capacity. The TICAP
is able to selectively filter and store a subset of inbound and outbound traffic.

TMDS02 Back-up Data (INCLUDED / IMPLEMENTED): In the event of a TICAP system failure or compromise, the TICAP has the capability to restore operations to a previous clean state. Backups of configurations and data are maintained offsite in accordance with the TICAP continuity of operations plan.

TMDS03 Data Ownership (INCLUDED / IMPLEMENTED): The Multi-Service TICAP documents, in the agreement with the customer agency, that the customer agency retains ownership of its data collected by the TICAP.

TMDS04 Data Attribution & Retrieval (INCLUDED / IMPLEMENTED): The Multi-Service TICAP identifies and can retrieve each customer agency's data for the customer agency without divulging any other agency's data.

TMDS05 DLP (INCLUDED / IMPLEMENTED): The TICAP has a Data Loss Prevention program and follows a documented procedure for Data Loss Prevention.

TMLOG01 NTP Server (INCLUDED / IMPLEMENTED): Each TIC access point has a Network Time Protocol (NTP) Stratum 1 system as a stable Primary Reference Time Server (PRTS) synchronized within 0.25 seconds relative to Coordinated Universal Time (UTC). The primary synchronization method is an out-of-band NIST/USNO national reference time source (Stratum 0), such as the Global Positioning System (GPS) or WWV radio clock. See the TIC Reference Architecture, Appendix F, for additional information.

TMLOG02 Time Stamping (INCLUDED / IMPLEMENTED): All TIC access point event recording clocks are synchronized to within 3 seconds relative to Coordinated
Universal Time (UTC). All TICAP log timestamps include the date and time with at least to-the-second granularity. Log timestamps that do not use Coordinated Universal Time (UTC) include a clearly marked time zone designation. The intent is to facilitate incident analysis between TICAPs and TIC networks and devices.

TMLOG03 Session Traceability (INCLUDED / IMPLEMENTED): The TICAP provides online access to at least 7 days of session traceability and auditability by capturing and storing logs/files from installed TIC equipment, including but not limited to firewalls, routers, servers, and other designated devices. The TICAP maintains the logs needed to establish an audit trail of administrator, user, and transaction activity, sufficient to reconstruct security-relevant events occurring on, performed by, and passing through TIC systems and components. Note: This capability is intended for immediate online access in order to trace session connections and analyze security-relevant events. In addition, TMLOG04 requires retaining logs for an additional period of time, either online or offline.

TMLOG04 Log Retention (INCLUDED / IMPLEMENTED): The TICAP follows a documented procedure for log retention and disposal, including but not limited to administrative logs, session connection logs, and application transaction logs. Record retention and disposal schedules are in accordance with the National Archives and Records Administration's existing General Records Schedules, in particular Schedule 12, "Communications
Records," and Schedule 20, "Electronic Records," or a NARA-approved agency-specific schedule. Note: This capability is intended for the management and operation of the TICAP itself and does not require the TICAP to infer or implement retention policies based on the content of TICAP client communications. The originator and recipient of communications through a TICAP remain responsible for their own retention and disposal policies.

TMPC01 TIC Facility (DEFERRED / N/A): The TIC access points comply with NIST SP 800-53 physical security controls for high-impact systems (FIPS 199).

TMPC02 NOC/SOC Facilities (DEFERRED / N/A): The TIC management locations, such as a Network Operations Center (NOC) and a Security Operations Center (SOC), comply with NIST SP 800-53 physical security controls for medium-impact systems (FIPS 199).

TMPC03 SCIF Facilities (EXCLUDED / N/A): The TICAP maintains access to an accredited Sensitive Compartmented Information Facility (SCIF) that complies with ICD 705, "Sensitive Compartmented Information Facilities."

TMPC04 Dedicated TIC Spaces (DEFERRED / N/A): The TIC access points and TIC management functions, such as the NOC/SOC, are located in spaces dedicated for exclusive use or support of the US Government. The space is secured by physical access controls to ensure that TIC systems and components are accessible only by authorized personnel. Examples of dedicated spaces include but are not limited to secured racks, cages, rooms, and buildings.

TMPC05 Facility Resiliency: The TIC access point is equipped for uninterrupted operations for at least 24 hours in the event of a power outage and conforms to specific physical standards, including but not
limited to the following: electrical systems meet or exceed the building operating and maintenance standards as specified by the GSA Public Buildings Service Standards (PBS 100); TIC systems and components are connected to uninterruptible power in order to maintain mission- and business-essential functions, including but not limited to TIC systems, support systems, and powered telecommunications facilities, including at the DEMARC or MPOE; and uninterruptible power systems, HVAC, and lighting are connected to an onsite automatic standby/emergency generator capable of operating continuously (without refueling) for at least 24 hours. (DEFERRED / N/A)

TMPC06 Geographic Diversity (INCLUDED / IMPLEMENTED): The Multi-Service TICAP has geographic separation between its TIC access points, with at least 10 miles of separation recommended. It is also recommended that single-agency TICAPs have geographic separation between their TIC access points.

TMTC01 Route Diversity (INCLUDED / IMPLEMENTED): The TIC access point follows the National Communications System (NCS) recommendations for route diversity, including at least two physically separate points of entry at the TIC access point and physically separate cabling paths to an external telecommunications provider or Internet provider facility.

TMTC02 Least Functionality (INCLUDED / IMPLEMENTED): TIC systems and components in the TIC access point are configured according to the principle of "least functionality," in that they provide only essential capabilities and specifically prohibit or restrict the use of nonessential functions, ports, protocols, and/or services.

TMTC03 IPv6: All TIC systems and components of the TIC access point support both IPv4 and IPv6
protocols in accordance with OMB Memorandum M-05-22 and the Federal CIO memorandum "Transition to IPv6." The TICAP supports both IPv4 and IPv6 addresses and can transit both native IPv4 and native IPv6 traffic (i.e., dual-stack) between external connections and agency internal networks. The TICAP may also support other IPv6 transit methods, such as tunneling or translation. The TICAP ensures that TIC access point systems implement IPv6 capabilities (native, tunneling, or translation) without compromising IPv4 capabilities or security. IPv6 security capabilities should achieve at least functional parity with IPv4 security capabilities. (INCLUDED / GAP)

TMTC04 DNS Authoritative Servers (INCLUDED / IMPLEMENTED): The TIC access point supports hosted DNS services, including DNSSEC, for TICAP client domains. The TICAP configures DNS services in accordance with, but not limited to, the following recommendations from NIST SP 800-81 Rev. 1: (1) the TICAP deploys separate authoritative name servers from caching (also known as resolving/recursive) name servers, or an alternative architecture preventing cache poisoning; (2) the TICAP implements DNSSEC by meeting NIST SP 800-81 Rev. 1 for key generation, key storage, key publishing, zone signing, and signature verification.

TMTC05 Response Authority (INCLUDED / IMPLEMENTED): The TICAP maintains normal delegations and devolution of authority to ensure essential incident response performance to a no-notice event. This includes but is not limited to terminating, limiting, or modifying access to external connections, including to the Internet, based on documented criteria, including when advised by US-CERT.

TMTC06 TIC Staffing: The TIC management location, such
as a Network Operations Center (NOC) and/or Security Operations Center (SOC), is staffed 24x7. On-scene personnel are qualified and authorized to initiate appropriate technical responses, including when external access is disrupted. (INCLUDED / IMPLEMENTED)

TMTC07 Response Access (INCLUDED / IMPLEMENTED): TICAP operations personnel have 24x7 physical or remote access to the TIC management systems that control the TIC access point devices. Using this access, TICAP operations personnel can terminate, troubleshoot, or repair external connections, including to the Internet, as required.

TOMG01 System Inventory (INCLUDED / IMPLEMENTED): The TICAP develops, documents, and maintains a current inventory of all TIC information systems and components, including relevant ownership information.

TOMG02 Change & Configuration Management (INCLUDED / IMPLEMENTED): The TICAP follows a formal configuration management and change management process to maintain a proper baseline.

TOMG03 Change Communication (DEFERRED / N/A): The TICAP communicates all changes approved through the formal configuration management and change management processes to customers, as defined in SLAs or other authoritative documents.

TOMG04 Contingency Planning (INCLUDED / IMPLEMENTED): The TICAP maintains an Information Systems Contingency Plan (ISCP) that provides procedures for the assessment and recovery of TIC systems and components following a disruption. The contingency plan should be structured and implemented in accordance with NIST SP 800-34 Rev. 1.

TOMG05 TSP: The TICAP has telecommunications service priority (TSP) configured for external connections, including to the Internet, to provide for priority restoration of
telecommunication services. (DEFERRED / N/A)

TOMG06 Maintenance Scheduling (DEFERRED / N/A): The TICAP employs a formal technical review process to schedule, conduct, document, and communicate maintenance and repairs. The TICAP maintains maintenance records for TIC systems and components. The intent of this capability is to minimize downtime and the operational impact of scheduled maintenance and outages.

TOMG07 Network Inventory (INCLUDED / IMPLEMENTED): The TICAP maintains a complete map or other inventory of all customer agency networks connected to the TIC access point. The TICAP validates the inventory through the use of network mapping devices. Static translation tables and appropriate points of contact are provided to US-CERT on a quarterly basis to allow in-depth incident analysis.

TOMG08 Service Level Agreement (INCLUDED / NOT ASSESSED): The Multi-Service TICAP provides each customer with a detailed Service Level Agreement.

TOMG09 Tailored Service Level Agreement (INCLUDED / NOT ASSESSED): The Multi-Service TICAP provides an exception request process for individual customers.

TOMG10 Tailored Security Policies (INCLUDED / NOT ASSESSED): The Multi-Service TICAP accommodates individual customer agencies' security policies and corresponding security controls, as negotiated with the customer.

TOMG11 Tailored Communications (INCLUDED / NOT ASSESSED): The Multi-Service TICAP accommodates tailored communications processes to meet individual customer requirements.

TOMON01 Situational Awareness: The TICAP maintains situational awareness of the TIC and its supported networks as needed to support customer security requirements. Situational awareness can be achieved by correlating data from
multiple sources, multiple vendors, and multiple types of data by using, for example, Security Incident & Event Management (SIEM) tools. (DEFERRED / N/A)

TOMON02 Vulnerability Scanning (INCLUDED / IMPLEMENTED): At a minimum, the TICAP annually conducts and documents a security review of the TIC access point and undertakes the necessary actions to mitigate risk to an acceptable level (FISMA, FIPS 199, and FIPS 200). Vulnerability scanning of the TIC architecture is a component of the security review.

TOMON03 Audit Access (INCLUDED / IMPLEMENTED): The TICAP provides access for government-authorized auditing of the TIC access point, including all TIC systems and components. Authorized assessment teams are provided access to previous audit results of TIC systems and components, including but not limited to C&A and ICD documentation.

TOMON04 Log Sharing (INCLUDED / IMPLEMENTED): The TICAP monitors and logs all network services where possible, including but not limited to DNS, DHCP, system and network devices, web servers, Active Directory, firewalls, NTP, and other information assurance devices/tools. These logs can be made available to US-CERT on request.

TOMON05 Operational Exercises (INCLUDED / IMPLEMENTED): The TIC Access Provider participates in operational exercises that assess the security posture of the TIC. The lessons learned from operational exercises are incorporated into network defenses and operational procedures for both the TICAP and its customers.

TOREP01 Customer Service Metrics (INCLUDED / IMPLEMENTED): The TICAP collects customer service metrics about the TIC access point and reports them to its customers,
DHS, and/or OMB as required. Examples of customer service metrics include but are not limited to performance within SLA provisions, issue identification, issue resolution, customer satisfaction, and quality of service.

TOREP02 Operational Metrics (INCLUDED / IMPLEMENTED): The TICAP collects operational metrics about the TIC access point and reports them to its customers, DHS, and/or OMB as requested. Examples of operational metrics include but are not limited to performance within SLA provisions, network activity data (including normal and peak usage), and improvement to customer security posture.

TOREP03 Customer Notification (INCLUDED / IMPLEMENTED): The Multi-Service TICAP reports threats, alerts, and computer security-related incidents and suspicious activities that affect a subscribing agency to the subscribing agency.

TOREP04 Incident Reporting (INCLUDED / IMPLEMENTED): The TICAP reports incidents to US-CERT in accordance with federal laws, regulations, and guidance.

TORES01 Response Timeframe (INCLUDED / IMPLEMENTED): The TICAP has a documented and operational incident response plan in place that defines actions to be taken during a declared incident. In the event of a declared incident or notification from US-CERT, TICAP operations personnel immediately activate incident response plan(s). TICAP operations personnel report operational status to US-CERT within two hours and continue to report based on US-CERT direction.
TORES02 Response Guidance (INCLUDED / IMPLEMENTED): TIC operations personnel acknowledge, implement, and document tactical threat and vulnerability mitigation guidance provided by US-CERT.

TORES03 Denial of Service Response (INCLUDED / IMPLEMENTED): The TICAP manages filters, excess capacity, bandwidth, or other redundancy to limit the effects of information-flooding types of denial of service attacks on the organization's internal networks and TICAP services. The TICAP has agreements with external network operators to reduce susceptibility to, and respond to, information-flooding types of denial of service attacks. The Multi-Service TICAP mitigates the impact on non-targeted TICAP clients from a DoS attack on a particular TICAP client. This may include diverting information-flooding types of denial of service attacks targeting a particular TICAP client in order to maintain service to other TICAP clients.

TSCF01 Application Layer Filtering (INCLUDED / IMPLEMENTED): The TIC access point uses a combination of application firewalls (stateful application protocol analysis), application proxy gateways, and other available technical means to implement inbound and outbound application-layer filtering. The TICAP will develop and implement a risk-based policy on filtering or proxying new protocols.

TSCF02 Web Session Filtering (INCLUDED / IMPLEMENTED): The TIC access point filters outbound web sessions from TICAP clients based on, but not limited to: web content, active content, destination URL pattern, and IP address. Web
| Web Firewall | The TIC access point filters inbound web sessions to web servers at the HTTP/HTTPS/SOAP/XML-RPC/Web Service application layers from, but not limited to: cross-site scripting (XSS), SQL injection flaws, session tampering, buffer overflows, and malicious web crawlers. | INCLUDED | IMPLEMENTED

TSCF04 | Mail Filtering | The TIC access point performs malware scanning, filters content, and blocks spam-sending servers as specified by NIST 800-45, "Guidelines for Electronic Mail Security," for inbound and outbound mail. These TIC access point protections are in addition to malware scanning and content filtering performed by the agency's mail servers and end users' host systems. The TICAP takes agency-specified actions for potentially malicious or undesirable mail, including at least the following actions: block messages, tag undesirable content, sanitize malicious content, and deliver normally. Multi-Service TICAPs tailor their malware and content filtering services for individual agency mail domains. | INCLUDED | NOT ASSESSED

TSCF05 | Agency Specific Mail Filters | The TIC access point uses an agency-specified custom processing list with at least the combinations of senders, recipients, network IP addresses, or host names. The agency-specified custom processing list has custom TICAP malware and content filtering actions. Mail allowed by an agency-specified custom processing list is still scanned by the TICAP for malware or undesirable content and tagged if found. Multi-Service TICAPs tailor their malware and content filtering services for individual agency mail domains. | INCLUDED | NOT ASSESSED

TSCF06 | Mail Forgery Detection | For email received from other agency mail domains known to have domain-level sender authentication (for example, DomainKeys Identified Mail or Sender Policy Framework), the TIC access point includes the results of the domain-level sender forgery analysis when determining potentially suspicious or undesirable email. This capability is intended to support domain-level sender authentication but does not necessarily confirm a particular sender or message is trustworthy. Scoring criteria for this capability will be aligned with the National Strategy for Trusted Identities in Cyberspace (NSTIC). The TICAP takes agency-specific actions for email determined to be suspicious or undesirable. | INCLUDED | NOT ASSESSED

TSCF07 | Digitally Signing Mail | For email sent to other agency mail domains, the TICAP ensures the messages have been digitally signed at the domain level (for example, DomainKeys Identified Mail) in order to allow receiving agencies to verify the source and integrity of email. This capability is intended to support domain-level sender authentication but does not necessarily confirm a particular sender or message is trustworthy. Signing procedures will be in alignment with the National Strategy for Trusted Identities in Cyberspace and may occur at the bureau or agency sub-component level instead of the TIC access point. | INCLUDED | NOT ASSESSED

TSCF08 | Mail Quarantine | The TICAP quarantines mail categorized as potentially suspicious while the agency's mail domain reviews and decides what action to take. The agency's mail domain can take at least the following actions: block the message, deliver the message, sanitize malicious content, and tag undesirable content. Note: this is intended to be an additional
option which agency mail operators can specify with capability TSCF04; it does not require agencies to quarantine potentially suspicious mail. | INCLUDED | NOT ASSESSED

TSCF09 | Cryptographically Authenticated Protocols | The TICAP validates routing protocol information using authenticated protocols. The TICAP configures Border Gateway Protocol (BGP) sessions in accordance with, but not limited to, the following recommendation from NIST SP 800-54: BGP sessions are protected with the MD5 signature option. NIST and DHS are collaborating on additional BGP robustness mechanisms and plan to publish future deployment recommendations and guidance. | INCLUDED | IMPLEMENTED

TSCF10 | Reduce the Use of Clear Text Management Protocols | The TIC access point limits and documents the use of unauthenticated clear text protocols for TIC management, and will phase out such protocols or enable cryptographic authentication where technically and operationally feasible. | INCLUDED | IMPLEMENTED

TSCF11 | Encrypted Traffic Inspection | The TICAP has a documented procedure or plan that explains how it inspects and analyzes encrypted traffic. The document includes a description of defensive measures taken to protect TICAP clients from malicious content or unauthorized data exfiltration when traffic is encrypted. The TIC access point analyzes all encrypted traffic for suspicious patterns that might indicate malicious activity, and logs at least the source, destination, and size of the encrypted connections for further analysis. | DEFERRED | N/A

TSCF12 | User Authentication | The TICAP has a documented procedure or plan that explains how it inspects and analyzes connections by particular TICAP client end users or host systems which have custom requirements for malware and content filtering. Connection content is still scanned by the TICAP for malware or undesirable content and logged by the TICAP when found. | DEFERRED | N/A

TSCF13 | DNS Filtering | The TIC access point filters DNS queries and performs validation of DNS Security Extensions (DNSSEC) signed domains for TICAP clients. The TICAP configures DNS resolving/recursive (also known as caching) name servers in accordance with, but not limited to, the following recommendations from NIST SP 800-81 Revision 1 (Draft): 1. The TICAP deploys separate recursive name servers from authoritative name servers to prevent cache poisoning. 2. The TICAP filters DNS queries for known malicious domains. 3. The TICAP logs at least the query, answer, and client identifier. | INCLUDED | IMPLEMENTED

TSINS01 | NCPS | The TIC access point participates in the National Cyber Protection System (NCPS, operationally known as Einstein). | INCLUDED | NOT ASSESSED

TSINS02 | IDS/NIDS | The TIC access point passes all inbound/outbound network traffic through Network Intrusion Detection Systems (NIDS) configured with custom signatures, including signatures for the application layer. This includes, but is not limited to, critical signatures published by US-CERT. | DEFERRED | N/A

TSPF01 | Secure All TIC Traffic | All external connections are routed through a TIC access point, and scanned and filtered by TIC systems and components according to the TICAP's documented policy, which includes critical security policies when published by US-CERT. The definition of "external connection" is in accordance with the TIC Reference Architecture, Appendix A (Definition of External Connection).
| INCLUDED | IMPLEMENTED

TSPF02 | Default Deny | By default, the TIC access point blocks network protocols, ports, and services. The TIC access point only allows necessary network protocols, ports, or services with a documented mission requirement and approval. | DEFERRED | N/A

TSPF03 | Stateless Filtering | The TIC access point implements stateless blocking of all inbound and outbound connections without being limited by connection state tables of TIC systems and components. Attributes inspected by stateless blocks include, but are not limited to: direction (inbound, outbound, interface); source and destination IPv4/IPv6 addresses and network masks; network protocols (TCP, UDP, ICMP, etc.); source and destination port numbers (TCP, UDP); and message codes (ICMP). | INCLUDED | IMPLEMENTED

TSPF04 | Stateful Filtering | By default, the TIC access point blocks unsolicited inbound connections. For authorized outbound connections, the TIC access point implements stateful inspection that tracks the state of all outbound connections and blocks packets that deviate from standard protocol state transitions. Protocols supported by stateful inspection devices include, but are not limited to: ICMP (errors matched to original protocol header), TCP (using protocol state transitions), UDP (using timeouts), other Internet protocols (using timeouts), and stateless network filtering attributes. | INCLUDED | IMPLEMENTED

TSPF05 | Filter by Source Address | The TIC access point only permits outbound connections from previously defined TICAP clients, using Egress Source Address Verification. It is recommended that inbound filtering rules block traffic from packet source addresses assigned to internal networks and special use addresses (IPv4: RFC 5735; IPv6: RFC 5156). | DEFERRED | N/A

TSPF06 | Asymmetric Routing | The TIC access point stateful inspection devices correctly process traffic returning through asymmetric routes to a different TIC stateful inspection device, or the TICAP documents how return traffic is always routed to the same TIC access point stateful inspection device. | INCLUDED | IMPLEMENTED

TSPF07 | FedVRS (H.323) | The TIC access point supports Federal Video Relay Service (FedVRS) for the Deaf (www.gsa.gov/fedrelay) network connections, including but not limited to devices implementing stateful packet filters. Please refer to http://www.fedvrs.us/supports/technical for FedVRS technical requirements. Agencies may document alternative ways to achieve reasonable accommodation for users of FedVRS. | EXCLUDED | N/A

TSRA01 | Agency User Remote Access (Pilot Assessment: INCLUDED; Assessment Status: NOT ASSESSED) | The TIC access point supports telework/remote access for TICAP client authorized staff and users, using ad hoc Virtual Private Networks (VPNs) through external connections, including the Internet. This capability is not intended to include permanent VPN connections for remote branch offices or similar locations. In addition to supporting the requirements of OMB M-06-16, "Protection of Sensitive Agency Information," the following baseline capabilities are supported for telework/remote access at the TIC access point: 1. The VPN connection terminates behind NCPS and the full suite of TIC capabilities, which means all outbound traffic to/from the VPN users to external connections, including the Internet, can be inspected by NCPS. 2. The VPN connection terminates in front of TICAP-managed security controls, including but not limited to a firewall and IDPS, to allow traffic to/from remote access users to internal networks to be inspected. 3. NIST FIPS 140-2 validated cryptography is used to implement encryption on all VPN connections (see NIST SP 800-46 Rev. 1). 4. Split tunneling is not allowed (see NIST SP 800-46 Rev. 1); any VPN connection that allows split tunneling is considered an external connection and terminates in front of NCPS. 5. Multi-factor authentication is used (see NIST SP 800-46 Rev. 1; OMB M-11-11). 6. VPN concentrators and Virtual Desktop/Application Gateways use hardened appliances maintained as TICAP network security boundary devices. 7. If telework/remote clients use Government Furnished Equipment (GFE), the VPN connection may use access at the IP network level and access through specific Virtual Desktops/Application Gateways. 8. If telework/remote clients use non-GFE, the VPN connection uses only access through specific Virtual Desktops/Application Gateways. TICAP clients may support additional telework/remote access connections for authorized staff and users using equivalent agency-managed security controls at non-TIC access point locations. The agency-level NOC/SOC is responsible for maintaining the inventory of additional telework/remote access connections and coordinating agency-managed security controls. Because of the difficulty of verifying the configuration,
sanitizing temporary and permanent data storage, and analyzing possible compromises of non-Government Furnished Equipment, it is the agency's responsibility to document, in accordance with OMB M-07-16, if sensitive data may be accessed remotely using non-GFE, and to inform the TIC Access Provider of the appropriate security configuration policies to implement.

TSRA02 | External Dedicated Access | The TIC access point supports dedicated external connections to external partners (e.g., non-TIC federal agencies, externally connected networks at business partners, state/local governments) with a documented mission requirement and approval. This includes, but is not limited to, permanent VPN over external connections, including the Internet, and dedicated private line connections to other external networks. The following baseline capabilities are supported for external dedicated VPN and private line connections at the TIC access point: 1. The connection terminates in front of NCPS to allow traffic to/from the external connections to be inspected. 2. The connection terminates in front of the full suite of TIC capabilities to allow traffic to/from external connections to be inspected. 3. VPN connections use NIST FIPS 140-2 validated cryptography over shared public networks, including the Internet. 4. Connections terminated in front of NCPS may use split tunneling. | INCLUDED | NOT ASSESSED

TSRA03 | Extranet Dedicated Access | The TIC access point supports dedicated extranet connections to internal partners (e.g., TIC federal agencies, closed networks at business partners, state/local governments) with a documented mission requirement and approval. This includes, but is not limited to, permanent VPN over external connections, including the Internet, and dedicated private line connections to other internal networks. The following baseline capabilities are supported for extranet dedicated VPN and private line connections at the TIC access point: 1. The connection terminates behind NCPS and the full suite of TIC capabilities, which means all outbound traffic to/from the extranet connections to external connections, including the Internet, is inspected by NCPS. 2. The connection terminates in front of TICAP-managed security controls, including but not limited to a firewall and IDPS, to allow traffic to/from extranet connections to internal networks, including other extranet connections, to be inspected. 3. VPN connections use NIST FIPS 140-2 validated cryptography over shared public networks, including the Internet. 4. Split tunneling is not allowed; any VPN connection that allows split tunneling is considered an external connection and must terminate in front of NCPS. TICAP clients may support dedicated extranet connections with internal partners using equivalent agency-managed security controls at non-TIC access point locations. The agency-level NOC/SOC is responsible for maintaining the inventory of extranet connections with internal partners and coordinating agency-managed security controls. | INCLUDED | NOT ASSESSED

Notes
1. https://www.whitehouse.gov/sites/default/files/omb/assets/omb/memoranda/fy2008/m08-05.pdf
2.
https://www.fedramp.gov/files/2015/04/Description FTOverlay.docx
3. https://www.fedramp.gov/draft-fedramp-tic-overlay/
",General,consultant,Best Practices
Homelessness_and_Technology,Homelessness and Technology: How Technology Can Help Communities Prevent and Combat Homelessness
March 2019

This document has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents AWS's current product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS's products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. AWS's responsibilities and liabilities to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
Best Practices for Combatting Homelessness
Connect Data Sources with Data Lakes
Data Lake Solution
AWS Lake Formation
Enable Data Analytics Using Big Data and Machine Learning Techniques
Data Processing and Storage
Make Predictions with Machine Learning and Analytics
Manage Identity and Vital Records
Leverage AWS for HIPAA Compliance
HMIS Data Privacy and HIPAA
Conclusion
Contributors
Further Reading
Document Revisions

Abstract
The disparate nature of current homeless information management systems limits a community's ability to identify trends or emerging needs, measure internal performance goals, and make data-driven decisions about the effective deployment of limited resources. With the shift in recent years to
whole person care, there is increasing demand to connect these disparate systems to effect better outcomes. In this document, we have outlined four pillars of how AWS technology and services can act as best practices for organizations looking to leverage the cloud for Homeless Management Information Systems (HMIS). These pillars are as follows:
• Connect disparate data sources using a data lake design pattern
• Make predictions using data analytics workloads, big data, and machine learning
• Manage identity and vital records for people experiencing, or at risk of experiencing, homelessness
• Leverage the AWS Business Associate Addendum (BAA) and associated services for Health Insurance Portability and Accountability Act (HIPAA) compliance and NIST-based assurance frameworks

Introduction
Preventing and combatting homelessness depends on a coordinated Continuum of Care (CoC) on the ground locally, sharing information across disparate systems and collaborating with public, nonprofit, philanthropic, and private sector partners. The systems that collect this information today (i.e., homelessness services, electronic health records, education, and criminal justice information systems) were designed independently to address particular applications and are managed by different entities with separate IT systems and governance. The disparate nature of these systems limits a community's ability to identify trends or emerging needs, measure internal performance goals, and make data-driven decisions about the effective deployment of limited resources. With the shift in recent years to whole person care, there is increasing demand to connect these disparate systems to effect better outcomes. Redesigning these systems for interoperability is critical, but it will take time. In the meantime, you can use the best practices in this document to connect disparate information today, to develop a comprehensive view for each client, to drive
better outcomes, and to enable analytics that support data-driven decision making.

Best Practices for Combatting Homelessness
The following best practices focus on addressing some of the challenges of combatting homelessness, but they are highly applicable to other socioeconomic and healthcare challenges that cross multiple systems:
• Connect disparate data sources using a data lake design pattern
• Make predictions using data analytics workloads, big data, and machine learning
• Manage identity and vital records for people experiencing, or at risk of experiencing, homelessness
• Leverage the AWS Business Associate Addendum (BAA) and associated services for Health Insurance Portability and Accountability Act (HIPAA) compliance and NIST-based assurance frameworks

Connect Data Sources with Data Lakes
Connecting disparate data sources to create a comprehensive view of the homeless population and their interactions across numerous service providers and government entities can come with many technical challenges. Schema and structural differences in separate locations can be difficult to combine and query in a single place. Also, some data may be highly structured, whereas other datasets may be less structured and involve a smaller signal-to-noise ratio. For example, data stored in a tabular CSV format from a traditional database, combined with a nested JSON schema that may come from a fleet of devices (e.g., personal health records versus real-time medical equipment data), can be difficult to join and query together using a relational database alone.
A data lake is a centralized repository that allows you to store all of your structured and unstructured data at any scale. You can store your data as is, without having to first structure it, and run different types of analytics. Dashboards, visualizations, big data processing, real-time analytics, and machine learning can all help contribute to better decision making and
improve client outcomes.
A data warehouse is a central repository of structured information that can be analyzed to make better-informed decisions. Data flows into a data warehouse from transactional systems, relational databases, and other sources, typically on a regular cadence. Business analysts, data scientists, and decision makers access the data through business intelligence (BI) tools, SQL clients, and other analytics applications. Data warehouses and data lakes complement each other well by allowing separation of concerns and leveraging scalable storage and scalable analytic capability, respectively.

Figure 1: Connecting Disparate Data Sources

A Homeless Management Information System (HMIS) is an information technology system used to collect client-level data, and data on the provision of housing and services to homeless individuals and families and persons at risk of homelessness. You can create data lakes to connect disparate HMIS data across CoC and regional boundaries. With a consolidated dataset, you gain a comprehensive and unduplicated understanding of who is served, with which programs, and to what outcomes across a region or state. This depth of understanding reveals patterns that can help care providers rapidly create and tune interventions to the unique needs of homeless groups (e.g., veterans, youth, elders, the chronically homeless, and so on), and provides the public, elected officials, and funders with transparency about investments versus outcomes.
By centralizing data and allowing federated access to a searchable data catalog, you can address pain points around connecting disparate data systems. The data lake can accept data from many different sources. These may include, but are not limited to:
• Existing relational database and data warehouse engines (either on premises or in the cloud)
• Clickstream data from mobile or web applications
• Internet of Things (IoT) device data
• Flat file imports
• API data
• Media sources such as video and audio streams
This data should be stored durably and encrypted, both at rest and in transit, with industry standard open source tools, since the data may contain personally identifiable information (PII) and be subject to compliance controls. Federated access through an identity provider (e.g., Active Directory, Google, Facebook, etc.) should also be used as a means of authorization, to enable different teams to access the correct level of data. Metadata concerning the data should be held within a searchable data catalog to enable fast access to structural and data classification information. This should all be accomplished in a cost-effective and scalable manner, with the data held in its native format to facilitate export, further transformation, and analysis.

Data Lake Solution
The Data Lake solution automatically crawls data sources, identifies data formats, and then suggests schemas and transformations, so you don't have to spend time hand-coding data flows. For example, if you upload a series of JSON files to Amazon Simple Storage Service (Amazon S3), AWS Glue, a fully managed extract, transform, and load (ETL) tool, can scan these files and work out the schema and data types present within them. This metadata is then stored in a catalog to be used in subsequent transforms and queries. Additionally, user-defined tags are stored in Amazon DynamoDB, a key-value document database, to add business-relevant context to each dataset. The solution enables you to create simple governance policies that require specific tags when datasets are registered with the data lake. You can browse available datasets, or search on dataset attributes and tags, to quickly find and access data relevant to your business needs.

AWS Lake Formation
The AWS Lake Formation service builds on the existing data lake solution by allowing you to set up a secure data lake within days. Once you define where your lake is located,
Lake Formation collects and catalogs this data, moves the data into Amazon S3 for secure access, and finally cleans and classifies the data using machine learning algorithms. You can then access a centralized data catalog which describes available datasets and their appropriate usage. This approach has a number of benefits, from building out a data lake quickly to simplifying security management and allowing easy and secure self-service access.

Enable Data Analytics Using Big Data and Machine Learning Techniques
Communities want a better understanding of the circumstances that contribute to homelessness, to help prevent homelessness and accelerate someone's path out of homelessness. These predictions are critical inputs for the development of interventions across a continuum of care and for disaster response planning. With a data lake, communities can build, train, and tune machine learning models to predict outcomes.

Data Processing and Storage
In today's connected world, a number of data sources are available to be consumed. Some examples include public APIs, sensor/device data, website analytics, and imagery, as well as traditional forms of data such as relational databases and data warehouses.
Amazon Relational Database Service (Amazon RDS) allows developers to build and migrate existing databases into the cloud. AWS supports a large range of commercial and open source database engines (e.g., MySQL, PostgreSQL, Amazon Aurora, Oracle, SQL Server), allowing developers the freedom to keep their current database or migrate to an open source platform for cost savings and new features. Amazon RDS maintains high availability through the use of Multi-Availability Zone deployments to ensure that production databases stay operational in the event of a hardware failure.
For customers with data warehousing needs, Amazon Redshift enables developers to query large sets of structured data within Redshift and within Amazon S3. When combined with a
business intelligence tool such as Amazon QuickSight, Tableau, or Microsoft Power BI, you can create powerful data visualizations and gain insights into data that were previously out of reach on legacy IT systems.
Amazon Kinesis makes it easy to collect, process, and analyze streaming data. Kinesis enables the construction of real-time data dashboards, video analytics, and stream transformations to filter and query data as it comes into the organization from an array of sources.

Make Predictions with Machine Learning and Analytics
Machine learning can help answer complicated questions by making predictions about future events from past data. Some examples of machine learning models include image classification, regression analysis, personal recommendation systems, and time series forecasting. For a CoC, these capabilities may seem out of reach, but due to the power and scale of the cloud, they are now within anyone's reach. Amazon Comprehend Medical, Amazon Forecast, and Amazon Personalize put powerful machine learning model creation capabilities into the hands of developers, requiring no machine learning background or servers to manage.

Amazon Comprehend Medical
Amazon Comprehend Medical is a service that makes it easy to use natural language processing and machine learning to extract relevant medical information from unstructured text. For example, you can use Comprehend Medical to identify and search for key terms in a large corpus of health records, allowing case officers and medical professionals to look for recurring patterns or key phrases in patient records when providing treatment to homeless individuals.

Amazon Forecast
Amazon Forecast uses machine learning to combine time series data with additional variables to build forecasts. You can use Amazon Forecast to predict changes in a homeless population over time. Forecast can also consider how other correlating external
factors affect the population, such as natural disasters, severe weather, or the introduction of new programs and initiatives.

Amazon Personalize
Amazon Personalize is a machine learning service that makes it easy for developers to create individualized recommendations for customers using their applications. For example, individuals at risk of or experiencing homelessness often struggle to find assistance programs. Navigating these many programs and facilities can be daunting and time consuming. By using HMIS data from other individuals in similar situations, you can build a recommendation engine that suggests relevant programs to individuals and families. These recommendations enable them to access programs that they may not be aware of or have the time to research.

Manage Identity and Vital Records
Proof of identity and eligibility are critical to matching the right people, at the right time, to the right interventions. Copies of vital records, such as social security cards, birth certificates, proof of disability, and copies of utility bills, lease, or property title documents, are often required by the various programs that are designed to help those experiencing or at risk of experiencing homelessness. However, without a secure and reliable place to store and access these documents, the most vulnerable people are often left the worst off. Their lack of documentation can become a barrier to service and extend the length of a crisis.
In addition to the need for a secure storage location, customers need a mechanism to control and share documents with authorized parties, to evaluate eligibility for various programs and/or to verify authenticity. This mechanism must track who accesses these documents, at what time, and in what manner, in a cryptographically verifiable, immutable way. Ledger or blockchain based applications can meet this requirement by storing the interaction event metadata for a document or set of documents in
a verifiable ledger This ledger creates a verifiable audit trail that can store all of the events that occur during a document ’s lifetime Amazon Simple Storage Service (Amazon S3) Amazon Simple Storage Service ( Amazon S3) store s objects in the cloud reliably and at scale Using Amazon S3 you can build the substrate for a document storage and retrieval application Amazon S3 has many pertinent security features such as multi factor control of deleting and modifying objects and object versioning Amazon S3 also uses encryption at rest and in transit using industry standard encryption algorithms and a simple HTTPS based API Amazon S3 supports signed URLs so that access to objects can be granted for a limited time Finally Amazon S3 offers cost savings with intelligent tiering so that documents can be automatically moved into different storage tiers depending on their usage patterns Amazon Quantum Ledger Database (Amazon QLDB) Amazon Quantu m Ledger Database ( Amazon QLDB) is a fully managed ledger database that provides a transparent immutable and cryptographically verifiable transaction log owned by a central trusted authority Amazon QLDB tracks each and ArchivedAmazon Web Services Homelessness and Technology Page 13 every application data change and maintains a complete and verifiable history of changes over time Amazon Managed Blockchain Amazon Managed Blockchain is fully managed blockchain service that makes it easy to create and manage scalable blockchain networks using popular open source frameworks such as Hyperledger Fabric and Ethereum By combining secure storage in the cloud with a cryptographically verifiabl e event log it is possible to build a scalable application that can store documents in a secure manner and be able to verify the contents and access patterns to each individual document during its lifetime Leverage AWS for HIPAA Compliance Health Insurance Portability and Accountability Act (HIPAA) compliance concerns the storage and processing of 
protected health information (PHI), such as insurance and billing information, diagnosis data, lab results, and so on. HIPAA applies to covered entities (e.g., health care providers, health plans, and health care clearinghouses) as well as business associates (e.g., entities that provide services to a covered entity involving the processing, storage, and transmission of PHI). AWS offers a standardized Business Associate Addendum (BAA) for business associates. Customers who execute a BAA may process, store, and transmit PHI using HIPAA-eligible services defined in the AWS BAA, such as Amazon S3, Amazon QuickSight, AWS Glue, and Amazon DynamoDB. For a complete list of services, see HIPAA Eligible Services Reference.

HMIS Data Privacy and HIPAA
Each CoC is responsible for selecting an HMIS software solution that complies with the Department of Housing and Urban Development's (HUD) standards. HMIS has a number of privacy and security standards that were developed to protect the confidentiality of personal information while at the same time allowing limited data disclosure in a responsible manner. These standards were developed after careful review of the HIPAA standards regarding PHI.

The Reference Architecture for HIPAA on AWS deploys a model environment that can help organizations with workloads that fall within the scope of HIPAA. The reference architecture addresses certain technical requirements in the Privacy, Security, and Breach Notification Rules under the HIPAA Administrative Simplification Regulations (45 CFR Parts 160 and 164).

AWS has also produced a quick start reference deployment for Standardized Architecture for NIST-based Assurance Frameworks on the AWS Cloud. This quick start focuses on the following NIST-based assurance frameworks:
• National Institute of Standards and Technology (NIST) SP 800-53 (Revision 4)
• NIST SP 800-122
• NIST SP 800-171
• The OMB Trusted Internet Connection (TIC) Initiative – FedRAMP Overlay (pilot)
• The DoD Cloud Computing Security Requirements Guide (SRG)

This quick start includes AWS CloudFormation templates, which can be integrated with AWS Service Catalog to automate building a standardized reference architecture that aligns with the requirements within the controls listed above. It also includes a security controls matrix, which maps the security controls and requirements to architecture decisions, features, and configuration of the baseline to enhance your organization's ability to understand and assess the system security configuration.

Conclusion
AWS technology can help communities drive better outcomes for citizens using the technology and services included in this paper. However, we understand that homelessness is fundamentally a human problem: all of these initiatives must have strong backing by forward-thinking officials and program managers to make an impact in the lives of those at risk of or experiencing homelessness.

Contributors
The following individuals and organizations contributed to this document:
• Alistair McLean, Sr. Solutions Architect, AWS
• Jessie Metcalf, Program Manager, AWS
• Casey Burns, Health and Human Services Leader, AWS

Further Reading
For additional information, see the following:
• HMIS Data and Technical Standards
• Reference Architecture for HIPAA on AWS
• Reference Architecture for HIPAA on the AWS Cloud: Quick Start Reference Deployment
• Standardized Architecture for NIST-based Assurance Frameworks on the AWS Cloud: Quick Start Reference Deployment
• AWS Machine Learning Blog: Create a Question and Answer Bot with Amazon Lex and Amazon Alexa
• AWS Government, Education, and Nonprofits Blog

Document Revisions
Date: March 2019. Description: Initial document release.

Archived,General,consultant,Best Practices Hosting_Static_Websites_on_AWS_Prescriptive_Guidance,"This paper has been archived. For the latest technical content, refer to:
https://docs.aws.amazon.com/whitepapers/latest/build-static-websites-aws/build-static-websites-aws.html

Hosting Static Websites on AWS: Prescriptive Guidance
First published May 2017; updated May 21, 2021

This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Abstract vi
Introduction 1
Static Website 1
Dynamic Website 2
Core Architecture 2
Moving to an AWS Architecture 4
Use Amazon S3 Website Hosting to Host Without a Single Web Server 6
Scalability and Availability 7
Encrypt Data in Transit 7
Configuration Basics 8
Evolving the Architecture with Amazon CloudFront 13
Factors Contributing to Page Load Latency 13
Speeding Up Your Amazon S3 Based Website Using Amazon CloudFront 14
Using HTTPS with Amazon CloudFront 16
Amazon CloudFront Reports 17
Estimating and Tracking AWS Spend 17
Estimating AWS Spend 17
Tracking AWS Spend 18
Integration with Your Continuous Deployment Process 18
Access Logs 19
Analyzing Logs 19
Archiving and Purging Logs 20
Securing Administration Access to Your Website Resources 21
Managing Administrator Privileges 22
Auditing API Calls Made in Your AWS Account 23
Controlling How Long Amazon S3 Content is Cached by Amazon CloudFront 24
Set Maximum TTL Value 24
Implement Content Versioning 25
Specify Cache Control Headers 26
Use CloudFront Invalidation Requests 1
Conclusion 1
Contributors 2
Further Reading 2
Document Revisions 2
Notes 2

Abstract
This whitepaper covers comprehensive architectural guidance for developing, deploying, and managing static websites on Amazon Web Services (AWS) while keeping operational simplicity and business requirements in mind. We also recommend an approach that provides (1) insignificant cost of operation, (2) little or no management required, and (3) a highly scalable, resilient, and reliable website.

This whitepaper first reviews how static websites are hosted in traditional hosting environments. Then we explore a simpler and more cost-efficient approach using Amazon Simple Storage Service (Amazon S3). Finally, we show how you can enhance the AWS architecture to encrypt data in transit, layer on functionality, and improve quality of service by using Amazon CloudFront.

Hosting Static Websites on AWS – Amazon Web Services – Page 1

Introduction
As enterprises become more digital in their operations, their websites span a wide spectrum, from mission-critical e-commerce sites to departmental apps, and from business-to-business (B2B) portals to marketing sites. Factors such as business value, mission criticality, service level agreements (SLAs), quality of service, and information security drive
the choice of architecture and technology stack.

The simplest form of website architecture is the static website, where users are served static content (HTML, images, video, JavaScript, style sheets, and so on). Some examples include brand microsites, marketing websites, and intranet information pages. Static websites are straightforward in one sense, but they can still have demanding requirements in terms of scalability, availability, and service level guarantees. For example, a marketing site for a consumer brand may need to be prepared for an unpredictable onslaught of visitors when a new product is launched.

Static Website
A static website delivers content in the same format in which it is stored. No server-side code execution is required. For example, if a static website consists of HTML documents displaying images, it delivers the HTML and images as-is to the browser, without altering the contents of the files.

Static websites can be delivered to web browsers on desktops, tablets, or mobile devices. They usually consist of a mix of HTML documents, images, videos, CSS style sheets, and JavaScript files. Static doesn't have to mean boring: static sites can provide client-side interactivity as well. Using HTML5 and client-side JavaScript technologies such as jQuery, AngularJS, React, and Backbone, you can deliver rich user experiences that are engaging and interactive.

Some examples of static sites include:
• Marketing websites
• Product landing pages
• Microsites that display the same content to all users
• Team homepages
• A website that lists available assets (e.g., image files, video files, and press releases) and allows the user to download the files as-is
• Proofs of concept used in the early stages of web development to test user experience flows and gather feedback

Static websites load quickly, since content is delivered as-is and can be cached by a content delivery network (CDN), and the web server doesn't need to perform any application logic or database queries. They're also relatively inexpensive to develop and host. However, maintaining large static websites can be cumbersome without the aid of automated tools, and static websites can't deliver personalized information. Static websites are most suitable when the content is infrequently updated. After the content evolves in complexity or needs to be frequently updated, personalized, or dynamically generated, it's best to consider a dynamic website architecture.

Dynamic Website
Dynamic websites can display dynamic or personalized content. They usually interact with data sources and web services, and require code development expertise to create and maintain. For example, a sports news site can display information based on the visitor's preferences, and use server-side code to display updated sport scores. Other examples of dynamic sites are e-commerce shopping sites, news portals, social networking sites, finance sites, and most other websites that display ever-changing information.

Core Architecture
In a traditional (non-AWS) architecture, web servers serve up static content. Typically, content is managed using a content management system (CMS), and multiple static sites are hosted on the same infrastructure. The content is stored on local disks or on a file share on network-accessible storage. The following example shows a sample file system structure:

├─ css/
│  ├─ main.css
│  └─ navigation.css
├─ images/
│  ├─ banner.jpg
│  └─ logo.jpg
├─ index.html
├─ scripts/
│  ├─ script1.js
│  └─ script2.js
├─ section1.html
└─ section2.html

A network firewall protects against unauthorized access. It's common to deploy multiple web servers behind a load balancer for
high availability (HA) and scalability. Since pages are static, the web servers don't need to maintain any state or session information, and the load balancer doesn't need to implement session affinity ("sticky sessions"). The following diagram shows a traditional (non-AWS) hosting environment.

Figure 1: Basic architecture of a traditional hosting environment

Moving to an AWS Architecture
To translate a traditional hosting environment to an AWS architecture, you could use a "lift-and-shift" approach, where you substitute AWS services for the components of the traditional environment. In this approach, you can substitute the following AWS services:
• Amazon Elastic Compute Cloud (Amazon EC2) to run Linux- or Windows-based servers
• Elastic Load Balancing (ELB) to load balance and distribute the web traffic
• Amazon Elastic Block Store (Amazon EBS) or Amazon Elastic File System (Amazon EFS) to store static content
• Amazon Virtual Private Cloud (Amazon VPC) to deploy Amazon EC2 instances. Amazon VPC is your isolated and private virtual network in the AWS Cloud, and gives you full control over the network topology, firewall configuration, and routing rules
• Web servers can be spread across multiple Availability Zones for high availability, even if an entire data center were to be down
• AWS Auto Scaling automatically adds servers during high-traffic periods and scales back when traffic decreases

The following diagram shows the basic architecture of a "lift-and-shift" approach.

Figure 2: AWS architecture for a "lift and shift"

Using this AWS architecture, you gain the security, scalability, cost, and agility benefits of running in AWS. This architecture benefits from AWS world-class infrastructure and security operations. By using Auto Scaling, the website is ready for traffic spikes, so you are prepared for product launches and viral websites. With AWS, you only pay for what you use, and there's no need to over-provision for peak capacity. In addition, you gain increased agility because AWS services are available on demand. (Compare this to the traditional process, in which provisioning servers, storage, or networking can take weeks.) You don't have to manage infrastructure, which frees up time and resources to create business-differentiating value.

AWS challenges traditional IT assumptions and enables new "cloud-native" architectures. You can architect a modern static website without needing a single web server.

Use Amazon S3 Website Hosting to Host Without a Single Web Server
Amazon Simple Storage Service (Amazon S3) can host static websites without a need for a web server. The website is highly performant and scalable, at a fraction of the cost of a traditional web server. Amazon S3 is storage for the cloud, providing you with secure, durable, highly scalable object storage. A simple web services interface allows you to store and retrieve any amount of data from anywhere on the web.

You start by creating an Amazon S3 bucket, enabling the Amazon S3 website hosting feature, and configuring access permissions for the bucket. After you upload files, Amazon S3 takes care of serving your content to your visitors. Amazon S3 provides HTTP web-serving capabilities, and the content can be viewed by any browser. You must also configure Amazon Route 53, a managed Domain Name System (DNS) service, to point your domain to your Amazon S3 bucket. Figure 3 illustrates this architecture, where
http://example.com is the domain.

Figure 3: Amazon S3 website hosting

In this solution, there are no Windows or Linux servers to manage, and no need to provision machines, install operating systems, or fine-tune web server configurations. There's also no need to manage storage infrastructure (e.g., SAN, NAS), because Amazon S3 provides practically limitless cloud-based storage. Fewer moving parts means fewer troubleshooting headaches.

Scalability and Availability
Amazon S3 is inherently scalable. For popular websites, Amazon S3 scales seamlessly to serve thousands of HTTP or HTTPS requests per second without any changes to the architecture. In addition, by hosting with Amazon S3, the website is inherently highly available. Amazon S3 is designed for 99.999999999% durability, and carries a service level agreement (SLA) of 99.9% availability. Amazon S3 gives you access to the same highly scalable, reliable, fast, and inexpensive infrastructure that Amazon uses to run its own global network of websites. As soon as you upload files to Amazon S3, Amazon S3 automatically replicates your content across multiple data centers. Even if an entire AWS data center were to be impaired, your static website would still be running and available to your end users.

Compare this solution with traditional non-AWS costs for implementing "active-active" hosting for important projects. Active-active, or deploying web servers in two distinct data centers, is prohibitive in terms of server costs and engineering time. As a result, traditional websites are usually hosted in a single data center, because most projects can't justify the cost of "active-active" hosting.

Encrypt Data in Transit
We recommend you use HTTPS to serve static websites securely. HTTPS is the secure version of the HTTP protocol that browsers use when communicating with websites. In HTTPS, the communication protocol is encrypted using Transport Layer Security (TLS). TLS protocols are cryptographic protocols designed to provide privacy and data integrity between two or more communicating computer applications.

HTTPS protects against man-in-the-middle (MITM) attacks, which intercept and maliciously modify traffic. Historically, HTTPS was used for sites that handled financial information, such as banking and e-commerce sites. However, HTTPS is now becoming more of the norm rather than the exception. For example, the percentage of web pages loaded by Mozilla Firefox using HTTPS has increased from 49% to 75% in the past two years.

AWS Certificate Manager (ACM) is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer (SSL)/TLS certificates for use with AWS services and your internal connected resources. See Using HTTPS with Amazon CloudFront in this document for more implementation information.

Configuration Basics
Configuration involves these steps:
1. Open the AWS Management Console.
2. On the Amazon S3 console, create an Amazon S3 bucket.
   a. Choose the AWS Region in which the files will be geographically stored. Select a Region based on its proximity to your visitors, proximity to your corporate data centers, and/or your regulatory or compliance requirements (e.g., some countries have restrictive data residency regulations).
   b. Choose a bucket name that complies with DNS naming conventions. If you plan to use your own custom domain/subdomain, such as example.com or www.example.com, your bucket name must be the same as your domain/subdomain. For example, a website available at http://www.example.com must be in a bucket named www.example.com. Note: Each AWS account can have a maximum of 1,000 buckets.
3. Toggle
on the static website hosting feature for the bucket. This generates an Amazon S3 website endpoint. You can access your Amazon S3-hosted website at a URL of the following form:

http://<bucket-name>.s3-website-<AWS-Region>.amazonaws.com

Domain Names
For small, non-public websites, the Amazon S3 website endpoint is probably adequate. You can also use internal DNS to point to this endpoint. For a public-facing website, we recommend using a custom domain name instead of the provided Amazon S3 website endpoint. This way, users see user-friendly URLs in their browsers. If you plan to use a custom domain name, your bucket name must match the domain name. For custom root domains (such as example.com), only Amazon Route 53 can configure a DNS record to point to the Amazon S3-hosted website. For non-root subdomains (such as www.example.com), any DNS service (including Amazon Route 53) can create a CNAME entry to the subdomain. See the Amazon Simple Storage Service Developer Guide for more details on how to associate domain names with your website.

Figure 4: Configuring static website hosting using the Amazon S3 console

The Amazon S3 website hosting configuration screen in the Amazon S3 console presents additional options to configure. Some of the key options are as follows:
• You can configure a default page that users see if they visit the domain name directly (without specifying a specific page). You can also specify a custom 404 Page Not Found error page if the user stumbles onto a nonexistent page.
• You can enable logging to give you access to the raw web access logs. (By default, logging is disabled.)
• You can add tags to your Amazon S3 bucket. These tags help when you want to analyze your AWS spend by project.

Amazon S3 Object Names
In Amazon S3, a bucket is a flat container of objects. It doesn't provide a hierarchical organization the way the file system on your computer does. However, there is a straightforward mapping between a file system's folders/files and Amazon S3 objects. The example that follows shows how folders/files are mapped to Amazon S3 objects. Most third-party tools, as well as the AWS Management Console and AWS Command Line Interface (AWS CLI), handle this mapping transparently for you. For consistency, we recommend that you use lowercase characters for file and folder names.

Uploading Content
On AWS, you can design your static website using your website authoring tool of choice. Most web design and authoring tools can save the static content on your local hard drive. Then upload the HTML, images, JavaScript files, CSS files, and other static assets into your Amazon S3 bucket. To deploy, copy any new or modified files to the Amazon S3 bucket. You can use the AWS API, SDKs, or CLI to script this step for a fully automated deployment.

You can upload files using the AWS Management Console. You can also use AWS partner offerings such as CloudBerry, S3 Bucket Explorer, S3 Fox, and other visual management tools. The easiest way, however, is to use the AWS CLI. The s3 sync command recursively uploads files and synchronizes your Amazon S3 bucket with your local folder.

Making Your Content Publicly Accessible
For your visitors to access content at the Amazon S3 website endpoint, the Amazon S3 objects must have the appropriate permissions. Amazon S3 enforces a security-by-default policy: new objects in a new bucket are private by default. For example, an Access Denied error appears when trying to view a newly
uploaded file using your web browser. To fix this, configure the content as publicly accessible. It's possible to set object-level permissions for every individual object, but that quickly becomes tedious. Instead, define a bucket-wide Amazon S3 policy. The following sample Amazon S3 bucket policy enables everyone to view all objects in a bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::S3_BUCKET_NAME_GOES_HERE/*"]
    }
  ]
}

This policy defines who can view the contents of your S3 bucket. See Securing Administration Access to Your Website Resources for the AWS Identity and Access Management (IAM) policies to manage permissions for your team members. Together, S3 bucket policies and IAM policies give you fine-grained control over who can manage and view your website.

Requesting a Certificate through ACM
You can create and manage public, private, and imported certificates with ACM. This section focuses on creating and using public certificates with ACM-integrated services, specifically Amazon Route 53 and Amazon CloudFront.

To request a certificate:
1. Add the fully qualified domain names (e.g., example.com) you want to secure with a certificate.
2. Select a validation method. ACM can validate ownership by using DNS or by sending email to the contact addresses of the domain owner.
3. Review the domain names and validation method.
4. Validate. If you used the DNS validation method, you must create a CNAME record in the DNS configuration for each of the domains. If the domain is not currently managed by Amazon
Route 53, you can choose to export the DNS configuration file and input that information into your DNS web service. If the domain is managed by Amazon Route 53, you can click "Create record in Route 53", and ACM can update your DNS configuration for you. After validation is complete, return to the ACM console. Your certificate status changes from Pending Validation to Issued.

Low Costs Encourage Experimentation
Amazon S3 costs are storage plus bandwidth. The actual costs depend upon your asset file sizes and your site's popularity (the number of visitors making browser requests). There's no minimum charge and no setup costs. When you use Amazon S3, you pay for what you use: you're only charged for the actual Amazon S3 storage required to store the site assets. These assets include HTML files, images, JavaScript files, CSS files, videos, audio files, and any other downloadable files. Your bandwidth charges depend upon the actual site traffic; more specifically, the number of bytes that are delivered to the website visitor in the HTTP responses. Small websites with few visitors have minimal hosting costs. Popular websites that serve up large videos and images incur higher bandwidth charges. The Estimating and Tracking AWS Spend section of this document describes how you can estimate and track your costs.

With Amazon S3, experimenting with new ideas is easy and cheap. If a website idea fails, the costs are minimal. For microsites, publish many independent microsites at once, run A/B tests, and keep only the successes.

Evolving the Architecture with Amazon CloudFront
The Amazon CloudFront content delivery web service integrates with other AWS products to give you an easy way to distribute content to users on your website with low latency, high data transfer speeds, and no minimum usage commitments.

Factors Contributing to Page Load Latency
To explore the factors that contribute to latency, we use the example of a user in Singapore visiting a web page hosted from an Amazon S3 bucket in the US West (Oregon) Region in the United States. From the moment the user visits a web page to the moment it shows up in the browser, several factors contribute to latency:
• FACTOR (1): Time it takes for the browser (Singapore) to request the web page from Amazon S3 (US West [Oregon] Region)
• FACTOR (2): Time it takes for Amazon S3 to retrieve the page contents and serve up the page
• FACTOR (3): Time it takes for the page contents (US West [Oregon] Region) to be delivered across the Internet to the browser (Singapore)
• FACTOR (4): Time it takes for the browser to parse and display the web page

This latency is illustrated in the following figure.

Figure 5: Factors affecting page load latency

AWS addresses FACTOR (2) by optimizing Amazon S3 to serve up content as quickly as possible. You can improve FACTOR (4) by optimizing the actual page content (e.g., minifying CSS and JavaScript, and using efficient image and video formats). However, page loading studies consistently show that most latency is due to FACTOR (1) and FACTOR (3). Most of the delay in accessing pages over the internet is due to the round-trip delay associated with establishing TCP connections (the infamous three-way TCP handshake) and the time it takes for TCP packets to be delivered across long Internet distances. In short, serve content as close to your users as possible.

In our example, users in the USA will experience relatively fast page load times, whereas users in Singapore will experience slower page loads. Ideally, for the users in Singapore, you would want to serve up content as close to Singapore as possible.

Speeding Up Your Amazon S3 Based Website Using Amazon CloudFront
Amazon CloudFront is a CDN that uses a global network of edge locations for content delivery. Amazon CloudFront
also provides reports to help you understand how users are using your website As a CDN Amazon CloudFront can distribute content with low latency and high data transfer rates There are multiple CloudFront edge locations all around the world Therefore no matter where a visitor lives in the world there is an Amazon CloudFront edge location that is relatively close (from an Internet laten cy perspective) The Amazon CloudFront edge locations cache content from an origin server and deliver that cached content to the user When creating an Amazon CloudFront distribution specify your Amazon S3 bucket as the origin server The Amazon CloudFront distribution itself has a DNS You can refer to it using a CNAME if you have a custom domain name To point the A record of a root domain to an Amazon CloudFront distribution you can use Amazon Route 53 alias records as illustrated in the following diagram This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 15 Figure 6: Using Amazon Route 53 alias records with an Amazon CloudFront distribution Amazon CloudFront also keeps persistent connections with your origin servers so that those files can be fetched from the origin servers as quickly as possible Finally Amazon CloudFront uses additional optimizations (eg wider TCP initial congestion window) to provide higher perfor mance while delivering your content to viewers When an end user requests a web page using that domain name CloudFront determines the best edge location to serve the content If an edge location doesn’t yet have a cached copy of the requested content CloudFront pulls a copy from the Amazon S3 origin server and holds it at the edge location to fulfill future requests Subsequent users requesting the same content from that edge location experience faster page loads because that content is already cached The following diagram shows 
the flow in detail.
Figure 7: How Amazon CloudFront caches content
Using HTTPS with Amazon CloudFront
You can configure Amazon CloudFront to require that viewers use HTTPS to request your objects, so that connections are encrypted when Amazon CloudFront communicates with viewers. You can also configure Amazon CloudFront to use HTTPS to get objects from your origin, so that connections are encrypted when Amazon CloudFront communicates with your origin. If you want to require HTTPS for communication between viewers and Amazon CloudFront, you must change the value of the Viewer Protocol Policy to Redirect HTTP to HTTPS or HTTPS Only. We recommend using Redirect HTTP to HTTPS. Viewers can use both protocols; HTTP GET and HEAD requests are automatically redirected to HTTPS requests. Amazon CloudFront returns HTTP status code 301 (Moved Permanently) along with the new HTTPS URL, and the viewer then resubmits the request to Amazon CloudFront using the HTTPS URL.
Amazon CloudFront Reports
Amazon CloudFront includes a set of reports that provide insight into and answers to the following questions:
• What is the overall health of my website?
• How many visitors are viewing my website?
• Which browsers, devices, and operating systems are they using?
• Which countries are they coming from?
• Which websites are the top referrers to my site?
• What assets are the most popular ones on my site?
• How often is CloudFront caching taking place?
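The viewer protocol policy described earlier in this section is part of a distribution's default cache behavior. As a rough illustration (not a complete distribution configuration, which requires many more fields; the origin ID here is a hypothetical placeholder), the relevant fragment of a CloudFront distribution config, using field names from the CloudFront API, might look like:

```json
{
  "DefaultCacheBehavior": {
    "TargetOriginId": "my-s3-origin",
    "ViewerProtocolPolicy": "redirect-to-https",
    "MinTTL": 0
  }
}
```

With ViewerProtocolPolicy set to redirect-to-https, HTTP requests receive the 301 redirect described above; the alternative value https-only rejects plain-HTTP requests outright.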
Amazon CloudFront reports can be used alongside other online analytics tools, and we encourage the use of multiple reporting tools. Note that some analytics tools may require you to embed client-side JavaScript in your HTML pages; Amazon CloudFront reporting does not require any changes to your web pages. See the Amazon CloudFront Developer Guide for more information on reports.
Estimating and Tracking AWS Spend
With AWS, there is no upper limit to the amount of Amazon S3 storage or network bandwidth you can consume. You pay as you go and only pay for actual usage. Because you're not using web servers in this architecture, you have no licensing costs or concern for server scalability or utilization.
Estimating AWS Spend
To estimate your monthly costs, you can use the AWS Simple Monthly Calculator. Pricing sheets for Amazon Route 53, Amazon S3, and Amazon CloudFront are available online.⁷ AWS pricing is Region-specific. See the following links for the most recent pricing information: Amazon Route 53, Amazon CloudFront, Amazon S3.
Tracking AWS Spend
AWS Cost Explorer can help you track cost trends by service type. It's integrated into the AWS Billing console and runs in your browser. The Monthly Cost by Service chart gives you a detailed breakdown by service, and the Daily Cost report helps you track your spending as it happens. If you configured tags for your Amazon S3 bucket, you can filter your reports against specific tags for cost-allocation purposes. See Using the Default Cost Explorer Reports.
Integration with Your Continuous Deployment Process
Your website content should be managed using version control software (such as Git, Subversion, or Microsoft Team Foundation Server) to make it possible to revert to older versions of your files.⁸ AWS offers a managed source control service
called AWS CodeCommit that makes it easy to host secure, private Git repositories. Regardless of which version control system your team uses, consider tying it to a continuous build/integration tool so that your website automatically updates whenever the content changes. For example, if your team is using a Git-based repository for version control, a Git post-commit hook can notify your continuous integration tool (e.g., Jenkins) of any content updates. At that point, your continuous integration tool can perform the actual deployment to synchronize the content with Amazon S3 (using either the AWS CLI or the Amazon S3 API) and notify the user of the deployment status.
Figure 8: Example continuous deployment process
If you don't want to use version control, be sure to periodically download your website and back up the snapshot. The AWS CLI lets you download your entire website with a single command:
aws s3 sync s3://bucket /my_local_backup_directory
Access Logs
Access logs can help you troubleshoot or analyze traffic coming to your site. Both Amazon CloudFront and Amazon S3 give you the option of turning on access logs. There's no extra charge to enable logging, other than the storage of the actual logs. The access logs are delivered on a best-effort basis; they are usually delivered within a few hours after the events are recorded.
Analyzing Logs
Amazon S3 access logs are deposited in your Amazon S3 bucket as plain text files. Each record in the log files provides details about a single Amazon S3 access request, such as the requester, bucket name, request time, request action, response status, and error code, if any. You can open individual log files in a text editor or use a third-party tool that can interpret the Amazon S3 access log format. CloudFront logs are deposited in your
Amazon S3 bucket as GZIP-compressed text files. CloudFront logs follow the standard W3C extended log file format and can be analyzed using any log analyzer. You can also build out a custom analytics solution using AWS Lambda and Amazon Elasticsearch Service (Amazon ES). AWS Lambda functions can be hooked to an Amazon S3 bucket to detect when new log files are available for processing. The AWS Lambda function code can process the log files and send the data to an Amazon ES cluster. Users can then analyze the logs by querying Amazon ES directly or using the Kibana visual dashboard. Both AWS Lambda and Amazon ES are managed services, and there are no servers to manage.
Figure 9: Using AWS Lambda to send logs from Amazon S3 to Amazon Elasticsearch Service
Archiving and Purging Logs
Amazon S3 buckets don't have a storage cap, and you're free to retain logs for as long as you want. However, an AWS best practice is to archive older files into Amazon S3 Glacier. Amazon S3 Glacier is suitable for long-term storage of infrequently accessed files. Like Amazon S3, Amazon S3 Glacier is also designed for 99.999999999% data durability, and you have practically unlimited storage. The difference is in retrieval time: Amazon S3 supports immediate file retrieval, whereas with Amazon S3 Glacier, after you initiate a file retrieval request, there is a delay before you can start downloading the files. Amazon S3 Glacier storage costs are lower than those of Amazon S3, disks, or tape drives. See the Amazon S3 Glacier page for pricing. The easiest way to archive data into Amazon S3 Glacier is to use Amazon S3 lifecycle policies. The lifecycle policies can be applied to an entire Amazon S3 bucket or to specific objects within the bucket (e.g., only the log files). A minute of configuration in the Amazon S3 console can reduce your storage costs significantly in
the long run. Here's an example of setting up data tiering using lifecycle policies:
• Lifecycle policy #1: After X days, automatically move logs from Amazon S3 into Amazon S3 Glacier
• Lifecycle policy #2: After Y days, automatically delete logs from Amazon S3 Glacier
Data tiering is illustrated in the following figure.
Figure 10: Data tiering using Amazon S3 lifecycle policies
Securing Administration Access to Your Website Resources
Under the AWS shared responsibility model, the responsibility for a secure website is shared between AWS and the customer (you). AWS provides a global secure infrastructure and foundation compute, storage, networking, and database services, as well as higher-level services. AWS also provides a range of security services and features that you can use to secure your assets. As an AWS customer, you're responsible for protecting the confidentiality, integrity, and availability of your data in the cloud, and for meeting your specific business requirements for information protection. We strongly recommend working closely with your security and governance teams to implement the recommendations in this whitepaper. A benefit of using Amazon S3 and Amazon CloudFront as a serverless architecture is that the security model is also simplified: you have no operating system to harden, servers to patch, or software vulnerabilities to generate concern. Also, S3 offers security options such as server-side data encryption and access control lists. This results in a significantly reduced surface area for potential attacks.
Managing Administrator Privileges
Enforcing the principle
of least privilege is a critical part of a security and governance strategy. In most organizations, the team in charge of DNS configurations is separate from the team that manages web content. You should grant users appropriate levels of permissions to access only the resources they need. In AWS, you can use IAM to lock down permissions. You can create multiple IAM users under your AWS account, each with their own login and password.⁹ An IAM user can be a person, service, or application that requires access to your AWS resources (in this case, Amazon S3 buckets, Amazon CloudFront distributions, and Amazon Route 53 hosted zones) through the AWS Management Console, command line tools, or APIs. You can then organize users into IAM groups based on functional roles. When an IAM user is placed in an IAM group, it inherits the group's permissions. The fine-grained policies of IAM allow you to grant administrators the minimal privileges needed to accomplish their tasks. The permissions can be scoped to specific Amazon S3 buckets and Amazon Route 53 hosted zones. The following is an example separation of duties:
IAM configuration can be managed by:
• Super_Admins
Amazon Route 53 configuration can be managed by:
• Super_Admins
• Network_Admins
CloudFront configuration can be managed by:
• Super_Admins
• Network_Admins
• Website_Admin
Amazon S3 configuration can be managed by:
• Super_Admins
• Website_Admin
Amazon S3 content can be updated by:
• Super_Admins
• Website_Admin
• Website_Content_Manager
An IAM user can belong to more than one IAM group. For example, if someone must manage both Amazon Route 53 and Amazon S3, that user can belong to both the Network_Admins and the Website_Admins groups. The best practice is to require all IAM users to rotate their IAM passwords periodically. Multi-factor
authentication (MFA) should be enabled for any IAM user account with administrator privileges.
Auditing API Calls Made in Your AWS Account
You can use AWS CloudTrail to see an audit trail of API activity in your AWS account. Turn it on for all AWS Regions, and the audit logs will be deposited in an Amazon S3 bucket. You can use the AWS Management Console to search the API activity history, or you can use a third-party log analyzer to analyze and visualize the CloudTrail logs. You can also build a custom analyzer: start by configuring CloudTrail to send the data to Amazon CloudWatch Logs. CloudWatch Logs allows you to create automated rules that notify you of suspicious API activity. CloudWatch Logs also has seamless integration with Amazon ES, and you can configure the data to be automatically streamed over to a managed Amazon ES cluster. Once the data is in Amazon ES, users can query that data directly or visualize the analytics using a Kibana dashboard.
Controlling How Long Amazon S3 Content Is Cached by Amazon CloudFront
It is important to control how long your Amazon S3 content is cached at the CloudFront edge locations. This helps make sure that website updates appear correctly. If you're ever confused by a situation in which you've updated your website but you are still seeing stale content when visiting your CloudFront-powered website, one likely reason is that CloudFront is still serving up cached content. You can control CloudFront caching behavior with a combination of Cache-Control HTTP headers, CloudFront Minimum Time-to-Live (TTL) and Maximum TTL settings, content versioning, and CloudFront invalidation requests. Using these correctly will help you manage website updates. CloudFront will typically serve cached content from an edge location
until the content expires. After it expires, the next time that content is requested by an end user, CloudFront goes back to the Amazon S3 origin server to fetch the content and then caches it. CloudFront edge locations automatically expire content after Maximum TTL seconds elapse (by default, this is 24 hours). However, it could be sooner, because CloudFront reserves the flexibility to expire content before the Maximum TTL is reached. By default, the Minimum TTL is set to 0 (zero) seconds, but this value is configurable. Therefore, CloudFront may expire content anytime between the Minimum TTL (default 0 seconds) and the Maximum TTL (default 24 hours). For example, if Minimum TTL=60s and Maximum TTL=600s, content will be cached for at least 60 seconds and at most 600 seconds. Now say you deploy updates to your marketing website with the latest and greatest product images. After uploading your new images to Amazon S3, you immediately browse to your website's domain name, and you still see the old images!
It is likely that one, and possibly more, CloudFront edge locations are still holding cached versions of the older images and serving those cached versions to your website visitors. If you're the patient type, you can wait for CloudFront to expire the content, but it could take up to Maximum TTL seconds for that to happen. There are several approaches to address this issue, each with its pros and cons.
Set the Maximum TTL Value
Set the Maximum TTL to a relatively low value. The tradeoff is that cached content expires faster because of the low Maximum TTL value. This results in more frequent requests to your Amazon S3 bucket, because the CloudFront caches need to be repopulated more often. In addition, the Maximum TTL setting applies across the board to all CloudFront files, and for some websites you might want to control cache expiration behaviors based on file types.
Implement Content Versioning
Every time you update website content, embed a version identifier in the file names. It can be a timestamp, a sequential number, or any other scheme that allows you to distinguish between two versions of the same file. For example, instead of banner.jpg, call it banner_20170401_v1.jpg. When you update the image, name the new version banner_20170612_v1.jpg, and update all files that need to link to the new image. In the following example, the banner and logo images are updated and receive new file names. However, because those images are referenced by the HTML files, the HTML markup must also be updated to reference the new image file names. Note that the HTML file names shouldn't have version identifiers, in order to provide stable URLs for end users.
Content versioning has a clear benefit: it sidesteps CloudFront expiration behaviors altogether. Since new file names are involved, CloudFront immediately fetches the new files from Amazon S3 (and caches them afterwards). Non-HTML website changes are reflected immediately. Additionally, you can roll back and roll forward between different versions of your website. The main challenge is that content update processes must be "version aware": file names must be versioned, and files with references to changed files must also be updated. For example, if an image is updated, the following items must be updated as well:
• The image file name
• Content in any HTML, CSS, and JavaScript files referencing the older image file name
• The file names of any referencing files (with the exception of HTML files)
A few static site generator tools can automatically rename files with version identifiers, but most tools aren't version aware. Manually managing version identifiers can be cumbersome and error-prone. If your website would benefit from content versioning, it may be worth investing in a few automation scripts to streamline your content update process.
Specify Cache-Control Headers
You can manage CloudFront expiration behavior by specifying Cache-Control headers for your website content. If you keep the Minimum TTL at the default 0 seconds, CloudFront honors any Cache-Control: max-age HTTP header that is individually set for your content. If an image is configured with a Cache-Control: max-age=60 header, CloudFront expires it at the 60-second mark. This gives you more precise control over content expiration for individual files. You can configure Amazon S3 to return a Cache-Control HTTP header with a max-age value when S3 serves up the content.
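As a deployment sketch of this idea, the script below applies a short max-age to HTML files and a long max-age to versioned assets. The bucket name and local path are hypothetical placeholders, and the aws commands are only echoed (a dry run) rather than executed:

```shell
#!/bin/sh
# Dry-run sketch of per-file-type Cache-Control uploads.
# BUCKET and SITE_DIR are assumed placeholders; substitute your own values.
BUCKET="s3://example-bucket"
SITE_DIR="./site"

# HTML files keep stable names, so give them a short max-age (60 seconds)
echo "aws s3 sync $SITE_DIR $BUCKET --exclude '*' --include '*.html' --cache-control max-age=60"

# Versioned assets (images, CSS, JavaScript) can be cached for a long time
echo "aws s3 sync $SITE_DIR $BUCKET --exclude '*.html' --cache-control max-age=31536000"
```

Because versioned assets receive new file names on every update, the long max-age is safe; only the HTML needs to expire quickly.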
This setting is applied on a file-by-file basis, and we recommend using different values depending on the file type (HTML, CSS, JavaScript, images, etc.). Since HTML files won't have version identifiers in their file names, we recommend using smaller max-age values for HTML files so that CloudFront expires the HTML files sooner than other content. You can set this by editing the Amazon S3 object metadata using the AWS Management Console.
Figure 11: Setting Cache-Control values in the console
In practice, you should automate this as part of your Amazon S3 upload process. With the AWS CLI, you can alter your deployment scripts like the following example:
aws s3 sync /path s3://yourbucket/ --delete --cache-control max-age=60
Use CloudFront Invalidation Requests
CloudFront invalidation requests are a way to force CloudFront to expire content. Invalidation requests aren't immediate: it takes several minutes from the time you submit one to the time that CloudFront actually expires the content. For occasional requests, you can submit them using the AWS Management Console; otherwise, use the AWS CLI or AWS APIs to script the invalidation. In addition, CloudFront lets you specify which content should be invalidated: you can choose to invalidate your entire Amazon S3 bucket, individual files, or just those matching a wildcard pattern. For example, to invalidate only the images directory, issue an invalidation request for:
/images/*
In summary, the best practice is to understand and use the four approaches together. If possible, implement content versioning. It allows you to immediately review changes and gives you the most precise control over the CloudFront and Amazon S3 experience. Set the Minimum TTL to 0 seconds and the Maximum TTL to a relatively low value. Also use Cache-Control
headers for individual pieces of content. If your website is infrequently updated, set a large value for Cache-Control: max-age= and then issue CloudFront invalidation requests every time your site is updated. If the website is updated more frequently, use a relatively small value for Cache-Control: max-age= and then issue CloudFront invalidation requests only if the Cache-Control: max-age= setting exceeds several minutes.
Conclusion
This whitepaper began with a look at traditional (non-AWS) architectures for static websites. We then showed you an AWS Cloud-native architecture based on Amazon S3, Amazon CloudFront, and Amazon Route 53. The AWS architecture is highly available, scalable, and secure, and it provides a responsive user experience at very low cost. By enabling and analyzing the available logs, you can understand your visitors and how well the website is performing. Fewer moving parts means less maintenance is required. In addition, the architecture costs only a few dollars a month to run.
Contributors
Contributors to this document include:
• Jim Tran, AWS Principal Enterprise Solutions Architect
• Bhushan Nene, Senior Manager, AWS Solutions Architecture
• Jonathan Pan, Senior Product Marketing Manager, AWS
• Brent Nash, Senior Software Development Engineer, AWS
Further Reading
For additional information, see:
• AWS Whitepapers page
• Amazon CloudFront Developer Guide
• Amazon S3 Documentation
Document Revisions
May 21, 2021: Updated for technical accuracy
February 2019: Added usage of HTTPS
May 2017: First publication
Notes
1. Each S3 object can be zero bytes to 5 TB in file size, and there's no limit to the number of Amazon S3 objects you can store.
2. https://letsencrypt.org/stats/
3. If your high availability requirements require that your website remain available even in the case of a failure of an entire AWS Region, explore the Amazon S3 Cross-Region Replication capability to automatically replicate your S3
data to another S3 bucket in a second AWS Region.
4. For Microsoft IIS web servers, this is equivalent to "default.html"; for Apache web servers, this is equivalent to "index.html".
5. For moving very large quantities of data into S3, see https://aws.amazon.com/s3/transfer-acceleration/
6. "We find that the performance penalty incurred by a web flow due to its TCP handshake is between 10% and 30% of the latency to serve the HTTP request, as we show in detail in Section 2," from https://raghavan.usc.edu/papers/tfo-conext11.pdf
7. Any pricing information included in this document is provided only as an estimate of usage charges for AWS services, based on the prices effective at the time of this writing. Monthly charges will be based on your actual use of AWS services and may vary from the estimates provided.
8. If version control software is not in use at your organization, one alternative approach is the Amazon S3 object versioning feature for versioning your critical files. Note that object versioning introduces storage costs for each version and requires you to programmatically manage the different versions. For more information, see http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
9. The AWS account is the account that you create when you first sign up for AWS. Each AWS account has root permissions to all AWS resources and services. The best practice is to enable multi-factor authentication (MFA) for your root account, and then lock away the root credentials so that no person or system uses the root credentials directly for day-to-day operations. Instead, create IAM groups and IAM users for day-to-day operations.

How AWS Pricing Works
AWS Pricing Overview
October 30, 2020
This paper has been archived. For the latest technical guidance on How AWS Pricing Works, see https://docs.aws.amazon.com/whitepapers/latest/how-aws-pricing-works/welcome.html
Notices
Customers are
responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.
Contents
• Introduction
• Key principles: Understand the fundamentals of pricing; Start early with cost optimization; Maximize the power of flexibility; Use the right pricing model for the job
• Get started with the AWS Free Tier: 12 Months Free; Always Free; Trials
• AWS Pricing/TCO Tools: AWS Pricing Calculator; Migration Evaluator
• Pricing details for individual services: Amazon Elastic Compute Cloud (Amazon EC2); AWS Lambda; Amazon Elastic Block Store (Amazon EBS); Amazon Simple Storage Service (Amazon S3); Amazon S3 Glacier; AWS Outposts; AWS Snow Family; Amazon RDS; Amazon DynamoDB; Amazon CloudFront; Amazon Kendra; Amazon Kinesis; AWS IoT Events
• AWS Cost Optimization: Choose the right pricing models; Match capacity with demand; Implement processes to identify resource waste
• AWS Support Plan Pricing
• Cost calculation examples: AWS Cloud cost calculation example; Hybrid cloud cost calculation example
• Conclusion
• Contributors
• Further Reading
• Document Revisions
Abstract
Amazon Web Services (AWS) helps you move faster, reduce IT costs, and attain global scale through a broad set of global compute, storage, database, analytics, application, and
deployment services. One of the main benefits of cloud services is the ability they give you to optimize costs to match your needs, even as those needs change over time.
Introduction
AWS has the services to help you build sophisticated applications with increased flexibility, scalability, and reliability. Whether you're looking for compute power, database storage, content delivery, or other functionality, with AWS you pay only for the individual services you need, for as long as you use them, without complex licensing. AWS offers you a variety of pricing models for over 160 cloud services. You only pay for the services you consume, and once you stop using them, there are no additional costs or termination fees. This whitepaper provides an overview of how AWS pricing works across some of the most widely used services. The latest pricing information for each AWS service is available at http://aws.amazon.com/pricing/
Key principles
Although pricing models vary across services, it's worthwhile to review key principles and best practices that are broadly applicable.
Understand the fundamentals of pricing
There are three fundamental drivers of cost with AWS: compute, storage, and outbound data transfer. These characteristics vary somewhat depending on the AWS product and pricing model you choose. In most cases, there is no charge for inbound data transfer or for data transfer between other AWS services within the same Region. There are some exceptions, so be sure to verify data transfer rates before beginning. Outbound data transfer is aggregated across services and then charged at the outbound data transfer rate. This charge appears on the monthly statement as AWS Data Transfer Out. The more data you transfer, the less you pay per GB. For compute resources, you pay hourly from the time you launch a resource until the time you terminate it, unless you have made a reservation, for which the cost is agreed upon beforehand. For data storage and transfer,
you typically pay per GB. Except as otherwise noted, AWS prices are exclusive of applicable taxes and duties, including VAT and sales tax. For customers with a Japanese billing address, use of AWS is subject to Japanese Consumption Tax. For more information, see the Amazon Web Services Consumption Tax FAQ.
Start early with cost optimization
The cloud allows you to trade fixed expenses (such as data centers and physical servers) for variable expenses, and to pay for IT only as you consume it. And because of economies of scale, the variable expenses are much lower than what you would pay to do it yourself. Whether you started in the cloud or are just starting your migration journey to the cloud, AWS has a set of solutions to help you manage and optimize your spend. This includes services, tools, and resources to organize and track cost and usage data, enhance control through consolidated billing and access permissions, enable better planning through budgeting and forecasts, and further lower costs with resource and pricing optimizations. To learn how you can optimize and save costs today, visit AWS Cost Optimization.
Maximize the power of flexibility
AWS services are priced independently and transparently, and are available on demand, so you can choose and pay for exactly what you need. You may also choose to save money through a reservation model. By paying for services on an as-needed basis, you can redirect your focus to innovation and invention, reducing procurement complexity and enabling your business to be fully elastic. One of the key advantages of cloud-based resources is that you don't pay for them when they're not running. By turning off instances you don't use, you can reduce costs by 70 percent or more compared to running them 24/7. This enables you to be cost-efficient and, at the same time, have all the power you need when workloads are active.
Use the right pricing model for the job
AWS offers several pricing models, depending on the product.
These include:
• On-Demand Instances let you pay for compute or database capacity by the hour or second (minimum of 60 seconds), depending on which instances you run, with no long-term commitments or upfront payments.
• Savings Plans are a flexible pricing model that offers low prices on Amazon EC2, AWS Lambda, and AWS Fargate usage in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a one- or three-year term.
• Spot Instances are an Amazon EC2 pricing mechanism that lets you request spare computing capacity with no upfront commitment and at a discounted hourly rate (up to 90% off the On-Demand price).
• Reservations provide you with the ability to receive a greater discount, up to 75 percent, by paying for capacity ahead of time. For more details, see the Optimizing costs with reservations section.
Get started with the AWS Free Tier
The AWS Free Tier enables you to gain free, hands-on experience with more than 60 products on the AWS platform. The AWS Free Tier includes the following free offer types:
• 12 Months Free: These offers include 12 months of free usage following your initial sign-up date to AWS. When your 12-month free usage term expires, or if your application use exceeds the tiers, you simply pay standard pay-as-you-go service rates.
• Always Free: These free tier offers do not expire and are available to all AWS customers.
• Trials: These offers are short-term free trials starting from the date you activate a particular service. Once the trial period expires, you simply pay standard pay-as-you-go service rates.
This section lists some of the most commonly used AWS Free Tier services. Terms and conditions apply. For the full list of AWS Free Tier services, see AWS Free Tier.
12 Months Free
• Amazon Elastic Compute Cloud (Amazon EC2): 750 hours per month of Linux, RHEL, or SLES t2.micro/t3.micro instance usage, or 750 hours per month of Windows t2.micro/t3.micro instance usage, dependent on Region.
•
Amazon Simple Storage Service (Amazon S3) : 5 GB of Amazon S3 standard storage 20000 Get Requests and 2000 Put Requests • Amazon Relational Database Service (Amazon RDS) : 750 hours of Amazon RDS Single AZ dbt2micro database usage for running MySQL PostgreSQL MariaDB Oracle BYOL or SQL Server (running SQL Server Express Editi on); 20 GB of general purpose SSD database storage and 20 GB of storage for database backup and DB snapshots ArchivedAmazon Web Services How AWS Pricing Works Page 4 • Amazon CloudFront: 50 GB Data Transfer Out and 2000000 HTTP and HTTPS Requests each month Always Free • Amazon DynamoDB : Up to 200 million requests per month (25 Write Capacity units and 25 Read Capacity units ); 25 GB of storage • Amazon S3 Glacier : Retrieve up to 10 GB of your Amazon S3 Glacier data per month for free (applies to standard retrievals using the Glacier API only) • AWS Lambda : 1 million free requests per month; up to 32 million seconds of compute time per month Trials • Amazon SageMaker: 250 hours per month of t2medi um notebook50 hours per month of m4xlarge for training 125 hours per month of m4xlarge for hosting for the first two months • Amazon Redshift : 750 hours per month for fr ee enough hours to continuously run one DC2Large node with 160GB of compressed SSD storage You can also build clusters with multiple nodes to test larger data sets which will consume your free hours more quickly Once your two month free trial expires or your usage exceeds 750 hours per month you can shut down your cluster to avoid any charges or keep it running at the standard OnDemand Rate The AWS Free Tier is not available in th e AWS GovCloud (US) Regions or the China (Beijing) Region at this time The Lambda Free Tier is available in the AWS GovCloud (US) Region AWS Pricing/TCO Tools To get the most out of your estimates you should have a good idea of your basic requirements For example if you're going to try Amazon Elastic Compute Cloud (Amazon EC2) it might help if you know 
what kind of operating system you need, what your memory requirements are, and how much I/O you need. You should also decide whether you need storage (for example, if you're going to run a database) and how long you intend to use the servers. You don't need to make these decisions before generating an estimate, though. You can play around with the service configuration and parameters to see which options fit your use case and budget best. For more information about AWS service pricing, see AWS Services Pricing.

AWS offers a couple of tools, free of cost, for you to use. If the workload details and the services to be used are identified, AWS Pricing Calculator can help with calculating the total cost of ownership. Migration Evaluator helps with inventorying your existing environment, identifying workload information, and designing and planning your AWS migration.

AWS Pricing Calculator

AWS Pricing Calculator is a web-based service that you can use to create cost estimates to suit your AWS use cases. AWS Pricing Calculator is useful both for people who have never used AWS and for those who want to reorganize or expand their usage.

AWS Pricing Calculator allows you to explore AWS services based on your use cases and create a cost estimate. You can model your solutions before building them, explore the price points and calculations behind your estimate, and find the available instance types and contract terms that meet your needs. This enables you to make informed decisions about using AWS. You can plan your AWS costs and usage, or price out setting up a new set of instances and services.

AWS Pricing Calculator is free to use. It provides an estimate of your AWS fees and charges, but the estimate doesn't include any taxes that might apply to those fees and charges. AWS Pricing Calculator provides pricing details for your information only. AWS Pricing Calculator provides a console interface at https://calculator.aws/#/.

Migration Evaluator

Migration Evaluator (formerly TSO Logic) is a complimentary service to create data-driven business cases for AWS Cloud planning and migration. Creating business cases on your own can be a time-consuming process and does not always identify the most cost-effective deployment and purchasing options. Migration Evaluator quickly provides a business case to make sound AWS planning and migration decisions. With Migration Evaluator, your organization can build a data-driven business case for AWS, get access to AWS expertise, gain visibility into the costs associated with multiple migration strategies, and get insights into how reusing existing software licensing can reduce costs further.

A business case is the first step in the AWS migration journey. Beginning with on-premises inventory discovery, you can choose to upload exports from third-party tools or install a complimentary agentless collector to monitor Windows, Linux, and SQL Server footprints. As part of a white-glove experience, Migration Evaluator includes a team of program managers and solution architects who capture your migration objective and use analytics to narrow down the subset of migration patterns best suited to your business needs. The results are captured in a transparent business case, which aligns business and technology stakeholders to provide a prescriptive next step in your migration journey.

The Migration Evaluator service analyzes an enterprise's compute footprint, including server configuration, utilization, annual costs to operate, eligibility for bring-your-own-license, and hundreds of other parameters. It then statistically models utilization patterns, matching each workload with optimized placements in Amazon Elastic Compute Cloud and Amazon Elastic Block Store. Finally, it outputs a business case with a comparison of the current state against multiple future-state configurations, showing the flexibility of AWS. For more information, see Migration Evaluator.

Pricing details for
individual services

Different types of services lend themselves to different pricing models. For example, Amazon EC2 pricing varies by instance type, whereas the Amazon Aurora database service includes charges for data input/output (I/O) and storage. This section provides an overview of pricing concepts and examples for a few AWS services. You can always find current price information for each AWS service at AWS Pricing.

Amazon Elastic Compute Cloud (Amazon EC2)

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. The simple web service interface of Amazon EC2 allows you to obtain and configure capacity with minimal friction, with complete control of your computing resources. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity both up and down as your computing requirements change.

Pricing models for Amazon EC2

There are several ways to pay for Amazon EC2 instances: On-Demand Instances, Savings Plans, Reserved Instances, and Spot Instances.

On-Demand Instances

With On-Demand Instances, you pay for compute capacity per hour or per second, depending on which instances you run. No long-term commitments or upfront payments are required. You can increase or decrease your compute capacity to meet the demands of your application, and only pay the specified hourly rates for the instances you use. On-Demand Instances are recommended for the following use cases:

• Users who prefer the low cost and flexibility of Amazon EC2 without upfront payment or long-term commitments
• Applications with short-term, spiky, or unpredictable workloads that cannot be interrupted
• Applications being developed or tested on Amazon EC2 for the first time

Savings Plans

Savings Plans are a flexible pricing model that offers low prices on Amazon EC2, AWS Lambda, and AWS Fargate usage in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1- or 3-year term. Savings Plans provide savings of up to 72% on your AWS compute usage. This pricing model offers lower prices on Amazon EC2 instance usage regardless of instance family, size, OS, tenancy, or AWS Region, and also applies to AWS Fargate and AWS Lambda usage. For workloads that have predictable and consistent usage, Savings Plans can provide significant savings compared to On-Demand Instances. Savings Plans are recommended for:

• Workloads with consistent and steady-state usage
• Customers who want to use different instance types and compute solutions across different locations
• Customers who can make a monetary commitment to use EC2 over a one- or three-year term

Spot Instances

Amazon EC2 Spot Instances allow you to request spare Amazon EC2 computing capacity for up to 90 percent off the On-Demand price. Spot Instances are recommended for:

• Applications that have flexible start and end times
• Applications that are only feasible at very low compute prices
• Users with fault-tolerant and/or stateless workloads

Spot Instance prices are set by Amazon EC2 and adjust gradually based on long-term trends in supply and demand for Spot Instance capacity.

Reserved Instances

Amazon EC2 Reserved Instances provide you with a significant discount (up to 75 percent) compared to On-Demand Instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them.

Per-second billing

Per-second billing saves money, with a minimum of 60 seconds of billing. It is particularly effective for resources that have periods of low and high usage, such as development and testing, data processing, analytics, batch processing, and gaming applications. Learn more about per-second billing.

Estimating Amazon EC2 costs

When you begin to estimate the cost of using Amazon EC2, consider the following:

• Clock hours of server time: Resources incur charges while they are running; for example, from the time Amazon EC2 instances are launched until they are terminated, or from the time Elastic IP addresses are allocated until the time they are deallocated.
• Instance type: Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity, and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes at least one instance size, allowing you to scale your resources to the requirements of your target workload.
• Pricing model: With On-Demand Instances, you pay for compute capacity by the hour, with no required minimum commitments.
• Number of instances: You can provision multiple instances of your Amazon EC2 and Amazon EBS resources to handle peak loads.
• Load balancing: You can use Elastic Load Balancing to distribute traffic among Amazon EC2 instances. The number of hours Elastic Load Balancing runs and the amount of data it processes contribute to the monthly cost.
• Detailed monitoring: You can use Amazon CloudWatch to monitor your EC2 instances. By default, basic monitoring is enabled. For a fixed monthly rate, you can opt for detailed monitoring, which includes seven preselected metrics recorded once a minute. Partial months are charged on an hourly pro rata basis, at a per-instance-hour rate.
• Amazon EC2 Auto Scaling: Amazon EC2 Auto Scaling automatically adjusts the number of Amazon EC2 instances in your deployment according to the scaling policies you define. This service is available at no additional charge beyond Amazon CloudWatch fees.
• Elastic IP addresses: You can have one Elastic IP address
associated with a running instance at no charge.
• Licensing: To run operating systems and applications on AWS, you can obtain a variety of software licenses from AWS on a pay-as-you-go basis that are fully compliant and do not require you to manage complex licensing terms and conditions. However, if you have existing licensing agreements with software vendors, you can bring your eligible licenses to the cloud to reduce total cost of ownership (TCO). AWS offers License Manager, which makes it easier to manage your software licenses from vendors such as Microsoft, SAP, Oracle, and IBM across AWS and on-premises environments.

For more information, see Amazon EC2 pricing.

AWS Lambda

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume; there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability.

AWS Lambda pricing

With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the time it takes for your code to execute. Lambda registers a request each time it starts executing in response to an event notification or invoke call, including test invokes from the console. You are charged for the total number of requests across all your functions. Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100 milliseconds. The price depends on the amount of memory you allocate to your function.

AWS Lambda participates in Compute Savings Plans, a flexible pricing model that offers low prices on Amazon EC2, AWS Fargate, and AWS Lambda usage in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1- or 3-year term. With Compute Savings Plans, you can save up to 17% on AWS Lambda. Savings apply to Duration, Provisioned Concurrency, and Duration (Provisioned Concurrency).

Request pricing

• Free Tier: 1 million requests per month, 400,000 GB-seconds of compute time per month
• $0.20 per 1 million requests thereafter, or $0.0000002 per request

Duration pricing

• 400,000 GB-seconds per month free, up to 3.2 million seconds of compute time
• $0.0000166667 for every GB-second used thereafter

Additional charges

You may incur additional charges if your Lambda function uses other AWS services or transfers data. For example, if your Lambda function reads and writes data to or from Amazon S3, you will be billed for the read/write requests and the data stored in Amazon S3. Data transferred into and out of your AWS Lambda functions from outside the Region the function executed in will be charged at the EC2 data transfer rates listed on Amazon EC2 On-Demand Pricing under Data Transfer.

Amazon Elastic Block Store (Amazon EBS)

Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon EC2 instances. Amazon EBS volumes are off-instance storage that persists independently from the life of an instance. They are analogous to virtual disks in the cloud. Amazon EBS provides two volume types:

• SSD-backed volumes are optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS.
• HDD-backed volumes are optimized for large streaming workloads where throughput (measured in megabits per second) is a better performance measure than IOPS.

How Amazon EBS is priced

Amazon EBS pricing includes the following factors:

• Volumes: Volume storage for all EBS volume types is charged by the amount of GB you provision per month, until you release the storage.
• Snapshots: Snapshot storage is based on the amount of space your
data consumes in Amazon S3. Because Amazon EBS does not save empty blocks, it is likely that the snapshot size will be considerably less than your volume size. Copying EBS snapshots is charged based on the volume of data transferred across Regions. For the first snapshot of a volume, Amazon EBS saves a full copy of your data to Amazon S3. For each incremental snapshot, only the changed part of your Amazon EBS volume is saved. After a snapshot is copied, standard EBS snapshot charges apply for storage in the destination Region.
• EBS Fast Snapshot Restore (FSR): This is charged in Data Services Unit-Hours (DSUs) for each Availability Zone in which it is enabled. DSUs are billed per minute, with a 1-hour minimum. The price of 1 FSR DSU-hour is $0.75 per Availability Zone (pricing based on us-east-1 (N. Virginia)).
• EBS direct APIs for Snapshots: EBS direct APIs for Snapshots provide access to directly read EBS snapshot data and identify differences between two snapshots. The following charges apply for these APIs:
  o The ListChangedBlocks and ListSnapshotBlocks APIs are charged per request.
  o The GetSnapshotBlock API is charged per SnapshotAPIUnit (block size 512 KiB).
• Data transfer: Consider the amount of data transferred out of your application. Inbound data transfer is free, and outbound data transfer charges are tiered. If you use external or cross-Region data transfers, additional EC2 data transfer charges will apply.

For more information, see the Amazon EBS pricing page.

Amazon Simple Storage Service (Amazon S3)

Amazon Simple Storage Service (Amazon S3) is object storage built to store and retrieve any amount of data from anywhere: websites, mobile apps, corporate applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999 percent durability, and it stores data for millions of applications used by market leaders in every industry. As with other AWS services, Amazon S3 provides the simplicity and cost-effectiveness of pay-as-you-go pricing.

Estimating Amazon S3 storage costs

With Amazon S3, you pay only for the storage you use, with no minimum fee. Prices are based on the location of your Amazon S3 bucket. When you begin to estimate the cost of Amazon S3, consider the following:

• Storage class: Amazon S3 offers a range of storage classes designed for different use cases. These include S3 Standard for general-purpose storage of frequently accessed data; S3 Intelligent-Tiering for data with unknown or changing access patterns; S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived but less frequently accessed data; and Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation. Amazon S3 also offers capabilities to manage your data throughout its lifecycle. Once an S3 Lifecycle policy is set, your data will automatically transfer to a different storage class without any changes to your application.
• Storage: Costs vary with the number and size of objects stored in your Amazon S3 buckets, as well as the type of storage.
• Requests and data retrievals: The cost of requests made against S3 buckets and objects is based on the request type and the quantity of requests.
• Data transfer: The amount of data transferred out of the Amazon S3 Region. Transfers between S3 buckets, or from Amazon S3 to any service within the same AWS Region, are free.
• Management and replication: You pay for the storage management features (Amazon S3 inventory, analytics, and object tagging) that are enabled on your account's buckets.

For more information, see Amazon S3 pricing. You can estimate your monthly bill using the AWS Pricing Calculator.

Amazon S3 Glacier

Amazon S3 Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup. It is designed to deliver 99.999999999 percent durability
with comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements. Amazon S3 Glacier provides query-in-place functionality, allowing you to run powerful analytics directly on your archived data at rest.

Amazon S3 Glacier provides low-cost, long-term storage. Starting at $0.004 per GB per month, Amazon S3 Glacier allows you to archive large amounts of data at a very low cost. You pay only for what you need, with no minimum commitments or upfront fees. Other factors determining pricing include requests and data transfers out of Amazon S3 Glacier (incoming transfers are free).

Data access options

To keep costs low yet suitable for varying retrieval needs, Amazon S3 Glacier provides three options for access to archives, spanning a few minutes to several hours. For details, see the Amazon S3 Glacier FAQs. Storage and bandwidth include all file overhead. Rate tiers take into account your aggregate usage for Data Transfer Out to the internet across Amazon EC2, Amazon S3, Amazon Glacier, Amazon RDS, Amazon SimpleDB, Amazon SQS, Amazon SNS, Amazon DynamoDB, and AWS Storage Gateway.

Amazon S3 Glacier Select pricing

Amazon S3 Glacier Select allows queries to run directly on data stored in Amazon S3 Glacier without having to retrieve the entire archive. Pricing for this feature is based on the total amount of data scanned, the amount of data returned by Amazon S3 Glacier Select, and the number of Amazon S3 Glacier Select requests initiated. For more information, see the Amazon S3 Glacier pricing page.

Data transfer

Data transfer in to Amazon S3 is free. Data transfer out of Amazon S3 is priced by Region. For more information on AWS Snowball pricing, see the AWS Snowball pricing page.

AWS Outposts

AWS Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to any data center, co-location space, or on-premises facility. AWS Outposts is ideal for workloads that require low-latency access to on-premises systems, local data processing, or local data storage. Outposts are connected to the nearest AWS Region to provide the same management and control plane services on premises, for a truly consistent operational experience across your on-premises and cloud environments. Your Outposts infrastructure and AWS services are managed, monitored, and updated by AWS, just like in the cloud.

Figure 1: Example AWS Outposts architecture

Pricing of Outposts configurations

Outposts configurations are priced for the Amazon EC2 and Amazon EBS capacity in the SKU, for a three-year term, with partial upfront, all upfront, and no upfront options available. The price includes delivery, installation, servicing, and removal at the end of the term. AWS services running locally on AWS Outposts are charged based on usage only. Amazon EC2 capacity and Amazon EBS storage upgrades are available. Operating system charges are billed based on usage, as an uplift to cover the license fee, with no minimum fee required. The same AWS Region data ingress and egress charges apply, and there are no additional data transfer charges for the local network.

Figure 2: AWS Outposts ingress/egress charges

For more information, see the AWS Outposts pricing page.

AWS Snow Family

The AWS Snow Family helps customers that need to run operations in austere, non-data center environments and in locations where there's a lack of consistent network connectivity. The Snow Family, comprised of AWS Snowcone, AWS Snowball, and AWS Snowmobile, offers a number of physical devices and capacity points, most with built-in computing capabilities. These services help physically transport up to exabytes of data into and out of AWS. Snow Family devices are owned and managed by AWS and integrate with AWS security, monitoring, storage management, and computing capabilities.

AWS Snowcone

AWS Snowcone is the smallest member of the AWS Snow Family of edge computing and data transfer devices.
Snowcone is portable, rugged, and secure. You can use Snowcone to collect, process, and move data to AWS, either offline by shipping the device or online with AWS DataSync. With AWS Snowcone, you pay only for the use of the device and for data transfer out of AWS. Data transferred offline into AWS with Snowcone does not incur any transfer fees. For online data transfer pricing with AWS DataSync, refer to the DataSync pricing page. Standard pricing applies once data is stored in the AWS Cloud.

For AWS Snowcone, you pay a service fee per job, which includes five days of on-site usage, plus a fee for any extra days you have the device on site. For high-volume deployments, contact your AWS sales team. For pricing details, see AWS Snowcone Pricing.

AWS Snowball

AWS Snowball is a data migration and edge computing device that comes in two device options: Compute Optimized and Storage Optimized. Snowball Edge Storage Optimized devices provide 40 vCPUs of compute capacity coupled with 80 terabytes of usable block or Amazon S3-compatible object storage, and are well suited for local storage and large-scale data transfer. Snowball Edge Compute Optimized devices provide 52 vCPUs, 42 terabytes of usable block or object storage, and an optional GPU for use cases such as advanced machine learning and full-motion video analysis in disconnected environments. Customers can use these two options for data collection, machine learning and processing, and storage in environments with intermittent connectivity (such as manufacturing, industrial, and transportation) or in extremely remote locations (such as military or maritime operations) before shipping devices back to AWS. These devices may also be rack-mounted and clustered together to build larger, temporary installations.

AWS Snowball has three pricing elements to consider: usage, device type, and term of use. First, understand your planned use case. Is it data transfer only, or will you be running compute on the device?
You can use either device for data transfer or computing, but it is more cost-effective to use a Snowball Edge Storage Optimized device for data transfer jobs. Second, choose your device: either Snowball Edge Storage Optimized or Snowball Edge Compute Optimized. You can also select the option to run GPU instances on Snowball Edge Compute Optimized for edge applications.

For on-demand use, you pay a service fee per data transfer job, which includes 10 days of on-site Snowball Edge device usage. Shipping days, including the day the device is received and the day it is shipped back to AWS, are not counted toward the 10 days. After the 10 days, you pay a low per-day fee for each additional day you keep the device. For 1-year or 3-year commitments, contact your sales team; you cannot make this selection in the AWS Console. Data transferred into AWS does not incur any data transfer fees, and standard pricing applies for data stored in the AWS Cloud. For pricing details, see AWS Snowball Pricing.

AWS Snowmobile

AWS Snowmobile moves up to 100 PB of data in a 45-foot long, ruggedized shipping container, and is ideal for multi-petabyte or exabyte-scale digital media migrations and data center shutdowns. A Snowmobile arrives at the customer site and appears as a network-attached data store for more secure, high-speed data transfer. After data is transferred to Snowmobile, it is driven back to an AWS Region, where the data is loaded into Amazon S3.

Snowmobile pricing is based on the amount of data stored on the truck per month. Snowmobile can be made available for use with AWS services in select AWS Regions. Follow up with AWS Sales to discuss data transport needs for your specific region and schedule an evaluation. For pricing details, see AWS Snowmobile Pricing.

Amazon RDS

Amazon RDS is a web service that makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, so you can focus on your applications and business.

Estimating Amazon RDS costs

The factors that drive the costs of Amazon RDS include:

• Clock hours of server time: Resources incur charges while they are running; for example, from the time you launch a DB instance until you terminate it.
• Database characteristics: The physical capacity of the database you choose will affect how much you are charged. Database characteristics vary depending on the database engine, size, and memory class.
• Database purchase type: When you use On-Demand DB Instances, you pay for compute capacity for each hour your DB Instance runs, with no required minimum commitments. With Reserved DB Instances, you can make a low, one-time, upfront payment for each DB Instance you wish to reserve for a 1- or 3-year term.
• Number of database instances: With Amazon RDS, you can provision multiple DB instances to handle peak loads.
• Provisioned storage: There is no additional charge for backup storage of up to 100 percent of your provisioned database storage for an active DB Instance. After the DB Instance is terminated, backup storage is billed per GB per month.
• Additional storage: The amount of backup storage in addition to the provisioned storage amount is billed per GB per month.
• Long-term retention: Long-term retention is priced per vCPU per month for each database instance on which it is enabled. The price depends on the RDS instance type used by your database and may vary by Region. If long-term retention is turned off, performance data older than 7 days is deleted.
• API requests: The API free tier includes all calls from the Performance Insights dashboard, as well as 1 million calls outside of the Performance Insights dashboard. API requests outside of the Performance Insights free tier are charged at $0.01 per 1,000 requests.
• Deployment type: You can deploy your DB Instance to a single Availability Zone (analogous to a standalone data center) or multiple Availability Zones (analogous to a secondary data center for enhanced availability and durability). Storage and I/O charges vary depending on the number of Availability Zones you deploy to.
• Data transfer: Inbound data transfer is free, and outbound data transfer costs are tiered.

Depending on your application's needs, it's possible to optimize your costs for Amazon RDS database instances by purchasing Reserved DB Instances. To purchase Reserved Instances, you make a low, one-time payment for each instance you want to reserve and, in turn, receive a significant discount on the hourly usage charge for that instance. For more information, see Amazon RDS pricing.

Amazon DynamoDB

Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit-millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.

Amazon DynamoDB pricing at a glance

DynamoDB charges for reading, writing, and storing data in your DynamoDB tables, along with any optional features you choose to enable. DynamoDB has two capacity modes, each with its own billing options for processing reads and writes on your tables: on-demand capacity mode and provisioned capacity mode. DynamoDB read requests can be strongly consistent, eventually consistent, or transactional.

On-Demand Capacity Mode

With on-demand capacity mode, you pay per request for the data reads and writes your application performs on your tables. You do not need to specify how much read and write throughput you expect your application to perform, as DynamoDB instantly accommodates your workloads as they ramp up or down.
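The request-unit arithmetic for on-demand mode follows simple rounding rules: reads are billed in 4 KB increments and writes in 1 KB increments, with multipliers for the consistency mode of the request. As a minimal sketch (in Python, with hypothetical helper names), the worked examples in this section can be reproduced like this:

```python
import math

# Sketch of DynamoDB on-demand request-unit arithmetic (hypothetical helper
# names). Reads are billed in 4 KB units, writes in 1 KB units; the
# multipliers reflect the consistency mode of the request.

READ_MULTIPLIERS = {"strong": 1.0, "eventual": 0.5, "transactional": 2.0}

def read_request_units(item_kb: float, consistency: str = "strong") -> float:
    """Read request units (RRUs) consumed by one read of an item."""
    return math.ceil(item_kb / 4) * READ_MULTIPLIERS[consistency]

def write_request_units(item_kb: float, transactional: bool = False) -> int:
    """Write request units (WRUs) consumed by one write of an item."""
    return math.ceil(item_kb) * (2 if transactional else 1)

# The worked examples from this section:
print(read_request_units(8, "strong"))             # 2.0 RRUs
print(read_request_units(8, "eventual"))           # 1.0 RRU
print(read_request_units(8, "transactional"))      # 4.0 RRUs
print(write_request_units(3))                      # 3 WRUs
print(write_request_units(3, transactional=True))  # 6 WRUs
```

Multiplying the monthly totals of these request units by the published per-unit on-demand rates gives the read/write portion of the bill; storage and optional features are charged separately.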
DynamoDB charges for the core and optional features of DynamoDB Table 1: Amazon DynamoDB OnDemand Pricing Core Feature Billing unit Details Read request unit (RRU) API calls to read data from your table are billed in RRU A strongly consistent read request of up to 4 KB requires one RRU For items larger than 4 KB additional RRUs are required For items up to 4 KB An eventually consistent read request requires one half RRU A transactional read request requires two RRUs Write request unit (WRU) Each API call to write data to your table is a WRU A standard WRU can write an item up to 1KB Items larger than 1 KB require additional WRUs Transactional write requires two WRUs Example RRU : • A strongly consistent read request of an 8 KB item requires two read request units • An eventually consistent read of an 8 KB item requires one read re quest unit • A transactional read of an 8 KB item requires four read request units Example WRU : • A write request of a 1 KB item requires one WRU • A write request of a 3 KB item requires three WRUs • A transactional write request of a 3 KB item requires six WRUs ArchivedAmazon Web Services How AWS Pricing Works Page 21 For details on how DynamoDB charges for the core and optional features of DynamoDB see Pricing for On Demand Capacity Provisioned Capacity Mode With provisioned capacit y mode you specify the number of data reads and writes per second that you require for your application You can use auto scaling to automatically adjust your table’s capacity based on the specified utilization rate to ensure application performance while reducing costs Table 2: Amazon DynamoDB Provisioned Capacity Mode Core Feature Billing unit Details Read Capacity unit (RCU) API calls to read data from your table is an RCU Items up to 4 KB in size one RCU can perform one strongly consistent read request per second For Items larger than 4 KB require additional RCUs For items up to 4 KB One RCU can perform two eventually consistent read requests per second 
Transactional read requests require two RCUs to perform one read per second.

Core feature: Write capacity unit (WCU)
Billing unit: Each API call to write data to your table is a write request.
Details: For items up to 1 KB in size, one WCU can perform one standard write request per second; items larger than 1 KB require additional WCUs. Transactional write requests require two WCUs to perform one write per second for items up to 1 KB.

Core feature: Data storage
Billing unit: DynamoDB monitors the size of your tables continuously to determine storage charges.
Details: DynamoDB measures the size of your billable data by adding the raw byte size of the data you upload plus a per-item storage overhead of 100 bytes to account for indexing. The first 25 GB stored per month is free.

Example WCUs:
• A standard write request of a 1 KB item would require one WCU
• A standard write request of a 3 KB item would require three WCUs
• A transactional write request of a 3 KB item would require six WCUs

Example RCUs:
• A strongly consistent read of an 8 KB item would require two RCUs
• An eventually consistent read of an 8 KB item would require one RCU
• A transactional read of an 8 KB item would require four RCUs

For details, see Amazon DynamoDB pricing.

Data transfer

There is no additional charge for data transferred between Amazon DynamoDB and other AWS services within the same Region. Data transferred across Regions (for example, between Amazon DynamoDB in the US East (Northern Virginia) Region and Amazon EC2 in the EU (Ireland) Region) will be charged on both sides of the transfer.

Global tables

Global tables build on DynamoDB's global footprint to provide you with a fully managed, multi-region, multi-master database that delivers fast, local read and write performance for massively scaled global applications. Global tables replicate your Amazon DynamoDB tables automatically across your choice of AWS Regions. DynamoDB charges for global tables usage based on the resources used on each replica table. Write requests for
global tables are measured in replicated WCUs instead of standard WCUs. The number of replicated WCUs consumed for replication depends on the version of global tables you are using. Read requests and data storage are billed consistently with standard tables (tables that are not global tables). If you add a table replica to create or extend a global table in new Regions, DynamoDB charges for a table restore in the added Regions, per gigabyte of data restored. Cross-Region replication and adding replicas to tables that contain data also incur charges for data transfer out. For more information, see Best Practices and Requirements for Managing Global Tables.

Learn more about pricing for additional DynamoDB features at the Amazon DynamoDB pricing page.

Amazon CloudFront

Amazon CloudFront is a global content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to your viewers with low latency and high transfer speeds.

Amazon CloudFront pricing

Amazon CloudFront charges are based on the data transfers and requests used to deliver content to your customers. There are no upfront payments or fixed platform fees, no long-term commitments, no premiums for dynamic content, and no requirements for professional services to get started. There is no charge for data transferred from AWS services such as Amazon S3 or Elastic Load Balancing. And best of all, you can get started with CloudFront for free.

When you begin to estimate the cost of Amazon CloudFront, consider the following:

• Data Transfer OUT (Internet/Origin): The amount of data transferred out of your Amazon CloudFront edge locations.
• HTTP/HTTPS Requests: The number and type of requests (HTTP or HTTPS) made and the geographic region in which the requests are made.
• Invalidation Requests: No additional charge for the first 1,000 paths requested for invalidation each month. Thereafter, $0.005 per path requested for invalidation.
• Field-Level
Encryption Requests: Field-level encryption is charged based on the number of requests that need the additional encryption; you pay $0.02 for every 10,000 requests that CloudFront encrypts using field-level encryption, in addition to the standard HTTPS request fee.
• Dedicated IP Custom SSL: $600 per month for each custom SSL certificate associated with one or more CloudFront distributions using the Dedicated IP version of custom SSL certificate support. This monthly fee is prorated by the hour.

For more information, see Amazon CloudFront pricing.

Amazon Kendra

Amazon Kendra is a highly accurate and easy-to-use enterprise search service that's powered by machine learning. Amazon Kendra enables developers to add search capabilities to their applications so their end users can discover information stored within the vast amount of content spread across their company. When you type a question, the service uses machine learning algorithms to understand the context and return the most relevant results, whether that be a precise answer or an entire document. For example, you can ask a question like "How much is the cash reward on the corporate credit card?" and Amazon Kendra will map to the relevant documents and return a specific answer like "2%".

Amazon Kendra pricing

With the Amazon Kendra service, you pay only for what you use. There is no minimum fee or usage requirement. Once you provision Amazon Kendra by creating an index, you are charged for Amazon Kendra hours from the time an index is created until it is deleted. Partial index instance hours are billed in one-second increments. This applies to Kendra Enterprise Edition and Kendra Developer Edition.

Amazon Kendra comes in two editions. Kendra Enterprise Edition provides a high-availability service for production workloads. Kendra Developer Edition provides developers with a lower-cost option to build a proof of concept; this edition is not recommended for production
workloads. You can get started for free with the Amazon Kendra Developer Edition, which provides free usage of up to 750 hours for the first 30 days. Connector usage does not qualify for free usage; regular run time and scanning pricing will apply. If you exceed the free tier usage limits, you will be charged the Amazon Kendra Developer Edition rates for the additional resources you use. See Amazon Kendra Pricing for pricing details.

Amazon Macie

Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Amazon Macie uses machine learning and pattern matching to cost-efficiently discover sensitive data at scale. Macie automatically detects a large and growing list of sensitive data types, including personally identifiable information (PII) such as names, addresses, and credit card numbers. It also gives you constant visibility of the data security and data privacy of your data stored in Amazon S3. Macie is easy to set up with one click in the AWS Management Console or a single API call. Macie provides multi-account support using AWS Organizations, so you can enable Macie across all of your accounts with a few clicks.

Amazon Macie pricing

With Amazon Macie, you are charged based on the number of Amazon S3 buckets evaluated for bucket-level security and access controls, and the quantity of data processed for sensitive data discovery. When you enable Macie, the service will gather details on all of your S3 buckets, including bucket names, size, object count, resource tags, encryption status, access controls, and Region placement. Macie will then automatically and continually evaluate all of your buckets for security and access control, alerting you to any unencrypted buckets, publicly accessible buckets, or buckets shared with an AWS account outside of your organization. You are charged based on the total number of buckets in your
account after the 30-day free trial, and charges are prorated per day. After enabling the service, you are able to configure and submit buckets for sensitive data discovery. This is done by selecting the buckets you would like scanned, configuring a one-time or periodic sensitive data discovery job, and submitting it to Macie. Macie only charges for the bytes processed in the supported object types it inspects. As part of Macie sensitive data discovery jobs, you will also incur the standard Amazon S3 charges for GET and LIST requests. See Requests and data retrievals pricing on the Amazon S3 pricing page.

Free tier | Sensitive data discovery

For sensitive data discovery jobs, the first 1 GB processed every month in each account comes at no cost. For each GB processed beyond the first 1 GB, charges will occur. Please refer to this link for pricing details.

*You are only charged for jobs you configure and submit to the service for sensitive data discovery.

Amazon Kinesis

Amazon Kinesis makes it easy to collect, process, and analyze real-time streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly, instead of having to wait until all your data is collected before the processing can begin.

Amazon Kinesis Data Streams is a scalable and durable real-time data streaming service that can continuously capture gigabytes of data per second from hundreds of thousands of sources. See Amazon Kinesis Data Streams Pricing for pricing details.

Amazon Kinesis
Data Firehose is the easiest way to capture, transform, and load data streams into AWS data stores for near real-time analytics with existing business intelligence tools. See Amazon Kinesis Data Firehose Pricing for pricing details.

Amazon Kinesis Data Analytics is the easiest way to process data streams in real time with SQL or Apache Flink, without having to learn new programming languages or processing frameworks. See Amazon Kinesis Data Analytics Pricing for pricing details.

Amazon Kinesis Video Streams

Amazon Kinesis Video Streams makes it easy to securely stream media from connected devices to AWS for storage, analytics, machine learning (ML), playback, and other processing. Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming media from millions of devices. It durably stores, encrypts, and indexes media in your streams and allows you to access your media through easy-to-use APIs. Kinesis Video Streams enables you to quickly build computer vision and ML applications through integration with Amazon Rekognition Video, Amazon SageMaker, and libraries for ML frameworks such as Apache MXNet, TensorFlow, and OpenCV. For live and on-demand playback, Kinesis Video Streams provides fully managed capabilities for HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH). Kinesis Video Streams also supports ultra-low-latency two-way media streaming with WebRTC as a fully managed capability. Kinesis Video Streams is ideal for building media streaming applications for camera-enabled IoT devices, and for building the real-time, computer-vision-enabled ML applications that are becoming prevalent in a wide range of use cases.

Amazon Kinesis Video Streams pricing

You pay only for the volume of data you ingest, store, and consume in your video streams.

WebRTC pricing

If you use WebRTC capabilities, you pay for the number of signaling channels that are active in a given month, the number of signaling messages sent and received, and
TURN streaming minutes used for relaying media. A signaling channel is considered active in a month if, at any time during the month, a device or an application connects to it. TURN streaming minutes are metered in 1-minute increments.

Note: You will incur standard AWS data transfer charges when you retrieve data from your video streams to destinations outside of AWS over the internet.

See Amazon Kinesis Video Streams Pricing for pricing details.

AWS IoT Events

AWS IoT Events helps companies continuously monitor their equipment and fleets of devices for failure or changes in operation, and trigger alerts to respond when events occur. AWS IoT Events recognizes events across multiple sensors to identify operational issues, such as equipment slowdowns, and generates alerts, such as notifying support teams of an issue. AWS IoT Events offers a managed complex event detection service on the AWS Cloud, accessible through the AWS IoT Events console (a browser-based GUI where you can define and manage your event detectors) or direct-ingest application program interfaces (APIs; code that allows two applications to communicate with each other). Understanding equipment or a process based on telemetry from a single sensor is often not possible; a complex event detection service will combine multiple sources of telemetry to gain full insight into equipment and processes. You define conditional logic and states inside AWS IoT Events to evaluate incoming telemetry data to detect events in equipment or a process. When AWS IoT Events detects an event, it can trigger predefined actions in another AWS service, such as sending alerts through Amazon Simple Notification Service (Amazon SNS).

AWS IoT Events pricing

With AWS IoT Events, you pay only for what you use, with no minimum fees or mandatory service usage. When you create an event detector in AWS IoT Events, you apply conditional logic, such as if-then-else statements, to understand events, such as
when a motor might be stuck. You are only charged for each message that is evaluated in AWS IoT Events. See AWS IoT Events Pricing for pricing details.

The AWS Free Tier is available to you for 12 months, starting on the date you create your AWS account. When your free usage expires, or if your application use exceeds the free usage tiers, you simply pay the above rates. Your usage is calculated each month across all Regions and is automatically applied to your bill. Note that free usage does not accumulate from one billing period to the next.

AWS Cost Optimization

AWS enables you to take control of cost and continuously optimize your spend, while building modern, scalable applications to meet your needs. AWS's breadth of services and pricing options offer the flexibility to effectively manage your costs while keeping the performance and capacity you require. AWS is dedicated to helping customers achieve the highest saving potential. During this period of crisis, we will work with you to develop a plan that meets your financial needs. Get started with the steps below, which will have an immediate impact on your bill today.

Choose the right pricing models

Use Reserved Instances (RIs) to reduce Amazon RDS, Amazon Redshift, Amazon ElastiCache, and Amazon Elasticsearch costs. For certain services, like Amazon EC2 and Amazon RDS, you can invest in reserved capacity. With Reserved Instances, you can save up to 72% over equivalent on-demand capacity. Reserved Instances are available in three options: All Upfront (AURI), Partial Upfront (PURI), or No Upfront (NURI). Use the recommendations provided in AWS Cost Explorer RI purchase recommendations, which are based on your Amazon RDS, Amazon Redshift, Amazon ElastiCache, and Elasticsearch usage.

Amazon EC2 Cost Savings

Use Amazon EC2 Spot Instances to reduce EC2 costs, or use Compute Savings Plans to reduce EC2, Fargate, and Lambda costs.

Match Capacity with Demand

Identify
Amazon EC2 instances with low utilization and reduce cost by stopping or rightsizing them. Use AWS Cost Explorer Resource Optimization to get a report of EC2 instances that are either idle or have low utilization. You can reduce costs by either stopping or downsizing these instances. Use AWS Instance Scheduler to automatically stop instances, and use AWS Operations Conductor to automatically resize the EC2 instances (based on the recommendations report from Cost Explorer).

Identify Amazon RDS and Amazon Redshift instances with low utilization and reduce cost by stopping (RDS) and pausing (Redshift). Use the Trusted Advisor Amazon RDS Idle DB Instances check to identify DB instances which have not had any connection over the last 7 days. To reduce costs, stop these DB instances using the automation steps described in this blog post. For Redshift, use the Trusted Advisor Underutilized Redshift Clusters check to identify clusters which have had no connections for the last 7 days and less than 5% cluster-wide average CPU utilization for 99% of the last 7 days. To reduce costs, pause these clusters using the steps in this blog.

Analyze Amazon DynamoDB usage and reduce cost by leveraging auto scaling or on-demand. Analyze your DynamoDB usage by monitoring two metrics, ConsumedReadCapacityUnits and ConsumedWriteCapacityUnits, in CloudWatch. To automatically scale (in and out) your DynamoDB table, use the auto scaling feature. Using the steps here, you can enable auto scaling on your existing tables. Alternately, you can also use the on-demand option. This option allows you to pay per request for read and write requests, so that you only pay for what you use, making it easy to balance costs and performance.

Implement processes to identify resource waste

Identify Amazon EBS volumes with low utilization and reduce cost by snapshotting and then deleting them. EBS volumes that have very low activity (less than 1 IOPS per day) over a period of 7 days
indicate that they are probably not in use. Identify these volumes using the Trusted Advisor Underutilized Amazon EBS Volumes check. To reduce costs, first snapshot the volume (in case you need it later), then delete these volumes. You can automate the creation of snapshots using the Amazon Data Lifecycle Manager. Follow the steps here to delete EBS volumes.

Analyze Amazon S3 usage and reduce cost by leveraging lower-cost storage tiers. Use S3 Analytics to analyze storage access patterns on the object data set for 30 days or longer. It makes recommendations on where you can leverage S3 Infrequent Access (S3 IA) to reduce costs. You can automate moving these objects into the lower-cost storage tier using Lifecycle Policies. Alternately, you can also use S3 Intelligent-Tiering, which automatically analyzes and moves your objects to the appropriate storage tier.

Review networking and reduce costs by deleting idle load balancers. Use the Trusted Advisor Idle Load Balancers check to get a report of load balancers that have a RequestCount of less than 100 over the past 7 days. Then use the steps here to delete these load balancers to reduce costs. Additionally, use the steps provided in this blog to review your data transfer costs using Cost Explorer.

AWS Support Plan Pricing

AWS Support provides a mix of tools and technology, people, and programs designed to proactively help you optimize performance, lower costs, innovate faster, and solve some of the toughest challenges that hold you back in your cloud journey. There are three types of support plans available: Developer, Business, and Enterprise. For more details, see Compare AWS Support Plans and AWS Support Plan Pricing.

Cost calculation examples

The following sections use the AWS Pricing Calculator to provide example cost calculations for two use cases.

AWS Cloud cost calculation example

This example is a common use case of a dynamic website hosted on AWS using Amazon
EC2, AWS Auto Scaling, and Amazon RDS. The Amazon EC2 instance runs the web and application tiers, and AWS Auto Scaling matches the number of instances to the traffic load. Amazon RDS uses one DB instance for its primary storage, and this DB instance is deployed across multiple Availability Zones.

Architecture

Elastic Load Balancing balances traffic to the Amazon EC2 instances in an AWS Auto Scaling group, which adds or subtracts Amazon EC2 instances to match load. Deploying Amazon RDS across multiple Availability Zones enhances data durability and availability: Amazon RDS provisions and maintains a standby in a different Availability Zone for automatic failover in the event of outages, planned or unplanned. The following illustration shows the example architecture for a dynamic website using Amazon EC2, AWS Auto Scaling, security groups to enforce least-privilege access to AWS infrastructure and selected architecture components, and one Amazon RDS database instance across multiple Availability Zones (Multi-AZ deployment). All these components are deployed into a single Region and VPC. The VPC is spread across two Availability Zones to support failover scenarios, with Route 53 Resolver to manage and route requests for one hosted zone toward the Elastic Load Balancer.

Figure 3: AWS Cloud deployment architecture

Daily usage profile

You can monitor daily usage for your application so that you can better estimate your costs. For instance, you can look at the daily pattern to figure out how your application handles traffic. For each hour, track how many hits you get on your website and how many instances are running, and then add up the total number of hits for that day.

Hourly instance pattern = (hits per hour on website) / (number of instances)
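The hourly-pattern formula, together with the daily average described in this section, can be sketched in a few lines. This is illustrative only: the function names and sample data are ours.

```python
def hourly_instance_pattern(hits_per_hour, instances_per_hour):
    """Hits-per-instance ratio for each hour of the day."""
    return [hits / n for hits, n in zip(hits_per_hour, instances_per_hour)]

def daily_profile(pattern):
    """Average the hourly pattern over the 24 hours of the day."""
    return sum(pattern) / 24
```

For example, a site receiving 240 hits per hour on 4 running instances yields an hourly pattern of 60 hits per instance, and the same value for the daily profile.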
Examine the number of Amazon EC2 instances that run each hour, and then take the average. You can use the number of hits per day and the average number of instances for your calculations.

Daily profile = SUM(Hourly instance pattern) / 24

Amazon EC2 cost breakdown

The following table shows the characteristics for Amazon EC2 used for this dynamic site in the US East Region.

Characteristic | Estimated usage | Description
Utilization | 100% | All infrastructure components run 24 hours per day, 7 days per week.
Instance | t3a.xlarge | 16 GB memory, 4 vCPUs.
Storage | Amazon EBS SSD (gp2) | 1 EBS volume per instance, with 30 GB of storage per volume.
Data backup | Daily EBS snapshots | 1 EBS volume per instance, with 30 GB of storage per volume.
Data transfer | Data in: 1 TB/month; data out: 1 TB/month | 10% incremental change per day.
Instance scale | 4 | On average, 4 instances run per day.
Load balancing | 20 GB/hour | Elastic Load Balancing is used 24 hours per day, 7 days per week. It processes a total of 20 GB/hour (data in + data out).
Database | MySQL | db.m5.large instance with 8 GB memory, 2 vCPUs, and 100 GB storage; Multi-AZ deployment with synchronous standby replica in a separate Availability Zone.

The total cost for one month is the sum of the cost of the running services and data transfer out, minus the AWS Free Tier discount. We calculated the total cost using the AWS Pricing Calculator.

Table 3: Cost breakdown

Service | Monthly | Annually | Configuration
Elastic Load Balancing | $87.60 | $1,051.20 | Number of Network Load Balancers (1); processed bytes per NLB for TCP (20 GB per hour)
Amazon EC2 | $439.16 | $5,269.92 | Operating system (Linux); quantity (4); storage for each EC2 instance (General Purpose SSD (gp2)); storage amount (30 GB); instance type (t3a.xlarge)
Amazon
Elastic IP address | $0 | $0 | Number of EC2 instances (1); number of EIPs per instance (1)
Amazon RDS for MySQL | $272.66 | $3,271.92 | Quantity (1); db.m5.large; storage for each RDS instance (General Purpose SSD (gp2)); storage amount (100 GB)
Amazon Route 53 | $183.00 | $2,196.00 | Hosted zones (1); number of Elastic Network Interfaces (2); basic checks within AWS (0)
Amazon Virtual Private Cloud (Amazon VPC) | $92.07 | $1,104.84 | Data transfer cost: inbound (from the internet) 1 TB per month; outbound (to the internet) 1 TB per month; intra-Region 0 TB per month

These line items sum to approximately $1,074.49 per month.

Hybrid cloud cost calculation example

This example is a hybrid cloud use case of AWS Outposts deployed on premises and connected to the AWS Cloud using AWS Direct Connect. AWS Outposts extends the existing VPC from the selected AWS Region to the customer data center. Selected AWS services required to run on premises (i.e., Amazon EKS) are available on AWS Outposts inside the Outpost Availability Zone, deployed inside a separate subnet.

Hybrid architecture description

The following example shows an Outpost deployment with the distributed Amazon EKS service extending to on-premises environments.

Figure 4: AWS Outpost with Amazon EKS Control Plane and Data Plane Architecture

Architecture

• The control plane for Amazon EKS remains in the Region, which means that, in the case of Amazon EKS, the Kubernetes primary node stays in the Availability Zone deployed in the Region (not on the Outpost).
• The Amazon EKS worker nodes are deployed on the Outpost, controlled by the primary node deployed in the Availability Zone.

Traffic flow

• The EKS control plane traffic between EKS, AWS metrics, and Amazon CloudWatch transits a third-party network (AWS Direct Connect / AWS Site-to-Site VPN to the AWS Region).
• The application/data traffic is isolated from the control plane and distributed between the Outpost and the local network.
• Distribution of AMIs (deployed on the Outpost) is driven by the central Amazon ECR in the Region; however, all images are cached
locally on the Outpost.

Load balancers

• The Application Load Balancer is supported on the Outpost as the only local Elastic Load Balancing option available.
• The Network Load Balancer and Classic Load Balancer stay in the Region, but targets deployed at AWS Outposts are supported (including by the Application Load Balancer).
• On-premises (inside the corporate DC) load balancers (e.g., F5 BIG-IP, NetScaler) can be deployed and routed via the Local Gateway (inside the AWS Outpost).

Hybrid cloud components selection

Customers can choose from a range of pre-validated Outposts configurations (Figure 2) offering a mix of EC2 and EBS capacity designed to meet a variety of application needs. AWS can also work with customers to create a customized configuration designed for their unique application needs. To arrive at the correct configuration, make sure to verify the deployment and operational parameters of the physical location selected for the AWS Outposts rack installation. The following example represents a set of parameters highlighting the facility, networking, and power requirements needed for location validation (selected parameter: example value):

Purchase option: All Upfront
Term: 3 years
Max on-premises power capacity: 20 kVA
Max weight: 2,500 lb
Networking uplink speed: 100 Gbps
Number of racks: 1
Average power draw per rack: 934
Constraint (power draw/weight): power draw
Total Outpost vCPU: 480
Total Outpost memory: 2,496 GiB

In addition to the minimum parameters, you should make deployment assumptions prior to any order, to minimize the performance and security impact on the existing infrastructure landscape, which deeply affects the existing cost of on-premises infrastructure (selected question: example assumption):

What is the speed of the
uplink ports from your Outposts Networking Devices (OND)? 40 or 100 Gbps.
How many uplinks per Outpost Networking Device (OND) will you use to connect the AWS Outpost to your network? 4 uplinks.
How will the Outpost service link (the Outpost control plane) access AWS services? The service link will access AWS over a Direct Connect public VIF.
Is there a firewall between the Outpost and the internet? Yes.

These assumptions, together with the selected components, lead to an architecture with a higher granularity of detail, influencing the overall cost of a hybrid cloud architecture deployment (Figure 5).

Figure 5: Hybrid cloud architecture deployment example

Hybrid cloud architecture cost breakdown

Hybrid cloud costs include multiple layers and components deployed across the AWS Cloud and the on-premises location. When you use AWS managed services on AWS Outposts, you are charged only for the services, based on usage by instance hour; this excludes the underlying EC2 instance and EBS storage charges. A breakdown of these services is shown in the next sections for a 3-year term with partial upfront, all upfront, and no upfront options (EC2 and EBS capacity). The price includes delivery, installation, servicing, and removal at the end of the term; there is no additional charge.

Outpost rack charges (customized example)

EC2 charges
• c5.24xlarge, 11 TB:
  o $7,148.67 monthly (no upfront)
  o $123,650.18 upfront, $3,434.73 monthly (partial upfront)
  o $239,761.41 upfront (all upfront)
• 1 m5.24xlarge, 11 TB:
  o $7,359.69 monthly (no upfront)
  o $127,167.06 upfront, $3,532.42 monthly (partial upfront)
  o $246,373.14 upfront (all upfront)

EBS charges
• The 11 TB EBS tier is priced at $0.30/GB monthly.

Conclusion

Although the number and types of services offered by AWS have increased dramatically, our philosophy on pricing has not changed: you pay as you go, pay for what you use, pay less as you use more, and pay even less when you reserve capacity. All of these options empower AWS customers to choose their preferred pricing model and increase the flexibility of their cost strategy.
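As an illustration of how these payment options trade off, the c5.24xlarge rack figures above can be spread over the 3-year term. This is a sketch: the figures are read as dollars-and-cents from the text, and the option labels follow the order the text gives them.

```python
TERM_MONTHS = 36  # 3-year term

def effective_monthly(upfront, monthly):
    """Spread any upfront payment evenly across the term."""
    return upfront / TERM_MONTHS + monthly

# c5.24xlarge / 11 TB Outpost rack figures from the text
no_upfront      = effective_monthly(0.00, 7148.67)
partial_upfront = effective_monthly(123650.18, 3434.73)
all_upfront     = effective_monthly(239761.41, 0.00)
```

Paying more upfront lowers the effective monthly rate: all upfront works out to roughly $6,660 per month, versus roughly $7,149 per month with no upfront payment.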
Projecting costs for a use case such as web application hosting can be challenging, because a solution typically uses multiple features across multiple AWS products, which in turn means there are more factors and purchase options to consider. The best way to estimate costs is to examine the fundamental characteristics of each AWS product, estimate your usage for each characteristic, and then map that usage to the prices posted on the website. You can use the AWS Pricing Calculator to estimate your monthly bill. The calculator provides a per-service cost breakdown as well as an aggregate monthly estimate. You can also use the calculator to see an estimate and breakdown of costs for common solutions. Remember, you can get started with most AWS services at no cost using the AWS Free Tier.

Contributors

Contributors to this document include:
• Vladimir Baranek, Principal Partner Solution Architect, Amazon Web Services
• Senthil Arumugam, Senior Partner Solutions Architect, Amazon Web Services
• Mihir Desai, Senior Partner Solutions Architect, Amazon Web Services

Further Reading

For additional information, see:
• AWS Pricing
• AWS Pricing Calculator
• AWS Free Tier
• AWS Cost Management
• AWS Cost and Usage Reports
• AWS Cloud Economics Center

Document Revisions

Date | Description
October 2020 | Updated and added service pricing details, options, calculations, and examples
June 2018 | First publication",General,consultant,Best Practices How_Cities_Can_Stop_Wasting_Money_Move_Faster_and_Innovate,How Cities Can Stop Wasting Money, Move Faster, and Innovate: Simplify and Streamline IT with AWS Cloud Computing. January 2016. This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Amazon Web Services – Stop Wasting Money, Move Faster, and Innovate – January 2016 – Page 3 of 16

© 2016 Amazon Web
Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Abstract 4
Stop Investing in Technology Infrastructure 5
Trend Toward the Cloud 6
Move Faster 7
Pick Your Project, Pick One Thing 8
Manage the Scope 10
Take Advantage of New Innovations 12
Engage Your Citizens in Crowdsourcing 12
Automate Critical Functions for Citizens 14
Start Your Journey 15
Contributors 16

Abstract

Local and regional governments around the world are using the cloud to transform services, improve their operations, and reach new horizons for citizen services. The Amazon Web Services (AWS) cloud enables data collection, analysis, and decision making for smarter cities. This whitepaper provides strategic considerations for local and regional governments as they identify which IT systems and applications to move to the cloud, with real examples that show how cities can stop wasting money, move faster, and innovate.

Stop Investing in Technology Infrastructure

Faced with pressure to innovate within fixed or shrinking
budgets while meeting aggressive timelines, governments are turning to Amazon Web Services (AWS) for the cost-effective, scalable, secure, and flexible infrastructure necessary to make a difference. The cloud provides rapid access to flexible and low-cost IT resources. With cloud computing, local and regional governments no longer need to make large upfront investments in hardware or spend a lot of time and money on the heavy lifting of managing hardware. "I wanted to move to a model where we can deliver more to our citizens and reduce the cost of delivering those services to them. I wanted a product line that has the ability to scale and grow with my department. AWS was an easy fit for us and the way we do business. By shifting from capex to opex, we can free up money and return those funds to areas that need it more—fire trucks, a bridge, or a sidewalk." – Chris Chiancone, CIO, City of McKinney. Instead, government agencies can provision exactly the right type and size of computing resources needed to power their newest bright idea and drive operational efficiencies with their IT budget. You can access as many resources as you need, almost instantly, and pay only for what you use. AWS helps agencies reduce overall IT costs in multiple ways. With cloud computing, you do not have to invest in infrastructure before you know what demand will be. You convert your capital expense into variable expense that fluctuates with demand, and you pay only for the resources used. AWS Cloud Computing: AWS offers a broad set of global compute, storage, database, analytics, application, and deployment services that help local and regional governments move faster, lower IT costs, and scale applications. Trend Toward the Cloud: Local and regional governments are adopting cloud computing; however, identifying the correct projects to migrate can be overwhelming. Applications that deliver increased return on investment (ROI)
through reduced operational costs, or that deliver increased business results, should be at the top of the priority list. Applications are either critical or strategic; if they do not fit into either category, they should be removed from the priority list. Instead, categorize applications that aren't strategic or critical as legacy applications, and determine if they need to be replaced or, in some cases, eliminated. Figure 1: Focus Areas for Successful Cloud Projects – save on costs and provide efficiencies over current solutions; improve outcomes of existing services; capitalize on the advantages of moving to the cloud. When considering the AWS cloud for citizen services, local and regional governments must first make sure that their IT plans align with their organizations' business model. Having a solid understanding of the core competencies of your organization will help you identify the areas that are best served through an external infrastructure such as the AWS cloud. The following example shows how a city is using the AWS cloud to deliver more with less and reduce costs. City of McKinney: City of McKinney, Texas, Turns to AWS to Deliver More Advanced Services for Less Money. The City of McKinney, Texas, about 15 miles north of Dallas and home to 155,000 people, was ranked the No. 1 Best Place to Live in 2014 by Money Magazine. The city's IT department is going all-in on AWS and uses the platform to run a wide range of services and applications, such as its land management and records management systems. By using AWS, the city's IT department can focus on delivering new and better services for its fast-growing population and city employees instead of spending resources buying and maintaining IT infrastructure. The City of McKinney chose AWS for its ability to scale and grow with the needs of the department; AWS provides an easy fit for the way they do business. Without having to own the infrastructure
the City of McKinney has the ability to use cloud resources to address business needs. By moving from a capex to an opex model, they can now return funds to critical city projects. Move Faster: AWS has helped over 2,000 government agencies around the world successfully identify and migrate applications to the AWS platform, resulting in significant business benefits. The following steps help governments identify, plan, and implement new citizen services that take advantage of current technology to boost efficiencies, save tax dollars, and deliver an excellent user experience. Business Benefits of Agile Development on AWS: • Trade capital expense for variable expense – Instead of having to invest heavily in data centers and servers before you know how you're going to use them, you can pay only when you consume computing resources, and pay only for how much you consume. • Benefit from massive economies of scale – By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translate into lower pay-as-you-go prices. • Stop guessing capacity – Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying an application, you might end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing, these problems go away: you can access as much or as little as you need, and scale up and down as required with only a few minutes' notice. • Increase speed and agility – In a cloud computing environment, new IT resources are only a click away, which means you reduce the time it takes to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the
cost and time it takes to experiment and develop is significantly lower. • Stop spending money on running and maintaining data centers – Focus on projects that differentiate your business, not the infrastructure. Cloud computing lets you focus on your own customers rather than on the heavy lifting of racking, stacking, and powering your data center. Pick Your Project: Pick One Thing. A common mistake is starting too many projects at once. A good first step is to identify a critical need and focus your development efforts on that service. Completing the following actions will help drive success of the new service throughout the development cycle: • Find the right resources • Get all team members on board during initial planning phases • Secure executive buy-in • Clearly communicate status through regularly scheduled meetings with all stakeholders. Be flexible throughout the project. Periodically take a fresh look to review the progress, and be open to changes that may need to be incorporated into the project plan. Many organizations choose to begin their cloud experiments either by creating a test environment for a new project (since it allows rapid prototyping of multiple options) or by solving a disaster recovery need, given that disaster recovery is not physically based in their location. Below is an example of an ideal first workload: the City of Asheville started with a disaster recovery (DR) solution as its first workload in the cloud. City of Asheville: The City of Asheville, NC, Uses AWS for Disaster Recovery. Located in the Blue Ridge and Great Smoky mountains in North Carolina, the City of Asheville attracts both tourists and businesses. Recent disasters like Hurricane Sandy led the city's IT department to search for an offsite DR solution. Working with AWS partner CloudVelox, the city used AWS to build an agile disaster recovery solution without the time and cost of investing in
an on-premises data center. The City of Asheville views the geographic diversity of AWS as the key component of a successful DR solution. Now the City of Asheville is also using AWS for economic development, using tools to develop great sites that attract large businesses and job development. Validate with a Proof of Concept: A proof of concept (POC) demonstrates that the service under consideration is financially viable. The overall objective of a POC is to find solutions to technical problems, such as how systems can be integrated or how throughput can be achieved with a given configuration. A POC should accomplish the following: • Validate the scope of the project – The project team can validate or invalidate assumptions made during the design phase to make sure that the service will meet critical requirements. • Highlight areas of concern – Technical teams have a clear view of potential problems during the development and test phase, with the opportunity to make functional changes before the service goes live. • Demonstrate a sense of momentum – Projects can sometimes be slow to start. By testing a small number of users acting in a "citizen role," the POC shows development progress and helps to establish whether the service satisfies critical requirements and delivers a good user experience. King County used a POC to realize cost savings in the use case below, validating the project's viability. King County: King County Saves $1 Million in First Year by Archiving Data in AWS Cloud. King County is the most populous county in Washington State, with about 1.9 million residents. The county needed a more efficient and cost-effective solution to replace a tape-based backup system used to store information generated by 17 different county agencies. It turned to AWS for long-term archiving and storage using Amazon Glacier and NetApp's AltaVault solution, which helps the county meet federal security
standards, including HIPAA and the Criminal Justice Information Services (CJIS) regulations. The county is saving about $1 million in the first year by not having to replace outdated servers, and projects an annual savings of about $200,000 from reduced operational costs related to data storage. King County selected AWS due to its mature services and rich feature set, which is highly available, secure, cost-competitive, and easy to use. King County has a long-term vision to shift to a virtual data center based on cloud computing. Manage the Scope: Defining the scope of your cloud migration or cloud application development project is key to success. Often, when developing new citizen services, there is a desire to address all citizen needs with a single project, while insufficient resources and changing definitions (requirements, scope, timeframes, purpose, deliverables, and lack of appropriate management support) add to the challenge. With a flexible cloud computing environment, it is possible to tightly focus on a single issue, develop an application that addresses that need, and then iterate on it with updates while the application is in flight. This can minimize the impact of these issues, allowing real-world piloting and improvements. Since processes are always linked to other processes, any unplanned change affects these other interfacing processes; with just a little structure and some checkpoints, most major changes in scope can be avoided. Start with a project that will involve a limited number of users. This will allow you to control and manage the service development and production process more efficiently and effectively. To get started, select a service and define its scope using the following actions: • Define terms related to the project • Involve the right people in defining the scope • Accurately define processes • Define process boundaries explicitly • Outline high-level interfaces
between processes • Conduct a health check on the process interfaces • Recognize that certain aspects of the project may still make it too large to manage. By minimizing the project scope, local and regional governments can reduce development and administrative costs as well as achieve time savings. Release a Minimally Viable Product and Iterate: When is the right time to release a citizen service? If released too soon, it may lack necessary functionality and deliver a poor user experience; if it is too elegant, developers may spend too much time on functionality. Releasing a minimally viable service and then iterating based on feedback can be an effective design process for citizen services. With this approach, you still guide the development, but an iterative process allows citizens to provide feedback to help shape the functionality before it is locked down. Only the local or regional government knows the "minimum." With no upfront costs and the ability to scale, the cloud allows this to happen quickly and easily, from anywhere, with device independence. By the time citizens access the site, IT has already made several iterations, so the public sees a more mature site. It's more productive to release early; this minimizes development work on functionality that citizens do not want. Most people are happy to help test the service to make sure that it meets their needs. Additionally, this stress testing will help uncover bugs that need to be fixed before the site goes into production. This will help meet the ultimate goal: an excellent user experience. The City of Boston is an example of a city that released a minimally viable product and continued to iterate on it to arrive at the best version for the needs of its citizens. City of Boston Quickly Identifies Road Conditions That Need Immediate Attention and Repair: The City of Boston, with technology partner Connected Bits, has
created the Street Bump program to drive innovative, scalable technology to tackle tough local government challenges. They are using AWS to propel machine learning with an app that uses a smartphone's sensors, including GPS and accelerometers, to capture enough (big) data to identify bumps and disturbances that motorists experience as they drive throughout the city. The big data collected helps Boston's Public Works Department better understand the roads, streets, and areas that require immediate attention and long-term repair. They chose AWS to create a scalable, open, and robust infrastructure that allows this information to flow to and from city staff via the Open311 API. This solution was created as a large multi-tenant software-as-a-service platform, so other cities can leverage the same repository, creating one data store for all cities. Several other cities are interested in testing the next version. Take Advantage of New Innovations. Engage Your Citizens in Crowdsourcing: The idea of soliciting customer input is not new. Crowdsourcing has become an important business approach for defining solutions to problems. By tapping into the collective intelligence of the public, local and regional governments can validate service requirements prior to a lengthy design phase. Crowdsourcing can improve both the productivity and creativity of your IT staff while minimizing design, development, and testing expenses. Let the citizens do the work—after all, they are the ones who will be using the service. Make sure it is designed to meet their requirements. Two examples of using crowdsourcing to provide real-time updates to citizens are Moovit and Transport for London. Moovit: With AWS, Moovit Now Processes 85 Million Requests Each Day. Moovit, headquartered in Israel, is redefining the transit experience by giving people the real-time information they need to get to places on time.
With schedules, trip planning, navigation, and crowdsourced reports, Moovit guides transit riders to the best, most efficient routes and makes it easy for locals and visitors to navigate the world's cities. Since launching in 2012, Moovit's free, award-winning app for iPhone, Android, and Windows Phone serves nearly 10 million users and is adding more than a million new users every month. The app is available across 400 cities in 35 countries, including the US, Canada, France, Spain, Italy, Brazil, and the UK. Moovit's goal was to continue to add metros quickly, and it needed a solution that would scale just as fast. Moovit now uses AWS to host and deliver services for its public transportation trip-planning app, using Amazon CloudFront to rapidly deliver information to its users. The company made the decision to use AWS because it has servers that can handle the app's heavy request volume and different types of information, because it supports multiple databases, including SQL and NoSQL, and because it includes storage options. Transport for London: Transport for London Creates an Open Data Ecosystem with Amazon Web Services. Transport for London (TfL) has been running its flagship tfl.gov.uk website on AWS for over a year and serves over 3 million page views to between 600,000 and 700,000 visitors a day, with 54% of visits coming from mobile devices. TfL has been able to scale interactive services to this level (its previous site was static) by leveraging AWS services as an elastic buffer between its back-office services and the 76% of London's 8.4 million population that uses the site regularly to plan their journeys. Enhanced personalization for customers is now available on the site; in parallel, the department is fostering closer relationships with the third-party app and portal providers that contribute digital solutions of their own for London's travelers based on TfL's (openly licensed) transport data. TfL has chosen to
release this data under an open data license, which has helped to establish an ecosystem of third-party developers also working on digital travel-related projects. Some 6,000 developers are now engaged in digital projects using TfL's anonymized open data, spawning 360 mobile apps to date. Automate Critical Functions for Citizens: People are more connected to each other than ever before, and the increased connectivity of devices creates new opportunities for the public sector to truly become hubs of innovation, driving technology solutions to help improve citizens' lives. The Internet of Things (IoT) is the ever-expanding network of physical "things" that can connect to the Internet, and the information that they transfer without requiring human interaction. "Things" in the IoT sense refers to a wide variety of devices embedded with electronics, software, sensors, and network connectivity, which enable them to collect and exchange data over the Internet. AWS is working with local and regional governments to apply IoT capabilities and solutions to the opportunities and challenges that face our customers. While the possibilities for IoT are virtually endless, the following diagram highlights use cases we are discussing with customers today. London City Airport: IoT Technologies Enhance Customer Experience at London City Airport. The 'Smart Airport Experience' project was funded by the government-run Technology Strategy Board in the UK and implemented at London City Airport. Figure 2: Internet of Things Use Cases for Local and Regional Governments – Transportation: parking solutions; connected smart intersections; smart routing/navigation; fleet tracking/monitoring. Public Safety: crowd control/management; officer safety; emergency notification; security solutions. Health & Well-Being: air/particle quality; water control management; trash/garbage collection; lighting control; water metering. City Services: infrastructure
monitoring; building automation systems. The 'Smart Airport Experience' project worked with a technology team led by Living PlanIT SA. The goal of the project was to demonstrate how Internet of Things technologies could be used both to enhance customer experiences and to improve operational efficiency at a popular business airport that already offers fast check-in-to-boarding times. The project used the Living PlanIT Urban Operating System (UOS™), hosted in an AWS environment, as the backbone for real-time data collection, processing, analytics, marshaling, and event management. Start Your Journey: AWS provides a number of important benefits to local and regional governments as the platform for running citizen services and infrastructure programs. It provides a range of flexible, cost-effective, scalable, elastic, and secure capabilities that you can use to manage citizen data in the AWS cloud. Work with AWS Government & Education Experts: Your dedicated Government and Education team includes solutions architects, business developers, and partner managers ready to help you get started solving business problems with AWS. Get in touch with us to start building solutions » Support: AWS customers can choose from a range of support options, including our hands-on support for enterprise IT environments. Learn more about AWS support options » Professional Services: AWS has a world-class professional services team that can help you get more from your cloud deployment. It's easy to build solutions using our toolsets, but when you need help building complex solutions or migrating from an on-premises environment, we're there. Talk to your Government & Education Experts to learn more about professional services from AWS » Contributors: The following individuals and organizations contributed to this document: • Frank DiGiammarino, General Manager, AWS State and Local
Government • Carina Veksler, Public Sector Solutions, AWS Public Sector SalesVar",General,consultant,Best Practices Hybrid_Cloud_DNS_Solutions_for_Amazon_VPC,"This paper has been archived. For the latest technical content, refer to the HTML version: https://docs.aws.amazon.com/whitepapers/latest/hybrid-cloud-dns-options-for-vpc/hybrid-cloud-dns-options-for-vpc.html Hybrid Cloud DNS Options for Amazon VPC, November 2019. Notices: Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided ""as is"" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. © 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved. Contents: Introduction; Key Concepts; Constraints; Solutions (Route 53 Resolver Endpoints and Forwarding Rules; Secondary DNS in an Amazon VPC; Decentralized Conditional Forwarders; Scaling DNS Management Across Multiple Accounts and VPCs; Selecting the Best Solution for Your Organization); Additional Considerations (DNS Logging; Custom EC2 DNS Resolver; Microsoft Windows Instances; Unbound – Additional
Options; DNS Forwarder – Forward First; DNS Server Resiliency); Conclusion; Contributors; Document Revisions. Abstract: The Domain Name System (DNS) is a foundational element of the internet that underpins many services offered by Amazon Web Services (AWS). Amazon Route 53 Resolver provides resolution with DNS for public domain names, Amazon Virtual Private Cloud (Amazon VPC), and Route 53 private hosted zones. This whitepaper includes solutions and considerations for advanced DNS architectures to help customers who have workloads with unique DNS requirements, or on-premises resources that require DNS resolution between on-premises data centers and Amazon EC2 instances in Amazon VPCs. Introduction: Many organizations have both on-premises resources and resources in the cloud. DNS name resolution is essential for on-premises and cloud-based resources. For customers with hybrid workloads, which include on-premises and cloud-based resources, extra steps are necessary to configure DNS to work seamlessly across both environments. AWS services that require name resolution could include Elastic Load Balancing (ELB), Amazon Relational Database Service (Amazon RDS), Amazon Redshift, and Amazon Elastic Compute Cloud (Amazon EC2). Route 53 Resolver, which is available in all Amazon VPCs, responds to DNS queries for public records, Amazon VPC resources, and Route 53 private hosted zones (PHZs). You can configure it to forward queries to customer-managed authoritative DNS servers hosted on premises, and to respond to DNS queries that your on-premises DNS servers forward to your Amazon VPC. This whitepaper illustrates several
different architectures that you can implement on AWS using native and custom-built solutions. These architectures meet the need for name resolution of on-premises infrastructure from your Amazon VPC and address constraints that have only been partially addressed by previously published solutions. Key Concepts: Before we dive into the solutions, it is important to establish a few concepts and configuration options that we'll reference throughout this whitepaper. Amazon VPC DHCP Options Set: The Dynamic Host Configuration Protocol (DHCP) provides a standard for passing configuration information to hosts on a TCP/IP network. The options field of a DHCP message contains configuration parameters such as domain-name-servers, domain-name, ntp-servers, and netbios-node-type. In any Amazon VPC, you can create DHCP options sets and specify up to four DNS servers. Currently, these options sets are created and applied per VPC, which means that you can't have a DNS server list at the Availability Zone level. For more information about DHCP options sets and configuration, see Overview of DHCP Option Sets in the Amazon VPC Developer Guide.1 Amazon Route 53 Resolver: Route 53 Resolver, also known as the Amazon DNS Server or Amazon Provided DNS, provides full public DNS resolution, with additional resolution for internal records for the VPC and customer-defined Route 53 private DNS records.2 Route 53 Resolver maps to a DNS server running on a reserved IP address at the base of the VPC network range, plus two. For example, the DNS server on a 10.0.0.0/16 network is located at 10.0.0.2. For VPCs with multiple CIDR blocks, the DNS server IP address is located in the primary CIDR block. Elastic Network Interfaces (ENIs): Elastic network interfaces (referred to as network interfaces in the Amazon EC2 console) are
virtual network interfaces that you can attach to an instance in a VPC. They're available only for instances running in a VPC. A virtual network interface, like any network adapter, is the interface that a device uses to connect to a network. Each instance in a VPC, depending on the instance type, can have multiple network interfaces attached to it. For more information, see Elastic Network Interfaces in the Amazon EC2 User Guide for Linux Instances.3 How ENIs Work for Route 53 Resolver: A Route 53 Resolver endpoint is made up of one or more ENIs, which reside in your VPC. Each endpoint can only forward queries in a single direction. Inbound endpoints are available as forwarding targets for DNS resolvers and use an IP address from the subnet space of the VPC to which they are attached. Queries forwarded to these endpoints have the DNS view of the VPC to which the endpoints are attached; that is, if there are names local to the VPC, such as AWS PrivateLink endpoints, EFS clusters, EKS clusters, associated PHZs, and so on, the query can resolve any of those names. This is also true for any VPCs peered with the VPC that owns the endpoint. Outbound endpoints serve as the path through which all queries are forwarded out of the VPC. Outbound endpoints are directly attached to the owner VPC and indirectly associated with other VPCs via rules; that is, if a forwarding rule is shared with a VPC that does not own the outbound endpoint, all queries that match the forwarding rule pass through to the owner VPC and are then forwarded out. It is important to realize this when using rules to forward queries from one VPC to another: the outbound endpoint may reside in an entirely different Availability Zone than the VPC that originally sent the query, and there is potential for an Availability Zone outage in the owner VPC to impact query
resolution in the VPC using the forwarding rule. This can be avoided by deploying outbound endpoints in multiple Availability Zones. Figure 1: Route 53 Resolver with Outbound Endpoint. See Getting Started with Route 53 Resolver in the Amazon Route 53 Developer Guide for more information. Route 53 Private Hosted Zone: A Route 53 private hosted zone is a container that holds DNS records that are visible to one or more VPCs. VPCs can be associated with the private hosted zone at the time of (or after) the creation of the private hosted zone. For more information, see Working with Private Hosted Zones in the Amazon Route 53 Developer Guide.4 Connection Tracking: By default, Amazon EC2 security groups use connection tracking to track information about traffic to and from the instance.5 Security group rules are applied based on the connection state of the traffic to determine if the traffic is allowed or denied. This allows security groups to be stateful, which means that responses to inbound traffic are allowed to flow out of the instance regardless of outbound security group rules, and vice versa. Linux Resolver: The stub resolver in Linux is responsible for initiating and sequencing DNS queries that ultimately lead to a full resolution. A resolver is configured via the configuration file /etc/resolv.conf. The resolver queries the DNS servers listed in resolv.conf in the order they are listed. The following is a sample resolv.conf:
options timeout:1
nameserver 10.0.0.10
nameserver 10.0.1.10
Linux DHCP Client: The DHCP client on Linux provides the option to customize the set of DNS servers that the instance uses for DNS resolution. The DNS servers provided in the AWS DHCP options are picked up by this DHCP client to further update resolv.conf with a list of DNS server IP
addresses. In addition, you can use the supersede DHCP client option to replace the DNS servers provided by the AWS DHCP options set with a static list of DNS servers. You do this by modifying the DHCP client configuration file, /etc/dhcp/dhclient.conf:
interface ""eth0"" {
  supersede domain-name-servers 10.0.2.10, 10.0.3.10;
}
This sample statement replaces DNS servers 10.0.0.10 and 10.0.1.10 in the resolv.conf sample with 10.0.2.10 and 10.0.3.10. We discuss the use of this option in the Zonal Forwarders Using Supersede solution. Conditional Forwarder – Unbound: A conditional forwarder examines the DNS queries received from instances and forwards them to different DNS servers based on rules set in its configuration, typically using the domain name of the query to select the forwarder. In a hybrid architecture, conditional forwarders play a vital role in bridging name resolution between on-premises and cloud resources. For this particular solution we use Unbound, which is a recursive and caching DNS resolver in addition to a conditional forwarder. Depending on your requirements, this option can act as an alternative or hybrid to forwarding rules in Amazon Route 53 Resolver. For instructions on how to set up an Unbound DNS server, see the How to Set Up DNS Resolution Between On-Premises Networks and AWS by Using Unbound blog post on the AWS Security Blog.6 The following is a sample unbound.conf:
forward-zone:
  name: "".""
  forward-addr: 10.0.0.2 # Amazon Provided DNS
forward-zone:
  name: ""example.corp""
  forward-addr: 192.168.1.10 # On-premises DNS
In this sample configuration, queries to example.corp are forwarded to the on-premises DNS server, and the rest are forwarded to Route 53 Resolver.
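The forwarding decision a conditional forwarder like Unbound makes can be thought of as a longest-suffix match over the configured zones, with the root zone "." as the fallback. The following Python sketch is illustrative only: the zone map mirrors the sample configuration above, the `select_forwarder` helper is hypothetical, and this is not Unbound's actual implementation.

```python
# Illustrative sketch of conditional-forwarder rule selection (not Unbound's
# implementation): pick the forwarder whose configured zone is the longest
# suffix of the queried name; the root zone '.' matches everything.
FORWARD_ZONES = {
    '.': '10.0.0.2',                 # Amazon Provided DNS (default)
    'example.corp': '192.168.1.10',  # on-premises DNS
}

def select_forwarder(qname):
    '''Return the forwarder IP address for a DNS query name.'''
    name = qname.rstrip('.').lower()
    best_zone, best_len = '.', -1
    for zone in FORWARD_ZONES:
        if zone == '.':
            continue  # root is the fallback, not a suffix match
        z = zone.rstrip('.').lower()
        # Match the zone apex itself or any name ending in '.<zone>'
        if (name == z or name.endswith('.' + z)) and len(z) > best_len:
            best_zone, best_len = zone, len(z)
    return FORWARD_ZONES[best_zone]

print(select_forwarder('db.example.corp'))  # matches example.corp rule
print(select_forwarder('www.amazon.com'))   # falls through to 10.0.0.2
```

A real resolver also handles caching, recursion, and DNSSEC; the point here is only that rule selection is per-query and suffix-based, which is why a single conditional forwarder can split traffic between Route 53 Resolver and an on-premises DNS server.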
Constraints

In addition to the concepts established so far, it is important that you are aware of some constraints that are key in shaping the rest of this whitepaper and its solutions.

Packets per Second (PPS) per Elastic Network Interface Limit

Each network interface in an Amazon VPC has a hard limit of 1,024 packets that it can send to the Amazon Provided DNS server every second. Therefore, any computing resource on AWS that has a network interface attached to it and sends traffic to the Amazon DNS resolver (for example, an Amazon EC2 instance or AWS Lambda function) falls under this hard limit. In this whitepaper we refer to this limit as the packets per second (PPS) per network interface limit. When you design a scalable solution for name resolution, you must consider this limit, because failure to do so can result in queries to Route 53 Resolver going unanswered if the limit is reached. This limit is a key factor in the solutions proposed in this whitepaper. The limit is higher for Route 53 Resolver endpoints, which support approximately 10,000 queries per second (QPS) per elastic network interface.

Connection Tracking

The number of simultaneous stateful connections that an Amazon EC2 security group can support by default is so large that the majority of standard TCP-based workloads never encounter any issues with it. In rare cases, customers with restrictive security group policies and applications that create a large number of concurrent connections (for instance, a self-managed recursive DNS server) may exhaust all simultaneous connection tracking resources. When that limit is exceeded, subsequent connections fail silently. In such cases, we recommend that you set up a security group that disables connection tracking. To do this, set up permissive rules on both inbound and outbound connections.

Linux Resolver

The default
maximum number of DNS servers that you can specify in the resolv.conf configuration file of a Linux resolver is three, which means it isn't useful to specify four DNS servers in the DHCP options set, because the additional DNS server won't be used. This limit places an upper boundary on some of the solutions discussed in this whitepaper. It is also key to note that different operating systems can handle the assignment and failover of DNS queries differently.

Solutions

The solutions in this whitepaper present options and best practices for architecting a DNS solution in the hybrid cloud, keeping in mind criteria like ease of implementation, management overhead, cost, resilience, and the distribution of DNS queries directed toward the Route 53 Resolver. We cover the following solutions:

• Route 53 Resolver Endpoints and Forwarding Rules – This solution focuses on using Route 53 Resolver endpoints to forward traffic between your Amazon VPC and on-premises data center over both AWS Direct Connect and Amazon VPN.
• Secondary DNS in an Amazon VPC – This solution focuses on using Route 53 to mirror on-premises DNS zones that can then be natively resolved from within VPCs, without the need for additional DNS forwarding resources.
• Decentralized Conditional Forwarders – This solution uses distributed conditional forwarders and provides two options for using them efficiently. While we use Unbound as a conditional forwarder in some of these solutions, you can use any DNS server that supports conditional forwarding with similar features.
• Scaling DNS Management Across Multiple Accounts and VPCs – This solution walks through options for managing DNS names as you scale your hybrid DNS solution.
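As a back-of-the-envelope check against the PPS constraint discussed in Constraints, the arithmetic below uses the limits stated in this paper (1,024 PPS per network interface toward the Amazon Provided DNS; roughly 10,000 QPS per Resolver endpoint ENI). The workload numbers are hypothetical, and this is only a sizing sketch.

```python
# Rough headroom check against the DNS limits discussed in Constraints.
# The limit values are the ones stated in this paper; the workload
# numbers (query rate, cache hit ratio) are hypothetical.

PPS_PER_ENI = 1024            # packets/sec per ENI to the Amazon Provided DNS
QPS_PER_ENDPOINT_ENI = 10000  # approx. queries/sec per Resolver endpoint ENI

def headroom(query_rate, cache_hit_ratio, limit):
    """Fraction of the limit consumed after local caching absorbs hits."""
    outbound = query_rate * (1.0 - cache_hit_ratio)
    return outbound / limit

# A hypothetical instance issuing 2,000 queries/sec with an 80% cache hit rate:
u = headroom(2000, 0.80, PPS_PER_ENI)
print(f"{u:.0%} of the per-ENI PPS limit")  # prints "39% of the per-ENI PPS limit"
```

The same function can be reused with QPS_PER_ENDPOINT_ENI to size Resolver endpoints.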
Route 53 Resolver Endpoints and Forwarding Rules

In November 2018, Route 53 launched Route 53 Resolver endpoints and forwarding rules, which allow you to forward traffic between your Amazon VPC and on-premises data center without having to deploy additional DNS servers. For more detailed information about Amazon Route 53 Resolver, see the Amazon Route 53 Resolver Developer Guide. This solution uses the following features of Route 53 Resolver to make hybrid resolution possible between on premises and AWS: inbound endpoints, outbound endpoints, and forwarding rules.

Use Case
• Customers that must forward queries between an Amazon VPC and an on-premises data center
• Customers that have one or more VPCs connected to an on-premises environment via AWS Direct Connect or Amazon VPN

Advantages
• Low management overhead: you only have to manage forwarding rules and monitor query limits via CloudWatch alarms
• Uses the highly available AWS backbone

Limitations
• Approximately 10,000 QPS limit per elastic network interface on Resolver endpoints
• No logging visibility on queries answered by Resolver
• Query source IP address is replaced with the IP address of the endpoint from which it is forwarded

Figure 2 – Route 53 Resolver Endpoints and Forwarding Rules

Description

1. Private hosted zones are associated with a shared services VPC.
2. Create forwarding rules in the on-premises DNS server for Route 53 names you want to resolve from on premises. These rules use an inbound endpoint as their destination.
3. Create Route 53 Resolver rules for names you want to resolve on premises from your Amazon VPC. These rules use an outbound endpoint and can be shared with other VPCs through Resource
Access Manager.

Considerations:
• Though you can have multiple VPCs across many accounts, only a single, Availability Zone-redundant set of inbound endpoints is required in your shared services VPC.
• You only need one outbound endpoint for multiple VPCs. You don't have to create an outbound endpoint in each VPC. Instead, you share an outbound endpoint by sharing the rule(s) created for that endpoint with additional accounts using Resource Access Manager (RAM).
• Endpoints cannot be used across Regions.

Best Practices:
• Manually specify the private IP addresses of the inbound Route 53 Resolver endpoint while creating it, as opposed to having the Resolver choose a random IP address from the subnet. This way, in case of an accidental deletion of the endpoint, you can reuse those IP addresses.
• When you create the inbound or outbound endpoints, we recommend that you use at least two subnets in different Availability Zones for high availability. For the inbound resolver, make sure that you use both endpoint IP addresses in your on-premises DNS resolver so that the load can be spread across all available IP addresses.
• For environments that require a high number of queries per second, be aware that there is a limit of 10,000 queries per second per elastic network interface in an endpoint. More ENIs can be added to an endpoint to scale QPS.
• We publish InboundQueryVolume and OutboundQueryVolume metrics via CloudWatch and recommend that you set up monitoring rules that alert you if the threshold exceeds a certain value (for example, 80% of 10,000 QPS).
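The suggested alarm (80% of the roughly 10,000 QPS per endpoint ENI) can be sketched as the parameters one might pass to CloudWatch's PutMetricAlarm. The Namespace and Dimensions values, the 5-minute metric period, and the endpoint ID below are assumptions for illustration; verify them against the current Route 53 Resolver metrics documentation before use.

```python
# Sketch of CloudWatch alarm parameters for Resolver endpoint query volume.
# The namespace, dimension name, endpoint ID, and 5-minute metric period
# are assumptions; verify them against the current documentation.

QPS_LIMIT = 10_000    # approx. per-ENI limit stated in this paper
PERIOD_SECONDS = 300  # metric granularity assumed here
ALARM_FRACTION = 0.8  # alert at 80% of the limit

# InboundQueryVolume is a count over the period, so scale QPS by the period.
threshold = QPS_LIMIT * ALARM_FRACTION * PERIOD_SECONDS

alarm_params = {
    "AlarmName": "resolver-inbound-query-volume-high",
    "Namespace": "AWS/Route53Resolver",      # assumption
    "MetricName": "InboundQueryVolume",
    "Dimensions": [{"Name": "EndpointId", "Value": "rslvr-in-EXAMPLE"}],  # hypothetical ID
    "Statistic": "Sum",
    "Period": PERIOD_SECONDS,
    "EvaluationPeriods": 1,
    "Threshold": threshold,
    "ComparisonOperator": "GreaterThanThreshold",
}
print(alarm_params["Threshold"])  # 2400000.0
```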
Secondary DNS in an Amazon VPC

Alternatively, rather than deploying and managing additional DNS infrastructure running on EC2 instances to handle DNS requests from VPCs or on premises, you can still benefit from using AWS managed services. This approach uses Route 53 private hosted zones with AWS Lambda and Amazon CloudWatch Events to mirror on-premises DNS zones. These can then be natively resolved from within a VPC without conditional forwarding and without a real-time dependency on on-premises DNS servers. For the full solution, see Powering Secondary DNS in a VPC using AWS Lambda and Amazon Route 53 Private Hosted Zones on the AWS Compute Blog.[7] The following table outlines this solution:

Table 1 – Solution Highlights – Secondary DNS in an Amazon VPC

Use Case
• Customers that cannot use the native Route 53 Resolver forwarding features
• Customers that don't want to build or manage conditional forwarder instances
• Customers that do not have in-house DevOps expertise
• Infrequently changing DNS environment

Advantages
• Low management overhead
• Low operational cost
• Highly resilient DNS infrastructure
• Low possibility for instances to breach the PPS per network interface limit

Limitations
• On-premises instances can't query Route 53 Resolver directly for Amazon EC2 hostnames without creating a forwarding target
• Works well only when on-premises DNS server records must be replicated to Route 53
• Requires the on-premises DNS server to support full zone transfer queries
• Requires working within the Route 53 API limits

Figure 3 – Secondary DNS running on Route 53 private hosted zones

Description

1. CloudWatch Events invokes a Lambda function. The scheduled event is configured based on a JSON string that is passed to the Lambda function and that sets a number of parameters, including
the DNS domain, source DNS server, and Route 53 zone ID. This configuration allows you to reuse a single Lambda function for multiple zones.
2. A new network interface is created in the VPC's subnets and attached to the Lambda function. This allows the function to access any internal network resources based on the security group that you defined.
3. The Lambda function transfers the source DNS zone from the IP address specified in the JSON parameters. Configure DNS servers to allow full zone transfers, which happen over TCP and UDP port 53.
4. The Route 53 DNS zone is retrieved using the AWS API.
5. The two zone files are compared, and the resulting differences are returned as a set of actions to be performed using Route 53.
6. Updates to the Route 53 zone are made using the AWS API, and then the Start of Authority (SOA) is updated to match the source version.

There are several benefits to using this approach. Aside from the initial solution setup, there is little management overhead after the environment is set up, as the solution continues working without any manual intervention. Also, there is no client-side setup, because the DHCP options that you set configure each instance to use Route 53 Resolver (also known as AmazonProvidedDNS) by default. This solution can be one of the more scalable hybrid DNS solutions in a VPC, because queries for any domain go directly to Route 53 Resolver from each instance and then to the Amazon Route 53 infrastructure. This ensures that each instance uses its own PPS per network interface limit. Implementing this solution in one VPC also has no impact on other VPCs, as you can choose to associate the Route 53 hosted zone with multiple VPCs. The possibility of failure of a DNS component is lower because of the highly available and reliable Amazon Route 53 infrastructure.
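Step 5 above — comparing the two zones and turning the differences into Route 53 change actions — can be sketched as follows. The record representation is simplified and hypothetical (the real solution works with full resource record sets), so treat this as an illustration of the diffing logic only.

```python
# Minimal sketch of step 5: diff the source zone against the Route 53
# zone and emit Route 53-style change actions (UPSERT for new/changed
# records, DELETE for records removed on premises).
# Records are simplified to {(name, type): [values]}.

def zone_diff(source, route53):
    changes = []
    for key, values in source.items():
        if route53.get(key) != values:
            changes.append({"Action": "UPSERT", "Record": key, "Values": values})
    for key in route53:
        if key not in source:
            changes.append({"Action": "DELETE", "Record": key, "Values": route53[key]})
    return changes

source = {("host1.example.corp", "A"): ["192.168.1.20"],
          ("host2.example.corp", "A"): ["192.168.1.21"]}
route53 = {("host1.example.corp", "A"): ["192.168.1.99"],   # stale value
           ("old.example.corp", "A"): ["192.168.1.50"]}     # removed on premises

for c in zone_diff(source, route53):
    print(c["Action"], c["Record"][0])
```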
Note, however, that there is a hard limit of 1,000 private hosted zone associations. The main disadvantage of this solution is that it requires full zone transfer queries (AXFR), so it isn't appropriate for customers that run DNS servers that don't support AXFR. Also, because this solution involves working with the Route 53 APIs, you must stay within the Route 53 API limits.[8] This solution does not provide a method for resolving EC2 records from on premises directly.

Decentralized Conditional Forwarders

While the Route 53 solution enables you to avoid the complexities of running a hybrid DNS architecture, you might still prefer to configure your DNS infrastructure to use conditional forwarders within your VPCs. One reason you may choose to run your own forwarders is to log DNS queries; see DNS Logging (under Additional Considerations) to determine if this is right for you.

There are two options under this solution. The first option, called highly distributed forwarders, discusses how to run forwarders on every instance of the environment, trying to mimic the scale that the Route 53 solution provides. The second option, called zonal forwarders using supersede, presents a strategy of localizing forwarders to a specific Availability Zone and its instances. The following table highlights these two options, followed by their detailed discussion:

Table 2 – Solution highlights – Decentralized conditional forwarders

Highly Distributed Forwarders
• Use case: Workload generates high volumes of DNS queries; infrequently changing DNS environment
• Advantages: Resilient DNS infrastructure; low possibility for instances to breach the PPS per network interface limit
• Limitations: Complex setup and management; investment in relevant skill sets for configuration management

Zonal Forwarders Using Supersede
• Use case: Customers with an existing set of conditional forwarders; environment that doesn't generate a high volume of DNS traffic
• Advantages: Fewer forwarders to manage; zonal isolation provides better overall resiliency
• Limitations: Complex setup and management as the DNS environment grows; possibility of breaching the PPS per network interface limit is higher than with the highly distributed option

Highly Distributed Forwarders

This option decentralizes forwarders and runs a small, lightweight DNS forwarder on every instance in the environment. The forwarder is configured to serve the DNS needs of only the instance it is running on, which eliminates bottlenecks and dependency on a central set of instances. Given the implementation and management complexity of this solution, we recommend that you use a mature configuration management solution. The following diagram shows how this solution functions in a single VPC:

Figure 4 – Distributed forwarders in a single VPC

Description

1. Each instance in the VPC runs its own conditional forwarder (Unbound). The resolv.conf has a single DNS server entry pointing to 127.0.0.1. A straightforward approach for modifying resolv.conf is to create a DHCP options set that has 127.0.0.1 as the domain name server value. You may alternatively choose to overwrite any existing DHCP options settings using the supersede option in dhclient.conf.
2. Records requested for on-premises hosted zones are forwarded to the on-premises DNS server by the forwarder running locally on the instance.
3. Any requests that don't match the on-premises forwarding filters are forwarded to Route 53 Resolver.

Similar to the Route 53 solution, this solution allows every single instance to use the full 1,024 PPS per network interface limit to Route 53 Resolver.
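An on-instance forwarder for this option could look like the following sketch, mirroring the earlier unbound.conf sample; the interface, access-control, and zone values are illustrative only.

```
server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
forward-zone:
    name: "example.corp"
    forward-addr: 192.168.1.10   # on-premises DNS
forward-zone:
    name: "."
    forward-addr: 10.0.0.2       # Amazon Provided DNS
```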
The solution also scales up as additional instances are added and works the same way regardless of whether you're using a single- or multi-VPC setup. The DNS infrastructure is low latency, and the failure of a DNS component such as an individual forwarder does not affect the entire fleet, due to the decoupled nature of the design.

This solution poses implementation and management complexities, especially as the environment grows. You can manage and modify configuration files at instance launch using Amazon EC2 user data.[9] After instance launch, you can use the Amazon EC2 Run Command[10] or AWS OpsWorks for Chef Automate[11] to deploy and maintain your configuration files. The implementation of these solutions is outside the scope of this whitepaper, but it is important to know that they provide the flexibility and power to manage configuration files and their state at large scale. Greater flexibility brings with it the challenge of greater complexity. Consider the additional operational costs, including the need to have an in-house DevOps workforce.

Zonal Forwarders Using Supersede

If you don't want to manage and implement a forwarder on each instance of your environment, and you want to have conditional forwarder instances as the centerpiece of your hybrid DNS architecture, you should consider this option. For this option, you localize instances in an Availability Zone to forward queries only to conditional forwarders in the same Availability Zone of the Amazon VPC. For reasons discussed in the Linux Resolver section, each instance can have up to three DNS servers in its resolv.conf, as shown in the following diagram.
Figure 5 – Zonal forwarders with supersede option

Description

1. Instances in Availability Zone A are configured using the supersede option, which uses a list of DNS forwarders that are local to that Availability Zone. To avoid burdening any specific forwarder in the Availability Zone, randomize the order of the DNS forwarders across instances in the Availability Zone.
2. Records requested for on-premises hosted zones are directly forwarded to the on-premises DNS server by the DNS forwarder.
3. Any requests that don't match the on-premises forwarding filters are forwarded to the Route 53 Resolver. This illustration doesn't depict the actual flow of traffic; it's presented for representation purposes only.
4. Similarly, other Availability Zones in the VPC can be set up to use their own set of local conditional forwarders that serve the respective Availability Zone. You determine the number of conditional forwarders serving an Availability Zone based on your needs and the importance of the environment.

If one of the three instances in Availability Zone A fails, the other two instances continue serving DNS traffic. It is important to note that placement groups must be used to guarantee that the forwarders are not running on the same parent hardware, which would be a single point of failure. To ensure separate parent hardware, you can use Amazon Elastic Compute Cloud (Amazon EC2) placement groups to avoid this type of failure domain. If all three DNS forwarders in Availability Zone A fail at the same time, the instances in Availability Zone A fail to resolve any DNS requests, because they are unaware of the presence of forwarders in other Availability Zones. This prevents the impact from spreading to multiple Availability Zones and ensures that other Availability Zones continue to function normally.
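The per-instance randomization recommended in step 1 can be sketched as follows; the forwarder addresses are hypothetical, and in practice the chosen list would be written into each instance's resolv.conf via the supersede option.

```python
# Sketch: give each instance up to three AZ-local forwarders in random
# order, so that no single forwarder is first in every instance's
# resolv.conf. Forwarder IPs are hypothetical.
import random

def pick_resolvers(az_forwarders, k=3):
    """Return up to k forwarders from this AZ, in a per-instance random order."""
    k = min(k, len(az_forwarders))
    return random.sample(az_forwarders, k)

az_a_forwarders = ["10.0.0.21", "10.0.0.22", "10.0.0.23", "10.0.0.24"]
resolv_lines = [f"nameserver {ip}" for ip in pick_resolvers(az_a_forwarders)]
print("\n".join(resolv_lines))
```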
Currently, the DHCP options that you set apply to the VPC as a whole. Therefore, you must self-manage the list of DNS servers that are local to instances in each Availability Zone. In addition, we recommend that you don't use the same order of DNS servers in resolv.conf for all instances in the Availability Zone, because doing so would burden the first server in the list and push it closer to breaching the PPS per network interface limit. While each Linux instance can only have three resolvers, if you're managing the resolver list yourself you can have as many resolvers as you wish per Availability Zone. Each instance should be configured with three random resolvers from the resolver list.

Scaling DNS Management Across Multiple Accounts and VPCs

In alignment with AWS best practices, many organizations build out a cloud environment with multiple accounts. Whether you're using shared VPCs, with multiple accounts hosted in a single VPC to share resources, or the more traditional model where a VPC is tied to a single account, there are architectural considerations that must be made. This whitepaper focuses on the more traditional model. For more information on shared VPCs, see Working with Shared VPCs.

While having multiple accounts and VPCs helps reduce blast radius and provides granular account-level billing, it can make DNS infrastructure more complex. Route 53's ability to associate private hosted zones (PHZs) with VPCs and accounts helps reduce these complexities for both centralized and decentralized architectures. We discuss both design paradigms in this section.

Multi-Account Centralized

In this type of architecture, Route 53 Private Hosted
Zones (PHZs) are centralized in a shared services VPC. This allows for central DNS management while enabling inbound Route 53 Resolver endpoints to natively query the private hosted zones. This leaves the need for VPC-to-VPC DNS resolution unaddressed. Fortunately, PHZs can be associated with many VPCs. A simple CLI or API request can associate each PHZ with VPCs in accounts outside of the shared services VPC. For more information about cross-account PHZ sharing, see Associating an Amazon VPC and a Private Hosted Zone That You Created with Different AWS Accounts.

Figure 6 – Multi-Account Centralized DNS with Private Hosted Zone sharing

Description

1. Instances within a VPC use the Route 53 Resolver (Amazon Provided DNS).
2. Private hosted zones are associated with a shared services VPC.
3. Private hosted zones are also associated with other VPCs in the environment.
4. Conditional forwarding rule(s) from the on-premises DNS servers have an inbound Route 53 Resolver endpoint as their destination.
5. Rule(s) for on-premises domain names are created that leverage an outbound Route 53 Resolver endpoint.

While this architecture provides for centralization, you may require each VPC to have its own fully qualified domain name (FQDN) hosted within each account, so that account owners can change and modify their own DNS records. The next section provides more information on how this design paradigm is accomplished.

Multi-Account Decentralized

An organization may want to delegate DNS ownership and management to each AWS account. This can have the advantages of decentralizing control and isolating the blast radius of a failure to a specific account. The ability to associate PHZs with VPCs between accounts again becomes useful in this scenario. Each VPC can have its own PHZ(s) and then associate it with multiple other VPCs across accounts and across Regions.
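The cross-account association mentioned above is a two-step exchange: the zone-owning account authorizes the target VPC, then the VPC-owning account performs the association. The sketch below only builds the request parameters (all IDs are hypothetical); the dict shapes mirror the Route 53 CreateVPCAssociationAuthorization and AssociateVPCWithHostedZone API parameters, but no AWS call is made here.

```python
# Sketch of the two-step cross-account PHZ association flow.
# IDs are hypothetical; the parameter shapes mirror the Route 53
# CreateVPCAssociationAuthorization / AssociateVPCWithHostedZone APIs.

def association_requests(phz_id, vpcs):
    """Yield (api_operation, parameters) pairs for each VPC to associate."""
    for vpc in vpcs:
        # Step 1: run in the account that owns the private hosted zone.
        yield ("CreateVPCAssociationAuthorization",
               {"HostedZoneId": phz_id, "VPC": vpc})
        # Step 2: run in the account that owns the VPC.
        yield ("AssociateVPCWithHostedZone",
               {"HostedZoneId": phz_id, "VPC": vpc})

vpcs = [{"VPCRegion": "us-east-1", "VPCId": "vpc-0aaa0example"},
        {"VPCRegion": "eu-west-1", "VPCId": "vpc-0bbb0example"}]

for op, params in association_requests("Z0123EXAMPLE", vpcs):
    print(op, params["VPC"]["VPCId"])
```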
This architecture is depicted in Figure 7. For unified resolution with the on-premises environment, this only requires that the shared services VPC be associated with each VPC hosting a PHZ.

Figure 7 – Multi-Account DNS Decentralized

Description

1. Instances within a VPC use the Route 53 Resolver (Amazon Provided DNS).
2. Private hosted zones are associated with a shared services VPC.
3. Private hosted zones are also associated with other VPCs in the environment.
4. Conditional forwarding rule(s) from the on-premises DNS servers have an inbound Route 53 Resolver endpoint as their destination.
5. Rule(s) for on-premises domain names are created that leverage an outbound Route 53 Resolver endpoint.

Alternative Approaches

Alternative approaches have historically been to deploy DNS proxy servers on EC2 instances or to rely on Active Directory DNS servers. This centralization was desired, but it did not take advantage of the benefits of using the Route 53 Resolver and can cause scaling as well as availability constraints. Similarly, a common anti-pattern is to use Route 53 Resolver endpoints to centralize the management of DNS within a shared services VPC or Transit Gateway. This is done by creating both an inbound and an outbound endpoint in the shared services VPC, and then creating forwarding rules whose target is the IP address of the inbound endpoint in the centralized VPC. These rules are then associated with other VPCs, which then use the inbound endpoint of the central VPC to resolve their DNS queries. This has the effect of allowing spoke VPCs to use the DNS view of the central VPC. For example, if you have an EFS mount in the central VPC, the spoke VPC would be able to resolve the EFS mount's DNS name by forwarding its
query to the inbound endpoint of the VPC where the file system is mounted. This approach is NOT preferred: cross-account sharing of PHZs is highly available and less costly than query forwarding. This is because PHZ sharing preserves Availability Zone isolation, meaning that your queries in VPC A are answered by an Availability Zone local to VPC A, whereas your queries in VPC B are answered by an Availability Zone local to VPC B. In the event of an availability problem in VPC A, VPC B's queries would not be affected, as long as they are in two different Availability Zones. There is no additional cost to associate a PHZ with a VPC, and you can share a VPC with upwards of 1,000 zones.

Query forwarding is optimized for sending queries to other DNS resolvers located outside the AWS network. It provides a way to allow DNS resolvers from different networks to access each other when they would normally not be visible via a recursive DNS lookup. If you choose to use query forwarding to resolve DNS answers local to another VPC, you must create an endpoint for every VPC for which you want this view of DNS. Additionally, using endpoints to answer queries between VPCs breaks the previously mentioned Availability Zone isolation: instead of each VPC resolving queries within its local Availability Zone, you have now made several VPCs dependent on the availability of a single VPC. Regarding limits, each endpoint elastic network interface has a limit of 10,000 QPS, but keep in mind that if you use an endpoint to centralize DNS management, you are forwarding more query volume to a central VPC, as opposed to distributing the query load between multiple VPCs. This anti-pattern is generally not recommended.

Selecting the Best Solution for Your Organization
There are various advantages and trade-offs with each of these solutions. Choosing the right solution for your organization depends on the specific requirements of each workload. You might choose to run different solutions in different VPCs to meet the needs of your specific workloads. The following table summarizes the criteria that you can use to evaluate what will work best for your organization: the complexity of the implementation, the management overhead, the availability of the solution, the probability of hitting the PPS per network interface limit, and the cost of the solution.

Table 4 – Solutions selection criteria

Criterion                       Route 53 Resolver   Secondary DNS in a VPC   Highly Distributed Forwarders   Zonal Forwarders
Implementation complexity       Low                 Medium                   High                            High
Management overhead             Low                 Low                      High                            Medium
DNS infrastructure resiliency   High                High                     High                            Medium
PPS limit breach                Low                 Low                      Low                             Medium
Cost*                           Low                 Low                      High                            Medium

* Cost is a combination of the infrastructure and operational expense.

Additional Considerations

DNS Logging

DNS logging refers to logging the specific DNS queries made by individual hosts. Typically, these logs are stored for security forensics and compliance. GuardDuty provides machine-learning-based forensics and anomaly detection on recursive queries originating from local VPC resources. If raw historical logging is not required, GuardDuty may satisfy your requirements without any additional heavy lifting.

Route 53 provides query logs for public hosted zones. If customers require logging for private hosted zones and queries that originate from resources within a VPC, they have several options while still following the Well-Architected Framework and DNS best practices. Centralized query logging, distributed (on-instance) query logging, and a hybrid approach to log a
percentage of queries based on user-defined domain whitelisting are three of the most popular and scalable methods for query logging available today.

Centralized Query Logging

Query logging is accomplished in a centralized fashion when all queries are forwarded to a resolver that is not the Route 53 Resolver (Amazon Provided DNS). This resolver can be local to the VPC, such as several instances running Unbound, or an on-premises resource reached over AWS Direct Connect, VPN, or the internet gateway. The latter adds additional latency and dependencies outside of the VPC and is typically not recommended for that reason. As with any centralized or distributed system, this approach comes with pros and cons. Centralization of query logs allows for easy aggregation and a single pane of glass to view and parse DNS client queries. With centralization, additional attention needs to be directed at the scale of the instances acting as resolvers and the number of queries that are directed at any single instance. These instances become single points of failure and can become a barrier due to DNS packets per second limits. Each EC2 instance is limited to 1,024 packets per second for DNS queries toward the Route 53 Resolver (Amazon Provided DNS). If the requests sent to the customer-managed, instance-based DNS resolvers are not distributed effectively and do not implement caching techniques, at high volume the DNS instances may exceed the 1,024 packets per second per-instance limit to the Route 53 Resolver within the VPC.

Distributed Query Logging

Another approach is logging DNS queries in a distributed fashion, on instance. This is accomplished by running Unbound, or another logging-capable resolver or forwarder, on each instance that requires logging. With the distributed model of logging DNS queries, each instance
runs a local resolver in order to capture all DNS queries locally on each instance. These logs can then be aggregated upstream to a centralized Amazon S3 bucket for historical collection and centralized parsing. Depending on the aggregation process, this may delay centralized parsing and forensics, but it removes any single point of failure and reduces the overall blast radius of any given upstream instance-based resolver failure. If on-demand instance parsing is required, the delivery window can be shortened. Depending on your operational model, you may or may not allow on-box forensics or external access, so the logging delivery schedule should be considered.

With the launch of VPC Traffic Mirroring at re:Inforce 2019, an alternative off-instance distributed logging mechanism can be achieved for supported instance types. At this time, all AWS Nitro-based instances support VPC Traffic Mirroring. By enabling traffic mirroring for TCP- and UDP-based traffic on port 53 on individual instance ENIs, you have the ability to capture DNS requests in PCAP format. Traffic Mirroring for DNS logs shares similar availability and scalability constructs with other distributed methods, but it increases simplicity and flexibility because it does not require the application or Amazon Machine Image (AMI) to incorporate any additional DNS logic. A Traffic Mirroring session can be attached to and detached from instance ENIs as needed. Traffic Mirroring is priced per elastic network interface on which traffic mirroring is enabled, and the customer is responsible for configuring and managing the traffic mirror target. For more information on Amazon VPC Traffic Mirroring, see Traffic Mirroring Concepts.

Hybrid Query Logging

The third option is a hybrid approach that allows more granularity in which queries are filtered. This approach may be desired when companies are able to define "trusted" zones and "untrusted" zones. Trusted zones are approved by the organization and may not require logging, while
anything unapproved falls under the untrusted category, to be logged and possibly acted upon, such as blacklisting the response. For example, any zones that are owned and operated by the organization, along with VPC-local resources, are trusted, and everything else is to be logged and controlled. This hybrid approach is now possible with the release of the Amazon Route 53 Resolver service because of its ability to provide conditional forwarding rules by zone. In this approach, all local VPC resources resolve through the Route 53 Resolver (Amazon Provided DNS) as normal, but when a query is made to an untrusted zone that matches an Amazon Route 53 Resolver conditional forwarding rule, it is forwarded to a specified instance-based or on-premises resolver, such as the centralized DNS resolver mentioned above. This approach does not require any modifications on the instance and removes any single points of failure for all trusted zones.

Custom EC2 DNS Resolver
You can choose to host your own custom DNS resolver on Amazon EC2 that leverages public DNS servers to perform recursive public DNS resolution instead of using the Route 53 Resolver. This can be a good choice when the nature of the application calls for more control and flexibility over the DNS environment. You could also do this if the PPS-per-network-interface limit hinders your ability to scale and none of the solutions discussed thus far suit your needs. This whitepaper does not describe the details of architecting such a solution, but we want to point out some caveats that will help you plan better in such a scenario. The following diagram illustrates an approach to a hybrid VPC DNS setup where you have your own DNS resolver on Amazon EC2.
Figure 8 – Amazon EC2 DNS instances with segregated resolver and forwarder

Description:
• DNS queries for internal EC2 names and Route 53 private hosted zones are forwarded to the Route 53 Resolver.
• DNS queries bound for on-premises servers are conditionally forwarded to on-premises DNS servers.
• DNS queries for public domains are conditionally forwarded to the custom DNS resolver in the public subnet. The resolver then recursively resolves public domains using the latest root hints available from the Internet Assigned Numbers Authority (IANA).

For security reasons, we recommend that the conditional forwarder instance that requires connectivity to on-premises sits separately in a private subnet of the VPC. As the custom DNS resolver must be able to query public DNS servers, it runs in its own public subnet of the VPC. Ideally, you would have security group rules on the EC2 instance running the custom DNS resolver, but if this custom DNS resolver has a high rate of queries out to the internet, there is a possibility that you will hit connection tracking limits, as discussed in the connection tracking section. Therefore, to avoid running into such a scenario, connection tracking itself must be avoided, and it is possible to do so by opening all TCP and UDP ports to the whole world at the security group level, both inbound and outbound. Because this grants permissive rules in the instance-level security group, you will have to handle the security of the instance at a different layer. At the least, it is recommended to control the traffic entering the entire public subnet by using a network access control list (NACL), which thereby
restricts access to the instance, or you could use application-level control mechanisms, such as the access control provided by a DNS resolver like Unbound [12].

Custom DNS resolvers might develop a reputation upstream on the internet. If the instance is assigned a dynamic public IP address that belonged to another customer and previously earned a bad reputation, upstream requests could be throttled or even blocked. To avoid being throttled or blocked, consider assigning Elastic IP addresses to these resolver instances. This gives the IP addresses that talk to the upstream servers the opportunity to build a good reputation over time that can be owned and maintained. Scaling concerns can be mitigated through the use of a DNS server fleet sitting behind a Network Load Balancer (NLB) configured with both TCP and UDP listeners on port 53.

Microsoft Windows Instances
Typically, Microsoft Windows instances are joined using Active Directory Domain Services (AD DS). In scenarios where you use the Amazon VPC DHCP options set, unlike the Linux resolver, you can set the full set of four DNS servers. You can set the DNS servers independently from the DHCP-supplied IP address, similar to the supersede option discussed earlier. This can be accomplished using Active Directory Group Policy or via configuration management tools such as Amazon EC2 Run Command [13] or AWS OpsWorks for Chef Automate [14], mentioned earlier. In addition, the Windows DNS client also caches recently resolved queries, which reduces the overall demand on the primary DNS server.

The Windows DNS client service is designed to prompt a dynamic update from the DNS server if a change is made to its IP address information. When prompted, the DNS server updates the host record IP address for that computer (according
to RFC 2136). Microsoft DNS provides support for dynamic updates, and this is enabled by default in any Active Directory-integrated DNS zone. When you use a lightweight forwarder like Unbound for Windows instances, note that it does not support these dynamic updates (RFC 2136). If you need dynamic updates, you should use the Microsoft DNS server as the primary for these instances.

Unbound – Additional Options
Unbound caches the results for subsequent queries until the time to live (TTL) expires, after which it forwards the request. By enabling the prefetch option in Unbound, you can ensure that frequently used records are prefetched before they expire, to keep the cache up to date. Also, if the on-premises DNS server is not available when the cache expires, Unbound returns SERVFAIL. To protect yourself against such a situation, you can enable the serve-expired option to serve old responses from the cache, with a TTL of zero in the response, without waiting for the actual resolution to finish. After the resolution is completed, the response is cached for subsequent use.

DNS Forwarder – Forward First
Some DNS servers (notably BIND) include a forward-first option, enabled by default, which causes the server to query the forwarder first and, if there is no response, to recursively retry the internet DNS servers. For private DNS domains in this scenario, the internet DNS servers return an authoritative NXDOMAIN (a nonexistent internet or intranet domain name), or they return the public address if you're using split-horizon DNS for public zones (which is used to provide different answers for private versus public IP addresses). Therefore, it is critical to specify the forward-only option, which specifies that retries are made against the forwarders, so that you avoid ever seeing the response from public name servers. The Unbound DNS server has the forward-first option disabled by default.

DNS Server Resiliency
The solutions in this whitepaper are intended to provide high
availability in the event that there is an issue with your primary DNS server. However, there are factors that can prevent or delay this failover from occurring. These factors include, but are not limited to, the timeout value in resolv.conf, configuration issues with the superseded DNS, or incorrect DHCP options set settings. In some cases, these factors could impact the availability of applications that are dependent on name resolution. There are a few simple approaches to ensure the resilience of your forwarders in case there is an issue with the underlying hardware or instance software. While these approaches don't eliminate the need for well-architected design, they can help you increase the overall resiliency of your solution.

EC2 Instance Recovery
In the case of an underlying hardware failure of a DNS forwarder instance, you can use EC2 instance recovery to start the instance on a new host. A recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata. To do this, you can create a CloudWatch alarm that monitors an EC2 instance and automatically recovers the instance if it becomes impaired. You can use the CloudWatch alarm to monitor issues like loss of network connectivity, loss of system power, software issues on the physical host, or hardware issues on the physical host that affect network reachability. For more information about instance recovery, see Recover Your Instance in the Amazon EC2 User Guide for Linux Instances [15]. For step-by-step instructions on using CloudWatch alarms to recover an instance, see Create Alarms That Stop, Terminate, Reboot, or Recover an Instance in the Amazon EC2 User Guide for Linux Instances [16].

Secondary IP Address
In an Amazon VPC, instances can be assigned
secondary IP addresses, which are transferable. If an instance fails, the secondary IP address can be transferred to a standby instance, which avoids the need for every instance to reconfigure its resolver IP addresses. This approach redirects traffic to the healthy instance so that it can respond to DNS queries. It is appropriate for scenarios where EC2 instance recovery might not provide fast enough recovery or might not be appropriate (for example, an operating system fault or software issue). For more information about working with multiple IP addresses, see Multiple IP Addresses in the Amazon EC2 User Guide for Linux Instances [17].

Conclusion
For organizations with on-premises resources, operating in a hybrid architecture is a necessary part of the cloud adoption process. As such, architecture patterns that streamline this transition are essential for success. We discussed concepts as well as constraints to help you obtain a better understanding of the fundamental building blocks of the solutions provided here, as well as the limitations, so you can create the most optimal solution for your workload. The solutions provided included how to use Route 53 Resolver endpoints with conditional forwarding rules, how to set up secondary DNS in the Amazon VPC with AWS Lambda and Route 53 private hosted zones, and solutions leveraging decentralized forwarders using the Unbound DNS server. We also provided guidance on how to select the appropriate solution for your intended workload. Finally, we examined some additional considerations to help you better tailor your solution for different workload requirements, faster failover, and better DNS server resiliency. By using the architectures provided, you can achieve the most ideal private DNS interoperability between your
on-premises environments and your Amazon VPC.

Contributors
Contributors to this document include:
• Anthony Galleno, Senior Technical Account Manager
• Gavin McCullagh, Principal Systems Development Engineer
• Gokul Bellala Kuppuraj, Technical Account Manager
• Harsha Warrdhan Sharma, Technical Account Manager
• James Devine, Senior Specialist Solutions Architect
• Justin Davies, Principal Network Specialist
• Maritza Mills, Senior Product Manager, Technical
• Sohamn Chaterjee, Cloud Infrastructure Architect

Document Revisions
Date             Description
November 2019    Minor edits
September 2019   Fourth Publication
June 2018        Third Publication
November         Second Publication
October 2017     First Publication

Notes
1. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html#DHCPOptionSets
2. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html#AmazonDNS
3. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
4. http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html
5. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#security-group-connection-tracking
6. https://aws.amazon.com/blogs/security/how-to-set-up-dns-resolution-between-on-premises-networks-and-aws-by-using-unbound/
7. https://aws.amazon.com/blogs/compute/powering-secondary-dns-in-a-vpc-using-aws-lambda-and-amazon-route-53-private-hosted-zones/
8. http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DNSLimitations.html#limits-api-requests
9. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
10. https://aws.amazon.com/ec2/run-command/
11.
https://aws.amazon.com/opsworks/chefautomate/
12. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
13. https://aws.amazon.com/ec2/run-command/
14. https://aws.amazon.com/opsworks/chefautomate/
15. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html
16. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingAlarmActions.html
17. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html

Import Windows Server to Amazon EC2 with PowerShell

February 2017

This paper has been archived. For the latest technical content about this subject, see the AWS Whitepapers & Guides page: http://aws.amazon.com/whitepapers

© 2017, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Introduction
Amazon EC2
Amazon EC2 Dedicated Instances
Amazon EC2 Dedicated Hosts
AWS Server Migration Service
VM Import/Export
AWS Tools for Windows PowerShell
AWS Config
Licensing Considerations
Preparing for the Walkthroughs
Overview
Prerequisites
Walkthrough: Import Your Custom Image
Walkthrough: Launch a Dedicated Instance
Walkthrough: Configure
Microsoft KMS for BYOL
Walkthrough: Allocate a Dedicated Host and Launch an Instance
Conclusion
Contributors
Further Reading

Abstract
This whitepaper is for Microsoft Windows IT professionals who want to learn how to use Amazon Web Services (AWS) VM Import/Export to import custom Windows Server images into Amazon Elastic Compute Cloud (Amazon EC2). PowerShell code is provided to demonstrate one way you could automate the task of importing images and launching instances, but there are many other DevOps automation techniques that could come into play in a well-thought-out cloud migration process.

Introduction

Amazon EC2
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. Amazon EC2 reduces the time required to obtain and boot new server instances. It changes the economics of computing by allowing you to pay only for capacity that you actually use. You have full administrator access to each EC2 instance, and you can interact with your instances just as you do with your on-premises servers. You can stop your instance and retain the data on your boot partition, then restart the same instance using PowerShell or a browser interface.

Amazon EC2 Dedicated Instances
Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that's dedicated to a single customer. Your Dedicated Instances are physically isolated at the host hardware level from instances that belong to other AWS accounts. However, Dedicated Instances may share hardware with other instances from the same AWS account that are not Dedicated Instances. Dedicated Instances allow you to bring your own licenses for Windows Server. For more information, see http://aws.amazon.com/dedicatedinstances

Amazon EC2 Dedicated Hosts
An Amazon EC2 Dedicated Host is a physical server with Amazon EC2 instance capacity fully dedicated to
your use. Dedicated Hosts can help you address compliance requirements and reduce costs by allowing you to use your existing server-bound software licenses. Dedicated Hosts allow you to allocate a physical server and then launch one or more Amazon EC2 instances of a given type on it. You can target and reuse specific physical servers and stay within the terms of your existing software licenses.

In addition to allowing you to Bring Your Own License (BYOL) to the cloud to reduce costs, Amazon EC2 Dedicated Hosts can help you meet stringent compliance and regulatory requirements, some of which require control and visibility over instance placement at the physical host level. In these environments, detailed auditing of changes is also crucial. You can use the AWS Config service to record all changes to your Dedicated Hosts and instances. Dedicated Hosts allow you to use your existing per-socket, per-core, or per-virtual-machine (VM) software licenses, including Microsoft Windows Server and Microsoft SQL Server. Learn more at https://aws.amazon.com/ec2/dedicated-hosts/

AWS Server Migration Service
AWS Server Migration Service (AWS SMS) is an agentless service that makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations. Each server volume replicated is saved as a new Amazon Machine Image (AMI), which can be launched as an EC2 instance in the AWS Cloud. AWS SMS currently supports VMware virtual machines, and support for other physical servers and hypervisors is coming soon. AWS SMS supports migrating Windows Server 2003, 2008, 2012, and 2016, and Windows 7, 8, and 10.

VM Import/Export
VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export
them back to your on-premises environment. This allows you to use the existing virtual machines that you have built to meet your IT security, configuration management, and compliance requirements by bringing those virtual machines into Amazon EC2 as ready-to-use instances. VM Import/Export is available at no additional charge beyond standard usage charges for Amazon EC2 and Amazon Simple Storage Service (Amazon S3).

You can use PowerShell to import a Hyper-V or VMware image. VM Import will convert your virtual machine (VM) into an Amazon EC2 AMI, which you can use to run Amazon EC2 instances.

AWS Tools for Windows PowerShell
The AWS Tools for Windows PowerShell are a set of PowerShell cmdlets that are built on top of the functionality exposed by the AWS SDK for .NET. AWS Tools for Windows PowerShell enable you to script operations on your AWS resources from the PowerShell command line. Although the cmdlets are implemented using the service clients and methods from the SDK, the cmdlets provide an idiomatic PowerShell experience for specifying parameters and handling results. For example, the cmdlets for Tools for Windows PowerShell support PowerShell pipelining; that is, you can pipeline PowerShell objects both into and out of the cmdlets. Learn more at https://aws.amazon.com/documentation/powershell/

AWS Config
AWS Config is a fully managed service that provides you with an inventory of your AWS resources, as well as configuration history and configuration change notifications, to enable security and governance. Config Rules enable you to automatically check the configuration of your AWS resources. You can discover existing and deleted AWS resources, determine your overall compliance against rules, and dive into the configuration details of a resource at any point in time. These capabilities enable compliance auditing, security analysis, resource change tracking, and troubleshooting. This enables you to manage your
Windows Server licenses on Dedicated Hosts as required by Microsoft.

Licensing Considerations
Organizations that own Microsoft software licenses and Software Assurance have the option of bringing their own licenses (BYOL) to the cloud under the terms of Microsoft's License Mobility program (included with Software Assurance). In many cases, software license costs can dominate the cost of the computing, storage, and networking infrastructure in the cloud, so BYOL can be very beneficial. However, you must evaluate BYOL carefully.

For Windows Server and SQL Server, AWS also offers License Included (LI) as an option. It's called License Included because the software is preinstalled in the AMI, and the complete software licenses, even Client Access Licenses (CALs), are included when you launch an Amazon EC2 instance with those AMIs. You pay as you go for the Windows Server and SQL Server licenses, either hourly while the instance is running or with a 1- or 3-year Reserved Instance. Reserved Instances offer substantial discounts.

The LI model is convenient and flexible, but if you move a licensed on-premises workload to the cloud with LI instances, then you would essentially be paying for dual software licenses. Even though that sounds expensive, it still might make sense in some cases, particularly if you plan to consolidate some of your workloads, replatform some application servers, or discontinue purchasing Software Assurance. So you need to consider your options, including BYOL, carefully. However, don't assume that BYOL is always more economical.

It's advisable to create a simple spreadsheet to make a balanced comparison of BYOL vs. LI. With BYOL, if you haven't bought the licenses yet, you need to know your Microsoft reseller bulk license discount. You also need to include the cost of Software Assurance (even if it's already a sunk cost, consider whether you plan to renew it) and the cost of EC2 Dedicated
Hosts and Instances. Don't forget to include the correct number of licenses for all the cores on the instances you plan to use for Windows Server and SQL Server. With LI, you need to consider whether you are purchasing Reserved Instances, which offer substantial discounts.

Tip: When using the AWS Simple Monthly Calculator to determine instance costs without licenses, select Amazon Linux even though you'll be importing your own Windows Server image. This avoids the license cost that the calculator automatically assumes for Windows Server.

Also, there are considerable advantages with LI:
• The licenses are fully managed by AWS, so you don't need to worry about auditing.
• You can forego the cost of Software Assurance for those licenses.
• You don't need to buy CALs.
• Each LI for Windows Server includes two Remote Desktop CALs.
• LI reduces your costs if you decide to consolidate workloads later.
• LI reduces your costs when you stop the instances.
• LI reduces your costs if you don't need the full capacity of a Dedicated Host.
• You retain the freedom to replatform your workload.

Preparing for the Walkthroughs

Overview
The remainder of this paper walks you through several activities with Windows PowerShell. You can adapt and reuse these code snippets in your own AWS account to automate the following tasks:
• Import a Windows Server virtual machine to Amazon EC2.
• Launch and terminate a Dedicated Instance using your custom AMI.
• Configure Microsoft Key Management Services (KMS) to apply user-supplied licensing.
• Allocate a Dedicated Host and launch an instance on the host using your custom AMI, and then terminate the instance and the Dedicated Host.

Important: If you choose to follow along with the remaining sections in this paper, you will be creating resources in your AWS account, which will incur billing charges.

Prerequisites
These walkthroughs assume that you have previously exported a Windows
Server image (for example, from VMware as an Open Virtualization Archive, or OVA, file) and stored it in an Amazon S3 bucket in your account. VM Import/Export also supports Microsoft Hyper-V, but an OVA is referenced here as an example.

You'll need to have the AWS Tools for Windows PowerShell and grant security rights for PowerShell to access your AWS account. The easiest way to do that is to launch a Windows Server instance in Amazon EC2 with an AWS Identity and Access Management (IAM) role. You'll also need an Amazon Virtual Private Cloud (VPC), a subnet, a security group, and a key pair in the Region where you import the image. You certainly can create those in PowerShell, but it's generally more reliable to create as much of your infrastructure as possible using AWS CloudFormation. The reason is that you need to consider how to roll back your stack in case any errors occur while building it. AWS CloudFormation provides a simple mechanism to automatically roll back, so that you won't be left paying the bill for an incomplete stack after an error occurs. To roll back in PowerShell, you would need to trap potential errors at the point where each resource is created in your script and then write the code to remove or deallocate every other resource that the script had successfully created up to that point. That would get very tedious in regular PowerShell, but could be more easily handled with PowerShell Desired State Configuration (DSC).

To comply with your Windows Server license terms and implement BYOL, you'll need to have a Microsoft KMS instance running in your VPC. The walkthrough shows you how to configure the BYOL instance for Microsoft KMS, though you can proceed with this walkthrough without having Microsoft KMS running.

Finally, these walkthroughs assume that your own workstation is running Windows Server 2016, though these steps should work with other versions with minor modifications.
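The "trap errors, then undo everything created so far" rollback pattern described above is language-agnostic. The following is an illustrative Python sketch of that pattern, not part of the original walkthrough; the create/delete callables are stand-ins for whatever provisioning and cleanup commands your own script runs, not AWS API calls.

```python
# Generic sketch of the rollback pattern: record each resource as it is
# created, and on any failure delete the recorded resources in reverse
# order before re-raising the error.
def provision(steps):
    """Run (create, delete) pairs in order.

    On success, returns the list of created resource identifiers. If any
    create() raises, every resource created so far is deleted newest-first
    and the original error is re-raised.
    """
    created = []
    try:
        for create, delete in steps:
            created.append((create(), delete))
        return [resource for resource, _ in created]
    except Exception:
        for resource, delete in reversed(created):
            delete(resource)  # best-effort cleanup, newest first
        raise
```

AWS CloudFormation gives you this behavior automatically, which is why the text recommends it over hand-rolled error handling in plain PowerShell.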
Walkthrough: Import Your Custom Image
1. On the Windows Start menu, choose Windows PowerShell ISE.
2. In the Windows PowerShell ISE, press Ctrl+R to show the Script Pane (or, on the View menu, choose Show Script Pane).
3. The AWS Tools for PowerShell allow you to specify the AWS Region separately in most cmdlets, but it's simpler to set the default Region for your whole session. For example, run the following commands in PowerShell to set "us-west-2" as the default Region. You'll be using the lab_region variable again later in this walkthrough, so make sure you set it here to your preferred Region.

$lab_region = "us-west-2"
Set-DefaultAWSRegion $lab_region

4. To use the VM import service role in your own AWS account, create an IAM policy document to grant access for the Amazon EC2 Import API (vmie.amazonaws.com). You must name the role "vmimport". (Note: you could create this policy in the AWS Management Console, but this example shows how to do it with a document in PowerShell.)

$importPolicyDocument=@"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "vmie.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "vmimport"
        }
      }
    }
  ]
}
"@
New-IAMRole -RoleName vmimport -AssumeRolePolicyDocument $importPolicyDocument

5. Associate a policy with the "vmimport" role so that VM Import/Export can access the VM image in your S3 bucket and create an AMI in Amazon EC2. If you'd like to create your own restrictive policy for security reasons, see this page for guidance: http://docs.aws.amazon.com/vmimport/latest/userguide/import-vm-image.html. AWS also provides a couple of managed (built-in) policies
consists of two commands that are wrapped to fit the document) Register IAMRolePolic y RoleName vmimport –PolicyArn arn:aws:iam::aws:policy/AmazonS3FullAccess Register IAMRolePolicy RoleName vmimport PolicyArn arn:aws:iam::aws:policy/AmazonEC2FullAccess 6 Create a userBucket object to define the location of your image file and an ImageDiskContainer parameter both of which are passed to the ImportEC2Image cmdlet However before running these commands replace with the name of the bucket where you stored the OVA file If you are importing HyperV change the Format property to “VHD” $userBucket = NewObject AmazonEC2ModelUserBucket $userBucketS3Bucket = """" $userBucketS3Key = $file $windowsContainer = NewObject AmazonEC2ModelImageDiskContainer $windowsContainerFormat = ""OVA"" $windowsContainerUserBucket = $userBucket 7 Now create an object for the remaining parameters for the import task Set the ""Platform ” parameter to match the imported operating system type The “LicenseType ” parameter controls how the instance that is imported is configured for licensing Set it to BYOL $params=@{ ""ClientToken""="" MyCustomWindows_"" + (Get Date) ""Description""=""My custom Windo ws image"" ""Platform""=""Windows"" ""LicenseType""=""BYOL"" } 8 Now you’re ready to start the import task When you run this command the import process will take about 45 minutes but you can proceed with the remaining steps in this paper if you’re willing to temporarily use ArchivedAmazon Web Services – Import Windows Server to Amazon EC2 with PowerShell Page 9 other AMI IDs This command is all one line but wrapped here to fit the page ImportEC2Image DiskContainer $windowsContainer @params –region $lab_region 9 You can check the progress of the import task with the followin g command which will show the Progress property and the Status property The Progress property reports the current percentage complete status for the import task The Status property indicates the migration phase GetEC2ImportImageTask 
-Region $lab_region

Walkthrough: Launch a Dedicated Instance
1. While waiting for your own image to be imported, you can follow the rest of the walkthroughs using an AWS AMI. All the steps work the same regardless of the AMI, except that you'll need to provide a key pair to access an AWS AMI. When you launch an instance from your own imported AMI, you don't need to provide a key pair if you already have an Administrator password. The command below obtains the AMI ID of the latest version of the AWS AMI for Windows Server 2016 ("Base" means without SQL Server). The my_ami variable will be used later, so make sure you set it here. If you run this step after your import process is complete, you can use that AMI ID instead.

$my_ami = (Get-EC2ImageByName "Windows_2016_Base").ImageId

2. Configure two variables for use when launching the instance. Setting the tenancy to "dedicated" means that you want a Dedicated Instance. With the exception of the t2 instance type, most instance types can be used for Dedicated Instances.

$tenancy_type = "dedicated"
$instance_type = "m4.large"
to allow remote connections into another instance in its associated public subnet; or (c) set up a Remote Desktop Gateway in its public subnet (see Remote Desktop Gateway on the AWS Cloud: Quick Start Reference Deployment http://docsawsamazoncom/quickstart/latest/rd gateway/welcomehtml ) $private_IP = ""1050310"" $Subnet = ""105030/24"" $SubnetObj = GetEC2Subnet Filter @{Name=""cidr""; Values=$Subnet} 4 Configure a variable to store the security group parameter you will use when you launch the new instance Later in this walkthrough y ou’ll login to the instance through Remote Desktop to set up KMS for BYOL so make sure the security group allows inbound RDP access from the Internet $SecurityGroup = ""MySecurityGroup"" $SGObj = Get EC2SecurityGroup Filter @{Name=""tag value""; Values=$SecurityGroup} 5 Create a variable for the keypair name parameter you will use to decrypt the administrator password for the new instance Do n’t include the file extension PEM If you are launching an imported image on which you know the administrator password you don’t need to provide a keypair ArchivedAmazon Web Services – Import Windows Server to Amazon EC2 with PowerShell Page 11 $key_pair = """" 6 Now you’re ready t o launch your Dedicated Instance Many other optional parameters can be configured with this cmdlet to customize the instance However the following is the minimum you need to launch an instance with BYOL $my_instance = NewEC2Instance ` ImageId $my_ami ` Tenancy $tenancy_type ` InstanceType $instance_type ` SubnetId $SubnetObjSubnetID ` PrivateIpAddress $ private_IP ` securityGroupId $SGObjGroupID ` KeyName $ key_pair 7 It’s a good idea to c reate a name tag for the new instance The last two lines are a single cmdlet wrapped here to fit the page $Tag = NewObject amazonEC2ModelTag $TagKey = 'Name' $TagValue = ""Server2016 Imported"" Newec2Tag ResourceID $my_instancerunninginstance[0]instanceID Tag $Tag Walkthrough: Configure Microsoft KMS for BYOL To comply with Microsoft 
licensing requirements for EC2 Dedicated Instances using the BYOL model, you must either supply a Windows license key for the instance or configure it to use a Microsoft KMS server that you manage. In this task you will configure the Dedicated Instance to use a manually specified Microsoft KMS server. You will connect to the new instance using Windows Remote Desktop Connection. If you used an AWS AMI to launch this instance, you need to decrypt the password using the lab key pair in order to connect. If you launched this instance using your imported image, you already know the local administrator account and password.

1. Log in to the AWS Management Console and go to the EC2 Dashboard.
2. Select only the instance you just launched with PowerShell.
3. Choose Connect.
4. In the Connect To Your Instance dialog box, choose Get Password. You might need to retry this a couple of times to give the instance a few minutes to initialize.
5. For Key Pair Path, choose Choose File (the button is named Browse in some browsers).
6. Browse to the .pem file on your local machine for the key pair you specified when launching the instance, and choose Open.
7. Choose Decrypt Password.
8. Copy the decrypted password to your clipboard.
9. Run Remote Desktop Connection.
10. In the Computer box, enter the IP address of the Dedicated Instance you launched, and choose Connect.
11. When prompted for credentials, log in as Administrator and paste the decrypted password from your clipboard.
12. In the Remote Desktop Connection warning dialog box, choose Yes to ignore the verification warning.
13. In the Remote Desktop Connection session for the Server2016-Imported instance, when the desktop appears, choose No in the Networks dialog box to disable discovery. (This dialog box is a Windows Server 2016 feature that does not appear in earlier versions.)
14. In the Remote Desktop Connection session for the Server2016-Imported instance, launch Windows PowerShell and run the following command to display the current configuration settings of the Microsoft KMS client.

slmgr.vbs /dlv

15. Enter the following commands to update the active Microsoft KMS configuration and confirm the change. Replace the IP address with that of a functioning KMS server you have installed in your VPC. The command won't immediately fail if you don't have a running KMS instance at the given IP address.

slmgr.vbs /skms 10.50.3.100
slmgr.vbs /dlv

16. Close the Remote Desktop Connection to the Dedicated Instance and return to the workstation instance from which you launched it. Terminate the Dedicated Instance. This cmdlet should be entered as a single line.

Remove-EC2Instance -InstanceId $my_instance.Instances[0].InstanceId -Force

Walkthrough: Allocate a Dedicated Host and Launch an Instance

In this task you will launch and terminate an instance on a Dedicated Host.

1. Create variables for the Availability Zone and quantity parameters. Edit the $AZ variable appropriately before running this command.

$AZ = 'us-west-2a'
$Qty = 1
$AutoPlace = 'On'

2. Request a Dedicated Host. This reuses the $instance_type variable you created earlier, which was m4.large. Note that Dedicated Hosts are not available for all instance types.

New-EC2Host `
  -InstanceType $instance_type `
  -AvailabilityZone $AZ `
  -Quantity $Qty `
  -AutoPlacement $AutoPlace

3. Query the properties of your Dedicated Host. This command may initially return no data; wait a moment and retry it. The command returns the number of physical CPU cores and sockets, the total number of virtual CPUs, and the type of instance supported on your Dedicated Host.

(Get-EC2Host).HostProperties

4. List the instances running on your Dedicated Host. This shows that initially there are no instances running on the host.

(Get-EC2Host).Instances

5. Specify the tenancy type "host" to
launch an instance inside the Dedicated Host.

$tenancy_type = "host"

6. Indicate the AMI ID to be deployed on the Dedicated Host. There are Microsoft licensing restrictions for Dedicated Hosts: AWS and AWS Marketplace AMIs for Windows cannot be used. Ordinarily you would specify the AMI ID of your imported image here; however, if the import task you started earlier is still running in the background, that AMI is not available yet. To demonstrate how to deploy instances to a Dedicated Host, you can use an Amazon Linux AMI as a placeholder for the next few tasks.

$my_ami = (Get-EC2Image -Filter @{Name = "name"; Values = "Amazon_CentOS*"}).ImageId

7. Launch the instance inside the Dedicated Host. Once again, the only difference is the requirement to provide a key pair when launching an AWS AMI.

$host_instance = New-EC2Instance `
  -ImageId $my_ami `
  -Tenancy $tenancy_type `
  -InstanceType $instance_type `
  -SubnetId $SubnetObj.SubnetId `
  -PrivateIpAddress $private_IP `
  -SecurityGroupId $SGObj.GroupId `
  -KeyName $key_pair

8. Create a name tag for the new instance. The last two lines are a single cmdlet.

$Tag = New-Object Amazon.EC2.Model.Tag
$Tag.Key = 'Name'
$Tag.Value = "DedicatedHost Instance"
New-EC2Tag -ResourceId $host_instance.RunningInstance[0].InstanceId -Tag $Tag

9. List the instances running on your Dedicated Host.

(Get-EC2Host).Instances

10. You must terminate all instances on a Dedicated Host before you can release it.

Remove-EC2Instance -InstanceId $host_instance.Instances[0].InstanceId -Force

11. Finally, release the Dedicated Host. The command below reports successful and unsuccessful attempts to release hosts; it doesn't report success until all running instances have been terminated. Repeat this command until your host ID is listed in the Successful column.

$dedicated_host = Get-EC2Host | Select-Object -First 1
Remove-EC2Host -HostId $dedicated_host.HostId -Force

12. Switch back to the EC2 Dashboard in your browser. In the navigation pane, choose Dedicated Hosts to confirm that the DedicatedHost Instance has been terminated. You might need to refresh the console display.

Conclusion

This paper has demonstrated how to use Windows PowerShell and VM Import/Export to import a custom Windows Server image into Amazon EC2. You can adapt and reuse the PowerShell code snippets to automate the process in your own AWS account. In addition to VM Import/Export, consider using the AWS Server Migration Service, which currently supports VMware vCenter; support for additional image formats is coming soon.

Contributors

The following individuals and organizations contributed to this document:

• Scott Zimmerman, Solutions Architect, AWS

Further Reading

For additional information, please consult the following source:

• Getting Started with Amazon EC2 Windows Instances, http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/EC2Win_GetStarted.html

Infrastructure as Code

July 2017

This paper has been archived. For the latest technical content about the AWS Cloud, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions or assurances from AWS, its affiliates, suppliers or licensors. The responsibilities and liabilities
of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Introduction to Infrastructure as Code
The Infrastructure Resource Lifecycle
Resource Provisioning
AWS CloudFormation
Summary
Configuration Management
Amazon EC2 Systems Manager
AWS OpsWorks for Chef Automate
Summary
Monitoring and Performance
Amazon CloudWatch
Summary
Governance and Compliance
AWS Config
AWS Config Rules
Summary
Resource Optimization
AWS Trusted Advisor
Summary
Next Steps
Conclusion
Contributors
Resources

Abstract

Infrastructure as Code has emerged as a best practice for automating the provisioning of infrastructure services. This paper describes the benefits of Infrastructure as Code and how to leverage the capabilities of Amazon Web Services in this realm to support DevOps initiatives.

DevOps is the combination of cultural philosophies, practices, and tools that increases your organization's ability to deliver applications and services at high velocity, which enables your organization to be more responsive to the needs of your customers. The practice of Infrastructure as Code can be a catalyst that makes attaining such velocity possible.

Amazon Web Services – Infrastructure as Code

Introduction to Infrastructure as Code

Infrastructure management is a process associated with software engineering. Organizations have traditionally "racked and stacked" hardware, and then installed and configured operating systems and applications to support their technology needs. Cloud computing takes advantage of virtualization to enable the on-demand provisioning of the compute, network, and storage resources that constitute technology infrastructures.

Infrastructure managers have often performed such provisioning manually, and the manual processes have certain disadvantages, including:

• Higher cost, because they require human capital that could otherwise go toward more important business needs
• Inconsistency, due to human error, leading to deviations from configuration standards
• Lack of agility, by limiting the speed at which your organization can release new versions of services in response to customer needs and market drivers
• Difficulty in attaining and maintaining compliance with corporate or industry standards, due to the absence of repeatable processes

Infrastructure as Code addresses these deficiencies by bringing automation to the provisioning process. Rather than relying on manually performed steps, both administrators and developers can instantiate infrastructure using configuration files. Infrastructure as Code treats these configuration files as software code. The files can be used to produce a set of artifacts, namely the compute, storage, network, and application services that comprise an operating environment. Infrastructure as Code eliminates configuration drift through automation, thereby increasing the speed and agility of infrastructure deployments.

The Infrastructure Resource Lifecycle

In the previous section, we presented Infrastructure as Code as a way of provisioning resources in a repeatable and consistent manner. The underlying concepts are also relevant to the broader roles of infrastructure technology operations. Consider the following diagram.

Figure 1: Infrastructure resource lifecycle

Figure 1 illustrates a common view of the lifecycle of infrastructure resources in an organization. The stages of the lifecycle are as follows:

1. Resource provisioning. Administrators provision the resources according to the specifications they want.
2. Configuration management. The resources become components of a configuration management system that supports activities such as tuning and patching.
3. Monitoring and performance. Monitoring and performance tools validate the operational status of the resources by examining
items such as metrics, synthetic transactions, and log files.
4. Compliance and governance. Compliance and governance frameworks drive additional validation to ensure alignment with corporate and industry standards, as well as regulatory requirements.
5. Resource optimization. Administrators review performance data and identify changes needed to optimize the environment around criteria such as performance and cost management.

Each stage involves procedures that can leverage code. This extends the benefits of Infrastructure as Code from its traditional role in provisioning to the entire resource lifecycle, so that every stage benefits from the consistency and repeatability that Infrastructure as Code offers. This expanded view of Infrastructure as Code results in a higher degree of maturity in the Information Technology (IT) organization as a whole.

In the following sections, we explore each stage of the lifecycle: provisioning, configuration management, monitoring and performance, governance and compliance, and optimization. We consider the various tasks associated with each stage and discuss how to accomplish those tasks using the capabilities of Amazon Web Services (AWS).

Resource Provisioning

The infrastructure resource lifecycle begins with resource provisioning. Administrators can use the principle of Infrastructure as Code to streamline the provisioning process. Consider the following situations:

• A release manager needs to build a replica of a cloud-based production environment for disaster recovery purposes. The administrator designs a template that models the production environment and provisions identical infrastructure in the disaster recovery location.
• A university professor wants to provision resources for classes each semester. The students in the class need an environment that contains the appropriate tools for their studies. The professor creates a template with the appropriate infrastructure components and then instantiates the template resources for each student as needed.
• A service that has to meet certain industry protection standards requires infrastructure with a set of security controls each time the service is installed. The security administrator integrates the security controls into the configuration template so that they are instantiated along with the infrastructure.
• The manager of a software project team needs to provide development environments for programmers that include the necessary tools and the ability to interface with a continuous integration platform. The manager creates a template of the resources and publishes it in a resource catalog, which enables the team members to provision their own environments as needed.

These situations have one thing in common: the need for a repeatable process for instantiating resources consistently. Infrastructure as Code provides the framework for such a process. To address this need, AWS offers AWS CloudFormation.

AWS CloudFormation

AWS CloudFormation gives developers and systems administrators an easy way to create, manage, provision, and update a collection of related AWS resources in an orderly and predictable way. AWS CloudFormation uses templates written in JSON or YAML format to describe the collection of AWS resources (known as a stack), their associated dependencies, and any required runtime parameters. You can use a template repeatedly to create identical copies of the same stack consistently across AWS Regions. After deploying the resources, you can modify and update them in a controlled and predictable way. In effect, you are applying version control to your AWS infrastructure the same way you do with your application code.

Template Anatomy

Figure 2 shows a basic AWS CloudFormation YAML-formatted template fragment. Templates contain parameters, resource declarations, and outputs. Templates can reference the outputs of other templates
which enables modularization.

AWSTemplateFormatVersion: "version date"
Description: String
Parameters:
  set of parameters
Mappings:
  set of mappings
Conditions:
  set of conditions
Transform:
  set of transforms
Resources:
  set of resources
Outputs:
  set of outputs

Figure 2: Structure of an AWS CloudFormation YAML template

Figure 3 is an example of an AWS CloudFormation template. The template requests the name of an Amazon Elastic Compute Cloud (EC2) key pair from the user in the Parameters section. The Resources section of the template then creates an EC2 instance using that key pair, with an EC2 security group that enables HTTP (TCP port 80) access.

Parameters:
  KeyName:
    Description: The EC2 key pair to allow SSH access to the instance
    Type: AWS::EC2::KeyPair::KeyName
Resources:
  Ec2Instance:
    Type: AWS::EC2::Instance
    Properties:
      SecurityGroups:
        - !Ref InstanceSecurityGroup
      KeyName: !Ref KeyName
      ImageId: ami-70065467
  InstanceSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Enable HTTP access via port 80
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0

Figure 3: Example of an AWS CloudFormation YAML template

Change Sets

You can update AWS CloudFormation templates along with application source code to add, modify, or delete stack resources. The change sets feature enables you to preview proposed changes to a stack without performing the associated updates. You can control the ability to create and view change sets using AWS Identity and Access Management (IAM). You can allow some developers to create and preview change sets while reserving the ability to update stacks or execute change sets for a select few. For example, you could allow a developer to see the impact of a template change before promoting that change to the testing stage.

There are three primary phases associated with the use of change sets:
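As a brief sketch of how these phases can be scripted, the following AWS Tools for PowerShell sequence creates, inspects, and then executes a change set. The stack name, change set name, and template file name are illustrative assumptions rather than values from this paper; the cmdlets shown (New-CFNChangeSet, Get-CFNChangeSet, Start-CFNChangeSet) are the PowerShell counterparts of the console, CLI, and API operations discussed here:

```powershell
# Sketch: create a change set from a modified template.
# "my-stack", "my-change-set", and template.yaml are placeholders.
New-CFNChangeSet -StackName "my-stack" `
    -ChangeSetName "my-change-set" `
    -TemplateBody (Get-Content -Raw "template.yaml")

# Inspect the proposed changes before promoting them.
Get-CFNChangeSet -StackName "my-stack" -ChangeSetName "my-change-set"

# Execute the change set once it has been reviewed.
Start-CFNChangeSet -StackName "my-stack" -ChangeSetName "my-change-set"
```

The equivalent AWS CLI commands are aws cloudformation create-change-set, describe-change-set, and execute-change-set.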
1. Create the change set. To create a change set for a stack, submit the changes to the template or parameters to AWS CloudFormation. AWS CloudFormation generates a change set by comparing the current stack with your changes.
2. View the change set. You can use the AWS CloudFormation console, the AWS CLI, or the AWS CloudFormation API to view change sets. The console provides a summary of the changes and a detailed list of changes in JSON format; the AWS CLI and the API return a detailed list of changes in JSON format.
3. Execute the change set. You can select and execute the change set in the AWS CloudFormation console, use the aws cloudformation execute-change-set command in the AWS CLI, or call the ExecuteChangeSet API.

The change sets capability allows you to go beyond version control in AWS CloudFormation by keeping track of what will actually change from one version to the next. Developers and administrators can gain more insight into the impact of changes before promoting them, and can minimize the risk of introducing errors.

Reusable Templates

Many programming languages offer ways to modularize code with constructs such as functions and subroutines. Similarly, AWS CloudFormation offers multiple ways to manage and organize your stacks. Although you can maintain all your resources within a single stack, large single-stack templates can become difficult to manage, and there is a greater possibility of encountering AWS CloudFormation limits.

When designing the architecture of your AWS CloudFormation stacks, you can group the stacks logically by function. Instead of creating a single template that includes all the resources you need, such as virtual private clouds (VPCs), subnets, and security groups, you can use nested stacks or cross-stack references.

The nested stack feature allows you to create a new AWS CloudFormation stack resource within an AWS CloudFormation template and establish a parent-child relationship between the two stacks. Each time you create an AWS CloudFormation stack from the parent template, AWS CloudFormation also creates a new child stack. This approach allows you to share infrastructure code across projects while maintaining completely separate stacks for each project.

Cross-stack references enable an AWS CloudFormation stack to export values that other AWS CloudFormation stacks can then import. They promote a service-oriented model with loose coupling that allows you to share a single set of resources across multiple projects.

Template Linting

As with application code, AWS CloudFormation templates should go through some form of static analysis, also known as linting. The goal of linting is to determine whether the code is syntactically correct, identify potential errors, and evaluate adherence to specific style guidelines. In AWS CloudFormation, linting validates that a template is correctly written in either JSON or YAML.

AWS CloudFormation provides the ValidateTemplate API, which checks for proper JSON or YAML syntax. If the check fails, AWS CloudFormation returns a template validation error. For example, you can run the following command to validate a template stored in Amazon Simple Storage Service (Amazon S3):

aws cloudformation validate-template --template-url \
s3://examplebucket/example_template.template

You can also use third-party validation tools. For example, cfn-nag performs additional evaluations on templates to look for potential security concerns. Another tool, cfn-check, performs deeper checks on resource specifications to identify potential errors before they emerge during stack creation.

Best Practices

The AWS CloudFormation User Guide provides a list of best practices for designing and implementing AWS CloudFormation templates, including the following.

Planning and organizing:

• Organize Your Stacks by Lifecycle and Ownership
• Use IAM to Control Access
• Reuse Templates to Replicate Stacks in Multiple Environments
• Use Nested Stacks to Reuse Common Template Patterns
• Use Cross-Stack References to Export Shared Resources

Creating templates:

• Do Not Embed Credentials in Your Templates
• Use AWS-Specific Parameter Types
• Use Parameter Constraints
• Use AWS::CloudFormation::Init to Deploy Software Applications on Amazon EC2 Instances
• Use the Latest Helper Scripts
• Validate Templates Before Using Them
• Use Parameter Store to Centrally Manage Parameters in Your Templates

Managing stacks:

• Manage All Stack Resources Through AWS CloudFormation
• Create Change Sets Before Updating Your Stacks
• Use Stack Policies
• Use AWS CloudTrail to Log AWS CloudFormation Calls
• Use Code Reviews and Revision Controls to Manage Your Templates
• Update Your Amazon EC2 Linux Instances Regularly

Summary

The infrastructure resource lifecycle starts with the provisioning of resources. AWS CloudFormation provides a template-based way of creating infrastructure and managing the dependencies between resources during the creation process. With AWS CloudFormation, you can maintain your infrastructure just like application source code.

Configuration Management

Once you provision your infrastructure resources and that infrastructure is up and running, you must address the ongoing configuration management needs of the environment. Consider the following situations:

• A release manager wants to deploy a version of an application across a group of servers, and perform a rollback if there are problems.
• A system administrator receives a request to install a new operating system package in developer environments, but to leave the other environments untouched.
• An application administrator needs to periodically update a configuration file across all servers housing
an application.

One way to address these situations is to return to the provisioning stage: provision fresh resources with the required changes and dispose of the old resources. This approach, also known as infrastructure immutability, ensures that the provisioned resources are built anew from the code baseline each time a change is made, which eliminates configuration drift.

There are times, however, when you might want to take a different approach. In environments that have high levels of durability, it might be preferable to make incremental changes to the current resources instead of reprovisioning them. To address this need, AWS offers Amazon EC2 Systems Manager and AWS OpsWorks for Chef Automate.

Amazon EC2 Systems Manager

Amazon EC2 Systems Manager is a collection of capabilities that simplifies common maintenance, management, deployment, and execution of operational tasks on EC2 instances and on servers or virtual machines (VMs) in on-premises environments. Systems Manager helps you easily understand and control the current state of your EC2 instance and OS configurations. You can track and remotely manage system configuration, OS patch levels, application configurations, and other details about deployments as they occur over time. These capabilities help with automating complex and repetitive tasks, defining system configurations, preventing drift, and maintaining software compliance of both Amazon EC2 and on-premises configurations.

Table 1 lists the tasks that Systems Manager simplifies.

Run Command: Manage the configuration of managed instances at scale by distributing commands across a fleet.
Inventory: Automate the collection of the software inventory from managed instances.
State Manager: Keep managed instances in a defined and consistent state.
Maintenance Window: Define a maintenance window for running administrative tasks.
Patch Manager: Deploy software patches automatically across groups of instances.
Automation: Perform common maintenance and deployment tasks, such as updating Amazon Machine Images (AMIs).
Parameter Store: Store, control access to, and retrieve configuration data, whether plain-text data such as database strings or secrets such as passwords, encrypted through AWS Key Management Service (KMS).

Table 1: Amazon EC2 Systems Manager tasks

Document Structure

A Systems Manager document defines the actions that Systems Manager performs on your managed instances. Systems Manager includes more than a dozen preconfigured documents to support the capabilities listed in Table 1. You can also create custom, version-controlled documents to augment the capabilities of Systems Manager; you can set a default version and share documents across AWS accounts. Steps in a document specify the execution order. All documents are written in JSON and include both parameters and actions. As with AWS OpsWorks for Chef Automate, documents for Systems Manager become part of the infrastructure code base, bringing Infrastructure as Code to configuration management.

The following is an example of a custom document for a Windows-based host. The document uses the ipconfig command to gather the network configuration of the node and then installs MySQL.

{
  "schemaVersion": "2.0",
  "description": "Sample version 2.0 document v2",
  "parameters": {},
  "mainSteps": [
    {
      "action": "aws:runPowerShellScript",
      "name": "runShellScript",
      "inputs": {
        "runCommand": ["ipconfig"]
      }
    },
    {
      "action": "aws:applications",
      "name": "installapp",
      "inputs": {
        "action": "Install",
        "source": "http://dev.mysql.com/get/Downloads/MySQLInstaller/mysql-installer-community-5.6.22.0.msi"
      }
    }
  ]
}

Figure 4: Example of a Systems Manager document

Best Practices

The best practices for each of the Systems Manager capabilities appear below.

Run Command:

• Improve your security posture by using Run Command to access your EC2 instances instead of SSH/RDP.
• Audit all API calls made by or on behalf of Run Command using AWS CloudTrail.
• Use the rate control feature in Run Command to perform a staged command execution.
• Use fine-grained access permissions for Run Command (and all Systems Manager capabilities) by using AWS Identity and Access Management (IAM) policies.

Inventory:

• Use Inventory in combination with AWS Config to audit your application configurations over time.

State Manager:

• Update the SSM Agent periodically (at least once a month) using the preconfigured AWS-UpdateSSMAgent document.
• Bootstrap EC2 instances on launch using EC2Config for Windows.
• (Specific to Windows) Upload the PowerShell or Desired State Configuration (DSC) module to Amazon S3, and use AWS-InstallPowerShellModule.
• Use tags to create application groups, and then target instances using the Targets parameter instead of specifying individual instance IDs.
• Automatically remediate findings generated by Amazon Inspector by using Systems Manager.
• Use a centralized configuration repository for all of your Systems Manager documents, and share documents across your organization.

Maintenance Windows:

• Define a schedule for performing disruptive actions on your instances, such as OS patching, driver updates, or software installs.

Patch Manager:

• Use Patch Manager to roll out patches at scale and to increase fleet compliance visibility across your EC2 instances.

Automation:

• Create self-serviceable runbooks for infrastructure as Automation documents.
• Use Automation to simplify the creation of AMIs from the AWS Marketplace or custom AMIs, using public documents or authoring your own workflows.
• Use the AWS-UpdateLinuxAmi or AWS-UpdateWindowsAmi documents, or create a custom Automation document, to build and maintain images.

Parameter Store:

• Use Parameter Store to manage global
configuration settings in a centralized manner.
• Use Parameter Store for secrets management, encrypted through AWS KMS.
• Use Parameter Store with Amazon EC2 Container Service (ECS) task definitions to store secrets.

AWS OpsWorks for Chef Automate

AWS OpsWorks for Chef Automate brings the capabilities of Chef, a configuration management platform, to AWS. OpsWorks for Chef Automate further builds on Chef's capabilities by providing additional features that support DevOps practices at scale. Chef is based on the concept of recipes: configuration scripts, written in the Ruby language, that perform tasks such as installing services. Chef recipes, like AWS CloudFormation templates, are a form of source code that can be version controlled, thereby extending the principle of Infrastructure as Code to the configuration management stage of the resource lifecycle.

OpsWorks for Chef Automate provides three key capabilities that you can configure to support DevOps practices: workflow, compliance, and visibility.

Workflow

You can use a workflow in OpsWorks for Chef Automate to coordinate development, test, and deployment. The workflow includes quality gates that enable users with the appropriate privileges to promote code between phases of the release management process. This capability can be very useful in supporting collaboration between teams, because each team can implement its own gates to ensure compatibility between the projects of each team.

Compliance

OpsWorks for Chef Automate provides features that can assist you with organizational compliance as part of configuration management. Chef Automate can provide reports that highlight matters associated with compliance and risk. You can also leverage profiles from well-known groups such as the Center for Internet Security (CIS).

Visibility

OpsWorks for Chef
Automate provides visibility into the state of workflows and compliance within projects. A Chef user can create and view dashboards that provide information about related events, and can query those events through a user interface.

Recipe Anatomy
A Chef recipe consists of a set of resource definitions. The definitions describe the desired state of the resources and how Chef can bring them to that state. Chef supports over 60 resource types. A list of common resource types appears below.

Resource Name   Purpose
Bash            Execute a script using the bash interpreter
Directory       Manage directories
Execute         Execute a single command
File            Manage files
Git             Manage source resources in Git repositories
Group           Manage groups
Package         Manage packages
Route           Manage a Linux route table entry
Service         Manage a service
User            Manage users

Table 2: Common Chef resources

The following is an example of a Chef recipe. This example defines a resource based on the installation of the Apache web server. The resource definition includes a check for the underlying operating system: it uses the case operator to examine the value of node[:platform]. The action :install directive brings the resource to the desired state (that is, it installs the package).

package 'apache2' do
  case node[:platform]
  when 'centos', 'redhat', 'fedora', 'amazon'
    package_name 'httpd'
  when 'debian', 'ubuntu'
    package_name 'apache2'
  end
  action :install
end

Figure 5: Example of a Chef recipe

Recipe Linting and Testing
A variety of tools is available from both Chef and the Chef user community to support linting (syntax checking) and unit and integration testing. We highlight some of the most common platforms in the following sections.

Linting with Rubocop and Foodcritic
Linting can be done on infrastructure code such as Chef recipes using tools such as Rubocop and Foodcritic.50,51,52 Rubocop performs static analysis on Chef recipes based on the Ruby style
guide (Ruby is the language used to create Chef recipes). This tool is part of the Chef Development Kit and can be integrated into the software development workflow. Foodcritic checks Chef recipes for common syntax errors based on a set of built-in rules, which can be extended by community contributions.

Unit Testing with ChefSpec
ChefSpec can provide unit testing on Chef cookbooks.53 These tests can determine whether Chef is being asked to do the appropriate tasks to accomplish the desired goals. ChefSpec requires a configuration test specification that is then evaluated against a recipe. For example, ChefSpec would not actually check whether Chef installed the Apache package, but instead checks whether a Chef recipe asked to install Apache. The goal of the test is to validate whether the recipe reflects the intentions of the programmer.

Integration Testing with Test Kitchen
Test Kitchen is a testing platform that creates test environments and then uses bussers, which are test frameworks, to validate the creation of the resources specified in the Chef recipes.54 By leveraging the previous testing tools in conjunction with OpsWorks for Chef Automate workflow capabilities, developers can automate the testing of their infrastructures during the development lifecycle. These tests are a form of code themselves and are another key part of the Infrastructure as Code approach to deployments.

Best Practices
The strategies, techniques, and suggestions presented here will help you get the maximum benefit and optimal outcomes from AWS OpsWorks for Chef Automate:
• Consider storing your Chef recipes in an Amazon S3 archive. Amazon S3 is highly reliable and durable. Explicitly version each archive file by using a naming convention, or use Amazon S3 versioning, which provides an audit trail and an easy way to revert to an earlier version.
• Establish a backup schedule that meets your organizational governance requirements.
• Use IAM to
limit access to the OpsWorks for Chef Automate API calls.

Summary
Amazon EC2 Systems Manager lets you deploy, customize, enforce, and audit an expected state configuration on your EC2 instances and on servers or VMs in your on-premises environment. AWS OpsWorks for Chef Automate enables you to use Chef recipes to support the configuration of an environment. You can use OpsWorks for Chef Automate independently or on top of an environment provisioned by AWS CloudFormation. The run documents and policies associated with Systems Manager, and the recipes associated with OpsWorks for Chef Automate, can become part of the infrastructure code base and be controlled just like application source code.

Monitoring and Performance
Having reviewed the role of Infrastructure as Code in the provisioning of infrastructure resources and configuration management, we now look at infrastructure health. Consider how the following events could affect the operation of a web site during periods of peak demand:
• Users of a web application are experiencing timeouts because of load balancer latency, making it difficult to browse the product catalogs.
• An application server experiences performance degradation due to insufficient CPU capacity and can no longer process new orders.
• A database that tracks session state doesn't have enough throughput. This causes delays as users transition through the various stages of an application.
These situations describe operational problems arising from infrastructure resources that don't meet their performance expectations. It's important to capture key metrics to assess the health of the environment and take corrective action when problems arise. Metrics provide visibility. With metrics, your organization can respond automatically to events. Without metrics, your organization is blind to what is happening in its infrastructure, and human intervention is required to address every issue. With scalable and
loosely coupled systems written in multiple languages and frameworks, it can be difficult to capture the relevant metrics and logs and respond accordingly. To address this need, AWS offers the Amazon CloudWatch services.55

Amazon CloudWatch
Amazon CloudWatch is a set of services that ingests, interprets, and responds to runtime metrics, logs, and events. CloudWatch automatically collects metrics from many AWS services, such as Amazon EC2, Elastic Load Balancing (ELB), and Amazon DynamoDB.56,57,58 Responses can include built-in actions such as sending notifications, or custom actions handled by AWS Lambda, a serverless, event-driven compute platform.59 The code for Lambda functions becomes part of the infrastructure code base, thereby extending Infrastructure as Code to the operational level. CloudWatch consists of three services: the main CloudWatch service, Amazon CloudWatch Logs, and Amazon CloudWatch Events. We now consider each of these in more detail.

Amazon CloudWatch
The main Amazon CloudWatch service collects and tracks metrics for many AWS services, such as Amazon EC2, ELB, DynamoDB, and Amazon Relational Database Service (RDS). You can also create custom metrics for services you develop, such as applications. CloudWatch issues alarms when metrics reach a given threshold over a period of time. Here are some examples of metrics and potential responses that could apply to the situations mentioned at the start of this section:
• If the latency of ELB exceeds five seconds over two minutes, send an email notification to the systems administrators.
• When the average EC2 instance CPU usage exceeds 60 percent for three minutes, launch another EC2 instance.
• Increase the capacity units of a DynamoDB table when excessive throttling occurs.
You can implement responses to metric-based alarms using built-in notifications or by writing custom Lambda functions in Python, Node.js, Java, or C#. Figure 6 shows an example of how a CloudWatch alarm uses Amazon Simple Notification Service (Amazon SNS) to trigger a DynamoDB capacity update.
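The first alarm example above (ELB latency over five seconds for two minutes) can be sketched with boto3. This is a minimal sketch; the alarm name, load balancer name, and SNS topic ARN are illustrative assumptions, not values from the whitepaper.

```python
# Sketch: CloudWatch alarm for the ELB latency scenario. Names and ARNs
# below are illustrative assumptions.
ALARM_KWARGS = {
    "AlarmName": "elb-high-latency",
    "Namespace": "AWS/ELB",
    "MetricName": "Latency",
    "Dimensions": [{"Name": "LoadBalancerName", "Value": "my-elb"}],
    "Statistic": "Average",
    "Period": 60,            # one-minute evaluation periods...
    "EvaluationPeriods": 2,  # ...checked twice, i.e. over two minutes
    "Threshold": 5.0,        # seconds of average latency
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:notify-admins"],
}

def create_latency_alarm():
    """Create or update the alarm; requires cloudwatch:PutMetricAlarm."""
    import boto3  # imported here so the sketch can be inspected offline
    boto3.client("cloudwatch").put_metric_alarm(**ALARM_KWARGS)
```

Pointing AlarmActions at an SNS topic gives the email notification described above; subscribing a Lambda function to the same topic gives the custom-response path.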
Figure 6: Example of a CloudWatch alarm flow (CloudWatch alarm on ThrottledEvents > 2 over 5 minutes → SNS notification published to a DynamoDB topic → Lambda function calls the DynamoDB UpdateTable API)

Amazon CloudWatch Logs
Amazon CloudWatch Logs monitors and stores logs from Amazon EC2, AWS CloudTrail, and other sources. EC2 instances can ship logging information using the CloudWatch Logs agent and logging tools such as Logstash, Graylog, and Fluentd.60 Logs stored in Amazon S3 can be sent to CloudWatch Logs by configuring an Amazon S3 event to trigger a Lambda function. Ingested log data can be the basis for new CloudWatch metrics, which can in turn trigger CloudWatch alarms. You can use this capability to monitor any resource that generates logs, without writing any code whatsoever. If you need a more advanced response procedure, you can create a Lambda function to take the appropriate actions. For example, a Lambda function can use the SES SendEmail or SNS Publish APIs to publish information to a Slack channel when NullPointerException errors appear in production logs.61,62 Log processing and correlation allow for deeper analysis of application behaviors and can expose internal details that are hard to figure out from metrics. CloudWatch Logs provides both the storage and analysis of logs, and the processing to enable data-driven responses to operational issues.

Amazon CloudWatch Events
Amazon CloudWatch Events produces a stream of events from changes to AWS environments, applies a rules engine, and delivers matching events to specified targets. Examples of events that can be streamed include EC2 instance state changes, Auto Scaling actions, API calls published by CloudTrail, AWS console sign-ins, AWS Trusted Advisor optimization notifications, custom application-level events, and time-scheduled actions. Targets can include built-in actions such as SNS notifications, or custom responses using Lambda functions.
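A CloudWatch Events rule of the kind described here can be sketched with boto3. This is a minimal sketch; the rule name, the chosen instance states, and the SNS target ARN are illustrative assumptions.

```python
import json

# Sketch: route EC2 instance state-change events to an SNS topic.
# The rule name, states, and target ARN are illustrative assumptions.
EVENT_PATTERN = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["stopped", "terminated"]},
}

def create_state_change_rule():
    """Create the rule and attach an SNS target."""
    import boto3  # imported here so the sketch can be inspected offline
    events = boto3.client("events")
    events.put_rule(Name="ec2-state-change",
                    EventPattern=json.dumps(EVENT_PATTERN))
    events.put_targets(
        Rule="ec2-state-change",
        Targets=[{"Id": "notify",
                  "Arn": "arn:aws:sns:us-east-1:123456789012:ops-alerts"}])
```

Swapping the SNS target for a Lambda function ARN gives the custom-response variant; the rule definition itself can live in the infrastructure code base.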
The ability of an infrastructure to respond to selected events offers benefits in both operations and security. From the operations perspective, events can automate maintenance activities without having to manage a separate scheduling system. With regard to information security, events can provide notifications of console logins, authentication failures, and risky API calls recorded by CloudTrail. In both realms, incorporating event responses into the infrastructure code promotes a greater degree of self-healing and a higher level of operational maturity.

Best Practices
Here are some recommendations for best practices related to monitoring:
• Ensure that all AWS resources are emitting metrics.
• Create CloudWatch alarms for metrics that provide the appropriate responses as metric-related events arise.
• Send logs from AWS resources, including Amazon S3 and Amazon EC2, to CloudWatch Logs for analysis, using log stream triggers and Lambda functions.
• Schedule ongoing maintenance tasks with CloudWatch and Lambda.
• Use CloudWatch custom events to respond to application-level issues.

Summary
Monitoring is essential to understand systems behavior and to automate data-driven reactions. CloudWatch collects observations from runtime environments in the form of metrics and logs, and makes those actionable through alarms, streams, and events. Lambda functions written in Python, Node.js, Java, or C# can respond to events, thereby extending the role of Infrastructure as Code to the operational realm and improving the resiliency of operating environments.

Governance and Compliance
Having considered how you can use Infrastructure as Code to monitor the health of your organization's environments, we now turn our focus to technology governance and compliance. Many organizations require visibility into their infrastructures to address industry or regulatory requirements. The
dynamic provisioning capabilities of the cloud pose special challenges because visibility and governance must be maintained as resources are added, removed, or updated. Consider the following situations:
• A user is added to a privileged administration group, and the IT organization is unable to explain when this occurred.
• The network access rules restricting remote management to a limited set of IP addresses are modified to allow access from additional locations.
• The RAM and CPU configurations for several servers have unexpectedly doubled, resulting in a much larger bill than in previous months.
Although you have visibility into the current state of your AWS resource configurations using the AWS CLI and API calls, addressing these situations requires the ability to look at how those resources have changed over time. To address this need, AWS offers the AWS Config service.63

AWS Config
AWS Config enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS Config automatically builds an inventory of your resources and tracks changes made to them. Figure 7 shows an example of an AWS Config inventory of EC2 instances.

Figure 7: Example of an AWS Config resource inventory

AWS Config also provides a clear view of the resource change timeline, including changes in both the resource configurations and the associations of those resources to other AWS resources. Figure 8 shows the information maintained by AWS Config for a VPC resource.

Figure 8: Example of an AWS Config resource timeline

When many different resources are changing frequently and automatically, automating compliance can become as important as automating the delivery pipeline. To respond to changes in the environment, you can use AWS Config rules.

AWS Config Rules
With AWS Config rules, every change triggers an evaluation by the rules associated with the resources. AWS provides a collection of managed rules for common requirements, such as checking that IAM users have strong passwords and belong to the expected groups and policies, or determining whether EC2 instances are in the correct VPCs and security groups.
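The resource change timeline described above can also be queried programmatically. This is a minimal sketch using boto3; the VPC ID and the result limit are illustrative assumptions.

```python
# Sketch: retrieve recent configuration changes for one resource from
# AWS Config. The resource ID and limit below are illustrative.
HISTORY_KWARGS = {
    "resourceType": "AWS::EC2::VPC",
    "resourceId": "vpc-12345678",
    "limit": 10,
}

def recent_changes():
    """Return capture times of recent configuration changes."""
    import boto3  # imported here so the sketch can be inspected offline
    config = boto3.client("config")
    history = config.get_resource_config_history(**HISTORY_KWARGS)
    return [item["configurationItemCaptureTime"]
            for item in history["configurationItems"]]
```

A query like this can answer the "when did this change occur" questions in the situations listed above without manual log archaeology.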
AWS Config rules can quickly identify noncompliant resources and help with reporting and remediation. For validations beyond those provided by the managed rules, AWS Config rules also support the creation of custom rules using Lambda functions.64 These rules become part of the infrastructure code base, thus bringing the concept of Infrastructure as Code to the governance and compliance stages of the information resource lifecycle.

Rule Structure
When a custom rule is invoked through AWS Config rules, the associated Lambda function receives the configuration events, processes them, and returns results. The following function determines whether Amazon Virtual Private Cloud (Amazon VPC) flow logs are enabled on a given Amazon VPC.

import boto3
import json

def evaluate_compliance(config_item, vpc_id):
    if config_item['resourceType'] != 'AWS::EC2::VPC':
        return 'NOT_APPLICABLE'
    elif is_flow_logs_enabled(vpc_id):
        return 'COMPLIANT'
    else:
        return 'NON_COMPLIANT'

def is_flow_logs_enabled(vpc_id):
    ec2 = boto3.client('ec2')
    response = ec2.describe_flow_logs(
        Filter=[{'Name': 'resource-id', 'Values': [vpc_id]}]
    )
    return len(response['FlowLogs']) != 0

def lambda_handler(event, context):
    invoking_event = json.loads(event['invokingEvent'])
    vpc_id = invoking_event['configurationItem']['resourceId']
    compliance_value = evaluate_compliance(
        invoking_event['configurationItem'], vpc_id)
    config = boto3.client('config')
    config.put_evaluations(
        Evaluations=[
            {
                'ComplianceResourceType':
                    invoking_event['configurationItem']['resourceType'],
                'ComplianceResourceId': vpc_id,
                'ComplianceType': compliance_value,
                'OrderingTimestamp':
                    invoking_event['configurationItem']['configurationItemCaptureTime']
            }
        ],
        ResultToken=event['resultToken'])

Figure 9: Example of a Lambda function to support AWS Config rules

In this example, when a configuration event on an Amazon VPC occurs, the event passes to the function lambda_handler. This code extracts the ID of the Amazon VPC and uses the describe_flow_logs API call to determine whether flow logs are enabled. The Lambda function returns a value of COMPLIANT if the flow logs are enabled and NON_COMPLIANT otherwise.

Best Practices
Here are some recommendations for implementing AWS Config in your environments:
• Enable AWS Config for all regions to record the configuration item history, facilitating auditing and compliance tracking.
• Implement a process to respond to changes detected by AWS Config. This could include email notifications and the use of AWS Config rules to respond to changes programmatically.

Summary
AWS Config extends the concept of infrastructure code into the realm of governance and compliance. AWS Config can continuously record the configuration of resources, while AWS Config rules allow for event-driven responses to changes in the configuration of tracked resources. You can use this capability to assist your organization with the monitoring of compliance controls.

Resource Optimization
We now focus on the final stage in the information resource lifecycle: resource optimization. In this stage, administrators review performance data and identify changes needed to optimize the environment around criteria such as security, performance, and cost management. It's important for all application stakeholders to regularly evaluate the infrastructure to determine whether it is being used optimally. Consider the following questions:
• Are there provisioned resources that are underutilized?
• Are there ways to reduce the charges associated with the operating environment?
• Are there any suggestions for improving the performance of the provisioned resources?
• Are there any service limits that apply to the resources used in the environment, and if so, is the current usage of resources close to exceeding those limits?
To answer these questions, we need a way to interrogate the operating environment, retrieve data related to optimization, and use that data to make meaningful decisions. To address this need, AWS offers AWS Trusted Advisor.65

AWS Trusted Advisor
AWS Trusted Advisor helps you observe best practices by scanning your AWS resources and comparing their usage against AWS best practices in four categories: cost optimization, performance, security, and fault tolerance. As part of ongoing improvement to your infrastructure and applications, taking advantage of Trusted Advisor can help keep your resources provisioned optimally. Figure 10 shows an example of the Trusted Advisor dashboard.

Figure 10: Example of the AWS Trusted Advisor dashboard

Checks
Trusted Advisor provides a variety of checks to determine whether the infrastructure is following best practices. The checks include detailed descriptions of recommended best practices, alert criteria, guidelines for action, and a list of useful resources on the topic. Trusted Advisor provides the results of the checks and can also provide ongoing weekly notifications for status updates and cost savings. All customers have access to a core set of Trusted Advisor checks. Customers with AWS Business or Enterprise support can access all Trusted Advisor checks and the Trusted Advisor APIs. Using the APIs, you can obtain information from Trusted Advisor and take corrective actions. For example, a program could leverage Trusted Advisor to examine current account service limits. If current resource usage approaches the limits, you can automatically create a support case to increase them. Additionally, Trusted Advisor now integrates with Amazon CloudWatch Events. You can design a Lambda function to notify a Slack channel when the status of Trusted Advisor checks changes.
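The service-limits example above can be sketched with boto3 through the AWS Support API, which requires a Business or Enterprise support plan and is served from the us-east-1 endpoint. The "Service Limits" check-name match and the status filtering below are illustrative assumptions.

```python
# Sketch: read the Trusted Advisor "Service Limits" check via the AWS
# Support API (Business/Enterprise support required; check name and the
# status conventions below are illustrative assumptions).
def service_limit_flags():
    """Return the flagged resources reported by the Service Limits check."""
    import boto3  # imported here so the sketch can be inspected offline
    support = boto3.client("support", region_name="us-east-1")
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    limits = next(c for c in checks if c["name"] == "Service Limits")
    result = support.describe_trusted_advisor_check_result(
        checkId=limits["id"], language="en")
    return result["result"]["flaggedResources"]

def near_limit(flagged):
    """Keep only resources whose check status indicates a problem."""
    return [r for r in flagged if r.get("status") in ("warning", "error")]
```

The filtered list could then feed a create_case call against the same Support API to request a limit increase automatically.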
These examples illustrate how the concept of Infrastructure as Code can be extended to the resource optimization level of the information resource lifecycle.

Best Practices
The best practices for AWS Trusted Advisor appear below:
• Subscribe to Trusted Advisor notifications through email or an alternative delivery system.
• Use distribution lists, and ensure that the appropriate recipients are included on all such notifications.
• If you have AWS Business or Enterprise support, use the AWS Support API in conjunction with Trusted Advisor notifications to create cases with AWS Support to perform remediation.

Summary
You must continuously monitor your infrastructure to optimize the infrastructure resources with regard to performance, security, and cost. AWS Trusted Advisor provides the ability to use APIs to interrogate your AWS infrastructure for recommendations, thus extending Infrastructure as Code to the optimization phase of the information resource lifecycle.

Next Steps
You can begin the adoption of Infrastructure as Code in your organization by viewing your infrastructure specifications in the same way you view your product code. AWS offers a wide range of tools that give you more control and flexibility over how you provision, manage, and operationalize your cloud infrastructure. Here are some key actions you can take as you implement Infrastructure as Code in your organization:
• Start by using a managed source control service, such as AWS CodeCommit, for your infrastructure code.
• Incorporate a quality control process via unit tests and static code analysis before deployments.
• Remove the human element and automate infrastructure provisioning, including infrastructure permission policies.
• Create idempotent infrastructure code that you can easily redeploy.
• Roll out every new update to
your infrastructure via code by updating your idempotent stacks. Avoid making one-off changes manually.
• Embrace end-to-end automation.
• Include infrastructure automation work as part of regular product sprints.
• Make your changes auditable, and make logging mandatory.
• Define common standards across your organization, and continuously optimize.
By embracing these principles, your infrastructure can dynamically evolve and accelerate with your rapidly changing business needs.

Conclusion
Infrastructure as Code enables you to encode the definition of infrastructure resources into configuration files and control their versions, just like application software. We can now update our lifecycle diagram and show how AWS supports each stage through code.

Figure 11: Information resource lifecycle with AWS (AWS CloudFormation, AWS OpsWorks for Chef Automate, Amazon EC2 Systems Manager, Amazon CloudWatch, AWS Config, AWS Trusted Advisor)

AWS CloudFormation, AWS OpsWorks for Chef Automate, Amazon EC2 Systems Manager, Amazon CloudWatch, AWS Config, and AWS Trusted Advisor enable you to integrate the concept of Infrastructure as Code into all phases of the project lifecycle. By using Infrastructure as Code, your organization can automatically deploy consistently built environments that, in turn, can help your organization to improve its overall maturity.

Contributors
The following individuals and organizations contributed to this document:
• Hubert Cheung, solutions architect, Amazon Web Services
• Julio Faerman, technical evangelist, Amazon Web Services
• Balaji Iyer, professional services consultant, Amazon Web Services
• Jeffrey S. Levine, solutions architect, Amazon Web Services

Resources
Refer to the following resources to learn more about our best practices related to Infrastructure as Code.

Videos
• AWS re:Invent 2015 – DevOps at Amazon66
• AWS Summit 2016 – DevOps, Continuous Integration and
Deployment on AWS67

Documentation & Blogs
• DevOps and AWS68
• What is Continuous Integration?69
• What is Continuous Delivery?70
• AWS DevOps Blog71

Whitepapers
• Introduction to DevOps on AWS72
• AWS Operational Checklists73
• AWS Security Best Practices74
• AWS Risk and Compliance75

AWS Support
• AWS Premium Support76
• AWS Trusted Advisor77

Notes
1 https://aws.amazon.com/cloudformation/
2 https://aws.amazon.com/ec2/
3 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html
4 http://aws.amazon.com/iam
5 https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cloudformation-limits.html
6 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-stack.html
7 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-crossstackref.html
8 http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_ValidateTemplate.html
9 http://aws.amazon.com/s3
10 https://stelligent.com/2016/04/07/finding-security-problems-early-in-the-development-process-of-a-cloudformation-template-with-cfn-nag/
11 https://www.npmjs.com/package/cfn-check
12 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html
13 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#organizingstacks
14 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#use-iam-to-control-access
15 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#reuse
16 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#nested
17 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#cross-stack
18 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#creds
19 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#parmtypes
20
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#parmconstraints
21 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#cfninit
22 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#helper-scripts
23 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#validate
24 https://aws.amazon.com/ec2/systems-manager/parameter-store/
25 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#donttouch
26 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#cfn-best-practices-changesets
27 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#stackpolicy
28 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#cloudtrail
29 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#code
30 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#update-ec2-linux
31 https://aws.amazon.com/ec2/systems-manager/
32 https://aws.amazon.com/opsworks/chefautomate/
33 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/execute-remote-commands.html
34 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/systems-manager-inventory.html
35 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/systems-manager-state.html
36 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/systems-manager-ami.html
37 https://aws.amazon.com/ec2/systems-manager/patch-manager/
38 https://aws.amazon.com/ec2/systems-manager/automation/
39 https://aws.amazon.com/ec2/systems-manager/parameter-store/
40 https://aws.amazon.com/blogs/mt/replacing-a-bastion-host-with-amazon-ec2-systems-manager/
41 http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-multiple.html
42 http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-configuring-access-iam-create.html
43 https://aws.amazon.com/blogs/mt/replacing-a-bastion-host-with-amazon-ec2-systems-
manager/
44 http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-configuration-manage.html
45 https://aws.amazon.com/blogs/security/how-to-remediate-amazon-inspector-security-findings-automatically/
46 http://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-sharing.html
47 http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html
48 http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-walk.html
49 https://aws.amazon.com/blogs/compute/managing-secrets-for-amazon-ecs-applications-using-parameter-store-and-iam-roles-for-tasks/
50 https://en.wikipedia.org/wiki/Lint_(software)
51 https://docs.chef.io/rubocop.html
52 https://docs.chef.io/foodcritic.html
53 https://docs.chef.io/chefspec.html
54 https://docs.chef.io/kitchen.html
55 https://aws.amazon.com/cloudwatch/
56 https://aws.amazon.com/dynamodb/
57 https://aws.amazon.com/ec2/
58 https://aws.amazon.com/elasticloadbalancing/
59 https://aws.amazon.com/lambda/
60 http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
61 http://docs.aws.amazon.com/ses/latest/APIReference/API_SendEmail.html
62 http://docs.aws.amazon.com/sns/latest/api/API_Publish.html
63 https://aws.amazon.com/config/
64 http://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules.html
65 https://aws.amazon.com/premiumsupport/trustedadvisor/
66 https://www.youtube.com/watch?v=esEFaY0FDKc
67 https://www.youtube.com/watch?v=Du rzNeBQ WU
68 https://aws.amazon.com/devops/
69 https://aws.amazon.com/devops/continuous-integration/
70 https://aws.amazon.com/devops/continuous-delivery/
71 https://aws.amazon.com/blogs/devops/
72 https://d0.awsstatic.com/whitepapers/AWS_DevOps.pdf
73 https://media.amazonwebservices.com/AWS_Operational_Checklists.pdf
74 https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf
75 https://d0.awsstatic.com/whitepapers/compliance/AWS_Risk_and_Compliance_Whitepaper.pdf
76 https://aws.amazon.com/premiumsupport/
77 https://aws.amazon.com/premiumsupport/trustedadvisor/",General,consultant,Best Practices
Infrastructure_Event_Readiness,Archived
Infrastructure Event Readiness: AWS Guidelines and Best Practices, December 2018

This paper has been archived. For the latest technical guidance about the AWS Cloud, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Introduction 1
Infrastructure Event Readiness Planning 2
What is a Planned Infrastructure Event? 2
What Happens During a Planned Infrastructure Event?
2
Design Principles 4
Discrete Workloads 4
Automation 8
Diversity/Resiliency 10
Mitigating Against External Attacks 13
Cost Optimization 16
Event Management Process 17
Infrastructure Event Schedule 17
Planning and Preparation 18
Operational Readiness (Day of Event) 27
Post Event Activities 29
Conclusion 31
Contributors 32
Further Reading 32
Appendix 33
Detailed Architecture Review Checklist 33

Abstract
This whitepaper describes guidelines and best practices for customers with production workloads deployed on Amazon Web Services (AWS) who want to design and provision their cloud-based applications to handle planned scaling events, such as product launches or seasonal traffic spikes, gracefully and dynamically. We address general design principles, and provide specific best practices and guidance across multiple conceptual areas of infrastructure event planning. We then describe operational readiness considerations and practices, and post-event activities.

Amazon Web Services – Infrastructure Event Readiness Page 1

Introduction
Infrastructure event readiness is about designing and preparing for anticipated, significant events that have an impact on your business. During such events, it is critical that the company web service is reliable, responsive, and highly fault tolerant under all conditions and changes in traffic patterns. Examples of such events are expansion into new territories, new product or feature launches, seasonal events, or significant business announcements or marketing events. An infrastructure event that is not properly planned can have a negative impact on your company's business reputation, continuity, or finances. Infrastructure event failures can take the form of unanticipated service failures, load-related performance degradations, network latency, storage capacity limitations, system limits (such as API call rates), finite quantities of available IP addresses, poor understanding of the behaviors of components of an application
stack due to insufficient monitoring, unanticipated dependencies on a third-party service or component not set up for scale, or some other unforeseen error condition. To minimize the risk of unanticipated failures during an important event, companies should invest time and resources to plan and prepare, to train employees, and to design and document relevant processes. The amount of investment in infrastructure event planning for a particular cloud-enabled application or set of applications can vary depending on the system's complexity and global reach. Regardless of the scope or complexity of a company's cloud presence, the design principles and best practices guidance provided in this whitepaper are the same.

With Amazon Web Services (AWS), your company can scale up its infrastructure in preparation for a planned scaling event on a dynamic, adaptable, pay-as-you-go basis. Amazon's rich array of elastic and programmable products and services gives your company access to the same highly secure, reliable, and fast infrastructure that Amazon uses to run its own global network, and enables your company to nimbly adapt in response to its own rapidly changing business requirements. This whitepaper outlines best practices and design principles to guide your infrastructure event planning and execution, and shows how you can use AWS services to ensure that your applications are ready to scale up and scale out as your business needs dictate.

Infrastructure Event Readiness Planning
This section describes what constitutes a planned infrastructure event and the kinds of activities that typically occur during such an event.

What is a Planned Infrastructure Event?
A planned infrastructure event is a business-driven, anticipated, and scheduled event window during which it is business critical to maintain a highly responsive, highly scalable, and fault-tolerant web service. Such events can be driven by marketing campaigns, news events related to the company's line of business, product launches, territorial expansion, or any similar activity that results in additional traffic to a company's web-based applications and underlying infrastructure.

What Happens During a Planned Infrastructure Event?
The primary concern in most planned infrastructure events is being able to add capacity to your infrastructure to meet higher traffic demands. In a traditional on-premises environment provisioned with physical compute, storage, and networking resources, a company's IT department provisions additional capacity based on their best estimates of a theoretical maximum peak. This incurs the risk of insufficiently provisioning capacity, and the company suffering business loss due to overloaded web servers, slow response times, and other runtime errors.

Within the AWS Cloud, infrastructure is programmable and elastic. This means it can be provisioned quickly in response to real-time demand. Additionally, infrastructure can be configured to respond to system metrics in an automated, intelligent, and dynamic fashion, growing or shrinking resources such as web server clusters, provisioned throughput, storage capacity, available compute cores, number of streaming shards, and so on, as needed.

Additionally, many AWS services are fully managed, including storage, database, analytic, application, and deployment services. As a result, AWS customers don't have to worry about the complexities of configuring these services for a high-traffic event. AWS fully managed services are designed for scalability and high availability. Typically, in preparation for a planned infrastructure event, AWS customers conduct a system review
to evaluate their application architecture and operational readiness, considering both scalability and fault tolerance. Traffic estimates are considered and compared to normal business activity performance. Capacity metrics and estimates of required additional capacity are determined. Potential bottlenecks and third-party upstream and downstream dependencies are identified and addressed. Geography is also considered: if the planned event includes an expansion of territory or the introduction of new audiences, expansion into additional AWS Regions or Availability Zones is undertaken in advance of the planned event.

A review of the customer's AWS dynamic system settings, such as Auto Scaling, load balancing, geo-routing, high availability, and failover measures, is also conducted to ensure these are configured to correctly handle the expected increases in volume and transaction rate. Static settings, such as AWS resource limits and the location of content delivery network (CDN) origin servers, are also considered and modified as needed. Monitoring and notification mechanisms are reviewed and enhanced as needed to provide real-time transparency into events as they occur, and for post-mortem analysis after the planned event has completed.

During the planned event, AWS customers can open support cases with AWS for troubleshooting or real-time support (such as a server going down). Customers who subscribe to the AWS Enterprise Support plan have the additional flexibility to talk with support engineers immediately and to raise critical-severity cases if rapid response is required. After the event, AWS resources are designed to automatically scale down to appropriate levels to match traffic levels, or continue to scale up as events dictate.

Design Principles
Preparation for planned events starts with a design, at the beginning of any implementation of a cloud-based application stack or workload, that follows best practices.

Discrete Workloads
A design based on best practices is essential to the effective management of planned event workloads at both normal and elevated traffic levels. From the start, design discrete and independent functional groupings of resources centered on a specific business application or product. This section describes the multiple dimensions to this design goal.

Tagging
Tags are used to label and organize resources. They are an essential component of managing infrastructure resources during a planned infrastructure event. On AWS, tags are customer-managed key-value labels applied to an individual managed resource, such as a load balancer or an Amazon Elastic Compute Cloud (Amazon EC2) instance. By referencing well-defined tags that have been attached to AWS resources, you can easily identify which resources within your overall infrastructure comprise your planned event workload. Then, using this information, you can analyze it for preparedness. Tags can also be used for cost allocation purposes.

Tags can be used to organize, for example, Amazon EC2 instances, Amazon Machine Images (AMIs), load balancers, security groups, Amazon Relational Database Service (Amazon RDS) resources, Amazon Virtual Private Cloud (Amazon VPC) resources, Amazon Route 53 health checks, and Amazon Simple Storage Service (Amazon S3) buckets. For more information on effective tagging strategies, refer to AWS Tagging Strategies.1 For examples of how to create and manage tags and put them in Resource Groups, see Resource Groups and Tagging for AWS.2

Loose Coupling
When architecting for the cloud, design every component of your application stack to operate as independently as possible from the other components. This gives cloud-based workloads the advantage of resiliency and scalability. You can reduce interdependencies between components in a cloud-based application stack by designing each component as a black box with well-defined interfaces for inputs
and outputs (for example, RESTful APIs). If the components aren't applications but are services that together comprise an application, this is known as a microservices architecture. For communication and coordination between application components, you can use event-driven notification mechanisms, such as AWS message queues, to pass messages between the components, as shown in Figure 1.

Figure 1: Loose coupling using RESTful interfaces and message queues

Using mechanisms such as that illustrated above, a change or failure in one component has much less chance of cascading to other components. For example, if a server in a multi-tiered application stack becomes unresponsive, applications that are loosely coupled can be designed to bypass the unresponsive tier or switch to degraded-mode alternative transactions.

Loosely coupled application components using intermediate message queues can also be designed for asynchronous integration. Because an application's components do not employ direct point-to-point communication, but instead use an intermediate and persistent messaging layer (for example, an Amazon Simple Queue Service (SQS) queue or a streaming data mechanism like Amazon Kinesis Streams), they can withstand sudden increases in activity in one component while downstream components process the incoming queue.

If there is a component failure, the messages persist in the queues or streams until the failed component can recover. For more information on message queueing and notification services offered by AWS, refer to Amazon Simple Queue Service.3

Services, Not Servers
Managed services and service endpoints free you from having to worry about security or access, backups or restores, patch management or change control, monitoring or reporting, setups or administration of traditional systems management details. These cloud resources can be provisioned
prior to an event for high availability and resilience using multiple Availability Zone (or, in some cases, multiple Region) configurations. Cloud resources can be scaled up or down, often with no downtime, and you can configure them on the fly through either the AWS Management Console or API/CLI calls. Managed services and service endpoints can be used to power customer application stacks with capabilities such as relational and NoSQL database systems, data warehousing, event notification, object and file storage, real-time streaming, big data analytics, machine learning, search, transcoding, and many others.

An endpoint is a URL that is the entry point for an AWS service. For example, https://dynamodb.us-west-2.amazonaws.com is an entry point for the Amazon DynamoDB service. By using managed services and their service endpoints, you can leverage the power of production-ready resources as part of your design solution for handling increased volume, reach, and transaction rates during a planned infrastructure event. You don't need to provision and administer your own servers that perform the same functions as managed services. For more information on AWS service endpoints, see AWS Regions and Endpoints.4 See also Amazon EMR,5 Amazon RDS,6 and Amazon ECS7 for examples of managed services that have endpoints.

Serverless Architectures
Leverage AWS Lambda as a strategy to effectively respond to dynamically changing processing loads during a planned infrastructure event. Lambda is an event-driven, serverless computing platform. It's a dynamically invoked service that runs Python, Node.js, or Java code in response to events (via notifications) and automatically manages the compute resources specified by that code. Lambda doesn't require provisioning of Amazon Elastic Compute Cloud (EC2) resources prior to the event. Amazon Simple Notification Service (Amazon SNS) can be configured to trigger Lambda functions. See Amazon Simple
Notification Service8 for details.

Lambda serverless functions can execute code that accesses or invokes other AWS services, such as database operations, data transformations, object or file retrieval, or even scaling operations, in response to external events or internal system load metrics. AWS Lambda can also generate new notifications or events of its own, and even launch other Lambda functions.

AWS Lambda provides the ability to exercise fine control over scaling operations during a planned infrastructure event. For example, Lambda can be used to extend the functionality of Auto Scaling operations to perform actions such as notifying third-party systems that they also need to scale, or adding additional network interfaces to new instances as they are provisioned. See Using AWS Lambda with Auto Scaling Lifecycle Hooks9 for examples of how to use Lambda to customize scaling operations. For more information on AWS Lambda, see What is AWS Lambda?10

Automation

Auto Scaling
A critical component of infrastructure event planning is Auto Scaling. Being able to automatically scale an application's capacity up or down according to predefined conditions helps to maintain application availability during the fluctuations in traffic patterns and volume that occur in a planned infrastructure event. AWS provides Auto Scaling capability across many of its resources, including EC2 instances, database capacity, containers, and more. Auto Scaling can be used to scale groupings of instances, such as a fleet of servers that comprise a cloud-based application, so that they scale automatically based on specified criteria. Auto Scaling can also be used to maintain a fixed number of instances, even when an instance becomes unhealthy. This automatic scaling and maintaining of the number of instances is the core functionality of the Auto Scaling service. Auto Scaling maintains the number of instances that you specify by performing periodic health
checks on the instances in the group. If an instance becomes unhealthy, the group terminates the unhealthy instance and launches another instance to replace it. Auto Scaling policies can be used to automatically increase or decrease the number of running EC2 instances in a group of servers to meet changing conditions. When the scaling policy is in effect, the Auto Scaling group adjusts the desired capacity of the group and launches or terminates instances as needed, either dynamically or, alternatively, on a schedule if there is a known and predictable ebb and flow of traffic.

Restarts and Recovery
An important design element in any planned infrastructure event is to have procedures and automation in place to handle compromised instances or servers, and to be able to recover or restart them on the fly. Amazon EC2 instances can be set up to automatically recover when a system status check of the underlying hardware fails. The instance reboots (on new hardware if necessary) but retains its instance ID, IP address, Elastic IP addresses, Amazon Elastic Block Store (EBS) volume attachments, and other configuration details. For more information on auto recovery of EC2 instances, see Auto Recovery of Amazon EC2.11

Configuration Management/Orchestration
Integral to a robust, reliable, and responsive planned infrastructure event strategy is the incorporation of configuration management and orchestration tools for individual resource state management and application stack deployment. Configuration management tools typically handle the provisioning and configuration of server instances, load balancers, Auto Scaling, individual application deployment, and application health monitoring. They also provide the ability to integrate with additional services such as databases, storage volumes, and caching layers.

Orchestration tools, one layer of abstraction above configuration management, provide the means to specify the
relationships of these various resources, allowing customers to provision and manage multiple resources as a unified cloud application infrastructure without worrying about resource dependencies. Orchestration tools define and describe individual resources, as well as their relationships, as code. As a result, this code can be version controlled, facilitating the ability to (1) roll back to prior versions or (2) create new branches for testing and development purposes. It is also possible to define orchestrations and configurations optimized for an infrastructure event, and then roll back to the standard configuration following such an event.

Amazon Web Services recommends the following tools to achieve hardware-as-code deployments and orchestrations:

• AWS Config with Config Rules, or an AWS Config Partner, to provide a detailed, visual, and searchable inventory of AWS resources, configuration history, and resource configuration compliance

• AWS CloudFormation or third-party AWS resource orchestration tools to manage AWS resource provisioning, update, and termination

• AWS OpsWorks, Elastic Beanstalk, or third-party server configuration management tools to manage operating system (OS) and application configuration changes

See Infrastructure Configuration Management for more details about ways to manage hardware as code.12

Diversity/Resiliency

Remove Single Points of Failure and Bottlenecks
When planning for an infrastructure event, analyze your application stacks for any single points of failure (SPOF) or performance bottlenecks. For example, is there any single instance of a server, data volume, database, NAT gateway, or load balancer that would cause the entire application, or significant portions of it, to stop working if it were to fail?
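The SPOF question above can be turned into a simple automated check against a resource inventory. The sketch below is illustrative, not an AWS API: the inventory shape and the `find_single_points_of_failure` helper are hypothetical, and in practice the inventory could be built from tagged resources via the AWS CLI or SDK.

```python
# Hypothetical SPOF audit: given a simplified inventory of provisioned
# resources, flag any resource type that exists as a single instance and
# therefore has no redundancy. The dict shape is an assumption for
# illustration only.
from collections import Counter

def find_single_points_of_failure(resources):
    """Return resource types that appear exactly once in the inventory."""
    counts = Counter(r["type"] for r in resources)
    return sorted(t for t, n in counts.items() if n == 1)

inventory = [
    {"type": "web-server", "az": "us-east-1a"},
    {"type": "web-server", "az": "us-east-1b"},
    {"type": "nat-gateway", "az": "us-east-1a"},  # only one: a SPOF
    {"type": "database", "az": "us-east-1a"},     # only one: a SPOF
]

print(find_single_points_of_failure(inventory))  # ['database', 'nat-gateway']
```

A real audit would also consider Availability Zone placement (two instances in the same zone still share a failure domain), which the same counting approach can cover by keying on `(type, az)` pairs.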
Secondly, as the cloud-based application scales up in traffic or transaction volume, is there any part of the infrastructure that will encounter a physical limit or constraint, such as network bandwidth or CPU processing cycles, as the volume of data grows along the data flow path? These risks, once identified, can be mitigated in a variety of ways.

Design for Failure
As mentioned earlier, using loose coupling and message queues with RESTful interfaces is a good strategy for achieving resiliency against individual resource failures or fluctuations in traffic or transaction volume. Another dimension of resilient design is to configure application components to be as stateless as possible. Stateless applications require no knowledge of prior transactions and have loose dependency on other application components. They store no session information. A stateless application can scale horizontally as a member of a pool or cluster, since any request can be handled by any instance within the pool or cluster. You can add more resources as needed, using Auto Scaling and health check criteria to programmatically handle fluctuating compute capacity and throughput requirements. Once an application is designed to be stateless, it could potentially be refactored onto a serverless architecture, using Lambda functions in place of EC2 instances. Lambda functions also have built-in dynamic scaling capability.

In the situation where an application resource such as a web server cannot avoid having state data about transactions, consider designing your applications so that the portions of the application that are stateful are decoupled from the servers themselves. For example, an HTTP cookie or equivalent state data could be stored in a database such as DynamoDB, or in an S3 bucket or EBS volume. If you have a complex multistep workflow where there is a need to track the current state of each step in the workflow, Amazon Simple Workflow
Service (SWF) can be used to centrally store execution history and make these workloads stateless.

Another resiliency measure is to employ distributed processing. For use cases that require processing large amounts of data in a timely manner, where one single compute resource can't meet the need, you can design your workloads so that tasks and data are partitioned into smaller fragments and executed in parallel across a cluster of compute resources. Distributed processing is stateless, since the independent nodes on which the partitioned data and tasks are being processed may fail. In this case, auto restart of failed tasks on another node of the distributed processing cluster is automatically handled by the distributed processing scheduling engine. AWS offers a variety of distributed data processing engines, such as Amazon EMR, Amazon Athena, and Amazon Machine Learning, each of which is a managed service providing endpoints and shielding you from any complexity involving patching, maintenance, scaling, failover, and so on. For real-time processing of streaming data, Amazon Kinesis Streams can partition data into multiple shards that can be processed by multiple consumers of that data, such as Lambda functions or EC2 instances. For more information on these types of workloads, see Big Data Analytics Options on AWS.13

Multi-Zone and Multi-Region
AWS services are hosted in multiple locations worldwide. These locations are composed of Regions and Availability Zones. A Region is a separate geographic area. Each Region has multiple, isolated locations known as Availability Zones. AWS provides customers with the ability to place resources, such as instances and data, in multiple locations. Design your applications so that they are distributed across multiple Availability Zones and Regions. In conjunction with distributing and replicating resources across Availability Zones and Regions, design your apps using load
balancing and failover mechanisms so that your application stacks automatically re-route data flows and traffic to these alternative locations in the event of a failure.

Load Balancing
With the Elastic Load Balancing (ELB) service, a fleet of application servers can be attached to a load balancer and yet be distributed across multiple Availability Zones. When the EC2 instances in a particular Availability Zone sitting behind a load balancer fail their health checks, the load balancer stops sending traffic to those nodes. When combined with Auto Scaling, the number of healthy nodes is automatically rebalanced with the other Availability Zones, and no manual intervention is required. It's also possible to have load balancing across Regions by using Amazon Route 53 and latency-based DNS routing algorithms. See Latency Based Routing for more information.14

Load Shedding Strategies
The concept of load shedding in cloud-based infrastructures consists of redirecting or proxying traffic elsewhere to relieve pressure on the primary systems. In some cases, the load shedding strategy can be a triage exercise where you choose to drop certain streams of traffic, or reduce functionality of your applications, to lighten the processing load and be able to serve at least a subset of the incoming requests. There are numerous techniques that can be used for load shedding, such as caching or latency-based DNS routing. With latency-based DNS routing, the IP addresses of those application servers that are responding with the least latency are returned by the DNS servers in response to name resolution requests. Caching can take place close to the application, using an in-memory caching layer such as Amazon ElastiCache. You can also deploy a caching layer that is closer to the user's edge location, using a global content distribution network such as Amazon CloudFront. For more information about ElastiCache and CloudFront, see
Getting Started with ElastiCache15 and Amazon CloudFront CDN.16

Mitigating Against External Attacks

Distributed Denial of Service (DDoS) Attacks
Planned infrastructure events can attract attention, which may increase the risk of your application being targeted by a Distributed Denial of Service (DDoS) attack. A DDoS attack is a deliberate attempt to make your application unavailable to users by flooding it with traffic from multiple sources. These attacks include network layer attacks, which aim to saturate the Internet capacity of a network or application; transport layer attacks, which aim to exhaust the connection-handling capacity of a device; and application layer attacks, which aim to exhaust the ability of an application to process requests.

There are numerous actions you can take at each of these layers to mitigate against such an attack. For example, you can protect against saturation events by overprovisioning network and server capacity, or by implementing auto scaling technologies that are configured to react to attack patterns. You can also make use of purpose-built DDoS mitigation systems, such as application firewalls, dynamic load shedding at the edge using Content Distribution Networks (CDNs), network layer threat pattern recognition and filtering, or routing your traffic or requests through a DDoS mitigation provider.

AWS provides automatic DDoS protection as part of AWS Shield Standard, which is included with all AWS services, in every AWS Region, at no additional cost. When a network or transport layer attack is detected, it is automatically mitigated at the AWS border before the traffic is routed to an AWS Region. To make use of this capability, it is important to architect your application for DDoS resiliency. The optimal DDoS resiliency is achieved by using services that operate from the AWS Global Edge Network, like Amazon CloudFront and Amazon Route 53, which provides comprehensive protection
against all known network and transport layer attacks. For a reference architecture that includes these services, see Figure 2.

Figure 2: DDoS-resilient reference architecture. The figure summarizes DDoS mitigation best practices (BP1 through BP7) across AWS Edge Location services (Amazon CloudFront (BP1) with AWS WAF (BP2), and Amazon Route 53 (BP3)) and AWS Region services (Elastic Load Balancing (BP6), Amazon API Gateway (BP4), Amazon VPC (BP5), and Amazon EC2 with Auto Scaling (BP7)), mapping each service to the mitigations it supports: Layer 3 (for example, UDP reflection) attack mitigation, Layer 4 (for example, SYN flood) attack mitigation, Layer 6 (for example, TLS) attack mitigation, and attack surface reduction.

This reference architecture includes several AWS services that can help you improve your web application's resiliency against DDoS attacks. In addition to architecting for DDoS resiliency, you can optionally subscribe to AWS Shield Advanced to receive additional features that are useful for monitoring your application, mitigating larger or more complex DDoS attacks, and managing the cost of an attack. With AWS Shield Advanced, you can monitor for DDoS events via the provided APIs and AWS CloudWatch metrics. In case of an attack that causes impact to the availability of your application, you can raise a case with AWS Support and, where necessary, receive escalation to the AWS DDoS Response Team (DRT). You also receive AWS WAF for AWS Shield Advanced protected resources, and AWS Firewall Manager, at no additional cost. If an attack causes an increase in your AWS bill, AWS Shield Advanced allows you to request a limited refund of costs related to the DDoS event. To learn more about using AWS Shield Advanced, see Getting Started with AWS Shield Advanced.17

Bots and Exploits
To mitigate application layer attacks, consider operating your application at scale and implementing a Web Application Firewall (WAF), which allows you to
identify and block unwanted requests. The combination of these techniques can help you mitigate high-volume bots that could otherwise harm the availability of your application, and lower-volume bots that could steal content or exploit vulnerabilities. Use these mitigation techniques to significantly reduce the volume of unwanted requests that reach your application, and to have resilience against unwanted requests that are not blocked.

On AWS, you can implement a WAF from the AWS Marketplace, or use AWS WAF, which allows you to build your own rules or subscribe to rules managed by Marketplace vendors. With AWS WAF, you can use regular rules to block known bad patterns, or rate-based rules to temporarily block requests from sources that match conditions you define and exceed a given rate. Deploy these rules using an AWS CloudFormation template. If you have applications distributed across many AWS accounts, deploy and manage AWS WAF rules for your entire organization by using AWS Firewall Manager. To learn more about deploying preconfigured protections with AWS WAF, see AWS WAF Security Automations.18 To learn more about rules available from Marketplace vendors, see Managed Rules for AWS WAF.19 To learn more about managing rules with AWS Firewall Manager, see Getting Started with AWS Firewall Manager.20

Cost Optimization

Reserved vs. Spot vs. On-Demand
Controlling the costs of provisioned resources in the cloud is closely tied to the ability to dynamically provision these resources based on systems metrics and other performance and health check criteria. With Auto Scaling, resource utilization can be closely matched to actual processing and storage needs, minimizing wasteful expense and underutilized resources. Another dimension of cost control in the cloud is being able to choose from the following: On-Demand Instances, Reserved Instances (RIs), or Spot Instances. In addition, DynamoDB offers a reserved capacity capability. With On-Demand Instances, you pay for only the Amazon EC2 instances you
use. On-Demand Instances let you pay for compute capacity by the hour, with no long-term commitments.

Amazon EC2 Reserved Instances provide a significant discount (up to 75%) compared to On-Demand Instance pricing, and provide a capacity reservation when used in a specific Availability Zone. Aside from the capacity reservation and the billing discount, there is no functional difference between Reserved Instances and On-Demand Instances. Spot Instances allow you to bid on spare Amazon EC2 computing capacity. Spot Instances are often available at a discount compared to On-Demand pricing, which significantly reduces the cost of running your cloud-based applications.

When designing for the cloud, some use cases are better suited to the use of Spot Instances than others. For example, since Spot Instances can be retired at any time once the Spot price goes above your bid, you should consider running Spot Instances only for relatively stateless and horizontally scaled application stacks. For stateful applications or expensive processing loads, Reserved Instances or On-Demand Instances may be more appropriate. For mission-critical applications where capacity limitations are out of the question, Reserved Instances are the optimal choice. See Reserved Instances21 and Spot Instances22 for more details.

Event Management Process
Planning for an infrastructure event is a group activity involving application developers, administrators, and business stakeholders. Weeks prior to an infrastructure event, establish a cadence of recurring meetings involving the key technical staff who own and operate each of the key infrastructure components of the web service.

Infrastructure Event Schedule
Planning for an infrastructure event should begin several weeks prior to the date of the event. A typical timeline in the planned event lifecycle is shown in Figure 3.

Figure 3:
Typical infrastructure event timeline

Planning and Preparation

Schedule
We recommend the following schedule of activities in the weeks leading up to an infrastructure event:

Week 1:
• Nominate a team to drive planning and engineering for the infrastructure event
• Conduct meetings between stakeholders to understand the parameters of the event (scale, duration, time, geographic reach, affected workloads) and the success criteria
• Engage any downstream or upstream partners and vendors

Weeks 2–3:
• Review architecture and adjust as needed
• Conduct operational review; adjust as needed
• Follow best practices described in this paper and in footnoted references
• Identify risks and develop mitigation plans
• Develop an event runbook

Week 4:
• Review all cloud vendor services that require scaling based on expected load
• Check service limits and increase limits as needed
• Set up monitoring dashboards and alerts on defined thresholds

Architecture Review
An essential part of your preparation for an infrastructure event is an architectural review of the application stack that will experience the upsurge in traffic. The purpose of the review is to verify and identify potential areas of risk to either the scalability or reliability of the application, and to identify opportunities for optimization in advance of the event. AWS provides its Enterprise Support customers a framework for reviewing customer application stacks that is centered around five design pillars. These are Security, Reliability, Performance Efficiency, Cost Optimization, and Operational Excellence, as described in Table 1.

Table 1: Pillars of well-architected applications

Pillar Name | Pillar Definition | Relevant Areas of Interest
Security | The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies | Identity Management; Encryption; Monitoring; Logging; Key Management; Dedicated Instances; Compliance; Governance
Reliability | The ability of a system to recover from infrastructure or service failures, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues | Service Limits; Multiple Availability Zones and Regions; Scalability; Health Checks/Monitoring; Backup/Disaster Recovery (DR); Networking; Self-Healing Automation
Performance Efficiency | The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve | Right AWS Services; Resource Utilization; Storage Architecture; Caching; Latency Requirements
Cost Optimization | The ability to avoid or eliminate unneeded cost or suboptimal resources | Spot/Reserved Instances; Environment Tuning; Service Selection; Volume Tuning; Account Management; Consolidated Billing; Decommissioning Resources
Operational Excellence | The ability to run and monitor systems to deliver business value, and to continually improve supporting processes and procedures | Runbooks; Playbooks; Continuous Integration/Continuous Deployment (CI/CD); Game Days; Infrastructure as Code; Root Cause Analyses (RCAs)

A detailed checklist of architectural review items, which can be used to review an AWS-based application stack, is available in the Appendix.

Operational Review
In addition to an architectural review, which is more focused on the design components of an application, review your cloud operations and management practices to evaluate how well you are addressing the management of your cloud workloads. The goal of the review is to identify operational gaps and issues, and take actions in advance of the event to minimize them. AWS offers an operational review to its Enterprise Support customers, which can be a valuable tool for preparing for an infrastructure event. The review focuses on assessing the following areas:
• Preparedness – You
Instances Compliance Governance Reliability The ability of a system to recover from infrastructure or service failures dynamically acquire computing resources to meet demand and mitigate disru ptions such as misconfigurations or transient network issues Service Limits Multi ple Availability Zones and Region s Scalability Health Check/Monitoring Backup/ Disaster Recovery (DR) Networking Self Healing Automation Performance Efficiency The ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve Right AWS Services Resource Utilization Storage Architecture Caching Latency Requirements Cost Optim ization The ability to avoid or eliminate unneeded cost or suboptimal resources Spot/ Reserved Instances Environment Tuning Service Selection Volume Tuning Account Management Consolidated Billing Decommission Resources ArchivedAmazon Web Services – Infrastructure Event Readiness Page 20 Operational Excellence The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures Runbooks Playbooks Continuous Integration/Continuous Deployment (CI/CD ) Game Days Infrastructure as Code Root Cause Analysis ( RCA)s A detailed checklist of architectural review items which can be used to review an AWS based application stack is available in the Appendix Operational Review In addition to an ar chitectural review which is more focused on the design components of an application review your cloud operations and management practices to evaluate how well you are addressing the management of your cloud workloads The goal of the review is to identify operational gaps and issues and take actions in advance of the event to minimize them AWS offers a n operation al review to its Enterprise Support customers which can be a valuable tool for preparing for an infrastructure event The review focuses on asse ssing the following areas: • Preparedness –You 
must have the right mix of organizational structure, processes, and technology. Ensure clear roles and responsibilities are defined for the staff managing your application stack. Define processes in advance to align with the event. Automate procedures where possible.
• Monitoring – Effective monitoring measures how an application is performing. Monitoring is critical to detecting anomalies before they become problems, and provides opportunities to minimize the impact of adverse events.
• Operations – Operational activities need to be carried out in a timely and reliable way, leveraging automation wherever possible, while also dealing with unexpected operational events that require escalation.
• Optimization – Conduct a post-mortem analysis using collected metrics, operational trends, and lessons learned to capture and report opportunities for improvement during future events. Optimization plus preparedness creates a feedback loop to address operational issues and prevent them from recurring.

AWS Service Limits

During a planned infrastructure event, it is crucial to avoid exceeding any service limits that may be imposed by a cloud provider while scaling an application or workload. Cloud service providers typically have limits on the different resources that you can use. Limits are usually imposed on a per-account and per-Region basis. The resources affected include instances, volumes, streams, serverless invocations, snapshots, number of VPCs, security rules, and so on. Limits are a safety measure against runaway code or rogue actors attempting to abuse resources, and act as a control to help minimize billing risk. Some service limits are raised automatically over time as you expand your footprint in the cloud, though most services require that you request limit increases by opening a support case. While some service limits can be increased via support cases, other services have limits that can't be changed. AWS
provides Enterprise and Business Support customers with Trusted Advisor, which includes a Limit Check dashboard that allows customers to proactively manage all service limits. For more information on limits for various AWS services and how to check them, see AWS Service Limits²³ and Trusted Advisor²⁴.

Pattern Recognition Baselines

You should document "back to healthy" values for key metrics prior to the commencement of an infrastructure event. This helps you determine when an application or service has safely returned to normal levels following the end of the event. For example, identifying that the normal transaction rate through a load balancer is 2,500 requests per second will help determine when it is safe to begin wind-down procedures after the event.

Data Flows and Dependencies

Understanding how data flows through the various components of an application helps you identify potential bottlenecks and dependencies. Are the application tiers or components that are consumers of data in a data flow sized appropriately, and set up to auto scale correctly, if the tiers or components in the application stack that are producers of data scale upwards? In the event of a component failure, can data be queued until that component recovers? Are any downstream or upstream data providers or consumers scalable in response to your event?
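The queuing question above can be sketched in miniature. The following Python sketch is illustrative only (the class and method names are invented for this example); in a production stack the buffering role would be played by a durable service such as Amazon SQS rather than an in-process queue. It shows how a buffer between a data producer and a data consumer lets records accumulate, rather than be lost, while the consumer is down:

```python
import queue

class BufferedPipeline:
    """Hypothetical sketch: a queue decouples a data producer from a
    consumer, so records accumulate (instead of being dropped) while
    the consumer is unavailable, and are processed on recovery."""

    def __init__(self):
        self.buffer = queue.Queue()   # stands in for a durable queue
        self.processed = []           # records handled by the consumer
        self.consumer_up = True       # simulated consumer health flag

    def produce(self, record):
        # The producer always enqueues; it does not need to know
        # whether the consumer tier is currently healthy.
        self.buffer.put(record)

    def drain(self):
        # The consumer drains the backlog only when healthy.
        # Returns the number of records processed in this pass.
        if not self.consumer_up:
            return 0
        drained = 0
        while not self.buffer.empty():
            self.processed.append(self.buffer.get())
            drained += 1
        return drained
```

Here the recovery step simply drains the backlog; in a real deployment you would also verify that the consumer tier can drain the accumulated backlog faster than new data arrives, or queue depth will keep growing during the event.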
Proportionality

Review the proportionality of scaling required by the various components of an application stack when preparing for an infrastructure event. This proportionality is not always one-to-one. For example, a tenfold increase in transactions per second across a load balancer might require a twentyfold increase in storage capacity, number of streaming shards, or number of database read and write operations, due to processing that might be taking place in the front-facing application.

Communications Plan

Prior to the event, develop a communications plan. Gather a list of internal stakeholders and support groups, and identify who should be contacted at various stages of the event and in various scenarios: the beginning of the event, during the event, the end of the event, post-event analysis, emergency contacts, contacts during troubleshooting situations, and so on. Persons and groups to be contacted may include the following:

• Stakeholders
• Operations managers
• Developers
• Support teams
• Cloud service provider teams
• Network operations center (NOC) team

As you gather a list of internal contacts, you should also develop a contact list of external stakeholders involved with the continuous live delivery of the application. These stakeholders include partners and vendors supporting key components of the stack, and downstream and upstream vendors providing external services, data feeds, authentication services, and so on.

This external contact list should also include the following:

• Infrastructure hosting vendors
• Telecommunications vendors
• Live data streaming partners
• PR and marketing contacts
• Advertising partners
• Technical consultants involved with service engineering

Ask for the following information from each provider:

• Live points of contact during the time of the event
• Critical support contact and escalation process
• Name, telephone number, and email address
• Verification that live technical contacts will be
available.

AWS customers subscribed to Enterprise Support also have Technical Account Managers (TAMs) assigned to their account, who can coordinate and verify that dedicated AWS support staff are aware of and prepared to support the event. TAMs are also on call during the event, present in the war room, and available to drive support escalations if they are needed.

NOC Preparation

Prior to the event, instruct your operations and/or developer team to create a live metrics dashboard that monitors each critical component of the web service in production as the event occurs. Ideally, the dashboard should automatically present updated metrics every minute, or at an interval that is suitable and effective during the event. Consider monitoring the following components:

• Resource utilization of each server (CPU, disk, and memory utilization)
• Web service response time
• Web traffic metrics (users, page views, sessions)
• Web traffic per visitor region (global customer segments)
• Database server utilization
• Marketing flow conversion funnels, such as conversion rates and fallout percentage
• Application error logs
• Heartbeat monitoring

Amazon CloudWatch provides a means to gather most of these metrics from AWS resources into a single pane of glass using CloudWatch custom dashboards. Additionally, CloudWatch offers the capability to import custom metrics wherever AWS isn't already providing a metric automatically. See the Monitor section for more details on AWS monitoring tools and capabilities.

Runbook Preparation

You should develop a runbook in preparation for the infrastructure event. A runbook is an operational manual containing a compilation of procedures and operations that your operators will carry out during the event. Event runbooks can be outgrowths of existing runbooks used for routine operations and exception handling. Typically, a runbook contains procedures to begin, stop, supervise,
and debug a system. It should also describe procedures for handling unexpected events and contingencies. A runbook should include the following sections:

• Event details: Briefly describes the event, success criteria, media coverage, event dates, and contact details of the main stakeholders from the customer side and AWS.
• List of AWS services: Enumerates all AWS services to be used during the event, along with the expected load on these services, the Regions affected, and the account IDs.
• Architecture and application review: Documents load testing results, any stress points in the infrastructure and application design, resiliency measures for the workload, single points of failure, and potential bottlenecks.
• Operational review: Highlights monitoring setup, health criteria, notification mechanisms, and service restoration procedures.
• Preparedness checklist: Includes such considerations as service limit checks, pre-warming of application stack components such as load balancers, and pre-provisioning of resources such as stream shards, DynamoDB partitions, S3 partitions, and so on.

For more information, see the Architecture Review Detailed Checklist in the Appendix.

Monitor

Monitoring Plan

Database, application, and operating system monitoring is crucial to ensure a successful event. Set up comprehensive monitoring systems to effectively detect and respond immediately to serious incidents during the infrastructure event. Incorporate both AWS and customer monitoring data. Ensure that monitoring tools are instrumented at the appropriate level for an application based on its business criticality. Implementing a monitoring plan that collectively gathers monitoring data from all of your AWS solution segments will help in debugging a complex failure if one occurs. The monitoring plan should address the following questions:

• What monitoring tools and dashboards must be set up for the event?
• What are the monitoring objectives and the allowed thresholds? What events will trigger actions?
• What resources, and what metrics from those resources, will be monitored, and how often must they be polled?
• Who will perform the monitoring tasks? What monitoring alerts are in place? Who will be alerted?
• What remediation plans have been set up for common and expected failures? What about unexpected events?
• What is the escalation process in the case of operational failure of any critical system components?

The following AWS monitoring tools can be used as part of your plan:

• Amazon CloudWatch: Provided as an out-of-the-box solution for AWS dashboard metrics monitoring, alerting, and automated provisioning.
• Amazon CloudWatch custom metrics: Used for operating system, application, and business metrics collection. The Amazon CloudWatch API allows for the collection of virtually any type of custom metric.
• Amazon EC2 instance health: Used for viewing status checks and for scheduling events for your instances based on their status, such as auto-rebooting or restarting an instance.
• Amazon SNS: Used for setting up, operating, and sending event-driven notifications.
• AWS X-Ray: Used to debug and analyze distributed applications and microservices architectures by analyzing data flows across system components.
• Amazon Elasticsearch Service: Used for centralized log collection and real-time log analysis, and for rapid heuristic detection of problems.
• Third-party tools: Used for real-time analytics and full-stack monitoring and visibility.
• Standard operating system monitoring tools: Used for OS-level monitoring.

For more details about AWS monitoring tools, see Automated and Manual Monitoring²⁵. See also Using Amazon CloudWatch Dashboards²⁶ and Publish Custom Metrics²⁷.

Notifications

A crucial operational element in your design for infrastructure events is the configuration of alarms and notifications
to integrate with your monitoring solutions. These alarms and notifications can be used with services such as AWS Lambda to trigger actions based on an alert. Automating responses to operational events is a key element in enabling mitigation, rollback, and recovery with maximum responsiveness.

Tools should also be in place to centrally monitor workloads and to create appropriate alerts and notifications based on available logs and metrics that relate to key operational indicators. This includes alerts and notifications for out-of-bound anomalies as well as service or component failures. Ideally, when low performance thresholds are crossed or failures occur, the system has been architected to automatically self-heal or scale in response to such notifications and alerts.

As previously noted, AWS offers services (Amazon Simple Queue Service (SQS) and Amazon SNS) to ensure appropriate alerting and notification in response to unplanned operational events, as well as to enable automated responses.

Operational Readiness (Day of Event)

Plan Execution

On the day of the event, the core team involved with the infrastructure event should be on a conference call monitoring real-time dashboards. Runbooks should be fully developed and available. Make sure that the communications plan is well defined and known to all support staff and stakeholders, and that a contingency plan is in place.

War Room

During the event, have an open conference bridge with the following participants:

• The responsible application and operations teams
• Operations team leadership
• Technical support resources from external partners directly involved with technical delivery
• Business stakeholders

Throughout most of the event, the conversation on this conference bridge should be minimal. If an adverse operational event arises, the key people who can respond to it will already be on the bridge, ready to act and consult.

Leadership Reporting

During
the event, send an email hourly to key leadership stakeholders. This update should include the following:

• Status summary: Green (on track), Yellow (issues encountered), or Red (major issue)
• Key metrics update
• Issues encountered, the status of the remedy plan, and the estimated time to resolution (ETA)
• Phone number of the war room conference bridge (so stakeholders may join if needed)

At the conclusion of the event, a summary email should be sent in the following format:

• Overall event summary, with a synopsis of issues encountered
• Final metrics
• Updated remedy plan that details the issues and resolutions
• Key points of contact for any follow-ups that stakeholders may have

Contingency Plan

Each step in the event's preparation process should have a corresponding contingency action that has been verified in a test environment. Address the following questions as you put together a contingency plan:

• What are the worst-case scenarios that can occur during the event?
• What types of events would cause a negative public relations impact?
• Which third-party components and services might fail during the event?
• Which metrics should be monitored that would indicate that a worst-case scenario is occurring?
• What is the rollback plan for each identified worst-case scenario?
• How long will each rollback process take? What is the acceptable Recovery Point Objective (RPO) and Recovery Time Objective (RTO)?
(See Using AWS for Disaster Recovery²⁸ for additional information on these concepts.)

Consider the following types of contingency plans:

• Blue/Green Deployment: If rolling out a new production app or environment, keep the prior production build online and available, in case a switch back is needed.
• Warm Pilot: Launch a minimal environment in a second Region that can quickly scale up if needed. If a failure occurs in the primary Region, scale up the second Region and switch traffic over to it.
• Maintenance Mode Error Pages: Check any pre-configured error pages and triggers at each layer of your web service. Be prepared to inject a more specific message into these error pages if an operational failure occurs at any of these layers.

Test the contingency plan for each documented worst-case scenario.

Post-Event Activities

Post-Mortem Analysis

We recommend a post-mortem analysis as part of an infrastructure event management lifecycle. Post-mortems allow you to collaborate with each team involved and identify areas that might need further optimization, such as operational procedures, implementation details, failover and recovery procedures, and so on. This is especially relevant if an application stack encountered disruptions during the event and a root cause analysis (RCA) is needed. A post-mortem analysis helps provide data points and other essential information needed in an RCA document.

Wind-Down Process

Immediately following the conclusion of the infrastructure event, the wind-down process should begin. During this period, monitor relevant applications and services to ensure traffic has reverted to normal production levels. Use the health dashboards created during the event's preparation phase to verify the normalization of traffic and transaction rates. Wind-down periods for some events may be linear and straightforward, while others may experience uneven or more gradual reductions in volume. Some
traffic patterns from the event may persist. For example, recovering from a surge in traffic generally requires straightforward wind-down procedures, whereas an application deployment or expansion into a new geographical Region may have long-lasting effects, requiring you to carefully monitor new traffic patterns as part of the permanent application stack.

At some point following the completion of the event, you must determine when it is safe to end event management operations. Refer to the previously documented "normal" values for key metrics to help determine when to declare the event completed.

We recommend splitting wind-down activities into two branches, which may have different timelines. Focus the first branch on operational management of the event, such as sending communications to internal and external stakeholders and partners, and resetting service limits. Focus the second branch on the technical aspects of the wind-down, such as scale-down procedures, validation of the health of the environment, and criteria for determining whether architectural changes should be reverted or committed. The timeline associated with each of these branches can vary depending on the nature of the event, key metrics, and customer comfort. We've outlined some common tasks associated with each branch in Tables 2 and 3 to help you determine the appropriate time to end management of an event.

Table 2: First branch: operational wind-down tasks

• Communications: Notify internal and external stakeholders that the event has ended. The time to end communication should be aligned with the definition of the completion of the event; use "back to healthy" metrics to determine when it is appropriate to end communication. Alternatively, you can end communication in tiers. For example, you could end the war room bridge but leave the event escalation procedures intact in case of post-event failures.
• Service Limits/Cost Containment: Although it may be tempting to retain an elevated service limit after an event, keep in mind that service limits are also a safety net. Service limits protect you and your costs by preventing excess service usage, whether from a compromised account or misconfigured automation.
• Reporting and Analysis: Collect and collate event metrics, accompanied by analytical narratives showing patterns, trends, problem areas, successful procedures, ad hoc procedures, the timeline of the event, and whether or not the success criteria were met. Distribute this material to all internal parties identified in the communications plan. A detailed cost analysis should also be developed to show the operational expense of supporting the event.
• Optimization Tasks: Enterprise organizations evolve over time as they continue to improve their operations. Operational optimization requires the constant collection of metrics, operational trends, and lessons learned from events to uncover opportunities for improvement. Optimization ties back to preparation to form a feedback loop that addresses operational issues and prevents them from recurring.

Table 3: Second branch: technical wind-down tasks

• Service Limits/Cost Containment: Although it may be tempting to retain elevated service limits after an event, keep in mind that service limits also serve as a safety net. Service limits protect your operations and operating costs by preventing excess service usage, either through malicious activity stemming from a compromised account or through misconfigured automation.
• Scale-Down Procedures: Revert resources that were scaled up during the preparation phase. These items are unique to your architecture, but the following examples are common: EC2/RDS instance size; Auto Scaling configuration; reserved capacity; Provisioned Input/Output Operations Per Second (PIOPS)
• Validation of Health of Environment: Compare to baseline metrics and review production health to verify that, after the event and after scale-down procedures have been completed, the affected systems are reporting normal behavior.
• Disposition of Architectural Changes: Some changes made in preparation for the event may be worth keeping, depending on the nature of the event and observation of operational metrics. For example, expansion into a new geographical Region might require a permanent increase of resources in that Region, or raising certain service limits or configuration parameters (such as the number of partitions in a database, shards in a stream, or PIOPS in a volume) might be a performance tuning measure that should be persisted.

Optimize

Perhaps the most important component of infrastructure event management is the post-event analysis: identifying the operational and architectural challenges observed and the opportunities for improvement. Infrastructure events are rarely one-time events. They might be seasonal, coincide with new releases of an application, or be part of the growth of the company as it expands into new markets and territories. Thus, every infrastructure event is an opportunity to observe, improve, and prepare more effectively for the next one.

Conclusion

AWS provides building blocks in the form of elastic and programmable products and services that your company can assemble to support virtually any scale of workload. With AWS infrastructure event guidelines and best practices, coupled with our complete set of highly available services, your company can design and prepare for major business events and ensure that scaling demands are met smoothly and dynamically, ensuring fast response and global reach.

Contributors

The following individuals and organizations contributed to this document:

• Presley Acuna, AWS Enterprise Support Manager
• Kurt Gray, AWS Global Solutions Architect
• Michael Bozek, AWS Sr. Technical Account Manager
• Rovan Omar, AWS Technical Account Manager
• Will Badr, AWS Technical Account Manager
• Eric Blankenship, AWS Sr. Technical Account Manager
• Greg Bur, AWS Technical Account Manager
• Bill Hesse, AWS Sr. Technical Account Manager
• Hasan Khan, AWS Sr. Technical Account Manager
• Varun Bakshi, AWS Sr. Technical Account Manager
• Fatima Ahmed, AWS Specialist Technical Account Manager (Security)
• Jeffrey Lyon, AWS Manager, DDoS Ops Engineering

Further Reading

For additional reading on operational and architectural best practices, see Operational Checklists for AWS²⁹. We recommend that readers review the AWS Well-Architected Framework³⁰ for a structured approach to evaluating their cloud-based application delivery stacks. AWS offers Infrastructure Event Management (IEM) as a premium support offering for customers desiring more direct involvement of AWS Technical Account Managers and Support Engineers in their design, planning, and day-of-event operations. For more details about the AWS IEM premium support offering, see Infrastructure Event Management³¹.

Appendix

Detailed Architecture Review Checklist

Security

• Y—N—N/A: We rotate our AWS Identity and Access Management (IAM) access keys, user passwords, and the credentials for the resources involved in our application at most every 3 months, per AWS security best practices. We apply a password policy in every account, and we use hardware or virtual multi-factor authentication (MFA) devices.
• Y—N—N/A: We have internal security processes and controls for unique, role-based, least-privilege access to AWS APIs, leveraging IAM.
• Y—N—N/A: We have removed any confidential or sensitive information, including embedded public/private instance key pairs, and have reviewed all SSH authorized keys files in any customized Amazon Machine Images (AMIs).
• Y—N—N/A: We use IAM roles for EC2 instances where convenient, instead of
embedding any credentials inside AMIs.
• Y—N—N/A: We segregate IAM administrative privileges from regular user privileges by creating an IAM administrative role and restricting IAM actions from other functional roles.
• Y—N—N/A: We apply the latest security patches on our EC2 instances, for both Windows and Linux. We use operating system access controls, including Amazon EC2 security group rules, VPC network access control lists, OS hardening, host-based firewalls, intrusion detection/prevention, monitoring software configuration, and host inventory.
• Y—N—N/A: We ensure that network connectivity to and from the organization's AWS and corporate environments uses encrypted transport protocols.
• Y—N—N/A: We apply a centralized log and audit management solution to identify and analyze any unusual access patterns or malicious attacks on the environment.
• Y—N—N/A: We have security event and incident management, correlation, and reporting processes in place.
• Y—N—N/A: We make sure that there isn't unrestricted access to AWS resources in any of our security groups.
• Y—N—N/A: We use a secure protocol (HTTPS or SSL), up-to-date security policies, and strong cipher protocols for front-end connections (client to load balancer). Requests are encrypted between the clients and the load balancer, which is more secure.
• Y—N—N/A: We configure our Amazon Route 53 MX resource record set to have a TXT resource record set that contains a corresponding Sender Policy Framework (SPF) value to specify the servers that are authorized to send email for our domain.
• Y—N—N/A: We architect our application for DDoS resiliency by using services that operate from the AWS Global Edge Network, such as Amazon CloudFront and Amazon Route 53, as well as additional AWS services that mitigate Layer 3 through Layer 6 attacks (see Summary of DDoS Mitigation Best Practices in the Appendix).

Reliability

• Y—N—N/A: We deploy our application on a fleet
of EC2 instances deployed into an Auto Scaling group, to ensure automatic horizontal scaling based on pre-defined scaling plans.
• Y—N—N/A: We use an Elastic Load Balancing health check in our Auto Scaling group configuration to ensure that the Auto Scaling group acts on the health of the underlying EC2 instances. (Applicable only if you use load balancers in Auto Scaling groups.)
• Y—N—N/A: We deploy critical components of our applications across multiple Availability Zones and appropriately replicate data between zones. We test how failure within these components affects application availability, using Elastic Load Balancing, Amazon Route 53, or an appropriate third-party tool.
• Y—N—N/A: In the database layer, we deploy our Amazon RDS instances in multiple Availability Zones to enhance database availability by synchronously replicating to a standby instance in a different Availability Zone.
• Y—N—N/A: We define processes for either automatic or manual failover in case of any outage or performance degradation.
• Y—N—N/A: We use CNAME records to map our DNS name to our services; we do NOT use A records.
• Y—N—N/A: We configure a lower time-to-live (TTL) value for our Amazon Route 53 record sets. This avoids delays when DNS resolvers request updated DNS records while rerouting traffic. (For example, this can occur when DNS failover detects and responds to a failure of one of your endpoints.)
• Y—N—N/A: We have at least two VPN tunnels configured to provide redundancy in case of outage or planned maintenance of the devices at the AWS endpoint.
• Y—N—N/A: We use AWS Direct Connect and have two Direct Connect connections configured at all times to provide redundancy in case a device is unavailable. The connections are provisioned at different Direct Connect locations to provide redundancy in case a location is unavailable. We configure the connectivity to our virtual private gateway to have multiple virtual interfaces configured across multiple Direct Connect connections and locations.
• Y—N—N/A: We use Windows instances and ensure that we are using the latest paravirtual (PV) drivers. PV drivers help optimize performance and minimize runtime issues and security risks. We ensure that the EC2Config agent on our Windows instances is running the latest version.
• Y—N—N/A: We take snapshots of our Amazon Elastic Block Store (EBS) volumes to ensure point-in-time recovery in case of failure.
• Y—N—N/A: We use separate Amazon EBS volumes for the operating system and for application/database data, where appropriate.
• Y—N—N/A: We apply the latest kernel, software, and driver patches on any Linux instances.

Performance Efficiency

• Y—N—N/A: We fully test our AWS-hosted application components, including performance testing, prior to going live. We also perform load testing to ensure that we have used the right EC2 instance size, number of IOPS, RDS DB instance size, and so on.
• Y—N—N/A: We run a usage check report against our service limits and make sure that the current usage across AWS services is at or below 80% of the service limits.
• Y—N—N/A: We use a Content Delivery/Distribution Network (CDN) such as Amazon CloudFront to provide caching for our application, optimize the delivery of content, and automatically distribute content to the edge location nearest to the user.
• Y—N—N/A: We understand that some dynamic HTTP request headers that Amazon CloudFront receives (User-Agent, Date, etc.) can impact performance by reducing the cache hit ratio and increasing the load on the origin.
• Y—N—N/A: We ensure that the maximum throughput of an EC2 instance is greater than the aggregate maximum throughput of the attached EBS volumes. We also use EBS-optimized instances with PIOPS EBS volumes to get the expected performance out of the volumes.
• Y—N—N/A: We ensure that the solution design doesn't have a bottleneck in the infrastructure or a stress point in the database or the
Y—N—N/A We deploy monitoring on application resources and configure alarms based on any performance breaches using Amazon CloudWatch or third-party partner tools.
Y—N—N/A In our designs, we avoid using a large number of rules in the security group(s) attached to our application instances. A large number of rules in a security group may degrade performance.

Yes—No—N/A Cost Optimization
Y—N—N/A We note whether the infrastructure event may involve overprovisioned capacity that needs to be cleaned up after the event to avoid unnecessary cost.
Y—N—N/A We use right sizing for all of our infrastructure components, including EC2 instance size, RDS DB instance size, caching cluster node size and count, Amazon Redshift cluster node size and count, and EBS volume size.
Y—N—N/A We use Spot Instances when it's convenient. Spot Instances are ideal for workloads that have flexible start and end times. Typical use cases for Spot Instances are batch processing, report generation, and high-performance computing workloads.
Y—N—N/A We have predictable application capacity minimum requirements and take advantage of Reserved Instances. Reserved Instances allow you to reserve Amazon EC2 computing capacity in exchange for a significantly discounted hourly rate compared to On-Demand instance pricing.

Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle

First published December 2016; updated March 25, 2021

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors.
AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
Why JD Edwards EnterpriseOne on Amazon RDS?
Licensing
Performance management
Instance sizing
Disk I/O management — provisioned IOPS
High availability
High availability features of Amazon RDS
Oracle security in Amazon RDS
Installing JD Edwards EnterpriseOne on an Amazon RDS for Oracle DB instance
Prerequisites
Preparation
Key installation tasks
Creating your Oracle DB instance
Configure SQL Developer
Installing the platform pack
Modifying the default scripts
Advanced configuration
Running the installer
Logging into JD Edwards EnterpriseOne on the deployment server
Validation and testing
Running on Amazon RDS for Oracle Enterprise Edition
Conclusion
Appendix: Dumping deployment service to RDS
Contributors
Document revisions

Abstract
Amazon Relational Database Service (Amazon RDS) is a flexible, cost-effective, easy-to-use service for running relational databases in the cloud. In this whitepaper, you will learn how to deploy Oracle's JD Edwards EnterpriseOne (version 9.2) using Amazon RDS for Oracle. Because this whitepaper focuses on the database components of the installation process, items such as JD Edwards EnterpriseOne application servers and application server node scaling will not be covered. This whitepaper is aimed at IT directors, JD Edwards EnterpriseOne architects, CNC administrators, DevOps engineers, and Oracle database administrators.
Introduction
There are two ways to deploy the Oracle database backend for a JD Edwards EnterpriseOne installation on Amazon Web Services (AWS): by using a database managed by the Amazon Relational Database Service (Amazon RDS), or by deploying and managing a database on Amazon Elastic Compute Cloud (Amazon EC2) infrastructure. This whitepaper focuses on the deployment of JD Edwards EnterpriseOne in an AWS environment using Amazon RDS for Oracle.

Why JD Edwards EnterpriseOne on Amazon RDS?
Simplicity, scalability, and stability are all important reasons to install the JD Edwards EnterpriseOne applications suite on Amazon RDS. Integrated high availability features provide seamless recoverability between AWS Availability Zones (AZs) without the complications of log shipping and Oracle Data Guard. Using RDS, you can quickly back up and restore your database to a chosen point in time and change the size of the server or the speed of the disks, all within the AWS Management Console. Management advantages are at your fingertips with the AWS Console Mobile Application. All this, coupled with intelligent monitoring and management tools, provides a complete solution for implementing Oracle Database in Amazon RDS for use with JD Edwards EnterpriseOne.

When designing your JD Edwards EnterpriseOne footprint, consider the entire lifecycle of JD Edwards EnterpriseOne on AWS, which includes complete disaster recovery. Disaster recovery is not an afterthought; it's encapsulated in the design fundamentals. When your installation is complete, you can take backups, refresh subsidiary environments, and manage and monitor all critical aspects of your environment from the AWS Management Console. You can enable monitoring to ensure that everything is sized correctly and performing well.

Using Amazon RDS for Oracle, you can have enterprise-grade high availability in the database layer by implementing the Amazon RDS Multi-AZ configuration.
You can use this high availability feature even with Oracle Standard Edition to reduce the total cost of ownership (TCO) for running the JD Edwards application in the cloud. AWS also gives you the ability to disable hyperthreading and control the number of vCPUs in use in your Amazon Elastic Compute Cloud (Amazon EC2) instances and your RDS for Oracle instances to reduce licensing cost and TCO.

In JD Edwards EnterpriseOne, the application processing is CPU intensive, and the CPU frequency and number of cores available to the enterprise server play a large part in the performance and throughput of the system. AWS provides a wide range of instance classes, including z1d instances, which deliver a sustained all-core frequency of up to 4.0 gigahertz (GHz), the fastest of any cloud instance. Using such high clock frequency instances for the application tier can help reduce the number of cores needed to run the same workload. This means you can get the same performance using a smaller instance class, making AWS a highly suitable public cloud environment for running JD Edwards applications with high performance and throughput requirements.

AWS Support provides a mix of tools, technology, people, and programs designed to proactively help you optimize performance, lower costs, and innovate faster. With core technological capabilities for running high-performance JD Edwards deployments, combined with a strong support framework, AWS provides a great experience for customers as a preferred choice for hosting their JD Edwards implementations. Amazon RDS for Oracle is a great fit for JD Edwards EnterpriseOne.

JD Edwards EnterpriseOne also provides heterogeneous database support, which means that there is a loose coupling between the enterprise resource planning (ERP) application and the database, allowing installation of Microsoft SQL Server, for example, as an alternative to Oracle.

Licensing
Purchase of JD Edwards EnterpriseOne includes the Oracle Technology Foundation component.
The Oracle Technology Foundation for JD Edwards EnterpriseOne provides all the software components you need to run Oracle's JD Edwards EnterpriseOne applications. Designed to help reduce integration and support costs, it's a complete package of the following integrated, open-standards software products that enable you to easily implement and maintain your JD Edwards EnterpriseOne applications:
• Oracle Database
• Oracle Fusion Middleware
• JD Edwards EnterpriseOne Tools

If you have these licenses, you can take advantage of the Amazon RDS for Oracle Bring Your Own License (BYOL) option. See the Oracle Cloud Licensing Policy for details.

Note: With the BYOL option, you may need to acquire additional licenses for standby database instances when running Multi-AZ deployments. See the JD Edwards EnterpriseOne Licensing Information User Manual for a detailed description of the restricted-use licenses provided in the Oracle Technology Foundation for the JD Edwards EnterpriseOne product.

Some historical JD Edwards EnterpriseOne licensing agreements do not include Oracle Technology Foundation. If that is the case for you, you can choose the Amazon RDS "License Included" option, which includes licensing costs in the hourly price of the service. If you have questions about any of your licensing obligations, contact your JD Edwards EnterpriseOne licensing representative. For details about licensing Oracle Database on AWS, see the Oracle Cloud Licensing Policy.

Performance management
Instance sizing
Increasing the performance of a database (DB) instance requires an understanding of which server resource is causing the performance constraint. If database performance is limited by CPU, memory, or network throughput, you can scale up by choosing a larger instance type. In an Amazon RDS environment, this type of scaling is simple. Amazon RDS supports several DB instance types.
At the time of this writing, instance types that support the Standard Edition 2 (SE2) socket requirements range from:
• The burstable "small" (db.t3.small)
• The latest generation general purpose db.m5.4xlarge, which features 16 vCPUs, 64 gigabytes (GB) of memory, and up to 10 gigabits per second (Gbps) of network performance
• The latest generation memory optimized db.r5.4xlarge, with 16 vCPUs, 128 GB of memory, and up to 10 Gbps of network performance
• The latest generation memory optimized DB instance class db.z1d.3xlarge, with a sustained all-core frequency of up to 4.0 GHz, 12 vCPUs, 96 GB of memory, and up to 10 Gbps of network performance
• The latest generation memory optimized DB instance class db.x1e.4xlarge, with a very high memory-to-vCPU ratio, 16 vCPUs, 488 GB of memory, and up to 10 Gbps of network performance

For the currently available instance classes and options, see DB instance class support for Oracle.

The first time you start your Amazon RDS DB instance, choose the instance type that seems most relevant in terms of the number of cores and amount of memory you are using. With that as the starting point, you can then monitor performance to determine whether it's a good fit or whether you need to pick a larger or smaller instance type.

You can modify the instance class for your Amazon RDS DB instance by using the AWS Management Console or the AWS Command Line Interface (AWS CLI), or by making application programming interface (API) calls in applications written with an AWS Software Development Kit (SDK). Modifying the instance class will cause a restart of your DB instance, which you can set to occur right away or during the next weekly maintenance window that you specify when creating the instance. (Note that the weekly maintenance window setting can also be changed.)
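As a quick illustration of scripting such an instance-class change, the following sketch shows the parameters such an API call takes. The instance identifier and target class are illustrative examples only, and an actual call requires boto3 and AWS credentials.

```python
# Sketch only: parameters for an RDS instance-class change. The identifier
# and target class below are illustrative, not prescriptions.
params = {
    "DBInstanceIdentifier": "jde92poc",
    "DBInstanceClass": "db.r5.2xlarge",
    # False defers the change (and the restart it causes) to the next
    # weekly maintenance window; True applies it right away.
    "ApplyImmediately": False,
}

# With boto3 installed and credentials configured, this would be submitted as:
#   import boto3
#   boto3.client("rds").modify_db_instance(**params)
print(sorted(params))
```

Deferring the restart to the maintenance window is usually the safer default for a production JD Edwards system.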
Increasing instance storage size
Amazon RDS enables you to scale up your storage without restarting the instance or interrupting active processes. The main reason to increase Amazon RDS storage size is to accommodate database growth, but you can also do this to improve input/output (I/O) performance. For an existing DB instance with gp2 EBS volumes, you might observe some I/O capacity improvement if you scale up your storage. Scaling storage capacity can be done manually, or you can set up autoscaling for storage. For details on RDS storage management, see Working with Storage for Amazon RDS DB Instances.

Disk I/O management — provisioned IOPS
Provisioned I/O operations per second (IOPS) is a storage option that gives you control over your database storage performance by enabling you to specify your IOPS rate. Provisioned IOPS is designed to deliver fast, predictable, and consistent I/O performance. At the time of this writing, you can provision up to a maximum of 80,000 IOPS per instance for EBS-optimized instance classes. The maximum storage size supported in an instance is 64 tebibytes (TiB).

Here are some important points about Provisioned IOPS in Amazon RDS:
• The maximum ratio of Provisioned IOPS to requested volume size (in GiB) is 50:1. For example, a 100 GiB volume can be provisioned with up to 5,000 IOPS.
• If you are using Provisioned IOPS storage, AWS recommends that you use DB instance types that are optimized for Provisioned IOPS. You can also convert a DB instance that uses standard storage to use Provisioned IOPS storage.
• The actual amount of your I/O throughput can vary depending on your workload.
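The 50:1 ratio and the per-instance ceiling combine into a simple upper bound per volume. As a quick sketch (the helper function is ours, not an AWS API):

```python
def max_provisioned_iops(volume_gib: int) -> int:
    """Upper bound on Provisioned IOPS for a volume of the given size,
    per the 50:1 IOPS-to-GiB ratio, capped at the 80,000 IOPS
    per-instance maximum for EBS-optimized instance classes."""
    return min(50 * volume_gib, 80_000)

print(max_provisioned_iops(100))   # → 5000, the 100 GiB example above
print(max_provisioned_iops(5000))  # → 80000, large volumes hit the cap
```

The cap dominates for volumes of 1,600 GiB and above, so past that point extra IOPS must come from design changes rather than larger volumes.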
database architecture that protects you against hardware failures data center problems and disasters You can do this by using replication technologies and the high availability features of Amazon RDS described in the following section High availability features of Amazon RDS Amazon RDS makes it simple to create a high availability architecture First in the event of a hardware failure Amazon RDS automatically replaces the compute instance powering y our deployment Second Amazon RDS supports Multi AZ deployments where a secondary (or standby) Oracle DB instance is provisioned in a different Availability Zone (location) within the same region This architecture allows the database to survive a failur e of the primary DB instance network and storage or even of the Availability Zone The replication between the two Oracle DB instances is synchronous helping to ensure that all data written to disk on the primary instance is replicated to the standby instance This feature is available for all editions of Oracle including the ones that do not include Oracle Data Guard providing you with out ofthebox high availability at a very competitive cost For details about high availability features in RDS fo r Oracle see Amazon RDS Multi AZ Deployments The following figure shows an example of a high availability architecture in Amazon RDS High availability architecture in Amazon RDS Amazon Web Services Installing JD Edwards EnterpriseOne on Ama zon RDS for Oracle 6 You should also deploy the rest of the application stack including application and web servers in at least two Availability Zones to ensure that your applications continue to operate in the event of an Availability Zone failure In the design of your high availabi lity implementation you can also use Elastic Load Balancing which automatically distributes the load across application servers in multiple Availability Zones A failover to the standby DB instan ce typically takes between one and three minutes and will occur in any 
A failover to the standby DB instance typically takes between one and three minutes, and will occur in any of the following events:
• Loss of availability in the primary Availability Zone
• Loss of network connectivity to the primary DB instance
• Compute unit failure on the primary DB instance
• Storage failure on the primary DB instance
• Scaling of the compute class of your DB instance, either up or down
• System maintenance, such as hardware replacement or operating system upgrades

Running Amazon RDS in multiple Availability Zones has additional benefits:
• The Amazon RDS daily backups are taken from the standby DB instance, which means that there is usually no I/O impact to your primary DB instance.
• When you need to patch the operating system or replace the compute instance, updates are applied to the standby DB instance first. When complete, the standby DB instance is promoted as the new primary DB instance. The availability impact is limited to the failover time, resulting in a shorter maintenance window.

Oracle security in Amazon RDS
Amazon RDS enables you to control network access to your DB instances using security groups. By default, network access is limited to other hosts in the Amazon Virtual Private Cloud (Amazon VPC) where your instance is deployed. Using AWS Identity and Access Management (IAM), you can manage access to your Amazon RDS DB instances. For example, you can authorize (or deny) administrative users under your AWS account to create, describe, modify, or delete an Amazon RDS DB instance. You can also enforce multi-factor authentication (MFA). For more information about using IAM to manage administrative access to Amazon RDS, see Identity and access management in Amazon RDS.

Amazon RDS offers optional storage encryption that uses AES-256 encryption and automatically encrypts any snapshots and snapshot restores. You can control who can decrypt your data by using AWS Key Management Service (AWS KMS). In addition, Amazon RDS supports several Oracle Database security features:
• Amazon RDS can protect data in motion using Secure Sockets Layer (SSL), or native network encryption that protects data in motion using Oracle Net Services. You can choose between AES, Triple DES, and RC4 encryption.
• You can also store database credentials using AWS Secrets Manager.

Installing JD Edwards EnterpriseOne on an Amazon RDS for Oracle DB instance
Installing JD Edwards EnterpriseOne is often seen as a complex task that involves setting up a server manager and the JD Edwards EnterpriseOne deployment server, followed by installing the platform pack. In this section, you will learn an alternative process for installing the platform pack, tailored to ensure a successful installation of JD Edwards EnterpriseOne on an Amazon RDS for Oracle database instance (referred to from this point on as an Oracle DB instance).

Prerequisites
To install JD Edwards EnterpriseOne on Amazon RDS for Oracle:
• You should be familiar with the JD Edwards EnterpriseOne installation process and have an understanding of the fundamentals of AWS architecture.
• You should have a functional AWS account with appropriate IAM permissions.
• You should have created an Amazon VPC with associated subnet groups and security groups, ready for use by the Amazon RDS for Oracle service.
• You should have a local database on your deployment server that you can connect to with Oracle SQL Developer.

Note: The deployment server will have two separate sets of Oracle binaries: a 32-bit client and a 64-bit server engine (named e1local).

Preparation
The process described in this whitepaper is based on the standard JD Edwards EnterpriseOne installation processes, which are described in the JD Edwards EnterpriseOne Applications Installation Guide. Prior to continuing, follow the instructions in the JD Edwards EnterpriseOne Applications Installation Guide until section 4.5 ("Understanding the Oracle Installation").
When you have completed the steps leading up to section 4.5, follow the rest of the instructions in this whitepaper to successfully install JD Edwards EnterpriseOne on an Oracle DB instance.

Key installation tasks
The key elements of installing JD Edwards EnterpriseOne on an Oracle DB instance include:
• Creating the instance
• Configuring the SQL*Plus Instant Client
• Installing the platform pack
• Modifying the original installation scripts that are provided

Creating your Oracle DB instance
Using the AWS Management Console, follow these steps:
1. From the top menu bar, choose Services.
2. Choose Database > RDS. This opens the Amazon RDS dashboard, where you will create your Oracle DB instance.
3. Choose Create database.
4. To create an Oracle SE2 environment from the Create database screen, do the following:
a. Under database creation method, choose Standard Create.
b. Under Engine options, choose Oracle.
5. Under Edition, choose Oracle Standard Edition Two.
6. Under Version, choose the latest quarterly release of Oracle Database 19c (which is 19.0.0.0.ru-2020-04.rur-2020-04.r1 at the time of this publication).
7. Under License, choose bring-your-own-license. Oracle SE2 must be used in compliance with the latest Oracle licensing; contact Oracle should further information be required.
8. Under Templates, choose Production. (The AWS Management Console recommends using the default values for a production-ready environment or a development environment. For the purposes of this whitepaper, you will use a production environment.)
9. Under Settings, enter the configuration details and credentials for the database instance. For this example, use the following information:
• DB Instance Identifier — jde92poc
• Master Username — jde92pocMaster
• Master Password — jde92pocMasterPassword
10. Under DB instance size, choose Memory Optimized classes (includes r and x classes).
11. From the dropdown menu, choose db.r5.xlarge.
12. Under Storage:
a. For Storage type, choose General Purpose (SSD).
b. For Allocated storage, choose 150 GiB.
c. Select (check) Enable storage autoscaling.
d. For Maximum storage threshold, select 500 GiB.

For the purposes of this example, use the settings mentioned above in step 6, steps 10 and 11, and step 12 to choose the Oracle version, instance class, and storage, respectively. These settings can be tailored to meet your specific requirements. AWS encourages you to consult with a JD Edwards EnterpriseOne supplier to ensure these settings are appropriate for your specific use case.

13. Under Availability & durability, choose Create a standby instance (recommended for production usage).
14. Under Connectivity, use the preconfigured VPC (JDE92) and the settings shown in the following figure. If you have appropriately configured subnet groups and VPC security groups, you can use them here.

Configure network and security settings

Note: The rest of this procedure assumes that you have already created a VPC to accommodate the Amazon RDS for Oracle installation and that the VPC name used is JDE92. If you need help, see the VPC documentation.

15. Under Database authentication options, choose Password authentication.
16. Expand the Additional configuration section. For Database options, enter the following settings:
• Initial database name — jde92poc
• DB parameter group — default.oracle-se2-19
• Option group — default:oracle-se2-19
• Character set — WE8MSWIN1252
17. For the Backup, Encryption, and Performance Insights sections, use the default settings for this example. However, because these settings do not impact the ability to install JD Edwards EnterpriseOne, AWS encourages you to experiment with and test these settings in your actual implementation.
18. Under Monitoring, choose Enable Enhanced monitoring.
a. For Granularity, choose 15 seconds.
b. For Monitoring Role, select default.
c. Under Log exports, choose Alert log, Listener log, and Trace log.
d. For Maintenance and Deletion Protection, select the defaults. Because these settings do not impact the ability to install JD Edwards EnterpriseOne, you should experiment with and test these settings.
19. Choose Create database to create the RDS Oracle instance.

Creation of the Oracle DB instance begins. This can take some time to complete. Search for your instance to view the progress, and click the refresh icon to watch the progress of the Oracle DB instance creation.

Refreshing the progress view

When the Oracle DB instance is available for use, the Status changes to available.

Connecting to your Oracle DB instance
When Amazon RDS creates the Oracle DB instance, it also creates an endpoint. Using this endpoint, you can construct the connection string required to connect directly to your Oracle DB instance. To allow network requests to your running Oracle DB instance, you will need to authorize access. For a detailed explanation of how to construct your connection string and get started, see the Amazon RDS User Guide.

Endpoint for the Oracle DB instance

The endpoint is allocated a Domain Name System (DNS) entry, which you can use for connecting. However, to facilitate a better installation experience for JD Edwards EnterpriseOne, a CNAME record is created so the endpoint can be more human readable. The CNAME should be created in the Amazon Route 53 local internal zone and should point to the new Oracle DB instance.

Note: Creating an Amazon Route 53 record set is beyond the scope of this document. For more assistance, see the Amazon Route 53 User Guide.

As shown in the following figure, you are creating a simple record called jde92poc, and you provide the RDS instance's endpoint in the Value/Route traffic to section.

CNAME record set
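Such a record can also be created programmatically. The following is a minimal sketch of the Route 53 change batch such a request carries; the record name, zone, and endpoint below are made-up examples, and with boto3 the dict would be passed to route53.change_resource_record_sets.

```python
# Hedged sketch: a change batch for an internal CNAME pointing at the
# RDS endpoint. Record name and endpoint are hypothetical examples.
change_batch = {
    "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "jde92poc.internal.example.com",  # hypothetical record
            "Type": "CNAME",                          # CNAME, not an A record
            "TTL": 60,                                # low TTL aids rerouting
            "ResourceRecords": [
                # hypothetical RDS endpoint DNS name
                {"Value": "jde92poc.abcdefghijkl.us-east-1.rds.amazonaws.com"},
            ],
        },
    }]
}
print(change_batch["Changes"][0]["ResourceRecordSet"]["Name"])
```

Because the application connects through the CNAME, a later point at a restored or replacement DB instance only requires updating this one record.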
To ensure that connectivity is permitted from the internal subnets in both Availability Zones, you will need to edit the security group for the Oracle DB instance. As shown in the following figure, an oracle-rds inbound rule has been added that allows connectivity from the internal IP (source) to the RDS instance.

Updating the security group

Configure SQL Developer
Oracle SQL Developer is used to validate that the appropriate connectivity and permissions are in place and that the Oracle DB instance is accessible. SQL Developer is installed by default with your Oracle client; optionally, however, see SQL Developer 19.2.1 Downloads to download a standalone version of SQL Developer. The configuration information used to create the Oracle DB instance will be used as the SQL Developer configuration parameters required to connect to the Oracle DB instance.

1. In the New/Select Database Connection dialog box, choose Test to perform a test connection to the Oracle DB instance. A status of Success indicates that the test connection has run and successfully connected to the Oracle DB instance. At this point, connectivity to both e1local and jde92poc has been proven using the default 64-bit drivers supplied with SQL Developer.

Note: The 64-bit driver is selected by default based on the order of the client drivers in the Path environment variable.

2. To check the deployment server path variables in File Explorer (assuming Microsoft Windows 10), right-click This PC and choose Properties.
3. On the Advanced tab, choose Environment Variables.
4. Locate the Path environment system variable in the list.

Path system variable

This enables the observation of the Path environment system variable. The following example shows the 64-bit binaries listed before the 32-bit binaries for Oracle:

C:\JDEdwards\E920_1\PLANNER\bin32;C:\JDEdwards\E920_1\system\bin32;C:\Oracle64db\E1Local\bin;C:\app\e1dbuser\product\12.1.0\client_1\bin;C:\ProgramData
\Oracle\Java\javapath;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;C:\Program Files\Amazon\cfn-bootstrap\;C:\Program Files\Amazon\AWSCLI\

5. To ensure that the remainder of the installation process works, it is critical that SQL*Plus works correctly; specifically, name resolution with tnsnames.ora. From the deployment server EC2 instance, open a command window and enter the following command:

tnsping e1local

The file used for tnsping is located in the C:\Oracle64db\E1Local\network\admin folder. In this directory, you'll make changes to the tnsnames.ora file; specifically, configuration of the e1local database (64-bit installation).

6. This step relates to the 64-bit libraries, not to the libraries that the JD Edwards EnterpriseOne deployment server code uses. The JD Edwards EnterpriseOne deployment server code uses 32-bit executables and the client-side tnsnames.ora file to connect to databases (which are 64-bit). For this example, these files are located in C:\app\e1dbuser\product\12.1.0\client_1\network\admin. Ensure that the Oracle DB instance is in the tnsnames.ora file in both locations (32-bit and 64-bit). To proceed, you must be able to log in through SQL*Plus to the Oracle DB instance using tnsnames.ora.

Installing the platform pack
The platform pack is run from the deployment server, connecting to a remote database. To proceed, you need the Oracle Platform Pack for Windows. You can obtain it from https://edelivery.oracle.com with the appropriate MOS (My Oracle Support) login.
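Before running the platform pack, it's worth confirming that the tnsnames.ora entries described above resolve. A sketch of what the entry for the Oracle DB instance might look like follows; the alias, host, and service name are hypothetical, with the host being the CNAME (or RDS endpoint) created earlier.

```
JDE92POC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = jde92poc.internal.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = jde92poc)
    )
  )
```

The same entry must appear in both the 32-bit and 64-bit tnsnames.ora files noted in step 6.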
Destination > Destination Leave the Name field as the default Under Path choose where to locate the installer files based on the installation preferences This is a temporary location and you can remove these files after the database is populated After you enter the file path choose Next 5 Under Would you like to Install or Upgrade EnterpriseOne choose Install and then choose Next 6 Under Database Options enter database information: a Database type — Oracle b Database server — The database server name is not important and you can use the name of the deployment server (in this case jde92dep ) c Enter and confirm your password d Choose Next 7 Under Administration and End User Roles use the defaults and choose Next 8 A warning appears Ignore it and choose Next Ignore the Database Server name warning Configuration for the Oracle DB instance and a username and password are supplied on the form Unique string identifiers are provided for the tablespace directory (c:\tablespace001 ) and the Index tablespace directory (c:\indexspace001 ) These will be replaced at a later stage of the installation process 9 Choose Run Scripts Manually to defer the execution of the installation scripts Important : Should the installation s cripts run at this stage the installation will fail Choose Next The installation process will attempt to connect to jde92poc using the information you provided This connection must succeed for the installation to proceed The following figure indicates that the installation process was able to connect to the Oracle DB instance specified ( jde92poc ) Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 16 Installation process connected to the Oracle DB instance 10 Choose Install The installation process starts and creates a set of specific database installati on scripts for the options selected throughout the platform pack installation wizard When installation is complete instead of the default scripts the custom values you provided 
are configured. Because you selected Run Scripts Manually, the database is not loaded; instead, scripts are created specifically for the current input parameters. As the installation process proceeds, you can view logging at C:\JDEdwardsPPack\E920_1.
Modifying the default scripts
After the installation wizard completes, the post-installation-wizard installation scripts are created; however, they assume that they will run on the database server itself. As a result, you need to modify these scripts to ensure a seamless installation on the Oracle DB instance. When you view the specified installation directory (C:\JDEdwardsPPack\E920_1), you will see that a folder structure was created. You will make the required modifications within this directory.
Folder structure for the installation directory
The modifications required to achieve a seamless installation are summarized as follows:
• Change the dpump_dir1 entry in all scripts to DATA_PUMP_DIR. The Data Pump files need to be moved from the various directories on the deployment server install media to the DATA_PUMP_DIR directory on the RDS DB instance using DBMS_FILE_TRANSFER.PUT_FILE. You can also use the Amazon S3 integration feature now available with RDS for Oracle to move the dump files. For details, see Integrating Amazon RDS for Oracle with Amazon S3 using S3_integration.
• Change the syntax of the CREATE TABLESPACE statements. Amazon RDS supports Oracle Managed Files (OMF) only for data files, log files, and control files. When creating data files and log files, you cannot specify physical file names. See Changing the Syntax of the CREATE TABLESPACE Statements in this document for additional details.
• Rename the pristine data dump file and the import data script. Change the name of the pristine data dump file, and also the import data script, for the TEST environment and the pristine environment. (The standard scripts change the import DIR, and you are
going to change the filename.)
• Change the database grants. Remove "create any directory" from the database grants, because this grant does not work on Amazon RDS. See Changing the Database Grants in this document for additional details.
Throughout this process, the updated scripts are located in the ORCL directory. You can run these scripts at any time by executing the following command. However, this is the master script for the database installation, and you should NOT run it at this stage.
cmd> InstallOracleDatabase.BAT
If you make any mistakes or encounter failures throughout this process, run the following command. This command completely unloads and drops any database components that were created by the installation script.
cmd> drop_db.bat
You should back up all the scripts in the ORCL directory. If required, you can run the installer again to generate a set of new pristine scripts.
Create the JDE installer's standard data pump directories
From SQL Developer, connected to the Amazon RDS for Oracle DB instance, perform the following steps. The Windows global search-and-replace commands were completed using Notepad++; however, you can use any text editor.
Changing dpump_dir1
Use global search and replace for the *.sql and *.bat files in the c:\JDEdwardsPPack\E920_1\ORCL directory:
• Replace dpump_dir1 with DATA_PUMP_DIR
• Replace log_dir1 with DATA_PUMP_DIR
Find and replace the *.sql and *.bat files
Now create the data pump directories 'log_dir1' and 'dpump_dir1' as shown:
Sqldeveloper> exec rdsadmin.rdsadmin_util.create_directory('log_dir1');
Sqldeveloper> exec rdsadmin.rdsadmin_util.create_directory('dpump_dir1');
• Confirmation messages such as "anonymous block completed" are displayed; you can safely ignore them.
• You can confirm that the directories were created by running the following SQL statement:
SELECT directory_name, directory_path FROM dba_directories;
After replacing the *.sql and *.bat files, the code changes. For example, this code:
impdp %SYSADMIN_USER%/%SYSADMIN_PSSWD%@%CONNECT_STRING% DIRECTORY=dpump_dir1 DUMPFILE=RDBSPEC01.DMP,RDBSPEC02.DMP,RDBSPEC03.DMP,RDBSPEC04.DMP LOGFILE=log_dir1:Import_%USER%.log TABLE_EXISTS_ACTION=TRUNCATE EXCLUDE=USER
Becomes this code:
impdp %SYSADMIN_USER%/%SYSADMIN_PSSWD%@%CONNECT_STRING% DIRECTORY=DATA_PUMP_DIR DUMPFILE=RDBSPEC01.DMP,RDBSPEC02.DMP,RDBSPEC03.DMP,RDBSPEC04.DMP LOGFILE=DATA_PUMP_DIR:Import_%USER%.log TABLE_EXISTS_ACTION=TRUNCATE EXCLUDE=USER
Changing the syntax of the CREATE TABLESPACE statements
By default, the pristine CREATE TABLESPACE statements found in files such as crtabsp_cont, crtabsp_shnt, and crtabsp_envnt look like the following example:
CREATE TABLESPACE &&PATH&&RELEASEt
logging datafile '&&TABLE_PATH\&&PATH&&RELEASEt01.dbf' size 1500M,
'&&TABLE_PATH\&&PATH&&RELEASEt02.dbf' size 1500M
autoextend on next 60M maxsize 5000M
extent management local autoallocate
segment space management auto
online;
These statements must be modified to reflect the following example:
CREATE bigfile TABLESPACE &&PATH&&RELEASEt logging Datafile SIZE 1500M AUTOEXTEND ON MAXSIZE 5G;
Note: The next step of applying the updates is either a manual or a scripted task, due to differences among many of the tablespaces. The following updates must be applied.
crtabsp_cont
create bigfile tablespace &&PATH&&RELEASEt logging datafile size 1500M AUTOEXTEND ON MAXSIZE 5G;
create bigfile tablespace &&PATH&&RELEASEi logging datafile size 1500M AUTOEXTEND ON MAXSIZE 5G;
crtabsp_shnt
create bigfile tablespace sy&&RELEASEt logging datafile size 250M AUTOEXTEND ON MAXSIZE 750M;
create bigfile tablespace sy&&RELEASEi logging datafile size 100M AUTOEXTEND ON MAXSIZE 750M;
create bigfile tablespace svm&&RELEASEt logging datafile size 10M AUTOEXTEND ON MAXSIZE 150M;
create bigfile tablespace svm&&RELEASEi logging datafile size 10M AUTOEXTEND ON MAXSIZE 150M;
create bigfile tablespace ol&&RELEASEt logging datafile size 250M AUTOEXTEND ON MAXSIZE 350M;
create bigfile tablespace ol&&RELEASEi logging datafile size 100M AUTOEXTEND ON MAXSIZE 150M;
create bigfile tablespace dd&&RELEASEt logging datafile size 350M AUTOEXTEND ON MAXSIZE 450M;
create bigfile tablespace dd&&RELEASEi logging datafile size 125M AUTOEXTEND ON MAXSIZE 750M;
crtabsp_envnt
create bigfile tablespace &&ENV_OWNERctli logging datafile size 1000M AUTOEXTEND ON MAXSIZE 1500M;
create bigfile tablespace &&ENV_OWNERctlt logging datafile size 1000M AUTOEXTEND ON MAXSIZE 1500M;
create bigfile tablespace &&ENV_OWNERdtai logging datafile size 1000M AUTOEXTEND ON MAXSIZE 4500M;
create bigfile tablespace &&ENV_OWNERdtat logging datafile size 1000M AUTOEXTEND ON MAXSIZE 4500M;
Renaming the pristine data dump file and the import data script
These changes are made to ORCL\InstallOracleDatabase.BAT. You are changing DTA to DDTA in order to load the DEMO data, as opposed to the empty tables.
approx. line 363 – PRISTINE
@REM
@set USER=%PS_DTA_USER%
@set PSSWD=%PS_DTA_PSWD%
@set FROMUSER=%PS_DTA_FROMUSER%
@set LOAD_TYPE=DDTA
@set JDE_DTA=%DATABASE_INSTALL_PATH%\demodta
@echo ************************************************************
@echo create and load %USER% Business Data Tables
@echo
@echo "Calling Load for %PS_DTA_USER% load type DTA" >> logs\OracleStatus.txt
@echo "InstallOracleDatabase:#6 call load %PS_DTA_USER% DTA TESTDTA"
@call Load.bat
@if ERRORLEVEL 4 (
@goto abend
approx. line 554 – TESTDTA
@rem
@if "%RUN_MODE%"=="INSTALL" (
@set user=%DV_DTA_USER%
@set PSSWD=%DV_DTA_PSWD%
@set FROMUSER=%PS_DTA_FROMUSER%
@set LOAD_TYPE=DDTA
@set JDE_DTA=%DATABASE_INSTALL_PATH%\demodta
@echo ************************************************************
@echo create and load %DV_DTA_USER% Business Data Tables
@echo
@echo "Calling Load for %DV_DTA_USER% load type DTA" >> logs\OracleStatus.txt
@call Load.bat
@if ERRORLEVEL 4 (
@goto abend
Changing the database grants
Create_dir.sql has the following statement that you need to change. Amazon RDS for Oracle does not support creating directories on the RDS instance, so you must remove the "create any directory" grant.
Before:
grant create session, create table, create view, create any directory, select any dictionary to jde_role;
After:
grant create session, create table, create view, select any dictionary to jde_role;
Advanced configuration
Start a SQL Developer session to the RDS DB instance and log in as the administrative user (jde92pocmaster). Run the following SQL command:
SELECT directory_name, directory_path FROM dba_directories;
This is the result:
DIRECTORY_NAME      DIRECTORY_PATH
BDUMP               /rdsdbdata/log/trace
ADUMP               /rdsdbdata/log/audit
OPATCH_LOG_DIR      /rdsdbbin/oracle/QOpatch
OPATCH_SCRIPT_DIR   /rdsdbbin/oracle/QOpatch
DATA_PUMP_DIR       /rdsdbdata/datapump
OPATCH_INST_DIR     /rdsdbbin/oracle/Opatch
LOG_DIR1            /rdsdbdata/userdirs/01
DPUMP_DIR1          /rdsdbdata/userdirs/02
To see the files in the DATA_PUMP_DIR and LOG_DIR1 directories, run the following:
SELECT * FROM TABLE (RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) ORDER BY mtime;
SELECT * FROM TABLE (RDSADMIN.RDS_FILE_UTIL.LISTDIR('LOG_DIR1')) ORDER BY mtime;
The following commands delete a single file named Import_TESTCTL_CTL.log from the LOG_DIR1 and DATA_PUMP_DIR directories stored on the Oracle DB instance:
exec utl_file.fremove('LOG_DIR1','Import_TESTCTL_CTL.log');
exec utl_file.fremove('DATA_PUMP_DIR','Import_TESTCTL_CTL.log');
DATA_PUMP_DIR is used in the following SQL command to generate delete commands for all log files in LOG_DIR1:
SELECT 'exec utl_file.fremove(''DATA_PUMP_DIR'','''||filename||''');' FROM TABLE (RDSADMIN.RDS_FILE_UTIL.LISTDIR('LOG_DIR1')) WHERE filename LIKE '%log' ORDER BY mtime;
Moving DMP files
When connected to e1local on the deployment server using SQL Developer, run the following commands:
DROP DATABASE LINK jde92poc;
CREATE DATABASE LINK jde92poc CONNECT TO jde92pocmaster IDENTIFIED BY "aws_Poc_Password" USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=jde92poc.jde92.local)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=jde92poc)))';
CREATE OR REPLACE DIRECTORY DATA_PUMP_SRM AS 'C:\Oracle64db\admin\e1local\dpdump';
SELECT directory_name, directory_path FROM dba_directories;
These commands create the following:
• A new database directory (DATA_PUMP_SRM) to read the dump files from the deployment server
• A database link to the Amazon RDS for Oracle DB instance to serve as a conduit for moving the dump files from the deployment server to the Oracle DB instance
Copying DMP files from the ORCL directory to the specified DATA_PUMP directory
Locate the *.dmp files in the ORCL directory and copy them to C:\Oracle64db\admin\e1local\dpdump, as defined in the previous e1local database directory (DATA_PUMP_SRM). You'll see that there are two DUMP_DTA.DMP files in the find results. The one in demodta must be renamed DUMP_DDTA.DMP. It's important to name it exactly as specified, because there are associated changes in the import scripts. The other DUMP_DTA.DMP comes from ORCL\proddta. The reason for this renaming is that one of the dump files (the larger one) is for DEMO data, which is imported into TESTDTA and PRISTINE, while the smaller file (DUMP_DTA.DMP) does not contain any data, only table and index structures.
Now all of the *.dmp files that must be copied to the Oracle DB instance are in an e1local directory named DATA_PUMP_SRM. It's time to move these files to the RDS DB instance directory named DPUMP_DIR1 that you created. The following figure shows how this directory looks on the deployment server.
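The Appendix transfer script issues one DBMS_FILE_TRANSFER.PUT_FILE call per dump file. As an illustrative sketch that is not part of the original whitepaper, a short Python helper could generate such a PL/SQL block from a list of file names; the default directory names (DATA_PUMP_SRM, DPUMP_DIR1) and database link (jde92poc) are the values used in this walkthrough, so adjust them to your environment.

```python
def put_file_block(dump_files,
                   src_dir="DATA_PUMP_SRM",
                   dest_dir="DPUMP_DIR1",
                   db_link="jde92poc"):
    # Build an anonymous PL/SQL block with one DBMS_FILE_TRANSFER.PUT_FILE
    # call per dump file, mirroring the Appendix script. The defaults are
    # the directory and database-link names used in this walkthrough.
    calls = [
        "  DBMS_FILE_TRANSFER.PUT_FILE(\n"
        f"    source_directory_object      => '{src_dir}',\n"
        f"    source_file_name             => '{name}',\n"
        f"    destination_directory_object => '{dest_dir}',\n"
        f"    destination_file_name        => '{name}',\n"
        f"    destination_database         => '{db_link}');"
        for name in dump_files
    ]
    return "BEGIN\n" + "\n".join(calls) + "\nEND;\n/"

# Example: generate the transfer calls for two of the dump files.
print(put_file_block(["DUMP_CTL.DMP", "RDBSPEC01.DMP"]))
```

You could paste the generated block into SQL Developer connected to e1local and extend the file list to cover all the *.dmp files listed in the Appendix.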
DPUMP_DIR1 directory on the deployment server
In the Appendix, you will find a script you can use to copy the .dmp files from the deployment server to the RDS DB instance via a database link. Run this script from SQL Developer connected to the e1local database. When these commands finish successfully, you can run the following command against the Oracle DB instance (jde92poc) to ensure that the files have arrived:
SELECT substr(filename,1,30), type, filesize, mtime FROM TABLE (RDSADMIN.RDS_FILE_UTIL.LISTDIR('DPUMP_DIR1')) ORDER BY mtime;
The following output indicates that the files were transferred correctly.
Confirming that the files were transferred
Change the database grants to not include 'create any directory'
Because Amazon RDS for Oracle does not support creating directories on the RDS instance, the creation of directories in the installation scripts must be done manually. You do this by using the AWS custom function rdsadmin.rdsadmin_util.create_directory.
Grants before:
Grant create session, create table, create view, create any directory, select any dictionary to jde_role;
Grants after:
Grant create session, create table, create view, select any dictionary to jde_role;
Running the installer
At this point, you have made all the modifications that are required to facilitate a smooth installation of JD Edwards EnterpriseOne. If you encounter any issues, be sure that anything you defined in the installation wizard is also defined in ORCL\ORCL_SET.BAT. If you forget items such as passwords or settings, you can retrieve them from this file; however, be sure to delete this file when the installation is complete.
Open a command window on the deployment server and run InstallOracleDatabase.bat from the C:\JDEdwardsPPack\E920_1\ORCL directory. You can use C:\JDEdwardsPPack\E920_1\ORCL\logs to track progress and view the script output. You cannot view the output of the data pump operations, because they are not multiples of the block size of the database.
When the installation is complete, you should see that the database is populated. The following screenshot, from Oracle SQL Developer, shows the properties of the target database. All JD Edwards EnterpriseOne tablespaces now have space allocated and tables created.
Properties of the target database
You've now completed all the tasks for installing JD Edwards EnterpriseOne on the Amazon RDS for Oracle DB instance. The following steps enable you to verify that you can connect to the populated instance.
Logging in to JD Edwards EnterpriseOne on the deployment server
1. Click the application launch icon to start JD Edwards EnterpriseOne. The JD Edwards EnterpriseOne login screen is displayed.
2. Enter your user ID and password.
3. For Environment, enter DV920.
4. For Role, enter *ALL.
Logging in to DV920 for testing
5. Log out, and then log back in to the jdeplan environment and continue with the standard installation.
Because there are no further deviations from a standard installation beyond this point, you can proceed to create an installation plan and run the installation workbench. Follow the instructions in section 5 of the JD Edwards EnterpriseOne installation process, "Working with Installation Planner for an Install."
Validation and testing
Successful completion of the installation workbench will give you confidence that the Amazon RDS for Oracle database installation is working. Installing web servers and enterprise servers and connecting them to the Amazon RDS for Oracle DB instance are some of the remaining installation steps.
Remember to delete the .dmp files on the Amazon RDS instance, so that they do not contribute to the amount of storage you are using on the instance. Any files stored in database directories contribute to the space you are using on the Amazon RDS instance. Use the following statement to build the commands you need to run to delete the .dmp files. Run this statement only when you know that your installation succeeded.
SELECT 'exec utl_file.fremove(''DPUMP_DIR1'','''||filename||''');' FROM TABLE (RDSADMIN.RDS_FILE_UTIL.LISTDIR('DPUMP_DIR1')) WHERE filename LIKE '%DMP' ORDER BY mtime;
Running on Amazon RDS for Oracle Enterprise Edition
This paper walks through the implementation of JD Edwards on Amazon RDS for Oracle Standard Edition only. However, if you are running, or plan to run, on Amazon RDS for Oracle Enterprise Edition, there are some additional features you can leverage in the areas of high availability and security.
• Flashback Table recovers tables to a specific point in time. This can be helpful when a logical corruption is limited to one table or a set of tables, instead of the entire database. At the time of this publication, the Flashback Database feature is available only on self-managed Oracle databases on Amazon EC2, and not in Amazon RDS for Oracle.
• Transparent Data Encryption (TDE) protects data at rest for customers who have purchased the Oracle Advanced Security option. TDE provides transparent encryption of stored data to support your privacy and compliance efforts. Applications do not have to be modified and will continue to work as before. Data is automatically
encrypted before it is written to disk and automatically decrypted when read from storage. Key management is built in, which eliminates the task of creating, managing, and securing encryption keys. You can choose to encrypt tablespaces or specific table columns using industry-standard encryption algorithms, including Advanced Encryption Standard (AES) and Triple Data Encryption Standard (Triple DES).
• Oracle Virtual Private Database (VPD) enables you to create security policies to control database access at the row and column level. Essentially, Oracle VPD adds a dynamic WHERE clause to a SQL statement that is issued against the table, view, or synonym to which an Oracle VPD security policy was applied. Oracle VPD enforces security to a fine level of granularity, directly on database tables, views, or synonyms. Because you attach security policies directly to these database objects, and the policies are applied automatically whenever a user accesses data, there is no way to bypass security.
• Fine-Grained Auditing (FGA) can be understood as policy-based auditing. It enables you to specify the conditions necessary to generate an audit record. FGA policies are programmatically bound to a table or view. They allow you to audit an event only when conditions that you define are true; for example, only if a specific column has been selected or updated. Because not every access to a table is recorded, this creates more meaningful audit trails. This can be critical, given the often commercially sensitive nature of the data retained in the JD Edwards EnterpriseOne back-end databases.
Also, the db.z1d instance class delivers a sustained all-core frequency of up to 4.0 GHz, the fastest of any cloud instance. This can also reduce costs for customers who use core-based licensing while running Enterprise Edition, because they will now need fewer cores.
Conclusion
This whitepaper described many of the capabilities and advantages of using AWS and Amazon RDS as the foundation for installing the JD Edwards EnterpriseOne application. Specifically, this whitepaper focused on a way of configuring Amazon RDS for Oracle as the underlying database for the JD Edwards EnterpriseOne application. The whitepaper articulated all the steps for installing the JD Edwards EnterpriseOne application and the steps required to set up an Amazon RDS for Oracle DB instance. Having JD Edwards EnterpriseOne and Amazon RDS for Oracle running in the AWS Cloud enables you to enjoy the advantages of simple deployment, high availability, security, scalability, and many additional services supported by Amazon RDS and AWS.
Appendix: Moving dump files from the deployment server to RDS
The following code snippet shows example usage of the DBMS_FILE_TRANSFER package to transfer the Data Pump dump files from the deployment server to RDS for Oracle.
Begin
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object => 'DATA_PUMP_SRM',
source_file_name => 'DUMP_CTL.DMP',
destination_directory_object => 'DPUMP_DIR1',
destination_file_name => 'DUMP_CTL.DMP',
destination_database => 'jde92poc');
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object => 'DATA_PUMP_SRM',
source_file_name => 'RDBSPEC01.DMP',
destination_directory_object => 'DPUMP_DIR1',
destination_file_name => 'RDBSPEC01.DMP',
destination_database => 'jde92poc');
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object => 'DATA_PUMP_SRM',
source_file_name => 'RDBSPEC02.DMP',
destination_directory_object => 'DPUMP_DIR1',
destination_file_name => 'RDBSPEC02.DMP',
destination_database => 'jde92poc');
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object => 'DATA_PUMP_SRM',
source_file_name => 'RDBSPEC03.DMP',
destination_directory_object => 'DPUMP_DIR1',
destination_file_name => 'RDBSPEC03.DMP',
destination_database => 'jde92poc');
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object => 'DATA_PUMP_SRM',
source_file_name => 'RDBSPEC04.DMP',
destination_directory_object => 'DPUMP_DIR1',
destination_file_name => 'RDBSPEC04.DMP',
destination_database => 'jde92poc');
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object => 'DATA_PUMP_SRM',
source_file_name => 'DUMP_DTA.DMP',
destination_directory_object => 'DPUMP_DIR1',
destination_file_name => 'DUMP_DTA.DMP',
destination_database => 'jde92poc');
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object => 'DATA_PUMP_SRM',
source_file_name => 'DUMP_DD.DMP',
destination_directory_object => 'DPUMP_DIR1',
destination_file_name => 'DUMP_DD.DMP',
destination_database => 'jde92poc');
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object => 'DATA_PUMP_SRM',
source_file_name => 'DUMP_OL.DMP',
destination_directory_object => 'DPUMP_DIR1',
destination_file_name => 'DUMP_OL.DMP',
destination_database => 'jde92poc');
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object => 'DATA_PUMP_SRM',
source_file_name => 'DUMP_SY.DMP',
destination_directory_object => 'DPUMP_DIR1',
destination_file_name => 'DUMP_SY.DMP',
destination_database => 'jde92poc');
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object => 'DATA_PUMP_SRM',
source_file_name => 'DUMP_DDTA.DMP',
destination_directory_object => 'DPUMP_DIR1',
destination_file_name => 'DUMP_DDTA.DMP',
destination_database => 'jde92poc');
END;
Contributors
Contributors to this document include:
• Marc Teichtahl, AWS Solutions Architect
• Shannon Moir, Lead Engineer at Myriad IT
• Saikat Banerjee, Database Solutions Architect, AWS
Document revisions
Date — Description
March 24, 2021 — Document review and addition of various new RDS for Oracle capabilities
December 2016 — First publication",General,consultant,Best Practices
Integrating_AWS_with_Multiprotocol_Label_Switching,"Integrating AWS with Multiprotocol Label Switching
December 2016
This paper has been archived. For the latest technical content on this subject, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers
© 2016 Amazon Web Services, Inc. or its
affiliates. All rights reserved.
Contents
Introduction
Why Integrate with AWS?
Introduction to MPLS and Managed MPLS Services
Overview of AWS Networking Services and Core Technologies
Amazon VPC
AWS Direct Connect and VPN
Internet Gateway
Customer Gateway
Virtual Private Gateway and Virtual Routing and Forwarding
IP Addressing
BGP Protocol Overview
Autonomous System
AWS APN Partners – Direct Connect as a Service
Colocation with AWS Direct Connect
Benefits
Considerations
Architecture Scenarios
MPLS Architecture Scenarios
Scenario 1: MPLS Connectivity over a Single Circuit
Scenario 2: Dual MPLS Connectivity to a Single Region
Conclusion
Contributors
Further Reading
Notes
Abstract
This whitepaper outlines high-availability architectural best practices for customers who are considering integrating Amazon Virtual Private Cloud (Amazon VPC) in one or more Regions with their existing Multiprotocol Label Switching (MPLS) network. The whitepaper provides best practices for connecting single and/or multi-regional configurations with your MPLS provider. It also describes how customers can incorporate VPN backup for each of
their remote offices to maintain connectivity to AWS Regions in the event of a network or MPLS outage. The target audience of this whitepaper includes technology decision makers, network architects, and network engineers.
Introduction
Many midsized to large enterprises leverage Multiprotocol Label Switching (MPLS) services for their Wide Area Network (WAN) connection. As cloud adoption increases, companies seek ways to integrate AWS with their existing MPLS infrastructure in a cost-effective way, without redesigning their WAN architecture. Companies want a flexible and scalable solution to bridge current on-premises data center workloads and their cloud infrastructure. They also want to provide a seamless transition or extension between the cloud and their on-premises data center.
Why Integrate with AWS?
There are a number of compelling business reasons to integrate AWS into your existing MPLS infrastructure:
• Business continuity. One of the benefits of adopting AWS is the ease of building highly available, geographically separated workloads. By integrating your existing MPLS network, you can take advantage of native benefits of the cloud, such as global disaster recovery and elastic scalability, without losing any of your current architectural implementations, standards, and best practices.
• User data availability. By keeping data closer to your users, your company can improve workload performance and customer satisfaction, as well as meet regional compliance requirements.
• Mergers & acquisitions. During mergers and acquisitions, your company can realize synergies and improvements in IT services very quickly by moving acquired workloads into the AWS Cloud. By integrating AWS into MPLS, your company has the ability to:
o Minimize or avoid costly and service-impacting data center expansion projects that can require either the relocation or purchase of technology assets
o Migrate workloads into Amazon
Virtual Private Cloud (Amazon VPC) to realize financial synergies very quickly, while developing longer-term transformational initiatives to finalize the acquisition
To accomplish this, companies can design their network with AWS to do the following:
• Enable a seamless transition of the acquired remote offices and data centers with AWS by connecting the newly acquired MPLS network to AWS
• Simplify the migration of workloads from the acquired data center into an isolated Amazon VPC while maintaining connectivity to existing AWS workloads
• Optimize availability and resiliency. Enterprise customers who want to maximize availability and performance by using one or more WAN/MPLS solutions are able to continue with the same level of availability by peering with AWS in multiple fault-isolated Regions.
This whitepaper highlights several options that you have, as a mid- to large-scale enterprise, to cost-effectively migrate and launch new services in AWS without overhauling and redesigning your current MPLS/WAN architecture.
Introduction to MPLS and Managed MPLS Services
MPLS is an encapsulation protocol used in many service provider and large-scale enterprise networks. Instead of relying on IP lookups to discover a viable "next hop" at every single router within a path (as in traditional IP networking), MPLS predetermines the path and uses a label-swapping (push, pop, and swap) method to direct the traffic to its destination. This gives the operator significantly more flexibility and enables users to experience a greater SLA through reduced latency and jitter. For a simple overview of MPLS basics, see RFC 3031.
Many service providers offer a managed MPLS solution that can be provisioned as Layer 3 (IP-based) or Layer 2 (single broadcast domain) to provide a logical extension of a customer's network. When referring to MPLS in this document, we are referring to the service provider's managed MPLS/WAN solution. See the
following RFCs for an overview of some of the most common MPLS solutions:
• L3VPN: https://tools.ietf.org/html/rfc4364 (obsoletes RFC 2547)
• L2VPN (BGP): https://tools.ietf.org/html/rfc6624
• Pseudowire (LDP): https://tools.ietf.org/html/rfc4447
Although AWS does not natively integrate with MPLS as a protocol, we provide mechanisms and best practices to connect your currently deployed MPLS/WAN to AWS via AWS Direct Connect and VPN.
Overview of AWS Networking Services and Core Technologies
We want to provide a brief overview of the key AWS services and core technologies discussed in this whitepaper. Although we assume you have some familiarity with these AWS networking concepts, we have provided links to more in-depth information.
Amazon VPC
Amazon Virtual Private Cloud (Amazon VPC) is a logically isolated virtual network dedicated to your AWS account.[1] Within Amazon VPC, you can launch AWS resources and define your IP addressing scheme. This includes your subnet ranges, routing table constructs, network gateways, and security settings. Your VPC is a security boundary within the AWS multitenant infrastructure that isolates communication to only the resources that you manage and support.
AWS Direct Connect and VPN
You can connect to your Amazon VPC over the Internet via a VPN connection by using any IPsec/IKE-compliant platform (e.g., routers or firewalls). You can set up a statically routed VPN connection to your firewall, or a dynamically routed VPN connection to an on-premises router. To learn more about setting up a VPN connection, see the following resources:
• http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpn-connections.html
• https://www.youtube.com/watch?v=SMvom9QjkPk
Alternatively, you can connect to your Amazon VPC by establishing a direct connection using AWS Direct Connect.[2] Direct Connect uses dedicated private
network connections between your intranet and Amazon VPC. Direct Connect currently provides 1G and 10G connections natively, and sub-1G connections through Direct Connect Partners. At the heart of Direct Connect is your ability to carve out logical virtual connections within the physical Direct Connect circuit, based on the 802.1Q VLAN protocol. Direct Connect leverages virtual LANs (VLANs) to provide network isolation and enable you to create virtual circuits for different types of communication. These logical virtual connections are then associated with virtual interfaces in AWS. You can create up to 50 virtual interfaces across your direct connection; AWS has a soft limit on the number of virtual interfaces you can create.
Using Direct Connect, you can categorize the VLANs that you create as either public virtual interfaces or private virtual interfaces. Public virtual interfaces enable you to connect to AWS services that are accessible via public endpoints, for example, Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, and Amazon CloudFront. You can use private virtual interfaces to connect to AWS services that are accessible through private endpoints, for example, Amazon Elastic Compute Cloud (Amazon EC2), AWS Storage Gateway, and your Amazon VPC. Each virtual interface needs a VLAN ID, an interface IP address, an autonomous system number (ASN), and a Border Gateway Protocol (BGP) key. To learn more about working with Direct Connect virtual interfaces, see http://docs.aws.amazon.com/directconnect/latest/UserGuide/WorkingWithVirtualInterfaces.html
Internet Gateway
An Internet gateway (IGW) is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet.[3] To use your IGW, you must explicitly specify a route pointing to the IGW in your routing table.
Customer Gateway
A customer gateway (CGW) is the anchor on your side of the connection between
your network and your Amazon VPC4 In an MPLS scenario the CGW can be a customer edge (CE) device located at a Direct Connect location or it can be a provider edge (PE) device in an MPLS VPN network For more information on which option best suits your needs see the Colocation section later in this document Virtual Private Gateway and Virtual Routing and Forwarding A virtual private gateway (VGW) is the anchor on the AWS side of the connection between your network and your Amazon VPC This software construct enables you to connect to your Amazon VPCs over an Internet Protocol Security (IPsec) VPN connection or with a direct physical connection You can connect from the CGW to your Amazon VPC using a VGW In addition you can connect from an onpremises router or network to one or more VPCs using a virtual routing and forwarding (VRF) approach5 VRF is a technology that you can use to virtualize a physical routing device to support multiple virtual routing instances These virtual routing instances are isolated and independent AWS recommends that you implement a VRF if you are connecting to multiple VPCs over a direct connection where IP overlapping and duplication may be a concern IP Addressing IP addressing is the bedrock of effective cloud architecture and scalable topologies Properly addressing your Amazon VPC and your internal network enables you to do the following:  Define an effective routing policy An effective routing policy enables you to associate adequate governance around what networks your infrastructure can communicate with internally and externally It also enables you to effectively exchange routes between and within domains systems and internal and external entities  Have a consistent and predictable routing infrastructure Your network should be predictable and fault tolerant During an outage or a ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 6 network interruption your routing policy ensures that routing changes 
are resilient and fault tolerant Use resources effectively By controlling the number of routes exchanged across the boundaries you prevent data packets from travelling across the entire network before getting dropped With proper IP addressing only segments with active hosts are propagated while networks without a host do not appear in your routing table This prevents unnecessary data charges when hosts are sending erroneous IP packets to systems that do not exist or that you choose not to communicate with Maintain security By effectively controlling which networks are advertised to and from your VPC you can minimize the impact of targeted denial of service attacks on subnets If these subnets are not defined within your VPC such attacks originating outside of your VPC will not impact your VPC Define a unique network IP address boundary in your VPC Amazon VPC supports IP address allocation by subnets which allows you to segment IP address spaces into defined CIDR ranges between /16 and /28 A benefit of segmentation is that you can sequentially assign hosts into meaningful blocks and segments while conserving your IP address allocations Amazon AWS also supports route summarization which you can use to aggregate your routes to control the number of routes into your VPC from your internal network The largest CIDR supported by Amazon VPC is a /16 You can aggregate your routes up to a /16 when advertising routes to AWS BGP Protocol Overview Autonomous System An autonomous system (AS) is a set of devices or routers sharing a single routing policy that run under a single technical administration An example is your VPC or data center or a vendor’s MPLS network Each AS has an identification number (ASN) that is assigned by an Internet Registry or a provider If you do not have an assigned ASN from the Internet Registry you can request one from your circuit provider (who may be able to allocate an ASN) or choose to assign a Private ASN from the following range: 65412 to 
65535 ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 7 We recommend that you use Border Gateway Protocol (BGP) as the routing protocol of choice when establishing one or more Direct Connect connections with AWS For more information on why you should use BGP see http://docsawsamazoncom/directconnect/latest/UserGuide/Welcomehtml As an example AWS assigns an AS# of 7224 This AS# defines the autonomous system in which your VPC resides To establish a connection with AWS you have to assign an AS# to your CGW After communication is established between the CGW and the VGW they become external BGP peers and are considered BGP neighbors BGP neighbors exchange their predefined routing table (prefixlist) when the connection is first established and exchange incremental updates based on route changes Establishing neighbor relationships between two different ASNs is considered an External Border Gateway Protocol connection (eBGP) Establishing a connection between devices within the same ASN is considered an Internal Border Gateway Protocol connection (iBGP) BGP uses a TCP transport protocol port 179 to exchange routes between BGP neighbors Exchanging Routes between AWS and CGWs BGP uses ASNs to construct a vector graph of the network topology based on the prefixes exchanged between your CGW and VGW The connection between two ASNs forms a path and the collection of all these paths form a route used to reach a specific destination BGP carries a sequence of ASNs which indicate which routes are transversed To establish a BGP connection the CGW and VGW must be connected directly with each other While BGP supports BGP multihopping natively AWS VGW does not support multihopping All BGP neighbor connections have to terminate on the VGW Without a successful neighbor relationship BGP updates are not exchanged AWS does not support iBGP neighbor relationship between CGW and VGW AWSSupported BGP Metrics and Path Selection Algorithm The VGW receives 
routing information from all CGWs and uses the BGP best path selection algorithm to calculate the set of preferred paths The rules of that algorithm as it applies to VPC are: ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 8 1 The most specific IP prefix is preferred (for example 10000/24 is preferable to 10000/16) For more information see Route Priority in the Amazon VPC User Guide 6 2 When the prefixes are the same statically configured VPN connections (if they exist) are preferred 3 For matching prefixes where each VPN connection uses BGP the algorithm compares the AS PATH prefixes and the prefix with the shortest AS PATH is preferred Alternatively you can prepend AS_PATH so that the path is less preferred 4 When the AS PATHs are the same length the algorithm compares the path origin s Prefixes with an Interior Gateway Protocol (IGP) origin are preferred to Exterior Gateway Protocol (EGP) origins and EGP origins are preferred to unknown origins 5 When the origins are the same the algorithm compares the router IDs of the advertising routes The lowest router ID is preferred 6 When the router IDs are the same the algorithm compares the BGP peer IP addresses The lowest peer IP address is preferred Finally AWS limits the number of routes per BGP session to 100 routes AWS will send a reset and tear down the BGP connection if the number of routes exceeds 100 routes per session AWS APN Partners – Direct Connect as a Service Direct Connect partners in the AWS Partner Network (APN) can help you establish sub1G highspeed connectivity as a service between your network and a Direct Connect location To learn more about how APN partners can help you extend your MPLS infrastructure to a Direct Connect location as a service see https://awsamazoncom/directconnect/partners/ ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 9 Colocation with AWS Direct Connect Colocation with Direct Connect means placing the 
CGW in the same physical facility as Direct Connect location (https://awsamazoncom/directconnect/partners/) to facilitate a local cross connect between the CGW and AWS devices Establishing network connectivity between your MPLS infrastructure and an AWS colocation center offers you an additional level of flexibility and control at the AWS interconnect If you are interested in establishing a Direct Connect connection in the Direct Connect facility you will need to order a circuit between your MPLS Provider and the Direct Connect colocation facility and connect the circuit to your device A second circuit will then need to be ordered through the AWS Direct Connect console from the CE/CGW to AWS Benefits AWS Direct Connect offers the following benefits:  Traffic separation and isolation You can satisfy compliance requirements that call for data segregation You also have the ability to define a public and private VRF across the same Direct Connect connection and monitor specific data flows for security and billing requirements  Traffic engineering granularity You have greater ability to define and control how data moves in to and out of your AWS environment You can define complex BGP routing rules filter traffic paths move data in to and out of one VPC to another VPC You also have the ability to define which data flows through which VRF This is particularly important if you need to satisfy specific compliance for data intransit  Security and monitoring functionality If you choose to monitor onpremises communication you can span ports or install tools that monitor traffic across a particular VRF You can place firewalls in line to meet internal security requirements You can also control communication by enforcing certain IP addresses to communicate across specific VLANs  Simplified integration of IT and data platforms in mergers and acquisitions In a merger and acquisition (M&A) scenario where both companies have the same MPLS provider you can ask the MPLS 
ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 10 provider to attach a network tonetwork interface ( NNI ) between the two companies This will enable both companies to have a direct path to Amazon VPCs Your colocation router can serve as a transit to allow for the exchange of routes between the two companies If the companies do not share the same MPLS provider the acquiring company can order an additional circuit from their CGW to the acquired compan y’s MPLS to the colocation router and carve out a VRF for that connection Considerations There are a few business and technology design requirements to consider if you are interested in setting up your router in a colocation facility:  Design Requirements: The technical requirements for certain large enterprise customer can be complex A colocation infrastructure can simplify the integration with complex network designs especially if there is a need to manipulate routes or a need to extend a private MPLS network to the CGW  PE/CE Management: Some MPLS providers offer managed Customer Equipment support bundled with their MPLS service offering Taking advantage of this service may reduce operational burden while taking advantage of the discounted bundled pricing that comes with the service Architecture Scenarios Colocation Architecture At a very high level a customer’s colocated CGW sits between the AWS VGW and the MPLS PE The CGW connects to AWS VGW over a cross connection and connects to the customers MPLS provider equipment over a last mile circuit (cross connect that may or may not reside in the same colocation facility) It is possible that the MPLS provider edge (PE) resides in the same direct connect facility In that situation two LOA’s will exist The first between your CGW and AWS and the second between your CGW and your MPLS provider The first LOA can be requested via AWS console and either you or the MPLS provider can request the second LOA via the direct connect facility 
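The BGP best-path tie-breaking rules described earlier (most specific prefix, static over BGP, shortest AS_PATH, origin, router ID, peer IP) can be sketched as a single ordered comparison. This is an illustrative model only, not AWS or vendor code; all route attributes and values below are hypothetical:

```python
from ipaddress import IPv4Address

# Origin preference for rule 4: IGP beats EGP, EGP beats unknown.
ORIGIN_RANK = {"igp": 0, "egp": 1, "unknown": 2}

def best_path(candidates):
    """Return the preferred route among candidates for one destination,
    applying the six tie-breaking rules in order."""
    return min(
        candidates,
        key=lambda r: (
            -r["prefix_len"],             # 1. most specific prefix wins
            not r["static"],              # 2. static VPN routes beat BGP routes
            len(r["as_path"]),            # 3. shortest AS_PATH wins
            ORIGIN_RANK[r["origin"]],     # 4. IGP < EGP < unknown
            IPv4Address(r["router_id"]),  # 5. lowest router ID wins
            IPv4Address(r["peer_ip"]),    # 6. lowest BGP peer IP wins
        ),
    )

# Two BGP-learned routes to the same prefix; the second has had its
# AS_PATH prepended, so rule 3 makes it the backup path.
r1 = {"prefix_len": 24, "static": False, "as_path": [65001, 65010],
      "origin": "igp", "router_id": "10.0.0.1", "peer_ip": "169.254.10.1"}
r2 = {"prefix_len": 24, "static": False,
      "as_path": [65002, 65002, 65002, 65010],
      "origin": "igp", "router_id": "10.0.0.2", "peer_ip": "169.254.20.1"}

assert best_path([r1, r2]) is r1
```

Note how prepending your ASN to the AS_PATH (rule 3) is the standard lever for making one circuit less preferred; the colocation designs that follow rely on this kind of route manipulation at the CGW.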
Figure 1 shows a physical colocation topology for single data center connectivity to AWS.

Figure 1: Single data center connection over MPLS with customer-managed CGW in a colocation scenario

Note: If the MPLS provider is also in the same facility as the Direct Connect facility, then the last-mile connection shown in the diagram above will be a cross connection.

Figure 2 outlines the logical colocation topology for a single data center connection to AWS. In this scenario, you establish an eBGP connection between the customer's colocated router and AWS. We recommend that the customer also establish eBGP connectivity from their CGW to the customer's MPLS PE.

Figure 2: High-level eBGP topology in a colocation scenario

Note: If the MPLS provider is also in the same facility as the Direct Connect facility, then the last-mile connection shown in the diagram above will be a cross connection.

Non-Colocation Topology
At a high level, there are two possible scenarios for a non-colocation architecture:

- In the first scenario, the customer's MPLS or circuit provider has facility access to the AWS Direct Connect facility. You create an LOA request from the AWS console and work with your MPLS provider to request the facility cross connection.

- In the second scenario, the customer's MPLS provider does not have facility access and needs to work with one of the AWS Direct Connect partners to extend a circuit from the MPLS PE to the AWS environment. For a list of AWS partners, see https://aws.amazon.com/directconnect/partners/

The following non-colocation topology diagram shows how the MPLS provider's PE is used as the CGW. The customer can ask their vendor to create the required 802.1Q VLANs on the vendor's PE routers. Note: some vendors may consider this request a custom configuration, so it is worth checking with the provider whether this type of setup is supportable.

Figure 3: Single data center connection over MPLS with vendor PE as CGW in a non-colocation scenario

Note: If the MPLS provider is also in the same facility as the Direct Connect facility, then the last-mile connection shown in the diagram above will be a cross connection.

Similar to the previous colocation BGP design, the customer has to establish eBGP connections. However, instead of peering with a colocated device, the customer can peer directly with the MPLS provider's PE. Figure 4 shows an example of the logical eBGP non-colocation topology.

Figure 4: High-level eBGP connection in a non-colocation scenario

MPLS Architecture Scenarios
The following scenarios illustrate how you can integrate AWS into an MPLS architecture.

Scenario 1: MPLS Connectivity over a Single Circuit

Architecture Topology
The diagram below shows a high-level architecture of how existing or new MPLS locations can be connected to AWS. In this architecture, customers can achieve any-to-any connectivity between their geographically dispersed office or data center locations and their VPC.

Figure 5: Single MPLS connection into Amazon VPC

Physical Topology
The customer decides how much bandwidth is required to connect to the AWS Cloud. Based on your last-mile connectivity requirements, one end of this circuit extends through the MPLS provider's point of presence (POP) to the provider edge (PE) device. The other end of the circuit terminates in a meet-me room or telecom cage located in one of the Direct Connect facilities. The Direct Connect facility will set up a cross connection that extends the circuit to AWS devices.

Figure 6: High-level physical topology between AWS and the MPLS PE

The following are the prerequisites to establish an MPLS connection to AWS:

1. Create an AWS account if you don't already have one.
2. Create an Amazon VPC. To learn how to set up your VPC, see http://docs.aws.amazon.com/AmazonVPC/latest/GettingStartedGuide/getting-started-create-vpc.html
3. Request an AWS Direct Connect connection by selecting the Region and your partner of choice: http://docs.aws.amazon.com/directconnect/latest/UserGuide/Colocation.html
4. Once completed, AWS will email you a Letter of Authorization (LOA), which describes the circuit information at the Direct Connect facility.
5. If the MPLS provider has facility access to the AWS Direct Connect facility, they can establish the required cross connection based on the LOA. If the MPLS provider is not already in the Direct Connect facility, a new connection must be built into the facility, or the MPLS provider can utilize a Direct Connect partner (tier 2 extension) to gain facility access.

Once the physical circuit is up, the next step is to establish IP data communication and routing between AWS, the PE device, and the customer's network. Create a virtual interface to begin using your Direct Connect connection. A virtual interface is an 802.1Q Layer 2 VLAN that helps segment and direct the appropriate traffic over the Direct Connect interface. You can create a public virtual interface to connect to public resources, or a private virtual interface to connect to resources in your VPC. To learn more about working with virtual interfaces, see http://docs.aws.amazon.com/directconnect/latest/UserGuide/WorkingWithVirtualInterfaces.html

Work with your MPLS provider to create the corresponding 802.1Q Layer 2 VLAN on the PE. Once the Layer 2 VLAN link is up, the next step is to assign IP addresses and establish BGP connectivity. You can download the IP/BGP configuration information from the AWS Management Console, which can act as a guide for setting up your IP/BGP connection. To learn more about downloading the router configuration, see http://docs.aws.amazon.com/directconnect/latest/UserGuide/getstarted.html#routerconfig

When BGP communication is established from each location and routes are exchanged, all locations connected to the MPLS network should be able to communicate with the attached VPC on AWS. Make sure to verify any routing policy implemented within the MPLS provider or customer network that may be undesirable.

Figure 7: Logical 802.1Q VLANs diagram

In the setup in Figure 7, you can create VLANs that connect your MPLS PE device to your Amazon VPC. Each VLAN (represented by a different color) is tagged with a VLAN ID that identifies the logical circuit and isolates traffic from one VLAN to another.

Design Decisions and Criteria
There are a few design considerations you should be aware of:

- Contact your MPLS provider to confirm that they support creating 802.1Q VLANs on their MPLS PE, and ask whether they have a VLAN ID preference (if they have multiple circuits utilizing the same physical Direct Connect interface, they may require control of the VLAN ID).

- Validate the number of VPCs you will need to support your business, and whether VPC peering will support your inter-VPC communication. For more information about VPC peering, see http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/peering-scenarios.html

- If multiple circuits are using the same physical Direct Connect interface, verify that the interface is configured for the appropriate bandwidth.

- Validate whether your business requirements or existing technology constraints, such as IP overlap, dictate the need to design complex VRF architectures, NAT, or complex inter-VPC routing.

- Validate whether your BGP routing policy requires complex BGP prefix configurations such as community strings, AS_PATH filtering, and so on.

You may have to consider a colocation design if:

- Your MPLS provider is unable to provide 802.1Q VLAN configurations.
- You have a requirement to implement additional complex routing functionality that requires route path manipulation, stripping the ASN, or integrating BGP communities with routes learned from AWS before injecting them into your routing domain.

See the following section for colocation scenarios.

Exchanging Routes
AWS supports only BGP v4 as the routing protocol between your AWS VGW and CGW. BGP v4 allows you to exchange routes dynamically between the AWS VGW and the customer CGW or MPLS provider edge (PE). There are a few design considerations when setting up BGP v4 routing with AWS. We will consider two basic topology scenarios.

Scenario 1.1: MPLS PE as CGW – MPLS Provider Supports VLANs
In this scenario, the customer plans to use the MPLS PE as their CGW. The MPLS provider will be responsible for the following configuration changes on the PE:

- Set up the 802.1Q VLANs required to support the number of VPCs or VLANs that the customer needs across the Direct Connect connection. Each VLAN will be assigned a /31 IP address (a larger subnet, such as a /30, can be used if the equipment does not support /31).
- Enable a BGP session between AWS and the MPLS provider's PE across each VLAN. Both the customer and the MPLS provider will have to agree on the BGP ASN to assign to the PE.

The peering relationship in this scenario will look similar to this:

AWS ASN (7224) <-- eBGP --> MPLS PE ASN <-- eBGP --> Customer ASN

Figure 8 shows a simple topology outlining the peering relationship.

Figure 8: BGP peering relationship

Note: The customer will have to work with the MPLS provider to limit the number of routes advertised to AWS to 100 per BGP peering session. AWS will tear down the BGP session if more than 100 routes are received from the MPLS provider.

Scenario 1.2: CE Located in an AWS Colocation Facility
In this scenario, the customer plans to deploy a customer-managed CGW in the Direct Connect colocation facility for the following reasons:

1. The MPLS provider cannot support multiple VLANs directly on their PE.
2. The customer requires control of configuration changes and does not want to be restricted to the MPLS provider's maintenance windows or other constraints, or the customer has to maintain strict technology configuration standards for all devices in their domain.
3. The customer seeks to achieve the following additional technical objectives:
   a. The ability to remove or add BGP community strings before injecting routes into the customer's MPLS network.
   b. The ability to strip the BGP AS number and/or inject routes into an IGP to support inter-VPC routing.
   c. A merger and acquisition scenario where the customer will terminate multiple MPLS circuits on their device to facilitate data migration into AWS.
   d. The customer plans to integrate each VLAN into its own VRF for compliance reasons or to support complex routing functionality.
   e. The customer requires a security demarcation, such as a firewall, between AWS and the customer's MPLS network to meet internal security policies.
   f. The customer wants to extend their private Layer 2 MPLS network to their CGW.

Colocation Physical Topology
The end-to-end connection between AWS and the MPLS PE can be broken down into the following components, as shown in Figure 9.

Figure 9: End-to-end physical and logical connection

- VPC to virtual private gateway (VGW): This logical construct extends your VPC to the VGW. For more information about the VGW, see http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html
- VGW to colocated CGW: The connection between the VGW and the colocated CGW is a physical cross connect that connects AWS equipment to the customer's colocated CGW. The logical connection from your VPC is extended over a Layer 2 VLAN across the cross connect to a port on the CGW.
- CGW to MPLS PE: This is the connection between the colocated CGW and the MPLS PE. The customer can order this circuit from their provider of choice.

After the physical topology is confirmed and tested, the next step is to establish BGP connectivity between the following:

- AWS and the customer's CGW
- The CGW and the MPLS PE

As a best practice, AWS recommends the use of VRFs to achieve high agility, security, and scalability. VRFs provide an additional level of isolation across the routing domain and simplify troubleshooting. See the article Connecting a Single Customer Router to Multiple VPCs [5] to learn more about how to deploy VRFs. Similar to the BGP topology in Scenario 1.1, the customer must assign an ASN for each VRF. Each eBGP peering relationship in this scenario will look like the following:

VPC <-- eBGP --> CGW <-- eBGP --> MPLS PE <-- eBGP --> Customer ASN

Figure 10 shows a simple topology outlining the peering relationship.

Figure 10: BGP connection over 802.1Q VLANs

This topology offers the customer the highest level of control and flexibility, at the cost of supporting colocated devices. AWS recommends building a highly available colocation architecture that uses dual routers, dual last-mile circuits, and dual Direct Connect connections. In the previous scenario, each virtual interface (VIF) is associated with a single VLAN, which in turn is associated with a unique eBGP peering session. The colocation router acts as your CGW and exchanges routing updates across each VIF.

Scenario 2: Dual MPLS Connectivity to a Single Region

Architecture Topology
This architecture builds upon Scenario 1 and incorporates a highly available, redundant connection to AWS. The difference between Scenario 1 and Scenario 2 is the additional MPLS circuit in Scenario 2.

Figure 11: Dual MPLS connection to a single AWS Region

This whitepaper considers two dual-connectivity architectures, in the same way it considered the single-connectivity architecture. The first scenario focuses on the customer leveraging their MPLS provider's PE as their CGW; the second focuses on a colocation strategy.

Architectural Scenario 2.1: MPLS PE as CGW
In this scenario, the customer plans to have dual connectivity from their MPLS network to AWS in the same Region. AWS APN partners offer geographically dispersed POPs if you want dual last-mile connectivity to AWS. For example, if you are planning to connect to the US East Region, you can connect to a New York point of presence (POP) as well as a Virginia POP. POP diversity offers the highest level of redundancy, resilience, and availability from the POP and circuit diversity perspective: within a Region, you are protected from an MPLS circuit outage and from MPLS POP outages. Figure 12 depicts dual connectivity from geographically dispersed MPLS POPs to AWS.

Figure 12: Dual physical connections to multiple MPLS POPs

Highly Available Topology Considerations
In this scenario, you can design an active/active or active/passive BGP routing topology.

Active/Passive: An active/passive routing design calls for a routing policy that uses one path as primary and leverages the second path in the event that the primary circuit is down.

Active/Active: An active/active routing design calls for a routing policy that load balances data across both MPLS last-mile circuits as they send or receive data from AWS. You can influence outbound traffic from AWS by advertising the routes using equal AS_PATH lengths.
Likewise, AWS advertises routes equally across both circuits to your MPLS network. You can also design your network to support per-destination routing, where you send half of your routes over one link and the other half over the second link; each link serves as a redundant path for non-primary destinations. With this approach, both circuits are used actively, and if either link fails, all traffic flows through the remaining link. In either case, the AS_PATH between the MPLS provider and AWS may resemble the following:

AWS ASN <-- eBGP --> CGW ASN <-- eBGP --> MPLS ASN (Path 1)
AWS ASN <-- eBGP --> CGW ASN <-- eBGP --> MPLS ASN (Path 2)

Figure 13 depicts a possible BGP topology design.

Figure 13: In-Region dual-connectivity BGP topology

An eBGP neighbor relationship is established between AWS and the two CGWs, otherwise known as the provider PEs. Similar to Scenario 1, you work with your MPLS provider to support 802.1Q VLANs on your PE. The routing topology can be more granular and can offer additional levels of traffic differentiation based on the design you select. You can choose to direct all traffic that fits a specific profile across one physical link while using the secondary link as a failover path. Each VPC can be presented with two logical direct connections (a single VGW per VPC). This allows you to load balance traffic from each VPC across each circuit by creating the required VLANs and VIFs and establishing two BGP neighbor relationships across each VLAN.

Figure 14: BGP routing topology scenario

Connectivity from Two AWS Locations to a Single MPLS POP
There are a few situations where it can be better to have both customer devices (CGWs) in the same POP:

- MPLS providers may not have POPs close to each AWS POP location.
- You may have a requirement for an active/active circuit topology, and your application is extremely sensitive to latency differences between circuits originating from different POPs.
- Due to MPLS POP diversity limitations, one of the circuits may require long-haul connectivity, causing packets to arrive at different times, which can impact the ability to load balance.
- Redundant facilities and long-haul termination may be cost prohibitive.

If you are faced with these issues, you can still achieve regional diversity by connecting both Direct Connect locations to a single MPLS POP.

Design Decisions and Criteria
The difference between an architecture with MPLS POP diversity and one without is geographical diversity. However, you must still exercise due diligence when setting up both circuits:

1. Ensure you have end-to-end circuit diversity from your circuit provider. Ensure the circuits are not sharing the same conduit and/or fiber path leaving the facility and throughout the path to the final destination.
2. Ensure the circuits do not terminate on the same switch or router, to mitigate hardware failure.
3. Ensure each device leverages different power sources and Layer 1 infrastructure.

These design principles are the same regardless of geographical diversity.

Architectural Scenario 2.2: CGW Colocated in an AWS Facility
The rationales to colocate are the same as those outlined in Scenario 1. If you decide that colocation is a good approach, you can design a highly available, fully redundant architecture to a single Region. In this scenario, the customer can colocate their equipment in an AWS facility either by working with an AWS partner who has local facility access or by setting up local facility access in one of the AWS Direct Connect facilities. To achieve a higher level of redundancy, resilience, and scalability, the customer can incorporate the following best-practice designs:

- Dual connections between both CGWs. A dual connection between the routers allows you to:
  - Create a highly available path to each routing device.
  - Extend each VLAN to each routing device in a highly available manner.
- Dual connections from each CGW to two MPLS PEs. This provides a high level of resilience and redundancy between your CGW and PE. Traffic can be load balanced, with failover capability in the event of a circuit or equipment failure.

Figure 15: Dual circuits to a single MPLS POP BGP topology

Conclusion
AWS offers customers the ability to connect different WAN technologies in a highly reliable, redundant, and scalable way. The goal of AWS is to ensure that customers are not limited by constraints when accessing their resources on AWS.

Contributors
The following individuals and organizations contributed to this document:

- Authors:
  - Jacob Alao, Solutions Architect
  - Justin Davies, Solutions Architect
- Reviewer:
  - Aarthi Raju, Partner Solutions Architect

Further Reading
For additional information about Layer 3 MPLS technology, see the following:

- http://www.networkworld.com/article/2297171/network-security/mpls-explained.html
- http://www.juniper.net/documentation/en_US/junos12.3/topics/concept/mpls-ex-series-vpn-layer2-layer3.html

For additional information about Layer 2 MPLS technology, see the following:

- http://www.juniper.net/documentation/en_US/junos12.3/topics/concept/mpls-ex-series-vpn-layer2-layer3.html

Notes
1. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html
2. http://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html
3. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html
4. http://docs.aws.amazon.com/AmazonVPC/latest/NetworkAdminGuide/Introduction.html
5. https://aws.amazon.com/articles/5458758371599914
6. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html#route-tables-priority
Archived",General,consultant,Best Practices
Introduction_to_Auditing_the_Use_of_AWS,"Archived
Introduction to Auditing the Use of AWS
October 2015

THIS PAPER HAS BEEN ARCHIVED. For the latest information, see the Cloud Audit Academy eLearning: https://www.aws.training/Details/eLearning?id=41556

Amazon Web Services – Introduction to Auditing the Use of AWS, October 2015, Page 2 of 28

© 2015 Amazon Web Services Inc or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Abstract 4
Introduction 5
Approaches for Using AWS Audit Guides 6
Examiners 6
AWS Provided Evidence 6
Auditing Use of AWS Concepts 8
Identifying Assets in AWS 9
AWS Account Identifiers 9
1 Governance 10
2 Network Configuration and Management 14
3 Asset Configuration and Management 15
4 Logical Access Control 17
5 Data Encryption 19
6 Security Logging and Monitoring 20
7 Security Incident Response 21
8 Disaster Recovery 22
9 Inherited Controls 23
Appendix A: References and Further Reading 25
Appendix B: Glossary of Terms 26
Appendix C: API Calls 27
Abstract

Security at AWS is job zero. All AWS customers benefit from a data center and network architecture built to satisfy the needs of the most security-sensitive organizations. To satisfy these needs, AWS compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud. As systems are built on top of AWS cloud infrastructure, compliance responsibilities are shared. By tying together governance-focused, audit-friendly service features with applicable compliance or audit standards, AWS compliance enablers build on traditional programs, helping customers to establish and operate in an AWS security control environment. AWS manages the underlying infrastructure, and you manage the security of anything you deploy in AWS.

AWS, as a modern platform, allows you to formalize the design of security and audit controls through reliable, automated, and verifiable technical and operational processes built into every AWS customer account. The cloud simplifies system use for administrators and those running IT, and makes your AWS environment much simpler to audit, because AWS can shift audits from traditional sample testing toward 100% verification. Additionally, purpose-built AWS tools can be tailored to customer requirements, scaling, and audit objectives, and support real-time verification and reporting through the use of tools such as AWS CloudTrail, AWS Config, and Amazon CloudWatch. These tools are built to help you maximize the protection of your services, data, and applications. This means AWS customers can spend less time on routine security and audit tasks, and can focus more on proactive measures that continue to enhance the security and audit capabilities of the AWS customer environment.

Introduction

As more and more customers deploy workloads into the cloud, auditors
increasingly need not only to understand how the cloud works, but also how to leverage the power of cloud computing to their advantage when conducting audits. The AWS cloud enables auditors to shift from percentage-based sample testing toward a comprehensive, real-time audit view, which enables 100% auditability of the customer environment as well as real-time risk management. The AWS Management Console, along with the Command Line Interface (CLI), can produce powerful results for auditors across multiple regulatory standards and industry authorities. This is due to AWS supporting a multitude of security configurations that establish security compliance by design, and real-time audit capabilities, through the use of:

• Automation – Scriptable infrastructure (e.g., Infrastructure as Code) allows you to create repeatable, reliable, and secure deployment systems by leveraging programmable (API-driven) deployments of services
• Scriptable architectures – “Golden” environments and Amazon Machine Images (AMIs) can be deployed for reliable and auditable services, and they can be constrained to ensure real-time risk management
• Distribution – Capabilities provided by AWS CloudFormation give systems administrators an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion
• Verifiable – Using AWS CloudTrail, Amazon CloudWatch, AWS OpsWorks, and AWS CloudHSM enables evidence-gathering capability

Approaches for Using AWS Audit Guides

Examiners

When assessing organizations that use AWS services, it is critical to understand the “Shared Responsibility” model between AWS and the customer. The audit guide organizes the requirements into common security program controls and control areas. Each control references the applicable audit requirements. In general, AWS services should be treated similarly to on-premises infrastructure services that have been
traditionally used by customers for operating services and applications. Policies and processes that apply to devices and servers should also apply when those functions are supplied by AWS. Controls pertaining solely to policy or procedure are generally entirely the responsibility of the customer. Similarly, AWS management, whether via the AWS Console or the command line API, should be treated like other privileged administrator access. See the appendix and referenced points for more information.

AWS Provided Evidence

Amazon Web Services Cloud Compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud. As systems are built on top of AWS cloud infrastructure, compliance responsibilities are shared. Each certification means that an auditor has verified that specific security controls are in place and operating as intended. You can view the applicable compliance reports by contacting your AWS account representative.

To help you meet specific government, industry, and company security standards and regulations, AWS provides certification reports that describe how the AWS Cloud infrastructure meets the requirements of an extensive list of global security standards, including: ISO 27001, SOC, the PCI Data Security Standard, FedRAMP, the Australian Signals Directorate (ASD) Information Security Manual, and the Singapore Multi-Tier Cloud Security Standard (MTCS SS 584). For more information about the security regulations and standards with which AWS complies, see the AWS Compliance webpage.

Auditing Use of AWS Concepts

The following concepts should be considered during a security audit of an organization's systems and data on AWS:

• Security measures that the cloud service provider (AWS) implements
and operates – ""security of the cloud""
• Security measures that the customer implements and operates, related to the security of customer content and applications that make use of AWS services – ""security in the cloud""

While AWS manages security of the cloud, security in the cloud is the responsibility of the customer. Customers retain control of what security they choose to implement to protect their own content, platform, applications, systems, and networks, no differently than they would for applications in an on-site data center. Additional detail can be found at the AWS Security Center, at AWS Compliance, and in the publicly available AWS whitepapers.

Identifying Assets in AWS

A customer's AWS assets can be instances, data stores, applications, and the data itself. Auditing the use of AWS generally starts with asset identification. Assets on a public cloud infrastructure are not categorically different from in-house environments, and in some situations can be less complex to inventory, because AWS provides visibility into the assets under management.

AWS Account Identifiers

AWS assigns two unique IDs to each AWS account: an AWS account ID and a canonical user ID. The AWS account ID is a 12-digit number, such as 123456789012, that you use to construct Amazon Resource Names (ARNs). When you refer to resources, such as an IAM user or an Amazon Glacier vault, the account ID distinguishes your resources from resources in other AWS accounts.

Amazon Resource Names (ARNs) and AWS Service Namespaces

Amazon Resource Names (ARNs) uniquely identify AWS resources. An ARN is required when you need to specify a resource unambiguously across all of AWS, such as in IAM policies, Amazon Relational Database Service (Amazon RDS) tags, and API calls.

ARN format example:

In addition to account identifiers, ARNs, and AWS service namespaces, each AWS service creates a unique service identifier (for example, an Amazon Elastic Compute Cloud (Amazon EC2) instance ID such as i-3d68c5cb
or an Amazon Elastic Block Store (Amazon EBS) volume ID such as vol-ecd8c122), which can be used to create an environmental asset inventory and used within work papers for the scope of audit and inventory.

1. Governance

Definition: Governance provides assurance that customer direction and intent are reflected in the security posture of the customer. This is achieved by utilizing a structured approach to implementing an information security program. For the purposes of this audit plan, it means understanding which AWS services have been purchased, what kinds of systems and information you plan to use with the AWS service, and what policies, procedures, and plans apply to these services.

Major audit focus: Understand what AWS services and resources are being used, and ensure your security or risk management program has taken into account the use of the public cloud environment.

Audit approach: As part of this audit, determine who within your organization is an AWS account and resource owner, as well as the AWS services and resources they are using. Verify that policies, plans, and procedures include cloud concepts, and that cloud is included in the scope of the customer's audit program.

Governance Checklist

Understand use of AWS within your organization. Approaches might include:
• Polling or interviewing your IT and development teams
• Performing network scans or a more in-depth penetration test
• Reviewing expense reports and/or purchase order (PO) payments related to Amazon.com or AWS to understand what services are being used; credit card charges appear as “AMAZON WEB SERVICES AWS.AMAZON.CO WA” or similar

Note: Some individuals within your organization may have signed up for an AWS account under their personal accounts, so consider asking whether this is the case when polling or interviewing your IT and development teams.
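The account IDs and ARNs described in the asset-identification section above lend themselves to mechanical checks when building an asset inventory. A minimal ARN parser is sketched below; the sample user ARN is hypothetical, but the six-part ARN layout follows the format described earlier:

```python
def parse_arn(arn):
    """Split an ARN of the form arn:partition:service:region:account-id:resource."""
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[0] != "arn":
        raise ValueError("not a valid ARN: %s" % arn)
    keys = ("partition", "service", "region", "account_id", "resource")
    return dict(zip(keys, parts[1:]))

# The 12-digit account ID distinguishes this resource from resources in
# other AWS accounts; IAM ARNs carry an empty region component.
arn = parse_arn("arn:aws:iam::123456789012:user/audit-reader")
```

An auditor could run such a parser over exported policy documents or resource lists to group findings by owning account, one input to the governance review above.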
Identify assets. Each AWS account has a contact email address associated with it, which can be used to identify account owners. It is important to understand that this email address may be from a public email service provider, depending on what the user specified when registering.
• A formal meeting can be conducted with each AWS account or asset owner to understand what is being deployed on AWS, how it is managed, and how it has been integrated with your organization's security policies, procedures, and standards.

Note: The AWS account owner may be someone in the finance or procurement department, but the individual who implements the organization's use of the AWS resources may reside in the IT department. You may need to interview both.

Define your AWS boundaries for review. The review should have a defined scope. Understand your organization's core business processes and their alignment with IT, in its non-cloud form as well as current or future cloud implementations.
• Obtain a description of the AWS services being used and/or being considered for use
• After identifying the types of AWS services in use or under consideration, determine the services and business solutions to be included in the review
• Obtain and review any previous audit reports with remediation plans
• Identify open issues in previous audit reports, and assess updates to the documents with respect to these issues

Assess policies. Assess and review your organization's security, privacy, and data classification policies to determine which policies apply to the AWS service environment.
• Verify whether a formal policy and/or process exists around the acquisition of AWS services, to determine how the purchase of AWS services is authorized
• Verify whether your organization's change management processes and policies include consideration of AWS services

Identify risks. Determine
whether a risk assessment for the applicable assets has been performed.

Review risks. Obtain a copy of any risk assessment reports, and determine whether they reflect the current environment and accurately describe the residual risk environment.

Review risk documentation. After each element of your review, review risk treatment plans and timelines/milestones against your risk management policies and procedures.

Documentation and inventory. Verify your AWS network is fully documented and all AWS critical systems are included in the inventory documentation, with limited access to this documentation.
• Review AWS Config for AWS resource inventory and configuration history of resources (Example API Call 1)
• Ensure that resources are appropriately tagged and associated with application data
• Review application architecture to identify data flows, planned connectivity between application components, and resources that contain data
• Review all connectivity between your network and the AWS platform by reviewing the following:
o VPN connections, where the customer's on-premises public IPs are mapped to customer gateways in any VPCs owned by the customer (Example API Calls 2 & 3)
o Direct Connect private connections, which may be mapped to one or more VPCs owned by the customer (Example API Call 4)

Evaluate risks. Evaluate the significance of the AWS-deployed data to the organization's overall risk profile and risk tolerance. Ensure that these AWS assets are integrated into the organization's formal risk assessment program.
• AWS assets should be identified and have protection objectives associated with them, depending on their risk profiles

Incorporate use of AWS into risk assessment. Conduct and/or incorporate AWS service elements into your organizational risk assessment processes. Key risks could include:
• Identify the business risk associated with your use of AWS, and identify
business owners and key stakeholders
• Verify that the business risks are aligned, rated, or classified within your use of AWS services and your organizational security criteria for protecting confidentiality, integrity, and availability
• Review previous audits related to AWS services (SOC, PCI, NIST 800-53 related audits, etc.)
• Determine whether the risks identified previously have been appropriately addressed
• Evaluate the overall risk factor for performing your AWS review
• Based on the risk assessment, identify changes to your audit scope
• Discuss the risks with IT management, and adjust the risk assessment

IT security program and policy. Verify that the customer includes AWS services in its security policies and procedures, including AWS account-level best practices as highlighted within the AWS Trusted Advisor service, which provides best-practice guidance across four topics: security, cost, performance, and fault tolerance.
• Review your information security policies and ensure that they include AWS services
• Confirm you have assigned one or more employees as the authority for the use and security of AWS services, and that there are defined key roles, including a Chief Information Security Officer. Note any published cybersecurity risk management process standards you have used to model information security architecture and processes
• Ensure you maintain documentation to support the audits conducted for AWS services, including review of AWS third-party certifications
• Verify internal training records include AWS security topics, such as Amazon IAM usage, Amazon EC2 security groups, and remote access to Amazon EC2 instances
• Confirm a cybersecurity response policy and training for AWS services is maintained. Note any insurance specifically related to the customer's use of AWS services, and any claims related to losses and expenses attributed to
cybersecurity events as a result.

Service provider oversight. Verify the contract with AWS includes a requirement to implement and maintain privacy and security safeguards for cybersecurity requirements.

2. Network Configuration and Management

Definition: Network management in AWS is very similar to network management on-premises, except that network components such as firewalls and routers are virtual. Customers must ensure the network architecture follows the security requirements of their organization, including the use of DMZs to separate public and private (untrusted and trusted) resources, the segregation of resources using subnets and routing tables, the secure configuration of DNS, whether additional transmission protection is needed in the form of a VPN, and whether to limit inbound and outbound traffic. Customers who must perform monitoring of their network can do so using host-based intrusion detection and monitoring systems.

Major audit focus: Missing or inappropriately configured security controls related to external access/network security that could result in a security exposure.

Audit approach: Understand the network architecture of the customer's AWS resources and how the resources are configured to allow external access from the public internet and the customer's private networks. Note: AWS Trusted Advisor can be leveraged to validate and verify AWS configuration settings.

Network Configuration and Management Checklist

Network controls. Identify how network segmentation is applied within the AWS environment.
• Review the AWS security group implementation and the AWS Direct Connect and Amazon VPN configuration for proper implementation of network segmentation and ACL and firewall settings for AWS services (Example API Calls 5-8)
• Verify you have a procedure for granting remote internet or VPN access to employees for AWS Console access and remote access to Amazon EC2
networks and systems.
• Review the following to maintain an environment for testing and development of software and applications that is separate from the business environment:
o VPC isolation is in place between the business environment and environments used for test and development, by reviewing VPC peering connectivity between VPCs to ensure network isolation is in place between VPCs
o Subnet isolation is in place between the business environment and environments used for test and development, by reviewing the NACLs associated with the subnets in which business and test/development environments are located to ensure network isolation is in place
o Amazon EC2 instance isolation is in place between the business environment and environments used for test and development, by reviewing the security groups associated with instances in business, test, or development environments to ensure network isolation is in place between Amazon EC2 instances
• Review any layered DDoS defense solution that operates directly on AWS, reviewing the components leveraged as part of the DDoS solution, such as:
o Amazon CloudFront configuration
o Amazon S3 configuration
o Amazon Route 53
o ELB configuration
o Note: The above services do not use customer-owned public IP addresses, and offer AWS-inherited DoS mitigation features
o Usage of Amazon EC2 for a proxy or WAF; further guidance can be found within the “AWS Best Practices for DDoS Resiliency” whitepaper

Malicious code controls. Assess the implementation and management of anti-malware for Amazon EC2 instances in a similar manner as with physical systems.

3. Asset Configuration and Management

Definition: AWS customers are responsible for maintaining the security of anything installed on or connected to AWS resources. Secure management of the customer's AWS resources means knowing what resources
you are using (asset inventory), securely configuring the guest OS and applications on your resources (secure configuration settings, patching, and anti-malware), and controlling changes to the resources (change management).

Major audit focus: Manage your operating system and application security vulnerabilities to protect the security, stability, and integrity of the asset.

Audit approach: Validate that the OS and applications are designed, configured, patched, and hardened in accordance with your policies, procedures, and standards. All OS and application management practices can be common between on-premises and AWS systems and services.

Asset Configuration and Management Checklist

Assess configuration management. Verify the use of your configuration management practices for all AWS system components, and validate that these standards meet baseline configurations.
• Review the procedure for conducting a specialized wipe prior to deleting a volume, for compliance with established requirements
• Review your identity and access management system (which may be used to allow authenticated access to the applications hosted on top of AWS services)
• Confirm penetration testing has been completed

Change management controls. Ensure use of AWS services follows the same change control processes as internal services.
• Verify AWS services are included within an internal patch management process. Review the documented process for configuration and patching of Amazon EC2 instances:
o Amazon Machine Images (AMIs) (Example API Calls 9-10)
o Operating systems
o Applications
• Review API calls for in-scope services for delete calls, to ensure IT assets have been properly disposed of

4. Logical Access Control

Definition: Logical access controls determine not only who or what can have access to a
specific system resource, but also the type of actions that can be performed on the resource (read, write, etc.). As part of controlling access to AWS resources, users and processes must present credentials to confirm that they are authorized to perform specific functions or have access to specific resources. The credentials required by AWS vary depending on the type of service and the access method, and include passwords, cryptographic keys, and certificates. Access to AWS resources can be enabled through the AWS account, through individual AWS Identity and Access Management (IAM) user accounts created under the AWS account, or through identity federation with the customer's corporate directory (single sign-on). AWS Identity and Access Management (IAM) enables users to securely control access to AWS services and resources. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny access to AWS resources.

Major audit focus: This portion of the audit focuses on identifying how users and permissions are set up for the services in AWS. It is also important to ensure you are securely managing the credentials associated with all AWS accounts.

Audit approach: Validate that permissions for AWS assets are being managed in accordance with organizational policies, procedures, and processes. Note: AWS Trusted Advisor can be leveraged to validate and verify IAM user, group, and role configurations.

Logical Access Control Checklist

Access management, authentication, and authorization. Ensure there are internal policies and procedures for managing access to AWS services and Amazon EC2 instances.
• Ensure documentation of the use and configuration of AWS access controls; examples and options are outlined below:
o Description of how Amazon IAM is used for access management
o List of controls that Amazon IAM is used to manage – resource management, security groups, VPN object permissions, etc.
o Whether native AWS access controls are used, or whether access is managed through federated authentication
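As a companion to the access-management checklist above, a quick least-privilege screen over collected IAM policy documents can be sketched as follows. The JSON shape follows the standard IAM policy grammar; the sample statements are hypothetical, and this is a coarse filter, not a full policy evaluator:

```python
def overly_broad_statements(policy):
    """Return the Sids of Allow statements that use wildcard actions or resources."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        # Action and Resource may each be a string or a list in IAM policies.
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            findings.append(stmt.get("Sid", "<no Sid>"))
    return findings

# Hypothetical policy document gathered during the review.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "ReadLogs", "Effect": "Allow",
         "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::audit-logs/*"},
        {"Sid": "TooBroad", "Effect": "Allow", "Action": "*", "Resource": "*"},
    ],
}
```

Statements flagged this way are candidates for the personnel-control review below, which asks that users be restricted to the services their business function requires.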
Federated authentication leverages the open standard Security Assertion Markup Language (SAML) 2.0.
o List of AWS accounts, roles, groups, and users, plus policies and policy attachments to users, groups, and roles (Example API Call 11)
o A description of Amazon IAM accounts and roles, and monitoring methods
o A description and configuration of systems within EC2

Remote access. Ensure there is an approval process, logging process, or controls to prevent unauthorized remote access. Note: All access to AWS and Amazon EC2 instances is “remote access” by definition, unless Direct Connect has been configured.
• Review the process for preventing unauthorized access, which may include:
o AWS CloudTrail for logging of service-level API calls
o Amazon CloudWatch Logs to meet logging objectives
o IAM policies, S3 bucket policies, and security groups as controls to prevent unauthorized access
• Review connectivity between the firm's network and AWS:
o VPN connection between a VPC and the firm's network
o Direct Connect (cross connect and private interfaces) between the firm and AWS
o Defined security groups, network access control lists, and routing tables, in order to control access between AWS and the network

Personnel control. Ensure users are restricted to those AWS services strictly required for their business function (Example API Call 12).
• Review the type of access control in place as it relates to AWS services:
o AWS access control at an AWS level – using IAM with tagging to control management of Amazon EC2 instances (start/stop/terminate) within networks
o Customer access control – using IAM (or an LDAP solution) to manage access to resources that exist in networks at the operating system/application layers
o Network access control – using AWS security groups (SGs), network access control lists (NACLs), routing
tables, VPN connections, and VPC peering to control network access to resources within customer-owned VPCs

5. Data Encryption

Definition: Data stored in AWS is secure by default; only AWS owners have access to the AWS resources they create. However, customers who have sensitive data may require additional protection by encrypting the data when it is stored on AWS. Only the Amazon S3 service currently provides an automated server-side encryption function, in addition to allowing customers to encrypt on the customer side before the data is stored. For other AWS data storage options, the customer must perform encryption of the data.

Major audit focus: Data at rest should be encrypted in the same way as on-premises data is protected. Also, many security policies consider the internet an insecure communications medium and would require the encryption of data in transit. Improper protection of data could create a security exposure.

Audit approach: Understand where the data resides, and validate the methods used to protect the data at rest and in transit (also referred to as “data in flight”). Note: AWS Trusted Advisor can be leveraged to validate and verify permissions and access to data assets.

Data Encryption Checklist

Encryption controls. Ensure there are appropriate controls in place to protect confidential information in transport while using AWS services.
• Review methods for connection to the AWS Console, the management API, S3, RDS, and Amazon EC2 VPN for enforcement of encryption
• Review internal policies and procedures for key management, including AWS services and Amazon EC2 instances
• Review encryption methods used, if any, to protect PINs at rest – AWS offers a number of key management services, such as KMS, CloudHSM, and server-side encryption for S3, which could be used to assist with data-at-rest encryption (Example API Calls 13-15)

6. Security Logging and Monitoring
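The log-review steps in this section can be partly mechanized. As one illustration, the filter below screens CloudTrail-style event records for console logins without MFA and for access-denied API calls; the field names follow CloudTrail's event schema, and the sample records are fabricated:

```python
def flag_events(records):
    """Return (eventName, reason) pairs worth an auditor's attention."""
    flagged = []
    for r in records:
        if r.get("errorCode") == "AccessDenied":
            # A denied API call can signal probing or a misconfigured policy.
            flagged.append((r["eventName"], "denied"))
        elif (r.get("eventName") == "ConsoleLogin"
              and r.get("additionalEventData", {}).get("MFAUsed") == "No"):
            # Console logins without MFA weaken account-level controls.
            flagged.append((r["eventName"], "no-mfa"))
    return flagged

sample = [
    {"eventName": "ConsoleLogin", "additionalEventData": {"MFAUsed": "No"}},
    {"eventName": "DeleteTrail", "errorCode": "AccessDenied"},
    {"eventName": "DescribeInstances"},
]
```

In practice an auditor would run such a filter over exported CloudTrail log files, feeding the flagged events into the unauthorized-activity review described in the checklist below.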
Definition: Audit logs record a variety of events occurring within your information systems and networks. Audit logs are used to identify activity that may impact the security of those systems, whether in real time or after the fact, so the proper configuration and protection of the logs is important.

Major audit focus: Systems must be logged and monitored just as they are for on-premises systems. If AWS systems are not included in the overall company security plan, critical systems may be omitted from the scope of monitoring efforts.

Audit approach: Validate that audit logging is being performed on the guest OS and critical applications installed on Amazon EC2 instances, and that the implementation is in alignment with your policies and procedures, especially as it relates to the storage, protection, and analysis of the logs.

Security Logging and Monitoring Checklist

Logging, assessment trails, and monitoring. Review logging and monitoring policies and procedures for adequacy, retention, defined thresholds, and secure maintenance, specifically for detecting unauthorized activity for AWS services.
• Review logging and monitoring policies and procedures, and ensure the inclusion of AWS services, including Amazon EC2 instances, for security-related events
• Verify that logging mechanisms are configured to send logs to a centralized server, and ensure that for Amazon EC2 instances the proper type and format of logs are retained in a similar manner as with physical systems
• For customers using Amazon CloudWatch, review the process and record of the use of network monitoring
• Ensure analytics of events are utilized to improve defensive measures and policies
• Review the AWS IAM credential report for unauthorized users, and AWS Config and resource tagging for unauthorized devices (Example API Call 16)
• Confirm aggregation and correlation of event data from multiple sources using AWS services.
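One such source, VPC Flow Logs, can be screened with a short parser. This sketch assumes the default version-2 flow log record format, in which the action field is the thirteenth column; the sample lines are fabricated:

```python
def rejected_flows(lines):
    """Return (srcaddr, dstaddr, dstport) for REJECT flow log records."""
    out = []
    for line in lines:
        f = line.split()
        # Default format: version account-id interface-id srcaddr dstaddr
        # srcport dstport protocol packets bytes start end action log-status
        if len(f) >= 14 and f[12] == "REJECT":
            out.append((f[3], f[4], int(f[6])))
    return out

sample = [
    "2 123456789012 eni-0a1b2c3d 10.0.0.5 10.0.1.7 49152 22 6 1 40 1418530010 1418530070 REJECT OK",
    "2 123456789012 eni-0a1b2c3d 10.0.0.5 10.0.1.7 49153 443 6 10 4000 1418530010 1418530070 ACCEPT OK",
]
```

Repeated rejects toward the same destination port (here, SSH on port 22) are the kind of pattern the correlation step above is meant to surface.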
Such AWS services include:
• VPC Flow Logs, to identify accepted/rejected network packets entering a VPC
• AWS CloudTrail, to identify authenticated and unauthenticated API calls to AWS services
• ELB logging – load balancer logging
• Amazon CloudFront logging – logging of CDN distributions

Intrusion detection and response. Review host-based IDS on Amazon EC2 instances in a similar manner as with physical systems.
• Review AWS-provided evidence on where information on intrusion detection processes can be reviewed

7. Security Incident Response

Definition: Under a shared responsibility model, security events may be monitored by the interaction of both AWS and the AWS customer. AWS detects and responds to events impacting the hypervisor and the underlying infrastructure. Customers manage events from the guest operating system up through the application. You should understand incident response responsibilities, and adapt existing security monitoring/alerting/audit tools and processes for your AWS resources.

Major audit focus: Security events should be monitored regardless of where the assets reside. The auditor can assess the consistency of deploying incident management controls across all environments, and validate full coverage through testing.

Audit approach: Assess the existence and operational effectiveness of the incident management controls for systems in the AWS environment.

Security Incident Response Checklist

Incident reporting. Ensure the incident response plan and policy for cybersecurity incidents includes AWS services, and addresses controls that mitigate cybersecurity incidents and aid recovery.
• Ensure existing incident monitoring tools, as well as available AWS tools, are leveraged to monitor the use of AWS services
• Verify that the incident response plan undergoes periodic review, and that changes related to AWS are made as needed
• Note whether the incident response plan has
notification procedures and how the customer addresses responsibility for losses associated with attacks or impacting instructions. 8. Disaster Recovery Definition: AWS provides a highly available infrastructure that allows customers to architect resilient applications and quickly respond to major incidents or disaster scenarios. However, customers must ensure that they configure systems that require high availability or quick recovery times to take advantage of the multiple Regions and Availability Zones that AWS offers. Major audit focus: An unidentified single point of failure and/or inadequate planning to address disaster recovery scenarios could result in a significant impact. While AWS provides service level agreements (SLAs) at the individual instance/service level, these should not be confused with a customer's business continuity (BC) and disaster recovery (DR) objectives, such as Recovery Time Objective (RTO) and Recovery Point Objective (RPO). The BC/DR parameters are associated with solution design; a more resilient design often utilizes multiple components in different AWS Availability Zones and involves data replication. Audit approach: Understand the DR plan and determine the fault-tolerant architecture employed for critical assets. Note: AWS Trusted Advisor can be leveraged to validate and verify some aspects of the customer's resiliency capabilities. Disaster Recovery Checklist: Business Continuity Plan (BCP) • Ensure there is a comprehensive BCP for the AWS services utilized that addresses mitigation of the effects of a cybersecurity incident and/or recovery from such an incident • Within the plan, ensure that AWS is included in the emergency preparedness and crisis management elements, senior manager oversight responsibilities, and the testing plan Backup and Storage Controls • Review the customer's periodic test of their backup system for AWS services (Example API Calls
17 and 18) • Review the inventory of data backed up to AWS services as offsite backup 9. Inherited Controls Definition: Amazon has many years of experience in designing, constructing, and operating large-scale data centers. This experience has been applied to the AWS platform and infrastructure. AWS data centers are housed in nondescript facilities. Physical access is strictly controlled, both at the perimeter and at building ingress points, by professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff. AWS only provides data center access and information to employees and contractors who have a legitimate business need for such privileges. When an employee no longer has a business need for these privileges, his or her access is immediately revoked, even if he or she continues to be an employee of Amazon or Amazon Web Services. All physical access to data centers by AWS employees is logged and audited routinely. Major audit focus: The purpose of this audit section is to demonstrate appropriate due diligence in selecting service providers. Audit approach: Understand how you can request and evaluate third-party attestations and certifications in order to gain reasonable assurance of the design and operating effectiveness of control objectives and controls. Inherited Controls Checklist: Physical Security & Environmental Controls • Review the AWS-provided evidence for details on where information can be reviewed on the intrusion detection processes that are managed by AWS as physical security controls Conclusion There are many third-party tools that can assist you with your assessment. Since AWS customers have
full control of their operating systems, network settings, and traffic routing, a majority of the tools used in-house can be used to assess and audit the assets in AWS. A useful tool provided by AWS is AWS Trusted Advisor, which draws upon best practices learned from AWS's aggregated operational history of serving hundreds of thousands of AWS customers. AWS Trusted Advisor performs several fundamental checks of your AWS environment and makes recommendations when opportunities exist to save money, improve system performance, or close security gaps. This tool may be leveraged to perform some of the audit checklist items to enhance and support your organization's auditing and assessment processes. Appendix A: References and Further Reading 1. Amazon Web Services: Overview of Security Processes – https://d0.awsstatic.com/whitepapers/Security/AWS%20Security%20Whitepaper.pdf 2. Amazon Web Services Risk and Compliance Whitepaper – https://d0.awsstatic.com/whitepapers/compliance/AWS_Risk_and_Compliance_Whitepaper.pdf 3. AWS OCIE Cybersecurity Workbook – https://d0.awsstatic.com/whitepapers/compliance/AWS_SEC_Workbook.pdf 4. Using Amazon Web Services for Disaster Recovery – http://media.amazonwebservices.com/AWS_Disaster_Recovery.pdf 5. Identity federation sample application for an Active Directory use case – http://aws.amazon.com/code/1288653099190193 6. Single Sign-on with Windows ADFS to Amazon EC2 .NET Applications – http://aws.amazon.com/articles/3698?_encoding=UTF8&queryArg=searchQuery&x=20&y=25&fromSearch=1&searchPath=all&searchQuery=identity%20federation 7. Authenticating Users of AWS Mobile Applications with a Token Vending Machine – http://aws.amazon.com/articles/4611615499399490?_encoding=UTF8&queryArg=searchQuery&fromSearch=1&searchQuery=Token%20Vending%20machine 8. Client-Side Data Encryption with the AWS SDK for Java and Amazon S3 – http://aws.amazon.com/articles/2850096021478074 9. AWS Command
Line Interface – http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html 10. Amazon Web Services Acceptable Use Policy – http://aws.amazon.com/aup/ Appendix B: Glossary of Terms Authentication: Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be. Availability Zone: Amazon EC2 locations are composed of Regions and Availability Zones. Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and to provide inexpensive, low-latency network connectivity to other Availability Zones in the same Region. EC2: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Hypervisor: A hypervisor, also called a Virtual Machine Monitor (VMM), is software/hardware platform virtualization software that allows multiple operating systems to run on a host computer concurrently. IAM: AWS Identity and Access Management (IAM) enables a customer to create multiple users and manage the permissions for each of these users within their AWS account. Object: The fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The data portion is opaque to Amazon S3; the metadata is a set of name-value pairs that describe the object. These include some default metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type. The developer can also specify custom metadata at the time the object is stored. Service: Software or computing ability provided across a network (e.g., EC2, S3, VPC, etc.). Appendix C: API Calls The AWS Command Line Interface is a unified tool to manage your AWS services
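Several of the checklist items above (e.g., Example API Call 16) return machine-readable output that can be post-processed as part of an audit. As one illustration, the IAM credential report retrieved with `aws iam get-credential-report` is base64-encoded CSV; the sketch below (field names follow the IAM credential report format, but the sample data and helper name are hypothetical, not from the whitepaper) flags users who have a console password but no active MFA device:

```python
import base64
import csv
import io

def flag_users(report_b64: str) -> list[str]:
    """Flag users with a console password enabled but no active MFA.

    `report_b64` is the base64-encoded Content field returned by
    `aws iam get-credential-report`. Column names (user,
    password_enabled, mfa_active) follow the credential report format.
    """
    text = base64.b64decode(report_b64).decode("utf-8")
    rows = csv.DictReader(io.StringIO(text))
    return [
        r["user"]
        for r in rows
        if r["password_enabled"] == "true" and r["mfa_active"] == "false"
    ]

# Hypothetical two-user report, for illustration only.
sample = (
    "user,password_enabled,mfa_active\n"
    "alice,true,true\n"
    "bob,true,false\n"
)
print(flag_users(base64.b64encode(sample.encode()).decode()))  # ['bob']
```

The same pattern applies to other list-style calls in this appendix: capture the CLI output, then filter it against your policy rather than reviewing it by hand.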
http://docs.aws.amazon.com/cli/latest/reference/index.html#cli-aws 1. List all resources with tags: aws ec2 describe-tags (http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-tags.html) 2. List all customer gateways on the customer's AWS account: aws ec2 describe-customer-gateways --output table 3. List all VPN connections on the customer's AWS account: aws ec2 describe-vpn-connections 4. List all customer Direct Connect connections: aws directconnect describe-connections; aws directconnect describe-interconnects; aws directconnect describe-connections-on-interconnect; aws directconnect describe-virtual-interfaces 5. List all customer gateways on the customer's AWS account: aws ec2 describe-customer-gateways --output table 6. List all VPN connections on the customer's AWS account: aws ec2 describe-vpn-connections 7. List all customer Direct Connect connections: aws directconnect describe-connections; aws directconnect describe-interconnects; aws directconnect describe-connections-on-interconnect; aws directconnect describe-virtual-interfaces 8. Alternatively, use the security-group-focused CLI: aws ec2 describe-security-groups 9. List AMIs currently owned/registered by the customer: aws ec2 describe-images --owners self 10. List all instances launched with a specific AMI: aws ec2 describe-instances --filters "Name=image-id,Values=XXXXX" (where XXXXX = image-id value, e.g., ami-12345a12) 11. List IAM roles/groups/users: aws iam list-roles; aws iam list-groups; aws iam list-users 12. List policies assigned to groups/roles/users: aws iam list-attached-role-policies --role-name XXXX; aws iam list-attached-group-policies --group-name XXXX; aws iam list-attached-user-policies --user-name XXXX (where XXXX is a resource name within the customer's AWS account) 13. List KMS keys: aws kms list-aliases 14. List the key rotation policy: aws kms get-key-rotation-status --key-id XXX (where XXX = key ID in the AWS account) 15. List EBS volumes encrypted with KMS keys: aws ec2 describe-volumes --filters
"Name=encrypted,Values=true" (in the targeted Region, e.g., us-east-1) 16. Credential report: aws iam generate-credential-report; aws iam get-credential-report 17. Create a snapshot/backup of an EBS volume: aws ec2 create-snapshot --volume-id XXXXXXX (where XXXXXXX = ID of a volume within the AWS account) 18. Confirm the snapshot/backup completed: aws ec2 describe-snapshots --filters "Name=volume-id,Values=XXXXXX")",General,consultant,Best Practices Introduction_to_AWS_Security_by_Design,"Introduction to AWS Security by Design: A Solution to Automate Security, Compliance, and Auditing in AWS. November 2015. © 2015 Amazon Web Services, Inc. or its affiliates. All rights reserved. Contents: Abstract; Introduction; Security in the AWS Environment; Security by Design: Overview; Security by Design Approach; Impact of Security by Design; SbD Approach Details; SbD: How to Get Started. Abstract Security by Design (SbD) is a security assurance approach that enables customers to formalize AWS account design, automate security controls, and streamline auditing. This whitepaper discusses
the concepts of Security by Design, provides a four-phase approach for security and compliance at scale across multiple industries, points to the resources available to AWS customers to implement security in the AWS environment, and describes how to validate that controls are operating. Introduction Security by Design (SbD) is a security assurance approach that enables customers to formalize AWS account design, automate security controls, and streamline auditing. It is a systematic approach to ensure security: instead of relying on auditing security retroactively, SbD provides you with the ability to build security controls in throughout the AWS IT management process. SbD encompasses a four-phase approach for security and compliance at scale across multiple industries, standards, and security criteria. AWS SbD is about designing security and compliance capabilities for all phases of security by designing everything within the AWS customer environment: the permissions, the logging, the use of approved machine images, the trust relationships, the changes made, enforcing encryption, and more. SbD enables customers to automate the front-end structure of an AWS account so that security and compliance are reliably coded into the account. Security in the AWS Environment The AWS infrastructure has been designed to provide the highest availability while putting strong safeguards in place regarding customer privacy and segregation. When deploying systems in the AWS Cloud, AWS and its customers share the security responsibilities: AWS manages the underlying infrastructure, while your responsibility is to secure the IT resources deployed in AWS. AWS allows you to formalize the application of security controls in the customer platform, simplifying system use for administrators and allowing for a simpler and more secure audit of your AWS environment. There are two aspects of AWS security: Security of the AWS environment. The AWS account itself has configurations and features you can use to build in security
Identities, logging functions, encryption functions, and rules around how the systems are used and networked are all part of the AWS environment you manage. Security of hosts and applications. The operating systems, the databases stored on disks, and the applications customers manage need security protections as well; this is up to the AWS customer to manage. Security process tools and techniques that customers use today within their on-premises environments also exist within AWS. The Security by Design approach described here applies primarily to the AWS environment. The centralized access, visibility, and transparency of operating with the AWS cloud provide increased capability for designing end-to-end security for all services, data, and applications in AWS. Security by Design: Overview SbD allows customers to automate the fundamental structure to reliably code security and compliance into the AWS environment, making it easier to render noncompliance for IT controls a thing of the past. By creating a secure and repeatable approach to cloud infrastructure security, customers can capture, secure, and control specific infrastructure control elements. These elements enable deployment of security-compliant processes for IT elements such as predefining and constraining the design of AWS Identity and Access Management (IAM), AWS Key Management Service (KMS), and AWS CloudTrail. SbD follows the same general concept as Quality by Design, or QbD. Quality by Design is a concept first outlined by quality expert Joseph M. Juran in Juran on Quality by Design. Designing for quality and innovation is one of the three universal processes of the Juran Trilogy, in which Juran describes what is required to achieve breakthroughs in new products, services, and processes. The general shift in manufacturing companies moving to a QbD approach is to ensure quality is built into the manufacturing process, moving away from using postproduction
quality checks as the primary way in which quality is controlled. As with QbD concepts, Security by Design can also be planned, executed, and maintained through system design as a reliable way to ensure real-time, scalable, and reliable security throughout the lifespan of a technology deployment in AWS. Relying on the audit function to fix present issues around security is not reliable or scalable. Security by Design Approach SbD outlines the inheritances, the automation of baseline controls, and the operationalization and audit of implemented security controls for AWS infrastructure, operating systems, services, and applications running in AWS. These standardized, automated, and repeatable architectures can be deployed for common use cases, security standards, and audit requirements across multiple industries and workloads. We recommend building security and compliance into your AWS account by following a basic four-phase approach: • Phase 1 – Understand your requirements. Outline your policies, and then document the controls you inherit from AWS, document the controls you own and operate in your AWS environment, and decide on what security rules you want to enforce in your AWS IT environment. • Phase 2 – Build a "secure environment" that fits your requirements and implementation. Define the configuration you require in the form of AWS configuration values, such as encryption requirements (forcing server-side encryption for S3 objects), permissions to resources (which roles apply to certain environments), which compute images are authorized (based on hardened images of servers you have authorized), and what kind of logging needs to be enabled (such as enforcing the use of CloudTrail on all resources for which it is available). Since AWS provides a mature set of configuration options (with new services being regularly released), we provide some templates for you to leverage for your own environment. These security templates (in
the form of AWS CloudFormation templates) provide a more comprehensive rule set that can be systematically enforced. We have developed templates that provide security rules that conform to multiple security frameworks and leading practices. These prepackaged industry template solutions are provided to customers as a suite of templates or as standalone templates based on specific security domains (e.g., access control, security services, network security, etc.). More help to create this "secure environment" is available from experienced AWS architects, AWS Professional Services, and partner IT transformation leaders. These teams can work alongside your staff and audit teams to focus on high-quality, secure customer environments in support of third-party audits. • Phase 3 – Enforce the use of the templates. Enable AWS Service Catalog and enforce the use of your template in the catalog. This is the step that enforces the use of your "secure environment" in new environments that are being created and prevents anyone from creating an environment that doesn't adhere to your "secure environment" standard rules or constraints. This effectively operationalizes the remaining customer account security configurations of controls in preparation for audit readiness. • Phase 4 – Perform validation activities. Deploying AWS through Service Catalog and the "secure environment" templates creates an audit-ready environment. The rules you defined in your template can be used as an audit guide. AWS Config allows you to capture the current state of any environment, which can then be compared with your "secure environment" standard rules. This provides audit evidence gathering capabilities through secure "read access" permissions, along with unique scripts that enable audit automation for evidence collection. Customers will be able to convert traditional manual administrative controls to technically enforced controls, with the assurance that, if
designed and scoped properly, the controls are operating 100% at any point in time, versus traditional audit sampling methods or point-in-time reviews. This technical audit can be augmented by pre-audit guidance, support, and training for customer auditors to ensure audit personnel understand the unique audit automation capabilities that the AWS cloud provides. Impact of Security by Design SbD architecture is meant to achieve the following: • Creating forcing functions that cannot be overridden by users without modification rights • Establishing reliable operation of controls • Enabling continuous and real-time auditing • Technically scripting your governance policy The result is an automated environment enabling the customer's security assurance, governance, security, and compliance capabilities. Customers can now get reliable implementation of what was previously written in policies, standards, and regulations. Customers can create enforceable security and compliance, which in turn creates a functional, reliable governance model for AWS customer environments. SbD Approach Details Phase 1 – Understand Your Requirements Start by performing a security control rationalization effort. You can create a security Controls Implementation Matrix (CIM) that will identify inherency from existing AWS certifications, accreditations, and reports, as well as identify the shared and customer architecture-optimized controls that should be implemented in any AWS environment, regardless of security requirements. The result of this phase will provide a customer-specific map (e.g., an AWS control framework) that gives customers a security recipe for building security and compliance at scale across AWS services. The CIM works to map features and resources to specific security control requirements. Security, compliance, and audit personnel can leverage these documents as a reference to make certifying and accrediting systems in AWS
more efficient. The matrix outlines the control implementation, reference architecture, and evidence examples that meet the security control "risk mitigation" for the AWS customer environment. Figure 1: NIST SP 800-53 rev 4 security control matrix. • Security Services Provided (Inherency). Customers can reference and inherit security control elements from AWS based on their industry and the associated AWS certification, attestation, and/or report (e.g., PCI, FedRAMP, ISO, etc.). The inheritance of controls can vary based on the certifications and reports provided by AWS. • Cross-Service Security (Shared). Cross-service security controls are those which both AWS and the customer implement within the host operating system and the guest operating systems. These controls include technical, operational, and administrative controls (e.g., IAM, security groups, configuration management, etc.), which in some cases can be partially inherited (e.g., fault tolerance). Example: AWS builds its data centers in multiple geographic Regions, as well as across multiple Availability Zones within each Region, offering maximum resiliency against system outages. Customers should leverage this capability by architecting across separate Availability Zones in order to meet their own fault tolerance requirements. • Service-Specific Security (Customer). Customer controls may be based on the systems and services they deploy in AWS. These customer controls may also be able to leverage several cross-service controls, such as IAM, security groups, and defined configuration management processes. • Optimized IAM, Network, and Operating System (OS) Controls. These controls are security control implementations or security enhancements an organization should deploy based on leading security practices, industry requirements, and/or security standards. These controls typically cross multiple standards and services and can be scripted as part of a defined "secure environment"
through the use of AWS CloudFormation templates and Service Catalog. Phase 2 – Build a "Secure Environment" This enables you to connect the dots on the wide range of security and audit services and features we offer, and provides security, compliance, and auditing personnel a straightforward way to configure an environment for security and compliance based on "least privileges" across the AWS customer environment. This helps align the services in a way that will make your environment secure and auditable in real time, versus point-in-time or period-in-time. • Access Management. Create groups and roles, like developers, testers, or administrators, and provide them with their own unique credentials for accessing AWS cloud resources through the use of groups and roles. • Network Segmentation. Set up subnets in the cloud to separate environments that should remain isolated from one another; for example, to separate your development environment from your production environment, and then configure network ACLs to control how traffic is routed between them. Customers can also set up separate management environments to ensure security integrity through the use of a bastion host for limiting direct access to production resources. • Resource Constraints & Monitoring. Establish a hardened guest OS and services related to the use of Amazon Elastic Compute Cloud (Amazon EC2) instances, along with the latest security patches; perform backups of your data; and install antivirus and intrusion detection tools. Deploy monitoring, logging, and notification alarms. • Data Encryption. Encrypt your data or objects when they're stored in the cloud, either by encrypting automatically on the cloud side or on the client side before you upload it. Phase 3 – Enforce the Use of Templates After creating a "secure environment," you need to enforce its use in AWS. You do this by enforcing Service Catalog. Once you enforce the Service Catalog, everyone
with access to the account must create their environment using the CloudFormation templates you created. Every time anyone uses the environment, all those "secure environment" standard rules and/or constraints will be applied. This effectively operationalizes the remaining customer account security configurations of controls and prepares you for audit readiness. Phase 4 – Perform Validation Activities The goal of this phase is to ensure AWS customers can support an independent audit based on public, generally accepted auditing standards. Auditing standards provide a measure of audit quality and the objectives to be achieved when auditing a system built within an AWS customer environment. AWS provides tooling to detect whether there are actual instances of noncompliance. AWS Config gives you the point-in-time current settings of your architecture. You can also leverage AWS Config Rules, a service that allows you to use your secure environment as the authoritative criteria to execute a sweeping check of controls across the environment. You'll be able to detect who isn't encrypting, who is opening up ports to the Internet, and who has databases outside a production VPC. Any measurable characteristic of any AWS resource in the AWS environment can be checked. The ability to do a sweeping audit is especially valuable if you are working on an AWS account for which you did not first establish and enforce a secure environment. This allows you to check the entire account, no matter how it was created, and audit it against your secure environment standard. With AWS Config Rules you can also continually monitor it, and the console will show you at any time which IT resources are and aren't in compliance. In addition, you will know if a user was out of compliance, even if only for a brief period of time. This makes point-in-time and period-in-time audits extremely effective. Since auditing procedures differ across industry verticals, AWS
customers should review the audit guidance provided for their industry vertical. If possible, engage audit organizations that are "cloud-aware" and understand the unique audit automation capabilities that AWS provides. Work with your auditor to determine whether they have experience with auditing AWS resources; if they do not, AWS provides several training options that address how to audit AWS services through an instructor-led, eight-hour class, including hands-on labs. For more information, please contact: awsaudittraining@amazon.com. Additionally, AWS provides several audit evidence gathering capabilities through secure read access, along with unique API (Application Programming Interface) scripts that enable audit automation for evidence collection. This provides auditors the ability to perform 100% audit testing (versus testing with a sampling methodology). SbD: How to Get Started Here are some starter resources to get you and your teams ramped up: • Take the self-paced training "Auditing Your AWS Architecture." This will allow for hands-on exposure to the features and interfaces of AWS, in particular the configuration options that are available to auditors and security control owners. • Request more information about how SbD can help; email: awssecuritybydesign@amazon.com • Be familiar with additional relevant resources available to you: o Amazon Web Services: Overview of Security Processes o Introduction to Auditing the Use of AWS whitepaper o Federal Financial Institutions Examination Council (FFIEC) Audit Guide o SEC Cybersecurity Initiative Audit Guide Further Reading • AWS Compliance Center: http://aws.amazon.com/compliance • AWS Security by Design: http://aws.amazon.com/compliance/security-by-design • AWS Security Center: http://aws.amazon.com/security • FedRAMP FAQ: http://aws.amazon.com/compliance/fedramp • Risk and Compliance Whitepaper:
https://d0.awsstatic.com/whitepapers/compliance/AWS_Risk_and_Compliance_Whitepaper.pdf • Security Best Practices Whitepaper: https://d0.awsstatic.com/whitepapers/aws-security-best-practices.pdf • AWS Products Overview: http://aws.amazon.com/products/ • AWS Sales and Business Development: https://aws.amazon.com/compliance/contact/ • Government and Education on AWS: https://aws.amazon.com/government-education/ • AWS Professional Services: https://aws.amazon.com/professional-services",General,consultant,Best Practices Introduction_to_AWS_Security_Processes,"Introduction to AWS Security Processes, June 2016. THIS PAPER HAS BEEN ARCHIVED. For the latest technical content, see https://aws.amazon.com/architecture/security-identity-compliance © 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved. Table of Contents: Introduction; Shared Security Responsibility Model; AWS Security Responsibilities; Customer Security Responsibilities; AWS Global Security Infrastructure; AWS Compliance Programs; Physical and Environmental Security; Fire
Detection and Suppression; Power; Climate and Temperature; Management; Storage Device Decommissioning; Business Continuity Management; Availability; Incident Response; Company-Wide Executive Review; Communication; AWS Access; Account Review and Audit; Background Checks; Credentials Policy; Secure Design Principles; Change Management; Software; Infrastructure; AWS Account Security Features; AWS Credentials; Passwords; AWS Multi-Factor Authentication (AWS MFA); Access Keys; Key Pairs; X.509 Certificates; Individual User Accounts; Secure HTTPS Access Points; Security Logs; AWS Trusted Advisor Security Checks; Networking Services; Amazon Elastic Load Balancing Security; Amazon Virtual Private Cloud (Amazon VPC) Security; Amazon Route 53 Security; Amazon CloudFront Security; AWS Direct Connect Security; Appendix – Glossary of Terms; Document Revisions (Jun 2016; Nov 2014; Nov 2013; May 2013). Introduction Amazon Web Services (AWS) delivers a scalable cloud computing platform with high availability and dependability, providing the tools that enable customers to run a wide range of applications. Helping to protect the confidentiality, integrity, and availability of our customers' systems and data is of the utmost importance to AWS, as is maintaining customer trust and confidence. This document is intended to answer questions such as "How does AWS help me protect my data?" Specifically, AWS physical and operational security processes are described for the network and server infrastructure under AWS's management, as well as service-specific security implementations. Shared Security Responsibility Model When using AWS services, customers maintain complete control over their content and are responsible for managing critical content security
requirements including: • What content they choose to store on AWS • Which AWS services are used with the content • In what country that content is stored • The format and structure of that content and whether it is masked anonymised or encrypted • Who has access to that content and how those access rights are granted managed and revoked Because AWS customers retain control over their data they also retain responsibilities relating to that content as part of the AWS “shared responsibility” model This shared responsibility model is fundamental to understanding the respective roles of the customer and AWS in the context of the Cloud Security Principles Under the shared responsibility model AWS operates manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate In turn customers assume responsibility for and management of their operating system (including updates and security patches) other associated application software as well as the configuration of the AWS provided security group firewall Customers should carefully consid er the services they choose as their responsibilities vary depending on the services they use the integration of those services into their IT environments and applicable laws and regulations It is possible to enhance security and/or meet more stringent compliance requirements by leveraging technology such as host based firewalls host based intrusion detection/ prevention and encryption AWS provides tools and information to assist customers in their efforts to account for and validate that controls ar e operating effectively in their extended IT environment More information can be found on the AWS Compliance center at http://awsamazoncom/compliance ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 6 of 45 Figure 1: AWS Shar ed Security Responsib ility Model The amount of security configuration work you have to 
do varies depending on which services you select and how sensitive your data is However there are certain security features such as individual user accounts and credentials SSL/TLS for data transmissions and user a ctivity logging that you should configure no matter which AWS service you use For more information about these security features see the “AWS Account Security Features” section below AWS Security Responsi bilities AWS is responsible for protecting the global infrastructure that runs all of the services offered in the AWS cloud This infrastructure is comprised of the hardware software networking and facilities that run AWS services Protecting this infrastructure is AWS ’ number one priority and while you can’t visit our data centers or offices to see this protection firsthand we provide several reports from third party auditors who have verified our compliance with a variety of computer security standards and regulatio ns (for more information visit ( awsamazoncom/compliance ) Note that in addition to protecting this global infrastructure AWS is responsible for the security configuration of its products that are considered managed services Examples of these types of services include Amazon DynamoDB Amazon RDS Amazon Redshift Amazon Elastic MapReduce Amazon WorkSpaces and several other services These services provide the scalability and flexibility of cloud based resources with the additional benefit of being managed For these services AWS will handle basic security tasks like guest operating system (OS) and database patching firewall configuration ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 7 of 45 and disaster recovery For most of these managed services all you have to do is configure logical access controls for the resources and protect your account credentials A few of them may require additional tasks such as setting up database user accounts but overall the security configuration work is performed by the service 
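To make "configure logical access controls" concrete, the following is a minimal sketch of an IAM policy document granting read-only access to a single DynamoDB table. The account ID and table name are hypothetical placeholders, not values from this paper:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyExampleTable",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/ExampleTable"
    }
  ]
}
```

Attaching a narrowly scoped policy like this to an IAM user or role, rather than working with root account credentials, is the kind of customer-side configuration the managed-service model still expects of you.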
Customer Security Responsibilities

With the AWS cloud, you can provision virtual servers, storage, databases, and desktops in minutes instead of weeks. You can also use cloud-based analytics and workflow tools to process your data as you need it, and then store it in the cloud or in your own data centers. Which AWS services you use will determine how much configuration work you have to perform as part of your security responsibilities.

AWS products that fall into the well-understood category of Infrastructure as a Service (IaaS), such as Amazon EC2 and Amazon VPC, are completely under your control and require you to perform all of the necessary security configuration and management tasks. For example, for EC2 instances you're responsible for management of the guest OS (including updates and security patches), any application software or utilities you install on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance. These are basically the same security tasks that you're used to performing no matter where your servers are located.

AWS managed services like Amazon RDS or Amazon Redshift provide all of the resources you need in order to perform a specific task, but without the configuration work that can come with them. With managed services, you don't have to worry about launching and maintaining instances, patching the guest OS or database, or replicating databases; AWS handles that for you. However, as with all services, you should protect your AWS Account credentials and set up individual user accounts with AWS Identity and Access Management (IAM) so that each of your users has their own credentials and you can implement segregation of duties. We also recommend using multi-factor authentication (MFA) with each account, requiring the use of SSL/TLS to communicate with your AWS resources, and setting up API/user activity logging with AWS CloudTrail. For more information about additional measures you can take, refer to
the AWS Security Resources webpage.

AWS Global Security Infrastructure

AWS operates the global cloud infrastructure that you use to provision a variety of basic computing resources, such as processing and storage. The AWS global infrastructure includes the facilities, network, hardware, and operational software (e.g., host OS, virtualization software, etc.) that support the provisioning and use of these resources. The AWS global infrastructure is designed and managed according to security best practices as well as a variety of security compliance standards. As an AWS customer, you can be assured that you're building web architectures on top of some of the most secure computing infrastructure in the world.

AWS Compliance Programs

Amazon Web Services Compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud. As systems are built on top of the AWS cloud infrastructure, compliance responsibilities will be shared. By tying together governance-focused, audit-friendly service features with applicable compliance or audit standards, AWS Compliance enablers build on traditional programs, helping customers to establish and operate in an AWS security control environment.

The IT infrastructure that AWS provides to its customers is designed and managed in alignment with security best practices and a variety of IT security standards, including:
• SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70)
• SOC 2
• SOC 3
• FISMA
• FedRAMP
• DoD SRG Levels 2 and 4
• PCI DSS Level 1
• EU Model Clauses
• ISO 9001 / ISO 27001 / ISO 27017 / ISO 27018
• ITAR
• IRAP
• FIPS 140-2
• MLPS Level 3
• MTCS

In addition, the flexibility and control that the AWS platform provides allows customers to deploy solutions that meet several industry-specific standards, including:
• Criminal Justice Information Services (CJIS)
• Cloud Security Alliance (CSA)
• Family Educational Rights and Privacy Act (FERPA)
• Health Insurance Portability and Accountability Act (HIPAA)
• Motion Picture Association of America (MPAA)

AWS provides a wide range of information regarding its IT control environment to customers through white papers, reports, certifications, accreditations, and other third-party attestations. More information is available in the Risk and Compliance whitepaper, available at http://aws.amazon.com/compliance/

Physical and Environmental Security

AWS' data centers are state of the art, utilizing innovative architectural and engineering approaches. AWS has many years of experience in designing, constructing, and operating large-scale data centers. This experience has been applied to the AWS platform and infrastructure. AWS data centers are housed in nondescript facilities. Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff.

AWS only provides data center access and information to employees and contractors who have a legitimate business need for such privileges. When an employee no longer has a business need for these privileges, his or her access is immediately revoked, even if they continue to be an employee of Amazon or Amazon Web Services. All physical access to data centers by AWS employees is logged and audited routinely.

Fire Detection and Suppression

Automatic fire detection and suppression equipment has been installed to reduce risk. The fire detection system utilizes smoke detection sensors in all data center environments, mechanical and electrical infrastructure
spaces, chiller rooms, and generator equipment rooms. These areas are protected by either wet-pipe, double-interlocked pre-action, or gaseous sprinkler systems.

Power

The data center electrical power systems are designed to be fully redundant and maintainable without impact to operations, 24 hours a day, seven days a week. Uninterruptible Power Supply (UPS) units provide back-up power in the event of an electrical failure for critical and essential loads in the facility. Data centers use generators to provide back-up power for the entire facility.

Climate and Temperature

Climate control is required to maintain a constant operating temperature for servers and other hardware, which prevents overheating and reduces the possibility of service outages. Data centers are conditioned to maintain atmospheric conditions at optimal levels. Personnel and systems monitor and control temperature and humidity at appropriate levels.

Management

AWS monitors electrical, mechanical, and life support systems and equipment so that any issues are immediately identified. Preventative maintenance is performed to maintain the continued operability of equipment.

Storage Device Decommissioning

When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals. AWS uses techniques detailed in NIST 800-88 ("Guidelines for Media Sanitization") as part of the decommissioning process.

Business Continuity Management

AWS' infrastructure has a high level of availability and provides customers the features to deploy a resilient IT architecture. AWS has designed its systems to tolerate system or hardware failures with minimal customer impact. Data center Business Continuity Management at AWS is under the direction of the Amazon Infrastructure Group.

Availability

Data centers are built in clusters in various global regions. All data centers are online and serving customers; no data center is "cold." In case of failure, automated processes move customer data traffic away from the affected area. Core applications are deployed in an N+1 configuration, so that in the event of a data center failure there is sufficient capacity to enable traffic to be load-balanced to the remaining sites.

AWS provides you with the flexibility to place instances and store data within multiple geographic regions as well as across multiple availability zones within each region. Each availability zone is designed as an independent failure zone. This means that availability zones are physically separated within a typical metropolitan region and are located in lower-risk flood plains (specific flood zone categorization varies by region). In addition to discrete uninterruptable power supply (UPS) and onsite backup generation facilities, they are each fed via different grids from independent utilities to further reduce single points of failure. Availability zones are all redundantly connected to multiple tier-1 transit providers.

You should architect your AWS usage to take advantage of multiple regions and availability zones. Distributing applications across multiple availability zones provides the ability to remain resilient in the face of most failure modes, including natural disasters or system failures.

Incident Response

The Amazon Incident Management team employs industry-standard diagnostic procedures to drive resolution during business-impacting events. Staff operators provide 24x7x365 coverage to detect incidents and to manage the impact and resolution.

Company-Wide Executive Review

Amazon's Internal Audit group regularly reviews AWS resiliency plans, which are also periodically reviewed by members of the Senior Executive management team and the Audit Committee of the Board of Directors.

Communication

AWS has implemented
various methods of internal communication at a global level to help employees understand their individual roles and responsibilities and to communicate significant events in a timely manner. These methods include orientation and training programs for newly hired employees; regular management meetings for updates on business performance and other matters; and electronic means such as video conferencing, electronic mail messages, and the posting of information via the Amazon intranet.

AWS has also implemented various methods of external communication to support its customer base and the community. Mechanisms are in place to allow the customer support team to be notified of operational issues that impact the customer experience. A "Service Health Dashboard" is available and maintained by the customer support team to alert customers to any issues that may be of broad impact. The "AWS Security Center" is available to provide you with security and compliance details about AWS. You can also subscribe to AWS Support offerings that include direct communication with the customer support team and proactive alerts to any customer-impacting issues.

AWS Access

The AWS Production network is segregated from the Amazon Corporate network and requires a separate set of credentials for logical access. The Amazon Corporate network relies on user IDs, passwords, and Kerberos, while the AWS Production network requires SSH public-key authentication through a bastion host. AWS developers and administrators on the Amazon Corporate network who need to access AWS cloud components must explicitly request access through the AWS access management system. All requests are reviewed and approved by the appropriate owner or manager.

Account Review and Audit

Accounts are reviewed every 90 days; explicit re-approval is required or access to the resource is automatically revoked. Access is also automatically revoked when an employee's record is terminated in Amazon's Human Resources system. Windows and UNIX accounts are disabled, and Amazon's permission management system removes the user from all systems.

Requests for changes in access are captured in the Amazon permissions management tool audit log. When changes in an employee's job function occur, continued access must be explicitly approved to the resource or it will be automatically revoked.

Background Checks

AWS has established formal policies and procedures to delineate the minimum standards for logical access to AWS platform and infrastructure hosts. AWS conducts criminal background checks, as permitted by law, as part of pre-employment screening practices for employees and commensurate with the employee's position and level of access. The policies also identify functional responsibilities for the administration of logical access and security.

Credentials Policy

AWS Security has established a credentials policy with required configurations and expiration intervals. Passwords must be complex and are forced to be changed every 90 days.

Secure Design Principles

AWS' development process follows secure software development best practices, which include formal design reviews by the AWS Security Team, threat modeling, and completion of a risk assessment. Static code analysis tools are run as a part of the standard build process, and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts. Our security risk assessment reviews begin during the design phase, and the engagement lasts through launch to ongoing operations.

Change Management

Routine, emergency, and configuration changes to existing AWS infrastructure are authorized, logged, tested, approved, and documented in accordance with industry norms for similar systems. Updates to AWS' infrastructure are done to minimize any impact on the customer and their use of the services. AWS will communicate with customers, either via email or through the AWS
Service Health Dashboard, when service use is likely to be adversely affected.

Software

AWS applies a systematic approach to managing change so that changes to customer-impacting services are thoroughly reviewed, tested, approved, and well communicated. The AWS change management process is designed to avoid unintended service disruptions and to maintain the integrity of service to the customer. Changes deployed into production environments are:
• Reviewed: Peer reviews of the technical aspects of a change are required.
• Tested: Changes being applied are tested to help ensure they will behave as expected and not adversely impact performance.
• Approved: All changes must be authorized in order to provide appropriate oversight and understanding of business impact.

Changes are typically pushed into production in a phased deployment, starting with lowest-impact areas. Deployments are tested on a single system and closely monitored so impacts can be evaluated. Service owners have a number of configurable metrics that measure the health of the service's upstream dependencies. These metrics are closely monitored, with thresholds and alarming in place. Rollback procedures are documented in the Change Management (CM) ticket. When possible, changes are scheduled during regular change windows. Emergency changes to production systems that require deviations from standard change management procedures are associated with an incident and are logged and approved as appropriate.

Periodically, AWS performs self-audits of changes to key services to monitor quality, maintain high standards, and facilitate continuous improvement of the change management process. Any exceptions are analyzed to determine the root cause, and appropriate actions are taken to bring the change into compliance or roll back the change if necessary. Actions are then taken to address and remediate the process or people issue.

Infrastructure

Amazon's Corporate Applications team develops and manages software to automate IT processes for UNIX/Linux hosts in the areas of third-party software delivery, internally developed software, and configuration management. The Infrastructure team maintains and operates a UNIX/Linux configuration management framework to address hardware scalability, availability, auditing, and security management. By centrally managing hosts through the use of automated processes that manage change, AWS is able to achieve its goals of high availability, repeatability, scalability, security, and disaster recovery. Systems and network engineers monitor the status of these automated tools on a continuous basis, reviewing reports to respond to hosts that fail to obtain or update their configuration and software.

Internally developed configuration management software is installed when new hardware is provisioned. These tools are run on all UNIX hosts to validate that they are configured and that software is installed in compliance with standards determined by the role assigned to the host. This configuration management software also helps to regularly update packages that are already installed on the host. Only approved personnel, enabled through the permissions service, may log in to the central configuration management servers.

AWS Account Security Features

AWS provides a variety of tools and features that you can use to keep your AWS Account and resources safe from unauthorized use. These include credentials for access control, HTTPS endpoints for encrypted data transmission, the creation of separate IAM user accounts, user activity logging for security monitoring, and Trusted Advisor security checks. You can take advantage of all of these security tools no matter which AWS services you select.

AWS Credentials

To help ensure that only authorized users and processes access your AWS Account and resources, AWS uses several types of
credentials for authentication. These include passwords, cryptographic keys, digital signatures, and certificates. We also provide the option of requiring multi-factor authentication (MFA) to log into your AWS Account or IAM user accounts. The following list describes the various AWS credentials and their uses:

• Passwords. Used for AWS root account or IAM user account login to the AWS Management Console. A string of characters used to log into your AWS account or IAM account. AWS passwords must be a minimum of 6 characters and may be up to 128 characters.
• Multi-Factor Authentication (MFA). Used for AWS root account or IAM user account login to the AWS Management Console. A six-digit, single-use code that is required in addition to your password to log in to your AWS Account or IAM user account.
• Access Keys. Used for digitally signed requests to AWS APIs (using the AWS SDK, CLI, or REST/Query APIs). Includes an access key ID and a secret access key. You use access keys to digitally sign programmatic requests that you make to AWS.
• Key Pairs. Used for SSH login to EC2 instances, CloudFront signed URLs, and Windows instances. To log in to your instance, you must create a key pair, specify the name of the key pair when you launch the instance, and provide the private key when you connect to the instance. Linux instances have no password, and you use a key pair to log in using SSH. With Windows instances, you use a key pair to obtain the administrator password and then log in using RDP.
• X.509 Certificates. Used for digitally signed SOAP requests to AWS APIs and as SSL server certificates for HTTPS. X.509 certificates are only used to sign SOAP-based requests (currently used only with Amazon S3). You can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page.

You can download a Credential Report for your account at any time from the Security Credentials page. This report lists all of your account's users and the status of their credentials: whether they use a password, whether their password expires and must be changed regularly, the last time they changed their password, the last time they rotated their access keys, and whether they have MFA enabled.

For security reasons, if your credentials have been lost or forgotten, you cannot recover them or re-download them. However, you can create new credentials and then disable or delete the old set of credentials. In fact, AWS recommends that you change (rotate) your access keys and certificates on a regular basis. To help you do this without potential impact to your application's availability, AWS supports multiple concurrent access keys and certificates. With this feature, you can rotate keys and certificates into and out of operation on a regular basis without any downtime to your application. This can help to mitigate risk from lost or compromised access keys or certificates. The AWS IAM API enables you to rotate the access keys of your AWS Account as well as for IAM user accounts.

Passwords

Passwords are required to access your AWS Account, individual IAM user accounts, AWS Discussion Forums, and the AWS Support Center. You specify the password when you first create the account, and you can change it at any time by going to the Security Credentials page. AWS passwords can be up to 128 characters long and contain special characters, so we encourage you to create a strong password that cannot be easily guessed.

You can set a password policy for your IAM user accounts to ensure that strong passwords are used and that they are changed often. A password policy is a set of rules that define the type of password an IAM user can set. For more information about password policies, go to Managing Passwords in Using IAM.

AWS Multi-Factor Authentication (AWS MFA)

AWS Multi-Factor Authentication (AWS MFA) is an
additional layer of security for accessing AWS services. When you enable this optional feature, you will need to provide a six-digit, single-use code in addition to your standard user name and password credentials before access is granted to your AWS Account settings or AWS services and resources. You get this single-use code from an authentication device that you keep in your physical possession. This is called multi-factor authentication because more than one authentication factor is checked before access is granted: a password (something you know) and the precise code from your authentication device (something you have).

You can enable MFA devices for your AWS Account as well as for the users you have created under your AWS Account with AWS IAM. In addition, you can add MFA protection for access across AWS Accounts, for when you want to allow a user you've created under one AWS Account to use an IAM role to access resources under another AWS Account. You can require the user to use MFA before assuming the role as an additional layer of security.

AWS MFA supports the use of both hardware tokens and virtual MFA devices. Virtual MFA devices use the same protocols as the physical MFA devices, but can run on any mobile hardware device, including a smartphone. A virtual MFA device uses a software application that generates six-digit authentication codes that are compatible with the Time-Based One-Time Password (TOTP) standard, as described in RFC 6238. Most virtual MFA applications allow you to host more than one virtual MFA device, which makes them more convenient than hardware MFA devices. However, you should be aware that because a virtual MFA might be run on a less secure device, such as a smartphone, a virtual MFA might not provide the same level of security as a hardware MFA device.

You can also enforce MFA authentication for AWS service APIs in order to provide an extra layer of protection over powerful or privileged actions, such as terminating Amazon EC2 instances or reading sensitive data stored in Amazon S3. You do this by adding an MFA authentication requirement to an IAM access policy. You can attach these access policies to IAM users, IAM groups, or resources that support Access Control Lists (ACLs), like Amazon S3 buckets, SQS queues, and SNS topics.

It is easy to obtain hardware tokens from a participating third-party provider, or virtual MFA applications from an app store, and to set them up for use via the AWS website. More information about AWS MFA is available on the AWS website.

Access Keys

AWS requires that all API requests be signed; that is, they must include a digital signature that AWS can use to verify the identity of the requestor. You calculate the digital signature using a cryptographic hash function. The input to the hash function in this case includes the text of your request and your secret access key. If you use any of the AWS SDKs to generate requests, the digital signature calculation is done for you; otherwise, you can have your application calculate it and include it in your REST or Query requests by following the directions in our documentation.

Not only does the signing process help protect message integrity by preventing tampering with the request while it is in transit, it also helps protect against potential replay attacks. A request must reach AWS within 15 minutes of the time stamp in the request; otherwise, AWS denies the request.

The most recent version of the digital signature calculation process is Signature Version 4, which calculates the signature using the HMAC-SHA256 protocol. Version 4 provides an additional measure of protection over previous versions by requiring that you sign the message using a key that is derived from your secret access key, rather than using the secret access key itself. In addition, you derive the signing key based on credential scope, which facilitates cryptographic isolation of the signing key.

Because access
keys can be misused if they fall into the wrong hands, we encourage you to save them in a safe place and not embed them in your code. For customers with large fleets of elastically scaling EC2 instances, the use of IAM roles can be a more secure and convenient way to manage the distribution of access keys. IAM roles provide temporary credentials, which not only get automatically loaded to the target instance, but are also automatically rotated multiple times a day.

Key Pairs

Amazon EC2 uses public-key cryptography to encrypt and decrypt login information. Public-key cryptography uses a public key to encrypt a piece of data, such as a password, and then the recipient uses the private key to decrypt the data. The public and private keys are known as a key pair.

To log in to your instance, you must create a key pair, specify the name of the key pair when you launch the instance, and provide the private key when you connect to the instance. Linux instances have no password, and you use a key pair to log in using SSH. With Windows instances, you use a key pair to obtain the administrator password and then log in using RDP.

Creating a Key Pair

You can use Amazon EC2 to create your key pair. For more information, see Creating Your Key Pair Using Amazon EC2. Alternatively, you could use a third-party tool and then import the public key to Amazon EC2. For more information, see Importing Your Own Key Pair to Amazon EC2.

Each key pair requires a name. Be sure to choose a name that is easy to remember. Amazon EC2 associates the public key with the name that you specify as the key name. Amazon EC2 stores the public key only, and you store the private key. Anyone who possesses your private key can decrypt your login information, so it's important that you store your private keys in a secure place. The keys that Amazon EC2 uses are 2048-bit SSH-2 RSA keys. You can have up to five thousand key pairs per region.

X.509 Certificates

X.509 certificates are used to sign SOAP-based requests. X.509 certificates contain a public key and additional metadata (like an expiration date that AWS verifies when you upload the certificate) and are associated with a private key. When you create a request, you create a digital signature with your private key and then include that signature in the request, along with your certificate. AWS verifies that you're the sender by decrypting the signature with the public key that is in your certificate. AWS also verifies that the certificate you sent matches the certificate that you uploaded to AWS.

For your AWS Account, you can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page. For IAM users, you must create the X.509 certificate (signing certificate) by using third-party software. In contrast with root account credentials, AWS cannot create an X.509 certificate for IAM users. After you create the certificate, you attach it to an IAM user by using IAM.

In addition to SOAP requests, X.509 certificates are used as SSL/TLS server certificates for customers who want to use HTTPS to encrypt their transmissions. To use them for HTTPS, you can use an open-source tool like OpenSSL to create a unique private key. You'll need the private key to create the Certificate Signing Request (CSR) that you submit to a certificate authority (CA) to obtain the server certificate. You'll then use the AWS CLI to upload the certificate, private key, and certificate chain to IAM.

You'll also need an X.509 certificate to create a customized Linux AMI for EC2 instances. The certificate is only required to create an instance-backed AMI (as opposed to an EBS-backed AMI). You can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page.

Individual User Accounts

AWS provides a centralized mechanism called AWS Identity and Access
Management (IAM) for creating and managing individual users within your AWS Account. A user can be any individual, system, or application that interacts with AWS resources, either programmatically or through the AWS Management Console or AWS Command Line Interface (CLI). Each user has a unique name within the AWS Account and a unique set of security credentials not shared with other users. AWS IAM eliminates the need to share passwords or keys, and enables you to minimize the use of your AWS Account credentials. With IAM, you define policies that control which AWS services your users can access and what they can do with them. You can grant users only the minimum permissions they need to perform their jobs. See the AWS Identity and Access Management (AWS IAM) section below for more information.

Secure HTTPS Access Points
For greater communication security when accessing AWS resources, you should use HTTPS instead of HTTP for data transmissions. HTTPS uses the SSL/TLS protocol, which uses public-key cryptography to prevent eavesdropping, tampering, and forgery. All AWS services provide secure customer access points (also called API endpoints) that allow you to establish secure HTTPS communication sessions. Several services also now offer more advanced cipher suites that use the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol. ECDHE allows SSL/TLS clients to provide Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored anywhere. This helps prevent the decoding of captured data by unauthorized third parties, even if the secret long-term key itself is compromised.

Security Logs
As important as credentials and encrypted endpoints are for preventing security problems, logs are just as crucial for understanding events after a problem has occurred. And to be effective as a security tool, a log must include not just a list of what happened and when, but also identify the source.
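The what/when/source triple can be made concrete with a short sketch: the snippet below reduces an audit-log record to exactly those fields. The event here is a fabricated sample; only the field names (eventTime, eventName, sourceIPAddress, userIdentity) follow the CloudTrail record format.

```python
import json

# Illustrative CloudTrail-style event. The field names follow the
# CloudTrail record format; the values are made up for this example.
sample_event = json.loads("""
{
  "eventTime": "2016-06-01T12:00:00Z",
  "eventSource": "ec2.amazonaws.com",
  "eventName": "AuthorizeSecurityGroupIngress",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"type": "IAMUser", "userName": "alice"}
}
""")

def summarize(event):
    """Reduce a log record to what happened, when, who, and from where."""
    who = event.get("userIdentity", {}).get("userName", "unknown")
    return (event["eventName"], event["eventTime"], who,
            event["sourceIPAddress"])

action, when, who, source = summarize(sample_event)
```

A record like this answers all three questions a security log must answer: the action, the time, and the identity and network source behind the request.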
To help you with your after-the-fact investigations and near-real-time intrusion detection, AWS CloudTrail provides a log of requests for AWS resources within your account for supported services. For each event, you can see what service was accessed, what action was performed, and who made the request. CloudTrail captures information about every API call to every supported AWS resource, including sign-in events. Once you have enabled CloudTrail, event logs are delivered every 5 minutes. You can configure CloudTrail so that it aggregates log files from multiple regions into a single Amazon S3 bucket. From there, you can then upload them to your favorite log management and analysis solutions to perform security analysis and detect user behavior patterns. By default, log files are stored securely in Amazon S3, but you can also archive them to Amazon Glacier to help meet audit and compliance requirements.

In addition to CloudTrail's user activity logs, you can use the Amazon CloudWatch Logs feature to collect and monitor system, application, and custom log files from your EC2 instances and other sources in near-real time. For example, you can monitor your web server's log files for invalid user messages to detect unauthorized login attempts to your guest OS.

AWS Trusted Advisor Security Checks
The AWS Trusted Advisor customer support service not only monitors for cloud performance and resiliency, but also cloud security. Trusted Advisor inspects your AWS environment and makes recommendations when opportunities may exist to save money, improve system performance, or close security gaps. It provides alerts on several of the most common security misconfigurations that can occur, including leaving certain ports open that make you vulnerable to hacking and unauthorized access, neglecting to create IAM accounts for your internal users, allowing public access to Amazon S3 buckets, not turning on user activity logging (AWS
CloudTrail), or not using MFA on your root AWS Account. You also have the option for a Security contact at your organization to automatically receive a weekly email with an updated status of your Trusted Advisor security checks.

The AWS Trusted Advisor service provides four checks at no additional charge to all users, including three important security checks: specific ports unrestricted, IAM use, and MFA on root account. And when you sign up for Business- or Enterprise-level AWS Support, you receive full access to all Trusted Advisor checks.

Networking Services
Amazon Web Services provides a range of networking services that enable you to create a logically isolated network that you define, establish a private network connection to the AWS cloud, use a highly available and scalable DNS service, and deliver content to your end users with low latency at high data transfer speeds with a content delivery web service.

Amazon Elastic Load Balancing Security
Amazon Elastic Load Balancing is used to manage traffic on a fleet of Amazon EC2 instances, distributing traffic to instances across all availability zones within a region. Elastic Load Balancing has all the advantages of an on-premises load balancer, plus several security benefits:

• Takes over the encryption and decryption work from the Amazon EC2 instances and manages it centrally on the load balancer
• Offers clients a single point of contact, and can also serve as the first line of defense against attacks on your network
• When used in an Amazon VPC, supports creation and management of security groups associated with your Elastic Load Balancing to provide additional networking and security options
• Supports end-to-end traffic encryption using TLS (previously SSL) on those networks that use secure HTTP (HTTPS) connections. When TLS is used, the TLS server certificate used to terminate client connections can be managed centrally on the load balancer,
rather than on every individual instance.

HTTPS/TLS uses a long-term secret key to generate a short-term session key to be used between the server and the browser to create the ciphered (encrypted) message. Amazon Elastic Load Balancing configures your load balancer with a pre-defined cipher set that is used for TLS negotiation when a connection is established between a client and your load balancer. The pre-defined cipher set provides compatibility with a broad range of clients and uses strong cryptographic algorithms. However, some customers may have requirements for allowing only specific ciphers and protocols (such as PCI, SOX, etc.) from clients to ensure that standards are met. In these cases, Amazon Elastic Load Balancing provides options for selecting different configurations for TLS protocols and ciphers. You can choose to enable or disable the ciphers depending on your specific requirements.

To help ensure the use of newer and stronger cipher suites when establishing a secure connection, you can configure the load balancer to have the final say in the cipher suite selection during the client-server negotiation. When the Server Order Preference option is selected, the load balancer will select a cipher suite based on the server's prioritization of cipher suites rather than the client's. This gives you more control over the level of security that clients use to connect to your load balancer.

For even greater communication privacy, Amazon Elastic Load Balancing allows the use of Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored anywhere. This prevents the decoding of captured data, even if the secret long-term key itself is compromised.

Amazon Elastic Load Balancing allows you to identify the originating IP address of a client connecting to your servers, whether you're using HTTPS or TCP load balancing. Typically, client connection information, such as IP address and port, is lost when requests are proxied through a load balancer. This is because the
load balancer sends requests to the server on behalf of the client, making your load balancer appear as though it is the requesting client. Having the originating client IP address is useful if you need more information about visitors to your applications in order to gather connection statistics, analyze traffic logs, or manage whitelists of IP addresses.

Amazon Elastic Load Balancing access logs contain information about each HTTP and TCP request processed by your load balancer. This includes the IP address and port of the requesting client, the backend IP address of the instance that processed the request, the size of the request and response, and the actual request line from the client (for example, GET http://www.example.com:80/HTTP/1.1). All requests sent to the load balancer are logged, including requests that never made it to back-end instances.

Amazon Virtual Private Cloud (Amazon VPC) Security
Normally, each Amazon EC2 instance you launch is randomly assigned a public IP address in the Amazon EC2 address space. Amazon VPC enables you to create an isolated portion of the AWS cloud and launch Amazon EC2 instances that have private (RFC 1918) addresses in the range of your choice (e.g., 10.0.0.0/16). You can define subnets within your VPC, grouping similar kinds of instances based on IP address range, and then set up routing and security to control the flow of traffic in and out of the instances and subnets.

AWS offers a variety of VPC architecture templates with configurations that provide varying levels of public access:

• VPC with a single public subnet only. Your instances run in a private, isolated section of the AWS cloud with direct access to the Internet. Network ACLs and security groups can be used to provide strict control over inbound and outbound network traffic to your instances.
• VPC with public and private subnets. In addition to containing a public subnet, this
configuration adds a private subnet whose instances are not addressable from the Internet. Instances in the private subnet can establish outbound connections to the Internet via the public subnet using Network Address Translation (NAT).
• VPC with public and private subnets and hardware VPN access. This configuration adds an IPsec VPN connection between your Amazon VPC and your data center, effectively extending your data center to the cloud, while also providing direct access to the Internet for public subnet instances in your Amazon VPC. In this configuration, customers add a VPN appliance on their corporate data center side.
• VPC with private subnet only and hardware VPN access. Your instances run in a private, isolated section of the AWS cloud with a private subnet whose instances are not addressable from the Internet. You can connect this private subnet to your corporate data center via an IPsec VPN tunnel.

You can also connect two VPCs using a private IP address, which allows instances in the two VPCs to communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account within a single region.

Security features within Amazon VPC include security groups, network ACLs, routing tables, and external gateways. Each of these items is complementary to providing a secure, isolated network that can be extended through selective enabling of direct Internet access or private connectivity to another network. Amazon EC2 instances running within an Amazon VPC inherit all of the benefits described below related to the guest OS and protection against packet sniffing. Note, however, that you must create VPC security groups specifically for your Amazon VPC; any Amazon EC2 security groups you have created will not work inside your Amazon VPC. Also, Amazon VPC security groups have additional
capabilities that Amazon EC2 security groups do not have, such as being able to change the security group after the instance is launched, and being able to specify any protocol with a standard protocol number (as opposed to just TCP, UDP, or ICMP).

Each Amazon VPC is a distinct, isolated network within the cloud; network traffic within each Amazon VPC is isolated from all other Amazon VPCs. At creation time, you select an IP address range for each Amazon VPC. You may create and attach an Internet gateway, virtual private gateway, or both to establish external connectivity, subject to the controls below.

API Access: Calls to create and delete Amazon VPCs; change routing, security group, and network ACL parameters; and perform other functions are all signed by your Amazon Secret Access Key, which could be either the AWS Account's Secret Access Key or the Secret Access Key of a user created with AWS IAM. Without access to your Secret Access Key, Amazon VPC API calls cannot be made on your behalf. In addition, API calls can be encrypted with SSL to maintain confidentiality. Amazon recommends always using SSL-protected API endpoints. AWS IAM also enables a customer to further control what APIs a newly created user has permissions to call.

Subnets and Route Tables: You create one or more subnets within each Amazon VPC; each instance launched in the Amazon VPC is connected to one subnet. Traditional Layer 2 security attacks, including MAC spoofing and ARP spoofing, are blocked. Each subnet in an Amazon VPC is associated with a routing table, and all network traffic leaving the subnet is processed by the routing table to determine the destination.

Firewall (Security Groups): Like Amazon EC2, Amazon VPC supports a complete firewall solution, enabling filtering on both ingress and egress traffic from an instance. The default group enables inbound communication from other members of the same group and outbound communication to any destination. Traffic can be
restricted by any IP protocol, by service port, as well as source/destination IP address (individual IP or Classless Inter-Domain Routing (CIDR) block). The firewall isn't controlled through the guest OS; rather, it can be modified only through the invocation of Amazon VPC APIs. AWS supports the ability to grant granular access to different administrative functions on the instances and the firewall, therefore enabling you to implement additional security through separation of duties. The level of security afforded by the firewall is a function of which ports you open, and for what duration and purpose. Well-informed traffic management and security design are still required on a per-instance basis. AWS further encourages you to apply additional per-instance filters with host-based firewalls such as IPtables or the Windows Firewall.

Figure 5: Amazon VPC Network Architecture

Network Access Control Lists: To add a further layer of security within Amazon VPC, you can configure network ACLs. These are stateless traffic filters that apply to all traffic inbound or outbound from a subnet within Amazon VPC. These ACLs can contain ordered rules to allow or deny traffic based upon IP protocol, by service port, as well as source/destination IP address. Like security groups, network ACLs are managed through Amazon VPC APIs, adding an additional layer of protection and enabling additional security through separation of duties. The diagram below depicts how the security controls above interrelate to enable flexible network topologies while providing complete control over network traffic flows.

Figure 6: Flexible Network Topologies

Virtual Private Gateway: A virtual private gateway enables private connectivity between the Amazon VPC and another network. Network traffic within each
virtual private gateway is isolated from network traffic within all other virtual private gateways. You can establish VPN connections to the virtual private gateway from gateway devices at your premises. Each connection is secured by a pre-shared key in conjunction with the IP address of the customer gateway device.

Internet Gateway: An Internet gateway may be attached to an Amazon VPC to enable direct connectivity to Amazon S3, other AWS services, and the Internet. Each instance desiring this access must either have an Elastic IP associated with it or route traffic through a NAT instance. Additionally, network routes are configured (see above) to direct traffic to the Internet gateway. AWS provides reference NAT AMIs that you can extend to perform network logging, deep packet inspection, application-layer filtering, or other security controls. This access can only be modified through the invocation of Amazon VPC APIs. AWS supports the ability to grant granular access to different administrative functions on the instances and the Internet gateway, therefore enabling you to implement additional security through separation of duties.

Dedicated Instances: Within a VPC, you can launch Amazon EC2 instances that are physically isolated at the host hardware level (i.e., they will run on single-tenant hardware). An Amazon VPC can be created with 'dedicated' tenancy, so that all instances launched into the Amazon VPC will utilize this feature. Alternatively, an Amazon VPC may be created with 'default' tenancy, but you can specify dedicated tenancy for particular instances launched into it.

Elastic Network Interfaces: Each Amazon EC2 instance has a default network interface that is assigned a private IP address on your Amazon VPC network. You can create and attach an additional network interface, known as an elastic network interface (ENI), to any Amazon EC2 instance in your
Amazon VPC for a total of two network interfaces per instance. Attaching more than one network interface to an instance is useful when you want to create a management network, use network and security appliances in your Amazon VPC, or create dual-homed instances with workloads/roles on distinct subnets. An ENI's attributes, including the private IP address, elastic IP addresses, and MAC address, will follow the ENI as it is attached or detached from an instance and reattached to another instance. More information about Amazon VPC is available on the AWS website: http://aws.amazon.com/vpc/

Additional Network Access Control with EC2-VPC
If you launch instances in a region where you did not have instances before AWS launched the new EC2-VPC feature (also called Default VPC), all instances are automatically provisioned in a ready-to-use default VPC. You can choose to create additional VPCs, or you can create VPCs for instances in regions where you already had instances before we launched EC2-VPC.

If you create a VPC later, using regular VPC, you specify a CIDR block, create subnets, enter the routing and security for those subnets, and provision an Internet gateway or NAT instance if you want one of your subnets to be able to reach the Internet. When you launch EC2 instances into an EC2-VPC, most of this work is automatically performed for you. When you launch an instance into a default VPC using EC2-VPC, we do the following to set it up for you:

• Create a default subnet in each Availability Zone
• Create an Internet gateway and connect it to your default VPC
• Create a main route table for your default VPC with a rule that sends all traffic destined for the Internet to the Internet gateway
• Create a default security group and associate it with your default VPC
• Create a default network access control list (ACL) and associate it with your default VPC
• Associate the default DHCP options set for your AWS account with your default VPC

In addition to the default VPC having
its own private IP range, EC2 instances launched in a default VPC can also receive a public IP.

The following table summarizes the differences between instances launched into EC2-Classic, instances launched into a default VPC, and instances launched into a nondefault VPC.

Public IP address
– EC2-Classic: Your instance receives a public IP address.
– EC2-VPC (default VPC): Your instance launched in a default subnet receives a public IP address by default, unless you specify otherwise during launch.
– Regular VPC: Your instance doesn't receive a public IP address by default, unless you specify otherwise during launch.

Private IP address
– EC2-Classic: Your instance receives a private IP address from the EC2-Classic range each time it's started.
– EC2-VPC (default VPC): Your instance receives a static private IP address from the address range of your default VPC.
– Regular VPC: Your instance receives a static private IP address from the address range of your VPC.

Multiple private IP addresses
– EC2-Classic: We select a single IP address for your instance. Multiple IP addresses are not supported.
– EC2-VPC (default VPC): You can assign multiple private IP addresses to your instance.
– Regular VPC: You can assign multiple private IP addresses to your instance.

Elastic IP address
– EC2-Classic: An EIP is disassociated from your instance when you stop it.
– EC2-VPC (default VPC): An EIP remains associated with your instance when you stop it.
– Regular VPC: An EIP remains associated with your instance when you stop it.

DNS hostnames
– EC2-Classic: DNS hostnames are enabled by default.
– EC2-VPC (default VPC): DNS hostnames are enabled by default.
– Regular VPC: DNS hostnames are disabled by default.

Security group
– EC2-Classic: A security group can reference security groups that belong to other AWS accounts.
– EC2-VPC (default VPC): A security group can reference security groups for your VPC only.
– Regular VPC: A security group can reference security groups for your VPC only.

Security group association
– EC2-Classic: You must terminate your instance to change its security group.
– EC2-VPC (default VPC): You can change the security group of your running instance.
– Regular VPC: You can change the security group of your running instance.

Security group rules
– EC2-Classic: You can add rules for inbound traffic only.
– EC2-VPC (default VPC): You can add rules for inbound and outbound traffic.
– Regular VPC: You can add rules for inbound and outbound traffic.

Tenancy
– EC2-Classic: Your instance runs on shared hardware; you cannot run an instance on single-tenant hardware.
– EC2-VPC (default VPC): You can run your instance on shared hardware or single-tenant hardware.
– Regular VPC: You can run your instance on shared hardware or single-tenant hardware.

Note that security groups for instances in EC2-Classic are slightly different than security groups for instances in EC2-VPC. For example, you can add rules for inbound traffic only for EC2-Classic, but you can add rules for both inbound and outbound traffic in EC2-VPC. In EC2-Classic, you can't change the security groups assigned to an instance after it's launched, but in EC2-VPC you can change the security groups assigned to an instance after it's launched. In addition, you can't use the security groups that you've created for use with EC2-Classic with instances in your VPC. You must create security groups specifically for use with instances in your VPC. The rules you create for use with a security group for a VPC can't reference a security group for EC2-Classic, and vice versa.

Amazon Route 53 Security
Amazon Route 53 is a highly available and scalable Domain Name System (DNS) service that answers DNS queries, translating domain names into IP addresses so computers can communicate with each other. Route 53 can be used to connect user requests to infrastructure running in AWS – such as an Amazon EC2 instance or an Amazon S3 bucket – or to infrastructure outside of AWS. Amazon Route 53 lets you manage the IP addresses (records) listed for your domain names, and it answers requests (queries) to translate specific domain names into their corresponding IP addresses. Queries for your domain are automatically routed to a nearby DNS server using anycast in order to provide the lowest
latency possible.

Route 53 makes it possible for you to manage traffic globally through a variety of routing types, including Latency Based Routing (LBR), Geo DNS, and Weighted Round Robin (WRR), all of which can be combined with DNS Failover in order to help create a variety of low-latency, fault-tolerant architectures. The failover algorithms implemented by Amazon Route 53 are designed not only to route traffic to endpoints that are healthy, but also to help avoid making disaster scenarios worse due to misconfigured health checks and applications, endpoint overloads, and partition failures.

Route 53 also offers Domain Name Registration – you can purchase and manage domain names such as example.com, and Route 53 will automatically configure default DNS settings for your domains. You can buy, manage, and transfer (both in and out) domains from a wide selection of generic and country-specific top-level domains (TLDs). During the registration process, you have the option to enable privacy protection for your domain. This option will hide most of your personal information from the public Whois database in order to help thwart scraping and spamming.

Amazon Route 53 is built using AWS' highly available and reliable infrastructure. The distributed nature of the AWS DNS servers helps ensure a consistent ability to route your end users to your application. Route 53 also helps ensure the availability of your website by providing health checks and DNS failover capabilities. You can easily configure Route 53 to check the health of your website on a regular basis (even secure web sites that are available only over SSL), and to switch to a backup site if the primary one is unresponsive.

Like all AWS Services, Amazon Route 53 requires that every request made to its control API be authenticated, so only authenticated users can access and manage Route 53. API requests are signed with an HMAC-SHA1 or HMAC-SHA256
signature calculated from the request and the user's AWS Secret Access Key. Additionally, the Amazon Route 53 control API is only accessible via SSL-encrypted endpoints. It supports both IPv4 and IPv6 routing. You can control access to Amazon Route 53 DNS management functions by creating users under your AWS Account using AWS IAM, and controlling which Route 53 operations these users have permission to perform.

Amazon CloudFront Security
Amazon CloudFront gives customers an easy way to distribute content to end users with low latency and high data transfer speeds. It delivers dynamic, static, and streaming content using a global network of edge locations. Requests for customers' objects are automatically routed to the nearest edge location, so content is delivered with the best possible performance. Amazon CloudFront is optimized to work with other AWS services like Amazon S3, Amazon EC2, Elastic Load Balancing, and Amazon Route 53. It also works seamlessly with any non-AWS origin server that stores the original, definitive versions of your files.

Amazon CloudFront requires every request made to its control API be authenticated, so only authorized users can create, modify, or delete their own Amazon CloudFront distributions. Requests are signed with an HMAC-SHA1 signature calculated from the request and the user's private key. Additionally, the Amazon CloudFront control API is only accessible via SSL-enabled endpoints.

There is no guarantee of durability of data held in Amazon CloudFront edge locations. The service may from time to time remove objects from edge locations if those objects are not requested frequently. Durability is provided by Amazon S3, which works as the origin server for Amazon CloudFront, holding the original, definitive copies of objects delivered by Amazon CloudFront.

If you want control over who is able to download content from Amazon CloudFront, you can enable the service's private content feature. This feature has two
components: the first controls how content is delivered from the Amazon CloudFront edge location to viewers on the Internet. The second controls how the Amazon CloudFront edge locations access objects in Amazon S3. CloudFront also supports Geo Restriction, which restricts access to your content based on the geographic location of your viewers.

To control access to the original copies of your objects in Amazon S3, Amazon CloudFront allows you to create one or more “Origin Access Identities” and associate these with your distributions. When an Origin Access Identity is associated with an Amazon CloudFront distribution, the distribution will use that identity to retrieve objects from Amazon S3. You can then use Amazon S3's ACL feature, which limits access to that Origin Access Identity, so the original copy of the object is not publicly readable.

To control who is able to download objects from Amazon CloudFront edge locations, the service uses a signed URL verification system. To use this system, you first create a public-private key pair and upload the public key to your account via the AWS Management Console. Second, you configure your Amazon CloudFront distribution to indicate which accounts you would authorize to sign requests – you can indicate up to five AWS Accounts you trust to sign requests. Third, as you receive requests, you will create policy documents indicating the conditions under which you want Amazon CloudFront to serve your content. These policy documents can specify the name of the object that is requested, the date and time of the request, and the source IP (or CIDR range) of the client making the request. You then calculate the SHA1 hash of your policy document and sign this using your private key. Finally, you include both the encoded policy document and the signature as query string parameters when you reference your objects. When Amazon CloudFront receives a request, it will decode the signature using your public key.
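The policy-document step can be sketched in Python. This is a minimal illustration under stated assumptions, not a complete signer: the object URL and condition values are made-up placeholders, the base64 character substitutions shown are the URL-safe ones CloudFront expects in query strings, and the RSA signing of the SHA1 digest is omitted because it requires your private key and a crypto library.

```python
import base64
import hashlib
import json

def cloudfront_safe_b64(data: bytes) -> str:
    """Base64 with CloudFront's URL-safe substitutions (+ > -, = > _, / > ~)."""
    return (base64.b64encode(data).decode()
            .replace("+", "-").replace("=", "_").replace("/", "~"))

# A custom policy restricting one object by expiry time and client CIDR.
# The URL, epoch time, and CIDR are illustrative placeholders.
policy = {
    "Statement": [{
        "Resource": "https://dxxxxx.cloudfront.net/image.jpg",
        "Condition": {
            "DateLessThan": {"AWS:EpochTime": 1466000000},
            "IpAddress": {"AWS:SourceIp": "203.0.113.0/24"},
        },
    }]
}

policy_json = json.dumps(policy, separators=(",", ":")).encode()
encoded_policy = cloudfront_safe_b64(policy_json)

# The SHA1 digest of the policy document is what gets signed with your
# RSA private key; the signing call itself is omitted in this sketch.
digest = hashlib.sha1(policy_json).hexdigest()
```

The encoded policy and the (separately computed) signature then travel as query string parameters on the object URL, as described above.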
Amazon CloudFront will only serve requests that have a valid policy document and matching signature. Note that private content is an optional feature that must be enabled when you set up your CloudFront distribution. Content delivered without this feature enabled will be publicly readable.

Amazon CloudFront provides the option to transfer content over an encrypted connection (HTTPS). By default, CloudFront will accept requests over both HTTP and HTTPS protocols. However, you can also configure CloudFront to require HTTPS for all requests, or have CloudFront redirect HTTP requests to HTTPS. You can even configure CloudFront distributions to allow HTTP for some objects but require HTTPS for other objects.

Figure 7: Amazon CloudFront Encrypted Transmission

You can configure one or more CloudFront origins to require CloudFront fetch objects from your origin using the protocol that the viewer used to request the objects. For example, when you use this CloudFront setting and the viewer uses HTTPS to request an object from CloudFront, CloudFront also uses HTTPS to forward the request to your origin.

Amazon CloudFront supports the TLSv1.1 and TLSv1.2 protocols for HTTPS connections between CloudFront and your custom origin webserver (along with SSLv3 and TLSv1.0), and a selection of cipher suites that includes the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol on connections to both viewers and the origin. ECDHE allows SSL/TLS clients to provide Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored anywhere. This helps prevent the decoding of captured data by unauthorized third parties, even if the secret long-term key itself is compromised. Note that if you're using your own server as your origin, and you want to use HTTPS both between viewers and CloudFront and between CloudFront and your origin, you must install a valid SSL certificate on the HTTP server that is signed
by a third-party certificate authority, for example VeriSign or DigiCert.

By default, you can deliver content to viewers over HTTPS by using your CloudFront distribution domain name in your URLs; for example, https://dxxxxx.cloudfront.net/image.jpg. If you want to deliver your content over HTTPS using your own domain name and your own SSL certificate, you can use SNI Custom SSL or Dedicated IP Custom SSL. With Server Name Identification (SNI) Custom SSL, CloudFront relies on the SNI extension of the TLS protocol, which is supported by most modern web browsers. However, some users may not be able to access your content because some older browsers do not support SNI. With Dedicated IP Custom SSL, CloudFront dedicates IP addresses to your SSL certificate at each CloudFront edge location, so that CloudFront can associate the incoming requests with the proper SSL certificate.

Amazon CloudFront access logs contain a comprehensive set of information about requests for content, including the object requested, the date and time of the request, the edge location serving the request, the client IP address, the referrer, and the user agent. To enable access logs, just specify the name of the Amazon S3 bucket to store the logs in when you configure your Amazon CloudFront distribution.

AWS Direct Connect Security
With AWS Direct Connect, you can provision a direct link between your internal network and an AWS region using a high-throughput, dedicated connection. Doing this may help reduce your network costs, improve throughput, or provide a more consistent network experience. With this dedicated connection in place, you can then create virtual interfaces directly to the AWS cloud (for example, to Amazon EC2 and Amazon S3). With AWS Direct Connect, you bypass Internet service providers in your network path. You can procure rack space within the facility housing the AWS Direct Connect location and deploy your equipment nearby. Once
deployed, you can connect this equipment to AWS Direct Connect using a cross connect. Each AWS Direct Connect location enables connectivity to the geographically nearest AWS region. You can access all AWS services available in that region. AWS Direct Connect locations in the US can also access the public endpoints of the other AWS regions using a public virtual interface.

Using industry-standard 802.1q VLANs, the dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources, such as objects stored in Amazon S3 using public IP address space, and private resources, such as Amazon EC2 instances running within an Amazon VPC using private IP space, while maintaining network separation between the public and private environments.

AWS Direct Connect requires the use of the Border Gateway Protocol (BGP) with an Autonomous System Number (ASN). To create a virtual interface, you use an MD5 cryptographic key for message authorization. MD5 creates a keyed hash using your secret key. You can have AWS automatically generate a BGP MD5 key, or you can provide your own.

Further Reading

https://aws.amazon.com/security/security-resources/

Introduction to AWS Security Processes
Overview of AWS Security – Storage Services
Overview of AWS Security – Database Services
Overview of AWS Security – Compute Services
Overview of AWS Security – Application Services
Overview of AWS Security – Analytics, Mobile and Application Services
Overview of AWS Security – Network Services

Appendix – Glossary of Terms

Access Key ID: A string that AWS distributes in order to uniquely identify each AWS user; it is an alphanumeric token associated with your Secret Access Key.

Access control list (ACL): A list of permissions or rules for accessing an object or network resource. In Amazon EC2, security groups act as ACLs at the instance level, controlling which users have
permission to access specific instances. In Amazon S3, you can use ACLs to give read or write access on buckets or objects to groups of users. In Amazon VPC, ACLs act like network firewalls and control access at the subnet level.

AMI: An Amazon Machine Image (AMI) is an encrypted machine image stored in Amazon S3. It contains all the information necessary to boot instances of a customer's software.

API: An Application Programming Interface (API) is an interface in computer science that defines the ways by which an application program may request services from libraries and/or operating systems.

Archive: An archive in Amazon Glacier is a file that you want to store, and is a base unit of storage in Amazon Glacier. It can be any data, such as a photo, video, or document. Each archive has a unique ID and an optional description.

Authentication: Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be. Not only do users need to be authenticated, but every program that wants to call the functionality exposed by an AWS API must be authenticated. AWS requires that you authenticate every request by digitally signing it using a cryptographic hash function.

Auto Scaling: An AWS service that allows customers to automatically scale their Amazon EC2 capacity up or down according to conditions they define.

Availability Zone: Amazon EC2 locations are composed of regions and availability zones. Availability zones are distinct locations that are engineered to be insulated from failures in other availability zones, and provide inexpensive, low-latency network connectivity to other availability zones in the same region.

Bastion host: A computer specifically configured to withstand attack, usually placed on the external/public side of a demilitarized zone (DMZ) or outside the firewall. You can set up an Amazon EC2 instance as an SSH bastion by setting up a
public subnet as part of an Amazon VPC.

Bucket: A container for objects stored in Amazon S3. Every object is contained within a bucket. For example, if the object named photos/puppy.jpg is stored in the johnsmith bucket, then it is addressable using the URL http://johnsmith.s3.amazonaws.com/photos/puppy.jpg.

Certificate: A credential that some AWS products use to authenticate AWS Accounts and users. Also known as an X.509 certificate. The certificate is paired with a private key.

CIDR Block: A Classless Inter-Domain Routing block of IP addresses.

Client-side encryption: Encrypting data on the client side before uploading it to Amazon S3.

CloudFormation: An AWS provisioning tool that lets customers record the baseline configuration of the AWS resources needed to run their applications, so that they can provision and update them in an orderly and predictable fashion.

Cognito: An AWS service that simplifies the task of authenticating users and storing, managing, and syncing their data across multiple devices, platforms, and applications. It works with multiple existing identity providers and also supports unauthenticated guest users.

Credentials: Items that a user or process must have in order to confirm to AWS services, during the authentication process, that they are authorized to access the service. AWS credentials include passwords and secret access keys, as well as X.509 certificates and multi-factor tokens.

Dedicated instance: Amazon EC2 instances that are physically isolated at the host hardware level (i.e., they will run on single-tenant hardware).

Digital signature: A digital signature is a cryptographic method for demonstrating the authenticity of a digital message or document. A valid digital signature gives a recipient reason to believe that the message was created by an authorized sender and that it was not altered in transit. Digital signatures are used by customers for signing requests to AWS APIs
as part of the authentication process.

Direct Connect Service: An Amazon service that allows you to provision a direct link between your internal network and an AWS region using a high-throughput, dedicated connection. With this dedicated connection in place, you can then create logical connections directly to the AWS cloud (for example, to Amazon EC2, Amazon S3, and Amazon VPC), bypassing Internet service providers in the network path.

DynamoDB Service: A managed NoSQL database service from AWS that provides fast and predictable performance with seamless scalability.

EBS: Amazon Elastic Block Store (EBS) provides block-level storage volumes for use with Amazon EC2 instances. Amazon EBS volumes are off-instance storage that persists independently from the life of an instance.

ElastiCache: An AWS web service that allows you to set up, manage, and scale distributed in-memory cache environments in the cloud. The service improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory caching system, instead of relying entirely on slower disk-based databases.

Elastic Beanstalk: An AWS deployment and management tool that automates the functions of capacity provisioning, load balancing, and auto scaling for customers' applications.

Elastic IP Address: A static public IP address that you can assign to any instance in an Amazon VPC, thereby making the instance public. Elastic IP addresses also enable you to mask instance failures by rapidly remapping your public IP addresses to any instance in the VPC.

Elastic Load Balancing: An AWS service that is used to manage traffic on a fleet of Amazon EC2 instances, distributing traffic to instances across all availability zones within a region. Elastic Load Balancing has all the advantages of an on-premises load balancer, plus several security benefits, such as taking over the encryption/decryption work from EC2 instances and managing it centrally on the load balancer.

Elastic MapReduce (EMR) Service:
An AWS service that utilizes a hosted Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3. Elastic MapReduce enables customers to easily and cost-effectively process extremely large quantities of data ("big data").

Elastic Network Interface: Within an Amazon VPC, an Elastic Network Interface is an optional second network interface that you can attach to an EC2 instance. An Elastic Network Interface can be useful for creating a management network, or for using network or security appliances in the Amazon VPC. It can be easily detached from an instance and reattached to another instance.

Endpoint: A URL that is the entry point for an AWS service. To reduce data latency in your applications, most AWS services allow you to select a regional endpoint to make your requests. Some web services allow you to use a general endpoint that doesn't specify a region; these generic endpoints resolve to the service's us-east-1 endpoint. You can connect to an AWS endpoint via HTTP or secure HTTP (HTTPS) using SSL.

Federated users: Users, systems, or applications that are not currently authorized to access your AWS services, but that you want to give temporary access to. This access is provided using the AWS Security Token Service (STS) APIs.

Firewall: A hardware or software component that controls incoming and/or outgoing network traffic according to a specific set of rules. Using firewall rules in Amazon EC2, you specify the protocols, ports, and source IP address ranges that are allowed to reach your instances. These rules specify which incoming network traffic should be delivered to your instance (e.g., accept web traffic on port 80). Amazon VPC supports a complete firewall solution, enabling filtering on both ingress and egress traffic from an instance. The default group enables inbound communication from other members of the same group and outbound communication to any destination. Traffic
can be restricted by any IP protocol, by service port, as well as by source/destination IP address (individual IP or Classless Inter-Domain Routing (CIDR) block).

Guest OS: In a virtual machine environment, multiple operating systems can run on a single piece of hardware. Each one of these instances is considered a guest on the host hardware and utilizes its own OS.

Hash: A cryptographic hash function is used to calculate a digital signature for signing requests to AWS APIs. A cryptographic hash is a one-way function that returns a unique hash value based on the input. The input to the hash function includes the text of your request and your secret access key. The hash function returns a hash value that you include in the request as your signature.

HMAC-SHA1/HMAC-SHA256: In cryptography, a keyed-Hash Message Authentication Code (HMAC or KHMAC) is a type of message authentication code (MAC) calculated using a specific algorithm involving a cryptographic hash function in combination with a secret key. As with any MAC, it may be used to simultaneously verify both the data integrity and the authenticity of a message. Any iterative cryptographic hash function, such as SHA-1 or SHA-256, may be used in the calculation of an HMAC; the resulting MAC algorithm is termed HMAC-SHA1 or HMAC-SHA256, accordingly. The cryptographic strength of the HMAC depends upon the cryptographic strength of the underlying hash function, on the size and quality of the key, and on the size of the hash output length in bits.

Hardware security module (HSM): An HSM is an appliance that provides secure cryptographic key storage and operations within a tamper-resistant hardware device. HSMs are designed to securely store cryptographic key material and use the key material without exposing it outside the cryptographic boundary of the appliance. The AWS CloudHSM service provides customers with dedicated, single-tenant access to an HSM
appliance.

Hypervisor: A hypervisor, also called a Virtual Machine Monitor (VMM), is computer software/hardware platform virtualization software that allows multiple operating systems to run on a host computer concurrently.

Identity and Access Management (IAM): AWS IAM enables you to create multiple users and manage the permissions for each of these users within your AWS Account.

Identity pool: A store of user identity information in Amazon Cognito that is specific to your AWS Account. Identity pools use IAM roles, which are permissions that are not tied to a specific IAM user or group, and that use temporary security credentials for authenticating to the AWS resources defined in the role.

Identity Provider: An online service responsible for issuing identification information for users who would like to interact with the service, or with other cooperating services. Examples of identity providers include Facebook, Google, and Amazon.

Import/Export Service: An AWS service for transferring large amounts of data to Amazon S3 or EBS storage by physically shipping a portable storage device to a secure AWS facility.

Instance: An instance is a virtualized server, also known as a virtual machine (VM), with its own hardware resources and guest OS. In EC2, an instance represents one running copy of an Amazon Machine Image (AMI).

IP address: An Internet Protocol (IP) address is a numerical label that is assigned to devices participating in a computer network that utilizes the Internet Protocol for communication between its nodes.

IP spoofing: The creation of IP packets with a forged source IP address, called spoofing, with the purpose of concealing the identity of the sender or impersonating another computing system.

Key: In cryptography, a key is a parameter that determines the output of a cryptographic algorithm (called a hashing algorithm). A key pair is a set of security credentials you use to prove your identity
electronically, and consists of a public key and a private key.

Key rotation: The process of periodically changing the cryptographic keys used for encrypting data or digitally signing requests. Just like changing passwords, rotating keys minimizes the risk of unauthorized access if an attacker somehow obtains your key or determines its value. AWS supports multiple concurrent access keys and certificates, which allows customers to rotate keys and certificates into and out of operation on a regular basis without any downtime to their application.

Mobile Analytics: An AWS service for collecting, visualizing, and understanding mobile application usage data. It enables you to track customer behaviors, aggregate metrics, and identify meaningful patterns in your mobile applications.

Multi-factor authentication (MFA): The use of two or more authentication factors. Authentication factors include something you know (like a password) or something you have (like a token that generates a random number). AWS IAM allows the use of a six-digit, single-use code in addition to the user name and password credentials. Customers get this single-use code from an authentication device that they keep in their physical possession (either a physical token device or a virtual token from their smart phone).

Network ACLs: Stateless traffic filters that apply to all traffic, inbound or outbound, from a subnet within an Amazon VPC. Network ACLs can contain ordered rules to allow or deny traffic based upon IP protocol, by service port, as well as by source/destination IP address.

Object: The fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The data portion is opaque to Amazon S3. The metadata is a set of name-value pairs that describe the object. These include some default metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type. The developer can also specify custom metadata at the time the object is stored.
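The six-digit, single-use codes described in the MFA entry above are, in virtual MFA implementations, typically generated with the time-based one-time password (TOTP) algorithm of RFC 6238. The following is a minimal sketch using only the Python standard library; the secret shown is the RFC 6238 test key, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 time-based one-time password using HMAC-SHA1.

    secret_b32 is the shared secret in base32, t is a Unix timestamp
    (defaults to now), and each code is valid for one 30-second step.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)  # 64-bit big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)


# RFC 6238 test vector: at t=59 the 8-digit SHA-1 TOTP is 94287082.
rfc_test_secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"  # base32 of "12345678901234567890"
print(totp(rfc_test_secret, t=59, digits=8))  # prints "94287082"
print(totp(rfc_test_secret, t=59))            # prints "287082"
```

Because both the server and the token device derive the code from the same shared secret and the current time, the code proves possession of the device without ever transmitting the secret itself.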
Paravirtualization: In computing, paravirtualization is a virtualization technique that presents a software interface to virtual machines that is similar, but not identical, to that of the underlying hardware.

Peering: A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. Instances in either VPC can communicate with each other as if they were within the same network.

Port scanning: A port scan is a series of messages sent by someone attempting to break into a computer, in order to learn which computer network services, each associated with a "well-known" port number, the computer provides.

Region: A named set of AWS resources in the same geographical area. Each region contains at least two availability zones.

Replication: The continuous copying of data from a database in order to maintain a second version of the database, usually for disaster recovery purposes. Customers can use multiple AZs for their Amazon RDS database replication needs, or use Read Replicas if using MySQL.

Relational Database Service (RDS): An AWS service that allows you to create a relational database (DB) instance and flexibly scale the associated compute resources and storage capacity to meet application demand. Amazon RDS is available for the Amazon Aurora, MySQL, PostgreSQL, Oracle, Microsoft SQL Server, and MariaDB database engines.

Role: An entity in AWS IAM that has a set of permissions that can be assumed by another entity. Use roles to enable applications running on your Amazon EC2 instances to securely access your AWS resources. You grant a specific set of permissions to a role, use the role to launch an Amazon EC2 instance, and let EC2 automatically handle AWS credential management for your applications that run on Amazon EC2.

Route 53: An authoritative DNS system that provides an update mechanism that developers can use to manage their public DNS names, answering DNS queries and
translating domain names into IP addresses so computers can communicate with each other.

Secret Access Key: A key that AWS assigns to you when you sign up for an AWS Account. To make API calls or to work with the command line interface, each AWS user needs the Secret Access Key and Access Key ID. The user signs each request with the Secret Access Key and includes the Access Key ID in the request. To help ensure the security of your AWS Account, the Secret Access Key is accessible only during key and user creation. You must save the key (for example, in a text file that you store securely) if you want to be able to access it again.

Security group: A security group gives you control over the protocols, ports, and source IP address ranges that are allowed to reach your Amazon EC2 instances; in other words, it defines the firewall rules for your instance. These rules specify which incoming network traffic should be delivered to your instance (e.g., accept web traffic on port 80).

Security Token Service (STS): The AWS STS APIs return temporary security credentials consisting of a security token, an Access Key ID, and a Secret Access Key. You can use STS to issue security credentials to users who need temporary access to your resources. These users can be existing IAM users, non-AWS users (federated identities), or systems and applications that need to access your AWS resources.

Server-side encryption (SSE): An option for Amazon S3 storage for automatically encrypting data at rest. With Amazon S3 SSE, customers can encrypt data on upload simply by adding an additional request header when writing the object. Decryption happens automatically when data is retrieved.

Service: Software or computing ability provided across a network (e.g., Amazon EC2, Amazon S3).

Shard: In Amazon Kinesis, a shard is a uniquely identified group of data records in an Amazon Kinesis stream. A Kinesis stream is composed of multiple shards, each
of which provides a fixed unit of capacity.

Signature: Refers to a digital signature, which is a mathematical way to confirm the authenticity of a digital message. AWS uses signatures calculated with a cryptographic algorithm and your private key to authenticate the requests you send to our web services.

Simple Data Base (SimpleDB): A non-relational data store that allows AWS customers to store and query data items via web services requests. Amazon SimpleDB creates and manages multiple geographically distributed replicas of the customer's data automatically, to enable high availability and data durability.

Simple Email Service (SES): An AWS service that provides a scalable bulk and transactional email-sending service for businesses and developers. In order to maximize deliverability and dependability for senders, Amazon SES takes proactive steps to prevent questionable content from being sent, so that ISPs view the service as a trusted email origin.

Simple Mail Transfer Protocol (SMTP): An Internet standard for transmitting email across IP networks. SMTP is used by the Amazon Simple Email Service. Customers who use Amazon SES can use an SMTP interface to send email, but must connect to an SMTP endpoint via TLS.

Simple Notification Service (SNS): An AWS service that makes it easy to set up, operate, and send notifications from the cloud. Amazon SNS provides developers with the ability to publish messages from an application and immediately deliver them to subscribers or other applications.

Simple Queue Service (SQS): A scalable message-queuing service from AWS that enables asynchronous, message-based communication between distributed components of an application. The components can be computers or Amazon EC2 instances, or a combination of both.

Simple Storage Service (Amazon S3): An AWS service that provides secure storage for object files. Access to objects can be controlled at the file or
bucket level, and can be further restricted based on other conditions, such as request IP source, request time, etc. Files can also be encrypted automatically using AES-256 encryption.

Simple Workflow Service (SWF): An AWS service that allows customers to build applications that coordinate work across distributed components. Using Amazon SWF, developers can structure the various processing steps in an application as "tasks" that drive work in distributed applications. Amazon SWF coordinates these tasks, managing task execution dependencies, scheduling, and concurrency based on a developer's application logic.

Single sign-on: The capability to log in once but access multiple applications and systems. A secure single sign-on capability can be provided to your federated users (AWS and non-AWS users) by creating a URL that passes the temporary security credentials to the AWS Management Console.

Snapshot: A customer-initiated backup of an EBS volume that is stored in Amazon S3, or a customer-initiated backup of an RDS database that is stored in Amazon RDS. A snapshot can be used as the starting point for a new EBS volume or Amazon RDS database, or to protect the data for long-term durability and recovery.

Secure Sockets Layer (SSL): A cryptographic protocol that provides security over the Internet at the Application Layer. Both the TLS 1.0 and SSL 3.0 protocol specifications use cryptographic mechanisms to implement the security services that establish and maintain a secure TCP/IP connection. The secure connection prevents eavesdropping, tampering, or message forgery. You can connect to an AWS endpoint via HTTP or secure HTTP (HTTPS) using SSL.

Stateful firewall: In computing, a stateful firewall (any firewall that performs stateful packet inspection (SPI) or stateful inspection) is a firewall that keeps track of the state of network connections (such as TCP streams or UDP communication) traveling across it.
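As the SSL entry above notes, AWS endpoints can be reached over HTTPS. The following is a minimal sketch, using Python's standard ssl module, of how a client would prepare a TLS context with certificate and hostname verification enabled before connecting to such an endpoint; the endpoint name in the comment is illustrative, and no network connection is made here.

```python
import ssl

# create_default_context() applies secure client defaults: certificate
# verification and hostname checking are on, and legacy SSLv2/SSLv3 are off.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # prints "True"
print(context.check_hostname)                    # prints "True"

# To talk to an HTTPS endpoint, you would wrap a TCP socket, for example:
# with socket.create_connection(("s3.amazonaws.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="s3.amazonaws.com") as tls:
#         ...  # the handshake has verified the server's certificate chain
```

Verifying both the certificate chain and the hostname is what prevents the eavesdropping and tampering attacks the SSL entry describes; disabling either check silently removes that protection.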
Storage Gateway: An AWS service that securely connects a customer's on-premises software appliance with Amazon S3 storage by using a VM that the customer deploys on a host in their data center running the VMware ESXi Hypervisor. Data is asynchronously transferred from the customer's on-premises storage hardware to AWS over SSL, and then stored encrypted in Amazon S3 using AES-256.

Temporary security credentials: AWS credentials that provide temporary access to AWS services. Temporary security credentials can be used to provide identity federation between AWS services and non-AWS users in your own identity and authorization system. Temporary security credentials consist of a security token, an Access Key ID, and a Secret Access Key.

Transcoder: A system that transcodes (converts) a media file (audio or video) from one format, size, or quality to another. Amazon Elastic Transcoder makes it easy for customers to transcode video files in a scalable and cost-effective fashion.

Transport Layer Security (TLS): A cryptographic protocol that provides security over the Internet at the Application Layer. Customers who use Amazon's Simple Email Service must connect to an SMTP endpoint via TLS.

Tree hash: A tree hash is generated by computing a hash for each megabyte-sized segment of the data, and then combining the hashes in tree fashion to represent ever-growing adjacent segments of the data. Amazon Glacier checks the hash against the data to help ensure that it has not been altered en route.

Vault: In Amazon Glacier, a vault is a container for storing archives. When you create a vault, you specify a name and select an AWS region where you want to create the vault. Each vault resource has a unique address.

Versioning: Every object in Amazon S3 has a key and a version ID. Objects with the same key, but different version IDs, can be stored in the same bucket. Versioning is enabled at the bucket layer using PUT
Bucket versioning.

Virtual Instance: Once an AMI has been launched, the resulting running system is referred to as an instance. All instances based on the same AMI start out identical, and any information on them is lost when the instances are terminated or fail.

Virtual MFA: The capability for a user to get the six-digit, single-use MFA code from their smart phone rather than from a token/fob. MFA is the use of an additional factor (the single-use code) in conjunction with a user name and password for authentication.

Virtual Private Cloud (VPC): An AWS service that enables customers to provision an isolated section of the AWS cloud, including selecting their own IP address range, defining subnets, and configuring routing tables and network gateways.

Virtual Private Network (VPN): The capability to create a private, secure network between two locations over a public network, such as the Internet. AWS customers can add an IPsec VPN connection between their Amazon VPC and their data center, effectively extending their data center to the cloud, while also providing direct access to the Internet for public subnet instances in their Amazon VPC. In this configuration, customers add a VPN appliance on their corporate data center side.

WorkSpaces: An AWS managed desktop service that enables you to provision cloud-based desktops for your users and allows them to sign in using a set of unique credentials or their regular Active Directory credentials.

X.509: In cryptography, X.509 is a standard for a Public Key Infrastructure (PKI) for single sign-on and Privilege Management Infrastructure (PMI). X.509 specifies standard formats for public key certificates, certificate revocation lists, attribute certificates, and a certification path validation algorithm. Some AWS products use X.509 certificates instead of a Secret Access Key for access to certain interfaces. For example, Amazon EC2 uses a Secret Access Key for access to its Query interface, but it uses a signing certificate for access to its SOAP
interface and command line tool interface.

WorkDocs: An AWS managed enterprise storage and sharing service with feedback capabilities for user collaboration.

Document Revisions

Jun 2016
• Updated compliance programs
• Updated regions

Nov 2014
• Updated compliance programs
• Updated shared security responsibility model
• Updated AWS Account security features
• Reorganized services into categories
• Updated several services with new features: CloudWatch, CloudTrail, CloudFront, EBS, ElastiCache, Redshift, Route 53, S3, Trusted Advisor, and WorkSpaces
• Added Cognito Security
• Added Mobile Analytics Security
• Added WorkDocs Security

Nov 2013
• Updated regions
• Updated several services with new features: CloudFront, Direct Connect, DynamoDB, EBS, ELB, EMR, Amazon Glacier, IAM, OpsWorks, RDS, Redshift, Route 53, Storage Gateway, and VPC
• Added AppStream Security
• Added CloudTrail Security
• Added Kinesis Security
• Added WorkSpaces Security

May 2013
• Updated IAM to incorporate roles and API access
• Updated MFA for API access for customer-specified privileged actions
• Updated RDS to add event notification, multi-AZ, and SSL to SQL Server 2012
• Updated VPC to add multiple IP addresses, static routing, VPN, and VPC By Default
• Updated several other services with new features: CloudFront, CloudWatch, EBS, ElastiCache, Elastic Beanstalk, Route 53, S3, Storage Gateway
• Added Glacier Security
• Added Redshift Security
• Added Data Pipeline Security
• Added Transcoder Security
• Added Trusted Advisor Security
• Added OpsWorks Security
• Added CloudHSM Security

Introduction to AWS Security, January 2020

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a)
is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Security of the AWS Infrastructure
Security Products and Features
Infrastructure Security
Inventory and Configuration Management
Data Encryption
Identity and Access Control
Monitoring and Logging
Security Products in AWS Marketplace
Security Guidance
Compliance
Further Reading
Document Revisions

Abstract

Amazon Web Services (AWS) delivers a scalable cloud computing platform designed for high availability and dependability, providing the tools that enable you to run a wide range of applications. Helping to protect the confidentiality, integrity, and availability of your systems and data is of the utmost importance to AWS, as is maintaining your trust and confidence. This document is intended to provide an introduction to AWS's approach to security, including the controls in the AWS environment and some of the products and features that AWS makes available to customers to meet your security objectives.

Security of the AWS Infrastructure

The AWS infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today. It is designed to provide an extremely scalable, highly reliable platform that enables customers to deploy applications and data quickly and securely. This infrastructure is built and managed not only according to security best practices
and standards, but also with the unique needs of the cloud in mind. AWS uses redundant and layered controls, continuous validation and testing, and a substantial amount of automation to ensure that the underlying infrastructure is monitored and protected 24x7. AWS ensures that these controls are replicated in every new data center or service.

All AWS customers benefit from a data center and network architecture built to satisfy the requirements of our most security-sensitive customers. This means that you get a resilient infrastructure designed for high security, without the capital outlay and operational overhead of a traditional data center.

AWS operates under a shared security responsibility model, where AWS is responsible for the security of the underlying cloud infrastructure and you are responsible for securing workloads you deploy in AWS (Figure 1). This gives you the flexibility and agility you need to implement the most applicable security controls for your business functions in the AWS environment. You can tightly restrict access to environments that process sensitive data, or deploy less stringent controls for information you want to make public.

Figure 1: AWS Shared Security Responsibility Model

Security Products and Features

AWS and its partners offer a wide range of tools and features to help you meet your security objectives. These tools mirror the familiar controls you deploy within your on-premises environments. AWS provides security-specific tools and features across network security, configuration management, access control, and data security. In addition, AWS provides monitoring and logging tools that can provide full visibility into what is happening in your environment.

Infrastructure Security

AWS provides several security capabilities and services to increase privacy and control network access. These include:

• Network firewalls built into Amazon VPC that let you create private networks and control access to
your instances or applications. Customers can control encryption in transit with TLS across AWS services.
• Connectivity options that enable private or dedicated connections from your office or on-premises environment.
• DDoS mitigation technologies that apply at layer 3 or 4, as well as layer 7. These can be applied as part of application and content delivery strategies.
• Automatic encryption of all traffic on the AWS global and regional networks between AWS-secured facilities.

Inventory and Configuration Management
AWS offers a range of tools to allow you to move fast, while still enabling you to ensure that your cloud resources comply with organizational standards and best practices. These include:
• Deployment tools to manage the creation and decommissioning of AWS resources according to organization standards.
• Inventory and configuration management tools to identify AWS resources and then track and manage changes to those resources over time.
• Template definition and management tools to create standard, preconfigured, hardened virtual machines for EC2 instances.

Data Encryption
AWS offers you the ability to add a layer of security to your data at rest in the cloud, providing scalable and efficient encryption features. These include:
• Data-at-rest encryption capabilities available in most AWS services, such as Amazon EBS, Amazon S3, Amazon RDS, Amazon Redshift, Amazon ElastiCache, AWS Lambda, and Amazon SageMaker.
• Flexible key management options, including AWS Key Management Service, that allow you to choose whether to have AWS manage the encryption keys or to keep complete control over your own keys.
• Dedicated, hardware-based cryptographic key storage using AWS CloudHSM, allowing you to help satisfy your compliance requirements.
• Encrypted message queues for the transmission of sensitive data using server-side encryption (SSE) for Amazon SQS.

In addition, AWS provides APIs for you to integrate encryption and
data protection with any of the services you develop or deploy in an AWS environment.

Identity and Access Control
AWS offers you capabilities to define, enforce, and manage user access policies across AWS services. These include:
• AWS Identity and Access Management (IAM), which lets you define individual user accounts with permissions across AWS resources, and AWS Multi-Factor Authentication for privileged accounts, including options for software- and hardware-based authenticators. IAM can be used to grant your employees and applications federated access to the AWS Management Console and AWS service APIs, using your existing identity systems such as Microsoft Active Directory or other partner offerings.
• AWS Directory Service, which allows you to integrate and federate with corporate directories to reduce administrative overhead and improve end-user experience.
• AWS Single Sign-On (AWS SSO), which allows you to centrally manage SSO access and user permissions to all of your accounts in AWS Organizations.

AWS provides native identity and access management integration across many of its services, plus API integration with any of your own applications or services.

Monitoring and Logging
AWS provides tools and features that enable you to see what's happening in your AWS environment. These include:
• AWS CloudTrail, which lets you monitor your AWS deployments in the cloud by getting a history of AWS API calls for your account, including API calls made via the AWS Management Console, the AWS SDKs, the command line tools, and higher-level AWS services. You can also identify which users and accounts called AWS APIs for services that support CloudTrail, the source IP address the calls were made from, and when the calls occurred.
• Amazon CloudWatch, which provides a reliable, scalable, and flexible monitoring solution that you can start using within minutes. You no longer need to set up, manage, and scale your own monitoring systems and infrastructure.
• Amazon GuardDuty is a
threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads. Amazon GuardDuty exposes notifications via Amazon CloudWatch, so you can trigger an automated response or notify a human.

These tools and features give you the visibility you need to spot issues before they impact the business, and allow you to improve your security posture and reduce the risk profile of your environment.

Security Products in AWS Marketplace
Moving production workloads to AWS can enable organizations to improve agility, scalability, innovation, and cost savings, while maintaining a secure environment. AWS Marketplace offers industry-leading security products that are equivalent or identical to, or integrate with, existing controls in your on-premises environments. These products complement the existing AWS services to enable you to deploy a comprehensive security architecture and a more seamless experience across your cloud and on-premises environments.

Security Guidance
AWS provides customers with guidance and expertise through online tools, resources, support, and professional services provided by AWS and its partners.

AWS Trusted Advisor is an online tool that acts like a customized cloud expert, helping you to configure your resources to follow best practices. Trusted Advisor inspects your AWS environment to help close security gaps, and finds opportunities to save money, improve system performance, and increase reliability.

AWS Account Teams provide a first point of contact, guiding you through your deployment and implementation and pointing you toward the right resources to resolve security issues you may encounter.

AWS Enterprise Support provides 15-minute response time and is available 24×7 by phone, chat, or email, along with a dedicated Technical Account Manager. This concierge service ensures that customers' issues are addressed as swiftly as possible.

AWS Partner
Network offers hundreds of industry-leading products that are equivalent or identical to, or integrate with, existing controls in your on-premises environments, as well as hundreds of certified AWS Consulting Partners worldwide to help with your security and compliance needs.

AWS Professional Services houses a Security, Risk and Compliance specialty practice to help you develop confidence and technical capability when migrating your most sensitive workloads to the AWS Cloud. AWS Professional Services helps customers develop security policies and practices based on well-proven designs, and helps ensure that customers' security design meets internal and external compliance requirements.

AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that makes it easy to find, test, buy, and deploy software that runs on AWS. AWS Marketplace security products complement the existing AWS services to enable you to deploy a comprehensive security architecture and a more seamless experience across your cloud and on-premises environments.

AWS Security Bulletins provide security bulletins around current vulnerabilities and threats, and enable customers to work with AWS security experts to address concerns like reporting abuse, vulnerabilities, and penetration testing. We also have online resources for vulnerability reporting.

AWS Security Documentation shows how to configure AWS services to meet your security and compliance objectives. AWS customers benefit from a data center and network architecture that are built to meet the requirements of the most security-sensitive organizations.

AWS Well-Architected Framework helps cloud architects build secure, high-performing, resilient, and efficient
infrastructure for their applications. The AWS Well-Architected Framework includes a security pillar that focuses on protecting information and systems. Key topics include confidentiality and integrity of data, identifying and managing who can do what with privilege management, protecting systems, and establishing controls to detect security events. Customers can use the Well-Architected service from the console, or engage the services of one of the APN Partners to assist them.

AWS Well-Architected Tool helps you review the state of your workloads and compares them to the latest AWS architectural best practices. This free tool is available in the AWS Management Console; after you answer a set of questions regarding operational excellence, security, reliability, performance efficiency, and cost optimization, the AWS Well-Architected Tool provides a plan on how to architect for the cloud using established best practices.

Compliance
AWS Compliance empowers customers to understand the robust controls in place at AWS to maintain security and data protection in the AWS Cloud. When systems are built in the AWS Cloud, AWS and customers share compliance responsibilities. AWS computing environments are continuously audited, with certifications from accreditation bodies across geographies and verticals, including SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70), SOC 2, SOC 3, ISO 9001 / ISO 27001, FedRAMP, DoD SRG, and PCI DSS Level 1. Additionally, AWS has assurance programs that provide templates and control mappings to help customers establish the compliance of their environments running on AWS; for a full list of programs, see AWS Compliance Programs.

We can confirm that all AWS services can be used in compliance with the GDPR. This means that, in addition to benefiting from all of the measures that AWS already takes to maintain service security, customers can deploy AWS services as a part of their GDPR compliance plans. AWS offers a GDPR-compliant Data Processing Addendum (GDPR DPA), enabling
you to comply with GDPR contractual obligations. The AWS GDPR DPA is incorporated into the AWS Service Terms and applies automatically to all customers globally who require it to comply with the GDPR.

Amazon.com, Inc. is certified under the EU-US Privacy Shield, and AWS is covered under this certification. This helps customers who choose to transfer personal data to the US to meet their data protection obligations. Amazon.com, Inc.'s certification can be found on the EU-US Privacy Shield website: https://www.privacyshield.gov/list

By operating in an accredited environment, customers reduce the scope and cost of audits they need to perform. AWS continuously undergoes assessments of its underlying infrastructure, including the physical and environmental security of its hardware and data centers, so customers can take advantage of those certifications and simply inherit those controls.

In a traditional data center, common compliance activities are often manual, periodic activities. These activities include verifying asset configurations and reporting on administrative activities. Moreover, the resulting reports are out of date before they are even published. Operating in an AWS environment allows customers to take advantage of embedded, automated tools like AWS Security Hub, AWS Config, and AWS CloudTrail for validating compliance. These tools reduce the effort needed to perform audits, since these tasks become routine, ongoing, and automated. By spending less time on manual activities, you can help evolve the role of compliance in your company from a necessary administrative burden to one that manages your risk and improves your security posture.

Further Reading
For additional information, see the following resources:

For information on … | See
Key topics, research areas, and training opportunities for cloud security on AWS | AWS Cloud Security Learning
The AWS Cloud Adoption Framework, which organizes guidance into six areas of focus:
Business, People, Governance, Platform, Security, and Operations | AWS Cloud Adoption Framework
Specific controls in place at AWS; how to integrate AWS into your existing framework | Amazon Web Services: Risk and Compliance
Best practices guidance on how to deploy security controls within an AWS environment | AWS Security Best Practices
AWS Well-Architected Framework security pillar | AWS Well-Architected Framework Security Pillar

Document Revisions
Date | Description
January 2020 | Updated for latest services, resources, and technologies
July 2015 | First publication,General,consultant,Best Practices
Introduction_to_DevOps_on_AWS,"Introduction to DevOps on AWS

October 2020

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only; (b) represents current AWS product offerings and practices, which are subject to change without notice; and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. © 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
Continuous Integration
  AWS CodeCommit
  AWS CodeBuild
  AWS CodeArtifact
Continuous Delivery
  AWS CodeDeploy
  AWS CodePipeline
Deployment Strategies
  Blue/Green Deployments
  Canary Deployments
  Linear Deployments
  All-at-once Deployments
  Deployment Strategies Matrix
  AWS Elastic Beanstalk Deployment Strategies
Infrastructure as Code
  AWS CloudFormation
  AWS Cloud Development Kit
  AWS Cloud Development Kit for Kubernetes
Automation
  AWS OpsWorks
  AWS Elastic Beanstalk
Monitoring and Logging
  Amazon CloudWatch Metrics
  Amazon CloudWatch Alarms
  Amazon CloudWatch Logs
  Amazon CloudWatch Logs Insights
  Amazon CloudWatch Events
  Amazon EventBridge
  AWS CloudTrail
Communication and Collaboration
  Two-Pizza Teams
Security
  AWS Shared Responsibility Model
  Identity Access Management
Conclusion
Contributors
Document Revisions

Abstract
Today more than ever, enterprises are embarking on their digital transformation journey to build deeper connections with their customers and to achieve sustainable and enduring business value. Organizations of all shapes and sizes are disrupting their competitors and entering new markets by innovating more quickly than ever before. For these organizations, it is important to focus on innovation and software disruption, making it critical to streamline their software delivery. Organizations that shorten their time from idea to production, making speed and agility a priority, could be tomorrow's disruptors. While there are several factors to consider in becoming the next digital disruptor, this whitepaper focuses on DevOps and the services and features in the AWS platform that help increase an organization's ability to deliver applications and services at a high velocity.

Amazon Web Services Introduction to DevOps on AWS Page 1

Introduction
DevOps is the combination of cultural and engineering practices, patterns, and tools that increase an organization's ability to deliver applications and services at high velocity and better quality. Over time, several essential practices have emerged when adopting DevOps: Continuous Integration, Continuous Delivery, Infrastructure as Code, and Monitoring and Logging. This paper highlights AWS capabilities that help you accelerate your DevOps journey, and how AWS services can help remove the undifferentiated heavy lifting associated with DevOps adoption. We also highlight how to build a continuous integration
and delivery capability without managing servers or build nodes, and how to leverage Infrastructure as Code to provision and manage your cloud resources in a consistent and repeatable manner.

• Continuous Integration: a software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run.
• Continuous Delivery: a software development practice where code changes are automatically built, tested, and prepared for a release to production.
• Infrastructure as Code: a practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration.
• Monitoring and Logging: enables organizations to see how application and infrastructure performance impacts the experience of their product's end user.
• Communication and Collaboration: practices established to bring teams closer, by building workflows and distributing the responsibilities for DevOps.
• Security: should be a cross-cutting concern. Your continuous integration and continuous delivery (CI/CD) pipelines and related services should be safeguarded, and proper access control permissions should be set up.

An examination of each of these principles reveals a close connection to the offerings available from Amazon Web Services (AWS).

Continuous Integration
Continuous Integration (CI) is a software development practice where developers regularly merge their code changes into a central code repository, after which automated builds and tests are run. CI helps find and address bugs quicker, improve software quality, and reduce the time it takes to validate and release new software updates.

AWS offers the following three services for continuous integration:

AWS CodeCommit
AWS CodeCommit is a secure, highly scalable, managed source control service that hosts private Git repositories. CodeCommit
eliminates the need for you to operate your own source control system, and there is no hardware to provision and scale or software to install, configure, and operate. You can use CodeCommit to store anything from code to binaries, and it supports the standard functionality of Git, allowing it to work seamlessly with your existing Git-based tools. Your team can also use CodeCommit's online code tools to browse, edit, and collaborate on projects.

AWS CodeCommit has several benefits:

Collaboration. AWS CodeCommit is designed for collaborative software development. You can easily commit, branch, and merge your code, enabling you to easily maintain control of your team's projects. CodeCommit also supports pull requests, which provide a mechanism to request code reviews and discuss code with collaborators.

Encryption. You can transfer your files to and from AWS CodeCommit using HTTPS or SSH, as you prefer. Your repositories are also automatically encrypted at rest through AWS Key Management Service (AWS KMS) using customer-specific keys.

Access Control. AWS CodeCommit uses AWS Identity and Access Management (IAM) to control and monitor who can access your data, as well as how, when, and where they can access it. CodeCommit also helps you monitor your repositories through AWS CloudTrail and Amazon CloudWatch.

High Availability and Durability. AWS CodeCommit stores your repositories in Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB. Your encrypted data is redundantly stored across multiple facilities. This architecture increases the availability and durability of your repository data.

Notifications and Custom Scripts. You can now receive notifications for events impacting your repositories. Notifications come in the form of Amazon Simple Notification Service (Amazon SNS) notifications. Each notification includes a status message, as well as a link to the resources whose event generated that notification. Additionally, using AWS
CodeCommit repository triggers, you can send notifications and create HTTP webhooks with Amazon SNS, or invoke AWS Lambda functions in response to the repository events you choose.

AWS CodeBuild
AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. You don't need to provision, manage, and scale your own build servers. CodeBuild can use GitHub, GitHub Enterprise, Bitbucket, AWS CodeCommit, or Amazon S3 as a source provider.

CodeBuild scales continuously and can process multiple builds concurrently. CodeBuild offers various preconfigured environments for various versions of Windows and Linux. Customers can also bring their own customized build environments as Docker containers. CodeBuild also integrates with open source tools such as Jenkins and Spinnaker.

CodeBuild can also create reports for unit, functional, or integration tests. These reports provide a visual view of how many test cases were executed, and how many passed or failed. The build process can also be executed inside an Amazon Virtual Private Cloud (Amazon VPC), which can be helpful if your integration services or databases are deployed inside a VPC.

With AWS CodeBuild, your build artifacts are encrypted with customer-specific keys that are managed by AWS KMS. CodeBuild is integrated with IAM, so you can assign user-specific permissions to your build projects.

AWS CodeArtifact
AWS CodeArtifact is a fully managed artifact repository service that organizations can use to securely store, publish, and share software packages used in their software development process. CodeArtifact can be configured to automatically fetch software packages and dependencies from public artifact repositories, so developers have access to the latest versions.

Software development teams are increasingly relying on open source packages to perform common tasks in their applications.
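As an illustration of how CodeBuild is driven, the following is a minimal `buildspec.yml` sketch for a hypothetical Python project; the runtime version, commands, and report path here are assumptions for the example, not prescriptions:

```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      python: 3.9                # assumed runtime for this example
  build:
    commands:
      - pip install -r requirements.txt
      - python -m pytest --junitxml=reports/unit.xml   # emit test results
reports:
  unit-tests:                    # arbitrary report group name
    files:
      - reports/unit.xml
    file-format: JUNITXML        # lets CodeBuild render pass/fail counts
artifacts:
  files:
    - '**/*'                     # package the whole workspace as the build artifact
```

CodeBuild reads this file from the root of the source it is pointed at (CodeCommit, GitHub, Bitbucket, or S3), runs the phases in order, and surfaces the JUnit report in its test reports view.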
It has now become critical for software development teams to maintain control over which version of an open source package they use and to ensure it is free of vulnerabilities; with CodeArtifact, you can set up controls to enforce this. CodeArtifact works with commonly used package managers and build tools like Maven, Gradle, npm, yarn, twine, and pip, making it easy to integrate into existing development workflows.

Continuous Delivery
Continuous delivery is a software development practice where code changes are automatically prepared for a release to production. A pillar of modern application development, continuous delivery expands upon continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When properly implemented, developers will always have a deployment-ready build artifact that has passed through a standardized test process.

Continuous delivery lets developers automate testing beyond just unit tests, so they can verify application updates across multiple dimensions before deploying to customers. These tests may include UI testing, load testing, integration testing, API reliability testing, etc. This helps developers more thoroughly validate updates and preemptively discover issues. With the cloud, it is easy and cost-effective to automate the creation and replication of multiple environments for testing, which was previously difficult to do on-premises.

AWS offers the following services for continuous delivery:
• AWS CodeBuild
• AWS CodeDeploy
• AWS CodePipeline

AWS CodeDeploy
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services, such as Amazon Elastic Compute Cloud (Amazon EC2), AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating
your applications. You can use CodeDeploy to automate software deployments, eliminating the need for error-prone manual operations. The service scales to match your deployment needs.

CodeDeploy has several benefits that align with the DevOps principle of continuous deployment:

Automated Deployments. CodeDeploy fully automates software deployments, allowing you to deploy reliably and rapidly.

Centralized control. CodeDeploy enables you to easily launch and track the status of your application deployments through the AWS Management Console or the AWS CLI. CodeDeploy gives you a detailed report, enabling you to view when and to where each application revision was deployed. You can also create push notifications to receive live updates about your deployments.

Minimize downtime. CodeDeploy helps maximize your application availability during the software deployment process. It introduces changes incrementally and tracks application health according to configurable rules. Software deployments can easily be stopped and rolled back if there are errors.

Easy to adopt. CodeDeploy works with any application and provides the same experience across different platforms and languages. You can easily reuse your existing setup code. CodeDeploy can also integrate with your existing software release process or continuous delivery toolchain (e.g., AWS CodePipeline, GitHub, Jenkins).

AWS CodeDeploy supports multiple deployment options. For more information, see Deployment Strategies.

AWS CodePipeline
AWS CodePipeline is a continuous delivery service that enables you to model, visualize, and automate the steps required to release your software. With AWS CodePipeline, you model the full release process for building your code, deploying to preproduction environments, testing your application, and releasing it to production. AWS CodePipeline then builds, tests, and deploys your application according to the defined workflow every time there is a code change. You can integrate partner tools and your own custom tools into
any stage of the release process to form an end-to-end continuous delivery solution.

AWS CodePipeline has several benefits that align with the DevOps principle of continuous deployment:

Rapid Delivery. AWS CodePipeline automates your software release process, allowing you to rapidly release new features to your users. With CodePipeline, you can quickly iterate on feedback and get new features to your users faster.

Improved Quality. By automating your build, test, and release processes, AWS CodePipeline enables you to increase the speed and quality of your software updates by running all new changes through a consistent set of quality checks.

Easy to Integrate. AWS CodePipeline can easily be extended to adapt to your specific needs. You can use the pre-built plugins, or your own custom plugins, in any step of your release process. For example, you can pull your source code from GitHub, use your on-premises Jenkins build server, run load tests using a third-party service, or pass on deployment information to your custom operations dashboard.

Configurable Workflow. AWS CodePipeline enables you to model the different stages of your software release process using the console interface, the AWS CLI, AWS CloudFormation, or the AWS SDKs. You can easily specify the tests to run and customize the steps to deploy your application and its dependencies.

Deployment Strategies
Deployment strategies define how you want to deliver your software. Organizations follow different deployment strategies based on their business model. Some may choose to deliver software that is fully tested, and others may want their users to provide feedback and let their users evaluate under-development features (e.g., beta releases). In the following section, we discuss various deployment strategies.

In-Place Deployments
In this strategy, the deployment is done in place: the application on each instance in the deployment group is stopped, the latest application revision is
installed, and the new version of the application is started and validated. You can use a load balancer so that each instance is deregistered during its deployment, and then restored to service after the deployment is complete. In-place deployments can be done all at once (assuming a service outage) or as a rolling update. AWS CodeDeploy and AWS Elastic Beanstalk offer deployment configurations for one at a time, half at a time, and all at once. These same deployment strategies for in-place deployments are available within blue/green deployments.

Blue/Green Deployments
Blue/green, sometimes referred to as red/black deployment, is a technique for releasing applications by shifting traffic between two identical environments running different versions of the application. Blue/green deployments help you minimize downtime during application updates, mitigating risks surrounding downtime and rollback functionality. Blue/green deployments enable you to launch a new version (green) of your application alongside the old version (blue), and monitor and test the new version before you reroute traffic to it, rolling back on issue detection.

Canary Deployments
In a canary deployment, traffic is shifted in two increments. A canary deployment is a blue/green strategy that is more risk-averse, in which a phased approach is used. This can be two-step or linear: new application code is deployed and exposed for trial, and upon acceptance is rolled out either to the rest of the environment or in a linear fashion.

Linear Deployments
In a linear deployment, traffic is shifted in equal increments, with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic shifted in each increment and the number of minutes between each increment.

All-at-once Deployments
In an all-at-once deployment, all traffic is shifted from the original environment to the replacement environment at once.

Deployment Strategies Matrix
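To make the difference between the strategies described above concrete, here is a small, self-contained Python sketch (an illustrative model, not an AWS API) of the cumulative traffic shift produced by linear and canary schedules, in the spirit of CodeDeploy configurations such as `Linear10PercentEvery1Minute` and `Canary10Percent5Minutes`:

```python
def linear_schedule(step_percent, interval_minutes):
    """Linear deployment: traffic moves in equal increments, equal minutes
    apart, until 100% of traffic points at the new version.
    Returns a list of (minute, cumulative-traffic-%) pairs."""
    schedule, shifted, minute = [], 0, 0
    while shifted < 100:
        shifted = min(100, shifted + step_percent)
        schedule.append((minute, shifted))
        minute += interval_minutes
    return schedule

def canary_schedule(canary_percent, bake_minutes):
    """Canary deployment: traffic moves in two increments -- a small trial
    slice first, then the remainder once the bake period has passed."""
    return [(0, canary_percent), (bake_minutes, 100)]

# e.g. 10% per minute, in the spirit of Linear10PercentEvery1Minute
print(linear_schedule(10, 1)[:3])      # [(0, 10), (1, 20), (2, 30)]
print(canary_schedule(10, 5))          # [(0, 10), (5, 100)]
```

An all-at-once deployment is the degenerate case `linear_schedule(100, 0)`, which shifts everything in a single step.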
The following matrix lists the supported deployment strategies for Amazon Elastic Container Service (Amazon ECS), AWS Lambda, and Amazon EC2/On-Premises:
• Amazon ECS is a fully managed container orchestration service.
• AWS Lambda lets you run code without provisioning or managing servers.
• Amazon EC2 enables you to run secure, resizable compute capacity in the cloud.

Deployment Strategies Matrix | Amazon ECS | AWS Lambda | Amazon EC2/On-Premises
In-Place | ✓ | ✓ | ✓
Blue/Green | ✓ | ✓ | ✓*
Canary | ✓ | ✓ | ☓
Linear | ✓ | ✓ | ☓
All-at-Once | ✓ | ✓ | ☓

*Note: Blue/green deployment with EC2/On-Premises only works with EC2 instances.

AWS Elastic Beanstalk Deployment Strategies
AWS Elastic Beanstalk supports the following types of deployment strategies:
• All-at-Once: Performs in-place deployment on all instances.
• Rolling: Splits the instances into batches and deploys to one batch at a time.
• Rolling with Additional Batch: Splits the deployments into batches, but for the first batch creates new EC2 instances instead of deploying on the existing EC2 instances.
• Immutable: Deploys to new instances instead of using existing instances.
• Traffic Splitting: Performs an immutable deployment, and then forwards a percentage of traffic to the new instances for a predetermined duration of time. If the instances stay healthy, all traffic is forwarded to the new instances and the old instances are terminated.

Infrastructure as Code
A fundamental principle of DevOps is to treat infrastructure the same way developers treat code. Application code has a defined format and syntax. If the code is not written according to the rules of the programming language, applications cannot be created. Code is stored in a version management or source control system that logs a history of code development, changes, and bug fixes. When code is compiled or built into applications, we expect a consistent application to be created, and the build is
repeatable and reliable. Practicing infrastructure as code means applying the same rigor of application code development to infrastructure provisioning. All configurations should be defined in a declarative way and stored in a source control system, such as AWS CodeCommit, the same as application code. Infrastructure provisioning, orchestration, and deployment should also support the use of infrastructure as code.

Infrastructure was traditionally provisioned using a combination of scripts and manual processes. Sometimes these scripts were stored in version control systems or documented step by step in text files or run books. Often the person writing the run books is not the same person executing these scripts or following through the run books. If these scripts or run books are not updated frequently, they can potentially become a showstopper in deployments. The result is that the creation of new environments is not always repeatable, reliable, or consistent.

In contrast, AWS provides a DevOps-focused way of creating and maintaining infrastructure. Similar to the way software developers write application code, AWS provides services that enable the creation, deployment, and maintenance of infrastructure in a programmatic, descriptive, and declarative way. These services provide rigor, clarity, and reliability. The AWS services discussed in this paper are core to a DevOps methodology and form the underpinnings of numerous higher-level AWS DevOps principles and practices.

AWS offers the following services to define infrastructure as code:
• AWS CloudFormation
• AWS Cloud Development Kit (AWS CDK)
• AWS Cloud Development Kit for Kubernetes

AWS CloudFormation
AWS CloudFormation is a service that enables developers to create AWS resources in an orderly and predictable fashion. Resources are written in text files using JavaScript Object Notation (JSON) or YAML format. The templates require a specific
syntax and structure that depends on the types of resources being created and managed. You author your resources in JSON or YAML with any code editor, such as AWS Cloud9, check them into a version control system, and then CloudFormation builds the specified services in a safe, repeatable manner.

A CloudFormation template is deployed into the AWS environment as a stack. You can manage stacks through the AWS Management Console, the AWS Command Line Interface, or the AWS CloudFormation APIs. If you need to make changes to the running resources in a stack, you update the stack. Before making changes to your resources, you can generate a change set, which is a summary of your proposed changes. Change sets enable you to see how your changes might impact your running resources, especially critical resources, before implementing them.

Figure 1: AWS CloudFormation creating an entire environment (stack) from one template

You can use a single template to create and update an entire environment, or separate templates to manage multiple layers within an environment. This enables templates to be modularized and also provides a layer of governance that is important to many organizations. When you create or update a stack in the console, events are displayed showing the status of the configuration. If an error occurs, by default the stack is rolled back to its previous state. Amazon Simple Notification Service (Amazon SNS) provides notifications on events; for example, you can use Amazon SNS to track stack creation and deletion progress via email and integrate with other processes programmatically.

AWS CloudFormation makes it easy to organize and deploy a collection of AWS resources, and lets you describe any dependencies or pass in special parameters when the stack is configured. With CloudFormation templates you can work with a broad set of AWS services, such as Amazon S3, Auto Scaling, Amazon CloudFront, Amazon DynamoDB, Amazon EC2, Amazon ElastiCache, AWS Elastic
Beanstalk, Elastic Load Balancing, IAM, AWS OpsWorks, and Amazon VPC. For the most recent list of supported resources, see the AWS resource and property types reference.

AWS Cloud Development Kit

The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework for modeling and provisioning your cloud application resources using familiar programming languages. AWS CDK enables you to model application infrastructure using TypeScript, Python, Java, and .NET. Developers can leverage their existing Integrated Development Environment (IDE), using tools like autocomplete and inline documentation, to accelerate development of infrastructure. AWS CDK uses AWS CloudFormation in the background to provision resources in a safe, repeatable manner.

Constructs are the basic building blocks of CDK code. A construct represents a cloud component and encapsulates everything AWS CloudFormation needs to create the component. The AWS CDK includes the AWS Construct Library, containing constructs representing many AWS services. By combining constructs, you can quickly and easily create complex architectures for deployment in AWS.

AWS Cloud Development Kit for Kubernetes

AWS Cloud Development Kit for Kubernetes (cdk8s) is an open-source software development framework for defining Kubernetes applications using general-purpose programming languages. Once you have defined your application in a programming language (as of the date of publication, only Python and TypeScript are supported), cdk8s converts your application description into Kubernetes YAML, which can then be consumed by any Kubernetes cluster running anywhere. Because the structure is defined in a programming language, you can use the rich features the language provides; for example, you can use its abstraction facilities to create your own boilerplate code and reuse it across all of your deployments.

Automation

Another core philosophy
and practice of DevOps is automation. Automation focuses on the setup, configuration, deployment, and support of infrastructure and the applications that run on it. By using automation, you can set up environments more rapidly in a standardized and repeatable manner. The removal of manual processes is key to a successful DevOps strategy.

Historically, server configuration and application deployment have been predominantly manual processes. Environments become nonstandard, and reproducing an environment when issues arise is difficult. The use of automation is critical to realizing the full benefits of the cloud. Internally, AWS relies heavily on automation to provide the core features of elasticity and scalability. Manual processes are error prone, unreliable, and inadequate to support an agile business. Frequently, an organization may tie up highly skilled resources providing manual configuration when that time could be better spent supporting other, more critical and higher-value activities within the business.

Modern operating environments commonly rely on full automation to eliminate manual intervention or access to production environments. This includes all software releasing, machine configuration, operating system patching, troubleshooting, and bug fixing. Many levels of automation practices can be used together to provide a higher-level, end-to-end automated process.

Automation has the following key benefits:

• Rapid changes
• Improved productivity
• Repeatable configurations
• Reproducible environments
• Leveraged elasticity
• Leveraged auto scaling
• Automated testing

Automation is a cornerstone of AWS services and is internally supported in all services, features, and offerings.

AWS OpsWorks

AWS OpsWorks takes the principles of DevOps even further than AWS Elastic Beanstalk. It can be considered an application management service rather than simply an application container. AWS OpsWorks provides even more levels of automation with
additional features, such as integration with configuration management software (Chef) and application lifecycle management. You can use application lifecycle management to define when resources are set up, configured, deployed, undeployed, or terminated.

For added flexibility, AWS OpsWorks has you define your application in configurable stacks; you can also select predefined application stacks. Application stacks contain all the provisioning for the AWS resources that your application requires, including application servers, web servers, databases, and load balancers.

Figure 2: AWS OpsWorks showing DevOps features and architecture

Application stacks are organized into architectural layers so that stacks can be maintained independently. Example layers could include a web tier, an application tier, and a database tier. Out of the box, AWS OpsWorks also simplifies setting up Auto Scaling groups and Elastic Load Balancing load balancers, further illustrating the DevOps principle of automation. Just like AWS Elastic Beanstalk, AWS OpsWorks supports application versioning, continuous deployment, and infrastructure configuration management.

AWS OpsWorks also supports the DevOps practices of monitoring and logging (covered in the next section). Monitoring support is provided by Amazon CloudWatch. All lifecycle events are logged, and a separate Chef log documents any Chef recipes that are run, along with any exceptions.

AWS Elastic Beanstalk

AWS Elastic Beanstalk is a service to rapidly deploy and scale web applications developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. Elastic Beanstalk is an abstraction on top of Amazon EC2 Auto Scaling and simplifies deployment with additional features such as cloning, blue/green deployments, the Elastic Beanstalk Command Line Interface (EB CLI), and integration with the AWS Toolkit for Visual Studio,
Visual Studio Code, Eclipse, and IntelliJ to increase developer productivity.

Monitoring and Logging

Communication and collaboration are fundamental in a DevOps philosophy, and feedback is critical to facilitating them. In AWS, feedback is provided by two core services: Amazon CloudWatch and AWS CloudTrail. Together they provide a robust monitoring, alerting, and auditing infrastructure, so developers and operations teams can work together closely and transparently.

AWS provides the following services for monitoring and logging:

Amazon CloudWatch Metrics

Amazon CloudWatch metrics automatically collect data from AWS services, such as Amazon EC2 instances, Amazon EBS volumes, and Amazon RDS DB instances. These metrics can be organized into dashboards, and alarms or events can be created to trigger notifications or perform Auto Scaling actions.

Amazon CloudWatch Alarms

You can set up alarms based on the metrics collected by Amazon CloudWatch metrics. An alarm can then send a notification to an Amazon Simple Notification Service (Amazon SNS) topic or initiate Auto Scaling actions. An alarm requires a period (the length of time over which to evaluate the metric), an evaluation period (the number of the most recent data points to evaluate), and datapoints to alarm (the number of breaching data points within the evaluation period that trigger the alarm).

Amazon CloudWatch Logs

Amazon CloudWatch Logs is a log aggregation and monitoring service. AWS CodeBuild, CodeCommit, CodeDeploy, and CodePipeline provide integrations with CloudWatch Logs so that all logs can be centrally monitored. In addition to the previously mentioned services, various other AWS services provide direct integration with CloudWatch. With CloudWatch Logs you can:

• Query your log data
• Monitor logs from Amazon EC2 instances
• Monitor AWS CloudTrail logged events
• Define log retention policies

Amazon CloudWatch Logs Insights

Amazon CloudWatch Logs Insights scans your logs and enables you to perform interactive queries and visualizations. It understands
various log formats and auto-discovers fields from JSON logs.

Amazon CloudWatch Events

Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources. Using simple rules that you can quickly set up, you can match events and route them to one or more target functions or streams. CloudWatch Events becomes aware of operational changes as they occur and responds as necessary, taking corrective action by sending messages to the environment, activating functions, making changes, and capturing state information. You can configure rules in Amazon CloudWatch Events to alert you to changes in AWS services and integrate these events with other third-party systems using Amazon EventBridge.

The following AWS DevOps-related services integrate with CloudWatch Events:

• Application Auto Scaling events
• CodeBuild events
• CodeCommit events
• CodeDeploy events
• CodePipeline events

Amazon EventBridge

Amazon CloudWatch Events and Amazon EventBridge are the same underlying service and API; however, EventBridge provides more features. Amazon EventBridge is a serverless event bus that enables integrations between AWS services, software as a service (SaaS) providers, and your applications. In addition to building event-driven applications, EventBridge can be used to send notifications about events from services such as CodeBuild, CodeDeploy, CodePipeline, and CodeCommit.

AWS CloudTrail

To embrace the DevOps principles of collaboration, communication, and transparency, it's important to understand who is making modifications to your infrastructure. In AWS, this transparency is provided by the AWS CloudTrail service. All AWS interactions are handled through AWS API calls that are monitored and logged by AWS CloudTrail. All generated log files are stored in an Amazon S3 bucket that you define. Log files are encrypted using Amazon S3 server-side encryption
(SSE). All API calls are logged, whether they come directly from a user or are made on behalf of a user by an AWS service. Numerous groups can benefit from CloudTrail logs, including operations teams for support, security teams for governance, and finance teams for billing.

Communication and Collaboration

Whether you are adopting a DevOps culture in your organization or going through a DevOps cultural transformation, communication and collaboration are an important part of your approach. At Amazon, we realized that we needed to bring about a change in the mindset of our teams, and hence adopted the concept of two-pizza teams.

Two-Pizza Teams

"We try to create teams that are no larger than can be fed by two pizzas," said Bezos. "We call that the two-pizza team rule." The smaller the team, the better the collaboration. Collaboration is also very important because software releases are moving faster than ever, and a team's ability to deliver software can be a differentiating factor for your organization against your competition. Imagine a situation in which a new product feature needs to be released or a bug needs to be fixed: you want this to happen as quickly as possible so you can shorten your go-to-market time. You also don't want the transformation to be a slow-moving process; rather, take an agile approach in which waves of changes start to make an impact.

Communication between teams is also important as we move toward the shared responsibility model and out of the siloed development approach. This brings the concept of ownership into the team and shifts their perspective to look at the application end to end. Your team should not think of your production environments as black boxes in which they have no visibility. Cultural transformation is also important: you may build a common DevOps team, or you may have DevOps-focused members within each team. Both of these approaches do introduce
shared responsibility into the team.

Security

Whether you are going through a DevOps transformation or implementing DevOps principles for the first time, you should treat security as integrated into your DevOps processes. Security should be a cross-cutting concern across your build, test, and deployment stages. Before we talk about security in DevOps on AWS, let's look at the AWS Shared Responsibility Model.

AWS Shared Responsibility Model

Security is a shared responsibility between AWS and the customer. The different parts of the Shared Responsibility Model are explained below:

• AWS responsibility, "security of the cloud": AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.

• Customer responsibility, "security in the cloud": Customer responsibility is determined by the AWS Cloud services that a customer selects. This determines the amount of configuration work the customer must perform as part of their security responsibilities.

This shared model can help relieve the customer's operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. This is critical in cases where customers want to understand the security of their build environments.

Figure 3: AWS Shared Responsibility Model

For DevOps, we want to assign permissions based on the least-privilege permissions model. This model states that a user (or service) should be granted the minimal amount of permissions required to get the job done. Permissions are maintained in IAM. IAM is a web service that helps you securely control access to AWS resources. You can use IAM to control who is authenticated (signed in)
and authorized (has permissions) to use resources.

Identity and Access Management

AWS Identity and Access Management (IAM) defines the controls and policies that are used to manage access to AWS resources. Using IAM, you can create users and groups and define permissions to various DevOps services. In addition to users, various services may also need access to AWS resources; for example, your CodeBuild project may need to store Docker images in Amazon Elastic Container Registry (Amazon ECR) and will therefore need permissions to write to ECR. These types of permissions are defined by a special type of role known as a service role.

IAM is one component of the AWS security infrastructure. With IAM, you can centrally manage groups, users, service roles, and security credentials such as passwords, access keys, and permissions policies that control which AWS services and resources users can access. An IAM policy lets you define a set of permissions; the policy can then be attached to a role, a user, or a service to define its permissions. You can also use IAM to create roles that are used widely within your desired DevOps strategy. In some cases it can make perfect sense to programmatically assume a role instead of directly granting the permissions. When a service or user assumes a role, it is given temporary credentials to access a service it normally doesn't have access to.

Conclusion

To make the journey to the cloud smooth, efficient, and effective, technology companies should embrace DevOps principles and practices. These principles are embedded in the AWS platform and form the cornerstone of numerous AWS services, especially those in the deployment and monitoring offerings.

Begin by defining your infrastructure as code using AWS CloudFormation or the AWS Cloud Development Kit (AWS CDK). Next, define the way in which your applications will use continuous deployment with the help of services like AWS CodeBuild, AWS CodeDeploy, AWS
CodePipeline, and AWS CodeCommit. At the application level, use containers like Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS), plus AWS Elastic Beanstalk and AWS OpsWorks, to simplify the configuration of common architectures. Using these services also makes it easy to include other important services like Auto Scaling and Elastic Load Balancing. Finally, use the DevOps strategy of monitoring, with services such as Amazon CloudWatch, and solid security practices, such as AWS IAM. With AWS as your partner, your DevOps principles will bring agility to your business and IT organization and accelerate your journey to the cloud.

Contributors

Contributors to this document include:

• Muhammad Mansoor, Solutions Architect
• Ajit Zadgaonkar, World Wide Tech Leader, Modernization
• Juan Lamadrid, Solutions Architect
• Darren Ball, Solutions Architect
• Rajeswari Malladi, Solutions Architect
• Pallavi Nargund, Solutions Architect
• Bert Zahniser, Solutions Architect
• Abdullahi Olaoye, Cloud Solutions Architect
• Mohamed Kiswani, Software Development Manager
• Tara McCann, Manager, Solutions Architect

Document Revisions

Date            Description
October 2020    Updated sections to include new services
December 2014   First publication",General,consultant,Best Practices
Introduction_to_Scalable_Gaming_Patterns_on_AWS,"Introduction to Scalable Game Development Patterns on AWS

Second Edition
Published December 2019, updated March 11, 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express
or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Getting started
Game design decisions
Game client considerations
Launching an initial game backend
High availability, scalability, and security
Binary game data with Amazon S3
Expanding beyond AWS Elastic Beanstalk
Reference architecture
Games as REST APIs
HTTP load balancing
HTTP automatic scaling
Game servers
Matchmaking
Routing messages with Amazon SNS
Last thoughts on game servers
Relational vs NoSQL databases
MySQL
Amazon Aurora
Redis
MongoDB
Amazon DynamoDB
Other NoSQL options
Caching
Binary game content with Amazon S3
Content delivery and Amazon CloudFront
Uploading content to Amazon S3
Amazon S3 performance considerations
Loosely coupled architectures with asynchronous jobs
Leaderboards and avatars
Amazon SQS
Other queue options
Cost of the cloud
Conclusion and next steps
Contributors
Further reading
Document revisions

Introduction

Whether you're an up-and-coming mobile developer or an established AAA game studio, you understand the challenges involved with launching a successful game in the current games landscape. Not only must the game be compelling, but users also expect a wide range of online features, such as friend lists, leaderboards, weekly challenges, various multiplayer modes, and ongoing content releases. To successfully execute a game launch, it's critical to get favorable app store ratings and reviews on popular e-retail channels to provide sales and awareness momentum for your game, like the first weekend of a movie release.

To deliver these features, you need a server backend. The server backend can consist of both the actual game servers for multiplayer games and servers that
power the game services, such as chat, matchmaking, and so on. The server backend must be able to scale up at a moment's notice in the event that the game goes viral and suddenly explodes from 100 to 100,000 users. At the same time, the backend must be cost effective so that you don't overpay for unused server capacity.

Amazon Web Services (AWS) is a flexible, cost-effective, easy-to-use cloud service. By running your game on AWS, you can leverage capacity on demand to scale up and down with your users, rather than having to guess at your server demands and potentially over-purchase or under-purchase hardware. Many indie, mobile, and AAA developers have recognized the advantages of AWS and are having success running their games on the AWS Cloud.

This book is broken into sections covering the different features of modern games, such as friend lists, leaderboards, game servers, messaging, and user-generated content. You can start small and use just the AWS components and services you need. As your game evolves and grows, you can revisit this book and evaluate additional AWS features.

Amazon Web Services – Introduction to Scalable Game Development Patterns on AWS

Getting started

If you are just getting started developing your game, it can be challenging to figure out where to begin with your backend server development. Thankfully, AWS can help you get started quickly, because you don't have to make a decision about every service that you're going to use up front. As you iterate on your game, you can add AWS services over time. This approach enables you to develop additional game features or backend functionality without having to plan for everything at the beginning. We encourage you to start with the game features that you need and then add more AWS features as your game evolves. In this section, we'll look at some common game features that determine which types of services you'll need.

Game design decisions

Modern social, mobile, and AAA games tend to share the following common tenets that affect
server architecture:

• Pick up and play anywhere – Players expect their saved games, profiles, and other data to be stored online so they can easily move from device to device. This operation typically involves synchronizing and merging local data as the player moves from one device to another, so a simple data storage solution is not always the right solution.

• Leaderboards and rankings – Players continue to look for a competitive experience similar to classic arcade games. Increasingly, though, the focus is on friends' leaderboards rather than just a single global high-score list. This requires a more sophisticated leaderboard that can sort in multiple dimensions while maintaining good performance.

• Free-to-play – One of the biggest shifts over the past few years has been the widespread move to free-to-play. In this model, games are free to download and play, and the game earns money through in-app purchases for items such as weapons, outfits, power-ups, and boost points, as well as advertising. The game is funded by a small minority of users who purchase these items, with the vast majority of users playing for free. This means that your game backend must be as cost effective as possible and must be able to scale up and down as needed. Even for premiere AAA games, larger percentages of revenue are now coming from content updates and in-game purchases.
that a single product reinforces the need for constant post launch changes These features require frequent updates with new data and game assets By using a content delivery network (CDN) to distribute game content you can cut costs and increase download speed • Asynchronous gameplay – Although larger games generally include a real time online multiplayer mode games of all kinds are realizing the importance of asynchronous features to keep players engaged Examples of asynchronous play include competing against your friends based on points unlocks badges or similar achievements This type of game play gives players the feel of a connected game experience even if they aren’t online all the time or if they are using slower networks like 3G or 4G for mobile games • Push notifications – A common method of getting users to come back to the game is to send targeted push notifications to their mobile device For example a user might get a notification that their friend beat their score or that a new challenge or level is available This d raws the user back into the core game experience even when they’re not directly playing • Unpredictable clients – Modern games run on a wide variety of platforms including mobile devices consoles PCs and browsers One user could be roaming on their port able device playing against a console user on Wi Fi and both would expect a consistent experience For this reason it’s necessary to leverage stateless protocols (for example HTTP) and asynchronous calls as much as possible Each of these game features has an impact on your server features and technology For example if you have a simple Top 10 leaderboard you may be able to store it in a single MySQL or Amazon Aurora database table However if you have complex leaderboards with multiple sort dimensi ons it may be necessary to use a NoSQL option such as Amazon Elasti Cache or Amazon DynamoDB (discussed later in this book) Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 
Game client considerations

Although the focus of this book is on the architecture you can deploy on AWS, the implementation of your game client can also have an impact on your game's scalability. It also affects how much your game backend costs to run, because frequent network requests from the client use more bandwidth and require more server resources. Here are a few important guidelines to follow:

• All network calls should be asynchronous and non-blocking. This means that when a network request is initiated, the game client continues on without waiting for a response from the server. When the server responds, this triggers an event on the client, which is handled by a callback of some kind in the client code. On iOS, AFNetworking is one popular approach. Browser games should use a call such as jQuery.ajax() or the equivalent, and C++ clients should consider libcurl, std::async, or similar libraries. Similarly, popular game engines usually include an asynchronous method for network and web requests; for example, Unity offers UnityWebRequest and Unreal Engine has HttpRequest.

• Use JSON to transport data. It's compact, cross-platform, fast to parse, has lots of library support, and contains data type information. If you have large payloads, simply gzip them, because the majority of web servers and mobile clients have native support for gzip. Don't waste time over-optimizing; any payload in the range of hundreds of kilobytes should be adequate. We have also seen developers use Apache Avro and MessagePack, depending on their use case, comfort level with the formats, and availability of libraries.

Note: An exception to this rule is multiplayer gameplay packets, which are typically UDP.

• Use HTTP/1.1 with keep-alives, and reuse HTTP connections between requests. This minimizes the overhead your game incurs when making network requests. Each time you open a new HTTP socket, a three-way TCP handshake is required, which can add upwards of 50 milliseconds (ms). In addition, repeatedly opening and
closing TCP connections accumulates large numbers of sockets in the TIME_WAIT state on your server, which consumes valuable server resources.

• Always POST any important data from the client to the server over SSL. This includes logins, stats, save data, unlocks, and purchases. The same applies to any GET, PUT, and DELETE requests, because modern computers are efficient at handling SSL and the overhead is low. AWS enables you to have our Elastic Load Balancer handle the SSL workload, which completely offloads it from your servers.

• Never store security-critical data, such as AWS access keys or other tokens, on the client device, either as part of your game data or user data. Access key IDs and secret access keys allow the possessors of those keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), AWS Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. If somebody roots or jailbreaks their device, you risk the possibility that they could gain access to your server code, user data, and even your AWS billing account. In the case of PC games, your keys likely exist in memory when the game client is running, and pulling them out isn't that hard for someone with the know-how. You have to assume anything you store on a game client will be compromised. If you want your game client to directly access AWS services, consider using Amazon Cognito Federated Identities, which allows your application to obtain temporary, limited-privilege credentials.

• As a precaution, you should never trust what a game client sends you. It's an untrusted source, and you should always validate what you receive. Sometimes it's malicious traffic (SQL injection, XSS, etc.), but sometimes it can be something as trivial as someone having their device clock set to a time in the past.

Many of these concerns are not specific to AWS and are typical client/server
safety issues, but keeping them in mind will help you design a game that performs well and is reasonably secure.

Launching an initial game backend

With the previous game features and client considerations in mind, let's look at a strategy for getting an initial game backend up and running on AWS as quickly as possible. We'll make use of a few key AWS services, with the ability to add more as the game evolves. To ensure we're able to scale out as our game grows in popularity, we'll leverage stateless protocols as much as possible. Creating an HTTP/JSON API for the bulk of our game features allows us to add instances dynamically and easily recover from transient network issues. Our game backend consists of a server that talks HTTP/JSON, stores data in MySQL, and uses Amazon Simple Storage Service (Amazon S3) for binary content. This type of backend is easy to develop and can scale effectively.
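The JSON-plus-gzip transport guidance above can be sketched with the Python standard library alone; the save-game payload shape here is invented for illustration, and no particular game SDK or server is assumed.

```python
import gzip
import json

# Hypothetical save-game payload (field names invented for illustration).
save_game = {"player_id": "p123", "level": 42,
             "unlocks": ["sword", "shield"], "score": 991230}

# Serialize to compact JSON, then gzip. Most web servers and mobile
# HTTP stacks decompress this transparently when the request carries
# a Content-Encoding: gzip header.
body = json.dumps(save_game, separators=(",", ":")).encode("utf-8")
compressed = gzip.compress(body)

# The server side reverses the two steps; the round trip is lossless.
restored = json.loads(gzip.decompress(compressed))
print(restored == save_game)
```

In a real client, `compressed` would be POSTed over SSL on a kept-alive connection with Content-Type: application/json and Content-Encoding: gzip headers; gzip only pays off once payloads grow past a few kilobytes, matching the "don't over-optimize" advice above.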
requirements change. Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances, enabling you to achieve fault tolerance in your applications. Elastic Load Balancing offers three types of load balancers, all featuring high availability, automatic scaling, and robust security: the Application Load Balancer, which routes traffic based on advanced application-level information, including the content of the request, and is best suited to HTTP and HTTPS traffic; the Network Load Balancer, which is best suited to TCP, UDP, and TLS traffic; and the Classic Load Balancer, which works with the EC2-Classic network. The Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances. The Application Load Balancer is ideal for applications that need advanced routing capabilities, microservices, and container-based architectures. The Network Load Balancer is ideal for routing messages to persistent game servers, chat services, and other stateful servers. Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. Amazon RDS supports many familiar database engines, including Amazon Aurora, PostgreSQL, MySQL, and more. You can push a zip, war, or git repository of server code to Elastic Beanstalk. Elastic Beanstalk takes care of launching EC2 server instances, attaching a load balancer, setting up Amazon CloudWatch monitoring alerts, and deploying your application to the cloud. In short, Elastic Beanstalk can set up most of the architecture shown in Figure 1 automatically. To see Elastic Beanstalk in action, log in to the AWS Management Console and follow the Getting Started Using Elastic Beanstalk tutorial to create a new
environment with the programming language of your choice. This will launch the sample application and boot a default configuration. You can use this environment to get a feel for the Elastic Beanstalk control panel, how to update code, and how to modify environment settings. If you're new to AWS, you can use the AWS Free Tier to set up these sample environments.

Note: The sample production environment described in this book will incur costs because it includes AWS resources that aren't covered under the free tier.

With the sample application up, let's create a new Elastic Beanstalk application for our game and two new environments: one for development and one for production. We'll customize these a bit for our game. Use the following table to determine which settings to change depending on the environment type. For detailed instructions, see Managing and Configuring AWS Elastic Beanstalk Applications, and then follow the instructions for Creating an AWS Elastic Beanstalk Environment in the AWS Elastic Beanstalk Developer Guide.

Note: In the following table, replace the My Game and mygame values with the name of your game.

Table 1: Configuration settings for gaming environments
• Application Name – Development: My Game; Production: My Game
• Environment Name – Development: mygame-dev; Production: mygame-prod
• Instance Type – Development: t2.micro; Production: m5.large
• Create RDS DB instance? –
Development: Yes; Production: Yes
• DB Engine – Development: MySQL; Production: not recommended (see the Important note below)
• Instance Class – Development: db.t2.micro; Production: N/A
• Allocated Storage – Development: 5 GB; Production: N/A

By using two environments, you can enable a simple and effective workflow. As you integrate new game backend features, you push your updated code to the development environment. This triggers Elastic Beanstalk to restart the environment and create a new version. In your game client code, create two configurations: one that points to development and one that points to production. Use the development configuration to test your game, and then use the production profile when you want to create a new game version to publish to the appropriate app stores. When your new game client is ready for release, choose the correct server code version from the development environment and deploy it to the production environment. By default, deployments incur a brief period of downtime while your app is being updated and restarted. To avoid downtime for production deployments, you can follow a pattern known as swapping URLs, or blue/green deployment. In this pattern, you deploy to a standby production environment and then update DNS to point to the new environment. For more details on this approach, see Blue/Green Deployments with AWS Elastic Beanstalk in the AWS Elastic Beanstalk Developer Guide.

Important: We don't recommend that you use Elastic Beanstalk to manage your database in a production environment, because this ties the lifecycle of the database instance (DB instance) to the lifecycle of your application's environment. Instead, we recommend that you run a DB instance in Amazon Aurora and configure your application to connect to it on launch. You can also store connection information in Amazon S3 and configure Elastic Beanstalk to retrieve that information during deployment with ebextensions. You can add AWS Elastic Beanstalk configuration files (ebextensions) to your web application's source code to configure your
environment and customize the AWS resources that it contains. Configuration files are YAML-formatted documents with a .config file extension that you place in a folder named .ebextensions and deploy in your application source bundle. For more information, see Advanced Environment Customization with Configuration Files (.ebextensions) in the AWS Elastic Beanstalk Developer Guide.

High availability, scalability, and security

For the production environment, you need to ensure that your game backend is deployed in a fault-tolerant manner. Amazon EC2 is hosted in multiple AWS Regions worldwide. You should choose a Region that is near the bulk of your game's customers. This ensures that your users have a low-latency experience with your game. For more information and a list of the latest AWS Regions, see the AWS Global Infrastructure webpage. Within each Region are multiple isolated locations known as Availability Zones, which you can think of as logical data centers. Each of the Availability Zones within a given Region is physically isolated, yet connected via high-speed networking so they can be used together. Balancing your servers across two or more Availability Zones within a Region is a simple way to increase your game's high availability. Using two Availability Zones is a good balance of reliability and cost for most games, since you can pair your server instances, database instances, and cache instances together. Elastic Beanstalk can automatically deploy across multiple Availability Zones for you. To use multiple Availability Zones with Elastic Beanstalk, see Auto Scaling Group for Your Elastic Beanstalk Environment in the AWS Elastic Beanstalk Developer Guide. For additional scalability, you can use automatic scaling to add and remove instances from these Availability Zones. For best results, consider modifying the automatic scaling trigger to specify a metric (such as CPU usage) and threshold based on your application's performance profile. If the threshold you specify is hit, Elastic
Beanstalk automatically launches additional instances. This is covered in more detail in the HTTP Automatic Scaling section of this book. For development and test environments, a single Availability Zone is usually adequate so you can keep costs low, assuming you can tolerate a bit of downtime in the event of a failure. However, if your development environment is actually used by QA testers to validate builds late at night, you probably want to treat it more like a production environment. In that case, leverage multiple Availability Zones like you would in production. Finally, set up the load balancer to handle SSL termination so that SSL encryption and decryption is offloaded from your game backend servers. This is covered in Configuring HTTPS for Your Elastic Beanstalk Environment in the AWS Elastic Beanstalk Developer Guide. For security reasons, we strongly recommend that you use SSL for your game backend. For more Elastic Load Balancing tips, see the HTTP Load Balancing section of this book.

Binary game data with Amazon S3

Next, you'll need to create an S3 bucket for each Elastic Beanstalk server environment that you created previously. This S3 bucket stores your binary game content, such as patches, levels, and assets. Amazon S3 uses an HTTP-based API for uploading and downloading data, which means that your game client can use the same HTTP library for talking to your game servers that it uses to download game assets. With Amazon S3, you pay for the amount of data you store and the bandwidth for clients to download it. For more information, see Amazon S3 Pricing. To get started, create an S3 bucket in the same Region as your servers. For example, if you deployed Elastic Beanstalk to the us-west-2 (Oregon) Region, choose this same Region for Amazon S3. For simplicity, and because S3 requires bucket names to be unique across all of S3, use a naming convention for the bucket similar to the one that you used for your Elastic
Beanstalk environment (for example, mygame-dev or mygame-prod), along with other unique identification, like com.mycompany.mygame-dev. For step-by-step directions, see Create a Bucket in the Amazon Simple Storage Service Getting Started Guide. Remember to create a separate S3 bucket for each of your Elastic Beanstalk environments (that is, development, production, and so on). By default, S3 buckets are private and, for security, require that users authenticate to download content. For game content, you have two options. You could make the bucket public, which means that anyone with the bucket name can download your game content, but this is not recommended. A better way to manage authentication is to use signed URLs, a feature that enables you to pass Amazon S3 credentials as part of the URL. In this scheme, your game server code redirects users to an Amazon S3 signed URL, which you can set to expire after a period of time. For instructions on how to create a signed URL, see Authenticating Requests (AWS Signature Version 4) in the Amazon S3 API Reference. If you are using one of the official AWS SDKs with your game server, there is a good chance that the SDK has built-in methods for generating a presigned URL. A presigned URL gives you access to the object identified in the URL, provided that the creator of the presigned URL has permissions to access that object. Generating a presigned URL is a completely offline operation (no API calls are involved), making it a very fast operation. Finally, as your game grows, you can use Amazon CloudFront, a content delivery network (CDN), to provide better performance and save money on data transfer costs. For more information, see What is Amazon CloudFront in the Amazon CloudFront Developer Guide.

Expanding beyond AWS Elastic Beanstalk

As your game increases in popularity, your core game backend must scale and respond to demand over time. By using HTTP
for the bulk of your calls, you are able to easily scale up and down in response to changing usage patterns. Storing binary data in Amazon S3 saves you money compared to serving files from Amazon EC2, and Amazon S3 also takes care of data availability and durability for you. Amazon RDS provides you with a managed MySQL database that you can grow over time with Amazon RDS features such as read replicas. If your game needs additional functionality, you can easily expand beyond Elastic Beanstalk to other AWS services without having to start over. Elastic Beanstalk supports configuring other AWS services via the Elastic Beanstalk Environment Resources. For example, you can add a caching tier using Amazon ElastiCache, a managed cache service that supports both Memcached and Redis. For details about adding an ElastiCache cluster, see Example: ElastiCache in the AWS Elastic Beanstalk Developer Guide. Of course, you can always just launch other AWS services yourself and then configure your app to use them. For example, you could choose to augment or even replace your RDS MySQL DB instance with Amazon Aurora Serverless, an on-demand, automatically scaling SQL database, or Amazon DynamoDB, the AWS managed NoSQL offering. Even though we're using Elastic Beanstalk to get started, you still have access to all other AWS services as your game grows.

Reference architecture

With our core game backend up and running, the next step is to examine the other AWS services that could be useful for our game. Before continuing, let's look at the following reference architecture for a horizontally scalable game backend. This diagram depicts a game backend that supports a wide set of game features, including login, leaderboards, challenges, chat, binary game data, user-generated content, analytics, and online multiplayer. Not all games have all these components, but this diagram provides a good visualization of how they would all
fit together. In the remaining sections of this book, we'll cover each component in detail.

Figure 2: A fully production-ready game backend running on AWS

Figure 2 may seem overwhelming at first, but it's really just an evolution of the initial game backend we launched using Elastic Beanstalk. The following table explains the numbered areas of the diagram.

Table 2: Reference architecture callouts
1. The diagram shows two Availability Zones set up with identical functionality for redundancy. Not all components are shown in both Availability Zones due to space constraints, but both Availability Zones function equivalently. These can be the same two Availability Zones you initially chose using Elastic Beanstalk.
2. The HTTP/JSON servers and master/slave DBs can be the same ones you launched using Elastic Beanstalk. You continue to build out as much of your game functionality in the HTTP/JSON layer as possible. You can use HTTP automatic scaling to add and remove EC2 HTTP instances automatically in response to user demand. For more information, see the HTTP Automatic Scaling section of this book.
3. You can use the same S3 bucket that you initially created for binary data. Amazon S3 is built to be highly scalable and needs little tuning over time. As your game assets and user traffic continue to expand, you can add Amazon CloudFront in front of S3 to boost download performance and save costs.
4. If your game has features requiring stateful sockets, such as chat or multiplayer gameplay, these features are typically handled by game servers running code just for those features. These servers run on EC2 instances separate from your HTTP instances. For more information, see the Game Servers section of this book.
5. As your game grows and your database load increases, the next step is to add caching, typically by using Amazon ElastiCache, which is the AWS managed caching service.
Caching frequently accessed items in ElastiCache offloads read queries from your database. This is covered in the Caching section of this book.
6. The next step is to look at moving some of your server tasks to asynchronous jobs and using Amazon Simple Queue Service (Amazon SQS) to coordinate this work. This allows for a loosely coupled architecture, in which two or more components interoperate to achieve a specific purpose while each has little or no knowledge of the other participating components. Amazon SQS eliminates dependencies on the other components in a loosely coupled system. For example, if your game allows users to upload and share assets such as photos or custom characters, you should execute time-intensive tasks such as image resizing in a background job. This results in quicker response times for your game while also decreasing the load on your HTTP server instances. These strategies are discussed in the Loosely Coupled Architectures with Asynchronous Jobs section of this book.
7. As your database load continues to grow, you can add Amazon RDS read replicas to help you scale out your database reads even further. This also helps reduce the load on your main database, because you can read from the replica and only access the master database to write. This is covered in the Relational vs. NoSQL Databases section of this book.
8. (Not shown) At some point, you may decide to introduce a NoSQL service such as Amazon DynamoDB to supplement your main database for functionality such as leaderboards, or to take advantage of NoSQL features such as atomic counters. We discuss these options in the Relational vs. NoSQL Databases section.
9. If your game includes push notifications, you can use Amazon Simple Notification Service (Amazon SNS) and its support for Mobile Push to simplify the process of sending push messages across multiple mobile platforms. Your EC2 instances can
also receive Amazon SNS messages, which enables you to do things like broadcast messages to all players currently connected to your game servers.

If you look at a single Availability Zone in Figure 2 and compare it to the core game backend we launched with Elastic Beanstalk, you can see how scaling your game builds on the initial backend pieces by adding caching, database replicas, and background jobs. With this in mind, let's look at each component.

Games as REST APIs

As mentioned earlier, to make use of horizontal scalability you should implement most of your game's features using an HTTP/JSON API, which typically follows the REST architectural pattern. Game clients, whether on mobile devices, tablets, PCs, or consoles, make HTTP requests to your servers for data such as login sessions, friends, leaderboards, and trophies. Clients do not maintain long-lived connections to the server, which makes it easy to scale horizontally by adding HTTP server instances. Clients can recover from network issues by simply retrying the HTTP request. When properly designed, a REST API can scale to hundreds of thousands of concurrent players. This is the pattern we followed in the previous Elastic Beanstalk example. RESTful servers are straightforward to deploy on AWS, and they benefit from the wide variety of HTTP development, debugging, and analysis tools that are available on AWS. Nevertheless, some modes of gameplay benefit from a stateful two-way socket that can receive server-initiated messages; examples include real-time online multiplayer, chat, or game invites. If your game doesn't have these features, you can implement all of your functionality using a REST API. We'll discuss stateful servers later in this book, but first let's focus on our REST layer. Deploying a REST layer to Amazon EC2 typically consists of an HTTP server such as Nginx or Apache, plus a language-specific application server. The following table lists some
of the popular packages that game developers use to build REST APIs.

Table 3: Packages to build REST APIs
• Node.js – Express, Restify, Sails
• Python – Eve, Flask, Bottle
• Java – Spring, Jersey
• Go – Gorilla Mux, Gin
• PHP – Slim, Silex
• Ruby – Rails, Sinatra, Grape

This is just a sampling; you can build a REST API in any web-friendly programming language. Since Amazon EC2 gives you complete root access to the instance, you can deploy any of these packages. For Elastic Beanstalk there are some restrictions on supported packages; for details, see the Elastic Beanstalk FAQs. RESTful servers benefit from medium-sized instances, since this enables more of them to be deployed horizontally at the same price point. Medium-sized instances from the general-purpose instance family (for example, M5) or the compute-optimized instance family (for example, C5) are a good match for REST servers.

HTTP load balancing

Load balancing RESTful servers is very straightforward because HTTP connections are stateless. AWS offers Elastic Load Balancing, which is the easiest approach to HTTP load balancing for games on Amazon EC2. You may recall from our example game backend that Elastic Beanstalk automatically deploys an Elastic Load Balancing load balancer to balance your EC2 instances for you. If you use Elastic Beanstalk to get started, you will already have an Elastic Load Balancing load balancer running. Follow these guidelines to get the most out of Elastic Load Balancing:
• Always configure Elastic Load Balancing to balance between at least two Availability Zones for redundancy and fault tolerance. Elastic Load Balancing handles balancing traffic between the EC2 instances in the Availability Zones that you specify. If you want an equal distribution of traffic on servers, you should also enable cross-zone load balancing, even if there are an unequal number of servers per Availability Zone. This ensures optimal usage of servers in your fleet.
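One of the guidelines in this list is to back the load balancer's custom health check URL with server code that verifies application health. A minimal sketch of such a handler follows, using only the Python standard library; the `/healthcheck` path and the DB connectivity check are hypothetical stand-ins for whatever your stack actually uses.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def db_is_reachable():
    # Hypothetical stand-in: a real backend might run "SELECT 1" against its
    # RDS instance and return False on a connection error or timeout.
    return True

class HealthCheckHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthcheck":
            # 200 tells the load balancer this instance is healthy; 500 causes
            # it to be removed from the pool until the check passes again.
            status = 200 if db_is_reachable() else 500
            body = b"OK" if status == 200 else b"Server Error"
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

def run(port=8080):
    # Call run() to serve health checks; blocks the calling thread.
    HTTPServer(("", port), HealthCheckHandler).serve_forever()
```

In practice you would mount the same kind of check inside your application server (Flask, Express, and so on) rather than running a separate process, so the check exercises the real request path.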
• Configure Elastic Load Balancing to handle SSL encryption and decryption. This offloads SSL from your HTTP servers, which means that there is more CPU available for your application code. For more information, see Create an HTTPS Load Balancer in the Classic Load Balancer Guide. To test SSL for development purposes, see How to Create a Self-Signed SSL Certificate in the AWS Certificate Manager User Guide.
• Elastic Load Balancing automatically removes any EC2 instances that fail from its load balancing pool. To ensure that the health of your HTTP EC2 instances is accurately monitored, configure your load balancer with a custom health check URL. Then write server code that responds to that URL and performs a check on your application's health. For example, you could set up a simple health check that verifies that you have DB connectivity. The health check returns 200 OK if your checks pass, or 500 Server Error if your instance is unhealthy.
• Each Elastic Load Balancing load balancer that you deploy must have a unique DNS name. To set up a custom DNS name for your game, you can use a DNS alias (CNAME) to point your game's domain name to the load balancer. For detailed instructions, see Configure a Custom Domain Name for Your Classic Load Balancer in the Elastic Load Balancing Guide. Note that when your load balancer scales up or down, the IP addresses that it uses change; make sure you are using a DNS CNAME alias to the load balancer and that you're not referencing the load balancer's current IP addresses in your DNS domain.
• Elastic Load Balancing is designed to scale up by roughly a factor of 50 percent every 5 minutes. For the vast majority of games this works well, even when they suddenly go viral. However, if you are anticipating a sudden, huge spike in traffic, perhaps due to a new downloadable content release or marketing promotion, Elastic Load Balancing can be pre-warmed to scale up
in advance for this event. To pre-warm Elastic Load Balancing, submit an AWS support request with the anticipated load (this requires at least Business-level Support). For more details on Elastic Load Balancing pre-warming, and best practices for running load tests against Elastic Load Balancing, see the AWS article Best Practices in Evaluating Elastic Load Balancing.

Application Load Balancer

Application Load Balancer is the second-generation load balancer, providing more granular control over traffic routing at the HTTP/HTTPS layer. In addition to the features described in the previous section, the following Application Load Balancer features can be highly beneficial to a gaming-centric workload:
• Explicit support for Amazon EC2 Container Service (Amazon ECS) – Application Load Balancer can be configured to load balance containers across multiple ports on a single EC2 instance. Dynamic ports can be specified in an ECS task definition, which gives the container an unused port when it is scheduled on EC2 instances.
• HTTP/2 support – HTTP/2 is a revised edition of the older HTTP/1.1 protocol. Together, HTTP/2 and Application Load Balancer deliver additional network performance, because HTTP/2 is a binary protocol rather than a textual one. Binary protocols are inherently more efficient to process and much less error-prone, which can improve stability. Additionally, HTTP/2 supports multiplexing, which enables the reuse of TCP connections for downloading content from multiple origins and cuts down on network overhead.
• Native IPv6 support – With the near exhaustion of IPv4 addresses, many application providers are moving to a model where applications without IPv6 support are rejected on their services. Application Load Balancer natively supports IPv6 endpoints and routing to VPC IPv6 addresses.
• WebSockets support – Like HTTP/2, Application Load Balancer supports the WebSocket protocol, which enables
you to set up a long-standing TCP connection between a client and server. This is much more efficient than standard HTTP connections, which were usually held open with a sort of heartbeat that contributes to network traffic. WebSocket is a great use case for delivering dynamic data, like updated leaderboards, while minimizing traffic and power use on a mobile device. Elastic Load Balancing enables support for WebSockets by changing the listener from HTTP to TCP. However, when it's in TCP mode, Elastic Load Balancing allows the Upgrade header when a connection is established, and then terminates any connection that is idle for more than 60 seconds (for example, if a packet isn't sent within that timeframe). This means that the client has to reestablish the connection, and any WebSocket negotiation fails if the Elastic Load Balancing load balancer sends an upgrade request and establishes a WebSocket connection to other backend instances.

Custom load balancer

Alternatively, you can deploy your own load balancer to Amazon EC2 if you need specific features or metrics that Elastic Load Balancing does not provide. Popular choices for games include HAProxy and F5's BIG-IP Virtual Edition, both of which can run on Amazon EC2. If you decide to use a custom load balancer, follow these recommendations:
• Deploy the load balancer software (such as HAProxy) to a pair of EC2 instances, each in a different Availability Zone for redundancy.
• Assign an Elastic IP address to each instance. Create a DNS record containing both of those Elastic IP addresses as your entry point. This allows DNS to round robin between your load balancer instances.
• If you are using Amazon Route 53, our highly available and scalable cloud Domain Name System (DNS) web service, use Route 53 health checks to monitor your load balancer EC2 instances to detect failure. This ensures that traffic doesn't get routed to a load balancer that is down.
• If you want HAProxy to handle SSL
traffic, use the latest development version of HAProxy (1.5 or later).
• If you decide to deploy your own load balancer, keep in mind that there are several aspects you need to handle on your own. First and foremost, if your load surpasses what your load balancer instances can handle, you need to launch additional EC2 instances and follow the previous steps to add them to your application stack. In addition, new auto-scaled application instances aren't automatically registered with your load balancer instances; you need to write a script that updates the load balancer configuration files and restarts the load balancers.

If you are interested in HAProxy as a managed service, consider AWS OpsWorks, which uses Chef Automate to manage EC2 instances and can deploy HAProxy as an alternative to Elastic Load Balancing.

HTTP automatic scaling

The ability to dynamically grow and shrink server resources in response to user patterns is a primary benefit of running on AWS. Automatic scaling enables you to scale the number of EC2 instances in one or more Availability Zones based on system metrics such as CPU utilization or network throughput. For an overview of the functionality that Amazon EC2 Auto Scaling provides, see What Is Amazon EC2 Auto Scaling?
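At its core, the automatic scaling just introduced is a policy that compares a metric against scale-out and scale-in thresholds. The following sketch shows that decision in isolation; the threshold values and the minimum fleet size are illustrative choices, not AWS defaults, and in practice CloudWatch alarms and your Auto Scaling policy make this decision for you.

```python
# Illustrative thresholds: scale out when average CPUUtilization over the
# sampling window is high, scale in when it is low.
SCALE_OUT_CPU = 70.0
SCALE_IN_CPU = 25.0
MIN_INSTANCES = 2   # keep at least two instances, spread across two AZs

def scaling_decision(cpu_samples, current_instances):
    """Return +1 (add an instance), -1 (remove one), or 0 (hold steady)."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg >= SCALE_OUT_CPU:
        return 1
    if avg <= SCALE_IN_CPU and current_instances > MIN_INSTANCES:
        return -1
    return 0
```

Keeping the scale-in threshold well below the scale-out threshold leaves a dead band between them, which prevents the fleet from oscillating when load hovers near a single cutoff.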
Then walk through Getting Started with Amazon EC2 Auto Scaling. You can use Amazon EC2 Auto Scaling with any type of EC2 instance, including HTTP servers, game servers, or background workers. HTTP servers are the easiest to scale because they sit behind a load balancer that distributes requests across server instances. Auto Scaling handles the registration and deregistration of HTTP-based instances with Elastic Load Balancing dynamically, which means that traffic is routed to a new instance as soon as it's available. To use automatic scaling effectively, choose appropriate metrics to trigger scale-up and scale-down activities. To determine your metrics, follow these guidelines:
• CPUUtilization is often a good Amazon CloudWatch metric to use. Web servers tend to be CPU-limited, whereas memory remains fairly constant while the server processes are running. A higher percentage of CPU tends to show that the server is becoming overloaded with requests. For finer granularity, pair CPUUtilization with NetworkIn or NetworkOut.
• Benchmark your servers to determine good values to scale on. For HTTP servers, you can use a tool such as Apache Bench or HTTPerf to measure your server response times. Increase the load on your servers while monitoring CPU or other metrics. Make note of the point at which your server response times degrade, and see how this correlates to your system metrics.
• When configuring your Amazon EC2 Auto Scaling group, choose two Availability Zones and a minimum of two servers. This ensures your game server instances are properly distributed across multiple Availability Zones for high availability. Elastic Load Balancing takes care of balancing the load between multiple Availability Zones for you.
For details on configuring scaling policies, see Dynamic Scaling for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide.

Installing application code

When you use automatic scaling with
Elastic Beanstalk, Elastic Beanstalk takes care of installing your application code on new EC2 instances as they're scaled up. This is one of the advantages of the managed container that Elastic Beanstalk provides. However, if you're using automatic scaling without Elastic Beanstalk, you need to take care of getting your application code onto your EC2 instances yourself. If you are already using Chef or Puppet, consider using them to deploy application code on your instances. AWS OpsWorks automatic scaling, which uses Chef to configure instances, provides both time-based and load-based automatic scaling. With OpsWorks, you can also set up custom startup and shutdown steps for your instances as they scale. OpsWorks is a great alternative for managing automatic scaling if you're already using Chef, or if you're interested in using Chef to manage your AWS resources. For more information, see Managing Load with Time-based and Load-based Instances in the AWS OpsWorks User Guide. If you're not using any of these packages, you can use the Ubuntu cloud-init package as a simple way to pass shell commands directly to EC2 instances. You can use cloud-init to run a simple shell script that fetches the latest application code and starts up the appropriate services. This is supported by the official Amazon Linux AMI as well as the Canonical Ubuntu AMIs. For more details on these approaches, see the Running Commands on Your Linux Instance at Launch article.

Game servers

There are some gameplay scenarios that work well with an event-driven RESTful model; for example, turn-based play and appointment games, which don't require constant real-time updates, can be built as stateless game servers with the techniques in the previous section. Sometimes, however, a game server's approach needs to be the opposite of a RESTful approach: clients establish a stateful two-way connection to the game server via UDP, TCP,
or WebSockets, enabling both the client and server to initiate messages. If the network connection is interrupted, the client must perform reconnect logic, and possibly logic to reset its state as well. Stateful game servers introduce challenges for automatic scaling because clients can't simply be round-robin load balanced across a pool of servers.

Historically, many games used stateful connections and long-running server processes for all of their game functionality, especially in the case of larger AAA and MMO games. If you have a game that is architected in this manner, you can run it on AWS. We offer a managed service, Amazon GameLift, that aids you in deploying, operating, and scaling dedicated game servers for session-based multiplayer games. You can also choose to run your own orchestration for game servers on Amazon EC2. Both are good choices depending on your requirements. However, for new games we encourage you to use HTTP as much as possible, and only use stateful sockets for aspects of your game that really need it (such as online multiplayer).

The following table lists several packages that allow you to build event-driven servers.

Table 4: Packages to build event-driven servers

Language: Package
Node.js: Core, Socket.io, Async
Python: Gevent, Twisted
Java: JBoss Netty
Go: Socket.io
Erlang: Core
Ruby: EventMachine

C++ isn't listed in the table because it tends to be the language of choice for multiplayer game servers. Many commercial game engines, such as Amazon Lumberyard and Unreal Engine, are written in C++. This enables you to take existing game code from the client and reuse it on the server. This is particularly valuable when running physics or other frameworks on the server (such as Havok), which frequently only support C++. However, though there are C++ packages that allow building event-driven services, they tend to be more complex than those in the above list. Also, you wouldn't typically be
running game simulation code in an event-based service.

Regardless of programming language, stateful socket servers generally benefit from as large an instance as possible, since they are more sensitive to issues such as network latency. The largest instances in the Amazon EC2 compute optimized instance family (for example, c5*) are often the best options. These new-generation instances use enhanced networking via single root I/O virtualization (SR-IOV), which provides high packets per second, lower latency, and low jitter. This makes them ideal for game servers.

Matchmaking

Matchmaking is the feature that gets players into games. Typically, matchmaking follows a process like the following:

1. Ask the user about the type of game they would like to join (for example, deathmatch, time challenge, and so on).
2. Look at what game modes are currently being played online.
3. Factor in variables such as the user's geolocation (for latency) or ping time, language, and overall ranking.
4. Place the user on a game server that contains a matching game.

Game servers require long-lived processes, and they can't simply be round-robin load balanced in the way that you can with an HTTP request. After a player is on a given server, they remain on that server until the game is over, which could be minutes or hours.

In a modern cloud architecture, you should minimize your usage of long-running game server processes to only those gameplay elements that require it. For example, imagine an MMO or open-world shooter game. Some of the functionality, such as running around the world and interacting with other players, requires long-running game server processes. However, the rest of the API operations, such as listing friends, altering inventory, updating stats, and finding games to play, can easily be mapped to a REST web API. In this approach, game clients would first connect to your REST API and request a stateful game server. Your REST API would then
perform matchmaking logic and give clients an IP address and port of a server to connect to. The game client then connects directly to that game server's IP address. This hybrid approach gives you the best performance for your socket servers, because clients connect directly to the EC2 instances. At the same time, you still get the benefits of using HTTP-based calls for your main entry point.

For most matchmaking needs, Amazon GameLift provides a matchmaking system called FlexMatch. You would control FlexMatch via your REST API, making calls to the Amazon GameLift API to initiate matching and return results. You can find more information on FlexMatch in the Amazon GameLift Developer Guide. If FlexMatch doesn't suit your needs for matchmaking, you can find more information about implementing matchmaking in a custom serverless environment in Fitting the Pattern: Serverless Custom Matchmaking with Amazon GameLift on the AWS Game Tech Blog.

Routing messages with Amazon SNS

There are two main categories of messages in gaming: messages targeted at a specific user, like private chat or trade requests, and group messages, such as chat or gameplay packets. A common strategy for sending and receiving messages is to use a socket server with a stateful connection. If your player base is small enough that everyone can connect to a single server, you can route messages between players simply by selecting different sockets. In most cases, though, you need multiple servers, which means those servers also need some way to route messages between themselves.

Routing messages between EC2 server instances is one use case where Amazon SNS can help. Let's assume you had player 1 on server A who wants to send a message to player 2 on server C, as shown in the following figure. In this scenario, server A could look at locally connected players, and when it can't find player 2, server A can forward the message to an SNS topic, which then propagates the message to the other servers.
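The routing logic described above can be sketched in a few lines of Python. This is an in-process stand-in for illustration only: the `Topic` and `GameServer` classes are hypothetical, and in production server A would publish to a real SNS topic (for example via boto3's `publish` call) with each server receiving the message through its subscribed endpoint.

```python
# Sketch of SNS-style fan-out routing between game servers.
# Topic stands in for an SNS topic; in production you would use
# boto3 to publish, and each server would be a topic subscriber.

class Topic:
    """Minimal stand-in for an SNS topic: fan out to all subscribers."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, server):
        self.subscribers.append(server)

    def publish(self, message):
        for server in self.subscribers:
            server.on_topic_message(message)


class GameServer:
    def __init__(self, name, topic):
        self.name = name
        self.local_players = {}   # player_id -> connection (omitted here)
        self.delivered = []       # messages delivered to local players
        self.topic = topic
        topic.subscribe(self)

    def connect(self, player_id):
        self.local_players[player_id] = object()

    def send(self, sender, recipient, text):
        message = {"to": recipient, "from": sender, "text": text}
        if recipient in self.local_players:
            self.delivered.append(message)   # recipient is local: deliver
        else:
            self.topic.publish(message)      # unknown: fan out via topic

    def on_topic_message(self, message):
        # Each server checks whether the recipient is connected locally.
        if message["to"] in self.local_players:
            self.delivered.append(message)


topic = Topic()
server_a = GameServer("A", topic)
server_c = GameServer("C", topic)
server_a.connect("player1")
server_c.connect("player2")

# Player 1 (on server A) messages player 2 (on server C).
server_a.send("player1", "player2", "gg")
print(server_c.delivered[0]["text"])  # gg
```

The same shape carries over to SNS: servers that don't host the recipient simply ignore the notification, so no server needs a global view of which player is connected where.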
Figure 3: SNS-backed player-to-player communication between two servers

Amazon SNS fills a role here that is similar to a message queue such as RabbitMQ or Apache ActiveMQ. Instead of Amazon SNS, you could run RabbitMQ, Apache ActiveMQ, or a similar package on Amazon EC2. The advantage of Amazon SNS is that you don't have to spend time administering and maintaining queue servers and software on your own. For more information about Amazon SNS, see What is Amazon Simple Notification Service? and Create a Topic in the Amazon SNS Developer Guide.

Mobile push notifications

Unlike the previous use case, which is designed to handle near real-time in-game messaging, mobile push is the best choice for sending users a message when they are out of game, to draw them back in. An example might be a user-specific event, such as a friend beating your high score, or a broader game event, such as a Double XP Weekend. Although Amazon SNS supports the ability to send push notifications directly to mobile clients, a better choice would be Amazon Pinpoint, which provides not just mobile push notifications but also email, voice messages, and SMS messaging, enabling a player-pleasing, multi-channel notification solution.

Last thoughts on game servers

It's easy to become obsessed with finding the perfect programming framework or pattern. Both RESTful and stateful game servers have their place, and any of the languages discussed previously will work well if programmed thoughtfully. More importantly, you need to spend time thinking about your overall game data architecture: where data lives, how to query it, and how to efficiently update it.

Relational vs NoSQL databases

The advent of horizontally scaled applications has changed the application tier and the traditional approach of a single large relational database. A number of new databases have become popular that eschew
traditional Atomicity, Consistency, Isolation, and Durability (ACID) concepts in favor of lightweight access, distributed storage, and eventual consistency. These NoSQL databases can be especially beneficial for games, where data structures tend to be lists and sets (for example, friends, levels, and items) as opposed to complex relational data.

As a general rule, the biggest bottleneck for online games tends to be database performance. A typical web-based app has a high number of reads and few writes; think of reading blogs, watching videos, and so forth. Games are quite the opposite, with reads and writes frequently hitting the database due to constant state changes in the game.

There are many database options out there, of both relational and NoSQL flavors, but the ones used most frequently for games on AWS are Amazon Aurora, Amazon ElastiCache for Redis, Amazon DynamoDB, Amazon RDS for MySQL, and Amazon DocumentDB (with MongoDB compatibility). First, we'll cover MySQL, because it's applicable to gaming and remains very popular. Combinations such as MySQL and Redis, or MySQL and DynamoDB, are very successful on AWS. All of the database alternatives described in this section support atomic operations, such as increment and decrement, which are crucial for gaming.

MySQL

As an ACID-compliant relational database, MySQL has the following advantages:

• Transactions – MySQL provides support for grouping multiple changes into a single atomic transaction that must be committed or rolled back. NoSQL stores typically lack multi-step transactional functionality.

• Advanced querying – Since MySQL speaks SQL, this provides the flexibility to perform complex queries that evolve over time. NoSQL databases typically only support access by key or a single secondary index, which means you must make careful data design decisions up front.

• Single source of truth – MySQL guarantees data consistency internally. Part of what makes many NoSQL
solutions faster is distributed storage and eventual consistency. (Eventual consistency means you could write a key on one node, fetch that key on another node, and have it not be there immediately.)

• Extensive tools – MySQL has extensive debugging and data analysis tools available for it. In addition, SQL is a general-purpose language that is widely understood.

These advantages continue to make MySQL attractive, especially for aspects of gaming such as account records, in-app purchases, and similar functionality where transactions and data consistency are paramount. Even gaming companies that are leveraging NoSQL offerings such as Redis and DynamoDB frequently continue to put transactional data such as accounts and purchases in MySQL.

If you're using MySQL on AWS, we recommend that you use Amazon RDS to host MySQL, because it can save you valuable deployment and support cycles. Amazon RDS for MySQL automates the time-consuming aspects of database management, such as launching EC2 instances, configuring MySQL, attaching Amazon Elastic Block Store (Amazon EBS) volumes, setting up replication, running nightly backups, and so on. In addition, Amazon RDS offers advanced features, including synchronous Multi-AZ replication for high availability, automated primary/replica failover, and read replicas for increased performance.

To get started with Amazon RDS, see Getting Started with Amazon RDS. The following table includes some configuration options that we recommend you implement when you create your RDS MySQL DB instances. For best performance, always launch production on an RDS DB instance that is separate from any of your Amazon RDS development/test DB instances.

Table 5: Recommended settings per environment (Development/Test vs. Production)

• DB instance class: Micro / Medium or larger
• Multi-AZ deployment: No / Yes (enables synchronous Multi-AZ replication and failover)
• Auto Minor
Version Upgrade: Yes / Yes
• Allocated Storage: 5 GB / 100 GB minimum (to enable Provisioned IOPS)
• Use Provisioned IOPS: N/A / Yes

Provisioned IOPS guarantees you a certain level of disk performance, which is important for large write loads. For more information about PIOPS, see Amazon RDS Provisioned IOPS Storage to Improve Performance.

Consider these additional guidelines when you create your RDS MySQL DB instances:

• Schedule Amazon RDS backup snapshots and upgrades during your low player count times, such as early morning. If possible, avoid running background jobs or nightly reports during this window, to prevent a query backlog.

• To find and analyze slow SQL queries in production, ensure you have enabled the MySQL slow query log in Amazon RDS, as shown in the following list. These settings are configured using Amazon RDS DB Parameter Groups. Note that there is a minor performance penalty for the slow query log.

  o Set slow_query_log = 1 to enable it. In Amazon RDS, slow queries are written to the mysql.slow_log table.
  o The value set in long_query_time determines that only queries taking longer than the specified number of seconds are included. The default is 10. Consider decreasing this value to 5, 3, or even 1.
  o Make sure to periodically rotate the slow query log, as described in Common DBA Tasks for MySQL DB Instances in the Amazon RDS User Guide.

As your game grows and your write load increases, resize your RDS DB instances to scale up. Resizing an RDS DB instance requires some downtime, but if you deploy it in Multi-AZ mode, as you would for production, this is limited to the time it takes to initiate a failover (typically a few minutes). For more information, see Modifying a DB Instance Running the MySQL Database Engine in the Amazon RDS User Guide. In addition, you can add one or more Amazon RDS read replicas to offload reads from your master RDS instance, leaving more cycles for database writes. For instructions
on deploying replicas with Amazon RDS, see Working with Read Replicas.

Amazon Aurora

Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. There are several key features that Amazon Aurora brings to a gaming workload:

• High performance – Amazon Aurora is designed to provide up to 5x the throughput of standard MySQL running on the same hardware. This performance is on par with commercial databases, for a significantly lower cost. On the largest Amazon Aurora instances, it's possible to provide up to 500,000 reads and 100,000 writes per second, with 10 millisecond latency between read replicas.

• Data durability – In Amazon Aurora, each 10 GB chunk of your database volume is replicated six ways across three Availability Zones, allowing for the loss of two copies of data without affecting database write availability, and three copies without affecting read availability. Backups are done automatically and continuously to Amazon S3, which is designed for 99.999999999% durability, with a retention period of up to 35 days. You can restore your database to any second during the retention period, up to the last five minutes.

• Scalability – Amazon Aurora is capable of automatically scaling its storage subsystem out to 64 TB of storage. This storage is automatically provisioned for you, so you don't have to provision storage ahead of time. As an added benefit, this means you pay only for what you use, reducing the costs of scaling. Amazon Aurora can also deploy up to 15 read replicas in any combination of Availability Zones, including cross-Region where Amazon Aurora is available. This allows for seamless failover in case of an instance failure.

The following are some recommendations for using Amazon Aurora in your gaming workload:

• Use the following DB instance classes: a t2.small instance in your development/test environments, and an r3.large or
larger instance in your production environment.

• Deploy read replicas in at least one additional Availability Zone to provide for failover and read operation offloading.

• Schedule Amazon RDS backup snapshots and upgrades during low player count times. If possible, avoid running jobs or reports against the database during this window, to prevent backlogging.

If your game grows beyond the bounds of a traditional relational database like MySQL or Amazon Aurora, we recommend that you perform a performance evaluation, including tuning parameters and sharding. In addition, you should look at using a NoSQL offering such as Redis or DynamoDB to offload some workloads from MySQL. In the following sections, we'll cover a few popular NoSQL offerings.

Redis

Best described as an atomic data structure server, Redis has some unique features not found in other databases. Redis provides foundational data types such as counters, lists, sets, and hashes, which are accessed using a high-speed text-based protocol. For details on available Redis data types, see the Redis data type documentation and An introduction to Redis data types and abstractions. These unique data types make Redis an ideal choice for leaderboards, game lists, player counts, stats, inventories, and similar data. Redis keeps its entire data set in memory, so access is extremely fast. For comparisons with Memcached, see Redis Benchmarks.

There are a few caveats concerning Redis that you should be aware of. First, you need a large amount of physical memory, because the entire dataset is memory-resident (that is, there is no virtual memory support). Replication support is also simplistic, and debugging tools for Redis are limited. Redis is not suitable as your only data store. But when used in conjunction with a disk-backed database such as MySQL or DynamoDB, Redis can provide a highly scalable solution for game data. Redis plus MySQL is a very popular solution for gaming.
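The Redis-plus-MySQL combination usually follows the cache-aside pattern covered later in the Caching section: check Redis first and fall back to MySQL on a miss. A minimal sketch of that read path, with a plain dict standing in for a Redis client and a stub function standing in for a real SQL query (in production these would be calls such as `redis.get`/`redis.set` via redis-py and a SELECT through a MySQL driver):

```python
# Cache-aside ("lazy population") sketch for the Redis + MySQL pattern.
# The dict stands in for Redis and query_mysql for a real SQL query;
# swap in redis-py and a MySQL driver for a production deployment.

cache = {}        # stand-in for Redis
db_reads = []     # records which lookups actually hit the database

def query_mysql(player_id):
    """Stub for something like: SELECT id, level FROM players WHERE id = %s"""
    db_reads.append(player_id)
    return {"id": player_id, "level": 12}

def get_player(player_id):
    key = f"player:{player_id}"
    value = cache.get(key)              # 1. check the cache first
    if value is None:                   # 2. cache miss: query the database
        value = query_mysql(player_id)
        cache[key] = value              # 3. populate for subsequent reads
    return value

get_player(42)        # miss: reads MySQL, fills the cache
get_player(42)        # hit: served from cache, no database read
print(len(db_reads))  # 1
```

In a real deployment you would also set a TTL on the cached key so that stale player data eventually expires rather than living in Redis forever.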
Redis uses minimal CPU, but it uses lots of memory. As a result, it's best suited to high-memory instances, such as the Amazon EC2 memory optimized instance family (that is, r3*). AWS offers a fully managed Redis service, Amazon ElastiCache for Redis. ElastiCache for Redis can handle clustering, primary/replica replication, backups, and many other common Redis maintenance tasks. For a deep dive on getting the most out of ElastiCache, see the AWS whitepaper Performance at Scale with Amazon ElastiCache.

MongoDB

MongoDB is a document-oriented database, which means that data is stored in a nested data structure, similar to a structure you would use in a typical programming language. MongoDB uses a binary variant of JSON called BSON for communication, which makes programming against it a matter of storing and retrieving JSON structures. This has made MongoDB popular for games and web applications, since server APIs are usually JSON too. MongoDB also offers a number of interesting hybrid features, including a SQL-like syntax that enables you to query data by range and composite conditions. MongoDB supports atomic operations such as increment/decrement and add/remove from list; this is similar to Redis support for these operations. For examples of atomic operations that MongoDB supports, see the MongoDB documentation on findAndModify.

MongoDB is widely used as a primary data store for games, and is frequently used in conjunction with Redis, since the two complement each other well. Transient game data, sessions, leaderboards, and counters are kept in Redis, and then progress is saved to MongoDB at logical points (for example, at the end of a level or when a new achievement is unlocked). Redis yields high-speed access for latency-sensitive game data, and MongoDB provides simplified persistence. MongoDB supports native replication and sharding as well, although you do have to configure and monitor these features yourself. For
an in-depth look at deploying MongoDB on AWS, see the AWS whitepaper MongoDB on AWS. Amazon DocumentDB (with MongoDB compatibility) is a fully managed document database service that supports MongoDB workloads. It's designed for high availability and performance at scale, and is highly secure.

Amazon DynamoDB

Finally, DynamoDB is a fully managed NoSQL solution provided by AWS. DynamoDB manages tasks such as synchronous replication and I/O provisioning for you, in addition to automatic scaling and managed caching. DynamoDB uses a Provisioned Throughput model, where you specify how many reads and writes you want per second, and the rest is handled for you under the hood. To set up DynamoDB, see the Getting Started Guide.

Games frequently use DynamoDB features in the following ways:

• Key-value store for user data, items, friends, and history
• Range key store for leaderboards, scores, and date-ordered data
• Atomic counters for game status, user counts, and matchmaking

Like MongoDB and MySQL, DynamoDB can be paired with a technology such as Redis to handle real-time sorting and atomic operations. Many game developers find DynamoDB to be sufficient on its own, but the point is you still have the flexibility to add Redis or a caching layer to a DynamoDB-based architecture. Let's revisit our reference diagram with DynamoDB to see how it simplifies the architecture.

Figure 4: A fully production-ready game backend running on AWS using DynamoDB

Table structure and queries

DynamoDB, like MongoDB, is a loosely structured NoSQL data store that allows you to save different sets of attributes on a per-record basis. You only need to predefine the primary key strategy you're going to use:

• Partition key – The partition key is a single attribute that DynamoDB uses as input to an internal hash function. This could be a player name, game ID, UUID, or
similar unique key. Amazon DynamoDB builds an unordered hash index on this key.

• Partition key and sort key – Referred to as a composite primary key, this type of key is composed of two attributes: the partition key and the sort key. DynamoDB uses the partition key value as input to an internal hash function, and all items with the same partition key are stored together, in sorted order by sort key value. For example, you could store game history as a duplet of [user_id, last_login]. Amazon DynamoDB builds an unordered hash index on the partition key attribute and a sorted range index on the sort key attribute. Only the combination of both keys is unique in this scenario.

For best querying performance, you should maintain each DynamoDB table at a manageable size. For example, if you have multiple game modes, it's better to have a separate leaderboard table for each game mode, rather than a single giant table. This also gives you the flexibility to scale your leaderboards separately, in the event that one game mode is more popular than the others.

Provisioned throughput

DynamoDB shards your data behind the scenes to give you the throughput you've requested. DynamoDB uses the concept of read and write units. One read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB in size. One write capacity unit represents one write per second for an item up to 1 KB in size. The defaults are 5 read and 5 write units, which means 20 KB of strongly consistent reads/second and 5 KB of writes/second. You can increase your read and/or write capacity at any time, by any amount, up to your account limits. You can also decrease the read and/or write capacity by any amount, but no more than four decreases in one day. Scaling can be done using the AWS Management Console or AWS CLI, by selecting the table and modifying it appropriately.

You can also take advantage of DynamoDB Auto Scaling by using the AWS Application
Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. DynamoDB Auto Scaling works in conjunction with Amazon CloudWatch alarms that monitor the capacity units; it scales according to your defined rules. There is a delay before the new provisioned throughput is available while data is repartitioned in the background. This doesn't cause downtime, but it does mean that DynamoDB scaling is best suited for changes over time, such as the growth of a game from 1,000 to 10,000 users. It isn't designed to handle hourly user spikes. For this, as with other databases, you need to leverage some form of caching to add resiliency.

To get the best performance from DynamoDB, make sure your reads and writes are spread as evenly as possible across your keys. Using a hexadecimal string, such as a hash key or checksum, is one easy strategy to inject randomness. For more details on optimizing DynamoDB performance, see Best Practices for DynamoDB in the Amazon DynamoDB Developer Guide.

Amazon DynamoDB Accelerator (DAX)

DAX allows you to provision a fully managed in-memory cache for DynamoDB that speeds up the responsiveness of your DynamoDB tables from millisecond-scale latency to microseconds. This acceleration comes without requiring any major changes in your game code, which simplifies deployment into your architecture. All you have to do is re-initialize your DynamoDB client with a new endpoint that points to DAX, and the rest of the code can remain untouched. DAX handles cache invalidation and data population without your intervention. This cache can help speed responsiveness when running events that might cause a spike in players, such as a seasonal DLC offering or a new patch release.

Other NoSQL options

There are a number of other NoSQL alternatives, including Riak, Couchbase, and Cassandra. You can use any of these for gaming, and there are
examples of gaming companies using them on AWS with success. As with choosing a server programming language, there is no perfect database: you need to weigh the pros and cons of each one.

Caching

For gaming, adding a caching layer in front of your database for frequently used data can alleviate a significant number of scalability problems. Even a short-lived cache of just a few seconds for data such as leaderboards, friend lists, and recent activity can greatly offload your database. Adding cache servers is also cheaper than adding additional database servers, so it lowers your AWS costs as well.

Memcached is a high-speed, memory-based key-value store that is the gold standard for caching. Redis features similar performance to Memcached, plus Redis has advanced data types. Both options perform well on AWS. You can choose to install Memcached or Redis on EC2 instances yourself, or you can use Amazon ElastiCache, the AWS managed caching service. Like Amazon RDS and DynamoDB, ElastiCache completely automates the installation, configuration, and management of Memcached and Redis on AWS. For more details on setting up ElastiCache, see Getting Started with Amazon ElastiCache in the Amazon ElastiCache User Guide.

ElastiCache groups servers in a cluster to simplify management. Most ElastiCache operations, like configuration, security, and parameter changes, are performed at the cache cluster level. Despite the use of the cluster terminology, ElastiCache nodes do not talk to each other or share cache data. ElastiCache deploys the same versions of Memcached and Redis that you would download yourself, so existing client libraries written in Ruby, Java, PHP, Python, and so on are completely compatible with ElastiCache.

The typical approach to caching is known as lazy population, or cache-aside. This means that the cache is checked and, if the value is not in the cache (a cache miss), the record is retrieved, stored in the cache, and returned. The
pattern in Python terms is: check ElastiCache for a value; if the cache doesn't have it, query the database; and then store the value back to ElastiCache for subsequent queries.

Lazy population is the most prevalent caching strategy because it only populates the cache when a client actually requests the data. This way, it avoids extraneous writes to the cache in the case of records that are infrequently (or never) accessed, or that change before being read. This pattern is so ubiquitous that most major web development frameworks, such as Rails, Django, and Grails, include plugins that wrap this strategy. The downside to this strategy is that when data changes, the next client that requests it incurs a cache miss, which means their response time is slower, because the new record needs to be queried from the database and populated into the cache.

This downside leads us to the second most prevalent caching strategy: for data that you know will be accessed frequently, populate the cache when records are saved, to avoid unnecessary cache misses. This means that client response times will be faster and more uniform. In this case, you simply populate the cache when you update the record, rather than when the next client queries it. The tradeoff here is that it could result in an unnecessarily high number of cache writes if your data is changing rapidly. In addition, writes to the database can appear slower to users, since the cache also needs to be updated. To choose between these two strategies, you need to know how often your data is changing versus how often it's being queried.

The final popular caching alternative is a timed refresh. This is beneficial for data feeds that span multiple different records, such as leaderboards or friend lists. In this strategy, you would have a background job that queries the database and refreshes the cache every few minutes. This decreases the write load on your cache and enables
additional caching to happen upstream (for example, at the CDN layer), because pages remain stable longer.

Amazon ElastiCache scaling

ElastiCache simplifies the process of scaling your cache instances up and down. ElastiCache provides access to a number of Memcached metrics in CloudWatch at no additional charge. You should set CloudWatch alarms based on these metrics to alert you to cache performance issues. You can configure these alarms to send emails when the cache memory is almost full, or when cache nodes are taking a long time to respond. We recommend that you monitor the following metrics:

• CPUUtilization – How much CPU Memcached or Redis is using. Very high CPU could indicate an issue.
• Evictions – The number of keys that have to be forced out of memory due to lack of space. This should be zero; if it's not near zero, you need a larger ElastiCache instance.
• GetHits/CacheHits and GetMisses/CacheMisses – How frequently does your cache have the keys you need? The higher the percentage of hits, the more you're offloading your database.
• CurrConnections – The number of clients that are currently connected (this depends on the application).

In general, monitoring hits, misses, and evictions is sufficient for most applications. If the ratio of hits to misses is too low, you should revisit your application code to make sure your cache code is working as expected. As mentioned, evictions should typically be zero 100 percent of the time. If evictions are nonzero, either scale up your ElastiCache nodes to provide more memory capacity, or revisit your caching strategy to ensure you're only caching what you need to cache.

Additionally, you can configure your cache node cluster to span multiple Availability Zones, to provide high availability for your game's caching layer. This ensures that in the event of an Availability Zone being unavailable, your database is not overwhelmed by a sudden spike in requests. When creating a cache cluster or adding nodes to an existing cluster, you can choose the
Availability Zones for the new nodes. You can either specify the requested number of nodes in each Availability Zone, or select the option to spread nodes across zones.

With Amazon ElastiCache for Redis, you can create a read replica in another Availability Zone. Upon a failure of the primary node, AWS provisions a new primary node. In scenarios where the primary node cannot be provisioned, you can decide which read replica to promote to be the new primary.

ElastiCache for Redis also supports a sharded cluster, with supported Redis engine versions 3 or higher. You can create clusters with up to 15 shards, expanding the overall in-memory data store to more than 3.5 TiB. Each shard can have up to 5 read replicas, giving you the ability to handle 20 million reads and 4.5 million writes per second. The sharded model, in conjunction with the read replicas, improves overall performance and availability. Data is spread across multiple nodes, and the read replicas support rapid, automatic failover in the event that a primary node has an issue.

To take advantage of the sharded model, you must use a Redis client that is cluster aware. The client treats the cluster as a hash table with 16,384 slots spread equally across the shards, and then maps the incoming keys to the proper shard. ElastiCache for Redis treats the entire cluster as a unit for backup and restore purposes; you don't have to think about or manage backups for the individual shards.

Binary game content with Amazon S3

Your database is responsible for storing user data, including accounts, stats, items, purchases, and so forth. But for game-related binary data, Amazon S3 is a better fit. Amazon S3 provides a simple HTTP-based API to PUT (upload) and GET (download) files. With Amazon S3, you pay only for the amount of data that you store and transfer. Using Amazon S3 consists of creating a bucket to store your data in, and then making HTTP requests to and from that
Binary game content with Amazon S3

Your database is responsible for storing user data, including accounts, stats, items, purchases, and so forth. But for game-related binary data, Amazon S3 is a better fit. Amazon S3 provides a simple HTTP-based API to PUT (upload) and GET (download) files. With Amazon S3, you pay only for the amount of data that you store and transfer. Using Amazon S3 consists of creating a bucket to store your data in and then making HTTP requests to and from that bucket. For a walkthrough of the process, see Create a Bucket in the Amazon S3 Getting Started Guide.

Amazon S3 is ideally suited for a variety of gaming use cases, including the following:

• Content downloads – Game assets, maps, patches, and betas
• User-generated files – Photos, avatars, user-created levels, and device backups
• Analytics – Storing metrics, device logs, and usage patterns
• Cloud saves – Game save data and syncing between devices (AWS AppSync would be a good choice as well)

Although you can store this type of data in a database, using Amazon S3 has a number of advantages, including the following:

• Storing binary data in a database is memory and disk intensive, consuming valuable query resources
• Clients can directly download the content from Amazon S3 using a simple HTTP/S GET
• Amazon S3 is designed for 99.999999999% durability and 99.99% availability of objects over a given year
• Amazon S3 natively supports features such as ETag, authentication, and signed URLs
• Amazon S3 plugs into the Amazon CloudFront CDN for distributing content quickly to large numbers of clients

With these factors in mind, let's look at the aspects of Amazon S3 that are most relevant for gaming.

Content delivery and Amazon CloudFront

Downloadable content (DLC) is a huge aspect of modern games from an engagement perspective, and it is becoming a primary revenue stream. Users expect an ongoing stream of new characters, levels, and challenges for months, if not years, after a game's release. Being able to deliver this content quickly and cost effectively has a big impact on the profitability of a DLC strategy. Although the game client itself is typically distributed through a given platform's app store, pushing a new version of the game just to make a new level available can be onerous and time consuming. Promotional or time-limited content, such as Halloween-themed assets or a long-weekend tournament, is usually easier to manage
yourself in a workflow that mirrors the rest of your server infrastructure. If you're distributing content to a large number of clients (for example, a game patch, expansion, or beta), we recommend that you use Amazon CloudFront in front of Amazon S3. CloudFront has points of presence (POPs) located throughout the world, which improves download performance. In addition, you can configure which Regions CloudFront serves to optimize your costs. For more information, see the CloudFront FAQ, in particular "How does CloudFront lower my costs?" Finally, if you anticipate significant CloudFront usage, you should contact our CloudFront sales team, because Amazon offers reduced pricing that is even lower than our on-demand pricing for high-usage customers.

Easy versioning with ETag

As mentioned earlier, Amazon S3 supports the HTTP ETag and If-None-Match HTTP headers, which are well known to web developers but frequently overlooked by game developers. These headers enable you to send a request for a piece of Amazon S3 content and include the MD5 checksum of the version you already have. If you already have the latest version, Amazon S3 responds with an HTTP 304 Not Modified; if not, it responds with an HTTP 200 along with the file data you need. Leveraging ETag in this manner makes any future use of CloudFront more powerful, because CloudFront also supports the Amazon S3 ETag. For more information, see Request and Response Behavior for Amazon S3 Origins in the Amazon CloudFront Developer Guide.

Finally, you also have the ability to geo-target or restrict access to your content through CloudFront's geo-targeting feature. Amazon CloudFront detects the country where your customers are located and forwards the country code to your origin servers. This allows your origin server to determine the type of personalized content to return to the customer based on their geographic location. This content could be anything from a localized dialog file for an RPG to localized asset packs for your game.
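The If-None-Match flow from the ETag discussion above can be sketched as follows. The HTTP client is injected as a callable so the sketch stays client-agnostic; `http_get` and the cache shape are illustrative assumptions, not an AWS API.

```python
# A minimal client-side cache keyed by URL, storing (etag, body) pairs.
_cache = {}

def fetch(url, http_get):
    """http_get(url, headers) -> (status, etag, body); e.g., a urllib wrapper."""
    headers = {}
    if url in _cache:
        # Ask Amazon S3 to skip the body if our copy is still current.
        headers["If-None-Match"] = _cache[url][0]
    status, etag, body = http_get(url, headers)
    if status == 304:            # Not Modified: reuse the local copy
        return _cache[url][1]
    _cache[url] = (etag, body)   # 200 OK: store the fresh copy and its ETag
    return body
```

Because CloudFront passes the Amazon S3 ETag through, the same conditional-request logic works unchanged whether the client talks to S3 directly or through the CDN.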
Uploading content to Amazon S3

Our other gaming use cases for Amazon S3 revolve around uploading data from the game, be it user-generated content, analytics, or game saves. There are two strategies for uploading to Amazon S3: upload directly to Amazon S3 from the game client, or upload by first posting to your REST API servers and then having your REST servers upload to Amazon S3. Although both methods work, we recommend uploading directly to Amazon S3 if possible, because this offloads work from your REST API tier. Uploading directly to Amazon S3 is straightforward and can even be accomplished directly from a web browser. For more information, see Browser-Based Uploads Using POST (AWS Signature Version 2) in the Amazon S3 Developer Guide. You can even create secure URLs for players to upload content (such as from an out-of-game tool) by using pre-signed URLs.

To protect against corruption, you should also consider calculating an MD5 checksum of the file and including it in the Content-MD5 header. This approach enables Amazon S3 to automatically verify that the file was not corrupted during upload. For more information, see PUT Object in the Amazon S3 API Reference.

User-generated content (UGC) is a great use case for uploading data to Amazon S3. A typical piece of UGC has two parts: binary content (for example, a graphic asset) and its metadata (for example, name, date, author, and tags). The usual pattern is to store the binary asset in Amazon S3 and then store the metadata in a database. Then you can use the database as your master index of available UGC that others can download. The following figure shows an example call flow that you can use to upload UGC to Amazon S3.

Figure 5: A simple workflow for transfer of game content

In this example, first you PUT the binary game asset (for example, an avatar or a level) to Amazon S3, which creates a new object in Amazon S3.
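That first step, a checksummed PUT, might look like the following sketch. `content_md5` computes the Content-MD5 header value (the base64 encoding of the raw MD5 digest, not the hex string); `upload_asset` is a hypothetical helper that requires boto3 and configured AWS credentials to actually run.

```python
import base64
import hashlib

def content_md5(data: bytes) -> str:
    # Content-MD5 is the base64 of the raw 128-bit MD5 digest; Amazon S3
    # rejects the PUT if the body it receives doesn't match this checksum.
    return base64.b64encode(hashlib.md5(data).digest()).decode("ascii")

def upload_asset(bucket: str, key: str, data: bytes) -> None:
    # Hypothetical wiring: requires boto3 and AWS credentials.
    import boto3
    boto3.client("s3").put_object(
        Bucket=bucket, Key=key, Body=data, ContentMD5=content_md5(data)
    )
```

For reference, `content_md5(b"")` is the well-known value `1B2M2Y8AsgTpgAmY7PhCfg==`, the Content-MD5 of an empty body.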
After you receive a success response from Amazon S3, you make a POST request to our REST API layer with the metadata for that asset. The REST API needs to have a service that accepts the Amazon S3 key name plus any metadata you want to keep, and then stores the key name and the metadata in the database. The game's other REST services can then query the database to find new content, popular downloads, and so on.

This simple call flow handles the case where the asset data is stored verbatim in Amazon S3, which is usually true of user-generated levels or characters. This same pattern works for game saves as well: store the game save data in Amazon S3, and then index it in your database by user_id, date, and any other important metadata. If you need to do additional processing of an Amazon S3 upload (for example, generating preview thumbnails), make sure to read the section on asynchronous jobs later in this book. In that section, we'll discuss adding Amazon SQS to queue jobs that handle these types of tasks.

Analytics and A/B testing

Collecting data about your game is one of the most important things you can do, and one of the easiest as well. Perhaps the trickiest part is deciding what to collect. You should consider keeping track of any reasonable metrics you can think of for a user (for example, total hours played, favorite characters or items, and current and highest level). If you aren't sure what to measure, or if you have a client that is not updated easily, Amazon S3 is a popular choice for storing raw metrics data, as it can be very cost effective. However, if you are able to formulate questions that you want answered beforehand, or if client updates are simple to distribute, you can focus on gathering the data that helps you answer those specific questions. After you've identified the data, follow these steps to track it:

1. Collect metrics in a local data file on the user's device (for example,
mobile, console, or PC). To make things easier later, we recommend using a CSV format and a unique filename. For example, a given user might have their data tracked in game_name.user_id.YYYYMMDDHHMMSS.csv or something similar.

2. Periodically persist the data by having the client upload the metrics file directly to Amazon S3. Or, you can integrate with Amazon Kinesis and adopt a loosely coupled architecture, as we discussed previously. When you go to upload a given data file to Amazon S3, open a new local file with a new file name. This keeps the upload loop simple.

3. For each file you upload, put a record somewhere indicating that there's a new file to process. Amazon S3 event notifications provide an excellent way to support this. To enable notifications, you must first add a notification configuration identifying the events you want Amazon S3 to publish, such as a file upload, and the destinations where you want Amazon S3 to send the event notifications. We recommend Amazon SQS, because you can then have a background worker listening to Amazon SQS for new files and processing them as they arrive. For more details, see the Amazon SQS section in this book.

4. As part of a background job, process the data using a framework such as Amazon EMR or another framework that you choose to run on Amazon EC2. This background process can look at new data files that have been uploaded since the last run and perform aggregation or other operations on the data. (Note that if you're using Amazon EMR, you may not need step 3, because Amazon EMR has built-in support for streaming new files.)

5. Optionally, feed the data into Amazon Redshift for additional data warehousing and analytics flexibility. Amazon Redshift is an ANSI SQL-compliant columnar data warehouse that you pay for by the hour. This enables you to perform queries across large volumes of data, such as sums and min/max, using familiar SQL-compliant tools.

Repeat these steps in a loop, uploading and processing data asynchronously.
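Steps 1 and 2 above, local collection with periodic rotation, can be sketched as follows. The filename helper mirrors the game_name.user_id.YYYYMMDDHHMMSS.csv convention suggested in step 1; the user ID, event names, and CSV row layout are illustrative assumptions.

```python
import csv
import datetime

def metrics_filename(game, user_id, now=None):
    # Unique per user and per rotation window (step 1's naming convention).
    now = now or datetime.datetime.utcnow()
    return f"{game}.{user_id}.{now:%Y%m%d%H%M%S}.csv"

def append_metric(path, event, value):
    # One CSV row per event keeps later EMR/Athena processing simple.
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.datetime.utcnow().isoformat(), event, value]
        )
```

When the client uploads the current file to Amazon S3 (step 2), it calls `metrics_filename` again to open a fresh file, keeping the upload loop simple.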
The following figure shows how this pattern works.

Figure 6: A simple pipeline for analytics and A/B testing

For both analytics and A/B testing, the data flow tends to be unidirectional. That is, metrics flow in from users and are processed, and then a human makes decisions that affect future content releases or game features. Using A/B testing as an example, when you present users with different items, screens, and so forth, you can record the choice they were given along with their subsequent actions (for example, purchase or cancel). Then periodically upload this data to Amazon S3 and use Amazon EMR to create reports. In the simplest use case, you could just generate cleaned-up data from Amazon EMR in CSV format into another Amazon S3 bucket, and then load this into a spreadsheet program. For more information on analytics and Amazon EMR, see Data Lakes and Analytics on AWS and the Amazon EMR Documentation.

Amazon Athena

Gleaning insights quickly and cheaply is one of the best ways that developers can improve their games. Traditionally, this has been relatively difficult, because data normally has to be extracted from game application servers, stored somewhere, transformed, and then loaded into a database in order to be queried later. This process can take a significant amount of time and compute resources, greatly increasing the cost of running such tasks. Amazon Athena assists with your analytical pipeline by providing the means of querying data stored in Amazon S3 using standard SQL. Because Athena is serverless, there is no infrastructure to provision or manage, and generally there is no requirement to transform data before applying a schema to start querying. However, keep the following points in mind to optimize performance while using Athena for your queries:

• Ad hoc queries – Because Athena is priced at a base of $5 per TB of data
scanned, you incur no charges when there aren't any queries being run. Athena is ideally suited for running queries on an ad hoc basis, when information must be gleaned from data quickly without running an extract, transform, and load (ETL) process first.
• Proper partitioning – Partitioning data divides tables into parts that keep related entries together. Partitions act as virtual columns. You define them at table creation, and they can help reduce the amount of data scanned per query, thereby improving performance and reducing the cost of any particular query. You can restrict the amount of data scanned by a query by specifying filters based on the partition. For example, consider the following query:

  SELECT count(*) FROM lineitem WHERE l_gamedate = '2019-10-31'

A non-partitioned table would have to be scanned in its entirety, looking through potentially millions of records and gigabytes of data, slowing down the query and adding unnecessary cost. A properly partitioned table can help speed queries and significantly reduce cost by cutting the amount of data queried by Athena. For a detailed example, see Top 10 Performance Tuning Tips for Amazon Athena on the AWS Big Data Blog.
• Compression – Just like partitioning, proper compression of data can help reduce network load and costs by reducing data size. It's also best to make sure that the compression algorithm you choose allows for splittable files, so Athena's execution engine can increase parallelism for additional performance.
• Presto knowledge – Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes, ranging from gigabytes to petabytes. Athena uses Presto, and therefore understanding Presto can help you optimize the various queries that you run on Athena. For example, the ORDER BY clause returns the results of a query in sort order. To do the sort, Presto must send all
rows of data to a single worker and then sort them. This can cause memory pressure on Presto, which can make the query take a long time to execute; worse, the query could fail. If you are using the ORDER BY clause to look at the top or bottom N values, use a LIMIT clause to reduce the cost of the sort significantly by pushing the sorting and limiting out to individual workers, rather than doing the sort on a single worker.

Amazon S3 performance considerations

Amazon S3 can scale to tens of thousands of PUTs and GETs per second. To achieve this scale, there are a few guidelines you should follow to get the best performance out of Amazon S3. First, as with DynamoDB, make sure that your Amazon S3 key names are evenly distributed, because Amazon S3 determines how to partition data internally based on the first few characters in the key name. Let's assume your bucket is called mygame-ugc and you store files based on a sequential database ID:

  http://mygame-ugc.s3.amazonaws.com/10752.dat
  http://mygame-ugc.s3.amazonaws.com/10753.dat
  http://mygame-ugc.s3.amazonaws.com/10754.dat
  http://mygame-ugc.s3.amazonaws.com/10755.dat

In this case, all of these files would likely live in the same internal partition within Amazon S3, because the keys all start with 107. This limits your scalability, because it results in writes that are sequentially clustered together. A solution is to use a hash function to generate the first part of the object name in order to randomize the distribution of names. One strategy is to take an MD5 or SHA1 hash of the filename and prefix the Amazon S3 key with the first few characters of that hash, as shown in the following example:

  http://mygame-ugc.s3.amazonaws.com/988-10752.dat
  http://mygame-ugc.s3.amazonaws.com/483-10753.dat
  http://mygame-ugc.s3.amazonaws.com/d5d-10754.dat
  http://mygame-ugc.s3.amazonaws.com/06f-10755.dat

Here's a variation with a Python SHA1 example:

  #!/usr/bin/env python
  import hashlib

  filename = "10752.dat"  # example object name
  sha1 = hashlib.sha1(filename.encode()).hexdigest()[0:3]
  path = sha1 + "-" + filename

For more information about maximizing S3 performance, see Best Practices Design Patterns: Optimizing Amazon S3 Performance in the Amazon S3 Developer Guide. If you anticipate a particularly high PUT or GET load, file an AWS Support ticket.

Loosely coupled architectures with asynchronous jobs

A loosely coupled architecture, built by decoupling components, refers to designing your server components so that they can operate as independently as possible. A common approach is to put queues between services, so that a sudden burst of activity on one part of your system doesn't cascade to other parts. Some aspects of gaming are difficult to decouple, because data needs to be as up to date as possible to provide a good matchmaking and gameplay experience for players. However, most data, such as cosmetic or character data, doesn't have to be up to the millisecond.

Leaderboards and avatars

Many gaming tasks can be decoupled and handled in the background. For example, updating a user's stats must be done in real time, so that if a user exits and then re-enters the game, they won't lose progress. However, re-ranking the global top-100 leaderboard isn't required every time a user posts a new high score; most users appear far down the leaderboard. Instead, the ranking process can be decoupled from score posting and performed in the background every few minutes. This approach has minimal impact on the game experience, because game ranks are highly volatile in any active online game.

As another example, consider allowing users to upload a custom avatar for their character. In this case, your front-end servers put a message into a queue, such as Amazon SQS, about the new avatar upload. You write a background job that runs periodically, pulls avatars off the queue, processes them, and marks them as available in MySQL, Aurora, DynamoDB, or whatever database you're using.
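A minimal version of that background worker might look like the following sketch, assuming boto3 and an existing queue URL (both placeholders). The SQS client is injectable so the loop can be exercised without AWS; `process` stands in for your real job, such as resizing an uploaded avatar.

```python
def run_worker(queue_url, process, client=None, max_batches=None):
    """Pull messages, process them, and delete only after success."""
    if client is None:
        import boto3  # deferred; assumes AWS credentials are configured
        client = boto3.client("sqs")
    batches = 0
    while max_batches is None or batches < max_batches:
        resp = client.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,  # long polling: fewer empty responses
        )
        for msg in resp.get("Messages", []):
            process(msg["Body"])  # e.g., resize the uploaded avatar
            # Deleting only after processing lets SQS redeliver the message
            # to another worker if this instance dies mid-job.
            client.delete_message(
                QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
            )
        batches += 1
```

Because the delete happens after `process` returns, a crashed worker simply lets the message become visible again after the visibility timeout, which is exactly the redelivery behavior described in the SQS tips below.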
The background job runs on a different set of EC2 instances, which can be set up to automatically scale just like your front-end servers. To help you get started quickly, Elastic Beanstalk provides worker environments that simplify this process by managing the Amazon SQS queue and running a daemon process on each instance that reads from the queue for you. This approach is an effective way to decouple your front-end servers from backend processing, and it enables you to scale the two independently. For example, if image resizing is taking too long, you can add additional job instances without needing to scale your REST servers too.

The rest of this section focuses on Amazon SQS. Note that you could implement this pattern with an alternative, such as RabbitMQ or Apache ActiveMQ deployed to Amazon EC2, instead.

Amazon SQS

Amazon SQS is a fully managed queue solution with a long-polling HTTP API. This makes it easy to interface with, regardless of the server languages you're using. To get started with Amazon SQS, see Getting Started with Amazon SQS in the Amazon SQS Developer Guide. Here are a few tips for making the best use of Amazon SQS:

• Create your SQS queues in the same Region as your API servers to make writes as fast as possible. Your asynchronous job workers can live in any Region, because they are not time dependent. This enables you to run API servers in Regions near your users and job instances in more economical Regions.
• Amazon SQS is designed to scale horizontally. A given Amazon SQS client can process about 50 requests a second; the more Amazon SQS client processes you add, the more messages you can process concurrently. For tips on adding additional worker processes and EC2 instances, see Increasing Throughput with Horizontal Scaling and Action Batching in the Amazon SQS Developer Guide.
• Consider using Amazon EC2 Spot Instances for your job workers to save money. Amazon
SQS is designed to redeliver messages that aren't explicitly deleted, which protects against EC2 instances going away mid-job. Make sure to delete messages only after you have completed processing them; this enables another EC2 instance to retry the job if a given instance fails while running.
• Consider message visibility, which you can think of as the redelivery time if a message is not deleted. The default is 30 seconds. You may need to increase this if you have long-running jobs, to avoid multiple queue readers receiving the same message.
• Amazon SQS also supports dead-letter queues. A dead-letter queue is a queue that other (source) queues can target for messages that can't be processed (consumed) successfully. You can set aside and isolate these messages in the dead-letter queue to determine why their processing doesn't succeed.

In addition, Amazon SQS has the following caveats:

• Messages are not guaranteed to arrive in order. You may receive messages in random order (for example, 2, 3, 5, 1, 7, 6, 4, 8). If you need strict ordering of messages, see the following FIFO Queues section.
• Messages typically arrive quickly, but occasionally a message may be delayed by a few minutes.
• Messages can be duplicated, and it's the responsibility of the client to de-duplicate.

Taken together, this information means that you must make sure your asynchronous jobs are coded to be idempotent and resilient to delays. Resizing and replacing an avatar is a good example of idempotence, because doing it twice yields the same result.

Finally, if your job workload scales up and down over time (for example, perhaps more avatars are uploaded when more users are online), consider using Auto Scaling to launch Spot Instances. Amazon SQS offers a number of metrics that you can automatically scale on, the most useful being ApproximateNumberOfMessagesVisible. The number of visible messages is basically your queue backlog.
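In production you would wire this metric to an Auto Scaling policy (for example, a CloudWatch step-scaling policy) rather than hand-rolling the logic, but the threshold decision reads roughly like this sketch. The 100/10 defaults match the illustrative thresholds discussed in this section; tune them to how many jobs your workers clear per minute.

```python
def scaling_action(visible_messages, scale_up_at=100, scale_down_at=10):
    """Map the ApproximateNumberOfMessagesVisible backlog to a fleet action."""
    if visible_messages >= scale_up_at:
        return "scale_up"    # backlog is piling up: add job instances
    if visible_messages < scale_down_at:
        return "scale_down"  # queue has drained: shed idle instances
    return "hold"
```

Keeping a gap between the two thresholds prevents the fleet from flapping when the backlog hovers around a single cutoff.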
For example, depending on how many jobs you can process each minute, you could scale up when this metric reaches 100 and then scale back down when it falls below 10. For more information about Amazon SQS and Amazon SNS metrics, see Amazon SNS Metrics and Dimensions and Amazon SQS Metrics and Dimensions in the Amazon CloudWatch User Guide.

FIFO queues

Although the recommended method of using Amazon SQS is to engineer and architect your application to be resilient to duplication and misordering, you may have certain tasks where the ordering of messages is absolutely critical to proper functioning and duplicates can't be tolerated. For example, with microtransactions, a user wants to buy a particular item once and only once, and this action must be strictly regulated. To support this requirement, First-In-First-Out (FIFO) queues are available in select AWS Regions. FIFO queues provide the ability to process messages both in order and exactly once. There are additional limitations when working with FIFO queues, due to the emphasis on message order and delivery. For more details about FIFO queues, see FIFO (First-In-First-Out) Queues in the Amazon SQS Developer Guide.

Other queue options

In addition to Amazon SQS and Amazon SNS, there are dozens of other approaches to message queues that can run effectively on Amazon EC2, such as RabbitMQ, ActiveMQ, and Redis. With all of these, you are responsible for launching a set of EC2 instances and configuring them yourself, which is outside the scope of this book. Keep in mind that running a reliable queue is much like running a highly available database: you need to consider high-throughput disk (such as Amazon EBS PIOPS), snapshots, redundancy, replication, failover, and so forth. Ensuring the uptime and durability of a custom queue solution can be a time-consuming task, and such a solution can fail at the worst times, like during your highest load peaks.

Cost of the cloud

With AWS, you no longer need
to dedicate valuable resources to building costly infrastructure, including purchasing servers and software licenses or leasing facilities. With AWS, you can replace large upfront expenses with lower variable costs and pay only for what you use, for as long as you need it. All AWS services are available on demand and don't require long-term contracts or complex licensing dependencies. Some of the advantages of AWS include the following:

• On-Demand Instances – AWS offers a pay-as-you-go approach for over 70 cloud services, enabling game developers to deploy both quickly and cheaply as their game gains users. Like the utilities that provide power or water, you pay for only what you consume, and once you stop using a service, there are no additional costs.
• Reserved Instances – Some AWS services, like Amazon EC2, allow you to enter into a 1- or 3-year agreement to gain additional savings over the on-demand cost of these services. With Amazon EC2 in particular, you can choose to pay no upfront cost in exchange for a reduced hourly cost, or pay all upfront for additional savings over the year (no hourly costs).
• Spot Instances – Amazon EC2 Spot Instances enable you to bid on spare Amazon EC2 capacity as a method of significantly reducing your computing spend. Spot Instances are great for applications that are tolerant of workload interruptions; use cases include batch processing and analytics pipelines that aren't critical to your primary game functioning.
• Savings Plans – Savings Plans is a flexible pricing model that provides savings of up to 72% on your AWS compute usage. This pricing model offers lower prices than On-Demand in exchange for a commitment to use a specific amount of compute power for a one- or three-year period.
• Serverless model – Some other services, like AWS Lambda, are more granular in their approach to pricing. Instead of being pay-by-the-hour, they are billed in either very
small units of time, like milliseconds, or by request count instead of time. This allows you to truly pay for only what you use, instead of leaving a service up but idle and accruing costs.

Conclusion and next steps

We've covered a lot of ground in this book. Let's revisit the major takeaways and some simple steps you can take to begin your game's journey on AWS:

• Start simple with two EC2 instances behind an Elastic Load Balancing load balancer. Choose either Amazon RDS or Amazon DynamoDB as your database. Consider using AWS Elastic Beanstalk to manage this backend stack.
• Store binary content such as game data, assets, and patches on Amazon S3. Using Amazon S3 offloads network-intensive downloads from your game servers. Consider CloudFront if you're distributing these assets globally.
• Always deploy your EC2 instances and databases to multiple Availability Zones for best availability. This is as easy as splitting your instances across two Availability Zones to begin with.
• Add caching via ElastiCache as your server load grows. Create at least one ElastiCache node in each Availability Zone where you have application servers.
• As the load grows, offload time-intensive operations to background tasks by using Amazon SQS or another queue such as RabbitMQ. This enables your EC2 app instances and database to handle a higher number of concurrent players.
• If database performance becomes an issue, add read replicas to spread out the read/write load. Evaluate whether a NoSQL store such as DynamoDB or Redis could be added to handle certain database tasks.
• At extreme loads, advanced strategies such as event-driven servers or sharded databases may be necessary. However, wait to implement these until necessary, since they add complexity to development, deployment, and debugging.

Finally, remember that Amazon Web Services has a team of business and technical people dedicated to supporting our gaming customers. To
contact us, fill out the form at the AWS Game Tech website.

Contributors

Contributors to this document include:

• Greg McConnel, Sr. Manager, AWS Security and Identity Compliance
• Keith Lafaso, Sr. Technical Account Manager, AWS Enterprise Support
• Chris Blackwell, Sr. Software Development Engineer, AWS Marketing

Further reading

For additional information, see:

• AWS Game Tech website
• AWS Marketplace
• AWS Support
• AWS Architecture Center
• AWS Whitepapers & Guides
• AWS Documentation

Blog posts and articles

• Best Practices in Evaluating Elastic Load Balancing
• Top 10 Performance Tuning Tips for Amazon Athena
• Best Practices for Amazon EMR
• Fitting the Pattern: Serverless Custom Matchmaking with Amazon GameLift
• Performance at Scale with Amazon ElastiCache
• MongoDB on AWS: Guidelines and Best Practices

Document revisions

December 2019 – First publication
March 11, 2021 – Reviewed for technical accuracy

ITIL Asset and Configuration Management in the Cloud

January 2017

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2017, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates,
suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

• Introduction
• What Is ITIL?
• AWS Cloud Adoption Framework
• Asset and Configuration Management in the Cloud
• Asset and Configuration Management and AWS CAF
• Impact on Financial Management
• Creating a Configuration Management Database
• Managing the Configuration Lifecycle in the Cloud
• Conclusion
• Contributors

Abstract

Cloud initiatives require more than just the right technology. They also must be supported by organizational changes, such as people and process changes. This paper is intended for IT service management (ITSM) professionals who are supporting a hybrid cloud environment that leverages AWS. It outlines best practices for asset and configuration management, a key area in the IT Infrastructure Library (ITIL), on the AWS cloud platform.

Introduction

Leveraging the experiences of enterprise customers who have successfully integrated their cloud strategy with their IT Infrastructure Library (ITIL)-based service management practices, this paper will cover:

• Asset and Configuration Management in ITIL
• The AWS Cloud Adoption Framework (AWS CAF)
• Cloud-specific asset and configuration management best practices, such as creating a configuration management database

What Is ITIL?
The ITIL framework, managed by AXELOS Limited, defines a commonly used best-practice approach to IT service management (ITSM). Although it builds on ISO/IEC 20000, which provides a "formal and universal standard for organizations seeking to have their ITSM capabilities audited and certified," ITIL goes one step further to propose the operational processes required to deliver the standard. ITIL is composed of five volumes that describe the ITSM lifecycle as defined by AXELOS:

Service Strategy – Understands organizational objectives and customer needs.
Service Design – Turns the service strategy into a plan for delivering the business objectives.
Service Transition – Develops and improves capabilities for introducing new services into supported environments.
Service Operation – Manages services in supported environments.
Continual Service Improvement – Achieves incremental and large-scale improvements to services.

Each volume addresses the capabilities that enterprises must have in place. Asset and Configuration Management is one of the chapters in the Service Transition volume. For more information, see the AXELOS website.

AWS Cloud Adoption Framework

AWS CAF is used to help enterprises modernize ITSM practices so that they can take advantage of the agility, security, and cost benefits afforded by public or hybrid clouds. ITIL and AWS CAF are compatible. Like ITIL, AWS CAF organizes and describes all of the activities and processes involved in planning, creating, managing, and supporting modern IT services. It offers practical guidance and comprehensive guidelines for establishing, developing, and running cloud-based IT capabilities.

AWS CAF is built on seven perspectives:

People – Selecting and training IT personnel with appropriate skills; defining and empowering delivery teams with accountabilities and service-level agreements.
Process – Managing programs and projects to be on time, on target, and within budget, while keeping risks at acceptable levels.
Security – Applying a comprehensive and rigorous method for describing the structure and behavior of an organization's security processes, systems, and personnel.
Business – Identifying, analyzing, and measuring the effectiveness of IT investments.
Maturity – Analyzing, defining, and anticipating demand for, and acceptance of, planned IT capabilities and services.
Platform – Defining and describing core architectural principles, standards, and patterns that are required for optimal IT capabilities and services.
Operations – Transitioning, operating, and optimizing the hybrid IT environment, enabling efficient and automated IT service management.

AWS CAF is an important supplement to the enterprise ITSM frameworks used today, because it provides enterprises with practical operational advice for implementing and operating ITSM in a cloud-based IT infrastructure. For more information, see the AWS Cloud Adoption Framework.

Asset and Configuration Management in the Cloud

In practice, asset and configuration management aligns very closely with other ITIL processes, such as incident management, change management, problem management, and service-level management. ITIL defines an asset as "any resource or capability that could contribute to the delivery of a service." Examples of assets include:

• virtual or physical storage
• virtual or physical servers
• a software license
• undocumented information known to internal team members

ITIL defines a configuration item as "an asset that needs to be managed in order to deliver an IT service." All configuration items are assets, but many assets are not configuration items. Examples of configuration items include a virtual or physical server or a software license. Every configuration item should be under the control of change management.

The goals of asset and configuration management are to:

• Support ITIL processes by providing accurate configuration information
to assist decision-making (for example, the authorization of changes and the planning of releases) and to help resolve incidents and problems faster
- Minimize the number of quality and compliance issues caused by incorrect or inaccurate configuration of services and assets
- Define and control the components of services and infrastructure, and maintain accurate configuration information on the historical, planned, and current state of the services and infrastructure

The value to the business is:

- Optimization of the performance of assets improves the performance of the service overall. For example, it mitigates risks caused by service outages and failed licensing audits.
- Asset and configuration management provides an accurate representation of a service, release, or environment, which enables:
  - Better planning of changes and releases
  - Improved incident and problem resolution
  - Meeting service levels and warranties
  - Better adherence to standards and legal and regulatory obligations (fewer nonconformances)
  - Traceable changes
  - The ability to identify the costs for a service

The following diagram from AXELOS shows that there are elements in asset and configuration management that directly relate to elements in change management. Asset and configuration management underpins change management; without it, the business is subject to increased risk and uncertainty.

Figure 1: Asset and configuration management in ITIL

Asset and Configuration Management and the AWS CAF

As with most specifications covered in the Service Transition volume of ITIL, asset and configuration management falls into the Cloud Service Management function of the AWS CAF Operations perspective. People and process changes should be supported by a cloud governance forum or Center of Excellence whose role is to use the AWS CAF to manage through the transition. From the perspective of ITSM, your operations team should certainly have a seat at the table.

As shown in Figure 2, the AWS CAF accounts for the management of assets and configuration items in a hybrid environment. Information can come from the on-premises environment or any number of cloud providers (private or public).

Figure 2: AWS CAF integration

Impact on Financial Management

One of the most important aspects of asset management is to ensure data is available for these financial management processes:

- Capitalization and depreciation
- Software license management
- Compliance requirements

These activities typically require comprehensive asset lifecycle management processes, which take significant cost and effort. One of the benefits of moving IT to the cloud is that the financial nature of the transaction moves from a capital expenditure (CAPEX) to an operating expenditure (OPEX). You can do away with the large capital outlays (for example, a server refresh) that require months of planning, as well as with amortization and depreciation.

Creating a Configuration Management Database

A configuration management database (CMDB) is used by IT to track and manage its resources. The CMDB presents a logical model of the enterprise infrastructure to give IT more control over the environment and to facilitate decision-making. At a minimum, a CMDB contains the following:

- Configuration item (CI) records, with all associated attributes captured
- A relationship model between different CIs
- A history of all service impacts in the form of incidents, changes, and problems

In a traditional IT setup, the goals of establishing a CMDB are met through:

- Discovery tools used to create a record of existing CIs
- Comprehensive change management processes to keep track of the creation of, and updates to, CIs
- Integration of incident and problem management data with impacted CIs, using ITSM workflow tools like
BMC, Hewlett-Packard, or ServiceNow

These processes and tools in turn help organizations better understand the IT environment by providing insight not only into the impact of incidents, problems, and changes, but also into financial resources, service availability, and capacity management.

There are some challenges to creating a CMDB for cloud resources, due to:

- The inherently dynamic nature of cloud resource provisioning, where resources can be created or terminated through predefined business policies or application architecture elements like Auto Scaling
- The difficulty of capturing cloud resource data in a format that can be imported into, and maintained in, a single system of record for all enterprise CIs
- A prevalence of shadow IT organizations, which makes information sharing and even manual consolidation of enterprise IT assets and CIs difficult

Configuration Management Inventory for Cloud Resources

There are two logical approaches AWS customers can take to create a CMDB for cloud resources:

Figure 3: Options for Enterprise CMDB Systems

AWS Config helps customers manage their CIs in the cloud. AWS Config provides a detailed view of the configuration of AWS resources in an AWS account. With AWS Config, customers can do the following:

- Get a snapshot of all the supported resources associated with an AWS account at any point in time
- Retrieve the current configurations of the resources
- Retrieve historical configurations of the resources
- Receive a notification whenever a resource is created, modified, or deleted
- View relationships between resources

This information is important to any IT organization for CI discovery and recording, change tracking, audit and compliance, and security incident analysis. Customers can access this information from the AWS Config console or programmatically extract it into their CMDBs.

As an example of the potential for integration with legacy systems, ServiceNow, the platform-as-a-service (PaaS) provider of enterprise service management software, is now integrated with AWS Config. This means ServiceNow users can leverage Option 1, shown in Figure 3.

Managing the Configuration Lifecycle in the Cloud

One of the goals of service asset and configuration management is to manage the CI lifecycle and to track and record all changes. One of the key aspects of the cloud is a much tighter integration of the software and infrastructure configuration lifecycles. This section covers aspects of configuration lifecycle management across instances, stacks, and applications:

- Instance creation templates: Every IT organization has security and compliance standards for instances introduced into its IT environments. Amazon Machine Images (AMIs) are a robust way of standardizing instance creation. Users can opt for AWS- or third-party-provided predefined AMIs, or define custom AMIs. If you create AMI templates for instance provisioning, you can define instance configuration and environmental add-ins in a predefined and programmatic manner. A typical custom AMI might prescribe the base OS version and associated security, monitoring, and configuration management agents.
- Instance lifecycle management: For every instance or resource created in an IT environment, there are multiple lifecycle management activities that must be performed. Some of the standard tasks are patch management, hardening policies, version upgrades, environment variable changes, and so on. These activities can be performed manually, but most IT organizations use robust configuration management tools like Chef, Puppet, and System Center Configuration Manager to perform these tasks. AWS allows easy integration with these tools to ensure a consistent enterprise configuration management approach.
- Environment provisioning templates: AWS
CloudFormation is useful for provisioning end-to-end environments (also referred to as stacks) in a consistent and repeatable fashion, without actually provisioning each component individually. You don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work; AWS CloudFormation takes care of this for you. You can use a template to create identical copies of the same stack without effort or errors. Templates are simple JSON-formatted text files that can be held securely using your current source control mechanisms.
- Application configuration and lifecycle management: In today's world of agile development, development teams leverage continuous integration and continuous delivery best practices. AWS provides seamless integration with tools like Jenkins (CI) and GitHub for code management and deployment. Services like AWS CodePipeline, AWS CodeDeploy, and AWS CodeCommit can be used to manage the application lifecycle.

Conclusion

Service asset and configuration management processes consist of critical activities for the provisioning and maintenance of the health of IT systems. Consistent management of configuration items through their lifecycle leads to efficient and effective system health and performance. AWS enables best practices across every level of resource in an application stack. With the tools, automation, and integration available on the AWS platform, IT organizations can achieve significant productivity gains. Successful implementation and execution of service asset and configuration management processes should be seen as a shared responsibility that can be achieved through the right commitment by IT organizations, enabled by the AWS platform.

Contributors

The following individuals contributed to this document:

- Darren Thayre, Transformation Consultant, AWS Professional Services
- Anindo Sengupta, Chief Operating Officer, Minjar Cloud Solutions

Notes

1. ITIL Service Operation Publication, AXELOS, 2007, page 5.
2. https://www.axelos.com/best-practice-solutions/itil/what-is-itil
3. http://aws.amazon.com/professional-services/CAF/
,General,consultant,Best Practices ITIL_Event_Management_in_the_Cloud_An_AWS_Cloud_Adoption_Framework_Addendum,ITIL Event Management in the Cloud: An AWS Cloud Adoption Framework Addendum

January 2017

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

- Introduction
- What is ITIL?
- What is the AWS Cloud Adoption Framework?
- Event Management in ITIL
- Event Management and the CAF
- Cloud-Specific Event Management Best Practices for IT Service Managers
- Cloud Event Monitoring, Detection, and Communication Using Amazon CloudWatch
- Conclusion
- Contributors

Abstract

Many enterprises have successfully migrated some of their on-premises IT workloads to the cloud. An enterprise must also deploy an IT Service Management (ITSM) framework so it can efficiently and effectively operate those IT capabilities. This whitepaper outlines best practices for event management in a hybrid cloud environment using Amazon Web Services (AWS).

Amazon Web Services – ITIL Event Management in the Cloud

Introduction

This whitepaper is for IT Service Management (ITSM) professionals who support a hybrid cloud environment that uses AWS. The focus is on Event Management, a core chapter of the Service Operation volume of the IT Infrastructure Library (ITIL). Many AWS enterprise customers have successfully integrated their cloud strategy with their ITIL-based IT service management practices. This whitepaper provides you with background in the following areas:

- Event Management in ITIL
- The AWS Cloud Adoption Framework
- Cloud-specific Event Management best practices

What is ITIL?

The IT Infrastructure Library (ITIL) framework, managed by AXELOS Limited, defines a commonly used best-practice approach to IT Service Management (ITSM). It builds on ISO/IEC 20000, which provides a "formal and universal standard for organizations seeking to have their ITSM capabilities audited and certified" [1]; however, the ITIL framework goes one step further to propose the operational processes required to deliver the standard. ITIL is composed of five volumes that describe the entire ITSM lifecycle as defined by AXELOS. To explore these volumes in detail, go to https://www.axelos.com/

The following table gives a brief synopsis of each of the five volumes:

ITIL Volume | Description
Service Strategy | Describes how to design, develop, and implement service management as a strategic asset.
Service Design | Describes how to design and develop services and service management processes.
Service Transition | Describes the development and improvement of capabilities for transitioning new and changed services into operations.
Service Operation | Embodies practices in the management of service operation.
Continual Service Improvement | Guidance on creating and maintaining value for customers.

What is the AWS Cloud Adoption Framework?

The Cloud Adoption Framework (CAF) offers comprehensive guidelines for establishing, developing, and running cloud-based IT capabilities. AWS uses the CAF to help enterprises modernize their ITSM practices so that they can take advantage of the agility, security, and cost benefits afforded by the cloud. Like ITIL, the CAF organizes and describes the activities and processes involved in planning, creating, managing, and supporting a modern IT service. ITIL and the CAF are compatible. In fact, the CAF provides enterprises with practical, operational advice on how to implement and operate ITSM in a cloud-based IT infrastructure. The details of the AWS CAF are beyond the scope of this whitepaper, but if you want to learn more, you can read the CAF whitepaper at http://d0.awsstatic.com/whitepapers/aws_cloud_adoption_framework.pdf

The CAF examines IT management in the cloud from seven core perspectives, as shown in the following table:

CAF Perspective | Description
People | Selecting and training IT personnel with appropriate skills; defining and empowering delivery teams with accountabilities and service-level agreements.
Process | Managing programs and projects to be on time, on target, and within budget, while keeping risks at acceptable levels.
Security | Applying a comprehensive and rigorous method of describing the structure and behavior of an organization's security processes, systems, and personnel.
Strategy & Value | Identifying, analyzing, and measuring the effectiveness of IT investments that generate the most optimal business value.
Maturity | Analyzing, defining, and anticipating demand for, and acceptance of, envisioned IT capabilities and services.
Platform | Defining and describing core architectural principles, standards, and patterns that are required for optimal IT capabilities and services.
Operation | Transitioning, operating, and optimizing the hybrid IT environment, enabling efficient and automated IT service management.

Event Management
in ITIL

The ITIL specification defines an event as "any detectable or discernible occurrence that has significance for the management of the IT infrastructure or the delivery of IT service." In other words, an event is something that happens to an IT system that has business impact. An occurrence can be anything that has material impact on the business, such as environmental conditions, security intrusions, warnings, errors, triggers, or even normal functioning. Occurrences are things that an enterprise needs to monitor, preferably in an automated fashion, giving you the visibility you need to run your systems more efficiently and effectively over time, with minimal downtime.

The goal of Event Management is to detect events, prioritize and categorize them, and determine what to do about them. In practice, Event Management is used with a central monitoring tool, which registers events from services or other tools such as configuration tools, availability and capacity management tools, or specialized monitoring tools. Event Management acts as an umbrella function that sits on top of other ITIL processes such as Incident Management, Change Management, Problem Management, and Service-Level Management, and divides the work depending on the type of event or its severity.

AXELOS provides the following flow chart to describe what an enterprise's Event Management process should look like:

Figure 1: Event management in ITIL

AXELOS observes that not all events are, or need to be, detected or registered. Defining the events to be managed is an explicit and important management decision. After management decides which events are relevant, service components must be able to publish the events, or the events must be pollable by a monitoring tool. Events must also be actionable: the Event Management process, whether automated or manual, must be able to determine what to do for any event. This determination can take many forms, such as ignoring, logging, or escalating the event. Finally, the Event Management process must be able to review and eventually close events.

Event Management and the CAF

As with most specifications covered in the Service Operation volume of ITIL, Event Management falls nicely into the Cloud Service Management function of the AWS CAF Operating Domain. Of course, cloud initiatives require more than just the right technology; they must also be supported by organizational changes, including people and process changes. Such changes should be supported by a Cloud Governance Forum or Center of Excellence that has the role of managing through the transition using the CAF. From the perspective of ITSM, your operations team should certainly have a seat at the table.

Figure 2 illustrates how the CAF looks at managing events and actions in a hybrid environment. Review and action is based on information that comes from the on-premises environment or any number of cloud providers (private or public).

Figure 2: CAF integration

Cloud-Specific Event Management Best Practices for IT Service Managers

AWS provides the building blocks for your enterprise to create its own Event Management infrastructure. These building blocks allow for the integration of cloud services with on-premises or more traditional environments. In particular, AWS provides full support for ITIL Section 4.1.10, Designing for Event Management. AWS does not provide Event Management as a service; enterprises that enable Event Management need to deploy and manage their own Event Management infrastructure.

Cloud Event Monitoring, Detection, and Communication Using Amazon CloudWatch

AWS supports instrumentation by providing tools to publish and poll events. In particular, you can use the Amazon CloudWatch API for automated management and integration into your Event Management infrastructure. Amazon CloudWatch monitors your AWS
resources and the applications that you run on AWS in real time [2]. You can use Amazon CloudWatch to collect and track metrics, which are the variables you want to measure for your resources and applications. In addition, Amazon CloudWatch alarms (or monitoring scripts) can send notifications or automatically make changes to the resources that you are monitoring, based on rules that you define. For information on CloudWatch pricing, go to the Amazon CloudWatch pricing page [3].

You can use CloudWatch to monitor the CPU usage and disk reads and writes of your Amazon Elastic Compute Cloud (Amazon EC2) instances. Then you can use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop underused instances and save money. In addition to monitoring the built-in metrics that come with AWS, you can monitor your own custom metrics: you can publish and monitor metrics that you derive from your applications to reflect your business needs. With Amazon CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health [4].

Amazon EC2 Monitoring Detail: Read more about Amazon EC2 monitoring in the AWS documentation: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_ec2.html

By default, metrics and calculated statistics are presented graphically in the Amazon CloudWatch console. You can also retrieve these metrics using the API or command line tools. When you use Auto Scaling, you can configure alarm actions to stop, start, or terminate an Amazon EC2 instance when certain criteria are met. In addition, you can create alarms that initiate Auto Scaling and Amazon Simple Notification Service (Amazon SNS) actions on your behalf [5].

An enterprise that does not have its own event management infrastructure can implement basic ITIL Event Management using Amazon CloudWatch. However, most large enterprises, especially those running hybrid cloud designs, will maintain their own event management infrastructure using products such as BMC Remedy, Microsoft System Center, or HP OpenView. Many event management tools are integrated with Amazon Web Services; see the following table for some examples.

Tool | Reference
MS System Center | http://aws.amazon.com/windows/system-center/
BMC Remedy | http://media.cms.bmc.com/documents/439126_BMC_Managing_AWS_SWP.pdf
IBM Tivoli | https://aws.amazon.com/marketplace/pp/B007P7MEK0
CA APM | https://aws.amazon.com/marketplace/pp/B00GGX0N0W/ref=portal_asin_url
CA Nimsoft | http://www.ca.com/~/media/Files/DataSheets/ca-nimsoft-monitor-for-amazon-web-services.pdf
HP SiteScope | http://h30499.www3.hp.com/t5/Business-Service-Management-BAC/HP-SiteScope-integration-with-Amazon-CloudWatch-AutoScaling-AWS/ba-p/2408860

This type of design is fully compatible with AWS. However, enterprises will need to deploy SNMP, Amazon SNS, or other interfaces that sit between Amazon CloudWatch and their enterprise Event Management / Service Desk tool. This will ensure that AWS-generated events can pass through Amazon CloudWatch and into the enterprise Event Manager.

IT service management professionals who integrate Amazon CloudWatch into their enterprise event management infrastructure need to answer the following questions:

- Are the right events being propagated?
- Are the events tracked at the right level of granularity?
- Is there a mechanism to review and update triggers, limits, and event handling rules?
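The questions above can be made concrete with a small routing sketch. The following Python is a hypothetical illustration, not an AWS API: the ITIL event types (informational, warning, exception) and the ignore/log/escalate-style actions follow the ITIL guidance discussed earlier, while the function names and the specific mapping from CloudWatch alarm states are assumptions for this example. It parses the JSON message that a CloudWatch alarm publishes to an Amazon SNS topic and decides how an enterprise event manager might handle it.

```python
import json

# Hypothetical sketch of "event handling rules" for CloudWatch notifications.
# The state-to-type mapping and the actions below are illustrative assumptions.

# CloudWatch alarm states, as delivered in the SNS notification payload.
STATE_TO_ITIL_TYPE = {
    "OK": "informational",            # normal functioning
    "INSUFFICIENT_DATA": "warning",   # may indicate a monitoring gap
    "ALARM": "exception",             # a threshold was breached
}

# Example event-handling rules: what the event manager does per event type.
ACTION_FOR_TYPE = {
    "informational": "log",      # record for trend analysis only
    "warning": "review",         # surface for periodic human review
    "exception": "escalate",     # hand off to Incident Management
}

def route_cloudwatch_notification(sns_message: str) -> dict:
    """Classify one CloudWatch alarm notification and pick an action."""
    alarm = json.loads(sns_message)
    # Unknown states default conservatively to "exception".
    event_type = STATE_TO_ITIL_TYPE.get(alarm.get("NewStateValue"), "exception")
    return {
        "alarm": alarm.get("AlarmName", "unknown"),
        "event_type": event_type,
        "action": ACTION_FOR_TYPE[event_type],
    }
```

In this sketch, a notification carrying `"NewStateValue": "ALARM"` is routed to the escalate path, where an adapter could open a ticket in the enterprise service desk; reviewing and updating the two mapping tables is the "mechanism to review and update event handling rules" that the third question asks for.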
Best Practices for Monitoring in AWS

Make monitoring a priority to head off small problems before they become big ones. Automate monitoring tasks as much as possible, and check the log files on your services (Amazon EC2, Amazon S3, Amazon RDS, and so on). Create and implement a monitoring plan that collects data from all parts of your AWS solution so that you can more easily debug a multi-point failure if one occurs. Your monitoring plan should address, at a minimum, the following questions:

- What are your monitoring goals?
- What resources will you monitor?
- How often will you monitor these resources?
- What monitoring tools will you use?
- Who will perform the monitoring tasks?
- Who should receive notification when something goes wrong?

Incident Management

Events classified as Warnings or Exceptions may trigger incident management processes. These processes restore normal service operation as quickly as possible and minimize any adverse impact on business operations. In the ITIL process, first attempt to resolve warnings or exceptions by consulting a database of known errors or a configuration management database (CMDB). If the warning or exception is not in the database, then classify the incident and transfer it to Incident Management. Incident Management typically consists of first-line support specialists who can resolve most of the common incidents. When they cannot resolve an incident, they escalate it to the second-line support team, and the process continues until the incident is resolved. Incident Management tries to find a quick resolution to the incident so that the service degradation or downtime is minimized [1].

Figure 3: Incident management in ITIL

It is worth noting that a well-designed cloud infrastructure can be far more resilient to faults: there is less likelihood of generating production incidents where faults are able to gracefully fail over. Underlying problems can be resolved through Problem Management.

Incident Management Best Practices

As part of cloud-integrated Incident Management, enterprises should define several parameters:

- Ensure that relevant employees and staff understand which services are AWS-operated versus enterprise-operated (for example, an Amazon EC2 instance versus a business application running on that instance).
- Ensure that relevant staff and processes are aware of the SLAs associated with AWS-operated services, and integrate those SLAs into the existing enterprise Incident Management infrastructure.
- Define explicit SLAs (including resolution time scales) for services operated by the enterprise but running on the AWS infrastructure.
- Define incident severity levels and priorities for all services running on the AWS infrastructure.
- Subscribe to Enterprise Support and agree on the role the Amazon Technical Account Manager (TAM) will have during incident responses. For example, for Severity 1 incidents, should the TAM be part of the emergency resolution bridge / emergency response team?
- Ensure 360-degree ticket integration: make sure that ticket opening and closing is seamless across on-premises and cloud systems.
- Define recovery runbook recipes (an Incident Model) that include the recovery steps in chronological order, individual responsibilities, escalation rules, timescales and SLA thresholds, media/communications roles, and post-mortems. Note that in a cloud environment, where infrastructure is defined as code, termination and reboot might be a faster way to recover from an incident than standard debugging approaches: service can be immediately restored, and root problems can be addressed offline as part of Problem Management.
- Where possible, incident remediation should occur automatically, with no human intervention. Where human intervention is required, that intervention should be simple, with mostly automated runbook steps.

Problem Management

Problem Management is the process of managing the lifecycle of all problems, with the goal of preventing repeat incidents. Whereas the goal of Incident Management is to recover, Problem Management is about resolving root causes so that incidents do not recur, and about maintaining information about problems and related solutions so organizations can reduce the impact of incidents.

Enterprises operating a hybrid environment will likely have their own Problem Management infrastructure. The goal should be to seamlessly integrate the process for addressing problems related to AWS into the existing Problem Management infrastructure. Enterprises have the option of purchasing AWS Enterprise Support, where they can agree on the role the Amazon Technical Account Manager (TAM) will have during Problem Management. For example, where the problem explicitly involves part of the AWS infrastructure, the TAM might be involved with formal problem detection, prioritization, and diagnosis workshops and discussions, or be required to log AWS-related
problems with the enterprise Problem Logging platform / Known Error Database If AWS infrastructure is not part of the root cause it could play a role in supporting diagnosis Here the TAM can support the information gathering Conclusion Enterprises that migrate to the cloud can feel confident that their existing investments in ITIL and particularly Event Management can be leveraged going forward The Cloud Operating model is consistent with traditional IT Service Management discipline This whitepaper gives you a proposed suite of best practices to help smooth the transition and ensure continuing compliance Contributors The following individual contributed to this document:  Eric Tachibana AWS Professional Services 1 ITIL Service Operation Publication Office of Government Commerce 2007 Page 5 2 For up to 2 weeks! Notes ArchivedAmazon Web Services – ITIL Event Management in the Cloud Page 12 3 http://awsamazoncom/cloudwatch/pricing/ 4 What Is Amazon CloudWatch? (http://docsawsamazoncom/AmazonCloudWatch/latest/DeveloperGuide/W hatIsCloudWatchhtml ) 5 For more information about creating CloudWatch alarms see Creating Amazon CloudWatch Alarms in the CloudWatch documentation (http://docsawsamazoncom/AmazonCloudWatch/latest/DeveloperGuide/ AlarmThatSendsEmailhtml ),General,consultant,Best Practices Lambda_Architecture_for_Batch_and_RealTime_Processing_on_AWS_with_Spark_Streaming_and_Spark_SQL,Lambda Architecture for Batch and Stream Processing October 2018 This paper has been archived For the latest technical content about Lambda architecture see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers Archived © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the 
information in this document and any use of AWS’s products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions or assurances from AWS, its affiliates, suppliers or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. Contents Introduction 1 Overview 2 Data Ingestion 3 Data Transformation 4 Data Analysis 5 Visualization 6 Security 6 Getting Started 7 Conclusion 7 Contributors 7 Further Reading 8 Document Revisions 8 Abstract Lambda architecture is a data processing design pattern to handle massive quantities of data and integrate batch and real-time processing within a single framework. (Lambda architecture is distinct from, and should not be confused with, the AWS Lambda compute service.) This paper covers the building blocks of a unified architectural pattern that unifies stream (real-time) and batch processing. After reading this paper, you should have a good idea of how to set up and deploy the components of a typical Lambda architecture on AWS. This whitepaper is intended for Amazon Web Services (AWS) Partner Network (APN) members, IT infrastructure decision makers, and administrators. Amazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS, Page 1 Introduction When processing large amounts of semistructured data, there is usually a delay between the point when data is collected and its availability in reports and dashboards. Often the delay results from the need to validate or at least identify granular data. In some cases, however, being able to react immediately to new data is more important than being 100 percent certain of the data’s validity. The AWS services frequently used to analyze large volumes of data are Amazon EMR and
Amazon Athena. For ingesting and processing stream (real-time) data, AWS services like Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, Amazon Kinesis Data Analytics, Spark Streaming, and Spark SQL on top of an Amazon EMR cluster are widely used. Amazon Simple Storage Service (Amazon S3) forms the backbone of such architectures, providing the persistent object storage layer for the AWS compute services. Lambda architecture is an approach that mixes both batch and stream (real-time) data processing and makes the combined data available for downstream analysis or viewing via a serving layer. It is divided into three layers: the batch layer, serving layer, and speed layer. Figure 1 shows the batch layer (batch processing), serving layer (merged serving layer), and speed layer (stream processing). Figure 1: Lambda Architecture In Figure 1, data is sent both to the batch layer and to the speed layer (stream processing). In the batch layer, new data is appended to the master dataset, a set of records containing information that cannot be derived from the existing data. It is an immutable, append-only dataset. This process is analogous to extract, transform, and load (ETL) processing. The results of the batch layer are called batch views, and are stored in a persistent storage layer. The serving layer indexes the batch views produced by the batch layer. It is a scalable data store that swaps in new batch views as they become available. Due to the latency of the batch layer, the results from the serving layer are out of date. The speed layer compensates for the high latency of updates to the serving layer from the batch layer: it processes the data that has not yet been processed in the last batch of the batch layer, producing real-time views that are always up to date. The speed layer is responsible for creating real-time views that are continuously discarded as data
makes its way through the batch and serving layers. Queries are resolved by merging the batch and real-time views. Recomputing data from scratch helps if the batch or real-time views become corrupted: because the main dataset is append-only, it is easy to restart and recover from the unstable state. The end user can always query the latest version of the data, which is available from the speed layer. Overview This section provides an overview of the various AWS services that form the building blocks for the batch, serving, and speed layers of the Lambda architecture. Each of the layers can be built using various analytics, streaming, and storage services available on the AWS platform. Figure 2: Lambda Architecture Building Blocks on AWS The batch layer consists of the landing Amazon S3 bucket for storing all of the data (e.g., clickstream, server, and device logs) that is dispatched from one or more data sources. The raw data in the landing bucket can be extracted and transformed into a batch view for analytics using AWS Glue, a fully managed ETL service on the AWS platform. Data analysis is performed using services like Amazon Athena, an interactive query service, or a managed Hadoop framework using Amazon EMR. Using Amazon QuickSight, customers can also perform visualization and one-time analysis. The speed layer can be built by using the following three options available with Amazon Kinesis: • Kinesis Data Streams and the Kinesis Client Library (KCL) – Data from the data source can be continuously captured and streamed in near real time using Kinesis Data Streams. With the Kinesis Client Library (KCL), you can build your own application that can preprocess the streaming data as it arrives and emit the data for generating incremental views and downstream analysis. • Kinesis Data Firehose – As data is ingested in real time, customers can use Kinesis
Data Firehose to easily batch and compress the data to generate incremental views. Kinesis Data Firehose also allows customers to execute their custom transformation logic using AWS Lambda before delivering the incremental view to Amazon S3. • Kinesis Data Analytics – This service provides the easiest way to process the data that is streaming through Kinesis Data Streams or Kinesis Data Firehose using SQL. This enables customers to gain actionable insights in near real time from the incremental stream before storing it in Amazon S3. Finally, the serving layer can be implemented with Spark SQL on Amazon EMR, to process the data in the Amazon S3 bucket from the batch layer, and Spark Streaming on an Amazon EMR cluster, which consumes data directly from Amazon Kinesis streams, to create a view of the entire dataset that can be aggregated, merged, or joined. The merged dataset can be written to Amazon S3 for further visualization. Both of these components are part of the same code base, which can be invoked as required, thus reducing the overhead of maintaining multiple code bases. The metadata (e.g., table definition and schema) associated with the processed data is stored in the AWS Glue catalog to make the data in the batch view immediately available for queries by downstream analytics services in the batch layer. Customers can use a Hadoop-based stream processing application for analytics, such as Spark Streaming on Amazon EMR. Data Ingestion The data ingestion step comprises ingestion by both the speed and batch layers, usually in parallel. For the batch layer, historical data can be ingested at any desired interval. For the speed layer, the fast-moving data must be captured as it is produced and streamed for analysis. The data is immutable, and time-tagged or time-ordered. Some examples of high-velocity data include log collection, website clickstream logging, social media streams, and IoT device event data. This fast data is captured and ingested as part of the speed layer using Amazon Kinesis
Data Streams, which is the recommended service to ingest streaming data into AWS. Kinesis offers key capabilities to cost-effectively process and durably store streaming data at any scale. Customers can use the Amazon Kinesis Agent, a prebuilt application, to collect and send data to an Amazon Kinesis stream, or use the Amazon Kinesis Producer Library (KPL) as part of a custom application. For batch ingestion, customers can use AWS Glue or AWS Database Migration Service to read from source systems such as RDBMSs, data warehouses, and NoSQL databases. Data Transformation Data transformation is a key step in the Lambda architecture, where the data is manipulated to suit downstream analysis. The raw data ingested into the system in the previous step is usually not conducive to analytics as is. The transformation step involves data cleansing, which includes deduplication, incomplete-data management, and attribute standardization. It also involves changing the data structures if necessary, usually into an OLAP model, to facilitate easy querying of the data. AWS Glue, Amazon EMR, and Amazon S3 form the set of services that allow users to transform their data. Kinesis Data Analytics enables users to get a view into their data stream in real time, which makes downstream integration with batch data easy. Let’s dive deeper into data transformation and look at the various steps involved: 1. The data ingested via the batch mechanism is put into an S3 staging location. This data is a true copy of the source, with little to no transformation. 2. The AWS Glue Data Catalog is updated with the metadata of the new files. The Glue Data Catalog can integrate with Amazon Athena and Amazon EMR, and forms a central metadata repository for the data. 3. An AWS Glue job is used to transform the data and store it in a new S3 location for integration with real-time data. AWS Glue provides many canned transformations, but if you need to write
your own transformation logic, AWS Glue also supports custom scripts. 4. Users can easily query the data on Amazon S3 using Amazon Athena. This helps in making sure there are no unwanted data elements that get into the downstream bucket. Getting a view of the source data up front allows development of more targeted metrics; designing analytical applications without a view of the source data, or getting a very late view into it, could be risky. Since Amazon Athena uses a schema-on-read approach instead of schema-on-write, it allows users to query the data as is and eliminates this risk. 5. Amazon Athena integrates with Amazon QuickSight, which allows users to build reports and dashboards on the data. 6. For real-time ingestion, the data transformation is applied on a window of data as it passes through the stream, and the data is analyzed iteratively as it comes into the stream. Amazon Kinesis Data Streams, Kinesis Data Firehose, and Kinesis Data Analytics allow you to ingest, analyze, and deliver real-time data into storage platforms like Amazon S3 for integration with batch data. Kinesis Data Streams interfaces with Spark Streaming, which is run on an Amazon EMR cluster, for further manipulation. Kinesis Data Analytics allows you to run analytical queries on the data stream in real time, which gives you a view into the source data and lets you make sure it aligns with what is expected from the dataset. By following the preceding steps, you can create a scalable data transformation platform on AWS. It is also important to note that AWS Glue, Amazon S3, Amazon Athena, and Amazon Kinesis are serverless services. By using these services in the transformation step of the Lambda architecture, we can remove the overhead of maintaining servers and of scaling them when the volume of data to transform increases. Data Analysis In this phase, you apply your query to analyze the data in the three layers: • Batch Layer – The data source
for batch analytics could be the raw master dataset directly or the aggregated batch view from the serving layer. The focus of this layer is to increase the accuracy of analysis by querying a comprehensive dataset across multiple or all dimensions and all available data sources. • Speed Layer – The focus of the analysis in this layer is to analyze the incoming streaming data in near real time and to react immediately, based on the analyzed results, within accepted levels of accuracy. • Serving Layer – In this layer, the merged query is aimed at joining and analyzing the data from both the batch view from the batch layer and the incremental stream view from the speed layer. This suggested architecture on the AWS platform includes Amazon Athena for the batch layer and Amazon Kinesis Data Analytics for the speed layer. For the serving layer, we recommend using Spark Streaming on an Amazon EMR cluster to consume the data from Amazon Kinesis Data Streams from the speed layer, and Spark SQL on an Amazon EMR cluster to consume data from Amazon S3 in the batch layer. Both of these components are part of the same code base, which can be invoked as required, thus reducing the overhead of maintaining multiple code bases. The sample code that follows highlights using Spark SQL and Spark Streaming to join data from both the batch and speed layers. Figure 2: Sample Code Visualization The final step in the Lambda architecture workflow is metrics visualization. The visualization layer receives data from the batch, stream, and combined serving layers. The purpose of this layer is to provide a unified view of the analysis metrics that were derived from the data analysis step. Batch Layer: The output of the analysis metrics in the batch layer is generated by Amazon Athena. Amazon QuickSight integrates with Amazon Athena to generate dashboards that can be used for visualizations. Customers also have a
choice of using any other BI tool that supports JDBC/ODBC connectivity. These tools can be connected to Amazon Athena to visualize batch-layer metrics. Stream Layer: Amazon Kinesis Data Analytics allows users to build custom analytical metrics that change based on real-time streaming data. Customers can use Kinesis Data Analytics to build near real-time dashboards for metrics analyzed in the streaming layer. Serving Layer: The combined dataset of batch and stream metrics is stored in the serving layer in an S3 bucket. This unified view of the data is available for customers to download, or to connect to a reporting tool like Amazon QuickSight to create dashboards. Security As part of the AWS Shared Responsibility Model, we recommend customers use AWS security best practices and features to build a highly secure platform to run the Lambda architecture on AWS. Here are some points to keep in mind from a security perspective: • Encrypt end to end. The architecture proposed here makes use of services that support encryption. Make use of the native encryption features of each service whenever possible. Server-side encryption (SSE) is the least disruptive way to encrypt your data on AWS, and allows you to integrate encryption features into your existing workflows without a lot of code changes. • Follow the rule of minimal access when working with policies. Identity and Access Management (IAM) policies can be made very granular, allowing customers to create restrictive resource-level policies. This concept also extends to S3 bucket policies. Moreover, customers can use S3 object-level tags to allow or deny actions at the object level. Make use of these capabilities to ensure the resources in AWS are used securely. • When working with AWS services, make use of IAM roles instead of embedding AWS credentials. • Have an optimal networking architecture in place by carefully considering the security
groups a ccess control lists (ACL) and routing tables that exist in the Amazon Virtual Private Cloud (Amazon VPC ) Resources that do not need access to the internet should not be in a public subnet Resources that require only outbound internet access should make use of the n etwork address translation (NAT) gateway to allow outbound traffic Communication to Amazon S3 from within th e Amazon VPC should make use of the VPC endpoint for Amazon S3 or a AWS private link Getting Started Refer to the AWS Big Data blog post Unite Real Time and Batch Analytics Using the Big Data Lambda Architecture Without Servers! which provides a walkthrough of how you can use AWS services to build an end toend Lambda architecture Conclusion The Lambda architecture described in this paper provides the building blocks of a unified architectural pattern that unifies stream (real time) and batch processing within a single code base Through the use of Spark Streaming and Spark SQL APIs you implement your business logic function once and then reuse the code in a batch ETL process as well as for real time streaming processes In this way you can quickly implement a real time layer to complement the batch processing one In the long term this archit ecture will reduce your maintenance overhead It will also reduce the risk for errors resulting from duplicate code bases Contributors The following individuals and organizations contributed to this document: • Rajeev Sriniv asan Solutions Architect Amazo n Web Services • Ujjwal Ratan S olutions Architect Amazon Web Services ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 8 Further Reading For additional information see the following : • AWS Whitepapers • Data Lakes and Analytics on AWS Document Revisions Date Description October 2018 Update May 2015 First publication Archived,General,consultant,Best Practices Lambda_Architecture_for_Batch_and_Stream_Processing,Lambda Architecture for Batch and Stream 
Processing October 2018 This paper has been archived For the latest technical content about Lambda architecture see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers Archived © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its c ustomers Archived Contents Introduction 1 Overview 2 Data Ingestion 3 Data Transformation 4 Data Analysis 5 Visualization 6 Security 6 Getting Started 7 Conclusion 7 Contributors 7 Further Reading 8 Document Revisions 8 Archived Abstract Lambda architecture is a data processing design pattern to handle massive quantities of data and integrate batch and real time processing within a single framework (Lambda architecture is distinct from and should not be confused with the AWS Lambda comput e service ) This paper covers the building blocks of a unified architectural pattern that unifies stream (real time) and batch proces sing After reading this paper you should have a good idea of how to set up and deploy the components of a typical Lambda architecture on AWS This white paper is intended for Amazon Web Services (AWS) Partner Network (APN) members IT infrastructure decision makers and administrators ArchivedAmazon Web Services – Lambda 
Architecture for Batch and Stream Processing on AWS Page 1 Introduction When processing large amounts of semi structured data there is usually a delay between the point when data is collected and its availability in reports and dashboards Often the delay results from the need to validate or at least identify granular data I n some cases however being able to react immediately to new data is more important than being 100 percent certain of the data’s validity The AWS services frequently used to analyze large volumes of data are Amazon EMR and Amazon Athena For ingesting and processing s tream or real time data AWS services like Amazon Kinesis Data Streams Amazon Kinesis Data Firehose Amazon Kinesis Data Analytics Spark Streaming and Spark SQL on top of an Amazon EMR cluster are widely used Amazon Simple Storage Servic e (Amazon S3) forms the backbone of such architectures providing the persistent object storage layer for the AWS compute service Lambda a rchitecture is an approach that mixes both batch and stream (real time) data processing and makes the combined data available for downstream analysis or viewing via a serving layer It is divided into three layers: the batch layer serving layer and speed layer Figure 1 shows the b atch layer (batch processing) serving layer (merged serving layer) and speed layer (stream processing) In Figure 1 data is sent both to the batch layer and to the speed layer (stream processing) In the batch layer new data is appended to the master data set It consists of a set of records containing information that cannot be derived from the existing data It is an immutable append only dataset This process is analogous to extract transform and load (ETL) processing The results of the batch layer are called batch views and are stored in a persis tent storage layer The serving layer indexes the batch views produced by the batch layer It is a scalable Figure 1: Lambda Architecture ArchivedAmazon Web Services – Lambda Architecture for Batch and 
Stream Processing on AWS Page 2 data store that swaps in new batch views as they become available Due to the latency of the batch layer the results from the serving layer are outofdate The speed layer compensates for the high latency of updates to the serving layer from the batch layer The speed layer processes data that has not been processed in the last batch of the batch layer This layer produces the real time views that are always up todate The speed layer is responsible for creating realtime views that are continuously discarded as data makes its way through the batch and serving layers Queries are resolved by merging the batch and real time views Recomputing data from scratch helps if the batch or real time views become corrupt ed This is because the main data set is append only and it is easy to restart and recover from the unstable state The end user can always query the latest version of the data which i s available from the speed layer Overview This section provides an overview of the various AWS services that form the building blocks for the batch serving and speed layers of lambda architecture Each of the layers in the Lambda architecture can be built using various analytics streaming and storage services available on the AWS platform Figure 2: Lambda Architecture Building Blocks on AWS The batch layer consists of the landing Amazon S3 bucket for storing all of the data ( eg clickstream server device logs and so on ) that is dispatched from one or more data sources The raw data in the landing bucket can be extracted and transformed into a batch view for analytics using AWS Glue a fully managed ETL service on the AWS platform Data analysis is performed u sing services like Amazon Athena an interactive query service or managed Hadoop framework using Amazon EMR Using Amazon QuickSight customer s can also perform visualization and onetime analysis ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 3 The speed layer 
can be built by using the following three options available with Amazon Kinesis : • Kinesis Data Stream s and Kinesis Client Library (KCL) – Data from the data source can be continuously captured and stream ed in near real time using Kinesis Data Stream s With the Kinesis Client Library ( KCL) you can build your own application that can preprocess the streaming data as they arrive and emit the data for generating incremental view s and downstream analysis • Kinesis Data Firehose – As data is ingested in real time customer s can use Kinesis Data Firehose to easily batch and compress the data to generate incremental views Kinesis Data Firehose also allows customer to execute their custom transformation logic using AWS Lambda before delivering the incremental view to Amazon S3 • Kinesis Data Analytics – This service provides the easiest way to process the data that is streaming through Kinesis Data Stream or Kinesis Data Firehose using SQL This enable s customer s to gain actionable insight in near real time from the incremental stream before storing it in Amazon S3 Finally the servin g layer can be implemented with Spark SQL on Amazon EMR to process the data in Amazon S3 bucket from the batch layer and Spark Streaming on an Amazon EMR cluster which consumes data directly from Amazon Kinesis streams to create a view of the entire dataset which can be aggregated merged or joined The merged data set can be written to Amazon S3 for further visualization Both of these components are part of the same code base which can be invoked as required thus reducing the overhead of maintaining multiple code bases The metadata ( eg table definition and schema) associated with the processed data is stored in the AWS Glue catalog to make the data in the batch view i mmediately available for queries by downstream analytics services in the batch layer Customer can use a Hadoop based stream processing application for analytics such as Spark Streaming on Amazon EMR Data Ingestion The data 
ingestion step comprises data ingestion by both the speed and batch layer usually in parallel For the batch layer historical data can be ingested at any desired interval For the speed layer the fastmoving data must be captured as it is produced and streamed for analysis The data is immutable time tagged or time ordered Some examples of high velocity data include log collection website clickstream logging social media stream and IoT device event data This fast da ta is captured and ingested as part of the speed layer using Amazon Kinesis Data Stream s which is the recommended service to ingest streaming data into AWS Kinesis offers key capabilities to cost effectively process and durably store streaming data at any scale Customers can use Amazon Kinesi s Agent a pre built application to collect and send data to ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 4 an Amazon Kinesis stream or use the Amazon Kinesis Producer Library (KP L) as part of a custom application For batch ingestions customers can use AWS Glue or AWS Database Migration Service to read from source systems such as RDBMS Data Warehouses and No SQL databases Data Transformation Data transformation is a key step in the Lambda architecture where the data is manipulated to suit downstream analysis The raw data ingested into the system in the previous step is usually not conducive to analytics as is The transformation step involves data cleansing that includes deduplication incomplete data management and attribute standardization It also involves changing the data structures if necessary usually into an OLAP model to facilitate easy querying of data Amazon Glue Amazon EMR and Amazon S3 form the set of services that allow users to transform their data Kinesis analytics enables users to get a view into their data stream in real time which makes downstream integration to batch data easy Let’s dive deeper into data transformation and look at the various steps 
involved: 1 The data ingested via the batch mechanism is put into an S3 staging location This data is a true copy of the source with little to no transformation 2 The AWS Glue Data Catalog is updated with the metadata of the new files The Glue Data Catalog can integrate with Amazon Athena Amazon EMR and forms a central metadata repository for the data 3 An AWS Glue job is used to transform the data and store it into a new S3 location for integration with realtime data AWS Glue provide s many canned transformations but if you need to write your own transformation logic AWS Glue also supports custom scripts 4 Users can easily query data on Amazon S3 using Amazon Athena This helps in making sure there are no unwanted data elements that get into the downstream bucket Getting a view of source data upfront allows development of more targeted metrics Designing analytical applications without a view of source data or getting a very late view into the source data could be risky Since Amazon Athena uses a schema onread approach instead of a schema onwrite it allows users to query data as is and eliminates the risk 5 Amazon Athena integrates with Amazon Quick Sight which allows users to build reports and dashboards on the data 6 For the real time ingestions the data transformation is applied on a window of data as it pass es through the steam and analyzed iteratively as it comes into the stream Amazon Kinesis Data Streams Kinesis Data Firehose and Kinesis Data Analytics allow you to ing est analyze and dump real time data into storage platforms like Amazon S3 for integration with batch data Kinesis Data Streams interfaces with Spark ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 5 streaming which is run on an Amazon EMR cluster for further manipulation Kinesis Data A nalytics allow s you to run analytical queries on the data stream in real time which allows you to get a view into the source data and make sure aligns with what is 
expected from the dataset By following the preceding steps you can create a scalable data transformatio n platform on AWS It is also important to note that Amazon Glue Amazon S3 Amazon Athena and Amazon Kinesis are serverless services By using these services in the transformation step of the Lambda architecture we can remove the overhead of maintaining servers and scaling them when the volume of data to transform increases Data Analysis In this phase you apply your query to analyze data in the three layers : • Batch Layer – The data source for batch analytics could be the raw master data set directly or the aggregated batch view from the serving layer The focus of this layer is to increase the accuracy of analysis by querying a comprehensive dataset across multiple or all dimensions and all available data sources • Speed Layer – The focus of the analysis in this layer is to analyze the incoming streaming data in near real time and to react immediately based on the analyzed result within accepted levels of accuracy • Serving Layer – In this layer the merged query is aimed at joining and analy zing the data from both the batch view from the batch layer and the incremental stream view from the speed layer This suggested architecture on the AWS platform includes Amazon Athena for the batch layer and Amazon Kinesis Data Analytics for the speed layer For the serving layer we recommend using Spark Streaming on an Amazon EMR cluster to consume the data fr om Amazon Kinesis Data S treams from the speed layer and using Spark SQL on an Amazon EMR cluster to consume data from Amazon S3 in the b atch layer Both of these components are part of the same code base which can be invoked as required thus reducing the overhead of maintaining multiple code bases The sample code that follows highlights using Spark SQL and Spark streaming to join data from both batch and speed layer s ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 6 Figure 
2: Sample Code Visualization The final step in the Lambda architecture workflow is metrics visualization The visualization layer receives data from the batch stream and the combined serving layer The purpose of this layer is to provide a unified view of the analysis metrics that were derived from the data analysis step Batch Layer: The output of the analysis metrics in the batch layer is generated by Amazon Athena Amazon QuickSight integrates with Amazon Athena to generate dashboards that can be used for visualizations Customers also have a choice of using any other BI tool that supports JDBC/ODBC connectivity These tools can be connected to Amazon Athena to visualize batch layer metrics Stream Layer: Amazon Kinesis Data Analytics allows users to build custom analytical metrics that change based on real time streaming data Customers can use Kinesis Data A nalytics to build near realtime dashboards for metrics analyzed in the streaming layer Serving Layer: The combined dataset for batch and stream metrics are stored in the serving layer in an S3 bucket This unified view of the data is available for customers to download or connect to a reporting tool like Amazon QuickSight to create dashboards Security As part of the AWS Shared Responsibility M odel we recommend customers use the AWS security best practices and features to build a highly secure platform to run Lambda architecture on AWS Here are some points to keep in mind from a security perspective: • Encrypt end to end The architecture proposed here makes use of services that support encryption Make use of the native encryption features of the service whenever possible The server side encryption (SSE) is the least disruptive way to ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 7 encrypt your data on AWS and allows you to integrate encryption features into your existing workflows without a lot of code changes • Follow the rule of minimal access when working with 
policies. Identity and Access Management (IAM) policies can be made very granular, allowing customers to create restrictive resource-level policies. This concept also extends to S3 bucket policies. Moreover, customers can use S3 object-level tags to allow or deny actions at the object level. Make use of these capabilities to ensure the resources in AWS are used securely.

• When working with AWS services, make use of IAM roles instead of embedding AWS credentials.

• Have an optimal networking architecture in place by carefully considering the security groups, access control lists (ACLs), and routing tables that exist in the Amazon Virtual Private Cloud (Amazon VPC). Resources that do not need access to the internet should not be in a public subnet. Resources that require only outbound internet access should make use of a network address translation (NAT) gateway to allow outbound traffic. Communication to Amazon S3 from within the Amazon VPC should make use of the VPC endpoint for Amazon S3 or AWS PrivateLink.

Getting Started

Refer to the AWS Big Data blog post Unite Real Time and Batch Analytics Using the Big Data Lambda Architecture Without Servers!,
which provides a walkthrough of how you can use AWS services to build an end-to-end Lambda architecture.

Conclusion

The Lambda architecture described in this paper provides the building blocks of a unified architectural pattern that unifies stream (real-time) and batch processing within a single code base. Through the use of the Spark Streaming and Spark SQL APIs, you implement your business logic function once and then reuse the code in a batch ETL process as well as in real-time streaming processes. In this way, you can quickly implement a real-time layer to complement the batch processing one. In the long term, this architecture will reduce your maintenance overhead. It will also reduce the risk of errors resulting from duplicate code bases.

Contributors

The following individuals and organizations contributed to this document:

• Rajeev Srinivasan, Solutions Architect, Amazon Web Services
• Ujjwal Ratan, Solutions Architect, Amazon Web Services

Further Reading

For additional information, see the following:

• AWS Whitepapers
• Data Lakes and Analytics on AWS

Document Revisions

Date | Description
October 2018 | Update
May 2015 | First publication

Archived,General,consultant,Best Practices
Leveraging_Amazon_Chime_Voice_Connector_for_SIP_Trunking,"Leveraging Amazon Chime Voice Connector for SIP Trunking

April 2020

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled
by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 1
About Amazon Chime Voice Connector 1
Service Benefits 1
Low Cost and Reduced TCO 1
Flexible and On Demand 2
Use Case Scenarios 3
Outbound Calling Only 4
Inbound and Outbound Calling 5
Inbound and Outbound Calling Exclusively 6
Inbound Calling Only 7
Service Features 8
Reliability and Elasticity 8
AWS SDK 8
Security – Call Encryption 8
IP Whitelisting and Call Authentication 8
Call Detail Records (CDR) 8
Phone Number Inventory Management 9
Outbound Caller ID Name 9
Call Routing with Load Sharing 9
Failover and Load Sharing 10
Fax 11
Access 11
Real-time Audio Streaming to Amazon Kinesis Video Streams 12
Monitoring Amazon Chime Voice Connectors 13
Conclusion 14
Contributors 14
Further Reading 14
Document Revisions 15
Appendix A: Call Detail Record (CDR) Specifications 16
Call Detail Record (CDR) 16
Streaming Detail Record (SDR) 18
Appendix B: SIP Signaling Specifications 21
Ports and Protocols 21
Supported SIP Methods 21
Unsupported SIP Methods 21
Required SIP Headers 21
SIP OPTIONS Requirements 22
SIPREC INVITE Requirements 22
Dialed Number Requirements 22
Caller ID Number Requirements 23
Caller ID Name 23
Digest Authentication 23
Call Encryption 23
Session Description Protocol (SDP) 24
Supported Codecs 24
DTMF 24
Appendix C: CloudWatch Metrics and Logs Examples 25
CloudWatch Metrics 25
CloudWatch Logs 25

Abstract

This whitepaper outlines the features and benefits of using Amazon Chime Voice Connector. Amazon Chime Voice Connector is a service that carries your voice traffic over the internet and elastically scales to meet your capacity needs. This whitepaper assumes that you are already familiar with Session Initiation Protocol (SIP) trunking.

Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 1

Introduction

Amazon Chime Voice
Connector is a pay-as-you-go service that enables companies to make and receive secure, inexpensive phone calls over the internet using their on-premises telephone system, such as a private branch exchange (PBX). The service has no upfront fees, elastically scales based on demand, and supports calling both landline and mobile phone numbers in over 100 countries. Getting started with Amazon Chime Voice Connector is as easy as a few clicks on the AWS Management Console, and then employees can place and receive calls on their desk phones in minutes.

About Amazon Chime Voice Connector

Amazon Chime Voice Connector uses standards-based Session Initiation Protocol (SIP), and calls are delivered over the internet using Voice over Internet Protocol (VoIP). Amazon Chime Voice Connector does not require dedicated data circuits and can use a company’s existing internet connection, or an AWS Direct Connect public virtual interface, for the SIP connection to AWS. The configuration of SIP trunks can be performed in minutes using the AWS Management Console or the AWS SDK.

Amazon Chime Voice Connector offers cost-effective rates for outbound calls. In addition, calls to Amazon Chime audio conferences, as well as calls to other companies using Amazon Chime Voice Connector, are at no additional cost. With this service, companies can reduce their voice calling costs without having to replace their on-premises phone system.

Service Benefits

Amazon Chime Voice Connector provides the following benefits.

Low Cost and Reduced TCO

Amazon Chime Voice Connector provides an easy way to move telephony to the cloud without replacing on-premises phone systems. Using the service, you can reduce your voice calling costs by up to 50% by eliminating fixed telephone network costs and simplifying your voice network administration. To estimate the cost of using Amazon Chime Voice Connector, see the Amazon Chime Pricing page.

Amazon Chime Voice
Connector allows you to use SIP trunking infrastructure on demand, with voice encryption available at no extra charge. The elastic scaling of the service eliminates the need to overprovision SIP and/or time-division multiplexing (TDM) trunks for peak capacity. You only pay for what you use, and you can track your telecom spending in your monthly AWS invoice. There is no charge for creating SIP trunks, and no subscription, per-user license, or concurrent conversation fees. The following table shows a cost comparison of Amazon Chime Voice Connector with other service offerings.

Table 1: Cost Comparison of Amazon Chime Voice Connector and Other SIP Offerings

Monthly Cost | Offering 1 | Offering 2 | Offering 3 | Amazon
Inbound call/minute | $0.0000 | $0.0000 | $0.0045 | $0.0022
Outbound call/minute | $0.0080 | $0.0120 | $0.0070 | $0.0049
Concurrent call charge per sub | $0.8180 | $1.0907 | $0 | $0
Number rental | $0.10 | $1.00 | $1.00 | $1.00
350 minutes/month | $1.87 | $2.80 | $2.16 | $1.40
Normalized pricing/month | $2.78 | $4.89 | $3.16 | $2.40
Potential savings with Amazon Chime Voice Connector | 14.67% | 68.31% | 27.34% | N/A

Flexible and On Demand

Your telecom administrator uses the AWS Management Console to create the Amazon Chime Voice Connector, and your organization can begin sending and receiving voice calls in minutes. You can route as much voice traffic to it as needed or desired, within the AWS service quotas. You can also choose to keep your inbound phone numbers, also known as Direct Inward Dialing (DID) numbers, with your current service provider, or contact AWS Support to port the numbers to Amazon Chime Voice Connector and take advantage of the Amazon Chime dial-in rates.

Use Case Scenarios

You can use Amazon Chime Voice Connector to send voice traffic from your on-premises PBX to AWS (outbound calls to public switched telephone network [PSTN] numbers), to receive voice calls from your Voice Connector to your PBX (inbound calls from DID numbers), or both. In
both call flow scenarios (outbound and/or inbound calls), you can connect to Amazon Chime Voice Connector using your existing telephony devices. These devices can be a Session Border Controller (SBC), an IP PBX, or a media gateway. In the following examples, an SBC is the network element that is used to connect the SIP trunks.

• Outbound Calling Only
• Inbound and Outbound Calling
• Inbound and Outbound Calling Exclusively
• Inbound Calling Only

Outbound Calling Only

In this deployment model, you benefit from the low-cost outbound calling to PSTN phone numbers. Calls from your PBX to Amazon Chime Voice Connector incur no outbound telephony charges. You can use Amazon Chime Voice Connector for outbound calling in conjunction with the existing connection to your current SIP trunking provider. Your inbound calling remains unchanged. In this use case, Amazon Chime Voice Connector is typically configured as a route for high availability, in case the default route to the existing SIP trunking provider is unavailable, as well as for least cost routing (LCR) within the IP PBX or SBC.

Figure 1: Outbound Calling Only

Inbound and Outbound Calling

In this deployment model, you use Amazon Chime Voice Connector for both inbound and outbound voice calling, in parallel with your current service provider. For inbound calling, you either acquire new phone numbers from AWS or port your existing phone numbers from your current service provider. You can move some or all of the phone numbers from your current service provider to Amazon Chime Voice Connector. For outbound calling, you use Amazon Chime Voice Connector as a parallel route for your outbound voice calls from your PBX.

Figure 2: Inbound and Outbound Calling

Inbound and Outbound Calling Exclusively
In this deployment model, you use Amazon Chime Voice Connector for both inbound and outbound voice calling. This eliminates the need for your existing SIP trunks and reduces network complexity. For inbound calling, you acquire new phone numbers from AWS or port the existing phone numbers from your current service provider. For outbound calling, use Amazon Chime Voice Connector as the single route for all outbound voice calls from your PBX. Amazon Chime Voice Connector has built-in call failover, service resilience, and high availability features.

Figure 3: Inbound and Outbound Calling Exclusively

Inbound Calling Only

In this deployment model, you use Amazon Chime Voice Connector only for inbound voice calling. For inbound calling only, you acquire new phone numbers from AWS or port existing phone numbers from your current service provider. For inbound calling only, you benefit from the routing features provided by Amazon Chime Voice Connector, such as load balancing, failure mitigation mechanisms, and easy phone number inventory management using the AWS Management Console or the AWS SDK. For more information on these features, see Call Routing with Load Sharing and Phone Number Inventory Management.

Figure 4: Inbound Calling Only

Service Features

Reliability and Elasticity

Amazon Chime Voice Connector delivers highly available and scalable telephone service for inbound calls to your on-premises telephone system, outbound calls to Amazon Chime Voice Connector, or both. Using Amazon Chime Voice Connector Groups, you can configure multi-region failover for inbound calls from the PSTN to your Amazon Chime Voice Connectors. Additionally, Amazon Chime Voice Connector provides a load-sharing mechanism for inbound calls to your on-premises phone system using priority and weight.

AWS SDK

The AWS SDK allows you to perform and
automate key administrative tasks, such as managing phone numbers, Amazon Chime Voice Connectors, and Amazon Chime Voice Connector Groups.

Security – Call Encryption

Call encryption is a configurable option for each Amazon Chime Voice Connector and is provided at no additional charge. If encryption is enabled, voice calls are encrypted between the service and your SIP infrastructure. Transport Layer Security (TLS) is used to encrypt the SIP signaling, and Secure Real-time Transport Protocol (SRTP) is used to encrypt the media streams. To learn about the SIP signaling specifications, see Appendix B: SIP Signaling Specifications.

IP Whitelisting and Call Authentication

You can authenticate voice traffic to Amazon Chime Voice Connector by using the mandatory Allow List (IP whitelisting) and by using the optional Digest Authentication (as described in RFC 3261, section 22).

Call Detail Records (CDR)

Shortly after each call, Amazon Chime Voice Connector stores the Call Detail Record (CDR) as an object in your own Amazon Simple Storage Service (Amazon S3) bucket. You configure the S3 bucket in the AWS Management Console. You can retrieve the CDR records from Amazon S3 and import them into a VoIP billing system. To learn about the CDR schema, see Appendix A: Call Detail Record (CDR) Specifications. For the current CDR format, see the Amazon Chime Voice Connector documentation.

Phone Number Inventory Management

You can manage phone numbers using the AWS Management Console and the AWS SDK. You can manage your existing phone number inventory, order new numbers, review pending transactions, and manage deleted phone numbers. Contact AWS Support to port existing phone numbers.

Outbound Caller ID Name

Outbound Caller ID Name (CNAM) is a component of caller ID that displays your name or company name on the caller ID display of the party that you are calling. Amazon Chime Voice Connector makes it easy to
set calling names for Amazon Chime Voice Connector phone numbers using the AWS Management Console. Amazon makes the necessary changes to the Line Information Database (LIDB) so that your configured name appears on outbound phone calls. There is no charge to use this feature.

You can set a default calling name for all the phone numbers in the Amazon Chime account once every 7 days using the AWS Management Console or AWS SDK. You can also set and update calling names for each phone number purchased or ported into Amazon Chime Voice Connector. The update can take up to 72 hours to propagate, during which time the previous setting is still active. You can track the status of the calling name updates in the AWS Management Console or the AWS SDK.

When you place a call using Amazon Chime Voice Connector, the call is routed through the public switched telephone network (PSTN) to a fixed or mobile telephone carrier of the called party. Note that not all landline and mobile telephone carriers support CNAM or use the same CNAM database as Amazon Chime Voice Connector, which can result in the called party either not seeing CNAM or seeing a CNAM that is different from the value you set.

Call Routing with Load Sharing

Amazon Chime Voice Connector provides you with flexibility to configure how inbound calls from the PSTN are routed to multiple offices, thus allowing you to improve the resiliency of your telephone network.

Inbound Calls

Inbound calls to your on-premises phone system are routed using user-defined priorities and weights to automatically route calls to multiple SIP hosts. Calls are routed in priority order first, with 1 being the highest priority. If hosts are equal in priority, calls are distributed among them based on their relative weight. This approach is useful for both load balancing and failure mitigation. If a particular host is unavailable, Amazon Chime Voice Connector automatically reroutes calls to the next SIP host based on priority and weight. This approach allows administrators to send all or a percentage of the calls to one site, and to reroute the calls to another site in a disaster recovery scenario.

Outbound Calls

For outbound calls from your on-premises phone system, the hostname is a fully qualified domain name (FQDN) with dynamically assigned multiple IP addresses for load sharing.

Failover and Load Sharing

You can use Amazon Chime Voice Connector groups for fault-tolerant, cross-region routing for inbound calling to your on-premises phone system. By associating Amazon Chime Voice Connectors in different AWS Regions with an Amazon Chime Voice Connector group, you can create multiple independent routes for inbound calls to your on-premises phone system. In the event of loss of connectivity between an AWS Region and your phone system, or an Amazon Chime Voice Connector service unavailability in an AWS Region, incoming calls route to the next Amazon Chime Voice Connector in the group in priority order. For more information, see Working with Amazon Chime Voice Connector Groups.

Figure 5: Voice Connector Groups Failover

Fax

Amazon Chime Voice Connector supports faxing using SIP with either T.38 or G.711 µ-law. The SIP messaging when using T.38 should follow the format described in RFC 3362. In short, much of the SIP messaging stays the same as a voice call. One change is that the “image/t38” MIME content type is added in the SDP to indicate a T.38 media stream will be present. Modern PBX and SBC systems will recognize T.38 and its messaging format.

Access

Access to Amazon Chime Voice Connector can be provided through the internet or by using AWS Direct Connect.

Internet Access

You can connect to Amazon Chime Voice Connector using the internet. The bandwidth between Amazon Chime Voice Connector and your SIP infrastructure must be sufficient
to handle the number of simultaneous calls. For information about the bandwidth requirements, see Network Configuration and Bandwidth Requirements.

AWS Direct Connect Access

You can connect using AWS Direct Connect public virtual interfaces, which in many cases can reduce your network costs, as it is more cost-effective than Multiprotocol Label Switching (MPLS). AWS Direct Connect can also increase bandwidth throughput and provide a more consistent network experience than internet-based connections. When you combine Amazon Chime Voice Connector with AWS Direct Connect, your voice call sessions use a single provider.

Real-time Audio Streaming to Amazon Kinesis Video Streams

Amazon Chime Voice Connector can stream audio from telephone calls to Amazon Kinesis Video Streams in real time, so you can gain insights from your business’ conversations. Amazon Kinesis Video Streams is an AWS service that makes it easy to accept, durably store, and encrypt real-time media and connect it to other services for analytics, voice transcription, machine learning (ML), playback, and other processing. You can process audio streams with services like AWS Lambda, Amazon Transcribe, or Amazon Comprehend to build call recording, transcription, and analysis solutions.

For each audio call that is streamed to Kinesis Video Streams, two separate Kinesis streams are created for the caller and call recipient media streams. Each Kinesis stream within an audio call contains metadata such as the TransactionId and the VoiceConnectorId, which can be used to easily filter the audio streams within the same phone call. You can enable media streaming for all phone calls placed on the Amazon Chime Voice Connector using the Amazon Chime console, or you can enable real-time audio streaming on a per-call basis using SIPREC INVITE. For more information on streaming audio to Kinesis, see Streaming Amazon Chime Voice Connector Media to Kinesis.

Audio Streaming using
SIPREC

You can also send a SIPREC INVITE from your existing on-premises telephone system (Session Border Controller or IP PBX) to Amazon Chime Voice Connector to initiate a real-time audio stream to Amazon Kinesis Video Streams. You can use this feature to integrate your existing on-premises phone system with AWS services for analytics, voice transcription, machine learning (ML), playback, and other real-time processing. After receiving the SIPREC INVITE from your on-premises phone system, Amazon Chime Voice Connector then sends the caller and call recipient media flows to your Amazon Kinesis Video Stream to connect the media streams to other AWS services for further processing. For more information on using SIPREC INVITE to stream media to Kinesis, see Streaming Amazon Chime Voice Connector Media to Kinesis.

Figure 6: SIPREC Support

Monitoring Amazon Chime Voice Connectors

You can monitor Amazon Chime Voice Connector using Amazon CloudWatch, which collects raw data and processes it into readable, near real-time metrics. These metrics are kept for 15 months, so that you can access historical information and gain a better perspective on how your audio service is performing. Amazon Chime Voice Connector sends metrics to Amazon CloudWatch Metrics that capture and process performance metrics across all Voice Connectors in your AWS account. You can use Amazon CloudWatch Metrics to create dashboards and set up alarms to monitor the performance and availability of your calling solution. You can use Amazon CloudWatch Logs when configuring new Voice Connectors and troubleshooting issues. For more information, see Monitoring Amazon Chime with Amazon CloudWatch.

CloudWatch Metrics

Amazon CloudWatch Metrics provides a near real-time stream of system events that describe metrics pertaining to the usage and performance of your
Amazon Chime Voice Connectors. Using Amazon CloudWatch Metrics, you can create dashboards, set up automated alarms, respond quickly to operational changes, and take corrective actions.

CloudWatch Logs

You can choose to send SIP Message Capture Logs from your Voice Connector to CloudWatch Logs. You can use SIP Message Capture Logs when setting up new Voice Connectors or to troubleshoot issues with existing Voice Connectors. For more information, see Monitoring Amazon Chime with Amazon CloudWatch.

Conclusion

Amazon Chime Voice Connector is simple to set up via the AWS Management Console or AWS SDK, and employees can place and receive calls on their desk phones in minutes. Calls are delivered to Amazon over an internet connection using industry-standard VoIP. With Amazon Chime Voice Connector there are no upfront fees, commitments, or long-term contracts. You only pay for what you use.

Contributors

Contributors to this document include:

• Delyan Radichkov, Sr. Technical Program Manager, Amazon Web Services
• Joe Trelli, Chime Specialized Solutions Architect, Amazon Web Services

Further Reading

For additional information, see:

• Working with Amazon Chime Voice Connectors
• Amazon Chime Pricing
• Amazon Chime Documentation
• RFC 3261

Document Revisions

Date | Description
April 2020 | Added fax support; updated dialed number requirements for outbound calls
November 2019 | New features and content updates
March 2019 | First publication

Appendix A: Call Detail Record (CDR) Specifications

Call Detail Record (CDR)

Storage Details

Call Detail Records (CDRs) are stored in your Amazon S3 bucket based on your bucket retention policy. CDR objects are stored using names in the following format:

Amazon-Chime-Voice-Connector-CDRs/json/vconID/yyyy/mm/dd/HHMMSSmmm_transactionID

where:

• vconID – Amazon Chime Voice Connector ID
•
yyyy/mm/dd – Year, month, and day that the call started
• HHMMSSmmm – Start time of the call
• transactionID – Amazon Chime Voice Connector transaction ID

For example: Amazon-Chime-Voice-Connector-CDRs/json/grdcp7r7fjejaautia8rvb/2019/03/01/171000020_123456789

CDR Schema

CDR objects are stored with no whitespace or newline characters, using the following format:

Value | Description
{""AwsAccountId"":""AWS-account-ID"" | AWS account ID
""TransactionId"":""transaction-ID"" | Amazon Chime Voice Connector transaction ID (UUID)
""CallId"":""SIP-call-ID"" | Customer-facing SIP call ID
""VoiceConnectorId"":""voice-connector-ID"" | Amazon Chime Voice Connector ID (UUID)
""Status"":""status"" | Status of the call
""StatusMessage"":""status-message"" | Status message of the call
""SipAuthUser"":""sip-auth-user"" | SIP authentication name
""BillableDurationSeconds"":""billable-duration-in-seconds"" | Billable duration of the call in seconds
""BillableDurationMinutes"":""billable-duration-in-minutes"" | Billable duration of the call in minutes
""SchemaVersion"":""schema-version"" | The version of the CDR schema
""SourcePhoneNumber"":""source-phone-number"" | E.164 origination phone number
""SourceCountry"":""source-country"" | Country of origination phone number
""DestinationPhoneNumber"":""destination-phone-number"" | E.164 destination phone number
""DestinationCountry"":""destination-country"" | Country of destination phone number
""UsageType"":""usage-type"" | Usage details of the line item in the Price List API
""ServiceCode"":""service-code"" | The code of the service in the Price List API
""Direction"":""direction"" | Direction of the call: “Outbound” or “Inbound”
""StartTimeEpochSeconds"":""start-time-epoch-seconds"" | Call start time in epoch/Unix timestamp format
""EndTimeEpochSeconds"":""end-time-epoch-seconds"" | Call end time in epoch/Unix timestamp format
""Region"":""AWSregion""} AWS region for the Voice Connector ""Streaming "":{""true|false ""} Indicates whether the Streaming audio option was enables for this call if Streaming is not enabled Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 18 Sample Call Detail Record ( CDR ): { ""AwsAccountId"": "" 111122223333 "" ""TransactionId"": "" 879eee6eeec74167b634a2519506d142 "" ""CallId"": "" 777a6b953100721d372188753f2059a8@20301139:8080 "" ""VoiceConnectorId"": "" abcd112222223333334444 "" ""Status"": ""Completed"" ""StatusMessage"": ""OK"" ""SipAuthUser"": "" 5600"" ""BillableDurationSeconds"": 6 ""BillableD urationMinutes"": 01 ""SchemaVersion"": ""20"" ""SourcePhoneNumber"": ""+ 15105551212 "" ""SourceCountry"": ""US"" ""DestinationPhoneNumber"": ""+ 16285551212 "" ""DestinationCountry"": ""DE"" ""UsageType"": "" USE1USUSoutboundminutes"" ""ServiceCode"": ""AmazonChimeVoiceConnector"" ""Direction"": ""Outbound"" ""StartTimeEpochSeconds"": 1565399625 ""EndTimeEpochSeconds"": 1565399629 ""Region"": ""us east1"" ""Streaming"": true } Streaming Detail Record (S DR) Storage Details Streaming Detail Record ( SDR ) objects are stored in your Amazon S3 bucket based on your bucket retention policy S DR objects are stored using names in the following format: AmazonChimeVoiceConnector SDRs/json/ vconID/yyyy/mm/dd/HHMMSSmmmtransactionID where: • vconID – Amazon Chime Voice Connector ID • yyyy/mm/dd – Year month and day that the call started • HHMMssmmm – Start time of call Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 19 • transactionID – Amazon Chime Voice Connector transaction ID Streaming Detail Records (SDRs) always c orrespond to a call detail record matching the object prefix for example “ vconID/yyyy/mm/dd/HHMMSSmmmtransactionID ” For example: AmazonChimeVoiceConnector SDRs/json/grdcp7r7fjejaautia8rvb/2019/0 3/01/171000020_123456789 SDR Schema Value Description {""SchemaVersion "":""schema version"" 
The version of the CDR schema
""TransactionId"":""transaction-id"" | Amazon Chime Voice Connector transaction ID (UUID)
""CallId"":""SIP-call-id"" | Customer-facing SIP call ID
""AwsAccountId"":""AWS-account-ID"" | AWS account ID
""VoiceConnectorId"":""voice-connector-id"" | Amazon Chime Voice Connector ID (UUID)
""StartTimeEpochSeconds"":""start-time-epoch-second"" | Call start time in epoch/Unix timestamp format
""EndTimeEpochSeconds"":""end-time-epoch-second"" | Call end time in epoch/Unix timestamp format
""Status"":""status"" | Status of the call (Completed, Failed, etc.)
""StatusMessage"":""status-message"" | Details of the call status
""ServiceCode"":""service-code"" | The code of the service in the Price List API
""UsageType"":""usage-type"" | Usage details of the line item in the Price List API
""BillableDurationSeconds"":""billable-duration-seconds"" | Billable duration of the call in seconds
""Region"":""AWS-region""} | AWS Region for the Voice Connector

Sample Streaming Detail Record (SDR):

{
""SchemaVersion"": ""1.0"",
""AwsAccountId"": ""111122223333"",
""TransactionId"": ""879eee6e-eec7-4167-b634-a2519506d142"",
""CallId"": ""777a6b953100721d372188753f2059a8@203.0.113.9:8080"",
""VoiceConnectorId"": ""abcd112222223333334444"",
""StartTimeEpochSeconds"": 1565399625,
""EndTimeEpochSeconds"": 1565399629,
""Status"": ""Completed"",
""StatusMessage"": ""Streaming succeeded"",
""ServiceCode"": ""AmazonChime"",
""UsageType"": ""USE1-VC-kinesis-audio-streaming"",
""BillableDurationSeconds"": 6,
""Region"": ""us-east-1""
}

Appendix B: SIP Signaling Specifications

Ports and Protocols

Amazon Chime Voice Connector requires the following ports and protocols.

Signaling

AWS Region | Destination | Ports
US East (N. Virginia) | 3.80.16.0/23 | UDP/5060,
US West (Oregon) | 99.77.253.0/24 | UDP/5060,
TCP/5060, TCP/5061 (both regions)

Media

AWS Region | Destination | Ports
US East (N. Virginia) | 3.80.16.0/23 | UDP/5000:65000
US East (N. Virginia) | 52.55.62.128/25 | UDP/1024:65535
US East (N. Virginia) | 52.55.63.0/25 | UDP/1024:65535
US East (N. Virginia) | 34.212.95.128/25 | UDP/1024:65535
US East (N. Virginia) | 34.223.21.0/25 | UDP/1024:65535
US West (Oregon) | 99.77.253.0/24 | UDP/5000:65000

Supported SIP Methods

OPTIONS, INVITE, ACK, CANCEL, BYE

Unsupported SIP Methods

SUBSCRIBE, NOTIFY, PUBLISH, INFO, REFER, UPDATE, PRACK, MESSAGE

Required SIP Headers

In general, the service implements SIP as described in RFC 3261. The following SIP headers are required on all OPTIONS, INVITE, and BYE requests: Call-ID, Contact, CSeq, From, Max-Forwards, To, Via. CANCEL requests must also include these headers, with the exception of Contact. Further details about SIP headers can be found in RFC 3261 § 20.

SIP OPTIONS Requirements

The Request-URI of the SIP OPTIONS requests that are sent to the service must identify the Voice Connector host name. For example:

OPTIONS sip:abcdefghijklmnop12345.voiceconnector.chime.aws SIP/2.0

SIPREC INVITE Requirements

The Request-URI must identify the Voice Connector host name. For example:

INVITE sip:+16285551212@abcd112222223333334444g.voiceconnector.chime.aws:5060 SIP/2.0

The user portion of the From: header must have a number in E.164 format. For example:

From: +16285551212;tag=gK1005c68e

If you experience connectivity issues or dropped packets, a potential reason is that the UDP packets are dropped by participating network elements, such as routers or receiving hosts on the internet, because the UDP packets are larger than the maximum transmission unit (MTU). You can resolve this issue by clearing the Don’t Fragment (DF) flag, or alternatively you can use TCP.

Dialed Number Requirements

• Outbound calls: The dialed number must be valid and presented in E.164 format. Supported countries can be found under Calling Plan on the Termination page in
the Chime console. Countries can be allowed or disallowed by the customer. If a call is placed from a customer PBX to a number that is not valid, the call is rejected with a SIP 403 Forbidden response. The dialed number must be presented in E.164 format as the user portion of the Request-URI in the SIP INVITE, for example:

INVITE sip:+12125551212@abcdefghijklmnop12345.voiceconnector.chime.aws

The leading “+” is required.

• Inbound calls: The called number is presented in E.164 format as the user portion of the Request-URI in the SIP INVITE. For example:

INVITE sip:+12065551212@abcdefghijklmnop12345.voiceconnector.chime.aws

Caller ID Number Requirements
• Outbound calls: The caller ID number is derived from the user portion of the P-Asserted-Identity: header or the From: header, in that order. The caller ID must be a valid E.164-formatted phone number.
• Inbound calls: The caller ID number is presented in E.164 format as the user portion of the P-Asserted-Identity: and From: headers.

Caller ID Name
The delivery of caller ID name for inbound calls to your on-premises phone system is not supported. You can enable the delivery of caller ID name for outbound calls from your on-premises phone system using the Outbound Calling Name (CNAM) feature.

Digest Authentication
Digest authentication is an optional feature, and it is implemented as described in RFC 3261, section 22.

Call Encryption
Enabling encryption in Amazon Chime Voice Connector configures the service to use TLS for SIP signaling and Secure RTP (SRTP) for media. Encryption is enabled using the Secure Trunking option in the console, and the service uses port 5061. When enabled, all inbound calls use TLS, and unencrypted outbound calls are blocked. You must import the Amazon Chime root certificate. Note that at this time the Amazon Chime Voice Connector service uses a wildcard certificate (*.voiceconnector.chime.aws). SRTP is implemented as described in RFC 4568.
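The E.164 requirements above, for dialed numbers and caller IDs alike, reduce to a simple pattern check. The following is an illustrative sketch only; the service performs its own validation, which may be stricter:

```python
import re

# Minimal E.164 shape check per the requirements above: a leading '+'
# followed by up to 15 digits, the first of which is nonzero. Illustrative
# sketch only; not the service's actual validation logic.
E164 = re.compile(r'^\+[1-9]\d{1,14}$')

def is_e164(number: str) -> bool:
    return E164.fullmatch(number) is not None

print(is_e164('+12125551212'))   # True
print(is_e164('12125551212'))    # False: the leading '+' is required
```

A check like this can gate outbound INVITEs at the PBX before they reach the trunk, avoiding SIP 403 Forbidden rejections for malformed numbers.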
For outbound calls, the service uses the SRTP default AES counter cipher and HMAC-SHA1 message authentication. The following ciphers are supported for inbound and outbound calls:

AES_CM_128_HMAC_SHA1_80
AES_CM_128_HMAC_SHA1_32
AES_CM_192_HMAC_SHA1_80
AES_CM_192_HMAC_SHA1_32
AES_CM_256_HMAC_SHA1_80
AES_CM_256_HMAC_SHA1_32

At least one cipher is mandatory, but all can be included in preference order. There is no additional charge for voice encryption.

Session Description Protocol (SDP)
SDP is implemented as described in RFC 4566.

Supported Codecs
The service supports G.711 µ-law, and G.722 pass-through for Amazon Chime meeting dial-ins only.

DTMF
Dual-tone multi-frequency (DTMF) is implemented as described in RFC 4733 (also known as RFC 2833 DTMF).

Appendix C: CloudWatch Metrics and Logs Examples

CloudWatch Metrics
Amazon Chime Voice Connector sends usage and performance metrics to Amazon CloudWatch. The namespace is AWS/ChimeVoiceConnector. To find a complete list of the CloudWatch metrics sent by Amazon Chime Voice Connector, see Monitoring Amazon Chime with Amazon CloudWatch.

CloudWatch Logs
SIP Capture Log Example
CloudWatch Logs log group name pattern: /aws/ChimeVoiceConnectorSipMessages/[VoiceConnectorID]

{""voice_connector_id"":""abcdefg628ghsyzd8bwmh6"",""event_timestamp"":""2019-10-07T17:16:51Z"",""call_id"":""5bf5ecf1-27a1-4068-a7ee-6bd828a5f54a"",""sip_message"":""\nINVITE sip:+15105551212@abcdefg628ghsyzd8bwmh6g.voiceconnector.chime.aws:5061 SIP/2.0\nVia: SIP/2.0/TLS 192.168.100.10:8081;branch=z9hG4bK66a2d803;rport\nMax-Forwards: 69\nFrom: ""Testing Account"" ;tag=as283a6f9b\nTo: \nContact: \nCall-ID: 6347f9d4697c1539361a1d97727bd2c8@192.168.100.10:8081\nCSeq: 102 INVITE\nUser-Agent: Asterisk PBX 18323\nDate: Mon, 07 Oct 2019 17:16:51 GMT\nAllow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY, INFO, PUBLISH, MESSAGE
\nSupported: replaces, timer\nContent-Type: application/sdp\nContent-Length: 322\n\nv=0\no=root 1248709283 1248709283 IN IP4 192.168.100.10\ns=Asterisk PBX 18323\nc=IN IP4 192.168.100.10\nt=0 0\nm=audio 15406 RTP/SAVP 0 101\na=rtpmap:0 PCMU/8000\na=rtpmap:101 telephone-event/8000\na=fmtp:101 0-16\na=ptime:20\na=sendrecv\na=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:OkiaSoC0tQG15E7eG21+7DFprLZku9XkE8hl9Zlc\n""}",General,consultant,Best Practices
Leveraging_Amazon_EC2_Spot_Instances_at_Scale,Leveraging Amazon EC2 Spot Instances at Scale

March 2018

This paper has been archived. The latest version is now available at: https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-leveraging-ec2-spot-instances/cost-optimization-leveraging-ec2-spot-instances.html

© 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Introduction to Spot Instances 1
When to Use Spot Instances 1
How to Request Spot Instances 2
How Spot Instances Work 2
Managing Instance Termination 3
Launch Groups 3
Spot Fleets 4
Spot Request Limits 4
Determining the Status of Your Spot Instances 4
Spot Instance Interruptions 5
Spot Best Practices 5
Spot Integration with Other AWS Services 6
Amazon EMR Integration 6
AWS CloudFormation Integration 6
Auto Scaling Integration 6
Amazon
ECS Integration 7
Amazon Batch Integration 7
Conclusion 7

Abstract
This is the fourth in a series of whitepapers designed to support your cloud journey. This paper seeks to empower you to maximize value from your investments, improve forecasting accuracy and cost predictability, create a culture of ownership and cost transparency, and continuously measure your optimization status. This paper provides an overview of Amazon EC2 Spot Instances, as well as best practices for using them effectively.

Amazon Web Services – Leveraging Spot Instances at Scale, Page 1

Introduction to Spot Instances
In addition to On-Demand and Reserved Instances, the third major Amazon Elastic Compute Cloud (Amazon EC2) pricing model is Spot Instances. With Spot Instances, you can use spare Amazon EC2 computing capacity at discounts of up to 90% compared to On-Demand pricing. That means you can significantly reduce the cost of running your applications, or grow your application's compute capacity and throughput for the same budget. The only difference between On-Demand Instances and Spot Instances is that Spot Instances can be interrupted by EC2, with two minutes of notification, when EC2 needs the capacity back. Unlike Reserved Instances, Spot Instances do not require an upfront commitment. However, because Spot Instances can be terminated if the Spot price exceeds your maximum price or if no capacity is available for the instance type you've specified, they are best for flexible workloads.

When to Use Spot Instances
You can use Spot Instances for various fault-tolerant and flexible applications. Examples include web servers, API backends, continuous integration/continuous development, and Hadoop data processing. Workloads that constantly save data to persistent storage, including Amazon Simple Storage Service (Amazon S3), Amazon Elastic Block Store (Amazon EBS), Amazon Elastic File System (Amazon EFS), Amazon DynamoDB, or Amazon Relational Database Service (Amazon RDS), can work effectively with
Spot Instances. You can also take advantage of Spot Instances to run and scale applications such as stateless web services, image rendering, big data analytics, and massively parallel computations.

Spot Instances are typically used to supplement On-Demand Instances where appropriate and are not meant to handle 100% of your workload. However, you can use all Spot Instances for any stateless, non-production application, such as development and test servers, where occasional downtime is acceptable. They are not a good choice for sensitive workloads or databases.

How to Request Spot Instances
To use Spot Instances, you create a Spot Instance request that includes the number of instances, the instance type, the Availability Zone, and the maximum price that you are willing to pay per instance hour. You can create a Spot Instance request using the Launch Instance Wizard from the Amazon EC2 console or the Amazon EC2 API. For details on how to create a Spot Instance request using the console, see Creating a Spot Instance Request. For details on how to request Spot Instances through the Amazon EC2 API, see RequestSpotInstances in the Amazon EC2 API Reference. You can also launch Spot Instances through other AWS services, such as Amazon EMR, AWS Data Pipeline, AWS CloudFormation, and Amazon Elastic Container Service (Amazon ECS), as well as through third-party tools. To learn more about Spot Instance requests, see Spot Instance Requests.

How Spot Instances Work
The Spot price is determined by long-term trends in supply and demand for EC2 spare capacity. You pay the Spot price that's in effect at the beginning of each instance hour for your running instance, billed to the nearest second. With Spot Instances, you never pay more than the maximum price you specify. If the Spot price exceeds your maximum price for a given instance, or if capacity is no longer available, your instance will automatically be terminated (or stopped/hibernated if you opt for this behavior on a persistent request).
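The billing behavior described above, per-second billing at the in-effect Spot price with the final partial hour free when EC2 interrupts the instance, can be sketched as a small calculation. This is an illustrative simplification that assumes a single constant Spot price for the whole run, not AWS's actual billing logic:

```python
# Illustrative sketch of the Spot billing rules described above. Prices are
# in cents per hour to keep the arithmetic exact; this is not AWS billing code.
def spot_charge_cents(price_cents_per_hour: int, seconds_run: int,
                      interrupted_by_ec2: bool) -> float:
    if interrupted_by_ec2:
        # The trailing partial hour is not billed when EC2 interrupts you.
        seconds_run = (seconds_run // 3600) * 3600
    return price_cents_per_hour * seconds_run / 3600

print(spot_charge_cents(10, 5400, interrupted_by_ec2=False))  # 15.0
print(spot_charge_cents(10, 5400, interrupted_by_ec2=True))   # 10.0
```

Note how a self-initiated termination after 1.5 hours is billed for the full 1.5 hours, while an EC2-initiated interruption at the same point is billed only for the first complete hour.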
The Spot price may change at any time, but in general it changes once per hour, and in many cases less frequently. AWS publishes the current Spot price and historical prices for Spot Instances through the describe-spot-price-history command, as well as the AWS Management Console. This can help you assess the levels and timing of fluctuations in the Spot price over time.

Spot Instances perform exactly like other EC2 instances while running, and can be terminated when you no longer need them. If you terminate your instance, you pay for any partial hour used (as you do for On-Demand or Reserved Instances). However, you are not charged for any partial hour of usage if the Spot price goes above your maximum price and Amazon EC2 interrupts your Spot Instance.

Managing Instance Termination
Spot offers three features to help you better track and control when Spot Instances run and terminate (or stop/hibernate):
• Termination notices – If you need to save state, upload final log files, or remove Spot Instances from Elastic Load Balancing before interruption, you can use termination notices, which are issued two minutes prior to interruption. To learn more about managing interruptions, see Spot Instance Interruptions.
• Persistent requests – You can opt to set your request to remain open, so that a new instance will be launched in its place when the instance is interrupted. You can also have your Amazon EBS-backed instance stopped upon interruption and restarted when Spot has capacity at your preferred price. To learn more about persistent and one-time requests, see Spot Instance Request States.
• Block durations – If you need to execute workloads continuously for 1–6 hours, you can also specify a duration requirement when requesting Spot Instances. To learn more about block durations for Spot Instances, see Specifying a Duration for Your Spot Instances.

Launch Groups
You can
launch a set of Spot Instances at once, in a launch group or in an Availability Zone group. With a launch group, if the Spot service must terminate one of the instances in the group, it must terminate them all. With an Availability Zone group, the Spot service launches a set of Spot Instances in the same Availability Zone. When launch groups are required, try to minimize the group size; larger groups have a lower chance of being fulfilled. Also, be aware that specifying a specific Availability Zone can increase your chances of successfully launching. To learn more about launch groups and Availability Zone groups, see How Spot Instances Work.

Spot Fleets
With a Spot Fleet, you can automatically request the Spot Instances with the lowest price per unit of capacity. To use a Spot Fleet, create a Spot Fleet request that includes the target capacity based on your application needs (in any unit, including instances, vCPUs, memory, storage, or network throughput), one or more launch specifications for the instances, and the maximum price that you are willing to pay. To learn more about Spot Fleets, see How Spot Fleet Works.

Spot Request Limits
By default, there is an account limit of 20 Spot Instances per AWS Region. If you terminate your Spot Instance but do not cancel the request, the request counts against this limit until Amazon EC2 detects the termination and closes the request. Spot Instance limits are dynamic. When your account is new, your limit might be lower than 20 to start, but then increase over time. In addition, your account might have limits on specific Spot Instance types. If you submit a Spot Instance request and you receive the error Max Spot Instance count exceeded, you can go to the AWS Support Center and request a limit increase. To learn more about default limits and how to request a limit increase, see AWS Service Limits.

Determining the Status of Your Spot Instances
By reviewing Spot status, you can see why your
Spot request's state has or has not changed, and you can learn how to optimize your Spot requests to get them fulfilled. To find the Spot status, you can use the DescribeSpotInstanceRequests API action, or the ec2 describe-spot-instance-requests command using the AWS Command Line Interface (CLI). The AWS Management Console makes a detailed billing report available, which shows Spot Instance start and termination times for all instances. You can check the billing report against historical Spot prices via the API to verify that the Spot price billed was correct.

Spot Instance Interruptions
You can choose to have the Spot service stop, instead of terminate, your Amazon EBS-backed Spot Instances when they are interrupted. Spot can then fulfill your request by restarting instances from a stopped state when capacity again becomes available within your price and time requirements. To use this feature, choose stop instead of terminate as the interruption behavior when submitting a persistent Spot request. When you choose stop, Spot will shut down your instance upon interruption. The EBS root device and attached EBS volumes are saved, and their data persists. When capacity is available again within your price and time requirements, Spot will restart your instance. Upon restart, the EBS root device is restored from its prior state, previously attached data volumes are reattached, and the instance retains its instance ID. This feature is available for persistent Spot requests and Spot Fleets with the maintain fleet option enabled. You will not be charged for instance usage while your instance is stopped; EBS volume storage is charged at standard rates.

Spot Best Practices
Your instance type requirements, budget requirements, and application design will determine how to apply the following best practices for your application:
• Be flexible about instance types. Test your application on different instance types when possible. Because
prices fluctuate independently for each instance type in an Availability Zone, you can often get more compute capacity for the same price when you have instance type flexibility. Request all instance types that meet your requirements to further reduce costs and improve application performance. Spot Fleets enable you to request multiple instance types simultaneously.
• Choose pools where prices haven't changed much. Because prices adjust based on long-term demand, popular instance types (such as recently launched instance families) tend to have more price adjustments. Therefore, picking older-generation instance types that are less popular tends to result in lower costs and fewer interruptions. Similarly, the same instance type can have different prices in different Availability Zones.
• Minimize the impact of interruptions. Amazon EC2 Spot's Hibernate feature allows you to pause and then resume Amazon EBS-backed instances when capacity is available. Hibernate is just like closing and opening your laptop lid, with your application starting up right where it left off. For more information, see Hibernate Your Instance.

Spot Integration with Other AWS Services
Amazon EC2 Spot Instances integrate with several AWS services.

Amazon EMR Integration
You can run Amazon EMR clusters on Spot Instances and significantly reduce the cost of processing vast amounts of data on managed Hadoop clusters. You can run your EMR clusters by easily mixing Spot Instances with On-Demand and Reserved Instances using the instance fleet feature. To learn more about setting up an EMR cluster with Spot, see the EMR Developer Guide.

AWS CloudFormation Integration
AWS CloudFormation makes it easy to organize and deploy a collection of AWS resources, including EC2 Spot, and lets you describe any dependencies or special parameters to pass in at runtime. For a sample high-performance computing framework using AWS CloudFormation that can use Spot
Instances, see the cfncluster demo. To learn more about setting up AWS CloudFormation with Spot, see the Amazon EC2 User Guide.

Auto Scaling Integration
You can use Amazon EC2 Auto Scaling groups to launch and manage Spot Instances, maintain application availability, and scale your Amazon EC2 Spot capacity up or down automatically according to the conditions and maximum prices you define. To learn more about using Amazon EC2 Auto Scaling with Spot Instances, see the Amazon EC2 Auto Scaling User Guide.

Amazon ECS Integration
You can run Amazon ECS clusters on Spot Instances to reduce the operational cost of running containerized applications on Amazon ECS. The Amazon ECS console is also tightly integrated with Amazon EC2 Spot, and you can use the Create Cluster Wizard to easily set up an ECS cluster with Spot Instances.

Amazon Batch Integration
AWS Batch plans, schedules, and executes your batch computing workloads on AWS. AWS Batch dynamically requests Spot Instances on your behalf, reducing the cost of running your batch jobs.

Conclusion
Whether you have flexible compute needs or want to augment capacity without growing your budget, Spot Instances can be a great way to optimize your AWS costs. By properly architecting your workloads, you can take advantage of Spot pricing for a wide range of needs. For more information about Spot Instances, visit the Spot Instances overview.,General,consultant,Best Practices
Leveraging_AWS_Marketplace_Storage_Solutions_for_Microsoft_SharePoint,"Leveraging AWS Marketplace Storage Solutions for Microsoft SharePoint

January 2018

This paper has been archived. For the latest technical content about the AWS Cloud, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers/

© 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS’s current product offerings and
practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Introduction 1
About AWS Marketplace 2
About SoftNAS Cloud NAS 4
Architecture Considerations 4
Capacity Planning 4
Storage Performance 5
Fault Tolerance 5
High Availability 5
High-Level Architecture 5
Deployment 6
SoftNAS IAM Policy and Role 6
Marketplace AMI Deployment with EC2 Console 8
Limited Access Security Group 10
Configuration 11
Administrative Setup 11
Active Directory Membership 17
SoftNAS Snap Replication 19
SoftNAS SNAP HA 20
Conclusion 22
Contributors 23
Further Reading 23
Document Revisions 24

Abstract
Designing a cloud storage solution to accommodate traditional enterprise software such as Microsoft SharePoint can be challenging. Microsoft SharePoint is complex and demands a lot of the underlying storage that’s used for its many databases and content repositories. To ensure that the selected storage platform can accommodate the availability, connectivity, and performance requirements recommended by Microsoft, you need to use third-party storage solutions that build on and extend the functionality and performance of AWS storage services. An appropriate storage solution for Microsoft SharePoint needs to provide data redundancy, high availability, fault tolerance, strong encryption, standard connectivity protocols, point-in-time data recovery, compression, ease of
management, directory integration, and support. The focus of this paper is to walk through the deployment and configuration of SoftNAS Cloud NAS, an AWS Marketplace third-party storage product that provides secure, highly available, redundant, and fault-tolerant storage to the Microsoft SharePoint collaboration suite.

Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint, Page 1

Introduction
Successful Microsoft SharePoint deployments require significant upfront planning to understand the infrastructure and application architecture required. A successful deployment ensures performance, scalability, high availability, and fault tolerance across all aspects of the application. The primary component of a successful Microsoft SharePoint architecture is the proper understanding and sizing of the storage system used by the SQL Server databases that store, analyze, and deliver content for the SharePoint application.

Microsoft SharePoint requires storage for several key aspects of its architecture, including a quorum for the Windows Server Failover Cluster (WSFC), a WSFC witness server CIFS file share, Microsoft SQL Server Always On clustered database storage, Remote Blob Storage (RBS), and Active Directory integration. Microsoft provides detailed guidance on SharePoint storage architecture and capacity planning in the Storage and SQL Server capacity planning and configuration (SharePoint Server) documentation on TechNet at https://technet.microsoft.com/en-us/library/cc298801(v=office.16).aspx

This guidance, described in the Architecture Considerations section, provides details about how you can use a SharePoint implementation, the types and numbers of objects that you can store, the performance required for object storage and retrieval, and the storage design that best fits the requirements for a SharePoint implementation.
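As a rough illustration of the kind of estimate that capacity-planning guidance produces, the sketch below sizes a content database from document counts, version counts, and average sizes. The formula and the 10 KB per-item metadata overhead are simplified placeholders, not Microsoft's published formula; use the TechNet guidance for real planning.

```python
# Simplified content-database size estimate in the spirit of the TechNet
# capacity-planning guidance. The formula and the 10 KB metadata overhead
# are illustrative placeholders, not Microsoft's published numbers.
def estimate_content_db_bytes(documents: int, versions_per_doc: int,
                              avg_doc_bytes: int, list_items: int,
                              metadata_bytes: int = 10 * 1024) -> int:
    doc_storage = documents * versions_per_doc * avg_doc_bytes
    metadata = metadata_bytes * (list_items + documents * versions_per_doc)
    return doc_storage + metadata

# Hypothetical farm: 200k documents, 2 versions each, 250 KB average size.
size = estimate_content_db_bytes(200_000, 2, 250 * 1024, 600_000)
print(round(size / 1024**3, 1))  # approximate size in GiB
```

An estimate like this feeds directly into the EBS volume sizing and IOPS decisions discussed later in the paper.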
This guidance drives how you can use the underlying storage provisioned with Amazon AWS in conjunction with AWS Marketplace third-party storage products to provide a successful storage architecture for deploying Microsoft SharePoint on AWS.

About AWS Marketplace
AWS Marketplace is a curated digital catalog that provides a way for customers around the globe to find, buy, and immediately start using software that runs on AWS. The storage software products available on AWS Marketplace are provided and maintained by industry newcomers with born-in-the-cloud solutions, as well as existing industry leaders. They include many mainstream storage products that are already familiar and commonly deployed in enterprises. AWS Marketplace provides value in several ways: it saves money with flexible pricing options; it offers easy 1-click deployments of preconfigured and optimized Amazon Machine Images (AMIs), software as a service (SaaS), and AWS CloudFormation templates; and it ensures that products are scanned periodically for known vulnerabilities, malware, default passwords, and other security-related concerns.

Several solutions from AWS Marketplace can provide appropriately available and scaled storage for SharePoint implementations. You should consider the following when choosing a product:
• High availability (HA) – Multiple Availability Zone failover and multiple-Region failover
• Fault tolerance – Multiple Availability Zone and multiple-Region replication
• Performance – RAID mapping complementary to Amazon Elastic Block Store (Amazon EBS) and instances sized for high I/O
• Encryption – Integration with AWS Key Management Service (KMS) or built-in data encryption capability
• Compression – Proprietary or industry-adopted compression capability
• Standard connectivity protocols – iSCSI and CIFS
• Point-in-time data recovery – Proprietary or industry-adopted data recovery capability
• Active Directory integration – Domain membership with user, group, and computer controls

AWS Marketplace Products for SharePoint Integration

Vendor     Product Name             Link
Datomia    Datomizer S3NAS          https://aws.amazon.com/marketplace/seller-profile?id=e5778de2-bea7-48d1-96c9-9bc9e6611458
NetApp     ONTAP Cloud for AWS      https://aws.amazon.com/marketplace/seller-profile?id=ba83fe1c-57eb-4bac-93a5-5f5d7da7e2f2
SoftNAS    SoftNAS Cloud NAS        https://aws.amazon.com/marketplace/seller-profile?id=28ae3a2c-9300-4a7c-898f-6f6df5692092
StarWind   StarWind Virtual SAN     https://aws.amazon.com/marketplace/seller-profile?id=395b939f-9b80-4d40-bb58-d099abdb342f

The solution proposed in this paper uses the AWS Marketplace SoftNAS Cloud NAS product; however, you can use other AWS Marketplace storage products to provide similar functionality.

About SoftNAS Cloud NAS
Secure, redundant, and highly available storage for content is a critical requirement for any collaboration suite. SharePoint can accumulate significant amounts of data over time, increasing the size and scope of the infrastructure required to serve this data, with continued expectations around performance and availability. Additional details about SoftNAS Cloud NAS capabilities and features are available on the SoftNAS AWS Marketplace product webpage at https://aws.amazon.com/marketplace/pp/B00PJ9FGVU

Architecture Considerations

Capacity Planning
SharePoint uses storage in several ways, and selecting the appropriate configuration is a key aspect of the overall performance of the SharePoint collaboration suite. The AWS Marketplace storage product provides storage for the Microsoft SQL Server 2016 databases and for SharePoint Remote BLOB Storage (RBS), which stores larger binary objects (for example, Visio diagrams and PowerPoint presentations) within a file system outside the SharePoint Microsoft SQL database. Microsoft provides detailed guidance
related to SharePoint capacity planning in Storage and SQL Server capacity planning and configuration (SharePoint Server) on TechNet, which takes into account the type and number of artifacts you plan to store in your SharePoint environment (see https://technet.microsoft.com/en-us/library/cc298801(v=office.16).aspx). This guidance helps you select and size the appropriate Amazon EC2 instances you need to provide database and content storage capacity, along with the necessary I/O performance to meet your needs.

Storage Performance
Your storage configuration varies based on the requirements you gather from the SharePoint capacity planning guidance. Amazon EBS volumes can be configured in a variety of ways (for example, RAID striping, different volume sizes, etc.) to yield different performance characteristics. For high I/O scenarios, you can create and attach additional Amazon EBS volumes and stripe them using RAID software to increase the total number of I/O operations per second (IOPS). Each Amazon EBS volume is protected from physical drive failure through drive mirroring, so using a RAID level higher than RAID 0 is unnecessary.

Fault Tolerance
For multi-AZ fault tolerance, SoftNAS instances need to be deployed independently, because each instance must reside in a separate Availability Zone. When you configure SnapReplicate, the SoftNAS replication component, the Availability Zones of the replication source and target are validated to ensure that the instances are not in the same Availability Zone.

High Availability
You need to configure each SoftNAS instance with a second network interface that you’ll use later to establish connectivity for high availability. The secondary interface is used to create a virtual IP address within the Amazon Virtual Private Cloud (Amazon VPC). The virtual IP address is used as the target for iSCSI and CIFS storage clients and enables continued connectivity to the SoftNAS Cloud NAS in the event that the primary instance fails.
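The deployment summary later in the paper notes that the SnapHA virtual IP must be allocated from an address range outside the VPC's CIDR block. That constraint is easy to verify up front; a minimal sketch using Python's standard ipaddress module (the addresses shown are hypothetical):

```python
import ipaddress

# Verify that a candidate SnapHA virtual IP lies outside the VPC CIDR block,
# per the guidance that the VIP comes from a range outside the VPC.
def vip_outside_vpc(vip: str, vpc_cidr: str) -> bool:
    return ipaddress.ip_address(vip) not in ipaddress.ip_network(vpc_cidr)

print(vip_outside_vpc('172.33.0.10', '172.32.0.0/16'))  # True: usable as a VIP
print(vip_outside_vpc('172.32.5.10', '172.32.0.0/16'))  # False: inside the VPC
```

Running a check like this before enabling SnapHA avoids a failover configuration that silently conflicts with addresses the VPC may assign.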
You can add the secondary network interface when you create the instance, or at a later time prior to enabling SoftNAS SnapHA.

High-Level Architecture
The Microsoft SharePoint solution described in this paper includes the following components:
• Two AWS Marketplace SoftNAS Cloud NAS instances
• Each instance deployed in a separate Availability Zone
• Each instance deployed with two network interfaces
• Each instance deployed with the appropriate number and configuration of Amazon EBS volumes
• SoftNAS SnapReplicate to replicate the source instance to the target instance
• SoftNAS SnapHA to provide high availability and failover capability between instances
• A virtual IP address to provide SoftNAS SnapHA cluster connectivity (the VIP is allocated from an address range outside the scope of the CIDR block for the VPC of each instance)

Deployment

SoftNAS IAM Policy and Role
Prior to deploying the SoftNAS Cloud NAS instances, you need to create a custom IAM role that allows the setup and configuration of SoftNAS Snap high availability (HA). You must use the name SoftNAS_HA_IAM for the role, because the IAM role name is hard-coded in the SoftNAS SnapHA application. Create the SoftNAS_HA_IAM role with the following policy:

{
  ""Version"": ""2012-10-17"",
  ""Statement"": [
    {
      ""Sid"": ""Stmt1444200186000"",
      ""Effect"": ""Allow"",
      ""Action"": [
        ""ec2:ModifyInstanceAttribute"",
        ""ec2:DescribeInstances"",
        ""ec2:CreateVolume"",
        ""ec2:DeleteVolume"",
        ""ec2:CreateSnapshot"",
        ""ec2:DeleteSnapshot"",
        ""ec2:CreateTags"",
        ""ec2:DeleteTags"",
        ""ec2:AttachVolume"",
        ""ec2:DetachVolume"",
        ""ec2:DescribeVolumes"",
        ""ec2:DescribeSnapshots"",
        ""aws-marketplace:MeterUsage"",
        ""ec2:DescribeRouteTables"",
        ""ec2:DescribeAddresses"",
        ""ec2:DescribeTags"",
        ""ec2:ModifyNetworkInterfaceAttribute"",
        ""ec2:ReplaceRoute"",
        ""ec2:CreateRoute"",
        ""ec2:DeleteRoute"",
        ""ec2:AssociateAddress"",
        ""ec2:DisassociateAddress"",
        ""s3:CreateBucket"",
        ""s3:Delete*"",
        ""s3:Get*"",
        ""s3:List*"",
        ""s3:Put*""
      ],
      ""Resource"": [
        ""*""
      ]
    }
  ]
}

The IAM policy grants permissions to access APIs for Amazon EC2, Amazon S3, and AWS Marketplace:
• Amazon EC2 permissions allow for management of instance attributes, volumes, tags, snapshots, route tables, routes, network attributes, and IP addresses
• Amazon S3 permissions allow for the setup of SoftNAS SnapReplicate and SnapHA
• AWS Marketplace permissions allow for metered billing

Marketplace AMI Deployment with EC2 Console
You can deploy SoftNAS Cloud NAS using the Amazon EC2 console. To do this, open the console, select Launch Instance, choose AWS Marketplace, type SoftNAS in the search box, and then select the appropriate SoftNAS storage configuration from the results list.

After you choose a SoftNAS Cloud NAS configuration, you can complete the rest of the process to deploy and configure the SoftNAS Cloud NAS instance. You need to deploy two SoftNAS Cloud NAS instances to configure fault tolerance and high availability, but you need to deploy each instance independently so that you can select separate Availability Zones.

For this implementation, you add instance storage to accommodate the WSFC quorum majority disk, the SharePoint databases (for example, tempdb, content, usage, search, and transaction logs), a Microsoft WSFC witness file share, and SharePoint RBS storage, using separate Amazon EBS volumes for each database, as recommended by Microsoft for optimal performance. You can also add initial or additional storage from the SoftNAS GUI after deployment.
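The SoftNAS_HA_IAM policy shown earlier can also be assembled and sanity-checked programmatically before it is attached to the role. A sketch follows; the action list mirrors the example policy above and is not an audited least-privilege policy:

```python
import json

# Assemble the SoftNAS_HA_IAM policy document from the whitepaper's action
# list. This mirrors the example policy above; it is not an audited
# least-privilege policy.
ACTIONS = [
    'ec2:ModifyInstanceAttribute', 'ec2:DescribeInstances', 'ec2:CreateVolume',
    'ec2:DeleteVolume', 'ec2:CreateSnapshot', 'ec2:DeleteSnapshot',
    'ec2:CreateTags', 'ec2:DeleteTags', 'ec2:AttachVolume', 'ec2:DetachVolume',
    'ec2:DescribeVolumes', 'ec2:DescribeSnapshots', 'aws-marketplace:MeterUsage',
    'ec2:DescribeRouteTables', 'ec2:DescribeAddresses', 'ec2:DescribeTags',
    'ec2:ModifyNetworkInterfaceAttribute', 'ec2:ReplaceRoute', 'ec2:CreateRoute',
    'ec2:DeleteRoute', 'ec2:AssociateAddress', 'ec2:DisassociateAddress',
    's3:CreateBucket', 's3:Delete*', 's3:Get*', 's3:List*', 's3:Put*',
]

policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Sid': 'Stmt1444200186000',
        'Effect': 'Allow',
        'Action': ACTIONS,
        'Resource': ['*'],
    }],
}

# The serialized JSON is what you would paste into the IAM console or pass
# to an IAM client when creating the role's inline policy.
print(json.dumps(policy)[:48])
```

Building the document as a dict makes it easy to verify, for example, that the metered-billing action required by AWS Marketplace is present before deployment.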
additional storage from the SoftNAS GUI after deployment For more information see Storage and SQL Server capacity planning and configuration (SharePoint Server 2013) at https://technetmic rosoftcom/en us/library/cc298801aspx Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 10 To complete the instance deployment follow the Amazon EC2 launch wizard providing the appropriate input for instance type instance configuration details addition of storage tags and security group configuration After you review the launch configuration you need to select a key pair to use for post deployment administration prior to launching the SoftNAS Cloud NAS instance Select the appropriate key pair and then launch the instance Limited Access Security Group SoftNAS Cloud NAS instances require access for administration on ports TCP 22 and TCP 443 and access for iSCSI connectivity on port TCP 3260 SoftNAS Snap Replicate and Snap HA require SSH between instances as well as the additional ICMP Echo Request and Echo Reply configuration Configure inbound security group rules to accommodate this connectivity and to limit inbound traffic from authorized sources You can limit access to the SoftNAS storage to accept only traffic from authorized sources by adding the appropriate sources in the configuration Management access on ports 22 and 443 is required only from the jump server instances iSCSI and CIFS access is required only from the Microsoft SQL Server database instances and WSFC file share witness ICMP and SSH connectivity are required between the subnets used by the SoftNAS Cloud NAS instances Security Group Inbound Source Type Ports SoftnasAdmin Jump Servers and RDGW Servers SSH HTTPS TCP 22 TCP 443 Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 11 Security Group Inbound Source Type Ports SoftnasISCSI Microsoft SQL Servers ISCSI TCP 3260 SoftnasCIFS WSFC 
Witness Server CIFS CIFS CIFS AD UDP 137 & 138 TCP 139 & 445 TCP 389 SoftnasCluster SoftNAS Replication and HA members SSH ICMP ICMP TCP 22 Echo Request Echo Reply Configuration Administrative Setup After you provision your SoftNAS Cloud NAS instances you access the instances using the Amazon EC2 console Because the SoftNAS EC2 instance is deployed into a private subnet within the Amazon VPC access is restricted through a bastion host or remote desktop gateway server with access to the SoftNAS Cloud NAS security group For more information see Controlling Network Access to EC 2 Instances Using a Bastion Server on the AWS Security Blog at https://awsamazoncom/blogs/security/controlling network access toec2instances using abastion server/ The default user name is softnas and the default password is set as the instance ID which you can find in the Amazon EC2 console After you log in you see a Getting Started Checklist that you can use to configure your SoftNAS storage By following the checklist you can set up and present your storage targets quickly Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 12 The Amaz on EBS storage volumes that you added during deployment are available to each SoftNAS Cloud NAS instance as a device that needs a partition Using the SoftNAS administration interface you need to partition all appropriate devices Optionally you can par tition devices using the SoftNAS command line interface (CLI) After partitioning is complete the devices are available and you can assign them to a storage pool Create storage pools that accommoda te the storage capacity and performance requirements required For this solution you create separate storage pools for each Amazon AWS EBS storage device When you configure the storage pool you can set up an additional layer of encryption that allows So ftNAS Cloud NAS to encrypt data You can use an encryption password or the AWS Key Management Service (KMS) to 
implement encryption key management. For more information, see the AWS KMS website at https://aws.amazon.com/kms/.

Optionally, you can create storage pools using the SoftNAS CLI:

ec2-user@ip-10-0-133-229:~$ /usr/local/bin/softnas-cmd parted_command partition_all t
{
  "result": {
    "msg": "All partitions have been created successfully",
    "records": {
      "msg": "All partitions have been created successfully"
    },
    "success": true,
    "total": 1
  },
  "session_id": "8756",
  "success": true
}

ec2-user@ip-10-0-133-229:~$ /usr/local/bin/softnas-cmd createpool /dev/xvdb quorum 0 on LUKSpassword123 standard off on t
{
  "result": {
    "msg": "Create pool 'quorum' was successful",
    "records": {
      "Available": 70.996566768736002,
      "Used": 0.00034332275390625,
      "compression": "on",
      "dedup": "off",
      "dedupfactor": "1.00x",
      "free_numeric": 7623198310,
      "free_space": "71G",
      "no_disks": 5,
      "optimizations": "Compress",
      "pct_used": "0%",
      "pool_name": "quorum",
      "pool_type": "Standard",
      "provisioning": "Thin",
      "request_arguments": {
        "cbPoolCaseinsensitive": "off",
        "cbPoolTrim": "on",
        "forcedCreation": "on",
        "opcode": "createpool",
        "pool_name": "quorum",
        "raid_abbr": "0",
        "selectedItems": [
          { "disk_name": "/dev/xvdb" }
        ],
        "sync": "standard",
        "useLuksEncryption": "on"
      },
      "status": "ONLINE",
      "time_updated": "Oct 16 2017 15:43:01",
      "total_numeric": 7623566950,
      "total_space": "71G",
      "used_numeric": 368640,
      "used_space": "360.0K"
    },
    "success": true,
    "total": 21
  },
  "session_id": "8756",
  "success": true
}
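Each CLI call returns a JSON document like the one above, which makes the CLI straightforward to script. The sketch below is a minimal, hypothetical example of checking a result programmatically; it assumes output shaped like the createpool result shown (the sample is abbreviated, with illustrative values).

```python
import json

# Abbreviated sample of a SoftNAS CLI JSON result; the shape follows the
# createpool output shown above, and the values are illustrative only.
RAW = """
{
  "result": {
    "msg": "Create pool 'quorum' was successful",
    "records": {"pool_name": "quorum", "status": "ONLINE"},
    "success": true
  },
  "session_id": "8756",
  "success": true
}
"""

def check_result(raw):
    """Parse a CLI result and fail loudly if the call did not succeed."""
    data = json.loads(raw)
    if not data.get("success"):
        raise RuntimeError("softnas command failed: " + raw)
    return data["result"]

result = check_result(RAW)
assert result["records"]["status"] == "ONLINE"
```

In a real script you would capture the command's stdout and pass it to a check like this before proceeding to the next provisioning step.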
After you create the storage pools, you must allocate the capacity in each storage pool to SoftNAS volumes to enable remote connectivity as iSCSI LUNs and CIFS shares. Optionally, you can create volumes with the SoftNAS CLI. iSCSI volume example:

ec2-user@ip-10-0-133-229:~$ /usr/local/bin/softnas-cmd createvolume vol_name=quorum pool=quorum vol_type=blockdevice provisioning=thin exportNFS=off shareCIFS=off ShareISCSI=on dedup=on enable_snapshot=off schedule_name=Default hourlysnaps=0 dailysnaps=0 weeklysnaps=0 sync=always pretty_print
{
  "result": {
    "msg": "Volume 'LUN_quorum' created",
    "records": {
      "Available": 70.999999999999996,
      "Snapshots": 0,
      "Used": 5.340576171875e-05,
      "cbSnapshotEnabled": "1",
      "compression": "off",
      "compressratio": "1.00x",
      "dailysnaps": 0,
      "dedup": "on",
      "free_numeric": 76235669503.999996,
      "free_space": "71G",
      "hourlysnaps": 0,
      "logicalused": "0.0G",
      "minimum_threshold": "0",
      "nfs_export": null,
      "optimizations": "Dedup",
      "pct_used": "0%",
      "pool": "quorum",
      "provisioning": "Thin",
      "replication": false,
      "request_arguments": {
        "cbSnapshotEnabled": "on",
        "dailysnaps": "0",
        "dedup": "on",
        "exportNFS": "off",
        "hourlysnaps": "0",
        "opcode": "createvolume",
        "pool": "quorum",
        "provisioning": "thin",
        "schedule_name": "Default",
        "shareCIFS": "off",
        "sync": "always",
        "vol_name": "quorum",
        "vol_type": "blockdevice",
        "weeklysnaps": "0"
      },
      "reserve_space": 71.000534057616997,
      "reserve_units": "G",
      "schedule_name": "Default",
      "status": "ONLINE",
      "sync": "always",
      "tier": false,
      "tier_disabled": null,
      "tier_name": null,
      "tier_order": null,
      "tier_uuid": null,
      "time_updated": "Oct 16 2017 15:52:59",
      "total_numeric": 76236242943.999996,
      "total_space": "71G",
      "used_numeric": 5.340576171875e-05,
      "used_space": "0.0G",
      "usedbydataset": "56K",
      "usedbysnapshots": "0B",
      "vol_name": "LUN_quorum",
      "vol_path": " ",
      "vol_type": "blockdevice",
      "weeklysnaps": 0
    },
    "success": true,
    "total": 40
  },
  "session_id": "8756",
  "success": true
}

When you create the iSCSI LUNs, the associated iSCSI targets are also created. The initial iSCSI target is set up with open connectivity. However, you can update the configuration for each iSCSI target with the IQN for each iSCSI initiator, as well as a user name and password that can be used for CHAP authentication between the iSCSI initiators and targets. You can't create the iSCSI targets or add IQN and CHAP details using the SoftNAS CLI.

Active Directory Membership

Before you can join the SoftNAS Cloud NAS instances to the Active Directory domain, you need to update the hostname of each instance (that is, the hostname used by the SoftNAS management interface, not the hostname of the EC2 instance). The default hostname is based on the IP address of the EC2 instance. Depending on the IP address, the hostname might contain too many characters to be a valid NETBIOS name, which is required for you to add it to Active Directory. Update the hostname as appropriate in the SoftNAS web management console to a NETBIOS-compliant name. For more information, see the Naming conventions in Active Directory for computers, domains, sites, and OUs article on the Microsoft website at https://support.microsoft.com/en-us/help/909264/naming-conventions-in-active-directory-for-computers-domains-sites-and.

You attach the SoftNAS instance to Active Directory by navigating to the volume configuration page and selecting Active Directory from the top-level menu. After you select the interface, you are prompted for the Active Directory domain name; enter a domain user name and password with appropriate domain join permissions to join it to the domain.
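NETBIOS computer names are limited to 15 characters and a restricted character set, so a quick pre-check of the management hostname can save a failed join attempt. The sketch below is a simplification of the Microsoft naming conventions linked above (it allows letters, digits, and hyphens only); the hostnames are hypothetical examples.

```python
import re

# Simplified NETBIOS name rule: 1-15 characters, letters, digits, and hyphens.
# The full Microsoft naming conventions disallow a few additional cases.
NETBIOS_NAME = re.compile(r"^[A-Za-z0-9-]{1,15}$")

def is_netbios_compliant(hostname):
    """Return True if the hostname fits the simplified NETBIOS rules."""
    return bool(NETBIOS_NAME.match(hostname))

assert is_netbios_compliant("SOFTNAS-NODE1")        # 13 characters: fine
# A default hostname derived from an IP address can easily be too long:
assert not is_netbios_compliant("softnas-ip-10-0-133-229")
```

If the check fails, shorten the hostname in the SoftNAS web management console before attempting the domain join.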
If the NETBIOS hostname is too long, a prompt appears and explains what actions you need to take to correct the error before proceeding.

Optionally, you can use the SoftNAS CLI to attach the SoftNAS Cloud NAS instance to Active Directory:

ec2-user@ip-10-0-133-229:~$ kinit -p adminuser@EXAMPLE.COM
ec2-user@ip-10-0-133-229:~$ cd /var/www/softnas/scripts
ec2-user@ip-10-0-133-229:~$ ./ad_connect.sh -c example.com -e EXAMPLE -f Adminuser -g yourpassword

SoftNAS SnapReplicate

At this point, you've finished configuring the primary SoftNAS Cloud NAS instance. Now you need to configure the secondary failover instance so that you can configure SnapReplicate and SnapHA. For the first step, follow the instructions in the previous section to set up the secondary node, but stop before you create any volumes, because these are created during the replication process.

After you have configured both the primary and secondary SoftNAS Cloud NAS instances, connect to the SoftNAS administration console of the primary instance and navigate to the SnapReplicate / SnapHA menu. First, you set up replication between the primary and secondary SoftNAS Cloud NAS instances. You need to do this from the primary instance, and you need the IP address, administrative user name, and password of the secondary instance as input. After you complete the setup wizard, SnapReplicate begins replicating each iSCSI LUN from the primary instance to the secondary. After the replication process finishes, the SnapReplicate control panel indicates that the Current State for each LUN is SNAPREPLICATED COMPLETE, and the secondary instance now has the replicated LUNs created and visible within the Volumes and LUNs dashboard.

Note: The secondary instance should only be configured through disk partitioning and storage pool creation. The replication setup process creates all appropriate volumes, CIFS shares, and iSCSI
targets as a mirror of the source instance.

Optionally, you can set up SoftNAS SnapReplicate using the SoftNAS CLI:

ec2-user@ip-10-0-133-229:~$ softnas-cmd snaprepcommand initsnapreplicate remotenode="REMOTENODEIP" userid=softnas password="PASSWORD" type=target t

SoftNAS SnapHA

After SnapReplicate replication has been established, you can set up SnapHA to enable high availability and failover capability for the SoftNAS Cloud NAS. In the SnapReplicate / SnapHA control panel, choose Add SnapHA to begin the setup process.

During the setup process, select the Virtual IP mode. You need to use a virtual IP address outside of the VPC CIDR block to set up SnapHA communication on the secondary network interface. When requested, enter an IP address that is not addressable within your VPC CIDR range. For instance, if the VPC CIDR block is 10.195.0.0/16, any address that doesn't start with 10.195 can work as the virtual IP address required to set up SnapHA. It's important to ensure that the IP address you choose doesn't belong to another VPC or CIDR range that's routed to from this VPC.

After you provide a virtual IP address, you need to enter an AWS Access Key ID and Secret Key. These options are greyed out if the SoftNAS_HA_IAM role was attached to each instance. Choose Next to confirm that the appropriate permissions are associated with the attached IAM role. If the permissions aren't correct, an error appears and the setup process fails. If the permissions are correct, choose Start Install to begin the SnapHA installation and configuration.

After preparation and configuration are complete, choose Next. The SnapHA process completes the installation and then places the SoftNAS Cloud NAS instances in high availability mode.
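The requirement that the virtual IP fall outside the VPC CIDR block, and outside any range routed to from the VPC, can be checked with Python's standard ipaddress module. A minimal sketch, using the example CIDR from the text (the candidate addresses are hypothetical):

```python
import ipaddress

def vip_is_outside(vip, cidrs):
    """True if the candidate virtual IP is not inside any of the given CIDR blocks."""
    addr = ipaddress.ip_address(vip)
    return all(addr not in ipaddress.ip_network(cidr) for cidr in cidrs)

# The VPC CIDR block from the example; add any peered or routed ranges too.
ROUTED = ["10.195.0.0/16"]

assert vip_is_outside("172.16.0.100", ROUTED)      # acceptable candidate
assert not vip_is_outside("10.195.10.10", ROUTED)  # inside the VPC CIDR
```

Extending ROUTED with every CIDR reachable from the VPC (peered VPCs, VPN ranges) covers the "routed to from this VPC" caveat as well.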
After the SnapHA setup is complete, choose Finish. Optionally, you can use the SoftNAS CLI to set up SoftNAS SnapHA:

ec2-user@ip-10-0-133-229:~$ softnas-cmd hacommand add YOUR_AWS_ACCESS_KEY YOUR_AWS_SECRET_KEY VIP 1.1.1.1 pretty_print

Conclusion

The solution is complete and configured as follows:

• The primary and secondary SoftNAS Cloud NAS instances are configured.
• The primary instance replicates to the secondary instance.
• Both instances are configured in an active-passive, high-availability failover cluster.
• SoftNAS Cloud NAS storage is ready to be used by Microsoft SharePoint and SQL Server.
• Connectivity from client iSCSI initiators and CIFS clients is established using the cluster virtual IP address.

AWS has a powerful set of tools that you can use to build your next solution. In addition to AWS services, you can use the software available in AWS Marketplace to build and extend solutions using familiar products from reputable software vendors.

Contributors

The following individuals and organizations contributed to this document:

• Israel Lawson, Solutions Architect, AWS
• Kevin Brown, Solutions Architect, SoftNAS
• Ross Ethridge, Technical Support Manager, SoftNAS

Further Reading

SoftNAS Resources

• AWS Getting Started Guide at https://docs.softnas.com/pages/viewpage.action?pageId=3604488
• AWS Design and Configuration Guide at https://www.softnas.com/wp/support/aws-cloud-nas-design-configuration-guide/
• AWS Instance Size Guide at https://www.softnas.com/wp/products/instance-size-recommendations/#aws
• AWS Backend Storage Selection Guide at https://www.softnas.com/wp/support/aws-storage-guide/
• High Availability: Amazon Web Services at https://docs.softnas.com/display/SD/High+Availability%3A+Amazon+Web+Services
• CloudFormation Template at
https://www.softnas.com/docs/softnas/v3/api/SoftnasAWSCloudTemplateHVM.json

Microsoft SharePoint and SQL Server Resources

• Overview of SQL Server in SharePoint Server 2016 and 2019 environments at https://docs.microsoft.com/en-us/sharepoint/administration/overview-of-sql-server-in-sharepoint-server-2016-and-2019-environments
• Storage and SQL Server capacity planning and configuration (SharePoint Server) at https://technet.microsoft.com/en-us/library/cc298801(v=office.16).aspx
• SharePoint Server 2016 Databases – Quick Reference at https://technet.microsoft.com/en-us/library/cc298801(v=office.16).aspx#section1a
• Database Types and Descriptions in SharePoint Server at https://technet.microsoft.com/en-us/library/cc678868(v=office.16).aspx

AWS Resources

• AWS SoftNAS Whitepaper at https://d0.awsstatic.com/whitepapers/softnas-architecture-on-aws.pdf
• AWS Bastion Host Blog Post at https://aws.amazon.com/blogs/security/controlling-network-access-to-ec2-instances-using-a-bastion-server/

Document Revisions

Date | Description
January 2018 | First publication
",General,consultant,Best Practices
Machine_Learning_Foundations_Evolution_of_Machine_Learning_and_Artificial_Intelligence,"Machine Learning Foundations: Evolution of Machine Learning and Artificial Intelligence

February 2019

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: aws.amazon.com/whitepapers

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents AWS's current product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS's products or services are provided "as is" without warranties, representations, or conditions of any kind,
whether express or implied. AWS's responsibilities and liabilities to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 1
Evolution of Artificial Intelligence 1
Symbolic Artificial Intelligence 1
Rise of Machine Learning 5
AI has a New Foundation 6
AWS and Machine Learning 9
AWS Machine Learning Services for Builders 9
AWS Machine Learning Services for Custom ML Models 12
Aspiring Developers Framework 13
ML Engines and Frameworks 13
ML Model Training and Deployment Support 14
Conclusions 15
Contributors 15
Further Reading 16
Document Revisions 16

Abstract

Artificial Intelligence (AI) and Machine Learning (ML) are terms of interest to business people, technicians, and researchers around the world. Most descriptions of the terms oversimplify their true relationship. This paper provides a foundation for understanding artificial intelligence, describes how AI is now based on a foundation of machine learning, and provides an overview of AWS machine learning services.

Introduction

Most articles that discuss the relationship between artificial intelligence (AI) and machine learning (ML) focus on the fact that ML is a domain or area of study within AI. Although that is true historically, an even stronger relationship exists: successful artificial intelligence applications are almost all implemented using a foundation of ML techniques. Instead of a component, machine learning has become the basis of modern AI.

To support this theory, we review how AI systems and applications worked in the first three decades versus how they work today. We begin with an overview of AI's original structure and approach, describe the rise of machine learning as its own discipline, show how ML provides the foundation for modern AI, and review how
AWS supports customers using machine learning. We conclude with observations about why AI and ML are not as easily distinguished as they might first appear.

Evolution of Artificial Intelligence

Symbolic Artificial Intelligence

Artificial Intelligence as a branch of computer science began in the 1950s. Its two main goals were to 1) study human intelligence by modeling and simulating it on a computer, and 2) make computers more useful by solving complex problems like humans do. From its inception through the 1980s, most AI systems were programmed by hand, usually in functional, declarative, or other high-level languages such as LISP or Prolog. Several custom languages were created for specific areas (e.g., STRIPS for planning). Symbols within the languages represented concepts in the real world or abstract ideas, and formed the basis of most knowledge representations.

Although AI practitioners used standard computer science techniques such as search algorithms, graph data structures, and grammars, a significant amount of AI programming was heuristic (using rules of thumb) rather than algorithmic, due to the complexity of the problems. Part of the difficulty of producing AI solutions then was that, to make a system successful, all of the conditionals, rules, scenarios, and exceptions needed to be added programmatically to the code.

Artificial Intelligence Domains

Researchers were interested in general AI, or creating machines that could function as a system in a way indistinguishable from humans, but due to the complexity of that goal, most focused on solving problems in one specific domain, such as perception, reasoning, memory, speech, motion, and so on. Major AI domains at this time are listed in the following table.

Table 1: Domains in Symbolic AI (1950s to 1980s)

Domain | Description
Problem Solving | Broad, general domain for solving problems, making decisions, satisfying constraints, and other types of reasoning. Subdomains included expert or
knowledge-based systems, planning, automatic programming, game playing, and automated deduction. Problem solving was arguably the most successful domain of symbolic AI.
Machine Learning | Automatically generating new facts, concepts, or truths by rote, from experience, or by taking advice.
Natural Language | Understanding and generating written human languages (e.g., English or Japanese) by parsing sentences and converting them into a knowledge representation, such as a semantic network, and then returning results as properly constructed sentences easily understood by people.
Speech Recognition | Converting sound waves into phonemes, words, and ultimately sentences to pass off to Natural Language Understanding systems, and also speech synthesis to convert text responses into natural-sounding speech for the user.
Vision | Converting pixels in an image into edges, regions, textures, and geometrical objects in order to make sense of a scene and ultimately recognize what exists in the field of vision.
Robotics | Planning and controlling actuators to move or manipulate objects in the physical world.

Artificial Intelligence Illustrated

In the following diagram, lower levels depict layers that provide the tools and foundation used to build solutions in each domain. For example, below the Primary Domains are a sampling of the many Inferencing Mechanisms and Knowledge Representations that were commonly used at the time.

Figure 1: Overview of Symbolic Artificial Intelligence

The Sample Knowledge Representations stored knowledge and information to be reasoned on by the system. Common categories of knowledge representations included structured (e.g., frames, which can be compared to objects, and semantic networks, which are like knowledge graphs) and logic-based (e.g., propositional and predicate logic, modal logic, and grammars). The advantage of these symbolic knowledge representations over other types of models is that they are transparent, explainable,
composable, and modifiable. They support many types of inferencing or reasoning mechanisms, which manipulate the knowledge representations to solve problems, understand sentences, and provide solutions in each domain. The AI Language Styles and Infrastructure layers show some types of languages and infrastructure used to develop AI systems at this time. Both tended to be specialized and not easily integrated with external data or enterprise systems.

A Question of Chess and Telephones

A question asked at the time was: "Which is a harder problem to solve, answering the telephone or playing chess at a master level?" The answer is counterintuitive to most people. Although even children can answer a telephone properly, very few people play chess at a master level. However, for traditional AI, chess is the perfect problem. It is bounded, has limited, well-understood moves, and can be solved using heuristic search of the game's state space. Answering a telephone, on the other hand, is quite difficult. Doing it properly requires multiple complex skills that are difficult for symbolic AI, including speech recognition and synthesis, natural language processing, problem solving, intelligent information retrieval, planning, and potentially taking complex actions.

Successes of Symbolic AI

Generally considered to have disappointing results, at least in light of the high expectations that were set, symbolic AI did have several successes as well. Most of the software deemed useful was turned into algorithms and data structures used in software development today. Business rule engines that are in common use were derived from AI's expert system inference engines and shells. Other common computing concepts credited to or developed in AI labs include timesharing, rapid iterative development, the mouse, and Graphical User Interfaces (GUIs). The list below describes some of the strengths and limitations of this approach to artificial intelligence.

Table 2:
Strengths and Limitations of Symbolic AI

Strength | Limitation
Simulates high-level human reasoning for many problems. | Systems tended not to learn or acquire new knowledge or capabilities autonomously, depending instead on regular developer maintenance.
Problem Solving domain had several successes in areas such as expert systems, planning, and constraint propagation. | Most domains, including machine learning, natural language, speech, and vision, did not produce significant general results.
Can capture and work from heuristic knowledge rather than step-by-step instructions. | Problem Solving domain, specifically expert or knowledge-based systems, requires articulated human expertise, extracted and refined using knowledge engineering techniques.
Encodes specific known logic easily (e.g., enforces compliance rules). | Systems tended to be brittle and unpredictable at the boundaries of their scope; they didn't know what they didn't know.
Straightforward to review internal data structures, heuristics, and algorithms. | Built on isolated infrastructure with little integration to external data or systems.
Provides explanations for answers when requested. | Requires more context and common-sense information to resolve many real-world situations.
Does not require significant amounts of data to create. | Many approaches were not distributed or easily scalable, though there were hardware, networking, and software constraints to distribution as well.
Requires less compute resources to develop. | Difficult to create and maintain systems.
Many tools and algorithms were incorporated into mainstream system development. |

As research money associated with symbolic AI disappeared, many researchers and practitioners turned their attention to different and pragmatic forms of information search and retrieval, data mining, and diverse forms of machine learning.

Rise of Machine Learning

From the late 1980s to the 2000s, several diverse approaches to machine
learning were studied, including neural networks, biological and evolutionary techniques, and mathematical modeling. The most successful results early in that period were achieved by the statistical approach to machine learning. Algorithms such as linear and logistic regression, classification, decision trees, and kernel-based methods (i.e., Support Vector Machines) gained popularity.

Later, deep learning proved to be a powerful way to structure and train neural networks to solve complex problems. The basic approach to training them remained similar, but there were several improvements driving deep learning's success, including:

• Much larger networks with many more layers
• Huge data sets with thousands to millions of training examples
• Algorithmic improvements to neural network performance, generalization capability, and ability to distribute training across servers
• Faster hardware (such as GPUs and Tensor Cores) to handle orders of magnitude more computations, which are required to train the complex network structures using large data sets

Deep learning is key to solving the complex problems that symbolic AI could not. One factor in the success of deep learning is its ability to formulate, identify, and use features discovered on its own. Instead of people trying to determine what it should look for, the deep learning algorithms identified the most salient features automatically.
areas including directed conversations with humans Machine learning has become a branch of computer science in its own right It is key to solving specific practical artificial intelligence problems AI has a New Foundation Artificial intelligence today no longer relies primarily on symbolic knowledge representations and programmed inferencing mechanisms Instead modern AI is built on a new foundation machine learning Whether it is the models or decision tr ees of conventional mathematics based machine learning or the neural network architectures of deep learning most artificial intelligence applications today across the AI domains are based on machine learning technology This new structure for artificial intelligence is depicted in the following diagram The structure of this diagram parallels the diagram of symbolic AI in order to show how the foundation and the nature of artificial intelligence systems have changed Although some of the domains in the to p layer of the diagram remain the same —Natural Language Speech Recognition and Vision —the others have changed Instead of the broad Problem Solving category seen in Figure 1 for symbolic AI there are two more focused categories for predictions and recomm endation systems which are the dominant forms of problem solving systems developed today And in addition to more traditional robotics the domain now includes autonomous vehicles to highlight recent projects in self driving cars and drones Finally since it is now the foundation of the AI domains machine learning is no longer included in the top level domains ArchivedAmazon Web Services Machine Learning Foundations Page 7 Figure 2: Machine Learning as a foundation for Artificial Intelligence There are still many questions and challenges for machine learning The following list provides some of the strengths and limitations of artificial intelligence based on a machine learning foundation Table 3: Strengths and Limitations of ML Based AI Strength Limitation Easy to 
train new solutions given data and tools Experiencing hype and researchers and practitioners need to properly set expectations Large number of diverse algorithms to solve many types of problems Requires large amounts of clean potentially labeled data Solves problems in all AI domains often approaching or exceeding human level of capability Problems in data such as staleness incompleteness or adversarial injection of bad data can skew results No human expertise or complex knowledge engineering required solutions are derived from examples Some especially statistically based ML algorithms rely on manual feature engineering ArchivedAmazon Web Services Machine Learning Foundations Page 8 Strength Limitation Deep learning extracts features automatically which enables complex perception and understanding s olutions System logic is not programmed and must be learned This can lead to more subjective results such as competing levels of activation where precise answers are needed (eg specific true or false answers for compliance or verification problems) Trained ML models can be replicated and reused in ensembles or components of other solutions Selecting the best algorithm network architecture and hyperparameters is more art than science and requires iteration though tools for hyperparameter optimiza tion are now available Making predictions or producing results is often faster than traditional inferencing or algorithmic approaches Training on complex problems with large data sets requires significant time and compute resources Algorithms for trainin g ML models can be engineered to be distributed and one pass improving scalability and reducing training time It is often difficult to explain how the model derived the results by looking at its structure and results of its training Can be trained and deployed on scalable highperformance infrastructure Most algorithms solve problems in one step so no chains of reasoning or partial results are available though outputs can reflect 
numeric “confidence” Deployed using common mechanisms like microservices / APIs for ease of integrations with other systems An important take away from Table 2 and Table 3 is that they are somewhat complementary MLbased AI can benefit from the strengths of symbolic AI Some ML approaches inclu ding automatically learning decision trees already merge the two approaches effectively Active research continues into other means of combining the strengths of both approaches as well as many open questions Given that today’s AI is built on the new fo undation of machine learning that has long been the realm of researchers and data scientists how can we best enable people from different backgrounds in diverse organizations to leverage it? ArchivedAmazon Web Services Machine Learning Foundations Page 9 AWS and Machine Learning AWS is committed to democratizing machi ne learning Our goal is to make machine learning widely accessible to customers with different levels of training and experience and to organizations across the board AWS innovates rapidly creating services and features for customers prioritized by the ir needs Machine Learning services are no exception In the diagram below you can see how the current AWS Machine Learning services map to the other AI diagrams Figure 3: AWS Machine Learning Services AWS Machine Learning Services for Builders The first layer shows AI Services which are intended for builders creating specific solutions that require prediction recommendation natural language speech vision or other capabilities These intelligent services are created using machine learning and especially deep learning models but do not require the developer to have any knowledge of machine learning to use them Instead these capabilities come pre ArchivedAmazon Web Services Machine Learning Foundations Page 10 trained are accessible via API call and provide customers the ability to add intelligence to their applications Amazon Forecast Amazon Forecast is a fully managed 
service that delivers highly accurate forecasts and is based on the same technology used at Amazon.com. You provide historical data, plus any additional data that you believe impacts your forecasts. Amazon Forecast examines the data, identifies what is meaningful, and produces a forecasting model.

Amazon Personalize

Amazon Personalize makes it easy for developers to create individualized product and content recommendations for customers using their applications. You provide an activity stream from your application, an inventory of the items you want to recommend, and potential demographic information from your users. Amazon Personalize processes and examines the data, identifies what is meaningful, selects the right algorithms, and trains and optimizes a personalization model.

Amazon Lex

Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, enabling you to build applications with highly engaging user experiences and lifelike conversational interactions. With Amazon Lex, the same deep learning technologies that power Amazon Alexa are now available to any developer, enabling you to quickly and easily build sophisticated natural language conversational bots ("chatbots").

Amazon Comprehend

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. Amazon Comprehend identifies the language of the text; extracts key phrases, places, people, brands, or events; understands how positive or negative the text is; and automatically organizes a collection of text files by topic.

Amazon Comprehend Medical

Amazon Comprehend Medical is a natural language processing service that extracts relevant medical information from unstructured text using advanced machine learning models. You can use the extracted medical information and its relationships to build or enhance applications.

Amazon Translate

Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Neural machine translation is a form of language translation automation that uses deep learning models to deliver more accurate and more natural-sounding translation than traditional statistical and rule-based translation algorithms. Amazon Translate allows you to localize content, such as websites and applications, for international users, and to easily translate large volumes of text efficiently.

Amazon Polly

Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk and to build entirely new categories of speech-enabled products. Amazon Polly is a Text-to-Speech service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice.

Amazon Transcribe

Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.

Amazon Rekognition

Amazon Rekognition makes it easy to add image and video analysis to your applications. You just provide an image or video to the Rekognition API, and the service can identify objects, people, text, scenes, and activities, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial recognition. You can detect, analyze, and compare faces for a wide variety of user verification, cataloging, people counting, and public safety use cases.

Amazon Textract

Amazon Textract automatically extracts text and data from scanned documents and forms, going beyond simple optical character recognition to identify the contents of fields in forms and information stored in tables.
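Because each of these AI Services is exposed as a plain API call, using one requires no ML expertise. As an illustration (not taken from this whitepaper), the following sketch calls Amazon Comprehend's sentiment API from the AWS CLI. The sample text is made up, and the call assumes configured credentials and a default region, so it is guarded:

```shell
# Illustrative only: invoke Amazon Comprehend sentiment detection via the
# AWS CLI. Assumes AWS credentials and a default region are configured;
# guarded so it degrades gracefully where the CLI is unavailable.
demo_comprehend() {
  if command -v aws >/dev/null 2>&1; then
    aws comprehend detect-sentiment --language-code en \
        --text "The new console experience is excellent." 2>/dev/null \
      || echo "Comprehend call failed (check credentials and region)"
  else
    echo "AWS CLI not installed; skipping example call"
  fi
}
demo_comprehend
```

A successful call returns a JSON document with the dominant Sentiment (for example, POSITIVE) and a confidence score per class, mirroring the "numeric confidence" point made in the table above.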
AWS Machine Learning Services for Custom ML Models

The ML Services layer in Figure 3 provides more access to managed services and resources used by developers, data scientists, researchers, and other customers to create their own custom ML models. Custom ML models address tasks such as inferencing and prediction, recommender systems, and guiding autonomous vehicles.

Amazon SageMaker

Amazon SageMaker is a fully managed machine learning (ML) service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker Ground Truth helps build training data sets quickly and accurately, using an active learning model to label data, combining machine learning and human interaction to make the model progressively better. SageMaker provides fully managed and pre-built Jupyter notebooks to address common use cases. The service comes with multiple built-in, high-performance algorithms, and the AWS Marketplace for Machine Learning contains more than 100 additional pre-trained ML models and algorithms. You can also bring your own algorithms and frameworks built into a Docker container.

Amazon SageMaker includes built-in, fully managed Reinforcement Learning (RL) algorithms. RL is ideal for situations where there is no pre-labeled historical data but there is an optimal outcome. RL trains using rewards and penalties, which direct the model toward the desired behavior. SageMaker supports RL in multiple frameworks, including TensorFlow and MXNet, as well as custom-developed frameworks. SageMaker sets up and manages environments for training, and provides hyperparameter optimization with Automatic Model Tuning to make the model as accurate as possible.

SageMaker Neo allows you to deploy the same trained model to multiple platforms. Using machine learning, Neo optimizes the performance and size of the model and deploys it to edge devices containing the Neo runtime. AWS has released the code as the open source Neo-AI project on GitHub under the Apache Software License. SageMaker deployments run models spread across Availability Zones to deliver high performance and high availability.

Amazon EMR/EC2 with Spark/Spark ML

Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. You can also run other popular distributed frameworks in Amazon EMR, such as Apache Spark (including the Spark ML machine learning library), HBase, Presto, and Flink, and interact with data in other AWS data stores, such as Amazon S3 and Amazon DynamoDB. Spark and Spark ML can also be run on Amazon EC2 instances to pre-process data, engineer features, or run machine learning models.

Aspiring Developers Framework

In parallel with ML Services is the Aspiring Developers Framework layer. With a focus on teaching ML technology and techniques to users, this layer is not intended for production use at scale. Currently, the Aspiring Developers Framework consists of two service offerings.

AWS DeepLens

AWS DeepLens helps put deep learning in the hands of developers, with a fully programmable video camera, tutorials, code, and pre-trained models designed to expand deep learning skills. DeepLens offers developers the opportunity to use neural networks to learn and make predictions through computer vision projects, tutorials, and real-world, hands-on exploration with a physical device.

AWS DeepRacer

AWS DeepRacer is a 1/18th-scale race car that provides a way to get started with reinforcement learning (RL). AWS DeepRacer provides a means to experiment with and learn about RL by building models in Amazon SageMaker, testing them in the simulator, and deploying an RL model into the car.

ML Engines and Frameworks
Below the ML Platform layer is the ML Engines and Frameworks layer. This layer provides direct, hands-on access to the most popular machine learning tools. In this layer are the AWS Deep Learning AMIs, which equip you with the infrastructure and tools to accelerate deep learning in the cloud. The AMIs package together several important tools and frameworks, and come pre-installed with Apache MXNet, TensorFlow, PyTorch, the Microsoft Cognitive Toolkit (CNTK), Caffe, Caffe2, Theano, Torch, Gluon, Chainer, and Keras to train sophisticated custom AI models. The Deep Learning AMIs let you create managed, auto-scaling clusters of GPUs for large-scale training, or run inference on trained models with compute-optimized or general-purpose CPU instances.

ML Model Training and Deployment Support

The Infrastructure & Serverless Environments layer provides the tools that support the training and deployment of machine learning models. Machine learning requires a broad set of powerful compute options, ranging from GPUs for compute-intensive deep learning, to FPGAs for specialized hardware acceleration, to high-memory instances for running inference.

Amazon Elastic Compute Cloud (Amazon EC2)

Amazon EC2 provides a wide selection of instance types optimized to fit machine learning use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity, and give you the flexibility to choose the appropriate mix of resources, whether you are training models or running inference on trained models.

Amazon Elastic Inference

Amazon Elastic Inference allows you to attach low-cost, GPU-powered acceleration to Amazon EC2 and Amazon SageMaker instances for making predictions with your model. Rather than attaching a full GPU, which is more than required for most models, Elastic Inference can provide savings of up to 75% by allowing separate configuration of the right amount of acceleration for the specific model.

Amazon Elastic Container Service (Amazon ECS)

Amazon ECS supports running and scaling containerized applications, including trained machine learning models from Amazon SageMaker and containerized Spark ML.

Serverless Options

Serverless options remove the burden of managing specific infrastructure and allow customers to focus on deploying the ML models and other logic necessary to run their systems. Some of the serverless ML deployment options provided by AWS include Amazon SageMaker model deployment, AWS Fargate for containers, and AWS Lambda for serverless code deployment.

ML at the Edge

AWS also provides an option for pushing ML models to the edge to run locally on connected devices, using Amazon SageMaker Neo and AWS IoT Greengrass ML Inference. This allows customers to use ML models that are built and trained in the cloud, and to deploy and run ML inference locally on connected devices.

Conclusions

Many people use the terms AI and ML interchangeably. On the surface this seems incorrect, because historically machine learning is just a domain inside of AI, and AI covers a much broader set of systems. Today, however, the algorithms and models of machine learning replace traditional symbolic inferencing, knowledge representations, and languages. Training on large data sets has replaced hand-coded algorithms and heuristic approaches. Problems that seemed intractable using symbolic AI methods are modeled consistently, with remarkable results, using this approach. Machine learning has in fact become the foundation of most modern AI systems. Therefore, it actually makes more sense today than ever for the terms AI and ML to be used interchangeably.

AWS provides several machine learning offerings, ranging from pre-trained, ready-to-use services to the most popular tools and frameworks for creating custom ML models. Customers across industries, and with varying levels of experience, can add ML capabilities to improve existing systems, as well as create leading-edge applications in areas that were not previously accessible.

Contributors

Contributors to this document include:
• David Bailey, Cloud Infrastructure Architect, Amazon Web Services
• Mark Roy, Solutions Architect, Amazon Web Services
• Denis Batalov, Tech Leader ML & AI, Amazon Web Services

Further Reading

For additional information, see:
• AWS Whitepapers page
• AWS Machine Learning page
• AWS Machine Learning Training
• AWS Documentation

Document Revisions

Date: February 2019. Description: First publication.
,General,consultant,Best Practices
Managing_User_Logins_for_Amazon_EC2_Linux_Instances,"Managing User Logins for Amazon EC2 Linux Instances

September 2018

This paper has been archived. For the latest technical content about the AWS Cloud, go to the AWS Whitepapers & Guides page on the AWS website: https://aws.amazon.com/whitepapers

© 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Introduction 1
Use Key Pairs for Amazon EC2 Linux Logins 1
The Challenge 2
The Solution 3
An Expect Example 5
Granting Login Access: Steps and Commands 6
Automation: The Process 12
Script Development: Linux Commands and Code Samples 14
Confirm Authorization and Network Access 14
Create User, Generate Key Pair, Install Public Key 14
Key Distribution and Testing 16
Two Sample Scripts 16
Architecture for EC2 Linux Login Access Management 18
Database Tier 18
Application Tier 19
Web Tier 19
Automation Improvements 19
Use Cases 20
Ec2User (Default User) Key Rotation 20
Cross-Environment Access 21
Authorization and Permissions for Non-Employees 21
Conclusion 21
Contributors 21
Further Reading 22

Abstract

Public key and private key pairs are used to log in to Amazon Elastic Compute Cloud (Amazon EC2) Linux instances and provide robust security. The process of managing user logins can be manually intensive if you have many EC2 Linux instances and many users. Simplified management of user logins is natively available for EC2 Windows instances, but not yet for EC2 Linux instances. This whitepaper describes a method to automate the process of granting and revoking login access for users across multiple EC2 Linux instances. The description is based on Amazon Linux, but can be applied, with minor modifications, to other types of Amazon EC2 Linux instances. The required steps and commands are described in this whitepaper and can be captured in a script or program. You can then use the script or program as a tool to automate and simplify login management on other Amazon EC2 Linux instances. The target audience for this whitepaper includes Solutions Architects, Technical Account Managers, Product Engineers, and System Administrators. All references in this whitepaper to EC2 instances refer to Amazon EC2 Linux instances unless otherwise stated.

Amazon Web Services – Managing User Logins for Amazon EC2 Linux Instances, Page 1

Introduction

Amazon Web Services (AWS) generates a public key and private key (key pair) for logging in to each Amazon Elastic Compute Cloud (Amazon EC2) Linux instance, which is an extremely robust security design. The key pair is used for the Secure Shell (SSH) handshake: it enables a user to log in to an Amazon EC2 Linux host with an SSH client without having to enter a password.
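As a concrete sketch of key-pair handling (illustrative, and not part of this paper's procedure): a key pair can be generated locally with ssh-keygen and its public half registered with EC2 through the AWS CLI's import-key-pair command. The key name below is hypothetical, and the import step is guarded because it requires a configured AWS CLI with credentials:

```shell
# Generate a key pair locally, then register the public key with EC2.
# "my-ec2-key" is a placeholder name; the import needs AWS credentials.
key="$(mktemp -u)"                       # scratch path for the demo key pair
ssh-keygen -q -t rsa -b 2048 -f "$key" -N ""
if command -v aws >/dev/null 2>&1; then
  aws ec2 import-key-pair --key-name my-ec2-key \
      --public-key-material "fileb://${key}.pub" \
    || echo "import failed (check credentials and region)"
else
  echo "AWS CLI not installed; skipping import"
fi
```

After a successful import, new instances launched with this key name are reachable as ec2-user with the locally held private key.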
Use Key Pairs for Amazon EC2 Linux Logins

For Amazon EC2 Linux instances, the default user name is ec2-user. The public key is stored on the target instance (the instance that the user is requesting access to) in the ec2-user home directory (~ec2-user/.ssh/authorized_keys). The private key is stored locally on the client device from which the user logs in, for example, a PC desktop computer, tablet, Linux host, or Unix host. Typically, the private key for an Amazon EC2 Linux instance is downloaded by the users who are authorized to log in to that host.

For login access to a new EC2 Linux instance, you can either generate a new key pair or use an existing key pair. Key pairs can either be generated in the AWS console or created locally. The public key of a locally generated key pair can be given a unique name and uploaded to AWS from the AWS Command Line Interface (CLI). Thereafter, that key pair can be used to log in to newly created EC2 instances, but only as ec2-user.

The Challenge

Although using key pairs to log in to EC2 instances is very robust, efficiently managing access to multiple instances for many users with key pairs can be manually intensive and difficult to automate. To simplify the management of access to your Amazon EC2 Windows instances, you can integrate your EC2 Windows instances with Active Directory: you can grant or remove the login access of one user, or a group of users, to a Windows instance or a group of Windows instances. Currently, however, AWS does not support integration of EC2 Linux instance logins with Active Directory or with a Security Assertion Markup Language (SAML) compliant authentication repository such as LDAP.

Imagine a scenario where ten users have access to one Linux instance and each user logs
in to the server as ec2-user with the same private key. In a situation where one user's access to this instance must be removed, you would typically have to complete these steps:

1. Generate a new key pair.
2. Log in to the instance and replace the old public key with the new public key.
3. Distribute the new private key to the remaining nine users.

This process is manual and must be repeated each time you must remove access to the instance for any of the ten users. This can be tedious if there are many EC2 Linux instances, or if you need to temporarily grant user access or quickly revoke user access without impacting other users, for example, in a production environment.

Next, imagine a scenario in which there are ten EC2 Linux instances that share a single key pair. In this case, a user who has access to one instance automatically has access to all ten instances. One method to provide more granular login access control is to create ten different key pairs, one for each instance, so that a user only gets the private keys to the specific instances to which that user needs access. Although this provides granularity, it makes private key management difficult. For example, if a user needs to log in to a hundred different EC2 instances, he will need 100 different private keys. Furthermore, even with a unique key pair for every instance, if a user's access to an instance must be removed, you still face the problem of re-creating, reinstalling, and redistributing new key pairs to all the users that have login access to that instance. To remove user access to a large fleet, for example, 100 EC2 Linux instances, each with a unique key pair, you must create and distribute 100 new key pairs to each user that already has login access to those instances. For medium to large environments, login access management, key distribution, and tracking can become complicated and time-consuming.

In addition, every user authorized to log in to a Linux instance does so with the ec2-user account, which is root by default. That means that ec2-user can run any command with sudo. This might not always be desirable. You might want to grant a user root login access to EC2 Linux instances in development, but limit the commands that the user can run on production instances. For example, you might want to prevent a user from performing mount or unmount operations on Amazon Elastic File System (EFS) in production, but permit them in development. (Although mount privileges for Amazon EFS can be limited through root squashing, it is preferable to have finer control by attaching sudo privileges to the user, especially when granting access to a production environment.)

The Solution

The solution to the preceding challenges is to give each user a unique login name and a single key pair with which to log in to every EC2 Linux instance to which that user is granted access. The user gets a unique home directory, named after the user's login name, on every EC2 instance to which that user has login access. This design greatly simplifies login management: granting a user login access to an EC2 Linux instance simply requires creating a home directory for that user on that EC2 instance and placing the user's public key in that home directory. No other users with login access to that instance are affected. Conversely, when you have to remove a user's login access to a specific EC2 instance, you can simply delete that user's public key from that user's home directory on that EC2 Linux instance. Again, no other users with login access to that instance are affected.

If you want to temporarily grant login access to a user, you can generate a new key pair, place the public key in the user's home directory, and securely send the private key to the user. This significantly reduces the overhead associated with key distribution to many users. To purge a user from an instance, delete that user's home directory
(which should be backed up if it contains files, scripts, or data), which also removes the user's login access. Lastly, each user does not need sudo root privileges on every instance, but can have different sudo privilege levels on different instances. For example, sudo can be set to default to root (unlimited permissions), but can be modified to allow only a limited set of commands on a specific instance or group of instances. This is controlled entirely through the sudo configuration file on the EC2 instance, which is typically /etc/sudoers, or /etc/sudoers.d/cloud-init for Amazon Linux. This file can be modified by a root user to set sudo privileges for any user.

To give a user access to an EC2 instance, complete these steps:

1. Log in to the target EC2 instance as root (ec2-user).
2. Create the user's login and home directory.
3. Generate a key pair and place the public key in the user's home directory. If the user already has a key pair, copy the public key of that key pair to the user's home directory on the target EC2 instance.
4. Modify the configuration file /etc/ssh/sshd_config to disable password login and allow only SSH login by key pair.
5. Modify the /etc/sudoers.d/cloud-init file to grant the required sudo permissions to the user.
6. Securely send the private key to the user.

The user will then be able to log in to the instance with a unique login name and private key. To simplify the process so you can easily repeat it for each user, you can complete these steps manually and capture them in a Bash or Python script:

1. Log in to the target EC2 instance and run the commands to create the user account.
2. Set sudo permissions for the user account.
3. Grant login access with a key pair.

The script takes a user's login name as input, so when you run the script on any target EC2 instance, it grants login access to that user for that specific instance.
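The numbered steps above can be sketched as a small Bash helper. This is a minimal illustration, not the whitepaper's production script: the user name is a placeholder, and a PREFIX argument is added so the file-level steps can be exercised in a scratch directory without root. On a real instance you would run it as root with PREFIX=/ and also create the account itself (useradd -m):

```shell
#!/bin/bash
# Illustrative sketch of the per-user grant procedure described above.
# PREFIX lets the file-level steps run unprivileged in a scratch directory;
# on a real host, run as root with PREFIX=/ and also run: useradd -m "$user"
set -euo pipefail

grant_login() {
  local user="$1" prefix="$2"
  local home="$prefix/home/$user"
  mkdir -p "$home/.ssh"                                     # per-user .ssh dir
  chmod 700 "$home/.ssh"
  ssh-keygen -q -t rsa -b 2048 -f "$home/.ssh/$user" -N ""  # new key pair
  mv "$home/.ssh/$user.pub" "$home/.ssh/authorized_keys"    # install public key
  chmod 600 "$home/.ssh/authorized_keys"
  mkdir -p "$prefix/etc/sudoers.d"                          # sudo grant
  echo "$user ALL=(ALL) NOPASSWD:ALL" >> "$prefix/etc/sudoers.d/cloud-init"
  echo "private key to distribute securely: $home/.ssh/$user"
}

grant_login johnsmith "$(mktemp -d)"
```

Revocation is the mirror image: delete the user's authorized_keys file (or the whole home directory) and remove the user's line from the sudoers fragment.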
The process of logging in to the target instance and running the script is usually manual and interactive (over SSH), but it can be automated with a wrapper script written in Expect. When you run the Expect wrapper script, you don't have to manually log in to each target EC2 instance to run the commands that create the user and enable the user to log in. By automating this process, an administrator can grant or revoke access for any number of users on any number of instances.

Expect is public domain software that enables you to automate control of interactive applications, such as SSH, SFTP, SCP, FTP, passwd, and so on. These applications interactively prompt and expect a user to enter keystrokes in response. This is the case with SSH (Secure Shell), the protocol typically used to securely log in to EC2 Linux instances. When you use Expect, you can write simple scripts to automate SSH interactions. This makes it ideal for automating interactive logins to Linux, which does not have a login API. Several languages either have ports of Expect, for example, Perl (Expect.pm), Python (pexpect), and Java (expect4j), or have projects implementing Expect-like functionality. Most Linux distributions provide Expect, which is also available on Windows.

An Expect Example

One example of how you can use Expect to automate your actions with scripts is a connection to an anonymous FTP server. To make a manual FTP connection to an anonymous FTP server, you would typically follow these steps:

1. Open a connection to the FTP server.
2. At the name prompt, type anonymous.
3. At the password prompt, type your email address to get to the FTP prompt.
4. From the FTP prompt, select whether to download, upload, or list files.

But instead of manually making the connection to the FTP server, you can run the following script to automate this interaction with Expect. The script connects you to the FTP server and then runs the interact command, which gives control to the user. The variable in the
second line of the script, $argv, takes the name or IP address of the FTP server as command-line input:

#!/usr/local/bin/expect
spawn ftp $argv
expect "Name"
send "anonymous\r"
expect "Password:"
send "chiji@amazon.com\r"
interact

Expect is based on a subset of TCL, which can be used to write large and complicated programs. However, all the commands that give the user login access are included in the Bash script, which runs on the target EC2 instance. The Expect wrapper script simply automates the connection to each instance and runs the Bash script that is on each instance. This uses simple Expect syntax and is relatively simple to write. Additional program features, such as setting timeouts and checking command-line inputs, which need to be included in the wrapper script for robustness, might require more advanced syntax.

Granting Login Access: Steps and Commands

Scalable management of multiple user logins to a target Amazon EC2 Linux instance requires user account creation, key pair generation or public key retrieval, public key installation, sudo configuration, password login disablement, and user private key distribution. To grant a user John Smith, with the login name johnsmith, login access to a target EC2 instance with the IP address 10.10.10.100, you must complete the following steps.

Log in to the Target EC2 Instance

Log in to the target EC2 Linux instance as ec2-user with an SSH client, such as PuTTY from a Windows host or the default SSH client from a Linux or Mac host. You must have the private key for the target EC2 instance on the device from which you log in. If you log in from a Linux host, the private key has a .pem extension, but PuTTY requires a .ppk extension; use the PuTTYgen client to convert the .pem private key to a file with a .ppk extension. From a Linux or Mac host, the command to log in to the EC2 target instance is:

$ ssh -i /path-to-private-key ec2-user@10.10.10.100
ECDSA key
fingerprint is d3:f2:70:3c:2b:cf:2b:c3:94:e4:94:74:dc:5c:97:4f.
Are you sure you want to continue connecting (yes/no)? yes
[ec2-user@ip-10-10-10-100]$

Create the User Account

After you log in to the EC2 Linux instance, you create the new user account. To create this account, you run these commands:

1. Log in at the root level.

$ sudo -i

2. Change the user creation configuration so that the ~/.ssh directory is created for each new user.

$ mkdir /etc/skel/.ssh

3. Set the default permissions for the .ssh directory of each new user.

$ chmod 700 /etc/skel/.ssh

4. Create the user johnsmith, add him to the rootusers group (for sudo root access), and include a summary of John Smith's role.

$ useradd -c "John Smith, Engineer" -d /home/johnsmith -k /etc/skel -m -g rootusers \
  [-G other_group] johnsmith

5. Su to the new user johnsmith, change to his ~/.ssh directory, generate a key pair, and install the public key on the host.

$ su johnsmith
$ cd ~/.ssh
$ ssh-keygen -t rsa -b 2048 -f ./johnsmith -N ""

John Smith can then use this key pair to log in to every EC2 instance to which he is granted access. To revoke access, you can simply delete the public key from his home directory on the target host. To grant access, you must create an account on the target instance (if one does not exist) and install his public key in his home directory.

6. Return to the root level.

$ exit

Next, you move the private key to the ec2-user home directory and rename it. This is a security measure: the private key will be securely sent to the user from the ec2-user home directory, or written to a keys database for later distribution to the user.

1. Move the private key to the ec2-user home directory.

$ mv ~johnsmith/.ssh/johnsmith ~ec2-user/privk_johnsmith

2. Rename the public key to authorized_keys and set the correct permissions (600) to enable the SSH connection.

$ mv johnsmith.pub authorized_keys
$ chmod 600 authorized_keys

3. If the user already has an existing key pair, you can copy the user's public key to the target host, either from an EC2 instance (to which the user already has login access), from a keys host, or from a keys database. Install that public key in the user's home directory and change the permissions for the key.

$ scp -i /path-to-private-key \
  johnsmith@host-where-he-has-login:~/.ssh/authorized_keys johnsmith.pub
$ mv johnsmith.pub authorized_keys
$ chmod 600 authorized_keys

User account creation with the login name is complete. The login name should be the same as the user's Windows desktop or corporate account login user name. After the account is created and sudo permissions are set, the user can log in to the instance with the new login name and private key.

New Key Installation and Rotation

If the private key is lost, perform these steps to remove and replace the lost key:

1. Delete the user's public key from the home directory on each EC2 instance that the user has access to.
2. Generate a new key pair for the user.
3. Install the public key of this new key pair in the user's home directory on the requisite EC2 instances to reinstate login access for that user.
4. Securely send the new private key to the user.

As a mandatory security procedure, you should configure your environment to automatically rotate key pairs at frequent intervals. For more information, see Key Rotation.

Configure Sudo Permissions

For Amazon EC2 Linux, user privileges are defined in the /etc/sudoers.d/cloud-init file. The file usually contains only the entry for ec2-user:

ec2-user ALL=(ALL) NOPASSWD:ALL

Add the login name johnsmith to this file to give him root sudo privileges:

$ sudo -i
$ echo "johnsmith ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers.d/cloud-init

If you have a large number of new users to add to the sudo configuration file, or if you do not want to manually add each user, you
If you have a large number of new users to add to the sudo configuration file, or if you do not want to add each user manually, you can create a file that lists named user groups, together with the sudo privileges associated with each group, and upload that file to each target host. When the file is uploaded to the target host, its content is appended to the cloud-init file. You can then assign a new user to one of the groups named in the file, and the user inherits the sudo permissions of that group.
For example, you can create a file named grp_permissions, which specifies different permissions for different groups:
$ cat grp_permissions
%rootusers ALL=(ALL) NOPASSWD:ALL
%dbausers ALL=(ALL) ALL, !/usr/bin/passwd root
%sysadmin ALL=(root) /bin/mount, /bin/umount
%operator ALL=(root) /bin/mount, !/bin/umount /efs
The rootusers group has full root privileges, so any user added to that group during account creation has full sudo privileges. The operator group also has root privileges, but cannot unmount an EFS file system.
Upload the grp_permissions file to each target EC2 instance, and then use the sed command to append the entries in the file to the existing cloud-init file. Make sure to verify that the entries in the file do not already exist in the cloud-init file:
$ sudo -i
$ cd /etc/sudoers.d
$ /bin/sed -i -e '$r grp_permissions' cloud-init
Because the sudo syntax is complicated, and a syntax error might make it impossible to log in or run sudo on the instance, make sure that you are logged in as root through a separate terminal when you modify the cloud-init file (/etc/sudoers for other Linux versions). Errors can then be corrected while you are still logged in as root; you can either correct the incorrect entry or overwrite the file with the backup version. You should always create a backup copy before you modify the cloud-init file.
All sudo entries and edits should be made with visudo (not vim), which checks new entries for syntax errors. If it finds an error, it gives you the choice to fix the error, to exit without saving the changes to the file, or to save the changes and exit.
The last choice is not recommended, so visudo marks it with (DANGER!). Because you can break sudo when you update /etc/sudoers.d/cloud-init from a script, new sudo configuration changes, additions, and customizations should first be tested on a nonproduction host. Sudo configuration files that have been tested and work correctly should then be checked in to a version control repository, such as AWS CodeCommit, from which all production deployments should be sourced.
Sudo with LDAP
Sudo permissions can also be defined in LDAP, which can synchronize the sudoers configuration in a large, distributed environment. To define sudo permissions in LDAP:
1. Rebuild sudo with LDAP support on each EC2 instance. You can choose to rebuild one EC2 instance and then create an Amazon Machine Image (AMI), the Golden Image, which you can use to spin up other EC2 instances.
2. Update the LDAP schema.
3. Import the /etc/sudoers.d/cloud-init file into LDAP.
4. Configure the sudoers service in nsswitch.conf with this entry:
sudoers: files ldap
For more information, see the sudoers.ldap manual page.
The sudo with LDAP method has many benefits:
• Because there are only two or three queries per invocation, it is very fast.
• Data loaded into LDAP always conforms to the schema, so unlike sudo, which exits if there is a typo, sudo with LDAP continues to run.
• Because syntax is verified when data is inserted into LDAP, locking is not necessary. This means that visudo, which provides locking and syntax verification when the /etc/sudoers.d/cloud-init file is modified, is no longer needed.
For information about how to use a keys host, see Step 5 in Script Development: Linux Commands and Code Samples.
Automation: The Process
Normally, to complete the process to grant a user login access to a specific EC2 instance, an administrator or root user must manually log in to each target instance and run all the commands described in the preceding sections.
The Expect wrapper script automatically logs in to each target EC2 instance, uploads the Bash script, and then runs it to create the accounts and sudo permissions that enable login on that target for the specified users. This eliminates the need to manually log in to each target instance to run the script.
The information for the Expect wrapper script can be provided as a CSV flat input file with the format shown in Figure 1 below.
Instance IP Address, User Login Name, User Full Name, User Role, User Groups, Action
1.1.1.1, johnsmith, John Smith, SA, rootusers users, add
1.1.1.1, heidismith, Heidi Smith, Supermodel, users, add
1.1.1.1, abejohn, Abraham John, Senator, dbausers, remove
2.2.2.2, alberteinstein, Albert Einstein, Scientist, rootusers, add
2.2.2.2, galoisevariste, Galois Evariste, Math Whiz, mlusers, add
3.3.3.3, genghiskhan, Genghis Khan, Mongol, rootusers, remove
Figure 1
When the Expect script is run against this input file, it takes the information for each user (login name, full name, user group, and user role), logs in to each instance with the admin's private key, and runs the Bash script. This creates a login account (if one does not exist) and adds or revokes access to that instance for each user.
The Expect wrapper script and the Bash script jointly constitute the base tool for managing EC2 Linux login access. However, to be used robustly for instance management, they must be integrated into the authorization and security processes of the organization.
Figure 2 below is a flow chart that shows the typical steps to grant a user login access to a specific EC2 instance. It starts at user account creation, continues through setting sudo permissions, and finishes with login account testing. The same steps are performed, either serially or in parallel, to grant or revoke user login access to multiple instances.
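The Figure 1 input format can be consumed by a simple driver loop. The sketch below only prints the action it would take for each row; in the real tool, the Expect wrapper would connect to the instance and run the Bash user creation script instead of echoing.

```shell
#!/usr/bin/env bash
# Sketch: walk a Figure 1-style CSV and dispatch add/remove actions.
# The echo lines are stand-ins for invoking the Expect/Bash scripts
# against each target instance.
set -euo pipefail

CSV="$(mktemp)"
cat > "$CSV" <<'EOF'
1.1.1.1,johnsmith,John Smith,SA,rootusers,add
1.1.1.1,abejohn,Abraham John,Senator,dbausers,remove
EOF

while IFS=, read -r ip login fullname role groups action; do
  case "$action" in
    add)    echo "grant $login ($groups) on $ip" ;;
    remove) echo "revoke $login on $ip" ;;
    *)      echo "unknown action '$action' for $login" >&2 ;;
  esac
done < "$CSV"
```

Keeping the input declarative like this means the same file can drive serial or parallel execution across many instances.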
Figure 2: Steps to create and grant login access to an EC2 Linux instance
This document does not include information about the specific internal processes that determine which EC2 instances a user can access and which commands the user is authorized to run, because these are governed by the company's security and access policy. Additional steps might be added as necessary to conform to any custom security or management requirements for the environment. Companies and individuals are advised to review AWS Security Best Practices and make sure they understand how to properly secure their environments.
Script Development: Linux Commands and Code Samples
To successfully automate user logins, and to make sure that the process is both robust and secure, the automation scripts must be created correctly and the procedures must be followed correctly. To complete this process, the administrator who runs the scripts must perform all of the procedures that follow.
Confirm Authorization and Network Access
1. Confirm that login access for the user to the target Linux instances has been authorized through the requisite internal processes.
2. Confirm that the remote Linux host that runs the Expect wrapper script is able to connect to each target Linux instance to which the user is to be granted access. This is important if the target instance does not have a public IP address; in that case, the host from which the wrapper script is run must be on the same network as the target instance.
3. Confirm that after the administrator logs in to each target instance as ec2-user, the administrator can su to root, because the user creation script must be run as root.
Create User, Generate Key Pair, Install Public Key
1. Log in to the target server as ec2-user and run the Bash script. This creates the user account and a home directory with a .ssh directory, and sets the correct directory
permissions (700).
a. Add the user to the group from which to inherit sudo permissions.
b. Generate or retrieve a key pair and install the public key in the user's home directory.
c. Set the correct permissions on both keys and move the private key to a secure repository. ~ec2-user/privk temporarily holds the private key for download before it is deleted.
For the specific login name johnsmith, these commands create the johnsmith user account, generate a key pair, and install a copy of the public key in johnsmith's home directory. Both keys are then moved to a subdirectory on the target host (~ec2-user/privk). This is the local key directory, and it is defined in the variable LOCAL_KEYS_REPO in the Bash script. Newly created keys are downloaded from this directory on the target host. The private key is then securely forwarded to the user, and copies of the private key and public key are moved to the database or repository path specified in the variable KEYS_DATABASE. A user or application with the proper IAM permissions can retrieve the public key from the repository or database and install it on any target EC2 instance to which login access for the user is required.
The following variables refer to addresses or paths to the locations of users' SSH keys and must be set in the Bash script:
$KEYS_DATABASE="path-to-public-keys-location-for-all-users"
$LOCAL_KEYS_REPO="path-to-repository-on-target-instance-that-temporarily-holds-users-newly-created-key-pair"
$RECEIVED_KEYS_REPO="directory-on-target-instance-to-which-users-public-keys-are-copied-or-uploaded"
2. If johnsmith already has a key pair, retrieve his public key either from a host to which he already has access or from a keys repository. Upload the public key and the user creation script to the target instance. The RECEIVED_KEYS_REPO variable specifies a directory on the target instance to which a user's existing public key should be uploaded.
o For automation, when the user creation script runs on the target Linux instance, it first verifies whether a public key for the user is present in the RECEIVED_KEYS_REPO directory. If it is not, the script generates a new key pair, installs the public key, prompts the admin on the remote host to download both keys from LOCAL_KEYS_REPO, and then deletes both keys.
o If a public key with the user's login name is present in the local RECEIVED_KEYS_REPO directory, then that public key is moved to the .ssh directory of that user on that instance to grant login access to the target instance.
o The keys database address is included in the shell variable KEYS_DATABASE. The keys database keeps the login data for each user (full name, login name, public key, private key, authorized hosts, sudo permissions, and other user metadata). KEYS_DATABASE could refer to a CSV file in S3 or to an AWS managed relational database such as Amazon Relational Database Service (Amazon RDS), which provides six familiar database engines to choose from (Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server). Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora provides up to five times better performance than MySQL, with the security, availability, and reliability of a commercial database, at one tenth the cost.
For more information about an infrastructure design on AWS that provides robust login access management, see Architecture for EC2 Linux Login Access Management.
The Bash script can read the keys database to find the existing public key and other metadata for a user. It can also write a new key pair to the keys database. For greater security, you can create separate keys databases for user public and private keys.
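The key-lookup decision that the user creation script makes can be sketched as follows. This is a local simulation, not the script itself: the repo directories are temp directories, the key material is placeholder text, and a real script would call ssh-keygen to generate the pair and prompt the admin to download it, as described above.

```shell
#!/usr/bin/env bash
# Sketch: install an existing public key if one was uploaded for the user,
# otherwise "generate" a new pair (simulated here with placeholder files).
set -euo pipefail

LOGIN="johnsmith"
RECEIVED_KEYS_REPO="$(mktemp -d)"             # where uploaded keys would land
LOCAL_KEYS_REPO="$(mktemp -d)"                # holds newly created pairs
USER_SSH_DIR="$(mktemp -d)/home/$LOGIN/.ssh"  # stands in for ~johnsmith/.ssh
mkdir -p "$USER_SSH_DIR" && chmod 700 "$USER_SSH_DIR"

if [ -f "$RECEIVED_KEYS_REPO/$LOGIN.pub" ]; then
  # An existing public key was uploaded: install it.
  cp "$RECEIVED_KEYS_REPO/$LOGIN.pub" "$USER_SSH_DIR/authorized_keys"
  echo "installed existing key for $LOGIN"
else
  # No key present: generate a new pair. Placeholder content here; a real
  # script would run: ssh-keygen -t rsa -N '' -f "$LOCAL_KEYS_REPO/$LOGIN"
  printf 'FAKE-PRIVATE-KEY\n' > "$LOCAL_KEYS_REPO/$LOGIN"
  printf 'FAKE-PUBLIC-KEY\n'  > "$LOCAL_KEYS_REPO/$LOGIN.pub"
  cp "$LOCAL_KEYS_REPO/$LOGIN.pub" "$USER_SSH_DIR/authorized_keys"
  echo "generated new key pair for $LOGIN"
fi
chmod 600 "$USER_SSH_DIR/authorized_keys"
```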
Key Distribution and Testing
1. Download the private key for the user to the keys database. To test sudo for this user, update the /etc/sudoers.d/cloud-init file, change to the user, and run a sudo command. For example, to test a login for johnsmith, who has full root access, run the following commands as ec2-user on the target instance.
Source John Smith's full environment:
$ sudo su - johnsmith
Test sudo to root (for a user with limited access, the explicit sudo commands permitted should be tested):
$ sudo -i
2. Securely send the private key to the user for logins to all EC2 Linux instances to which the user has been granted access. The user's public key will thereafter be used to grant login access to EC2 Linux instances. To remove the user's login access, remove the authorized_keys file from the .ssh directory in the user's home directory.
Two Sample Scripts
The commands to automate user login access that are described in this document are included in a zip file with two working scripts: the user creation Bash script and the auto instance connect Expect wrapper script. You can download both scripts from my S3 bucket, unzip them, and test them on an EC2 Linux host.
• The Bash user creation script runs the actual commands required to create the user and grant login access on a target instance. These are the commands captured when you log in to the target instance and perform these operations manually. The script takes as input a user login, the IP address of a keys host, and the action to perform (add or remove login access for a user on the instance). It then creates the user account, generates a key pair or retrieves the user's existing public key, installs the public key in the user's home directory, sets the user's sudo privileges, and tests the user account. To revoke user access, it simply removes the user's public key from the user's home directory on the target instance. It must be run as root. A for loop in the Bash script can be used to grant or revoke login access for multiple users on the same EC2 instance.
• The Expect auto instance connect script automates connection to each target instance. It is designed to be run by an administrator from a remote Linux host; it connects to each target Linux instance (the instances to which the user is to be granted login access) to configure user login access. It uploads the user creation Bash script to the instance (if it is not already installed) and then runs it with the required command-line inputs (the user's login name, the keys host IP address, and add or remove access) to grant or remove login access to the instance for the specified user. Because the Expect script simulates the interactive commands required to make an SSH connection to an EC2 instance and run commands, it requires the path to the private key of the ec2-user account that runs it. A for loop in the Expect wrapper script can be used to connect to multiple instances to grant or revoke login access for one or more users. The Expect wrapper script is typically run from an EC2 Linux instance, but it can also be run from a Windows host; the latter requires installation of the Expect and SSH packages for Windows. For more information, see Further Reading.
After you run both scripts, the user is granted login access, with a unique login name, to the specified EC2 instances. The login name can be the same as the user's single sign-on (SSO) login name, and the user can be given root or other limited sudo privileges.
Both scripts illustrate automation of the process to grant or revoke login access. They must be modified before they can be used in production; for example, you could add logging, robust failure recovery, and so on. Administrators and developers might also need to modify the scripts for use on other EC2 Linux versions or for custom management and security needs.
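The for loops that both scripts rely on can be sketched as below. The create_login function is a hypothetical stand-in for the real invocation: in the actual tool, the wrapper would SSH to each instance and run the user creation script there rather than echoing.

```shell
#!/usr/bin/env bash
# Sketch: loop over users x instances, as the wrapper scripts do.
# create_login() is a stub; the real tool would SSH to "$1" and run
# the user creation Bash script on that instance.
set -euo pipefail

create_login() {  # args: instance-ip login action
  echo "[$2] $3 on $1"
}

USERS="johnsmith heidismith"
INSTANCES="1.1.1.1 2.2.2.2"

for ip in $INSTANCES; do
  for login in $USERS; do
    create_login "$ip" "$login" add
  done
done
```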
Instead of uploading the user creation Bash script to each target Linux instance at runtime, you can preinstall the script on each instance (by including it in an Amazon Machine Image), install it as an RPM package on the EC2 Linux instance, install it from ec2-user data on initial boot, or install it from a configuration server such as a Chef server. After the script is installed on the EC2 Linux instance, the Expect wrapper script does not have to load the script onto each target instance; instead, it connects to the target and runs the installed script.
Architecture for EC2 Linux Login Access Management
Database Tier
To perform automated user logins in production environments, the user login data and associated metadata should be stored in a database. Because there is no requirement for millisecond latencies, and because the amount of login data for all users is unlikely to be very large, Amazon RDS is an ideal keys database. Amazon RDS is a managed database service available in a choice of engines, and it provides significant security benefits. For more information, see the Overview of AWS Security – Database Services whitepaper.
The RDS database instance holds the user data required to provide login access to any EC2 Linux host. A database schema for user logins should include the following tables and fields:
• User table – UserID (primary key), user login name, first name, last name, email, mobile phone number, user role, public key, private key, key pair creation date, admin creator of key
• Linux host table – Hostnames, IP, FQDN, EC2 type, host function (database, web), environment (production, QA, test)
• Access options table – Group or user (root user, power user, operator, custom sudo configurations, etc.), sudo permissions, authorization date
You can choose not to store the user's private key in the database. This means that if the user's private key is lost, a new key pair must be generated for that user. Public keys that are rotated out are irretrievably lost, so old public keys should not be retained in the database. Only administrators should have read access to the keys database.
The database should also be replicated across Availability Zones (Multi-AZ) for high availability; otherwise, it might not be possible to grant access to users when the database is down or unreachable.
Because significant security problems could arise if users' login data and metadata are compromised, the data in the Amazon RDS database instance should be encrypted at rest. Amazon RDS also supports encrypting an Oracle or SQL Server DB instance with Transparent Data Encryption (TDE). TDE can be used in conjunction with encryption at rest, although using both simultaneously might cause a slight decrease in database performance.
To manage the keys used to encrypt and decrypt your Amazon RDS resources, you can use the AWS Key Management Service (AWS KMS). AWS KMS combines secure, highly available hardware and software to provide a key management system that is scaled for the cloud. With AWS KMS, you can create encryption keys and define the policies that control how those keys can be used. AWS KMS supports AWS CloudTrail, so you can audit key usage to verify that keys are used appropriately.
You can use SSL from your application to encrypt a connection to a DB instance that runs MySQL, MariaDB, Amazon Aurora, SQL Server, Oracle, or PostgreSQL. This provides end-to-end encryption of data in transit and at rest.
Application Tier
The infrastructure for login access management in a production environment can be configured as a standard three-tier architecture, with the database behind a security group that is reachable only from the application server. The application server can be a small T2 instance. The user creation script does not query the database directly, but goes through the application server. The user creation script and the Expect wrapper script can be configured to run only from the application server. You can then limit logins to the application server to specific administrators.
Web Tier
The web tier provides the interface through which users can request login access to specific servers, and it enables users to securely download their private keys. The web server can also be a T2 instance and should allow connections only over HTTPS. Connections from the web server to the application server can also be encrypted.
Automation Improvements
To simplify administration of user login accounts, you can add a graphical user interface (GUI) in front of the backend. From this interface, you can click a user name and select the group of instances for which you want to grant or remove access for that user. The backend processing is still performed by the user creation script and the auto instance connect script.
After a production version of the automation script is built with the three-tier architecture discussed in the preceding section, integration with Active Directory (AD) or any SAML 2.0-compliant system is feasible. The target Linux instances, as well as the privileges the user should have on the instances (root or non-root), can be read from your AD server and mapped to the Linux user group that has the equivalent sudo permissions. When the user account is created, it automatically inherits the sudo privileges of that Linux group. However, to implement this solution, you must make a call to the LDAP/ADSI API of your AD server to retrieve the hosts and privileges authorized for each EC2 instance for that user. The script receives that input and creates the user accounts, adds or revokes their access, and raises or lowers permissions by updating the group the user belongs to on the target instances.
Use Cases
There are several production use cases for which automated login access management is required.
Ec2-User (Default User) Key Rotation
Key rotation for the ec2-user account should be frequent, but it is rarely done in production environments simply because of the amount of manual effort required to create and reinstall keys in a moderately large environment.
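The rotation workflow can be sketched as a local simulation. The "instances" here are stand-in temp directories and the key material is placeholder text; a real rotation would generate the pair with ssh-keygen and copy the new public key to each instance over SSH.

```shell
#!/usr/bin/env bash
# Sketch: rotate a user's key pair across several "instances"
# (simulated as directories; placeholder text stands in for real keys).
set -euo pipefail

WORK="$(mktemp -d)"
for i in 1 2 3; do
  mkdir -p "$WORK/instance$i/.ssh"
  printf 'OLD-PUBLIC-KEY\n' > "$WORK/instance$i/.ssh/authorized_keys"
done

# 1. Generate a new key pair. Placeholder content here; a real script
#    would run: ssh-keygen -t rsa -N '' -f "$WORK/newkey"
printf 'NEW-PRIVATE-KEY\n' > "$WORK/newkey"
printf 'NEW-PUBLIC-KEY\n'  > "$WORK/newkey.pub"

# 2. Replace the old public key on every instance.
for d in "$WORK"/instance*/.ssh; do
  cp "$WORK/newkey.pub" "$d/authorized_keys"
  chmod 600 "$d/authorized_keys"
done

# Count the instances now carrying the new key.
grep -l 'NEW-PUBLIC-KEY' "$WORK"/instance*/.ssh/authorized_keys | wc -l
```

Steps 3 and 4 of the workflow (importing the new public key into AWS and distributing the private key) are external to the hosts and so are not simulated here.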
When you use the automated login access scripts, the effort required is significantly reduced, so key pair generation and rotation can be performed more frequently, which significantly improves security. A key pair can be created, named, and imported into the AWS account with the AWS CLI. The import command is:
$ aws ec2 import-key-pair --key-name my-key --public-key-material \
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuhrGNglwb2Zz/Qcz1zV+l12fJOnWmJxC2GMwQOjAX/L7p01o9vcLRoHXxOtcHBx0TmwMo+i85HWMUE7aJtYclVWPMOeepFmDqR1AxFhaIc9jDe88iLA07VK96wY4oNpp8+lICtgCFkuXyunsk4+KhuasN6kOpk7B2w5cUWveooVrhmJprR90FOHQB2Uhe9MkRkFjnbsA/hvZ/Ay0Cflc2CRZm/NG00lbLrV4l/SQnZmP63DJx194T6pI3vAev2+6UMWSwptNmtRZPMNADjmo50KiG2c3uiUIltiQtqdbSBMh9ztL/98AHtn88JG0s8u2uSRTNEHjG55tyuMbLD40QEXAMPLE
Output:
{
    "KeyName": "my-key",
    "KeyFingerprint": "1f:51:ae:28:bf:89:e9:d8:1f:25:5d:37:2d:7d:b8:ca"
}
With this command, the public key is imported into AWS. When you spin up Linux instances, that key pair is available for selection from the drop-down list of existing key pair names in the console. Any new instance launched with this key pair will have the public key installed on the instance, and administrators with the private key can log in as ec2-user to new instances they did not create.
For ec2-user key rotation, a new key pair is created, named, and installed on all instances for ec2-user with the user creation script. The public key of the new key pair is then imported into AWS with either the console or the CLI, the old key pair is deleted, and the new private key is securely distributed to authorized users.
Cross Environment Access
Granting login access to production systems to developers or third-party consultants carries risk, but it is sometimes necessary. Providing root access in those circumstances is dangerous.
The automated login access management tools described in this document give administrators control over the commands that a user can run on a production EC2 instance. An administrator can create a group with limited sudo privileges and add users to this group when their accounts are created. The administrator can also remove a user's public key to revoke login access to the instance after the specific task for which access was needed is completed.
Authorization and Permissions for Non-Employees
The capacity to grant or revoke login access to target EC2 instances, and to provide granular control over the actions that a user can perform on the target instance, offers great flexibility. It is particularly useful when you need to give login access to temporary employees, partners, consultants, software vendors, or applications whose actions on target hosts must be limited. In addition, every action performed on the target host by the user can be monitored and captured by a shell such as sudosh or rootsh, which logs all keystrokes. A tracking shell can be specified when the account is created for a specific user on the target instance.
Conclusion
The concepts explained in this document for automating login access, a process that is ordinarily interactive for all types of Linux and is therefore manually intensive, can be used to develop an application or script with great benefits and broad utility across the enterprise. Automated login access management may eventually become a native feature of Amazon EC2 Linux instances; for now, developing and using an automation tool will be invaluable to administrators, engineers, architects, and account managers for managing user access.
Contributors
The following individuals and organizations contributed to this document:
• Chiji Uzo, AWS Solutions Architect
Further Reading
For more information, see these resources:
• user creation script:
https://s3-us-west-2.amazonaws.com/samplescripts/user-creation
• Overview of AWS Security – Database Services (whitepaper): https://d0.awsstatic.com/whitepapers/Security/Security_Database_Services_Whitepaper.pdf
• Expect for Windows: http://docs.activestate.com/activetcl/8.4/expect4win/ex_usage.html#cross_platform
• Open SSH for Windows: http://sshwindows.sourceforge.net/

Managing Your AWS Infrastructure at Scale
Shaun Pearce, Steven Bryen
February 2015
This paper has been archived. For the latest technical guidance on AWS infrastructure, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers/
Amazon Web Services – Managing Your AWS Infrastructure at Scale, February 2015, Page 2 of 32
© 2015 Amazon Web Services, Inc. or its affiliates. All rights reserved.
Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
Contents
Abstract 4
Introduction 4
Provisioning New EC2 Instances 6
Creating Your Own AMI 7
Managing AMI Builds 9
Dynamic Configuration 12
Scripting Your Own Solution 12
Using Configuration Management Tools 16
Using AWS Services to Help Manage Your
Environments 22
AWS Elastic Beanstalk 22
AWS OpsWorks 23
AWS CloudFormation 24
User Data 24
cfn-init 25
Using the Services Together 26
Managing Application and Instance State 27
Structured Application Data 28
Amazon RDS 28
Amazon DynamoDB 28
Unstructured Application Data 29
User Session Data 29
Amazon ElastiCache 29
System Metrics 30
Amazon CloudWatch 30
Log Management 31
Amazon CloudWatch Logs 31
Conclusion 32
Further Reading 32
Abstract
Amazon Web Services (AWS) enables organizations to deploy large-scale application infrastructures across multiple geographic locations. When deploying these large cloud-based applications, it is important to ensure that the cost and complexity of operating such systems does not increase in direct proportion to their size.
This whitepaper is intended for existing and potential customers, especially architects, developers, and sysops administrators, who want to deploy and manage their infrastructure in a scalable and predictable way on AWS. In this whitepaper, we describe tools and techniques to provision new instances, configure the instances to meet your requirements, and deploy your application code. We also introduce strategies to ensure that your instances remain stateless, resulting in an architecture that is more scalable and fault tolerant. The techniques we describe allow you to scale your service from a single instance to thousands of instances, while maintaining a consistent set of processes and tools to manage them. For the purposes of this whitepaper, we assume that you have knowledge of basic scripting and core services such as Amazon Elastic Compute Cloud (Amazon EC2).
Introduction
When designing and implementing large cloud-based applications, it is important to consider how your infrastructure will be managed, to ensure that the cost and complexity of running such systems is minimized. When you first begin using Amazon EC2, it is easy to
manage your EC2 instances just like regular virtualized servers running in your data center. You can create an instance, log in, configure the operating system, install any additional packages, and install your application code. You can maintain the instance by installing security patches, rolling out new deployments of your code, and modifying the configuration as needed. Despite the operational overhead, you can continue to manage your instances in this way for a long time. However, your instances will inevitably begin to diverge from their original specification, which can lead to inconsistencies with other instances in the same environment. This divergence from a known baseline can become a huge challenge when managing large fleets of instances across multiple environments. Ultimately, it will lead to service issues, because your environments will become less predictable and more difficult to maintain.
The AWS platform provides you with a set of tools to address this challenge with a different approach. By using Amazon EC2 and associated services, you can specify and manage the desired end state of your infrastructure independently of the EC2 instances and other running components.
For example, with a traditional approach, you would alter the configuration of an Apache server running across your web servers by logging in to each server in turn and manually making the change. By using the AWS platform, you can take a different approach: change the underlying specification of your web servers, and launch new EC2 instances to replace the old ones. This ensures that each instance remains identical; it also reduces the effort to implement the change and reduces the likelihood of errors being introduced.
When you start to think of your infrastructure as being defined independently of the running EC2 instances and other components in your environments, you can take greater advantage of
the benefits of dynamic cloud environments:
• Software-defined infrastructure – By defining your infrastructure using a set of software artifacts, you can leverage many of the tools and techniques used when developing software components. This includes managing the evolution of your infrastructure in a version control system, as well as using continuous integration (CI) processes to continually test and validate infrastructure changes before deploying them to production.
• Auto Scaling and self-healing – If you automatically provision your new instances from a consistent specification, you can use Auto Scaling groups to manage the number of instances in an EC2 fleet. For example, you can set a condition to add new EC2 instances in increments to the Auto Scaling group when the average utilization of your EC2 fleet is high. You can also use Auto Scaling to detect impaired EC2 instances and unhealthy applications, and replace the instances without your intervention.
• Fast environment provisioning – You can quickly and easily provision consistent environments, which opens up new ways of working within your teams. For example, you can provision a new environment to allow testers to validate a new version of your application in their own personal test environments, isolated from other changes.
• Reduced costs – Because you can provision environments quickly, you also have the option to remove them when they are no longer needed. This reduces costs, because you pay only for the resources that you use.
• Blue-green deployments – You can deploy new versions of your application by provisioning new instances (containing the new version of the code) beside your existing infrastructure. You can then switch traffic between environments, in an approach known as blue-green deployment. This has many benefits over traditional deployment strategies, including the ability to quickly and easily roll back a deployment in the event of an issue.
To leverage these advantages, your
infrastructure must have the following capabilities:

1. New infrastructure components are automatically provisioned from a known, version-controlled baseline in a repeatable and predictable manner.
2. All instances are stateless, so that they can be removed and destroyed at any time without the risk of losing application state or system data.

The following figure shows the overall process:

Figure 1: Instance Lifecycle and State Management

The following sections outline tools and techniques that you can use to build a system with these capabilities. By moving to an architecture where your instances can be easily provisioned and destroyed with no loss of data, you can fundamentally change the way you manage your infrastructure. Ultimately, you can scale your infrastructure over time without significantly increasing the operational overhead associated with it.

Provisioning New EC2 Instances

A number of external events will require you to provision new instances into your environments:

• Creating new instances or replicating existing environments
• Replacing a failed instance in an existing environment
• Responding to a "scale up" event to add additional instances to an Auto Scaling group
• Deploying a new version of your software stack (by using blue-green deployments)

Some of these events are difficult or even impossible to predict, so it's important that the process to create new instances in your environment is fully automated, repeatable, and consistent. The process of automatically provisioning new instances and bringing them into service is known as bootstrapping. There are multiple approaches to bootstrapping your Amazon EC2 instances. The two most popular approaches are to either create your own Amazon Machine Image
(AMI) or to use dynamic configuration. We explain both approaches in the following sections.

Creating Your Own AMI

An Amazon Machine Image (AMI) is a template that provides all of the information required to launch an Amazon EC2 instance. At a minimum, it contains the base operating system, but it may also include additional configuration and software. You can launch multiple instances of an AMI, and you can also launch different types of instances from a single AMI.

You have several options when launching a new EC2 instance:

• Select an AMI provided by AWS
• Select an AMI provided by the community
• Select an AMI containing preconfigured software from the AWS Marketplace[1]
• Create a custom AMI

If you launch an instance from a base AMI containing only the operating system, you can further customize the instance with additional configuration and software after it has been launched. If you create a custom AMI, you can launch an instance that already contains your complete software stack, thereby removing the need for any runtime configuration. However, before you decide whether to create a custom AMI, you should understand the advantages and disadvantages.

Advantages of custom AMIs:

• Increased speed – All configuration is packaged into the AMI itself, which significantly increases the speed at which new instances can be launched. This is particularly useful during Auto Scaling events.
• Reduced external dependencies – Packaging everything into an AMI means that there is no dependency on the availability of external services when launching new instances (for example, package or code repositories).
• No reliance on complex configuration scripts at launch time – By preconfiguring your AMI, scaling events and instance replacements no longer rely on the successful completion of configuration scripts at launch time. This reduces the likelihood of operational issues caused by erroneous scripts.

[1] https://aws.amazon.com/marketplace

Disadvantages of custom AMIs:
• Loss of agility – Packaging everything into an AMI means that even simple code changes and defect fixes will require you to produce a new AMI. This increases the time it takes to develop, test, and release enhancements and fixes to your application.
• Complexity – Managing the AMI build process can be complex. You need a process that enables the creation of consistent, repeatable AMIs where the changes between revisions are identifiable and auditable.
• Runtime configuration requirements – You might need to make additional customizations to your AMIs based on runtime information that cannot be known at the time the AMI is created. For example, the database connection string required by your application might change depending on where the AMI is used.

Given these advantages and disadvantages, we recommend a hybrid approach: build static components of your stack into AMIs, and configure dynamic aspects that change regularly (such as application code) at run time. Consider the following factors to help you decide what configuration to include within a custom AMI and what to include in dynamic run-time scripts:

• Frequency of deployments – How often are you likely to deploy enhancements to your system, and at what level in your stack will you make the deployments? For example, you might deploy changes to your application on a daily basis, but you might upgrade your JVM version far less frequently.
• Reduction of external dependencies – If the configuration of your system depends on other external systems, you might decide to carry out these configuration steps as part of an AMI build rather than at the time of launching an instance.
• Requirements to scale quickly – Will your application use Auto Scaling groups to adjust to changes in load? If so, how quickly will the load on the application increase?
This will dictate the speed at which you need to provision new instances into your EC2 fleet.

Once you have assessed your application stack based on the preceding criteria, you can decide which elements of your stack to include in a custom AMI and which will be configured dynamically at the time of launch. The following figure shows a typical Java web application stack and how it could be managed across AMIs and dynamic scripts.

Figure 2: Base, Foundational, and Full AMI Models (stack layers shown: OS, OS users and groups, JVM, Apache, Tomcat, app frameworks, application code, bootstrapping code, app config)

In the base AMI model, only the OS image is maintained as an AMI. The AMI can be an AWS managed image, or an AMI that you manage that contains your own OS image. In the foundational AMI model, elements of a stack that change infrequently (for example, components such as the JVM and application server) are built into the AMI. In the full stack AMI model, all elements of the stack are built into the AMI. This model is useful if your application changes infrequently, or if your application has rapid auto scaling requirements (which means that dynamically installing the application isn't feasible). However, even if you build your application into the AMI, it still might be advantageous to dynamically configure the application at run time, because it increases the flexibility of the AMI. For example, it enables you to use your AMIs across multiple environments.

Managing AMI Builds

Many people start by manually configuring their AMIs, using a process similar to the following:

1. Launch the latest version of the AMI.
2. Log in to the instance and manually reconfigure it (for example, by making package updates or installing new applications).
3. Create a new AMI based on the running instance.

Although this manual process is sufficient for simple applications, it is difficult to manage in more complex environments where AMI updates are needed regularly. It's essential to have a consistent, repeatable process to create your AMIs. It's also important to be able to audit what has changed between one version of your AMI and another.

One way to achieve this is to manage the customization of a base AMI by using automated scripts. You can develop your own scripts, or you can use a configuration management tool. For more information about configuration management tools, see the Using Configuration Management Tools section in this whitepaper.

Using automated scripts has a number of advantages over the manual method. Automation significantly speeds up the AMI creation process. In addition, you can use version control for your scripts and configuration files, which results in a repeatable process where the change between AMI versions is transparent and auditable. This automated process is similar to the manual process:

1. Launch the latest version of the AMI.
2. Execute the automated configuration using your tool of choice.
3. Create a new AMI based on the running instance.

You can use a third-party tool such as Packer[2] to help automate the process. However, many find that this approach is still too time consuming for an environment with multiple, frequent AMI builds across multiple environments.

If you use the Linux operating system, you can reduce the time it takes to create a new AMI by customizing an Amazon Elastic Block Store (Amazon EBS) volume rather than a running instance. An Amazon EBS volume is a durable, block-level storage device that you can attach to a single Amazon EC2 instance. It is possible to create an Amazon EBS volume from a base AMI snapshot
and customize this volume before storing it as a new AMI. This replaces the time taken to initialize an EC2 instance with the far shorter time needed to create and attach an EBS volume.

In addition, this approach makes use of the incremental nature of Amazon EBS snapshots. An EBS snapshot is a point-in-time backup of an EBS volume that is stored in Amazon S3. Snapshots are incremental backups, meaning that only the blocks on the device that have changed after your most recent snapshot are saved. For example, if a configuration update changes only 100 MB of the blocks on an 8 GB EBS volume, only 100 MB will be stored to Amazon S3.

To achieve this, you need a long-running EC2 instance that is responsible for attaching a new EBS volume based on the latest AMI build, executing the scripts needed to customize the volume, creating a snapshot of the volume, and registering the snapshot as a new version of your AMI. For example, Netflix uses this technique in their open source tool called aminator[3]. The following figure shows this process.

Figure 3: Using EBS Snapshots to Speed Up Deployments

1. Create the volume from the latest AMI snapshot.
2. Attach the volume to the instance responsible for building new AMIs.
3. Run automated provisioning scripts to update the AMI configuration.
4. Snapshot the volume.
5. Register the snapshot as a new version of the AMI.

[2] https://packer.io
[3] https://github.com/Netflix/aminator

Dynamic Configuration

Now that you have decided what to include in your AMI and what should be dynamically configured at run time, you need to decide how to complete the dynamic configuration and bootstrapping process. There are many tools and techniques that you can use to configure your instances, ranging from simple scripts to complex centralized configuration management tools.
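The five EBS-based build steps listed under Figure 3 above can be sketched with the AWS CLI. This is a minimal illustration, not a complete build script: all resource IDs, the Availability Zone, and the device names are placeholders, and in real use you would capture each returned volume and snapshot ID before the next step. The sketch defaults to a dry-run mode that prints each call instead of executing it, so it can be run safely without AWS access.

```shell
# Dry-run by default: print each AWS CLI call instead of executing it.
DRY_RUN="${DRY_RUN:-1}"
aws() { if [ "$DRY_RUN" = "1" ]; then echo "aws $*"; else command aws "$@"; fi; }

BASE_SNAPSHOT="snap-1111aaaa"   # snapshot behind the latest AMI (placeholder)
BUILDER="i-2222bbbb"            # long-running AMI-builder instance (placeholder)

# 1. Create a volume from the latest AMI's snapshot
aws ec2 create-volume --snapshot-id "$BASE_SNAPSHOT" --availability-zone us-east-1a
# 2. Attach it to the builder instance (capture the new volume ID in real use)
aws ec2 attach-volume --volume-id vol-3333cccc --instance-id "$BUILDER" --device /dev/sdf
# 3. Mount the volume on the builder and run your provisioning scripts here
# 4. Snapshot the customized volume
aws ec2 create-snapshot --volume-id vol-3333cccc --description "app config build"
# 5. Register the new snapshot as the next version of the AMI
aws ec2 register-image --name "app-ami-v2" --architecture x86_64 \
  --root-device-name /dev/sda1 \
  --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-4444dddd}"
```

Tools such as aminator automate exactly this loop, including the ID bookkeeping that the sketch omits.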
Scripting Your Own Solution

Depending on how much preconfiguration has been included in your AMI, you might need only a single script, or set of scripts, as a simple, elegant way to configure the final elements of your application stack.

User Data and cloud-init

When you launch a new EC2 instance by using either the AWS Management Console or the API, you have the option of passing user data to the instance. You can retrieve the user data from the instance through the EC2 metadata service, and use it to perform automated tasks to configure instances as they are first launched.

When a Linux instance is launched, the initialization instructions passed into the instance by means of the user data are executed by using a technology called cloud-init. The cloud-init package is an open source application built by Canonical. It's included in many base Linux AMIs (to find out if your distribution supports cloud-init, see the distribution-specific documentation). Amazon Linux, a Linux distribution created and maintained by AWS, contains a customized version of cloud-init.

You can pass two types of user data, either shell scripts or cloud-init directives, to cloud-init running on your EC2 instance. For example, the following shell script can be passed to an instance to update all installed packages and to configure the instance as a PHP web server:

#!/bin/sh
yum update -y
yum -y install httpd php php-mysql
chkconfig httpd on
/etc/init.d/httpd start

The following user data achieves the same result, but uses a set of cloud-init directives:

#cloud-config
repo_update: true
repo_upgrade: all

packages:
 - httpd
 - php
 - php-mysql

runcmd:
 - service httpd start
 - chkconfig httpd on

AWS Windows AMIs contain an additional service, EC2Config, that is installed by AWS. The EC2Config service performs tasks on the instance such as activating Windows, setting the Administrator password, writing to the AWS console, and performing one-click sysprep from within the application. If you launch a Windows instance, the EC2Config service can also execute scripts passed to the instance by means of the user data. The data can be in the form of commands that you run at the cmd.exe prompt or Windows PowerShell prompt.

This approach works well for simple use cases. However, as the number of instance roles (web, database, and so on) grows, along with the number of environments that you need to manage, your scripts might become large and difficult to maintain. Additionally, user data is limited to 16 KB, so if you have a large number of configuration tasks and associated logic, we recommend that you use the user data to download additional scripts from Amazon S3 that can then be executed.

Leveraging EC2 Metadata

When you configure a new instance, you typically need to understand the context in which the instance is being launched. For example, you might need to know the hostname of the instance, or which region or Availability Zone the instance has been launched into. The EC2 metadata service can be queried to provide such contextual information about an instance, as well as retrieving the user data.

To access the instance metadata from within a running instance, you can make a standard HTTP GET request using tools such as cURL or the GET command. For example, to retrieve the host name of the instance, you can make an HTTP GET request to the following URL:

http://169.254.169.254/latest/meta-data/hostname

Resource Tagging

To help you manage your EC2 resources, you can assign your own metadata to each instance, in addition to the EC2 metadata that is used to define hostnames, Availability Zones, and other resources. You do this with tags. Each tag consists of a key and a value, both of which you define when the instance is launched. You can use EC2 tags to define further context for the instance being launched. For example, you can tag your instances
for different environments and roles, as shown in the following figure.

Figure 4: Example of EC2 Tag Usage (instance i-1bbb2637 tagged with environment = production, role = web; instance i-f2871ade tagged with environment = dev, role = app)

As long as your EC2 instance has access to the Internet, these tags can be retrieved by using the AWS Command Line Interface (CLI) within your bootstrapping scripts to configure your instances based on their role and the environment in which they are being launched.

Putting it all Together

The following figure shows a typical bootstrapping process using user data and a set of configuration scripts hosted on Amazon S3.

Figure 5: Example of an End-to-End Workflow (instance launch request; user data retrieved and processed via the metadata service; base configuration downloaded from an Amazon S3 bucket and executed; server role and environment retrieved from the EC2 API via describe-tags; role and environment overlay scripts downloaded and executed; bootstrap complete)

This example uses the user data as a lightweight mechanism to download a base configuration script from Amazon S3. The script is responsible for configuring the system to a baseline across all instances, regardless of role and environment (for example, the script might install monitoring agents and ensure that the OS is patched). This base configuration script uses the CLI to retrieve the instance's tags. Based on the value of the "role" tag, the script downloads an additional overlay script responsible for the additional configuration required for the instance to perform its specific role (for example, installing Apache on a web server). Finally, the script uses the instance's "environment" tag to download an appropriate environment overlay script to carry out the final configuration for the environment the instance resides in (for example, setting log levels to DEBUG in the development environment).

To protect sensitive information that might be contained in your scripts, you should restrict access to these assets by using IAM roles[4].

Using Configuration Management Tools

Although scripting your own solution works, it can quickly become complex when managing large environments. It also can become difficult to govern and audit your environment, such as identifying changes or troubleshooting configuration issues. You can address some of these issues by using a configuration management tool to manage instance configurations.

Configuration management tools allow you to define your environment's configuration in code, typically by using a domain-specific language. These domain-specific languages use a declarative approach to code, where the code describes the end state and is not a script that can be executed. Because the environment is defined using code, you can track changes to the configuration and apply version control. Many configuration management tools also offer additional features, such as compliance auditing and search.

Push vs Pull Models

Configuration management tools typically leverage one of two models: push or pull. The model used by a tool is defined by how a node (a target EC2 instance in AWS) interacts with the master configuration management server.

In a push model, a master configuration management server is aware of the nodes that it needs to manage and pushes the configuration to them remotely. These nodes need to be pre-registered on the master server. Some push tools are agentless and execute configuration remotely using existing protocols such as SSH. Others push a package, which is then executed locally using an agent. The push model typically has some constraints when working with dynamic
and scalable AWS resources:

• The master server needs to have information about the nodes that it needs to manage. When you use tools such as Auto Scaling, where nodes might come and go, this can be a challenge.
• Push systems that do remote execution do not scale as well as systems where configuration changes are offloaded and executed locally on a node. In large environments, the master server might get overloaded when configuring multiple systems in parallel.
• Connecting to nodes remotely requires specific ports to be allowed inbound to your nodes. For some remote execution tools, this includes remote SSH.

[4] http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

The second model is the pull model. Configuration management tools that use a pull system use an agent that is installed on a node. The agent asks the master server for configuration. A node can pull its configuration at boot time, or agents can be daemonized to poll the master periodically for configuration changes. Pull systems are especially useful for managing dynamic and scalable AWS environments. Following are the main benefits of the pull model:

• Nodes can scale up and down easily, as the master does not need to know they exist before they can be configured. Nodes can simply register themselves with the server.
• Configuration management masters require less scaling when using a pull system, because all processing is offloaded and executed locally on the remote node.
• No specific ports need to be opened inbound to the nodes. Most tools allow the agent to communicate with the master server by using typical outbound ports, such as HTTPS.

Chef Example

Many configuration management tools work with AWS. Some of the most popular are Chef, Puppet, Ansible, and SaltStack. For our example in this section, we use Chef to demonstrate bootstrapping with a configuration management tool. You can use
other tools and apply the same principles. Chef is an open source configuration management tool that uses an agent (chef-client) to pull configuration from a master server (Chef server). Our example shows how to bootstrap nodes by pulling configuration from a Chef server at boot time. The example is based on the following assumptions:

• You have configured a Chef server.
• You have an AMI that has the AWS command line tools installed and configured.
• You have the chef-client installed and included in your AMI.

First, let's look at what we are going to configure within Chef. We'll create a simple Chef cookbook that installs an Apache web server and deploys a 'Hello World' site. A Chef cookbook is a collection of recipes; a recipe is a definition of resources that should be configured on a node. This can include files, packages, permissions, and more. The default recipe for this Apache cookbook might look something like this:

#
# Cookbook Name:: apache
# Recipe:: default
#
# Copyright 2014, YOUR_COMPANY_NAME
#
# All rights reserved - Do Not Redistribute
#

package "httpd"

#Allow Apache to start on boot
service "httpd" do
  action [:enable, :start]
end

#Add HTML Template into Web Root
template "/var/www/html/index.html" do
  source "index.html.erb"
  mode "0644"
end

In this recipe, we install, enable, and start the HTTPD (HTTP daemon) service. Next, we render a template for index.html and place it into the /var/www/html directory. The index.html.erb template in this case is a very simple HTML page:

<html>
  <body>
    Hello World
  </body>
</html>
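The 'webserver' role described next can be expressed as a plain JSON file and uploaded to the Chef server with `knife role from file`. The following is a minimal, hypothetical role definition for this example; the `name` and `run_list` values are the only fields tied to our cookbook, while `json_class` and `chef_type` are standard Chef role boilerplate:

```json
{
  "name": "webserver",
  "description": "Front-end web servers running the apache cookbook",
  "json_class": "Chef::Role",
  "chef_type": "role",
  "run_list": ["recipe[apache]"]
}
```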
Next, the cookbook is uploaded to the Chef server. Chef offers the ability to group cookbooks into roles. Roles are useful in large-scale environments, where servers within your environment might have many different roles and cookbooks might have overlapping roles. In our example, we add this cookbook to a role called 'webserver'.

Now, when we launch EC2 instances (nodes), we can provide EC2 user data to bootstrap them by using Chef. To make this as dynamic as possible, we can use an EC2 tag to define which Chef role to apply to our node. This allows us to use the same user data script for all nodes, whichever role is intended for them. For example, a web server and a database server can use the same user data if you assign different values to the 'role' tag in EC2.

We also need to consider how our new instance will authenticate with the Chef server. We can store our private key in an encrypted Amazon S3 bucket by using Amazon S3 server-side encryption[5], and we can restrict access to this bucket by using IAM roles. The key can then be used to authenticate with the Chef server. The chef-client uses a validator.pem file to authenticate to the Chef server when registering new nodes.

We also need to know which Chef server to pull our configuration from. We can store a prepopulated client.rb file in Amazon S3 and copy this within our user data script. You might want to dynamically populate this client.rb file depending on environment, but for our example we assume that we have only one Chef server and that a prepopulated client.rb file is sufficient. You could also include these two files in your custom AMI build. The user data would look like this:

#!/bin/bash
cd /etc/chef

#Copy Chef Server Private Key from S3 Bucket
aws s3 cp s3://s3-bucket/orgname-validator.pem orgname-validator.pem

#Copy Chef Client Configuration File from S3 Bucket
aws s3 cp s3://s3-bucket/client.rb client.rb

#Change permissions on Chef Server private key
chmod 400 /etc/chef/orgname-validator.pem

#Get EC2 Instance ID from the Meta Data Service
INSTANCE_ID=`curl -s http://169.254.169.254/latest/meta-data/instance-id`

#Get Tag with Key of 'role' for this EC2 instance
ROLE_TAG=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=role" --output text)

#Get value of Tag with Key of 'role' as string
ROLE_TAG_VALUE=$(echo $ROLE_TAG | awk 'NF>1{print $NF}')

#Create first_boot.json file, dynamically adding the tag value as the chef role in the run list
echo "{\"run_list\":[\"role[$ROLE_TAG_VALUE]\"]}" > first_boot.json

#Execute the chef-client using the first_boot.json config
chef-client -j first_boot.json

#Daemonize the chef-client to run every 5 minutes
chef-client -d -i 300 -s 30

[5] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html

As shown in the preceding user data example, we copy our client configuration files from a private S3 bucket. We then use the EC2 metadata service to get some information about the instance (in this example, the instance ID). Next, we query the Amazon EC2 API for any tags with the key of 'role' and dynamically configure a Chef run list with a Chef role of this value. Finally, we execute the first chef-client run by providing the first_boot.json options, which include our new run list. We then execute chef-client once more; however, this time we execute it in a daemonized setup to pull configuration every 5 minutes.

We now have some reusable EC2 user data that we can apply to any new EC2 instances. As long as a 'role' tag is provided with a value that matches a role on the target Chef server, the instance will be configured using the corresponding Chef cookbooks.

Putting it all Together

The following figure shows a typical workflow from instance launch to a fully configured instance that is ready to serve traffic.
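The tag-to-run-list step in the user data above is easy to verify in isolation. Here we feed the same extraction logic a hand-written line shaped like the text output of `aws ec2 describe-tags` (the instance ID and tag value are made up for illustration):

```shell
# Sample line in the shape `aws ec2 describe-tags --output text` returns
# (TAGS, key, resource id, resource type, value); the values are illustrative.
ROLE_TAG="TAGS role i-1bbb2637 instance webserver"

# Same extraction as the user data script: print the last field of the line
ROLE_TAG_VALUE=$(echo $ROLE_TAG | awk 'NF>1{print $NF}')

# Build the first-boot run list exactly as the user data script does
FIRST_BOOT="{\"run_list\":[\"role[$ROLE_TAG_VALUE]\"]}"
echo "$FIRST_BOOT"   # prints {"run_list":["role[webserver]"]}
```

Because `awk 'NF>1{print $NF}'` simply takes the last whitespace-separated field, this works regardless of column spacing, but it assumes exactly one matching tag line is returned.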
Figure 6: Example of an End-to-End Workflow (instance launch request; user data retrieved and processed via the metadata service; private key and client.rb downloaded from the Amazon S3 bucket; server role retrieved from the EC2 API via describe-tags and used to configure first_boot.json with the matching Chef role; configuration pulled from the Chef server and applied; bootstrap complete)

Using AWS Services to Help Manage Your Environments

In the preceding sections, we discussed tools and techniques that systems administrators and developers can use to provision EC2 instances in an automated, predictable, and repeatable manner. AWS also provides a range of application management services that help make this process simpler and more productive. The following figure shows how to select the right service for your application, based on the level of control that you require.

Figure 7: AWS Deployment and Management Services

In addition to provisioning EC2 instances, these services can also help you to provision any other associated AWS components that you need in your systems, such as Auto Scaling groups, load balancers, and networking components. We provide more information about how to use these services in the following sections.

AWS Elastic Beanstalk

AWS Elastic Beanstalk allows web developers to easily upload code without worrying about managing or implementing any underlying infrastructure components. Elastic Beanstalk takes care of deployment, capacity provisioning, load balancing, auto scaling, and application health monitoring. It is worth noting that Elastic Beanstalk is not a black box service: you have full visibility and control of the underlying AWS resources that are deployed, such as EC2 instances and load balancers.

Elastic Beanstalk supports deployment of Java, .NET, Ruby, PHP, Python, Node.js, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. Elastic Beanstalk provides a default configuration, but you can extend the configuration as needed. For example, you might want to install additional packages from a yum repository, or copy configuration files that your application depends on, such as a replacement for httpd.conf to override specific settings.

You can write the configuration files in YAML or JSON format and create the files with a .config file extension. You then place the files in a folder in the application root named .ebextensions. You can use configuration files to manage packages and services, work with files, and execute commands. For more information about using and extending Elastic Beanstalk, see the AWS Elastic Beanstalk Documentation[6].

AWS OpsWorks

AWS OpsWorks is an application management service that makes it easy to deploy and manage any application and its required AWS resources. With AWS OpsWorks, you build application stacks that consist of one or many layers. You configure a layer by using an AWS OpsWorks configuration, a custom configuration, or a mix of both.

AWS OpsWorks uses Chef, the open source configuration management tool, to configure AWS resources. This gives you the ability to provide your own custom or community Chef recipes. AWS OpsWorks features a set of lifecycle events (Setup, Configure, Deploy, Undeploy, and Shutdown) that automatically run the appropriate recipes at the appropriate time on each instance.

AWS OpsWorks provides some AWS managed layers for typical application stacks. These layers are open and customizable, which means that you can add additional custom recipes to the layers provided by AWS OpsWorks, or create custom layers from scratch using your existing recipes. It is important to ensure that the correct
recipes are associated with the correct lifecycle events. Lifecycle events run during the following times:

• Setup – Occurs on a new instance after it successfully boots.
• Configure – Occurs on all of the stack's instances when an instance enters or leaves the online state.
• Deploy – Occurs when you deploy an app.
• Undeploy – Occurs when you delete an app.
• Shutdown – Occurs when you stop an instance.

For example, the Configure event is useful when building distributed systems, or for any system that needs to be aware of when new instances are added or removed from the stack. You could use this event to update a load balancer when new web servers are added to the stack.

In addition to typical server configuration, AWS OpsWorks manages application deployment and integrates with your application's code repository. This allows you to track application versions and roll back deployments if needed. For more information about AWS OpsWorks, see the AWS OpsWorks Documentation[7].

[6] http://aws.amazon.com/documentation/elastic-beanstalk/

AWS CloudFormation

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. Compared to Elastic Beanstalk and AWS OpsWorks, AWS CloudFormation gives you the most control and flexibility when provisioning resources. AWS CloudFormation allows you to manage a broad set of AWS resources. For the purposes of this whitepaper, we focus on the features that you can use to bootstrap your EC2 instances.

User Data

Earlier in this whitepaper, we described the process of using user data to configure and customize your EC2 instances (see Scripting Your Own Solution). You can also include user data in an AWS CloudFormation template, which is executed on the instance once it is created. You can include user data when specifying a
single EC2 instance as well as when specifying a launch configuration. The following example shows a launch configuration that provisions instances configured to be PHP web servers:

"MyLaunchConfig" : {
  "Type" : "AWS::AutoScaling::LaunchConfiguration",
  "Properties" : {
    "ImageId" : "ami-123456",
    "SecurityGroups" : [ "MySecurityGroup" ],
    "InstanceType" : "m3.medium",
    "KeyName" : "MyKey",
    "UserData" : { "Fn::Base64" : { "Fn::Join" : [ "", [
      "#!/bin/bash\n",
      "yum update -y\n",
      "yum -y install httpd php php-mysql\n",
      "chkconfig httpd on\n",
      "/etc/init.d/httpd start\n"
    ] ] } }
  }
}

[7] http://aws.amazon.com/documentation/opsworks/

cfn-init

The cfn-init script is an AWS CloudFormation helper script that you can use to specify the end state of an EC2 instance in a more declarative manner. The cfn-init script is installed by default on Amazon Linux and AWS-supplied Windows AMIs. Administrators can also install cfn-init on other Linux distributions and then include it in their own AMIs if needed. The cfn-init script parses metadata from the AWS CloudFormation template and uses the metadata to customize the instance accordingly. The cfn-init script can do the following:

• Install packages from package repositories (such as yum and apt-get)
• Download and unpack archives, such as zip and tar files
• Write files to disk
• Execute arbitrary commands
• Create users and groups
• Enable/disable and start/stop services

In an AWS CloudFormation template, the cfn-init helper script is called from the user data. Once it is called, it inspects the metadata associated with the resource passed into the request and then acts accordingly. For example, you can use the following launch configuration metadata to instruct cfn-init to configure an EC2 instance to become a PHP web server (similar to the preceding user data example):

"MyLaunchConfig" : {
  "Type" : "AWS::AutoScaling::LaunchConfiguration",
  "Metadata" : {
    "AWS::CloudFormation::Init" : {
      "config" : {
        "packages" : {
          "yum" : {
            "httpd" : [],
            "php" : [],
            "php-mysql" : []
          }
        },
        "services" : {
          "sysvinit" : {
            "httpd" : {
              "enabled" : "true",
              "ensureRunning" : "true"
            }
          }
        }
      }
    }
  },
  "Properties" : {
    "ImageId" : "ami-123456",
    "SecurityGroups" : [ "MySecurityGroup" ],
    "InstanceType" : "m3.medium",
    "KeyName" : "MyKey",
    "UserData" : { "Fn::Base64" : { "Fn::Join" : [ "", [
      "#!/bin/bash\n",
      "yum update -y aws-cfn-bootstrap\n",
      "/opt/aws/bin/cfn-init --stack ", { "Ref" : "AWS::StackId" },
      " --resource MyLaunchConfig",
      " --region ", { "Ref" : "AWS::Region" }, "\n"
    ] ] } }
  }
}

For a detailed walkthrough of bootstrapping EC2 instances by using AWS CloudFormation and its related helper scripts, see the Bootstrapping Applications via AWS CloudFormation whitepaper.[8]

Using the Services Together

You can use the services separately to help you provision new infrastructure components, but you can also combine them to create a single solution. This approach has clear advantages. For example, you can model an entire architecture, including networking and database configurations, directly in an AWS CloudFormation template, and then deploy and manage your application by using AWS Elastic Beanstalk or AWS OpsWorks. This approach unifies resource and application management, making it easier to apply version control to your entire architecture.

[8] https://s3.amazonaws.com/cloudformation-examples/BoostrappingApplicationsWithAWSCloudFormation.pdf

Managing Application and Instance State

After you implement a suitable process to automatically provision new infrastructure components, your system will have the capability to create new EC2 instances, and even entire new environments, in
a quick, repeatable, and predictable manner. However, in a dynamic cloud environment, you will also need to consider how to remove EC2 instances from your environments, and what impact this might have on the service that you provide to your users. There are a number of reasons why an instance might be removed from your system:

• The instance is terminated as a result of a hardware or software failure.
• The instance is terminated in response to a "scale down" event that removes instances from an Auto Scaling group.
• The instance is terminated because you've deployed a new version of your software stack by using blue-green deployments (instances running the older version of the application are terminated after the deployment).

To handle the removal of instances without impacting your service, you need to ensure that your application instances are stateless. This means that all system and application state is stored and managed outside of the instances themselves. There are many forms of system and application state that you need to consider when designing your system, as shown in the following table:

State                            Examples
Structured application data      Customer orders
Unstructured application data    Images and documents
User session data                Position in the app; contents of a shopping cart
Application and system logs      Access logs; security audit logs
Application and system metrics   CPU load; network utilization

Running stateless application instances means that no instance in a fleet is any different from its counterparts. This offers a number of advantages:

• Providing a robust service – Instances can serve any request from any user at any time. If an instance fails, subsequent requests can be routed to alternative instances while the failed instance is replaced. This can be achieved with no interruption to service for any of your users.
• Quicker, less complicated bootstrapping – Because your instances don't contain any dynamic state, your bootstrapping process needs to concern itself only with
provisioning your system up to the application layer. There is no need to try to recover state and data, which is often large and therefore can significantly increase bootstrapping times.
• EC2 instances as a unit of deployment – Because all state is maintained off of the EC2 instances themselves, you can replace instances while orchestrating application deployments. This can simplify your deployment processes and allow new deployment techniques, such as blue-green deployments.

The following section describes each form of application and instance state and outlines some of the tools and techniques that you can use to ensure it is stored separately and independently from the application instances themselves.

Structured Application Data

Most applications produce structured, textual data, such as customer orders in an order management system or a list of web pages in a CMS. In most cases, this kind of content is best stored in a database. Depending on the structure of the data and the requirements for access speed and concurrency, you might decide to use a relational database management system or a NoSQL data store. In either case, it is important to store this content in a durable, highly available system away from the instances running your application. This will ensure that the service you provide your users will not be interrupted, nor their data lost, even in the event of an instance failure. AWS offers both relational and NoSQL managed databases that you can use as a persistence layer for your applications. We discuss these database options in the following sections.

Amazon RDS

Amazon Relational Database Service (Amazon RDS) is a web service that makes it easy to set up, operate, and scale a relational database in the cloud. It allows you to continue to work with the relational database engines you're familiar with, including MySQL, Oracle, Microsoft SQL Server, and PostgreSQL. This
means that the code, applications, and operational tools that you are already using can be used with Amazon RDS. Amazon RDS also handles time-consuming database management tasks, such as data backups, recovery, and patch management, which frees your database administrators to pursue higher-value application development or database refinements. In addition, Amazon RDS Multi-AZ deployments increase your database availability and protect your database against unplanned outages. This gives your service an additional level of resiliency.

Amazon DynamoDB

Amazon DynamoDB is a fully managed NoSQL database service offering both document (JSON) and key-value data models. DynamoDB has been designed to provide consistent, single-digit millisecond latency at any scale, making it ideal for high-traffic applications with a requirement for low-latency data access. DynamoDB manages the scaling and partitioning of infrastructure on your behalf. When you create a table, you specify how much request capacity you require. If your throughput requirements change, you can update this capacity as needed with no impact on service.

Unstructured Application Data

In addition to the structured data created by most applications, some systems also have a requirement to receive and store unstructured resources, such as documents, images, and other binary data. For example, this might be the case in a CMS where an editor uploads images and PDFs to be hosted on a website. In most cases, a database is not a suitable storage mechanism for this type of content. Instead, you can use Amazon Simple Storage Service (Amazon S3). Amazon S3 provides a highly available and durable object store that is well suited to storing this kind of data. Once your data is stored in Amazon S3, you have the option of serving these files directly from Amazon S3 to your end users over HTTP(S), bypassing the need for these requests to go to your
application instances.

User Session Data

Many applications produce information associated with a user's current position within an application. For example, as users browse an e-commerce site, they might start to add various items to their shopping basket. This information is known as session state. It would be frustrating to users if the items in their baskets disappeared without notice, so it's important to store the session state away from the application instances themselves. This ensures that baskets remain populated even if users' requests are directed to an alternative instance behind your load balancer, or if the current instance is removed from service for any reason. The AWS platform offers a number of services that you can use to provide a highly available session store.

Amazon ElastiCache

Amazon ElastiCache makes it easy to deploy, operate, and scale an in-memory data store in AWS. In-memory data stores are ideal for storing transient session data due to the low latency these technologies offer. ElastiCache supports two open source in-memory caching engines:

• Memcached – A widely adopted memory object caching system. ElastiCache is protocol compliant with Memcached, which is already supported by many open source applications as an in-memory session storage platform.
• Redis – A popular open source in-memory key-value store that supports data structures such as sorted sets and lists. ElastiCache supports master/slave replication and Multi-AZ, which you can use to achieve cross-AZ redundancy.

In addition to the in-memory data stores offered by Memcached and Redis on ElastiCache, some applications require a more durable storage platform for their session data. For these applications, Amazon DynamoDB offers a low-latency, highly scalable, and durable solution. DynamoDB replicates data across three facilities in an AWS Region to provide fault tolerance in the event of a
server failure or Availability Zone outage. To help customers easily integrate DynamoDB as a session store within their applications, AWS provides pre-built DynamoDB session handlers for both Tomcat-based Java applications[9] and PHP applications.[10]

System Metrics

To properly support a production system, operational teams need access to system metrics that indicate the overall health of the system and the relative load under which it's currently operating. In a traditional environment, this information is often obtained by logging into one of the instances and looking at OS-level metrics, such as system load or CPU utilization. However, in an environment where you have multiple instances running, and these instances can appear and disappear at any moment, this approach soon becomes ineffective and difficult to manage. Instead, you should push this data to an external monitoring system for collection and analysis.

Amazon CloudWatch

Amazon CloudWatch is a fully managed monitoring service for AWS resources and the applications that you run on top of them. You can use Amazon CloudWatch to collect and store metrics on a durable platform that is separate and independent from your own infrastructure. This means that the metrics will be available to your operational teams even when the instances themselves have been terminated. In addition to tracking metrics, you can use Amazon CloudWatch to trigger alarms when metrics pass certain thresholds. You can use the alarms to notify your teams and to initiate further automated actions to deal with issues and bring your system back within its normal operating tolerances. For example, an automated action could initiate an Auto Scaling policy to increase or decrease the number of instances in an Auto Scaling group.

[9] http://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/java-dg-tomcat-session-manager.html
[10] http://docs.aws.amazon.com/aws-sdk-php/guide/latest/feature-dynamodb-session-handler.html

By default, Amazon CloudWatch can monitor a broad range of metrics across your AWS resources. That said, it is also important to remember that AWS doesn't have access to the OS or applications running on your EC2 instances. Because of this, Amazon CloudWatch cannot automatically monitor metrics that are accessible only within the OS, such as memory and disk volume utilization. If you want to monitor OS and application metrics by using Amazon CloudWatch, you can publish your own metrics to CloudWatch through a simple API request. With this approach, you can manage these metrics in the same way that you manage other native metrics, including configuring alarms and associated actions. You can use the EC2Config service[11] to push additional OS-level operating metrics into CloudWatch without the need to manually code against the CloudWatch APIs. If you are running Linux AMIs, you can use the set of sample Perl scripts[12] provided by AWS that demonstrate how to produce and consume Amazon CloudWatch custom metrics. In addition to CloudWatch, you can use third-party monitoring solutions in AWS to extend your monitoring capabilities.

Log Management

Log data is used by your operational team to better understand how the system is performing and to diagnose any issues that might arise. Log data can be produced by the application itself, but also by system components lower down in your stack. This might include anything from access logs produced by your web server to security audit logs produced by the operating system itself. Your operations team needs reliable and timely access to these logs at all times, regardless of whether the instance that originally produced the log still exists. For this reason, it's important to move log data from the instance to a more durable storage platform as close to real time as possible.

Amazon CloudWatch Logs

Amazon CloudWatch Logs is a service that allows you to quickly and easily move your
system and application logs from the EC2 instances themselves to a centralized, durable storage platform (Amazon S3). This ensures that this data is available even when the instance itself has been terminated. You also have control over the log retention policy to ensure that all logs are retained for a specified period of time. The CloudWatch Logs service provides a log management agent that you can install onto your EC2 instances to manage the ingestion of your logs into the log management service.

[11] http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/UsingConfig_WinAMI.html
[12] http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/mon-scripts-perl.html

In addition to moving your logs to durable storage, the CloudWatch Logs service also allows you to monitor your logs in near real time for specific phrases, values, or patterns (metrics). You can use these metrics in the same way as any other CloudWatch metrics. For example, you can create a CloudWatch alarm on the number of errors being thrown by your application, or when certain suspect actions are detected in your security audit logs.

Conclusion

This whitepaper showed you how to accomplish the following:

• Quickly provision new infrastructure components in an automated, repeatable, and predictable manner.
• Ensure that no EC2 instance in your environment is unique, and that all instances are stateless and therefore easily replaced.

Having these capabilities in place allows you to think differently about how you provision and manage infrastructure components when compared to traditional environments. Instead of manually building each instance and maintaining consistency through a set of operational checks and balances, you can treat your infrastructure as if it were software. By specifying the desired end state of your infrastructure through the software-based tools and processes described in this whitepaper, you can
fundamentally change the way your infrastructure is managed, and you can take full advantage of the dynamic, elastic, and automated nature of the AWS cloud.

Further Reading

• AWS Elastic Beanstalk Documentation
• AWS OpsWorks Documentation
• Bootstrapping Applications via AWS CloudFormation whitepaper
• Using Chef with AWS CloudFormation
• Integrating AWS CloudFormation with Puppet

Maximizing Value with AWS: Achieve Total Cost of Operation Benefits Using Cloud Computing, February 2017

This paper has been archived. For the latest technical content about the AWS Cloud, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

• Introduction
• Create a Culture of Cost Management
• Driving Cost Optimization
• Total Cost of Operation
• Start with an Understanding of Current Costs
• Total Cost of Migration
• Select the Right Plan for Specific Workloads
• Employ Best Practices
• Determine Top-Line Business Metrics
• Stay on Top of Instance Utilization
• Distribute Daily Spending Updates
• Every Engineer Can Be a Cost Engineer
• Build Automation into Services
• Implement a Reservation Process
• Conclusion
• Contributors

Abstract

Amazon Web Services (AWS) provides rapid access to flexible and low-cost IT resources. With cloud computing, public sector organizations no longer need to make large upfront investments in hardware or spend time and money on managing infrastructure. The goal of this whitepaper is to help you gain insight into some of the financial considerations of operating a cloud IT environment and learn how to maximize the overall value of your decision to adopt AWS.

Introduction

A core reason organizations adopt a cloud IT infrastructure is to save money. The traditional approach of analyzing Total Cost of Ownership no longer applies when you move to the cloud. Cloud services provide the opportunity for you to use only what you need and pay only for what you use. We refer to this new paradigm as the Total Cost of Operation. You can use Total Cost of Operation (TCO) analysis methodologies to compare the costs of owning a traditional data center with the costs of operating your environment using AWS Cloud services.

Eliminate Upfront Sunk Costs

Organizations considering a transition to the cloud are often driven by their need to become more agile and innovative. The traditional capital expenditure (CapEx) funding model makes it difficult to quickly test new ideas. The AWS Cloud model gives you the agility to quickly spin up new instances on AWS and the ability to try out new services without investing in large upfront sunk costs (costs that have already been incurred and can't be recovered). If you are using the cloud, you can return CapEx to the general fund and invest in activities that better serve your constituents. AWS helps lower customer costs through its "pay only for what you use" pricing model. To get started, it is critical to understand how to measure value, improve the economics of a migration project,
manage migration costs and expectations through large-scale IT transformations, and optimize the cost of operation.

Launch an Amazon EC2 Instance for Free

The AWS Free Tier lets you gain free, hands-on experience with AWS products and services. AWS Free Tier includes 750 hours of Linux and Windows t2.micro instances each month for one year. To stay within the Free Tier, use only EC2 Micro instances. View AWS Free Tier Details »

Create a Culture of Cost Management

All teams can help manage costs, and cost optimization should be everyone's responsibility. There are many variables that affect cost, with different levers that can be pulled to drive operational excellence. By using resources like the AWS Trusted Advisor dashboard and the AWS Billing Cost Explorer tool, you can get real-time feedback on costs and usage that puts you on the road to operational excellence.

• Put data in the hands of everyone – This reduces the feedback loop between the information/data and the action that is required to correct usage and sizing issues.
• Enact policies and evangelize – Define and implement best practices to drive operational excellence.
• Spend time training – Educate staff on the items that affect cost and the steps they can take to eliminate waste.
• Create incentives for good behavior – Have friendly competitions across teams to encourage cost efficiencies throughout the organization.

To achieve true success, cost optimization must become a cultural norm in your organization. Get everyone involved. Encourage everyone to track their cost optimization daily so they can establish a habit of efficiency and see the daily impact over time of their cost savings. Although everyone shares the ownership of cost optimization, someone should be tasked with cost optimization as a primary responsibility. Typically, this is someone from either the finance or IT department who is responsible for ensuring that cost controls are monitored so that
business goals can be met. The "cost optimization engineer" makes sure that the organization is positioned to derive optimal value out of the decision to adopt AWS.

Driving Cost Optimization

By moving to the consumption-based model of the cloud, you can increase innovation within the organization. However, one of the biggest challenges of the consumption-based model is the lack of predictability. You need to balance agility and innovation against cost. As multiple teams spin up instances to test new ideas, it is important to control and optimize AWS spending as cloud usage increases. Don't target cost savings as the end goal. Instead, optimize spending by focusing on business growth opportunities that can result from innovative ideas. The following table contrasts the traditional funding model against the cloud funding model.

Funding Model: Traditional Data Center
• A few big purchase decisions are made by a few people every few years.
• Typically overprovisioned as a result of planning up front for spikes in usage.

Funding Model: Cloud
• Decentralized spending power; small decisions made by a lot of people.
• Resources are spun up and down as new services are designed and then decommissioned.
• Cost ramifications felt by the organization as a whole are closely monitored and tracked.

Give stakeholders access to your spending fundamentals. The data is there. Share it. By using dashboards, you can quickly highlight spending habits across your teams.

• Actively manage workloads – Turn services on and off as needed, rather than running them 24/7.
• Eliminate surprises – Provide visibility into costs by making dashboard review a daily habit.
• Make cost optimization a joint effort – Have "spenders" (those spinning up resources) work closely with "watchers" (finance and leadership who can track to business goals).
• Allocate charges (or show departmental usage) to organizations actually using services – This provides insight into each
group's impact on business goals.
• Savings – Know who uses services and how they use services. To select the best rate, evaluate the pricing options that best meet the workload.
• Tie spending to business metrics – Determine what gets measured, track usage, and identify areas for improvement.
• Use innovative approaches to optimize spend – Consider policies such as "default off" for test and dev environments, as opposed to 24/7 or even "on during business hours."

Total Cost of Operation

A pay-as-you-go model reduces investments in large capital expenditures. In addition, you can reduce the operating expense (OpEx) costs involved with the management and maintenance of data. This frees up budget, allowing you to quickly act on innovative initiatives that can't be easily pursued when managing CapEx. A clear understanding of your current costs is an important first step of a cloud migration journey. This provides a baseline for defining the migration model that delivers optimal cost efficiency. Our online total cost of ownership calculators allow you to estimate cost savings when using AWS. These calculators provide a detailed set of reports that you can use in executive presentations. The calculators also give you the option to modify assumptions so you can best meet your business needs. Ready to find out how much you could be saving in the AWS Cloud? Take a look at the AWS Total Cost of Ownership Calculator.

Start with an Understanding of Current Costs

Evaluate the following when calculating your on-premises computing costs:

• Labor – How much do you spend on maintaining your environment?
• Network – How much bandwidth do you need? What is your bandwidth peak-to-average ratio? What are you assuming for network gear? What if you need to scale beyond a single rack?
• Capacity – How do you plan for capacity? What is the cost of overprovisioning for peak capacity? What if you need less capacity? Anticipating next year?
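A checklist like this (which continues with the remaining categories) can be roughed into a simple annual baseline for TCO comparison. The following is a minimal, hypothetical sketch; the category names and every dollar figure are invented examples for illustration, not AWS data or guidance:

```python
# Hypothetical sketch: rough annual on-premises cost baseline built from the
# cost categories in this checklist. All figures are invented examples.

def annual_on_prem_cost(costs):
    """Sum the per-category annual costs into a single baseline figure."""
    return sum(costs.values())

baseline = {
    "labor": 250_000,     # staff time spent maintaining the environment
    "network": 40_000,    # bandwidth and network gear
    "capacity": 80_000,   # overprovisioning for peak load
    "power": 60_000,      # utility, cooling/HVAC, redundant power
    "servers": 120_000,   # hardware refresh, amortized per year
    "space": 50_000,      # data center lease
}

print(annual_on_prem_cost(baseline))  # 600000
```

A baseline like this is only a starting point; the online TCO calculators mentioned above model these categories in far more detail, but even a back-of-the-envelope total gives you a number to compare against a projected AWS bill.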
• Availability/Power – Do you have a disaster recovery (DR) facility? What was your power utility bill for your data centers last year? Have you budgeted for both average and peak power requirements? Do you have separate costs for cooling/HVAC? Are you accounting for 2N (parallel redundancy) power? If not, what happens when you have a power issue in your rack?
• Servers – What is your average server utilization? How much do you overprovision for peak load? What is the cost of overprovisioning?
• Space – Will you run out of data center space? When is your lease up?

Total Cost of Migration

To achieve the maximum benefits of the AWS Cloud, it is important to understand and plan for the financial costs associated with migrating workloads to AWS. While there isn't yet a simple calculation for the total cost of migration (TCM), it is possible to estimate the cost and duration of the migration phase based on the experiences of others. Some of the inputs for TCM include the following:

• IT staff will need to acquire new skills.
• New business processes will need to be defined.
• Existing business processes will need to be modified.
• The cost of discovery and migration tooling needs to be calculated.
• Duplicate environments will need to run until one is decommissioned.
• Penalties could be incurred for breaking data center colocation or licensing agreements.

AWS uses the term migration bubble to describe the time and cost of moving applications and infrastructure from on-premises data centers to the AWS Cloud. Although the cloud can provide significant savings, certain costs may increase as you move into the migration bubble. It is important to understand the costs associated with migration so that you can work to shrink the size of the migration bubble and accomplish the migration in a quick and sustainable manner.

Figure 1: Migration bubble

To realize cost savings, it
is important to plan your migration to coincide with hardware retirement, license and maintenance expiration, and other opportunities to be frugal with your resources. In addition, the savings and cost avoidance associated with a full all-in migration to AWS can help you fund the migration bubble. You can even shorten the duration of the migration by applying more resources when appropriate. For more information, read the AWS Cloud Adoption Framework whitepaper.

Select the Right Plan for Specific Workloads

Depending on your needs, you can choose among three different ways to pay for Amazon Elastic Compute Cloud (EC2) instances: On-Demand, Reserved Instances, and Spot Instances. You can also pay for Dedicated Hosts, which provide you with EC2 instance capacity on physical servers dedicated for your use.

On-Demand Instances – Pay for compute capacity by the hour, with no long-term commitments or upfront payments. Increase or decrease compute capacity depending on the demands of your application, and pay only the specified hourly rate for the instances you use. Recommended for: users who want the low cost and flexibility of Amazon EC2 without any upfront payment or long-term commitment; applications with short-term, spiky, or unpredictable workloads that cannot be interrupted; applications being developed on AWS for the first time.

Reserved Instances – Can provide significant savings compared to using On-Demand Instances. A sunk cost, but the longer-term commitment delivers a lower hourly rate. Recommended for: applications that have been in use for years and that you plan to continue to use; applications with steady-state or predictable usage; applications that require reserved capacity; users who want to make upfront payments to further reduce their total computing costs.

Spot Instances – Provide the ability to purchase compute capacity with no upfront commitment and lower hourly rates, and allow you to specify the maximum hourly price that you are willing to pay to run a particular instance type. Recommended for: applications that have flexible start and end times; applications that are only feasible at very low compute prices; users with urgent computing needs for large amounts of additional capacity.

Dedicated Hosts – Physical EC2 servers with instance capacity fully dedicated for your use. Help reduce costs by using existing server-bound software licenses, and can provide up to a 70% discount compared to the On-Demand price. Recommended for: users who want to save money by using their own per-socket or per-core software in Amazon EC2; users who deploy instances using configurations that help address corporate compliance and regulatory requirements.

Learn more about Amazon EC2 Instance Purchasing Options.

Employ Best Practices

As your organization transitions to the cloud and you pilot new cloud initiatives, be careful to avoid common pitfalls. The best practices presented below can help you.

Determine Top-Line Business Metrics

To fully benefit from the cloud, it is important to map business goals to specific metrics so that you can evaluate where changes need to be made. Define the metrics that provide the most useful data to track your service, such as user, subscriber, customer access, API calls, and page views. Dashboards are an excellent source of information and provide instant feedback on how services are delivering against specific goals.

Stay on Top of Instance Utilization

Oversight is an excellent practice to make sure that you are not overspending. Monitoring tools provide visibility, control, and optimization. Post-DevOps, use dashboards to monitor how services are used, as well as your current spending profile. If your monthly bill goes up, make sure it is for the right reason (business growth) and not the wrong reason (waste).

• Choose a cadence and regularly measure results for services that have moved to the cloud.
• Use tools that track
Employ Best Practices

As your organization transitions to the cloud and you pilot new cloud initiatives, be careful to avoid common pitfalls. The best practices presented below can help you.

Determine Top-Line Business Metrics

To fully benefit from the cloud, it is important to map business goals to specific metrics so that you can evaluate where changes need to be made. Define the metrics that provide the most useful data to track your service, such as users, subscribers, customers, access, API calls, and page views. Dashboards are an excellent source of information and provide instant feedback on how services are delivering against specific goals.

Stay on Top of Instance Utilization

Oversight is an excellent practice to make sure that you are not overspending. Monitoring tools provide visibility, control, and optimization. Post-DevOps, use dashboards to monitor how services are used as well as your current spending profile. If your monthly bill goes up, make sure it is for the right reason (business growth) and not the wrong reason (waste).
• Choose a cadence and regularly measure results for services that have moved to the cloud.
• Use tools that track performance and usage to reduce cost overruns. It only takes five minutes to resize, up or down, to ensure that the service is providing the desired performance level.
• Keep track of running instances. Optimize the size of servers and adjust as needed rather than overprovisioning from the start.
• If an instance is underutilized, determine if you still need the instance, if it can be shut down, or if it needs to be resized.
• As AWS introduces new technology, find and then upgrade your legacy instances so that you can lower costs. This can provide substantial savings over time.

Distribute Daily Spending Updates

Make usage reviews a daily habit for all team members. Provide weekly reporting to elevate visibility and drive accountability across large, complex organizations. Have teams review the bills associated with their projects to identify ways to optimize for costs during dev/test as well as production. And to create an atmosphere of friendly competition, create a leaderboard that highlights the teams with the best cost efficiencies.

Every Engineer Can Be a Cost Engineer

Engineers should design code so that instances only spin up when needed and spin down when not in use. There is no need to have AWS services running 24/7 if they are only used during standard work hours. Turn off underutilized instances that you discover using dashboards and reports.
• Innovate. Spin up instances to test new ideas. If the ideas work, keep the instance for further refinement. If not, spin it down.
• Build sizing into architecture. Use tagging to help with cost allocation. Tagging allows you to track the users of particular instances, optimize usage, and bill back or show charges by department or user.
• Schedule dev/test. Eliminate waste of resources not in use.
• Eliminate waste. Default = Off is a good best practice.
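The tagging practice above is what makes show-back and charge-back reporting possible: once every instance carries a cost-allocation tag, charges can be rolled up per owner. A minimal sketch, with invented instance IDs, tag keys, and costs:

```python
# Sketch: rolling up instance costs by a cost-allocation tag (here a
# hypothetical "department" tag) for show-back/charge-back reporting.
# Instance IDs, tags, and monthly costs are invented examples.
from collections import defaultdict

instances = [
    {"id": "i-0001", "tags": {"department": "marketing"}, "monthly_cost": 120.0},
    {"id": "i-0002", "tags": {"department": "analytics"}, "monthly_cost": 310.0},
    {"id": "i-0003", "tags": {"department": "marketing"}, "monthly_cost": 80.0},
    {"id": "i-0004", "tags": {}, "monthly_cost": 45.0},  # untagged: needs follow-up
]

def charges_by_tag(instances, tag_key="department"):
    """Aggregate monthly cost per tag value; untagged spend is surfaced too."""
    totals = defaultdict(float)
    for inst in instances:
        owner = inst["tags"].get(tag_key, "UNTAGGED")
        totals[owner] += inst["monthly_cost"]
    return dict(totals)

print(charges_by_tag(instances))
# marketing: 200.0, analytics: 310.0, UNTAGGED: 45.0
```

Surfacing an explicit UNTAGGED bucket is deliberate: unattributed spend is usually the first thing a cost review has to chase down.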
Build Automation into Services

Automation can accelerate the migration process.
• Automate processes so that they turn off when not in use to eliminate waste.
• Automate alerts to show when thresholds have been exceeded.
• Use configuration management. With automation, every machine defined in code spins up or down as needed to drive performance and cost optimization.
• Set alerts on old snapshots, oversized resources, and unattached volumes, and then automate and rebalance for optimal sizing.
• Eliminate troubleshooting. If an instance goes down, spin up a new one. Stop wasting time on unproductive activities.

Implement a Reservation Process

Appoint someone to own the reservation process (typically a finance person). Buy on a regular schedule, but continually track usage and modify reservations as needed. This can result in big savings over time. See How to Purchase Reserved Instances for more information.

Conclusion

Moving business applications to the AWS Cloud helps organizations simplify infrastructure management, deploy new services faster, provide greater availability, and lower costs. Having a clear understanding of your existing infrastructure and migration costs, and then projecting your savings, will help you calculate payback time, project ROI, and maximize the value your organization gains from migrating to AWS. AWS delivers a mature set of services specifically designed for the unique security, compliance, privacy, and governance requirements of large organizations. With a technology platform that is both broad and deep, professional services and support organizations, robust training programs, and an ecosystem that is tens of thousands of partners strong, AWS can help you move faster and do more.

Contributors

The following individuals and organizations contributed to this document:
• Blake Chism, Practice Manager, AWS Public Sector Sales
• Carina Veksler, Public Sector Solutions, AWS Public Sector Sales

Implementing Microservices on AWS

First Published December 1, 2016
Updated November 9, 2021
This version has been archived. For the latest version of this document, refer to https://docs.aws.amazon.com/whitepapers/latest/microservices-on-aws/microservices-on-aws.pdf

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
Microservices architecture on AWS
  User interface
  Microservices
  Data store
  Reducing operational complexity
  API implementation
  Serverless microservices
  Disaster recovery
  Deploying Lambda-based applications
Distributed systems components
  Service discovery
  Distributed data management
  Configuration management
  Asynchronous communication and lightweight messaging
  Distributed monitoring
  Chattiness
  Auditing
Resources
Conclusion
Document Revisions
Contributors

Abstract

Microservices are an architectural and organizational approach to software development created to speed up deployment cycles, foster innovation and ownership, improve maintainability and scalability of software applications, and scale organizations delivering software and services by using an agile approach that helps teams work independently. With a microservices approach, software is composed of small services that communicate over well-defined application programming interfaces (APIs) that
can be deployed independently. These services are owned by small, autonomous teams. This agile approach is key to successfully scaling your organization.

Three common patterns have been observed when AWS customers build microservices: API driven, event driven, and data streaming. This whitepaper introduces all three approaches, summarizes the common characteristics of microservices, discusses the main challenges of building microservices, and describes how product teams can use Amazon Web Services (AWS) to overcome these challenges.

Due to the rather involved nature of various topics discussed in this whitepaper, including data stores, asynchronous communication, and service discovery, the reader is encouraged to consider the specific requirements and use cases of their applications, in addition to the provided guidance, prior to making architectural choices.

Introduction

Microservices architectures are not a completely new approach to software engineering, but rather a combination of various successful and proven concepts such as:
• Agile software development
• Service-oriented architectures
• API-first design
• Continuous integration/continuous delivery (CI/CD)

In many cases, design patterns of the Twelve-Factor App are used for microservices. This whitepaper first describes different aspects of a highly scalable, fault-tolerant microservices architecture (user interface, microservices implementation, and data store) and how to build it on AWS using container technologies. It then recommends the AWS services for implementing a typical serverless microservices architecture in order to reduce operational complexity.

Serverless is defined as an operational model by the following tenets:
• No infrastructure to provision or manage
• Automatic scaling by unit of consumption
• Pay-for-value billing model
• Built-in availability and fault tolerance

Finally, this whitepaper covers the overall system and discusses the cross-service
aspects of a microservices architecture, such as distributed monitoring and auditing, data consistency, and asynchronous communication. This whitepaper only focuses on workloads running in the AWS Cloud. It doesn’t cover hybrid scenarios or migration strategies. For more information about migration, refer to the Container Migration Methodology whitepaper.

Microservices architecture on AWS

Typical monolithic applications are built using different layers: a user interface (UI) layer, a business layer, and a persistence layer. A central idea of a microservices architecture is to split functionalities into cohesive verticals, not by technological layers, but by implementing a specific domain. The following figure depicts a reference architecture for a typical microservices application on AWS.

Typical microservices application on AWS

User interface

Modern web applications often use JavaScript frameworks to implement a single-page application that communicates with a representational state transfer (REST) or RESTful API. Static web content can be served using Amazon Simple Storage Service (Amazon S3) and Amazon CloudFront. Because clients of a microservice are served from the closest edge location and get responses either from a cache or a proxy server with optimized connections to the origin, latencies can be significantly reduced. However, microservices running close to each other don’t benefit from a content delivery network. In some cases, this approach might actually add additional latency. A best practice is to implement other caching mechanisms to reduce chattiness and minimize latencies. For more information, refer to the Chattiness topic.

Microservices

APIs are the front door of microservices, which means that APIs serve as the entry point for application logic behind a set of programmatic interfaces, typically a RESTful web services API. This API accepts
and processes calls from clients, and it might implement functionality such as traffic management, request filtering, routing, caching, authentication, and authorization.

Microservices implementation

AWS has integrated building blocks that support the development of microservices. Two popular approaches are using AWS Lambda and Docker containers with AWS Fargate.

With AWS Lambda, you upload your code and let Lambda take care of everything required to run and scale the implementation to meet your actual demand curve with high availability. No administration of infrastructure is needed. Lambda supports several programming languages and can be invoked from other AWS services or be called directly from any web or mobile application. One of the biggest advantages of AWS Lambda is that you can move quickly: you can focus on your business logic, because security and scaling are managed by AWS. Lambda’s opinionated approach drives the scalable platform.

A common approach to reduce operational efforts for deployment is container-based deployment. Container technologies like Docker have increased in popularity in the last few years due to benefits like portability, productivity, and efficiency. The learning curve with containers can be steep, and you have to think about security fixes for your Docker images and monitoring. Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS) eliminate the need to install, operate, and scale your own cluster management infrastructure. With API calls, you can launch and stop Docker-enabled applications, query the complete state of your cluster, and access many familiar features like security groups, Load Balancing, Amazon Elastic Block Store (Amazon EBS) volumes, and AWS Identity and Access Management (IAM) roles.
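The Lambda option described earlier in this section amounts to supplying only a handler function; running and scaling it is managed. A minimal sketch, with an invented event shape and business logic (a real function behind Amazon API Gateway would receive a proxy-integration event):

```python
# Sketch: a minimal AWS Lambda-style handler for a microservice endpoint.
# The event fields and response shape are illustrative, not a guaranteed
# AWS contract; the business logic is a hypothetical greeting service.
import json

def handler(event, context=None):
    """Return a greeting for the caller named in the request."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for a quick check -- no infrastructure needed.
response = handler({"queryStringParameters": {"name": "microservice"}})
print(response["statusCode"], response["body"])
```

Because the handler is a plain function, it can be unit-tested locally exactly like this before any deployment.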
AWS Fargate is a serverless compute engine for containers that works with both Amazon ECS and Amazon EKS. With Fargate, you no longer have to worry about provisioning enough compute resources for your container applications. Fargate can launch tens of thousands of containers and easily scale to run your most mission-critical applications.

Amazon ECS supports container placement strategies and constraints to customize how Amazon ECS places and ends tasks. A task placement constraint is a rule that is considered during task placement. You can associate attributes, which are essentially key-value pairs, with your container instances and then use a constraint to place tasks based on these attributes. For example, you can use constraints to place certain microservices based on instance type or instance capability, such as GPU-powered instances.

Amazon EKS runs up-to-date versions of the open-source Kubernetes software, so you can use all the existing plugins and tooling from the Kubernetes community. Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether running in on-premises data centers or public clouds. Amazon EKS integrates IAM with Kubernetes, enabling you to register IAM entities with the native authentication system in Kubernetes. There is no need to manually set up credentials for authenticating with the Kubernetes control plane. The IAM integration enables you to use IAM to directly authenticate with the control plane itself and provide fine-grained access to the public endpoint of your Kubernetes control plane.

Docker images used in Amazon ECS and Amazon EKS can be stored in Amazon Elastic Container Registry (Amazon ECR). Amazon ECR eliminates the need to operate and scale the infrastructure required to power your container registry.

Continuous integration and continuous delivery (CI/CD) are best practices and a vital part of a DevOps initiative that enables rapid software changes while maintaining system stability and security. However, this is out of scope for this whitepaper. For more
information, refer to the Practicing Continuous Integration and Continuous Delivery on AWS whitepaper.

Private links

AWS PrivateLink is a highly available, scalable technology that enables you to privately connect your virtual private cloud (VPC) to supported AWS services, services hosted by other AWS accounts (VPC endpoint services), and supported AWS Marketplace partner services. You do not require an internet gateway, network address translation device, public IP address, AWS Direct Connect connection, or VPN connection to communicate with the service. Traffic between your VPC and the service does not leave the Amazon network.

Private links are a great way to increase the isolation and security of a microservices architecture. A microservice, for example, could be deployed in a totally separate VPC, fronted by a load balancer, and exposed to other microservices through an AWS PrivateLink endpoint. With this setup, using AWS PrivateLink, the network traffic to and from the microservice never traverses the public internet. One use case for such isolation includes regulatory compliance for services handling sensitive data, such as PCI, HIPAA, and EU/US Privacy Shield. Additionally, AWS PrivateLink allows connecting microservices across different accounts and Amazon VPCs with no need for firewall rules, path definitions, or route tables, simplifying network management. Utilizing PrivateLink, software as a service (SaaS) providers and ISVs can offer their microservices-based solutions with complete operational isolation and secure access as well.

Data store

The data store is used to persist data needed by the microservices. Popular stores for session data are in-memory caches such as Memcached or Redis. AWS offers both technologies as part of the managed Amazon ElastiCache service. Putting a cache between application servers and a database is a common mechanism for reducing the read load on the database, which in turn may enable resources to be used to support more writes. Caches can also improve latency.
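The read path just described is usually implemented as the cache-aside pattern: check the cache first, fall back to the database on a miss, then populate the cache. A minimal sketch, with an in-memory dict standing in for a cache such as ElastiCache and a function standing in for the database query (both are placeholders):

```python
# Sketch: cache-aside read path. A dict stands in for a cache such as
# ElastiCache (Redis/Memcached); `load_from_db` stands in for an
# expensive database query. Both are hypothetical stand-ins.

cache = {}
db_reads = 0  # count database hits to show the cache absorbing read load

def load_from_db(key):
    global db_reads
    db_reads += 1
    return f"row-for-{key}"  # pretend this is an expensive query

def get(key):
    if key in cache:           # cache hit: no database round trip
        return cache[key]
    value = load_from_db(key)  # cache miss: read from the database...
    cache[key] = value         # ...and populate the cache for next time
    return value

get("user:42")   # miss -> hits the database
get("user:42")   # hit  -> served from cache
print(db_reads)  # 1
```

In production the cache entry would also carry a time-to-live so that stale data eventually expires; that detail is omitted here for brevity.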
Relational databases are still very popular for storing structured data and business objects. AWS offers six database engines (Microsoft SQL Server, Oracle, MySQL, MariaDB, PostgreSQL, and Amazon Aurora) as managed services through Amazon Relational Database Service (Amazon RDS). Relational databases, however, are not designed for endless scale, which can make it difficult and time-intensive to apply techniques to support a high number of queries.

NoSQL databases have been designed to favor scalability, performance, and availability over the consistency of relational databases. One important element of NoSQL databases is that they typically don’t enforce a strict schema. Data is distributed over partitions that can be scaled horizontally and is retrieved using partition keys. Because individual microservices are designed to do one thing well, they typically have a simplified data model that might be well suited to NoSQL persistence. It is important to understand that NoSQL databases have different access patterns than relational databases. For example, it is not possible to join tables. If this is necessary, the logic has to be implemented in the application.

You can use Amazon DynamoDB to create a database table that can store and retrieve any amount of data and serve any level of request traffic. DynamoDB delivers single-digit-millisecond performance; however, there are certain use cases that require response times in microseconds. Amazon DynamoDB Accelerator (DAX) provides caching capabilities for accessing data. DynamoDB also offers an automatic scaling feature to dynamically adjust throughput capacity in response to actual traffic. However, there are cases where capacity planning is difficult or not possible because of large activity spikes of short duration in your application. For such situations, DynamoDB provides an on-demand option, which offers simple pay-per-request pricing. DynamoDB on-demand is capable of serving thousands of requests per second instantly without capacity planning.
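The horizontal partitioning that NoSQL stores rely on can be illustrated with a toy hash-partitioned key-value store. This is a deliberate simplification of how a store like DynamoDB distributes items by partition key; the partition count, hashing scheme, and item data are invented:

```python
# Sketch: a toy hash-partitioned key-value store illustrating how NoSQL
# databases spread data over partitions addressed by a partition key.
# Real systems (e.g. DynamoDB) use internal hashing; this is a toy model.
import hashlib

NUM_PARTITIONS = 4
partitions = [{} for _ in range(NUM_PARTITIONS)]

def partition_for(key: str) -> int:
    """Map a partition key deterministically onto one partition."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

def put(key: str, item: dict) -> None:
    partitions[partition_for(key)][key] = item

def get(key: str) -> dict:
    # Reads go straight to the owning partition -- no cross-partition join.
    return partitions[partition_for(key)][key]

put("order#1001", {"status": "shipped"})
put("order#1002", {"status": "pending"})
print(get("order#1001")["status"])  # shipped
```

Because every read and write is addressed by key, adding partitions scales throughput horizontally; the flip side, as noted above, is that joins must be handled in application code.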
Reducing operational complexity

The architecture previously described in this whitepaper already uses managed services, but Amazon Elastic Compute Cloud (Amazon EC2) instances still need to be managed. The operational efforts needed to run, maintain, and monitor microservices can be further reduced by using a fully serverless architecture.

API implementation

Architecting, deploying, monitoring, continuously improving, and maintaining an API can be a time-consuming task. Sometimes different versions of APIs need to be run to assure backward compatibility for all clients. The different stages of the development cycle (for example, development, testing, and production) further multiply operational efforts. Authorization is a critical feature for all APIs, but it is usually complex to build and involves repetitive work. When an API is published and becomes successful, the next challenge is to manage, monitor, and monetize the ecosystem of third-party developers utilizing the APIs. Other important features and challenges include throttling requests to protect the backend services, caching API responses, handling request and response transformation, and generating API definitions and documentation with tools such as Swagger.

Amazon API Gateway addresses those challenges and reduces the operational complexity of creating and maintaining RESTful APIs. API Gateway allows you to create your APIs programmatically by importing Swagger definitions, using either the AWS API or the AWS Management Console. API Gateway serves as a front door to any web application running on Amazon EC2, Amazon ECS, AWS Lambda, or in any on-premises environment. Basically, API Gateway allows you to run APIs without having to manage servers. The following figure illustrates how API Gateway handles API calls and interacts with other components. Requests from mobile devices, websites, or
other backend services are routed to the closest CloudFront Point of Presence to minimize latency and provide the optimum user experience.

API Gateway call flow

Serverless microservices

“No server is easier to manage than no server.” — AWS re:Invent

Getting rid of servers is a great way to eliminate operational complexity. Lambda is tightly integrated with API Gateway. The ability to make synchronous calls from API Gateway to Lambda enables the creation of fully serverless applications, and it is described in detail in the Amazon API Gateway Developer Guide. The following figure shows the architecture of a serverless microservice with AWS Lambda, where the complete service is built out of managed services. This eliminates the architectural burden of designing for scale and high availability, and it eliminates the operational efforts of running and monitoring the microservice’s underlying infrastructure.

Serverless microservice using AWS Lambda

A similar implementation that is also based on serverless services is shown in the following figure. In this architecture, Docker containers are used with Fargate, so it’s not necessary to care about the underlying infrastructure. In addition to DynamoDB, Amazon Aurora Serverless is used, which is an on-demand, autoscaling configuration for Aurora (MySQL-compatible edition), where the database will automatically start up, shut down, and scale capacity up or down based on your application’s needs.

Serverless microservice using Fargate

Disaster recovery

As previously mentioned in the introduction of this whitepaper, typical microservices applications are implemented using the Twelve-Factor App patterns. The Processes section states that “Twelve-factor processes are stateless and share nothing. Any data that needs to persist must be stored in a stateful backing
service, typically a database.” For a typical microservices architecture, this means that the main focus for disaster recovery should be on the downstream services that maintain the state of the application, for example, file systems, databases, or queues. When creating a disaster recovery strategy, organizations most commonly plan for the recovery time objective and the recovery point objective.

The recovery time objective is the maximum acceptable delay between the interruption of service and the restoration of service. This objective determines what is considered an acceptable time window when service is unavailable and is defined by the organization.

The recovery point objective is the maximum acceptable amount of time since the last data recovery point. This objective determines what is considered an acceptable loss of data between the last recovery point and the interruption of service and is defined by the organization. For more information, refer to the Disaster Recovery of Workloads on AWS: Recovery in the Cloud whitepaper.

High availability

This section takes a closer look at high availability for the different compute options.

Amazon EKS runs Kubernetes control and data plane instances across multiple Availability Zones to ensure high availability. Amazon EKS automatically detects and replaces unhealthy control plane instances, and it provides automated version upgrades and patching for them. This control plane consists of at least two API server nodes and three etcd nodes that run across three Availability Zones within a Region. Amazon EKS uses the architecture of AWS Regions to maintain high availability.

Amazon ECR hosts images in a highly available and high-performance architecture, enabling you to reliably deploy images for container applications across Availability Zones. Amazon ECR works with Amazon EKS, Amazon ECS, and AWS Lambda, simplifying the development-to-production workflow.

Amazon ECS is a
regional service that simplifies running containers in a highly available manner across multiple Availability Zones within an AWS Region. Amazon ECS includes multiple scheduling strategies that place containers across your clusters based on your resource needs (for example, CPU or RAM) and availability requirements.

AWS Lambda runs your function in multiple Availability Zones to ensure that it is available to process events in case of a service interruption in a single zone. If you configure your function to connect to a virtual private cloud (VPC) in your account, specify subnets in multiple Availability Zones to ensure high availability.

Deploying Lambda-based applications

You can use AWS CloudFormation to define, deploy, and configure serverless applications. The AWS Serverless Application Model (AWS SAM) is a convenient way to define serverless applications. AWS SAM is natively supported by CloudFormation and defines a simplified syntax for expressing serverless resources. To deploy your application, specify the resources you need as part of your application, along with their associated permissions policies, in a CloudFormation template, package your deployment artifacts, and deploy the template. Based on AWS SAM, SAM Local is an AWS Command Line Interface tool that provides an environment for you to develop, test, and analyze your serverless applications locally before uploading them to the Lambda runtime. You can use SAM Local to create a local testing environment that simulates the AWS runtime environment.
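The simplified SAM syntax mentioned above can be seen in a minimal template. This is an illustrative sketch: the function name, handler, runtime, and paths are hypothetical, and a real template would point CodeUri at actual deployment artifacts.

```yaml
# Minimal AWS SAM template sketch (all resource names and paths are
# hypothetical, for illustration only).
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:                      # hypothetical function name
    Type: AWS::Serverless::Function   # SAM's simplified serverless resource
    Properties:
      Handler: app.handler            # module.function entry point
      Runtime: python3.9
      CodeUri: ./src                  # path to deployment artifacts
      Events:
        Api:
          Type: Api                   # implicit API Gateway endpoint
          Properties:
            Path: /hello
            Method: get
```

The Transform line is what tells CloudFormation to expand the compact `AWS::Serverless::Function` declaration into the full set of underlying resources (function, role, API, permissions) at deployment time.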
Distributed systems components

After looking at how AWS can solve challenges related to individual microservices, the focus moves on to cross-service challenges, such as service discovery, data consistency, asynchronous communication, and distributed monitoring and auditing.

Service discovery

One of the primary challenges with microservice architectures is enabling services to discover and interact with each other. The distributed characteristics of microservice architectures not only make it harder for services to communicate, but also present other challenges, such as checking the health of those systems and announcing when new applications become available. You also must decide how and where to store meta-information, such as configuration data, that can be used by applications. In this section, several techniques for performing service discovery on AWS for microservices-based architectures are explored.

DNS-based service discovery

Amazon ECS now includes integrated service discovery that enables your containerized services to discover and connect with each other. Previously, to ensure that services were able to discover and connect with each other, you had to configure and run your own service discovery system based on Amazon Route 53, AWS Lambda, and ECS event streams, or connect every service to a load balancer.

Amazon ECS creates and manages a registry of service names using the Route 53 Auto Naming API. Names are automatically mapped to a set of DNS records so that you can refer to a service by name in your code and write DNS queries to have the name resolve to the service’s endpoint at runtime. You can specify health check conditions in a service’s task definition, and Amazon ECS ensures that only healthy service endpoints are returned by a service lookup.

In addition, you can use unified service discovery for services managed by Kubernetes. To enable this integration, AWS contributed to the External DNS project, a Kubernetes incubator project.
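The register-and-resolve behavior that these discovery mechanisms provide can be sketched as a toy in-process registry that only returns healthy endpoints. Service names and endpoints here are invented; in practice resolution happens through DNS or a discovery API rather than a local dict.

```python
# Sketch: a toy service registry that resolves only healthy endpoints,
# mimicking the health-filtered lookups that DNS-based discovery provides.
# Service names and endpoints are hypothetical examples.

registry = {}  # service name -> list of {"endpoint": ..., "healthy": bool}

def register(service, endpoint, healthy=True):
    registry.setdefault(service, []).append(
        {"endpoint": endpoint, "healthy": healthy}
    )

def resolve(service):
    """Return healthy endpoints for a service, like a filtered DNS lookup."""
    return [e["endpoint"] for e in registry.get(service, []) if e["healthy"]]

register("orders", "10.0.1.15:8080")
register("orders", "10.0.2.17:8080", healthy=False)  # failed health check
print(resolve("orders"))  # only the healthy endpoint is returned
```

The key property to notice is that callers never hard-code peer addresses; they ask the registry by name at runtime, which is what lets instances come and go freely.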
Another option is to use the capabilities of AWS Cloud Map. AWS Cloud Map extends the capabilities of the Auto Naming APIs by providing a service registry for resources, such as Internet Protocol (IP) addresses, Uniform Resource Locators (URLs), and Amazon Resource Names (ARNs), and offering an API-based service discovery mechanism with faster change propagation and the ability to use attributes to narrow down the set of discovered resources. Existing Route 53 Auto Naming resources are upgraded automatically to AWS Cloud Map.

Third-party software

A different approach to implementing service discovery is using third-party software such as HashiCorp Consul, etcd, or Netflix Eureka. All three examples are distributed, reliable key-value stores. For HashiCorp Consul, there is an AWS Quick Start that sets up a flexible, scalable AWS Cloud environment and launches HashiCorp Consul automatically into a configuration of your choice.

Service meshes

In an advanced microservices architecture, the actual application can be composed of hundreds or even thousands of services. Often the most complex part of the application is not the actual services themselves, but the communication between those services. Service meshes are an additional layer for handling inter-service communication, which is responsible for monitoring and controlling traffic in microservices architectures. This enables tasks like service discovery to be completely handled by this layer.

Typically, a service mesh is split into a data plane and a control plane. The data plane consists of a set of intelligent proxies that are deployed with the application code as a
new microservices running on Amazon EC2 Fargate Amazon ECS Amazon EKS and self managed Kubernetes on AWS App Mesh can monitor and control communications for microservices running across clusters orchestration systems or VPCs as a single application without any code changes Distributed data management Monolithic applications are typically backed by a large relational database which defines a single data model common to all application components In a microservices approach such a central database would prevent the goal of building decentralized and independent components Each microservice component should have its own data persistence layer Distributed data management however rais es new challenges As a consequence of the CAP theorem distributed microservice architectures inherently trade off consistency for performance and need to embrace eventual consistency In a distributed system business transactions can span multiple microservices Because they cannot use a single ACID transaction you can end up with partial executions In this case we wou ld need some control logic to redo the already processed transactions For this purpose t he distributed Saga pattern is commonly used In the case of a failed business transaction Saga orchestrates a series of compensating transactions that undo the changes that were made by the preceding transactions AWS Step Functions make it easy to implement a Saga execution coordinator as shown in the following figure ArchivedAmazon Web Services Implementing Microservices on AWS 19 Saga execution coordinator Building a centralized store of critical reference data that is curated by core data management tools and procedures provides a means for microservices to synchronize their critical data and possibly roll back state Using AWS Lambda with scheduled Amazo n CloudWatch Events you can build a simple cleanup and deduplication mechanism It’s very common for state changes to affect more than a single microservice In such cases event sourcing 
It’s very common for state changes to affect more than a single microservice. In such cases, event sourcing has proven to be a useful pattern. The core idea behind event sourcing is to represent and persist every application change as an event record. Instead of persisting application state, data is stored as a stream of events. Database transaction logging and version control systems are two well-known examples of event sourcing. Event sourcing has a couple of benefits: state can be determined and reconstructed for any point in time, and it naturally produces a persistent audit trail, which also facilitates debugging.

In the context of microservices architectures, event sourcing enables decoupling different parts of an application by using a publish/subscribe pattern, and it feeds the same event data into different data models for separate microservices. Event sourcing is frequently used in conjunction with the Command Query Responsibility Segregation (CQRS) pattern to decouple read from write workloads and optimize both for performance, scalability, and security. In traditional data management systems, commands and queries are run against the same data repository.

The following figure shows how the event sourcing pattern can be implemented on AWS. Amazon Kinesis Data Streams serves as the main component of the central event store, which captures application changes as events and persists them on Amazon S3. The figure depicts three different microservices composed of API Gateway, AWS Lambda, and DynamoDB. The arrows indicate the flow of the events: when Microservice 1 experiences an event state change, it publishes an event by writing a message into Kinesis Data Streams. All microservices run their own Kinesis Data Streams application in AWS Lambda, which reads a copy of the message, filters it based on relevancy for the microservice, and possibly forwards it for further processing. If your function returns an error, Lambda retries the batch until processing succeeds or the data expires.
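The core event-sourcing mechanics, appending immutable events and reconstructing state by replaying them, can be sketched in-process. The account domain and event types below are invented; in the AWS design above, the append-only stream would be Kinesis Data Streams backed by Amazon S3.

```python
# Sketch: event sourcing in miniature. State is never stored directly;
# it is rebuilt by replaying an append-only event log. The "account"
# domain and event types are hypothetical examples.
events = []  # append-only event store (Kinesis/S3 in the AWS design)

def append(event_type, amount):
    events.append({"type": event_type, "amount": amount})

def replay(log):
    """Reconstruct the current balance for any point in time from the log."""
    balance = 0
    for e in log:
        if e["type"] == "deposited":
            balance += e["amount"]
        elif e["type"] == "withdrawn":
            balance -= e["amount"]
    return balance

append("deposited", 100)
append("withdrawn", 30)
append("deposited", 5)

print(replay(events))      # current state: 75
print(replay(events[:2]))  # state as of the second event: 70
```

Replaying a prefix of the log is what gives event sourcing its point-in-time reconstruction and audit-trail properties; in a CQRS setup, separate read models would each consume this same stream.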
event source mapping to retry with a smaller batch size, limit the number of retries, or discard records that are too old. To retain discarded events, you can configure the event source mapping to send details about failed batches to an Amazon Simple Queue Service (SQS) queue or Amazon Simple Notification Service (SNS) topic.

Event sourcing pattern on AWS

Amazon S3 durably stores all events across all microservices and is the single source of truth when it comes to debugging, recovering application state, or auditing application changes. There are two primary reasons why records may be delivered more than one time to your Kinesis Data Streams application: producer retries and consumer retries. Your application must anticipate and appropriately handle processing individual records multiple times.

Configuration management

In a typical microservices architecture with dozens of different services, each service needs access to several downstream services and infrastructure components that expose data to the service. Examples could be message queues, databases, and other microservices. One of the key challenges is to configure each service in a consistent way to provide information about the connection to downstream services and infrastructure. In addition, the configuration should also contain information about the environment in which the service is operating, and restarting the application to use new configuration data shouldn't be necessary. The third principle of the Twelve-Factor App patterns covers this topic: "The twelve-factor app stores config in environment variables (often shortened to env vars or env)." For Amazon ECS, environment variables can be passed to the container by using the environment container definition parameter, which maps to the --env option of docker run. Environment variables can be passed to your containers in bulk by using the environmentFiles container definition parameter to list one or more
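The twelve-factor principle quoted above can be sketched in a few lines. In this hypothetical example (the variable names, defaults, and service are illustrative), a service assembles its downstream connection settings from environment variables, so the same image runs unchanged in every environment:

```python
import os

# Twelve-factor style configuration: read connection settings from env vars.
# Variable names and default values here are illustrative.
def load_config(env=os.environ):
    return {
        "queue_url": env.get("ORDERS_QUEUE_URL", "http://localhost:9324/queue/orders"),
        "db_host": env.get("ORDERS_DB_HOST", "localhost"),
        "db_port": int(env.get("ORDERS_DB_PORT", "5432")),
        "stage": env.get("STAGE", "dev"),
    }

# Simulate the environment a container orchestrator would inject.
config = load_config({"ORDERS_DB_HOST": "db.internal", "STAGE": "prod"})
print(config["db_host"], config["stage"])  # db.internal prod
```

ECS container definitions, Lambda function configuration, and EKS pod manifests can all inject such variables without any code change.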
files containing the environment variables. The file must be hosted in Amazon S3. In AWS Lambda, the runtime makes environment variables available to your code and sets additional environment variables that contain information about the function and invocation request. For Amazon EKS, you can define environment variables in the env field of the configuration manifest of the corresponding pod. A different way to use env variables is to use a ConfigMap.

Asynchronous communication and lightweight messaging

Communication in traditional monolithic applications is straightforward: one part of the application uses method calls or an internal event distribution mechanism to communicate with the other parts. If the same application is implemented using decoupled microservices, the communication between different parts of the application must be implemented using network communication.

REST-based communication

The HTTP/S protocol is the most popular way to implement synchronous communication between microservices. In most cases, RESTful APIs use HTTP as a transport layer. The REST architectural style relies on stateless communication, uniform interfaces, and standard methods. With API Gateway, you can create an API that acts as a "front door" for applications to access data, business logic, or functionality from your backend services. API developers can create APIs that access AWS or other web services, as well as data stored in the AWS Cloud. An API object defined with the API Gateway service is a group of resources and methods. A resource is a typed object within the domain of an API and may have an associated data model or relationships to other resources. Each resource can be configured to respond to one or more methods, that is, standard HTTP verbs such as GET, POST, or PUT. REST APIs can be deployed to different stages and versioned, as well as cloned to new versions. API Gateway handles all the tasks involved in accepting and
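To make the resources-and-methods model concrete, here is a minimal, framework-free sketch (the routes, handlers, and payloads are hypothetical, and this is not the API Gateway API itself) in which each resource responds to one or more standard HTTP verbs:

```python
# Minimal dispatch table mapping (HTTP method, resource path) -> handler,
# mirroring the model of resources that respond to standard methods.
# Routes, handlers, and payloads are illustrative.
ROUTES = {}

def route(method, path):
    def register(handler):
        ROUTES[(method, path)] = handler
        return handler
    return register

@route("GET", "/orders/{id}")
def get_order(order_id):
    return {"status": 200, "body": {"id": order_id, "state": "shipped"}}

@route("POST", "/orders")
def create_order(payload):
    return {"status": 201, "body": {"id": "o-123", **payload}}

def dispatch(method, path, arg):
    handler = ROUTES.get((method, path))
    if handler is None:
        return {"status": 404, "body": {"message": "no such resource/method"}}
    return handler(arg)

print(dispatch("GET", "/orders/{id}", "o-42")["body"]["state"])  # shipped
```

The same (method, resource) pairing is what an API Gateway definition declares, with the handler typically being a Lambda function or backend integration.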
processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

Asynchronous messaging and event passing

Message passing is an additional pattern used to implement communication between microservices. Services communicate by exchanging messages through a queue. One major benefit of this communication style is that it's not necessary to have service discovery, and services are loosely coupled. Synchronous systems are tightly coupled, which means a problem in a synchronous downstream dependency has immediate impact on the upstream callers. Retries from upstream callers can quickly fan out and amplify problems. Depending on specific requirements, like protocols, AWS offers different services that help to implement this pattern. One possible implementation uses a combination of Amazon Simple Queue Service (Amazon SQS) and Amazon Simple Notification Service (Amazon SNS). Both services work closely together: Amazon SNS enables applications to send messages to multiple subscribers through a push mechanism, and by using Amazon SNS and Amazon SQS together, one message can be delivered to multiple consumers. The following figure demonstrates the integration of Amazon SNS and Amazon SQS.

Message bus pattern on AWS

When you subscribe an SQS queue to an SNS topic, you can publish a message to the topic, and Amazon SNS sends a message to the subscribed SQS queue. The message contains the subject and message published to the topic along with metadata information in JSON format. Another option for building event-driven architectures with event sources spanning internal applications, third-party SaaS applications, and AWS services at scale is Amazon EventBridge. A fully managed event bus service, EventBridge receives events from disparate sources, identifies a target based on a routing rule, and delivers near real-time data to that target, including AWS
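The JSON envelope mentioned above is what a consumer actually reads off the subscribed SQS queue. The sketch below unwraps such an envelope; the topic ARN, subject, and payload values are illustrative:

```python
import json

# An SNS notification delivered to a subscribed SQS queue arrives as a JSON
# envelope in the SQS message body. Field values here are illustrative.
sqs_body = json.dumps({
    "Type": "Notification",
    "MessageId": "11111111-2222-3333-4444-555555555555",
    "TopicArn": "arn:aws:sns:us-east-1:123456789012:orders",
    "Subject": "OrderCreated",
    "Message": json.dumps({"orderId": "o-42", "total": 19.99}),
    "Timestamp": "2021-11-09T12:00:00.000Z",
})

def parse_notification(body):
    """Unwrap the SNS envelope and decode the inner application message."""
    envelope = json.loads(body)
    return envelope["Subject"], json.loads(envelope["Message"])

subject, payload = parse_notification(sqs_body)
print(subject, payload["orderId"])  # OrderCreated o-42
```

Every queue subscribed to the topic receives its own copy of this envelope, which is what makes the one-message, many-consumers fan-out work.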
Lambda, Amazon SNS, and Amazon Kinesis Data Streams, among others. An inbound event can also be customized by an input transformer prior to delivery. To develop event-driven applications significantly faster, EventBridge schema registries collect and organize schemas, including schemas for all events generated by AWS services. Customers can also define custom schemas or use an infer-schema option to discover schemas automatically. In balance, however, a potential trade-off for all these features is a relatively higher latency value for EventBridge delivery. Also, the default throughput and quotas for EventBridge may require an increase through a support request, depending on the use case. A different implementation strategy is based on Amazon MQ, which can be used if existing software is using open standard APIs and protocols for messaging, including JMS, NMS, AMQP, STOMP, MQTT, and WebSocket. Amazon SQS exposes a custom API, which means if you have an existing application that you want to migrate, for example, from an on-premises environment to AWS, code changes are necessary. With Amazon MQ, this is not necessary in many cases. Amazon MQ manages the administration and maintenance of ActiveMQ, a popular open-source message broker. The underlying infrastructure is automatically provisioned for high availability and message durability to support the reliability of your applications.

Orchestration and state management

The distributed character of microservices makes it challenging to orchestrate workflows when multiple microservices are involved. Developers might be tempted to add orchestration code into their services directly. This should be avoided because it introduces tighter coupling and makes it harder to quickly replace individual services. You can use AWS Step Functions to build applications from individual components that each perform a discrete function. Step Functions provides a state machine that hides the complexities of service
orchestration, such as error handling, serialization, and parallelization. This lets you scale and change applications quickly while avoiding additional coordination code inside services. Step Functions is a reliable way to coordinate components and step through the functions of your application. It provides a graphical console to arrange and visualize the components of your application as a series of steps, which makes it easier to build and run distributed services. Step Functions automatically starts and tracks each step, and retries when there are errors, so your application executes in order and as expected. It logs the state of each step, so when something goes wrong, you can diagnose and debug problems quickly. You can change and add steps without even writing code to evolve your application and innovate faster. Step Functions is part of the AWS serverless platform and supports orchestration of Lambda functions as well as applications based on compute resources such as Amazon EC2, Amazon EKS, and Amazon ECS, and additional services like Amazon SageMaker and AWS Glue. Step Functions manages the operations and underlying infrastructure for you to help ensure that your application is available at any scale.

To build workflows, Step Functions uses the Amazon States Language. Workflows can contain sequential or parallel steps as well as branching steps. The following figure shows an example workflow for a microservices architecture combining sequential and parallel steps. Invoking such a workflow can be done either through the Step Functions API or with API Gateway.

An example of a microservices workflow invoked by Step Functions

Distributed monitoring

A microservices architecture consists of many different distributed parts that have to be monitored. You can use Amazon CloudWatch to collect and track metrics, centralize and monitor log
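A workflow combining sequential and parallel steps looks roughly like the following Amazon States Language definition, built here as a Python dict and serialized to JSON; the state names and Lambda ARNs are placeholders:

```python
import json

# A minimal Amazon States Language definition: one sequential Task state
# followed by two parallel branches. State names and ARNs are placeholders.
state_machine = {
    "Comment": "Order workflow: validate, then bill and notify in parallel.",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Next": "BillAndNotify",
        },
        "BillAndNotify": {
            "Type": "Parallel",
            "Branches": [
                {"StartAt": "Bill", "States": {"Bill": {
                    "Type": "Task",
                    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:bill",
                    "End": True}}},
                {"StartAt": "Notify", "States": {"Notify": {
                    "Type": "Task",
                    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:notify",
                    "End": True}}},
            ],
            "End": True,
        },
    },
}

definition = json.dumps(state_machine)  # the JSON a state machine is created from
print(json.loads(definition)["StartAt"])  # ValidateOrder
```

Keeping this coordination in a state machine definition, rather than inside the services, is what avoids the tight coupling warned about above.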
files, set alarms, and automatically react to changes in your AWS environment. CloudWatch can monitor AWS resources such as Amazon EC2 instances, DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services and any log files your applications generate.

Monitoring

You can use CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. CloudWatch provides a reliable, scalable, and flexible monitoring solution that you can start using within minutes. You no longer need to set up, manage, and scale your own monitoring systems and infrastructure. In a microservices architecture, the capability of monitoring custom metrics using CloudWatch is an additional benefit, because developers can decide which metrics should be collected for each service. In addition, dynamic scaling can be implemented based on custom metrics. In addition to Amazon CloudWatch, you can also use CloudWatch Container Insights to collect, aggregate, and summarize metrics and logs from your containerized applications and microservices. CloudWatch Container Insights automatically collects metrics for many resources, such as CPU, memory, disk, and network, and aggregates them as CloudWatch metrics at the cluster, node, pod, task, and service level. Using CloudWatch Container Insights, you can gain access to CloudWatch Container Insights dashboard metrics. It also provides diagnostic information, such as container restart failures, to help you isolate issues and resolve them quickly. You can also set CloudWatch alarms on metrics that Container Insights collects. Container Insights is available for Amazon ECS, Amazon EKS, and Kubernetes platforms on Amazon EC2; Amazon ECS support includes support for Fargate. Another popular option, especially for Amazon EKS, is to use Prometheus. Prometheus is an open-source monitoring and alerting toolkit that is often used in combination with Grafana to visualize the collected metrics. Many Kubernetes
components store metrics at /metrics, and Prometheus can scrape these metrics at a regular interval. Amazon Managed Service for Prometheus (AMP) is a Prometheus-compatible monitoring service that enables you to monitor containerized applications at scale. With AMP, you can use the open-source Prometheus query language (PromQL) to monitor the performance of containerized workloads without having to manage the underlying infrastructure required to manage the ingestion, storage, and querying of operational metrics. You can collect Prometheus metrics from Amazon EKS and Amazon ECS environments using AWS Distro for OpenTelemetry or Prometheus servers as collection agents. AMP is often used in combination with Amazon Managed Service for Grafana (AMG). AMG makes it easy to query, visualize, alert on, and understand your metrics no matter where they are stored. With AMG, you can analyze your metrics, logs, and traces without having to provision servers, configure and update software, or do the heavy lifting involved in securing and scaling Grafana in production.

Centralizing logs

Consistent logging is critical for troubleshooting and identifying issues. Microservices enable teams to ship many more releases than ever before and encourage engineering teams to run experiments on new features in production. Understanding customer impact is crucial to gradually improving an application. By default, most AWS services centralize their log files. The primary destinations for log files on AWS are Amazon S3 and Amazon CloudWatch Logs. For applications running on Amazon EC2 instances, a daemon is available to send log files to CloudWatch Logs. Lambda functions natively send their log output to CloudWatch Logs, and Amazon ECS includes support for the awslogs log driver that enables the centralization of container logs to CloudWatch Logs. For Amazon EKS, either Fluent Bit or Fluentd can forward logs from the individual instances in the cluster
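What Prometheus scrapes from a /metrics endpoint is the plain-text exposition format. The sketch below renders a counter and a gauge in that format; the metric names, labels, and values are illustrative:

```python
# Rendering metrics in the Prometheus text exposition format, as served by a
# /metrics endpoint that Prometheus scrapes. Metric names are illustrative.
def render_metrics(metrics):
    lines = []
    for name, (mtype, help_text, samples) in metrics.items():
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} {mtype}")
        for labels, value in samples:
            label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
            lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines) + "\n"

exposition = render_metrics({
    "http_requests_total": ("counter", "Requests served.",
                            [({"service": "orders", "code": "200"}, 1027)]),
    "queue_depth": ("gauge", "Messages waiting.",
                    [({"queue": "billing"}, 7)]),
})
print(exposition)
```

A real service would use a Prometheus client library rather than hand-formatting, but the wire format is exactly these HELP/TYPE comments and labeled sample lines.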
to CloudWatch Logs as a centralized logging destination, where they are combined for higher-level reporting using Amazon OpenSearch Service and Kibana. Because of its smaller footprint and performance advantages, Fluent Bit is recommended instead of Fluentd. The following figure illustrates the logging capabilities of some of the services. Teams are then able to search and analyze these logs using tools like Amazon OpenSearch Service and Kibana. Amazon Athena can be used to run a one-time query against centralized log files in Amazon S3.

Logging capabilities of AWS services

Distributed tracing

In many cases, a set of microservices works together to handle a request. Imagine a complex system consisting of tens of microservices in which an error occurs in one of the services in the call chain. Even if every microservice is logging properly and logs are consolidated in a central system, it can be difficult to find all relevant log messages. The central idea of AWS X-Ray is the use of correlation IDs, which are unique identifiers attached to all requests and messages related to a specific event chain. The trace ID is added to HTTP requests in a specific tracing header named X-Amzn-Trace-Id when the request hits the first X-Ray-integrated service (for example, Application Load Balancer or API Gateway) and is included in the response. Through the X-Ray SDK, any microservice can read this header, but can also add to or update it. X-Ray works with Amazon EC2, Amazon ECS, AWS Lambda, and AWS Elastic Beanstalk. You can use X-Ray with applications written in Java, Node.js, and .NET that are deployed on these services.

X-Ray service map

Epsagon is a fully managed SaaS offering that includes tracing for all AWS services, third-party APIs (through HTTP calls), and other common services such as Redis, Kafka, and Elastic. The Epsagon service includes monitoring capabilities, alerting for the most common services, and payload
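The X-Amzn-Trace-Id header carries semicolon-separated key=value fields (Root, and optionally Parent and Sampled). A minimal parser, with a sample trace value that is purely illustrative:

```python
# Parsing the X-Amzn-Trace-Id header that X-Ray uses as a correlation ID.
# The sample header value below is illustrative.
def parse_trace_header(value):
    """Split 'Root=...;Parent=...;Sampled=...' into a dict of fields."""
    fields = {}
    for part in value.split(";"):
        key, _, val = part.partition("=")
        fields[key.strip()] = val.strip()
    return fields

header = "Root=1-67891233-abcdef012345678912345678;Parent=463ac35c9f6413ad;Sampled=1"
trace = parse_trace_header(header)
print(trace["Root"], trace["Sampled"])
```

Logging the Root field alongside every application log line is what lets you later pull together all messages belonging to one request chain.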
visibility into each and every call your code is making. AWS Distro for OpenTelemetry is a secure, production-ready, AWS-supported distribution of the OpenTelemetry project. Part of the Cloud Native Computing Foundation, AWS Distro for OpenTelemetry provides open-source APIs, libraries, and agents to collect distributed traces and metrics for application monitoring. With AWS Distro for OpenTelemetry, you can instrument your applications just one time to send correlated metrics and traces to multiple AWS and partner monitoring solutions. Use auto-instrumentation agents to collect traces without changing your code. AWS Distro for OpenTelemetry also collects metadata from your AWS resources and managed services to correlate application performance data with underlying infrastructure data, reducing the mean time to problem resolution. Use AWS Distro for OpenTelemetry to instrument your applications running on Amazon EC2, Amazon ECS, Amazon EKS on Amazon EC2, Fargate, and AWS Lambda, as well as on premises.

Options for log analysis on AWS

Searching, analyzing, and visualizing log data is an important aspect of understanding distributed systems. Amazon CloudWatch Logs Insights enables you to explore, analyze, and visualize your logs instantly, which allows you to troubleshoot operational problems. Another option for analyzing log files is to use Amazon OpenSearch Service together with Kibana. Amazon OpenSearch Service can be used for full-text search, structured search, analytics, and all three in combination. Kibana is an open-source data visualization plugin that seamlessly integrates with Amazon OpenSearch Service. The following figure demonstrates log analysis with Amazon OpenSearch Service and Kibana. CloudWatch Logs can be configured to stream log entries to Amazon OpenSearch Service in near real time through a CloudWatch Logs subscription. Kibana visualizes the data and exposes a convenient search interface to data stores in
Amazon OpenSearch Service. This solution can be used in combination with software like ElastAlert to implement an alerting system to send SNS notifications and emails, create JIRA tickets, and so forth, if anomalies, spikes, or other patterns of interest are detected in the data.

Log analysis with Amazon OpenSearch Service and Kibana

Another option for analyzing log files is to use Amazon Redshift with Amazon QuickSight. QuickSight can be easily connected to AWS data services, including Redshift, Amazon RDS, Aurora, Amazon EMR, DynamoDB, Amazon S3, and Amazon Kinesis. CloudWatch Logs can act as a centralized store for log data, and in addition to only storing the data, it is possible to stream log entries to Amazon Kinesis Data Firehose. The following figure depicts a scenario where log entries are streamed from different sources to Redshift using CloudWatch Logs and Kinesis Data Firehose. QuickSight uses the data stored in Redshift for analysis, reporting, and visualization.

Log analysis with Amazon Redshift and Amazon QuickSight

The following figure depicts a scenario of log analysis on Amazon S3. When the logs are stored in Amazon S3 buckets, the log data can be loaded in different AWS data services, such as Redshift or Amazon EMR, to analyze the data stored in the log stream and find anomalies.

Log analysis on Amazon S3

Chattiness

By breaking monolithic applications into small microservices, the communication overhead increases because microservices have to talk to each other. In many implementations, REST over HTTP is used because it is a lightweight communication protocol, but high message volumes can cause issues. In some cases, you might consider consolidating services that send many messages back and forth. If you find yourself in a situation where you consolidate an increased number of services
just to reduce chattiness, you should review your problem domains and your domain model.

Protocols

Earlier in this whitepaper, in the section Asynchronous communication and lightweight messaging, different possible protocols are discussed. For microservices, it is common to use protocols like HTTP. Messages exchanged by services can be encoded in different ways, such as human-readable formats like JSON or YAML, or efficient binary formats such as Avro or Protocol Buffers.

Caching

Caches are a great way to reduce latency and chattiness in microservices architectures. Several caching layers are possible, depending on the actual use case and bottlenecks. Many microservice applications running on AWS use ElastiCache to reduce the volume of calls to other microservices by caching results locally. API Gateway provides a built-in caching layer to reduce the load on the backend servers. In addition, caching is also useful to reduce load from the data persistence layer. The challenge for any caching mechanism is to find the right balance between a good cache hit rate and the timeliness and consistency of data.

Auditing

Another challenge to address in microservices architectures, which can potentially have hundreds of distributed services, is ensuring visibility of user actions on each service and being able to get a good overall view across all services at an organizational level. To help enforce security policies, it is important to audit both resource access and activities that lead to system changes. Changes must be tracked at the individual service level as well as across services running on the wider system. Typically, changes occur frequently in microservices architectures, which makes auditing changes even more important. This section examines the key services and features within AWS that can help you audit your microservices architecture.

Audit trail

AWS CloudTrail is a useful tool for tracking changes in microservices
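The hit-rate versus freshness trade-off described under Caching can be seen in a minimal in-process TTL cache. This is only a sketch (the TTL value and the priced lookup are hypothetical), not how ElastiCache or API Gateway caching is configured:

```python
import time

# A minimal TTL cache: entries are served until they expire, trading data
# freshness for fewer calls to a downstream service. Values are illustrative.
class TTLCache:
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self.store = {}  # key -> (value, expiry_time)
        self.hits = self.misses = 0

    def get_or_fetch(self, key, fetch):
        entry = self.store.get(key)
        now = self.clock()
        if entry is not None and now < entry[1]:
            self.hits += 1
            return entry[0]
        self.misses += 1
        value = fetch(key)  # stands in for a call to another microservice
        self.store[key] = (value, now + self.ttl)
        return value

calls = []
def fetch_price(sku):
    calls.append(sku)  # record each simulated network call
    return {"sku": sku, "price": 9.99}

cache = TTLCache(ttl_seconds=30)
cache.get_or_fetch("sku-1", fetch_price)
cache.get_or_fetch("sku-1", fetch_price)  # second lookup served from cache
print(len(calls), cache.hits, cache.misses)  # 1 1 1
```

A longer TTL raises the hit rate and cuts chattiness, but widens the window in which stale data is served; that is the balance the text describes.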
because it enables all API calls made in the AWS Cloud to be logged and sent either to CloudWatch Logs in real time or to Amazon S3 within several minutes. All user and automated-system actions become searchable and can be analyzed for unexpected behavior, company policy violations, or debugging. Information recorded includes a timestamp, user and account information, the service that was called, the service action that was requested, the IP address of the caller, as well as request parameters and response elements. CloudTrail allows the definition of multiple trails for the same account, which enables different stakeholders, such as security administrators, software developers, or IT auditors, to create and manage their own trail. If microservice teams have different AWS accounts, it is possible to aggregate trails into a single S3 bucket. The advantages of storing the audit trails in CloudWatch are that audit trail data is captured in real time and it is easy to reroute information to Amazon OpenSearch Service for search and visualization. You can configure CloudTrail to log to both Amazon S3 and CloudWatch Logs.

Events and real-time actions

Certain changes in system architectures must be responded to quickly, and either action must be taken to remediate the situation or specific governance procedures to authorize the change must be initiated. The integration of Amazon CloudWatch Events with CloudTrail allows it to generate events for all mutating API calls across all AWS services. It is also possible to define custom events or generate events based on a fixed schedule. When an event is fired and matches a defined rule, a predefined group of people in your organization can be immediately notified so that they can take the appropriate action. If the required action can be automated, the rule can automatically trigger a built-in workflow or invoke a Lambda function to resolve the issue. The following figure shows an environment
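A CloudTrail record is a JSON document carrying exactly the fields listed above. The sketch below pulls out the ones most useful when scanning a trail for unexpected behavior; the sample record values are illustrative:

```python
import json

# Extracting the audit-relevant fields from a CloudTrail record.
# The sample record values are illustrative.
record = json.loads("""{
  "eventTime": "2021-11-09T12:34:56Z",
  "eventSource": "ec2.amazonaws.com",
  "eventName": "TerminateInstances",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "198.51.100.7",
  "userIdentity": {"type": "IAMUser", "userName": "dev-alice"},
  "requestParameters": {"instancesSet": {"items": [{"instanceId": "i-0abc"}]}}
}""")

def summarize(record):
    """Reduce a record to who did what, when, and from where."""
    return {
        "when": record["eventTime"],
        "who": record["userIdentity"].get("userName", "unknown"),
        "action": f'{record["eventSource"]}:{record["eventName"]}',
        "from_ip": record["sourceIPAddress"],
    }

print(summarize(record)["action"])  # ec2.amazonaws.com:TerminateInstances
```

Summaries like this are what teams typically index into OpenSearch Service so that auditors can search the trail across all microservice accounts.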
where CloudTrail and CloudWatch Events work together to address auditing and remediation requirements within a microservices architecture. All microservices are being tracked by CloudTrail, and the audit trail is stored in an Amazon S3 bucket. CloudWatch Events becomes aware of operational changes as they occur, responds to them, and takes corrective action as necessary by sending messages to respond to the environment, activating functions, making changes, and capturing state information. CloudWatch Events sits on top of CloudTrail and triggers alerts when a specific change is made to your architecture.

Auditing and remediation

Resource inventory and change management

To maintain control over fast-changing infrastructure configurations in an agile development environment, having a more automated, managed approach to auditing and controlling your architecture is essential. Although CloudTrail and CloudWatch Events are important building blocks to track and respond to infrastructure changes across microservices, AWS Config rules enable a company to define security policies with specific rules to automatically detect, track, and alert you to policy violations. The next example demonstrates how it is possible to detect, inform, and automatically react to non-compliant configuration changes within your microservices architecture. A member of the development team has made a change to the API Gateway for a microservice to allow the endpoint to accept inbound HTTP traffic, rather than only allowing HTTPS requests. Because this situation has been previously identified as a security compliance concern by the organization, an AWS Config rule is already monitoring for this condition. The rule identifies the change as a security violation and performs two actions: it creates a log of the detected change in an Amazon S3 bucket for
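CloudWatch Events (and its successor, EventBridge) decides whether a rule fires by matching the incoming event against a JSON pattern in which each field lists its acceptable values. A simplified matcher that captures the core semantics (the sample event and pattern contents are illustrative, and real patterns support more operators):

```python
# Simplified CloudWatch Events/EventBridge pattern matching: each pattern
# field lists acceptable values; nested objects are matched recursively.
# The sample event and pattern are illustrative.
def matches(pattern, event):
    for key, expected in pattern.items():
        if key not in event:
            return False
        if isinstance(expected, dict):
            if not isinstance(event[key], dict) or not matches(expected, event[key]):
                return False
        else:  # a list of acceptable literal values
            if event[key] not in expected:
                return False
    return True

pattern = {
    "source": ["aws.apigateway"],
    "detail": {"eventName": ["UpdateStage", "UpdateRestApi"]},
}
event = {
    "source": "aws.apigateway",
    "detail": {"eventName": "UpdateStage", "stage": "prod"},
}
print(matches(pattern, event))  # True
```

A rule whose pattern matched an API Gateway configuration change like this could notify a group over SNS or invoke a remediation Lambda function, as in the scenario above.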
auditing, and it creates an SNS notification. Amazon SNS is used for two purposes in our scenario: to send an email to a specified group to inform them about the security violation, and to add a message to an SQS queue. Next, the message is picked up, and the compliant state is restored by changing the API Gateway configuration.

Detecting security violations with AWS Config

Resources

• AWS Architecture Center
• AWS Whitepapers
• AWS Architecture Monthly
• AWS Architecture Blog
• This Is My Architecture videos
• AWS Answers
• AWS Documentation

Conclusion

Microservices architecture is a distributed design approach intended to overcome the limitations of traditional monolithic architectures. Microservices help to scale applications and organizations while improving cycle times. However, they also come with a couple of challenges that might add additional architectural complexity and operational burden. AWS offers a large portfolio of managed services that can help product teams build microservices architectures and minimize architectural and operational complexity. This whitepaper guided you through the relevant AWS services and how to implement typical patterns, such as service discovery or event sourcing, natively with AWS services.

Document Revisions

Date | Description
November 9, 2021 | Integration of Amazon EventBridge, AWS Distro for OpenTelemetry, AMP, AMG, Container Insights; minor text changes
August 1, 2019 | Minor text changes
June 1, 2019 | Integration of Amazon EKS, AWS Fargate, Amazon MQ, AWS PrivateLink, AWS App Mesh, AWS Cloud Map
September 1, 2017 | Integration of AWS Step Functions, AWS X-Ray, and ECS event streams
December 1, 2016 | First publication

Contributors

The following individuals contributed to this document:

• Sascha Möllering, Solutions Architecture, AWS
• Christian Müller, Solutions Architecture, AWS
• Matthias Jung, Solutions Architecture, AWS
• Peter Dalbhanjan, Solutions
Architecture, AWS
• Peter Chapman, Solutions Architecture, AWS
• Christoph Kassen, Solutions Architecture, AWS
• Umair Ishaq, Solutions Architecture, AWS
• Rajiv Kumar, Solutions Architecture, AWS,General,consultant,Best Practices
Migrating_Applications_to_AWS_Guide_and_Best_Practices,"Migrating Applications Running Relational Databases to AWS: Best Practices Guide. First published December 2016. Updated March 9, 2021.

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. © 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
Overview of Migrating Data-Centric Applications to AWS
Migration Steps and Tools
Development Environment Setup Prerequisites
Step 1: Migration Assessment
Step 2: Schema Conversion
Step 3: Conversion of Embedded SQL and Application Code
Step 4: Data Migration
Step 5: Testing Converted Code
Step 6: Data Replication
Step 7: Deployment to AWS and Go Live
Best Practices
Schema Conversion Best Practices
Application Code Conversion Best Practices
Data Migration Best Practices
Data Replication Best Practices
Testing Best Practices
Deployment and Go Live Best Practices
Post-Deployment Monitoring Best Practices
Conclusion
Document Revisions

About this Guide

The AWS Schema
Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS) are essential tools used to migrate an on-premises database to Amazon Relational Database Service (Amazon RDS). This guide introduces you to the benefits and features of these tools and walks you through the steps required to migrate a database to Amazon RDS. Schema, data, and application code migration processes are discussed, regardless of whether your target database is PostgreSQL, MySQL, Amazon Aurora, MariaDB, Oracle, or SQL Server.

Introduction

Customers worldwide increasingly look at the cloud as a way to address their growing needs to store, process, and analyze vast amounts of data. Amazon Web Services (AWS) provides a modern, scalable, secure, and performant platform to address customer requirements. AWS makes it easy to develop applications deployed to the cloud using a combination of database, application, networking, security, compute, and storage services. One of the most time-consuming tasks involved in moving an application to AWS is migrating the database schema and data to the cloud. The AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS) are invaluable tools to make this migration easier, faster, and less error-prone. Amazon Relational Database Service (Amazon RDS) is a managed service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks. The simplicity and ease of management of Amazon RDS appeals to many customers who want to take advantage of the disaster recovery, high availability, redundancy, scalability, and time-saving benefits the cloud offers. Amazon RDS currently supports the MySQL, Amazon Aurora, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server database engines. In this guide, we discuss how to migrate applications
using a relational database management system (RDBMS), such as Oracle or Microsoft SQL Server, onto an Amazon RDS instance in the AWS Cloud using AWS SCT and AWS DMS. This guide covers all major steps of application migration: database schema and data migration, SQL code conversion, and application code re-platforming.

Overview of Migrating Data-Centric Applications to AWS

Migration is the process of moving applications that were originally developed to run on premises and need to be remediated for Amazon RDS. During the migration process, a database application may be migrated between two databases of the same engine type (a homogeneous migration; for example, Oracle to Oracle, SQL Server to SQL Server, and so on) or between two databases that use different engine types (a heterogeneous migration; for example, Oracle to PostgreSQL, SQL Server to MySQL, and so on). In this guide, we look at common migration scenarios regardless of the database engine and touch on specific issues related to certain examples of heterogeneous conversions.

Migration Steps and Tools

Application migration to AWS involves the following steps, regardless of the database engine:

1. Migration assessment analysis
2. Schema conversion to a target database platform
3. SQL statement and application code conversion
4. Data migration
5. Testing of converted database and application code
6. Setting up replication and failover scenarios for data migration to the target platform
7. Setting up monitoring for a new production environment and go live with the target environment

Figure 1: Steps of application migration to AWS

Each application is different and may require extra attention to one or more of these steps. For example, a typical application contains the majority of complex data logic in database stored procedures, functions, and so on. Other applications are heavier on logic in the application, such as ad hoc queries to support search functionality. On
On average, the percentage of time spent in each phase of the migration effort for a typical application breaks down as shown in Table 1.

Table 1: Time spent in each migration phase

Step | Percentage of Overall Effort
Migration Assessment | 2%
Schema Conversion | 30%
Embedded SQL and Application Code Conversion | 15%
Data Migration | 5%
Testing | 45%
Data Replication | 3%
Go Live | 5%

Note: Percentages for data migration and replication are based on man-hours for configuration and do not include hours needed for the initial load.

To make the migration process faster, more predictable, and cost effective, AWS provides the following tools and methods to automate migration steps:

• AWS Schema Conversion Tool (AWS SCT) – a desktop tool that automates conversion of database objects from different source database systems (Oracle, SQL Server, MySQL, PostgreSQL) to different RDS database targets (Amazon Aurora, PostgreSQL, Oracle, MySQL, SQL Server). This tool is invaluable during the Migration Assessment, Schema Conversion, and Application Code Conversion steps.

• AWS Database Migration Service (AWS DMS) – a service for data migration to and from AWS database targets. AWS DMS can be used for a variety of replication tasks, including continuous replication to offload reads from a primary production server for reporting or extract, transform, load (ETL); continuous replication for high availability; database consolidation; and temporary replication for data migrations. In this guide we focus on the replication needed for data migrations. This service reduces the time and effort needed during the Data Migration and Data Replication Setup steps.

Development Environment Setup Prerequisites

To prepare for the migration, you must set up a development environment to use for the iterative migration process. In most cases it is desirable to have the development environment
mirror the production environment. Therefore, this environment is likely on premises or running on an Amazon Elastic Compute Cloud (Amazon EC2) instance.

Download and install the AWS SCT on a server in the development environment. If you are interested in changing database platforms, the New Project Wizard can help you determine the most appropriate target platform for the source database. See Step 1: Migration Assessment for more information. Procure an Amazon RDS database instance to serve as the migration target, and any necessary EC2 instances to run migration-specific utilities.

Step 1: Migration Assessment

During Migration Assessment, a team of system architects reviews the architecture of the existing application, produces an assessment report that includes a network diagram with all the application layers, identifies the application and database components that are not automatically migrated, and estimates the effort for manual conversion work. Although migration analysis tools exist to expedite the evaluation, the bulk of the assessment is conducted by internal staff or with help from AWS Professional Services. This effort is usually 2% of the whole migration effort.

One of the key tools in the assessment analysis is the Database Migration Assessment Report. This report provides important information about the conversion of the schema from your source database to your target RDS database instance. More specifically, the Assessment Report does the following:

• Identifies the schema objects (e.g., tables, views, stored procedures, triggers, etc.) in the source database and the actions that are required to convert them (Action Items) to the target database, including fully automated conversions, small changes like the selection of data types or attributes of tables, and rewrites of significant portions of a stored procedure

• Recommends the best target engine, based on the source database and the features used

• Recommends other AWS services that can substitute for missing features
• Recommends unique features available in Amazon RDS that can save the customer licensing and other costs

• Recommends re-architecting for the cloud; for example, sharding a large database into multiple Amazon RDS instances, such as sharding by customer or tenant, sharding by geography, or sharding by partition key

Report Sections

The database migration assessment report includes three main sections: the executive summary, the conversion statistics graph, and the conversion action items.

Executive Summary

The executive summary provides key migration metrics and helps you choose the best target database engine for your particular application.

Conversion Statistics Graph

The conversion statistics graph visualizes the schema objects and the number of conversion issues (and their complexity) in the migration project.

Figure 2: Graph of conversion statistics

Conversion Action Items

Conversion action items are presented in a detailed list with recommendations and their references in the database code. The database migration assessment report shows conversion action items with three levels of complexity:

• Simple – a task that requires less than 1 hour to complete
• Medium – a task that requires 1 to 4 hours to complete
• Significant – a task that requires 4 or more hours to complete

Using the detailed report provided by the AWS SCT, skilled architects can provide a much more precise estimate of the effort required to complete migration of the database schema code. For more information about how to configure and run the database migration assessment report, see Creating a Database Migration Assessment Report.

All results of the assessment report calculations and the summary of conversion action items are saved inside the AWS SCT. This data is useful for the schema conversion step of the overall data migration.

Tips

• Before running the assessment
report, you can restrict the database objects to evaluate by selecting or clearing the desired nodes in the source database tree.

• After running the initial assessment report, save the file as a PDF. Then open the file in a PDF viewer to view the entire database migration assessment report. You can navigate the assessment report more easily if you convert it to a Microsoft Word document and use Word's Table of Contents Navigation pane.

Step 2: Schema Conversion

The Schema Conversion step consists of translating the data definition language (DDL) for tables, partitions, and other database storage objects from the syntax and features of the source database to the syntax and features of the target database. Schema conversion in the AWS SCT is a two-step process:

1. Convert the schema.
2. Apply the schema to the target database.

AWS SCT also converts procedural application code in triggers, stored procedures, and functions from feature-rich languages (e.g., PL/SQL, T-SQL) to the simpler procedural languages of MySQL and PostgreSQL. Schema conversion typically accounts for 30% of the whole migration effort.

The AWS SCT automatically creates DDL scripts for as many database objects on the target platform as possible. For the remaining database objects, the conversion action items describe why the object cannot be converted automatically and the manual steps required to convert the object to the target platform. References to articles that discuss the recommended solution on the target platform are included when available.

The translated DDL for database objects is also stored in the AWS SCT project file – both the DDL that is generated automatically by the AWS SCT and any custom or manual DDL for objects that could not be converted automatically. The AWS SCT can also generate a DDL script file per object; this can come in handy for source code version control purposes. You have complete control over when the DDL is
applied to the target database. For example, for a smaller database, you can run the Convert Schema command to automatically generate DDL for as many objects as possible, then write code to handle the manual conversion action items, and lastly apply all of the DDL to create all database objects at once. For a larger database that takes weeks or months to convert, it can be advantageous to generate the target database objects by executing the DDL selectively, creating objects in the target database as needed.

The Step 6: Data Replication section discusses how you can also speed up the data migration process by applying secondary indexes and constraints as a separate step after the initial data load. By selecting or clearing objects from the target database tree, you can save DDL scripts separately for tables and their corresponding foreign keys and secondary indexes. You can then use these scripts to generate tables, migrate data to those tables without performance slowdown, and then apply secondary indexes and foreign keys after the data is loaded.

After the database migration assessment report is created, the AWS SCT offers two views of the project: the main view and the assessment report view.

Tips for Navigating the AWS SCT in the Assessment Report View

See Figure 3 and the corresponding callouts in Table 2 for tips on navigating the assessment report view.

Figure 3: AWS SCT in the assessment report view

Table 2: AWS SCT assessment report view callouts

Callout 1: Select a code object from the source database tree on the left to view the source code, DDL, and mappings to create the object in the target database. Note: Source code for tables is not displayed in the AWS SCT; however, the DDL to create tables in the target database is displayed. The AWS SCT displays both source and target DDL for other database objects.

Callout 2: Click the chevron next to an issue, or double-click the issue message, to expand the
list of affected objects. Select an affected object to locate it in the source and target database trees and view or edit the DDL script. Source database objects with an associated conversion action item are indicated with an exclamation icon.

Callout 3: When viewing the source SQL for objects, the AWS SCT highlights the lines of code that require manual intervention to convert to the target platform. Hovering over or double-clicking the highlighted source code displays the corresponding action item.

Callout 4: The target SQL includes comments with the issue number for action items to be resolved in the converted SQL code.

Schema Mapping Rules

The AWS SCT allows you to create custom schema transformations and mapping rules to use during the conversion. Schema mapping rules can standardize the target schema naming convention, apply internal naming conventions, correct existing issues in the source schema, and so on. Transformations are applied to the target database schema, table, or column DDL and currently include the following:

• Rename
• Add prefix
• Add suffix
• Remove prefix
• Remove suffix
• Replace prefix
• Replace suffix
• Convert uppercase (not available for columns)
• Convert lowercase (not available for columns)
• Move to (tables only)
• Change data type (columns only)

New transformations and mapping rules are being added to the AWS SCT with each release to increase the robustness of this valuable feature. For example, Figure 4 depicts a schema mapping rule that has been applied to standardize a table name and correct a typo. Notice the Source Name to Target Name mapping.

Figure 4: Schema mapping rule in AWS SCT

You can create as many schema mapping rules as you need by choosing Settings and then Mapping Rules from the AWS SCT menu. After schema mapping rules are created, you can export them for use by AWS DMS during the Data
Migration step. Schema mapping rules are exported in JavaScript Object Notation (JSON) format. The Step 4: Data Migration section examines how AWS DMS uses this mapping.

Tips

• Before applying individual SQL objects to the target, carefully examine the SQL for the object to ensure that any dependent objects have already been created. If an error occurs while applying an object to the target database, check the error log for details. To find the location of the error log, from the AWS SCT menu choose Settings and then choose Global Settings.

Step 3: Conversion of Embedded SQL and Application Code

After you convert the database schema, the next step is to address any custom scripts with embedded SQL statements (e.g., ETL scripts, reports, etc.) and the application code so that they work with the new target database. This includes rewriting the portions of application code written in Java, C#, C++, Perl, Python, etc. that relate to JDBC/ODBC driver usage, establishing connections, data retrieval, and iteration.

AWS SCT scans a folder containing application code, extracts embedded SQL statements, converts as many as possible automatically, and flags the remaining statements for manual conversion actions. Converting embedded SQL in application code typically accounts for 15% of the whole migration effort. Some applications are more reliant on database objects such as stored procedures, while other applications use more embedded SQL for database queries. In either case, these two efforts combined typically account for around 45%, or almost half, of the migration effort.

The workflow for application code conversion is similar to the workflow for the database migration:

1. Run an assessment report to understand the level of effort required to convert the application code to the target platform.
2. Analyze the code to extract embedded SQL statements.
3. Allow the AWS SCT to automatically convert as much code as possible.
4. Work
through the remaining conversion action items manually.
5. Save the code changes.

The AWS SCT uses a two-step process to convert application code:

1. Extract SQL statements from the surrounding application code.
2. Convert the SQL statements.

An application conversion project is a subproject of a database migration project. One database migration project can include one or more application conversion subprojects; for example, there may be a front-end GUI application conversion, an ETL application conversion, and a reporting application conversion. All three applications can be attached to the parent database migration project and converted in the AWS SCT.

The AWS SCT can also standardize parameters in parameterized SQL statements to use named or positional styles, or keep the parameters as they are. In the following example, the original application source code used the named (:name) style, and the positional (?) style has been selected for the application conversion. Notice that, during conversion, AWS SCT replaced the named parameter :id with a positional ?
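To illustrate the kind of rewrite described here, the sketch below converts named-style parameters to positional style. This is a simplified stand-in, not the AWS SCT implementation, and it deliberately ignores edge cases such as parameters inside string literals.

```python
import re

# Simplified illustration of converting named-style SQL parameters (:id)
# to positional style (?). This is NOT the AWS SCT implementation; it
# ignores edge cases such as string literals. The (?<!:) guard skips
# PostgreSQL-style casts (::type).
NAMED_PARAM = re.compile(r"(?<!:):([A-Za-z_][A-Za-z0-9_]*)")

def to_positional(sql):
    """Replace each named parameter with ? and return the SQL plus name order."""
    names = NAMED_PARAM.findall(sql)
    return NAMED_PARAM.sub("?", sql), names

converted, order = to_positional(
    "SELECT * FROM employee WHERE id = :id AND dept = :dept"
)
# converted == "SELECT * FROM employee WHERE id = ? AND dept = ?"
# order == ["id", "dept"]
```

Keeping the returned name order matters: positional APIs bind values by position, so the application must supply values in exactly this sequence.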
Figure 5: AWS SCT replaced named style with positional style

The application conversion workspace makes it easy to view and modify embedded SQL code and track the changes that are yet to be made. Parsed SQL scripts and snippets appear in the bottom pane alongside their converted code. Selecting one of these parsed scripts highlights it in the application code so you can view the context, and the parsed script appears in the lower-left pane, as shown in Figure 6.

Figure 6: Selecting a parsed script highlights it in the application code

The embedded SQL conversion process consists of the following iterative steps:

1. Analyze the selected code folder to extract embedded SQL.
2. Convert the SQL to the target script. If the AWS SCT is able to convert the script automatically, it appears in the lower-right pane. Any manual conversion code can also be entered here.
3. Apply the converted SQL to the source code base, swapping out the original snippet for the newly converted snippet.
4. Save the changes to the source code. A backup of the original source code is saved to your AWS SCT working directory with an extension of .old.
5. Click the green checkmark to the right of the parsed SQL script to validate the target SQL script against the target database.

Tips

• AWS SCT can only convert or make recommendations for the SQL statements that it was able to extract. The application assessment report contains a SQL Extraction Actions tab. This tab lists conversion action items where AWS SCT detected SQL statements but was not able to accurately extract and parse them. Drill down through these issues to identify application code that must be evaluated by an application developer and converted manually if needed.

• Drill into the issues on either the SQL Extraction Actions or the SQL Conversion Actions tab to locate
the file and line number of the conversion item, then double-click the occurrence to view the extracted SQL.

Step 4: Data Migration

After the schema and application code are successfully converted to the target database platform, it is time to migrate data from the source database to the target database. You can easily accomplish this by using AWS DMS. After the data is migrated, you can perform testing on the new schema and application. Because much of the data mapping and transformation work has already been done in AWS SCT, and AWS DMS manages the complexities of the data migration for you, configuring a new data migration task is typically 5% of the whole migration effort.

Note: AWS SCT and AWS DMS can be used independently. For example, AWS DMS can be used to synchronize homogeneous databases between environments, such as refreshing a test environment with production data. However, the tools are integrated so that the schema conversion and data migration steps can be used in any order. Later sections of this guide cover specific scenarios of integrating these tools.

AWS DMS works by setting up a replication server that acts as a middleman between the source and target databases. This instance is referred to as the AWS DMS replication instance (Figure 7). AWS DMS migrates data between source and target instances and tracks which rows have been migrated and which rows have yet to be migrated.

Figure 7: AWS DMS replication instance

AWS DMS provides a wizard to walk through the three main steps of getting the data migration service up and running:

1. Set up a replication instance.
2. Define connections for the source and target databases.
3. Define data replication tasks.

To perform a database migration, AWS DMS must be able to connect to the source and target databases and the replication instance. AWS DMS will automatically create the replication instance in the specified Amazon Virtual Private Cloud (Amazon VPC).
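The three wizard steps correspond to three AWS DMS API calls (create_replication_instance, create_endpoint, create_replication_task). The sketch below shows the shape of the parameters involved, in boto3 style; all identifiers, hostnames, and values are placeholders, and no AWS calls are made.

```python
import json

# Sketch of the parameters behind the three AWS DMS wizard steps, in the
# shape expected by the corresponding AWS DMS API calls. All identifiers,
# hostnames, and sizes here are placeholders; nothing is sent to AWS.

replication_instance = {                    # step 1: create_replication_instance
    "ReplicationInstanceIdentifier": "migration-instance",
    "ReplicationInstanceClass": "dms.c4.xlarge",
    "AllocatedStorage": 100,
}

source_endpoint = {                         # step 2: create_endpoint (one per side)
    "EndpointIdentifier": "oracle-source",
    "EndpointType": "source",               # the target endpoint uses "target"
    "EngineName": "oracle",
    "ServerName": "onprem-db.example.com",
    "Port": 1521,
}

replication_task = {                        # step 3: create_replication_task
    "ReplicationTaskIdentifier": "full-load-task",
    "MigrationType": "full-load",           # or "full-load-and-cdc", "cdc"
    "TableMappings": json.dumps({           # table-mapping rules as a JSON string
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {"schema-name": "HumanResources", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
}
```

In practice these dictionaries would be passed as keyword arguments to a boto3 `dms` client, along with credentials and the ARNs that link the task to its endpoints and replication instance.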
The simplest database migration configuration is when the source and target databases are also AWS resources (Amazon EC2 or Amazon RDS) in the same VPC. For more information, see Setting Up a Network for Database Migration in the AWS Database Migration Service User Guide.

You can migrate data in two ways:

• As a full load of existing data
• As a full load of existing data, followed by continuous replication of data changes to the target

AWS DMS can be configured to drop and recreate the target tables, or to truncate existing data in the target tables before reloading data. AWS DMS will automatically create the target table on the target database according to the defined schema mapping rules, with primary keys and required unique indexes, and then migrate the data. However, AWS DMS doesn't create any other objects that are not required to efficiently migrate the data from the source. For example, it doesn't create secondary indexes, non-primary key constraints, or data defaults, or other database objects such as stored procedures, views, functions, packages, and so on. This is where the AWS SCT feature of saving SQL scripts separately for various SQL objects can be used; alternatively, these objects can be applied to the target database directly via the AWS SCT Apply to Database command after the initial load.

Data can be migrated as-is (such as when the target schema is identical or compatible with the source schema). AWS DMS can use schema mapping rules exported from the AWS SCT project, or custom mapping rules can be defined in AWS DMS via JSON. For example, the following JSON includes the tables in the HumanResources schema and renames the table tbl_Departmnet to Department:

{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "1",
      "object-locator": {
        "schema-name": "HumanResources",
        "table-name": "%"
      },
      "rule-action": "include"
    },
    {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "Rename tbl_Departmnet",
      "rule-action": "rename",
      "rule-target": "table",
      "object-locator": {
        "schema-name": "HumanResources",
        "table-name": "tbl_Departmnet"
      },
      "value": "Department"
    }
  ]
}

Tips

For more information on AWS replication instance types and their capacities, see Working with an AWS DMS Replication Instance.

Step 5: Testing Converted Code

After the schema and application code have been converted and the data successfully migrated onto the AWS platform, thoroughly test the migrated application. The focus of this testing is to ensure correct functional behavior on the new platform. Although best practices vary, it is generally accepted to aim for as much time in the testing phase as in the development phase, which is about 45% of the overall migration effort.

The goal of testing should be twofold: exercising critical functionality in the application, and verifying that converted SQL objects are functioning as intended. An ideal scenario is to load the same test dataset into the original source database, load the converted version of the same dataset into the target database, and perform the same set of automated system tests in parallel on each system. The outcome of the tests on the converted database should be functionally equivalent to the source. Data rows affected by the tests should also be examined independently for equivalency. Analyzing the data independently from application functionality verifies that there are no data issues lurking in the target database that are not obvious in the user interface (UI).

Step 6: Data Replication

Although a one-time full load of existing data is relatively simple to set up and run, many production applications with large database backends cannot tolerate a downtime window long enough to migrate all the data in a full load. For these databases, AWS DMS can use a proprietary Change Data Capture (CDC) process to implement
ongoing replication from the source database to the target database. AWS DMS manages and monitors the ongoing replication process with minimal load on the source database, without platform-specific technologies, and without components that need to be installed on either the source or target. Due to CDC's ease of use, setting up data replication typically accounts for 3% of the overall effort.

CDC offers two ways to implement ongoing replication:

• Migrate existing data and replicate ongoing changes – implements ongoing replication by:
  a. (Optional) Creating the target schema
  b. Migrating existing data and caching changes to existing data as it is migrated
  c. Applying those cached data changes until the database reaches a steady state
  d. Lastly, applying current data changes to the target as soon as they are received by the replication instance

• Replicate data changes only – replicates data changes only (no schema) from a specified point in time. This option is helpful when the target schema already exists and the initial data load is already completed. For example, using native export/import tools, ETL, or snapshots might be a more efficient method of loading the bulk data in some situations. In this case, AWS DMS can be used to replicate changes from when the bulk load process started, to bring and keep the source and target databases in sync.

AWS DMS takes advantage of built-in functionality of the source database platform to implement the proprietary CDC process on the replication instance. This allows AWS DMS to manage, process, and monitor data replication with minimal impact to either the source or target databases. The following sections describe the source platform features and configurations needed by the DMS replication instance's CDC process.

MS SQL Server Sources

• Replication – Replication must be enabled on the source server, and a distribution database that acts as its own distributor configured.
• Transaction logs – The source database must be in Full or Bulk-Logged Recovery Mode to enable transaction log backups.

Oracle Sources

• Binary Reader or LogMiner – By default, AWS DMS uses LogMiner to capture changes from the source instance. For data migrations with a high volume of changes and/or large object (LOB) data, using the proprietary Binary Reader may offer some performance advantages.
• ARCHIVELOG – The source database must be in ARCHIVELOG mode.
• Supplemental logging – Supplemental logging must be turned on in the source database and on all tables that are being migrated.

PostgreSQL Sources

• Write-Ahead Logging (WAL) – In order for AWS DMS to capture changes from a PostgreSQL database: wal_level must be set to logical, max_replication_slots must be >= 1, and max_wal_senders must be >= 1.
• Primary key – Tables to be included in CDC must have a primary key.

MySQL Sources

• Binary logging – Binary logging must be enabled on the source database.
• Automatic backups – Automatic backups must be enabled if the source is a MySQL, Amazon Aurora, or MariaDB Amazon RDS instance.

SAP ASE (Sybase) Sources

• Replication – Replication must be enabled on the source, but RepAgent must be disabled.

MongoDB Sources

• Oplog – AWS DMS requires access to the MongoDB oplog to enable ongoing replication.

IBM Db2 LUW Sources

• Either one or both of the database configuration parameters LOGARCHMETH1 and LOGARCHMETH2 should be set to ON.

For additional information, including prerequisites and security configurations for each source platform, refer to the appropriate link in the Sources for Data Migration for AWS Database Migration Service section of the AWS Database Migration Service User Guide.

The basic setup of ongoing data replication is done in the Task configuration pane. Table 3 describes the migration type options.

Table 3: Migration type options

Migrate existing data – Perform a one-time migration from the source endpoint to the target endpoint.
Migrate existing data and replicate ongoing changes – Perform a one-time migration from the source to the target, and then continue replicating data changes from the source to the target.

Replicate data changes only – Don't perform a one-time migration, but continue to replicate data changes from the source to the target.

Additional configurations for the data migration task are available in the Task settings pane (Figure 8 and Table 4).

Figure 8: Data migration task settings

Table 4: Task setting options

Target table preparation mode:
• Do nothing – If the tables already exist at the target, they remain unaffected. Otherwise, AWS DMS creates new tables.
• Drop tables on target – AWS DMS drops the tables and creates new tables in their place.
• Truncate – AWS DMS leaves the tables and their metadata in place but removes the data from them.

Include LOB columns in replication:
• Don't include LOB columns – AWS DMS ignores columns or fields that contain large objects (LOBs).
• Full LOB mode – AWS DMS includes the complete LOB.
• Limited LOB mode – AWS DMS truncates each LOB to the size defined by Max LOB size. (Limited LOB mode is faster than full LOB mode.)

Enable CloudWatch logs (check box) – AWS DMS publishes detailed task information to CloudWatch Logs.

Step 7: Deployment to AWS and Go Live

Test the data migration of the production database to ensure that all data can be successfully migrated during the allocated cutover window. Monitor the source and target databases to ensure that the initial data load is completed, cached transactions are applied, and data has reached a steady state before cutover.

You can also use the Enable Validation option available in the Task settings pane of AWS DMS (Figure 8). If you select this option, AWS DMS validates the data migration by comparing the data in the source and the target databases.
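The console options in Table 4 correspond to fields in the DMS task-settings JSON document. The sketch below shows that correspondence; the field names follow the task-settings format as the author understands it, so verify them against the current AWS DMS documentation before use.

```python
import json

# Sketch of a DMS task-settings document covering the options in Table 4.
# Field names are assumed from the AWS DMS task-settings JSON format;
# verify against the current AWS DMS documentation before relying on them.

task_settings = {
    "TargetMetadata": {
        "SupportLobs": True,          # include LOB columns in replication
        "FullLobMode": False,         # False + LimitedSizeLobMode -> limited LOB mode
        "LimitedSizeLobMode": True,
        "LobMaxSize": 32,             # per-LOB size cap used in limited LOB mode
    },
    "FullLoadSettings": {
        # One of: DO_NOTHING | DROP_AND_CREATE | TRUNCATE_BEFORE_LOAD
        "TargetTablePrepMode": "DROP_AND_CREATE",
    },
    "Logging": {
        "EnableLogging": True,        # publish task details to CloudWatch Logs
    },
}

settings_json = json.dumps(task_settings)
```

A document like this can be supplied when creating or modifying a task through the AWS CLI or API, which exposes more settings than the console.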
Design a simple rollback plan for the unlikely event that an unrecoverable error occurs during the Go Live window. The AWS SCT and AWS DMS work together to preserve the original source database and application, so the rollback plan will mainly consist of scripts to point connection strings back to the original source database.

Post-Deployment Monitoring

AWS DMS monitors the number of rows inserted, deleted, and updated, as well as the number of DDL statements issued per table, while a task is running. You can view these statistics for the selected task on the Table Statistics pane of your migration task. In the list of migration tasks in AWS DMS, choose your database migration task (Figure 9).

Figure 9: List of database migration tasks

On the detail page, scroll to the Table Statistics pane (Figure 10) to monitor these per-table statistics while the task is running.

Figure 10: Table statistics monitoring

The most relevant metrics can be viewed for the selected task on the Migration task metrics pane (Figure 11).

Figure 11: Relevant metrics for a task

Additional metrics are available from the Amazon CloudWatch Logs dashboard, accessible from the link on the Overview details pane, or by navigating in the AWS Management Console to Services, choosing CloudWatch, and then choosing DMS. If logging is enabled for the task, review the Amazon CloudWatch Logs for any errors or warnings. You can enable logging for a task during task creation by selecting Enable CloudWatch Logs in Task Settings (Figure 8).

Best Practices

This section presents best practices for each of the seven major steps of migrating applications to AWS.

Schema Conversion Best Practices

• Save the Database Migration Assessment Report. After running the initial database migration
assessment report, save it as a CSV and a PDF. As conversion action items are completed, they may no longer appear in the database migration assessment report if it is regenerated. The saved initial assessment report can serve as a valuable project management tool, such as providing a history of conversion tasks and tracking the percentage of tasks completed. The CSV version is helpful because it can be imported into Excel for ease of use, such as the ability to search, filter, and sort conversion tasks.

• For most conversions, apply DDL to the target database in the following order to avoid dependency errors:

  a. Sequences
  b. Tables
  c. Views
  d. Procedures

  Functions should be applied to the target database in order of dependency. For example, a function might be referenced in a table column; therefore, the function must be applied before the table to avoid a dependency error. Another function might reference a table; in that case, the table must be created first.

• Configure the AWS SCT with the memory and performance settings you need. Increasing memory speeds up the performance of your conversion but uses more memory resources on your desktop. On a desktop with limited memory, you can configure AWS SCT to use less memory, resulting in a slower conversion. You can change these settings by choosing Settings, Global Settings, and then Performance and Memory.

• Apply the additional schema that AWS SCT creates to the target database. For most conversion projects, AWS SCT creates an additional schema in the target database named aws_[source platform]_ext. This schema contains SQL objects that emulate features and functionality present in the source platform but not in the target platform. For example, when converting from Microsoft SQL Server to PostgreSQL, the aws_sqlserver_ext schema contains sequence definitions to replace SQL Server identity columns. Don't forget to apply this additional schema to the target database, as it will not have a direct mapping to a source object.
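The DDL-ordering best practice above can be encoded directly when scripting DDL deployment. A minimal sketch, with hypothetical script file names:

```python
# Apply DDL scripts in dependency-safe order (sequences, tables, views,
# procedures), per the best practice above. The file names below are
# hypothetical examples.

APPLY_ORDER = ["sequence", "table", "view", "procedure"]

def order_ddl_scripts(scripts):
    """Sort (object_type, path) pairs into dependency-safe apply order."""
    rank = {obj_type: i for i, obj_type in enumerate(APPLY_ORDER)}
    return sorted(scripts, key=lambda s: rank[s[0]])

scripts = [
    ("procedure", "sp_get_employee.sql"),
    ("table", "employee.sql"),
    ("sequence", "employee_id_seq.sql"),
    ("view", "v_employee.sql"),
]
ordered = order_ddl_scripts(scripts)
# ordered[0] == ("sequence", "employee_id_seq.sql")
```

Because `sorted` is stable, scripts of the same object type keep their original relative order; functions, as noted above, still need case-by-case dependency analysis.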
• Use source code version control to track changes to target objects (both database and application code). If you find bugs or data differences during testing or deployment, the history of changes is useful for debugging.

Application Code Conversion Best Practices
• After running the initial application assessment report, save it as a CSV and a PDF. As conversion tasks are completed, they no longer appear in the application assessment report if it is regenerated. The initial application assessment report serves as a history of tasks completed throughout the entire application conversion effort. The CSV file is also helpful because it can be imported into Excel for ease of use, such as the ability to search, filter, and sort conversion tasks.

Data Migration Best Practices
• Choose a replication instance class large enough to support your database size and transactional load. By default, AWS DMS loads eight tables at a time. On a large replication server, such as a dms.c4.xlarge or larger instance, you can improve performance by increasing the number of tables to load in parallel. On a smaller replication server, reduce the number of tables to load in parallel for improved performance.
• On the target database, disable what isn't needed. Disable unnecessary triggers, validation, foreign keys, and secondary indexes on the target databases if possible. Disable unnecessary jobs, backups, and logging on the target databases.
• Tables in the source database that do not participate in common transactions can be allocated to different tasks. This allows multiple tasks to synchronize data for a single database migration, thereby improving performance in some instances.
• Monitor performance of the source system to ensure it is able to handle the load of the database migration tasks. Reducing the number of tasks and/or tables per task can reduce the load on the source system. Using a synchronized replica, mirror, or other read-only copy of the source database can also help reduce the load on the source system.
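The parallel-load default mentioned above (eight tables at a time) is controlled through the task settings. A hedged sketch of raising it on a larger replication instance; the task ARN and the chosen value are placeholders, not from this paper:

```shell
# Hypothetical sketch: raise the number of tables AWS DMS loads in parallel
# (the default is 8) when running on a larger replication instance.
# The task ARN is a placeholder.
aws dms modify-replication-task \
    --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLE \
    --replication-task-settings '{"FullLoadSettings":{"MaxFullLoadSubTasks":16}}'
```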
• Enable logging using Amazon CloudWatch Logs. Troubleshooting AWS DMS errors without the full logging captured in CloudWatch Logs can be difficult and time consuming (if not impossible).
• If your source data contains Binary Large Objects (BLOBs), such as images, XML, or other binary data, loading of these objects can be optimized using task settings. For more information, see Task Settings for AWS Database Migration Service Tasks in the AWS Database Migration Service User Guide.

Data Replication Best Practices
• Achieve the best performance by not applying indexes or foreign keys to the target database during the initial load. The initial load of existing data consists of inserts into the target database, so you get the best performance during the initial load if the target database does not have indexes or foreign keys applied. However, after the initial load, when cached data changes are applied, indexes can be useful for locating rows to update or delete.
• Apply indexes and foreign keys to the target database before the application is ready to go live.
• For ongoing replication (such as for high availability), enable the Multi-AZ option on the replication instance. The Multi-AZ option provides high availability and failover support for the replication instance.
• Use the AWS API or AWS Command Line Interface (AWS CLI) for more advanced AWS DMS task settings. The AWS API and AWS CLI offer more granular control over data replication tasks and additional settings not currently available in the AWS Management Console.
• Disable backups on the target database during the full load for better performance. Enable them during cutover.
• Wait until cutover to make your target RDS instance Multi-AZ for better performance.
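As an example of the kind of granular task setting available through the API or CLI but not the console, the following sketch puts large-object (LOB) columns into limited-size mode. The ARN is a placeholder, and the LobMaxSize value is an assumption (taken here to be in KB); check the task settings documentation for your engine:

```shell
# Hypothetical sketch: load BLOB columns in limited-size LOB mode.
# The ARN and the LobMaxSize value (assumed to be KB, ~32 MB) are placeholders.
aws dms modify-replication-task \
    --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLE \
    --replication-task-settings \
    '{"TargetMetadata":{"SupportLobs":true,"FullLobMode":false,"LimitedSizeLobMode":true,"LobMaxSize":32768}}'
```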
Testing Best Practices
• Have a test environment where full regression tests of the original application can be conducted. The tests completed before conversion should work the same way for the converted database.
• In the absence of automated testing, run "smoke" tests on the old and new applications, comparing data values and UI functionality to ensure like behavior.
• Apply standard practices for database-driven software testing regardless of the migration process. The converted application must be fully retested.
• Have sample test data that is used only for testing.
• Know your data logic and apply it to your test plans. If you don't have correct test data, the tests might fail or not cover mission-critical application functionality.
• Test using a dataset similar in size to the production dataset to expose performance bottlenecks, such as missing or non-performant indexes.

Deployment and Go-Live Best Practices
• Have a rollback plan in place should anything go wrong during the live migration. Because the original database and application code are still in place and not touched by AWS SCT or AWS DMS, this should be fairly straightforward.
• Test the deployment on a staging or pre-production environment to ensure that all needed objects, libraries, code, etc. are included in the deployment and created in the correct order of dependency (e.g., a sequence is created before the table that uses it).
• Verify that AWS DMS has reached a steady state and all existing data has been replicated to the new server before cutting off access to the old application in preparation for the cutover.
• Verify that database maintenance jobs are in place, such as backups and index maintenance.
• Turn on Multi-AZ if required.
• Verify that monitoring is in place.
• AWS provides several services to make deployments easier and trouble-free, such as AWS CloudFormation, AWS OpsWorks, and AWS CodeDeploy. These services are especially helpful for deploying and managing stacks involving multiple AWS resources that must interact with each other, such as databases, web servers, load balancers, IP addresses, VPCs, and so on. These services enable you to create reusable templates to ensure that environments are identical. For example, when setting up the first development environment, you may complete some tasks manually via the AWS Management Console, AWS CLI, PowerShell, etc. Instead of tracking these items manually to ensure they are created in the staging environment, resources in the running development environment can be included in the template, and the template can then be used for setting up the staging and production environments.
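The steady-state verification step described above can be scripted rather than checked by hand. A minimal sketch, assuming a single replication task; the ARN is a placeholder:

```shell
# Hypothetical sketch: confirm a DMS task has finished its full load before
# cutover by inspecting its status and full-load progress.
aws dms describe-replication-tasks \
    --filters Name=replication-task-arn,Values=arn:aws:dms:us-east-1:123456789012:task:EXAMPLE \
    --query 'ReplicationTasks[0].[Status,ReplicationTaskStats.FullLoadProgressPercent]'
```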
Post-Deployment Monitoring Best Practices
• Create CloudWatch alarms and notifications to monitor for unusual database activity and send alerts to notify production staff if the AWS instance is not performing well. High CPU utilization, disk latency, and high RAM usage can be indicators of missing indexes or other performance bottlenecks.
• Monitor logs and exception reports for unusual activity and errors.
• Determine whether there are additional platform-specific metrics to capture and monitor, such as capturing locks from the pg_locks catalog table on the Amazon Redshift platform. Amazon Redshift also allows viewing running queries from the AWS Management Console.
• Monitor instance health. CloudWatch provides more metrics for an RDS instance than for an EC2 instance, and these may be sufficient for monitoring instance health. For an EC2 instance, consider installing a third-party monitoring tool to provide additional metrics.
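The alarm guidance above can be sketched with the CLI. The instance identifier, thresholds, and the SNS topic ARN are placeholders, not from this paper:

```shell
# Hypothetical sketch: alarm on sustained high CPU for an RDS instance and
# notify an assumed SNS topic. Names, thresholds, and ARNs are placeholders.
aws cloudwatch put-metric-alarm \
    --alarm-name rds-appdb-high-cpu \
    --namespace AWS/RDS \
    --metric-name CPUUtilization \
    --dimensions Name=DBInstanceIdentifier,Value=appdb \
    --statistic Average --period 300 --evaluation-periods 3 \
    --threshold 80 --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```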
Conclusion
The AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS) make the process of moving applications to the cloud much easier and faster than manual conversion alone. Together they save many hours of development during the migration effort, enabling you to reap the benefits of AWS more quickly.

Document Revisions
March 9, 2021: Reviewed for technical accuracy
November 2019: Updated to reflect latest features and functionality
December 2016: First publication",General,consultant,Best Practices
Migrating_AWS_Resources_to_a_New_Region,"Migrating AWS Resources to a New AWS Region
July 2017
This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Abstract 5
Introduction 1
Scope of AWS Resources 1
AWS IAM and Security Considerations 1
Migrating Compute Resources 2
Migrating Amazon EC2 Instances 2
Considerations for Reserved Instances 9
Migrating Networking and Content Delivery Network Resources 10
Migrating Amazon Virtual Private Cloud 11
Migrating AWS Direct Connect Links 13
Using Amazon Route 53 to Aid the Migration Process 13
Migrating Amazon CloudFront Distributions 15
Migrating Storage Resources 16
Migrating Amazon S3 Buckets 16
Migrating Amazon S3 Glacier Storage 19
Migrating Amazon Elastic File System 20
Migrating AWS Storage Gateway 21
Migrating Database Resources 22
Migrating Amazon RDS Services 22
Migrating Amazon DynamoDB 24
Migrating Amazon SimpleDB 25
Migrating Amazon ElastiCache 26
Migrating Amazon Redshift 26
Migrating Analytics Resources 27
Migrating Amazon Athena 27
Migrating Amazon EMR 28
Migrating Amazon Elasticsearch Service 28
Migrating Application Services and Messaging Resources 28
Migrating Amazon SQS 29
Migrating Amazon SNS Topics 30
Migrating Amazon API Gateway 30
Migrating Deployment and Management Resources 31
Migrating with AWS CloudFormation 31
Capturing Environments by Using CloudFormer 32
API Implications 32
Updating Customer Scripts and Programs 33
Important Considerations 33
Conclusions 33
Contributors 33
Document Revisions 34

Abstract
This document is intended for experienced customers of Amazon Web Services who want to migrate existing resources to a new AWS Region. You might want to migrate for a variety of reasons. In particular, if a new region becomes available that is closer to your user base, you might want to locate various services geographically closer to those users. This document is not intended to be a "step-by-step" or "definitive" guide. Rather, it provides a variety of options and methods for migrating various services that you might require in a new region.

Introduction
Amazon Web Services (AWS) provides a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers hundreds of thousands of businesses in 190 countries around the world. For many AWS services, you can choose the region from which you want to deliver those services. Each region has multiple Availability Zones. By using separate Availability Zones, you can additionally protect your applications from the failure of a single location. By using separate AWS Regions, you can design your application to be closer to your customers and achieve lower latency and higher throughput. AWS has designed the regions to be isolated from each other so that you can achieve greater fault tolerance and improved stability in your applications.

Scope of AWS Resources
While most AWS services
operate within a region, the following services operate across all regions and require no migration:
• AWS Identity and Access Management (AWS IAM)
• AWS Management Console
• Amazon CloudWatch
Further, because all services are accessible using API endpoints, you do not necessarily need to migrate all components of your architecture to the new region, depending on your application. For example, you can migrate Amazon Elastic Compute Cloud (Amazon EC2) instances but retain existing Amazon Simple Storage Service (Amazon S3) and Amazon CloudFront configurations. When planning a migration to a new region, we recommend that you check what AWS products and services are available in that region. An updated list of AWS product and service offerings by region is available here.1

AWS IAM and Security Considerations
AWS IAM enables you to securely control access to AWS services and resources for your users. IAM users are created and managed within the scope of an AWS account rather than a particular region. No migration of users or groups is required.
When migrating to a new region, it is important to note any defined policy restrictions on IAM users. For example, Amazon Resource Names (ARNs) might restrict you to a specific region. For more information, see IAM Identifiers in the AWS Identity and Access Management User Guide.2
IAM is a core security service that enables you to add specific policies to control user access to AWS resources. Some policies can affect:
• Time-of-day access (which can require consideration due to time zone differences)
• Use of new originating IP addresses
• Whether you need to use SSL connections
• How users are authenticated
• Whether you can use multi-factor authentication (MFA) devices
Because IAM underpins security, we recommend that you carefully review your security configuration, policies, procedures, and practices before a region migration.
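As a first step in the availability check recommended above, the regions visible to your account can be listed from the CLI; a minimal sketch:

```shell
# Hypothetical sketch: enumerate the regions enabled for this account
# before choosing a migration target.
aws ec2 describe-regions --query 'Regions[].RegionName' --output text
```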
Migrating Compute Resources
This section covers the migration of compute services such as Amazon EC2 and other closely associated services for security, storage, load balancing, and Auto Scaling.

Migrating Amazon EC2 Instances
Amazon EC2 is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. Migrating an instance involves copying the data and images, ensuring that the security groups and SSH keys are present, and then restarting fresh instances.

SSH Keys
AWS does not keep any of your user SSH private keys after they are generated. The public keys are made available to Amazon EC2 instances when they are running. (Under Linux operating systems, these are normally copied into the relevant user's ~/.ssh/authorized_keys file.)
Figure 1: Key pairs in the AWS Management Console
You can retrieve a fingerprint of each key from the application programming interface (API), software development kit (SDK), command line interface (CLI), or the AWS Management Console. SSH public keys are only stored per region. AWS does not copy or synchronize your configured SSH keys between regions. It is up to you to determine whether you will use separate SSH keys per region or the same SSH keys in several regions.
Note: You can log in to an existing Linux instance in the source region, obtain a copy of the public key (from ~/.ssh/authorized_keys), and import this public key into the target region.
It is important to know that Auto Scaling launch configurations and AWS CloudFormation templates might refer to SSH keys using the key pair name. In these cases, you must take care either to update any Auto Scaling launch configuration or AWS CloudFormation template to use keys that are available in the new region, or to deploy the public key with the same key pair name to the new region. For more information, see AWS Security Credentials in the AWS General Reference.3
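The key import described in the note above can be done with the CLI. A hedged sketch; the key name, region, and public key file path are placeholders:

```shell
# Hypothetical sketch: import a public key retrieved from a source-region
# instance into the target region under the same key pair name.
aws ec2 import-key-pair \
    --region eu-west-1 \
    --key-name my-app-key \
    --public-key-material fileb://~/.ssh/id_rsa.pub
```

Keeping the key pair name identical in both regions avoids having to edit launch configurations and templates that reference the key by name.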
Security Groups
Security groups in Amazon EC2 restrict ingress traffic (or, in the case of a virtual private cloud, or VPC, ingress and egress traffic) to a group of EC2 instances. Each rule in a security group can refer to the source (or, in a VPC, the destination) by either a CIDR-notation IPv4 address range (a.b.c.d/x) or a security group identifier (sg-XXXXXXXX).
Figure 2: Security group configuration in the AWS Management Console
Each security group can exist within the scope of only one region. The same name can exist in multiple regions but have different definitions of what traffic is permitted to pass. Every instance being launched must be a member of a security group. If a host is being started as part of an Auto Scaling launch configuration or an AWS CloudFormation template, the required security group must exist. (AWS CloudFormation templates might define the security group to be created as part of the template.) It is vital that you review your configured security groups to ensure that the required level of network access restriction is in place. To export a copy of the definitions of existing security groups (using the legacy command line tools), run the following command:
ec2-describe-group -H --region <region> > security_groups.txt
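The legacy API tools command above has a modern AWS CLI equivalent; a sketch (the region name is a placeholder):

```shell
# Hypothetical sketch: dump security group definitions from the source region
# as JSON so they can be reviewed and recreated in the target region.
aws ec2 describe-security-groups --region us-east-1 > security_groups.json
```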
Scaling launch configuration and AWS CloudFormation templates If you plan to use Auto Scaling or AWS CloudFormation you need to update the AMI ID references to match the ones that exist in the target region Migration of AMIs across regions is supported using the EC2 AMI Copy function 5 AMI Copy enables you to copy an AMI to as many regions as you want from the AWS Management Console the Amazon EC2 CLI or the Amazon EC2 API AMI Copy is available for AMIs bac ked by Amazon Elastic Block Store (EBS ) and instance store backed AMIs and is operating system agnostic Archived Page 6 Each copy of an AMI results in a new AMI with its own unique AMI ID Any changes made to the source AMI du ring or after a copy are not propagated to the new AMI as part of the AMI copy process You must recopy the AMI to the target regions to copy the changes made to the source AMI Note : Permissions and user defined tags applied to the source AMI are not copied to the new AMIs as part of the AMI copy process After the copy is complete you can apply any permissions and user defined tags to the new AMIs Amazon EBS Volumes Amazon EBS is a block storage volume that can be presented to an EC2 instance You can format a n EBS volume with a specific file system type such as NTFS Ext4 XFS etc EBS volumes can contain the operating system boot volum e or be used as an additional data drive (Windows) or mount point (Linux) You can migrate EBS volumes using the cross region EBS snapshot copy capability 6 This enables you to copy snapshots of EBS volumes between regions using either the AWS Management Console API call or command line EBS Snapshot Copy offers the following key capabilities : • The AWS Management Console shows you the progress of a snapshot copy in progress where you can check the percentage complete d • You can initiate multiple EBS Snapshot Copy commands simultaneously either by selecting and copying multiple snapshots to the same region or by copying a snapshot to multiple regions in 
parallel The in progress copies do not affect the performance of the associated EBS volumes • The console based interface is push based You log in to the source region and tell the console where you'd like the snapshot to end up The API and the command line are by contrast pull based You must run them within the target region The entire process takes place without the need to use external tools or perform any configuration H ere is a h ighlevel overview of the migration process: 1 Identify relevant EBS volumes to migrate (you can choose to use tagging to assist in identification) Archived Page 7 2 Identify which volumes can be copied with the application running and which require you to pause or shut down the application EBS Snapshot Copy accesses a snapshot of the primary volume rather than the volume itself Therefore you might need to shut down the application during the copy process to ensure the latest data is copied across 3 Create the necessary EBS snapshots and wait for their st atus to be “Complete” 4 Initiate the EBS Snapshot Copy feature using either the AWS Management C onsole API or CLI 5 Create EBS volumes at the target region by selecting the relevant snapshots and using the “create volume from snapshot” functionality Volu mes and Snapshots Amazon EBS volumes can currently be from 1 GB to 1 6 TB in size (in 1 GB increments ) They can be used with disk management tools such as Logical Volume Manager (LVM) or Windows Disk Manager to span or stripe across multiple block device s You can stripe multiple EBS volumes together to deliver higher performance storage volumes to applications Volumes that are in constant use might benefit from having a snapshot taken especially if there are multiple volumes being used in RAID 1 stripe or part of an LVM volume group Provisioned IOPS volumes are another way to increase EBS performance These volumes are designed to deliver predictable high performanc e for I/O intensive workloads such as databases To enable EC2 
instances to fully use the IOPS provisioned on an EBS volume you can launch selected EC2 instance types as “EBS Optimized” instances Before a region migration we recommend that you check tha t these instances are supported in the Availability Zones in the target region For more information about for getting optimal performance from your EBS volumes see Amazon EBS Performance Tips in the Amazon EC2 User Guide7 Archived Page 8 Elastic IP Addresses Elastic IP addresses are assigned to an account from the pool of addresses for a given region As such an Elastic IP address cannot be migrated between regions We recommend you update the timetolive (TTL) value on your Domain Name System (DNS ) server that points to this Elastic IP address and reduce it to an amount that is a tolerable delay in DNS cache expire such as 300 seconds (five minutes) or less Any decrease in DNS TTL could result in an increase in DNS requests increase load on your current DNS service and affect charges from y our DNS service provider You can make DNS changes more optimally by taking a staged approach to TTL modifications For example: • The c urrent TTL for wwwexamplecom (which points to an Elastic IP address) is 86 400 seconds (one day) • Modify the TTL for wwwexamplecom to 300 seconds ( five minutes) and schedule work for two days’ time • Monitor the increase of DNS traffic in this period • At the start of the day of the sched uled work reduce the TTL for wwwexamplecom further Later optionally reduce the TTL more depending upon load on your DNS infrastructure (possibly 10 seconds) • 10 minutes after the last change update the A re cord to point to a new Elastic IP address in the new region • After a short period confirm that traffic is being adequately serviced and then increase the TTL back to five minutes (300 seconds) • After another period of operation return the TTL to normal level Elastic Load Balancing Elastic Load Balancing (ELB) automatically distributes incoming application traffic 
across multiple EC2 instances You cannot migrate ELB to a new region Instead you must launch a new ELB service in the target region that contain s a new set of EC2 instances spanning the Availability Zones you want within the service group Archived Page 9 Before a region migration we recommend that you review the source and target Availability Zones to confirm that matching levels of zones exist In scenarios where you discover extra Availability Zones you might need to revise application load balancing and scalability This could lead to further assessments of CloudWatch alarms an d thresholds that are used for Auto Scaling group configuration Furthermore you must add associated SSL certificates on the old ELB service to the new ELB service You must also add h ealth check conditions to verify EC2 instance health tests Launch Conf igurations and Auto Scaling Groups Auto Scaling allows you to scale your Amazon EC2 capacity up or down automatically according to conditions you define 8 You can view the current Auto Scaling and launch configuration definitions from the AWS Management C onsole Alternatively you can use the following commands to capture this information : asdescribe autoscalinggroups –H –region > autoscale_groupstxt asdescribe launchconfigs –H –region > launch_conf igstxt These extracted Auto Scaling groups and launch configuration settings reference AMIs security groups and SSH key pairs as they exist in the source region See the earlier sections on migrating these resources to the t arget region Then create new Auto Scaling groups and launch configurations in the target region using new AMI IDs and security groups For more information on Auto Scaling groups and launch configurations see Getting Started with Auto Scaling in the Auto Scaling User Guide 9 Considerations for Reserved Instances Many customers take advantage of greatly reduced pricing of Reserved Instances for Amazon EC2 Amazon Redshift Amazon Relational Database Service (Amazon RDS) and 
Amazon EMR Amazon EC2 Standard Reserved Instances (or reserved cache nodes) are assigned to a specific instance type in a specific region for a period of one or three years while Amazon EC2 Archived Page 10 Convertible Reserv ed Instances give you the flexibility to change the instance type Reserved Instances are available in three payment options : All Upfront Partial Upfront and No Upfront The upfront cost and per hour charges vary between these utilization levels as well as between different geographic regions When you buy Reserved Instances the larger the upfront payment the greater the discount To maximize your savings you can pay all upfront and receive the largest discount Partial upfront Reserved Instances offer lower discounts but give you the option to spend less upfront You can also choose to spend nothing upfront and receive a smaller discount allowing you to free up capital to spend in other projects If you purchased Reserved Instances for EC2 and want to migrate them to a different region we recommend that you first sell them in the Reserved Instances Marketplace As soon as they are sold the billing switches to the new buyer and you are no longer billed for the Reserved Instances The buyer then conti nues to pay the remainder of the term To get savings over On Demand Instances you can either buy Reserved Instances for a shorter term in the region where you are migrating from the Reserved Instance Marketplace or purchase directly from AWS Reserved Instance Marketplace makes it easy to “migrate” your billing to a new region For more detail ed information about how to buy and sell Reserved Instances see Buying in the Reserved Instance Marketplace 10 and Amazon EC2 Rese rved Instance Marketplace 11 We recommend that you carefully assess cost implications of the purchas e and sale of Reserved Instances before undertaking a migration to a new region Migrating Networking and Content Delivery Network Resources This section cover s the migration of network 
resources such as subnets route tables virtual private network s access control lists and Domain Name Systems Archived Page 11 Migrating Amazon Virtual Private Cloud Amazon Virtual Private Cloud (VPC) lets you provision a private isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define When you create a VPC it exists within a region and spans all the Availability Zones in that region You cannot move or migrate it to a new region However you can create a new VPC in a target region and potentially use the same IP address ranges that the existing VPC uses You can list all VP Cs using the following command : aws ec2 describe vpcs A VPC consists of multiple components which you need to recreate in the target region • Subnets You must recreate the same subnets in the target VPC You can list all subnets in a VPC using the following command: aws ec2 describe subnets filters ""Name=vpc idValues=vpc abcd1234"" • DHCP option set If you have a customized DHCP option set you must recreate it in the target region You can get details of the DHCP option set using the following com mand: aws ec2 describe dhcpoptions • Internet gateways You must recreate internet gatewa ys in the target region You can do this using the following commands: aws ec2 create internet gateway aws ec2 attach internet gateway vpcid vpcabcd1234 The interne t gateway resource IDs returned by the previous command are used in the route tables Archived Page 12 • NAT gateway s You must recreate a NAT gateway in the VPC of the target region You can list all the NAT gateway s using the following command: aws ec2 describe natgateways • Route tables You must recreate the route tables in the target region You can list all route tables for a VPC using the following command: aws ec2 describe routetables filters ""Name=vpc idValues=vpc abcd1234"" Note : As the resourc e IDs for gateways (internet gateway NAT gateway etc) change in the target range be sure you use the new 
resource IDs • Security groups You must recreate the security groups in the target region You can list all security groups using the following command: aws ec2 describe security groups • Network access control lists (ACLs) You must recreate the network ACL if you have made changes to it You can list the network ACLs for a VPC using the following co mmand: aws ec2 describe networkacls filters ""Name=vpc idValues=vpc 0f7ec66a"" • Customer gateways: You must recreate the customer gateway in the target region You can list all the customer gateways using the following command: aws ec2 describe customer gateways • Virtual private gateway s You must recreate the virtual private gateway in the target region You can list all the virtual private gateways using the following command: Archived Page 13 aws ec2 describe vpngateways • VPN Details about creating a VPN with the target region can be found here12 Migrating AWS Direct Connect Links AWS Direct Connect is a service that links physical infrastructure to AWS services One or more fiber connections are provisioned in a Direct Connect location facility If you want to provision new link s in a new region you must request a new Direct Connect service and provision any tail circuits to their infrastructure Charges for Direct Connect vary per geographic location You can terminate e xisting connections at any time when they’re no longer required AWS has relationships with several different peering partners in each geographic region You can find an updated list of AWS Direct Connect Amazon Partners that can assist with service provisioning at http://awsamazoncom/directconnect/partners/ Using Amazon Route 53 to Aid t he Migration Process Amazon Route 53 is a highly available DNS service that is available from all AWS Regions and edge locations worldwide DNS can be very effective when managing a migration scenario as it can assist in gracefully migrating traffic from one location to another by routing traffic by single cutover or 
gradually By adding new DNS records for copy ing an application in the destination region you can test access to the application and choose whe n to cut over to the new site or region One approach is to use weighted resource record sets This functionality enables you to determine what percentage of traffic to route to each particular address when using the same DNS name For example use the fol lowing configuration to route all traffic to the existing region and none to the new region Archived Page 14 wwwmysitecom CNAME elbnamesourceregioncom 100 wwwmysitecom CNAME elbnamedestinationregioncom 0 When it is time to perform the migration the weighting on these records is flipped as follows wwwmysitecom CNAME elbnamesourceregioncom 0 wwwmysitecom CNAME elbnamedestinationregioncom 100 This causes all new DNS requests to resolve to the new region Note: Some clients might continue to use the old address if they have cached their DNS resolution if a long TTL exists or if a TTL update has not been honored Figure 4 Using Amazon Route 53 to facilitate region migration Archived Page 15 It is also possible to perform gradual cutover using varied weightings as long as the application supports a dual region operational model For more information see Working with Resource Record Sets in the Amazon Route 53 User Guide 13 Migrating Amazon CloudFront Distributions Amazon CloudFront is a content delivery service that operates from the numerous AWS edge locations worldwide CloudFront delivers customer data in configuration sets known as distribution s Each distribution has one configured origin but can have more as in the case of cache behaviors Each origin can be an S3 bucket or a web server including web servers running from within EC2 (in any AWS Region worldwide) To update an origin in the CloudFront console 1 Move your origin server or S3 bucket to the new region by referring to the relevant section of this document for EC2 instances or S3 buckets 2 In the CloudFront console select 
Migrating Amazon CloudFront Distributions

Amazon CloudFront is a content delivery service that operates from numerous AWS edge locations worldwide. CloudFront delivers customer data in configuration sets known as distributions. Each distribution has one configured origin, but can have more, as in the case of cache behaviors. Each origin can be an S3 bucket or a web server, including web servers running on EC2 (in any AWS Region worldwide).

To update an origin in the CloudFront console:

1. Move your origin server or S3 bucket to the new region by referring to the relevant section of this document for EC2 instances or S3 buckets.
2. In the CloudFront console, select the distribution, and then choose Distribution Settings.
3. On the Origins tab, choose the origin to edit (there can be only one), and then choose Edit.
4. Update Origin Domain Name with the new server or bucket name.
5. Choose Yes, Edit.

For more information, see Listing, Viewing, and Updating CloudFront Distributions in the Amazon CloudFront Developer Guide.14

Migrating Storage Resources

This section covers the migration of services used for object storage, file storage, and archiving.

Migrating Amazon S3 Buckets

Amazon S3 provides a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. When you create an S3 bucket, it resides physically within a single AWS Region, and network latency can affect access when the bucket is accessed from another, remote region. Pay careful attention to any references to S3 buckets and their geographic distribution, as this can introduce latency.

To migrate an S3 bucket, you need to create a new S3 bucket in the target region and copy the data to it. The new bucket requires a universally unique name and cannot have the same name as the bucket in the source region. If your goal is to preserve the bucket name when copying the bucket between accounts, you need to perform an intermediate step: copy the data to an intermediary bucket first, and then delete the source bucket. After the source bucket is deleted, you must wait about 24 hours until the name becomes available again. Then create a bucket with the old name in the new account and transfer the data using the same method as before. For more information about Amazon S3 bucket naming rules, see Bucket Restrictions and Limitations in the Amazon S3 User Guide.15
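The intermediary-bucket flow can be sketched as a sequence of AWS CLI calls. The following builds that sequence; the bucket names and region are hypothetical placeholders, and the roughly 24-hour wait for the name to become available still applies between deleting the bucket and recreating it.

```python
# Sketch: generate the AWS CLI steps for migrating an S3 bucket to a new
# region while preserving its name. Bucket names and region are
# hypothetical; run the commands in order, pausing where noted.

def migration_plan(bucket, intermediary, region):
    return [
        f"aws s3 mb s3://{intermediary} --region {region}",
        f"aws s3 sync s3://{bucket} s3://{intermediary}",
        f"aws s3 rb s3://{bucket} --force",
        # Wait (up to ~24 hours) for the bucket name to become available,
        # then recreate it in the target region and copy the data back:
        f"aws s3 mb s3://{bucket} --region {region}",
        f"aws s3 sync s3://{intermediary} s3://{bucket}",
        f"aws s3 rb s3://{intermediary} --force",
    ]

for step in migration_plan("my-bucket", "my-bucket-temp", "eu-west-1"):
    print(step)
```

`aws s3 sync` only copies objects that are missing or changed in the destination, so the second sync is also a convenient way to verify the first copy completed.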
Virtual Hosting with Amazon S3 Buckets

You might be hosting websites through the static website hosting feature of Amazon S3. For more information, see Hosting a Static Website on Amazon S3 in the Amazon S3 User Guide.16

For simplicity and user friendliness, customers often use a DNS CNAME alias for their web content hosted on Amazon S3, mapping a URL such as http://bucketname.s3.amazonaws.com to http://mybucketname.com. Through a CNAME alias, the specific Amazon S3 URL endpoint is abstracted from the web browser. For more information, see Virtual Hosting of Buckets in the Amazon S3 User Guide.17

When you migrate an S3 bucket that was previously used as a static website to a new AWS Region, you need to preserve the bucket name when copying the bucket between regions. First, copy the data to an intermediary bucket, and then delete the source bucket. After the source bucket is deleted, it might take some time before you can reuse that name to create a new bucket in the destination region. After the bucket name becomes available, create the bucket in the new region with the old name, and then transfer the data using the same method described previously. For more information, see How can I migrate my Amazon S3 bucket to another AWS Region?18
Moving Objects Using the AWS Management Console

The AWS Management Console gives you the ability to copy or move multiple objects between S3 buckets. By manually selecting one or more objects and choosing Cut or Copy from the pop-up menu, you can paste or move these items into a target S3 bucket in another geographic region.

Figure 5: Copy an Amazon S3 object using the AWS Management Console

Copying or Moving Objects Using Third-Party Tools

To copy or move Amazon S3 objects between buckets, you can use a variety of third-party tools. You can look for AWS Partner products by searching for "Storage ISV" using the AWS Partner Solutions Finder.19

Copying Using the Amazon API and SDK

You can copy or move Amazon S3 objects between buckets programmatically through the Amazon SDKs and APIs. For more information about Amazon S3 object-level operations, see Operations on Objects in the Amazon S3 API Reference.20

To speed up the object copying process, you can use the PUT Object - Copy operation, which performs a GET and then a PUT in a single API call and can copy directly to a destination bucket. For more information, see PUT Object - Copy in the Amazon S3 API Reference.21

You can also use the S3DistCp tool with Amazon EMR to efficiently copy large amounts of data from an S3 bucket in the source region to an S3 bucket in the target region. We recommend this for large buckets, because it can significantly decrease the overall migration time. S3DistCp is an extension of the open-source tool DistCp that is optimized to work with AWS, particularly Amazon S3. S3DistCp uses an EMR cluster to transfer the data, so you will incur additional charges for the EMR cluster. You can reduce the cost of running the EMR cluster by using Spot Instances, as outlined here.22 Find more details on S3DistCp here.23
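Programmatic copying can be sketched as a loop over ListObjectsV2 pages that issues one CopyObject (a server-side copy; the data never transits the caller) per key. The function takes the client as a parameter, so a boto3 S3 client can be passed in; the bucket names are placeholders.

```python
# Sketch: copy every object from a source bucket to a destination bucket
# using the S3 CopyObject operation. `client` can be a boto3 S3 client.

def copy_bucket(client, source_bucket, dest_bucket):
    copied = []
    continuation = None
    while True:
        kwargs = {"Bucket": source_bucket}
        if continuation:
            kwargs["ContinuationToken"] = continuation
        page = client.list_objects_v2(**kwargs)
        for obj in page.get("Contents", []):
            # Server-side copy: S3 performs the GET and PUT internally.
            client.copy_object(
                Bucket=dest_bucket,
                Key=obj["Key"],
                CopySource={"Bucket": source_bucket, "Key": obj["Key"]},
            )
            copied.append(obj["Key"])
        if not page.get("IsTruncated"):
            return copied
        continuation = page["NextContinuationToken"]
```

Because the client is injected rather than created inside the function, the loop can be exercised against a stub in tests and pointed at a real client in production.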
You can also use the Amazon S3 cross-region replication feature to replicate data across AWS Regions. With cross-region replication, every object uploaded to the S3 bucket in the source region is automatically replicated to a bucket in the target region. Data that exists in the S3 bucket before you enable cross-region replication is not replicated. To migrate existing data, you can write a script that updates the underlying metadata or ACLs on the objects in the source bucket, which triggers replication to the destination bucket. You can find more details on using cross-region replication here.24

Migrating Amazon S3 Glacier Storage

Amazon S3 Glacier is the AWS deep archive storage service. It is designed to handle large volumes of data that are infrequently accessed. With Amazon S3 Glacier you have multiple options for retrieving data, depending on the urgency of the requirement: Expedited, Standard, and Bulk retrieval. You can find details on pricing options here.25 For Standard retrieval, Amazon S3 Glacier offers free retrieval of up to 10 GB per month; retrieving more than this amount of data incurs additional charges.

The process used to retrieve data from Amazon S3 Glacier depends on the way the data was archived, as follows.

If an Amazon S3 lifecycle policy was used to transition data from Amazon S3 to Amazon S3 Glacier, the storage class of these objects is GLACIER and you can access them only via the Amazon S3 console or APIs:

1. Use the Amazon S3 console or APIs to restore a temporary copy of each archived object to Amazon S3. Specify the number of days that you want the temporary copy to be available. During this period, you will incur storage charges for both Amazon S3 Glacier and the temporary copy.
2. Copy the S3 data from the source region to the target region using the steps in Migrating Amazon S3 Buckets in this whitepaper.
3. Configure an Amazon S3 lifecycle policy in the target region to transition the data from Amazon S3 to Amazon S3 Glacier.
4. Delete the data stored in Amazon S3 Glacier in the source region by updating the Amazon S3 lifecycle policy.
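The lifecycle transition in step 3 can be sketched as a configuration document. The prefix and day count below are hypothetical, and the dictionary is shaped for boto3's put_bucket_lifecycle_configuration call.

```python
# Sketch: build a lifecycle configuration that transitions objects under a
# prefix to the GLACIER storage class after a number of days. Prefix and
# day count are hypothetical placeholders.

def glacier_transition_rule(prefix="archive/", days=30):
    return {
        "Rules": [{
            "ID": f"transition-{prefix.rstrip('/')}-to-glacier",
            "Filter": {"Prefix": prefix},
            "Status": "Enabled",
            "Transitions": [{"Days": days, "StorageClass": "GLACIER"}],
        }]
    }

config = glacier_transition_rule()
# With boto3 (bucket name is a placeholder):
# s3.put_bucket_lifecycle_configuration(Bucket="my-target-bucket",
#                                       LifecycleConfiguration=config)
```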
If the Amazon S3 Glacier APIs were used to store the data in archives in vaults:

1. Initiate an archive retrieval job to request that Amazon S3 Glacier prepare an entire archive, or a portion of the archive, for subsequent download.
2. After the retrieval job completes, download the bytes to a staging area. If you are using Amazon S3 as your staging area and your archive is greater than 5 TB, you need to use byte ranges to limit the output size to less than 5 TB. Although Amazon S3 Glacier supports individual archives of up to 40 TB, Amazon S3 has an object size limit of 5 TB.
3. Transfer the data from the staging area to Amazon S3 Glacier in the target region using the Amazon S3 Glacier APIs. Alternatively, if you use Amazon S3 as your staging area in the source region, you can use tools such as S3DistCp to copy the data to Amazon S3 in the target region, and then use the Amazon S3 Glacier APIs to recreate the archive in Amazon S3 Glacier in the target region.
4. Delete the temporary files created in the staging area, and delete the archive from Amazon S3 Glacier in the source region.
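The byte-range bookkeeping in step 2 can be sketched as follows. This assumes binary (TiB) units for the 5 TB object limit and uses a hypothetical 12 TiB archive as the example; adjust the chunk size if you prefer decimal units.

```python
# Sketch: split a large Glacier archive into byte ranges no larger than
# the Amazon S3 object size limit, so each retrieved range can be staged
# as a separate S3 object. Sizes are in bytes.

S3_MAX_OBJECT = 5 * 1024**4  # 5 TiB, assumed here for the 5 TB limit

def byte_ranges(archive_size, chunk=S3_MAX_OBJECT):
    """Return inclusive (start, end) pairs covering the whole archive."""
    ranges = []
    start = 0
    while start < archive_size:
        end = min(start + chunk, archive_size) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# A hypothetical 12 TiB archive needs three ranges:
parts = byte_ranges(12 * 1024**4)
```

Each (start, end) pair maps directly to the range parameter of a Glacier archive-retrieval job, and each retrieved part stays under the S3 object limit.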
Migrating Amazon Elastic File System

Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with EC2 instances in the AWS Cloud. With Amazon EFS, storage capacity is elastic: it grows and shrinks automatically as you add and remove files, so your applications have the storage they need, when they need it. Amazon EFS file systems can automatically scale from gigabytes to petabytes of data without needing to provision storage. Amazon EFS uses the NFSv4.1 protocol and is accessible from Linux-based AMIs.

You have two options for migrating data stored in Amazon EFS from one region to another:

• Copy the files from Amazon EFS to Amazon EBS. If the data in Amazon EFS is more than a single EBS volume (maximum size of 16 TB) can accommodate, you might need to use third-party software to distribute the data across multiple EBS volumes. Then migrate the EBS volumes using the cross-region EBS snapshot copy capability, and copy the files from EBS to EFS in the target region.

• Copy the files from EFS to S3, and then use the process described earlier for copying data in S3 buckets from the source region to the target region. In the target region, copy the files from S3 to EFS.

After confirming the successful migration, make sure to delete the EFS files in the source region and the temporary resources (S3 and EBS) used in the transfer, to avoid incurring charges for services you are not using.

Migrating AWS Storage Gateway

The AWS Storage Gateway service helps you seamlessly integrate your existing on-premises storage infrastructure and data with the AWS Cloud. It uses industry-standard storage protocols to connect existing storage applications and workflows to AWS Cloud storage services for minimal process disruption. It maintains frequently accessed data on premises to provide low-latency performance, while securely and durably storing data in Amazon S3, Amazon EBS, or Amazon S3 Glacier. After creating a gateway in the new region, you can migrate your data stored in Amazon S3 or Amazon EBS using the native migration capabilities of these services, detailed elsewhere in this paper. The approach you take to migrate Storage Gateway depends on the interface you used in the source region:

• File interface: Create a file storage gateway pointing to the target region.26 Create an S3 bucket in the target region and copy the data from the S3 bucket in the source region using the process defined earlier. You can also enable S3 cross-region replication to ensure that any updates to the S3 bucket in the source region are automatically replicated to the target region. Create a Storage Gateway file share on the S3 bucket in the target region to access your S3 files from your gateway.27 You can update the inventory of objects maintained and stored on the gateway by initiating a refresh28 from the AWS Storage Gateway console or by using the RefreshCache operation in the API Reference.29
• Volume interface: Create a volume gateway pointing to the target region.30 Create an EBS snapshot of the volume in the source region, and copy it to the target region using the cross-region EBS snapshot copy capability. Then create a Storage Gateway volume in the target region from the EBS snapshot.31

• Tape interface: Archived tapes are stored in an archive that provides offline storage. You must first retrieve the tape from the archive back to your gateway, and then from the gateway to your client machine. More details on these steps can be found here.32 Once you retrieve the tape data to your client machine, you can store the same data in the target region by creating a tape gateway33 that points to the target region.

You should clean up the gateway resources associated with your source region to avoid incurring charges for resources you don't plan to continue using.34

Migrating Database Resources

This section covers migration of database services for relational databases, NoSQL, caching, and data warehousing.

Migrating Amazon RDS Services

Amazon RDS is a web service that makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you to focus on your applications and business.

Database Security Groups

Amazon RDS has its own set of security groups that restrict access to the database service using either a CIDR-notation IPv4 network address or an Amazon EC2 security group. Each Amazon RDS security group has a name and exists in only one AWS Region (just as an Amazon EC2 security group does).

Database Instances and Data

The steps required for migrating Amazon Aurora are different from those for the other RDS database engines, such as Oracle or MySQL. Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases.
You can create an Amazon Aurora database (DB) cluster as a Read Replica in the target region. Read Replicas can be created for encrypted and unencrypted DB clusters; you must encrypt the Read Replica if the source DB cluster is encrypted. When you create the Read Replica, Amazon RDS takes a snapshot of the source cluster and transfers the snapshot to the Read Replica in the target region. For each data modification made in the source databases, Amazon RDS transfers data from the source region to the Read Replica in the target region. You can find more details on the steps required for replicating data across regions here.35

For database engines other than Aurora, you can use the AWS Database Migration Service to migrate databases from the source region to the target region.36 Alternatively, you can follow the steps below. You might need to schedule downtime for the application to quiesce the data, move the database, and resume operation. Here is a high-level overview of the migration process:

1. Stop all transactions or take a snapshot (however, changes after this point in time are lost and might need to be reapplied to the target Amazon RDS DB instance).
2. Using a temporary EC2 instance, dump all data from Amazon RDS to a file:
   o For MySQL, use the mysqldump tool. You might want to compress this dump (see bzip2 or gzip).
   o For Microsoft SQL Server, use the bcp utility to export data from the Amazon RDS SQL Server DB instance into files. You can use the SQL Server Generate and Publish Scripts Wizard to create scripts for an entire database or for selected objects only.37 Note: Amazon RDS does not support Microsoft SQL Server backup file restores.
   o For Oracle, use the Oracle Export/Import utility or the Data Pump feature (see http://aws.amazon.com/articles/AmazonRDS/4173109646282306).
   o For PostgreSQL, use the pg_dump command to export data.
3. Copy this data to an instance in the target region using standard tools such as scp, FTP, or rsync.
4. Start a new Amazon RDS DB instance in the target region, using the new Amazon RDS security group.
5. Import the saved data.
6. Verify that the database is active and your data is present.
7. Delete the old Amazon RDS DB instance in the source region.

For more information about importing data into Amazon RDS, see Importing Data into a DB Instance in the Amazon RDS User Guide.38

Migrating Amazon DynamoDB

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. It is a fully managed cloud database and supports both document and key-value store models.

Here is a high-level overview of the process for migrating DynamoDB from one region to another:

• (Optional) If your source table is not receiving live traffic, you can skip this step. Otherwise, if your source table is being continuously updated, you must enable DynamoDB Streams to record these live writes while the table copy is ongoing. After the one-time table copy (described below) is complete, create a replication process that continuously consumes DynamoDB Stream records (generated from the source table) and applies them to the destination table. Continue until the DynamoDB table in the target region catches up to the DynamoDB table in the source region; at that point, all new writes should go to the DynamoDB table in the target region. For more information, see Capturing Table Activity with DynamoDB Streams in the Amazon DynamoDB Developer Guide.39

• Start the table copy process. You can do this in a few ways:

  o Use the Import/Export option available in the Amazon DynamoDB console, which exports data to Amazon S3 and then imports it into a different DynamoDB table. For more information, see Exporting and Importing DynamoDB Data Using AWS Data Pipeline in the Amazon DynamoDB Developer Guide.40
  o Use the custom Java DynamoDB Import Export Tool, available in the Amazon Web Services Labs repository on GitHub, which performs a parallel table scan and then writes the scanned items to the destination table.41

  o Write your own tool to perform the table copy, essentially scanning items in the source table and using parallel PutItem calls to write items into the destination table.

Whichever method you choose to migrate the data, consider how much read and write throughput the migration activity will require, and make sure you provision sufficient capacity, especially if the table is serving production traffic.

Migrating Amazon SimpleDB

Amazon SimpleDB is a highly available and flexible non-relational data store that offloads the work of database administration. Developers simply store and query data items via web service requests, and Amazon SimpleDB does the rest. To copy Amazon SimpleDB data between AWS Regions, you need to create a job or script that extracts the data from the Amazon SimpleDB domain in one region and copies it to the relevant destination in another region. This job should be hosted on an EC2 instance, and you should use the SDK that suits your purposes and expertise. Migration approaches include:

• Establishing simultaneous connections to the new and old domains, querying the existing domain for data, and then putting the data into the new domain.
• Extracting data from the existing domain, storing it in a file (or set of files), and then putting that data into the new domain.

We recommend that you use the BatchPutAttributes API call to increase performance and decrease the number of API calls performed. A third-party solution that might suit your needs is also available from http://backupsdb.com/.
web applications by allowing you to retrieve information from fast managed in memory data stores instead of relying entirely on slower diskbased databases ElastiCache supports two open source in memory engines : Redis and Memcached Here is an o verview of the steps required to migrate an Amazon ElastiCache cluster running Redis : 1 Take a manual b ackup of the ElastiC ache cluster More details for carry ing out a manual backup can be found here42 The backup consists of the cluster's metadata and all of the data in the cluster 2 Export the backup to Amazon S3 using the ElastiCache console the AWS CLI or the ElastiCache API More details for exporting the backup to Amazon S3 can be found here43 3 Copy the backup data from the S3 bucket in the source region to the target region using the process defined earlier 4 Restore the ElastiCache cluster from the backup in the target region The restore operation creates a new Redis cluster and popu lates it44 For an ElastiCache cluster using Memcache d the recommended approach is to start a new ElasticCache cluster and let it start to populate itself through application usage Migrating Amazon Redshift Amazon Redshift is a fast fully managed petabyte scale data warehouse that makes it simple and cost effective to analyze all your data using your existing business intelligence tools We recommend that you pause updates to the Amazon Redshift cluster during the migration process Archived Page 27 Here is a h ighlevel overview of the steps for moving the entire cluster: • Use crossregion snapshot functionality to create a snapshot in the target region Find more details for creating a crossregion snapshot here45 • Restore the cluster from the snapshot When you do Amazon Redshift creates a new cluster with all the snapshot data on the new cluster Find more details for restoring a cluster from a snapshot here46 Here is a h ighlevel overview of the steps for moving specific tables : 1 Connect to the Amazon Redshift cluster in the source 
1. Connect to the Amazon Redshift cluster in the source region and use the UNLOAD command to export data from Amazon Redshift to Amazon S3.
2. Copy your S3 data from the source region to the target region using the steps given earlier.
3. Create an Amazon Redshift cluster and the required tables in the target region.
4. Use the COPY command to load data from Amazon S3 into the required tables.

Migrating Analytics Resources

This section covers migration of analytics services for interactive query, Hadoop, and Elasticsearch.

Migrating Amazon Athena

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. We recommend that you run Athena in the same region where the S3 bucket resides; running Athena and Amazon S3 in different regions results in increased latency and inter-region data transfer costs. Therefore, first migrate your S3 data from the source region to the target region, and then run Athena against your S3 data in the target region.

Migrating Amazon EMR

Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable EC2 instances. The EMR cluster must be recreated in the target region. Migration of data from the source region to the target region depends on whether the data is stored in Amazon S3 or the Hadoop Distributed File System (HDFS). If the data is stored in Amazon S3, you can follow the steps given earlier to migrate the S3 data from the source region to the target region.

Here is a high-level overview of the migration process if your data is stored in HDFS:

• Use the S3DistCp command to copy the data residing in HDFS in the source region to Amazon S3 in the target region.
• Use S3DistCp to copy the data from Amazon S3 to HDFS in the target region.

Migrating Amazon Elasticsearch Service

Amazon Elasticsearch Service (Amazon ES) is a fully managed service that makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full-text search,
application monitoring, and more. You will need to recreate the Amazon ES domain in the target region. Here is a high-level overview of the process for migrating the data from the source region to the target region:

• Create a manual snapshot of your Amazon ES domain. The snapshot is stored in an S3 bucket.47
• Copy your S3 data from the source region to the target region.
• Restore the snapshot into your Elasticsearch domain in the target region.

Migrating Application Services and Messaging Resources

This section covers migration of application services for queues, notifications, and Amazon API Gateway.

Migrating Amazon SQS

Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable, hosted queue for storing messages as they travel between computers. Amazon SQS queues exist per region. To migrate the data in a queue, you need to drain the queue in the source region and insert the messages into a new queue in the target region. When migrating a queue, it is important to note whether you need to continue to process the messages in order.

When order is not important:

1. Create a new queue in the target region.
2. Configure applications to write messages to the new queue in the target region.
3. Reconfigure applications that read messages from the Amazon SQS queue in the source region to read from the new queue in the target region.
4. Use a script that repeatedly reads from the old queue and submits the messages to the new queue.
5. Delete the old queue in the source region when it's empty.
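The drain script in step 4 might look like the following sketch. The SQS client is passed in, so a boto3 SQS client can be used directly; the queue URLs are hypothetical, and message order is not preserved (this is the unordered case).

```python
# Sketch: repeatedly read messages from the old queue and submit them to
# the new queue, deleting each message from the old queue only after it
# has been re-sent. `sqs` can be a boto3 SQS client; queue URLs are
# placeholders.

def drain_queue(sqs, old_url, new_url):
    moved = 0
    while True:
        resp = sqs.receive_message(QueueUrl=old_url,
                                   MaxNumberOfMessages=10)
        messages = resp.get("Messages", [])
        if not messages:
            return moved  # old queue appears empty; safe to delete it
        for msg in messages:
            sqs.send_message(QueueUrl=new_url, MessageBody=msg["Body"])
            sqs.delete_message(QueueUrl=old_url,
                               ReceiptHandle=msg["ReceiptHandle"])
            moved += 1
```

Deleting only after a successful send means a crash mid-run can duplicate a message but never lose one, which matches SQS's at-least-once delivery model.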
When order is important:

1. Create a new first-in, first-out (FIFO) queue in the target region.
2. Create an additional, temporary FIFO queue in the target region.
3. Configure applications to write messages to the new FIFO queue in the target region.
4. Reconfigure applications that read messages from the SQS queue in the source region to read from the temporary FIFO queue in the target region.
5. Use a script that repeatedly reads from the old queue and submits the messages to the temporary FIFO queue.
6. Delete the old queue in the source region when it's empty.
7. When the temporary FIFO queue is empty, reconfigure applications to read from the new FIFO queue, and then delete the temporary FIFO queue.

Migrating Amazon SNS Topics

Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. Amazon SNS topics exist per region. You can recreate them in a target region manually through the AWS Management Console, the command line tools, or direct API calls. To list the current Amazon SNS topics in a designated region, use the following command:

aws sns list-topics --region <region>

For more information about the Amazon SNS CLI tools, see Using the AWS Command Line Interface with Amazon SNS.48

Migrating Amazon API Gateway

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

Here is a high-level overview of the steps required for migrating Amazon API Gateway from the source region to the target region:

1. Export the API from API Gateway into a Swagger file using the API Gateway Export API.49
2. Copy the Swagger file to the target region using standard tools such as scp, FTP, or rsync.
3. Import the Swagger file to create the API in API Gateway in the target region.50
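The export and import steps can be sketched as AWS CLI invocations; the following assembles them. The API ID, stage name, file path, and regions below are hypothetical placeholders.

```python
# Sketch: assemble the AWS CLI calls for exporting an API Gateway API as a
# Swagger file and importing it in the target region. All identifiers,
# paths, and regions are hypothetical.

def api_migration_commands(api_id, stage, src_region, dst_region,
                           outfile="/tmp/api-swagger.json"):
    export_cmd = (f"aws apigateway get-export --rest-api-id {api_id} "
                  f"--stage-name {stage} --export-type swagger "
                  f"--region {src_region} {outfile}")
    import_cmd = (f"aws apigateway import-rest-api "
                  f"--body file://{outfile} --region {dst_region}")
    return [export_cmd, import_cmd]

cmds = api_migration_commands("abc123", "prod", "us-east-1", "us-west-2")
```

Note that import-rest-api creates a new API (with a new API ID) in the target region; stage configuration and custom domain names are not part of the Swagger export and must be recreated separately.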
Migrating Deployment and Management Resources

Migrating with AWS CloudFormation

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, and enables provisioning and updating those resources in an orderly and predictable fashion. You can use the AWS CloudFormation sample templates, or create your own templates, to describe the AWS resources and any associated dependencies or runtime parameters required to run your applications. For more information, see What is AWS CloudFormation?51

While many customers use AWS CloudFormation to create development, test, and multiple production environments within a single AWS Region, these same templates can be reused in other regions. You can address disaster recovery and region migration scenarios by running such a template, with minor modifications, in another region. Commonly, AWS CloudFormation templates can be readily reused by changing the mapping declarations to substitute region-specific information, such as the unique IDs for AMIs, which vary across regions:

"Mappings" : {
  "RegionMap" : {
    "us-east-1"      : { "AMI" : "ami-97ed27fe" },
    "us-west-1"      : { "AMI" : "ami-59c39c1c" },
    "us-west-2"      : { "AMI" : "ami-9e901dae" },
    "eu-west-1"      : { "AMI" : "ami-87cef2f3" },
    "ap-southeast-1" : { "AMI" : "ami-c44e0b96" },
    "ap-northeast-1" : { "AMI" : "ami-688a3d69" },
    "sa-east-1"      : { "AMI" : "ami-4e37e853" }
  }
}

For more information on mapping declarations, see Mappings in the AWS CloudFormation User Guide.52

Capturing Environments by Using CloudFormer

CloudFormer is a template creation tool that enables you to create AWS CloudFormation templates from pre-existing AWS resources. You provision and configure application resources using your existing processes and tools; after these resources are provisioned within an AWS Region, the CloudFormer tool takes a snapshot of the resource configurations and places them in an AWS CloudFormation template, enabling you to launch copies of the application environment through the AWS CloudFormation console. The CloudFormer tool creates a starting point for an AWS CloudFormation template that you can customize further. For example, you can:

• Add parameters to enable stacks to be configured at launch time.
• Add mappings to allow the template to be customized to specific environments and geographic regions.
• Replace static values with the Ref and Fn::GetAtt functions to flow property data between resources, where the value of one property depends on the value of a property from a different resource.
• Fill in your EC2 instance user data to pass parameters to EC2 instances at launch time.
• Customize Amazon RDS DB instance names and master passwords.

For more information on setting up CloudFormer to capture a customer resource stack, see http://www.youtube.com/watch?v=KIpWnVLeP8k.53 For more details on the steps required to create an AWS CloudFormation template using CloudFormer, see Using CloudFormer to Create AWS CloudFormation Templates from Existing AWS Resources.54

API Implications

When programmatic access is required to connect to AWS Regions, publicly defined endpoints must be used for API service requests. While some web services allow you to use a general endpoint that does not specify a region, these generic endpoints resolve to the service's specific regional endpoint. For the authoritative list of current regions and service endpoint URLs, see AWS Regions and Endpoints.55

Updating Customer Scripts and Programs

You may need to update your self-developed scripts and programs that interact with the AWS API (either directly, or through one of the SDKs or command line tools) to ensure that they are communicating with the appropriate regional endpoint. Each SDK has its own format for specifying the region being accessed; the command line tools generally support the --region parameter.

Important Considerations

Do not leave your AWS certificate or private key on the disk, and clear out the shell history file in case you typed secret information in commands or in environment variables. Do not leave any password active on accounts. Make sure that the image does not include a public SSH key in the authorized_keys files; this leaves a back door into other people's servers, even if they do not intend to use it.
It is good practice to use the options [region], [kernel], and [ramdisk] explicitly whenever applicable, even though those options are optional. Verify whether any IP address associations are associated with the AMI; if so, remove them or update them with the correct details after migration.

Conclusions

When you undertake any type of system migration, we recommend comprehensive planning and testing. Be sure to plan all elements of the migration, with fail-back processes for unanticipated outcomes. AWS makes this process easier by enabling cost-effective testing and the ability to retain the existing system infrastructure until the migration is successfully completed.

Contributors

The following individuals and organizations contributed to this document:

• Dhruv Singhal, Head Solutions Architect, AISPL
• Vijay Menon, Solutions Architect, AISPL
• Raghuram Balachandran, Solutions Architect, AISPL
• Lee Kear, Solutions Architect, AWS
• Paul Reed, Sr. Product Manager, AWS

Document Revisions

Date           Description
February 2020  Minor revisions
January 2020   Minor revisions
July 2017      First publication

Notes

1. http://aws.amazon.com/about-aws/global-infrastructure/regional-product-services
2. http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#Identifiers_ARNs
3. http://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html
4. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
5. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
6. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-copy-snapshot.html
7. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html
8. http://aws.amazon.com/ec2
9. http://docs.aws.amazon.com/autoscaling/latest/userguide/GettingStartedTutorial.html
10. https://aws.amazon.com/ec2/pricing/reserved-instances/buyer/
11. http://aws.amazon.com/ec2/reserved-instances/marketplace/
12. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html
http://docsawsamazoncom/Route53/latest/DeveloperGuide/rrsets working withhtml 14 http://docsamazonwebservicescom/AmazonCloudFront/latest/DeveloperGuid e/HowToUpdateDistributionhtml 15 http://docsamazonwebservicescom/AmazonS3/la test/dev/BucketRestrictions html 16 http://docsamazonwebservicescom/AmazonS3/latest/dev/WebsiteHostinght ml 17 http://docsamazonwebservicescom/AmazonS3/latest/dev/VirtualHostinghtml 18 https://awsamazoncom/premiumsupport/knowledge center/s3 bucket migrate region/ 19 https:/ /awsamazoncom/partners/find/results/?keyword=Storage+ISV 20 http://docsamazonwebservicescom/AmazonS3/latest/API/RESTObjectOpsht ml 21 http://docsamazonwebservicescom/AmazonS3/latest/API/RESTObjectCOPY html Archived Page 36 22 http://docsawsamazoncom/emr/latest/ManagementGuide/emr instance purchasing optionshtml#emr spotinstances 23 http://docsawsamazoncom/emr/latest/Rel easeGuide/UsingEMR_s3distcpht ml 24 http://docsawsamazoncom/AmazonS3/latest/dev/crrhtml 25 https://awsamazoncom/glacier/pric ing/ 26 http://docsawsamazoncom/storagegateway/latest/userguide/create gateway filehtml 27 http://docsawsamazoncom/storagegateway/latest/userguide/G ettingStarted CreateFileSharehtml 28 http://docsawsamazoncom/storagegateway/latest/userguide/managing gateway filehtml#refresh cache 29 http://docsawsamazoncom/storagegateway/latest/APIReference/API_Refres hCachehtml 30 http://docsawsamazoncom/storagegateway/latest/userguide/create volume gatewayhtml 31 http://docsawsamazoncom/s toragegateway/latest/userguide/GettingStarted CreateVolumeshtml 32 http://docsawsamazoncom/storagegateway/latest/usergu ide/backup_netbac kupvtlhtml#GettingStarted retrieving tapes 33 http://docsawsamazoncom/storagegateway/latest/userguide/create tape gatewayhtml 34 http://docsawsamazoncom/storagegateway/latest/userguide/deleting gateway commonhtml 35 https://docsawsamazoncom/AmazonRDS/latest/AuroraUserGuide/AuroraMy SQLReplicationCrossRegionhtml 36 https://awsamazoncom/documentation/dms / Archived Page 37 37 
http://docsawsamazoncom/AmazonRDS/latest/UserGuide/SQLServerProce duralImportingSnapshotsh tml#SQLServerProceduralExportingSSGPSW 38 http://docsamazonwebservicescom/AmazonRDS/latest/UserGuide/ImportDat ahtml 39 http://docsawsamazoncom/amazondynamodb/latest/developerguide/Stream shtml 40 http://docsawsamazoncom/amazondynamodb/latest/developerguide/Dynam oDBPipelinehtml 41 https://githubcom/awslabs/dynamodb import export tool 42 http://docsawsamazoncom/AmazonElastiCache/latest/UserGuide/backups manualhtml 43 http://docsawsamazoncom/AmazonElastiCache/latest/UserGuide/backups exportinghtml 44 http://docsawsamazoncom/AmazonElastiCache/latest/UserGuide/backups restoringhtml#backups restoring CON 45 http://docsawsamazoncom/redshift/latest/mgmt/managing snapshots consolehtml#snapshot crossregioncopy configure 46 http://docsawsamazoncom/redshift/latest/mgmt/managing snapshots consolehtml#snapshot restore 47 http://docs awsamazoncom/elasticsearch service/latest/developerguide/es managedomainshtml#es managedomains snapshot create 48 http://docsawsamazoncom/cli/latest/userguid e/clisqsqueue snstopichtml 49 http://docsawsamazoncom/apigateway/latest/developerguide/api gateway export apihtml 50 http://docsawsamazoncom/apigateway/latest/developerguide/api gateway import apihtml 51 http://docsamazonwebservicescom/AWSCloudFormation/latest/UserGuide/ Welcomehtml Archived Page 38 52 http://docsamazonwebservicescom/AWSCloudFormation/latest/UserGuide/m appings section structurehtml 53 http://wwwyoutubecom/watch?v=KIpWnVLeP8k 54 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/cfn using cloudformerhtml 55 http://docsawsamazoncom/general/latest/gr/randehtml",General,consultant,Best Practices Migrating_Microsoft_Azure_SQL_Databases_to_Amazon_Aurora,"ArchivedMigrati ng Microsoft Azure SQL Database s to Amazon Aurora Using SQL Server Integration Service and Amazon S3 August 2017 This paper has been archived For the latest technical content see: Migrate Microsoft Azure SQL Database 
to Amazon Aurora.

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Abstract
Introduction
Why Migrate to Amazon Aurora?
Architecture Overview
Migration Costs
Preparing for Migration to Amazon Aurora
Create a VPC
Create a Security Group and IAM Role
Create an Amazon S3 Bucket
Launch an Amazon RDS for SQL Server DB Instance
Launch an Amazon Aurora DB Cluster
Launch an EC2 Migration Server
Schema Conversion
AWS Schema Conversion Tool Wizard
Mapping Rules
Data Migration
Set Up the Repository Database
Build an SSIS Migration Package
After the Migration
Conclusion
Contributors
Further Reading
Document Revisions

Abstract
As companies migrate their workloads to the cloud, there are many opportunities to increase database performance, reduce licensing costs, and decrease administrative overhead. Minimizing downtime is a common challenge during database migrations, especially for multi-tenant databases with multiple schemas. In this whitepaper, we describe how to migrate multi-tenant Microsoft Azure SQL databases to Amazon Aurora using a combination of Microsoft SQL Server Integration
Services (SSIS) and Amazon Simple Storage Service (Amazon S3). This approach can scale to thousands of databases simultaneously while keeping downtime to a minimum when switching to the new databases.

The target audience for this paper includes:
• Database and system administrators performing migrations from Azure SQL Databases into Amazon Aurora where AWS managed migration tools can't currently be used
• Database developers and administrators with SSIS experience
• IT managers who want to learn about migrating databases and applications to AWS

Introduction
Migrations of multi-tenant databases are among the most complex and time-consuming tasks handled by database administrators (DBAs). Although managed migration services such as AWS Database Migration Service (AWS DMS)1 make this task easier, some multi-tenant database migrations require a custom approach. For example, a custom solution might be required in cases where the source database is hosted by a third-party provider who limits certain functionality of the database migration engine used by AWS DMS. This whitepaper focuses on the mass migration of a multi-tenant Microsoft Azure SQL Database to Amazon Aurora. Amazon Aurora is a fully managed, MySQL-compatible relational database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases.2 In the scenario covered in this whitepaper, multi-tenancy is defined as the deployment of numerous databases that have the same schema.3 An example of multi-tenancy would be a software-as-a-service (SaaS) provider who deploys a database for each customer. We discuss how to use the AWS Schema Conversion Tool (AWS SCT)4 to convert your existing SQL Server schema to Amazon Aurora. We also show you how to build a SQL Server Integration Services (SSIS) package that you can use to automate the simultaneous migration of
multiple databases.5 The method described in this whitepaper can also be used to migrate to other types of databases on Amazon Web Services (AWS), including Amazon Redshift, a fully managed data warehouse.6

Why Migrate to Amazon Aurora?
Amazon Aurora is built for mission-critical workloads and is highly available by default. An Aurora database cluster spans multiple Availability Zones in an AWS Region, providing out-of-the-box durability and fault tolerance for your data across physical data centers. An Availability Zone is composed of one or more highly available data centers operated by Amazon.7 Availability Zones are isolated from each other and are connected through low-latency links. Each segment of your database volume is replicated six times across these Availability Zones.

Aurora cluster volumes automatically grow as the amount of data in your database increases, with no performance or availability impact, so there is no need to estimate and provision large amounts of database storage ahead of time. An Aurora cluster volume can grow to a maximum size of 64 terabytes (TB). You are only charged for the space that you use in an Aurora cluster volume.

Aurora's automated backup capability supports point-in-time recovery of your data. This enables you to restore your database to any second during your retention period, up to the last five minutes. Automated backups are stored in Amazon Simple Storage Service (Amazon S3), which is designed for 99.999999999% durability. Amazon Aurora backups are automatic, incremental, and continuous, and have no impact on database performance. For a complete list of Aurora features, see the Amazon Aurora product page. Given the rich feature set and cost-effectiveness of Amazon Aurora, it is increasingly viewed as the go-to database for mission-critical applications.

Architecture Overview
A diagram of the architecture you can use for migrating a Microsoft
Azure SQL database to Amazon Aurora is shown in Figure 1.

Figure 1: Diagram of resources used in a migration solution

The architecture components are explained in more detail as follows.

Amazon EC2 Migration Server: The migration server is an Amazon Elastic Compute Cloud (EC2) instance that runs all database migration tasks, including:
• Installing necessary applications
• Downloading and restoring the source database for schema conversion purposes
• Converting the schema between source and destination databases using AWS SCT
• Developing and testing the SSIS data migration package

With a large EC2 instance type, your migration server can run thousands of migration tasks simultaneously. If your databases are read-write, you can choose between two migration approaches:
1. You can disconnect all clients and put your databases into single-connection mode. In this scenario, the databases won't be accessible until the migration is finished; downtime equals migration time, so the quicker you migrate your databases, the shorter the downtime.
2. You can keep your databases open for write connections. In this scenario, you will have to reconcile records updated during the migration after it completes.

If your databases are read-only, you can keep connections to them during the migration process without any impact on the migration itself.

Amazon RDS for SQL Server DB Instance: Connection strings to the Azure SQL database and the Amazon Aurora database need to be stored in a small repository database. For this purpose, you'll use an Amazon RDS for SQL Server database (DB) instance. Amazon Relational Database Service (Amazon RDS) is a cloud service that makes it easier to set up, operate, and scale a relational database in the cloud.8 It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks. Note that the
repository database is a temporary resource needed only during the migration; it can be terminated after the migration.

Amazon Aurora DB Cluster: An Amazon Aurora DB cluster is made up of instances that are compatible with MySQL and a cluster volume that represents data copied across three Availability Zones as a single virtual volume. There are two types of instances in a DB cluster: a primary instance (that is, your destination database) and Aurora Replicas. The primary instance performs all of the data modifications to the DB cluster and also supports read workloads. Each DB cluster has one primary instance. An Aurora Replica supports only read workloads. Each DB instance can have up to 15 Aurora Replicas. You can connect to any instance in the DB cluster using an endpoint address.

Amazon S3 Bucket: Multiple batches of your data are loaded in parallel, instead of record by record, into temporary storage in an S3 bucket, which improves migration performance.9 After saving your data to an S3 bucket, in the last step of building an SSIS package (see the Migrate Multiple Azure SQL Databases section), you'll execute an Amazon Aurora SQL command to import data from the S3 bucket to the database.

Note: You will need to create the Amazon S3 bucket in the same AWS Region where you launched the Amazon Aurora DB cluster.

Amazon VPC: All migration resources are created inside a virtual private cloud (VPC). Amazon VPC lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define.10 You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. The topology of the VPC is as follows:
• Two private subnets to launch the Amazon RDS DB instance. Each subnet must reside entirely within one
Availability Zone and cannot span zones.11
• At least two public subnets to launch your migration server and Amazon Aurora DB cluster. Each subnet must be in a different Availability Zone.

Migration Costs
These factors have an impact on the migration cost:
• Size of the migrated database (S3 storage)
• Size of the Amazon RDS instance
• Size of the Amazon Aurora cluster
• Size of the migration server

Here are a few suggestions to reduce the migration cost:
• Use Amazon S3 Reduced Redundancy Storage (RRS)
• For the repository database, use an Amazon RDS SQL Server Express Edition db.t2.micro instance
• For the migration server, start with a t2.medium instance type and scale up if necessary

Preparing for Migration to Amazon Aurora
This section describes how to set up and configure your AWS environment to prepare for migrating your Azure SQL database to Amazon Aurora. AWS CloudFormation scripts are also provided to help you automate deployment of your AWS resources.12

Note: You must complete these steps before moving on to the schema conversion and migration tasks.

Create a VPC
This section describes two ways you can create a VPC: manually or from a CloudFormation template.

Create a VPC (Manual)
For step-by-step guidance on creating a VPC using the Amazon VPC wizard in the Amazon VPC console, see the Amazon VPC Getting Started Guide.13 For step-by-step guidance on creating a VPC for use with Amazon Aurora, see the Amazon RDS User Guide.14

Create a VPC (CloudFormation Template)
Alternatively, you can use this CloudFormation template to quickly set up a VPC with two public and two private subnets, including a network address translation (NAT) gateway. To create a VPC using the CloudFormation template, follow these steps:
1. In the AWS Management Console, choose CloudFormation, and then choose Create New Stack.
2. Select Specify an Amazon S3 template URL, and then paste the CloudFormation
template URL: http://rh-migration-blog.s3.amazonaws.com/CF-VPC.json
3. Choose Next.
4. Enter the Stack name, e.g., VPC. (Note the stack name, as you will use it later.)
5. Modify the subnet CIDR blocks, or leave the default subnets.
6. Choose Next.
7. Under Options, leave all the default values, and then choose Next.
8. Under Review, choose Create.
9. Wait for the status to change to CREATE_COMPLETE.

Optional: To improve the performance of uploading data files to the S3 bucket from within AWS, create an S3 endpoint in your VPC. For more information, visit: https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/

Create a Security Group and IAM Role
Access to AWS requires credentials that AWS can use to authenticate your requests. Those credentials must have permissions to access AWS resources (access control), such as an Amazon RDS database. For example, you can control access to a database by limiting it to certain IP addresses or IP address ranges, restricting access to your corporate network only, or to a web server that consumes data from your database server.

Create a Security Group and IAM Role (Manual)
To migrate your Azure SQL database to Amazon Aurora, you need to do the following:
• Create an Amazon EC2 security group to control access to an EC2 instance.15
• Create an AWS Identity and Access Management (IAM) role that grants the migration server access to both database servers. In addition, the role grants external access to the migration server.

Note: When you use an external IP address, you should use the IP address from which you will remotely access the migration server.

The following table shows examples of inbound rules that need to be created in the new EC2 security group:

Resource | Inbound Port | Source
Amazon RDS SQL Server | 1433 | IP of Migration Server
Amazon Aurora DB Cluster | 3306 | IP of Migration Server
Migration Server | 3389 | User external IP address

• Create an IAM role for Amazon EC2
to allow the migration server access to the S3 bucket. This role has to be associated with the EC2 migration instance during launch.16
• Create an IAM role and associate it with the Amazon Aurora DB cluster to allow the DB cluster access to the S3 bucket.17

Create a Security Group and IAM Role (CloudFormation Template)
Alternatively, you can create both roles and the security group, with all required inbound rules, using a CloudFormation template:
1. In the AWS Management Console, choose CloudFormation, and then choose Create New Stack.
2. Select Specify an Amazon S3 template URL, and then paste the CloudFormation template URL: http://rh-migration-blog.s3.amazonaws.com/CF-SG.json
3. Choose Next.
4. Enter the Stack name, e.g., SG. (Note the stack name, as you will use it later.)
5. Enter the Network Stack Name, which is the name of the CloudFormation stack you provided earlier in this whitepaper in step 4 under Create a VPC (e.g., VPC).
6. Choose Next.
7. Under Options, leave all the default values, and then choose Next.
8. Under Review, select the acknowledgment check box.
9. Choose Create.

Create an Amazon S3 Bucket
You can either use an existing S3 bucket or create a new one by following the steps provided in Create a Bucket18 in the Amazon S3 documentation.

Launch an Amazon RDS for SQL Server DB Instance
This section explains how to launch an Amazon RDS for SQL Server DB instance. Note that the Amazon RDS DB instance is a temporary resource that's only needed during the migration; it should be terminated after the migration to reduce AWS cost.

Launch an Amazon RDS for SQL Server DB Instance (Manual)
To launch a new Amazon RDS for SQL Server DB instance for your repository database, follow these steps:
1. In the AWS Management Console, choose RDS.
2. In the navigation pane, choose Instances.
3. Choose Launch DB Instance.
4. Select Microsoft SQL Server, and then select SQL Server Express.
5. Set DB Instance Class to db.t2.micro.
6. Set Time
Zone to your local time zone.
7. Set DB Instance Identifier to repo.
8. Set Master Username and Master Password.
9. Leave all the other options at their default values and choose Next Step.
10. Select the VPC created in the previous step. If you created the VPC using the CloudFormation template, the name of the VPC should be "Migration VPC".
11. Select the correct VPC Security Group. If you created the security group from the CloudFormation template, the name should be "SG-DBSecurityGroup-XXXXXXX", where XXXXXXX is a string of random letters and numbers.
12. Leave all the other options at their default values and choose Launch DB Instance.

Launch an Amazon RDS for SQL Server DB Instance (CloudFormation Template)
As an alternative to manually launching an Amazon RDS for SQL Server DB instance, you can use this CloudFormation template:
1. In the AWS Management Console, choose CloudFormation, and then choose Create New Stack.
2. Select Specify an Amazon S3 template URL, and then paste the CloudFormation template URL: http://rh-migration-blog.s3.amazonaws.com/CF-RDSSQL.json
3. Enter the Stack name, e.g., SQL.
4. Enter the following parameters:
   o DBPassword and DBUser
   o NetworkStackName, which is the name of the CloudFormation stack you provided in step 4 under Create a VPC (e.g., VPC)
   o SecurityGroupStackName, which is the name of the CloudFormation stack you provided earlier in this whitepaper in step 4 under Create a Security Group and IAM Role (e.g., SG)
5. Choose Next.
6. Under Options, leave all the default values, and then choose Next.
7. Choose Create.
8. Wait for the status to change to CREATE_COMPLETE.
9. Go to Outputs and note the value of the SQLServerAddress key. You will need it later.

Launch an Amazon Aurora DB Cluster
This section describes two ways you can launch an Amazon Aurora DB cluster: manually or from a CloudFormation template.

Launch an Amazon Aurora DB Cluster (Manual)
For step-by-step guidance on launching and configuring an Amazon Aurora DB cluster for your destination database, see the Amazon RDS User Guide.19 In our tests, we migrated 10 databases simultaneously. For this purpose, we used the db.r3.2xlarge DB instance type. Depending on how many databases you are planning to migrate, we suggest that you use the biggest DB instance type for the migration and then scale down to one that is more suitable for daily (production) workloads. Read this blog to learn more about how to scale Amazon RDS DB instances: https://aws.amazon.com/blogs/database/scaling-your-amazon-rds-instance-vertically-and-horizontally/ Read Managing an Amazon Aurora DB Cluster in the Amazon RDS User Guide to learn more about choosing the right DB instance type.

To reduce migration time, we suggest that you launch your Amazon Aurora DB cluster in a single Availability Zone and then perform a Multi-AZ deployment later, if required, for production workloads. When Multi-AZ is selected, Amazon Aurora will create read replicas in different Availability Zones. In this scenario, when the primary Amazon Aurora DB instance becomes unavailable, one of the existing replicas will be promoted to master status in a matter of seconds. In a case where Multi-AZ is disabled, launching the new primary instance can take up to 5 minutes.

Finally, load your data to the Aurora DB instance from the S3 bucket. To allow Amazon Aurora access to the S3 bucket, you need to grant the necessary permission. You can do this by following the steps described in the Allowing Amazon Aurora to Access Amazon S3 Resources article.20

Launch an Amazon Aurora DB Cluster (CloudFormation Template)
As an alternative to launching an Amazon Aurora DB cluster manually, you can use this CloudFormation template:
1. In the AWS Management Console, choose CloudFormation, and then choose Create New Stack.
2. Select
Specify an Amazon S3 template URL, and then paste the CloudFormation template URL: http://rh-migration-blog.s3.amazonaws.com/CF-RDSAurora.json
3. Enter the Stack name, e.g., Aurora.
4. Enter the following parameters:
   o DBPassword and DBUser
   o NetworkStackName, which is the name of the CloudFormation stack you provided in step 4 under Create a VPC (e.g., VPC)
   o SecurityGroupStackName, which is the name of the CloudFormation stack you provided earlier in this whitepaper in step 4 under Create a Security Group and IAM Role (e.g., SG)
5. Choose Next.
6. Under Options, leave all the default values, and then choose Next.
7. Choose Create.
8. Wait for the status to change to CREATE_COMPLETE.
9. Go to Outputs and note the value of the AuroraClusterAddress key. You will need it later.
10. After you launch the cluster, assign an IAM role to the cluster. To do this, follow steps 1-6 in this topic in the Amazon RDS documentation: Authorizing Amazon Aurora to Access Other AWS Services on Your Behalf.21

Note: The name of the role created by the CloudFormation template is RDSAccessS3.

Launch an EC2 Migration Server
This section describes two ways to launch an EC2 Migration Server: manually and using a CloudFormation template.

Launch an EC2 Migration Server (Manual)
To launch the EC2 migration instance, please follow the documentation.22 Choose these options when launching a new EC2 instance:
• Amazon Machine Image (AMI): Microsoft Windows Server 2012 R2 Base
• Instance Type: t2.large
• VPC: select the one you created in "Create a VPC"
• IAM Role: select the EC2 role you created in "Create a Security Group and IAM Role"
• Add Storage: add two Amazon Elastic Block Store (EBS) volumes.
   o The first volume should be large enough to store all data from the Azure SQL database.
   o The second volume should be 10 GB in size. Under the Snapshot column, depending on the Region where you are launching the Migration Server, enter:

Region | Snapshot ID
us-east-1 | snap-0882e0679e0edbc9d
us-east-2 | snap-0f8e882e50e145512
us-west-1 | snap-0be3d0aa0c7fd6058
us-west-2 | snap-044e09795b0af042d
ca-central-1 | snap-034a9e106a335e83e
eu-west-1 | snap-0c4f59af047f8c680
eu-central-1 | snap-0b96dab9f8716b8a3
eu-west-2 | snap-0da47a13ca2333917
ap-southeast-1 | snap-09e64c82ad0252691
ap-southeast-2 | snap-0116831d4532fa8f0
ap-northeast-1 | snap-06efa146310714fda
ap-northeast-2 | snap-0dc5415e1c5c58021
ap-south-1 | snap-063223b238340215d
sa-east-1 | snap-002492e97e9a54b8b

   The second volume will contain all the software necessary to accomplish the migration tasks.
• Security Group: select the security group you created in "Create a Security Group and IAM Role"

Launch an EC2 Migration Server (CloudFormation Template)
As an alternative to creating all resources manually, you can launch the EC2 Migration Server from this CloudFormation template.

Server Configuration
After launching the server, either manually or from a CloudFormation template, follow these steps:
1. Retrieve your Windows Administrator user password. The steps for doing this can be found in the article How do I retrieve my Windows administrator password after launching an instance?23 on the AWS Premium Support Center.
2. Log in to the Migration Server using the RDP client. If you used the CloudFormation template, you can get the IP address of the Migration Server from the Output tab under the IPAddress key.
3. After logging in, open File Explorer and check whether you see the DBTools volume. If you see the DBTools volume, go to step 5; otherwise, follow step 4.
4. If you do not see DBTools, follow these steps:
   a. Run the diskmgmt.msc command to open Disk Management.
   b. In the Disk Management window, scroll down until you find a disk that is offline.
   c. Right-click the disk and, from the context menu, select Online (as shown in the following screenshot).
5. Open the command line and, from the DBTools volume, run Install.bat. This installs all the necessary applications. All applications to be installed (including the download links) are listed in the Applications List, as shown in the next screenshot. Wait until all the applications are installed; this might take up to 30 minutes.
6. Open CreateRepositoryDB.bat in Notepad and edit the following values:
   o serverName: the address of the SQL Server that you set under "Launch an Amazon RDS for SQL Server DB Instance". If you used a CloudFormation template to launch Amazon RDS, you can find this value on the CloudFormation Output tab under the SQLServerAddress key.
   o userName: the SQL username
   o userPass: the SQL user password
7. Save the file and execute it. This script creates a repository database, including the table and stored procedure, on the Amazon RDS for SQL Server DB instance that was created in the previous section.

Note: The external IP address associated with the Migration Server has to be added to the Azure SQL database firewall.

Applications List
Here is a list of the applications installed on the Migration Server by the script described in step 5 in the previous procedure:
• SQL Server (with minimum selected services) – https://www.microsoft.com/en-sa/sql-server/sql-server-downloads
• SQL Server Management Studio – https://docs.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms
• SQL Server Data Tools – https://docs.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt
• AWS CLI (64-bit) – https://aws.amazon.com/cli/
• MySQL ODBC Driver (32-bit) – https://dev.mysql.com/downloads/connector/odbc/
• Azure PowerShell – https://azure.microsoft.com/en-us/downloads/
• AWS Schema Conversion Tool –
http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_SchemaConversionToolInstalling.html
• Microsoft JDBC Driver 6.0 for SQL Server – https://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=11774
• MySQL JDBC Driver – https://www.mysql.com/products/connector/
• Optional: MySQL Workbench – https://dev.mysql.com/downloads/workbench/

Schema Conversion
Before running the AWS Schema Conversion Tool, the Azure SQL database schema needs to be restored on the Migration Server. This can be done either by recreating the database from a script or backup, or by restoring it from a BACPAC file. For information on how to export an Azure SQL database to a BACPAC file, see this article on the Microsoft Azure website.24

Alternatively, you can execute a PowerShell script to export the Azure SQL database to a BACPAC file, as follows:
1. Use Remote Desktop Protocol (RDP) to connect to the Migration Server.
2. Locate the AzureExport.ps1 PowerShell script on the DBTools volume and open it in Notepad for editing.
3. Modify the values at the top of the script. When you are done, save the changes you made.
4. Open PowerShell and execute the script by entering e:\AzureExport.ps1
5. When the script has executed, you should see the xxxx.bacpac file in your local folder.
6. To restore the database from the BACPAC file, open SQL Server Management Studio, connect to the Migration Server (which is the local server), right-click the database name, and from the menu select Import Data-tier Application; then follow the wizard. For more information on how to import a BACPAC file to create a new user database, see: https://docs.microsoft.com/en-us/sql/relational-databases/data-tier-applications/import-a-bacpac-file-to-create-a-new-user-database

AWS Schema Conversion Tool Wizard
Before migrating the SQL Server database to Amazon Aurora, you have to convert the existing SQL schema to the new format supported by Amazon
Aurora. The AWS Schema Conversion Tool helps convert the source database schema and a majority of the custom code to a format that is compatible with the target database. This is a desktop application that we installed on the Migration Server. The custom code includes views, stored procedures, and functions. Any code that the tool cannot automatically convert is clearly marked so that you can convert it yourself. To start with AWS SCT, follow these steps:
1. After restoring the database, open the AWS Schema Conversion Tool.
2. Close the AWS SCT Wizard if it opens automatically.
3. From Settings, select Global Settings.
4. Under Drivers, select the paths to the Microsoft SQL Server and MySQL drivers. You can find both drivers on the DBTools volume in the following locations:
   SQL Server: E:\Drivers\Microsoft JDBC Driver 6.0 for SQL Server\sqljdbc_6.0\enu\jre7\sqljdbc41.jar
   MySQL: E:\Drivers\mysql-connector-java-5.1.41\mysql-connector-java-5.1.41-bin.jar
5. Choose OK.
6. From File, select New Project Wizard.
7. In Step 1: Select Source, for Source Database Engine, select Microsoft SQL Server.
8. Set the following connection parameters for the EC2 Migration SQL Server (the local server):
   o Server name: the name of the EC2 Migration Server. If you didn't change it, it will be something like WIN-ITKVVM7QQ08.
   o Server port: 1433
   o User name: sa
   o Password: the sa password. If you installed everything from the Install.bat script, the password will be Password1.
9. Choose Test Connection.
10. If the connection is successful, choose Next. Otherwise, verify the connection parameters.
11. In Step 2: Select Schema, select the database that was restored from the BACPAC file and choose Next.
12. In Step 3: Run Database Migration Assessment, choose Next.
13. In Step 4: Select Target, set the following parameters:
   o Target Database Engine: Amazon Aurora (MySQL compatible)
   o Server name: the Amazon Aurora Cluster Endpoint. If
you launched the Amazon Aurora DB cluster from the CloudFormation template you can find the cluster endpoint on the CloudFormation output tab under AuroraConnection va lue o Server port : 3306 o User name : The Aurora master user name o Password : The Aurora master password 14 Choose Test Connection ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 16 15 If the connection test is successful choose Finish Otherwise check the connection parameters Mapping Rules In some cases you might need to set up rules that change the data type of the columns move objects from one schema to another and change the names of objects For example if you have a set of tables in your source schema named test_TABLE_NAME you can set up a rule that changes the prefix test_ to the prefix demo_ in the target schema To add mapping rules perform the following steps : 1 From Actions menu of AWS SCT choose Convert Schema 2 The converted schema appears in the right hand side of AWS SCT The schema name will be in the following format: {SQL Server database name}_{database schema} For example tc_dbo 3 To rename the output schema from Settings choose Mapping Rules 4 Choose Add new rule to create a rule for renaming the database 5 Choose Edit rule 6 From the For list select database For Actions select rename and then type a new database name 7 Choose Add new rule to create a rule for renaming the database schema 8 From the For list select schema For Actions select rename and then type a new schema name 9 Choose Save All and close the window 10 Run Convert Schema The schema should now be updated with the new settings In this example the new schema name is TimeCard_Customer1 By right clicking on the new schema name you can eithe r save t he schema as an SQL script by selecting Save as SQL or apply it directly to the Amazon Aurora database by selecting Apply to database Depend ing on the complexity of the SQL Server schema the new schema might not be optimal or 
cor rectly convert all objects Note : As a rule of thumb you should always look at the new schema and make necessary adjustment s and optimization ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 17 If you have a small number of databases on Azure SQL (~10 or fewer ) you can apply the schema for each database by modif ying the rule for the schema name running Convert Schema and then apply ing it to the destination database If you are hosting hundreds or thousands of databases a more efficient way to apply the new schema would be to save it as an SQL script and then create a script using Bash (Linux) or PowerShell (Windows) to read an exported schema file modif y the schema name and save it as a new file ; then use a tool such as MySQL Workbench25 or a command line tool such as mysql to apply the script to the Amazon Aurora database You can find mysql here: C:\Program Files \MySQL \MySQL Workbench 63 CE Data Migration You ar e now ready to migrate the data First you need to set up the repository database and then you need to build an SSIS migration package Set Up the Repository Database From the Migration Server connect to the Amazon RDS repository ( MigrationCfg ) database us ing SQL Server Management Studio P opulate the ConnectionsCfg table with the following values: • MSSQLConnectionStr : The Azure SQL connection string which has the following format: DataSource= youraureserver databasewindowsnet;User ID=user_name ;Password= db_password ;Initial Catalog=TimeCard1;Provider=SQLNCLI111;Persist Security Info=True;Auto Translate=False; • MySQLConnectionStr : The Amazon Aurora connection string which has the following format: DRIVER={My SQL ODBC 53 ANSI Driver};SERVER=your_aurora_closter_endpoint;DATABASE=TimeCard_Custom er1;UID=user_name;Pwd=db_password; • StartExecution : Indicate s if the migration for the given database has already started This value should i nitially be set to 0 • Status : Upon completion of the database 
migration, the status will be either Success or Failed, depending on the migration outcome.
• StartTime and EndTime: Statistics columns that record the database migration start and end times.
• DBName: Can be any string that is unique across all records. This string is used as the prefix in the name of the file containing the exported data.

Build an SSIS Migration Package

To build an SSIS migration package, perform the following steps.

Create a New Project

1. On the D:\ drive, create a new folder called Output.
2. Open the SQL Server Data Tools 2015 application.
3. Select File, then New, and then Project.
4. From Templates, select Integration Services, and then select Integration Services Project.
5. Name your project.
6. Choose OK.
7. Under Solution Explorer, right-click the project name and select Convert to Package Deployment Model.
8. Rename your package from Package.dtsx to something more meaningful, e.g., SQLMigration.dtsx.
9. In Properties, under Security, change ProtectionLevel to EncryptSensitiveWithPassword.
10. Choose PackagePassword and set the password.

Set the SSIS Variables

1. From the SSIS menu, select Variables.
2. Add the following variables:

   Variable Name        Variable Type
   ConfigID             Int32
   DBName               String
   MSConnectionString   String
   MyConnectionString   String
   S3Input_LT1          String

3. For S3Input_LT1, add the following expression:
   "LOAD DATA FROM S3 's3-us-east-1://yours3bucket/" + @[User::DBName] + "_TL1.txt' INTO TABLE [Your_First_Table_Name] FIELDS TERMINATED BY '' LINES TERMINATED BY '\\n' (Col1, Col2, Col3, Col4);"
4. Adjust the table name and column names to reflect your database schema.
5. Repeat the last step to create multiple S3Input_LTx variables, one for each table. For example, if you have 10 tables, then you should have S3Input_LT1 … S3Input_LT10.
6. Modify the expression for each variable accordingly. For example, the last variable will have this expression:
   "LOAD DATA FROM S3 's3-us-east-1://yours3bucket/" + @[User::DBName] + "_TL10.txt' INTO TABLE [Your_Last_Table_Name] FIELDS TERMINATED BY '' LINES TERMINATED BY '\\n' (Col1, Col2, Col3, Col4);"

Notice that in each variable expression, the table name as well as the file name must be different. When you are done, you should have the full set of variables.

Retrieve Configurations from Repository Database

1. From the SSIS Toolbox, drag and drop Execute SQL Task onto the Control Flow panel.
2. Double-click Execute SQL Task.
3. Under General, change ResultSet to Single row.
4. Under SQL Statement, expand the list and select New connection. Set up a new connection to your Amazon RDS SQL Server repository database.
5. Set SQLStatement to EXEC [sp_GetConnectionStr].
6. Under Result Set, add the four result rows mapping the returned columns to the SSIS variables.

Create Data Migration Flow

Follow the steps below to create a data flow from Azure SQL Server to Amazon Aurora. To migrate multiple database tables simultaneously, put all data flows inside a Sequence Container by following these steps:

1. From the SSIS Toolbox, drag and drop Sequence Container onto the Control Flow panel.
2. Select Get Connection Strings and connect the green arrow to Sequence Container.

Output Data to Temporary File

1. From the SSIS Toolbox, drag and drop Data Flow Task into the Sequence Container.
2. Double-click Data Flow Task.
3. From the SSIS Toolbox, drag and drop Source Assistant onto the new Data Flow Task panel.
4. Under Source Type, select SQL Server. Under Connection Managers, select New.
5. Choose OK.
6. Set up a connection to one of your Azure SQL databases.
7. When done, you should see OLE DB Source on the Data Flow Task panel. Double-click it.
8. From the Name of the table or the view menu, select the first table that you want to migrate and choose OK.
9. From the SSIS Toolbox, expand Other Destinations, and drag and drop Flat File Destination onto the Data Flow panel.
10. Select OLE DB Source and connect the green arrow to Flat File Destination.
11. Double-click Flat File Destination. Under Flat File connection manager, choose New.
12. Select Delimited and choose OK.
13. Under File name, enter D:\Output\temp.txt and choose OK.
14. Choose Mappings and verify the column mappings.
15. Choose OK.
16. Under Connection Managers, select the newly created connection to the Azure SQL database.
17. Under Properties:
   a. Change DelayValidation to False. Choose OK.
   b. Choose Expressions. Under Property, select ConnectionString. Under Expression, enter: @[User::MSConnectionString]
18. Repeat steps 16-17 for the Flat File connection, but set the ConnectionString expression to: "D:\\Output\\" + @[User::DBName] + "_TL1.txt"
19. Change DelayValidation to False.
20. Under Control Flow, select Data Flow Task. Under Properties, change DelayValidation to True.

Copy Temporary Data File to Amazon S3 Bucket

1. From the SSIS Toolbox, drag and drop Execute Process Task into the Sequence Container.
2. Select Data Flow Task and connect the green arrow to Execute Process Task.
3. Double-click Execute Process Task and make the following changes:
   • Under Process:
     o Executable: C:\Program Files\Amazon\AWSCLI\aws.exe
     o Working Directory: C:\Program Files\Amazon\AWSCLI
   • Under Expressions:
     o Property: Arguments
     o Expression: "s3 cp D:\\Output\\" + @[User::DBName] + "_TL1.txt s3://yours3bucket"
4. Choose OK.
5. Select Execute Process Task. Under Properties, change DelayValidation to False.

Import Data from Temporary File to Amazon Aurora

1. From the SSIS Toolbox, drag and drop Execute SQL Task into the Sequence Container.
2. Select Execute Process Task and connect the green arrow to Execute SQL Task.
3. Double-click Execute SQL Task.
4. Change ConnectionType to ADO.NET.
5. Under Connection, select New connection. Choose New.
6. Under Provider, select .Net Providers\Odbc Data Provider.
7. Check Use connection string and enter the following connection string: Driver={MySQL ODBC 5.3 ANSI Driver};server=aurora_endpoint;database=TimeCard_Customer1;UID=aurora_user;Pwd=aurora_password;
8. Under General, set SQLSourceType to Variable and set SourceVariable to User::S3Input_LT1. Choose OK.
9. Under Connection Managers, select your Aurora connection.
10. Under Properties, change DelayValidation to True.
11. Choose Expressions. Under Property, select ConnectionString. Under Expression, enter: @[User::MyConnectionString]

For each table that you want to migrate, repeat all steps defined in the following sections: Output Data to Temporary File, Copy Temporary Data File to Amazon S3 Bucket, and Import Data from Temporary File to Amazon Aurora. Reuse the connection managers for Azure SQL and the Amazon Aurora cluster. The Flat File connection needs to be set up for each table separately.

In addition, for each table:

• Change the ConnectionString expression as follows:
  o For the second table: "D:\\Output\\" + @[User::DBName] + "_TL2.txt"
  o For the third table: "D:\\Output\\" + @[User::DBName] + "_TL3.txt"
  o and so on
• Under Expression, change the file name as follows:
  o "s3 cp D:\\Output\\" + @[User::DBName] + "_TL2.txt s3://yours3bucket"
  o "s3 cp D:\\Output\\" + @[User::DBName] + "_TL3.txt s3://yours3bucket"
  o and so on
• Change SourceVariable as follows:
  o For the second table: to S3Input_LT2
  o For the third table: to S3Input_LT3
  o and so on

Tracking Migration Status

The database migration completion status, either success or failed, is stored in the repository database. To track the status, follow these steps:

1. Drag and drop Execute SQL Task below the Sequence Container.
2. Select Sequence Container and connect the green arrow to Execute SQL Task.
3. Double-click Execute SQL Task.
4. Under Connection, select the connection to your Amazon RDS SQL Server Express repository database.
5. Under SQLStatement, enter:
   UPDATE [ConnectionsCfg] SET [Status] = 'Success', EndTime = GETDATE() WHERE [CfgID] = ?
6. Under Parameter Mapping, add a new record mapping the ConfigID variable to the parameter.
7. Choose OK.
8. Repeat steps 1-6, modifying the SQL statement as follows:
   UPDATE [ConnectionsCfg] SET [Status] = 'Failed', EndTime = GETDATE() WHERE [CfgID] = ?
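The Success and Failed update statements above implement a small status protocol against the ConnectionsCfg table. As an illustration only, using Python's sqlite3 as a stand-in for the Amazon RDS SQL Server repository (column types simplified, helper function names hypothetical), the pattern looks like this:

```python
import sqlite3
from datetime import datetime

# In-memory stand-in for the MigrationCfg repository database.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE ConnectionsCfg (
    CfgID INTEGER PRIMARY KEY, DBName TEXT, StartExecution INTEGER,
    Status TEXT, StartTime TEXT, EndTime TEXT)""")
db.execute("INSERT INTO ConnectionsCfg (DBName, StartExecution) "
           "VALUES ('TimeCard1', 0)")

def mark_started(cfg_id):
    # A package claims a database by flagging StartExecution.
    db.execute("UPDATE ConnectionsCfg SET StartExecution = 1, StartTime = ? "
               "WHERE CfgID = ?", (datetime.now().isoformat(), cfg_id))

def mark_finished(cfg_id, succeeded):
    # Mirrors the package's two final Execute SQL Tasks (Success / Failed paths).
    db.execute("UPDATE ConnectionsCfg SET Status = ?, EndTime = ? "
               "WHERE CfgID = ?",
               ("Success" if succeeded else "Failed",
                datetime.now().isoformat(), cfg_id))

mark_started(1)
mark_finished(1, succeeded=True)
print(db.execute("SELECT Status FROM ConnectionsCfg WHERE CfgID = 1")
        .fetchone()[0])  # prints "Success"
```

Because every package instance records its own start and end times, querying ConnectionsCfg at any point gives a live progress view across all databases being migrated.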
9. Select the green arrow connecting the Sequence Container with Execute SQL Task.
10. Under Properties, change Value to Failure.
11. Save and build the package. You can test the package by executing it directly from Visual Studio.

Migrate Multiple Azure SQL Databases

A package migrates a single database. To migrate multiple databases simultaneously, create a Windows batch file that calls the SSIS package. You can use the following commands to call the SSIS package:

cd C:\Program Files\Microsoft SQL Server\130\DTS\Binn
dtexec /F "C:\SSIS\SQLMigration.dtsx" /De your_package_password

Now you can execute the batch file simultaneously, as many times and for as many databases as you set up in the repository database. In the case of hundreds or thousands of databases, the migration process should be split across multiple EC2 instances. Here is one approach for setting up multiple instances:

1. Determine the optimal number of databases that can be migrated by a single EC2 instance (Migration Server). For instance, you can start by test-migrating 20 databases using a single instance. By monitoring the CPU and memory usage of the Migration Server, you can either increase or decrease the number of databases. You could also change to a larger EC2 instance type.
2. In the Windows startup, set up the execution of multiple migration scripts, up to the maximum determined in the previous step.
3. Create an AMI of the instance.26
4. Create an Auto Scaling group based on the AMI, with the total number of EC2 instances required to migrate all databases.27

Note: You can find an example of an SSIS package on the Migration Server on the DBTools volume in /Apps/SQLMigrationS3.dtsx, or you can download it from http://rh-migration-blog.s3.amazonaws.com/SQLMigrationS3.dtsx

After the Migration

When your databases are running on Amazon Aurora, here are a few suggestions for next steps:

• Review the best practices for Amazon Aurora.
• Review and optimize indexes and queries.
• Monitor your Amazon Aurora DB cluster.
• Consider Amazon Aurora with PostgreSQL compatibility as an alternative to Amazon Aurora with MySQL compatibility.

Conclusion

This whitepaper described one method for migrating multi-tenant Microsoft Azure SQL databases to Amazon Aurora. Other methods exist. We tested our solution several times using the following configurations:

• Source databases:
  o 10 databases, each with 10 tables
  o Each table had 500K records
  o The size of a single database was ~450 MB
• Destination database:
  o A single Amazon Aurora cluster running on a db.r3.8xlarge instance class
  o 10 packages were executed simultaneously on an EC2 m4.4xlarge instance type
• Total migration time for all 10 databases: ~3 minutes

We found that the results were consistent across all of the tests we performed.

Contributors

The following individuals and organizations contributed to this document:

• Remek Hetman, Senior Cloud Infrastructure Architect, Amazon Web Services
• Yoav Eilat, Senior Product Marketing Manager, Amazon Web Services

Further Reading

For additional information, see the following:

• https://aws.amazon.com/rds/aurora/
• https://aws.amazon.com/documentation/SchemaConversionTool/
• https://aws.amazon.com/cloudformation/
• https://aws.amazon.com/vpc/

Document Revisions

Date          Description
August 2017   First publication

Notes

1. https://aws.amazon.com/dms/
2. https://aws.amazon.com/rds/aurora/
3. https://msdn.microsoft.com/en-us/library/aa479086.aspx
4. https://aws.amazon.com/documentation/SchemaConversionTool/
5. https://docs.microsoft.com/en-us/sql/integration-services/ssis-how-to-create-an-etl-package
6. https://aws.amazon.com/redshift/
7. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
8. https://aws.amazon.com/rds/
9. https://aws.amazon.com/s3/
10. https://aws.amazon.com/vpc/
11. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
12. https://aws.amazon.com/cloudformation/
13. http://docs.aws.amazon.com/AmazonVPC/latest/GettingStartedGuide/getting-started-ipv4.html
14. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.CreateVPC.html
15. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html#CreatingSecurityGroups
16. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
17. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Authorizing.AWSServices.html
18. http://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html
19. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.CreateInstance.html
20. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Authorizing.AWSServices.html
21. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Authorizing.AWSServices.html#Aurora.Authorizing.AWSServices.AddRoleToDBCluster
22. http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/EC2_GetStarted.html
23. https://aws.amazon.com/premiumsupport/knowledge-center/retrieve-windows-admin-password/
24. https://docs.microsoft.com/en-us/azure/sql-database/sql-database-export
25. https://dev.mysql.com/downloads/workbench/
26. http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/Creating_EBSbacked_WinAMI.html
27. http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/Creating_EBSbacked_WinAMI.html


Migrating Oracle Database Workloads to Oracle Linux on AWS

Guide, January 2020

This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Notices

Customers are responsible
for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Overview
  Amazon RDS
  Oracle Linux AMI on AWS
  Support and Updates
Lift and Shift to AWS
Migration Path Matrix
Migration Paths
  Red Hat Linux to Oracle Linux
  SUSE Linux to Oracle Linux
  Microsoft Windows to Oracle Linux
Migration Methods
  Amazon EBS Snapshot
  Oracle Data Guard
  Oracle RMAN Transportable Database
  Oracle RMAN Cross-Platform Transportable Database
  Oracle Data Pump Export/Import Utilities
  AWS Database Migration Service
  Other Database Migration Methods
Enterprise Application Considerations
  SAP Applications
  Oracle E-Business Suite
  Oracle Fusion Middleware
Conclusion
Contributors
Document Revisions

About this Guide

Oracle databases can run on different operating systems (OS) in on-premises data centers, such as Solaris (SPARC), IBM AIX, and HP-UX. Amazon Web Services (AWS) supports Oracle Linux 6.4 and higher for Oracle databases. This guide highlights the migration paths available between different operating systems and Oracle Linux on AWS. These migration paths are applicable for migrations from any source: on-premises, AWS, or other public cloud environments.

Overview

Oracle workloads benefit tremendously from many features of the AWS Cloud, such as scriptable infrastructure, instant provisioning and deprovisioning, scalability, elasticity, usage-based billing, managed database services, and the ability to support a wide variety of operating systems (OSs). When migrating your workloads, choosing which operating system to run them on is a crucial decision. We highly recommend that you choose an Oracle-supported operating system to run Oracle software on AWS. You can use the following Oracle-supported operating systems on AWS:

• Oracle Linux
• Red Hat Enterprise Linux
• SUSE Linux Enterprise Server
• Microsoft Windows Server

Specific Oracle-supported operating systems can be used for specific database, middleware, and application workloads. For example, SAP workloads on AWS require that Oracle Database be run on Oracle Linux 6.4 or higher. You have many methods for migrating your Oracle databases to Oracle Linux on AWS. This guide documents the different migration paths available for the various source operating systems. It covers migrations from any source: on-premises, AWS, or other public cloud environments. Each migration path offers distinct advantages in terms of downtime and human effort. You can choose the best migration path for your business based on your specific needs.

Amazon RDS

For most workloads, a managed database service is the preferred method. Amazon Relational Database Service (Amazon RDS) is a managed service that makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security, and compatibility they need. Amazon RDS is available on several database instance types, optimized for memory, performance, or I/O. In addition, Amazon RDS provides you with six familiar database engines to choose from, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server. You can use the AWS Database Migration Service (AWS DMS) to easily migrate or replicate your existing databases to Amazon RDS.

Amazon RDS for Oracle supports Oracle Database Enterprise Edition, Standard Edition, Standard Edition 1, and Standard Edition 2. The Amazon RDS Oracle Standard Editions support both Bring Your Own License (BYOL) and License Included (LI). If you are exploring other database platforms, Amazon RDS offers you a choice of database engines, along with tools such as AWS Database Migration Service (AWS DMS) and the AWS Schema Conversion Tool (AWS SCT) to make the migration process easier.

Oracle Linux AMI on AWS

If you choose not to use a managed database and instead manage the Oracle database yourself, you can deploy it on Amazon Elastic Compute Cloud (Amazon EC2). Oracle Linux EC2 instances can be launched using an Amazon Machine Image (AMI) available in the AWS Marketplace or as a Community AMI. You can also bring your own Oracle Linux AMI or existing Oracle Linux license to AWS. In that case, your technology stack is similar to the one used by Amazon RDS for Oracle, which also runs on Linux-based operating systems. Use migration tools such as Oracle Data Pump Export/Import or AWS DMS. These tools take care of migration from different OS platforms to EC2 and/or RDS for Oracle.

The AWS Marketplace listing for Oracle Linux is through third-party vendors. You will find a list of Community AMIs and Public AMIs by searching for the term "OL6" or "OL7". Public AMI listings are available in the EC2 section of the AWS Management Console under Images, then AMIs. Two types of AMIs are available for the same release version:

• Hardware Virtual Machine (HVM)
• Paravirtual Machine (PVM)

HVM is an approach that uses the virtualization features of the CPU chipset. If a virtual machine runs in HVM mode, the kernel of the OS may run unmodified. PVM does not use the virtualization features of the CPU chipset; it uses a modified kernel to achieve virtualization. AWS supports both HVM and PVM AMIs. The Unbreakable Enterprise Kernel for Oracle Linux natively includes PV drivers. SAP has specific recommendations of HVM-virtualized AMIs for SAP installations. The Oracle Linux AMIs published by Oracle are available in the list of Community AMIs in the AWS Marketplace. Community AMIs do not have any official support. Refer to the following table for some of the AMI listings:

Table 1: Community AMIs

  Version                AMI
  Oracle Linux 7.3 HVM   OL7.3 x86_64 HVM
  Oracle Linux 7.3 PVM   OL7.3 x86_64 PVM
  Oracle Linux 7.2 HVM   OL7.2 x86_64 HVM
  Oracle Linux 7.2 PVM   OL7.2 x86_64 PVM
  Oracle Linux 6.7 HVM   OL6.7 x86_64 HVM
  Oracle Linux 6.7 PVM   OL6.7 x86_64 PVM

Anyone can upload and share an AMI, so use caution when selecting one. Reach out to AWS Business Support or your vendor support for assistance. In addition to using an existing AMI, you can import your own virtual machine images as AMIs in AWS. Refer to the VM Import/Export page for more details. This option is highly useful when you have heavily customized virtual machine images available in other cloud
environments or your own data center.

Support and Updates

Oracle offers Basic, Basic Limited, Premier, and Premier Limited commercial support for Oracle Linux EC2 instances. Refer to Oracle's cloud license document for the instance requirements. The following table shows the level of support available for various AMI options.

Table 2: Support levels

  Option                          Support level
  AWS Marketplace                 Basic Support and Basic Limited
  BYOL (Bring Your Own License)   Basic, Basic Limited (up to 8 virtual cores), Premier, Premier Limited (up to 8 virtual cores)
  Community AMI                   No commercial support

If you have an Oracle Linux support contract, you can register your EC2 instance using the uln_register command on your EC2 instance. This command requires you to have access to an Oracle Linux CSI number. Review the Oracle Linux Unbreakable Linux Network (ULN) user guide for the steps for ULN channel subscription and how to register your Oracle Linux instance.

Oracle Linux instances require internet access to the public yum repository or Oracle ULN in order to download packages. All Oracle Linux AMIs can access the public yum repository; only licensed Oracle Linux systems can access the Oracle ULN repository. If the EC2 instance is on a private subnet, use a proxy server or a local yum repository to download packages. Oracle Linux systems (OL6 or higher) work with the Spacewalk system for yum package management. A Spacewalk system can be in a public subnet while the Oracle Linux systems are in a private subnet.

The following sections detail the migration path methods available for Oracle databases. These migration methods are available for Oracle 10g, 11g, 12c, and 18c. For other Oracle products, see the respective product support notes in Oracle's MyOracleSupport portal.

Lift and Shift to AWS

Existing Oracle workloads can be migrated from an existing on-premises or virtualized environment to Amazon EC2 with no changes required (lift and shift) using CloudEndure Migration. CloudEndure Migration executes a highly automated machine conversion and orchestration process, allowing even the most complex applications and databases to run natively in AWS without compatibility issues. CloudEndure Migration uses a continuous block-level replication process. Servers are replicated to a staging area temporarily until you are ready to cut over to your desired instance target.

CloudEndure Migration replicates your existing server infrastructure via its client software as a background process, without application disruption or performance impact. Once replication is complete, CloudEndure Migration allows you to cut over your servers to the instance family and type of your choice via customized blueprints. Using your blueprint, you can test your deployment before committing to an instance family and type. CloudEndure Migration supports Oracle Linux, Red Hat Linux, Windows Server, and SUSE Linux. For detailed version compatibility information, see Supported Operating Systems. CloudEndure Migration is provided at no cost for migrations into AWS.

Migration Path Matrix

The migration path matrix assumes that only the operating system changes and all other software versions remain the same. We recommend that you change other components, such as the Oracle database version or Oracle database patch level, separately to avoid complexity. The database version and any other application version in both the source and target EC2 instances should remain the same to prevent deviations in the migration path. There are also vendor data replication and migration tools available that can support platform migration. See the Migration Methods section for the list of methods.

Table 3: Migration methods

  Source database operating system   Migration methods
  Red Hat Linux                      Amazon EBS snapshot, Oracle Data Guard
  SUSE Linux                         Amazon EBS snapshot, Oracle Data Guard
  Microsoft Windows                  Oracle Data Guard (11g), RMAN Transportable Tablespace
  HP-UX, Solaris (SPARC)             RMAN Cross-platform Transportable Tablespace

Migration Paths

This section presents three paths for migrating to Oracle Linux on AWS.

Red Hat Linux to Oracle Linux

Oracle Linux and Red Hat Linux are compatible operating systems. When migrating from Red Hat Linux to Oracle Linux, migrate to the same version level, for example, Red Hat Linux 6.4 to Oracle Linux 6.4, or Red Hat Linux 7.2 to Oracle Linux 7.2. Also ensure that both operating systems are patched to the same level. You can migrate Red Hat Linux to Oracle Linux using either of these methods:

• Amazon Elastic Block Store (Amazon EBS) snapshot
• Oracle Data Guard

An EBS snapshot is a faster migration method than Oracle Data Guard for non-Oracle Automatic Storage Management (ASM) databases. If your databases use Oracle ASM, then Oracle Data Guard is a better choice. Other standard methods, such as Oracle Recovery Manager (RMAN) and the Oracle Export and Import utilities, can work across operating systems. However, these methods require a longer downtime and a greater amount of human effort. Choose the Export and Import utilities method only if your specific use case requires it. See the Migration Methods section for details on each migration method.

SUSE Linux to Oracle Linux

SUSE Linux Enterprise Server (SLES) is an enterprise-grade Linux offering from SUSE. Oracle Linux and SUSE Linux are binary compatible. That is, you can move an executable directly from SUSE Linux
It will work, provided that the C compiler and the bit architecture (32-bit or 64-bit) match. SLES follows a different versioning scheme than Oracle Linux, so there is no easy way to match similar operating system versions. Additionally, the Linux kernel version, gcc version, and bit architecture must match. Contact SLES Technical Support to find which Oracle Linux version is compatible with your SLES operating system.

SLES can also be migrated using EBS snapshots and Oracle Data Guard, just as with Red Hat Linux. Again, these methods have less downtime and require less human effort than Oracle RMAN or Oracle Export/Import. An EBS snapshot is a much quicker and simpler method than Oracle Data Guard.

Whichever method you select, we recommend that you do not copy the binaries from SLES, but rather perform a fresh Oracle home installation on your Oracle Linux EC2 instance. The reason for this recommendation is to properly generate the Oracle Inventory directory (oraInventory) on the new Oracle Linux EC2 instance and to have the files created by root.sh. Simply copying the Oracle home may not create oraInventory, and root.sh may not create the new files. Also ensure that the patch level of the newly created database binary home is exactly the same as the one on the SLES instance. See the Migration Methods section for details on each migration method.

Microsoft Windows to Oracle Linux

Microsoft Windows is a completely different operating system from the various types of Linux. The following migration methods are available for Windows:
• Oracle Data Guard (heterogeneous mode)
• Oracle RMAN transportable tablespace (TTS) backup and restore

The Oracle Data Guard method requires much less downtime than the Oracle RMAN TTS method. The RMAN TTS method still requires copying the files from your on-premises data center or source database servers to AWS, and files of significant size will extend the migration time. Transferring a large volume of files over the network takes time; AWS Import/Export and AWS Snowball can help by migrating the data offline using physical media devices. See the Migration Methods section for details on each migration method.

Migration Methods

Your choice of migration method depends on your specific use case and context. Repeated testing and validation is necessary before finalizing a method and performing the migration on the production workload.

Amazon EBS Snapshot

An EBS snapshot is a storage-level backup mechanism. It preserves the contents of the EBS volume as a point-in-time copy. If you are migrating databases from RHEL or SUSE to Oracle Linux, an EBS snapshot is one of the fastest migration methods. This method is applicable only if the source database is already on AWS and running on Amazon EBS storage. It is not applicable for on-premises databases or non-AWS cloud services.

The high-level migration steps are:
1. Create a new Amazon EC2 instance based on an Oracle Linux AMI.
2. Install an Oracle home on the new Oracle Linux EC2 instance.
3. Create the new database parameter files and TNS files.
4. Take an EBS snapshot of the volumes in the older EC2 instance (Red Hat Linux, SUSE Linux). If possible, we recommend that you take the EBS snapshot during downtime or off-peak hours.
5. Create a new volume based on the EBS snapshot and mount it on your Oracle Linux EC2 instance.
6. Perform the post-migration steps, such as verifying directory and file permissions.
7.
Start the Oracle database on the Oracle Linux EC2 instance.

You can take a snapshot of the Oracle home as well as the database files. However, we recommend that you install the Oracle home binaries separately on the new Oracle Linux EC2 instance. The Oracle home installation creates a few files in the operating system root that may not be available if you simply create a snapshot and mount the binary home. The EBS snapshot can be taken while the database is running, but the snapshot will take longer to complete.

Conditions for Taking an Amazon EBS Snapshot

• When you create the new volume on the target Oracle Linux EC2 instance, ensure that the volume has the same path as on the source EC2 instance. If database files reside in the /oradata mount on the source EC2 instance, the newly created volume from the snapshot should be mounted as /oradata on the target Oracle Linux EC2 instance. It is also recommended, but not required, to keep the Oracle database binary home the same between source and target EC2 instances.
• The Unix ID numbers for the Oracle user and for the dba and oinstall groups should be the same as on the source operating system. For example, the Oracle Linux 11g/12c preinstall RPM creates an Oracle user with Unix ID number 54321, which may not be the same as the source operating system ID. If it is different, change the Unix ID number so that the source and target EC2 instances match.
• An EBS snapshot works well if all the database files are in a single EBS volume. The complexity of an EBS snapshot increases when you use multiple EBS volumes or Oracle ASM. Refer to Oracle MOS Note 604683.1 for recovering crash-consistent snapshots. Oracle 12c has additional features to recover from backups taken from crash-consistent snapshots.

For more details, see Amazon EBS Snapshots.
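The user and group ID condition above can be verified with a small script before mounting the restored volume. The sketch below is illustrative: it compares ID maps you would collect from each instance (for example with `id -u oracle` and `getent group dba`), and 54321 is the preinstall RPM default mentioned above.

```python
# Illustrative check: compare Oracle-related UIDs/GIDs between source
# and target instances before mounting the snapshot-based volume.
def id_mismatches(source_ids: dict, target_ids: dict) -> list:
    """Return the user/group names whose numeric IDs differ between systems."""
    return sorted(
        name for name in source_ids
        if target_ids.get(name) != source_ids[name]
    )

# IDs as collected on each host, e.g. via `id -u oracle`, `getent group dba`.
source = {"oracle": 501, "dba": 502, "oinstall": 503}
target = {"oracle": 54321, "dba": 502, "oinstall": 503}  # preinstall RPM default UID

print(id_mismatches(source, target))  # ['oracle'] -> change the UID before mounting
```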
Oracle Data Guard

Oracle Data Guard technology replicates the entire database from one site to another; it can perform physical replication as well as logical replication. Oracle Data Guard operates in homogeneous mode if the primary and standby database operating systems are the same, and the normal Oracle Data Guard setup works in this case. However, if you are migrating from 32-bit to 64-bit, or from AMD to Intel processors or vice versa, it is considered a heterogeneous migration even if the operating system is the same. Heterogeneous mode requires additional patches and steps while operating Oracle Data Guard.

Homogeneous Mode

In homogeneous mode, the source and destination operating systems are the same. Oracle Data Guard sends the changes from the primary (source) database to the standby database. If physical replication is set up, the changes to the entire database are captured in redo logs and sent to the standby database, which can be configured to apply the changes immediately or at a delayed interval. If logical replication is set up, changes are captured for a configured list of tables or schemas; logical replication does not fit the use case of migrating an entire database unless your situational constraints require it. See the Oracle Data Guard Concepts and Administration documentation for both physical and logical standby setups.

Heterogeneous Mode

In heterogeneous mode, Oracle Data Guard allows primary and standby databases on different operating systems and different binary levels (32-bit or 64-bit). Until Oracle 11g, Oracle Data Guard required that the primary and standby databases have the same operating system level. From 11g onward, Oracle Data Guard supports mixed-mode configurations: the source primary database can have a different operating system or binary level. A heterogeneous Oracle Data Guard setup is recommended for large and very large databases.

We present a few suggestions below that can further optimize your migration. It is essential that the Oracle database home on both Windows and Linux has the latest supported version of the database (11.2.0.4 or 12.1.0.2) along with the latest quarterly patch updates; multiple migration issues were fixed in the latest patch updates. Due to the mixed operating systems in this migration path, we recommend that you use the Data Guard command-line interface (DGMGRL) to set up Oracle Data Guard and perform the role transition.

See Oracle MOS Note 413484.1 for more details on using Oracle Data Guard to transition from Microsoft Windows to Linux; this migration requires some additional patches, which are detailed in the Note. Also see MOS Note 414043.1 for the role transition when you migrate from Windows 32-bit to Oracle Linux 64-bit. Detailed steps for setting up Oracle Data Guard between Windows and Linux are available in Oracle MOS Note 881421.1.

To set up Oracle Data Guard between Windows and Linux, Oracle mentions the RMAN Active Duplicate method. However, this method impacts source database performance and creates heavy network traffic between the source and target database servers. An alternative to Active Duplicate is the RMAN cross-platform backup method (Oracle MOS Note 1079563.1):

1. Take an EBS snapshot of the Oracle database on Windows. Mount it on another Windows server in the STARTUP MOUNT stage.
2. Create an RMAN cold backup of the newly mounted Oracle database on Windows. This step avoids the error described in Oracle MOS Note 2003327.1.
3. Copy the RMAN backup files to Linux using SFTP or SCP.
4. On Oracle Linux, issue the duplicate database for standby command using the RMAN backup files. This step replaces the duplicate command in Step 3 of Oracle MOS Note 1079563.1:

   DUPLICATE TARGET DATABASE FOR STANDBY BACKUP LOCATION='' NOFILENAMECHECK;

You can use SQL commands or DGMGRL to start Oracle Data Guard synchronization between the primary database on Windows and the standby database on Oracle Linux. Refer to the role transition notes mentioned previously to switch the primary database from Windows to Linux. If the source database contains Oracle OLAP, refer to Oracle MOS Note 352306.1; we recommend backing up the user-created OLAP Analytical Workspace ahead of time using the Export utility.

Oracle RMAN Transportable Database

Oracle recommends the Oracle RMAN TTS method when migrating between completely different operating systems. If the underlying chipset is different, such as Sun SPARC versus Intel, then Oracle recommends the cross-platform transportable tablespace (XTTS) method. Different chipsets have different endian formats; the endian format dictates the order in which bytes are stored. The Sun SPARC chipset stores bytes in big-endian format, while the Intel series stores them in little-endian format. TTS can be used when both Windows and Oracle Linux run on the same chipset, for example Intel 64-bit.

Oracle has published a detailed blog post on migrating from the Windows (Intel) platform to the Linux (Intel) platform using RMAN TTS. This method migrates the entire database at once instead of individual tablespaces. It involves making your source Windows database read-only and requires downtime; hence it is advised for small and medium-sized databases (under 400 GB) and wherever downtime can be accommodated. For large databases, run Oracle Data Guard in heterogeneous mode.
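The big-endian versus little-endian distinction above is easy to see with Python's struct module. This standalone demo only illustrates the two byte orders involved; it is not part of any migration tooling.

```python
import struct

# The same 32-bit integer, stored in the two byte orders discussed above.
value = 0x0A0B0C0D

big = struct.pack(">I", value)     # SPARC-style big-endian
little = struct.pack("<I", value)  # Intel-style little-endian

print(big.hex())     # 0a0b0c0d  (most significant byte first)
print(little.hex())  # 0d0c0b0a  (least significant byte first)

# RMAN XTTS exists precisely because these on-disk layouts are incompatible.
assert big == bytes(reversed(little))
```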
Oracle RMAN Cross-Platform Transportable Database

If you are migrating from different endian platforms, such as Sun or HP, refer to Oracle MOS Note 371556.1 for detailed step-by-step instructions. This method uses the XTTS feature in RMAN. It is possible to reduce downtime if you are migrating from Oracle Database 11g or later by using cross-platform incremental backup; refer to Oracle MOS Note 1389592.1 for instructions. Review the Oracle whitepaper Platform Migration Using Transportable Tablespaces: Oracle Database 11g Release 1 for RMAN 11g XTTS best practices and recommendations.

Oracle Data Pump Export/Import Utilities

The Oracle Data Pump Export/Import utilities can migrate between different endian formats. This is a more time-consuming method than Oracle RMAN, but it is useful when you want to combine the migration with other variables, such as migrating certain schemas from Oracle 10g on an HP-UX on-premises server to Oracle 11g on Oracle Linux on AWS. To reduce the downtime, leverage the parallel methods in Oracle Data Pump Export/Import. See the Oracle whitepaper Parallel Capabilities of Oracle Data Pump for recommendations on how to leverage them.

AWS Database Migration Service

AWS Database Migration Service (AWS DMS) is a managed service that you can use to migrate data from on-premises or from your Oracle DB instance to another EC2 or RDS instance. AWS DMS supports Oracle versions 10g, 11g, 12c, and 18c for both the source and the target instances. A key advantage of AWS DMS is that it requires minimal downtime.

AWS SCT can be used together with AWS DMS. It analyzes the source database and generates a report on which automatic and manual migration steps will be required for the given source and target combination; this report helps in planning your migration activities. AWS DMS does not migrate PL/SQL objects, but AWS SCT helps you locate them and alerts you to the migration steps needed.
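As an illustration of the parallel Data Pump approach mentioned above, an export invocation might be assembled like the sketch below. The schema names, directory object, and parallel degree are placeholders; `PARALLEL`, `DUMPFILE`, and the `%U` file-name substitution are standard Data Pump parameters.

```python
# Illustrative: build an expdp command line that uses Data Pump parallelism.
# Credentials, paths, schema names, and the parallel degree are placeholders.
def build_expdp_command(schemas, parallel=4, directory="DATA_PUMP_DIR"):
    """Compose an expdp invocation; %U lets each parallel worker write its own file."""
    return (
        f"expdp system/**** SCHEMAS={','.join(schemas)} "
        f"DIRECTORY={directory} DUMPFILE=exp_%U.dmp "
        f"LOGFILE=exp.log PARALLEL={parallel}"
    )

print(build_expdp_command(["HR", "SALES"], parallel=8))
```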
You can use Oracle Data Pump Export/Import filters to migrate the PL/SQL objects. AWS DMS supports Oracle ASM as a source. AWS DMS can also replicate data from the source database to the destination database on an ongoing basis, and you can use it to replicate the data until cutover is complete. AWS DMS can use both Oracle LogMiner and Oracle Binary Reader for change data capture. See Using an Oracle Database as a Source for AWS DMS for the available configuration options and known limitations for a source Oracle database.

Other Database Migration Methods

There are other methods that can help with database migration across operating system platforms. Oracle MOS Note 733205.1 provides a generic overview of some of them, such as RMAN Duplicate and Oracle GoldenGate. Some enterprise applications have additional tools and migration paths that are specific to their own applications. Finally, there are independent software vendors that offer database migration tools on the AWS Marketplace; one of these tools may be the best fit for your scenario.

Enterprise Application Considerations

SAP Applications

If you are running your SAP applications with an Oracle database, you have many methods for migrating from one operating system to another. All of the following migration methods are supported by SAP.

Note: You must follow the standard SAP system copy/migration guidelines to perform your migration. SAP requires that a heterogeneous migration be performed by SAP-certified technical consultants. Check with SAP support for more details.

SAP Software Logistics Toolset

Software Provisioning Manager (SWPM) is a Software Logistics (SL) Toolset provided by SAP to install, copy, and transform SAP products based on SAP NetWeaver AS ABAP and AS Java. You can use SWPM to perform both heterogeneous and homogeneous migrations. If the endian type of your source operating system is the same as the target, your migration is considered a homogeneous system copy; otherwise, it is considered a heterogeneous system copy or migration. The SWPM tool uses the R3load export/import methodology to copy or migrate your database. If you need to minimize the migration downtime, consider using the parallel export/import method provided by SWPM. See the Software Logistics Toolset documentation page for more details.

Oracle Lifecycle Migration Service

Oracle developed a migration service called Oracle ACS Lifecycle Management Service (formerly known as Oracle-to-Oracle Online [Triple O] and Oracle-to-Oracle [O2O]) to help SAP customers migrate their existing Oracle database to another operating system. With this service, you can migrate your database while the SAP system is online, which minimizes the downtime required for the migration. The service uses Oracle's built-in functionality and Oracle GoldenGate. It is a paid service and may require additional licensing to use Oracle GoldenGate; see SAP OSS Note 1508271 for more details. This service only helps with the database migration step; you still need to complete all the other SAP standard migration steps.

Oracle RMAN

For SAP applications, you can use native Oracle functionality to migrate your database to another platform. You can use the Oracle RMAN transportable database feature to migrate the database when the endian type of the source and target platforms is the same. Starting with Oracle 12c, the Oracle RMAN cross-platform transportable database and tablespace features can be used to migrate a database across platforms with different endian types. See SAP OSS Notes 105047 and 1367451 for more details.
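The SWPM distinction described above (same endian type means a homogeneous system copy, otherwise heterogeneous) boils down to a simple comparison. The platform-to-endianness table in this sketch is a partial, illustrative mapping of our own, not an SAP artifact.

```python
# Illustrative: classify an SAP system copy as homogeneous or heterogeneous
# based on the endian type of the source and target platforms.
ENDIANNESS = {  # partial mapping, for illustration only
    "Oracle Linux (x86_64)": "little",
    "RHEL (x86_64)": "little",
    "Windows (x86_64)": "little",
    "Solaris (SPARC)": "big",
    "AIX (POWER)": "big",
    "HP-UX (PA-RISC)": "big",
}

def copy_type(source: str, target: str) -> str:
    """Same endian type -> homogeneous system copy; otherwise heterogeneous."""
    if ENDIANNESS[source] == ENDIANNESS[target]:
        return "homogeneous"
    return "heterogeneous"

print(copy_type("RHEL (x86_64)", "Oracle Linux (x86_64)"))    # homogeneous
print(copy_type("Solaris (SPARC)", "Oracle Linux (x86_64)"))  # heterogeneous
```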
Oracle RMAN only helps with the database migration step; you still need to complete all the other SAP standard migration steps.

The following table summarizes the migration methods available to migrate your Oracle database to the Oracle Linux platform. We recommend that you evaluate all the available methods and choose the one that best suits your environment.

Table 4: Migration options for Oracle database to Oracle Linux

Source operating system | RMAN Transportable Database | RMAN Cross-Platform Transportable Database | Oracle Lifecycle Migration Service (O2O / Triple O) | SAP system copy/migration with SWPM (R3load Export/Import)
RHEL / SLES | Yes | Yes | Yes | Yes
Oracle Linux | Yes | Yes | Yes | Yes
Solaris (x86) | Yes | Yes | Yes | Yes
AIX / HP-UX / Solaris (SPARC) | No | Yes | Yes | Yes
Windows | No | Yes | Yes | Yes

Oracle E-Business Suite

For Oracle E-Business Suite (EBS) applications, you can follow the various migration paths previously described in this document. The following migration methods are available to migrate the database tier of Oracle E-Business Suite:

Table 5: Migration methods for Oracle E-Business Suite

Source operating system | Amazon EBS snapshot | Oracle Data Guard | RMAN Transportable Database
RHEL | Yes | Yes | Yes
SLES | Yes | Yes | Yes
Solaris x86 | No | Yes | Yes
IBM AIX / HP-UX / Solaris SPARC | No | No | No
Windows | No | Yes | Yes

If you are running on IBM AIX, HP-UX, or Solaris SPARC, consider other database migration methods, such as the Export/Import utilities.

Once you have migrated your database, complete the following post-migration steps:
• Set the environment variables in the new Oracle home, including PERL5LIB, PATH, and LD_LIBRARY_PATH.
• Ensure the NLS directory $ORACLE_HOME/nls/data/9idata is available in the new Oracle home.
• Implement and run AutoConfig on the new Oracle home. Once database tier AutoConfig is complete, you must run AutoConfig on the application tier as well.

RMAN Transportable Database

The RMAN transportable database feature converts the source database and creates new data files compatible with the destination operating system. This step involves placing the source database into read-only mode, so the RMAN transportable database method incurs more downtime. One option to minimize downtime is to use a physical standby of the source database for the RMAN transportable database conversion step. RMAN allows parallel conversion of the data files, thereby reducing the conversion time. See the Oracle whitepaper Platform Migration Using Transportable Database: Oracle Database 11g and 10g Release 2 for more details on platform migration using the RMAN transportable database feature.

Oracle maintains a master note (Oracle MOS Note 1377213.1) for platform migration:
• For Oracle EBS 11i, see Oracle MOS Note 729309.1.
• For Oracle EBS R12.0 and R12.1, see Oracle MOS Note 734763.1.
• For Oracle EBS R12.2, see Oracle MOS Note 2011169.1.

Migrating From 32-Bit to 64-Bit

For Oracle EBS applications, we recommend that you keep the bit level of the operating systems the same (for example, RHEL 32-bit to Oracle Linux 32-bit) in order to reduce variability in the migration process. If there is a driving need to change the bit level of the operating system during the migration, Oracle recommends that you follow a two-step approach in migrating the system to 64-bit.
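The post-migration checklist above lends itself to a quick script. This sketch is illustrative only: it merely verifies that the listed variables are set and computes the 9idata path under an assumed ORACLE_HOME layout; the sample values are placeholders.

```python
import os

# Illustrative post-migration sanity check for an EBS database tier.
REQUIRED_VARS = ["PERL5LIB", "PATH", "LD_LIBRARY_PATH"]

def missing_settings(env: dict) -> list:
    """Return the required environment variables absent from `env`."""
    return [v for v in REQUIRED_VARS if not env.get(v)]

def nls_9idata_path(oracle_home: str) -> str:
    """The directory that must exist per the post-migration checklist."""
    return os.path.join(oracle_home, "nls", "data", "9idata")

env = {"PATH": "/usr/bin", "LD_LIBRARY_PATH": "/u01/app/oracle/lib"}
print(missing_settings(env))                      # ['PERL5LIB']
print(nls_9idata_path("/u01/app/oracle/11.2.0"))  # /u01/app/oracle/11.2.0/nls/data/9idata
```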
The two-step migration path consists of setting up the application tier and then migrating the database tier. See MOS Note 471566.1 for detailed steps and post-migration checks on converting Oracle E-Business Suite from 32-bit to 64-bit.

Linux Containers

You can move your Oracle E-Business Suite R12.2 application tier to containers running Oracle Linux. Linux containers provide the flexibility to scale on demand, depending on the workloads. The application tier of Oracle E-Business Suite 12.2 is certified on Oracle Linux containers running the UEK R3 QU6 kernel. Oracle EBS application tier containers must be created with a privilege flag. See MOS Note 1330701.1 for further requirements and documentation.

Oracle Fusion Middleware

For Oracle application tier products such as Fusion Middleware, refer to the respective MOS upgrade support notes for the Oracle-recommended path to migrate the OS platform. For Fusion Middleware 11g, see MOS Support Note 1073206.1 for the platform migration path. For Oracle applications such as Oracle E-Business Suite, PeopleSoft, or similar products, check their respective Oracle MOS platform migration notes, or seek direction from the Oracle Support team for the recommended migration path for the particular product and version.

Conclusion

Your choice of migration path depends on your application, your specific business needs, and your SLAs. If you are already using AWS, Amazon EBS snapshots are the best choice if the prerequisites are satisfied. Whichever method you choose for the migration path, repeated testing and validation is necessary for a successful and seamless migration.

Contributors

Contributors to this document include:
• Bala Mugunthan, Sr. Partner Solution Architect – Global ISV, AWS
• John Bentley, Technical Account Manager, AWS
• Jayaraman Vellore Sampathkumar, AWS Oracle Solutions Architect, AWS
• Yoav Eilat, Sr. Product Marketing Manager, AWS

Document Revisions

Date | Description
January 2020 | Updated for latest technologies and services
Month 2018 | First publication
,General,consultant,Best Practices
Migrating_to_Apache_HBase_on_Amazon_S3_on_Amazon_EMR,"This paper has been archived. For the most recent version of this paper, see the HTML version: https://docs.aws.amazon.com/whitepapers/latest/migrate-apache-hbase-s3/migrate-apache-hbase-s3.html

Migrating to Apache HBase on Amazon S3 on Amazon EMR
Guidelines and Best Practices
May 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided ""as is"" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 1
Introduction to Apache HBase 1
Introduction to Amazon EMR 2
Introduction to Amazon S3 3
Introduction to EMRFS 3
Running Apache HBase directly on Amazon S3 with Amazon EMR 3
Use cases for Apache HBase on Amazon S3 5
Planning the Migration to Apache HBase on Amazon S3 6
Preparation task 7
Selecting a Monitoring Strategy 7
Planning for Security on Amazon EMR and Amazon S3 9
Encryption 9
Authentication and Authorization 10
Network 12
Minimal AWS IAM Policy 12
Custom AMIs and Applying Security Controls to Harden your AMI 13
Auditing 14
Identifying Apache HBase and EMRFS Tuning Options 16
Apache HBase on Amazon S3 configuration properties 16
EMRFS Configuration Properties 36
Testing Apache HBase and EMRFS Configuration Values 37
Options to approach performance testing 37
Preparing the Test Environment 39
Preparing your AWS account for performance testing 39
Preparing Amazon S3 for your HBase workload 40
Amazon EMR Cluster Setup 42
Troubleshooting 45
Migrating and Restoring Apache HBase Tables on Apache HBase on Amazon S3 46
Data Migration 46
Data Restore 47
Deploying into Production 48
Preparing Amazon S3 for Production load 48
Preparing the Production environment 48
Managing the Production Environment 49
Operationalization tasks 49
Conclusion 52
Contributors 52
Further Reading 52
Document Revisions 53
Appendix A: Command Reference 54
Restart HBase 54
Appendix B: AWS IAM Policy Reference 55
Minimal EMR Service Role Policy 55
Minimal Amazon EMR Role for Amazon EC2 (Instance Profile) Policy 58
Minimal Role Policy for User Launching Amazon EMR Clusters 60
Appendix C: Transparent Encryption Reference 63

Abstract

This whitepaper provides an overview of Apache HBase on Amazon S3 and guides data engineers and software developers in the migration of an on-premises or HDFS-backed Apache HBase cluster to Apache HBase on Amazon S3. The whitepaper offers a migration plan that includes detailed steps for each stage of the migration, including data migration, performance tuning, and operational guidance.

Introduction

In 2006, Amazon Web Services (AWS) began offering IT infrastructure services to businesses in the form of web services, now commonly known as cloud computing. One of the key benefits of cloud computing is the opportunity to replace upfront capital infrastructure expenses with low variable costs that scale with your business. With the cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance; instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster. Today, AWS provides a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers hundreds of thousands of businesses in 190 countries around the world.

Many businesses have been taking advantage of the unique properties of the cloud by migrating their existing Apache Hadoop workloads, including Apache HBase, to Amazon EMR and Amazon Simple Storage Service (Amazon S3). The ability to separate your durable storage layer from your compute layer, to have flexible and scalable compute, and to integrate easily with other AWS services provides immense benefits and opens up many opportunities to reimagine your data architectures.

Introduction to Apache HBase

Apache HBase is a massively scalable, distributed big data store in the Apache Hadoop ecosystem. It is an open-source, non-relational, versioned database that runs on top of the Apache Hadoop Distributed File System (HDFS). It is built for random, strictly consistent, real-time access for tables with billions of rows and millions of columns. It has tight integration with Apache Hadoop, Apache Hive, and Apache Phoenix, so you can easily combine massively parallel analytics with fast data access through a variety of interfaces. The Apache HBase data model, throughput, and fault tolerance are
a good match for workloads in ad tech, web analytics, financial services, applications using time-series data, and many more.

Here are some of the features and benefits you get when you run Apache HBase:
• Strongly consistent reads and writes – when a writer returns, all of the readers will see the same value.
• Scalability – individual Apache HBase tables can comprise billions of rows and millions of columns. Apache HBase stores data in a sparse form to conserve space. You can use column families and column prefixes to organize your schemas and to indicate to Apache HBase that the members of a family have a similar access pattern. You can also use timestamps and versioning to retain old versions of cells.
• Graphs and time series – you can use Apache HBase as the foundation for a more specialized data store. For example, you can use Titan for graph databases and OpenTSDB for time series.
• Coprocessors – you can write custom business logic (similar to a trigger or a stored procedure) that runs within Apache HBase and participates in query and update processing (refer to Apache HBase Coprocessors to learn more).
• OLTP and analytic workloads – you can run massively parallel analytic workloads on data stored in Apache HBase tables by using tools such as Apache Hive and Apache Phoenix. Apache Phoenix provides ACID transaction capabilities via standard SQL and JDBC APIs. For details on how to use Apache Hive with Apache HBase, refer to Combine NoSQL and Massively Parallel Analytics Using Apache HBase and Apache Hive on Amazon EMR.

You also get easy provisioning and scaling, access to a pre-configured installation of HDFS, and automatic node replacement for increased durability.

Introduction to Amazon EMR

Amazon EMR provides a managed Apache Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon Elastic Compute Cloud (Amazon EC2) instances. You can also run other popular distributed engines such as Apache Spark, Apache Hive, Apache HBase, Presto, and Apache Flink in Amazon EMR, and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB. Amazon EMR securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), streaming, machine learning, financial analysis, scientific simulation, and bioinformatics. For an overview of Amazon EMR, refer to Overview of Amazon EMR Architecture and Overview of Amazon EMR.

Introduction to Amazon S3

Amazon Simple Storage Service (Amazon S3) is durable, highly available, and infinitely scalable object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web. With regard to Apache HBase and Apache Hadoop, storing data on Amazon S3 gives you more flexibility to run and shut down Apache Hadoop clusters when you need to. Amazon S3 is commonly used as a durable store for HDFS workloads. Due to the durability and performance scalability of Amazon S3, Apache Hadoop workloads that store data on Amazon S3 no longer require the 3x replication needed when the data is stored on HDFS. Moreover, you can resize and shut down Amazon EMR clusters with no data loss, or point multiple Amazon EMR clusters at the same data in Amazon S3.

Introduction to EMRFS

The Amazon EMR platform consists of several layers, each with specific functionality and capabilities. At the storage layer, in addition to HDFS and the local file system, Amazon EMR offers the Amazon EMR File System (EMRFS), an implementation of HDFS that all Amazon EMR clusters use for reading and writing
files to Amazon S3. EMRFS features include data encryption and data authorization. Data encryption allows EMRFS to encrypt the objects it writes to Amazon S3 and to decrypt them during reads. Data authorization allows EMRFS to use different AWS Identity and Access Management (IAM) roles for EMRFS requests to Amazon S3, based on cluster users, groups, or the location of EMRFS data in Amazon S3. For more information, refer to Using EMR File System (EMRFS).

Running Apache HBase directly on Amazon S3 with Amazon EMR

When you run Apache HBase on Amazon EMR version 5.2.0 or later, you can enable HBase on Amazon S3. By using Amazon S3 as a data store for Apache HBase, you can separate your cluster's storage and compute nodes. This enables you to save costs by sizing your cluster for your compute requirements, instead of paying to store your entire dataset with 3x replication in the on-cluster HDFS.

Many customers have taken advantage of the numerous benefits of running Apache HBase on Amazon S3 for data storage, including lower costs, data durability, and easier scalability. Customers such as the Financial Industry Regulatory Authority (FINRA) have lowered their costs by 60% by moving to an HBase on Amazon S3 architecture, in addition to the numerous operational benefits that come with decoupling storage from compute and using Amazon S3 as the storage layer.

HBase on Amazon S3 Architecture

Apache HBase on Amazon S3 allows you to launch a cluster and immediately start querying against data within Amazon S3. You don't have to configure replication between HBase on HDFS clusters, or go through a lengthy snapshot restore process to migrate the data off your HBase on HDFS cluster to another HBase on HDFS cluster. Amazon EMR configures Apache HBase on Amazon S3 to cache data in memory and on
disk in your cluster, delivering fast performance from active compute nodes. You can quickly and easily scale out or scale in compute nodes without impacting your underlying storage, or you can terminate your cluster to save costs and quickly restore it without having to recover using snapshots or other methods.

Using Amazon EMR version 5.7.0 or later, you can set up a read replica cluster, which allows you to achieve higher read availability by distributing reads across multiple clusters.

Use cases for Apache HBase on Amazon S3

Apache HBase on Amazon S3 is recommended for applications that require high availability of reads and do not require high availability of writes. Apache HBase on Amazon S3 can be configured to achieve high requests per second for Apache HBase's API calls. This configuration, together with the proper instance type and cluster size, allows you to find the optimal Apache HBase on Amazon S3 configuration values to support requests per second similar to your HDFS-backed cluster. Moreover, because Amazon S3 is used as the storage layer, you can decouple storage from compute, gain the flexibility to bring clusters up and down as needed, and considerably reduce the costs of running your Apache HBase cluster.

Applications that require high availability of reads are supported by Apache HBase on Amazon S3 via read replica clusters pointing to the same Amazon S3 location. Although Apache HBase on Amazon S3 read replica clusters are not part of this whitepaper, see Further Reading for more details.

Since Apache HBase's Write Ahead Log (WAL) is stored in the cluster, if your application requires support for high availability of writes, Apache HBase on HDFS is recommended. Specifically, you can set up Apache HBase on HDFS with multi-master on an Amazon EC2 custom installation, or set up
Apache HBase on HDFS on Amazon EMR with an HBase on HDFS replica cluster on Amazon EMR.

Apache HBase on Amazon S3 is recommended if your application does not require support for high availability of writes and can tolerate failures during writes/updates. If you would like to mitigate the impact of Amazon EMR master node failures (or Availability Zone failures that can cause the termination of the Apache HBase on Amazon S3 cluster, or any temporary degradation of service due to Apache HBase RegionServer operational or transient issues), we recommend that your pipeline architecture rely on a stream/messaging platform upstream of the Apache HBase on Amazon S3 cluster.

We recommend that you always use the latest Amazon EMR release so that you can benefit from all changes and features continuously added to Apache HBase.

Planning the Migration to Apache HBase on Amazon S3

To migrate an existing Apache HBase cluster to an Apache HBase on Amazon S3 cluster, consider the following activities to help scope and optimize performance for Apache HBase on Amazon S3:

• Select a strategy to monitor your Apache HBase cluster's performance
• Plan for security on Amazon EMR and Amazon S3
• Identify Apache HBase and EMRFS tuning options
• Test Apache HBase and EMRFS configuration values
• Prepare the test environment
  o Prepare your AWS account for performance testing
  o Prepare Amazon S3 for your Apache HBase workload
  o Set up the Amazon EMR cluster
  o Troubleshoot
• Migrate and restore Apache HBase tables on HBase on Amazon S3
  o Migrate data
  o Restore data
• Deploy into production
  o Prepare Amazon S3 for production load
  o Prepare the production environment
• Manage the production environment
  o Manage operationalization tasks

Preparation task

Before the migration starts, we recommend that you select a strategy to monitor the performance of your cluster.

Selecting a Monitoring Strategy

We recommend that you use an enterprise third-party monitoring agent or Ganglia to guide you during the tuning of Apache HBase on Amazon S3. This agent is helpful for understanding the changes in performance when changing Apache HBase properties during your tuning process. Moreover, this monitoring allows quick detection of operational issues when the cluster is in production.

Monitoring Apache HBase subsystems and dependent systems

To measure the overall performance of Apache HBase, monitor metrics such as those around Remote Procedure Calls (RPCs) and the Java virtual machine (JVM) heap. In addition to Apache HBase metrics, collect metrics from dependency systems such as HDFS, the OS, and the network.

Monitoring the write path

To measure the performance of the write path, monitor the metrics for the WAL, HDFS (on Apache HBase on Amazon S3 on Amazon EMR, WALs are on HDFS), MemStore flushes, compactions, garbage collections, and the procedure metrics of a related procedure.

Monitoring the read path

To measure the performance of the read path, monitor the metrics for the block cache and the bucket cache. Specifically, monitor the number of evictions, Garbage Collection (GC) time, cache size, and cache hits/misses.

Monitoring with a third-party tool

Apache HBase supports exporting metrics via Java Management Extensions (JMX). Most third-party monitoring agents can then be configured to collect metrics via JMX. For more information, refer to Using with JMX. The section Configuring HBase to expose metrics via JMX provides the configurations to export Apache HBase metrics via JMX on an Apache HBase on Amazon S3 cluster.

Note that the Apache HBase Web UI gives you access to the available metrics. In the UI, select a RegionServer or the Apache HBase Master, and then click the "Metrics Dump" tab. This tab provides all available metrics collected from the JMX bean and exposes the metrics in JSON format. For more details on the metrics exposed by Apache HBase, refer to MetricsRegionServerSource.java.

Use the following steps to add monitoring to your Amazon EMR cluster:

• Create an Amazon EMR bootstrap action to set up the agent of any enterprise monitoring tool used in your environment. (For more information and example bootstrap actions, refer to Create Bootstrap Actions to Install Additional Software.)
• Create a dashboard in your enterprise monitoring tool with the metrics to monitor for each Amazon EMR cluster.
• Create unique tags for each cluster. This tagging avoids multiple clusters writing to the same dashboard.

In addition to monitoring the Amazon EMR cluster at every layer of the stack, have the monitoring dashboard for your application's API available for use during the performance tests for Apache HBase. This dashboard keeps track of the performance of the application APIs that rely on Apache HBase.

Monitoring Cluster components with Ganglia

The Ganglia open source project is a scalable, distributed system designed to monitor clusters and grids while minimizing the impact on their performance. When you enable Ganglia on your cluster, you can generate reports and view the performance of the cluster as a whole, as well as inspect the performance of individual node instances. For more information about the Ganglia open source project, refer to http://ganglia.info/. For more information about using Ganglia with Amazon EMR clusters, refer to Ganglia in the Amazon EMR documentation. Configuring Ganglia is outside the scope of this whitepaper. Note that Ganglia produces high
amounts of data for large clusters. Consider this when sizing your cluster. If you choose to use Ganglia to monitor your production cluster, make sure to thoroughly understand Ganglia functionality and configuration properties.

Planning for Security on Amazon EMR and Amazon S3

Many customers in regulated industries, such as financial services or healthcare, require security and compliance controls around their Amazon EMR clusters and Amazon S3 data storage. It is important to consider these requirements as part of an overall data strategy that adheres to any regulatory or internal data security requirements in an industry, such as PCI or HIPAA. This section covers a variety of security best practices for configuring your Amazon EMR environment for HBase on Amazon S3.

Encryption

There are multiple ways to encrypt data at rest in your Amazon EMR clusters. If you are using EMRFS to query data on Amazon S3, you can specify one of the following options:

• SSE-S3: Amazon S3 manages encryption keys for you.
• SSE-KMS: An AWS Key Management Service (KMS) customer master key (CMK) encrypts your data server-side on Amazon S3.
• CSE-KMS/CSE-C: The encryption and decryption take place client-side on your Amazon EMR cluster, and the encrypted object is stored on Amazon S3. You can use keys provided by AWS KMS (CSE-KMS) or use a custom Java class that provides the master key (CSE-C). When you consider this encryption mode, you should think about the overall ecosystem of tools you will use to access your data and whether these tools support CSE-KMS/CSE-C.

In the context of HBase on Amazon S3, many customers use SSE-S3 and SSE-KMS as their methods of encryption, because CSE encryption requires additional key management. Although the bulk of the data is stored on Amazon S3, you still
need to consider the following options for local disk encryption:

• Amazon EMR security configuration: Amazon EMR gives you the ability to encrypt your storage volumes using local disk encryption. It uses a combination of open source HDFS encryption as well as LUKS encryption. If you want to use this feature, you must specify an AWS KMS key ARN or provide a custom Java class with the encryption artifacts.
• Custom AMI: You can create a custom AMI for Amazon EMR and specify Amazon EBS volume encryption to encrypt both your boot and storage volumes.

Amazon EMR security configurations also allow you to choose a method for encrypting data in transit using Transport Layer Security (TLS). You can choose to:

• Manually create PEM certificates, zip them in a file, and reference the file from Amazon S3, or
• Implement a certificate custom provider in Java and specify the Amazon S3 path to the JAR.

For more information on how these certificates are used with different big data technologies, refer to In-Transit Data Encryption with Amazon EMR. Note that traffic between Amazon S3 and cluster nodes is encrypted using TLS. This encryption is enabled automatically.

Authentication and Authorization

Authentication and authorization are two crucial components that must be considered when controlling access to data. Authentication is the verification of an entity, whereas authorization is checking whether the entity actually has access to the data or resources it is asking for. Another way of looking at it: authentication is the "are you really who you say you are" check, and authorization is the "do you actually have access to what you're asking for" check. For example, Alice can be authenticated as being Alice, but this does not necessarily mean that Alice has authorization, or access, to look at Bob's bank account.
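The at-rest and in-transit options described above are typically combined in a single Amazon EMR security configuration. The following AWS CLI sketch is a hypothetical example (the configuration name, KMS key ARN, and S3 paths are placeholders, not values from this paper) showing SSE-KMS for EMRFS data, KMS-based local disk encryption, and PEM certificates for TLS:

```shell
# Hypothetical example: names, ARNs, and paths are placeholders.
# Creates a reusable EMR security configuration covering encryption
# at rest (EMRFS data on S3 plus local disks) and in transit (TLS).
aws emr create-security-configuration \
  --name "hbase-on-s3-security" \
  --security-configuration '{
    "EncryptionConfiguration": {
      "EnableAtRestEncryption": true,
      "AtRestEncryptionConfiguration": {
        "S3EncryptionConfiguration": {
          "EncryptionMode": "SSE-KMS",
          "AwsKmsKey": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
        },
        "LocalDiskEncryptionConfiguration": {
          "EncryptionKeyProviderType": "AwsKms",
          "AwsKmsKey": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
        }
      },
      "EnableInTransitEncryption": true,
      "InTransitEncryptionConfiguration": {
        "TLSCertificateConfiguration": {
          "CertificateProviderType": "PEM",
          "S3Object": "s3://my-bucket/certs/my-certs.zip"
        }
      }
    }
  }'
```

The named configuration can then be referenced with the --security-configuration option when creating the cluster.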
Authentication on Amazon EMR

Kerberos, a network authentication protocol created by the Massachusetts Institute of Technology (MIT), uses secret key cryptography to provide strong authentication and to avoid sensitive information, such as passwords or other credentials, being sent over the network in an unencrypted and exposed format. With Kerberos, you maintain a set of services (known as a realm) and users that need to authenticate (known as principals), and then provide a means for these principals to authenticate. You can also integrate your Kerberos setup with other realms. For example, you can have users authenticate from an Active Directory domain or LDAP namespace, with a cross-realm trust set up such that these authenticated users can be seamlessly authenticated to access your Amazon EMR clusters.

Amazon EMR installs open source Apache Hadoop ecosystem applications on your cluster, meaning that you can leverage the existing security features in these products. For example, you can enable Kerberos authentication for YARN, giving user-level authentication for applications running on YARN, such as HBase. You can configure Kerberos on an Amazon EMR cluster (known as Kerberizing) to provide a means of authentication for users who use your clusters. We recommend that you become familiar with Kerberos concepts before configuring Kerberos on Amazon EMR. Refer to Use Kerberos Authentication in the Amazon EMR documentation. See Further Reading for blog posts that show you how to configure Kerberos on your Amazon EMR cluster.

Authorization on Amazon EMR

Authorization on Amazon EMR falls into three general categories:

• Object-level authorization against objects in Amazon S3
• Component-specific functionality that is built in (such as Apache HBase authorization)
• Tools that
provide an intermediary access layer between users running commands on Apache Hadoop components and the storage layer, such as Apache Ranger. (This category is outside the scope of this whitepaper.)

Object-level Authorization

Prior to Amazon EMR version 5.10.0, the AWS Identity and Access Management (IAM) role attached to the Amazon EC2 instance profile on Amazon EMR clusters determined data access in Amazon S3. Data access to Amazon S3 could only be granular at the cluster level, making it difficult to have multiple users, with potentially different levels of access to data, touching the same cluster. EMRFS fine-grained authorization was introduced with Amazon EMR versions 5.10.0 and later. This authorization allows you to specify the AWS IAM role to assume at the user or group level when EMRFS is accessing Amazon S3. This allows for fine-grained access control for Amazon S3 on multi-tenant Amazon EMR clusters. In addition, it makes it easier to enable cross-account Amazon S3 access to data. For more information on how to configure your security configurations and AWS IAM roles appropriately, refer to Configure AWS IAM Roles for EMRFS Requests to Amazon S3 and Build a Multi-Tenant Amazon EMR Cluster with Kerberos, Microsoft Active Directory Integration, and AWS IAM Roles for EMRFS.

HBase Authorization

Authorization on Apache HBase on Amazon S3 is feature-equivalent to Apache HBase on HDFS, with the ability to set authorization rules at the table, column, and cell level. Note that access to the Apache HBase web UIs is not restricted, even when Kerberos is used.

Network

The network topology is also important when designing for security and privacy. We recommend placing your Amazon EMR clusters in private subnets with only outbound internet access via NAT. Security groups control
inbound and outbound access from your individual instances. With Amazon EMR, you can use both Amazon EMR managed security groups and your own to control network access to your instances. By applying the principle of least privilege to your security groups, you can lock down your Amazon EMR cluster to only the applications and/or individuals who need access.

Minimal AWS IAM Policy

By default, the AWS IAM policies that are associated with Amazon EMR are generally permissive, in order to allow customers to easily integrate Amazon EMR with other AWS services. When securing Amazon EMR, a best practice is to start from the minimal set of permissions required for Amazon EMR to function and add permissions as necessary. Since HBase on Amazon S3 depends on Amazon S3 as a storage medium, it is important to ensure that bucket policies are also scoped correctly, such that HBase on Amazon S3 can function while also being secure. Appendix B: AWS IAM Policy Reference at the end of this paper includes three policies that are scoped around what Amazon EMR minimally requires for basic operation. These policies could be further restricted by removing actions related to Spot pricing and autoscaling.

Custom AMIs and Applying Security Controls to Harden your AMI

Custom AMIs are another approach you can use to harden and secure your Amazon EMR cluster. Amazon EMR uses an Amazon Linux Amazon Machine Image (AMI) to initialize Amazon EC2 instances when you create and launch a cluster. The AMI contains the Amazon Linux operating system, other software, and the configurations required for each instance to host your cluster applications. By default, when you create a cluster, you don't need to think about the AMI. When Amazon EC2 instances in your cluster launch, Amazon EMR starts with a default
Amazon Linux AMI that Amazon EMR owns, runs any bootstrap actions you specify, and then installs and configures the applications and components that you select. Alternatively, if you use Amazon EMR version 5.7.0 or later, you can specify a custom Amazon Linux AMI when you create a cluster, and customize its root volume size as well. When each Amazon EC2 instance launches, it starts with your custom AMI instead of the Amazon EMR-owned AMI. Specifying a custom AMI is useful for the following cases:

• Encrypt the Amazon EBS root device volumes (boot volumes) of Amazon EC2 instances in your cluster. For more information, refer to Creating a Custom AMI with an Encrypted Amazon EBS Root Device Volume.
• Preinstall applications and perform other customizations instead of using bootstrap actions, which can improve cluster start time and streamline the startup workflow.
• Implement more sophisticated cluster and node configurations than bootstrap actions allow.

Using a custom AMI, as opposed to a bootstrap action, allows you to have your hardening steps preconfigured into the images you use, rather than having to run the bootstrap action scripts at instance provision time. You don't have to choose between the two: you can create a custom AMI for the common, less likely to change security characteristics of your cluster, and leverage bootstrap actions to pull the latest configurations/scripts that might be cluster-specific. One approach many of our customers take is to apply the Center for Internet Security (CIS) benchmarks to harden their Amazon EMR clusters. For more details, refer to A step-by-step checklist to secure Amazon Linux. It is important to verify each and every control for necessity, and to function test against your requirements, when applying these benchmarks to your clusters.
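As a sketch of how these pieces fit together (all IDs, names, and sizes below are placeholders, not values from this paper), a hardened custom AMI can be supplied when creating an HBase on Amazon S3 cluster, alongside the configuration classifications that enable Amazon S3 as the HBase storage layer:

```shell
# Hypothetical example: the AMI ID, bucket, and instance settings are placeholders.
# Launches an HBase on Amazon S3 cluster from a pre-hardened custom AMI
# (supported on Amazon EMR 5.7.0 and later) with an enlarged EBS root volume.
aws emr create-cluster \
  --name "hbase-on-s3-hardened" \
  --release-label emr-5.14.0 \
  --applications Name=HBase \
  --custom-ami-id ami-0123456789abcdef0 \
  --ebs-root-volume-size 30 \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --configurations '[
    {"Classification": "hbase",
     "Properties": {"hbase.emr.storageMode": "s3"}},
    {"Classification": "hbase-site",
     "Properties": {"hbase.rootdir": "s3://my-bucket/hbase"}}
  ]'
```

Baked-in hardening (for example, CIS controls) lives in the AMI, while bootstrap actions can still layer cluster-specific scripts on top at provision time.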
Auditing

The ability to audit compute environments is a key requirement for many customers. There are a variety of ways that you can support this requirement within Amazon EMR:

• For Amazon EMR version 5.14.0 and later, the EMR File System (EMRFS), Amazon EMR's connector for Amazon S3, supports auditing of users who ran queries that accessed data in Amazon S3 through EMRFS. This feature is enabled by default and passes user and group information on to audit logs like AWS CloudTrail, providing you with comprehensive request tracking.
• If it exists, application-specific auditing can be configured and implemented on Amazon EMR.
• You can use tools such as Apache Ranger to implement another layer of auditing and authorization.
• AWS CloudTrail, a service that provides a record of actions taken by a user, role, or AWS service, is integrated with Amazon EMR. AWS CloudTrail captures all API calls for Amazon EMR as events. The calls captured include calls from the Amazon EMR console and code calls to the Amazon EMR API operations. If you create a trail, you can enable continuous delivery of AWS CloudTrail events to an Amazon S3 bucket, including events for Amazon EMR.
• You can also audit the Amazon S3 objects that Amazon EMR is accessing via Amazon S3 access logs. AWS CloudTrail only provides logs for AWS API calls, so if a user runs a job that reads/writes data to Amazon S3, the Amazon S3 data that was accessed by Amazon EMR won't appear in AWS CloudTrail. By using Amazon S3 access logs, you can comprehensively monitor and audit access against your data in Amazon S3 from anywhere, including Amazon EMR.
• Because you have full control over your Amazon EMR cluster, you can always install your own third-party agents or tooling, via bootstrap actions or custom AMIs, to help support your auditing
requirements.

Identifying Apache HBase and EMRFS Tuning Options

Apache HBase on Amazon S3 configuration properties

This section helps you optimize the components that support the read/write path for your application access patterns by identifying the components and properties to configure. Specifically, the goal of tuning is to prepare the initial configurations that influence cluster behavior, storage footprint behavior, and the behavior of the individual components that support the read and write paths.

This whitepaper covers only the HBase tuning properties that were critical to many HBase on Amazon S3 customers during migration. Make sure to test any additional HBase configuration properties that have been tuned on your HDFS-backed cluster but are not included in this section. You also need to tune EMRFS properties to prepare your cluster for scale. This whitepaper should be used together with existing resources or materials on best practices and operational guidelines for HBase. For a detailed description of the HBase properties mentioned in this document, refer to HBase default configurations and hbase-default.xml (HBase 1.4.6). For more details on the metrics mentioned in this document, refer to MetricsRegionServerSource.java (HBase 1.4.6).

To monitor changes to some of the properties mentioned in this document, you need access to the logs for the master and for specific RegionServers. To access the HBase logs during tuning, you can use the HBase Web UI: first select the HBase Master or the specific RegionServer, and then click the "Local Logs" tab. Or, you can SSH to the particular instance that hosts the RegionServer or HBase Master and view the last lines added to the logs under /var/log/hbase.

Next, we identify the settings, first on HBase and later on EMRFS, to take into consideration during the
tuning stage of the migration.

For some of the HBase properties, we propose a starting value or a setting; for others, you will need to iterate on a combination of configurations during performance tests to find adequate values. All of the configuration settings that you decide to set can be applied to your Amazon EMR cluster via a configuration object that the Amazon EMR service uses to configure HBase and EMRFS when deploying a new cluster. For more details, see Applying HBase and EMRFS Configurations to the Cluster.

Speeding up the Cluster initialization

Use the properties that follow when you want to speed up the cluster's startup time, speed up region assignments, and speed up region initialization time. These properties are associated with the HBase Master and the HBase RegionServer.

HBase master tuning

hbase.master.handler.count

• This property sets the number of RPC handlers spun up on the HBase Master.
• The default value is 30.
• If your cluster has thousands of regions, you will likely need to increase this value. Monitor the IPC queue size, min and max time in queue, total calls time, and min and max processing time, and then iterate on this value to find the best value for your use case.
• Customers at the 20,000-region scale have configured this property to 4 times the default value.

HBase RegionServer tuning

hbase.regionserver.handler.count

• This property sets the number of RPC handlers created on RegionServers to serve requests. For more information about this configuration setting, refer to hbase.regionserver.handler.count.
• The default value is 30.
• Monitor
the number of RPC calls queued, the 99th percentile latency for RPC calls to stay in queue, and RegionServer memory. Iterate on this value to find the best value for your use case.
• Customers at the 20,000-region scale have configured this property to 4 times the default value.

hbase.regionserver.executor.openregion.threads

• This property sets the number of concurrent threads for region opening.
• The default value is 3.
• Increase this value if the number of regions per RegionServer is high.
• For clusters with thousands of regions, it is common to see this value at 10–20 times the default.

Preventing initialization loops

The default values of the properties that follow may be too conservative for some use cases. Depending on the number of regions, the number of RegionServers, and the settings you have chosen to control initialization and assignment times, the default values for the master timeouts can prevent your cluster from starting up.

Relevant Master initialization timeouts

hbase.master.initializationmonitor.timeout

• This property sets the amount of time to sleep, in milliseconds, before checking the Master's initialization status.
• The default value is 900000 (15 minutes).
• Monitor masterFinishedInitializationTime and the HBase Master logs for a "master failed to complete initialization" timeout message. Iterate on this value to find the best value for your use case.

hbase.master.namespace.init.timeout

• This property sets the time for the master to wait for the namespace table to initialize.
• The default value is 300000 (5 minutes).
• Monitor the HBase Master logs for a "waiting for namespace table to be assigned" timeout message. Iterate on this value to find the best value for your use case.

Scaling to a high number of regions

This property allows the HBase Master to handle a high
number of regions:

• Set hbase.assignment.usezk to false.
• For detailed information, refer to HBase ZK-less Region Assignment.

Getting a balanced Cluster after initialization

To ensure that the HBase Master only allocates regions when a target number of RegionServers is available, tune the following properties:

hbase.master.wait.on.regionservers.mintostart
hbase.master.wait.on.regionservers.maxtostart

• These properties set the minimum and maximum number of RegionServers the HBase Master will wait for before starting to assign regions.
• By default, hbase.master.wait.on.regionservers.mintostart is set to 1.
• An adequate value for both the min and max is around 90% of the total number of RegionServers. A high value for both min and max results in a more uniform distribution of regions across RegionServers.

hbase.master.wait.on.regionservers.timeout
hbase.master.wait.on.regionservers.interval

• The timeout property sets the time the master will wait for RegionServers to check in. The default value is 4500.
• The interval property sets the time period used by the master to decide if no new RegionServers have checked in. The default value is 1500.
• These properties are especially relevant for a cluster with a large number of regions.
• If your use case requires aggressive initialization times, these properties can be set to lower values so that the condition that depends on them is evaluated earlier.
• Customers at the 20,000-region scale, with a requirement of low initialization time, have set timeout to 400 and interval to 300.
• For more information on the condition used by the master to trigger allocation, refer to HBASE-6389.

Preventing timeouts during Snapshot operations

Tune the following properties to prevent timeouts during snapshot operations:

hbase.snapshot.master.timeout.millis
•
This property sets the time the master will wait for a snapshot to conclude. It is especially relevant for tables with a large number of regions.
• The default value is 300000 (5 minutes).
• Monitor the logs for snapshot timeout messages and iterate on this value.
• Customers at the 20,000-region scale have set this property to 1800000 (30 minutes).

hbase.snapshot.thread.pool.max

• This property sets the number of threads used by the snapshot manifest loader operation.
• The default value is 8.
• This value depends on the instance type and the number of regions in your cluster. Monitor the snapshot average time, CPU usage, and your application API to ensure the value you choose does not impact application requests.
• Customers at the 20,000-region scale have used 2–8 times the default value for this property.

If you will be performing online snapshots while serving traffic, set the following properties to prevent timeouts during the online snapshot operation:

hbase.snapshot.region.timeout

• Sets the timeout for RegionServers to keep threads in the snapshot request pool waiting.
• The default value is 300000 (5 minutes).
• This property is especially relevant for tables with a large number of regions.
• Monitor memory usage on the RegionServers, monitor the logs for snapshot timeout messages, and iterate on this value. Increasing this value consumes memory on the RegionServers.
• Customers at the 20,000-region scale have used 1800000 (30 minutes) for this property.

hbase.snapshot.region.pool.threads

• Sets the number of threads for snapshotting regions on the RegionServer.
• The default value is 10.
• If you decide to increase the value for this property, consider setting a lower value for hbase.snapshot.region.timeout.
• Monitor the snapshot average time, CPU usage, and your application API to ensure the value
that you choose does not impact application requests.
Running the balancer for specific periods to minimize the impact of region movements on snapshots
Controlling the operation of the Balancer is crucial for smooth operation of the cluster. These properties provide control over the balancer:
hbase.balancer.period
hbase.balancer.max.balancing
• The hbase.balancer.period property configures when the balancer runs, and the hbase.balancer.max.balancing property configures how long the balancer runs.
• These properties are especially relevant if you programmatically take snapshots of the data every few hours, because the snapshot operation will fail if regions are being moved. You can monitor the snapshot average time to gain more insight into the snapshot operation.
At a high level, balancing requires flushing data, closing the region, moving the region, and then opening it on a new RegionServer. For this reason, for busy clusters, consider running the balancer every couple of hours and configuring it to run for only one hour.
Tuning the Balancer
Consider the following additional properties when configuring the balancer:
• hbase.master.loadbalancer.class
• hbase.balancer.period
• hbase.balancer.max.balancing
We recommend that you test your current LoadBalancer settings and then iterate on the configurations. The default LoadBalancer is the Stochastic Balancer. If you choose to use the default LoadBalancer, refer to StochasticLoadBalancer for more details on the various factors and costs associated with this balancer. Most use cases can use the default values.
Access Pattern considerations and read/write path tuning
This section covers tuning the diverse HBase components that support the read/update/write paths.
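As an illustration, the balancer schedule suggested above (run every couple of hours, balance for at most one hour per run) could be sketched as an hbase-site.xml fragment. The millisecond values below are illustrative assumptions derived from that guidance, not defaults; tune them for your cluster.

```xml
<!-- Illustrative only: run the balancer every 2 hours,
     for at most 1 hour per run. -->
<property>
  <name>hbase.balancer.period</name>
  <value>7200000</value><!-- 2 hours, in milliseconds -->
</property>
<property>
  <name>hbase.balancer.max.balancing</name>
  <value>3600000</value><!-- 1 hour, in milliseconds -->
</property>
```

Constraining the balancer to a narrow window like this reduces the chance that a scheduled snapshot overlaps with region movements.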
To properly tune the components that support the read/update/write paths, start by identifying the overall access pattern of your application. If the access pattern is read-heavy, you can reduce the resources allocated to the write path. The same guideline applies to write-heavy access patterns. For mixed access patterns, strive for a balance.
Tuning the Read Path
This section identifies the properties used when tuning the read path. The properties that follow are beneficial for both random-read and sequential-read access patterns.
Tuning the Size of the BucketCache
The BucketCache is central to HBase on Amazon S3. The properties that follow set the overall size of the BucketCache per instance and allocate a percentage of the total size of the BucketCache to specialized areas such as the single-access, multi-access, and in-memory areas. For more details, refer to HBASE-18533.
The goal of this section is to configure the BucketCache to support your access pattern. For an access pattern of random reads and sequential reads, it is recommended to cache all data in the BucketCache, which is stored on disk. In other words, each instance allocates part of its disk to the BucketCache, so that the total size of the BucketCache across all the instances in the cluster equals the size of the data on Amazon S3.
To configure the BucketCache, tune the following properties:
hbase.bucketcache.size
• As a baseline, set the BucketCache to a value equal to the size of the data you would like cached.
• This property impacts Amazon S3 traffic. If the data is not in the cache, HBase must retrieve it from Amazon S3.
• If the BucketCache size is smaller than the amount of data being cached, it may cause many cache evictions, which will also increase GC overhead. Moreover, it will increase Amazon S3 traffic. Set the BucketCache
size to the total size of the dataset required for your application's read access pattern.
• Take into account the available disk resources when setting this property. HBase on Amazon S3 uses HDFS for the write path, so the total disk available for the BucketCache must account for any storage required by Apache Hadoop/OS/HDFS. See the Amazon EMR Cluster Setup section for recommendations on sizing the cluster local storage for the BucketCache and choosing the storage type and its mix (multiple disks versus a single large disk).
• Monitor GC, cache eviction metrics, cache hit ratio, and cache miss ratio (you can use the HBase UI to quickly access these metrics) to support the process of choosing the best value. Moreover, consider the application SLA requirements when increasing or decreasing the BucketCache size. Iterate on this value to find the best value for your use case.
hbase.bucketcache.single.factor
hbase.bucketcache.multi.factor
hbase.bucketcache.memory.factor
• Note that the bucket areas follow the same areas as the LRU cache. A block initially read from Amazon S3 is populated in the single-access area (hbase.bucketcache.single.factor), and consecutive accesses promote that block into the multi-access area (hbase.bucketcache.multi.factor). The in-memory area is reserved for blocks loaded from column families flagged as IN_MEMORY (hbase.bucketcache.memory.factor).
• By default, these areas are sized at 25%, 50%, and 25% of the total BucketCache size, respectively.
• Tune these values according to the access pattern of your application.
• These properties impact Amazon S3 traffic. For example, if single access is more prevalent than multi access, you can reduce the size allocated to multi access. If multi access is common, ensure that the multi-access area is large enough to avoid cache evictions.
hbase.rs.cacheblocksonwrite
• This property forces all blocks that are written to be added to the cache automatically. Set this property to true.
• This property is especially relevant to read-heavy workloads: setting it to true will populate the cache and reduce Amazon S3 traffic when a read request for the data is issued. Setting it to false in read-heavy workloads will result in reduced read performance and increased Amazon S3 activity.
• HBase on Amazon S3 uses the file-based BucketCache together with the on-heap BlockCache. This setup is commonly referred to as a combined cache. The BucketCache stores only data blocks, and the BlockCache stores bloom filters and indices. The physical location of the file-based BucketCache is the disk, and the location of the BlockCache is the heap.
Prewarming the BucketCache
HBase provides additional properties that control the prefetch of blocks when a region is opening. This is a form of cache prewarming and is recommended for HBase on Amazon S3, especially for read access patterns. Prewarming the BucketCache results in reduced Amazon S3 traffic for subsequent requests. Disabling prewarming in read-heavy workloads results in reduced read performance and increased Amazon S3 activity.
To configure HBase to prefetch blocks, set the following properties:
hbase.rs.prefetchblocksonopen
• This property controls whether the server should asynchronously load all of the blocks (data, meta, and index) when a store file is opened. Note that enabling this property contributes to the time the RegionServer takes to open a region and therefore initialize.
• Set this value to true to apply the property to all tables. This can also be applied per column family instead of using a global setting. Prefer the per-column-family approach over enabling it cluster-wide.
• If you set hbase.rs.prefetchblocksonopen to true, the
properties that follow increase the number of threads and the size of the queue for the prefetch operation:
o Set hbase.bucketcache.writer.queuelength to 1024 as a starting value. The default value is 64.
o Set hbase.bucketcache.writer.threads to 6 as a starting value. The default value is 3.
o These values should be configured together and should take into account the instance type of the RegionServer and the number of regions per RegionServer. By increasing the number of threads, you may be able to choose a lower value for hbase.bucketcache.writer.queuelength.
o The hbase.rs.prefetchblocksonopen property controls how fast you get data from Amazon S3 during the prefetch.
o Monitor the HBase logs to understand how fast the BucketCache is being initialized, and monitor cluster resources to see the impact of these properties on memory and CPU. Iterate on these values to find the best values for your use case.
o For more details, refer to HBASE-15240.
Modifying the Table Schema to Support Prewarming
Finally, prefetching can be applied globally or per column family. In addition, the IN_MEMORY region of the BucketCache can be applied per column family. You must change the schema of the tables to set these properties. For each column family, and for your access patterns, identify which column families should always be placed in memory and which column families benefit from prefetching. For example, if one column family is never read by the HBase read path (it is only read by an ETL job), you can save resources on the cluster and avoid using the PREFETCH_BLOCKS_ON_OPEN key or the IN_MEMORY key for that column family.
To modify an existing table to use the PREFETCH_BLOCKS_ON_OPEN or IN_MEMORY keys, see the following examples:
hbase shell
hbase(main):001:0> alter 'MyTable', NAME => 'myCF', PREFETCH_BLOCKS_ON_OPEN => 'true'
hbase(main):002:0> alter 'MyTable', NAME => 'myCF2', IN_MEMORY => 'true'
Tuning the Updates/Write Path
This section shows you how to tune and size the MemStore to avoid frequent flushes and small HFiles. As a result, the frequency of compactions and Amazon S3 traffic is reduced.
hbase.regionserver.global.memstore.size
• This property sets the maximum size of all MemStores in a RegionServer.
• The memory allocated to the MemStores is kept in the main memory of the RegionServers.
• If the value of hbase.regionserver.global.memstore.size is exceeded, updates are blocked and flushes are forced until the total size of all the MemStores in a RegionServer is at or below the value of hbase.regionserver.global.memstore.size.lower.limit.
• The default value is 0.4 (40% of the heap).
• For write-heavy access patterns, you can increase this value to increase the heap area dedicated to all MemStores.
• Consider the number of regions per RegionServer and the number of column families with high write activity when setting this value.
• For read-heavy access patterns, this setting can be decreased to free up resources that support the read path.
hbase.hregion.memstore.flush.size
• This property sets the flush size per MemStore.
• Depending on the SLA of your API, the flush size may need to be higher than the flush size configured on the source cluster.
• This setting impacts the traffic to Amazon S3, the size of HFiles, and the impact of compactions in your cluster. The higher you set the value, the fewer Amazon S3 operations are required and the larger each resulting HFile.
• This value depends on the total number of regions per RegionServer and the number of column families with high write activity.
• Use 536870912 bytes (512 MB) as the starting value, then monitor the frequency of flushes, the MemStore flush queue size, and application API response times. If the frequency of flushes and the queue size are high, increase this setting.
hbase.regionserver.global.memstore.size.lower.limit
• When the size of all MemStores exceeds this value, flushes are forced. This property prevents the MemStore from being blocked for updates.
• By default, this property is set to 0.95 (95% of the value set for hbase.regionserver.global.memstore.size).
• This value depends on the number of regions per RegionServer and the write activity in your cluster.
• You might want to decrease this value if, as soon as the MemStores reach this safety threshold, the write activity quickly fills the missing 0.05 and the MemStore is blocked for writes.
• Monitor the frequency of flushes, the MemStore flush queue size, and application API response times. If the frequency and queue size are high, increase this setting.
hbase.hregion.memstore.block.multiplier
• This property is a safety threshold that controls spikes in write traffic.
• Specifically, this property sets the threshold at which writes are blocked. If the MemStore reaches hbase.hregion.memstore.block.multiplier times hbase.hregion.memstore.flush.size bytes, writes are blocked.
• In case of spikes in traffic, this property prevents the MemStore from continuing to grow and in this way prevents HFiles with large sizes.
• The default value is 4.
• If your traffic has a constant pattern, consider keeping the default value for this property and tune only hbase.hregion.memstore.flush.size.
hbase.hregion.percolumnfamilyflush.size.lower.bound.min
• For the tables that have multiple column
families, this property forces HBase to flush only the MemStores of each column family that exceed hbase.hregion.percolumnfamilyflush.size.lower.bound.min.
• The default value for this property is 16777216 bytes.
• This setting impacts the traffic to Amazon S3. A higher value reduces traffic to Amazon S3.
• For write-heavy access patterns with multiple column families, this property should be changed to a value higher than the default of 16777216 bytes but less than half of the value of hbase.hregion.memstore.flush.size.
hfile.block.cache.size
• This property sets the percentage of the heap to be allocated to the BlockCache.
• Use the default value of 0.4 for this property.
• By default, the BucketCache stores data blocks and the BlockCache stores bloom filters and indices.
• You will need to allocate enough of the heap to cache indices and bloom filters, if applicable. To measure HFile index and bloom filter sizes, access the web UI of one of the RegionServers.
• Iterate on this value to find the best value for your use case.
hbase.hstore.flusher.count
• This property controls the number of threads available to flush writes from memory to Amazon S3.
• The default value is 2.
• This setting impacts the traffic to Amazon S3. A higher value reduces the MemStore flush queue and speeds up writes to Amazon S3. This setting is valuable for write-heavy environments. The value depends on the instance type used by the cluster.
• Test the value and gradually increase it to 10.
• Monitor the MemStore flush queue size, the 99th percentile for flush time, and application API response times. Iterate on this value to find the best value for your use case.
Note: Small clusters with high region density and high write activity should also tune HDFS properties that allow the HDFS NameNode and the
HDFS DataNode to scale. Specifically, the dfs.datanode.handler.count and dfs.namenode.handler.count properties should be increased to at least 3x their default value of 10.
Region size considerations
Since this process is a migration, set the region size to the same region size as on your HDFS-backed cluster. As a reference, on HBase on Amazon S3, customer regions fall into these categories: small regions (1–10 GB), mid-sized regions (10–50 GB), and large regions (50–100 GB).
Controlling the Size of Regions and Region Splits
The hbase.hregion.max.filesize property sets the size of the regions in your cluster. It should be configured together with the hbase.regionserver.region.split.policy property, which is not covered in this whitepaper.
• Use your current cluster's value for hbase.hregion.max.filesize.
o As a starting point, you can use the value in your HDFS-backed cluster.
• Set hbase.regionserver.region.split.policy to the same policy as in your HDFS-backed cluster.
o This property controls when a region should be split.
o The default value is org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy.
• Set hbase.regionserver.regionSplitLimit to the same value as in your HDFS-backed cluster.
o This property acts as a guideline/limit for the RegionServer to stop splitting.
o Its default value is 1000.
Tuning HBase Compactions
This section shows you how to configure properties that control major compactions, reduce the frequency of minor compactions, and control the size of HFiles to reduce Amazon S3 traffic.
Controlling Major Compactions
In production environments, we recommend that you disable automatic major compaction. However, there should always be a process to run major compactions. Some customers opt to have an application that incrementally runs major compactions in the background on a table or RegionServer basis. Set
hbase.hregion.majorcompaction to 0 to disable automatically scheduled major compactions.
Reduce the frequency of minor compactions and control the size of HFiles to reduce Amazon S3 traffic
The following properties depend on the MemStore size, the flush size, and the need to control the frequency of minor compactions. The properties that follow should be set according to response-time needs and the average size of the StoreFiles generated during a MemStore flush.
To control the behavior of minor compactions, tune these properties:
hbase.hstore.compaction.min.size
• If a StoreFile is smaller than the value set by this property, the StoreFile will be selected for compaction. If StoreFiles have a size equal to or larger
than the value of hbase.hstore.compaction.min.size, hbase.hstore.compaction.ratio is used to determine whether the files are eligible for compaction.
• By default, this value is set to 134217728 bytes.
• This setting depends on the frequency of flushes, the size of the StoreFiles generated by flushes, and hbase.hregion.memstore.flush.size.
• This setting impacts the traffic to Amazon S3. The higher you set the value, the more frequently minor compactions will occur and therefore trigger Amazon S3 activity.
• For write-heavy environments with many small StoreFiles, you may want to decrease this value to reduce the frequency of minor compactions and therefore Amazon S3 activity.
• Monitor the frequency of compactions and the overall StoreFile size, and iterate on this value to find the best value for your use case.
hbase.hstore.compaction.max.size
• If a StoreFile is larger than the value set by this property, the StoreFile is not selected for compaction.
• This value depends on the size of the HFiles generated by flushes and the frequency of minor compactions.
• If you increase this value, you will have fewer, larger StoreFiles and increased Amazon S3 activity. If you decrease this value, you will have more, smaller StoreFiles and reduced Amazon S3 activity.
• Monitor the frequency of compactions, the compaction output size, and the overall StoreFile size, and iterate on this value.
hbase.hstore.compaction.ratio
Accept the default value as a starting point for this property. For more details on this property, refer to hbase-default.xml.
hbase.hstore.compactionThreshold
• If a store reaches hbase.hstore.compactionThreshold StoreFiles, a compaction is run to rewrite the StoreFiles into one.
• A high value will result in less frequent minor compactions, larger StoreFiles, longer minor compactions, and less Amazon S3 activity.
• The default value is 3.
• Start with a value of 6; monitor the compaction frequency, the average size of StoreFiles, the compaction output size, and the compaction time; and iterate on this value to get the best value for your use case.
hbase.hstore.blockingStoreFiles
• This property sets the total number of StoreFiles a single store can have before updates are blocked for its region. If this value is exceeded, updates are blocked until a compaction concludes or hbase.hstore.blockingWaitTime is exceeded.
• For write-heavy workloads, use two to three times the default value as a starting value.
• The default value is 16.
• Monitor the frequency of flushes, the blocked request count, and the compaction time to set the proper value for this property.
Minor and major compactions will flush the BucketCache. For more details, refer to HBASE-1597.
Controlling the storage footprint locally and on Amazon S3
At a high level, with HBase on Amazon S3, WALs are stored on HDFS. When a compaction occurs, previous HFiles are moved to the archive and only deleted if they are not associated with a snapshot. HBase relies on a Cleaner
Chore that is responsible for deleting unnecessary HFiles and expired WALs.
Ensuring the Cleaner Chore is always running
With HBase 1.4.6 (Amazon EMR version 5.17.0 and later), we recommend that you deploy the cluster with the cleaner enabled. This is the default behavior. The property that sets this behavior is hbase.master.cleaner.interval. We recommend that you use the latest Amazon EMR release. For versions prior to Amazon EMR 5.17.0, see the Operational Considerations section for the HBase shell commands that control the cleaner behavior. To enable the cleaner globally, set hbase.master.cleaner.interval to 1.
Speeding up the Cleaner Chore
HBASE-20555, HBASE-20352, and HBASE-17215 add additional control to the Cleaner Chore, which deletes expired WALs (HLogCleaner) and expired archived HFiles (HFileCleaner). These controls are available on HBase 1.4.6 (Amazon EMR version 5.17.0) and later. The number of threads allocated to the following properties should be configured together, taking into consideration the capacity and instance type of the Amazon EMR Master node.
hbase.regionserver.hfilecleaner.large.thread.count
• This property sets the number of threads allocated to clean expired large HFiles.
• hbase.regionserver.thread.hfilecleaner.throttle sets the size that distinguishes between a small and a large file. The default value is 64 MB.
• The value for this property depends on the number of flushes, the write activity in the cluster, and the snapshot deletion frequency.
• The higher the write and snapshot deletion activity, the higher this value should be.
• By default, this property is set to 1.
• Monitor the size of the HBase root directory on Amazon S3 and iterate on this value to find the best value for your use case. Consider the Amazon EMR Master CPU resources and the values set for the other
configuration properties identified in this section. For more information, see the Enabling Amazon S3 metrics for the HBase on Amazon S3 root directory section.
hbase.regionserver.hfilecleaner.small.thread.count
• This property sets the number of threads allocated to clean expired small HFiles.
• The value for this property depends on the number of flushes, the write activity in the cluster, and the snapshot deletion frequency.
• By default, this property is set to 1.
• The higher the write and snapshot deletion activity, the higher this value should be.
• Monitor the size of the HBase root directory on Amazon S3 and iterate on this value to find the best value for your use case. Consider the Amazon EMR Master CPU resources and the values set for the other configuration properties identified in this section.
hbase.cleaner.scan.dir.concurrent.size
• This property sets the number of threads used to scan the oldWALs directories.
• The value for this property depends on the write activity in the cluster.
• By default, this property is set to ¼ of all available cores.
• Monitor HDFS use and iterate on this value to find the best value for your use case. Consider the Amazon EMR Master CPU resources and the values set for the other configuration properties identified in this section.
hbase.oldwals.cleaner.thread.size
• This property sets the number of threads used to clean the WALs under the oldWALs directory.
• The value for this property depends on the write activity in the cluster.
• By default, this property is set to 2.
• Monitor HDFS use and iterate on this value to find the best value for your
use case. Consider the Amazon EMR Master CPU resources and the values set for the other configuration properties identified in this section.
For more details on how to set the configuration properties that clean expired WALs, refer to HBASE-20352.
Porting existing settings to HBase on Amazon S3
Some properties that you have tuned in your on-premises cluster were not included in the Apache HBase tuning section. These configurations include the heap size for HBase and supporting Apache Hadoop components, the Apache HBase split policy, Apache ZooKeeper timeouts, and so on. For these configuration properties, use the value in your HDFS-backed cluster as a starting point. Follow the same process to iterate and determine the best value that supports your use case.
EMRFS Configuration Properties
Starting December 1, 2020, Amazon S3 delivers strong read-after-write consistency automatically for all applications. Therefore, there is no need to enable EMRFS consistent view and other consistent-view-related configurations, as detailed in Configure Consistent View in the Amazon EMR Management Guide. For more details on Amazon S3 strong read-after-write consistency, see Amazon S3 now delivers strong read-after-write consistency automatically for all applications.
Setting the total number of connections used by EMRFS to read/write data from/to Amazon S3
With HBase on Amazon S3, access to data is done via EMRFS. This means that tasks such as an Apache HBase region opening, MemStore flushes, and compactions all initiate requests to Amazon S3. To support workloads with a large number of regions and large datasets, you must tune the total number of connections to Amazon S3 that EMRFS can make (fs.s3.maxConnections).
To tune fs.s3.maxConnections, account for the average size of the HFiles, the number of regions, the
frequency of minor compactions, and the overall read and write throughput the cluster is experiencing.
fs.s3.maxConnections
• The default value for HBase on Amazon S3 is 10000. This value should be set to more than 10000 for clusters with a large number of regions (2500+), large datasets (1 TB+), high minor-compaction activity, and intense read/write activity.
• Monitor the logs for the ERROR message "Unable to execute HTTP request: Timeout waiting for connection" and iterate on this value. See more details about this error message in the Troubleshooting section.
• Several customers at the 50+ TB / 20,000-region scale set this property to 50000.
Testing Apache HBase and EMRFS Configuration Values
Options to approach performance testing
During the testing phase, we recommend that you use the metrics for the relevant HBase subcomponents, together with the overall response times of your application, to gauge the impact of the changes made to HBase properties. We also recommend that you start by testing the HBase configuration settings that contribute to a healthy cluster state at your dataset scale (fast initialization times, a balanced cluster, and so on), and then focus on testing the configuration property values for the read and write/update paths.
We provide guidelines on how to size the cluster compute and local storage resources. The R5/R5d instance types are good candidates for a starting point, as they are memory-optimized instances. As you focus on tuning the read and write/update paths, we recommend that you iterate on the number of regions per RegionServer (cluster size). As a starting value, you can use the same region density as in your HDFS-backed cluster, and iterate according to the behavior indicated by the metrics for the RegionServer resources and the HBase read/write path
components. For more details, see Sizing Compute Capacity and Selecting an Instance Type. Also consider costs while you iterate on instance size and type. Refer to the AWS Simple Monthly Calculator to quickly estimate costs for the different clusters of your test environment.
To test the HBase configuration values you have selected as starting values, use one of the following options.
Traffic Segmentation
If the use case permits and the application traffic can be segmented by API/table, consider creating empty tables prepartitioned with the same number of regions as the original, and then have the test cluster receive 10–50% of the production traffic. Although this won't be an accurate representation of the production load, you will be able to iterate faster on the configurations for most HBase components. This way, as soon as the HBase configuration values have been identified for the smaller cluster/setup, you can deploy a new cluster, gradually increase the traffic load, and iterate again on the configurations.
Dataset Segmentation
Dataset segmentation is especially relevant for datasets at the terabyte and petabyte scale. If you choose this option and the use case permits, we recommend that you use between 10% and 30% of the overall dataset and iterate to find the HBase configuration values that contribute to a stable cluster and good response times for your application's APIs. Alternatively, you can focus on a few tables at first. As soon as you are satisfied with the performance with a subset of the dataset or some of the tables, you can deploy a new cluster pointing to the full dataset and iterate again on the configurations. We provide steps on how to migrate and restore the full datasets in the next section.
For both options, when you have identified a set of HBase properties that can be adjusted to improve stability and performance, you can apply the configurations to each node of the cluster with a script and then restart HBase. For more details on the steps to
restart HBase, see the Rolling Restart section.
When you are satisfied with the cluster behavior and application response times with segmented traffic and dataset, you can also iterate on the instance size and instance type for both the Amazon EMR Master and the Amazon EMR Core/Task nodes. When you are ready to do so, you can terminate the test cluster, update the Amazon EMR configuration settings, and deploy a new cluster. See the Cluster termination without data loss section to follow the correct cluster termination procedure.
Finally, when you are ready to test with the full production traffic and the full production dataset, size the cluster accordingly using the metrics from the previous tests as a reference. Then migrate the data and redeploy a new Amazon EMR cluster.
Preparing the Test Environment
Preparing your AWS account for performance testing
To identify the optimal configuration of your HBase on Amazon S3 cluster, you will need to iterate on several configuration values during a testing stage. Not only will you make changes to HBase configurations, but also to the type and family of the cluster's Amazon EC2 instances. To avoid any impact on existing workloads in the account used for testing or production, we recommend that you raise the limits identified in this section according to your testing or production account needs.
Increasing Amazon EC2 and Amazon EBS Limits
To avoid any delays during performance tests, raise the following limits in your AWS account, since you may need to deploy several clusters at the same time (replicas, clusters pointing to different HBase root directories, and so on). If your cluster size is small, the default values may be sufficient. For more details about the current limits applied to your account, refer to Trusted Advisor (login required). If
your cluster is expected to have more than 100 instances, open an AWS Support case (login required) to have the following Amazon EC2 and Amazon EBS limits increased:

• R5/R5d family: increase the limit to 200% of your cluster's estimated size for xl, 2xl, and 4xl.
• Total volume storage of General Purpose SSD (gp2) volumes: increase the limit with additional capacity (4x the total dataset size). For example, if the dataset is 40 TB, the SSD capacity available (instance store or Amazon EBS volumes) must be at least 40 TB. Account for additional storage because you may need to deploy several clusters at the same time (replicas, clusters pointing to different Apache HBase root directories). See the Sizing Local Storage section for more details.

Increasing AWS KMS limits

Amazon S3 encryption works with EMRFS objects read from and written to Amazon S3. If you do not have a security requirement for data at rest, you can skip this section. If your cluster is small, the default values may be sufficient. For additional details about AWS KMS limits, refer to Requests per second limit for each AWS KMS API operation.

Preparing Amazon S3 for your HBase workload

Amazon S3 can scale to support very high request rates for your HBase on Amazon S3 cluster. It's valuable to understand the exact performance characteristics of your HBase workloads when migrating to a new storage layer, especially when moving to an object store such as Amazon S3. Amazon S3 automatically scales to high request rates and currently supports up to 3,500 PUT/POST/DELETE requests per second and 5,500 GET requests per second per prefix in a bucket. If your request rate grows steadily, Amazon S3 automatically scales beyond these rates as needed. If you expect the request rate per prefix to be higher than the preceding request
rate, or if you expect the request rate to increase rapidly instead of gradually, the Amazon S3 bucket must be prepared to support the workloads of your HBase on Amazon S3 cluster. For more details on how to prepare the Amazon S3 bucket, see the Preparing Amazon S3 for production load section. This helps minimize throttling on Amazon S3. To understand how you can recognize that Amazon S3 is throttling the requests from your cluster, see the Troubleshooting section.

Enabling Amazon S3 metrics for the HBase on Amazon S3 root directory

The Amazon CloudWatch request metrics for Amazon S3 enable the collection of Amazon S3 API metrics for a specific bucket. These metrics provide a good understanding of the TPS driven by your HBase cluster, and they can be helpful to identify any operational issues.

Note: Amazon CloudWatch request metrics incur a cost. For more information, refer to How Do I Configure Request Metrics for an S3 Bucket? and Monitoring Metrics with Amazon CloudWatch.
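The per-prefix rates above can be combined with the request metrics you collect here for a quick back-of-the-envelope sizing check before opening a support case. The sketch below uses hypothetical peak rates; substitute the peak request rates you observe in the CloudWatch request metrics for the HBase root directory bucket.

```shell
#!/bin/bash
# Rough estimate of how many S3 prefixes are needed for a workload, given the
# per-prefix baselines cited above (3,500 writes/s and 5,500 reads/s).
# PEAK_WRITES and PEAK_READS are hypothetical placeholder values.
PEAK_WRITES=12000   # peak PUT/POST/DELETE requests per second
PEAK_READS=30000    # peak GET requests per second

# Ceiling division: prefixes needed to keep each prefix under its baseline.
WRITE_PREFIXES=$(( (PEAK_WRITES + 3500 - 1) / 3500 ))
READ_PREFIXES=$(( (PEAK_READS + 5500 - 1) / 5500 ))
NEEDED=$(( WRITE_PREFIXES > READ_PREFIXES ? WRITE_PREFIXES : READ_PREFIXES ))

echo "spread the workload across at least $NEEDED prefixes"
```

If the estimate is well above what your table and region layout can spread across prefixes, or the rate will ramp faster than Amazon S3 can scale, that is the signal to open the support case described above.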
Enabling Amazon S3 lifecycle rules to end and clean up incomplete multipart uploads

HBase on Amazon S3 via EMRFS uses the Amazon S3 multipart upload API, which enables EMRFS to upload large objects in parts. For more details on the multipart API, refer to Multipart Upload Overview.

Note: After you initiate a multipart upload and upload one or more parts, you must either complete or abort the multipart upload to stop storage charges for the uploaded parts. Only after you either complete or abort a multipart upload will Amazon S3 free up the parts storage and stop charging you for it.

Amazon S3 provides a lifecycle rule that, when configured, automatically removes incomplete multipart uploads. For complete steps on how to create a bucket lifecycle policy and apply it to the HBase root directory bucket, refer to Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy. Alternatively, you can use the AWS Console to configure the lifecycle policy. For more details, refer to Amazon S3 Lifecycle Management Update – Support for Multipart Uploads and Delete Markers. We recommend that you configure the lifecycle policy to end and clean up incomplete multipart uploads after 3 days.

Amazon EMR Cluster Setup

Selecting an Amazon EMR Release

We strongly recommend that you use the latest release of Amazon EMR when possible. Refer to Amazon EMR 5.x Release Versions to find the software versions available in the latest Amazon EMR release. For more details, refer to Migrating from Previous HBase Versions. We also recommend that you deploy the cluster with only the required applications. This is especially important in production, so you can properly use the full resources of the cluster.

Applying HBase and EMRFS
Configurations to the Cluster

Amazon EMR allows the configuration of applications by supplying a JSON object with any changes to default values. For more information, refer to Configuring Applications.

Applying HBase configurations

This section includes guidelines on how to construct the JSON object that can be supplied to the cluster during cluster deployment. Most of these properties are configured in the hbase-site.xml file. Other settings of HBase, such as region and master server heap size and logging settings, have their own configuration file and their own classification when setting up the JSON object. For an example JSON object to configure the properties written to hbase-site.xml, see Configure HBase. In addition to the hbase-site classification, you may need to use the classification hbase-log4j to change values in HBase's hbase-log4j.properties file, and the classification hbase-env to change values in HBase's environment.

Configuring HBase to expose metrics via JMX

An example JSON object to configure HBase to expose metrics via JMX can be found below:

[
  {
    "Classification": "hbase-env",
    "Properties": {},
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "HBASE_REGIONSERVER_OPTS": "-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=10102",
          "HBASE_MASTER_OPTS": "-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=10101"
        },
        "Configurations": []
      }
    ]
  }
]

Configuring the log level for HBase

{
  "Classification": "hbase-log4j",
  "Properties": {
    "log4j.logger.org.apache.hadoop.hbase": "DEBUG"
  }
}

Applying EMRFS configurations

{
  "Classification": "emrfs-site",
  "Properties": {
    "fs.s3.maxConnections": "10000"
  }
}
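The individual classifications above can be collected into a single JSON file and supplied at cluster creation. A minimal sketch (the file path is an arbitrary placeholder, and the `aws emr create-cluster` options are elided):

```shell
#!/bin/bash
# Collect the classifications from this section into one file that can be
# supplied at deployment time, for example with:
#   aws emr create-cluster ... --configurations file:///tmp/hbase-config.json
cat > /tmp/hbase-config.json <<'EOF'
[
  {
    "Classification": "emrfs-site",
    "Properties": { "fs.s3.maxConnections": "10000" }
  },
  {
    "Classification": "hbase-log4j",
    "Properties": { "log4j.logger.org.apache.hadoop.hbase": "DEBUG" }
  }
]
EOF

# Catch JSON mistakes (missing commas, stray quotes) before deploying.
python3 -m json.tool /tmp/hbase-config.json > /dev/null && echo "configuration JSON is valid"
```

Validating the file locally is cheap insurance: a malformed configuration object is otherwise only discovered after the cluster has started provisioning.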
Sizing the cluster compute and local storage resources

Sizing Compute Capacity

Selecting an Instance Type

When sizing your cluster, you can consider having a large cluster with a smaller instance type, or having a small cluster with a more powerful instance type. We recommend extensive testing to find the correct instance type for your application SLA. As a starting point, you can use the latest generation of memory-optimized instance types (R5/R5d) and the same region density per RegionServer as in your HDFS-backed cluster. R5d instances share the same specifications as R5 instances and also include up to 3.6 TB of local NVMe storage. For more details on these instance types, refer to Now Available: R5, R5d, and z1d Instances. As you progress to tune the read and write path, first establish a configuration that supports the SLA of your application. Then increase the region density by reducing the number of nodes in the cluster.

Sizing Local Storage

The disk requirements of the cluster depend on your application SLA and access patterns. As a rule of thumb, read-intensive applications benefit from caching data in the BucketCache. For this reason, the disk size should be large enough to cover all caching requirements, HDFS requirements (write path), and OS and Apache Hadoop requirements.

Storage options on Amazon EMR

On Amazon EMR, you have the option to choose an Amazon EBS volume or the instance store. The AMI used by your cluster dictates whether the root device volume uses the instance store or an Amazon EBS volume. Some AMIs use the Amazon EC2 instance store and some use Amazon EBS. When you configure instance types in Amazon EMR, you can add Amazon EBS volumes, which contribute to the total capacity together with the instance store (if present) and the default Amazon EBS volume. Amazon EBS provides the
following volume types: General Purpose (SSD), Provisioned IOPS (SSD), Throughput Optimized (HDD), Cold (HDD), and Magnetic. They differ in performance characteristics and price to support multiple analytic and business needs. For a detailed description of storage options on Amazon EMR, refer to Instance Store and Amazon EBS.

Selecting and Sizing Local Storage for the BucketCache

Most HBase workloads perform well with General Purpose SSD (gp2) volumes. The volume mix per Amazon EMR core instance can be either two or more large volumes, or multiple small volumes in addition to the root volume. Note that when your instance has multiple volumes, the BucketCache is divided across n-1 volumes; the first volume stores logs and temporary data. See the Tuning the Size of the BucketCache section for details on how to choose a starting value for the size of the BucketCache and the starting disk requirements for your Amazon EMR core/task nodes.

Applying Security Configurations to Amazon EMR and EMRFS

You can use security configurations to apply the settings that support at-rest data encryption, in-transit data encryption, and authentication. For more details, see Create a Security Configuration. Depending on the strategy you choose for authorizing access to HBase, HBase configurations can be applied via the same process included in the Applying HBase and EMRFS Configurations to the Cluster section. Due to performance issues reported when block encryption uses 3DES, Transparent Encryption is preferred over encrypting block data transfer. For more details on Transparent Encryption, see the Transparent Encryption Reference section.

Troubleshooting

Error message excerpt: Please reduce your request rate (Service: Amazon S3; Status Code: 503; Error Code: SlowDown ...)
Description/Solution: Amazon
S3 is throttling requests from your cluster due to an excessive number of transactions per second to specific object prefixes. Find the request rate and prepare the Amazon S3 bucket for that request rate. Use the metrics for the Amazon S3 bucket location of the HBase root directory to review the number of requests for the previous hour (the request rate). See the Preparing Amazon S3 for your HBase workload and Preparing Amazon S3 for Production load sections for details on how to prepare the Amazon S3 bucket location for the HBase root directory for your request rate.

Error message excerpt: Unable to execute HTTP request: Timeout waiting for connection from pool
Description/Solution: Increase the value of the fs.s3.maxConnections property. See the Setting the total number of connections used by EMRFS to read/write data from/to Amazon S3 section for more details on how to tune this property.

Migrating and Restoring Apache HBase Tables on Apache HBase on Amazon S3

Data Migration

This paper covers using the ExportSnapshot tool to migrate the data. For additional options, see Tips for Migrating to Apache HBase on Amazon S3 from HDFS.

Creating a Snapshot

To create a snapshot, run the following commands from the HBase shell:

hbase shell
hbase(main):001:0> disable 'table_name'
hbase(main):002:0> snapshot 'table_name', 'table_name_snapshot_date'
hbase(main):003:0> enable 'table_name'

If you are taking the snapshot from a production HBase cluster and cannot afford service disruption, you do not need to disable the table to take a snapshot. There is minimal performance degradation if you keep the table active. However, there may be some inconsistencies between the state of the table at the end of the snapshot operation and the snapshot contents. If you can afford service disruption in your production HBase cluster, disabling the
table guarantees that the snapshot is fully consistent with the state of the disabled table.

Validating the Snapshot

As soon as the snapshot has completed, use the following command to check that the snapshot was successful:

hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -stats -snapshot table_name_snapshot_date

Snapshot Info
Name: table_name_snapshot_date
Type: FLUSH
Table: table_name
Format: 2
Created: 2018-03-29T16:02:06
Owner:
10 HFiles (0 in archive), total size 488 K (100.00% 488 K shared with the source table)
0 Logs, total size 0

Exporting a Snapshot to Amazon S3

Next, use org.apache.hadoop.hbase.snapshot.ExportSnapshot to copy the data over to the Apache HBase root directory on Amazon S3:

hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot <snapshot_name> -copy-to s3://<HBase_on_S3_root_dir>/

As an example, the export of 40 TB of data over 4x10 Gbps Direct Connect takes approximately four to five hours.

Data Restore

Creating an empty table

If you are restoring data from a snapshot, first create an empty table and then issue a snapshot restore instead of a snapshot clone. A snapshot clone (clone_snapshot) produces an actual copy of the files. A snapshot restore (restore_snapshot) creates links to the files copied to the Amazon S3 root directory.

hbase shell
hbase(main):001:0> create 'table_name', 'cf1'
hbase(main):002:0> disable 'table_name'

Restoring the snapshot from the HBase shell

After creating an empty table, you can restore the snapshot:

hbase(main):004:0> restore_snapshot 'table_name_snapshot_date'
hbase(main):005:0> enable 'table_name'

Deploying into Production

After you complete
the steps in this section, you are ready to migrate the full dataset from your HDFS-backed cluster to HBase on Amazon S3 and restore it to an HBase on Amazon S3 cluster running in your AWS production account.

Preparing Amazon S3 for Production load

Analyze the Amazon CloudWatch metrics for Amazon S3 captured for the HBase root directory in the development account, and confirm the number of requests per Amazon S3 API as noted in the Preparing the Test Environment section. If you expect a rapid increase in the request rate for the HBase on Amazon S3 root directory bucket in the production account, to more than the rates in the Preparing the Test Environment section, open a support case to prepare for the workload and to avoid any temporary limits on your request rate. You do not need to open a support case for request rates lower than those in the Preparing the Test Environment section.

Preparing the Production environment

Follow all the steps in the Preparing the Test Environment section to prepare your production environment with the configuration settings you found during the testing phase. To migrate and restore the full dataset into the production environment, follow the steps in the Migrating and Restoring HBase Tables on HBase on Amazon S3 section.

Managing the Production Environment

Operationalization tasks

Node Decommissioning

When a node is gracefully decommissioned by the YARN ResourceManager (during a user-initiated shrink operation, or on node failures such as a bad disk), the regions are first closed and then shut down by the RegionServer. You can also gracefully decommission a RegionServer on any active node by stopping the daemon manually. This step may be required while troubleshooting a particular RegionServer in the cluster:

sudo stop hbase-regionserver

During
shutdown, the RegionServer's znode expires. The HMaster notices this event and considers that RegionServer a crashed server. The HMaster then reassigns the regions the RegionServer used to serve to other online RegionServers. Depending on the prefetch settings, the cache is warmed on the new RegionServer that is now assigned to serve the region.

Rolling Restart

A rolling restart restarts the HMaster process on the master node and the HRegionServer process on all the core nodes. Check for any inconsistencies, and make sure that the HBase balancer is turned off so that the load balancer does not interfere with region deployments. Use the shell to disable the HBase balancer:

hbase(main):001:0> balance_switch false
true
0 row(s) in 0.2970 seconds

The following is a sample script that performs a rolling restart on an Apache HBase cluster. This script should be executed on the Amazon EMR master node, which must have the Amazon EC2 key pair (.pem extension) file available to log in to the Amazon EMR core nodes.

#!/bin/bash
sudo stop hbase-master; sudo start hbase-master
for node in $(yarn node -list | grep -i ip- | cut -f1 -d: | xargs); do
  ssh -i ~/hadoop.pem -t -o "StrictHostKeyChecking=no" hadoop@$node "sudo stop hbase-regionserver; sudo start hbase-regionserver"
done
# Restart HMaster again to clear out the dead servers list and re-enable the balancer
sudo stop hbase-master; sudo start hbase-master
# Run the hbck utility to make sure HBase is consistent
hbase hbck

Cluster resize

Nodes can be added to or removed from HBase on Amazon S3 clusters by performing a resize operation on the cluster. If an Auto Scaling policy was set based on a specific CloudWatch metric (such as IsIdle), the resize operation happens based on that policy. All these operations are performed gracefully.

Backup and
Restore

With HBase on Amazon S3, you can still consider taking snapshots of your tables every few hours (and deleting them after some days) so that you have a point-in-time recovery option available. See also the Running the balancer for specific periods to minimize the impact of region movements on snapshots section.

Cluster termination without data loss

If you want to terminate the current cluster and build a new one on the same Amazon S3 root directory, we recommend that you disable all of the tables in the current cluster. This ensures that all of the data that has not yet been written to Amazon S3 is flushed from the MemStore cache to Amazon S3 in the form of new store files. To do so, the script below uses an existing script (/usr/lib/hbase/bin/disable_all_tables.sh) to disable the tables.

#!/bin/bash
clusterID=$(cat /mnt/var/lib/info/job-flow.json | jq -r ".jobFlowId")

# Call disable_all_tables.sh
bash /usr/lib/hbase/bin/disable_all_tables.sh

# Store the output of the "list" command in a temp file
echo "list" | hbase shell > tableListSummary.txt

# Fetch only the list of tables and store it in another temp file
tail -1 tableListSummary.txt | tr ',' '\n' | tr -d '"' | tr -d '[' | tr -d ']' | tr -d ' ' > tableList.txt

# Prepare for iteration
while true; do
  while read line; do
    flag="N"
    echo "is_enabled '$line'" | hbase shell > bool.txt
    bool=$(tail -3 bool.txt | head -1)
    if [ "$bool" = "true" ]; then
      flag="Y"
      break
    fi
  done < tableList.txt
  echo "flag: $flag"
  if [ "$flag" = "N" ]; then
    aws emr terminate-clusters --cluster-ids $clusterID
    break
  else
    echo "Tables aren't disabled yet. Sleeping for 5 seconds to try again."
  fi
  sleep 5
done

# Clean up temporary files
rm tableListSummary.txt tableList.txt bool.txt

The preceding script can be placed in a file and named
disable_and_terminate.sh. Note that the script does not exist on the instance; you can add an Amazon EMR step to first copy the script to the instance, and then run the step to disable the tables and terminate the cluster. To run the script, you can use the following Amazon EMR step properties:

Name="Disable all tables",Jar="command-runner.jar",Args=["/bin/bash","/home/hadoop/disable_and_terminate.sh"]

OS and Apache HBase patching

Similar to AMI upgrades on Amazon EC2, the Amazon EMR service team plans for application upgrades with every new Amazon EMR version release. This removes any OS and Apache HBase patching activities from your team. The latest version of Amazon EMR (5.17.0 as of this paper) runs Apache HBase version 1.4.6. Details of each Amazon EMR version release can be found in Amazon EMR 5.x Release Versions.

Conclusion

This paper includes steps to help you migrate from HBase on HDFS to HBase on Amazon S3. The migration plan provided detailed steps and HBase properties to configure when migrating to HBase on Amazon S3. Using the various best practices and recommendations highlighted in this whitepaper, we encourage you to test several values for HBase configuration properties so that your HBase on Amazon S3 cluster supports the performance requirements of your application and use case.

Contributors

The following individuals contributed to the first version of this document:

• Francisco Oliveira, Senior Big Data Consultant, Amazon Web Services
• Tony Nguyen, Senior Big Data Consultant, Amazon Web Services
• Veena Vasudevan, Big Data Support Engineer, Amazon Web Services

Further Reading

For additional information, see the following:

• HBase on Amazon S3 Documentation
• Tips for Migrating to Apache HBase on Amazon S3 from HDFS
• Low Latency Access on Trillions of Records: FINRA's
Architecture Using Apache HBase on Amazon EMR with Amazon S3
• Setting up Read Replica Clusters with HBase on Amazon S3
• Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory

Document Revisions

May 2021: Reviewed for technical accuracy
January 2021: Removed information addressing EMRFS Consistent View because Amazon S3 now delivers strong read-after-write consistency automatically for all applications
October 2018: First publication

Appendix A: Command Reference

Restart HBase

Commands to run on the master node:

sudo stop hbase-master
sudo stop hbase-rest
sudo stop hbase-thrift
sudo stop zookeeper-server
sudo start hbase-master
sudo start hbase-rest
sudo start hbase-thrift
sudo start zookeeper-server

Commands to run on all core nodes:

sudo stop hbase-regionserver
sudo start hbase-regionserver

Appendix B: AWS IAM Policy Reference

The policies that follow are annotated with comments; remove the comments prior to use.

Minimal Amazon EMR Service Role Policy

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Resource": "*",
      "Action": [
        "ec2:AuthorizeSecurityGroupEgress",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CancelSpotInstanceRequests",
        "ec2:CreateNetworkInterface",
        "ec2:CreateSecurityGroup",
        "ec2:CreateTags",
        "ec2:DeleteNetworkInterface", // This is only needed if you are launching clusters in
a private subnet
        "ec2:DeleteTags",
        "ec2:DeleteSecurityGroup", // This is only needed if you are using Amazon-managed security groups for private subnets. You can omit this action if you are using custom security groups.
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeAccountAttributes",
        "ec2:DescribeDhcpOptions",
        "ec2:DescribeImages",
        "ec2:DescribeInstanceStatus",
        "ec2:DescribeInstances",
        "ec2:DescribeKeyPairs",
        "ec2:DescribeNetworkAcls",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DescribePrefixLists",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSpotInstanceRequests",
        "ec2:DescribeSpotPriceHistory",
        "ec2:DescribeSubnets",
        "ec2:DescribeTags",
        "ec2:DescribeVpcAttribute",
        "ec2:DescribeVpcEndpoints",
        "ec2:DescribeVpcEndpointServices",
        "ec2:DescribeVpcs",
        "ec2:DetachNetworkInterface",
        "ec2:ModifyImageAttribute",
        "ec2:ModifyInstanceAttribute",
        "ec2:RequestSpotInstances",
        "ec2:RevokeSecurityGroupEgress",
        "ec2:RunInstances",
        "ec2:TerminateInstances",
        "ec2:DeleteVolume",
        "ec2:DescribeVolumeStatus",
        "ec2:DescribeVolumes",
        "ec2:DetachVolume",
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListInstanceProfiles",
        "iam:ListRolePolicies",
        "s3:CreateBucket",
        "sdb:BatchPutAttributes",
        "sdb:Select",
        "cloudwatch:PutMetricAlarm",
        "cloudwatch:DescribeAlarms",
        "cloudwatch:DeleteAlarms",
        "application-autoscaling:RegisterScalableTarget",
        "application-autoscaling:DeregisterScalableTarget",
        "application-autoscaling:PutScalingPolicy",
        "application-autoscaling:DeleteScalingPolicy",
        "application-autoscaling:Describe*"
      ]
    },
    {
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::examplebucket/*", "arn:aws:s3:::examplebucket2/*"], // Here you
can specify the list of buckets that are going to store cluster logs, bootstrap action scripts, custom JAR files, and input and output paths for EMR steps
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetBucketCORS",
        "s3:GetObjectVersionForReplication",
        "s3:GetObject",
        "s3:GetBucketTagging",
        "s3:GetObjectVersion",
        "s3:GetObjectTagging",
        "s3:ListMultipartUploadParts",
        "s3:ListBucketByTags",
        "s3:ListBucket",
        "s3:ListObjects",
        "s3:ListBucketMultipartUploads"
      ]
    },
    {
      "Effect": "Allow",
      "Resource": "arn:aws:sqs:*:123456789012:AWS-ElasticMapReduce-*", // This allows EMR to perform actions (creating a queue, receiving messages, deleting a queue, and so on) only on SQS queues whose names are prefixed with the literal string AWS-ElasticMapReduce-
      "Action": [
        "sqs:CreateQueue",
        "sqs:DeleteQueue",
        "sqs:DeleteMessage",
        "sqs:DeleteMessageBatch",
        "sqs:GetQueueAttributes",
        "sqs:GetQueueUrl",
        "sqs:PurgeQueue",
        "sqs:ReceiveMessage"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "iam:CreateServiceLinkedRole", // EMR needs permissions to create this service-linked role for launching EC2 Spot Instances
      "Resource": "arn:aws:iam::*:role/aws-service-role/spot.amazonaws.com/AWSServiceRoleForEC2Spot*",
      "Condition": {
        "StringLike": {
          "iam:AWSServiceName": "spot.amazonaws.com"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole", // We are passing the custom EC2 instance profile (defined below), which has bare-minimum permissions
      "Resource": [
        "arn:aws:iam::*:role/Custom_EMR_EC2_role",
        "arn:aws:iam::*:role/EMR_AutoScaling_DefaultRole"
      ]
    }
  ]
}

Minimal Amazon EMR Role for Amazon EC2 (Instance Profile) Policy

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Resource": "*",
      "Action": [
        "ec2:Describe*",
        "elasticmapreduce:Describe*",
        "elasticmapreduce:ListBootstrapActions",
        "elasticmapreduce:ListClusters",
        "elasticmapreduce:ListInstanceGroups",
        "elasticmapreduce:ListInstances",
        "elasticmapreduce:ListSteps"
      ]
    },
    {
      "Effect": "Allow",
      "Resource": [ // Here you can specify the list of buckets that are going to be accessed by applications (Spark, Hive, and so on) running on the nodes of the cluster
        "arn:aws:s3:::examplebucket1/*",
        "arn:aws:s3:::examplebucket1*",
        "arn:aws:s3:::examplebucket2/*",
        "arn:aws:s3:::examplebucket2*"
      ],
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetBucketCORS",
        "s3:GetObjectVersionForReplication",
        "s3:GetObject",
        "s3:GetBucketTagging",
        "s3:GetObjectVersion",
        "s3:GetObjectTagging",
        "s3:ListMultipartUploadParts",
        "s3:ListBucketByTags",
        "s3:ListBucket",
        "s3:ListObjects",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject",
        "s3:PutObjectTagging",
        "s3:HeadBucket",
        "s3:DeleteObject"
      ]
    },
    {
      "Effect": "Allow",
      "Resource": "arn:aws:sqs:*:123456789012:AWS-ElasticMapReduce-*", // This allows EMR to perform actions (creating a queue, receiving messages, deleting a queue, and so on) only on SQS queues whose names are prefixed with the literal string AWS-ElasticMapReduce-
      "Action": [
        "sqs:CreateQueue",
        "sqs:DeleteQueue",
        "sqs:DeleteMessage",
        "sqs:DeleteMessageBatch",
        "sqs:GetQueueAttributes",
        "sqs:GetQueueUrl",
        "sqs:PurgeQueue",
        "sqs:ReceiveMessage"
      ]
    }
  ]
}

Minimal Role Policy for User Launching Amazon EMR Clusters

// This policy can be attached to an AWS
IAM user who will be launching EMR clusters. It provides minimum access for the user to launch, monitor, and terminate EMR clusters.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Action": "iam:CreateServiceLinkedRole",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "iam:AWSServiceName": [
            "elasticmapreduce.amazonaws.com",
            "elasticmapreduce.amazonaws.com.cn"
          ]
        }
      }
    },
    {
      "Sid": "Statement2",
      "Effect": "Allow",
      "Action": [
        "iam:GetPolicyVersion",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:DescribeInstances",
        "ec2:RequestSpotInstances",
        "ec2:DeleteTags",
        "ec2:DescribeSpotInstanceRequests",
        "ec2:ModifyImageAttribute",
        "cloudwatch:GetMetricData",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:ListMetrics",
        "ec2:DescribeVpcAttribute",
        "ec2:DescribeSpotPriceHistory",
        "ec2:DescribeAvailabilityZones",
        "ec2:CreateRoute",
        "ec2:RevokeSecurityGroupEgress",
        "ec2:CreateSecurityGroup",
        "ec2:DescribeAccountAttributes",
        "ec2:ModifyInstanceAttribute",
        "ec2:DescribeKeyPairs",
        "ec2:DescribeNetworkAcls",
        "ec2:DescribeRouteTables",
        "ec2:AuthorizeSecurityGroupEgress",
        "ec2:TerminateInstances", // This action can be scoped in a similar manner to "elasticmapreduce:TerminateJobFlows" below
        "iam:GetPolicy",
        "ec2:CreateTags",
        "ec2:DeleteRoute",
        "iam:ListRoles",
        "ec2:RunInstances",
        "ec2:DescribeSecurityGroups",
        "ec2:CancelSpotInstanceRequests",
        "ec2:CreateVpcEndpoint",
        "ec2:DescribeVpcs",
        "ec2:DescribeSubnets",
        "elasticmapreduce:*"
      ],
      "Resource": "*"
    },
    {
      "Sid": "Statement3",
      "Effect": "Allow",
HBase on Amazon S3 Page 62 ""Action"": [ ""elasticmapreduce:TerminateJobFlows"" ] ""Resource"":""*"" ""Condition"": { ""StringEquals"": { ""elasticmapreduce:Resour ceTag/custom_key"": ""custom_value"" // Here you can specify the key value pair of your custom tag so that this IAM user can only delete the clusters which are appropriately tagged by the user } } } { ""Sid"": ""Statement4"" ""Effect"": ""Allow"" ""Action"": ""iam:PassRole"" ""Resource"": [ ""arn:aws:iam::*:role/Custom_EMR_Role"" ""arn:aws:iam::*:role/Custom_EMR_EC2_role"" ""arn:aws:iam::*:role/EMR_AutoScaling_DefaultRole"" ] } ] } This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 63 Appendix C: Transparent Encryption Reference To configure Transparent Encryption use the following Amazon EMR Configuration JSON: [{""classification"":""hdfs encryption zones""""propert ies"":{""/user/hbase"":""hbase key""}}] In addition to the preceding classification you must disable HDFS Opensource Security By default Amazon EMR Security Configurations for a trest Encryption for Local Disks tie Open source HDFS Encryption with LUKs encry ption If you need to configure Transparent Encryption and your application is latency sensitive do not enable at rest encryption via Amazon EMR Security Configuration You can configure LUKS via a bootstrap action To check that WALs are being encrypte d use the following commands: sudo –u hdfs hdfs dfs ls /user/HBase/WAL/ip xxxxx xxec2internal160201520373175110 sudo –u hdfs hdfs crypto getFileEncryptionInfo path /user/HBase/WAL/WALs/ip xxxxx xxec2internal160201520373175110/ip xxxxx xxec2internal%2C16020%2C15203731751101520373184129 To verify that the oldWALs are being encrypted the output to the last command should be the following: {cipherSuite: {name: AES/CTR/NoPadding algorithmBlockSize: 16} cryptoProtocolVersion: CryptoProto 
colVersion{description='Encryption zones' version=2 unknownValue=null} edek: 7c3c2fcf8337f14bbf815697686de5a696c6670c0f41eb71678b53ee5326c33e This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 64 iv: eac6cf91bdd2eee8496f1ddb19b4fcf8 keyName: HBase key ezKeyVersionName: hbase key@0} Note: The default configurations grant access to the DECRYPT_EEK operation on all keys (/etc/hadoop kms/conf/kms aclsxml) For more details see Transparent Encryption in HDFS on Amazon EMR and Transparent Encryption in HDFS",General,consultant,Best Practices Migrating_Your_Databases_to_Amazon_Aurora,This paper has been archived For the latest technical content refer t o the HTML version: https://docsawsamazoncom/whitepapers/latest/ migratingdatabasestoamazonaurora/migrating databasestoamazonaurorahtml Migratin g Your Databases to Amazon Aurora First Published June 10 2016 Updated July 28 2021 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest technical 
content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Contents
Introduction to Amazon Aurora
Database migration considerations
  Migration phases
  Application considerations
  Sharding and read replica considerations
  Reliability considerations
  Cost and licensing considerations
  Other migration considerations
Planning your database migration process
  Homogeneous migration
  Heterogeneous migration
  Migrating large databases to Amazon Aurora
  Partition and shard consolidation on Amazon Aurora
Migration options at a glance
RDS snapshot migration
Migration using Aurora Read Replica
Migrating the database schema
  Homogeneous schema migration
  Heterogeneous schema migration
  Schema migration using the AWS Schema Conversion Tool
Migrating data
  Introduction and general approach to AWS DMS
  Migration methods
  Migration procedure
Testing and cutover
  Migration testing
  Cutover
Conclusion
Contributors
Further reading
Document history

Abstract
Amazon Aurora is a MySQL- and PostgreSQL-compatible, enterprise-grade relational database engine. Amazon Aurora is a cloud-native database that overcomes many of the limitations of traditional relational database engines. The goal of this whitepaper is to highlight best practices for migrating your existing databases to Amazon Aurora. It presents migration considerations and the step-by-step process of migrating open-source and commercial databases to Amazon Aurora with minimum disruption to the applications.

Amazon Web
Services – Migrating Your Databases to Amazon Aurora

Introduction to Amazon Aurora
For decades, traditional relational databases have been the primary choice for data storage and persistence. These database systems continue to rely on monolithic architectures and were not designed to take advantage of cloud infrastructure. These monolithic architectures present many challenges, particularly in areas such as cost, flexibility, and availability. To address these challenges, AWS redesigned the relational database for the cloud infrastructure and introduced Amazon Aurora.

Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database engine that combines the speed, availability, and security of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Aurora provides up to five times better performance than MySQL, three times better performance than PostgreSQL, and comparable performance to high-end commercial databases. Amazon Aurora is priced at 1/10th the cost of commercial engines.

Amazon Aurora is available through the Amazon Relational Database Service (Amazon RDS) platform. Like other Amazon RDS databases, Aurora is a fully managed database service. With the Amazon RDS platform, most database management tasks, such as hardware provisioning, software patching, setup, configuration, monitoring, and backup, are completely automated.

Amazon Aurora is built for mission-critical workloads and is highly available by default. An Aurora database cluster spans multiple Availability Zones in a Region, providing out-of-the-box durability and fault tolerance to your data across physical data centers. An Availability Zone is composed of one or more highly available data centers operated by Amazon. Availability Zones are isolated from each other and are connected through low-latency links. Each segment of your database volume is replicated six times across these Availability Zones.

Amazon Aurora enables dynamic resizing for database storage space. Aurora
cluster volumes automatically grow as the amount of data in your database increases, with no performance or availability impact, so there is no need to estimate and provision large amounts of database storage ahead of time. The storage space allocated to your Amazon Aurora database cluster will automatically increase up to a maximum size of 128 tebibytes (TiB) and will automatically decrease when data is deleted.

Aurora's automated backup capability supports point-in-time recovery of your data, enabling you to restore your database to any second during your retention period, up to the last five minutes. Automated backups are stored in Amazon Simple Storage Service (Amazon S3), which is designed for 99.999999999% durability. Amazon Aurora backups are automatic, incremental, and continuous, and have no impact on database performance.

For applications that need read-only replicas, you can create up to 15 Aurora Replicas per Aurora database with very low replica lag. These replicas share the same underlying storage as the source instance, lowering costs and avoiding the need to perform writes at the replica nodes. Optionally, Aurora Global Database can be used for high read throughput across up to six Regions, with up to 90 read replicas.

Amazon Aurora is highly secure and allows you to encrypt your databases using keys that you create and control through AWS Key Management Service (AWS KMS). On a database instance running with Amazon Aurora encryption, data stored at rest in the underlying storage is encrypted, as are the automated backups, snapshots, and replicas in the same cluster. Amazon Aurora uses SSL (AES-256) to secure data in transit. For a complete list of Aurora features, see the Amazon Aurora product page.

Given the rich feature set and cost-effectiveness of Amazon Aurora, it is
increasingly viewed as the go-to database for mission-critical applications.

Amazon Aurora Serverless v2 (Preview) is the new version of Aurora Serverless, an on-demand, automatic scaling configuration of Amazon Aurora that automatically starts up, shuts down, and scales capacity up or down based on your application's needs. It scales instantly from hundreds to hundreds of thousands of transactions in a fraction of a second. As it scales, it adjusts capacity in fine-grained increments to provide just the right amount of database resources that the application needs. There is no database capacity for you to manage; you pay only for the capacity your application consumes, and you can save up to 90% of your database cost compared to the cost of provisioning capacity for peak load. Aurora Serverless v2 is a simple and cost-effective option for any customer who cannot easily allocate capacity because they have variable and infrequent workloads or a large number of databases. If you can predict your application's requirements and prefer the cost certainty of fixed-size instances, you may want to continue using fixed-size instances.

The Amazon Aurora capabilities discussed in this whitepaper apply to both the MySQL and PostgreSQL database engines unless otherwise specified. However, the migration practices discussed in this paper are specific to the Aurora MySQL database engine. For more information about Aurora best practices specific to the PostgreSQL database engine, see Working with Amazon Aurora PostgreSQL in the Amazon Aurora user guide.

Database migration considerations
A database represents a critical component in the architecture of most applications. Migrating the database to a new platform is a significant event in an application's lifecycle and may have an impact on application
functionality, performance, and reliability. You should take a few important considerations into account before embarking on your first migration project to Amazon Aurora.

Migration phases
Because database migrations tend to be complex, we advocate taking a phased, iterative approach.

Figure 1 — Migration phases

Application considerations

Evaluate Aurora features
Although most applications can be architected to work with many relational database engines, you should make sure that your application works with Amazon Aurora. Amazon Aurora is designed to be wire-compatible with MySQL 5.6 and 5.7. Therefore, most of the code, applications, drivers, and tools that are used today with MySQL databases can be used with Aurora with little or no change. However, certain MySQL features, like the MyISAM storage engine, are not available with Amazon Aurora. Also, due to the managed nature of the Aurora service, SSH access to database nodes is restricted, which may affect your ability to install third-party tools or plugins on the database host.

Performance considerations
Database performance is a key consideration when migrating a database to a new platform. Therefore, many successful database migration projects start with performance evaluations of the new database platform. Although the Amazon Aurora Performance Assessment paper gives you a decent idea of overall database performance, these benchmarks do not emulate the data access patterns of your applications. For more useful results, test the database performance for time-sensitive workloads by running your queries (or a subset of your queries) on the new platform directly.

Consider these strategies:

• If your current database is MySQL, migrate to Amazon Aurora with downtime and performance test your database with a test or staging version of your
application, or by replaying the production workload.

• If you are on a non-MySQL-compliant engine, you can selectively copy the busiest tables to Amazon Aurora and test your queries for those tables. This gives you a good starting point. Of course, testing after complete data migration will provide a full picture of the real-world performance of your application on the new platform.

Amazon Aurora delivers comparable performance to commercial engines and significant improvement over MySQL performance. It does this by tightly integrating the database engine with an SSD-based virtualized storage layer designed for database workloads. This reduces writes to the storage system, minimizes lock contention, and eliminates delays created by database process threads. Our tests with SysBench on r5.16xlarge instances show that Amazon Aurora delivers close to 800,000 reads per second and 200,000 writes per second, five times higher than MySQL running the same benchmark on the same hardware.

One area where Amazon Aurora significantly improves upon traditional MySQL is highly concurrent workloads. In order to maximize your workload's throughput on Amazon Aurora, we recommend architecting your applications to drive a large number of concurrent queries.

Sharding and read replica considerations
If your current database is sharded across multiple nodes, you may have an opportunity to combine these shards into a single Aurora database during migration. A single Amazon Aurora instance can scale up to 128 TB, supports thousands of tables, and supports a significantly higher number of reads and writes than a standard MySQL database.

If your application is read/write heavy, consider using Aurora read replicas for offloading read-only workload from the primary database node. Doing this can improve concurrency of your primary database for writes and will improve overall read and write performance.
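As an illustrative sketch of this read-offloading pattern (the endpoint names below are hypothetical placeholders, not values from this paper), a minimal router can send read-only statements to the cluster's reader endpoint and everything else to the writer endpoint:

```python
# Sketch: route read-only SQL to an Aurora reader endpoint and all other
# statements to the writer endpoint. Both endpoint names are made-up
# placeholders for illustration only.
WRITER_ENDPOINT = "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "mycluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

READ_ONLY_PREFIXES = ("select", "show", "describe", "explain")

def choose_endpoint(sql: str) -> str:
    """Return the endpoint a statement should be sent to.

    Statements that only read data go to the reader endpoint; DML and DDL
    go to the writer. A production router would also consider open
    transactions and replica lag, which this sketch ignores.
    """
    stripped = sql.strip()
    first_word = stripped.split(None, 1)[0].lower() if stripped else ""
    return READER_ENDPOINT if first_word in READ_ONLY_PREFIXES else WRITER_ENDPOINT
```

Aurora's reader endpoint already load-balances across replicas via DNS, so a router like this only has to decide writer versus reader, not which replica to use.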
Using read replicas can also lower your costs in a Multi-AZ configuration, since you may be able to use smaller instances for your primary instance while adding failover capabilities in your database cluster. Aurora read replicas offer near-zero replication lag, and you can create up to 15 read replicas.

Reliability considerations
An important consideration with databases is high availability and disaster recovery. Determine the RTO (recovery time objective) and RPO (recovery point objective) requirements of your application. With Amazon Aurora, you can significantly improve both of these factors. Amazon Aurora reduces database restart times to less than 60 seconds in most database crash scenarios. Aurora also moves the buffer cache out of the database process and makes it available immediately at restart time. In rare scenarios of hardware and Availability Zone failures, recovery is automatically handled by the database platform.

Aurora is designed to provide you zero-RPO recovery within an AWS Region, which is a major improvement over on-premises database systems. Aurora maintains six copies of your data across three Availability Zones and automatically attempts to recover your database in a healthy AZ with no data loss. In the unlikely event that your data is unavailable within Amazon Aurora storage, you can restore from a DB snapshot or perform a point-in-time restore operation to a new instance.

For cross-Region DR, Amazon Aurora also offers a global database feature designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS Regions. Aurora uses storage-based replication to replicate your data to other Regions with typical latency of less than one second and without impacting database performance. This enables fast local reads with low latency in each Region and provides disaster recovery from Region-wide
outages. You can promote the secondary AWS Region for read/write workloads in case of an outage or disaster in less than one minute. You also have the option to create an Aurora Read Replica of an Aurora MySQL DB cluster in a different AWS Region by using MySQL binary log (binlog) replication. Each cluster can have up to five Read Replicas created this way, each in a different Region.

Cost and licensing considerations
Owning and running databases comes with associated costs. Before planning a database migration, an analysis of the total cost of ownership (TCO) of the new database platform is imperative. Migration to a new database platform should ideally lower the total cost of ownership while providing your applications with similar or better features.

If you are running an open-source database engine (MySQL, Postgres), your costs are largely related to hardware, server management, and database management activities. However, if you are running a commercial database engine (Oracle, SQL Server, DB2, and so on), a significant portion of your cost is database licensing. Since Aurora is available at one-tenth of the cost of commercial engines, many applications moving to Aurora are able to significantly reduce their TCO. Even if you are running on an open-source engine like MySQL or Postgres, with Aurora's high performance and dual-purpose read replicas, you can realize meaningful savings by moving to Amazon Aurora. See the Amazon Aurora Pricing page for more information.

Other migration considerations
Once you have considered application suitability, performance, TCO, and reliability factors, you should think about what it would take to migrate to the new platform.

Estimate code change effort
It is important to estimate the amount of code and schema changes that you need to perform while
migrating your database to Amazon Aurora. When migrating from MySQL-compatible databases, negligible code changes are required. However, when migrating from non-MySQL engines, you may be required to make schema and code changes. The AWS Schema Conversion Tool can help to estimate that effort (see the Schema migration using the AWS Schema Conversion Tool section in this document).

Application availability during migration
You have the option of migrating to Amazon Aurora by taking a predictable downtime approach with your application, or by taking a near-zero downtime approach. The approach you choose depends on the size of your database and the availability requirements of your applications. Whatever the case, it's a good idea to consider the impact of the migration process on your application and business before starting with a database migration. The next few sections explain both approaches in detail.

Modify connection string during migration
You need a way to point the applications to your new database. One option is to modify the connection strings for all of the applications. Another common option is to use DNS. In this case, you don't use the actual host name of your database instance in your connection string. Instead, consider creating a canonical name (CNAME) record that points to the host name of your database instance. Doing this allows you to change the endpoint to which your application points in a single location, rather than tracking and modifying multiple connection string settings.

If you choose to use this pattern, be sure to pay close attention to the time-to-live (TTL) setting for your CNAME record. If this value is set too high, the host name pointed to by this CNAME might be cached longer than desired. If this value is set too low, additional overhead might be placed on your client applications by having to resolve this CNAME repeatedly.
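If the CNAME lives in Amazon Route 53, the cutover can be scripted. The helper below is a sketch that only builds the change request (the record and host names are hypothetical); the dict it returns has the ChangeBatch shape expected by boto3's route53 change_resource_record_sets call.

```python
# Sketch: build a Route 53 ChangeBatch that repoints a database CNAME at a
# new host during migration cutover. Names below are illustrative only.
def cname_change_batch(record_name: str, target_host: str, ttl: int = 5) -> dict:
    """Return an UPSERT ChangeBatch pointing record_name at target_host.

    A low TTL keeps clients from caching the old database host for long
    while you cut over.
    """
    return {
        "Comment": "Repoint application database CNAME during migration",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "CNAME",
                "TTL": ttl,
                "ResourceRecords": [{"Value": target_host}],
            },
        }],
    }

# Hypothetical usage with boto3 (not executed here; zone ID is a placeholder):
#   import boto3
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="Z123EXAMPLE",
#       ChangeBatch=cname_change_batch(
#           "db.example.internal.",
#           "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"))
```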
Though use cases differ, a TTL of 5 seconds is usually a good place to start.

Planning your database migration process
The previous section discussed some of the key considerations to take into account while migrating databases to Amazon Aurora. Once you have determined that Aurora is the right fit for your application, the next step is to decide on a preliminary migration approach and create a database migration plan.

Homogeneous migration
If your source database is a MySQL 5.6 or 5.7 compliant database (MySQL, MariaDB, Percona, and so on), then migration to Aurora is quite straightforward.

Homogeneous migration with downtime
If your application can accommodate a predictable length of downtime during off-peak hours, migration with downtime is the simplest option and is a highly recommended approach. Most database migration projects fall into this category, as most applications already have a well-defined maintenance window. You have the following options to migrate your database with downtime:

• RDS snapshot migration — If your source database is running on Amazon RDS MySQL 5.6 or 5.7, you can simply migrate a snapshot of that database to Amazon Aurora. For migrations with downtime, you either have to stop your application or stop writing to the database while the snapshot and migration are in progress. The time to migrate primarily depends upon the size of the database
database migration process you are mo re comfortable using native MySQL tools and other migration methods are not performing as well for your use case You can create a dump of your data using the mysqldump utility and then import that data into an existing Amazon Aurora MySQL DB cluster Fo r more information see Migrating from MySQL to Amazon Aurora by using mysqldump You can copy th e full and incremental backup files from your database to an Amazon S3 bucket and then restore an Amazon Aurora MySQL DB cluster from those files This option can be considerably faster than migrating data using mysqldump For more information see Migrating data from MySQL by using an Amazon S3 bucket • Migration using AWS Database Migration Service (AWS DM S) — Onetime migration using AWS DMS is another tool for moving your source database to Amazon Aurora Before you can use AWS DMS to move the data you need to copy the database schema from source to target using native MySQL tools For the step bystep p rocess see the Migrating Data section Using AWS DMS is a great option when you don’t have experience using native MySQL tools Homogen eous migration with nearzero downtime In some scenarios you might want to m igrate your database to Aurora with minimal downtime Here are two e xamples: • When your database is relatively large and the migration time using downtime options is longer than your application maintenance window • When you want to run source and target data bases in parallel for testing purposes In such cases you can replicate changes from your source MySQL database to Aurora in real time using replication You have a couple of options to choose from: • Near zero downtime migration using MySQL binlog replication — Amazon Aurora supports traditional MySQL binlog replication If you are running MySQL database chances are that you are already familiar with classic binlog replication setup If that’s the case and you want more control over the migration process This paper has been 
a one-time database load using native tools coupled with binlog replication gives you a familiar migration path to Aurora.

• Near-zero downtime migration using AWS Database Migration Service (AWS DMS) — In addition to supporting one-time migration, AWS DMS also supports real-time data replication using change data capture (CDC) from source to target. AWS DMS takes care of the complexities related to initial data copy, setting up replication instances, and monitoring replication. After the initial database migration is complete, the target database remains synchronized with the source for as long as you choose. If you are not familiar with binlog replication, AWS DMS is the next best option for homogeneous near-zero downtime migrations to Amazon Aurora. See the section Introduction and general approach to AWS DMS.

• Near-zero downtime migration using Aurora Read Replica — If your source database is running on Amazon RDS MySQL 5.6 or 5.7, you can migrate from a MySQL DB instance to an Aurora MySQL DB cluster by creating an Aurora read replica of your source MySQL DB instance. When the replica lag between the MySQL DB instance and the Aurora read replica is zero, you can direct your client applications to the Aurora read replica. This migration option is explained in the Migration using Aurora Read Replica section.

Heterogeneous migration
If you are looking to migrate a non-MySQL-compliant database (Oracle, SQL Server, PostgreSQL, and so on) to Amazon Aurora, several options can help you accomplish this migration quickly and easily.

Schema migration
Schema migration from a non-MySQL-compliant database to Amazon Aurora can be achieved using the AWS Schema Conversion Tool. This tool is a desktop application that helps you convert your database schema from an Oracle, Microsoft SQL Server, or PostgreSQL database
to an Amazon RDS MySQL DB instance or an Amazon Aurora DB cluster. In cases where the schema from your source database cannot be automatically and completely converted, the AWS Schema Conversion Tool provides guidance on how you can create the equivalent schema in your target Amazon RDS database. For details, see the Migrating the database schema section.

Data migration
While supporting homogeneous migrations with near-zero downtime, AWS Database Migration Service (AWS DMS) also supports continuous replication across heterogeneous databases, and it is a preferred option to move your source database to your target database, for both migrations with downtime and migrations with near-zero downtime. Once the migration has started, AWS DMS manages all the complexities of the migration process, like data type transformation, compression, and parallel transfer (for faster data transfer), while ensuring that data changes to the source database that occur during the migration process are automatically replicated to the target.

Besides using AWS DMS, you can use various third-party tools, like Attunity Replicate, Tungsten Replicator, Oracle GoldenGate, and so on, to migrate your data to Amazon Aurora. Whatever tool you choose, take performance and licensing costs into consideration before finalizing your toolset for migration.

Migrating large databases to Amazon Aurora
Migration of large datasets presents unique challenges in every database migration project. Many successful large database migration projects use a combination of the following strategies:

• Migration with continuous replication — Large databases typically have extended downtime requirements while moving data from source to target. To reduce the downtime, you can first load baseline data from source to target and then enable replication
(using MySQL native tools, AWS DMS, or third-party tools) for changes to catch up.

• Copy static tables first — If your database relies on large static tables with reference data, you may migrate these large tables to the target database before migrating your active dataset. You can leverage AWS DMS to copy tables selectively, or export and import these tables manually.

• Multiphase migration — Migration of a large database with thousands of tables can be broken down into multiple phases. For example, you may move a set of tables with no cross-join queries every weekend until the source database is fully migrated to the target database. Note that in order to achieve this, you need to make changes in your application to connect to two databases simultaneously while your dataset is on two distinct nodes. Although this is not a common migration pattern, it is an option nonetheless.

• Database cleanup — Many large databases contain data and tables that remain unused. In many cases, developers and DBAs keep backup copies of tables in the same database, or they simply forget to drop unused tables. Whatever the reason, a database migration project provides an opportunity to clean up the existing database before the migration. If some tables are not being used, you might either drop them or archive them to another database. You might also delete old data from large tables, or archive that data to flat files.

Partition and shard consolidation on Amazon Aurora
If you are running multiple shards or functional partitions of your database to achieve high performance, you have an opportunity to consolidate these partitions or shards on a single Aurora database. A single Amazon Aurora instance can scale up to 128 TB, supports thousands of tables, and supports a significantly higher number of reads and writes than a standard MySQL database.
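As a sketch of this consolidation workflow (the host names, user name, and partition names below are illustrative assumptions), each functional partition can be exported with native MySQL tools and loaded into its own schema on the single target Aurora instance. The function only constructs the command lines; it does not run them:

```python
# Sketch: generate mysqldump/mysql command pairs that export each source
# partition and load it into a schema of the same name on one Aurora
# instance. All names here are placeholders for illustration.
def consolidation_commands(partitions: dict, aurora_host: str, user: str = "admin"):
    """Return (dump_command, load_command) pairs, one per partition.

    partitions maps a schema name to the source host that currently
    serves that functional partition.
    """
    commands = []
    for schema, source_host in partitions.items():
        # --single-transaction takes a consistent InnoDB snapshot without
        # locking writes; --databases keeps the CREATE DATABASE statement
        # so each partition lands in its own schema on the target.
        dump = (f"mysqldump -h {source_host} -u {user} -p "
                f"--single-transaction --routines --triggers "
                f"--databases {schema} > {schema}.sql")
        load = f"mysql -h {aurora_host} -u {user} -p < {schema}.sql"
        commands.append((dump, load))
    return commands
```

For an initial load followed by catch-up, the same per-partition split works with AWS DMS tasks in place of the dump/load pair.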
Consolidating these partitions on a single Aurora instance not only reduces the total cost of ownership and simplifies database management, but it also significantly improves the performance of cross-partition queries.

• Functional partitions — Functional partitioning means dedicating different nodes to different tasks. For example, in an e-commerce application, you might have one database node serving product catalog data and another database node capturing and processing orders. As a result, these partitions usually have distinct, nonoverlapping schemas.

• Consolidation strategy — Migrate each functional partition as a distinct schema to your target Aurora instance. If your source database is MySQL compliant, use native MySQL tools to migrate the schema, and then use AWS DMS to migrate the data, either one time or continuously using replication. If your source database is non-MySQL compliant, use the AWS Schema Conversion Tool to migrate the schemas to Aurora, and use AWS DMS for one-time load or continuous replication.

• Data shards — If you have the same schema with distinct sets of data across multiple nodes, you are leveraging database sharding. For example, a high-traffic blogging service may shard user activity and data across multiple database shards while keeping the same table schema.

• Consolidation strategy — Since all shards share the same database schema, you only need to create the target schema once. If you are using a MySQL-compliant database, use native tools to migrate the database schema to Aurora. If you are using a non-MySQL database, use the AWS Schema Conversion Tool to migrate the database schema to Aurora. Once the database schema has been migrated, it is best to stop writes to the database shards and use native tools or an
AWS DMS one-time data load to migrate an individual shard to Aurora. If writes to the application cannot be stopped for an extended period, you might still use AWS DMS with replication, but only after proper planning and testing.

Migration options at a glance

Table 1 — Migration options

• Amazon RDS MySQL
o Migration with downtime — Option 1: RDS snapshot migration. Option 2: Manual migration using native tools.* Option 3: Schema migration using native tools and data load using AWS DMS.
o Near-zero downtime migration — Option 1: Migration using native tools + binlog replication. Option 2: Migration using an Aurora Read Replica. Option 3: Schema migration using native tools + AWS DMS for data movement.
• MySQL on Amazon EC2 or on premises
o Migration with downtime — Option 1: Migration using native tools. Option 2: Schema migration with native tools + AWS DMS for data load.
o Near-zero downtime migration — Option 1: Migration using native tools + binlog replication. Option 2: Schema migration using native tools + AWS DMS to move data.
• Oracle/SQL Server
o Migration with downtime — Option 1: AWS Schema Conversion Tool + AWS DMS (recommended). Option 2: Manual or third-party tool for schema conversion + manual or third-party data load into the target.
o Near-zero downtime migration — Option 1: AWS Schema Conversion Tool + AWS DMS (recommended). Option 2: Manual or third-party tool for schema conversion + manual or third-party data load into the target + third-party tool for replication.
• Other non-MySQL databases
o Migration with downtime — Manual or third-party tool for schema conversion + manual or third-party data load into the target.
o Near-zero downtime migration — Manual or third-party tool for schema conversion + manual or third-party data load into the target + third-party tool for replication (GoldenGate, etc.).

*MySQL native tools: mysqldump, SELECT INTO OUTFILE, and third-party tools like mydumper/myloader.

RDS snapshot migration

To use RDS snapshot migration to
move to Aurora, your MySQL database must be running on Amazon RDS MySQL 5.6 or 5.7, and you must make an RDS snapshot of the database. This migration method does not work with on-premises databases or databases running on Amazon Elastic Compute Cloud (Amazon EC2). Also, if you are running your Amazon RDS MySQL database on a version earlier than 5.6, you need to upgrade it to 5.6 as a prerequisite.

The biggest advantage of this migration method is that it is the simplest and requires the fewest steps. In particular, it migrates all schema objects, secondary indexes, and stored procedures, along with all of the database data.

During snapshot migration without binlog replication, your source database must either be offline or in a read-only mode (so that no changes are made to the source database during migration). To estimate downtime, you can simply use an existing snapshot of your database to do a test migration. If the migration time fits within your downtime requirements, then this may be the best method for you. Note that in some cases, migration using AWS DMS or native migration tools can be faster than using snapshot migration.

If you can't tolerate extended downtime, you can achieve near-zero downtime by creating an Aurora Read Replica from a source RDS MySQL instance. This migration option is explained in the Migration using Aurora Read Replica section of this document.

You can migrate either a manual or an automated DB snapshot. The general steps you must take are as follows:
1. Determine the amount of space that is required to migrate your Amazon RDS MySQL instance to an Aurora DB cluster. For more information, see the next section.
2. Use the Amazon RDS console to create the snapshot in the Region where the Amazon RDS MySQL instance is located.
3. Use the Migrate Database feature on the console to create an Amazon Aurora DB cluster that will be populated using the DB snapshot from the original MySQL DB instance.
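As a sketch, the prerequisites just described can be encoded in a small helper. The function name and inputs are illustrative and not part of any AWS tooling; a real check would query the RDS API for the instance's engine and version.

```python
# Illustrative sketch: encode the snapshot-migration prerequisites described
# above. Snapshot migration requires Amazon RDS MySQL 5.6 or 5.7; it does not
# work for on-premises or EC2-hosted databases.

def snapshot_migration_supported(runs_on_rds, engine, version):
    """Return True if RDS snapshot migration can be used for this database."""
    if not runs_on_rds:              # on-premises or EC2-hosted: not supported
        return False
    if engine.lower() != "mysql":
        return False
    major_minor = ".".join(version.split(".")[:2])
    return major_minor in ("5.6", "5.7")

print(snapshot_migration_supported(True, "mysql", "5.6.40"))   # True
print(snapshot_migration_supported(False, "mysql", "5.7.21"))  # False
print(snapshot_migration_supported(True, "mysql", "5.5.46"))   # False: upgrade first
```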
Note: Some MyISAM tables might not convert without errors and may require manual changes. For instance, the InnoDB engine does not permit an auto-increment field to be part of a composite key. Also, spatial indexes are not currently supported.

Estimating space requirements for snapshot migration

When you migrate a snapshot of a MySQL DB instance to an Aurora DB cluster, Aurora uses an Amazon Elastic Block Store (Amazon EBS) volume to format the data from the snapshot before migrating it. There are some cases where additional space is needed to format the data for migration. The two features that can potentially cause space issues during migration are MyISAM tables and the ROW_FORMAT=COMPRESSED option. If you are not using either of these features in your source database, you can skip this section because you should not have space issues.

During migration, MyISAM tables are converted to InnoDB, and any compressed tables are uncompressed. Consequently, there must be adequate room for the additional copies of any such tables. The size of the migration volume is based on the allocated size of the source MySQL database that the snapshot was made from. Therefore, if your MyISAM or compressed tables make up a small percentage of the overall database size and there is available space in the original database, then migration should succeed without encountering any space issues. However, if the original database would not have enough room to store a copy of the converted MyISAM tables as well as another (uncompressed) copy of the compressed tables, then the migration volume will not be big enough. In this situation, you would need to modify the source Amazon RDS MySQL database to increase the database size allocation to make room for the additional copies of these tables, take a new snapshot of the database, and then migrate the new snapshot.
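The space rule above lends itself to a back-of-the-envelope check. This is an illustrative sketch only: the expansion factor for compressed tables is an assumed ratio for the example, not an AWS-published figure.

```python
# Back-of-the-envelope sketch of the space rule described above: the migration
# volume is sized from the source database's allocation, so converted copies
# of MyISAM tables and uncompressed copies of compressed tables must fit in
# the free space. expansion_factor is an assumed ratio, not an AWS figure.

def extra_space_fits(allocated_gb, used_gb, myisam_gb, compressed_gb,
                     expansion_factor=2.0):
    """True if the free space can hold the extra copies made during migration."""
    extra_needed = myisam_gb + compressed_gb * expansion_factor
    return (allocated_gb - used_gb) >= extra_needed

# A 500 GB allocation with 300 GB used, 50 GB MyISAM, 40 GB compressed:
print(extra_space_fits(500, 300, 50, 40))  # 130 GB needed, 200 GB free -> True
```

If the check fails, that corresponds to the situation above where you would increase the allocation and take a new snapshot before migrating.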
When migrating data into your DB cluster, observe the following guidelines and limitations:
• Although Amazon Aurora supports up to 128 TB of storage, the process of migrating a snapshot into an Aurora DB cluster is limited by the size of the Amazon EBS volume of the snapshot, and is therefore limited to a maximum size of 16 TB.
• Non-MyISAM tables in the source database can be up to 16 TB in size. However, due to additional space requirements during conversion, make sure that none of the MyISAM and compressed tables being migrated from your MySQL DB instance exceed 8 TB in size.

You might want to modify your database schema (convert MyISAM tables to InnoDB and remove ROW_FORMAT=COMPRESSED) prior to migrating it into Amazon Aurora. This can be helpful in the following cases:
• You want to speed up the migration process.
• You are unsure of how much space you need to provision.
• You have attempted to migrate your data and the migration has failed due to a lack of provisioned space.

Make sure that you are not making these changes in your production Amazon RDS MySQL database, but rather on a database instance that was restored from your production snapshot. For more details, see Reducing the Amount of Space Required to Migrate Data into Amazon Aurora in the Amazon Relational Database Service User Guide.

Migrating a DB snapshot using the console

You can migrate a DB snapshot of an Amazon RDS MySQL DB instance to create an Aurora DB cluster. The new DB cluster is populated with the data from the original Amazon RDS MySQL DB instance. The DB snapshot must have been made from an RDS DB instance running MySQL 5.6 or 5.7. For information about creating a DB snapshot, see Creating a DB snapshot in the Amazon RDS User Guide. If the DB snapshot is not in the Region
where you want to locate your Aurora DB cluster, use the Amazon RDS console to copy the DB snapshot to that Region. For information about copying a DB snapshot, see Copying a snapshot in the Amazon RDS User Guide.

To migrate a MySQL DB snapshot by using the console, do the following:
1. Sign in to the AWS Management Console and open the Amazon RDS console (sign-in required).
2. Choose Snapshots.
3. On the Snapshots page, choose the Amazon RDS MySQL snapshot that you want to migrate into an Aurora DB cluster.
4. Choose Migrate Database.
5. On the Migrate Database page, specify the values that match your environment and processing requirements, as shown in the following illustration. For descriptions of these options, see Migrating an RDS for MySQL snapshot to Aurora in the Amazon RDS User Guide.

Figure 2 — Snapshot migration

6. Choose Migrate to migrate your DB snapshot.

In the list of instances, choose the appropriate arrow icon to show the DB cluster details and monitor the progress of the migration. This details panel displays the cluster endpoint used to connect to the primary instance of the DB cluster. For more information on connecting to an Amazon Aurora DB cluster, see Connecting to an Amazon Aurora DB Cluster in the Amazon Relational Database Service User Guide.

Migration using Aurora Read Replica

Aurora uses the MySQL DB engine's binary log replication functionality to
create a special type of DB cluster, called an Aurora Read Replica, for a source MySQL DB instance. Updates made to the source instance are asynchronously replicated to the Aurora Read Replica.

We recommend creating an Aurora Read Replica of your source MySQL DB instance to migrate to an Aurora MySQL DB cluster with near-zero downtime. The migration process begins by creating a DB snapshot of the existing DB instance as the basis for a fresh Aurora Read Replica. After the replica is set up, replication is used to bring it up to date with respect to the source. Once the replication lag drops to zero, the replication is complete. At this point, you can promote the Aurora Read Replica into a standalone Aurora DB cluster and point your client applications to it.

Migration will take a while, roughly several hours per tebibyte (TiB) of data. Replication runs somewhat faster for InnoDB tables than it does for MyISAM tables, and also benefits from the presence of uncompressed tables. If migration speed is a factor, you can improve it by moving your MyISAM tables to InnoDB tables and uncompressing any compressed tables. For further details, refer to Migrating from a MySQL DB instance to Aurora MySQL using a Read Replica in the Amazon RDS User Guide.

To use an Aurora Read Replica to migrate from RDS MySQL, your MySQL database must be running on Amazon RDS MySQL 5.6 or 5.7. This migration method does not work with on-premises databases or databases running on Amazon Elastic Compute Cloud (Amazon EC2). Also, if you are running your Amazon RDS MySQL database on a version earlier than 5.6, you need to upgrade it to 5.6 as a prerequisite.

Create a read replica using the console

1. To migrate an existing RDS MySQL DB instance, simply select the instance in the AWS RDS Management Console (sign-in required),
choose Instance Actions, and then choose Create Aurora read replica.
2. Specify the values for the Aurora cluster. See Replication with Amazon Aurora. Monitor the progress of the migration in the console; you can also look at the sequence of events in the RDS events console.
3. After the migration is complete, wait for the replica lag to reach zero on the new Aurora Read Replica, indicating that the replica is in sync with the source.
4. Stop the flow of new transactions to the source MySQL DB instance.
5. Promote the Aurora Read Replica to a standalone DB cluster.
6. To see if the process is complete, you can check Recent events for the new Aurora cluster.

Now you can point your application to use the Aurora cluster's reader and writer endpoints.

Migrating the database schema

RDS DB snapshot migration migrates both the full schema and the data to the new Aurora instance. However, if your source database location or application uptime requirements do not allow the use of RDS snapshot migration, then you first need to migrate the database schema from the source database to the target database before you can move the actual data.

A database schema is a skeleton structure that represents the logical view of the entire database, and typically includes the following:
• Database storage objects — tables, columns, constraints, indexes, sequences, user-defined types, and data types
• Database code objects — functions, procedures, packages, triggers, views, materialized views, events, SQL scalar functions, SQL inline functions, SQL table functions, attributes, variables, constants, table types, public types,
private types, cursors, exceptions, parameters, and other objects

In most situations, the database schema remains relatively static, and therefore you don't need downtime during the database schema migration step. The schema from your source database can be extracted while your source database is up and running, without affecting performance. If your application or developers do make frequent changes to the database schema, make sure that these changes are either paused while the migration is in process or are accounted for during the schema migration process.

Depending on the type of your source database, you can use the techniques discussed in the next sections to migrate the database schema. As a prerequisite to schema migration, you must have a target Aurora database created and available.

Homogeneous schema migration

If your source database is MySQL 5.6 compliant and is running on Amazon RDS, Amazon EC2, or outside AWS, you can use native MySQL tools to export and import the schema.

• Exporting the database schema — You can use the mysqldump client utility to export the database schema. To run this utility, you need to connect to your source database and redirect the output of the mysqldump command to a flat file. The --no-data option ensures that only the database schema is exported, without any actual table data. For the complete mysqldump command reference, see mysqldump — A Database Backup Program.

mysqldump -u source_db_username -p --no-data --routines --triggers --databases source_db_name > DBSchema.sql

• Importing the database schema into Aurora — To import the schema to your Aurora instance, connect to your Aurora database from a MySQL command line client (or a corresponding Windows client) and direct the contents of the export file into MySQL.

mysql -h aurora-cluster-endpoint -u username -p < DBSchema.sql
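When you have several schemas to move (for example, the functional partitions discussed earlier), the export/import pair above can be assembled programmatically. This is a hedged sketch: the host, user, and schema names are placeholders, not values from this paper.

```python
# Illustrative sketch: build the mysqldump/mysql command lines shown above
# for each schema to migrate. Host, user, and schema names are placeholders;
# the command lists could be run with subprocess once credentials are set up.

def export_schema_command(schema, host, user):
    """mysqldump invocation that exports one schema's structure only."""
    return ["mysqldump", "-h", host, "-u", user, "-p",
            "--no-data", "--routines", "--triggers",
            "--databases", schema]

def import_schema_command(cluster_endpoint, user):
    """mysql invocation that replays a dump against the Aurora endpoint."""
    return ["mysql", "-h", cluster_endpoint, "-u", user, "-p"]

for schema in ("sales", "inventory"):          # hypothetical schema names
    print(" ".join(export_schema_command(schema, "source-host", "admin")),
          "> %s.sql" % schema)
```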
Note the following:
• If your source database contains stored procedures, triggers, or views, you need to remove the DEFINER syntax from your dump file. A simple Perl command to do that is given below. Doing this creates all triggers, views, and stored procedures with the currently connected user as the DEFINER. Be sure to evaluate any security implications this might have.

perl -pe 's/\sDEFINER=`[^`]+`@`[^`]+`//' < DBSchema.sql > DBSchemaWithoutDEFINER.sql

• Amazon Aurora supports InnoDB tables only. If you have MyISAM tables in your source database, Aurora automatically changes the engine to InnoDB when the CREATE TABLE command is run.
• Amazon Aurora does not support compressed tables (that is, tables created with ROW_FORMAT=COMPRESSED). If you have compressed tables in your source database, Aurora automatically changes ROW_FORMAT to COMPACT when the CREATE TABLE command is run.

Once you have successfully imported the schema into Amazon Aurora from your MySQL 5.6 compliant source database, the next step is to copy the actual data from the source to the target. For more information, see the Introduction and general approach to AWS DMS section later in this paper.

Heterogeneous schema migration

If your source database isn't MySQL compatible, you must convert your schema to a format compatible with Amazon Aurora. Schema conversion from one database engine to another is a nontrivial task and may involve rewriting certain parts of your database and application code. You have two main options for converting and migrating your schema to Amazon Aurora:

• AWS Schema Conversion Tool — The AWS Schema Conversion Tool makes heterogeneous database migrations easy by automatically converting the source database schema and a majority of the custom code, including views, stored
procedures, and functions, to a format compatible with the target database. Any code that cannot be automatically converted is clearly marked so that it can be manually converted. You can use this tool to convert source databases running on either Oracle or Microsoft SQL Server to an Amazon Aurora MySQL or PostgreSQL target database in either Amazon RDS or Amazon EC2. Using the AWS Schema Conversion Tool to convert your Oracle, SQL Server, or PostgreSQL schema to an Aurora-compatible format is the preferred method.
• Manual schema migration and third-party tools — If your source database is not Oracle, SQL Server, or PostgreSQL, you can either manually migrate your source database schema to Aurora or use third-party tools to migrate the schema to a format that is compatible with MySQL 5.6. Manual schema migration can be a fairly involved process, depending on the size and complexity of your source schema. In most cases, however, manual schema conversion is worth the effort, given the cost savings, performance benefits, and other improvements that come with Amazon Aurora.

Schema migration using the AWS Schema Conversion Tool

The AWS Schema Conversion Tool provides a project-based user interface to automatically convert the database schema of your source database into a format that is compatible with Amazon Aurora. It is highly recommended that you use the AWS Schema Conversion Tool to evaluate the database migration effort and for a pilot migration before the actual production migration.

The following description walks you through the high-level steps of using the AWS Schema Conversion Tool. For detailed instructions, see the AWS Schema Conversion Tool User Guide.

1. First, install the tool. The AWS Schema Conversion Tool is available for Microsoft Windows, macOS X, Ubuntu Linux, and
Fedora Linux. Detailed download and installation instructions can be found in the installation and update section of the user guide. Where you install the AWS Schema Conversion Tool is important: the tool needs to connect to both the source and target databases directly in order to convert and apply the schema. Make sure that the desktop where you install the AWS Schema Conversion Tool has network connectivity to the source and target databases.
2. Install JDBC drivers. The AWS Schema Conversion Tool uses JDBC drivers to connect to the source and target databases. In order to use this tool, you must download these JDBC drivers to your local desktop. Instructions for driver download can be found at Installing the required database drivers in the AWS Schema Conversion Tool User Guide. Also check the AWS forum for the AWS Schema Conversion Tool for instructions on setting up JDBC drivers for different database engines.
3. Create a target database. Create an Amazon Aurora target database. For instructions on creating an Amazon Aurora database, see Creating an Amazon Aurora DB Cluster in the Amazon RDS User Guide.
4. Open the AWS Schema Conversion Tool and start the New Project Wizard.

Figure 3 — Create a new AWS Schema Conversion Tool project

5. Configure the source database and test connectivity between the AWS Schema Conversion Tool and the source database. Your source database must be reachable from your desktop for this to work, so make sure that you have the appropriate network and firewall settings in place.

Figure 4 — Create New Database Migration Project wizard

6. In the next screen, select the
schema of your source database that you want to convert to Amazon Aurora.

Figure 5 — Select Schema step of the migration wizard

7. Run the database migration assessment report. This report provides important information regarding the conversion of the schema from your source database to your target Amazon Aurora instance. It summarizes all of the schema conversion tasks and details the action items for parts of the schema that cannot be automatically converted to Aurora. The report also includes estimates of the amount of effort that it will take to write the equivalent code in your target database for the parts that could not be automatically converted.
8. Choose Next to configure the target database. You can view this migration report again later.

Figure 6 — Migration report

9. Configure the target Amazon Aurora database and test connectivity between the AWS Schema Conversion Tool and the target database. Your target database must be reachable from your desktop for this to work, so make sure that you have appropriate network and firewall settings in place.
10. Choose Finish to go to the project window.
11. Once you are at the project window, you have already established a connection to the source and target databases and are ready to evaluate the detailed assessment report and migrate the schema.
12. In the left panel, which displays the schema from your source database, choose a schema object to create an assessment report for. Right-click the object and choose Create Report.

Figure 7 — Create migration report

The Summary tab displays the summary information from the database
migration assessment report. It shows items that were automatically converted and items that could not be automatically converted.

For schema items that could not be automatically converted to the target database engine, the summary includes an estimate of the effort that it would take to create a schema that is equivalent to your source database in your target DB instance. The report categorizes the estimated time to convert these schema items as follows:
• Simple — Actions that can be completed in less than one hour.
• Medium — Actions that are more complex and can be completed in one to four hours.
• Significant — Actions that are very complex and will take more than four hours to complete.

Figure 8 — Migration report

Important: If you are evaluating the effort required for your database migration project, this assessment report is an important artifact to consider. Study the assessment report in detail to determine what code changes are required in the database schema and what impact the changes might have on your application functionality and design.

13. The next step is to convert the schema. The converted schema is not immediately applied to the target database. Instead, it is stored locally until you explicitly apply the converted schema to the target database. To convert the schema from your source database, choose a schema object to convert in the left panel of your project. Right-click the object and choose Convert schema, as shown in the following illustration.

Figure 9 — Convert schema

This action adds the converted schema to the right panel of the project window and shows the objects that were automatically converted by the AWS Schema Conversion Tool.
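The effort bands in the assessment report lend themselves to a tiny helper when rolling per-item estimates into a project total. This is an illustrative sketch only, not part of the AWS Schema Conversion Tool.

```python
# Illustrative sketch encoding the report's effort bands described above:
# Simple (< 1 hour), Medium (1-4 hours), Significant (> 4 hours).

def effort_category(hours):
    if hours < 1:
        return "Simple"
    if hours <= 4:
        return "Medium"
    return "Significant"

print([effort_category(h) for h in (0.5, 3, 8)])
```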
You can respond to the action items in the assessment report in different ways:

• Add the equivalent schema manually — You can write the portion of the schema that can be automatically converted to your target DB instance by choosing Apply to database in the right panel of your project. The schema that is written to your target DB instance won't contain the items that couldn't be automatically converted; those items are listed in your database migration assessment report. After applying the schema to your target DB instance, you can then manually create the schema in your target DB instance for the items that could not be automatically converted. In some cases, you may not be able to create an equivalent schema in your target DB instance, and you might need to redesign a portion of your application and database to use the functionality that is available from the DB engine of your target DB instance. In other cases, you can simply ignore the schema that can't be automatically converted.

Caution: If you manually create the schema in your target DB instance, do not choose Apply to database until after you have saved a copy of any manual work that you have done. Applying the schema from your project to your target DB instance overwrites schema of the same name in the target DB instance, and you lose any updates that you added manually.

• Modify your source database schema and refresh the schema in your project — For some items, you might be best served by modifying the database schema in your source database to a schema that is compatible with your application architecture and that can also be automatically converted to the DB engine of your target DB instance. After updating the schema in
your source database and verifying that the updates are compatible with your application, choose Refresh from Database in the left panel of your project to update the schema from your source database. You can then convert your updated schema and generate the database migration assessment report again. The action item for your updated schema no longer appears.

14. When you are ready to apply your converted schema to your target Aurora instance, choose the schema element in the right panel of your project. Right-click the schema element and choose Apply to database, as shown in the following figure.

Figure 10 — Apply schema to database

Note: The first time that you apply your converted schema to your target DB instance, the AWS Schema Conversion Tool adds an additional schema (AWS_ORACLE_EXT or AWS_SQLSERVER_EXT) to your target DB instance. This schema implements system functions of the source database that are required when writing your converted schema to your target DB instance. Do not modify this schema, or you might encounter unexpected results in the converted schema that is written to your target DB instance. When your schema is fully migrated to your target DB instance and you no longer need the AWS Schema Conversion Tool, you can delete the AWS_ORACLE_EXT or AWS_SQLSERVER_EXT schema.

The AWS Schema Conversion Tool is an easy-to-use addition to your migration toolkit. For additional best practices related to the AWS Schema Conversion Tool, see the Best practices for the AWS SCT topic in the AWS Schema Conversion Tool User Guide.

Migrating data

After the database schema has been copied from the source database to the target Aurora database, the next step is to migrate the actual data from source to target. While data migration can be accomplished using different tools, we
recommend moving data using the AWS Database Migration Service (AWS DMS), as it provides both the simplicity and the features needed for the task at hand.

Introduction and general approach to AWS DMS

The AWS Database Migration Service (AWS DMS) makes it easy for customers to migrate production databases to AWS with minimal downtime. You can keep your applications running while you are migrating your database. In addition, the AWS Database Migration Service ensures that data changes to the source database that occur during and after the migration are continuously replicated to the target. Migration tasks can be set up in minutes in the AWS Management Console.

The AWS Database Migration Service can migrate your data to and from widely used database platforms, such as Oracle, SQL Server, MySQL, PostgreSQL, Amazon Aurora, MariaDB, and Amazon Redshift. The service supports homogenous migrations, such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or SQL Server to MySQL. You can perform one-time migrations, or you can maintain continuous replication between databases without having to install or configure any complex software.

AWS DMS works with databases that are on premises, running on Amazon EC2, or running on Amazon RDS. However, AWS DMS does not work in situations where both the source database and the target database are on premises; one endpoint must be in AWS. AWS DMS supports specific versions of Oracle, SQL Server, Amazon Aurora, MySQL, and PostgreSQL. For currently supported versions, see Sources for data migration. This whitepaper, however, focuses on Amazon Aurora as a migration target.

Migration methods

AWS DMS provides three methods for migrating data:

• Migrate existing data —
This method creates the tables in the target database, automatically defines the metadata that is required at the target, and populates the tables with data from the source database (also referred to as a "full load"). The data from the tables is loaded in parallel for improved efficiency. Tables are only created in the case of homogenous migrations, and secondary indexes aren't created automatically by AWS DMS. Read further for details.
• Migrate existing data and replicate ongoing changes — This method does a full load, as described above, and in addition captures any ongoing changes being made to the source database during the full load and stores them on the replication instance. Once the full load is complete, the stored changes are applied to the destination database until it has been brought up to date with the source database. Additionally, any ongoing changes being made to the source database continue to be replicated to the destination database to keep the two in sync. This migration method is very useful when you want to perform a database migration with very little downtime.
• Replicate data changes only — This method just reads changes from the recovery log file of the source database and applies these changes to the target database on an ongoing basis. If the target database is unavailable, these changes are buffered on the replication instance until the target becomes available.

When AWS DMS is performing a full load migration, the processing puts a load on the tables in the source database, which could affect the performance of applications that are hitting this database at the same time. If this is an issue and you cannot shut down your applications during the migration, you can consider the following approaches:
o Running the migration at a time when the
application load on the database is at its lowest point

o Creating a read replica of your source database and then performing the AWS DMS migration from the read replica

Migration procedure

The general outline for using AWS DMS is as follows:

1. Create a target database.
2. Copy the schema.
3. Create an AWS DMS replication instance.
4. Define the database source and target endpoints.
5. Create and run a migration task.

Create target database

Create your target Amazon Aurora database cluster using the procedure outlined in Creating an Amazon Aurora DB Cluster. You should create the target database in the Region and with an instance type that matches your business requirements. Also, to improve the performance of the migration, verify that your target database does not have Multi-AZ deployment enabled; you can enable it once the load has finished.

Copy schema

Additionally, you should create the schema in this target database. AWS DMS supports basic schema migration, including the creation of tables and primary keys. However, AWS DMS doesn't automatically create secondary indexes, foreign keys, stored procedures, user accounts, and so on in the target database. For full schema migration details, see the Migrating the Database Schema section.

Create an AWS DMS replication instance

In order to use AWS DMS, you must create an AWS DMS replication instance, which runs in your VPC. This instance reads the data from the source database, performs the specified table mappings, and writes the data to the target database. In general, using a larger replication instance size speeds up the database migration (although the migration can also be gated by other factors, such as the capacity of the source and target databases, connection latency, and so on). Also, your replication instance
can be stopped once your database migration is complete.

Figure 11 — AWS Database Migration Service

AWS DMS currently supports burstable, compute optimized, and memory optimized instance classes for replication instances. The burstable instance classes are low-cost standard instances designed to provide a baseline level of CPU performance with the ability to burst above the baseline. They are suitable for developing, configuring, and testing your database migration process, as well as for periodic data migration tasks that can benefit from the CPU burst capability. The compute optimized instance classes are designed to deliver the highest level of processor performance and achieve significantly higher packet per second (PPS) performance, lower network jitter, and lower network latency. You should use this instance class if you are performing large heterogeneous migrations and want to minimize the migration time. The memory optimized instance classes are designed for migrations or replications of high-throughput transaction systems, which can consume large amounts of CPU and memory.

AWS DMS storage is primarily consumed by log files and cached transactions. Normally, doing a full load does not require a significant amount of instance storage on your AWS DMS replication instance. However, if you are doing replication along with your full load, then the changes to the source database are stored on the AWS DMS replication instance while the full load is taking place. If you are migrating a very large source database that is also receiving a lot of updates while the migration is in progress, then a significant amount of instance storage could be consumed. The instances come with 50 GB of instance storage, but this can be scaled up as appropriate. Normally, this amount of storage should be more than
adequate for most migration scenarios. However, it's always a good idea to pay attention to storage-related metrics, and to scale up your storage if you find you are consuming more than the default allocation. Also, in some extreme cases, where very large databases with very high transaction rates are being migrated with replication enabled, it is possible that the AWS DMS replication may not be able to catch up in time. If you encounter this situation, it may be necessary to stop the changes to the source database for some number of minutes in order for the replication to catch up before you repoint your application to the target Aurora database.

Figure 12 — Create replication instance page in the AWS DMS console

Define database source and target endpoints

A database endpoint is used by the replication instance to connect to a database. To perform a database migration, you must create both a source database endpoint and a target database endpoint. The specified database endpoints can be on premises, running on Amazon EC2, or running on Amazon RDS, but the source and target cannot both be on premises. We highly recommend that you test your database endpoint connection after you define it. The same page used to create a database endpoint can also be used to test it, as explained later in this paper.

Note: If you have foreign key constraints in your source schema, when creating your target endpoint you need to enter the following for Extra connection attributes in the Advanced section:

initstmt=SET FOREIGN_KEY_CHECKS=0

This disables the foreign key checks while the target
tables are being loaded. This in turn prevents the load from being interrupted by failed foreign key checks on partially loaded tables.

Figure 13 — Create database endpoint page in the AWS DMS console

Create and run a migration task

Now that you have created and tested your source database endpoint and your target database endpoint, you can create a task to do the data migration. When you create a task, you specify the replication instance that you have created, the database migration method type (discussed earlier), the source database endpoint, and the target database endpoint for your Amazon Aurora database cluster.

Also, under Task Settings, if you have already created the full schema in the target database, you should change the Target table preparation mode to Do nothing rather than using the default value of Drop tables on target. The latter can cause you to lose aspects of your schema definition, such as foreign key constraints, when it drops and recreates tables.

When creating a task, you can create table mappings that specify the source schema along with the corresponding tables to be migrated to the target endpoint. The default mapping method migrates all source tables to target tables of the same name, if they exist. Otherwise, it creates the source table(s) on the target (depending on your task settings). Additionally, you can create custom mappings (using a JSON file) if you want to migrate only certain tables, or if you want to have more control over the field and table mapping process. You can also choose to migrate only one schema or all schemas from your
source endpoint.

Figure 14 — Create task page in the AWS DMS console

You can use the AWS Management Console to monitor the progress of your AWS Database Migration Service (AWS DMS) tasks, as well as the resources and network connectivity used. The AWS DMS console shows basic statistics for each task, including the task status, percent complete, elapsed time, and table statistics, as the following image shows. Additionally, you can select a task and display performance metrics for that task, including throughput, records per second migrated, disk and memory use, and latency.

Figure 15 — Task status in the AWS DMS console

Testing and cutover

Once the schema and data have been successfully migrated from the source database to Amazon Aurora, you are ready to perform end-to-end testing of your migration process. The testing approach should be refined after each test migration, and the final migration plan should include a test plan that ensures adequate testing of the migrated database.

Migration testing

Table 2 — Migration testing

Basic acceptance tests: These pre-cutover tests should be automatically run upon completion of the data
migration process. Their primary purpose is to verify whether the data migration was successful. Following are some common outputs from these tests:

• Total number of items processed
• Total number of items imported
• Total number of items skipped
• Total number of warnings
• Total number of errors

If any of these totals reported by the tests deviates from the expected values, the migration was not successful, and the issues need to be resolved before moving to the next step in the process or the next round of testing.

Functional tests: These post-cutover tests exercise the functionality of the application(s) using Aurora for data storage. They include a combination of automated and manual tests. The primary purpose of the functional tests is to identify problems in the application caused by the migration of the data to Aurora.

Nonfunctional tests: These post-cutover tests assess the nonfunctional characteristics of the application, such as performance under varying levels of load.

User acceptance tests: These post-cutover tests should be run by the end users of the application once the final data migration and cutover is complete. Their purpose is for the end users to decide if the application is sufficiently usable to meet its primary function in the organization.

Cutover

Once you have completed the final migration and testing, it is time to point your application to the Amazon Aurora database. This phase of migration is known as cutover. If the planning and testing phases have been run properly, cutover should not lead to unexpected issues.

Pre-cutover actions

• Choose a cutover window — Identify a block of time when you can accomplish cutover to the new database with minimum disruption to the business. Normally, you would select a low-activity period
for the database (typically nights and/or weekends).

• Make sure changes are caught up — If a near-zero downtime migration approach was used to replicate database changes from the source to the target database, make sure that all database changes are caught up and your target database is not significantly lagging behind the source database.

• Prepare scripts to make the application configuration changes — In order to accomplish the cutover, you need to modify database connection details in your application configuration files. Large and complex applications may require updates to connection details in multiple places. Make sure you have the necessary scripts ready to update the connection configuration quickly and reliably.

• Stop the application — Stop the application processes on the source database and put the source database in read-only mode so that no further writes can be made to it. If the source database changes aren't fully caught up with the target database, wait for some time while these changes are fully propagated to the target database.

• Run pre-cutover tests — Run automated pre-cutover tests to make sure that the data migration was successful.

Cutover

• Run cutover — If the pre-cutover checks were completed successfully, you can now point your application to Amazon Aurora. Run the scripts created in the pre-cutover phase to change the application configuration to point to the new Aurora database.

• Start your application — At this point, you may start your application. If you have the ability to stop users from accessing the application while it is running, exercise that option until you have run your post-cutover checks.

Post-cutover checks

• Run post-cutover tests — Run predefined automated or manual test cases to make sure your
application works as expected with the new database. It's a good strategy to start by testing the read-only functionality of the database before running tests that write to it.

• Enable user access and closely monitor — If your test cases were run successfully, you may give users access to the application to complete the migration process. Both the application and the database should be closely monitored at this time.

Conclusion

Amazon Aurora is a high-performance, highly available, enterprise-grade database built for the cloud. Leveraging Amazon Aurora can result in better performance and greater availability than other open-source databases, and lower costs than most commercial-grade databases. This paper proposes strategies for identifying the best method to migrate databases to Amazon Aurora, and details the procedures for planning and completing those migrations. In particular, AWS Database Migration Service (AWS DMS) and the AWS Schema Conversion Tool are the recommended tools for heterogeneous migration scenarios. These powerful tools can greatly reduce the cost and complexity of database migrations.

Contributors

Contributors to this document include:

• Puneet Agarwal, Solutions Architect, Amazon Web Services
• Chetan Nandikanti, Database Specialist Solutions Architect, Amazon Web Services
• Scott Williams, Solutions Architect, Amazon Web Services
• Jonathan Doe, Solutions Architect, Amazon Web Services

Further reading

For additional information, see:

• Amazon Aurora Product Details
• Amazon Aurora FAQs
• AWS Database Migration Service
• AWS Database Migration Service FAQs

Document history

Date           Description
July 28, 2021  Reviewed for technical accuracy
June 10, 2016  First publication
Migrating Your Existing Applications to the AWS Cloud

A Phase-driven Approach to Cloud Migration

Jinesh Varia
jvaria@amazon.com
October 2010

This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Abstract

With Amazon Web Services (AWS), you can provision compute power, storage, and other resources, gaining access to a suite of elastic IT infrastructure services as your business demands them. With minimal cost and effort, you can move your application to the AWS cloud and thereby reduce capital expenses, minimize support and administrative costs, and retain the performance, security, and reliability your business demands.

This paper helps you build a migration strategy for your company. It discusses the steps, techniques, and methodologies for moving your existing enterprise applications to the AWS cloud. To get the most from this paper, you should have a basic understanding of the different products and features of Amazon Web Services.

There are several strategies for migrating applications to new environments. In this paper, we share several such strategies that help enterprise companies take advantage of the cloud, and we discuss a phase-driven, step-by-step strategy for migrating applications to the cloud. More and more enterprises are moving applications to the cloud to modernize their current IT asset base or to prepare for future needs. They are taking the plunge, picking a few mission-critical applications to move to the cloud, and quickly realizing that there are other applications that are also a good
fit for the cloud.

To illustrate the step-by-step strategy, we provide three scenarios, listed in the following table. Each scenario discusses the motivation for the migration, describes the before and after application architecture, details the migration process, and summarizes the technical benefits of migration:

Scenario: Company A
Solution: Web application
Use case: Marketing and collaboration website
Motivation for migration: Scalability + elasticity
Additional benefits: Auto Scaling, proactive event-based scaling
Services used: EC2, S3, EBS, SimpleDB, AS, ELB, CW, RDS

Scenario: Company B
Solution: Batch processing pipeline
Use case: Digital asset management solution
Motivation for migration: Faster time to market
Additional benefits: Automation and improved development productivity
Services used: EC2, EBS, S3, SQS

Scenario: Company C
Solution: Backend processing workflow
Use case: Claims processing system
Motivation for migration: Lower TCO
Additional benefits: Redundancy, business continuity, and overflow protection
Services used: EC2, S3, EBS, AS, SQS, IE

Introduction

Developers and architects looking to build new applications in the cloud can simply design the components, processes, and workflow for their solution, employ the APIs of the cloud of their choice, and leverage the latest cloud-based best practices [1] for design, development, testing, and deployment. In choosing to deploy their solutions in a cloud-based infrastructure like Amazon Web Services (AWS), they can take immediate advantage of instant scalability and elasticity, isolated processes, reduced operational effort, on-demand provisioning, and automation.

At the same time, many businesses are looking for better ways to migrate their existing applications to a cloud-based infrastructure so that they, too, can enjoy the same advantages seen with greenfield application development. One of the key differentiators of AWS's infrastructure services is its flexibility. It gives businesses the
freedom of choice to choose the programming models, languages, operating systems, and databases they are already using or familiar with. As a result, many organizations are moving existing applications to the cloud today.

It is true that some applications ("IT assets") currently deployed in company data centers or co-located facilities might not make technical or business sense to move to the cloud, or at least not yet. Those assets can continue to stay in place. However, we strongly believe that there are several assets within an organization that can be moved to the cloud today with minimal effort.

This paper will help you build an enterprise application migration strategy for your organization. The step-by-step, phase-driven approach discussed in the paper will help you identify ideal projects for migration, build the necessary support within the organization, and migrate applications with confidence. Many organizations are taking an incremental approach to cloud migration.

It is very important to understand that with any migration, whether related to the cloud or not, there are one-time costs involved, as well as resistance to change among staff members (cultural and socio-political impedance). While these costs and factors are outside the scope of this technical paper, you are advised to take them into consideration. Begin by building organizational support through evangelizing and training. Focus on long-term ROI as well as the tangible and intangible factors of moving to the cloud, and be aware of the latest developments in the cloud so that you can take full advantage of its benefits. There is no doubt that deploying your applications in the AWS cloud can lower your infrastructure costs, increase business agility, and remove the undifferentiated "heavy lifting" within the enterprise.

A successful migration largely depends on three things: the complexity of the application architecture; how loosely coupled your application is; and how much effort you are willing to put into migration.
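As a rough illustration of how these three factors interact, the following sketch ranks candidate applications by estimated migration effort. The 1-to-5 scales, the equal weighting, and the application names are all hypothetical assumptions for illustration, not part of this paper's methodology:

```python
# Illustrative only: rank applications by rough migration effort using the
# three factors named above. Scales (1 = low, 5 = high) and the simple
# additive weighting are assumptions made for this sketch.

def migration_effort(complexity: int, coupling: int, available_effort: int) -> int:
    """Higher score = harder migration. Complexity and coupling raise the
    score; the effort you are willing to invest offsets it."""
    return complexity + coupling - available_effort

# Hypothetical inventory of candidate applications.
apps = {
    "log-processing": {"complexity": 1, "coupling": 1, "available_effort": 3},
    "erp-system": {"complexity": 5, "coupling": 5, "available_effort": 2},
    "marketing-site": {"complexity": 1, "coupling": 1, "available_effort": 4},
}

# Migrate the easiest candidates first (lowest effort score).
ranked = sorted(apps, key=lambda name: migration_effort(**apps[name]))
print(ranked)  # lowest-effort candidates first
```

A real assessment would of course replace these scores with the output of the classification exercise described in the Cloud Assessment phase below.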
We have noticed that when customers follow the step-by-step approach discussed in this paper and invest time and resources in building proof-of-concept projects, they clearly see the tremendous potential of AWS and are able to leverage its strengths very quickly.

[1] Architecting for the Cloud: Best Practices whitepaper: http://media.amazonwebservices.com/AWS_Cloud_Best_Practices.pdf

A Phased Strategy for Migration: Step-by-Step Guide

Figure 1: The Phase-Driven Approach to Cloud Migration

Phase: Cloud Assessment
 Financial assessment (TCO calculation)
 Security and compliance assessment
 Technical assessment (classify application types)
 Identify the tools that can be reused and the tools that need to be built
 Migrate licensed products
 Create a plan and measure success
Benefits: Business case for migration (lower TCO, faster time to market, higher flexibility & agility, scalability + elasticity); identify gaps between your current traditional legacy architecture and next-generation cloud architecture

Phase: Proof of Concept
 Get your feet wet with AWS
 Build a pilot and validate the technology
 Test existing software in the cloud
Benefits: Build confidence with various AWS services; mitigate risk by validating critical pieces of your proposed architecture

Phase: Moving Your Data
 Understand the different storage options in the AWS cloud
 Migrate file servers to Amazon S3
 Migrate commercial RDBMS to EC2 + EBS
 Migrate MySQL to Amazon RDS
Benefits: Redundancy; durable storage; elastic, scalable storage; automated management; backup

Phase: Moving Your Apps
 Forklift migration strategy
 Hybrid migration strategy
 Build "cloud-aware" layers of code as needed
 Create AMIs for each component
Benefits: Future-proof, scaled-out, service-oriented elastic architecture
Phase: Leveraging the Cloud
 Leverage other AWS services
 Automate elasticity and the SDLC
 Harden security
 Create a dashboard to manage AWS resources
 Leverage multiple Availability Zones
Benefits: Reduction in IT CapEx; flexibility and agility; automation and improved productivity; higher availability (HA)

Phase: Optimization
 Optimize usage based on demand
 Improve efficiency
 Implement advanced monitoring and telemetry
 Re-engineer your application
 Decompose your relational databases
Benefits: Increased utilization and transformational impact on OpEx; better visibility through advanced monitoring and telemetry

The order of the phases is not important. For example, several companies prefer to skip Phase 1 (Assessment) and dive right into Phase 2 (Proof of Concept), or to perform application migration (Phase 4) before they migrate all their data (Phase 3).

Phase 1: Cloud Assessment Phase

This phase will help you build a business case for moving to the cloud.

Financial Assessment

Weighing the financial considerations of owning and operating a data center or co-located facilities versus employing a cloud-based infrastructure requires detailed and careful analysis. In practice, it is not as simple as measuring potential hardware expense alongside utility pricing for compute and storage resources. Indeed, businesses must take a multitude of options into consideration in order to make a valid comparison between the two alternatives. Amazon has published a whitepaper, The Economics of the AWS Cloud [2], to help you gather the necessary data for an appropriate comparison. This basic TCO methodology and the accompanying Amazon EC2 Cost Calculator use industry data, AWS customer research, and user-defined inputs to compare the annual fully burdened cost of owning, operating, and
maintaining IT infrastructure with the pay-for-use costs of Amazon EC2. Note that this analysis compares only the direct costs of the IT infrastructure and ignores the many indirect economic benefits of cloud computing, including high availability, reliability, scalability, flexibility, reduced time to market, and many other cloud-oriented benefits. Decision makers are encouraged to conduct a separate analysis to quantify the economic value of these features.

Pricing Model                 One-time/Upfront (AWS / Colo / On-Site)   Monthly (AWS / Colo / On-Site)
Server Hardware               0 / $$$ / $$                              $$ / 0 / 0
Network Hardware              0 / $$ / $$                               0 / 0 / 0
Hardware Maintenance          0 / $$ / $$                               0 / 0 / $
Software OS                   0 / $$ / $$                               $ / 0 / 0
Power and Cooling             0 / 0 / $$                                0 / $$ / $
Data Center/Colocated Space   0 / $$ / $$                               0 / $ / 0
Administration                0 / $$ / $$                               $ / $$ / $$$
Storage                       0 / $$ / $$                               $ / 0 / 0
Bandwidth                     0 / $$ / $                                $$ / $ / $
Resource Management Software  0 / 0 / 0                                 $ / $ / $
24x7 Support                  0 / 0 / 0                                 $ / $ / $

Table 1: Cloud TCO calculation example (some assumptions are made)

The AWS Economics Center provides all the necessary tools you need to assess your current IT infrastructure. After you have performed a high-level financial assessment, you can estimate your monthly costs by entering realistic usage numbers into the AWS Simple Monthly Calculator. Project those costs over a period of 1, 3, and 5 years, and you will notice significant savings.

[2] http://media.amazonwebservices.com/The_Economics_of_the_AWS_Cloud_vs_Owned_IT_Infrastructure.pdf

Security and Compliance Assessment

If your organization has specific IT security policies and compliance requirements, we recommend that you involve your security advisers and auditors early in the process. At this stage, you can ask the following questions:

What is my overall risk tolerance?
Are there various classifications of my data that result in a higher or lower tolerance to exposure?

What are my main concerns around the confidentiality, integrity, availability, and durability of my data?

What are my regulatory or contractual obligations to store data in specific jurisdictions?

What are my security threats? What is the likelihood of those threats materializing into actual attacks?

Am I concerned about intellectual property protection and legal issues related to my application and data?

What are my options if I decide that I need to retrieve all of my data back from the cloud?

Are there internal organizational issues to address to increase our comfort level with using shared infrastructure services?

Data security can be a daunting issue if not properly understood and analyzed. Hence, it is important that you understand your risks and threats (and the likelihood of those threats) and then, based on the sensitivity of your data, classify the data assets into different categories (discussed in the next section). This will help you identify which datasets (or databases) to move to the cloud and which ones to keep in-house.

It is also important to understand these basics regarding AWS security:

You own the data, not AWS.

You choose which geographic location to store the data in. It doesn't move unless you decide to move it.

You can download or delete your data whenever you like.

You should consider the sensitivity of your data and decide if and how you will encrypt your data while it is in transit and while it is at rest.

You can set highly granular permissions to manage the access of a user within your organization to specific service operations, data, and resources in the cloud, for greater security control.

For more up-to-date information about certifications and best practices, please visit the AWS Security Center.

Technical and Functional Assessment

A technical assessment is required to understand which applications are more suited to the cloud, architecturally and strategically. At some
point, enterprises determine which applications to move into the cloud first, which applications to move later, and which applications should remain in-house. In this stage of the phase, enterprise architects should ask the following questions:

Which business applications should move to the cloud first?

Does the cloud provide all of the infrastructure building blocks we require?

Can we reuse our existing resource management and configuration tools?

How can we get rid of support contracts for hardware, software, and networking?

Create a Dependency Tree and a Classification Chart

Perform a thorough examination of the logical constructs of your enterprise applications and start classifying your applications based on their dependencies, risks, and security and compliance requirements. Identify the applications and their dependencies on other components and services. Create a dependency tree that highlights all the different parts of your applications, and identify their upward and downstream dependencies on other applications. Create a spreadsheet that lists all your applications and dependencies, or simply whiteboard a dependency tree that shows the different levels of interconnection between your components. This diagram should be an accurate snapshot of your enterprise application assets. It may look something like the diagram below. It could include all your ERP systems, HR services, payroll, batch processing systems, backend billing systems, customer-facing web applications, internal corporate IT applications, CRM systems, and so on, as well as lower-level shared services such as LDAP servers.

Figure 2: Example of a whiteboard diagram of all the IT assets and their dependencies (dependency tree)

Identifying the Right "Candidate" for the Cloud

After you
have created a dependency tree and have classified your enterprise IT assets, examine the upward and downward dependencies of each application so you can determine which of them to move to the cloud quickly. For a web-based application or Software as a Service (SaaS) application, the dependency tree will consist of the logical components (features) of the website, such as the database, search and indexer, login and authentication service, billing or payments, and so on. For a backend processing pipeline, there will be different interconnected processes, such as workflow systems, logging and reporting systems, and ERP or CRM systems.

In most cases, the best candidates for the cloud are the services or components that have minimal upward and downward dependencies. To begin, look for systems that have fewer dependencies on other components. Some examples are backup systems; batch processing applications; log processing systems; development, testing, and build systems; web-front (marketing) applications; queuing systems; content management systems; or training and pre-sales demo systems.

At this stage, you will have clear visibility into your IT assets, and you might be able to classify your applications into different categories:

 Applications with Top Secret, Secret, or Public data sets
 Applications with low, medium, and high compliance requirements
 Applications that are internal only, partner only, or customer facing
 Applications with low, medium, and high coupling
 Applications with strict or relaxed licensing

…and so on.

To identify which are good candidates for the cloud, search for applications with underutilized assets; applications that have an immediate business need to scale and are running out of capacity; applications that have architectural flexibility;
applications that utilize traditional tape drives to backup data; applications that require global scale (for example customerfacing marketing and advertising apps); or applications that are used by partners Deprioritize applications that require specialized hardware to function (for example mainframe or specialized encryption hardware) Figure 3: Identify the right candidate for the cloud Once you have the list of ideal candidates prioritize your list of applications so that it helps you :  maximize the exposure in all aspects of the cloud (compute storage network database)  build support and awareness within your organization and creates highest impact and visibility among the key stakeholders Questions to ask at this stage:  Are you able to map the current architecture of the candidate application to cloud architecture? If not how much effort would refactoring require?  Can your application be packaged into a virtual machine ( VM) instance and run on cloud infrastructure or does it need specialized hardware and/or special access to hardware that the AWS cloud cannot provide?  Is your company licensed to move your thirdparty software used in the candidate application into the cloud?  How much effort (in terms of building new or modifying existing tools) is required to move the application?  Which component must be local ( onpremise) and which can move to the cloud?  What are the latency and bandwidth requirements?  Does the cloud support the identity and authentication mechanism you require ? 
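The dependency counting described above is easy to prototype. The sketch below uses an invented application inventory (all names and dependency lists are hypothetical) to pick out the components with no upstream or downstream dependencies, which are the typical first movers:

```python
from collections import defaultdict

# Hypothetical inventory: application -> applications it depends on (downstream).
deps = {
    "web-frontend": ["auth-service", "billing", "search"],
    "auth-service": ["ldap"],
    "billing": ["erp"],
    "search": [],
    "log-processing": [],
    "batch-backup": [],
    "erp": ["ldap"],
    "ldap": [],
}

def dependency_counts(deps):
    """Map each app to a (downstream count, upstream count) pair."""
    upstream = defaultdict(int)  # how many apps depend on this one
    for targets in deps.values():
        for t in targets:
            upstream[t] += 1
    return {app: (len(targets), upstream[app]) for app, targets in deps.items()}

def cloud_candidates(deps):
    """Apps nothing depends on, and that depend on nothing, migrate first."""
    return sorted(app for app, (down, up) in dependency_counts(deps).items()
                  if down == 0 and up == 0)

print(cloud_candidates(deps))  # ['batch-backup', 'log-processing']
```

In a real assessment, the same counts would come out of the spreadsheet or whiteboard inventory rather than a hard-coded dictionary, and you would weight them with the compliance, licensing, and coupling categories above.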
Identify the Tools That You Can Reuse

It is important to research and analyze your existing IT assets. Identify the tools that you can reuse in the cloud without any modification, and estimate how much effort (in terms of new development and deployment effort) will be required to add "AWS support" to them. You might be able to reuse most of your system tools and/or add AWS support very easily. All AWS services expose standard SOAP and REST web service APIs and provide multiple libraries and SDKs in the programming language of your choice. There are some commercial tools that you won't be able to use in the cloud at this time due to licensing issues, so for those you will need to find or build replacements:

1. Resource Management Tools: In the cloud, you deal with abstract resources (AMIs, Amazon EC2 instances, Amazon S3 buckets, Amazon EBS volumes, and so on). You are likely to need tools to manage these resources. For basic management, see the AWS Management Console.
2. Resource Configuration Tools: The AWS cloud is conducive to automation, and as such we suggest you consider using tools to help automate the configuration process. Take a look at open source tools like Chef, Puppet, and CFEngine.
3. System Management Tools: After you deploy your services, you might need to modify your existing system management tools (NOC) so that you can effectively monitor, deploy, and "watch" the applications in the cloud. To manage Amazon Virtual Private Cloud resources, you can apply the same security policies and use the same system management tools you are using now to manage your own local resources.
4. Integration Tools: You will need to identify the framework/library/SDK that works best for you to integrate with AWS services. There are libraries and SDKs available for all platforms and programming languages (see the Resources section). Also take a look at development productivity tools such as the AWS Toolkit for Eclipse.

Migrating Licensed Products

It is important to iron out licensing concerns during the assessment phase. Amazon is working with many third-party ISVs to smooth the migration path as much as possible. Amazon has teamed with a variety of vendors and currently offers three different options to choose from:

1. Bring Your Own License (BYOL). Amazon has teamed with a variety of ISVs who have permitted the use of their products on Amazon EC2. This EC2-based license is the most friction-free path to move your software into the cloud. You purchase the license the traditional way, or use your existing license, and apply it to the product, which is available as a preconfigured Amazon Machine Image. For example, Oracle, Sybase, Adobe, MySQL, JBoss, IBM, and Microsoft have made their software and support available in the AWS cloud using the BYOL option. If you don't find the software that you are looking for in the AWS cloud, talk to your software vendor about making their software available in the cloud. The AWS Business Development Team is available to help you with this discussion.
2. Use a Utility Pricing Model with a Support Package. Amazon has teamed with elite ISVs who offer their software as a Paid AMI (using the Amazon DevPay service). This is a pay-as-you-go license in which you do not incur any upfront licensing cost and only pay for the resources you consume. ISVs charge a small premium over and above the standard Amazon EC2 cost, which gives you the opportunity to run any number of instances in the cloud for a duration you control. For example, Red Hat, Novell, IBM, and Wowza offer pay-as-you-go licenses. ISVs typically also offer a support package that goes with the pay-as-you-go license.
3. Use an ISV SaaS-based Cloud Service. Some ISVs offer their software as a service and charge a monthly subscription fee. They offer standard APIs and web-based interfaces and are fairly quick to implement. The offering is either fully or partially managed inside the AWS cloud. This option is often the easiest and fastest way to migrate your existing on-premises installation to a hosted on-demand offering by the same vendor, or to an equivalent offering by a different vendor. In most cases, ISVs or independent third-party enterprise cloud services integrators offer migration tools that can help you move your data. For example, Mathematica, Quantivo, Pervasive, and Cast Iron provide SaaS offerings based on AWS.

If your enterprise applications are tightly coupled with complex third-party enterprise software systems that have not yet been migrated to the AWS cloud, or if you have already invested in multi-year on-premises licensing contracts with the vendor, you should consider refactoring your enterprise applications into functional building blocks: run what you can in the cloud, and connect to the licensed software systems that still run on-premises. Amazon VPC may be used to create an IPsec VPN tunnel that allows resources running on AWS to communicate securely with resources at the other end of the tunnel in your existing data center. The whitepaper3 discusses several ways in which you can extend your existing IT infrastructure to the cloud.

Define Your Success Criteria

While you are at this stage, it is important to ask this question: "How will I measure success?" The following table lists a few examples. Your specific success criteria will be customized to your organization's goals and culture.

Success Criteria | Old | New | Examples of How to Measure
Cost (CapEx) | $1M | $300K | 60% savings in CapEx over the next 2 years
Cost (OpEx) | $20K/year | $10K/year | Server-to-staff ratio improved by 2x; 4 maintenance contracts discontinued
Hardware procurement efficiency | 10 machines in 7 months | 100 machines in 5 minutes | 3000% faster to get resources
Time to market | 9 months | 1 month | 80% faster in launching new products
Reliability | Unknown | Redundant | 40% reduction in hardware-related support calls
Availability | Unknown | At least 99.99% uptime | 20% reduction in operational support calls
Flexibility | Fixed stack | Any stack | Not locked in to a particular hardware vendor, platform, or technology
New opportunities | 10-project backlog | 0 backlog | 5 new projects identified; 25 new projects initiated in 3 months

Table 2: Examples of how to measure success criteria

Create a Roadmap and a Plan

By documenting the dependencies, creating a dependency tree, and identifying the tools that you need to build or customize, you will get an idea of how to prioritize applications for migration, estimate the effort required to migrate them, understand the one-time costs involved, and assess the timeline, so that you can construct a cloud migration roadmap. Most companies skip this step and quickly move to the next phase of building a pilot project, as it gives a clearer understanding of the technologies and tools.

3 http://media.amazonwebservices.com/Extend_your_IT_infrastructure_with_Amazon_VPC.pdf

Phase 2: Proof of Concept Phase

Once you have identified the right candidate for the cloud and estimated the effort required to migrate it, it's time to test the waters with a small proof of concept. The goal of this phase is to learn AWS and ensure that your assumptions regarding suitability for migration to the cloud are accurate. In this phase, you can deploy a small greenfield application and, in the process, begin to get your feet wet with the AWS cloud.

Get Your Feet Wet with AWS

Get familiar with the AWS APIs, AWS tools, SDKs, Firefox plugins, and, most importantly, the AWS Management Console and command line tools (see the Getting Started Center for more details). At a minimum, at the end of this stage you should know how to use the AWS Management Console (or the Firefox plugins) and command line tools to do the following:

• Learn Amazon S3: create a bucket; upload an object; create a signed URL; create a CloudFront distribution
• Learn Amazon EC2: launch an AMI; customize an AMI; bundle an AMI; launch a customized AMI; learn about security groups; test different Availability Zones; create an EBS volume; attach a volume; create a snapshot of a volume; restore a snapshot; create an Elastic IP; map DNS to an Elastic IP
• Learn Amazon RDS: launch a DB instance; take a backup; scale up vertically; scale out horizontally (more storage); set up Multi-AZ

Figure 4: Minimum items to learn about services in a Proof of Concept

Learn About the AWS Security Features

Be aware of the AWS security features available today, and use them at every stage of the migration process as you see fit. During the Proof of Concept phase, learn about the various security features provided by AWS: AWS credentials, Multi-Factor Authentication (MFA), authentication, and authorization. At a minimum, learn about the AWS Identity and Access Management (IAM) features that allow you to create multiple users and manage the permissions for each of these users within your AWS account. Figure 5 highlights the topics you need to learn regarding IAM:

• Learn IAM: create groups; create a policy; learn about resources and conditions; create users; generate new access credentials; assign users to groups

Figure 5: Minimum items to learn about security in a Proof of Concept Phase

At this stage, you want to start thinking about whether you want to create different IAM groups for different business functions within your organization or create groups for different IT roles (admins, developers, testers, etc.), and whether you want to create users to match your organization chart or create users for each application.

Build a Proof of Concept

Build a proof of concept that represents a microcosm of your application, or that tests critical functionality of your application in the cloud environment. Start with a small database (or dataset); don't be afraid of launching and terminating instances or stress-testing the system. For example, if you are thinking of migrating a web application, you can start by deploying miniature models of all the pieces of your architecture (database, web application, load balancer) with minimal data. In the process, learn how to build a web server AMI, how to set the security group so that only the web server can talk to the app server, how to store all the static files on Amazon S3 and mount an EBS volume on the Amazon EC2 instance, how to manage and monitor your application using Amazon CloudWatch, and how to use IAM to restrict access to only the services and resources required for your application to function.

Most of our enterprise customers dive into this stage and reap tremendous value from building pilots. We have noticed that customers learn a lot about the capabilities and applicability of AWS during the process and quickly broaden the set of applications that could be migrated into the AWS cloud. In this stage, you can build support in your organization, validate the technology, test legacy software in the cloud, perform necessary benchmarks, and set expectations.

At the end of this phase, you should be able to answer the following questions:

• Did I learn the basic AWS terminology (instances, AMIs, volumes, snapshots, distributions, domains, and so on)?
• Did I learn about the many different aspects of the AWS cloud (compute, storage, network, database, security) by building this proof of concept?
• Will this proof of concept support and create awareness of the power of the AWS cloud within the organization?
• What is the best way to capture all the lessons that I learned? A whitepaper or an internal presentation?
• How much effort is required to roll this proof of concept out to production?
• Which applications can I immediately move after this proof of concept?

After this stage, you will have far better visibility into what is available with AWS today. You will get hands-on experience with the new environment, which will give you more insight into the hurdles that need to be overcome in order to move ahead.

Phase 3: Data Migration Phase

In this phase, enterprise architects should ask the following questions:

• What are the different storage options available in the cloud today?
• What are the different RDBMS (commercial and open source) options available in the cloud today?
• What is my data segmentation strategy? What tradeoffs do I have to make?
• How much effort (in terms of new development and one-off scripts) is required to migrate all my data to the cloud?
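One concrete input to the last question is raw transfer time over the network, which often determines whether online transfer is feasible at all. A back-of-the-envelope sketch (the 100 TB data set and 100 Mbps sustained link are hypothetical figures, using decimal units):

```python
def transfer_days(terabytes: float, megabits_per_second: float) -> float:
    """Days needed to move the data set over a sustained network link."""
    bits = terabytes * 1e12 * 8                 # decimal terabytes -> bits
    seconds = bits / (megabits_per_second * 1e6)
    return seconds / 86400

# Moving a hypothetical 100 TB data set over a sustained 100 Mbps link
# takes on the order of three months:
print(round(transfer_days(100, 100), 1))  # 92.6
```

A result measured in months is a strong signal to consider an offline alternative, such as the AWS Import/Export service discussed later in this phase.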
When choosing the appropriate storage option, one size does not fit all. There are several dimensions you might have to consider so that your application can scale to your needs with minimal effort. You have to make the right tradeoffs among various dimensions: cost, durability, queryability, availability, latency, performance (response time), relational support (SQL joins), size of objects stored (large or small), accessibility, read-heavy vs. write-heavy workloads, update frequency, cacheability, consistency (strict or eventual), and transience (short-lived data). Weigh your tradeoffs carefully and decide which ones are right for your application. The beauty of AWS is that it doesn't restrict you to one service or another; you can use any number of the AWS storage options in any combination.

Understand the Various Storage Options Available in the AWS Cloud

The following table will help explain which storage option to use when:

Amazon S3 + CloudFront
  Ideal for: storing large, write-once read-many objects; static content distribution
  Ideal examples: media files (audio, video, images); backups, archives, versioning
  Not recommended for: querying; searching
  Not recommended examples: database; file systems

Amazon EC2 Ephemeral Store
  Ideal for: storing non-persistent, transient updates
  Ideal examples: config data; scratch files; TempDB
  Not recommended for: storing database logs or backups; customer data
  Not recommended examples: shared drives; sensitive data

Amazon EBS
  Ideal for: off-instance persistent storage for any kind of data
  Ideal examples: clusters; boot data; logs or data files of a commercial RDBMS like Oracle or DB2
  Not recommended for: static data; web-facing content; key-value data
  Not recommended examples: content distribution

Amazon SimpleDB
  Ideal for: queryable, lightweight attribute data
  Ideal examples: querying, indexing, mapping, tagging; clickstream logs; metadata; configuration; catalogs
  Not recommended for: complex joins or transactions; BLOBs; relational, typed data
  Not recommended examples: OLTP; DW cube rollups

Amazon RDS
  Ideal for: storing and querying structured, relational, and referential data
  Ideal examples: web apps; complex transactional systems; inventory management and order fulfillment systems
  Not recommended for: clusters
  Not recommended examples: clustered DB; simple lookups

Table 3: Data storage options in the AWS cloud

Migrate Your Fileserver Systems, Backups, and Tape Drives to Amazon S3

If your existing infrastructure consists of file servers, log servers, Storage Area Networks (SANs), and systems that back up data to tape drives on a periodic basis, you should consider storing this data in Amazon S3. Existing applications can utilize Amazon S3 without major changes. If your system generates data every day, the recommended migration flow is to point your "pipe" at Amazon S3 so that new data is stored in the cloud right away, and then use an independent batch process to move old data to Amazon S3. Most enterprises take advantage of their existing encryption tools (256-bit AES for data at rest, 128-bit SSL for data in transit) to encrypt the data before storing it on Amazon S3.

Understand the Various RDBMS Options in the AWS Cloud

For your relational database, you have multiple options to choose from:

Amazon RDS
  RDBMS: MySQL
  Support provided by: AWS Premium Support
  Managed by AWS: Yes
  Pricing model: Pay-as-you-go
  Scalability: Scale compute and storage with a single API call or a click

RDBMS AMIs
  RDBMS: Oracle 11g, Microsoft SQL Server, MySQL, IBM DB2, Sybase, Informix, PostgreSQL
  Support provided by: AWS and vendor
  Managed by AWS: No
  Pricing model: BYOL, pay-as-you-go
  Scalability: Manual

Third-Party Database Services
  RDBMS: Vertica, AsterData
  Support provided by: Vendor
  Managed by AWS: No
  Pricing model: Various
  Scalability: Vendor responsibility

Table 4: Relational database options

Migrate Your MySQL Databases to Amazon RDS

If you use a standard deployment of MySQL, moving to Amazon RDS will be a trivial task. Using all the standard tools, you will be able to move and restore all the data into an Amazon RDS DB instance. After you move the data to a DB instance, make sure you are monitoring all the metrics you need. It is also highly recommended that you set your retention period so that AWS can automatically create periodic backups.

Migrate Your Commercial Databases to Amazon EC2 Using Relational DB AMIs

If you require transactional semantics (commit, rollback) and are running an OLTP system, simply use the traditional migration tools available with Oracle, MS SQL Server, DB2, and Informix. All of the major databases are available as Amazon Machine Images and are supported in the cloud by their vendors. Migrating your data from an on-premises installation to an Amazon EC2 cloud instance is no different than migrating data from one machine to another.

Move Large Amounts of Data Using the AWS Import/Export Service

When transferring data across the Internet becomes cost- or time-prohibitive, you may want to consider the AWS Import/Export service. With AWS Import/Export, you load your data on USB 2.0 or eSATA storage devices and ship them via a carrier to AWS. AWS then uploads the data into your designated buckets in Amazon S3. For example, if you have multiple terabytes of log files that need to be analyzed, you can copy the files to a supported device and ship the device to AWS. AWS will restore all the log files to your designated bucket in Amazon S3, where they can then be fetched by your cloud-hosted business intelligence application or by Amazon Elastic MapReduce for analysis.

If you have a 100 TB Oracle database with 50 GB of changes per day in your data center that you would like to migrate to AWS, you might consider taking a full backup of the database to disk, then copying the backup to USB 2.0 devices and shipping them. Until you are ready to switch the production DBMS to AWS, you take differential backups. The full backup is restored by the import service, and your incremental backups are transferred over the Internet and applied to the DB instance in the cloud. Once the last incremental backup is applied, you can begin using the new database server.

Phase 4: Application Migration Phase

In this phase, you should ask the following question:

• How can I move part of or an entire system to the cloud without disrupting or interrupting my current business?

In this phase, you will learn about two main application migration strategies: the forklift migration strategy and the hybrid migration strategy. We will discuss the pros and cons of each strategy to help you decide on the best approach for your application. Based on the classification of application types (in Phase 1), you can decide which strategy to apply to which type of application.

Forklift Migration Strategy

Stateless applications, tightly coupled applications, or self-contained applications might be better served by the forklift approach: rather than moving pieces of the system over time, "pick it all up at once" and move it to the cloud. Self-contained web applications that can be treated as single components, and backup/archival systems, are examples of systems that can be moved into the cloud using this strategy. Components of a 3-tier web application that require extremely low-latency connectivity between them to function, and cannot afford Internet latency, might also be best suited to this approach, provided the entire application, including the web app and database servers, is moved to the cloud all at once.

With this approach, you might be able to migrate an existing application into the cloud with few code changes. Most of the changes will involve copying your application binaries, creating and configuring Amazon Machine Images, setting up security groups and Elastic IP addresses, switching DNS, and moving to Amazon RDS databases. This is where AWS's raw infrastructure services (Amazon EC2, Amazon S3, Amazon RDS, and Amazon VPC) really shine.

With this strategy, applications might not be able to take immediate advantage of the elasticity and scalability of the cloud because, after all, you are swapping real physical servers for EC2 instances, or replacing file servers with Amazon S3 buckets or Amazon EBS volumes; logical components matter less than the physical assets. However, it's important to realize that by using this approach for certain application types, you are shrinking your IT infrastructure footprint (one less thing to worry about) and offloading the undifferentiated heavy lifting to AWS. This enables you to focus your resources on the things that actually differentiate you from your competitors. You will revisit this application in the next stages and will be able to realize even more benefits of the cloud. As with any other migration, having a backup strategy, having a rollback strategy, and performing end-to-end testing are a must when using this strategy.

Hybrid Migration Strategy

A hybrid migration consists of taking some parts of an application and moving them to the cloud while leaving other parts of the application in place. The hybrid migration strategy can be a low-risk approach to migrating applications to the cloud: rather than moving the entire application at once, parts can be moved and optimized one at a time. This reduces the risk of unexpected behavior after migration and is ideal for large systems that involve several applications.

For example, if you have a website and several batch processing components (such as indexing and search) that power the website, you can consider using this approach. The batch processing system can be migrated to the cloud first, while the website continues to run in the traditional data center. The data ingestion layer can be made "cloud-aware" so that the data is fed directly to an Amazon EC2 instance of the batch processing system before every job run. After proper testing of the batch processing system, you can decide to move the website application.

Onsite or co-lo: A thin layer of "cloud-aware" code that uses the web services interface of the migrated component (stubs/skeletons)
AWS cloud: The service: a business component or feature that consists of app code, business logic, data access layer, and database
Notes: Keep the DB close to the component using it. If all the components use the same database, it might be advisable to move all the components and the database together, all at once. If the components use different database instances/schemas and are mutually exclusive but are hosted on the same physical box, it might be advisable to separate the logical databases and move each along with its component during migration. A proxy may or may not be used.

Table 5: Hybrid (low-risk) migration strategy of components into the cloud

With this strategy, you might have to design, architect, and build temporary "wrappers" to enable communication between the parts residing in your traditional data center and those that will reside in the cloud. These wrappers can be made "cloud-aware" and asynchronous (using Amazon SQS queues wherever applicable) so that they are resilient to changing Internet latencies. This strategy can also be used to integrate cloud applications with cloud-incompatible legacy applications (mainframe applications, or applications that require specialized hardware to function). In this case, you can write "cloud-aware" web service wrappers around the legacy application and expose them as web services. Since web ports are accessible from outside enterprise networks, the cloud applications can make direct calls to these web services, which in turn interact with the mainframe applications. You can also set up a VPN tunnel between the legacy applications that reside on-premises and the cloud applications.

Configuring and Creating Your AMIs

In many cases, it is best to begin with AMIs provided either by AWS or by a trusted solution provider as the basis of the AMIs you intend to use going forward. Depending on your specific requirements, you may also need to leverage AMIs provided by other ISVs. In any case, the process of configuring and creating your AMIs is the same. It is recommended that you create an AMI for each component designed to run in a separate Amazon EC2 instance. It is also recommended that you create an automated or semi-automated deployment process to reduce the time and effort of re-bundling AMIs when new code is released. This is also a good time to begin thinking about a configuration management process that ensures your servers running in the cloud are included in your process.

Phase 5: Leverage the Cloud

After you have migrated your application to the cloud, run the necessary tests, and confirmed that everything is working as expected, it is advisable to invest time and resources in determining how to leverage additional benefits of the cloud. Questions to ask at this stage are:

• Now that I have migrated existing applications, what else can I do to leverage the elasticity and scalability benefits that the cloud promises? What do I need to do differently to implement elasticity in my applications?
• How can I take advantage of some of the other advanced AWS features and services?
• How can I automate processes so that it is easier to maintain and manage my applications in the cloud?
• What do I need to do specifically in my cloud application so that it can restore itself to its original state in the event of a failure (hardware or software)?
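As one example of the automation the last two questions call for, a backup scheduler needs only a small piece of pure logic to decide which volumes are overdue for a snapshot; the snapshot itself would then be taken through the EC2 API (the CreateSnapshot action). This is a minimal sketch, and the volume IDs and timestamps are invented:

```python
from datetime import datetime, timedelta

# Hypothetical inventory: volume id -> time of its most recent snapshot (None = never).
last_snapshot = {
    "vol-01": datetime(2010, 10, 1, 2, 0),
    "vol-02": None,
    "vol-03": datetime(2010, 10, 4, 2, 0),
}

def volumes_due(last_snapshot, now, interval_hours=24):
    """Return volume ids whose last snapshot is missing or older than the interval."""
    cutoff = now - timedelta(hours=interval_hours)
    return sorted(vol for vol, taken in last_snapshot.items()
                  if taken is None or taken < cutoff)

# Volumes the scheduler would snapshot on the morning of Oct 5
# (vol-03 was snapshotted within the last 24 hours, so it is skipped):
print(volumes_due(last_snapshot, now=datetime(2010, 10, 5, 2, 0)))
# ['vol-01', 'vol-02']
```

Run on a schedule, a script like this, combined with a matching pruning rule for old snapshots, covers the "restore itself after failure" question for EBS-backed data.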
Leverage other AWS services Auto Scaling Servic e Auto Scaling enables you to set conditions for scaling up or down your Amazon EC2 usage When one of the conditions is met Auto Scaling automatically applies the action you’ve defined Examine each cluster of similar instances in your Amazon EC2 fleet and see whether you can create an Auto Scaling group and identify the criteria of scaling automatically (CPU utilization network I/O etc ) This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 18 of 23 At minimum you can create an Auto Scaling group and set a condition that your Auto Scaling group will always contain a fixed number of instances Auto Scaling evaluates the health of each Amazon EC2 instance in your Auto Scaling group and automatically replaces unhealthy Amazon EC2 instances to keep the size of your Auto Scaling group constant Amazon CloudFront With just a few clicks or command line calls you can create an Amazon CloudFront distribution for any of your Amazon S3 buckets This will edge cache your static objects closer to the customer and reduce latency This is often so easy to do that customers don’t wait until this phase to take advantage of CloudFront; they do so much earlier in the plan The Migrating to CloudFront4 whitepaper gives you more information Amazon Elastic MapReduce For analyzing any large dataset or processing large amount of media one can take advantage of Amazon Elastic MapReduce Most enterprises have metrics data to process or logs to analyze or large data sets to index With Amazon Elastic MapReduce you can create repeatable job flows that can launch a Hadoop cluster process the job expand or shrink a running cluster and terminate the cluster all in few clicks Automate Elasticity Elasticity is a fundamental property of the cloud To understand elasticity and learn 
about how you can build architectures that support rapid scale-up and scale-down, refer to the Architecting for the Cloud whitepaper5. Elasticity can be implemented at different levels of the application architecture. Implementing elasticity might require refactoring and decomposing your application into components so that it is more scalable. The more you can automate elasticity in your application, the easier it will be to scale your application horizontally, and therefore the greater the benefit of running it in the cloud. In this phase, you should try to automate elasticity.

After you have moved your application to AWS and ensured that it works, there are three ways to automate elasticity at the stack level. This enables you to quickly start any number of application instances when you need them, and terminate them when you don't, while maintaining the application upgrade process. Choose the approach that best fits your software development lifecycle:

1. Maintain an inventory of AMIs. It's easiest and fastest to set up an inventory of AMIs for all the different configurations, but difficult to maintain, as newer versions of applications might mandate updating the AMIs.
2. Maintain a Golden AMI and fetch binaries on boot. This is a slightly more relaxed approach, where a base AMI ("Golden Image") is used across all application types across the organization, while the rest of the stack is fetched and configured during boot time.
3. Maintain a JustEnoughOS AMI and a library of recipes or install scripts. This approach is probably the easiest to maintain, especially when you have a huge variety of application stacks to deploy. In this approach, you leverage the programmable infrastructure and maintain a library of install scripts that are executed on demand.

4 http://developer.amazonwebservices.com/connect/entry!default.jspa?categoryID=267&externalID=2456
5 http://media.amazonwebservices.com/AWS_Cloud_Best_Practices.pdf

Figure 6: Three ways to automate elasticity while maintaining the upgrade process

Harden Security
The cloud does not absolve you of your responsibility for securing your applications. At every stage of your migration process, you should implement the right security best practices. Some are listed here:
• Safeguard your AWS credentials
  o Rotate your AWS access credentials in a timely manner, and rotate them immediately if you suspect a breach
  o Leverage multi-factor authentication
• Restrict users to AWS resources
  o Create different users and groups with different access privileges (policies) using AWS Identity and Access Management (IAM) features to restrict and allow access to specific AWS resources
  o Continuously revisit and monitor IAM user policies
  o Leverage the power of security groups in Amazon EC2
• Protect your data by encrypting it at rest (AES) and in transit (SSL)
  o Automate security policies
• Adopt a recovery strategy
  o Create periodic Amazon EBS snapshots and Amazon RDS backups
  o Occasionally test your backups before you need them

Automate the In-cloud Software Development Lifecycle and Upgrade Process
In the AWS cloud, there is no longer any need to place purchase orders for new hardware ahead of time or to hold unused hardware captive to support your software development lifecycle. Instead, developers, system builders, and testers can request the infrastructure they need minutes before they need it, taking advantage of the vast scale and rapid response time of the cloud. With a scriptable infrastructure, you can completely automate your software development and deployment lifecycle. You could manage your development, build, testing, staging, and production
environments by creating reusable configuration tools, managing specific security groups, and launching specific AMIs for each environment.

Automating your upgrade process in the cloud is highly recommended at this stage, so that you can quickly advance to newer versions of the applications and also roll back to older versions when necessary. With the cloud, you don't have to install new versions of software on old machines; instead, you throw away old instances and relaunch fresh, pre-configured instances. If an upgrade fails, you simply throw it away and switch to new hardware at no additional cost.

Create a Dashboard of Your Elastic Datacenter to Manage AWS Resources
It should be easy and friction-free for engineering and project managers to provision and relinquish AWS cloud resources. At the same time, the management team should also have visibility into the ways in which AWS resources are being consumed. The AWS Management Console provides a view of your cloud datacenter. It also provides you with basic management and monitoring capabilities (by way of Amazon CloudWatch) for your cloud resources. The AWS Management Console is continually evolving and offers a rich user interface to manage AWS services. However, if the current view does not fit your needs, we advise you to consider using third-party tools that you are already familiar with (like CA or IBM Tivoli) or to create your own console by leveraging the Web Service APIs.

Using Web Service APIs
It's fairly straightforward to create a web client that consumes the web services API and to create custom control panels to suit your needs. For example, if you have created a presales demo application environment in the cloud for your sales staff so that they can quickly launch a preconfigured application in the cloud, you may want to create a dashboard that displays and monitors the activity of each salesperson and each customer. Manage and limit access permissions
based on the role of the salesperson, and revoke access if the employee leaves the company. There are several libraries available in our Resource Center that can help you get started with creating the dashboard that suits your specific requirements.

Create a Business Continuity Plan and Achieve High Availability (Leverage Multiple Availability Zones)
Many companies fall short in disaster recovery planning because the process is not fully automatic and because it is cost-prohibitive to maintain a separate datacenter for disaster recovery. The use of virtualization (the ability to bundle an AMI) and data snapshots makes the disaster recovery implementation in the cloud much less expensive and simpler than traditional disaster recovery solutions. You can completely automate the entire process of launching cloud resources, which can bring up an entire cloud environment within minutes. When it comes to failing over to the cloud, recovering from system failure due to employee error is the same as recovering from an earthquake. Hence, it is highly recommended that you have a business continuity plan and set your Recovery Time Objective (RTO) and Recovery Point Objective (RPO). Your business continuity plan should include:
• a data replication strategy (source, destination, frequency) for databases (Amazon EBS)
• a data backup and retention strategy (Amazon S3 and Amazon RDS)
• creating AMIs with the latest patches and code updates (Amazon EC2)
• a recovery plan to fail back to the corporate data center from the cloud post-disaster
The beauty of having a business continuity strategy implemented in the cloud is that it automatically gives you higher availability across different geographic regions and Availability Zones without any major modifications to deployment and data replication strategies. You can create a much higher availability environment by cloning the entire architecture and replicating it in a different Availability Zone, or by simply using Multi-AZ deployments (in the case of Amazon RDS).
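The RPO target described above can be enforced with a simple pre-flight check that compares the timestamp of the newest backup or snapshot against the objective. This is a minimal sketch of that check; the timestamps and the 1-hour RPO are hypothetical values, not from this paper:

```python
from datetime import datetime, timedelta

def meets_rpo(last_backup: datetime, now: datetime, rpo: timedelta) -> bool:
    """Return True if the newest backup is recent enough to satisfy the RPO."""
    return (now - last_backup) <= rpo

# Hypothetical example: a 1-hour RPO checked against EBS-snapshot timestamps.
rpo = timedelta(hours=1)
now = datetime(2010, 10, 1, 12, 0)
assert meets_rpo(datetime(2010, 10, 1, 11, 30), now, rpo)      # 30 min old: within RPO
assert not meets_rpo(datetime(2010, 10, 1, 10, 0), now, rpo)   # 2 h old: violates RPO
```

In practice you would feed this function the creation times of your most recent Amazon EBS snapshots or Amazon RDS backups and alert when the check fails.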
Phase 6: Optimization Phase
In this phase, you should focus on how you can optimize your cloud-based application in order to increase cost savings. Since you only pay for the resources you consume, you should strive to optimize your system whenever possible. In most cases, you will see immediate value in the optimizations; a small optimization might result in thousands of dollars of savings in your next monthly bill. At this stage, you can ask the following questions:
• How can I use some of the other AWS features and services in order to further reduce my cost?
• How can I improve efficiency (and reduce waste) in my deployment footprint?
• How can I instrument my applications to have more visibility into my deployed applications? How can I set metrics for measuring critical application performance?
• Do I have the necessary cloud-aware system administration tools required to manage and maintain my applications?
• How can I optimize my application and database to run in a more elastic fashion?
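The cost questions above can be made concrete with a quick back-of-the-envelope estimate of the monthly bill before and after an optimization. The sketch below uses the example m1 hourly rates quoted in this paper ($0.40/hour for m1.large, $0.10/hour for m1.small); the 720-hours-per-month figure is an approximation, not an AWS billing rule:

```python
HOURS_PER_MONTH = 720  # ~30 days; an approximation for estimation only

def monthly_cost(fleet):
    """Estimate a monthly on-demand bill.

    fleet: list of (instance_count, hourly_rate_usd) tuples.
    """
    return sum(count * rate * HOURS_PER_MONTH for count, rate in fleet)

# Example from this paper: replace one m1.large ($0.40/h)
# with two m1.small instances ($0.10/h each).
before = monthly_cost([(1, 0.40)])   # 288.0 USD/month
after = monthly_cost([(2, 0.10)])    # 144.0 USD/month
savings = before - after             # 144.0 USD/month
```

Running this kind of estimate for each cluster in your fleet is a cheap way to rank which optimizations to attempt first.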
Understanding Your Usage Patterns
With the cloud, you don't have to master the art of capacity planning, because you have the ability to create an automated elastic environment. If you can understand, monitor, examine, and observe your load patterns, you can manage this elastic environment more effectively. You can be more proactive if you understand your traffic patterns. For example, if your customer-facing website deployed on the AWS global infrastructure does not expect any traffic from a certain part of the world at a certain time of day, you can scale down your infrastructure in that AWS region for that time. The closer you can align your traffic to the cloud resources you consume, the higher the cost savings will be.

Terminate Under-Utilized Instances
Inspect the system logs and access logs periodically to understand the usage and lifecycle patterns of each Amazon EC2 instance. Terminate your idle instances. Try to see whether you can eliminate under-utilized instances to increase utilization of the overall system. For example, examine an application that is running on an m1.large instance (1 x $0.40/hour) and see whether you can scale out and distribute the load across two m1.small instances (2 x $0.10/hour) instead.

Leverage Amazon EC2 Reserved Instances
Reserved Instances give you the option to make a low one-time payment for each instance you want to reserve and, in turn, receive a significant discount on the hourly usage charge for that instance. When looking at usage patterns, try to identify instances that are running in a steady state, such as a database server or domain controller. You may want to consider investing in Amazon EC2 Reserved Instances (3-year term) for servers running at 24% or higher utilization. This can save up to 49% of the hourly rate.

Improve Efficiency
The AWS cloud provides utility-style pricing: you are billed only for the infrastructure that has been used, not for the entire infrastructure that may be in place. This adds a new dimension to cost
savings. You can make very measurable optimizations to your system and see the savings reflected in your next monthly bill. For example, if a caching layer can reduce your data requests by 80%, you realize the reward right in the next bill. Improving the performance of an application running in the cloud might also result in overall cost savings. For example, if your application is transferring a lot of data between Amazon EC2 and your private data center, it might make sense to compress the data before transmitting it over the wire. This could result in significant cost savings in both data transfer and storage. The same concept applies to storing raw data in Amazon S3.

Management and Maintenance

Advanced Monitoring and Telemetry
Implement telemetry in your cloud applications so that you have the visibility you need into your mission-critical applications or services. It is important to understand that end-user response time for your applications depends upon various factors, not just the cloud infrastructure – ISP connectivity, third-party services, browsers, and hops, just to name a few. Measuring and monitoring the performance of your cloud applications will give you the opportunity to proactively identify any performance issues, and will help you diagnose the root causes so you can take appropriate action. For example, if an end user accessing the nearest node of your globally hosted application is experiencing a lower response rate, perhaps you can try launching more web servers. You can send yourself notifications using Amazon Simple Notification Service (HTTP/Email/SQS) if the metric (of a given AWS resource or an application) approaches an undesired threshold.

Track Your AWS Usage and Logs
Monitor your AWS usage bill, Service API usage reports,
and Amazon S3 or Amazon CloudFront access logs periodically.

Maintain Security of Your Applications
Ensure that application software is consistent and always up to date, and that you are patching your operating systems and applications with the latest vendor security updates. Patch an AMI, not an instance, and redeploy often; ensure that the latest AMI is deployed across all your instances.

Re-engineer Your Application
To build a highly scalable application, some components may need to be re-engineered to run optimally in a cloud environment. Some existing enterprise applications might mandate refactoring so that they can run in an elastic fashion. Some questions that you can ask:
• Can you package and deploy your application into an AMI so it can run on an Amazon EC2 instance? Can you run multiple instances of the application on one instance if needed? Or can you run multiple instances on multiple Amazon EC2 instances?
• Is it possible to design the system such that, in the event of a failure, it is resilient enough to automatically relaunch and restart?
• Can you divide the application into components and run them on separate Amazon EC2 instances? For example, can you separate a complex web application into individual components or layers of web, app, and DB and run them on separate instances?
• Can you extract stateful components and make them stateless?
• Can you consider application partitioning (splitting the load across many smaller machines instead of fewer larger machines)?
• Is it possible to isolate the components using Amazon SQS?
• Can you decouple code from deployment and configuration?
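The application-partitioning question above (splitting the load across many smaller machines) usually comes down to a shard-selection function: a stable mapping from an item ID to one of N partitions. This is a minimal illustration of that idea; the shard names are hypothetical, and a stable digest is used instead of Python's built-in `hash()`, which varies between runs:

```python
import hashlib

def shard_for(item_id: str, shards: list) -> str:
    """Map an item ID to one of N shards using a stable hash.

    A stable digest keeps the mapping consistent across processes
    and restarts, so the same item always lands on the same shard.
    """
    digest = hashlib.md5(item_id.encode("utf-8")).hexdigest()
    return shards[int(digest, 16) % len(shards)]

shards = ["db-shard-0", "db-shard-1", "db-shard-2"]  # hypothetical shard names

# The same ID always routes to the same shard, and the result is valid.
assert shard_for("customer-1042", shards) == shard_for("customer-1042", shards)
assert shard_for("customer-1042", shards) in shards
```

Note that simple modulo sharding reshuffles most keys when the shard count changes; schemes such as consistent hashing reduce that cost, at the price of extra complexity.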
Decompose Your Relational Database
Most traditional enterprise applications typically use a relational database system. Database administrators often start with a DB schema based on instructions from developers. Enterprise developers assume unlimited scalability on a fixed infrastructure and develop the application against the schema. Developers and database architects may fail to communicate with each other about what type of data is being served, which makes it extremely difficult to scale the relational database. As a result, much time may be wasted migrating data to a "bigger box" with more storage capacity, or scaling up to get more computing horsepower. Moving to the cloud gives them the opportunity to analyze their current relational database management system and make it more scalable as part of the migration. Some techniques that might help take the load off your RDBMS:
• Move large blob objects and media files to Amazon S3 and store a pointer (S3 key) in your existing database
• Move associated metadata or catalogs to Amazon SimpleDB
• Keep only the data that is absolutely needed (joins) in the relational database
• Move all relational data into Amazon RDS so you have the flexibility of being able to scale your database compute and storage resources with an API call, only when you need it
• Offload all the read load to multiple Read Replicas (Slaves)
• Shard (or partition) the data based on item IDs or names

Implement Best Practices
Implement the various best practices highlighted in the Architecting for the Cloud whitepaper. These best practices will help you create not only a highly scalable application conducive to the cloud, but also a more secure and elastic application.

Conclusion
The AWS cloud
brings scalability, elasticity, agility, and reliability to the enterprise. To take advantage of the benefits of the AWS cloud, enterprises should adopt a phase-driven migration strategy and try to take advantage of the cloud as early as possible. Whether it is a typical 3-tier web application, a nightly batch process, or a complex backend processing workflow, most applications can be moved to the cloud. The blueprint in this paper offers a proven step-by-step approach to cloud migration. When customers follow this blueprint and focus on creating a proof of concept, they immediately see value in their proof-of-concept projects and see tremendous potential in the AWS cloud. After you move your first application to the cloud, you will get new ideas and see the value in moving more applications into the cloud.

Further Reading
1. Migration Scenario #1: Migrating web applications to the AWS cloud
2. Migration Scenario #2: Migrating batch processing applications to the AWS cloud
3. Migration Scenario #3: Migrating backend processing pipelines to the AWS cloud,General,consultant,Best Practices Modernize_Your_Microsoft_Applications_on_AWS,Modernize Your Microsoft Applications on Amazon Web Services: How to Start Your Journey. March 2016.

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does
not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Abstract
Why Modernize Applications?
Why Run Microsoft Applications on AWS?
AWS for Corporate Applications
AWS for LOB Applications and Databases
AWS for Developers
Which Microsoft Applications Can I Run on AWS?
How Do I Get Started?
Security and Access
Compute: Windows Server Running on EC2 Instances
Databases: SQL Server Running on Amazon RDS or EC2
Management Services: Amazon CloudWatch, AWS CloudTrail, Run Command
Complete the Solution with the AWS Marketplace
Licensing Considerations
Conclusion

Abstract
The cloud is now the center of most enterprise IT strategies. Many enterprises find that a well-planned "lift and shift" move to the cloud results in an immediate business payoff. This whitepaper is intended for IT pros and business decision makers in Microsoft-centric organizations who want to take a cloud-based approach to IT and must modernize existing business-critical applications built on Microsoft Windows Server and Microsoft SQL Server. This paper covers the benefits of modernizing applications on Amazon Web Services (AWS) and how to get started on the journey.

Why Modernize Applications?
For many IT organizations, application modernization is a major initiative, for a few major reasons:
• Move off legacy software. To avoid the time, cost, and performance and reliability challenges of maintaining legacy software and unsupported versions (Windows Server 2003, SQL Server 2000, and SQL Server 2005).
• DevOps initiatives. To take advantage of new DevOps and application lifecycle management methodologies. By moving to new application delivery platforms, companies can increase the speed of innovation.
• Mobility initiatives. As users move to mobile devices, the use of IT services can increase by one or more orders of magnitude. This poses scalability challenges if an application is not prepared for that kind of growth.
• New product launches. New product launches can cause rapid spikes in demand for IT. The underlying applications, including Microsoft SQL Server and Microsoft SharePoint, must be ready with the scale required to support the launch.
• Mergers and acquisitions (M&A) activity. In the case of mergers and acquisitions, complexity builds up over time. After multiple acquisitions, a company may find itself in possession of several hundred SharePoint sites, multiple Exchange instances, and countless SQL Server databases. Streamlining the management of disparate applications is often a huge undertaking.

Why Run Microsoft Applications on AWS?
In a recent survey1, International Data Corporation (IDC) reported that 50 percent of respondents were using AWS to support productivity applications like those from Microsoft. Of that number, 65 percent said they planned to increase their use of AWS, either to move existing applications or to expand applications already running on AWS. Clearly, customers are already making the move to modernize their Microsoft applications.

1 http://www.idc.com/getdoc.jsp?containerId=256654

AWS for Corporate Applications
Customers can improve their security posture, application performance, and reliability by running corporate applications built on Microsoft Windows Server in the AWS cloud. For example, customers can deploy a globally accessible SharePoint environment in any of the 33 AWS Availability Zones in a matter of hours. To reduce complexity, customers can use AWS tools that integrate with Microsoft management and access control applications like System Center and Active Directory. Customers can also use AWS CloudFormation templates to perform application deployments reliably and repeatedly.

AWS for LOB Applications and Databases
Line of business (LOB) owners are running applications in areas as diverse as oil and gas exploration, retail point of sale (POS), finance, health care, insurance, pharmaceuticals, media and entertainment, and more. To accelerate and simplify time to deployment, customers can launch preconfigured Amazon Machine Image (AMI) templates with fully compliant Microsoft Windows Server and Microsoft SQL Server licenses included.

AWS for Developers
Customers who develop on AWS have access to Microsoft development tools, including Visual Studio, PowerShell, and the .NET Developer Center. When these tools are combined with the scalability and agility of AWS CodeDeploy, AWS Elastic Beanstalk (Elastic Beanstalk), and AWS OpsWorks, customers can complete and deploy code on AWS much faster and
with lower risk.

Which Microsoft Applications Can I Run on AWS?
Customers have successfully deployed virtually every Microsoft application to the AWS cloud, including:
• Microsoft Windows Server
• Microsoft SQL Server
• Microsoft Active Directory
• Microsoft Exchange Server
• Microsoft Dynamics CRM, Dynamics AX, and Dynamics ERP
• Microsoft SharePoint Server
• Microsoft System Center
• Skype for Business (formerly Microsoft Lync)
• Microsoft Project Server
• Microsoft Visual Studio Team Foundation Server
• Microsoft BizTalk Server
• Microsoft Remote Desktop Services

How Do I Get Started?
For enterprises, the first step is to determine which of the more than 50 AWS services will be used to support their application modernization initiative. The following figure shows how the typical functions of an enterprise IT organization map to AWS offerings. This paper discusses some of the key services in this map and how they fit into a Microsoft application modernization initiative.

Figure 1: A Conceptual Map of Enterprise IT with Amazon Web Services

Security and Access
"We worked with AWS to develop a security model that allows us to be more secure in AWS than we can be even in our own data centers." — Rob Alexander, CIO, Capital One

With the increasing concern and focus on security, most customers start here by choosing services that ensure compliance and manage risk. The same security isolations found in a traditional data center are used in the AWS cloud, including physical security, separation of the network, isolation of server hardware, and isolation of storage. AWS has achieved ISO 27001 certification and has been validated as a Level 1 service provider under the Payment Card Industry (PCI) Data Security Standard (DSS). AWS undergoes annual Service Organization Control (SOC) 1 audits and has been successfully evaluated at the Moderate level for federal government systems and Department
of Defense Information Assurance Certification and Accreditation Process (DIACAP) Level 2 for Department of Defense (DoD) systems.

For many enterprises considering the right set of services for security and permissions, AWS virtual private networks, AWS Direct Connect, and AWS Directory Service are at the heart of the discussion.

Amazon Virtual Private Cloud (Amazon VPC) lets customers launch AWS resources into a virtual network that they've defined. This virtual network closely resembles a traditional network in an on-premises data center, but with the benefits of the scalable infrastructure of AWS.

AWS Direct Connect links the organization's internal network to AWS over a private 1-gigabit or 10-gigabit Ethernet fiber-optic cable. One end of the cable is connected to the data center router, the other to an AWS Direct Connect router. With this dedicated connection in place, customers can create virtual interfaces directly to the AWS cloud (for example, to Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3)) and to Amazon VPC, bypassing Internet service providers in the network path.

AWS Directory Service is a managed service that makes it easy to connect AWS services to an existing on-premises Microsoft Active Directory (through the use of AD Connector) or to set up and operate a new directory in the AWS cloud (through the use of Simple AD and AWS Directory Service for Microsoft Active Directory).

Data encryption services are provided for data in flight (through SSL) and at rest, through options for both server-side and client-side encryption. AWS Certificate Manager (ACM), AWS Key Management Service (AWS KMS), and AWS CloudHSM can be used together to ensure key and certificate management services are provided to securely generate, store, and manage cryptographic keys used for data encryption. Finally, AWS WAF provides web application firewall services to help
protect web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.

Compute: Windows Server Running on EC2 Instances
"We didn't have time to redesign applications. AWS could support our legacy 32-bit applications on Windows Server 2003, a variety of Microsoft SQL Server and Oracle databases, and a robust Citrix environment." — Jim McDonald, Lead Architect, Hess

After a security strategy is in place, it's time to look at the infrastructure that will support the applications to be modernized. Amazon EC2 is a web service that provides resizable computing capacity that is used to build and host software systems. When designing Windows applications to run on Amazon EC2, customers can plan for rapid deployment and rapid reduction of compute and storage resources, based on changing needs.

When customers run Windows Server on an EC2 instance, they don't need to provision the exact system package of hardware, virtualization software, and storage the way they do with Windows Server on-premises. Instead, they can focus on using a variety of cloud resources to improve the scalability and overall performance of their Windows applications.

After an Amazon EC2 instance running Windows Server is launched, it behaves like a traditional server running Windows Server. For example, whether Windows Server is deployed on-premises or on an Amazon EC2 instance, it can run web applications, conduct batch processing, or manage applications requiring large-scale computations. Customers can remote directly into Windows Server instances using Remote Desktop Protocol for easy management. They can run PowerShell scripts against a single Windows Server instance or against an entire fleet using the Amazon EC2 Run Command.

Applications built for Amazon EC2 use the underlying computing infrastructure on an as-needed basis. They draw on resources (such as
storage and computing) on demand in order to perform a job, and relinquish the resources when done. In addition, they often terminate themselves after the job is done. While in operation, the application scales up and down elastically based on resource requirements.

Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances in the cloud. This enables customers to achieve more fault tolerance in applications, seamlessly providing the amount of load balancing capacity required to distribute application traffic.

Auto Scaling lets customers follow the demand curve for applications very closely, reducing the need to manually provision capacity in advance. For example, customers can set a condition to add new Amazon EC2 instances to the Auto Scaling group in increments when the average utilization of the Amazon EC2 fleet is high; similarly, they can set a condition to remove instances in the same increments when CPU utilization is low.

Databases: SQL Server Running on Amazon RDS or Amazon EC2
"Amazon Relational Database Service (Amazon RDS) allows our DBA team to focus less on the day-to-day maintenance and use their time to work on enhancements. And Elastic Load Balancing has allowed us to move away from expensive and complicated load balancers and retain the required functionality." — Chad Marino, Director of Technology Services, Kaplan

Another key building block in modernization planning is the choice of database services. Customers who want to manage, scale, and tune SQL Server deployments in the cloud can use Amazon RDS or run SQL Server on Amazon EC2.

Customers who prefer to let AWS handle the day-to-day management of SQL Server databases choose Amazon RDS, because the service makes it easy to set up, operate, and scale a
relational database in the cloud. Amazon RDS automates installation, disk provisioning and management, patching, minor version upgrades, failed instance replacement, and backup and recovery of SQL Server databases. Amazon RDS also offers automated synchronous replication across multiple Availability Zones (Multi-AZ) for a highly available and scalable environment fully managed by AWS. This allows customers to focus on higher-level tasks, such as schema optimization, query tuning, and application development, and eliminate the undifferentiating work that goes into the maintenance and operation of databases. Amazon RDS for SQL Server supports Windows Authentication, making it easier for customers to access and manage Amazon RDS for SQL Server instances.

Amazon RDS for SQL Server supports Microsoft SQL Server Express, Web, Standard, and Enterprise Editions. SQL Server Express is available at no additional licensing cost and is suitable for small workloads or proof-of-concept deployments. SQL Server Web Edition is best for public and Internet-accessible web workloads. SQL Server Standard Edition is suitable for most SQL Server workloads and can be deployed in Multi-AZ mode. SQL Server Enterprise Edition is the most feature-rich edition of SQL Server and can also be deployed in Multi-AZ mode.

Management Services: Amazon CloudWatch, AWS CloudTrail, Run Command
"The way CSS automated launching instances reduced the time to launch a project by about 75 percent. What used to take four days now only takes one day. We're not rebuilding web and database servers from the ground up all the time. We can just clone and reuse images." — Nick Morgan, Enterprise Architect, Unilever

AWS provides a comprehensive set of management services for the enterprise:
• Amazon CloudWatch: Customers can use Amazon CloudWatch to monitor, in real time, AWS resources and applications running on AWS. CloudWatch alarms send
notifications or, based on rules that customers define, make changes automatically to the monitored resources.
• AWS CloudTrail: With AWS CloudTrail, customers can monitor their AWS deployments in the cloud by getting a history of AWS API calls made in their account, including API calls made through the AWS Management Console, the AWS SDKs, command line tools, and higher-level AWS services. Customers can also identify which users and accounts called AWS APIs for services that support CloudTrail, the source IP address from which the calls were made, and when the calls occurred. CloudTrail can be integrated into applications using the API to automate trail creation for the organization, check the status of trails, and control how administrators turn CloudTrail logging on and off.
• Amazon EC2 Run Command: For automating common administrative tasks like patch management or configuration updates that apply across hundreds of virtual machines, customers can use the Amazon EC2 Run Command, which provides a simple method for running PowerShell scripts. The Run Command is integrated with AWS Identity and Access Management (IAM) solutions to ensure administrators have access to updates for only those machines they own. All updates are audited through AWS CloudTrail.

AWS add-ins for Microsoft System Center extend the functionality of existing System Center implementations for use with Microsoft System Center Operations Manager and Microsoft System Center Virtual Machine Manager. After installation, customers can use the familiar System Center interface to view and manage Amazon EC2 for Microsoft Windows Server resources in the AWS cloud, as well as Windows Servers installed on-premises.

Complete the Solution with the AWS Marketplace
Customers often have a preferred ISV for specialized software solutions for enhanced security, business intelligence, storage, and more. AWS Marketplace is an online store that makes it easy for customers to discover, purchase, and deploy the software and services they need
to build solutions and run their businesses. With more than 2,600 listings across more than 35 categories, the AWS Marketplace simplifies software licensing and procurement by enabling customers to accept user agreements, choose pricing options, and automate the deployment of software and associated AWS resources with just a few clicks. AWS Marketplace also simplifies billing for customers by delivering a single invoice detailing business software and AWS resource usage on a monthly basis. The AWS Marketplace includes offerings from SAP, Tableau, NetApp, Trend Micro, F5 Networks, and many more. Customers have access to Microsoft applications such as Microsoft Windows Server, Microsoft SQL Server, and Microsoft SharePoint custom AMIs through Marketplace partners. Licensing Considerations Customers have options for using new and existing Microsoft software licenses in the AWS cloud. For new applications, customers can purchase Amazon EC2 or Amazon RDS instances with a license included. With this approach, customers get new, fully compliant Windows Server and SQL Server licenses directly from AWS. Customers can use them on a “pay as you go” basis, with no upfront costs or long-term investments. Customers can choose from AMIs with just Microsoft Windows Server, or with Windows Server and Microsoft SQL Server already installed. Client access licenses (CALs) are included. Customers who have already purchased Microsoft software have a “bring your own license” (BYOL) option, which is allowed by Microsoft under the Microsoft License Mobility policy through Software Assurance. Microsoft’s License Mobility program allows customers who already own Windows Server or Microsoft SQL Server licenses to run their deployment on Amazon EC2 and Amazon RDS. This benefit is available to Microsoft Volume Licensing (VL) customers with Windows Server and SQL Server licenses (currently including Standard and Enterprise
Editions) covered by Microsoft Software Assurance contracts. In cases where the customer’s license agreement requires control at the socket, core, or per-VM level, customers can use Amazon EC2 Dedicated Hosts, which provide the customer with dedicated hardware so that they can track license consumption and compliance and report it to Microsoft or ISVs. Conclusion This paper describes the benefits of modernizing your applications on Amazon Web Services and how you can get started on the journey. It shows how you can benefit from running corporate, line-of-business (LOB), and database applications, or developing new applications, using the AWS platform for your modernization initiative. It also recommends the AWS services with which to start the process of modernizing your applications on AWS.",General,consultant,Best Practices
Move_Amazon_RDS_MySQL_Databases_to_Amazon_VPC_using_Amazon_EC2_ClassicLink_and_Read_Replicas,"Archived Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas, July 2017. This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers © 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved. Notices This document is provided for informational purposes only. It represents AWS’s current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled
by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. Contents: Introduction; Solution Overview; ClassicLink and EC2-Classic; RDS Read Replicas; RDS Snapshots; Migration Topology; Migration Steps; Step 1: Enable ClassicLink for the Target VPC; Step 2: Set up a Proxy Server on an EC2-Classic Instance; Step 3: Use ClassicLink Between the Proxy Server and Target VPC; Step 4: Configure the DB Instance (EC2-Classic); Step 5: Create a User on the DB Instance (EC2-Classic); Step 6: Create a Temporary Read Replica (EC2-Classic); Step 7: Enable Backups on the Read Replica (EC2-Classic); Step 8: Stop Replication on the Read Replica (EC2-Classic); Step 9: Create a Snapshot from the Read Replica (EC2-Classic); Step 10: Share the Snapshot (Optional); Step 11: Restore the Snapshot in the Target VPC; Step 12: Enable Backups on the VPC RDS DB Instance; Step 13: Set up Replication Between VPC and EC2-Classic DB Instances; Step 14: Switch to the VPC RDS DB Instance; Step 15: Take a Snapshot of the VPC RDS DB Instance; Step 16: Change the VPC DB Instance to be ‘Privately’ Accessible (Optional); Step 17: Move the VPC DB Instance into Private Subnets (Optional); Alternative Approaches; AWS Database Migration Service (DMS); Changing the VPC Subnet for a DB Instance; Conclusion; Contributors; Further Reading; Appendix A: Set Up Proxy Server in Classic. Abstract Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. If your Amazon Web Services (AWS) account was created before 2013, chances are you might be running Amazon RDS MySQL in an Amazon Elastic Compute Cloud (EC2)-Classic environment and are looking to migrate Amazon RDS into an Amazon Virtual Private Cloud (VPC) environment. This whitepaper outlines the requirements and detailed steps needed to migrate Amazon RDS
MySQL databases from EC2-Classic to EC2-VPC with minimal downtime, using RDS MySQL read replicas and ClassicLink. Archived Amazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas, Page 1 Introduction There are two Amazon Elastic Compute Cloud (EC2) platforms that host Amazon Relational Database Service (RDS) database (DB) instances: EC2-VPC and EC2-Classic. On the EC2-Classic platform, your instances run in a single, flat network that you share with other customers. On the EC2-VPC platform, your instances run in a virtual private cloud (VPC) that's logically isolated to your AWS account. This logical network isolation closely resembles a traditional network you might operate in your own data center, plus it has the benefits of the AWS scalable infrastructure. If you're running RDS DB instances in an EC2-Classic environment, you might be considering migrating your databases to Amazon VPC to take advantage of its features and capabilities. However, migrating databases across environments can involve complex backup and restore operations, with longer downtimes that you might not be able to tolerate in your production environment. This whitepaper focuses on how to use RDS read replica and snapshot capabilities to migrate an RDS MySQL DB instance in EC2-Classic to a VPC over ClassicLink. By leveraging RDS MySQL replication with ClassicLink, you can migrate your databases easily and securely, with minimal downtime. Alternative methods are also discussed. Solution Overview This solution uses EC2 ClassicLink to enable an RDS DB instance in EC2-Classic (that is, outside a VPC) to communicate with a VPC. First, a read replica of the DB instance in EC2-Classic is created. Then a snapshot of the read replica (called the source DB instance) is taken and used to set up a read replica in the VPC. A ClassicLink proxy server enables communication between the source DB instance in EC2-Classic and the target read replica in the VPC. Once the target
read replica in the VPC has caught up with the source DB instance in EC2-Classic, updates against the source are stopped and the target read replica is promoted. At this point, the connection details in any application that is reading or writing to the database are updated. The source database remains fully operational during the migration, minimizing downtime to applications. Each of these components is explained in further detail as follows. ClassicLink and EC2-Classic EC2 ClassicLink allows you to connect EC2-Classic instances to a VPC within the same AWS Region. This allows you to associate VPC security groups with the EC2-Classic instances, enabling communication between EC2-Classic instances and VPC instances using private IP addresses. The association between VPC security groups and the EC2-Classic instance removes the need to use public IP addresses or Elastic IP addresses to enable communication between these platforms. ClassicLink is available to all users with accounts that support the EC2-Classic platform and can be used with any EC2-Classic instance. Using ClassicLink and private IP address space for migration ensures all communication and data migration happens within the AWS network, without requiring a public IP address for your DB instance or an Internet Gateway (IGW) to be set up for the VPC. RDS Read Replicas You can create one or more read replicas of a given source RDS MySQL DB instance and serve high-volume application read traffic from multiple copies of your data. Amazon RDS uses the MySQL engine's native asynchronous replication to update the read replica whenever there is a change to the source DB instance. The read replica operates as a DB instance that allows only read-only connections; applications can connect to a read replica just as they would to any DB instance. Amazon RDS replicates all databases in the source
DB instance. Read replicas can also be promoted so that they become standalone DB instances. RDS Snapshots The ClassicLink solution relies on Amazon RDS snapshots to initially create the target MySQL DB instance in your VPC. Amazon RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases. When you create a DB snapshot, you need to identify which DB instance you are going to back up, and then give your DB snapshot a name so you can restore from it later. Creating this DB snapshot on a single-Availability Zone (AZ) DB instance results in a brief I/O suspension that typically lasts no more than a few minutes. Multi-AZ DB instances are not affected by this I/O suspension, since the backup is taken on the standby instance. Migration Topology ClassicLink allows you to link your EC2-Classic DB instance to a VPC in your account, within the same Region. After you've linked an EC2-Classic DB instance, it can communicate with instances in your VPC using their private IP addresses. However, instances in the VPC cannot directly access the AWS services provisioned by the EC2-Classic platform using ClassicLink. So, to migrate an RDS database from EC2-Classic to VPC, you must set up a proxy server. The proxy server uses ClassicLink to link the source DB instance in EC2-Classic to the VPC. Port forwarding on the proxy server allows communication between the source DB instance in EC2-Classic and the target DB instance in the VPC. This topology is illustrated in Figure 1. Figure 1: Topology for migration in the same account If you're moving your RDS database to a different account, you will need to set up a peering connection between the local VPC and the target VPC in the remote account. This topology is illustrated in Figure 2.
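Because the ClassicLink topology keeps all replication traffic on private IP addresses, a simple pre-flight check is to confirm that the addresses you plan to route through (the proxy's private IP, peered-VPC subnets, and so on) actually fall in private address space. The sketch below is illustrative only and uses just the Python standard library; the sample addresses are examples, not real endpoints.

```python
# Sanity-check that an address is private (RFC 1918 and similar ranges),
# as the ClassicLink-based migration path requires.
# Illustrative sketch; the addresses below are examples, not real endpoints.
import ipaddress

def is_private_endpoint(ip: str) -> bool:
    """Return True if ip falls in private address space (e.g. 172.16.0.0/12)."""
    return ipaddress.ip_address(ip).is_private

# An address in a 172.16.x.0/24 subnet (like the private subnets used in
# Step 17) is private; a well-known public address is not.
print(is_private_endpoint("172.16.2.10"))  # True
print(is_private_endpoint("8.8.8.8"))      # False
```

Running this check against every hop before cutover helps confirm that no traffic will leave the AWS network or require an Internet Gateway.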
Figure 2: Topology for migration to a different account Figure 3 illustrates how the snapshot of the DB instance is used to set up a read replica in the target VPC. Figure 3: Creating a read replica snapshot and restoring in VPC A ClassicLink proxy enables communication between the source RDS DB instance in EC2-Classic and the target VPC replica, as illustrated in Figure 4. Figure 4: Setting up replication between the Classic and VPC read replica Figure 5 illustrates how updates against the source DB instance are stopped and the VPC replica is promoted to master status. Figure 5: Cutting over the application to the VPC RDS DB instance Migration Steps This section lists the steps you need to perform to migrate your RDS DB instance from EC2-Classic to VPC using ClassicLink. Step 1: Enable ClassicLink for the Target VPC In the Amazon VPC console, from the VPC Dashboard, select the VPC for which you want to enable ClassicLink, select Actions in the drop-down list, and select Enable ClassicLink. Then choose Yes, Enable, as shown below: Figure 6: Enabling ClassicLink Step 2: Set up a Proxy Server on an EC2-Classic Instance Install a proxy server on an EC2-Classic instance. The proxy server forwards traffic to and from the RDS instance in EC2-Classic. You can use an open-source package such as NGINX for port forwarding. For detailed information on setting up NGINX, see Appendix A. Set up appropriate security groups so the proxy server can communicate with the RDS instance in EC2-Classic. In the following example, the proxy server and the RDS instance in EC2-Classic are members of the same security group, which allows traffic within the security group.
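The NGINX proxy in Step 2 is doing nothing more than TCP port forwarding: it accepts connections on port 3306 and relays the bytes, unmodified, to the Classic RDS endpoint. To make the mechanism concrete, the following standard-library Python sketch implements the same relay; it is an illustration of the idea, not a replacement for the NGINX setup in Appendix A, and the backend host and port are placeholders.

```python
# Minimal TCP relay sketch -- the same job the NGINX "stream" proxy performs
# in Step 2: accept a connection and copy bytes both ways to a backend.
# Illustrative only; use the NGINX setup from Appendix A in practice.
import socket
import threading

def _pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes, then half-close dst."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def start_forwarder(backend_host: str, backend_port: int, listen_port: int = 0) -> int:
    """Listen on listen_port (0 = pick a free port) and relay every incoming
    connection to (backend_host, backend_port). Returns the bound port.
    For the migration this would be listen_port=3306 with the EC2-Classic
    RDS endpoint as the backend."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", listen_port))
    listener.listen(5)

    def accept_loop():
        while True:
            client, _ = listener.accept()
            backend = socket.create_connection((backend_host, backend_port))
            # One thread per direction: client->backend and backend->client.
            threading.Thread(target=_pipe, args=(client, backend), daemon=True).start()
            threading.Thread(target=_pipe, args=(backend, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return listener.getsockname()[1]
```

A MySQL client pointed at the forwarder's port behaves as if it were connected to the backend directly, which is exactly how the VPC DB instance reaches the Classic DB instance through the proxy's private IP in the steps that follow.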
Figure 7: Security group setup Step 3: Use ClassicLink Between the Proxy Server and Target VPC In the Amazon EC2 console, from the EC2 Instances Dashboard, select the EC2-Classic instance running the proxy server and choose ClassicLink on the Actions drop-down list to create a ClassicLink connection with the target VPC. Select the appropriate security group so that the proxy server can communicate with the RDS DB instance in your VPC. In the example in Figure 8, SG A1 is selected. Next, choose Link to VPC. Figure 8: ClassicLink connection to VPC security group Step 4: Configure the DB Instance (EC2-Classic) In the Amazon RDS console, from the RDS Dashboard, under Parameter Groups, select the parameter group associated with the RDS DB instance and use Edit Parameters to ensure the innodb_flush_log_at_trx_commit parameter is set to 1 (the default). This ensures ACID compliance; for more information, see the MySQL documentation for innodb_flush_log_at_trx_commit. This step is necessary only if the value has been changed from the default of 1. Figure 9: Parameter group values on a Classic DB instance Step 5: Create a User on the DB Instance (EC2-Classic) Connect to the RDS DB instance running in EC2-Classic via the mysql client to create a user and grant permissions to replicate data. Prompt> mysql -h classicrdsinstance.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u hhar -p MySQL [(none)]> create user replicationuser identified by 'classictoVPC123'; Query OK, 0 rows affected (0.01 sec) MySQL [(none)]> grant replication slave on *.* to replicationuser; Query OK, 0 rows affected (0.01 sec) Step 6: Create a Temporary Read Replica (EC2-Classic) Use a temporary
read replica to create a snapshot and to ensure that you have the correct information to set up replication on the new VPC DB instance. In the Amazon RDS console, from the RDS Dashboard, under Instances, select the EC2-Classic DB instance and select Create Read Replica DB Instance. Specify your replication instance information. Figure 10: Classic read replica instance properties You then need to specify the network and security properties for the replica. Figure 11: Classic read replica network and security properties Step 7: Enable Backups on the Read Replica (EC2-Classic) From the RDS Dashboard, under Instances, select the read replica in EC2-Classic and use Modify DB Instances to set the Backup Retention Period to a nonzero number of days. Setting this parameter to a positive number enables automated backups. Figure 12: Enabling backups Step 8: Stop Replication on the Read Replica (EC2-Classic) When you are ready to switch over, connect to the RDS replica in EC2-Classic via a mysql client and issue the mysql.rds_stop_replication command. Prompt> mysql -h classicrdsreadreplica1.chd3laahf8xl.us-east-1.rds.amazonaws.com -P 3306 -u hhar -p MySQL [(none)]> call mysql.rds_stop_replication; +---------------------------+ | Message | +---------------------------+ | Slave is down or disabled | +---------------------------+ 1 row in set (1.02 sec) Query OK, 0 rows affected (1.02 sec) Figure 13: Confirmation of replica status on the console Using the following show slave status command, save the replication status data to a local file. You will need it later when setting up replication on the DB instance in the VPC. Prompt> mysql -h classicrdsreadreplica1.chd3laahf8xl.us-east-1.rds.amazonaws.com -P 3306 -u hhar -p -e ""show slave status \G"" > readreplicastatus.txt Step 9: Create a Snapshot from the Read Replica (EC2-Classic) From the RDS Dashboard, under Instances, select the read replica that you just stopped and use Take Snapshot to create a DB snapshot. Figure 14: Taking a snapshot of the read replica Step 10: Share the Snapshot (Optional) If you are migrating across accounts, you need to share the snapshot. From the Amazon RDS console, under Snapshots, select the recently created read replica snapshot and use Share Snapshot to make the snapshot available across accounts. This step is not required if the target VPC is in the same account. If you do share the snapshot, log in to the new account once this step is finished. Figure 15: Sharing a snapshot between accounts If you are migrating to a different account, you need to set up a peering connection between the local VPC and the target VPC in the remote account. You will also have to allow access to the security group that you used when you enabled ClassicLink between the proxy server and the VPC. Figure 16: Creating a VPC peering connection Figure 17: Enabling ClassicLink over a peering connection Figure 18: ClassicLink settings for peering Step 11: Restore the Snapshot in the Target VPC From the Amazon RDS console, under Snapshots, select the Classic read replica snapshot and use Restore Snapshot to restore it. You should also select Multi-AZ Deployment at this time. Figure 19: Restoring the snapshot in the target VPC Note: We highly recommend that you enable the Multi-AZ Deployment option during initial creation of the new VPC DB instance. If you bypass this step and convert to Multi-AZ after switching your application over to the VPC DB instance, you can experience a significant performance impact, especially for write-intensive database workloads. Under Networking & Security, set Publicly Accessible to Yes. Next, select the target VPC and appropriate subnet groups to ensure connectivity from the VPC RDS DB instance to the Classic proxy server. Figure 20: Setting the VPC and subnet group on the VPC DB instance Figure 21: Security group settings for cross-account migration Step 12: Enable Backups on the VPC RDS DB Instance By default, backups are not enabled on read replicas. From the Amazon RDS console, under Instances, select the VPC RDS DB instance and use Modify DB Instances to enable backups. Figure 22: Setting backup retention Step 13: Set up Replication Between VPC and EC2-Classic DB Instances Retrieve the log file name and log position number from the information saved in the previous step. Prompt> cat readreplicastatus.txt | grep Master_Log_File Master_Log_File: mysql-bin-changelog.001993 Prompt> cat readreplicastatus.txt | grep Exec_Master_Log_Pos Exec_Master_Log_Pos: 120 Connect to the VPC RDS DB instance via a mysql client through the ClassicLink proxy, and set the EC2-Classic RDS DB instance as the replication master by issuing the mysql.rds_set_external_master command. Use the private IP address of the EC2-Classic proxy server, as well as the log file name and position from the output above. MySQL [(none)]> call mysql.rds_set_external_master('<proxy private IP>', 3306, 'replicationuser', 'classictoVPC123', 'mysql-bin-changelog.001993', 120, 0); Query OK, 0 rows
affected (0.12 sec) MySQL [(none)]> call mysql.rds_start_replication; +------------------------+ | Message | +------------------------+ | Slave running normally | +------------------------+ 1 row in set (1.03 sec) Query OK, 0 rows affected (1.03 sec) Verify the replication status on the VPC read replica using the show slave status command. MySQL [(none)]> show slave status \G Step 14: Switch to the VPC RDS DB Instance After ensuring that the data in the VPC read replica has caught up to the EC2-Classic master, configure your application to stop writing data to the RDS DB instance in EC2-Classic. After the replication lag has caught up, connect to the VPC RDS DB instance via a mysql client and issue the mysql.rds_stop_replication command. MySQL [(none)]> call mysql.rds_stop_replication; At this point, the VPC instance will stop replicating data from the master. You can now promote the replica by connecting to the VPC RDS DB instance via a mysql client and issuing the mysql.rds_reset_external_master command. MySQL [(none)]> call mysql.rds_reset_external_master; +---------------------------+ | Message | +---------------------------+ | Slave is down or disabled | +---------------------------+ 1 row in set (1.04 sec) +----------------------+ | message | +----------------------+ | Slave has been reset | +----------------------+ 1 row in set (3.12 sec) Query OK, 0 rows affected (3.12 sec) You can now change the endpoint in your application to write to the VPC RDS DB instance. Step 15: Take a Snapshot of the VPC RDS DB Instance From the Amazon RDS console, under Instances, select the VPC RDS DB instance and use Take Snapshot to capture a user snapshot for recovery purposes. Figure 23: Taking a snapshot of the DB instance in the VPC Step 16: Change the VPC DB Instance to be ‘Privately’ Accessible (Optional) After the migration to the new VPC RDS DB instance is complete, you can make it privately (not publicly) accessible. From the Amazon RDS console, under
Instances, select the DB instance and click Modify. Under Network & Security, set Publicly Accessible to No. Figure 24: Setting the instance to not be publicly accessible Step 17: Move the VPC DB Instance into Private Subnets (Optional) You can edit the DB Subnet Group membership for your VPC RDS DB instance to move it to a private subnet. In the following example, the subnets 172.16.2.0/24 and 172.16.3.0/24 are private subnets. Figure 25: Configuring subnet groups To change the private IP address of the RDS DB instance in the VPC, you have to perform a scale-up or scale-down operation. For example, you could choose a larger instance size. After the IP address changes, you can scale again to the original instance size. Figure 26: Forcing a scale operation Note: Alternatively, you can open an AWS support request (https://aws.amazon.com/contact-us/) and the RDS Operations team will move the migrated VPC RDS instance to the private subnet. Alternative Approaches There are other ways to approach migrating your Amazon RDS MySQL databases from EC2-Classic to EC2-VPC. We cover two alternatives here. One approach is to use AWS Database Migration Service, and another is to specify a new VPC subnet for a DB instance using the AWS Management Console. AWS Database Migration Service (DMS) An alternative approach to migration is to use AWS Database Migration Service (AWS DMS). AWS DMS can migrate your data to and from the most widely used commercial and open-source databases. The service supports homogeneous migrations, such as Amazon RDS to Amazon RDS, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL. The source database remains fully
operational during the migration, minimizing downtime to applications that rely on the database. Although AWS DMS can provide comprehensive ongoing replication of data, it replicates only a limited amount of data definition language (DDL). AWS DMS doesn't propagate items such as indexes, users, privileges, stored procedures, and other database changes not directly related to table data. In addition, AWS DMS does not automatically leverage RDS snapshots for the initial instance creation, which can increase migration time. Changing the VPC Subnet for a DB Instance Amazon RDS provides a feature that allows you to easily move an RDS DB instance in EC2-Classic to a VPC. You specify a new VPC subnet for an existing DB instance in the Amazon RDS console, the Amazon RDS API, or the AWS command line tools. To specify a new subnet group in the Amazon RDS console, under Network & Security, Subnet Group, expand the drop-down list and select the subnet group that you want from the list. You can choose to apply this change immediately or during the next scheduled maintenance window. However, there are a few limitations with this approach:  The DB instance isn't available during the move, which could take between 5 and 10 minutes.  Moving Multi-AZ instances to a VPC isn't currently supported.  Moving an instance with read replicas to a VPC isn't currently supported. Figure 27: Specifying a new subnet group (in a VPC) for a database instance If these limitations are acceptable for your DB instances, we recommend that you test this feature by restoring a snapshot of your database in EC2-Classic and then moving it to your VPC. If these limitations are not acceptable, then the ClassicLink approach presented in this whitepaper will enable you to
minimize downtime during the migration to your VPC. Conclusion This paper highlights the key steps for migrating RDS MySQL instances from EC2-Classic to EC2-VPC environments using ClassicLink and RDS read replicas. This approach enables minimal downtime for production environments. Contributors The following individuals and organizations contributed to this document:  Harshal Pimpalkhute, Sr. Product Manager, Amazon EC2 Networking  Jaime Lichauco, Database Administrator, Amazon RDS  Korey Knote, Database Administrator, Amazon RDS  Brian Welcker, Product Manager, Amazon RDS  Prahlad Rao, Solutions Architect, Amazon Web Services Further Reading For additional help, please consult the following sources:  http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html  http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html  http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html  http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Networking.html  http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-classiclink.html Appendix A: Set Up Proxy Server in Classic Use an Amazon Machine Image (AMI) of your choice to launch an EC2-Classic instance. The following example is based on the AMI Ubuntu Server 14.04 LTS (HVM). Connect to the EC2-Classic instance and install NGINX: Prompt> sudo apt-get update Prompt> sudo wget http://nginx.org/download/nginx-1.9.12.tar.gz Prompt> sudo tar xvzf nginx-1.9.12.tar.gz Prompt> cd nginx-1.9.12 Prompt> sudo apt-get install build-essential Prompt> sudo apt-get install libpcre3 libpcre3-dev Prompt> sudo apt-get install zlib1g-dev Prompt> sudo ./configure --with-stream Prompt> sudo make Prompt> sudo make install Edit the NGINX daemon file
/etc/init/nginx.conf: # /etc/init/nginx.conf – Upstart file description ""nginx http daemon"" author ""email"" start on (filesystem and net-device-up IFACE=lo) stop on runlevel [!2345] env DAEMON=/usr/local/nginx/sbin/nginx env PID=/usr/local/nginx/logs/nginx.pid expect fork respawn respawn limit 10 5 pre-start script $DAEMON -t if [ $? -ne 0 ] then exit $? fi end script exec $DAEMON Edit the NGINX configuration file /usr/local/nginx/conf/nginx.conf to configure port forwarding to the Classic RDS instance: # /usr/local/nginx/conf/nginx.conf NGINX configuration file worker_processes 1; events { worker_connections 1024; } stream { server { listen 3306; proxy_pass classicrdsinstance.123456789012.us-east-1.rds.amazonaws.com:3306; } } From the command line, start NGINX: Prompt> sudo initctl reload-configuration Prompt> sudo initctl list | grep nginx Prompt> sudo initctl start nginx",General,consultant,Best Practices
NIST_Cybersecurity_Framework_CSF,NIST Cybersecurity Framework (CSF) Aligning to the NIST CSF in the AWS Cloud First Published January 2019 Updated October 12, 2021 Notices Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS
agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. © 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved. Contents: Intended audience; Introduction; Security benefits of adopting the NIST CSF; NIST CSF implementation use cases; Healthcare; Financial services; International adoption; NIST CSF and AWS Best Practices; CSF core function: Identify; CSF core function: Protect; CSF core function: Detect; CSF core function: Respond; CSF core function: Recover; AWS services alignment with the CSF; Conclusion; Appendix A – Third-party assessor validation; Contributors; Document revisions. Abstract Governments, industry sectors, and organizations around the world are increasingly recognizing the NIST Cybersecurity Framework (CSF) as a recommended cybersecurity baseline to help improve the cybersecurity risk management and resilience of their systems. This paper evaluates the NIST CSF and the many AWS Cloud offerings that public and commercial sector customers can use to align to the NIST CSF and improve their cybersecurity posture. It also provides a third-party validated attestation confirming AWS services' alignment with the NIST CSF risk management practices, allowing you to properly protect your data across AWS. Amazon Web Services NIST Cybersecurity Framework (CSF) 1 Intended audience This document is intended for cybersecurity professionals, risk management officers, or other organization-wide decision makers considering how to implement a new, or improve an existing, cybersecurity framework in their organization. For details on how to configure the AWS services identified in this document, contact your AWS Solutions Architect. Introduction The NIST Framework for Improving Critical Infrastructure Cybersecurity (NIST Cybersecurity Framework, or CSF) was originally published in February 2014 in response to Presidential Executive Order 13636, “Improving Critical Infrastructure Cybersecurity,” which
called for the development of a voluntary framework to help organizations improve the cybersecurity risk management and resilience of their systems. NIST conferred with a broad range of partners from government, industry, and academia for over a year to build a consensus-based set of sound guidelines and practices. The Cybersecurity Enhancement Act of 2014 reinforced the legitimacy and authority of the CSF by codifying it, and its voluntary adoption, into law, until the Presidential Executive Order on "Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure," signed on May 11, 2017, mandated the use of the CSF for all US federal entities. While intended for adoption by the critical infrastructure sector, the foundational set of cybersecurity disciplines comprising the CSF has been supported by government and industry as a recommended baseline for use by any organization, regardless of its sector or size. Industry is increasingly referencing the CSF as a de facto cybersecurity standard.

In February 2018, the International Standards Organization released "ISO/IEC 27103:2018 — Information technology — Security techniques — Cybersecurity and ISO and IEC Standards." This technical report provides guidance for implementing a cybersecurity framework leveraging existing standards. In fact, ISO 27103 promotes the same concepts and best practices reflected in the NIST CSF: specifically, a framework focused on security outcomes, organized around five functions (Identify, Protect, Detect, Respond, Recover) and foundational activities that crosswalk to existing standards, accreditations, and frameworks. Adopting this approach can help organizations achieve security outcomes while benefiting from the efficiencies of reusing instead of re-doing.

(Figure credit: Natasha Hanacek/NIST, https://www.nist.gov/industry-impacts/cybersecurity)

According to Gartner, the CSF is used by approximately 30 percent of US private sector organizations and
projected to reach 50 percent by 2020.[1] As of the release of this report, 16 US critical infrastructure sectors use the CSF, and over 21 states have implemented it.[2] In addition to critical infrastructure and other private sector organizations, other countries, including Italy and Israel, are leveraging the CSF as the foundation for their national cybersecurity guidelines. Since Fiscal Year 2016, US federal agency Federal Information Security Modernization Act (FISMA) metrics have been organized around the CSF, and now reference it as a "standard for managing and reducing cybersecurity risks." According to the FY16 FISMA Report to Congress, the Council of the Inspectors General on Integrity and Efficiency (CIGIE) aligned IG metrics with the five CSF functions to evaluate agency performance and promote consistent and comparable metrics and criteria between Chief Information Officer (CIO) and Inspector General (IG) assessments.

The most common applications of the CSF have manifested in three distinct scenarios:

• Evaluation of an organization's enterprise-wide cybersecurity posture and maturity by conducting an assessment against the CSF model (Current Profile), determining the desired cybersecurity posture (Target Profile), and planning and prioritizing resources and efforts to achieve the Target Profile

• Evaluation of current and proposed products and services to meet security objectives aligned to CSF categories and subcategories, both to identify capability gaps and to find opportunities to reduce overlapping or duplicative capabilities for efficiency

• A reference for restructuring security teams, processes, and training

This paper identifies the key capabilities of AWS service offerings available globally that US federal, state, and local agencies; global critical infrastructure owners and operators; and global commercial enterprises can leverage to align to the CSF (security in the cloud). It also provides support to establish the
alignment of AWS Cloud services to the CSF, as validated by a third-party assessor (security of the cloud), based on compliance standards including FedRAMP Moderate[3] and ISO 9001/27001/27017/27018.[4] This means that you can have confidence that AWS services deliver on the security objectives and outcomes identified in the CSF, and that you can use AWS solutions to support your own alignment with the CSF and any required compliance standard. For US federal agencies in particular, leveraging AWS solutions can facilitate your compliance with FISMA reporting metrics. This combination of outcomes should empower you with confidence in the security and resiliency of your data as you migrate critical workloads to the AWS Cloud.

Security benefits of adopting the NIST CSF

The CSF offers a simple yet effective construct consisting of three elements: Core, Tiers, and Profiles. The Core represents a set of cybersecurity practices, outcomes, and technical, operational, and managerial security controls (referred to as Informative References) that support the five risk management functions: Identify, Protect, Detect, Respond, and Recover. The Tiers characterize an organization's aptitude and maturity for managing the CSF functions and controls, and the Profiles are intended to convey the organization's "as is" and "to be" cybersecurity postures. Together, these three elements enable organizations to prioritize and address cybersecurity risks consistent with their business and mission needs.

It is important to note that implementation of the Core, Tiers, and Profiles is the responsibility of the organization adopting the CSF (for example, a government agency, financial institution, or commercial start-up). This paper focuses on AWS solutions and capabilities supporting the Core that can enable you to achieve the security outcomes (Subcategories) in the CSF. It also describes how AWS services that have been accredited under FedRAMP Moderate and ISO 9001/27001/27017/27018 align to the CSF.
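The Current Profile and Target Profile comparison described above lends itself to a simple data representation. The sketch below is purely illustrative: the subcategory identifiers follow the CSF naming scheme, but the tier values and the helper function are hypothetical, not an official CSF scoring method.

```python
# Hypothetical sketch of a CSF Current-vs-Target Profile gap analysis.
# Tier numbers and the scoring approach are illustrative assumptions.

def profile_gaps(current, target):
    """Return subcategories where the target maturity exceeds the current one,
    sorted by gap size so the largest gaps can be prioritized first."""
    gaps = []
    for subcategory, target_tier in target.items():
        current_tier = current.get(subcategory, 0)  # 0 = not yet implemented
        if target_tier > current_tier:
            gaps.append((subcategory, current_tier, target_tier))
    # Largest maturity gap first
    return sorted(gaps, key=lambda g: g[2] - g[1], reverse=True)

current_profile = {"ID.AM-1": 2, "PR.DS-1": 1, "DE.CM-1": 3}
target_profile = {"ID.AM-1": 3, "PR.DS-1": 4, "DE.CM-1": 3, "RS.RP-1": 2}

for sub, cur, tgt in profile_gaps(current_profile, target_profile):
    print(f"{sub}: tier {cur} -> {tgt}")
```

An organization would then plan and prioritize resources against the ordered gap list, which is exactly the "plan and prioritize efforts to achieve the Target Profile" step in the first scenario above.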
The Core references security controls from widely adopted, internationally recognized standards such as ISO/IEC 27001, NIST 800-53, Control Objectives for Information and Related Technology (COBIT), the Council on Cybersecurity (CCS) Top 20 Critical Security Controls (CSC), and the ANSI/ISA 62443 Standards: Security for Industrial Automation and Control Systems. While this list represents some of the most widely reputed standards, the CSF encourages organizations to use any controls catalogue that best meets their organizational needs. The CSF was also designed to be size-, sector-, and country-agnostic; therefore, public and private sector organizations should have assurance in the applicability of the CSF regardless of the type of entity or nation-state location.

NIST CSF implementation use cases

Healthcare

The US Department of Health and Human Services completed a mapping of the Health Insurance Portability and Accountability Act of 1996 (HIPAA)[5] Security Rule to the NIST CSF. Under HIPAA, covered entities and business associates must comply with the HIPAA Security Rule to ensure the confidentiality, integrity, and availability of protected health information.[6] Since HIPAA does not have a set of controls that can be assessed or a formal accreditation process, covered entities and business associates like AWS are HIPAA eligible based on alignment with NIST 800-53 security controls that can be tested and verified in order to place services on the HIPAA eligibility list. The mapping between the NIST CSF and the HIPAA Security Rule promotes an additional layer of security
since assessments performed for certain categories of the NIST CSF may be more specific and detailed than those performed for the corresponding HIPAA Security Rule requirement.

Financial services

The US Financial Services Sector Coordinating Council[7] (FSSCC), comprised of 70 financial services associations, institutions, and utilities/exchanges, developed a sector-specific profile: a customized version of the NIST CSF that addresses unique aspects of the sector and its regulatory requirements. The Financial Services Sector Specific Cybersecurity Profile, drafted collaboratively with regulatory agencies, is a means to harmonize cybersecurity-related regulatory requirements. For example, the FSSCC mapped the "Risk Management Strategy" category to nine different regulatory requirements and determined that the language and definitions, while different, largely addressed the same security objective.

International adoption

Outside of the US, many countries have leveraged the NIST CSF for commercial and public sector use. Italy was one of the first international adopters of the NIST CSF and developed a national cybersecurity strategy against the five functions. In June 2018, the UK aligned its Minimum Cyber Security Standard, mandatory for all government departments, to the five functions. Additionally, Israel and Japan localized the NIST CSF into their respective languages, with Israel creating a cyber defense methodology based on its own adaptation of the NIST CSF. Uruguay performed a mapping of the CSF to ISO standards to strengthen connections to international frameworks. Switzerland, Scotland, Ireland, and Bermuda are also among the countries using the NIST CSF to improve cybersecurity and resiliency across their public and commercial sector organizations.

NIST CSF and AWS Best Practices

While this paper serves as a resource to provide organizational lifecycle risk management that connects business and mission
objectives to cybersecurity activities, AWS also provides other best practices resources: for customers moving their organizations to the cloud (the AWS Cloud Adoption Framework) and for customers designing, building, or optimizing solutions on AWS (the Well-Architected Framework).[8] These resources supply complementary tools to support an organization in building and maturing its cybersecurity risk management programs, processes, and practices in the cloud. More specifically, this NIST CSF whitepaper can be used in parallel with either of these best practices guides, serving as the foundation for your security program, with the Cloud Adoption Framework or Well-Architected Framework as an overlay for operationalizing the CSF security outcomes in the cloud.

For customers migrating to the cloud, the AWS Cloud Adoption Framework (AWS CAF) provides guidance that supports each unit in your organization, so that each area understands how to update skills, adapt existing processes, and introduce new processes to take maximum advantage of the services provided by cloud computing. Thousands of organizations around the world have successfully migrated their businesses to the cloud, relying on the AWS CAF to guide their efforts. AWS and our partners provide tools and services that can help you every step of the way to ensure complete understanding and transition. See https://d1.awsstatic.com/whitepapers/aws_cloud_adoption_framework.pdf

CSF core function: Identify

This section addresses the six categories that comprise the "Identify" function: Asset Management, Business Environment, Governance, Risk Assessment, Risk Management Strategy, and Supply Chain Risk Management. Together, they "develop an organizational understanding to manage cybersecurity risk to systems, people, assets, data, and capabilities."

CSF core subcategories for Identify:

• Asset Management (ID.AM) — The data, personnel, devices, systems, and facilities that enable the organization to achieve business purposes are
identified and managed consistent with their relative importance to business objectives and the organization's risk strategy

• Business Environment (ID.BE) — The organization's mission, objectives, stakeholders, and activities are understood and prioritized; this information is used to inform cybersecurity roles, responsibilities, and risk management decisions

• Governance (ID.GV) — The policies, procedures, and processes to manage and monitor the organization's regulatory, legal, risk, environmental, and operational requirements are understood and inform the management of cybersecurity risk

• Risk Assessment (ID.RA) — The organization understands the cybersecurity risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals

• Risk Management Strategy (ID.RM) — The organization's priorities, constraints, risk tolerances, and assumptions are established and used to support operational risk decisions

• Supply Chain Risk Management (ID.SC) — The organization's priorities, constraints, risk tolerances, and assumptions are established and used to support risk decisions associated with managing supply chain risk. The organization has established and implemented the processes to identify, assess, and manage supply chain risks

Customer responsibility

Identifying and managing IT assets is the first step in effective IT governance and security, and yet it has been one of the most challenging. The Center for Internet Security (CIS)[9] recognized the foundational importance of asset inventory and assigned physical and logical asset inventory as controls #1 and #2 of its Top 20. However, an accurate IT inventory, of both physical and logical assets, has been difficult to achieve and maintain for organizations of all sizes and resources. Inventory solutions are limited in their ability to identify and report on all IT assets across the organization for various reasons, such as network
segmentation preventing the solution from "seeing" and reporting from various parts of the enterprise network, endpoint software agents not being fully deployed or functional, and incompatibility across a broad range of disparate technologies. Unfortunately, those assets that are "lost" or unaccounted for pose the greatest risk: if they are not tracked, they are most likely not receiving the most recent patches and updates, are not replaced during lifecycle refreshes, and malware may be allowed to exploit and maintain its hold on the asset.

Migrating to AWS provides two key benefits that can mitigate the challenges of maintaining asset inventories in an on-premises environment. First, AWS assumes sole responsibility for managing the physical assets that comprise the AWS Cloud infrastructure. This can significantly reduce the burden of physical asset management for customers for those workloads that are hosted in AWS. The customer is still responsible for maintaining physical asset inventories for the equipment they keep in their environment (data centers, offices, deployed IoT, mobile workforce, and so on).

The second benefit is the ability to achieve deep visibility and asset inventory for logical assets hosted in a customer's AWS account. This may sound like a bold claim, but it quickly becomes evident: it does not matter whether an EC2 instance (virtual server) is turned on or off, whether the endpoint agent is installed and running, what network segment the asset is on, or any other factor. Whether using the AWS Management Console as a visual point-and-click interface, the command line interface (CLI), or the application programming interface (API), customers can query and obtain visibility of AWS service assets. This reduces the customer's inventory burden to the software they install on their EC2 instances and the data assets they store in AWS. AWS also has services that can perform this capability, like Amazon Macie, which can identify, classify, label, and apply rules to data stored in Amazon Simple Storage Service (Amazon S3).
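The logical-inventory capability described above can be illustrated with a short sketch. The payload below is a hand-written sample shaped like an Amazon EC2 DescribeInstances result, and the helper function is hypothetical; in practice the data would come from the EC2 API (for example, via the AWS CLI or an SDK).

```python
# Illustrative sketch: building a logical asset inventory from a
# DescribeInstances-style response. The sample payload is hand-written,
# not live AWS data.

def inventory(response):
    """List every instance, running or stopped, with its tags."""
    assets = []
    for reservation in response.get("Reservations", []):
        for inst in reservation.get("Instances", []):
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            assets.append({
                "id": inst["InstanceId"],
                "state": inst["State"]["Name"],
                "tags": tags,
            })
    return assets

sample = {"Reservations": [{"Instances": [
    {"InstanceId": "i-0abc", "State": {"Name": "running"},
     "Tags": [{"Key": "Owner", "Value": "analytics"}]},
    {"InstanceId": "i-0def", "State": {"Name": "stopped"}, "Tags": []},
]}]}

# Stopped instances appear in the inventory too: visibility does not
# depend on the asset being powered on or having an agent installed.
print(inventory(sample))
```

Note that the stopped instance is enumerated just like the running one, which is the point made above: inventory coverage in the cloud does not hinge on agents or network reachability.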
to data stored in Amazon Simple Storage Service (Amazon S3) An organization that unders tands its mission stakeholders and activities can utilize several AWS services to automate processes assign business risk to IT systems and manage user roles For example AWS Identity and Access Management (IAM) can be used to assign access roles based on business roles for people and services The use Amazon Web Services NIST Cybersecurity Framework (CSF) 9 of tags for services and data can be used to prioritize automated tasks and include pre determined risk decisions or stop gates for a person to evaluate the data presented and decide for which direction the system should take Governance is the “unsung hero” of cybersecurity It lays the foundation and sets the standard for people processes and technology AWS provides several services and capab ilities such as AWS IAM AWS Organizations AWS Config AWS Systems Manager AWS Service Catalog and others that customers can use to implement monitor and enforce governance Customers can leverage AWS compliance with over 50 standard s such as FedRAMP ISO and PCI DSS 10 AWS provides informati on about its risk and compliance program to enable customers to incorporate AWS controls into their governance framework This information can assist customers in documenting a complete control and governance framework with AWS included as an important part of that framework Services such as Amazon Inspector identify technical vulnerabilities that can be fed into a risk posture and management process The enhanced visibility that the cloud provides increases the accuracy of a customer’s risk posture allowing risk decisions to be made on more substantial data AWS responsibility AWS maintains stringent access control management by only providing data center access and information to employees and contractors who have a legitimate business need for such privileges When an employee no longer has a business need for these privileges his or her access is 
immediately revoked, even if they continue to be an employee of Amazon or AWS. All physical access to data centers by AWS employees is routinely logged and audited. Controls in place limit access to systems and data, and provide that access to systems or data is restricted and monitored. In addition, customer data and server instances are logically isolated from other customers by default. Privileged user access control is reviewed by an independent auditor during the AWS SOC 1, ISO 27001, PCI, and FedRAMP audits.

AWS risk management activities include the system development lifecycle (SDLC), which incorporates industry best practices and formal design reviews by the AWS Security Team, threat modeling, and completion of a risk assessment. In addition, the AWS control environment is subject to regular internal and external risk assessments. AWS engages with external certifying bodies and independent auditors to review and test the AWS overall control environment.

AWS management has developed a strategic business plan that includes risk identification and the implementation of controls to mitigate or manage risks. AWS management reevaluates the strategic business plan at least biannually. This process requires management to identify risks within its areas of responsibility and to implement appropriate measures designed to address those risks. In addition, the AWS control environment is subject to various internal and external risk assessments. The AWS Compliance and Security teams have established an information security framework and policies based on the Control Objectives for Information and related Technology (COBIT) framework, and have effectively integrated the ISO 27001 certifiable framework based on ISO 27002 controls, the American Institute of Certified Public Accountants (AICPA) Trust Services Principles, PCI DSS v3.2, and the National Institute of Standards and Technology (NIST) Publication 800-53 Rev. 4 (Recommended Security
Controls for Federal Information Systems). AWS maintains the security policy, provides security training to employees, and performs application security reviews. These reviews assess the confidentiality, integrity, and availability of data, as well as alignment with the information security policy.

AWS Security regularly scans all internet-facing service endpoint IP addresses for vulnerabilities (these scans do not include customer instances) and notifies the appropriate parties to remediate any identified vulnerabilities. In addition, external vulnerability threat assessments are performed regularly by independent security firms. Findings and recommendations resulting from these assessments are categorized and delivered to AWS leadership. These scans are done for the health and viability of the underlying AWS infrastructure; they are not meant to replace the customer's own vulnerability scans required to meet their specific compliance requirements.

AWS maintains formal agreements with key third-party suppliers and implements appropriate relationship management mechanisms in line with their relationship to the business. The AWS third-party management processes are reviewed by independent auditors as part of AWS ongoing compliance with SOC and ISO 27001. In alignment with ISO 27001 standards, AWS hardware assets are assigned an owner, and are tracked and monitored by AWS personnel with AWS proprietary inventory management tools. The AWS procurement and supply chain team maintains relationships with all AWS suppliers. Refer to ISO 27001 standards, Annex A, domain 8 for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.

CSF core function: Protect

This section addresses the six categories that comprise the "Protect" function: Access Control, Awareness and Training, Data Security, Information Protection Processes and Procedures,
Maintenance, and Protective Technology. The section also highlights AWS solutions that you can leverage to align to this function.

CSF core subcategories for Protect:

• Identity Management, Authentication and Access Control (PR.AC) — Access to physical and logical assets and associated facilities is limited to authorized users, processes, and devices, and is managed consistent with the assessed risk of unauthorized access to authorized activities and transactions

• Awareness and Training (PR.AT) — The organization's personnel and partners are provided cybersecurity awareness education and are trained to perform their cybersecurity-related duties and responsibilities consistent with related policies, procedures, and agreements

• Data Security (PR.DS) — Information and records (data) are managed consistent with the organization's risk strategy to protect the confidentiality, integrity, and availability of information

• Information Protection Processes and Procedures (PR.IP) — Security policies (that address purpose, scope, roles, responsibilities, management commitment, and coordination among organizational entities), processes, and procedures are maintained and used to manage protection of information systems and assets

• Maintenance (PR.MA) — Maintenance and repairs of industrial control and information system components are performed consistent with policies and procedures

• Protective Technology (PR.PT) — Technical security solutions are managed to ensure the security and resilience of systems and assets, consistent with related policies, procedures, and agreements

Customer responsibility

Of the three security objectives of Confidentiality, Integrity, and Availability, the third can be very difficult to achieve in an on-premises environment with only one or two data centers. This is one of the greatest benefits of hyperscale cloud service providers, and of AWS in particular, due to the unique AWS infrastructure architecture.
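Why one or two data centers struggle with availability can be shown with back-of-the-envelope arithmetic. The figures and the independence assumption below are illustrative simplifications, not AWS service commitments.

```python
# Back-of-the-envelope sketch: how availability improves with independent
# facilities. Assumes failures are statistically independent, which is a
# simplification; real Availability Zone engineering is more involved.

def combined_availability(single, count):
    """Probability that at least one of `count` independent facilities is up."""
    return 1 - (1 - single) ** count

one = combined_availability(0.99, 1)    # a single data center
three = combined_availability(0.99, 3)  # three independent facilities
print(f"{one:.6f} vs {three:.6f}")
```

Under these (hypothetical) numbers, going from one facility to three independent ones shrinks the probability of total outage from 1 in 100 to 1 in 1,000,000, which is the intuition behind distributing workloads across Availability Zones.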
You can distribute your application across multiple Availability Zones (AZs), which are logical fault isolation zones within a Region. If architected properly, with enhanced capacity management and automatic scaling capabilities, your application and data would not be impacted by a single data center outage. If you take advantage of all the Availability Zones in a Region (where there are three or more), the loss of two data centers may still not have any impact on your application. Likewise, services such as Amazon S3 automatically replicate your data to at least three Availability Zones in the Region, for a designed availability of 99.99% and data durability of 99.999999999%.

Confidentiality can be achieved through encryption at rest and encryption in transit, using AWS encryption services such as Amazon Elastic Block Store (EBS) encryption, Amazon S3 encryption, Transparent Data Encryption for RDS SQL Server and RDS Oracle, and VPN Gateway, or encryption using your existing encryption solution. AWS supports TLS/SSL encryption for all of its API endpoints, as well as the ability to create VPN tunnels to protect data in transit. AWS also provides a Key Management Service and dedicated Hardware Security Module appliances to encrypt data at rest. You can choose to secure your data using the AWS-provided capabilities or use your own security tools.

Integrity can be facilitated by a variety of means. Amazon CloudWatch and AWS CloudTrail have integrity checks, customers can use digital signatures for API calls and logs, MD5 checksums can be employed in Amazon S3, and there are numerous third-party solutions from our partners. AWS Config even provides integrity of the customer's AWS environment by monitoring for changes.

Within the customer AWS environment, AWS services such as AWS IAM, Amazon Cognito, AWS Single Sign-On (SSO), Amazon Cloud Directory, and AWS Directory Service, and features such as Multi-Factor Authentication, allow you to implement, manage, secure, monitor, and report on user identities, authentication standards, and access rights.
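As one concrete instance of the MD5 integrity checks mentioned above, Amazon S3 can verify an uploaded object against a Content-MD5 value, the base64 encoding of the object's binary MD5 digest. The sketch below computes that value locally; the payload is illustrative.

```python
# Sketch of an S3-style integrity check: the Content-MD5 value is the
# base64 encoding of the object's 128-bit binary MD5 digest.
import base64
import hashlib

def content_md5(data: bytes) -> str:
    return base64.b64encode(hashlib.md5(data).digest()).decode("ascii")

payload = b"example object body"
checksum = content_md5(payload)
print(checksum)

# When a checksum is supplied with an upload and the digest S3 computes for
# the received bytes differs, the request fails, so corruption in transit
# is caught rather than silently stored.
```

The same digest-then-encode pattern can be applied on download to confirm that the bytes read back match the bytes originally written.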
You are responsible for training your staff and end users on the policies and procedures for managing your environment. For technical training, AWS and our training partners provide comprehensive training for various roles, such as Solutions Architects, SysOps staff, developers, and security teams.[11]

AWS responsibility

AWS employs the concept of least privilege, whereby employee access is granted based on business need and job responsibilities, providing temporary role-based access to only those resources and data required at that moment in time. AWS provides physical data center access only to approved employees. All employees who need data center access must first apply for access and provide a valid business justification. These requests are granted based on the principle of least privilege, where requests must specify to which layer of the data center the individual needs access, and are time-bound. Requests are reviewed and approved by authorized personnel, and access is revoked after the requested time expires. Once granted admittance, individuals are restricted to areas specified in their permissions.

Third-party access is requested by approved AWS employees, who must apply for third-party access and provide a valid business justification. These requests are likewise granted based on the principle of least privilege: they must specify the layer of the data center to which the individual needs access, are time-bound, are approved by authorized personnel, and access is revoked after the requested time expires. Once granted admittance, individuals are restricted to areas specified in their permissions. Anyone granted visitor badge access must present identification when arriving on site, and is signed in and escorted by authorized staff.

AWS has implemented formal, documented security awareness and training policies and procedures for our employees and contractors that
address purpose, scope, roles, responsibilities, management commitment, coordination among organizational entities, and compliance. The AWS FedRAMP and ISO 27001 certifications document in detail the policies and procedures by which AWS operates, maintains controls, approves, deploys, reports, and monitors all changes to its environment and infrastructure, as well as how AWS provides redundancy and emergency responses for its physical infrastructure. Additionally, the certifications document in detail the manner in which all remote maintenance for AWS services is approved, performed, logged, and reviewed so as to prevent unauthorized access. They also address the manner in which AWS sanitizes media and destroys data: AWS uses products and procedures that align with NIST Special Publication 800-88, Guidelines for Media Sanitization. You are also responsible for preparing the policies, processes, and procedures for data protection.

To support billing and maintenance requirements, AWS assets are assigned an owner, and are tracked and monitored with AWS proprietary inventory management tools. AWS asset owner maintenance procedures are carried out by utilizing a proprietary tool with specified checks that must be completed according to the documented maintenance schedule. Third-party auditors test AWS asset management controls by validating that the asset owner is documented and that the condition of the assets is visually inspected according to the documented asset management policy.

AWS services can also greatly improve managing and performing systems maintenance for our customers. First, based on the AWS infrastructure previously discussed, an application architected for high availability across multiple Availability Zones allows you to segregate maintenance activities. You can take assets within an Availability Zone offline for maintenance without affecting the performance of the overall application, as the duplicate
assets in the other Availability Zones scale out and pick up the load. Maintenance can be accomplished one Availability Zone at a time, and can be automated with stop gates and reporting as required. In addition, entire architectures can be shifted over from a dev/test (blue) environment to an operations (green) environment, and vice versa, where that method is desired.

CSF core function: Detect

This section addresses the three categories that comprise the "Detect" function: Anomalies and Events, Security Continuous Monitoring, and Detection Processes. It summarizes the key AWS solutions you can leverage to align to this function.

CSF core subcategories for Detect:

• Anomalies and Events (DE.AE) — Anomalous activity is detected in a timely manner, and the potential impact of events is understood

• Security Continuous Monitoring (DE.CM) — The information system and assets are monitored at discrete intervals to identify cybersecurity events and verify the effectiveness of protective measures

• Detection Processes (DE.DP) — Detection processes and procedures are maintained and tested to ensure timely and adequate awareness of anomalous events

Customer responsibility

The ability to gather, synthesize, and alert on security-relevant events is fundamental to any cybersecurity risk management program. The API-driven nature of cloud technology provides a new level of visibility and automation not previously possible. With every action taken resulting in one or more audit records, AWS provides a wealth of activity information available to customers within their account structure. However, the volume of data can present its own challenges. Finding the proverbial "needle in the haystack" is a real problem, but the capacity and capabilities the cloud provides are well suited to resolve these challenges. With the appropriate log processing infrastructure, automation, and data analysis, it is possible to achieve near real-time detection and response for critical events while filtering out false positives and low or accepted risks.
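A minimal sketch of that kind of log filtering follows, using a simplified CloudTrail-style record shape. Real events carry many more fields, and the threshold is an arbitrary choice, not a recommended value.

```python
# Illustrative "needle in the haystack" filter: surface source IPs with
# repeated failed console logins from CloudTrail-style records.
# The record structure below is simplified sample data.

def failed_console_logins(records, threshold=3):
    """Flag source IPs with at least `threshold` failed ConsoleLogin events."""
    failures = {}
    for rec in records:
        if (rec.get("eventName") == "ConsoleLogin"
                and rec.get("responseElements", {}).get("ConsoleLogin") == "Failure"):
            ip = rec.get("sourceIPAddress", "unknown")
            failures[ip] = failures.get(ip, 0) + 1
    return {ip: n for ip, n in failures.items() if n >= threshold}

events = (
    [{"eventName": "ConsoleLogin", "sourceIPAddress": "203.0.113.5",
      "responseElements": {"ConsoleLogin": "Failure"}}] * 4
    + [{"eventName": "ConsoleLogin", "sourceIPAddress": "198.51.100.7",
        "responseElements": {"ConsoleLogin": "Success"}}]
)
print(failed_console_logins(events))
```

The successful login and the noise below the threshold are filtered out, leaving only the repeated-failure pattern for an analyst or an automated response to act on.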
AWS has several services that can be utilized as part of a comprehensive security operations strategy for nearly continuous monitoring and threat detection. At the fundamental level, there are services such as AWS CloudTrail, which logs all API calls; the logs can be digitally signed and encrypted and then stored in a secure Amazon S3 bucket. Virtual Private Cloud (VPC) Flow Logs monitor all network activity going in and out of your VPC. There is also Amazon CloudWatch, which monitors your AWS environment and generates alerts, similar to a Security Information and Event Management (SIEM) system, and whose output can be ingested into a customer's on-premises SIEM. There are also more advanced services, such as Amazon GuardDuty, which correlates activity within your AWS environment with threat intelligence from multiple sources and provides additional risk context and anomaly detection. Amazon Macie is another advanced service that can identify sensitive data, classify and label it, and track its location and access. Some customers may even choose to take advantage of AWS artificial intelligence (AI) and machine learning (ML) services to model and analyze log data.

AWS responsibility

AWS provides near real-time alerts when AWS monitoring tools show indications of compromise, or potential compromise, based upon threshold alarming mechanisms determined by the AWS service and security teams. AWS correlates information gained from logical and physical monitoring systems to enhance security on an as-needed basis. Upon assessment and discovery of risk, Amazon disables accounts that display atypical usage matching the characteristics of bad actors. AWS employees are trained on how to recognize suspected security incidents and where to report them. When appropriate, incidents are reported to relevant authorities. AWS maintains the AWS Security Bulletins webpage to notify
customers of security and privacy events affecting AWS services. Customers can subscribe to the Security Bulletin RSS feed to keep abreast of security announcements on the Security Bulletins webpage. The AWS Support team maintains a Service Health Dashboard webpage to alert customers to any broadly impacting availability issues.

CSF core function: Respond

This section addresses the five categories that comprise the "Respond" function: Response Planning, Communications, Analysis, Mitigation, and Improvements. We also summarize the key AWS solutions that you can leverage to align to this function.

CSF core subcategories for respond:

• Response Planning (RS.RP) — Response processes and procedures are run and maintained to ensure timely response to detected cybersecurity events.
• Mitigation (RS.MI) — Activities are performed to prevent expansion of an event, mitigate its effects, and eradicate the incident.
• Communications (RS.CO) — Response activities are coordinated with internal and external stakeholders, as appropriate, to include external support from law enforcement agencies.
• Analysis (RS.AN) — Analysis is conducted to ensure adequate response and support recovery activities.
• Improvements (RS.IM) — Organizational response activities are improved by incorporating lessons learned from current and previous detection/response activities.

Customer responsibility

The time between detection and response is critical. Well-run, repeatable response plans minimize exposure and speed recovery. Automation enabled by the cloud allows for the implementation of sophisticated playbooks as code, with much quicker response times. By simply tagging an Amazon Elastic Compute Cloud (Amazon EC2) instance, for example, automation can isolate the instance, take a forensic snapshot, install analysis tools, connect the suspect instance to a forensic workstation, and cut a ticket to a cybersecurity analyst. The capabilities listed below facilitate the creation of automated processes to add speed and consistency to your incident response processes. Moreover, these tools allow you to maintain a history of the communications for use in a post-event review.

While the cloud does offer capabilities to streamline and expedite the collection and dissemination of information, there is always a human element involved in response coordination. Cybersecurity analysis requires investigative action, forensics, and an understanding of the incident; these necessarily require some level of human interaction. Though AWS services do not provide direct incident analytics, they do provide services to assist with creating a formalized process and assessing the breadth of impact.

AWS responsibility

AWS has implemented a formal, documented incident response policy and program. The policy addresses purpose, scope, roles, responsibilities, and management commitment. AWS utilizes a three-phased approach to manage incidents:

• Activation and notification phase
• Recovery phase
• Reconstitution phase

To ensure the effectiveness of the AWS Incident Management plan, AWS conducts incident response testing. This testing provides excellent coverage for the discovery of previously unknown defects and failure modes. In addition, it allows the Amazon Security and Service teams to test the systems for potential customer impact and further prepare staff to handle incidents, such as detection and analysis, containment, eradication and recovery, and post-incident activities. The Incident Response Test Plan is run annually, in conjunction with the Incident Response plan. AWS Incident Management planning, testing, and test results are reviewed by third-party auditors.

CSF core function: Recover

This section addresses the three categories that comprise the "Recover" function: Recovery Planning, Improvements, and Communications. It also summarizes the key AWS solutions that you can leverage to align to this function.

Customer responsibility

Customers are responsible for planning, testing, and performing recovery operations for their applications and data to maintain their business continuity. The cause of an outage may come from many different sources. AWS services provide many advanced capabilities for self-healing and automated recovery. For example, the use of Auto Scaling groups across multiple Availability Zones allows the infrastructure to monitor the health of EC2 instances and rapidly replace a failed instance with a new Amazon Machine Image (AMI). Additionally, the use of Amazon CloudWatch, AWS Lambda, and other services and service capabilities can automate recovery actions, including everything from deploying an entire AWS environment and application, to failing over to a different AWS Region, restoring data from backups, and more. Lastly, actions involving public relations, reputation management, and communicating recovery activities depend on how the organization, in this case the customer, chooses to handle the event that impacted its environment.

AWS responsibility

The AWS resilient infrastructure, reliable automation, disciplined processes, and exceptional people are able to recover from events very quickly and with minimal (if any) disruption to customers. The AWS business continuity plan details the three-phased approach that AWS has developed to recover and reconstitute the AWS infrastructure:

• Activation and notification phase
• Recovery phase
• Reconstitution phase

This approach ensures that AWS performs system recovery and reconstitution efforts in a methodical sequence, maximizing the effectiveness of the recovery and reconstitution efforts and minimizing system outage time due to errors and omissions. AWS maintains a ubiquitous security control environment across all Regions. Each data center is built to physical, environmental, and security standards in an active-active configuration, employing an N+1 redundancy model to ensure system availability in the event of component failure.
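The tag-driven response playbook described in the Respond section and the automated recovery actions described above share one pattern: map an event to an ordered list of steps and execute them. The sketch below illustrates that pattern only; the event names, step names, and dictionary layout are invented for this example and are not an AWS API.

```python
# Minimal playbook-as-code dispatcher, in the spirit of the tag-driven
# response and self-healing recovery patterns described above. All event
# and action names here are illustrative; a real implementation would be
# triggered by CloudWatch alarms or events and would call AWS APIs to
# carry out each step.

PLAYBOOKS = {
    # Respond: an instance tagged for quarantine is isolated for forensics.
    "tag:quarantine": [
        "isolate-instance-security-group",
        "snapshot-ebs-volumes",          # preserve evidence before changes
        "attach-forensic-workstation",
        "open-analyst-ticket",
    ],
    # Recover: a failed health check triggers self-healing replacement,
    # as an Auto Scaling group would do automatically.
    "health:instance-failed": [
        "terminate-unhealthy-instance",
        "launch-replacement-from-ami",
    ],
}

def run_playbook(event, execute):
    """Look up the playbook for an event and run each step in order."""
    for step in PLAYBOOKS.get(event, ["page-operator"]):
        execute(step)

# Example: record the steps instead of performing real actions.
log = []
run_playbook("tag:quarantine", log.append)
print(log[0])  # isolate-instance-security-group
```

Keeping the playbook as plain data makes it easy to review, version, and test, which is what gives automated response its speed and consistency.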
Components (N) have at least one independent backup component (+1), so the backup component is active in the operation even if all other components are fully functional. To reduce single points of failure, this model is applied throughout AWS, including network and data center implementation. All data centers are online and serving traffic; no data center is "cold." In case of failure, there is sufficient capacity to enable traffic to be load-balanced to the remaining sites.

AWS services alignment with the CSF

AWS assessed the alignment of our cloud services to the CSF to demonstrate "security of the cloud." In an increasingly interconnected world, applying strong cybersecurity risk management practices for each interconnected system to protect the confidentiality, integrity, and availability of data is a necessity. AWS public and private sector customers fully expect that AWS employs best-in-class security to safeguard its cloud services and the data processed and stored in those systems. To effectively protect data and systems at hyperscale, security cannot be an afterthought, but rather an integral part of AWS systems lifecycle management. This means that security starts at Phase 0 (systems inception) and is continuously delivered as an inherent part of the AWS service delivery model. AWS exercises a rigorous, risk-based approach to the security of its services and the safeguarding of customer data. AWS enforces its own internal security assurance process for its services, which evaluates the effectiveness of the managerial, technical, and operational controls necessary for protecting against current and emerging security threats impacting the resiliency of its services. Hyperscale commercial cloud service providers such as AWS are already subject to robust security requirements in the form of sector-specific, national, and international security certifications (for example, FedRAMP, ISO 27001, PCI DSS, SOC, and so on) that
sufficiently address the risk concerns identified by public and private sector customers worldwide. AWS adopts a security high bar across all of its services, based on its "high watermark" approach for its customers. This means that AWS takes the highest classification level of data traversing and stored in its cloud services and applies those same levels of protection to all of its services and for all of its customers. These services are then queued for certification against the highest compliance bar, which translates to customers benefiting from elevated levels of protection for customer data processed and stored in the AWS Cloud.

As validated by our third-party assessor, AWS solutions available today for our public and commercial sector customers align with the CSF Core. Each of these services maintains a current accreditation under FedRAMP Moderate and/or ISO 27001. When deploying AWS solutions, organizations can have the assurance that AWS services uphold risk management best practices defined in the CSF, and they can leverage these solutions for their own alignment to the CSF. Refer to Appendix A for the third-party attestation letter.

Conclusion

Public and private sector entities acknowledge the security value in adopting the NIST CSF into their environments. US federal agencies in particular are directed to align their cybersecurity risk management and reporting practices to the CSF. As US state and local governments, non-US governments, critical infrastructure operators, and
commercial organizations assess their own alignment with the CSF, they need the right tools and solutions to achieve a secure and compliant system and organizational risk posture. You can strengthen your cybersecurity posture by leveraging AWS as part of your enterprise technology to build automated, innovative, and secure solutions to achieve the security outcomes in the CSF. You reap an additional layer of security with the assurance that AWS services also employ sound risk management practices identified in the CSF, which have been validated by a third-party assessor.

Appendix A – Third-party assessor validation

Contributors

Contributors to this document include:

• Min Hyun, Sr. Manager, Security/Compliance/Privacy
• Michael South, Principal Industry Specialist, ADFS DC Tech
• James Mueller, AWS Security Assurance: Gov FedRAMP, AWS Security

Document revisions

Date / Description:
October 12, 2021: Updated
January 2019: First publication

Notes

1. https://www.nist.gov/industry-impacts/cybersecurity
2. Ibid.
3. Federal Risk and Authorization Management Program (FedRAMP) is the US government's standardized, federal-wide program for the security authorization of cloud services. FedRAMP's "do once, use many times" approach was designed to offer significant benefits, such as increasing consistency and reliability in the evaluation of security controls, reducing costs for service providers and agency customers, and streamlining duplicative authorization assessments across agencies acquiring the same service.
4. ISO 27001/27002 is a widely adopted global security standard that sets out requirements and best practices for a systematic approach to managing company and customer information that's based on periodic risk assessments appropriate to ever-changing threat scenarios. ISO 27018 is a code of practice that focuses on protection of personal data in the cloud. It is based on ISO
information security standard 27002 and provides implementation guidance on ISO 27002 controls applicable to public cloud Personally Identifiable Information (PII). It also provides a set of additional controls and associated guidance intended to address public cloud PII protection requirements not addressed by the existing ISO 27002 control set.
5. HIPAA includes provisions to protect the security and privacy of protected health information (PHI). PHI includes a very wide set of personally identifiable health and health-related data, including insurance and billing information, diagnosis data, clinical care data, and lab results such as images and test results. The HIPAA rules apply to covered entities, which include hospitals, medical services providers, employer-sponsored health plans, research facilities, and insurance companies that deal directly with patients and patient data. The HIPAA requirement to protect PHI also extends to business associates.
6. PHI includes a very wide set of personally identifiable health and health-related data, including insurance and billing information, diagnosis data, clinical care data, and lab results such as images and test results.
7. https://www.fsscc.org/About-FSSCC
8. The AWS Well-Architected Framework documents architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a set of foundational questions that allow you to understand if a specific architecture aligns well with cloud best practices. https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html
9. https://www.cisecurity.org/controls/
10. The Payment Card Industry Data Security Standard (also known as PCI DSS) is a proprietary information security standard administered by the PCI Security Standards Council (https://www.pcisecuritystandards.org/), which was founded by American Express, Discover Financial Services, JCB International, MasterCard Worldwide, and
Visa Inc. PCI DSS applies to all entities that store, process, or transmit cardholder data (CHD) and/or sensitive authentication data (SAD), including merchants, processors, acquirers, issuers, and service providers.
11. Available online and classroom training can be found at https://aws.amazon.com/training. There are also several books covering many aspects of AWS, which can be found at https://www.amazon.com by searching for "AWS". AWS whitepapers can be found at https://aws.amazon.com/whitepapers,General,consultant,Best Practices
Optimizing_ASP.NET_with_C_AMP_on_the_GPU,"Optimizing ASP.NET with C++ AMP on the GPU: High Performance Parallel Code in the AWS Cloud. Scott Zimmerman, April 2015.

This paper has been archived. For the latest technical content about the AWS Cloud, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2015 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at
http://aws.amazon.com/apache2.0/ or in the "license" file accompanying this file. This code is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Portions of the code were developed by Heaton Research and are licensed under the Apache License, Version 2.0, available here: https://www.apache.org/licenses/LICENSE-2.0.html

Portions of the code were developed by Microsoft Corporation and are licensed under Microsoft MS-PL, available here: http://opensource.org/licenses/ms-pl

Contents

Abstract 4
Introduction 4
Introduction to C++ AMP 6
Introduction to Amazon EC2 7
Install the AWS Toolkit for Visual Studio 7
Set up the Amazon EC2 Windows Server Instance with NVIDIA GPU 7
Create a Security Group with the AWS Toolkit 8
Launch G2 Instance in Amazon EC2 with the AWS Toolkit 11
Connect to the Instance to Install the NVIDIA Driver and Visual C++ Redistributable 14
Comparing the Performance of Various Matrix Multiplication Algorithms 20
Working with the Code 21
Deploying the Web Application with AWS Elastic Beanstalk 21
Using ebextensions with AWS Elastic Beanstalk 24
Model Code for Data Passed Between Controller and View 25
Accessing the Model in the View 25
Controller Code to Invoke Each Algorithm and Populate the Model 26
C# Basic Serial (CPU) 31
C# Optimized Serial (CPU) 32
C# Parallel with TPL (CPU) 33
C++ Basic Serial (CPU) 33
C++ Parallel with PPL (CPU) 36
C++ Parallel with AMP (GPU) 37
C++ Parallel with AMP Tiling (GPU) 39
Conclusion 40
Further Reading 41
Notes 41

Abstract

This whitepaper is intended for Microsoft Windows developers who are considering writing high-performance parallel code in Amazon Web Services (AWS) using the Microsoft C++
Accelerated Massive Parallelism (C++ AMP) library. This paper describes an ASP.NET Model-View-Controller (MVC) web application, written in C#, that invokes C++ functions running on the graphics processing unit (GPU) for matrix multiplication. Since matrix multiplication is of order N-cubed, multiplying two 1024 x 1024 matrices requires over one billion multiplications; it is therefore an example of a compute-intensive operation that is a good candidate for GPU programming. This paper shows how to use AWS Elastic Beanstalk and the AWS Toolkit for Visual Studio to launch a Microsoft Windows Server instance with an NVIDIA GPU in the Amazon Elastic Compute Cloud (Amazon EC2) on AWS.

Introduction

Certain types of parallel algorithms can run hundreds of times faster on a GPU than similar serial algorithms on a CPU. This paper describes matrix multiplication as one example of a parallel algorithm that is suitable for GPU programming. Performance increases of this order are obviously very attractive for certain workloads, but there are several technologies that must be understood and integrated in order to achieve these gains.

First, you'll need a GPU programming language or library. The next section briefly discusses the advantages of Microsoft C++ AMP, and this whitepaper includes working code examples written in C++ AMP. Second, this paper will describe how to use the AWS Toolkit for Visual Studio to launch Amazon EC2 instances with a GPU, connect to them remotely, and install the NVIDIA GPU graphics driver. Third, although the focus here is on C++ programming, we'll need a simple user interface to display results, and it's typically easier to do this in C# than in C++. So this whitepaper shows a small program written in C# that uses ASP.NET MVC to invoke a function written in C++ AMP. Fourth, bringing ASP.NET MVC into the solution means you also need to add the Internet Information Services (IIS) role to Windows Server and deploy the web application. This will be accomplished from inside
Visual Studio with the AWS Elastic Beanstalk service. Of course, it's not necessary to develop a web front end or use C# to take advantage of C++ AMP, but that is a common use case, so this whitepaper covers how to integrate those technologies with C++ and Windows Server running on Amazon EC2.

Figure 1 shows how the ASP.NET MVC architecture spans the physical tiers in this application and the coding technologies that will be used on each tier. Note that this simple application doesn't include a data tier. Also, the application tier is only a logical concept in this scenario. It is a way of looking at the C# and C++ algorithms as distinct from the web application, even though they run on the CPU or GPU of the same web server virtual machine.

Figure 1: The ASP.NET MVC Architecture and Languages Used

This application starts with a basic matrix multiplication function in C# to show the simplest way to implement the solution. Then the program is optimized six times, each time adding a technology and comparing performance. Subsequent sections of this paper will describe how each variation is coded and how to set up the technologies. Download the source code and Visual Studio solution.1

Here's an overview of the seven matrix multiplication algorithms that will be illustrated:

Algorithm: Description

C# Basic Serial (CPU): Written in C# to serve as a performance baseline on which we hope to improve by using C++.
C# Improved Serial (CPU): Optimizes the order of loop indexes to improve performance.
C# Parallel with TPL (CPU): Uses the .NET Framework Task Parallel Library (TPL). When run on a machine with multiple cores, this multithreaded algorithm improves performance when compared with the serial C# version.
C++ Basic Serial (CPU): Converts the basic serial algorithm to C++ to demonstrate
how to invoke C++ code from ASP.NET MVC C# code running on IIS.
C++ Parallel with PPL (CPU): Rewrites the serial C++ function to make it parallel by using the Microsoft Parallel Patterns Library (PPL).
C++ Parallel with AMP (GPU): Rewrites the parallel C++ function to run on the GPU using basic techniques with C++ AMP.
C++ Parallel with AMP Tiling (GPU): Rewrites the AMP C++ function to use AMP with tiling. Implementing tiling algorithms takes a bit more work than basic AMP, but if done carefully it can improve performance.

The performance comparisons illustrated in this application are not meant to be scientific benchmarks, but they may provide useful insight into the potential relative performance of the various techniques. The algorithms are not intended to be optimal. If you really need to do fast matrix multiplications, you should look into tested and optimized libraries such as Basic Linear Algebra Subprograms (BLAS) or Linear Algebra Package (LAPACK).

Introduction to C++ AMP

Until now, programming the GPU has been tedious, or nonportable, or limited to the C language. Microsoft C++ AMP enables Visual C++ developers to optimize compute-intensive programs in a highly productive way. AMP is an open specification for an extension to standard C++ that greatly simplifies porting parallel algorithms from the CPU to the GPU. AMP is also elegant and takes advantage of modern C++ features such as lambdas. You'll see that, after taking the first step with AMP, parallel code still looks similar to the original serial code.

The popular OpenCL library is portable across multiple operating systems and GPU hardware vendors. It's been around longer than C++ AMP and is recognized for providing very fast runtime performance. However, OpenCL is a C-language library that misses out on modern C++ features. AMP is portable across GPU hardware, but because it's designed for DirectX, it runs on Windows. In 2012, Intel released a free download called Shevlin Park as a proof of concept that enables C++ AMP code to
run on top of OpenCL, which means your C++ AMP code can run on Linux and other operating systems. In 2013, the HSA Foundation published an open-source C++ AMP compiler2 that outputs OpenCL code. This also enables you to write C++ AMP code to run on Linux and other operating systems. Microsoft maintains a C++ AMP Algorithms Library modeled after the Standard Template Library3 and a few dozen C++ AMP sample projects on the AMP blog.4

Introduction to Amazon EC2

Amazon EC2 is a service that allows customers to run Windows Server and Linux in the AWS cloud. Amazon EC2 provides over 30 types of compute instances,5 including memory-optimized, storage-optimized, and GPU-enabled instances. The G2 double extra-large (g2.2xlarge) instance type has eight virtual CPUs and an NVIDIA GPU with 1,536 CUDA cores and 4 GB of video memory. CUDA is a parallel computing platform and programming model invented by NVIDIA.6

Install the AWS Toolkit for Visual Studio

This paper assumes that you have Visual Studio Professional 2013 or Visual Studio Community 2013 already installed on your computer. It is possible to write the code with Visual Studio Express; however, that edition doesn't support plugins such as the AWS Toolkit for Visual Studio. The AWS Toolkit makes it very convenient to perform several account management tasks without ever leaving Visual Studio. You'll use the AWS Toolkit extensively to launch and administer an Amazon EC2 instance in AWS, although it's also possible to do that with the Amazon EC2 console in a web browser. Please download and install the AWS Toolkit for Visual Studio7 from the AWS website. For this whitepaper, please ensure you have at least version 1810 of the AWS Toolkit for Visual Studio. After installing the toolkit, you should see an option for the AWS Explorer appear in the Visual Studio View menu.

Set up the Amazon EC2 Windows Server Instance with NVIDIA GPU

This paper assumes that
you have an AWS account with permission to launch Amazon EC2 instances. AWS provides a limited free tier8 for one year for new customers to experiment with cloud computing. The free tier covers several services, including Amazon EC2. However, it applies to the t2.micro instance type, not the G2 instance type.

Important: Please be aware that there is a cost to run the G2 instance type used in this paper. This profile in the AWS Simple Monthly Calculator shows the estimated cost to run one on-demand G2 instance with Windows Server nonstop for a month. Note that significant cost savings can be achieved by using spot or reserved instances rather than on-demand instances, and by stopping the instance when it's not in use.

The following sections explain how to use the AWS Toolkit for Visual Studio to launch a G2 instance with Windows Server.

Create a Security Group with the AWS Toolkit

Microsoft Remote Desktop Connection (RDC) is useful for manually administering Windows Server remotely, but the NVIDIA display driver that you need for the GPU and the Remote Desktop Protocol (RDP) used by RDC are not compatible. RealVNC offers a free version of their VNC Server software that enables remote connections graphically, and it uses a different protocol that is compatible with the NVIDIA driver. So, before you install the NVIDIA driver, you will need to install VNC Server on the instance. Then you can disconnect from RDP, reconnect over VNC, and install the NVIDIA driver. Don't worry about installing that now; the detailed instructions are provided later.

RDP uses port 3389. VNC Server uses port 5900. And, of course, the web application will use port 80. The default security group when launching a Windows Server instance only opens port 3389. You could simply add rules to the default group after you launch the instance, but instead you'll create your own custom security group and give it a name. You'll also use this
custom security group later when you deploy the web application with AWS Elastic Beanstalk.

To create a security group in the AWS Toolkit:

1. In Visual Studio, on the View menu, click AWS Explorer (or press Ctrl+K, A).
2. Expand Amazon EC2 and double-click Security Groups. Your security groups are displayed in the right pane. On the menu bar above that pane, click Create Security Group. Fill in the Name and Description, and leave the No VPC option selected, as shown in Figure 2. Click OK.

Figure 2: Creating a Security Group

3. Step 2 creates an empty security group. Now let's add the rules to it. In the lower pane, click Add Permission to open the Add IP Permission dialog box, as shown in Figure 3. Leave Protocol as TCP. For Port Range, type 5900 in both the Start and End fields. Click OK.

Caution: For RDP and VNC, it's highly advisable to limit the Source CIDR to your local IP address, with either /32 or an appropriate subnet of your private network appended to the address. You may use the estimated IP address shown in the Add IP Permission dialog box (Figure 3), or you can type "what is my IP" into a search engine to see your public IP address. AWS creates a default RDP rule with Source CIDR as 0.0.0.0/0 (which means the whole Internet) to simplify the experience for new users who are launching an instance. But opening VNC and RDP ports to the whole Internet means that hackers can try to guess your administrator password to gain control of your server.

Figure 3: Adding a Rule in the Security Group

4. Repeat step 3 to add port 3389 (for Protocol, you can select RDP).
5. Repeat step 3 once more to add port 80 (for Protocol, you can select HTTP).

With your security group selected in the top pane, your rules should appear in the middle pane, similar to Figure 4.

Figure 4: You Should Have Three Rules in Your Security Group

Note: This security group will serve you while you are installing software on the Amazon EC2 instance. After you complete that task and create an Amazon Machine Image (AMI), AWS Elastic Beanstalk will apply an automatic security group with only ports 22 and 80 open. So, if you need to manually administer your Amazon EC2 instance after deploying with AWS Elastic Beanstalk, you must add port 5900 to that security group.

Launch G2 Instance in Amazon EC2 with the AWS Toolkit

Now that you have a custom security group, you're ready to launch a G2 instance:

1. In Visual Studio, on the View menu, click AWS Explorer (or press Ctrl+K, A). AWS Explorer appears as in Figure 5, where it's shown with the Amazon EC2 service expanded.

Figure 5: AWS Explorer in AWS Toolkit

2. In AWS Explorer, expand Amazon EC2, as shown in Figure 5. Right-click Instance, and then click New Instance.
3. In the Quick Launch wizard, click Advanced. AWS has created special AMIs to optimize the deployment time for IIS and the .NET Framework with AWS Elastic Beanstalk. The wizard lets you pick one of those AMIs as your base image. After you get your instance prepared with the NVIDIA drivers, you'll save your own AMI.
4. In the Launch new Amazon EC2 Instance dialog box (see Figure 6), type net beanstalk in the search text box (the third Viewing field). Then change the setting of the first field from Owned by me to Amazon Images. Do it in that order; otherwise, it takes longer. Click the Name column heading to sort the AMIs by name. Expand the Description column so you can see the dates the images were created. Scroll down to select the most recently created Windows Server 2012 R2 (not core) image. At the time this screenshot was taken, the latest version of the Beanstalk Container was v2026. However, new images are released from time to time to
incorporate the latest Windows updates from Microsoft, so you'll likely see a newer version. Now click Next.

Figure 6: Choosing an AMI

5. In the AMI Options dialog box, in the Instance Type list, select GPU Double Extra Large. Click Next.
6. In the Storage dialog box, click Next.
7. In the Tags dialog box, provide a name for the instance so it's easy to distinguish.
8. In the Security dialog box (Figure 7), click Create New Key Pair and give it a name. Choose the security group you created earlier (this is very important).

Figure 7: Choosing the Security Group You Created Earlier

9. Click Launch.
10. In the AWS Explorer left pane, under Amazon EC2, double-click Instances. That will display the panel of your instances, and you should see that your new instance is launching. The status will show as "pending" for a few minutes, and then it will change to "running". You can continue to the next step while the launch is pending.
11. You'll need an Elastic IP address for this instance so you can easily reconnect to it if you stop and restart the instance. Right-click the instance (even if the status is pending) and then click Associate Elastic IP. In the Attach Elastic IP to Instance dialog box (Figure 8), click Create new Elastic IP, and then click OK.

Figure 8: Creating a New Elastic IP

Note: Remember, there's an hourly cost for the instance while it's running, so it's a good idea to stop (not terminate) the instance and restart it if you're not able to finish all the steps in this whitepaper in one session.

Connect to the Instance to Install the NVIDIA Driver and Visual C++ Redistributable

In this section, you'll download and install VNC Server on the instance using Microsoft Internet Explorer. But before you can do that, you'll need to turn off the Internet download protection feature that is enabled by default in
Internet Explorer 11 on Windows Server 2012 R2 While you’re on the instance you’ll also download and install the Visual C++ 2013 redistributable package Doing this manually is simpler than creating a setup program with a merge module The reason you’ll do this now is so you can create a fully prepared AMI of the instance that you can use later to deploy your web application with AWS Elastic Beanstalk For some of the steps in this section you’ll use the AWS To olkit on your local workstation; for others you’ll use the Amazon EC2 instance connected through RDC or VNC The transitions will be mentioned as needed After the status of your instance changes from “pending” to “running” follow these steps i n the AWS Toolkit: 1 The AWS Toolkit has a convenient option to log in directly with the key pair we created previously without requiring you to enter the administrator password This works until you change the password on the instance which you’ll need to do to connect with VNC Rightclick the instance in the AWS Toolkit and then click Open Remote Desktop In the Open Remote Desktop dialog box (Figure 9) leave the Use EC2 keypair to log on option selected and then click OK The toolkit automatically decrypts the AWSgenerated password from the key pair passes it to Microsoft RDC launches RDC and then logs you into the Amazon EC2 instance ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 15 of 42 Figure 9: Open Remote Desktop For steps 2 7 you’ll use RDC connected to the instance During steps 2 12 if you get a popup message indicating that Windows has updates to install on the instance you should go ahead and apply those so they’ll be included in the AMI you’ll create in step 14 If Windows Update requires a reboot restart your machine and then resume th ese instructions after reconnecting through RDC (or VNC Viewer) 2 You must first change the Windows administrator password on the instance to a password you can remember In Windows Server 
2012 R2 click the Windows icon (Start button) in the lowerleft corner of your screen to get to the Start menu Click Administrative Tools Double click Computer Management Expand Local Users and Groups Click once on Users Rightclick Administrator and then click Set Password Click Proceed Enter the new password and then click OK Now the AWSgenerated password is obsolete Close Computer Management 3 To enable file downloads in Internet Explorer click the Windows Start button again Click Server Manager In the left pane click Local Server You should see that Internet Explorer enhanced security configuration is turned on by default Click to turn it off for administrators and then click OK Close Server Manager 4 To run Visual C++ code you’ll need to install the Visual C++ 2013 redistributable from Microsoft It includes the C++ runtime and the AMP DLL file Click the Windows Start button again Click Internet Explorer Browse to the Microsoft download page for Visual C++ Redistributable ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 16 of 42 Packages for Visual Studio 2013 9 Click Download and choose the file vcredist_x64exe from the list of downloads Run the program after downloading it 5 Open Internet Explorer Browse to the RealVNC website 10 Download VNC Server for Windows The free version is adequate for this whitepaper but you will need to register with RealVNC to get a license Install VNC Server (you don’t need to install the Printer Driver or VNC Viewer) 6 On the Windows Start menu click All Programs to display all installed applications Under VNC click Enter VNC Server License Key Go through the VNC wizard to license your server software 7 Now you can close RDC but leave the instance running Now that you will no longer be using RDP with the instance we recommend that you delete the security group rule that permits RDP traffic to the instance You still need to leave port 5900 open for VNC 8 Install and launch the VNC Viewer 
program on your local workstation11 It prompts you for the VNC Server public IP address To retrieve the IP address rightclick your instance in AWS Explorer and then click Properties In the Properties dialog box (Figure 10) rightclick the Elastic IP value and then click Copy Paste the address into the VNC Server address box in VNC Viewer Figure 10: Getting the Elastic IP Address from the Instance Properties ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 17 of 42 9 When you connect to the instance in VNC Viewer it will prompt you to press Ctrl+Alt+Delete to log in Ordinarily that keystroke sequence is captured by your local workstation The trick is to slide your mouse toward the top center of the VNC Viewer window That will drop down the toolbar where you can click the Ctrl+Alt+Delete button to transmit the keystroke to the remote machine VNC Viewer shows the remote machine prompting you for your Windows administrator password Type the password that you set in Windows when you logged in previously with RDC Do steps 10 12 on the instance while connected through VNC 10 Open Internet Explorer to download the NVIDIA graphics driver As of this writing the latest version on the NVIDIA support site is NVIDIA GRID K520/K340 Release 33412 (Figure 11) Although the page title says 334 the version is 335 Regardless you should be fine if you get the latest version When the NVIDIA installation completes it prompts you to reboot You can save time if you complete the next few steps first Figure 11: Installing the NVIDIA Graphics Driver 11 Don’t reboot after installing the NVIDIA graphics driver Instead on the Windows Start screen type ec2 and click to run the EC2Config service To make the image compatible with AWS Elastic Beanstalk select the User Data box on the General tab (Figure 12) and choose Random for the Administrator Password on the Image tab (Figure 13) Click Apply and then click OK ArchivedAmazon Web Services – Optimizing ASPNET 
with C++ AMP on the GPU April 2015 Page 18 of 42 Figure 12: Checking User Data in EC2Config Figure 13: Checking Random Password in EC2Config 12 Click the Windows Start button Click Administrative Tools Double click Computer Management Click Device Manager Under display adapters you should see both the NVIDIA driver and the Microsoft Basic Display Adapter as shown in Figure 14 Rightclick Microsoft Basic Display Adapter and then click Disable ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 19 of 42 Figure 14: Disabling the Microsoft Basic Display Adapter in Device Manager 13 Now on your local workstation in AWS Explorer expand Amazon EC2 Instances Rightclick your GPU instance and choose Stop (do not choose Terminate ) This will automatically disconnect your VNC session Later you’ll use AWS Elastic Beanstalk to start a new instance when you deploy the code 14 After the instance status changes from stopping to stopped rightclick your GPU instance again in AWS Explorer and then click Create Image (EBS AMI) Give the image a name and description and then let it run in the background There is a small storage charge for the images you save in AWS but it’s convenient to be able to reuse the images with everything pre installed if you decide to terminate the instance Whenever you make configuration changes or apply Windows Update on your instance in the future you should create a new image and then optionally deregister your older images 15 After the image is created look in AWS Explorer under Amazon EC2 AMIs and jot down the AMI ID of the image you just created The ID is casesensitive Now that you have your own AMI you’re ready to switch hats and start working with the code in Visual Studio ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 20 of 42 Comparing the Performance of Various Matrix Multiplication Algorithms Before you deploy the code with AWS Elastic Beanstalk here’s a screenshot of the 
web application after it completes running The user interface is simple: it consists of an HTML table listing the timing and relative performance (versus the baseline) of each algorithm as shown in Figure 15 Figure 15: The ASPNET MVC Application Displaying the Results You’ll notice in the UI that the matrix size used is 1024 x 1024 There are 1536 CUDA cores on the NVIDIA GPU instance type in Amazon EC2 Because the outer loop of the algorithm will execute in parallel once for each row of the matrix 1024 was selected as the matrix size to take advantage of a large number of the CUDA cores Also note that the matrix size must be a multiple of the tile size used in the AMP tiling algorithm You may also notice a couple of curiosities in the relative performance of the algorithms First the performance of the basic C++ algorithm is almost identical to th e performance of the basic C# algorithm That’s interesting because many ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 21 of 42 developers suspect that C++ is about twice as fast as C# A possible explanation for this might be that the C# code is using ragged arrays which is a known optimization for the NET Framework Another curiosity is that the parallel C# code that uses TPL is about seven times faster than the serial C# code but the parallel C++ code that uses PPL is only about four times faster than the serial C++ code Since there are eight virtual cores on the instance we might expect a parallel algorithm to be about seven times faster There are ways to get more out of PPL but that’s outside the scope of this paper Working with the Code If you haven’t downloaded the Visual Studio solution and sour ce code for this whitepaper yet you should download it now 13 Open the CSharpMatrixMultiply solution in Visual Studio The solution includes two projects The ASPNET MVC project is adapted from the basic project that was created with the Visual Studio New Project wizard The following 
sections explain the C# code and C++ code in the projects The C# project has a dependency on the C++ DLL Deploying the Web Application with AWS Elastic Beanstalk To deploy the application by using the image and security group you created earlier: 1 (Recommended) Switch the build configuration in Visual Studio from Debug to Release 2 In Solut ion Explorer rightclick the CSharpMatrixMultiply project (not the CSharpMatrixMultiply solution ) and then click Publish to AWS 3 Click Next to accept the defaults in the first screen 4 In the Application Environment dialog box you must provide an environment name but the default name for this project is too long so just shorten it until the red border disappears from the text box (Figure 16) Click Next ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 22 of 42 Figure 16: Specifying the Application Environment Details 5 In the Amazon EC2 Launch Configuration screen (Figure 17) verify that Windows Server 2012 R2 is selected For the instance type select GPU Double Extra Large Select your key pair Finally you must provide the AMI ID of the image you created previously You can find that ID in the Amazon EC2 console under Images or in AWS Explorer under Amazon EC2 AMIs Note that you must enter the ID in lowercase eg ami 12345678 Click Next Next and Deploy ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 23 of 42 Figure 17: Picking the G2 instance type and Your Custom AMI ID To ensure smooth builds and deployments of this solution with AWS Elastic Beanstalk make sure that the version of your AWS Toolkit for Visual Studio is 1920 or higher The first time you deploy your project with AWS Elastic Beanstalk it can take 5 10 minutes When it’s done you may notice that the console or the AWS Toolkit temporarily reports that the deployment is complete but with errors This can be disconcerting but if you wait another minute you should see the status change to success 
To run your application, open AWS Explorer and expand the AWS Elastic Beanstalk node. Fully expand your environment name and double-click it to see the status pane displayed. The status will show as "Launching" for a few minutes. When the status changes to "Environment is healthy" (again, there could be a delay after it temporarily reports that the environment is unhealthy), click the URL at the top of the status pane. This should launch your default browser, and now you get to wait another couple of minutes while the application performs all seven matrix multiplications in the background. To keep things simple, the web application does not display a progress bar or use an AJAX framework (such as Knockout.js) for partial updates. (In your production code you would certainly want to consider implementing a feature for the user to see the progress of the computation running in the background, and to cancel it if desired.)

After running your application, you may change your program and need to deploy it again. Redeployment is much faster than an initial deployment. In Visual Studio Solution Explorer, right-click the menu for the web project (again, right-click the project, not the solution) and then click Republish to Environment.

Using .ebextensions with AWS Elastic Beanstalk

When you run the web application, the C++ DLL gets loaded into the IIS process on the web server. This locks the file on the server disk, which can prevent AWS Elastic Beanstalk from being able to overwrite it with a new version when you redeploy your application. One workaround is to connect through VNC and restart the IIS service. Another solution is to use the .ebextensions feature that is built into AWS Elastic Beanstalk. In Solution Explorer, notice the folder in the C# project called .ebextensions (prefaced by a dot). Any text files in this folder that have a file extension of .config will be executed on the server after the deployment. The only tricky thing is that Visual Studio opens .config files in a different editor that doesn't preserve line breaks, so you need to right-click the file and choose Open With and then Source Code (Text) Editor. Here is the file:

commands:
  restart iis:
    command: iisreset /restart
    waitForCompletion: 0

This .ebextensions file instructs AWS Elastic Beanstalk to run the iisreset command on the server. For more information, see the blog posts "Customizing Windows Elastic Beanstalk Environments" Part 1¹⁴ and Part 2¹⁵ on the AWS .NET Development blog.

Model Code for Data Passed Between Controller and View

The following code is the Model class in the web application. In this application, the data flows one way, from the Controller to the View:

public class TaskResults
{
    public int NumAlgorithms { get; set; }
    public int Dimension { get; set; }
    public string[] Description { get; set; }
    public string[] Time { get; set; }
    public string[] RelativeSpeedLabel { get; set; }
    public int[] PercentOfMax { get; set; }
    public string StatusMessage { get; set; }
    public string AMPDeviceName { get; set; }

    public TaskResults(int _NumAlgorithms)
    {
        NumAlgorithms = _NumAlgorithms;
        Description = new string[NumAlgorithms];
        Time = new string[NumAlgorithms];
        RelativeSpeedLabel = new string[NumAlgorithms];
        PercentOfMax = new int[NumAlgorithms];
        StatusMessage = string.Empty;
        AMPDeviceName = string.Empty;
    }
}

Accessing the Model in the View

The following code is the first few lines of the file Index.cshtml. You can see that the TaskResults object created in the Controller is retrieved through the MVC ViewBag, and then the @ syntax with the Razor engine is used on the viewdata object to insert data (e.g., @viewdata.AMPDeviceName) from the Model into HTML:

@using CSharpMatrixMultiply.Models;
@{
    ViewBag.Title = "Home Page";
    var viewdata = ViewData["TaskResults"] as TaskResults;
}

Matrix Multiplication Results (@viewdata.Dimension X @viewdata.Dimension)

AMP Default Device: @viewdata.AMPDeviceName

@viewdata.StatusMessage

Controller Code to Invoke Each Algorithm and Populate the Model The following code is the main Controller class in the web application It invokes each algorithm (except the fir st one) three times calculates the average elapsed time and stores the results in the TaskResults class (the Model) enum Algorithms // this must exactly duplicate enum in C++ { CSharp_Basic = 0 CSharp_ImprovedSerial = 1 CSharp_TPL = 2 CPP_Basic = 3 CPP_PPL = 4 CPP_AMP = 5 CPP_AMPTiling = 6 }; delegate float[][] CSharpMatrixMultiply(float[][] A float[][] B int N); const int TESTLOOPS = 3; const int N = 1024; // matrix size must be multiple of C++ tilesi ze public unsafe ActionResult Index() { int NumAlgorithms = EnumGetNames(typeof(Algorithms))Length; var rand = new Random(); double[] durations = new double[NumAlgorithms]; ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 27 of 42 var TaskResults = new TaskResults(NumAlgorithms); TaskResultsDescription[0] = ""C# Basic Serial (CPU)""; TaskResultsDescription[1] = ""C# Improved Serial (CPU)""; TaskResultsDescription[2] = ""C# Parallel with TPL (CPU)""; TaskResultsDescription[3] = ""C++ Basic Serial (CPU)""; TaskResultsDescription [4] = ""C++ Parallel with PPL (CPU)""; TaskResultsDescription[5] = ""C++ Parallel with AMP (GPU)""; TaskResultsDescription[6] = ""C++ Parallel with AMP Tiling (GPU)""; TaskResultsDimension = N; TaskResultsNumAlgorithms = NumAlgorithms; ViewData[""TaskResults""] = TaskResults; // According to // http://wwwheatonresearchcom/content/choosing bestcarraytype matrixmultiplication // ragged arrays perform better in C# than 2D arrays for matrix multiplication float[][] A = Cre ateRaggedMatrix(N); float[][] B = CreateRaggedMatrix(N); FillRaggedMatrix(A N rand); FillRaggedMatrix(B N rand); // C++ doesn't need ragged arrays for performance and it's easier to marshall // and process the data as 2D arrays float[] A2 = new float[N N]; float[] B2 = new float[N N]; // for comparing results use the 
same random data in C++ as in C# CopyRaggedMatrixTo2D(A A2 N); CopyRaggedMatrixTo2D(B B2 N); // warm up AMP and get GPU name before timing var sb = new StringBuilder(256); CPPWrapperWarmUpAMP(sb sbCapacity); TaskResultsAMPDeviceName = sbToString(); //*** Basic C# Save this original result for future comparisons long start = DateTimeNowTicks; float[][] original = MatrixMultiplyBasic(A B N); long stop = DateTimeNowTicks; durations[0] = (stop start) / 100000000; if (!RunCSharpAlgorithm( original A B N MatrixMultiplySerial ""C# Improved Serial"" (int)AlgorithmsCSharp_ImprovedSerial TaskResults ref durations)) { return PartialView(); ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 28 of 42 } if (!RunCSharpAlgorithm( original A B N MatrixMultiplyTPL ""C# TPL"" (int)AlgorithmsCSharp_TPL TaskResults ref durations)) { return PartialView(); } if (!RunCPPAlgorithm(original A2 B2 N ""C++ Basic"" (int)AlgorithmsCPP_Basic TaskResults ref duratio ns)) { return PartialView(); } if (!RunCPPAlgorithm(original A2 B2 N ""C++ PPL"" (int)AlgorithmsCPP_PPL TaskResults ref durations)) { return PartialView(); } if (!RunCPPAlgorithm(original A2 B2 N ""C++ AMP"" (int)AlgorithmsCPP_AMP TaskResults ref durations)) { return PartialView(); } if (!RunCPPAlgorithm(original A2 B2 N ""C++ AMP Tiling"" (int)AlgorithmsCPP_AMPTiling TaskResults ref durations)) { return PartialView(); } var slowest = durationsMax(); var fastest = durationsMin(); // populate the Model for the HTML table in the View for (int k = 0; k < NumAlgorithms; k++) { TaskResultsTime[k] = stringFormat (""{0:0000}"" durations[k]); TaskResultsRelativeSpeedLabel[k] = stringFormat(""{0:00}X"" slowest / durations[k]); TaskResultsPercentOfMax[k] = (int)(fastest / durations[k] * 1000); } return PartialView(); } ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 29 of 42 bool RunCSharpAlgorithm( float[][] original float[][] A float[][] B int N CSharpMatrixMultiply 
function string FunctionName int AlgorithmIndex TaskResults results ref double[] durations) { double[] test_durations = new double[TESTLOOPS]; for (int k = 0; k < TESTLOOPS; k++) { long start = DateTimeNowTicks; float[][] C = function(A B N); test_durations[k] = (DateTimeNowTicks start) / 100000000; if (!CompareMatrixes(original C N)) { resultsStatusMessage = ""Error verifying "" + FunctionName; return false; } } durations[AlgorithmIndex] = test_durationsAverage(); return true; } unsafe bool RunCPPAlgorithm( float[][] original float[] A2 float[] B2 int N string FunctionName int AlgorithmIndex TaskResults results ref double[] durations) { double[] test_durations = new double[TESTLOOPS]; for (int k = 0; k < TESTLOOPS; k++) { // allocate memory in C# to simplify marshalling/deallocation float[] C2 = new float[N N]; long start = DateTimeNowTicks; fixed (float* pA2 = &A2[0 0]) fixed (float* pB2 = &B2[ 0 0]) fixed (float* pC2 = &C2[0 0]) { var error = new StringBuilder(1024); // allocate string memory ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 30 of 42 if (!CPPWrapperCallCPPMatrixMultiply(AlgorithmIndex pA2 pB2 pC2 N error errorCapacity) ) { resultsStatusMessage = errorToString(); return false; } } if (!CompareRaggedMatrixTo2D(original C2 N)) { resultsStatusMessage = ""Error verifying "" + Funct ionName; return false; } test_durations[k] = (DateTimeNowTicks start) / 100000000; } durations[AlgorithmIndex] = test_durationsAverage(); return true; } // Standard algorithm float[][] MatrixMultiplyBasic(fl oat[][] A float[][] B int N) { float[][] C = CreateRaggedMatrix(N); // C is the result matrix for (int i = 0; i < N; i++) for (int j = 0; j < N; j++) for (int k = 0; k < N; k++) C[i][j] += A[i][k] * B[k][j]; return C; } // This function was developed by Heaton Research and is licensed under the Apache License Version 20 // available here: https://wwwapacheorg/licenses/LICENSE 20html // Improve the basic serial algorithm with 
optimized index order float[][] MatrixMultiplySerial(float[][] A float[][] B int N) { float[][] C = CreateRaggedMatrix(N); // according to http://wwwheatonresearchcom/content/choosing bestc arraytypematrixmultiplication // this ikj index order performs the best for C# matrix multiplication for (int i = 0; i < N; i++) { float[] arowi = A[i]; float[] crowi = C[i]; for (int k = 0; k < N; k++) { float[] browk = B[k]; float aik = arowi[k]; for (int j = 0; j < N; j++) { ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 31 of 42 crowi[j] += aik * browk[j]; } } } return C; } // Parallel algorithm using TPL float[][] MatrixMultiplyTPL(float[][] A float[][] B int N) { float[][] C = Create RaggedMatrix(N); ParallelFor(0 N i => { float[] arowi = A[i]; float[] crowi = C[i]; for (int k = 0; k < N; k++) { float[] browk = B[k]; float aik = arowi[k]; for (int j = 0; j < N; j++) { crowi[j] += aik * browk[j]; } } }); return C; } C# Basic Serial (CPU) A basic algorithm for matrix multiplication is used as the baseline for the algorithms in subsequent sections There is only one optimization applied in this basic algorithm When using twodimensional arrays in the NET Framework method calls would ordinarily be made to the Array class Since the inner loop executes so many times that’s expensive But there is a simpl e workaround: use ragged arrays For example instead of declaring a 10x20 array like this: double[ ] MyArray = new double[1020]; declare it like this and create each row as a separate array of 20 columns in a for loop: double[][] MyArray = new double[10 ][]; for (int i = 0; i < 10; i++) ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 32 of 42 MyArray[i] = new double[20]; Here is the code for basic matrix multiplication This will execute in serial fashion on the CPU: float[][] MatrixMultiplyBasic(float[][] A float[][] B int N) { float[][] C = CreateRagged Matrix(N); // C is the result matrix for (int i = 
0; i < N; i++) for (int j = 0; j < N; j++) for (int k = 0; k < N; k++) C[i][j] += A[i][k] * B[k][j]; return C; } C# Optimized Serial (CPU) The code for this algorithm was obtained from the article Choosing the Best C# Array Type for Matrix Multiplication16 By Heaton Research In the article the author writes several variations of the order of the for loop indexes and measures the timing of each For this whitepaper we are using the variation that was found to perform the best with the NET Framework 45 float[][] MatrixMultiplySerial(float[][] A float[][] B int N) { float[][] C = CreateRaggedMatrix(N); // according to http://wwwheatonresearchcom/content/choosing bestc arraytypematrixmultiplication // this ikj index order performs the best for C# matrix multiplication for (int i = 0; i < N; i++) { float[] arowi = A[i]; float[] crowi = C[i]; for (int k = 0; k < N; k++) { float[] browk = B[k]; float aik = arowi[k]; for (int j = 0; j < N; j++) ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 33 of 42 { crowi[j] += aik * browk[j]; } } } return C; } C# Parallel with TPL (CPU) The following code simply replaces the standard outer loop in the previous algorithm with a ParallelFor loop from the NET Framework Task Parallel Library (TPL) For more information see Matrix Multiplication in Parallel with C# and the TPL17 by James D McCaffrey float[][] MatrixMulti plyTPL(float[][] A float[][] B int N) { float[][] C = CreateRaggedMatrix(N); ParallelFor(0 N i => { float[] arowi = A[i]; float[] crowi = C[i]; for (int k = 0; k < N; k++) { float[] browk = B[k] ; float aik = arowi[k]; for (int j = 0; j < N; j++) { crowi[j] += aik * browk[j]; } } }); return C; } C++ Basic Serial (CPU) If you decide to build your own program you must follow the steps in the blog post How to use C++ AMP from C#18 on the Parallel Programming with NET ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 34 of 42 blog on MSDN If you only 
want to download and run the sample code provided with this whitepaper there is no need to follow that procedure because those steps have already been included in the Visual Studio solution One difference between our solution and the information in the blog post is that our solution uses all 64bit code When combining C# and C++ you need to be careful to use the same platform in each language The platform is usually set to Any CPU in C# but it must be changed to x64 in the Visual Studio Configuration Manager as shown in Figure 18 Figure 18: Setting the Platform to x64 in the Visual Studio Configuration Manager See the blog post Debugging VS2013 websites using 64bit IIS Express19 for additional helpful information Before you can invoke C++ functions from C# you need to declare them for P/Invoke on the C# side The following code shows the CPPWrapper class in the Controller folder in the Visual Studio solution As required these methods are declared with the unsafe keyword in C# Rather than create a public entry point for each C++ algorithm it was deemed a bit cleaner to create a single function to call each one based on the algorithm index passed in This simplifies the exception handling which had to be written in C++ I would have liked to write a single exception handler in C# for all the calls to the different algorithms ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 35 of 42 including C++ but it was necessary to write an error handler in C++ for the error codes that can be returned by C++ AMP public class CPPWrapper { [DllImport(""CPPMatrixMultiplydll"" CallingConvention = CallingConventionStdCall CharSet = CharSetUnicode)] public extern unsafe static bool CallCPPMatrixMultiply(int algorithm float* A float* B float* C int N StringBuilder error int errsize); [DllImport(""CPPMatrixMultiplydll"" CallingConvention = CallingConventionStdCall CharSet = CharSetUnicode)] public extern unsafe static void WarmUpAMP(StringBuilder buffer 
int bufsize); } Here is the C++ dispatcher function which is exported for C#: extern ""C"" __declspec (dllexport) bool _stdcall CallCPPMatrixMultiply(int algorithm flo at A[] float B[] float C[] int N wchar_t* error size_t errsize) { try { switch (algorithm) { case Algorithms::CPP_Basic: MatrixMultiplyBasic(A B C N); break; case Algorithms::CPP_PPL: MatrixMultiplyPPL(A B C N); br eak; case Algorithms::CPP_AMP: MatrixMultiplyAMP(A B C N); break; case Algorithms::CPP_AMPTiling: MatrixMultiplyTiling(A B C N); break; default: wcscpy_s(error errsize L""Invalid C++ algorithm index""); return false; } } catch (concurrency::runtime_exception& ex) ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 36 of 42 { std::wstring result = stows(exwhat()); wcscpy_s(error errsize resultc_str()); return false; } return true; } Now that you’ve taken care of those preliminaries you’re ready to implement the C++ function for basic matrix multiplication It looks very similar to the basic algorithm in C# except that it doesn’t use ragged arrays and it introduces a temporary sum variable to reduce array references to the result array in the inner loop void MatrixMultiplyBasic(float A[] float B[] float C[] int N) { for (int i = 0; i < N; i++) { for (int j = 0; j < N; j++) { float sum = 00; for (int k = 0; k < N; k++) { sum += A[i*N + k] * B[k*N + j]; } C[i*N + j] = sum; } } } C++ Parallel with PPL (CPU) The next optimization is to rewrite the serial C++ function as a parallel function This code will still be running on the CPU but it will give us an interesting comparison with the parallel code we’ll write later to run on the GPU In the past writing parallel code in Windows with the Win32 thread APIs was complicated There are still many difficulties in multithreaded programming but ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 37 of 42 now the Microsoft Parallel Patterns Library (PPL) makes it much easier For more 
information about PPL see the following:  This article in the MSDN Library explains a parallel matrix multiplication algorithm written in C++ using PPL : How to: Write a parallel_for Loop20  This article describes several optimization techniques for writing parallel for loops in C++: C++11: Multicore Programming – PPL Parallel Aggregation Explained 21 Here’s the non optimized parallel C++ function: void MatrixMultiplyPPL(float A[] float B[] float C[] int N) { parallel_for(0 N [&](int i) { for (int j = 0; j < N; j++) { float sum = 00; for (int k = 0; k < N; k++) { sum += A[i* N + k] * B[k*N + j]; } C[i*N + j] = sum; } }); } C++ Parallel with AMP (GPU) Now you’re ready to write AMP code To get started you may want to review the blog post How to measure the performance of C++ AMP alglorithms22 on the Parallel Programming in Native Code blog on MSDN As that author points out there is overhead when AMP initializes itself on first use It enumerates the GPU devices in the system and picks the default one The idea of warming up AMP before timing it may or may not apply to your use case but the code provided with this whitepaper does implement such a function The following function returns the name of the GPU device so it can be displayed in the ASPNET MVC web page // Return the name of the default GPU device (or the emulator if no GPU exists) ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 38 of 42 // AMP will enumerate devices to initialize itself outside of the timing code extern ""C"" __declspec (dllexport) void _stdcall WarmUpAMP(wchar_t* buffer size_t bufsize) { accelerator default_device; wcscpy_s(buffer bufsize default_deviceget_description()c_str()); } String types in C# and C++ are not directly compatible but there are various ways to pass strings between them (this is called marshaling ) In all cases it’s important to pay attention to where the string memory is allocated and how it will be freed The P/Invoke declaration 
The P/Invoke declaration in C# must be carefully written to match the string-passing technique you decide to use in C++. The technique used in the previous code is to allocate a StringBuilder object with a fixed capacity in C# before passing it into C++. That way, the C# side is responsible for freeing the memory when the object goes out of scope, which only happens after the C++ function is done writing to the memory. The C++ code just copies the name of the GPU device into the buffer passed in from C#.

The next task is to adapt the parallel C++ matrix multiplication algorithm to use AMP. The following AMP code is based on the Matrix Multiplication Sample23 on the Parallel Programming in Native Code blog on MSDN.

void MatrixMultiplyAMP(float A[], float B[], float C[], int N)
{
    extent<2> e_a(N, N), e_b(N, N), e_c(N, N);
    array_view<const float, 2> a(e_a, A);
    array_view<const float, 2> b(e_b, B);
    array_view<float, 2> c(e_c, C);
    c.discard_data(); // avoid copying memory to GPU
    parallel_for_each(c.extent, [=](index<2> idx) restrict(amp)
    {
        int row = idx[0];
        int col = idx[1];
        float sum = 0;
        for (int inner = 0; inner < N; inner++)
        {
            index<2> idx_a(row, inner);
            index<2> idx_b(inner, col);
            sum += a[idx_a] * b[idx_b];
        }
        c[idx] = sum;
    });
    c.synchronize();
}

C++ Parallel with AMP Tiling (GPU)

Finally, let's take another step with the AMP code to use a technique called tiling. In a nutshell, tiling is a method of optimizing the way the algorithm uses memory in the GPU. When you call C++ AMP from C#, there are four levels of memory you should be aware of:

 Managed memory. This lives in RAM associated with the CPU and the .NET Framework CLR managed process, and is controlled by the .NET Framework garbage collector. Data passed between C# and C++ must be "marshaled" between managed and unmanaged memory according to very particular rules, such as padding.
 Unmanaged memory. This also lives in RAM associated with the CPU, but this memory space requires Win32 memory APIs and does not
include a garbage collector.
 Global memory on the GPU. Programming in AMP requires that data be moved (with thread synchronization) between unmanaged memory and the GPU.
 Registers associated with each thread on the GPU. Accessing data in these registers can be 1,000 times faster than GPU global memory, so the idea is to move frequently accessed data into the registers. But the registers aren't large enough to hold an entire matrix, so algorithms must be written to process one "tile" at a time, then move another tile into the registers, and so on.

A full explanation of tiling is beyond the scope of this whitepaper, but this article by Daniel Moth covers it well.24 Here is the C++ AMP code with a tiling algorithm:

const int TILESIZE = 8;

// array size passed in must be a multiple of TILESIZE
void MatrixMultiplyTiling(float A[], float B[], float C[], int N)
{
    assert((N % TILESIZE) == 0);
    array_view<const float, 2> a(N, N, A);
    array_view<const float, 2> b(N, N, B);
    array_view<float, 2> c(N, N, C);
    c.discard_data();
    parallel_for_each(c.extent.tile<TILESIZE, TILESIZE>(),
        [=](tiled_index<TILESIZE, TILESIZE> t_idx) restrict(amp)
    {
        int row = t_idx.local[0];
        int col = t_idx.local[1];
        tile_static float locA[TILESIZE][TILESIZE];
        tile_static float locB[TILESIZE][TILESIZE];
        float sum = 0;
        for (int i = 0; i < a.extent[1]; i += TILESIZE)
        {
            locA[row][col] = a(t_idx.global[0], col + i);
            locB[row][col] = b(row + i, t_idx.global[1]);
            t_idx.barrier.wait();
            for (int k = 0; k < TILESIZE; k++)
                sum += locA[row][k] * locB[k][col];
            t_idx.barrier.wait();
        }
        c[t_idx.global] = sum;
    });
    c.synchronize();
}

Conclusion

This whitepaper demonstrated how to set up the G2 instance type in Amazon EC2 with Windows Server. The NVIDIA GPU on those instances provides 1,536 cores that developers can use for compute-intensive application functions. But programming the GPU requires the C or C++ language, whereas most Windows developers are using C#. This article showed how to pass data between C# and C++.
It also showed how to use the C++ AMP library to make GPU programming accessible and highly productive for C# web developers on Windows. The tiled matrix multiplication algorithm written in C++ AMP was hundreds of times faster than the basic algorithm written in C#.

Further Reading

 AWS Toolkit for Visual Studio25
 AWS for Windows and .NET Developer Center26
 Getting Started with Amazon EC2 Windows Instances27
 Elastic Beanstalk Documentation28
 C++ AMP documentation29
 ASP.NET MVC documentation30

Notes

1 http://d0.awsstatic.com/whitepapers/CSharpMatrixMultiply.zip
2 https://bitbucket.org/multicoreware/cppamp-driver-ng/overview
3 https://ampalgorithms.codeplex.com/documentation
4 http://blogs.msdn.com/b/nativeconcurrency/archive/2012/01/30/c-amp-sample-projects-for-download.aspx
5 http://aws.amazon.com/ec2/instance-types/
6 http://www.nvidia.com/object/cuda_home_new.html
7 http://aws.amazon.com/visualstudio/
8 http://aws.amazon.com/free/
9 http://www.microsoft.com/en-us/download/details.aspx?id=40784
10 http://www.realvnc.com/
11 http://www.realvnc.com/
12 http://www.nvidia.com/download/driverResults.aspx/74642/en-us
13 http://d0.awsstatic.com/whitepapers/CSharpMatrixMultiply.zip
14 http://blogs.aws.amazon.com/net/post/Tx1RLX98N5ERPSA/Customizing-Windows-Elastic-Beanstalk-Environments-Part-1
15 http://blogs.aws.amazon.com/net/post/Tx2EMAYCXUW3HAK/Customizing-Windows-Elastic-Beanstalk-Environments-Part-2
16 http://www.heatonresearch.com/content/choosing-best-c-array-type-matrix-multiplication
17 http://jamesmccaffrey.wordpress.com/2012/04/22/matrix-multiplication-in-parallel-with-c-and-the-tpl/
18 http://blogs.msdn.com/b/pfxteam/archive/2011/09/21/10214538.aspx
19 http://blogs.msdn.com/b/rob/archive/2013/11/14/debugging-vs2013-websites-using-64-bit-iis-express.aspx
20 http://msdn.microsoft.com/en-us/library/dd728073.aspx
21 https://katyscode.wordpress.com/2013/08/17/c11-multi-core-programming-ppl-parallel-aggregation-explained/
22 http://blogs.msdn.com/b/nativeconcurrency/archive/2011/12/28/how-to-measure-the-performance-of-c-amp-algorithms.aspx
23 http://blogs.msdn.com/b/nativeconcurrency/archive/2011/11/02/matrix-multiplication-sample.aspx
24 http://msdn.microsoft.com/en-us/magazine/hh882447.aspx
25 http://aws.amazon.com/visualstudio/
26 http://aws.amazon.com/net/
27 http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/EC2Win_GetStarted.html
28 http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html
29 http://msdn.microsoft.com/en-us/library/hh265137.aspx
30 http://www.asp.net/mvc

Optimizing Electronic Design Automation (EDA) Workflows on AWS
September 2018

This version has been archived. For the most recent version of this paper, see:
https://docs.aws.amazon.com/whitepapers/latest/semiconductor-design-on-aws/semiconductor-design-on-aws.html

© 2018, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Abstract
Introduction
EDA Overview
Benefits of the AWS Cloud
Improved Productivity
High Availability and Durability
Matching Compute Resources to Requirements
Accelerated Upgrade Cycle
Paths for Migrating EDA Workflows to AWS
Data Access and Transfer
Consider What Data to Move to Amazon S3
Dependencies
Suggested EDA Tools for Initial Proof of Concept (POC)
Cloud Optimized Traditional Architecture
Building an EDA Architecture on AWS
Hypervisors: Nitro and Xen
AMI and Operating System
Compute
Network
Storage
Licensing
Remote Desktops
User Authentication
Orchestration
Optimizing EDA Tools on AWS
Amazon EC2 Instance Types
Operating System Optimization
Networking
Storage
Kernel Virtual Memory
Security and Governance in the AWS Cloud
Isolated Environments for Data Protection and Sovereignty
User Authentication
Network
Data Storage and Transfer
Governance and Monitoring
Contributors
Document Revisions
Appendix A – Optimizing Storage
NFS Storage
Appendix B – Reference Architecture
Appendix C – Updating the Linux Kernel Command Line
Update a system with /etc/default/grub file
Update a system with /boot/grub/grub.conf file
Verify Kernel Line

Abstract

Semiconductor and electronics companies using electronic design automation (EDA) can significantly accelerate their product development lifecycle and time to market by taking advantage of the near-infinite compute and storage resources available on AWS. This whitepaper presents an overview of the EDA workflow, recommendations for moving EDA tools to AWS, and the specific AWS architectural components to optimize EDA workloads on AWS.

Introduction

The workflows, applications, and methods used for the design and verification of semiconductors, integrated circuits (ICs), and printed circuit boards (PCBs) have been largely unchanged since the invention of computer-aided engineering (CAE) and electronic design automation (EDA) software. However, as electronics systems and integrated circuits have become more complex, with smaller geometries, the computing power and infrastructure requirements to design, test, validate, and build these systems have grown significantly. CAE, EDA, and emerging workloads such as computational lithography and metrology have driven the need for massive-scale computing and data management in next-generation electronic products.

In the semiconductor and electronics sector, a large portion of the overall design time is spent verifying components, for example in the characterization of intellectual property (IP) cores and for full-chip functional and timing verifications. EDA support organizations (the specialized IT teams that provide infrastructure support for semiconductor companies) must invest in increasingly large server farms and high-performance storage systems to enable higher quality and faster turnaround of semiconductor test and validation. The introduction of new and upgraded IC fabrication technologies may require large amounts of compute and storage for relatively short times to enable rapid completion of hardware regression testing or to recharacterize design IP.

Semiconductor companies today use Amazon Web Services (AWS) to take advantage of a more rapid, flexible deployment of CAE and EDA infrastructure for the complete IC design workflow, from register transfer level (RTL) design to the delivery of GDSII files to a foundry for chip fabrication. AWS compute, storage, and higher-level services are available on a dynamic, as-needed basis without the significant up-front capital expenditure that is typically required for performance-critical EDA workloads.

EDA Overview

EDA workloads comprise workflows and a supporting set of software tools that enable the efficient design of microelectronics, and in particular semiconductor integrated circuits (ICs). Semiconductor design and verification relies on a set of commercial or open source tools, collectively referred to as EDA software, which
expedites and reduces time to silicon tape-out and fabrication. EDA is a highly iterative engineering process that can take from months, and in some cases years, to produce a single integrated circuit.

The increasing complexity of integrated circuits has resulted in an increased use of preconfigured or semi-customized hardware components, collectively known as intellectual property (IP) cores. These cores (provided by IP developers as generic gate-level netlists) are either designed in-house by a semiconductor company or purchased from a third-party IP vendor. IP cores themselves require EDA workflows for design and verification, and to characterize performance for specific IC fabrication technologies. These IP cores are used in combination with IC-specific custom-designed components to create a complete IC that often includes a complex system-on-chip (SoC) making use of one or more embedded CPUs, standard peripherals, I/O, and custom analog and/or digital components.

The complete IC itself, with all its IP cores and custom components, then requires large amounts of EDA processing for full-chip verification, including modeling (that is, simulating) all of the components within the chip. This modeling, which includes HDL source-level validation, physical synthesis, and initial verification (for example, using techniques such as formal verification), is known as the front-end design. The physical implementation, which includes floor planning, place and route, timing analysis, design rule check (DRC), and final verification, is known as the back-end design. When the back-end design is complete, a file is produced in GDSII format. The production of this file is known, for historical reasons, as tape-out. When completed, the file is sent to a fabrication facility (a foundry), which may or may not be operated by the semiconductor company, where a silicon wafer is manufactured. This wafer, containing perhaps thousands of individual ICs,
is then inspected, cut into dies that are themselves tested, packaged into chips that are tested again, and assembled onto a board or other system through highly automated manufacturing processes. All of these steps in the semiconductor and electronics supply chain can benefit from the scalability of the cloud.

Benefits of the AWS Cloud

Before discussing the specifics of moving EDA workloads to AWS, it is worth noting the benefits of cloud computing on the AWS Cloud.

Improved Productivity

Organizations that move to the cloud can see a dramatic improvement in development productivity and time to market. Your organization can achieve this by scaling out your compute needs to meet the demands of the jobs waiting to be processed. AWS uses per-second billing for our compute resources, allowing you to optimize cost by only paying for what you use, down to the second. By scaling horizontally, you can run more compute servers (that is, Amazon Elastic Compute Cloud [Amazon EC2] instances) for a shorter period of time and pay the same amount as if you were running fewer servers for a longer period of time. For example, because the number of compute hours consumed are the same, you could complete a 48-hour design regression in just two hours by dynamically growing your cluster by 24x or more in order to run many thousands of pending jobs in parallel. These extreme levels of parallelism are commonplace on AWS across a wide variety of industries and performance-critical use cases.

High Availability and Durability

Amazon EC2 is hosted in multiple locations worldwide. These locations comprise Regions and Availability Zones (AZs). Each AWS Region is a separate geographic area around the world, such as Oregon, Virginia, Ireland, and Singapore. Each AWS Region where Amazon EC2 runs is designed to be completely isolated from the other Regions. This design achieves the greatest possible fault tolerance and stability.
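The 48-hour-to-2-hour regression example above is just conservation of instance-hours; the arithmetic can be sketched as follows (the numbers are the example's illustrative figures, not a price quote, and the type names are hypothetical):

```cpp
// With per-second billing (simplified here to hours), total instance-hours,
// and therefore cost, stay constant as a cluster scales out: 24x the
// instances for 1/24th the wall-clock time.
struct Cluster {
    int instances;   // number of EC2 instances in the cluster
    double hours;    // wall-clock hours the cluster runs
};

double instanceHours(const Cluster& c)
{
    return c.instances * c.hours;
}
```

A 100-instance cluster running for 48 hours and a 2,400-instance cluster running for 2 hours consume identical instance-hours; only the turnaround time changes.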
Resources are not replicated across Regions unless you specifically configure your services to do so.

Within each geographic Region, AWS has multiple isolated locations known as Availability Zones. Amazon EC2 provides you the ability to place resources, such as EC2 instances, and data in multiple locations using these Availability Zones. Each Availability Zone is isolated, but the Availability Zones in a Region are connected through low-latency links. By taking advantage of both multiple Regions and multiple Availability Zones, you can protect against failures and ensure you have enough capacity to run even your most compute-intensive workflows. Additionally, this large global footprint enables you to position computing resources near your IC design engineers in situations where low-latency performance is important. For more information, refer to AWS Global Infrastructure.

Matching Compute Resources to Requirements

AWS offers many different configurations of hardware, called instance families, in order to enable customers to match their compute needs with those of their jobs. Because of this, and the on-demand nature of the cloud, you can get the exact systems you need, for the exact job you need to perform, for only the time you need it.

Amazon EC2 instances come in many different sizes and configurations. These configurations are built to support jobs that require both large and small memory footprints, high core counts of the latest-generation processors, and storage requirements from high IOPS to high throughput. By right-sizing the instance to the unit of work it is best suited for, you can achieve higher EDA performance at lower overall cost. You no longer need to purchase EDA cluster hardware that is entirely configured to meet the demands of just a few of your most demanding jobs. Instead, you can choose servers, launch entire clusters of servers, and scale these clusters up and down, uniquely optimizing each cluster for specific
applications and for specific stages of chip development.

For example, consider a situation where you're performing gate-level simulations for a period of just a few weeks, such as during the development of a critical IP core. In this example, you might need to have a cluster of 100 machines (representing over 2,000 CPU cores) with a specific memory-to-core ratio and a specific storage configuration. With AWS, you can deploy and run this cluster, dedicated only to this task, for only as long as the simulations require, and then terminate the cluster when that stage of your project is complete.

Now consider another situation in which you have multiple semiconductor design teams working in different geographic regions, each using their own locally installed EDA IT infrastructure. This geographic diversity of engineering teams has productivity benefits for modern chip design, but it can create challenges in managing large-scale EDA infrastructure (for example, to efficiently utilize globally licensed EDA software). By using AWS to augment or replace these geographically separated IT resources, you can pool all of your global EDA licenses in a smaller number of locations using scalable, on-demand clusters on AWS. As a result, you can more rapidly complete critical batch workloads such as static timing analysis, DRC, and physical verification.

Accelerated Upgrade Cycle

Another important reason to move EDA workloads to the cloud is to gain access to the latest processor, storage, and network technologies. In a typical on-premises EDA installation, you must select, configure, procure, and deploy servers and storage devices with the assumption that they remain in service for multiple years. Depending on the selected processor generation and time of purchase, this means that performance-critical production EDA workloads might be running on hardware devices that are already multiple years, and multiple processor generations, out of date. When using AWS, you have the opportunity to select and deploy the
latest processor generations within minutes, and configure your EDA clusters to meet the unique needs of each application in your EDA workflow.

Paths for Migrating EDA Workflows to AWS

When you begin the migration of EDA workflows to AWS, you will find there are many parallels with managing traditional EDA deployments across multiple data centers. Larger organizations in the semiconductor industry typically have multiple data centers that are geographically segregated because of the distributed nature of their design teams. These organizations typically choose specific workloads to run in specific locations, or replicate and synchronize data to allow for multiple sites to take the load of large-scale global EDA workflows. If your organization uses this approach, you need to consider that the specifics around topics such as data replication, caching, and license server management depend on many internal and organizational factors.

Most of the same approaches and design decisions related to multiple data centers also apply to the cloud. With AWS, you can build one or more virtual data centers that mirror existing EDA data center designs. The foundational technologies that enable things like compute resources, storage servers, and user workstations are available with just a few keystrokes. However, the real power of using the AWS Cloud for EDA workloads comes from the dynamic capabilities and enormous scale provided by AWS.

Data Access and Transfer

When you first consider running workloads in the cloud, you might envision a bursting scenario where cloud resources are set up as an augmentation to your existing on-premises compute cluster. Although you can use this model successfully, data movement presents a significant challenge when building an architecture to support bursting in a seamless way. Your organization might see the most benefit if you consider bursting on a project-by-project basis and choose to run entire
workflows on AWS, thereby freeing up existing on-premises resources to handle other tasks. By approaching cloud resources this way, you can use much simpler data transfer mechanisms, because you are not trying to sync data between AWS and your data centers.

Consider What Data to Move to Amazon S3

Prior to moving your EDA tools to AWS, consider the processes and methods that will be in place as you move from initial experiments to full production. For example, consider what data will be needed for an initial performance test or for a first workflow proof of concept (POC). Data is gravity, and moving only the limited amount of data needed to run your EDA tools to an Amazon Simple Storage Service (Amazon S3) bucket allows for flexibility and agility when building and iterating your architecture on AWS. There are several benefits to storing data in Amazon S3; for an EDA POC, using Amazon S3 allows you to iterate quickly, as the S3 transfer speed to an EC2 instance is up to 25 Gbps. With your data stored in an S3 bucket, you can more quickly experiment with different EC2 instance types, and also experiment with different working storage options, such as creating and tuning temporary shared file systems.

Deciding what data to transfer is dependent on the tools or designs you are planning to use for the POC. We encourage customers to start with a relatively small amount of POC data; for example, only the data required to run a single simulation job. Doing so allows you to quickly gain experience with AWS and build an understanding of how to build production-ready architecture on AWS while in the process of running an initial EDA POC workload.

Dependencies

Semiconductor design environments often have complex dependencies that can hinder the process of moving workflows to AWS. We can work with you to build an initial proof of concept or even a complex architecture. However, it is the designer's or tool engineer's
responsibility to unwind any legacy on-premises data dependencies. The initial POC process requires effort to determine which dependencies, such as shared libraries, need to be moved along with project data. There are tools available that help you build a list of dependencies, and some of these tools yield a file manifest that expedites the process of moving data to AWS. For example, one such tool is Ellexus Container Checker, which can be found on the AWS Marketplace. Dependencies can include authentication methods (for example, NIS), shared file systems, cross-organization collaboration, and globally distributed designs. (Identifying and managing such dependencies is not unique to cloud migration; semiconductor design teams face similar challenges in any distributed EDA environment.) Another approach may be to launch a net-new semiconductor project on AWS, which should significantly reduce the number of legacy dependencies.

Suggested EDA Tools for Initial Proof of Concept (POC)

An HDL compile and simulation workflow may be the fastest approach to launching an EDA POC on AWS or creating a production EDA environment. HDL files are typically not large, and the ability to use an on-premises license server (via VPN) reduces the additional effort of moving your licensing environment to AWS. HDL compile and simulation workflows are representative of other EDA workloads, including their need for shared file systems and some form of job scheduling.

Cloud Optimized Traditional Architecture

On AWS, compute and storage resources are available on demand, allowing you to launch what you need, when you need it. This enables a different approach to architecting your semiconductor design environment. Rather than having one large cluster where multiple projects are running, you can use AWS to launch multiple clusters. Because you can configure compute resources to increase or decrease on demand, you can build clusters that are
specific to different parts of the workflow, or even to specific projects. This allows for many benefits, including project-based cost allocation, right-sized compute and storage, and environment isolation.

Figure 1: Workload-specific EDA clusters on AWS

As seen in Figure 1, moving to AWS allows you to launch a separate set of resources for each of your EDA workloads (for example, a cluster). This multi-cluster approach can also be used for global and cross-organizational collaboration. The multi-cluster approach can be used, for example, to dedicate and manage specific cloud resources for specific projects, encouraging organizations to use only the resources required for their project.

Job Scheduler Integration

The EDA workflow that you build on AWS can be a similar environment to the one you have in your on-premises data center. Many, if not all, of the same EDA tools and applications running in your data center, as well as orchestration software, can also be run on AWS. Job schedulers, such as IBM Platform LSF, Adaptive PBS Pro, and Univa Grid Engine (or their open source alternatives), are typically used in the EDA industry to manage compute resources, optimize license usage, and coordinate and prioritize jobs. When you migrate to AWS, you may choose to use these existing schedulers essentially unchanged, to minimize the impact on your end-user workflows and processes. Most of these job schedulers already have some form of native integration with AWS, allowing you to use the master node to automatically launch cloud resources when there are jobs pending in the queue. You should refer to the documentation of your specific job management tool for the steps to automate resource allocation and management on AWS.

Building an EDA Architecture on AWS

Building out your production-ready EDA workflow on AWS requires an end-to-end examination of your current environment. This examination begins with the operating system you are using
for running your EDA tools, as well as your job scheduling and user management environments. AWS allows for a mix of architectures when moving semiconductor design workloads, and you can leverage some combination of the following two approaches:

• Build an architecture similar to a traditional cluster, using traditional job scheduling software, but ensuring that a cloud-native approach is used
• Use more cloud-native methods, such as AWS Batch, which uses containerization of your applications

Where needed, we will make the distinction when using AWS Batch can be advantageous, for example, when running massively parallel parameter sweeps.

Hypervisors: Nitro and Xen

Amazon EC2 instances use a hypervisor to divide resources on the server, so that each customer has separate CPU, memory, and storage resources for just that customer's instance. We do not use the hypervisor to share resources between instances, except for the T* family. On previous-generation instance types, for example the C4 and R4 families, EC2 instances are virtualized using the Xen hypervisor. On current-generation instances, for example C5, R5, and Z1d, we are using a specialized piece of hardware and a highly customized hypervisor based on KVM. This new hypervisor system is called Nitro. At the time of this writing, these are the Nitro-based instances: Z1d, C5, C5d, M5, M5d, R5, R5d. Launching Nitro-based instances requires that specific drivers for networking and storage be installed and enabled before the instance can be launched. We provide the details for this configuration in the next section.

AMI and Operating System

AWS has built-in support for numerous operating systems (OSs). For EDA users, CentOS, Red Hat Enterprise Linux, and Amazon Linux 2 are used more than other operating systems. The operating system, and the customizations that you have made in your on-premises environment, are the baseline for building out your EDA architecture on AWS. Before you
can launch an EC2 instance, you must decide which Amazon Machine Image (AMI) to use. An AMI contains the OS, any required OS and driver customizations, and may also include the application software. For EDA, one approach is to launch an instance from an existing AMI, customize the instance after launch, and then save this updated configuration as a custom AMI. Instances launched from this new custom AMI include the customizations that you made when you created the AMI.

Figure 2: Use an Amazon-provided AMI to build a customized AMI

Figure 2 illustrates the process of launching an instance with an AMI. You can select the AMI from the AWS Console or from the AWS Marketplace, and then customize that instance with your EDA tools and environment. After that, you can use the customized instance to create a new customized AMI that you can then use to launch your entire EDA environment on AWS. Note also that the customized AMI that you create using this process can be further customized. For example, you can customize the AMI to add additional application software, load additional libraries, or apply patches each time the customized AMI is launched onto an EC2 instance.

As of this writing, we recommend these OS levels for EDA tools (more detail on OS versions is provided in following sections):

• Amazon Linux and Amazon Linux 2 (verify certification with EDA tool vendors)
• CentOS 7.4 or 7.5
• Red Hat Enterprise Linux 7.4 or 7.5

These OS levels have the necessary drivers already included to support the current instance types, which include Nitro-based instances. If you are not using one of these levels, you must perform extra steps to take advantage of the features of our current instances. Specifically, you must build and enable enhanced networking, which relies on the elastic network adapter (ENA) drivers. See Network and Optimizing EDA Tools on AWS for more detailed information on ENA drivers and AMI drivers.

If you use an instance with Nitro (Z1d, C5, C5d, M5, M5d, R5, R5d), you must use an AMI that has the AWS ENA driver built and enabled and the NVMe drivers installed. At this time, a Nitro-based instance does not launch unless you have these drivers. These OS levels include the required drivers:

• CentOS 7.4 or later
• Red Hat Enterprise Linux 7.4 or later
• Amazon Linux or Amazon Linux 2 (current versions)

To verify that you can launch your AMI on a Nitro-based instance, first launch the AMI on a Xen-based instance type, and then run the c5_m5_checks_script.sh script found in the awslabs GitHub repo at awslabs/aws-support-tools/EC2/C5M5InstanceChecks/. The script analyzes your AMI and determines whether it can run on a Nitro-based instance; if it cannot, it displays recommended changes.

You can also import your own on-premises image to use for your AMI. This process includes extra steps, but may result in time savings. Before importing an on-premises OS image, you first require a VM image for your OS. AWS supports certain VM formats (for example, Linux VMs that use VMware ESX) that must be uploaded to an S3 bucket and subsequently converted into an AMI. Detailed information and instructions can be found at https://aws.amazon.com/ec2/vm-import/. The same operating system requirements mentioned above are also applicable to imported images (that is, you should use CentOS/RHEL 7.4 or 7.5, Amazon Linux, or Amazon Linux 2).

Compute

Although AWS has many different types and sizes of instances, the instance types in the compute-optimized and memory-optimized categories are typically best suited for EDA workloads. When running EDA software on AWS, you should choose instances that feature the latest generations of Intel Xeon processors, using a few different configurations to meet the needs of each application in your overall workflow.

The compute-optimized instance family
features instances that have the highest clock frequencies available on AWS and typically enough memory to run some memory-intensive workloads.

Typical EDA use cases for compute-optimized instance types:

• Simulations
• Synthesis
• Formal verification
• Regression tests

Z1d for EDA Tools

AWS has recently announced a powerful new instance type that is well optimized for EDA applications. The faster clock speed on the Z1d instance, with up to 4.0 GHz sustained Turbo performance, allows for EDA license optimization while achieving faster time to results. The Z1d uses an AWS-specific Intel Xeon Platinum 8000 series (Skylake) processor and is the fastest AWS instance type. The following list summarizes the features of the Z1d instance:

• Sustained all-core frequency of up to 4.0 GHz
• Six different instance sizes, with up to 24 cores (48 threads) per instance
• Total memory of 384 GiB
• Memory-to-core ratio of 16 GiB RAM per core
• Local instance store NVMe storage (as much as 1.8 TiB)
• Optimized for EDA and other high-performance workloads

Additional Compute-Optimized Instances: C5, C5d, C4

In addition to the Z1d, the C5 instance features up to 36 cores (72 threads) and up to 144 GiB of RAM. The processor used in the C5 is the same as in the Z1d, the Intel Xeon Platinum 8000 series (Skylake), but with a base clock speed of 3.0 GHz and the ability to turbo boost up to 3.5 GHz. The C5d instance is the same configuration as the C5 but offers as much as 1.8 TiB of local NVMe SSD storage.

Previous-generation C4 instances are also commonly used by EDA customers and remain a suitable option for certain workloads, such as those that are not memory intensive.

Memory-Optimized Instances: Z1d, R5, R5d, R4, X1, X1e

The Z1d instance is not only compute optimized but memory optimized as well, including 384 GiB of total memory. The Z1d has the highest clock frequency of any instance and, with the exception of our X1
and X1e instances, is tied for the most memory per core (16 GiB/core). If you require larger amounts of memory than what is available on the Z1d, consider another memory-optimized instance such as the R5, R5d, R4, X1, or X1e.

Typical EDA use cases for memory-optimized instance types:

• Place and route
• Static timing analysis
• Physical verification
• Batch-mode RTL simulation (multithread-optimized tools)

The R5 and R5d have the same processor as the Z1d and C5, the Intel Xeon Platinum 8000 series (Skylake). With the largest R5 and R5d instance types having up to 768 GiB of memory, EDA workloads that could previously only run on the X1 or X1e can now run on the R5 and R5d instances. These recently released instances are serving as a drop-in replacement for the R4 instance for both place and route as well as batch-mode RTL simulation. The r4.16xlarge instance is a viable option with a high core count (32) and a 15.25 GiB/core ratio. For this reason, we see a large number of customers using the r4.16xlarge instance type.

The X1 and X1e instance types can also be used for memory-intensive workloads; however, testing of EDA tools by Amazon internal silicon teams has indicated that most EDA tools will run well on the Z1d, R4, R5, or R5d instances. The need for the amount of memory provided on the X1 (1,952 GiB) and X1e (3,904 GiB) has been relatively infrequent for semiconductor design.

Hyper-Threading

Amazon EC2 instances support Intel Hyper-Threading Technology (HT Technology), which enables multiple threads to run concurrently on a single Intel Xeon CPU core. Each thread is represented as a virtual CPU (vCPU) on the instance. An instance has a default number of CPU cores, which varies according to instance type. Each vCPU is a hyperthread of an Intel Xeon CPU core, except for T2 instances.

You can specify the following CPU options to optimize your instance for semiconductor design workloads:

• Number of CPU cores: You can
customize the number of CPU cores for the instance. This customization may optimize the licensing costs of your software with an instance that has sufficient amounts of RAM for memory-intensive workloads but fewer CPU cores.

• Threads per core: You can disable Intel Hyper-Threading Technology by specifying a single thread per CPU core. This scenario applies to certain workloads, such as high-performance computing (HPC) workloads.

You can specify these CPU options during instance launch (currently supported through the AWS Command Line Interface [AWS CLI], an AWS software development kit [SDK], or the Amazon EC2 API only). There is no additional or reduced charge for specifying CPU options; you are charged the same amount as for instances that are launched with default CPU options. Refer to Optimizing CPU Options in the Amazon Elastic Compute Cloud User Guide for Linux Instances for more details and rules for specifying CPU options.

Divide the vCPU number by 2 to find the number of physical cores on the instance. You can disable HT Technology if you determine that it has a negative impact on your application's performance. See Optimizing EDA Tools on AWS for details on disabling Hyper-Threading.

Table 1 lists the instance types that are typically used for EDA tools.

Table 1: Instance specifications suitable for EDA workloads

Instance Name | Max Core Count* | CPU Clock Frequency | Max Total RAM (GiB) | Memory-to-Core Ratio (GiB/core) | Local NVMe
Z1d | 24 | 4.0 GHz | 384 | 16 | Yes
R5 / R5d | 48 | Up to 3.1 GHz | 768 | 16 | Yes, on R5d
R4 | 32 | 2.3 GHz | 488 | 15.25 | No
M5 / M5d | 48 | Up to 3.1 GHz | 384 | 8 | Yes, on M5d
C5 / C5d | 36 | Up to 3.5 GHz | 144 | 4 | Yes, on C5d
X1 | 64 | 2.3 GHz | 1,952 | 30.5 | Yes
X1e | 64 | 2.3 GHz | 3,904 | 61 | Yes
C4 | 18 | 2.9 GHz (boost to 3.5 GHz) | 60 | 3.33 | No

*NOTE: AWS uses vCPUs (each of which is an Intel hyperthread) to denote processors; for this table we are using
cores.

Network

Amazon enhanced networking technology enables instances to communicate at up to 25 Gbps for current-generation instances and up to 10 Gbps for previous-generation instances. In addition, enhanced networking reduces latency and network jitter. Enhanced networking is enabled by default on these operating system levels:

▪ Amazon Linux
▪ Amazon Linux 2
▪ CentOS 7.4 and 7.5
▪ Red Hat Enterprise Linux 7.4 and 7.5

If you have an older version of CentOS or RHEL, you can enable enhanced networking by installing the network module and updating the enhanced network adapter (ENA) support attribute for the instance. For more information about enhanced networking, including build and install instructions, refer to the Enhanced Networking on Linux page in the Amazon Elastic Compute Cloud User Guide for Linux Instances.

Storage

For EDA workloads running at scale on any infrastructure, storage can quickly become the bottleneck for pushing jobs through the queue. Traditional centralized filers serving network file systems (NFS) are commonly purchased from hardware vendors at significant cost in support of high EDA throughput. However, these centralized filers can quickly become a bottleneck for EDA, resulting in increased job run times and correspondingly higher EDA license costs. Planned or unexpected increases in EDA data, and the need to access that data across a fast-growing EDA cluster, mean that the filers eventually run out of storage space or become bandwidth constrained by either the network or the storage tier.

EDA applications can take advantage of the wide array of storage options available on AWS, resulting in reduced run times for large batch workloads. Achieving these benefits may require some amount of EDA workflow rearchitecting, but the benefits of making these optimizations can be numerous.

Types of Storage on AWS

Before discussing the different options for deploying EDA storage, it is
important to understand the different types of storage services available on AWS.

Amazon EBS

Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS cloud. EBS volumes are attached to instances over a high-bandwidth network fabric and appear as local block storage that can be formatted with a file system on the instance itself. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. Amazon EBS volumes offer the consistent, low-latency performance required to run semiconductor workloads.

When selecting your instance type, you should select an instance that is Amazon EBS-optimized by default. An Amazon EBS-optimized instance provides dedicated throughput to Amazon EBS, which is isolated from any other network traffic, and an optimized configuration stack to provide optimal Amazon EBS I/O performance. If you choose an instance that is not Amazon EBS-optimized, you can enable Amazon EBS optimization by using the --ebs-optimized parameter of the modify-instance-attribute command in the AWS CLI, but additional charges may apply (the cost is included with instances where Amazon EBS optimization is enabled by default).

Amazon EBS is the storage that backs all modern Amazon EC2 instances (with a few exceptions) and is the foundation for creating high-speed file systems on AWS. With Amazon EBS, it is possible to achieve up to 80,000 IOPS and 1,750 MB/s from a single Amazon EC2 instance. It is important to choose the correct EBS volume types when building your EDA architecture on AWS. Table 2 shows the EBS volume types that you should consider.

Table 2: EBS Volume Types

 | io1 | gp2* | st1 | sc1
Volume Type | Provisioned IOPS SSD | General Purpose SSD | Throughput Optimized HDD | Cold HDD
Volume Size | 4 GB–16 TB | 1 GB–16 TB | 500 GB–16 TB | 500 GB–16 TB
Max IOPS**/Volume | 32,000 | 10,000 | 500 | 250
Max Throughput/Volume | 500 MB/s | 160 MB/s | 500 MB/s | 250 MB/s

*Default volume type. **io1/gp2 based on 16 KB I/O size; st1/sc1 based on 1 MB I/O size.

When choosing your EBS volume types, consider the performance characteristics of each EBS volume. This is particularly important when building an NFS server or another file system solution. Achieving the maximum capable performance of an EBS volume depends on the size of the volume. Additionally, the gp2, st1, and sc1 volume types use a burst credit system, and this should be taken into consideration as well.

Each Amazon EC2 instance type has a throughput and IOPS limit. For example, the z1d.12xlarge has EBS limits of 1.75 GB/s and 80,000 IOPS. (For a chart that shows the Amazon EBS throughput expected for each instance type, refer to Instance Types that Support EBS Optimization in the Amazon Elastic Compute Cloud User Guide for Linux Instances.) To achieve these speeds, you must stripe multiple EBS volumes together, as each volume has its own throughput and IOPS limit. Refer to Amazon EBS Volume Types in the Amazon Elastic Compute Cloud User Guide for Linux Instances for detailed information about throughput, IOPS, and burst credits.

Enhancing Scalability with Dynamic EBS Volumes

Semiconductor design has a long history of overprovisioning hardware to meet the demands of backend workloads that may not be run for months or years after the customer specifications are received. On AWS, you provision only the resources you need, when you need them. For the typical on-premises EDA cluster, IT teams are accustomed to purchasing large arrays of network-attached storage even though their initial needs are relatively small.

A key feature of EBS storage is elastic volumes (available on all current-generation EBS volumes attached to current-generation EC2 instances). This feature allows you to provision a volume that meets your application requirements today and, as your requirements change,
allows you to increase the volume size, adjust performance, or change the volume type while the volume is in use. You can continue to use your application while the change takes effect. An on-premises installation normally requires manual intervention to adjust storage configurations. By leveraging EBS elastic volumes and other AWS services, you can automate the process of resizing your EBS volumes.

Figure 3 shows the automated process of increasing the volume size using Amazon CloudWatch (a metrics and monitoring service) and AWS Lambda (an event-driven, serverless compute service). The volume-increase event is triggered (for example, by a usage threshold) using a CloudWatch alarm and a Lambda function. The resulting increase is automatically detected by the operating system, and a subsequent file system grow operation resizes the file system.

Figure 3: Lifecycle for automatically resizing an EBS volume

Instance Storage

For use cases where the performance of Amazon EBS is not sufficient on a single instance, Amazon EC2 instances with instance store are available. Instance store is block-level storage that is physically attached to the instance. Because the storage is directly attached to the instance, it can provide significantly higher throughput and IOPS than is available through network-based storage such as Amazon EBS. However, because the storage is locally attached to the instance, data on the instance store does not persist when you stop or terminate the instance. Additionally, hardware failures on the instance would likely result in data loss. For these reasons, instance store is recommended for temporary scratch space or for data that is replicated off of the instance (for example, to Amazon S3). You can increase durability by choosing an instance with multiple NVMe devices and creating a RAID set with one or more parity devices.

The I3 instance family and the recently announced Z1d, C5d, M5d, and R5d instances are well suited for EDA
workloads requiring a significant amount of fast local storage, such as scratch data. These instances use NVMe-based storage devices and are designed for the highest possible IOPS. The Z1d and C5d instances each have up to 1.8 TiB of local instance store, and the R5d and M5d instances each have up to 3.6 TiB of local instance store.

The i3.16xlarge can deliver 3.3 million random IOPS at a 4 KB block size and up to 16 GB/s of sequential disk throughput. This performance makes the i3.16xlarge well suited for serving file systems for scratch or temporary data over NFS. Table 3 shows the instance types typically found in the semiconductor space that have instance store.

Table 3: Instances typically found in the EDA space with instance store

Instance Name | Max Raw Size | Number and Size of NVMe SSDs (GiB)
I3 | 15.2 TiB | 8 × 1,900
Z1d | 1.8 TiB | 2 × 900
R5d | 3.6 TiB | 4 × 900
M5d | 3.6 TiB | 4 × 900
C5d | 1.8 TiB | 2 × 900
X1 | 3.84 TiB | 2 × 1,920
X1e | 3.84 TiB | 2 × 1,920

The data on NVMe instance storage is encrypted using an XTS-AES-256 block cipher implemented in a hardware module on the instance. The encryption keys are generated using the hardware module and are unique to each NVMe instance storage device. All encryption keys are destroyed when the instance is stopped or terminated and cannot be recovered. You cannot disable this encryption, and you cannot provide your own encryption key.1

NVMe on EC2 Instances

Amazon EC2 instances based on the Nitro hypervisor feature local NVMe SSD storage and also expose Amazon Elastic Block Store (Amazon EBS) volumes as NVMe block devices. This is why certain operating system levels are required for Nitro-based instances; in other words, only an AMI that has the required NVMe drivers installed allows you to launch a Nitro-based instance. See AMI and Operating System for instructions on verifying that your AMI will run on a Nitro-based instance. If you use EBS volumes on Nitro-based instances, configure two
kernel settings to ensure optimal performance. Refer to the NVMe EBS Volumes page of the Amazon Elastic Compute Cloud User Guide for Linux Instances for more information.

Amazon Elastic File System (Amazon EFS)

You can opt to build your own NFS file server on AWS (discussed in the “Traditional NFS File Systems” section), or you can launch a shared NFS file system using Amazon Elastic File System (Amazon EFS). Amazon EFS provides simple, scalable, NFS-based file storage for use with Amazon EC2 instances in the AWS Cloud. A fully managed, petabyte-scale file system, Amazon EFS provides a simple interface that enables you to create and configure file systems quickly and easily.

With Amazon EFS, storage capacity is elastic, increasing and decreasing automatically as you add and remove files, so your applications have the storage they need, when they need it. Amazon EFS is designed for high availability and durability and can deliver high throughput when deployed at scale. The data stored on an EFS file system is redundantly stored across multiple Availability Zones. In addition, an EFS file system can be accessed concurrently from all Availability Zones in the region where it is located. However, because all Availability Zones must acknowledge file system actions (that is, create, read, update, or delete), latency can be higher than with traditional shared file systems that do not span multiple Availability Zones. Because of this, it is important to test your workloads at scale to ensure that EFS meets your performance requirements.

Amazon S3

Amazon Simple Storage Service (Amazon S3) is object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web. It is designed to deliver 99.999999999% durability, scale to handle millions of concurrent requests, and grow past trillions of objects worldwide. Amazon S3 offerings include the following range of storage classes:

• Amazon S3
Standard, for general-purpose storage of frequently accessed data

• Amazon S3 Standard – IA (infrequent access), for long-lived but less frequently accessed data

• Amazon Glacier, for long-term data archival

Amazon S3 also offers configurable lifecycle policies for managing your objects so that they are stored cost-effectively throughout their lifecycle. Amazon S3 is accessed via HTTP REST requests, typically through the AWS software development kits (SDKs) or the AWS Command Line Interface (AWS CLI). You can use the AWS CLI to copy data to and from Amazon S3 in the same way that you copy data to other remote file systems, using ls, cp, rm, and sync command line operations.

For EDA workflows, we recommend that you consider Amazon S3 as your primary data storage solution, to manage data uploads and downloads and to provide high data durability. For example, you can quickly and efficiently copy data from Amazon S3 to Amazon EC2 instances and Amazon EBS storage to populate a high-performance shared file system prior to launching a large batch regression test or timing analysis. However, we recommend that you do not use Amazon S3 to directly access (read/write) individual files during the runtime of a performance-critical application. The best architectures for high-performance, data-intensive computing available on AWS combine Amazon S3, Amazon EC2, Amazon EBS, and Amazon EFS to balance performance, durability, scalability, and cost for each specific application.

Traditional NFS File Systems

For EDA workflow migration, the first and most popular option for migrating storage to AWS is to build systems similar to your on-premises environment. This option enables you to migrate applications quickly, without having to rearchitect your applications or workflow. With AWS, it's simple to create a storage server by launching an Amazon EC2 instance with adequate network bandwidth and Amazon EBS throughput, attaching the appropriate EBS volumes, and sharing the file system to your compute nodes
using NFS.

When building storage systems for the immense scale that EDA can require for large-scale regression and verification tests, there are a number of approaches you can take to ensure that your storage systems are able to handle the throughput. The largest Amazon EC2 instances support 25 Gbps of network bandwidth and up to 80,000 IOPS and 1,750 MB/s to Amazon EBS. If the data is temporary or scratch data, you can use an instance with NVMe volumes to optimize the backend storage. For example, you can use the i3.16xlarge with 8 NVMe volumes, which is capable of up to 16 GB/s and 3.3 million IOPS for local access. The 25 Gbps network connection to the i3.16xlarge then becomes the bottleneck, rather than the backend storage. This setup results in an NFS server that is capable of approximately 2.5 GB/s.

For EDA workloads that require more performance in aggregate than can be provided by a single instance, you can build multiple NFS servers that are delegated to specific mount points. Typically, this means that you build servers for shared scratch, tools directories, and individual projects. By building servers in this way, you can right-size the server and the storage allocated to it according to the demands of a specific workload. When projects are finished, you can archive the data to a low-cost, long-term storage solution like Amazon Glacier. Then you can delete the storage server, thereby saving additional cost.

When building the storage servers, you have many options. Linux software RAID (mdadm) is often a popular choice for its ubiquity and stability. However, in recent years ZFS on Linux has grown in popularity, and customers in the EDA space use it for the data protection and expansion features that it provides. If you use ZFS, it's relatively simple to build a solution that pools a group of EBS volumes together to ensure higher performance of the volume, set up automatic hourly snapshots to provide for point-in-time rollbacks, and replicate data to backup
servers that are in other Availability Zones to provide fault tolerance. Although out of the scope of this document, if you want more automated and managed solutions, consider AWS partner storage solutions. Examples of partners that provide solutions for running high-performance storage on AWS include SoftNAS, WekaIO, and NetApp.

Cloud-Native Storage Approaches

Because of its low cost and strong scaling behaviors, Amazon S3 is well suited for EDA workflows, because you can adapt the workflows to reduce or eliminate the need for traditional shared storage systems. Cloud-optimized EDA workflows use a combination of Amazon EBS storage and Amazon S3 to achieve extreme scalability at very low cost, without being bottlenecked by traditional storage systems.

To take advantage of a solution like this, your EDA organization and your supporting IT teams might need to untangle many years of legacy tools, file system sprawl, and large numbers of symbolic links in order to understand what data you need for specific projects (or job decks), and then prepackage the data along with the job that requires it. The typical first step in this approach is to separate the static data (for example, application binaries, compilers, and so on) from dynamically changing data and IP, in order to build a front-end workflow that doesn't require any shared file systems. This is an important step for optimized cloud migration, and it also provides the benefit of increasing the scalability and reliability of legacy on-premises EDA workflows.

By using this less NFS-centric approach to managing EDA storage, operating system images can be regularly updated with static assets so that they're available when the instance is launched. Then, when a job is dispatched to the instance, the job can be configured to first download the dynamic data from Amazon S3 to local or Amazon EBS storage before launching the application. When complete, results are uploaded back to
Amazon S3, to be aggregated and processed when all jobs are finished. This method of decoupling compute from storage can provide substantial performance and reliability benefits, in particular for front-end RTL batch regressions.

Licensing

Application licensing is required for most EDA workloads, both on premises and on AWS. From a technical standpoint, managing and accessing licenses is unchanged when migrating to AWS.

License Server Access

On AWS, each Amazon EC2 instance launched is provided with a unique hostname and hardware (MAC) address, using Amazon elastic network interfaces that cannot be cloned or spoofed. Therefore, traditional license server technologies (such as Flexera) work natively on AWS without any modification. The inability to clone license servers, which AWS prevents by not allowing the duplication of MAC addresses, also provides EDA software vendors with increased confidence that EDA software can be deployed and used in a secure manner.

Because of the connectivity options available, which include the use of VPNs and AWS Direct Connect, you can run your license servers on AWS, using an Amazon EC2 instance, or within your own data centers. By allowing connectivity through a VPN or AWS Direct Connect between cloud resources and on-premises license servers, AWS enables users to seamlessly run workloads in any location, without having to split licenses and dedicate them to specific groups of compute resources.

Figure 4: License server deployment scenarios

Licensed applications are sometimes sensitive to network latency and jitter between the execution host and the license server. Although an internet-based VPN is often a good choice for connecting to AWS from your corporate data center, network latency over the internet can vary, affecting the performance and reliability of some licensed applications. Alternatively, a private, dedicated connection from your on-premises network to the nearest AWS
Region, using AWS Direct Connect, can provide a reliable network connection with consistent latency.

Improving License Server Reliability

License servers are critical components in almost any EDA computing infrastructure. A loss of license services can bring engineering work to a halt across the enterprise. Hosting licenses in the AWS Cloud can provide improved reliability of license services with the use of a floating elastic network interface (ENI). These ENIs have a fixed, immutable MAC address that can be associated with software license keys.

The implementation of this high-availability solution begins with the creation of an ENI that is attached to a license server instance. Your license keys are associated with this network interface. If a failure is detected on this instance, you or your custom automation can detach the ENI and attach it to a standby license server. Because the ENI maintains its IP and MAC addresses, network traffic begins flowing to the standby instance as soon as you attach the network interface to the replacement instance. This unique capability enables license administrators to provide a level of reliability that can be difficult to achieve using on-premises servers in a traditional data center. This is another example of the benefits of the elastic and programmable nature of the cloud.

Working with EDA Vendors

AWS works closely with thousands of independent software vendors (ISVs) that deliver solutions to customers on AWS, using methods that may include software as a service (SaaS), platform as a service (PaaS), customer self-managed, and bring-your-own-license (BYOL) models. In the semiconductor sector, AWS works closely with major vendors of EDA software to help optimize performance, scalability, cost, and application security. AWS can assist ISVs and your organization with deployment best practices as described in this whitepaper.

EDA vendors that are members of the AWS Partner
Network (APN) have access to a variety of tools, training, and support that is provided directly to the EDA vendor, which benefits EDA end customers. These partner programs are designed to support the unique technical and business requirements of APN members by providing them with increased support from AWS, including access to AWS partner team members who specialize in design and engineering applications. In addition, AWS has a growing number of Consulting Partners who can assist EDA vendors and their customers with EDA cloud migration.

Remote Desktops

While the majority of EDA workloads are executed as batch jobs (see Orchestration), EDA users may at times require direct console access to compute servers or use applications that are graphical in nature. For example, it might be necessary to view waveforms or step through a simulation to identify and resolve RTL regression errors, or it might be necessary to view a 2D or 3D graphical representation of results generated during signal integrity analysis. Some applications, such as printed circuit layout software, are inherently interactive and require a high-quality, low-latency user experience.

There are multiple ways to deploy remote desktops for such applications on AWS. You have the option of using open-source software such as Virtual Network Computing (VNC) or commercial remote desktop solutions available from AWS partners. You can also make use of AWS solutions, including NICE Desktop Cloud Visualization (NICE DCV) and Amazon WorkSpaces.

NICE DCV

NICE Desktop Cloud Visualization is a remote visualization technology that enables users to securely connect to graphic-intensive 3D applications hosted on an Amazon EC2 instance. With NICE DCV, you can provide high-performance graphics processing to remote users by creating secure client sessions. This enables your interactive EDA users to run resource-intensive applications on relatively low-end client
computers, by using one or more EC2 instances as remote desktop servers, including GPU acceleration of graphics rendered in the cloud.

In a typical NICE DCV scenario for EDA, a graphic-intensive application, such as a 3D visualization of an electromagnetic field simulation or a complex interactive schematic capture session, is hosted on a high-performance EC2 instance that provides a high-end GPU, fast I/O capabilities, and large amounts of memory. The NICE DCV server software is installed and configured on a server (an EC2 instance), and it is used to create a secure session. You use a NICE DCV client to remotely connect to the session and use the application hosted on the server. The server uses its hardware to perform the high-performance processing required by the hosted application. The NICE DCV server software compresses the visual output of the hosted application and streams it back to you as an encrypted pixel stream. Your NICE DCV client receives the compressed pixel stream, decrypts it, and then outputs it to your local display.

NICE DCV was specifically designed for high-performance technical applications and is an excellent choice for EDA, in particular if you are using the Red Hat Enterprise Linux or CentOS operating systems in your remote desktop environment. NICE DCV also supports modern Linux desktop environments, such as Gnome 3 on RHEL 7. NICE DCV uses the latest NVIDIA Grid SDK technologies, such as NVIDIA H.264 hardware encoding, to improve performance and reduce system load. NICE DCV also supports lossless-quality video compression when network and processor conditions allow, and it automatically adapts the video compression levels based on the network's available bandwidth and latency.

Amazon WorkSpaces

Amazon WorkSpaces is a managed, secure cloud desktop service. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and
quickly scale to provide thousands of desktops to workers across the globe. You can pay either monthly or hourly, just for the WorkSpaces you launch, which helps you save money when compared to traditional desktops and on-premises virtual desktop infrastructure (VDI) solutions. Amazon WorkSpaces helps you eliminate the complexity of managing hardware inventory, OS versions and patches, and VDI, which helps simplify your desktop delivery strategy. With Amazon WorkSpaces, your users get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device.

Amazon WorkSpaces offers a range of CPU, memory, and solid-state storage bundle configurations that can be dynamically modified, so you have the right resources for your applications. You don't have to waste time trying to predict how many desktops you need or what configuration those desktops should be, helping you reduce costs and eliminate the need to overbuy hardware. Amazon WorkSpaces is an excellent choice for organizations that want to centrally manage remote desktop users and applications, and for users who can make use of Windows or Amazon Linux 2 for the remote desktop environment.

User Authentication

User authentication is covered in more detail in the Security and Governance in the AWS Cloud section, but AWS offers several options for connecting with an on-premises authentication server, migrating users to AWS, or architecting an entirely new authentication solution.

Orchestration

Orchestration refers to the dynamic management of compute and storage resources in an EDA cluster, as well as the management (scheduling and monitoring) of individual jobs being processed in a complex workflow, for example, during RTL regression testing or IP characterization. For these and many other typical EDA workflows, the efficient use of compute and storage resources, as well as the efficient use of EDA software licenses, depends on having a well-orchestrated, well-architected batch computing environment.
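As a minimal illustration of the kind of orchestration decision involved, the Python sketch below sizes a compute fleet from the number of pending jobs, the sort of policy a job scheduler integration might apply when resizing an EDA cluster. The function name, the jobs-per-instance model, and the numbers are hypothetical, not part of any AWS or scheduler API:

```python
def desired_instance_count(pending_jobs: int,
                           jobs_per_instance: int,
                           max_instances: int) -> int:
    """Target fleet size: enough instances to drain the queue,
    capped at max_instances. (Scale-in policy is omitted here.)"""
    if jobs_per_instance <= 0:
        raise ValueError("jobs_per_instance must be positive")
    # Ceiling division: instances needed for the pending jobs.
    needed = -(-pending_jobs // jobs_per_instance)
    return min(max_instances, needed)

# Example: 500 queued simulation jobs, 36 job slots per instance,
# fleet capped at 10 instances.
print(desired_instance_count(500, 36, 10))  # prints 10
```

In a real deployment, the queue depth would come from the scheduler (for example, published as a custom CloudWatch metric), and the result would drive the cluster's capacity, which is the pattern the orchestration services described in this section automate.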
EDA workload management gains new levels of flexibility in the cloud, making resource and job orchestration an important consideration for your workload. AWS provides a range of solutions for workload orchestration: fully managed services enable you to focus more on job requests and output than on provisioning, configuring, and optimizing the cluster and job scheduler, while self-managed solutions enable you to configure and maintain cloud-native clusters yourself, leveraging traditional job schedulers on AWS or in hybrid scenarios. Describing all possible methods of orchestration for EDA is beyond the scope of this document; however, it is important to know that the same orchestration methods and job scheduling software used in typical legacy EDA environments can also be used on AWS. For example, commercial and open-source job scheduling software can be migrated to AWS and be enhanced by the addition of Auto Scaling (for dynamic resizing of EDA clusters in response to demand or other triggers), CloudWatch (for monitoring the compute environment, for example CPU utilization and server health), and other AWS services to increase performance and security while reducing costs. CfnCluster CfnCluster (CloudFormation cluster) is a framework that deploys and maintains high performance computing clusters on Amazon Web Services (AWS). Developed by AWS, CfnCluster facilitates both quick-start proof of concepts (POCs) and production deployments. CfnCluster supports many different types of clustered applications, including EDA, and can easily be extended to support different frameworks. CfnCluster integrates easily with existing job scheduling software and can automatically launch servers in response to queue depths and other triggers. CfnCluster is also able to launch shared file systems, cluster head nodes, license servers, and other resources. CfnCluster is open source and easily extensible for your unique workflow
requirements. AWS Batch AWS Batch is a fully managed service that enables you to easily run large-scale compute workloads on the cloud, including EDA jobs, without having to worry about resource provisioning or managing schedulers. Interact with AWS Batch via the web console, AWS CLI, or SDKs. AWS Batch is an excellent alternative for managing massively parallel workloads. EnginFrame EnginFrame is an HPC portal that can be deployed on the cloud or on premises. EnginFrame is integrated with a wide range of open-source and commercial batch scheduling systems and is a one-stop shop for job submission, control, and data management. All of the preceding options (CfnCluster, AWS Batch, and EnginFrame), as well as partner-provided solutions, are being successfully deployed by EDA users on AWS. Discuss your specific orchestration needs with an AWS technical specialist. Optimizing EDA Tools on AWS EDA software tools are critical for modern semiconductor design and verification. Increasing the performance of EDA software, measured both as a function of individual job run times and as the completion time for a complete set of EDA jobs, is important to reduce time-to-results/time-to-tapeout and to optimize EDA license costs. To this point, we have covered the solution components for your architecture on AWS. Now, in an effort to be more prescriptive, we present specific recommendations and configuration parameters that should help you achieve expected performance for your EDA tools. Choosing the right Amazon EC2 instance type and the right OS-level optimizations is critical for EDA tools to perform well. This section provides a set of recommendations that are based on actual daily use of EDA software tools on AWS: usage by AWS customers and by Amazon internal silicon design teams. The recommendations include such factors as instance type and configuration, as well as OS recommendations and other tunings, for a representative set of
EDA tools. These recommendations have been tested and validated internally at AWS and with EDA customers and vendors. Amazon EC2 Instance Types The following table highlights EDA tools and provides corresponding Amazon EC2 instance type recommendations.

Table 4: EDA tools and corresponding instance types

Instance Name | Max Core Count* | CPU Clock Frequency | Max Total RAM in GiB (GiB/core) | Local NVMe | Typical EDA Application
Z1d | 24 | 4.0 GHz | 384 (16) | Y | Formal verification; RTL simulation, batch; RTL simulation, interactive; RTL gate-level simulation
R5 / R5d | 48 | Up to 3.1 GHz | 768 (16) | Y (R5d) | RTL simulation, multi-threaded
R4 | 32 | 2.3 GHz | 488 (15.25) | | RTL simulation, multi-threaded; place & route
M5 / M5d | 48 | Up to 3.1 GHz | 384 (8) | Y (M5d) | Remote desktop sessions
C5 / C5d | 36 | Up to 3.5 GHz | 144 (4) | Y (C5d) | RTL simulation, interactive; RTL gate-level simulation
X1 | 64 | 2.3 GHz | 1952 (30.5) | Y | Place & route; static timing analysis
X1e | 64 | 2.3 GHz | 3904 (61) | Y | Place & route; static timing analysis
C4 | 18 | 2.9 GHz (boost to 3.5) | 60 (3.33) | | Formal verification; RTL simulation, interactive

*NOTE: AWS uses vCPUs (each an Intel Hyper-Thread) to denote processors; for this table we are using physical cores.

Operating System Optimization After you have chosen the instance types for your EDA tools, you need to customize and optimize your OS to maximize performance. Use a Current Generation Operating System If you are running a Nitro-based instance, you need to use certain operating system levels. If you run a Xen-based instance instead, you should still use one of these OS levels for EDA workloads (specifically required for ENA and NVMe drivers): • Amazon Linux or Amazon Linux 2 • CentOS 7.4 or 7.5 • Red Hat Enterprise Linux 7.4 or 7.5 Disable Hyper-Threading On current generation Amazon EC2 instance families (other than the T2 instance family), AWS instances have Intel Hyper-Threading Technology (HT
Technology) enabled by default. You can disable HT Technology if you determine that it has a negative impact on your application's performance. You can run this command to get detailed information about each core (physical core and Hyper-Thread):

$ cat /proc/cpuinfo

To view cores and the corresponding online Hyper-Threads, use the lscpu --extended command. For example, consider the z1d.2xlarge, which has 4 cores with 8 total Hyper-Threads. If you run the lscpu --extended command before and after disabling Hyper-Threading, you can see which threads are online and offline:

$ lscpu --extended
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE
0   0    0      0    0:0:0:0       yes
1   0    0      1    1:1:1:0       yes
2   0    0      2    2:2:2:0       yes
3   0    0      3    3:3:3:0       yes
4   0    0      0    0:0:0:0       yes
5   0    0      1    1:1:1:0       yes
6   0    0      2    2:2:2:0       yes
7   0    0      3    3:3:3:0       yes

$ ./disable_ht.sh

$ lscpu --extended
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE
0   0    0      0    0:0:0:0       yes
1   0    0      1    1:1:1:0       yes
2   0    0      2    2:2:2:0       yes
3   0    0      3    3:3:3:0       yes
4   -    -      -    :::           no
5   -    -      -    :::           no
6   -    -      -    :::           no
7   -    -      -    :::           no

Another way to view the vCPU pairs (that is, Hyper-Threads) of each core is to view the thread_siblings_list for each core. This list shows two numbers that indicate the Hyper-Threads for each core. To view all thread siblings, you can use the following command, or substitute the ""*"" with a CPU number:

$ cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -un
0,4
1,5
2,6
3,7

Disable HT Using the AWS Feature CPU Options To disable Hyper-Threading using CPU Options, use the AWS CLI with run-instances and the --cpu-options flag. The following is an example with the z1d.12xlarge:

$ aws ec2 run-instances --image-id ami-asdfasdfasdfasdf \
--instance-type z1d.12xlarge --cpu-options \
""CoreCount=24,ThreadsPerCore=1"" --key-name My_Key_Name

To verify that the CpuOptions were set, use describe-instances:

$ aws ec2 describe-instances --instance-ids i-1234qwer1234qwer
""CpuOptions"": {
    ""CoreCount"": 24,
    ""ThreadsPerCore"": 1
}

Disable HT on a Running System You can run
the following script on a Linux instance to disable HT Technology while the system is running. This can be set up to run from an init script so that it applies to any instance when you launch the instance. See the following example:

for cpunum in $(cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | \
    sort -un | cut -s -d, -f2)
do
    echo 0 | sudo tee /sys/devices/system/cpu/cpu${cpunum}/online
done

Disable HT Using the Boot File You can also disable HT Technology by setting the Linux kernel to only initialize the first set of threads, by setting maxcpus in GRUB to half of the vCPU count of the instance. For example, the maxcpus value for a z1d.12xlarge instance is 24 to disable Hyper-Threading:

GRUB_CMDLINE_LINUX_DEFAULT=""console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 maxcpus=24""

Refer to Appendix C – Updating the Linux Kernel Command Line for instructions on updating the kernel command line. Disabling HT Technology does not change the workload density per server, because these tools are demanding on DRAM size, and reducing the number of threads only helps as GB/core increases. Change Clocksource to TSC On previous generation instances that are using the Xen hypervisor, consider updating the clocksource to TSC, as the default is the Xen pvclock (which is in the hypervisor). To avoid communication with the hypervisor and use the CPU clock instead, use tsc as the clocksource. The tsc clocksource is not supported on Nitro instances; the default kvm-clock clocksource on these instance types provides similar performance benefits as tsc on previous generation Xen-based instances. To change the clocksource on a Xen-based instance, run the following command:

$ sudo su -c ""echo tsc > /sys/devices/system/cl*/cl*/current_clocksource""

To verify that the clocksource is set to tsc, run
the following command:

$ cat /sys/devices/system/cl*/cl*/current_clocksource
tsc

You set the clocksource in the initialization scripts on the instance. You can also verify that the clocksource changed with the dmesg command, as shown below:

$ dmesg | grep clocksource
clocksource: Switched to clocksource tsc

Limiting Deeper C-states (Sleep State) C-states control the sleep levels that a core may enter when it is inactive. You may want to control C-states to tune your system for latency versus performance. Putting cores to sleep takes time, and although a sleeping core allows more headroom for another core to boost to a higher frequency, it takes time for that sleeping core to wake back up and perform work. To limit deeper C-states, add intel_idle.max_cstate=1 to the kernel command line:

GRUB_CMDLINE_LINUX_DEFAULT=""console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 intel_idle.max_cstate=1""

Refer to Appendix C – Updating the Linux Kernel Command Line for instructions on updating the kernel command line. For more information about Amazon EC2 instance processor states, refer to the Processor State Control for Your EC2 Instance page in the Amazon Elastic Compute Cloud User Guide for Linux Instances. Enable Turbo Mode (Processor State) on Xen-Based Instances For our current Nitro-based instance types, you cannot change turbo mode, as this is already set to the optimized value for each instance. If you are running on a Xen-based instance that is using an entire socket or multiple sockets (for example, r4.16xlarge, r4.8xlarge, c4.8xlarge), you can take advantage of the turbo frequency boost, especially if you have disabled HT Technology. Amazon Linux and Amazon Linux 2 have turbo mode enabled by default, but other distributions may not. To ensure that turbo mode is enabled, run the following command:

$ sudo su -c ""echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo""

For more information about Amazon EC2 instance processor states, refer to the Processor
State Control for Your EC2 Instance page in the Amazon Elastic Compute Cloud User Guide for Linux Instances. Change to Optimal Spinlock Setting on Xen-Based Instances For instances that are using the Xen hypervisor (not Nitro), you should update the spinlock setting. Amazon Linux, Amazon Linux 2, and other distributions by default implement a paravirtualized mode of spinlock that is optimized for low-cost preempting virtual machines (VMs). This can be expensive from a performance perspective, because it causes the VM to slow down when running multithreaded with locks. Some EDA tools are not optimized for multi-core and consequently rely heavily on spinlocks. Accordingly, we recommend that EDA customers disable paravirtualized spinlock on EC2 instances. To disable the paravirtualized mode of spinlock on a Xen-based instance, add xen_nopvspin=1 to the kernel command line in /boot/grub/grub.conf and restart. The following is an example kernel command line:

kernel /boot/vmlinuz-4.4.41-36.55.amzn1.x86_64 root=LABEL=/ console=tty1 console=ttyS0 selinux=0 xen_nopvspin=1

Refer to Appendix C – Updating the Linux Kernel Command Line for instructions on updating the kernel command line. Networking AWS Enhanced Networking Make sure to use enhanced networking for all instances; this is a requirement for launching our current Nitro-based instances. For more information about enhanced networking, including build and install instructions, refer to the Enhanced Networking on Linux page in the Amazon Elastic Compute Cloud User Guide for Linux Instances. Cluster Placement Groups A cluster placement group is a logical grouping of instances within a single Availability Zone. Cluster placement groups provide non-blocking, non-oversubscribed, fully bisectional connectivity. In other words, all instances within the placement group can communicate with all other nodes within the placement group at the full line rate of 10 Gbps flows and 25 Gbps
aggregate, without any slowing due to oversubscription. For more information about placement groups, refer to the Placement Groups page in the Amazon Elastic Compute Cloud User Guide for Linux Instances. Verify Network Bandwidth One method to ensure you are configuring ENA correctly is to benchmark the instance-to-instance network performance with iperf3. Refer to Network Throughput Benchmark Linux EC2 for more information. Storage Amazon EBS Optimization Make sure to choose your instance and EBS volumes to suit the storage requirements for your workloads. Each EC2 instance type has an associated EBS limit, and each EBS volume type has limits as well. For example, the m4.16xlarge instance type has its own EBS throughput limit, and the io1 volume type has a maximum throughput of 500 MB/s per volume. NFS Configuration and Optimization Prior to setting up an NFS server on AWS, you need to enable Amazon EC2 enhanced networking. We recommend using Amazon Linux 2 for your NFS server AMI. A crucial part of high-performing NFS is the mount parameters on the client. For example:

rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2

A typical EFS mount command is shown in the following example:

$ sudo mount -t nfs4 -o \
nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
file-system-id.efs.aws-region.amazonaws.com:/ /efs-mount-point

When building an NFS server on AWS, choose the correct instance size and number of EBS volumes. Within a single family, larger instances typically have more network and Amazon EBS bandwidth available to them. The largest NFS servers on AWS are often built using m4.16xlarge instances with multiple EBS volumes striped together in order to achieve the best possible performance. Refer to Appendix A – Optimizing Storage for more information and diagrams for building an NFS server on AWS. Kernel Virtual Memory Typical operating system distributions are not tuned for large machines like those offered by AWS for EDA workloads. As a result, out-of-the-box
configurations often have suboptimal performance settings for kernel network buffers and storage page cache background draining. While the specific numbers may vary by instance size and application runs, the AWS EDA team has found that these kernel configuration settings and values are a good starting point to optimize memory utilization of the instances:

vm.min_free_kbytes=1048576
vm.dirty_background_bytes=107374182

Security and Governance in the AWS Cloud The cloud offers a wide array of tools and configurations that enable your organization to protect your data and IP in ways that are difficult to achieve with traditional on-premises environments. This section outlines some of the ways you can protect data in the AWS Cloud. Isolated Environments for Data Protection and Sovereignty Security groups are similar to firewalls: they ensure that access to specific resources is tightly controlled. Subnets containing compute and storage resources can be isolated so that they do not have any direct access to the internet. Users who need to access the environment must first connect to a bastion node (also referred to as a jump box) through secure protocols like SSH. From there, they can log into interactive desktops or job schedulers as permitted through your organization's security policies. Often, secure FTP is required in isolated environments. Organizations commonly use secure FTP to download tools from vendors, copy completed designs to fabrication facilities, or update IP from suppliers. To do this securely, you can set up an FTP client in an isolated subnet that has limited access to external IP addresses as necessary. Segment this client from the rest of the network, and configure strict controls and monitoring to ensure that everything on that server is secure. User Authentication When managing users and access to compute nodes, you can adapt the technologies that you use today to work in the same way
on AWS. Many organizations already have existing LDAP, Microsoft Active Directory, or NIS services that they use for authentication. Almost all of these services provide replication and functionality to support multiple data centers. With the appropriate network and VPN setup in place, you can manage these systems on AWS using the same methods and configurations as you do for any remote data center configuration. If your organization wants to run an isolated directory on the cloud, you have a number of options to choose from. If you want to use a managed solution, AWS Directory Service for Microsoft Active Directory (Standard) is a popular choice.2 AWS Microsoft AD (Standard Edition) is a managed Microsoft Active Directory (AD) that is optimized for small and midsize businesses (SMBs). Other options include running your own LDAP or NIS infrastructure on AWS, and more current solutions like FreeIPA. Network AWS employs a number of technologies that allow you to isolate components from each other and control access to the network. Amazon VPC Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can use both IPv4 and IPv6 in your VPC for secure and easy access to resources and applications. You can easily customize the network configuration for your Amazon VPC. For example, you can create a public-facing subnet for your FTP and bastion servers that has access to the internet. Then you can place your design and engineering systems in a private subnet with no internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access
to EC2 instances in each subnet. Additionally, you can create a hardware virtual private network (VPN) connection between your corporate data center and your VPC, and leverage the AWS Cloud as an extension of your organization's data center. Security Groups Amazon VPC provides advanced security features such as security groups and network access control lists to enable inbound and outbound filtering at the instance level and subnet level, respectively. A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign the instance to up to five security groups. Network access control lists (ACLs) control inbound and outbound traffic for your subnets. In most cases, security groups can meet your needs. However, you can also use network ACLs if you want an additional layer of security for your VPC. For more information, refer to the Security page in the Amazon Virtual Private Cloud User Guide. You can create a flow log on your Amazon VPC or subnet to capture the traffic that flows to and from the network interfaces in your VPC or subnet. You can also create a flow log on an individual network interface. Flow logs are published to Amazon CloudWatch Logs. Data Storage and Transfer AWS offers many ways to protect data, both in transit and at rest. Many third-party storage vendors also offer additional encryption and security technologies in their own implementations of storage in the AWS Cloud. AWS Key Management Service (KMS) AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. In addition, it uses Hardware Security Modules (HSMs) to protect the security of your keys. AWS KMS is integrated with other AWS services, including Amazon EBS, Amazon S3, Amazon Redshift, Amazon Elastic Transcoder, Amazon WorkMail, Amazon Relational Database
Service (Amazon RDS), and others, to help you protect the data you store with these services. AWS KMS is also integrated with AWS CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs. With AWS KMS, you can create master keys that can never be exported from the service. You use the master keys to encrypt and decrypt data based on policies that you define. Amazon EBS Encryption Amazon Elastic Block Store (Amazon EBS) encryption offers you a simple encryption solution for your EBS volumes, without requiring you to build, maintain, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted: • Data at rest inside the volume • All data in transit between the volume and the instance • All snapshots created from the volume The encryption occurs on the servers that host EC2 instances, providing encryption of data in transit from EC2 instances to Amazon EBS storage. EC2 Instance Store Encryption The data on NVMe instance storage is encrypted using an XTS-AES-256 block cipher implemented in a hardware module on the instance. The encryption keys are generated using the hardware module and are unique to each NVMe instance storage device. All encryption keys are destroyed when the instance is stopped or terminated and cannot be recovered. You cannot disable this encryption and you cannot provide your own encryption key.1 Amazon S3 Encryption When you use encryption with Amazon S3, Amazon S3 encrypts your data at the object level. Amazon S3 writes the data to disks in AWS data centers and decrypts your data when you access it. As long as you authenticate your request and you have access permissions, there is no difference in how you access encrypted or unencrypted objects. AWS KMS uses customer master keys (CMKs) to encrypt your Amazon S3 objects. You use AWS KMS via the
Encryption Keys section in the AWS Identity and Access Management (AWS IAM) console, or via AWS KMS APIs, to create encryption keys, define the policies that control how keys can be used, and audit key usage to ensure that they are used correctly. You can use these keys to protect your data in Amazon S3 buckets. Server-side encryption with AWS KMS-managed keys (SSE-KMS) provides the following: • You can choose to create and manage encryption keys yourself, or you can choose to generate a unique default service key on a customer/service/region level • The ETag in the response is not the MD5 of the object data • The data keys used to encrypt your data are also encrypted and stored alongside the data they protect • You can create, rotate, and disable auditable master keys in the IAM console • The security controls in AWS KMS can help you meet encryption-related compliance requirements If you require server-side encryption for all objects that are stored in your bucket, Amazon S3 supports bucket policies that can be used to enforce encryption of all incoming S3 objects. Because access to Amazon S3 is provided over HTTP endpoints, you can also leverage bucket policies to ensure that all data transfer in and out occurs over a TLS connection, to guarantee that data is also encrypted in transit. Governance and Monitoring AWS provides several services that you can use to enforce governance and monitor your AWS Cloud deployment: AWS Identity and Access Management (IAM) – Enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. For more information, refer to the AWS IAM User Guide. Amazon CloudWatch – Enables you to monitor your AWS resources in near real time, including EC2 instances, EBS volumes, and S3 buckets. Metrics such as CPU utilization, latency, and request
counts are provided automatically for these AWS resources. You can also provide CloudWatch access to your own logs or custom application and system metrics, such as memory usage, transaction volumes, or error rates, and CloudWatch can monitor these too. For more information, refer to the Amazon CloudWatch User Guide. Amazon CloudWatch Logs – Use to monitor, store, and access your log files from EC2 instances, AWS CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Logs. You can create alarms in CloudWatch and receive notifications of particular API activity as captured by CloudTrail, and use the notification to perform troubleshooting. For more information, refer to the Amazon CloudWatch Logs User Guide. AWS CloudTrail – Enables you to log, continuously monitor, and retain events related to API calls across your AWS infrastructure. CloudTrail provides a history of AWS API calls for your account, including API calls made through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. For more information, refer to the AWS CloudTrail User Guide. Amazon Macie – Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property, and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved. The fully managed service continuously monitors data access activity for anomalies, and generates detailed alerts when it detects risk of unauthorized access or inadvertent data leaks. Amazon GuardDuty – Amazon GuardDuty is a threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. It monitors for activity such as unusual API calls or potentially unauthorized
deployments that indicate a possible account compromise. GuardDuty also detects potentially compromised instances or reconnaissance by attackers. AWS Shield – AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. AWS Config – Use to assess, audit, and evaluate the configurations of your AWS resources. AWS Config continuously monitors and records your AWS resource configurations, and allows you to automate the evaluation of recorded configurations against desired configurations. For more information, refer to the AWS Config Developer Guide. AWS Organizations – Offers policy-based management for multiple AWS accounts. With Organizations, you can create Service Control Policies (SCPs) that centrally control AWS service use across multiple AWS accounts. Organizations helps simplify the billing for multiple accounts by enabling you to set up a single payment method for all the accounts in your organization through consolidated billing. You can ensure that entities in your accounts can use only the services that meet your corporate security and compliance policy requirements. For more information, refer to the AWS Organizations User Guide. AWS Service Catalog – AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.
Contributors The following individuals contributed to this document: • Mark Duffield, Worldwide Tech Leader, Semiconductors, Amazon Web Services • David Pellerin, Principal Business Development for Infotech/Semiconductor, Amazon Web Services • Matt Morris, Senior HPC Solutions Architect, Amazon Web Services • Nafea Bshara, VP/Distinguished Engineer, Amazon Web Services Document Revisions: September 2018 – 2018 update; October 2017 – First publication. Appendix A – Optimizing Storage There are many storage options on AWS, and some have already been covered at a high level. As semiconductor workloads rely on shared storage, building an NFS server may be the first step to running EDA tools. This section includes two possible NFS architectures that can achieve suitable performance for most workloads. NFS Storage [Figure: an NFS server capable of 1.75 GB/s with 75,000 IOPS – an r4.16xlarge NFS server (for tools, project data, etc.) with a 25 Gbps ENA connection to the NFS clients running EDA tools, using 6 provisioned IOPS EBS volumes (20K IOPS each) in a ZFS RAID6 pool] [Figure: an NFS server capable of 2.5 GB/s and more than 100,000 IOPS – an i3.16xlarge NFS server (for temporary/scratch data) with a 25 Gbps ENA connection to the NFS clients running EDA tools, using 8 NVMe volumes in a RAID0 pool with mdadm and an EXT4 file system] Appendix B – Reference Architecture The following diagram represents a common architecture for an elastic EDA computing environment in AWS. This design provides the following key infrastructure components: • Amazon EC2 Auto Scaling group for elasticity • AWS Direct Connect for dedicated connectivity to AWS • Amazon Linux WorkSpaces for remote desktops • Amazon EC2-based compute, license, and scheduler instances • Amazon EC2-based NFS servers and Amazon EFS for sharing file systems between compute instances
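The Appendix A sizing choices can be reasoned about with simple arithmetic: a RAID0 stripe aggregates the throughput of its member volumes up to the instance's own bandwidth ceiling. The following sketch illustrates that reasoning; the function names are invented for this example, and the instance bandwidth figure passed in below is an assumed illustrative value, not a published AWS limit (the 500 MB/s per-volume io1 ceiling is from the Storage section above).

```python
def striped_nfs_throughput_mbps(per_volume_mbps, volume_count, instance_limit_mbps):
    # Striping (RAID0) sums the member volumes' throughput, but the
    # instance's dedicated EBS/network bandwidth caps the aggregate,
    # so the smaller of the two governs.
    return min(per_volume_mbps * volume_count, instance_limit_mbps)

def volumes_for_target(target_mbps, per_volume_mbps):
    # Smallest whole number of identical volumes whose summed
    # throughput meets the target (ceiling division).
    return -(-target_mbps // per_volume_mbps)

# Example: 500 MB/s io1 volumes behind an assumed 1750 MB/s instance cap.
# Six volumes more than saturate the cap, matching the ~1.75 GB/s figure.
print(striped_nfs_throughput_mbps(500, 6, 1750))
```

The same check applies in reverse when sizing: if the stripe saturates the instance limit, adding volumes buys IOPS headroom but no additional sequential throughput, which is why moving up an instance family is the next lever.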
Figure 5: EDA architecture on AWS [Diagram: a corporate data center, plus remote desktop users at a home office, coffee shop, or customer site, connect over AWS Direct Connect and the internet to an AWS environment containing an EDA Auto Scaling group, a license server, job submission hosts, /tools, /project, and /scratch NFS file systems, Amazon EFS, an S3 bucket, and Amazon AI services] Appendix C – Updating the Linux Kernel Command Line Update a system with an /etc/default/grub file: 1. Open the /etc/default/grub file with your editor of choice: $ sudo vim /etc/default/grub 2. Edit the GRUB_CMDLINE_LINUX_DEFAULT line and make the necessary changes, for example: GRUB_CMDLINE_LINUX_DEFAULT=""console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 intel_idle.max_cstate=1"" 3. Save the file and exit your editor. 4. Run the following command to rebuild the boot configuration: $ grub2-mkconfig -o /boot/grub2/grub.cfg 5. Reboot your instance to enable the new kernel option: $ sudo reboot Update a system with a /boot/grub/grub.conf file: 1. Open the /boot/grub/grub.conf file with your editor of choice: $ sudo vim /boot/grub/grub.conf 2. Edit the kernel line, for example (some info removed for clarity): # created by imagebuilder default=0 timeout=1 hiddenmenu title Amazon Linux 2014.09 (3.14.26-24.46.amzn1.x86_64) root (hd0,0) kernel /boot/vmlinuz-ver.amzn1.x86_64 intel_idle.max_cstate=1 initrd /boot/initramfs-3.14.26-24.46.amzn1.x86_64.img 3. Save the file and exit your editor. 4. Reboot your instance to enable the new kernel option: $ sudo reboot Verify Kernel Line Verify the setting by running dmesg or checking the /proc/cmdline kernel command line: $ dmesg | grep ""Kernel command line"" [ 0.000000] Kernel command line: root=LABEL=/ console=tty1 console=ttyS0 maxcpus=18 xen_nopvspin=1 $ cat /proc/cmdline root=LABEL=/ console=tty1 console=ttyS0 maxcpus=18 xen_nopvspin=1 1.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html
2. http://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_simple_ad.html

Notes",General,consultant,Best Practices
Optimizing_Enterprise_Economics_with_Serverless_Architectures,This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/optimizing-enterprise-economics-with-serverless/optimizing-enterprise-economics-with-serverless.html

Optimizing Enterprise Economics with Serverless Architectures

September 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
Understanding Serverless Architectures
Is Serverless Always Appropriate?
Serverless Use Cases
AWS Serverless Capabilities
Service Offerings
Developer Support
Security
Partners
Case Studies
Serverless Websites, Web Apps, and Mobile Backends
IoT Backends
Data Processing
Big Data
IT Automation
Machine Learning
Conclusion
Contributors
Further Reading
Reference Architectures
Document Revisions

Abstract

This whitepaper is intended to help Chief Information Officers (CIOs), Chief Technology Officers (CTOs), and senior architects gain insight into serverless architectures and their impact on time to market, team agility, and IT economics. By eliminating idle, underutilized servers at the design level and dramatically simplifying cloud-based software designs, serverless approaches rapidly change the IT landscape. This whitepaper covers the basics of serverless approaches and the AWS serverless portfolio. It includes several case studies illustrating how existing companies are gaining significant agility and economic benefits from adopting serverless strategies. In addition, it describes how organizations of all sizes can use serverless architectures to architect reactive, event-based systems and quickly deliver cloud-native microservices at a fraction of conventional costs.

Introduction

Many companies are already gaining benefits from running applications in the public cloud, including cost savings from pay-as-you-go billing and improved agility through the use of on-demand IT resources. Multiple studies across
application types and industries have demonstrated that migrating existing application architectures to the cloud lowers the Total Cost of Ownership (TCO) and improves time to market.1 Relative to on-premises and private cloud solutions, the public cloud makes it significantly simpler to build, deploy, and manage fleets of servers and the applications that run on them. The public cloud has established itself as the new normal, with double-digit year-over-year growth since its inception.2

However, companies today have options beyond classic server or virtual machine (VM) based architectures to take advantage of the public cloud. Although the cloud eliminates the need for companies to purchase and maintain their hardware, any server-based architecture still requires them to architect for scalability and reliability. Plus, companies need to own the challenges of patching and deploying to those server fleets as their applications evolve. Moreover, they must scale their server fleets to account for peak load and then attempt to scale them down when and where possible to lower costs, all while protecting the experience of end users and the integrity of internal systems. Idle, underutilized servers prove to be costly and wasteful: researchers calculated the average server utilization to be only around 18 percent for enterprises.3

Using serverless services, developers and architects can design and develop complex application architectures focusing just on business logic, without dealing with the complexity of servers. As a result, product owners can achieve faster time to market with shorter development, deployment, and testing cycles. In addition, the reduction of server management overhead reduces the TCO, which ultimately results in competitive advantages for the companies. With significantly reduced infrastructure costs, more agile and focused teams, and faster time to market, companies that have already adopted serverless approaches are gaining a key advantage over their competitors.

Understanding Serverless Architectures

The advantages of the serverless approaches cited above are appealing, but what are the considerations for practical implementation? What separates a serverless application from its conventional server-based counterpart?

Serverless uses managed services where the cloud provider handles infrastructure management tasks like capacity provisioning and patching. This allows your workforce to focus on business logic serving your customers while minimizing infrastructure management, configuration, operations, and idle capacity. In addition, serverless is a way to describe the services, practices, and strategies that enable you to build more agile applications so you can innovate and respond to change faster. Serverless applications are designed to run whole or parts of the application in the public cloud using serverless services. AWS offers many serverless services in domains like compute, storage, application integration, orchestration, and databases.

The serverless model provides the following advantages compared to conventional server-based design:

• There is no need to provision, manage, and monitor the underlying infrastructure. All of the actual hardware and platform server software packages are managed by the cloud provider. You need only deploy your application and its configuration.
• Serverless services have fault tolerance built in by default. Serverless applications require minimal configuration and management from the user to achieve high availability.
• Reduced operational overhead allows your teams to release quickly, get feedback, and iterate to get to market faster.
• With a pay-for-value billing model, you do not pay for over-provisioning, and your resource
utilization is optimized on your behalf.
• Serverless applications have built-in service integrations, so you can focus on building your application instead of configuring it.

Is Serverless Always Appropriate?

Almost all modern applications can be modified to run successfully, and in most cases in a more economical and scalable fashion, on a serverless platform.

The choice between serverless and the alternatives does not need to be an all-or-nothing proposition. Individual components could be run on servers, using containers, or using serverless architectures within an application stack. However, here are a few scenarios when serverless may not be the best choice:

• When the goal is explicitly to avoid making any changes to existing application architecture
• When fine-grained control over the environment is required for the code to run correctly, such as specifying particular operating system patches or accessing low-level networking operations
• Applications with ultra-low latency requirements for all incoming requests
• When an on-premises application hasn't been migrated to the public cloud
• When certain aspects of the application component don't fit within the limits of the serverless services, for example, if a function takes more time to execute than the AWS Lambda function's execution timeout limit, or the backend application takes more time to run than Amazon API Gateway's timeout

Serverless Use Cases

The serverless application model is generic and applies to almost any application, from a startup's web app to a Fortune 100 company's stock trade analysis platform. Here are several examples:

• Data processing – Developers have discovered that it's much easier to parallelize with a serverless approach,4
mainly when triggered through events, leading them to increasingly apply serverless techniques to a wide range of big data problems without the need for infrastructure management. These include map-reduce problems, high-speed video transcoding, stock trade analysis, and compute-intensive Monte Carlo simulations for loan applications.
• Web applications – Eliminating servers makes it possible to create web applications that cost almost nothing when there is no traffic, while simultaneously scaling to handle peak loads, even unexpected ones.
• Batch processing – Serverless architectures can be used to run multi-step, flow-chart-like use cases such as ETL jobs.
• IT automation – Serverless functions can be attached to alarms and monitors to provide customization when required. Cron jobs (used to schedule and automate tasks that need to be carried out periodically) and other IT infrastructure requirements are made substantially simpler to implement by removing the need to own and maintain servers for their use, especially when these jobs and conditions are infrequent or variable in nature.
• Mobile backends – Serverless mobile backends offer a way for developers who focus on client development to quickly create secure, highly available, and perfectly scaled backends without becoming experts in distributed systems design.
• Media and log processing – Serverless approaches offer natural parallelism, making it simpler to process compute-heavy workloads without the complexity of building multithreaded systems or manually scaling compute fleets.
• IoT backends – The ability to bring any code, including native libraries, simplifies the process of creating cloud-based systems that can implement device-specific
algorithms.
• Chatbots (including voice-enabled assistants) and other webhook-based systems – Serverless approaches are perfect for any webhook-based system like a chatbot. In addition, their ability to perform actions (like running code) only when needed (such as when a user requests information from a chatbot) makes them a straightforward and typically lower-cost approach for these architectures. For example, the majority of Alexa Skills for Amazon Echo are implemented using AWS Lambda.
• Clickstream and other near-real-time streaming data processes – Serverless solutions offer the flexibility to scale up and down with the flow of data, enabling them to match throughput requirements without the complexity of building a scalable compute system for each application. For example, when paired with technology like Amazon Kinesis, AWS Lambda can offer high-speed record processing for clickstream analysis, NoSQL data triggers, stock trade information, and more.
• Machine learning inference – Machine learning models can be hosted on serverless functions to support inference requests, eliminating the need to own or maintain servers for intermittent inference traffic.
• Content delivery at the edge – By moving serverless event handling to the edge of the internet, developers can take advantage of lower latency and customize retrievals and content fetches quickly, enabling a new spectrum of use cases that are latency-optimized based on the client's location.
• IoT at the edge – Enabling serverless capabilities such as AWS Lambda functions to run inside commercial, residential, and handheld Internet of Things (IoT) devices enables these devices to respond to events in near real time. Devices can
take actions such as aggregating and filtering data locally, performing machine learning inference, or sending alerts.

Typically, serverless applications are built using a microservices architecture, in which an application is separated into independent components that perform discrete jobs. These components, made up of a compute layer and APIs, message queues, databases, and other pieces, can be independently deployed, tested, and scaled. The ability to scale individual components needing additional capacity, rather than entire applications, can save substantial infrastructure costs. It allows an application to run lean, with minimal idle server capacity, without the need for rightsizing activities.5

Serverless applications are a natural fit for microservices because of their decoupled nature. Organizations can become more agile by avoiding monolithic designs and architectures, because developers can deploy incrementally and replace or upgrade individual components, such as the database tier, if needed. In many cases, not all layers of the architecture need to be moved to serverless services to reap the benefits. For instance, simply isolating the business logic of an application to deploy onto the AWS Lambda serverless compute service is all that's required to immediately reduce server management tasks, idle compute capacity, and operational overhead.

Serverless architecture also has significant economic advantages over server-based architectures when considering disaster recovery scenarios. For most serverless architectures, the price for managing a disaster recovery site is near zero, even for warm or hot sites. Serverless architectures only incur a charge when traffic is present and resources are
being consumed. Storage cost is one exception, as many applications require readily accessible data. Nonetheless, serverless architectures truly shine when planning disaster recovery sites, especially when compared to traditional data centers. Running disaster recovery on premises often doubles infrastructure costs, as many servers sit idle waiting for a disaster to happen. Serverless disaster recovery sites can also be set up quickly: once a serverless architecture has been defined with infrastructure as code, using AWS-native services such as AWS CloudFormation, the entire architecture can be duplicated in a separate region by running a few commands.

AWS Serverless Capabilities

Like any traditional server- and VM-based architecture, serverless provides core capabilities such as compute, storage, messaging, and more to its users. However, serverless capabilities are distributed across multiple managed services rather than spread across software installed on virtual machines. As a result, a complete serverless application on AWS draws on a broad array of services, tools, and capabilities spanning storage, messaging, diagnostics, and more. Each of these services is available in the developer's toolbox to build a practical application.

Service Offerings

Since the introduction of Lambda in 2014, AWS has introduced a wide variety of fully managed serverless services that enable organizations to create serverless apps that integrate seamlessly with other AWS services and third-party services. The launched serverless services include, but are not limited to, Amazon API Gateway (2015), Amazon EventBridge (2019), and Amazon Aurora Serverless v2 (2020). The pace of innovation has not stopped for individual services, as Lambda has had more than 100 major feature releases since its launch.6 Figure 1 illustrates a subset of the components in the AWS serverless platform and their relationships.

Figure 1: AWS serverless platform components

AWS's serverless offering consists of services that span all infrastructure layers, including compute, storage, and orchestration. In addition, AWS provides the tools needed to author, build, deploy, and diagnose serverless architectures. Running a serverless application in production requires a reliable, flexible, and trustworthy platform that can handle the demands of small startups and global corporations alike. The platform must scale all of an application's elements and provide end-to-end reliability. Just as with conventional applications, helping developers create and deliver serverless solutions is a multi-dimensional challenge. To meet the needs of large-scale enterprises across various industries, the AWS serverless platform offers the following capabilities through a diverse set of services:

• A high-performance, scalable, and reliable serverless compute layer – The serverless compute layer, such as AWS Lambda or AWS Fargate, sits at the core of any serverless architecture and is responsible for running the business logic. Because these services run in response to events, simple integration with both first-party and third-party event sources is essential to making solutions simple to express and enabling them to scale automatically in response to varying workloads. In addition, serverless architectures eliminate all of the scaling and management code typically required to integrate such systems, shifting that operational burden to AWS.
• Highly
available, durable, and scalable storage layer – AWS offers fully managed storage layers that offload the overhead of ever-increasing storage requirements to support the serverless compute layer. Instead of requiring you to manually add more servers and storage, services such as Amazon Aurora Serverless v2, Amazon DynamoDB, and Amazon Simple Storage Service (Amazon S3) scale based on usage, and users are billed only for the consumed resources. In addition, AWS offers purpose-built storage services to meet diverse customer needs, from DynamoDB for key-value storage and Amazon S3 for object storage to Aurora Serverless v2 for relational data storage.
• Support for loosely coupled and scalable serverless workloads – As applications mature and grow, they become more challenging to maintain or extend with new features, and some transform into monolithic applications. As a result, they make it challenging to implement changes and slow down the development pace. What is needed is individual components that are decoupled and can scale independently. Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), Amazon EventBridge, and Amazon Kinesis enable developers to decouple individual components, allowing developers to create and innovate without being dependent on one another. In addition, because these components are all serverless, customers are billed only for the resources that each component consumes, eliminating wasted resources.
• Orchestration offering state and workflow management – Orchestration and state management are also critical to a serverless platform's success. As companies adopt serverless architectures, there is an increased need to
orchestrate complex workflows with decoupled components. AWS Step Functions is a visual workflow service that satisfies this need. It is used to orchestrate AWS services, automate business processes, and build serverless applications. Step Functions manages failures, retries, parallelization, service integrations, and observability so developers can focus on higher-value business logic. Building applications from individual components that each perform a discrete function lets you scale easily and change applications quickly. Developers can change and add steps without writing code, enabling your team to evolve your application and innovate faster.
• Native service integrations – The serverless services mentioned above, such as Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), and Amazon EventBridge, act as application integration services, enabling communication between decoupled components within microservices. Another benefit of these services is that minimal code is needed for interoperability between them, so you can focus on building your application instead of configuring it. For instance, integration between Amazon API Gateway, a fully managed service for hosting APIs, and a Lambda function can be done without writing any code, simply by walking through the AWS console.

Developer Support

Providing the right tools and support for developers and architects is essential to boosting productivity. AWS Developer Tools are built to work with AWS, making it easier for teams to set up and be productive. In addition to popular and well-known developer tools such as the AWS Command Line Interface (AWS CLI) and AWS Software Development Kits (AWS SDKs), AWS also provides various AWS open-source and third-party web frameworks that simplify serverless application development and deployment. These include the AWS Serverless Application Model (AWS SAM) and the AWS Cloud Development Kit (AWS CDK), which allow customers to onboard faster to serverless architectures, offloading
undifferentiated heavy lifting of managing the infrastructure for your applications. This enables developers to focus on writing code that creates value for their customers.

In addition, AWS provides the following support for developers adopting serverless technologies:

• A collection of fit-for-purpose application modeling frameworks – Application modeling frameworks such as the open-specification AWS SAM or the AWS CDK enable a developer to express the components that make up a serverless application, and enable the tools and workflows required to build, deploy, and monitor those applications. Both frameworks work well with the AWS SAM Command Line Interface (AWS SAM CLI), making it easy to create and manage serverless applications. The AWS SAM CLI also allows developers to build, test locally, and debug serverless applications, then deploy them on AWS. It can also create secure continuous integration and deployment (CI/CD) pipelines that follow best practices and integrate with AWS-native and third-party CI/CD systems.
• A vibrant developer ecosystem that helps developers discover and apply solutions in a variety of domains and for a broad set of third-party systems and use cases – Thriving on a serverless platform requires that a company be able to get started quickly, including finding ready-made templates for everyday use cases, whether they involve first-party or third-party services. These integration libraries are essential to convey successful patterns, such as processing streams of records or implementing webhooks, especially when developers are migrating from server-based to serverless architectures.7 A closely related need is a broad and diverse ecosystem that surrounds the core platform. A
large, vibrant ecosystem helps developers discover and use solutions from the community, and makes it easy to contribute new ideas and approaches. Given the variety of toolchains in use for application lifecycle management, a healthy ecosystem is also necessary to ensure that every language, Integrated Development Environment (IDE), and enterprise build technology has the runtimes, plugins, and open-source solutions essential to integrate the building and deployment of serverless applications into existing approaches. Finally, a broad ecosystem provides significant acceleration across domains and enables developers to repurpose existing code more readily in a serverless architecture.

Security

All AWS customers benefit from a data center and network architecture built to satisfy the requirements of our most security-sensitive customers. This means that you get a resilient infrastructure designed for high security, without a traditional data center's capital outlay and operational overhead. Serverless architecture is no exception. To accomplish this, AWS's serverless services offer a broad array of security and access controls, including support for virtual private networks, role-based and access-based permissions, robust integration with API-based authentication and access control mechanisms, and support for encrypting application elements such as environment variable settings. These out-of-the-box features and services can help developers deploy and publish workloads confidently and reduce time to market.

Serverless systems, by their design, also provide an additional level of security and control for the following reasons:

• First-class fleet management, including security patching – For
managed serverless services such as Lambda, API Gateway, and Amazon SQS, the servers that host the services are constantly monitored, cycled, and security scanned. As a result, they can be patched within hours of essential security updates becoming available, in contrast to many enterprises' compute fleets, which operate under much looser service level agreements (SLAs) for patching and updating.
• Per-request authentication, access control, and auditing – Every request between natively integrated services is individually authenticated, authorized to access specified resources, and can be fully audited. Requests arriving from outside of AWS via Amazon API Gateway benefit from additional internet-facing defense systems. For example, AWS Web Application Firewall (AWS WAF) is a web application firewall that integrates natively with Amazon API Gateway. It helps protect hosted APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources, including distributed denial-of-service (DDoS) attack defenses. In addition, companies migrating to serverless architectures can use AWS CloudTrail to gain detailed insight into which users are accessing which systems, with what privileges. Finally, they can use AWS tools to process the audit records programmatically.

These security features of serverless help eliminate additional costs often overlooked when calculating the TCO of one's infrastructure. Such costs include security and monitoring software licenses installed on servers, staffing of information security personnel to ensure that all servers are secure, costs associated with regulatory compliance, and many others.

Serverless architectures also have a smaller blast radius compared to
monolithic applications running on virtual machines. As AWS takes responsibility for the security of the servers behind the scenes, customers can focus on implementing least-privilege access between the services. Once least-privilege access is implemented, the blast radius is dramatically reduced: the decoupled nature of the architecture limits the impact to a smaller set of services, compared to a scenario where a malicious actor gains access to an internal server. Considering the significant financial impact of a security breach, this is an added benefit that helps enterprises optimize infrastructure costs. Adopting serverless architectures helps reduce or eliminate such expenses: capital can be repurposed, and teams are freed to work on higher-value activities.

Partners

AWS has an expansive partner network that assists our customers with building solutions and services on AWS. AWS works closely with validated AWS Lambda Partners for building serverless architectures that help customers develop services and applications without provisioning or managing servers. Lambda Partners provide developer tooling solutions validated by AWS serverless experts against the AWS Well-Architected Framework. Customers can simplify their technology evaluation process and increase purchasing confidence knowing these companies' solutions have passed a strict AWS validation of security, performance, and reliability. Customers can ultimately reduce time to market with the assistance of qualified partners leveraging serverless technologies. For a complete list of AWS Lambda Ready Partners, visit our AWS Partner Network page.8

Case Studies

Companies have applied serverless
architectures to use cases from stock trade validation to e-commerce website construction to natural language processing. The AWS serverless portfolio offers the flexibility to create a wide array of applications, including those requiring assurance programs such as PCI or HIPAA compliance. The following sections illustrate some of the most common use cases, but are not a comprehensive list. For a complete list of customer references and use case documentation, see Serverless Computing.9

Serverless Websites, Web Apps, and Mobile Backends

Serverless approaches are ideal for applications where the load can vary dynamically. Using a serverless approach means no compute costs are incurred when there is no end-user traffic, while still offering instant scale to meet high demand, such as a flash sale on an e-commerce site or a social media mention that drives a sudden wave of traffic. Compared to traditional infrastructure approaches, it is also often significantly less expensive to develop, deliver, and operate a web or mobile backend when architected in a serverless fashion.

AWS provides the services developers need to construct these applications rapidly:

• Amazon Simple Storage Service (Amazon S3) and AWS Amplify offer a simple hosting solution for static content.
• AWS Lambda, in conjunction with Amazon API Gateway, provides support for dynamic API requests using functions.
• Amazon DynamoDB offers a simple storage solution for session and per-user state.
• Amazon Cognito provides an easy way to handle end-user registration, authentication, and access control to resources.
• Developers can use the AWS Serverless Application Model (AWS SAM) to describe the various elements of an application.
• AWS CodeStar can
set up a CI/CD toolchain with just a few clicks To learn more see the whitepaper AWS Serverless Multi Tier Architectures which provides a detailed examination of patterns for building serverless web applic ations10 For complete reference architectures see Serverless Reference Architecture for creating a Web Application11 and Serverless Reference Architecture for creating a Mobile Backend12 on GitHub Customer Example – Neiman Marcus A luxury household name Neiman Marcus has a reputation for delivering a first class personalized customer service experience To modernize and enhance that experience the company wanted to develop Connect an omnichannel digital selling application that would empower associates to view rich personalized customer information with the goal of making each customer interaction unforgettable Choos ing a serverless architecture with mobile development solutions on Amazon Web Services (AWS) enabled the development team to launch the app much faster than in the 4 months it had originally planned “Using AWS cloud native and serverless technologies we increased our speed to market by at least 50 percent and were able to accelerate the launch of Connect” says Sriram Vaidyanathan senior director of omni engineering at Neiman Marcus This approach also greatly reduced app building costs and provided dev elopers with more agility for the development and rapid deployment of updates The app elastically scales to support traffic at any volume for greater cost efficiency and it has increase d associate productivity For more information see the Neiman Marcus case study 13 IoT Backends The benefits that a serverless architecture brings to web and mobile apps make it easy to construct IoT backends and device based analytic processing systems that seamlessly scale with the number of devices For an example reference architecture see Serverless Reference Architecture for creating an IoT Backend on GitHub14 This version has been archived For the latest version of 
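As a rough illustration of how the web/mobile backend building blocks listed above fit together, the following sketch writes a minimal AWS SAM template that pairs an API Gateway route, a Lambda function, and a DynamoDB table. The resource names, handler path, and runtime are hypothetical illustrations, not taken from this whitepaper; a real project would also add static hosting and Amazon Cognito authentication.

```shell
# Write a minimal (hypothetical) SAM template pairing API Gateway,
# Lambda, and DynamoDB, as described in the list of building blocks above.
cat > template.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  SessionTable:                 # DynamoDB table for session/per-user state
    Type: AWS::Serverless::SimpleTable
  ApiFunction:                  # Lambda function serving dynamic API requests
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler      # hypothetical handler module
      Runtime: python3.9
      Events:
        GetItems:               # API Gateway route that invokes the function
          Type: Api
          Properties:
            Path: /items
            Method: get
EOF

# Quick sanity check that the template declares SAM resource types
grep -c 'AWS::Serverless' template.yaml
```

With the AWS SAM CLI installed, `sam validate` would check a template like this and `sam deploy --guided` would provision the stack.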
Customer Example – iRobot
iRobot, which makes robots such as the Roomba cleaning robot, uses AWS Lambda in conjunction with the AWS IoT service to create a serverless backend for its IoT platform. Because its robots are a popular gift on any holiday, iRobot experiences increased traffic on those days. While huge traffic spikes could mean huge headaches for the company and its customers alike, iRobot's engineering team doesn't have to worry about managing infrastructure or manually writing code to handle availability and scaling, because it runs on serverless. This enables the team to innovate faster and stay focused on customers. Watch the AWS re:Invent 2020 video Building the next generation of residential robots for more information.15

Data Processing
The largest serverless applications process massive volumes of data, much of it in real time. Typical serverless data processing architectures use a combination of Amazon Kinesis and AWS Lambda to process streaming data, or they combine Amazon S3 and AWS Lambda to trigger computation in response to object creation or update events. When workloads require more complex orchestration than a simple trigger, developers can use AWS Step Functions to create stateful or long-running workflows that invoke one or more Lambda functions as they progress. To learn more about serverless data processing architectures, see the following on GitHub:
• Serverless Reference Architecture for Real-time Stream Processing16
• Serverless Reference Architecture for Real-time File Processing17
• Image Recognition and Processing Backend reference architecture18

Customer Example – FINRA
The Financial Industry Regulatory Authority (FINRA) used AWS Lambda to build a serverless data processing solution that enables it to perform half a trillion data validations on 37 billion stock market events daily. In his talk at AWS re:Invent 2016, entitled The State of Serverless Computing (SVR311),19 Tim Griesbach, Senior Director at FINRA, said, "We found that Lambda was going to provide us with the best solution for this serverless cloud solution. With Lambda the system was faster, cheaper, and more scalable. So at the end of the day we've reduced our costs by over 50 percent, and we can track it daily, even hourly."

Customer Example – Toyota Connected
Toyota Connected is a subsidiary of Toyota and a technology company offering connected platforms, big data, mobility services, and other automotive-related services. Toyota Connected chose a serverless computing architecture to build its Toyota Mobility Services Platform, leveraging AWS Lambda, Amazon Kinesis Data Streams (Amazon KDS), and Amazon S3 to offer personalized, localized, and predictive data that enhances the driving experience. With its serverless architecture, Toyota Connected seamlessly scaled to 18 times its usual traffic volume, with 18 billion transactions per month running through the platform, reducing aggregation job times from 15+ hours to 1/40th of the time while reducing operational burden. Additionally, serverless enabled Toyota Connected to deploy the same pipeline in other geographies with smaller volumes and pay only for the resources consumed. For more information, read our Big Data Blog post on Toyota Connected or watch the re:Invent 2020 video Reimagining mobility with Toyota Connected (AUT303).20 21

Big Data
AWS Lambda is a perfect match for many high-volume parallel processing workloads. For an example of a reference architecture using MapReduce, see the Reference Architecture for running serverless MapReduce jobs.22

Customer Example – Fannie Mae
Fannie Mae, a leading source of financing for mortgage lenders, uses AWS Lambda to run an "embarrassingly parallel" workload for its financial modeling. Fannie Mae uses Monte Carlo simulation processes to project future cash flows of mortgages, which helps manage mortgage risk. The company found that its existing HPC grids were no longer meeting its growing business needs, so Fannie Mae built its new platform on Lambda, and the system successfully scaled up to 15,000 concurrent function executions during testing. The new system ran one simulation on 20 million mortgages in 2 hours, which is three times faster than the old system. Using a serverless architecture, Fannie Mae can run large-scale Monte Carlo simulations effectively because it doesn't pay for idle compute resources, and it can speed up its computations by running multiple Lambda functions concurrently. Fannie Mae also experienced shorter-than-typical time to market, because it was able to dispense with server management and monitoring and to eliminate much of the complex code previously required to manage application scaling and reliability. See the Fannie Mae AWS Summit 2017 presentation SMC303: Real-time Data Processing Using AWS Lambda23 for more information.

IT Automation
Serverless approaches eliminate the overhead of managing servers, making most infrastructure tasks, including provisioning, configuration management, alarms/monitors, and timed cron jobs, easier to create and manage.

Customer Example – Autodesk
Autodesk, which makes 3D design and engineering software, uses AWS Lambda to automate its AWS account creation and management processes across its engineering organization. Autodesk estimates that it realized cost savings of 98 percent (factoring in estimated savings in labor hours spent provisioning accounts). It can now provision accounts in just 10 minutes, instead of the 10 hours it took with the previous infrastructure-based process. The serverless solution enables Autodesk to automatically provision accounts, configure and enforce standards, and run audits with increased automation and fewer manual touchpoints. For more information, see the Autodesk AWS Summit 2017 presentation SMC301: The State of Serverless Computing.24 Visit GitHub to see the Autodesk Tailor service.25

Machine Learning
You can use serverless services to capture, store, and preprocess data before feeding it to your machine learning model. After training the model, you can also serve it for prediction at scale without provisioning or managing any infrastructure.

Customer Example – Genworth
Genworth Mortgage Insurance Australia Limited is a leading provider of lenders' mortgage insurance in Australia. Genworth has more than 50 years of experience and data in this industry, and it wanted to use this historical information to train predictive analytics for loss mitigation machine learning models. To achieve this, Genworth built a serverless machine learning pipeline at scale, using services like AWS Glue, a serverless managed ETL processing service, to ingest and transform data, and Amazon SageMaker to run batch transform jobs, perform ML inference, and process and publish the results of the analysis. With the ML models, Genworth could analyze recent repayment patterns for each insurance policy to prioritize claims by likelihood and impact. This process was automated end to end, helping the business make data-driven decisions and simplifying high-value manual work performed by the Loss Mitigation team. Read the Machine Learning blog post How Genworth built a serverless ML pipeline on AWS using Amazon SageMaker and AWS Glue26 for more information.

Conclusion
Serverless approaches are designed to tackle two classic IT management problems: idle servers, and operating fleets of servers that distract and detract from the business of creating differentiated customer value. AWS serverless offerings solve these long-standing problems with a pay-for-value billing model and by eliminating the need to manage the underlying infrastructure. AWS constantly scans, patches, and monitors the underlying infrastructure, making these applications more secure, and it provides built-in fault tolerance with minimal configuration needed for high availability. As a result, developers can focus on writing business logic rather than managing infrastructure, allowing enterprises to reduce time to market while paying only for the resources consumed. Existing companies are gaining significant agility and economic benefits from adopting serverless architectures, and enterprises should consider a serverless-first strategy for building cloud-native microservices. To learn more and read whitepapers on related topics, see Serverless Computing and Applications.27

Contributors
The following individuals and organizations contributed to this document:
• Tim Wagner, General Manager of AWS Serverless Applications, Amazon Web Services
• Paras Jain, Technical Account Manager, Amazon Web Services
• John Lee, Solutions Architect, Amazon Web Services
• Diego Magalhães, Principal Solutions
Architect, Amazon Web Services

Further Reading
For additional information, see the following:
• Architecture Best Practices for Serverless28
• AWS Ramp-Up Guide: Serverless29

Reference Architectures
• Web Applications30
• Mobile Backends31
• IoT Backends32
• File Processing33
• Stream Processing34
• Image Recognition Processing35
• MapReduce36

Document Revisions
Date | Description
October 2017 | First publication
September 2021 | Content refresh

Notes
1 https://www.perle.com/articles/the-cost-savings-of-cloud-computing-40191237.shtml
2 https://www.gartner.com/en/newsroom/press-releases/2021-06-28-gartner-says-worldwide-iaas-public-cloud-services-market-grew-40-7-percent-in-2020
3 https://d39w7f4ix9f5s9.cloudfront.net/e3/79/42bf75c94c279c67d777f002051f/carbon-reduction-opportunity-of-moving-to-aws.pdf
4 Eric Jonas et al., Occupy the Cloud: Distributed Computing for the 99%, https://arxiv.org/abs/1702.04024
5 https://aws.amazon.com/aws-cost-management/aws-cost-optimization/right-sizing/
6 https://docs.aws.amazon.com/lambda/latest/dg/lambda-releases.html
7 https://serverlessland.com/patterns
8 https://aws.amazon.com/partners
9 https://aws.amazon.com/serverless/
10 https://d0.awsstatic.com/whitepapers/AWS_Serverless_Multi-Tier_Architectures.pdf
11 https://github.com/awslabs/lambda-refarch-webapp
12 https://github.com/awslabs/lambda-refarch-mobilebackend
13 https://aws.amazon.com/solutions/case-studies/neiman-marcus-case-study
14 https://github.com/awslabs/lambda-refarch-iotbackend
15 https://www.youtube.com/watch?v=-1PDC6UOFtE
16 https://github.com/awslabs/lambda-refarch-streamprocessing
17 https://github.com/awslabs/lambda-refarch-fileprocessing
18 https://github.com/awslabs/lambda-refarch-imagerecognition
19 https://www.youtube.com/watch?v=AcGv3qUrRC4&feature=youtube&t=1153
20 https://aws.amazon.com/blogs/big-data/enhancing-customer-safety-by-leveraging-the-scalable-secure-and-cost-optimized-toyota-connected-data-lake/
21 https://www.youtube.com/watch?v=IpuRyJY3B4k
22 https://github.com/awslabs/lambda-refarch-mapreduce
23 https://www.slideshare.net/AmazonWebServices/smc303-realtime-data-processing-using-aws-lambda/28
24 https://www.slideshare.net/AmazonWebServices/smc301-the-state-of-serverless-computing-75290821/22
25 https://github.com/alanwill/aws-tailor
26 https://aws.amazon.com/blogs/machine-learning/how-genworth-built-a-serverless-ml-pipeline-on-aws-using-amazon-sagemaker-and-aws-glue/
27 https://aws.amazon.com/serverless/
28 https://aws.amazon.com/architecture/serverless/
29 https://d1.awsstatic.com/training-and-certification/ramp-up_guides/Ramp-Up_Guide_Serverless.pdf?svrd_rr1
30 https://github.com/awslabs/lambda-refarch-webapp
31 https://github.com/awslabs/lambda-refarch-mobilebackend
32 https://github.com/awslabs/lambda-refarch-iotbackend
33 https://github.com/awslabs/lambda-refarch-fileprocessing
34 https://github.com/awslabs/lambda-refarch-streamprocessing
35 https://github.com/awslabs/lambda-refarch-imagerecognition
36 https://github.com/awslabs/lambda-refarch-mapreduce
,General,consultant,Best Practices
Optimizing_Multiplayer_Game_Server_Performance_on_AWS,"Optimizing Multiplayer Game Server Performance on AWS
April 2017

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product
offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Introduction
Amazon EC2 Instance Type Considerations
  Amazon EC2 Compute Optimized Instance Capabilities
  Alternative Compute Instance Options
Performance Optimization
  Networking
  CPU
  Memory
  Disk
Benchmarking and Testing
  Benchmarking
  CPU Performance Analysis
  Visual CPU Profiling
Conclusion
Contributors

Abstract
This whitepaper discusses the exciting use case of running multiplayer game servers in the AWS Cloud and the optimizations that you can make to achieve the highest level of performance. It provides the information you need to take advantage of the Amazon Elastic Compute Cloud (Amazon EC2) family of instances to get the peak performance required to successfully run a multiplayer game server on Linux in AWS. This paper is intended for technical audiences that have experience tuning and optimizing Linux-based servers.

Introduction
Amazon Web Services (AWS) provides benefits for every conceivable gaming workload, including PC/console single-player and multiplayer games, as well as mobile-based, social-based, and web-based games. Running PC/console multiplayer game servers in the AWS Cloud is particularly illustrative of the success and cost reduction that you can achieve with the cloud model over traditional on-premises data centers or colocations. Multiplayer game servers are based on a client/server network architecture, in which the game server holds the authoritative source of events for all clients (players). Typically, after players send their actions to the server, the server runs a simulation of the game world using all of these actions and sends the results back to each client. With Amazon Elastic Compute Cloud (Amazon EC2), you can create and run a virtual server (called an instance) to host your client/server multiplayer game.1 Amazon EC2 provides resizable compute capacity and supports Single Root I/O Virtualization (SR-IOV) and high-frequency processors. For the compute family of instances, Amazon EC2 will support up to 72 vCPUs (36 physical cores) when the C5 compute-optimized instance type launches in 2017. This whitepaper discusses how to optimize your Amazon EC2 Linux multiplayer game server to achieve the best performance while maintaining scalability, elasticity, and global reach. We start with a brief description of the performance capabilities of the compute optimized instance family, and then dive into optimization techniques for networking, CPU, memory, and disk. Finally, we briefly cover benchmarking and testing.

Amazon EC2 Instance Type Considerations
To get the maximum performance out of an Amazon EC2 instance, it is important to look at the compute options available. In this section, we discuss the capabilities of the Amazon EC2 compute optimized instance family that make it ideal for multiplayer game servers.

Amazon EC2 Compute Optimized Instance Capabilities
The current-generation C4 compute optimized instance family is ideal for running your multiplayer game server.2 (The C5 instance type, announced at AWS re:Invent 2016, will be the recommended game server platform when it launches.) C4 instances run on hardware using the Intel Xeon E5-2666 v3 (Haswell) processor. This is a custom processor designed specifically for AWS. The following table lists the capabilities of each instance size in the C4 family.

Instance Size | vCPU Count | RAM (GiB) | Network Performance | EBS-Optimized Max Bandwidth (Mbps)
c4.large | 2 | 3.75 | Moderate | 500
c4.xlarge | 4 | 7.5 | Moderate | 750
c4.2xlarge | 8 | 15 | High | 1,000
c4.4xlarge | 16 | 30 | High | 2,000
c4.8xlarge | 36 | 60 | 10 Gbps | 4,000

As the table shows, the c4.8xlarge instance provides 36 vCPUs. Since each vCPU is a hyperthread of a full physical CPU core, you get a total of 18 physical cores with this instance size. Each core runs at a base frequency of 2.9 GHz, but can run at 3.2 GHz all-core turbo (meaning that each core can run simultaneously at 3.2 GHz, even if all the cores are in use) and at a max turbo of 3.5 GHz (possible when only a few cores are in use). We recommend the c4.4xlarge and c4.8xlarge instance sizes for running your game server, because they get exclusive access to one or both of the two underlying processor sockets, respectively. Exclusive access guarantees that you get a 3.2 GHz all-core turbo for most workloads. The primary exception is for applications running Advanced Vector Extensions (AVX) workloads.3 If you run AVX workloads on the c4.8xlarge instance, the best you can expect in most cases is 3.1 GHz when running three cores or fewer. It is important to test your specific workload to verify the performance you can achieve. The following table shows a comparison between the c4.4xlarge and c4.8xlarge instances for AVX and non-AVX workloads.

C4 Instance Size and Workload | Max Core Turbo Frequency (GHz) | All Core Turbo Frequency (GHz) | Base Frequency (GHz)
c4.8xlarge – non-AVX workload | 3.5 (when fewer than about 4 vCPUs are active) | 3.2 | 2.9
c4.8xlarge – AVX workload | ≤ 3.3 | ≤ 3.1, depending on the workload and number of active cores | 2.5
c4.4xlarge – non-AVX workload | 3.2 | 3.2 | 2.9
c4.4xlarge – AVX workload | 3.2 | ≤ 3.1, depending on the workload and number of active cores | 2.5

Alternative Compute Instance Options
There are situations, for example for some role-playing games (RPGs) and multiplayer online battle arenas (MOBAs), where your game server can be more memory bound than compute bound. In these cases, the M4 instance type may be a better option than the C4 instance type, since it has a higher memory-to-vCPU ratio. The compute optimized instance family has a higher vCPU-to-memory ratio than other instance families, while the M4 instance has a higher memory-to-vCPU ratio. M4 instances use a Haswell processor for the m4.10xlarge and m4.16xlarge sizes; smaller sizes use either a Broadwell or a Haswell processor. The M4 instance type is similar to the C4 instance type in networking performance and has plenty of bandwidth for game servers.

Performance Optimization
There are many performance options for Linux servers, with networking and CPU being the two most important. This section documents the performance options that AWS gaming customers have found the most valuable and/or the options that are the most appropriate for running game servers on virtual machines (VMs). The performance options are categorized into four sections: networking, CPU, memory, and disk. This is not an all-inclusive list of performance tuning options, and not all of the options will be appropriate for every gaming workload. We strongly recommend testing these settings before implementing them in production. This section assumes that you are running your instance in a VPC created with Amazon Virtual Private Cloud (Amazon VPC)4 and that you use an Amazon Machine Image (AMI)5 with a hardware virtual machine (HVM). All of the instructions and settings that follow have been verified on the Amazon Linux AMI 2016.09 using the 4.4.23-31.54 kernel, but they should work with all future releases of Amazon Linux.

Networking
Networking is one of the most important
areas for performance tuning Multiplayer client/server games are extremely sensitive to latency and dropped packets A list of performance tuning options for networking is provided in the following table Performance Tuning Option Summary Notes Links or Commands Deploying game servers close to players Proximity to players is the best way to reduce latency AWS has numerous Regions across the globe List of AWS Regions Enhanced networking Improved networking performance Nearly every workload should benefit No downside Linux /Windows UDP Receive buffers Helps prevent dropped packets Useful when the latency bet ween client and server is high Little downside but should be tested Add the following to /etc/sysctlconf: netcorermem_default = New_Value netcorermem_max = New_Value (Recommend start by doubling the current values set for your system ) Busy polling Reduce latency of incoming packet processing Can increase CPU utilization Add the following to /etc/sysctlconf: netcorebusy_read = New_Value netcore busy_poll = New_Value (Recommend testing a value of 50 first then 100 ) ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 5 Performance Tuning Option Summary Notes Links or Commands Memory Helps prevent dropped packets Add the following to /etc/sysctlconf: netipv4udp_mem = New_Value New_Value New_Value (Recommend doubling the current values set for your system) Backlog Helps prevent dropped packets Add the following to /etc/sysctlconf: netcorenetdev_max_backlog= New_Value (Recommend doubling the current values set for your system) Transmit and receive queues Possible performance boost by disabling hyperthreading The following recommendations cover how to reduce latency avoid dropped packets and obtain optimal networking performance for your game servers Deploying Game Servers Close to Players Deploying your game servers as close as possible to your players is a key element for good player experience AWS has numerous Regions across the 
world which allows you to deploy your game servers close to your players For the most current list of AWS Regions and Availability Zones see https://awsamazoncom/aboutaws/globalinfrastructure/ 6 You can package your instance AMI and deploy it to as many Regions as you choose Customers often deploy AAA PC/ console games in almost every available Region As you determine where your players are globally you can decide where to deploy your game servers to provide the best experience possible Enhanced Networking Enhanced networking is another performance tuning option7 Enhanced networking uses single root I/O virtualization (SRIOV) and exposes the ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 6 network card directly to the instance without needing to go through the hypervisor8 This allows for general ly higher I/O performan ce lower CPU utilization higher packets per second (PPS) performance lower interinstance latencies and very low network jitter The performance improvement provided by enhanced networking can make a big difference for a multiplayer game server Enhanced networking is only available for instances running in a VPC using an HVM AMI and only for certain instance types such as the C4 R4 R3 I3 I2 M4 and D2 These instance types use the Intel 82599 Virtual Function Interface (which uses the “ixgbevf” Linux driver ) In addition the X1 R4 P2 and M416xlarge (and soon the C5) instances support enhanced networking using the Elastic Network Adapter (ENA) The Amazon Linux AMI includes these necessary drivers by default Follow the Linux or Windows instructions to install the driver for other AMIs9 10 It is important to have the latest ixgbevf driver which can be downloaded from Intel’s website 11 The minimum recommended version for the ixgbevf driver is version 2142 To check the driver version running on your instance run the following command: ethtool i eth0 User Datagram Protocol ( UDP ) Most firstperson shooter games and 
other similar client/server multiplayer games use UDP as the protocol for communication between clients and game servers The following sections lay out four UDP optimizations that can improve performance and reduce the occurrence of dropped packets Receive Buffers The first UDP optimization is to increase the default value for the receive buffers Having too little UDP buffer space can cause the operating system kernel to discard UDP packets resulting in packet loss Increasing this buffer space can be helpful in situations where the latency between the client and server is high The default value for both rmem_default and rmem_max on Amazon Linux is 212992 ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 7 To see the current default values for your system run the following commands: cat /proc/sys/net/core/rmem_default cat /proc/sys/net/core/rmem_max A common approach to allocating the right amount of buffer space is to first double both values and then test the performance difference this makes for your game server Depending on the results you may need to decrease or increase these values Note that the rmem_default value should not exceed the rmem_max value To configure these parameters to persist across reboots set the new rmem_default and rmem_max values in the /etc/sysctlconf file: netcorermem_default = New_Value netcorermem_max = New_Value Whenever making changes to the sysctlconf file you should run the following command to refresh the configuration: sudo sysctl p Busy Polling A second UDP optimization is busy polling which can help reduce network receive path latency by having the kernel poll for incoming packets This will increase CPU utilization but can reduce delays in packet processing On most Linux distributions including Amazon Linux busy polling is disabled by default We recommend that you start with a value of 50 for both busy_read and busy_poll and then test what difference this makes for your game server 
Busy_read is the number of microseconds to wait for packets on the device queue for socket reads while busy_poll is the number of microseconds to wait for packets on the device queue for socket poll and selects Depending on the results you may need to increase the value to 100 ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 8 To configure these parameters to persist across reboots add the new busy_read and busy_poll values to the /etc/sysctlconf file: netcorebusy_read = New_Value netcorebusy_poll = New_Value Again run the following command to refresh the configuration after making changes to the sysctlconf file: sudo sysctl p UDP Buffers A third UDP optimization is to change how much memory the UDP buffers use for queueing The udp_mem option configures the number of pages the UDP sockets can use for queueing This can help reduce dropped packets when th e network adaptor is very busy This setting is a vector of three values that are measured in units of pages (4096 bytes) The first value called min is the minimum threshold before UDP moderates memory usage The second value called pressure is the memory threshold after which UDP will moderate the memory consumption The final value called max is the maximum number of pages available for queueing by all UDP sockets By default Amazon Linux on the c48xlarge instance uses a vector of 1445727 1927636 2891454 while the c44xlarge instance uses a vector of 720660 960882 1441320 To see the current default value s run the following command: cat /proc/sys/net/ipv4/udp_mem A good first step when experimenting with new values for this setting is to double the values and then test what difference this makes for your game server It is also good to adjust the values so they are multiples of the page size (4096 bytes) To configure these parameters to persist across reboots add the new UDP buffer values to the /etc/sysctlconf file: ArchivedAmazon Web Services – Optimizing Multiplayer Game Server 
Performance on AWS Page 9 netipv4udp_mem = New_Value New_Value New_Value Run the following command to refresh the configuration after making changes to the sysctlconf file: sudo sysctl p Backlog The final UDP optimization that can help reduce the chance of dropped packets is to increase the backlog value This optimization will increase the queue size for incoming packets for situations where the interface is receiving packets at a faster rate than the kernel can handle On Amazon Linux the default value of the queue size is 1000 To check the default value run the following command: cat /proc/sys/net/core/netdev_max_backlog We recommend that you double the default value for your system and then test what difference this makes for your game server To configure these parameters to persist across reboots add the new backlog value to the /etc/sysctlconf file: netcorenetdev_max_ backlog = New_Value Run the following command to refresh the configuration after making changes to the sysctlconf file: sudo sysctl p Transmit and Receive Queues Many game servers put more pressure on the network through the number of packets per second being processed rather than on the overall bandwidth used ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 10 In addition I/O wait can become a bottleneck if one of the vCPUs gets a large volume of interrupt requests (IRQs) Receive Side Scaling (RSS) is a common method used to address these networking performance issues12 RSS is a hardware option that can provide multiple receive queues on a network interface controller (NIC) For Amazon Elastic Compute Cloud (Amazon EC2) the NIC is called an Elastic Network Interface (ENI)13 RSS is enabled on the C4 instance family but changes to the configuration of RSS are not allowed The C4 instance family provides two receive queues for all of the instance sizes when using Linux Each of these queues has a separate IRQ number and is mapped to a separate vCPU Running the 
command ls -l /sys/class/net/eth0/queues on a c4.8xlarge instance displays the following queues:

$ ls -l /sys/class/net/eth0/queues
total 0
drwxr-xr-x 2 root root 0 Aug 18 21:00 rx-0
drwxr-xr-x 2 root root 0 Aug 18 21:00 rx-1
drwxr-xr-x 3 root root 0 Aug 18 21:00 tx-0
drwxr-xr-x 3 root root 0 Aug 18 21:00 tx-1

To find out which IRQs are being used by the queues and how the CPU is handling those interrupts, run the following command:

cat /proc/interrupts

Alternatively, run this command to output the IRQs for the queues:

echo eth0; grep "eth0-TxRx" /proc/interrupts | awk '{printf "%s\n", $1}'

What follows is the reduced output when viewing the full contents of /proc/interrupts on a c4.8xlarge instance, showing just the eth0 interrupts. The first column is the IRQ for each queue. The last two columns are the process information. In this case, you can see that TxRx-0 and TxRx-1 are using IRQs 267 and 268, respectively.

      CPU0  ...  CPU23  ...  CPU33
267:  634        2789        0      xen-pirq-msi-x  eth0-TxRx-0
268:  600        0           2587   xen-pirq-msi-x  eth0-TxRx-1

To verify which vCPU each queue is sending interrupts to, run the following commands (replacing the IRQ number with the IRQ for each TxRx queue):

$ cat /proc/irq/267/smp_affinity
00000000000000000000000000800000
$ cat /proc/irq/268/smp_affinity
00000000000000000000000200000000

The previous output is from a c4.8xlarge instance. It is in hex and needs to be converted to binary to find the vCPU number. For example, the hex value 00800000 converted to binary is 00000000100000000000000000000000. Counting from the right and starting at 0, you get to vCPU 23. The other queue is using vCPU 33. Because vCPUs 23 and 33 are on different processor sockets, they are physically on different non-uniform memory access (NUMA) nodes. One issue here is that each vCPU is by default a hyperthread (each sharing a physical core with a sibling vCPU), so a performance boost could be seen by tying each queue to a
physical core.

The IRQs for the two queues on Amazon Linux on the C4 instance family are already pinned to particular vCPUs, which are on separate NUMA nodes on the c4.8xlarge instance. This default state may be ideal for your game servers. However, it is important to verify on your distribution of Linux that there are two queues configured for IRQs and vCPUs (and that those vCPUs are on separate NUMA nodes). On C4 instance sizes other than the c4.8xlarge, NUMA is not an issue, since the other sizes have only one NUMA node.

One option that could improve performance for RSS is to disable hyperthreading. If you disable hyperthreading on Amazon Linux, then by default the queues will be pinned to physical cores (which will also be on separate NUMA nodes on the c4.8xlarge instance). See the Hyperthreading section in this whitepaper for more information on how to disable hyperthreading.

If you don't pin game server processes to cores, you can prevent the Linux scheduler from assigning game server processes to the vCPUs (or cores) used by the RSS queues. To do this, you need to configure two options. First, in your text editor, edit the /boot/grub/grub.conf file. For the first entry that begins with "kernel" (there may be more than one kernel entry; you only need to edit the first one), add isolcpus=NUMBER at the end of the line, where NUMBER lists the vCPUs for the RSS queues. For example, if the queues are using vCPUs 3 and 4, replace NUMBER with "3,4".

# created by imagebuilder
default=0
timeout=1
hiddenmenu
title Amazon Linux 2014.09 (3.14.26-24.46.amzn1.x86_64)
root (hd0,0)
kernel /boot/vmlinuz-3.14.26-24.46.amzn1.x86_64 root=LABEL=/ console=ttyS0 isolcpus=NUMBER
initrd /boot/initramfs-3.14.26-24.46.amzn1.x86_64.img

Using isolcpus will prevent the scheduler from running the game server processes on the vCPUs you specify. The problem is that it will also prevent irqbalance from assigning IRQs to these vCPUs. To fix this,
you need to use the IRQBALANCE_BANNED_CPUS option to ban all of the remaining CPUs. Version 1.1.0 or later of irqbalance, included in current versions of Amazon Linux, prefers the IRQBALANCE_BANNED_CPUS option and will assign IRQs to the vCPUs specified in isolcpus in order to honor the vCPUs specified by IRQBALANCE_BANNED_CPUS. Therefore, for example, if you isolated vCPUs 3-4 using isolcpus, you would then need to ban the other vCPUs on the instance using IRQBALANCE_BANNED_CPUS.

To do this, use the IRQBALANCE_BANNED_CPUS option in the /etc/sysconfig/irqbalance file. This is a 64-bit hexadecimal bit mask. The best way to find the value is to write out the vCPUs you want to include in this value in binary format and then convert to hex. So, in the earlier example where we used isolcpus to exclude vCPUs 3-4, we would then want to use IRQBALANCE_BANNED_CPUS to exclude the remaining vCPUs, 0-2 and 5-15 (assuming we are on a c4.4xlarge instance), which would be 1111111111100111 in binary and FFE7 when converted to hex. Add the following line to the /etc/sysconfig/irqbalance file using your favorite editor:

IRQBALANCE_BANNED_CPUS="FFE7"

The result is that vCPUs 3 and 4 will not be used by the game server processes but will be used by the RSS queues and a few other IRQs used by the system. Like everything else, all of these values should be tested with your game server to determine what the performance difference is.

Bandwidth

The C4 instance family offers plenty of bandwidth for a multiplayer game server. The c4.4xlarge instance provides high network performance, and up to 10 Gbps is achievable between two c4.8xlarge instances (or other large instance sizes, like the m4.10xlarge) that are using enhanced networking and are in the same placement group.14 The bandwidth provided by both the c4.4xlarge and c4.8xlarge instances has been more than sufficient for every game server use case we have seen. You can
easily determine the networking performance for your workload on a C4 instance compared to other instances in the same Availability Zone, instances in another Availability Zone, and, most importantly, to and from the Internet. Iperf is probably one of the best tools for determining network performance on Linux,15 while Nttcp is a good tool for Windows.16 The previous links also provide instructions on doing network performance testing. Outside of the placement group, you need to use a tool like Iperf or Nttcp to determine the exact network performance achievable for your game server.

CPU

CPU is one of the two most important performance-tuning areas for game servers.

Performance Tuning Option: Clock Source
Summary: Using tsc as the clock source can improve performance for game servers.
Notes: xen is the default clock source on Amazon Linux.
Links or Commands: Add the following entry to the kernel line of the /boot/grub/grub.conf file: tsc=reliable clocksource=tsc

Performance Tuning Option: C-State and P-State
Summary: C-state and P-state options are optimized by default, except for the C-state on the c4.8xlarge. Setting the C-state to C1 on the c4.8xlarge should improve CPU performance.
Notes: Can only be changed on the c4.8xlarge. The downside is that the 3.5 GHz max turbo will not be available. However, the 3.2 GHz all-core turbo will be available.
Links or Commands: Add the following entry to the kernel line of the /boot/grub/grub.conf file: intel_idle.max_cstate=1

Performance Tuning Option: Irqbalance
Summary: When not pinning game servers to vCPUs, irqbalance can help improve CPU performance.
Notes: Installed and running by default on Amazon Linux. Check your distribution to see if this is running.
Links or Commands: N/A

Performance Tuning Option: Hyperthreading
Summary: Each vCPU is a hyperthread of a core. Performance may improve by disabling hyperthreading.
Links or Commands: Add the following entry to the kernel line of the /boot/grub/grub.conf file: maxcpus=X (where X is the number of actual cores in the instance)

Performance Tuning Option: CPU Pinning
Summary: Pinning the game server process to a vCPU can provide benefits in
some situations.
Notes: CPU pinning does not appear to be a common practice among game companies.
Links or Commands: numactl --physcpubind=$phys_cpu_core --membind=$associated_numa_node ./game_server_executable

Performance Tuning Option: Linux Scheduler
Summary: There are three particular Linux scheduler configuration options that can help with game servers.
Links or Commands:
sudo sysctl -w 'kernel.sched_min_granularity_ns=New_Value' (recommend starting by doubling the current value set for your system)
sudo sysctl -w 'kernel.sched_wakeup_granularity_ns=New_Value' (recommend starting by halving the current value set for your system)
sudo sysctl -w 'kernel.sched_migration_cost_ns=New_Value' (recommend starting by doubling the current value set for your system)

Clock Source

A clock source gives Linux access to a timeline so that a process can determine where it is in time. Time is extremely important when it comes to multiplayer game servers, given that the server is the authoritative source of events and yet each client has its own view of time and the flow of events. The kernel.org web site has a good introduction to clock sources.17

To find the current clock source:

$ cat /sys/devices/system/clocksource/clocksource0/current_clocksource

By default on a C4 instance running Amazon Linux, this is set to xen.

To view the available clock sources:

cat /sys/devices/system/clocksource/clocksource0/available_clocksource

This list should show xen, tsc, hpet, and acpi_pm by default on a C4 instance running Amazon Linux.

For most game servers, the best clock source option is TSC (Time Stamp Counter), which is a 64-bit register on each processor. In most cases, TSC is the fastest, highest-precision measurement of the passage of time and is monotonic and invariant. See this xen.org article for a good discussion of TSC when it comes to Xen virtualization.18 Synchronization is provided across all processors in all power states, so TSC
is considered synchronized and invariant. This means that TSC will increment at a constant rate. TSC can be accessed using the rdtsc or rdtscp instructions. Rdtscp is often a better option than rdtsc, since rdtscp takes into account that Intel processors sometimes use out-of-order execution, which can affect getting accurate time readings.

The recommendation for game servers is to change the clock source to TSC. However, you should test this thoroughly for your workloads. To set the clock source to TSC, edit the /boot/grub/grub.conf file with your editor of choice. For the first entry that begins with "kernel" (note that there may be more than one kernel entry; you only need to edit the first one), add tsc=reliable clocksource=tsc at the end of the line.

# created by imagebuilder
default=0
timeout=1
hiddenmenu
title Amazon Linux 2014.09 (3.14.26-24.46.amzn1.x86_64)
root (hd0,0)
kernel /boot/vmlinuz-3.14.26-24.46.amzn1.x86_64 root=LABEL=/ console=ttyS0 tsc=reliable clocksource=tsc
initrd /boot/initramfs-3.14.26-24.46.amzn1.x86_64.img

Processor State Control (C-States and P-States)

Processor State Controls can only be modified on the c4.8xlarge instance (they are also configurable on the d2.8xlarge, m4.10xlarge, and x1.32xlarge instances).19 C-states control the sleep levels that a core can enter when it is idle, while P-states control the desired performance (in CPU frequency) for a core. C-states are idle power-saving states, while P-states are execution power-saving states.

C-states start at C0, the shallowest state, where the core is actually executing functions, and go to C6, the deepest state, where the core is essentially powered off. The default C-state for the c4.8xlarge instance is C6. For all of the other instance sizes in the C4 family, the default is C1. This is the reason that the 3.5 GHz max turbo frequency is only available on the c4.8xlarge instance. Some vCPUs need to be in a deeper sleep state than C1 in order
for the cores to hit 3.5 GHz. An option on the c4.8xlarge instance is to set C1 as the deepest C-state to prevent the cores from going to sleep. That reduces the processor reaction latency, but also prevents the cores from hitting the 3.5 GHz Turbo Boost if only a few cores are active; it would still allow the 3.2 GHz all-core turbo. Therefore, you would be trading the possibility of achieving 3.5 GHz when a few cores are running for the reduced reaction latency. Your results will depend on your testing and application workloads. If the 3.2 GHz all-core turbo is acceptable and you plan to utilize all or most of the cores on the c4.8xlarge instance, then change the C-state to C1.

P-states start at P0, where Turbo mode is enabled, and go to P15, which represents the lowest possible frequency. P0 provides the maximum baseline frequency. The default P-state for all C4 instance sizes is P0. There is really no reason to change this for gaming workloads; Turbo Boost mode is the desirable state.

The following table describes the C- and P-states for the c4.4xlarge and c4.8xlarge.

Instance size             Default max C-state    Recommended setting    Default P-state    Recommended setting
c4.4xlarge and smaller    1                      1                      0                  0
c4.8xlarge                6 (a)                  1                      0                  0

(a) Running cat /sys/module/intel_idle/parameters/max_cstate will show the max C-state as 9. It is actually set to 6, which is the maximum possible value.

Use turbostat to see the C-state and max turbo frequency that can be achieved on the c4.8xlarge instance. Again, these instructions were tested using the Amazon Linux AMI and only work on the c4.8xlarge instance, not on any of the other instance sizes in the C4 family.

First, run the following command to install stress on your system. (If turbostat is not installed on your system, then install that too.)

sudo yum install stress

The following command stresses two cores (i.e., two hyperthreads of two different physical cores):

sudo turbostat --debug stress -c 2 -t 60
Here is a truncated printout of the results of running the command:

Definitions:
AVG_MHz: number of cycles executed divided by time elapsed
%Busy: percent of time in "C0" state
Bzy_MHz: average clock rate while the CPU was busy (in "C0" state)
TSC_MHz: average MHz that the TSC ran during the entire interval

The output shows that vCPUs 9 and 20 spent most of the time in the C0 state (%Busy) and hit close to the maximum turbo of 3.5 GHz (Bzy_MHz). vCPUs 2 and 27, the other hyperthreads of these cores, are sitting in the C1 C-state (CPU%c1) waiting for instructions. A frequency close to 3.5 GHz was achievable because the default C-state on the c4.8xlarge instance is C6, and so most of the cores were in the C6 state (CPU%c6).

Next, try stressing all 36 vCPUs to see the 3.2 GHz all-core turbo:

sudo turbostat --debug stress -c 36 -t 60

Here is a truncated printout of the results of running the command:

You can see that all of the vCPUs are in C0 for over 99% of the time (%Busy) and that they are all hitting 3.2 GHz (Bzy_MHz) when in C0.

To set the C-state to C1, edit the /boot/grub/grub.conf file with your editor of choice. For the first entry that begins with "kernel" (there may be more than one kernel entry; you only need to edit the first one), add intel_idle.max_cstate=1 at the end of the line to set C1 as the deepest C-state for idle cores:

# created by imagebuilder
default=0
timeout=1
hiddenmenu
title Amazon Linux 2014.09 (3.14.26-24.46.amzn1.x86_64)
root (hd0,0)
kernel /boot/vmlinuz-3.14.26-24.46.amzn1.x86_64 root=LABEL=/ console=ttyS0 intel_idle.max_cstate=1
initrd /boot/initramfs-3.14.26-24.46.amzn1.x86_64.img

Save the file and exit your editor. Reboot your instance to enable the new kernel option. Now, rerun the turbostat command
to see what changed after setting the C-state to C1:

sudo turbostat --debug stress -c 2 -t 10

Here is a truncated printout of the results of running the command:

The output shows that all of the cores are now at a C-state of C1. The maximum average frequency of the two vCPUs that were stressed (vCPUs 16 and 2 in this example) is 3.2 GHz (Bzy_MHz). The maximum turbo of 3.5 GHz is no longer available, since all of the vCPUs are at C1. Another way to verify that the C-state is set to C1 is to run the following command:

cat /sys/module/intel_idle/parameters/max_cstate

Finally, you may be wondering what the performance cost is when a core switches from C6 to C1. You can query the cpuidle file to show the exit latency, in microseconds, for various C-states. There is a latency penalty each time the CPU transitions between C-states.

In the default C-state, cpuidle shows that moving from C6 to C0 requires 133 microseconds:

$ find /sys/devices/system/cpu/cpu0/cpuidle -name latency -o -name name | xargs cat
POLL 0
C1-HSW 2
C1E-HSW 10
C3-HSW 33
C6-HSW 133

After you change the C-state default to C1, you can see the difference in cpuidle. Now we see that moving from C1 to C0 takes only 2 microseconds. We have cut the latency by 131 microseconds by setting the vCPUs to C1.

$ find /sys/devices/system/cpu/cpu0/cpuidle -name latency -o -name name | xargs cat
POLL 0
C1-HSW 2

The instructions above are only relevant for the c4.8xlarge instance. For the c4.4xlarge instance (and smaller instance sizes in the C4 family), the C-state is already at C1, and the 3.2 GHz all-core turbo is available by default. Turbostat will not show that the processors are exceeding the base of 2.9 GHz. One problem is that, even when using the --debug option for turbostat, the c4.4xlarge instance does not show the AVG_MHz or the Bzy_MHz values like in the
output shown above for the c4.8xlarge instance. One way to verify that the vCPUs on the c4.4xlarge instance are hitting the 3.2 GHz all-core turbo is to use the showboost script from Brendan Gregg.20

For this to work on Amazon Linux, you need to install the msr tools. To do this, run these commands:

sudo yum groupinstall "Development Tools"
wget https://launchpad.net/ubuntu/+archive/primary/+files/msr-tools_1.3.orig.tar.gz
tar -zxvf msr-tools_1.3.orig.tar.gz
cd msr-tools-1.3
make
sudo make install
wget https://raw.githubusercontent.com/brendangregg/msr-cloud-tools/master/showboost
chmod +x showboost
sudo ./showboost

The output only shows vCPU 0, but you can modify the options section to change the vCPU that will be displayed. To show the CPU frequency, run your game server (or use turbostat stress), and then run the showboost command to view the frequency for a vCPU.

Irqbalance

Irqbalance is a service that distributes interrupts over the cores in the system to improve performance. Irqbalance is recommended for most use cases, except where you are pinning game servers to specific vCPUs or cores; in that case, disabling irqbalance may make sense. Please test this with your specific workloads to see if there is a difference.

By default, irqbalance is running on the C4 instance family. To check if irqbalance is running on your instance, run the following command:

sudo service irqbalance status

Irqbalance can be configured in the /etc/sysconfig/irqbalance file. You want to see a fairly even distribution of interrupts across all the vCPUs. You can view the status of interrupts to see if they are properly being distributed across vCPUs by running the following command:

cat /proc/interrupts

Hyperthreading

Each vCPU on the C4 instance family is a hyperthread of a physical core. Hyperthreading can be disabled if you determine
that this has a detrimental impact on the performance of your application. However, many gaming customers do not find a need to disable hyperthreading. The table below shows the number of physical cores in each C4 instance size.

Instance Name    vCPU Count    Physical Core Count
c4.large         2             1
c4.xlarge        4             2
c4.2xlarge       8             4
c4.4xlarge       16            8
c4.8xlarge       36            18

All of the vCPUs can be viewed by running the following:

cat /proc/cpuinfo

To get more specific output, you can use the following:

egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo

In this output, the "processor" is the vCPU number. The "physical id" shows the processor socket ID. For any C4 instance other than the c4.8xlarge, this will be 0. The "core id" is the physical core number. Entries that have the same "physical id" and "core id" are hyperthreads of the same core.

Another way to view the vCPU pairs (i.e., hyperthreads) of each core is to look at the thread_siblings_list for each core. This shows two numbers, which are the vCPUs for each core. Change the X in "cpuX" to the vCPU number that you want to view:

cat /sys/devices/system/cpu/cpuX/topology/thread_siblings_list

To disable hyperthreading, edit the /boot/grub/grub.conf file with your editor of choice. For the first entry that begins with "kernel" (there may be more than one kernel entry; you only need to edit the first one), add maxcpus=NUMBER at the end of the line, where NUMBER is the number of actual cores in the C4 instance size you are using. Refer to the table above for the number of physical cores in each C4 instance size.

# created by imagebuilder
default=0
timeout=1
hiddenmenu
title Amazon Linux 2014.09 (3.14.26-24.46.amzn1.x86_64)
root (hd0,0)
kernel /boot/vmlinuz-3.14.26-24.46.amzn1.x86_64 root=LABEL=/ console=ttyS0 maxcpus=18
initrd /boot/initramfs-3.14.26-24.46.amzn1.x86_64.img

Save the file and exit your editor. Reboot your instance to enable the new
kernel option.

Again, this is one of those settings that you should test to determine if it provides a performance boost for your game. This setting would likely need to be combined with CPU pinning before it would provide any performance boost; in fact, disabling hyperthreading without using pinning may degrade performance. Many major AAA games running on AWS do not actually disable hyperthreading. If there is no performance boost, you can skip this setting and avoid the administrative overhead of having to maintain it on each of your game servers.

CPU Pinning

Many of the game server processes we see usually have a main thread and then a few ancillary threads. Pinning the process for each game server to a core (either a vCPU or physical core) is definitely an option, but not a configuration we often see. Usually, pinning is done in situations where the game engine truly needs exclusive access to a core. Often, game companies simply allow the Linux scheduler to handle this. Again, this is something that should be tested, but if the performance is sufficient without pinning, you can save yourself administrative overhead by not having to worry about pinning.

As will be discussed in the NUMA section, you can pin a process to both a CPU core and a NUMA node by running the following command (replacing the values for $phys_cpu_core and $associated_numa_node, in addition to the game_server_executable name):

numactl --physcpubind=$phys_cpu_core --membind=$associated_numa_node ./game_server_executable

Linux Scheduler

The default Linux scheduler is called the Completely Fair Scheduler (CFS),21 and it is responsible for executing processes by taking care of the allocation of CPU resources. The primary goal of CFS is to maximize utilization of the vCPUs and, in turn, provide the best overall performance. If you don't pin game server processes to a vCPU, then the Linux scheduler assigns threads for these processes.
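If you are letting the scheduler make these placement decisions, it can be useful to see which vCPU each of your game server's threads actually landed on. The following is a minimal sketch that reads the PSR (processor) column from ps; the process name game_server is a hypothetical placeholder for your server binary.

```shell
#!/bin/bash
# Minimal sketch: list the vCPU (PSR column) that the Linux scheduler
# last ran each thread of a process on. "game_server" below is a
# hypothetical placeholder for your game server process name.
show_thread_placement() {
    local name=$1
    # -L lists one row per thread; psr is the vCPU the thread last ran on
    ps -eLo pid,tid,psr,comm | awk -v n="$name" 'NR == 1 || $4 == n'
}

show_thread_placement game_server
```

Comparing the PSR values against the vCPUs you reserved with isolcpus earlier in this paper is a quick way to confirm that the scheduler is honoring that setting.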
There are a few parameters for tuning the Linux scheduler that can help with game servers. The primary goal of the three parameters documented below is to keep tasks on processors as long as is reasonable, given the activity of the task. We focus on the scheduler minimum granularity, the scheduler wakeup granularity, and the scheduler migration cost values.

To view the default values of all of the kernel.sched options, run the following command:

sudo sysctl -A | grep -v "kernel.sched_domain" | grep "kernel.sched"

The scheduler minimum granularity value configures the time a task is guaranteed to run on a CPU before being replaced by another task. By default, this is set to 3 ms on the C4 instance family when running Amazon Linux. This value can be increased to keep tasks on the processors longer. An option would be to double this setting to 6 ms. Like all other performance recommendations in this whitepaper, these settings should be tested thoroughly with your game server. This and the other two scheduler commands do not persist the setting across reboots, so they need to be run in a startup script:

sudo sysctl -w 'kernel.sched_min_granularity_ns=New_Value'

The scheduler wakeup granularity value affects the ability of tasks being woken to replace the currently running task. The lower the value, the easier it will be for the woken task to force removal. By default, this is set to 4 ms on the C4 instance family when running Amazon Linux. You have the option of halving this value to 2 ms and testing the result. Further reductions may also improve the performance of your game server.

sudo sysctl -w 'kernel.sched_wakeup_granularity_ns=New_Value'

The scheduler migration cost value sets the duration of time after a task's last execution during which the task is still considered "cache hot" when the scheduler makes migration decisions. Tasks that are "cache hot" are less likely to be migrated, which helps reduce the
possibility that the task will be migrated. By default, this is set to 4 ms on the C4 instance family when running Amazon Linux. You have the option to double this value to 8 ms and test.

sudo sysctl -w 'kernel.sched_migration_cost_ns=New_Value'

Memory

It is important that any customers running game servers on the c4.8xlarge instance pay close attention to the NUMA information.

Performance Tuning Option: NUMA
Summary: On the c4.8xlarge, NUMA can become an issue, since there are two NUMA nodes.
Notes: None of the C4 instance sizes smaller than the c4.8xlarge will have NUMA issues, since they all have one NUMA node.
Links or Commands: There are three options to deal with NUMA: CPU pinning, NUMA balancing, and the numad process.

Performance Tuning Option: Virtual Memory
Summary: A few virtual memory tweaks can provide a performance boost for some game servers.
Links or Commands:
Add the following to /etc/sysctl.conf: vm.swappiness = New_Value (recommend starting by halving the current value set for your system)
Add the following to /etc/sysctl.conf: vm.dirty_ratio = New_Value (recommend going with the default value of 20 on Amazon Linux)
Add the following to /etc/sysctl.conf: vm.dirty_background_ratio = New_Value (recommend going with the default value of 10 on Amazon Linux)

NUMA

All of the current generation EC2 instances support NUMA. NUMA is a memory architecture used in multiprocessing systems that allows threads to access both local memory and memory local to other processors, or a shared memory platform. The key concern here is that remote memory access is much slower than local memory access: there is a performance penalty when a thread accesses remote memory, and there are issues with interconnect contention. For an application that is not able to take advantage of NUMA, you want to ensure that the processor uses only local memory as much as possible. This is only an issue for the
c4.8xlarge instance, because you have access to two processor sockets that each represent a separate NUMA node. NUMA is not a concern on the smaller instances in the C4 family, since you are limited to a single NUMA node. In addition, the NUMA topology will remain fixed for the lifetime of an instance.

The c4.8xlarge instance has two NUMA nodes. To view details on these nodes and the vCPUs that are associated with each node, run the following command:

numactl --hardware

To view the NUMA policy settings, run:

numactl --show

You can also view this information in the following directory (just look in each of the NUMA node directories):

/sys/devices/system/node

Use the numastat tool to view per-NUMA-node memory statistics for processes and the operating system. The -p option allows you to view this for a single process, while the -v option provides more verbose data:

numastat -p process_name
numastat -v

CPU Pinning

There are three recommended options to address potential NUMA performance issues. The first is to use CPU pinning, the second is automatic NUMA balancing, and the last is to use numad. These options should be tested to determine which provides the best performance for your game server.

First, we will look at CPU pinning. This involves binding the game server process both to a vCPU (or core) and to a NUMA node. You can use numactl to do this. Change the values for $phys_cpu_core and $associated_numa_node, in addition to the game_server_executable name, in the following command for each game server running on the instance. See the numactl man page for additional options.22

numactl --physcpubind=$phys_cpu_core --membind=$associated_numa_node game_server_executable

Automatic NUMA Balancing

The next option is to use automatic NUMA balancing. This feature attempts to keep the threads or processes in the processor
socket where the memory that they are using is located. It also tries to move application data to the processor socket for the tasks accessing it. As of Amazon Linux AMI 2016.03, automatic NUMA balancing is disabled by default.23

To check if automatic NUMA balancing is enabled on your instance, run the following command:

cat /proc/sys/kernel/numa_balancing

To permanently enable or disable NUMA balancing, set Value to 0 (disable) or 1 (enable) and run the following commands:

sudo sysctl -w 'kernel.numa_balancing=Value'
echo 'kernel.numa_balancing = Value' | sudo tee /etc/sysctl.d/50-numa-balancing.conf

Again, these instructions are for Amazon Linux; some distributions may set this in the /etc/sysctl.conf file.

Numad

Numad is the final option to look at. Numad is a daemon that monitors the NUMA topology and works to keep processes on the NUMA node for their core. It is able to adjust to changes in system conditions. The article Mysteries of NUMA Memory Management Revealed explains the performance differences between automatic NUMA balancing and numad.24

To use numad, you need to disable automatic NUMA balancing first. To install numad on Amazon Linux, visit the Fedora numad site and download the most recent stable commit.25 From the numad directory, run the following commands to install numad:

sudo yum groupinstall "Development Tools"
wget https://git.fedorahosted.org/cgit/numad.git/snapshot/numad-0.5.tar.gz
tar -zxvf numad-0.5.tar.gz
cd numad-0.5
make
sudo make install

The logs for numad can be found in /var/log/numad.log, and there is a configuration file in /etc/numad.conf.

There are a number of ways to run numad. The numad -u option sets the maximum usage percentage of a node; the default is 85%. The recommended setting covered in the Mysteries of NUMA article is -u100, which configures the maximum to 100%. This forces processes to stay on the local NUMA node up to 100% of
their memory requirement:

sudo numad -u100

Numad can be terminated by using the following command:

sudo /usr/bin/numad -i0

Finally, disabling NUMA completely is not a good choice, because you will still have the problem of remote memory access, so it is better to work with the NUMA topology. For the c4.8xlarge instance, we recommend taking some action for most game servers. We recommend testing the available options that we discussed to determine which provides the best performance. While none of these options may eliminate memory calls to the remote NUMA node for a process, they each should provide a better experience for your game server.

You can test how well an option is doing by running your game servers on the instance and using the following command to see if there are any numa_foreign (i.e., memory allocated to the other NUMA node but meant for this node) and numa_miss (i.e., memory allocated to this node but meant for the other NUMA node) entries:

numastat -v

A more general way to test for NUMA issues is to use a tool like stress and then run numastat to see if there are foreign/miss entries:

stress --vm-bytes $(awk '/MemFree/{printf "%d\n", $2 * 0.097;}' < /proc/meminfo)k --vm-keep -m 10

Virtual Memory

There are also a few virtual memory tweaks that we see customers use that may provide a performance boost. Again, these should be tested thoroughly to determine if they improve the performance of your game.

VM Swappiness

VM swappiness controls how the system favors anonymous memory or the page cache. Low values reduce the occurrence of swapping processes out of memory, which can decrease latency but reduce I/O performance. Possible values are 0 to 100. The default value on Amazon Linux is 60. The recommendation is to start by halving that value and then testing; further reductions in the value may also help your game server performance. To view the current value, run the following command:
/proc/sys/vm/swappiness To configure this parameter to persist across reboots add the following with the new value to the /etc/sysctlconf file: ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 33 vmswappiness = New_Value VM Dirty Ratio VM Dirty Ratio forces a process to block and write out dirty pages to disk when a certain percentage of the system memory becomes dirty The possible values are 0 to 100 The default on Amazon Linux is 20 and is the recommended value To view the current value run the following command: cat /proc/sys/vm/ dirty_ratio To configure this parameter to persist across reboots add the following with the new value to the /etc/sysctlconf file: vmdirty_ratio = New_Value VM Dirty Background Ratio VM Dirty Background Ratio forces the system to start writing data to disk when a certain percentage of the system memory becomes dirty Possible values are 0 to 100 The default value on Amazon Linux is 10 and is the recommended value To view the current value run the following command: cat /proc/sys/vm/dirty_background_ratio To configure this parameter to persist across reboots add the following with the recommended value to the /etc/sysctlconf file: dirty_background_ratio= 10 ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 34 Disk Performance tuning for disk is the least critical because disk is rarely a bottleneck for multiplayer game servers We have not seen customers experience any disk performance issues on the C4 instance family The C4 instance family only uses Amazon Elastic Block Store (EBS) for storage with no local instance storage; so C4 instances are EBSoptimized by default26 Amazon EBS can provide up to 48000 IOPS if needed You can take standard disk performance steps such as using a separate boot and OS/game EBS volume Performance Tuning Option Summary Notes Links or Commands EBS Performance C4 instances are EBSoptimized by default IOPS can be configured to fit 
the requirements of the game server; no specific links or commands apply.

Benchmarking and Testing

Benchmarking

There are many ways to benchmark Linux. One option you may find useful is the Phoronix Test Suite [27]. This open-source, PHP-based suite provides a large number of benchmarking (and testing) options. You can run tests against existing benchmarks to compare results after successive tests, and you can upload the results to OpenBenchmarking.org for online viewing and comparison [28]. There are many benchmarks available, and most can be found on the OpenBenchmarking.org tests site [29]. Some tests that can be useful for benchmarking in preparation for a game server are the cpu [30], multicore [31], processor [32], and universe [33] tests. These tests usually involve multiple subtests. Be aware that some of the subtests may not be available for download or may not run properly.

To get started, you need to install the prerequisites first:

sudo yum groupinstall "Development Tools" -y
sudo yum install php-cli php-xml -y
sudo yum install {libaio,pcre,popt}-devel glibc-{devel,static} -y

Next, download and install Phoronix:

wget https://github.com/phoronix-test-suite/phoronix-test-suite/archive/master.zip
unzip master.zip
cd phoronix-test-suite-master
./install-sh ~/directory-of-your-choice/phoronix-tester

To install a test, run the following from the bin subdirectory of the directory you specified when you ran the install-sh command:

phoronix-test-suite install <test-name>

To install and run a test, use:

phoronix-test-suite benchmark <test-name>

You can choose to have the results uploaded to OpenBenchmarking.org; this option is displayed at the beginning of the test. If you choose "yes," you can name the test run, and at the end a URL is provided where you can view all the test results. Once the results are uploaded, you can rerun a benchmark using the benchmark result number of previous tests so the results are displayed side by side with previous results. You can also choose not to upload the test results and instead view them in the command-line output:

phoronix-test-suite benchmark TEST_RESULT_NUMBER

The screenshot below shows an example of the output displayed on OpenBenchmarking.org for a set of multicore benchmark tests run on the c4.8xlarge instance.

CPU Performance Analysis

One of the best tools for CPU performance analysis, or profiling, is the Linux perf command [34]. Using this command, you can record and then analyze performance data using perf record and perf report, respectively. Performance analysis is beyond the scope of this whitepaper, but a couple of great resources are the kernel.org wiki and Brendan Gregg's perf resources [35]. The next section describes how to produce flame graphs using perf to analyze CPU usage.

Visual CPU Profiling

A common issue that comes up during game server testing is that while multiple game servers are running (often unpinned to vCPUs), one vCPU will hit near 100% utilization while the other vCPUs show low utilization. Troubleshooting this type of performance problem, and other similar CPU issues, can be a complex and time-consuming process that basically involves looking at the functions running on the CPUs and finding the code paths that are the most CPU heavy. Brendan Gregg's flame graphs allow you to visually analyze and troubleshoot potential CPU performance issues [36]; they let you quickly and easily identify the functions used most frequently during the window visualized. There are multiple types of flame graphs, including graphs for memory leaks, but we will focus on CPU flame graphs [37]. We will use the perf command to generate the underlying data and then the flame graphs
to create the visualization. First, install the prerequisites:

# Install perf
sudo yum install perf

# Remove the need to use root for running perf record
sudo sh -c 'echo 0 > /proc/sys/kernel/perf_event_paranoid'

# Download the flame graph tools
wget https://github.com/brendangregg/FlameGraph/archive/master.zip

# Finally, unzip the downloaded file. This creates a directory called
# FlameGraph-master where the flame graph executables are located.
unzip master.zip

To see interesting data in the flame graph, you need to run either your game server or a CPU stress tool. Once that is running, you run a perf profile recording. You can run the perf record against all vCPUs, against specific vCPUs, or against particular PIDs. The options are as follows:

-F     Frequency for the perf record; 99 Hz is usually sufficient for most use cases
-g     Used to capture stack traces (as opposed to on-CPU functions or instructions)
-C     Used to specify the vCPUs to trace
-a     Used to specify that all vCPUs should be traced
sleep  Specifies the number of seconds for the perf record to run

The following are the common commands for running a perf record for a flame graph, depending on whether you are looking at all the vCPUs or just one. Run these commands from the FlameGraph-master directory:

# Run perf record on all vCPUs
perf record -F 99 -a -g -- sleep 60

# Run perf record on specific vCPUs, specified by number after the -C option
perf record -F 99 -C CPU_NUMBER -g -- sleep 60

When the perf record is complete, run the following commands to produce the flame graph:

# Create the perf file. When you run this, you may get an error about
# "no symbols found"; this can be ignored since we are generating the
# data for flame graphs.
perf script > out.perf

# Use the stackcollapse program to fold stack samples into single lines
./stackcollapse-perf.pl out.perf > out.folded

# Use flamegraph.pl to render an SVG
./flamegraph.pl out.folded > kernel.svg

Finally, use a tool like WinSCP to copy the SVG file to your desktop so you can view it.

Below are two examples of flame graphs. The first was produced on a c4.8xlarge instance for 60 seconds while sysbench was running using the following options:

for each in 1 2 4 8 16; do sysbench --test=cpu --cpu-max-prime=20000 --num-threads=$each run; done

You can see how little of the total CPU processing on the instance was actually devoted to sysbench. You can hover over various elements of the flame graphs to get additional details about the number of samples and the percentage spent in each area.

The second graph was produced on the same c4.8xlarge instance for 60 seconds while running the following script:

(fulload() { dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null & }; fulload; read; killall dd)

The output presents a more interesting set of actions taking place under the hood.

Conclusion

The purpose of this whitepaper is to show you how to tune your EC2 instances to optimally run game servers on AWS. It focuses on performance optimization of the network, CPU, and memory on the C4 instance family when running game servers on Linux. Disk performance is a smaller concern because disk is rarely a bottleneck when it comes to running game servers. This whitepaper is meant to be a central compendium of information on the compute instances to help you run your game servers on AWS. We hope this guide saves you a lot of time by calling out key information, performance recommendations, and caveats to help you
get up and running quickly using AWS and make your game launch as successful as possible.

Contributors

The following individuals and organizations contributed to this document:

• Greg McConnel, Solutions Architect, Amazon Web Services
• Todd Scott, Solutions Architect, Amazon Web Services
• Dhruv Thukral, Solutions Architect, Amazon Web Services

Notes

1. https://aws.amazon.com/ec2/
2. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/c4-instances.html
3. https://en.wikipedia.org/wiki/Advanced_Vector_Extensions
4. https://aws.amazon.com/vpc/
5. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
6. https://aws.amazon.com/about-aws/global-infrastructure/
7. https://aws.amazon.com/ec2/faqs/#Enhanced_Networking
8. https://en.wikipedia.org/wiki/Single-root_input/output_virtualization
9. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html
10. http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/enhanced-networking-windows.html
11. https://downloadcenter.intel.com/download/18700/Network-Adapter-Virtual-Function-Driver-for-10-Gigabit-Network-Connections
12. https://www.kernel.org/doc/Documentation/networking/scaling.txt
13. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
14. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
15. https://aws.amazon.com/premiumsupport/knowledge-center/network-throughput-benchmark-linux-ec2/
16. https://aws.amazon.com/premiumsupport/knowledge-center/network-throughput-benchmark-windows-ec2/
17. https://www.kernel.org/doc/Documentation/timers/timekeeping.txt
18. https://xenbits.xen.org/docs/4.3-testing/misc/tscmode.txt
19. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/processor_state_control.html
20. https://raw.githubusercontent.com/brendangregg/msr-cloud-tools/master/showboost
21. https://en.wikipedia.org/wiki/Completely_Fair_Scheduler
22. http://linux.die.net/man/8/numactl
23. https://aws.amazon.com/amazon-linux-ami/2016.03-release-notes/
24. http://rhelblog.redhat.com/2015/01/12/mysteries-of-numa-memory-management-revealed/#more-599
25. https://git.fedorahosted.org/git/numad.git
26. https://aws.amazon.com/ebs/
27. http://www.phoronix-test-suite.com/
28. http://openbenchmarking.org/
29. http://openbenchmarking.org/tests/pts
30. http://openbenchmarking.org/suite/pts/cpu
31. http://openbenchmarking.org/suite/pts/multicore
32. http://openbenchmarking.org/suite/pts/processor
33. http://openbenchmarking.org/suite/pts/universe
34. https://perf.wiki.kernel.org/index.php/Main_Page
35. http://www.brendangregg.com/perf.html
36. http://www.brendangregg.com/flamegraphs.html
37. http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html

Optimizing MySQL Running on Amazon EC2 Using Amazon EBS

First Published November 2017; Updated December 7, 2021

This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/optimizing-mysql-on-ec2-using-amazon-ebs/optimizing-mysql-on-ec2-using-amazon-ebs.html

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and
its customers. © 2021, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

• Introduction
• Terminology
• MySQL on AWS deployment options
• Amazon EC2 block-level storage options
• EBS volume features: EBS monitoring; EBS durability and availability; EBS snapshots; EBS security; Elastic Volumes
• EBS volume types: General Purpose SSD volumes; Provisioned IOPS SSD (io1) volumes
• MySQL considerations: Caching; Database writes; MySQL read replica configuration; MySQL replication considerations; Switching from a physical environment to AWS
• MySQL backups: Backup methodologies; Creating snapshots of an EBS RAID array
• Monitoring MySQL and EBS volumes: Latency; Throughput
• MySQL benchmark observations and considerations: The test environment; Tuned compared to default configuration parameter testing; Comparative analysis of different storage types
• Conclusion
• Contributors
• Further reading
• Document revisions

Abstract

This whitepaper is intended for Amazon Web Services (AWS) customers who are considering deploying their MySQL database on Amazon Elastic Compute Cloud (Amazon EC2) using Amazon Elastic Block Store (Amazon EBS) volumes. This whitepaper describes the features of EBS volumes and how they can affect the security, availability, durability, cost, and performance of MySQL databases. There are many deployment options and configurations for MySQL on Amazon EC2. This whitepaper provides performance benchmark metrics and general guidance so AWS customers can make an informed decision about whether to deploy their MySQL workloads on Amazon EC2.

Introduction

MySQL is one of the world's most popular open-source relational database engines. Its unique storage architecture provides many different ways of customizing the database configuration according to the needs of your application. It supports transaction processing and high-volume operations. Apart from the robustness of the database engine, another benefit of MySQL is its low total cost of ownership. Several companies are moving their MySQL workloads into the cloud to extend the cost and performance benefits. AWS offers many compute and storage options that can help you optimize your MySQL deployments.

Terminology

The following definitions are for the common terms that will be referenced throughout this paper:

• IOPS — Input/output (I/O) operations per second (ops/s)
• Throughput — Read/write transfer rate to storage (MB/s)
• Latency — Delay between sending an I/O request and receiving an acknowledgment (ms)
• Block size — Size of each I/O (KB)
• Page size — Internal basic structure to organize the data in the database files (KB)
• Amazon Elastic Block Store (Amazon EBS) volume — Persistent block-level storage devices for use with Amazon Elastic Compute Cloud (Amazon EC2) instances. This whitepaper focuses on solid state drive (SSD) EBS volume types optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS.
• Amazon EBS General Purpose SSD volume — General Purpose SSD
volume that provides a balance of price and performance. AWS recommends these volumes for most workloads. Currently, AWS offers two types of General Purpose SSD volumes: gp2 and gp3.
• Amazon EBS Provisioned IOPS SSD volume — The highest-performance SSD volume, designed for mission-critical, low-latency, or high-throughput workloads. Currently, AWS offers two types of Provisioned IOPS SSD volumes: io1 and io2.
• Amazon EBS Throughput Optimized hard disk drive (HDD) (st1) volume — Low-cost HDD volume designed for frequently accessed, throughput-intensive workloads.

MySQL on AWS deployment options

AWS provides various options to deploy MySQL, such as the fully managed database service Amazon Relational Database Service (Amazon RDS) for MySQL. The Amazon Aurora database engine is designed to be wire-compatible with MySQL 5.6 and 5.7 using the InnoDB storage engine. You can also host MySQL on Amazon EC2 and self-manage the database, or browse the third-party MySQL offerings on the AWS Marketplace. This whitepaper explores the implementation and deployment considerations for MySQL on Amazon EC2 using Amazon EBS for storage.

Although Amazon RDS and Amazon Aurora with MySQL compatibility are a good choice for most use cases on AWS, deployment on Amazon EC2 might be more appropriate for certain MySQL workloads. With Amazon RDS you can connect to the database itself, which gives you access to the familiar capabilities and configurations in MySQL; however, access to the operating system (OS) isn't available. This is an issue when you need OS-level access for specialized configurations that rely on low-level OS settings, such as when using MySQL Enterprise tools. For example, enabling MySQL Enterprise Monitor requires OS-level access to gather monitoring information. As another example, MySQL Enterprise Backup requires OS-level access to the MySQL data directory. In such cases, running MySQL on Amazon EC2 is a better alternative.

MySQL can be scaled vertically by adding hardware resources (CPU, memory, disk, network) to the same server. For both Amazon RDS and Amazon EC2, you can change the EC2 instance type to match the resources required by your MySQL database. Amazon Aurora provides a Serverless MySQL-Compatible Edition that allows compute capacity to be auto-scaled on demand based on application needs. Both Amazon RDS and Amazon EC2 have an option to use EBS General Purpose SSD and EBS Provisioned IOPS volumes. The maximum provisioned storage limit for Amazon RDS database (DB) instances running MySQL is 64 TB; an EBS volume for MySQL on Amazon EC2, by comparison, supports up to 16 TB per volume.

Horizontal scaling is also an option in MySQL, where you can add MySQL secondary servers, or read replicas, to accommodate additional read traffic to your database. With Amazon RDS, you can easily enable this option through the AWS Management Console, the Command Line Interface (CLI), or the REST API. Amazon RDS for MySQL allows up to five read replicas. There are certain cases where you might need to enable specific MySQL replication features; some of these features may require OS access to MySQL or advanced privileges to access certain system procedures and tables.

MySQL on Amazon EC2 is an alternative to Amazon RDS and Aurora for certain use cases. It allows you to migrate new or existing workloads that have very specific requirements. Choosing the right compute, network, and, especially, storage configurations, while taking advantage of their features, plays a crucial role in achieving good performance at an optimal cost for your MySQL workloads.

Amazon EC2 block-level storage options

There are two block-level storage options for EC2 instances. The first option is an instance store, which consists of one or more instance store volumes exposed as block I/O devices. An instance store volume is a disk that is physically attached to the host computer that runs the EC2 virtual machine (VM). You must specify instance store volumes when you launch the EC2 instance. Data on instance store volumes does not persist if the instance stops or ends, or if the underlying disk drive fails.

The second option is an EBS volume, which provides off-instance storage that persists independently from the life of the instance. The data on the EBS volume persists even if the EC2 instance that the volume is attached to shuts down or there is a hardware failure on the underlying host; the data persists until the volume is explicitly deleted. Refer to Solid state drives (SSD) in the AWS documentation for details about SSD-backed EBS volumes.

Due to the immediate proximity of the instance to the instance store volume, the I/O latency to an instance store volume tends to be lower than to an EBS volume. Use cases for instance store volumes include acting as a layer of cache or buffer, storing temporary database tables or logs, or providing storage for read replicas. For a list of the instance types that support instance store volumes, refer to Amazon EC2 instance store within the Amazon EC2 User Guide for Linux Instances. The remainder of this paper focuses on EBS volume-backed EC2 instances.
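The storage discussion above, and the volume-type comparison that follows, turn on the relationship defined under Terminology between IOPS, block size, and throughput. The following shell sketch illustrates the arithmetic; the IOPS and block-size figures are illustrative examples chosen for this sketch, not AWS-published limits for any particular volume type.

```shell
#!/bin/sh
# Throughput (MiB/s) = IOPS x block size (KiB) / 1024.
# Illustrative example: 16,000 IOPS at a 16 KiB I/O size.
iops=16000
block_kib=16
throughput_mib=$(( iops * block_kib / 1024 ))
echo "${iops} IOPS x ${block_kib} KiB = ${throughput_mib} MiB/s"
# prints: 16000 IOPS x 16 KiB = 250 MiB/s
```

Doubling the I/O size at the same IOPS doubles the throughput, which is why small-block transactional workloads tend to be IOPS-bound while large sequential transfers are throughput-bound.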
EBS volume features

EBS monitoring

Amazon EBS automatically sends data points to Amazon CloudWatch at one-minute intervals at no charge. Amazon CloudWatch metrics are statistical data that you can use to view, analyze, and set alarms on the operational behavior of your volumes. The EBS metrics can be viewed by selecting the monitoring tab of the volume in the Amazon EC2 console. For more information about the EBS metrics collected by CloudWatch, refer to Amazon CloudWatch metrics for Amazon EBS.

EBS durability and availability

Durability in the storage subsystem for MySQL is especially important if you are storing user data, valuable production data, and individual data points. EBS volumes are designed for reliability, with a 0.1 percent to 0.2 percent annual failure rate (AFR), compared to the typical 4 percent of commodity disk drives. EBS volumes are backed by multiple physical drives for redundancy, replicated within the Availability Zone to protect your MySQL workload from component failure.

EBS snapshots

You can perform backups of your entire MySQL database using EBS snapshots. These snapshots are stored in Amazon Simple Storage Service (Amazon S3), which is designed for 99.999999999% (11 nines) of durability. To satisfy your recovery point and recovery time objectives, you can schedule EBS snapshots using Amazon CloudWatch Events. Apart from providing backup, other reasons for creating EBS snapshots of your MySQL database include:

• Setting up a non-production or test environment — You can share the EBS snapshot to duplicate the installation of MySQL in different environments, and also share it between different AWS accounts within the same Region. For example, you can restore a snapshot of your production MySQL database to a test environment to duplicate and troubleshoot production issues.
• Disaster recovery — EBS snapshots can be copied from one AWS Region to another for site disaster recovery.

A volume that is restored from a snapshot loads slowly in the background, which means that you can start using your MySQL database right away. When a MySQL query touches a table that has not been downloaded yet, the data is downloaded from Amazon S3. You also have the option of enabling Amazon EBS fast snapshot restore to create a volume from a snapshot that is fully initialized at creation; refer to Amazon EBS fast snapshot restore for more information. Best practices for restoring EBS snapshots are discussed in the MySQL backups section of this whitepaper.

EBS security

Amazon EBS supports several security features you can use from volume creation to utilization. These features prevent unauthorized access to your MySQL data.

You can use tags and resource-level permissions to enforce security on your volumes upon creation. Tags are key-value pairs that you can assign to your AWS resources as part of infrastructure management. Tags are typically used to track resources, control cost, implement compliance protocols, and control access to resources through AWS Identity and Access Management (IAM) policies. You can assign tags to EBS volumes at creation time, which allows you to enforce the management of your volume as soon as it is created. Additionally, you can have granular control over who can create or delete tags through IAM resource-level permissions. This granularity of control extends to the RunInstances and CreateVolume APIs, where you can write IAM policies that require encryption of the EBS volume upon creation. After the volume is created, you can use the IAM resource-level permissions for Amazon EC2 API actions to specify the authorized IAM users or groups who can attach, delete, or detach EBS volumes to EC2
instances.

Protection of data in transit and at rest is crucial in most MySQL implementations. You can use Secure Sockets Layer (SSL) to encrypt the connection from your application to your MySQL database. To encrypt your data at rest, you can enable volume encryption at creation time. The new volume gets a unique 256-bit AES key, which is protected by the fully managed AWS Key Management Service. EBS snapshots created from encrypted volumes are automatically encrypted. The Amazon EBS encryption feature is available on all current-generation instance types; for more information on the supported instance types, refer to the Amazon EBS Encryption documentation.

Elastic Volumes

The Elastic Volumes feature of EBS SSD volumes allows you to dynamically change the size, performance, and type of an EBS volume in a single API call, or within the AWS Management Console, without any interruption to MySQL operations. This simplifies some of the administration and maintenance activities of MySQL workloads running on current-generation EC2 instances. You can call the ModifyVolume API to dynamically increase the size of the EBS volume if the MySQL database is running low on usable storage capacity. Note that decreasing the size of the EBS volume isn't supported, so AWS recommends that you do not over-allocate the EBS volume size any more than necessary, to avoid paying for extra resources that you do not use.

In situations where there is a planned increase in your MySQL utilization, you can either change your volume type or add additional IOPS. The time it takes to complete these changes depends on the size of your MySQL volume. You can monitor the progress of the volume modification through either the AWS Management Console or the CLI. You can also create CloudWatch Events to send alerts after the changes are complete.

EBS volume types

General Purpose SSD volumes

General Purpose SSD volumes are designed to provide a balance of price and performance. The General Purpose SSD (gp3) volumes offer cost-effective storage that is ideal for a broad range of database workloads. These volumes deliver a consistent baseline rate of 3,000 IOPS and 125 MiB/s, included with the price of storage. You can provision additional IOPS (up to 16,000) and throughput (up to 1,000 MiB/s) for an additional cost. The maximum ratio of provisioned IOPS to provisioned volume size is 500 IOPS per GiB, and the maximum ratio of provisioned throughput to provisioned IOPS is 0.25 MiB/s per IOPS. The following volume configurations support provisioning either maximum IOPS or maximum throughput:

• 32 GiB or larger: 500 IOPS/GiB x 32 GiB = 16,000 IOPS
• 8 GiB or larger and 4,000 IOPS or higher: 4,000 IOPS x 0.25 MiB/s/IOPS = 1,000 MiB/s

The older General Purpose SSD (gp2) volume is also a good option because it, too, offers balanced price and performance. To maximize the performance of a gp2 volume, you need to know how the burst bucket works. The size of the gp2 volume determines the baseline performance level of the volume and how quickly it can accumulate I/O credits. Depending on the volume size, baseline performance ranges from a minimum of 100 IOPS up to a maximum of 16,000 IOPS. Volumes earn I/O credits at the baseline performance rate of 3 IOPS/GiB of volume size: the larger the volume, the higher the baseline performance and the faster I/O credits accumulate. Refer to General Purpose SSD volumes (gp2) for more information related to I/O characteristics and
burstable performance of gp2 volumes In addition to changing the volume type size and provisioned throughput (for gp3 only); you can also use RAID 0 to stripe multiple gp2 or gp3 volumes together to achieve greater I/O performance The RAID 0 configuration distributes the I/O across volumes in a stripe Adding an additional volume also increases the throughput of your MySQL database Throughput is the read/write transfer rate which is the I/O block size multipl ied by the IOPS rate performed on the disk AWS recommend s adding the same volume size into the stripe set since the performance of the stripe is limited to the worst performing volume in the set Also consider fault tolerance in RAID 0 A loss of a single volume results in a complete data loss for the array If possible use RAID 0 in a MySQL primary/secondary environment where data is already replicated in multiple secondary nodes Provisioned IOPS SSD (io1) volumes Provisioned IOPS SSD (io1 and io2) volumes are designed to meet the needs of I/O intensive workloads particularly database workloads that are sensitive to storage performance and consistency Provisioned IOPS SSD volumes use a consistent IOPS rate which you specify when you create the volume and Amazon EBS delivers the provisioned performance 999 percent of the time • io1 volumes are designed to provide 998 to 999 percent volu me durability with an annual failure rate (AFR) no higher than 02 percent which translates to a maximum of two volume failures per 1000 running volumes over a one year period • io2 volumes are designed to provide 99999 percent volume durability with an AFR no higher than 0001 percent which translates to a single volume failure per 100000 running volumes over a one year period This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingmysqlonec2usingamazonebs/optimizing mysqlonec2usingamazonebshtmlAmazon Web Services Optimizing MySQL Running on Amazon EC2 
The maximum ratio of provisioned IOPS to requested volume size (in GiB) is 50:1 for io1 volumes and 500:1 for io2 volumes. For example, a 100 GiB io1 volume can be provisioned with up to 5,000 IOPS, while a 100 GiB io2 volume can be provisioned with up to 50,000 IOPS.

To maximize volume throughput, AWS recommends using an EBS-optimized EC2 instance type (note that most new EC2 instances are EBS-optimized by default at no extra charge). This provides dedicated throughput between your EBS volume and EC2 instance. Because instance size and type affect volume throughput, choose an instance that has more channel bandwidth than the maximum throughput of the io1 volume. For example, an r5.12xlarge instance provides a maximum bandwidth of 9,500 Mbps. Therefore, it can more than handle the 1,187.5 MB/s maximum throughput of the io1 volume. Another approach to increasing io1 throughput is to configure RAID 0 on your EBS volumes. For more information about RAID configuration, refer to RAID configuration in the EC2 User Guide.

MySQL considerations

MySQL offers many parameters that you can tune to obtain optimal performance for every type of workload. This section focuses on the MySQL InnoDB storage engine. It also looks at the MySQL parameters that you can optimize to improve performance related to the I/O of EBS volumes.

Caching

Caching is an important feature in MySQL. Knowing when MySQL will perform a disk I/O instead of accessing the cache will help you tune for performance. When you read or write data, the InnoDB buffer pool caches your table and index data. This in-memory area sits between your read/write operations and the EBS volumes. Disk I/O occurs if the data you are reading isn't in the cache, or when data from dirty (that is, modified only in memory) InnoDB pages needs to be flushed to disk. The buffer pool uses a Least Recently Used (LRU) algorithm for cached pages. When you size the buffer pool too small, the buffer pages may have to be
constantly flushed to and from the disk, which affects performance and lowers query concurrency. The default size of the buffer pool is 128 MB. You can set this value to up to 80 percent of your server's memory; however, be aware that there may be paging issues if other processes consume memory. Increasing the size of the buffer pool works well when your dataset and queries can take advantage of it. For example, if you have one GiB of data and the buffer pool is configured at 5 GiB, then increasing the buffer pool size to 10 GiB will not make your database faster. A good rule of thumb is that the buffer pool should be large enough to hold your "hot" dataset, which is composed of the rows and indexes used by your queries. Starting with MySQL 5.7, innodb_buffer_pool_size can be set dynamically, which allows you to resize the buffer pool without restarting the server.

Database writes

InnoDB does not write directly to disk. Instead, it first writes the data into a doublewrite buffer. Dirty pages are the modified portions of these in-memory areas. The dirty pages are flushed if there isn't enough free space. The default setting (innodb_flush_neighbors = 1) results in sequential I/O by flushing contiguous dirty pages in the same extent from the buffer pool. This option should be turned off (by setting innodb_flush_neighbors = 0) so you can maximize performance by spreading the write operations over your EBS SSD volumes.

Another parameter that can be modified for write-intensive workloads is innodb_log_file_size. When the size of your log file is large, there are fewer data flushes, which reduces disk I/O. However, if your log file is too big, you will generally have a longer recovery time
after a crash. MySQL recommends that your log files be large enough that the MySQL server can spread checkpoint flush activity over a longer period. The recommendation from MySQL is to size the log file so that it can accommodate an hour of write activity.

MySQL read replica configuration

MySQL allows you to replicate your data so you can scale out your read-heavy workloads with a primary/secondary (read replica) configuration. You can create multiple copies of your MySQL database on one or more secondary databases to increase the read throughput of your application. The availability of your MySQL database can also be increased with a secondary: when a primary instance fails, one of the secondary servers can be promoted, reducing recovery time.

MySQL supports different replication methods. There is the traditional binary log file position based replication, where the primary's binary log is synchronized with the secondary's relay log. The following diagram shows the binary log file position based replication process.

Binary log file position based replication process

Replication between primary and secondary using global transaction identifiers (GTIDs) was introduced in MySQL 5.6. A GTID is a unique identifier created and associated with each transaction committed on the server of origin (the primary). This identifier is unique not only to the server on which it originated, but across all servers in a given replication setup. With GTID-based replication, it is no longer necessary to keep track of the binary log file or position on the primary to replay those events on the secondary. The benefits of this solution include a more malleable replication topology,
simplified failover, and improved management of multi-tiered replication.

MySQL replication considerations

Prior to MySQL 5.6, replication was single-threaded, with only one event occurring at a time. Achieving throughput in this case was usually done by pushing a lot of commands at low latency. To obtain larger I/O throughput, your storage volume requires a larger queue depth. An EBS io1 SSD volume can have up to 20,000 IOPS, which in turn means it has a larger queue depth. AWS recommends using this volume type for workloads that require heavy replication. As mentioned in the Provisioned IOPS SSD volumes section of this document, RAID 0 increases the performance and throughput of EBS volumes for your MySQL database. You can join several volumes together in a RAID 0 configuration to use the available bandwidth of EBS-optimized instances, which deliver additional network throughput dedicated to EBS.

For MySQL 5.6 and above, replication is multi-threaded. This performs well on EBS volumes because it relies on parallel requests to achieve maximum I/O throughput. During replication there are sequential and random traffic patterns: sequential writes for the binary log (binlog) shipment from the primary server, sequential reads of the binlog and relay log, and the regular random updates to your data files. Using RAID 0 in this case improves the parallel workloads, since it spreads the data across the disks and their queues. However, you must be aware of the penalty on sequential and single-threaded workloads, because extra synchronization is needed to wait for acknowledgments from all members in the stripe. Only use RAID 0 if you need more throughput than that
which a single EBS volume can provide.

Switching from a physical environment to AWS

Customers migrating from a physical MySQL Server environment to AWS usually have a battery-backed caching RAID controller, which allows data in the cache to survive a power failure. Synchronous operations are set up so that all I/O is committed to the RAID controller cache instead of the OS main memory. Therefore, it is the controller, instead of the OS, that completes the write process. In such an environment, the following MySQL parameters are used to ensure that there is no data loss:

On the primary side:
sync_binlog = 1
innodb_flush_log_at_trx_commit = 1

On the secondary side:
sync_master_info = 1
sync_relay_log = 1
sync_relay_log_info = 1
innodb_flush_log_at_trx_commit = 1

These parameters cause MySQL to call fsync() to write the data from the buffer cache to the disk after any operation on the binlog and relay log. This is an expensive operation that increases the amount of disk I/O. Immediately synchronizing the log to disk does not provide any benefit on EBS volumes; in fact, it causes degraded performance. EBS volumes are automatically replicated within an Availability Zone, which protects them from component failures. Turning off the sync_binlog parameter allows the OS to determine when to flush the binlog and relay log buffers to disk, reducing I/O. Setting innodb_flush_log_at_trx_commit = 1 is required for full ACID compliance. If you need to synchronize the log to disk for every transaction, then you may want to consider increasing the IOPS and throughput of the EBS volume. In this situation, you may want to separate the binlog and relay log from your data files onto separate EBS volumes. You can use
Provisioned IOPS SSD volumes for the binlog and relay log to get more predictable performance. You can also use the local SSD of the MySQL secondary instance if you need more throughput and IOPS.

MySQL backups

Backup methodologies

There are several approaches to protecting your MySQL data, depending on your Recovery Time Objective (RTO) and Recovery Point Objective (RPO) requirements. The choice of performing a hot or cold backup is based on the uptime requirement of the database. When it comes to meeting your RPO, your backup approach will be based on either logical database-level or physical EBS volume-level backups. This section explores the two general backup methodologies.

The first general approach is to back up your MySQL data using database-level methodologies. This can include making a hot backup with MySQL Enterprise Backup, making backups with mysqldump or mysqlpump, or making incremental backups by enabling binary logging. If the primary database server exhibits performance issues during a backup, a replication secondary database server or a read replica database server can be created to provide the source data for the backups, alleviating the backup load from the primary database server. One approach can be to back up from a secondary server's SSD data volume to a backup server's Throughput Optimized HDD (st1) volume. The high throughput of 500 MiB/s per volume and large 1 MiB I/O block size make st1 an ideal volume type for sequential backups, meaning it can use the larger I/O blocks. The following diagram shows a backup server using the MySQL secondary server to read the backup data.

Using an st1 volume as a backup source

Another option is to have the MySQL secondary server back up the database
files directly to Amazon Elastic File System (Amazon EFS) or Amazon S3. Amazon EFS is an elastic file system that stores its data redundantly across multiple Availability Zones. Both the primary and the secondary instances can attach to the EFS file system. The secondary instance can initiate a backup to the EFS file system, from which the primary instance can do a restore. Amazon S3 can also be used as a backup target, in a manner similar to Amazon EFS, except that Amazon S3 is object-based storage rather than a file system. The following diagram depicts the option of using Amazon EFS or Amazon S3 as a backup target.

Using Amazon EFS or Amazon S3 as a backup target

The second general approach is to use volume-level EBS snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs. When you delete a snapshot, only the data unique to that snapshot is removed. Active snapshots contain all of the information needed to restore your data (from the time the snapshot was taken) to a new EBS volume.

One consideration when using EBS snapshots for backups is to make sure the MySQL data remains consistent. During an EBS snapshot, any data not flushed from the InnoDB buffer cache to disk will not be captured. The MySQL command flush tables with read lock flushes all table data to disk and allows database reads while locking database writes. The lock only needs to last for a brief period of time, until the EBS snapshot starts. The snapshot takes a point-in-time capture of all the content
within that volume. The database lock needs to be active until the snapshot process starts, but it doesn't have to wait for the snapshot to complete before being released.

You can also combine these approaches by using database-level backups for more granular objects, such as databases or tables, and using EBS snapshots for larger-scale operations, such as recreating the database server, restoring an entire volume, or migrating a database server to another Availability Zone or another Region for disaster recovery (DR).

Creating snapshots of an EBS RAID array

When you take a snapshot of an attached EBS volume that is in use, the snapshot excludes data cached by applications or the operating system. For a single EBS volume, this might not be a problem. However, when cached data is excluded from snapshots of multiple EBS volumes in a RAID array, restoring the volumes from the snapshots can degrade the integrity of the array. When creating snapshots of EBS volumes that are configured in a RAID array, it is critical that there is no data I/O to or from the volumes when the snapshots are created. RAID arrays introduce data interdependencies and a level of complexity not present in a single EBS volume configuration.

To create an application-consistent snapshot of your RAID array, stop applications from writing to the RAID array and flush all caches to disk. At the database level (recommended), you can use the flush tables with read lock command. Then ensure that the associated EC2 instance is no longer writing to the RAID array by taking steps such as freezing the file system with the sync and fsfreeze commands, unmounting the RAID array, or shutting down the associated EC2 instance. After completing the steps to halt
all I/O, take a snapshot of each EBS volume. Restoring a snapshot creates a new EBS volume; you then assemble the new EBS volumes to rebuild the RAID volume. After that, you mount the file system and start the database. To avoid performance degradation after the restore, AWS recommends initializing the EBS volumes. The initialization of a large EBS volume can take some time to complete, because data blocks have to be fetched from the S3 bucket where the snapshots are stored. To make the database available in a shorter amount of time, the initialization of the EBS volume can be done through multi-threaded reads of all the database files required for engine recovery.

Monitoring MySQL and EBS volumes

Monitoring provides visibility into your MySQL workload. Understanding the resource utilization and performance of MySQL usually involves correlating database performance metrics gathered from MySQL with infrastructure-related metrics in CloudWatch. There are many tools that you can use to monitor MySQL, some of which include:

• Tools from MySQL, such as MySQL Enterprise Monitor, MySQL Workbench Performance, and MySQL Query Analyzer
• Third-party software tools and plugins
• MySQL monitoring tools in the AWS Marketplace

When the bottleneck for MySQL performance is related to storage, database administrators usually look at latency when they run into performance issues with transactional operations. Further, if performance is degraded while MySQL is loading or replicating data, then throughput is evaluated. These issues are diagnosed by looking at the EBS volume metrics collected by CloudWatch.

Latency

Latency is defined as the delay between request and completion. Latency is experienced by
slow queries, which can be diagnosed in MySQL by enabling the MySQL performance schema. Latency can also occur at the disk I/O level, which can be viewed as "Average Read Latency (ms/op)" and "Average Write Latency (ms/op)" in the monitoring tab of the EC2 console. This section covers the factors contributing to high latency.

High latency can result from exhausting the available provisioned IOPS on your EBS volume. For gp2 volumes, the CloudWatch metric BurstBalance is available so that you can determine whether you have depleted the available IOPS credits. When bandwidth (KiB/s) and throughput (Ops/s) are reduced, latency (ms/op) increases.

BurstBalance metric showing that when bandwidth and throughput are reduced, latency increases

Disk queue length can also contribute to high latency. Disk queue length refers to the outstanding read/write requests that are waiting for resources to become available. The CloudWatch metric VolumeQueueLength shows the number of pending read/write operation requests for the volume. This metric is an important measurement to monitor if you have reached full utilization of the provisioned IOPS on your EBS volumes. Ideally, an EBS volume should maintain an average queue length of about one per minute for every 200 provisioned IOPS. Use the following formula to calculate how many IOPS will be consumed based on the disk queue length:

Consumed IOPS = 200 IOPS x VolumeQueueLength

For example, say you have assigned 2,000 IOPS to your EBS volume. If the VolumeQueueLength increases to 10, then you consume all of your 2,000 provisioned IOPS, which results in increased latency. Pending MySQL operations will stack up if you observe an increase in VolumeQueueLength without any
corresponding increase in the provisioned IOPS, as shown in the following screenshot.

Average queue length and average read latency metrics

Throughput

Throughput is the read/write transfer rate to storage. It affects MySQL database replication, backup, and import/export activities. When considering which AWS storage option to use to achieve high throughput, you must also consider that MySQL has random I/O caused by small transactions that are committed to the database. To accommodate these two different traffic patterns, our recommendation is to use io1 volumes on an EBS-optimized instance. In terms of throughput, io1 volumes have a maximum of 320 MB/s per volume, while gp2 volumes have a maximum of 160 MB/s per volume.

Insufficient throughput to the underlying EBS volumes can cause MySQL secondary servers to lag, and can also cause MySQL backups to take longer to complete. To diagnose throughput issues, CloudWatch provides the metrics Volume Read/Write Bytes (the amount of data being transferred) and Volume Read/Write Ops (the number of I/O operations). In addition to using CloudWatch metrics, AWS recommends reviewing AWS Trusted Advisor, which alerts you when an EBS volume attached to an instance isn't EBS-optimized. EBS optimization ensures dedicated network throughput for your volumes. An EBS-optimized instance has segregated traffic, which is useful because many EBS volumes have significant network I/O activity. Most new instances are EBS-optimized by default at no extra charge.
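The queue-length rule of thumb described earlier (roughly 200 IOPS consumed per unit of average queue length) can be sketched as a small diagnostic helper. This is only an illustration of the formula from the text, not an AWS API; the function names and the saturation check are our own.

```python
# Sketch: estimate consumed IOPS from the CloudWatch VolumeQueueLength metric,
# using the rule of thumb from the text (Consumed IOPS = 200 x VolumeQueueLength).
IOPS_PER_QUEUE_SLOT = 200

def consumed_iops(volume_queue_length):
    """Estimate the IOPS in flight for a given average queue length."""
    return IOPS_PER_QUEUE_SLOT * volume_queue_length

def queue_is_saturated(volume_queue_length, provisioned_iops):
    """True when estimated consumption has reached the provisioned IOPS,
    which is when latency typically starts to climb."""
    return consumed_iops(volume_queue_length) >= provisioned_iops

# The example from the text: 2,000 provisioned IOPS and a queue length of 10.
print(consumed_iops(10))             # 2000
print(queue_is_saturated(10, 2000))  # True
```

A sustained VolumeQueueLength that implies more IOPS than you provisioned is the signal, per the text, to either provision more IOPS or spread the load across volumes.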
MySQL benchmark observations and considerations

Testing your MySQL database will help you determine what type of volume you need and ensure that you are choosing the most cost-effective and performant solution. There are a couple of ways to determine the number of IOPS that you need. For an existing workload, you can monitor the current consumption of EBS volume IOPS through the CloudWatch metrics detailed in the Monitoring MySQL and EBS volumes section of this document. If this is a new workload, you can run a synthetic test, which will provide you with the maximum number of IOPS that your new AWS infrastructure can achieve. If you are moving your workload to the AWS Cloud, you can run a tool such as iostat to profile the IOPS required by your workload. While you can use a synthetic test to estimate your storage performance needs, the best way to quantify them is by profiling an existing production database, if that is an option.

Performing a synthetic test on the EBS volume allows you to specify the amount of concurrency and throughput that you want to simulate. Testing will allow you to determine the maximum number of IOPS and throughput needed for your MySQL workload. There are a couple of tools that you can use:

• Mysqlslap is an application that emulates client load for MySQL Server.
• Sysbench is a popular open-source benchmark used to test open-source database management systems (DBMS).

The test environment

To simulate the MySQL client for the Sysbench tests, this example uses an r5.8xlarge instance type with a 10-gigabit network interface.

Table 1: Sysbench machine specifications

Sysbench server
Instance type: r5.8xlarge
Memory: 256 GB
CPU: 32 vCPUs

All of the MySQL servers tested used the r5.8xlarge instance type.

Table 2: MySQL server machine specifications

MySQL server
Instance type: r5.8xlarge
Memory: 256 GB
CPU: 32 vCPUs
Storage: 500 GB gp2 EBS volume
Root volume: 256 GB gp2
MySQL data volume: 500 GB (gp2, gp3, io1, or io2)

To increase performance on the Sysbench Linux client, enable Receive Packet Steering (RPS) and Receive Flow Steering (RFS). RPS generates a hash to determine which CPU will process the packet; RFS handles the distribution of packets to the available CPUs.

Enable RPS with the following shell commands:

sudo sh -c 'for x in /sys/class/net/eth0/queues/rx-*; do echo ffffffff > $x/rps_cpus; done'
sudo sh -c "echo 4096 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt"
sudo sh -c "echo 4096 > /sys/class/net/eth0/queues/rx-1/rps_flow_cnt"

Enable RFS with the following shell command:

sudo sh -c "echo 32768 > /proc/sys/net/core/rps_sock_flow_entries"

Tuned compared to default configuration parameter testing

Perform a Sysbench test to compare the difference between tuned and default MySQL parameter configurations (refer to Table 3). Use a MySQL dataset of 100 tables with 10 million records per table for the test.

Table 3: MySQL parameters

Parameter / Default / Tuned
innodb_buffer_pool_size: 134 MB / 193 GB
innodb_flush_method: fsync (Linux) / O_DIRECT
innodb_flush_neighbors: 1 / 0
innodb_log_file_size: 50 MB / 256 MB

Run the following Sysbench read/write command:

$ sysbench oltp_read_write.lua --table_size=10000000 --max-requests=0 --simple_ranges=0 --distinct_ranges=0 --sum_ranges=0 --order_ranges=0 --point_selects=0 --time=3600 --threads=1024 --rand-type=uniform run

Results of the Sysbench test are presented in Table 4. Under optimized conditions,
the MySQL server processed approximately 12 times the number of transactions per second compared to the default configuration.

Table 4: Sysbench results

Sysbench metrics / Default / Tuned
Queries:
  Read: 17,511,928 / 223,566,532
  Write: 5,003,408 / 63,876,152
  Other: 2,501,704 / 31,938,076
  Total: 25,017,040 / 319,380,760
Transactions: 1,250,852 (347.37 per sec) / 15,969,038 (4,434.57 per sec)
Queries: 25,017,040 (6,947.43 per sec) / 319,380,760 (88,691.33 per sec)
Ignored errors: 0 (0.00 per sec) / 0 (0.00 per sec)
Reconnects: 0 (0.00 per sec) / 0 (0.00 per sec)
General statistics:
  Total time: 3,600.9046s / 3,601.0355s
  Total number of events: 1,250,852 / 15,969,038
Latency (ms):
  min: 7.72 / 48.43
  avg: 2,947.65 / 230.90
  max: 95,885.04 / 6,158.04
  95th percentile: 9,284.15 / 1,258.08
  sum: 3,687,074,024.45 / 3,687,189,581.27
Thread fairness:
  events (avg/stddev): 1,221.5352/48.86 / 15,594.7637/45.63
  runtime (avg/stddev): 3,600.6582/0.11 / 3,600.7711/0.04

Other InnoDB configuration options to consider for better performance of heavy-I/O MySQL workloads are detailed in the MySQL Optimizing InnoDB Disk I/O documentation. When considering these configurations, AWS suggests testing after deployment to ensure that the change is safe for your application.

Comparative analysis of different storage types

Conduct the test across four different MySQL server configurations:

• MySQL Server EBS General Purpose SSD (gp2)
  o 500 GB SQL data drive
  o 1,500 baseline IOPS / 3,000 burstable IOPS
• MySQL
Server EBS General Purpose SSD (gp3)
  o 500 GB SQL data drive
  o 3,000 provisioned IOPS
• MySQL Server EBS Provisioned IOPS SSD (io1)
  o 500 GB SQL data drive
  o 3,000 provisioned IOPS
• MySQL Server EBS Provisioned IOPS SSD (io2)
  o 500 GB SQL data drive
  o 3,000 provisioned IOPS

Note: Unless specified, all EBS volumes are unencrypted.

Sysbench client and MySQL server setup

Table 5: Server setup for MySQL database and Sysbench client

MySQL database: r5.8xlarge instance type, 32 vCPUs, 256 GB memory, EBS-only instance storage, EBS-optimized, 10-gigabit network
Sysbench client (AWS Cloud9): r5.8xlarge instance type, 32 vCPUs, 256 GB memory, EBS-only instance storage, EBS-optimized, 10-gigabit network

Tests were performed using the Sysbench read/write OLTP test by running the following Sysbench command over a one-hour period:

$ sysbench oltp_read_write.lua --table_size=10000000 --max-requests=0 --simple_ranges=0 --distinct_ranges=0 --sum_ranges=0 --order_ranges=0 --point_selects=0 --time=3600 --threads=1024 --rand-type=uniform run

Results

The tests of the four different volume configurations yielded similar results, with each server processing approximately 3,600 Sysbench transactions per second. No discernible workload difference was noticed while running the performance consistency test on all four volumes. Upon closer examination, the lowest latency is offered by the io2 volume, with less than one millisecond write latency observed for the same workload.

Table 6: Performance analysis of the same MySQL workload on different EBS volume types

Sysbench metrics / gp2 / gp3 / io1 / io2
SQL statistics:
  read queries: 175,119,280 / 181,507,690 / 188,343,428 / 186,051,460
  write queries: 50,034,080 / 51,859,340 / 53,812,408 / 53,157,560
  other queries: 25,017,040 / 25,929,670 / 26,906,204 / 26,578,780
  total queries: 250,170,400 / 259,296,700 / 269,062,040 /
265,787,800
  transactions: 12,508,520 (3,470.37 per sec) / 12,964,835 (3,600.93 per sec) / 13,453,102 (3,733.12 per sec) / 13,289,390 (3,690.20 per sec)
  queries: 250,170,400 (69,470.43 per sec) / 259,296,700 (72,018.53 per sec) / 269,062,040 (74,662.42 per sec) / 265,787,800 (73,803.92 per sec)
Latency (ms):
  min: 7.72 / 6.82 / 6.1 / 6.02
  avg: 294.65 / 284.35 / 274.24 / 277.45
  max: 95,885.04 / 43,718.24 / 33,179.31 / 34,803.75
  95th percentile: 928.15 / 816.63 / 943.16 / 861.95
  sum: 3,687,074,024.45 / 3,686,559,158.83 / 3,689,386,834.2 / 3,687,138,536.08
EBS statistics:
  Write latency (ms): 1.1 / 1.01 / 0.994 / 0.824
  Volume queue length (count): 3.49 / 3.01 / 3.227 / 2.71

Conclusion

The AWS Cloud provides several options for deploying MySQL and the infrastructure supporting it. Amazon RDS for MySQL provides a very good platform to operate, scale, and manage your MySQL database in AWS. It removes much of the complexity of managing and maintaining your database, allowing you to focus on improving your applications. However, there are some workloads and configurations for which MySQL on Amazon EC2 and Amazon EBS works better. It is important to understand your MySQL workload and test it. This can help you decide which EC2 server and storage to use for optimal performance and cost.

For balanced performance and cost, General Purpose SSD Amazon EBS volumes (gp2 and gp3) are good options. To maximize the benefit of gp2, you need to understand and monitor the burst credits. This will help you determine whether you should consider other volume types. On the other hand, gp3 provides a predictable 3,000 IOPS baseline performance and 125 MiB/s regardless of volume size. With gp3 volumes, you can provision IOPS and throughput independently, without increasing storage size, at costs up to 20 percent lower per
GB compared to gp2 volumes. If you have mission-critical MySQL workloads that need more consistent IOPS, then you should use Provisioned IOPS volumes (io1 or io2). To maximize the benefit of both General Purpose and Provisioned IOPS volume types, AWS recommends using EBS-optimized EC2 instances, which ensure dedicated network bandwidth for your EBS volumes, and tuning your database parameters to optimize storage consumption. You can cost-effectively operate your MySQL database in AWS without sacrificing performance by taking advantage of the durability, availability, and elasticity of EBS volumes.

Contributors

Contributors to this document include:

• Marie Yap, Enterprise Solutions Architect, Amazon Web Services
• Ricky Chang, Cloud Infrastructure Architect, Amazon Web Services
• Kehinde Otubamowo, Database Partner Solutions Architect, Amazon Web Services
• Arnab Saha, Cloud Support DBA, Amazon Web Services
• Chi Dinjors, Cloud Support Engineer, Amazon Web Services

Further reading

For additional information, refer to:

• MySQL Performance Tuning 101
• MySQL 5.7 Performance Tuning Immediately After Installation
• MySQL on EC2: Consistent Backup and Log Purging using EBS Snapshots and N2WS
• MySQL Database Backup Methods

Document revisions

Date / Description
December 7, 2021 / Updated for technical accuracy
November 2017 / First publication",General,consultant,Best Practices
Oracle_WebLogic_Server_12c_on_AWS,Oracle WebLogic Server 12c on AWS

December 2018

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers/

Amazon Web Services – Oracle WebLogic 12c on AWS

© 2018 Amazon Web Services, Inc. or its affiliates. All rights
reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Introduction
Oracle WebLogic on AWS
Oracle WebLogic Components
Oracle WebLogic Architecture on AWS
Auto Scaling your Oracle WebLogic Cluster
Monitoring your Infrastructure
AWS Security and Compliance
Oracle WebLogic on AWS Use Cases
Conclusion
Contributors
Document Revisions

Abstract
This whitepaper provides guidance on how to deploy Oracle WebLogic Server 12c-based applications on AWS. It provides a reference architecture and information about best practices for high availability, security, scalability, and performance when you deploy Oracle WebLogic Server 12c-based applications on AWS. Also included is information about cost optimization using AWS Auto Scaling. The target audience of this whitepaper is Solution Architects, Systems Architects, and System Administrators with a basic understanding of cloud computing, AWS, and Oracle WebLogic 12c.

Introduction
Many enterprises today rely on J2EE application servers for deploying their mission-critical
applications. Oracle WebLogic Server is a popular Java application server for deploying such applications. You can use various AWS services to deploy Oracle WebLogic Server 12c-based applications on AWS in a secure, highly available, and cost-effective manner. With Auto Scaling, you can dynamically scale the compute resources required for your application, thereby keeping your costs low, and you can use Amazon Elastic File System (Amazon EFS) for shared storage. This whitepaper assumes that you have a basic understanding of Amazon Web Services. For an overview of AWS services, see Overview of Amazon Web Services.

Oracle WebLogic on AWS
To successfully deploy and configure Oracle WebLogic Server 12c (Oracle WebLogic) on AWS, it is important to have a good understanding of its architecture and major components.

Oracle WebLogic Components
This diagram shows the major components of Oracle WebLogic Application Server. Each WebLogic deployment has a WebLogic Domain, which typically contains multiple WebLogic Server instances. A WebLogic domain is the basic unit of administration for WebLogic Server instances: it is a group of logically related WebLogic Server resources. For example, you can have one WebLogic domain for each application. There are two types of WebLogic Server instances in a domain: a single Administration Server and one or more Managed Servers. Each WebLogic Server instance runs its own Java Virtual Machine (JVM) and can be configured individually. You deploy and run your web applications, EJBs, and other resources on the Managed Server instances. The Administration Server is used to configure, manage, and monitor the resources in the domain, including the Managed Server instances. WebLogic Server instances, referred to as WebLogic Server Machines, can run on physical or virtual servers (such as Amazon EC2) or in containers. The Node Manager
is a utility used to start, stop, or restart the Administration Server or Managed Server instances. You can create a group of multiple WebLogic Managed Servers, known as a WebLogic cluster. WebLogic clusters support load balancing and failover, and are required for high availability and scalability of your production deployments. You should deploy your WebLogic cluster across multiple WebLogic Machines so that the loss of a single WebLogic Machine does not impact the availability of your application.

Oracle WebLogic Architecture on AWS
This reference architecture diagram shows how you can deploy a web application on Oracle WebLogic on AWS. This is a basic combined-tier architecture, with static HTTP pages, servlets, and EJBs deployed together in a single WebLogic cluster. You can also deploy the static HTTP pages and servlets to one WebLogic cluster and the EJBs to another. For more information about WebLogic architectural patterns, see the Oracle WebLogic Server documentation. This reference architecture includes a WebLogic domain with one Administration Server and multiple Managed Servers. These Managed Servers are part of a WebLogic cluster and are deployed on EC2 instances (WebLogic Machines) across two Availability Zones for high availability. The application is deployed to the Managed Servers in the cluster that spans the two Availability Zones. Amazon EFS is used for shared storage.

AWS Availability Zones
The AWS Cloud infrastructure is built around AWS Regions and Availability Zones. AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, and housed in separate facilities, as shown
in the following diagram. These Availability Zones enable you to operate production applications and databases that are more highly available, fault tolerant, and scalable than is possible from a single data center. You can deploy your application on EC2 instances across multiple zones. In the unlikely event of a failure of one Availability Zone, user requests are routed to your application instances in the second zone. This ensures that your application remains available at all times.

Traffic Distribution and Load Balancing
Amazon Route 53 DNS is used to direct users to your application deployed on Oracle WebLogic on AWS. Elastic Load Balancing (ELB) is used to distribute incoming requests across the WebLogic Managed Servers deployed on Amazon EC2 instances in multiple Availability Zones. The load balancer serves as a single point of contact for client requests, which enables you to increase the availability of your application. You can add and remove WebLogic Managed Server instances from your load balancer as your needs change, either manually or with Auto Scaling, without disrupting the overall flow of information. ELB ensures that only healthy instances receive traffic by detecting unhealthy instances and rerouting traffic across the remaining healthy instances. If an instance fails, ELB automatically reroutes the traffic to the remaining running instances; if a failed instance is restored, ELB restores traffic to that instance.

Use Multiple Availability Zones for High Availability
Each Availability Zone is isolated from other Availability Zones and runs on its own physically distinct, independent infrastructure. The likelihood of two Availability Zones experiencing a failure at the same time is relatively small. To ensure high availability of your application, you can deploy your WebLogic Managed Server instances across multiple Availability Zones. You then deploy your application on the Managed
Servers in the WebLogic cluster, which spans two Availability Zones. In the unlikely event of an Availability Zone failure, user requests to the zone with the failure are routed by Elastic Load Balancing to the Managed Servers deployed in the second Availability Zone. This ensures that your application remains available regardless of a zone failure. You can configure WebLogic to replicate the HTTP session state in memory to another Managed Server in the WebLogic cluster. WebLogic tracks the location of the Managed Servers hosting the primary and the replica of the session state using a cookie. If the Managed Server hosting the primary copy of the session state fails, WebLogic can retrieve the HTTP session state from the replica. For more information about HTTP session state replication, see the Oracle WebLogic documentation. For shared storage, you can use Amazon EFS, which is designed to be highly available and durable. Your data in Amazon EFS is redundantly stored across multiple Availability Zones, which means that your data remains available if there is an Availability Zone failure. For information about how to use Amazon EFS for shared storage, see the Shared Storage section.

Administration Server High Availability
The Administration Server is used to configure, manage, and monitor the resources in the domain, including the Managed Server instances. Because the failure of the Administration Server does not affect the functioning of the Managed Servers in the domain, the Managed Servers continue to run and your application is still available. However, if the Administration Server fails, the WebLogic administration console is unavailable and you cannot make changes to the domain configuration. If the underlying host for the Administration Server experiences a failure, you can use the Amazon EC2 Auto Recovery feature to recover the failed server instance. When you use Amazon EC2 Auto Recovery, several system status
checks monitor the instance and the other components that need to be running for your instance to function as expected. Among other things, the system status checks look for loss of network connectivity, loss of system power, software issues on the physical host, and hardware issues on the physical host. If a system status check of the underlying hardware fails, the instance will be rebooted (on new hardware if necessary) but will retain its instance ID, IP address, Elastic IP addresses, EBS volume attachments, and other configuration details. Another option is to put the Administration Server instance in an Auto Scaling group that spans multiple Availability Zones and set both the minimum and maximum size of the group to one. Auto Scaling then ensures that an instance of the Administration Server is running in the selected Availability Zones, which provides high availability of the Administration Server if a zone failure occurs.

Storage
If you use file-based persistence, you must have storage for the WebLogic product binaries, common files and scripts, the domain configuration files, logs, and persistence stores for JMS and JTA. You can use either shared storage or Amazon EBS volumes to store these files.

Shared Storage
To store the shared files related to your WebLogic deployment, you can use Amazon EFS, which supports NFSv4 and can be mounted by all the instances that are part of the WebLogic cluster. In the reference architecture, we use Amazon EFS for shared storage. The WebLogic product binaries, common files and scripts, the domain configuration files, and logs are stored in Amazon EFS, which includes the commons, domains, middleware, and logs file systems. This table describes each of these file systems.

File System: Description
commons: Common files such as installation files, response files, and scripts
domains: WebLogic domain files such as configuration, runtime, and temporary files
middleware: Binaries such as the Java
VM and the Oracle WebLogic installation
logs: Log files

Amazon EFS has two throughput modes for your file system: Bursting Throughput and Provisioned Throughput. With Bursting Throughput mode, throughput on Amazon EFS scales as your file system grows. With Provisioned Throughput mode, you can instantly provision the throughput of your file system (in MiB/s) independent of the amount of data stored. For better performance, we recommend that you select Provisioned Throughput mode when using Amazon EFS. With Provisioned Throughput mode, you can provision up to 1024 MiB/s of throughput for your file system, and you can change the file system throughput at any time after you create the file system. If you are deploying your application in a Region where Amazon EFS is not yet available, several third-party products by vendors such as NetApp and SoftNAS, available on the AWS Marketplace, offer a shared storage solution on AWS.

Amazon EBS Volumes
In this reference architecture, we use Amazon EFS for shared storage. You can also deploy Oracle WebLogic on AWS without using shared storage; instead, you can use Amazon EBS volumes attached to your Amazon EC2 instances for storage. Make sure to select the General Purpose (gp2) volume type for storing the WebLogic product binaries, common files and scripts, the domain configuration files, and logs. gp2 volumes are backed by solid-state drives (SSDs), are designed to offer single-digit-millisecond latencies, and are suitable for use with Oracle WebLogic.

Scalability
When you use AWS, you can scale your application easily because of the elastic nature of the cloud. You can scale your application vertically and horizontally.

Vertical Scaling
You can vertically scale (scale up) your application simply by changing the EC2 instance type on which your WebLogic Managed Servers are deployed to a larger instance type and then increasing the WebLogic JVM heap size.
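The vertical-scaling step above pairs an instance resize with a JVM heap resize. The following minimal sketch shows one way to derive the heap flags from instance memory; the helper name and the 75%-of-memory fraction are illustrative assumptions, not AWS or Oracle guidance, and it keeps the initial and maximum heap sizes equal.

```python
def jvm_heap_flags(instance_mem_gib: float, heap_fraction: float = 0.75) -> list:
    """Derive JVM heap flags for a WebLogic Managed Server.

    Sets -Xms and -Xmx to the same value to minimize garbage
    collections, leaving the remaining instance memory for the
    OS and off-heap use. The 0.75 fraction is illustrative only.
    """
    heap_mib = int(instance_mem_gib * 1024 * heap_fraction)
    return ["-Xms{}m".format(heap_mib), "-Xmx{}m".format(heap_mib)]

# Example: an r4.large (about 15.25 GiB RAM).
print(jvm_heap_flags(15.25))  # ['-Xms11712m', '-Xmx11712m']
```

After scaling up, these flags would be applied to the Managed Server's startup arguments before restarting the instance.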
You can modify the Java heap size with the -Xms (initial heap size) and -Xmx (maximum heap size) parameters. Ideally, you should set both the initial heap size (-Xms) and the maximum heap size (-Xmx) to the same value to minimize garbage collections and optimize performance. For example, you can start with an r4.large instance with 2 vCPUs and 15.25 GiB RAM and scale up all the way to an x1e.32xlarge instance with 128 vCPUs and 3,904 GiB RAM. For the most up-to-date list of Amazon EC2 instance types, see the Amazon EC2 Instance Types page on the AWS website. After you select a new instance type, you simply restart the instance for the changes to take effect. Typically, the resizing operation is completed in a few minutes; the Amazon EBS volumes remain attached to the instances, and no data migration is required.

Horizontal Scaling
You can horizontally scale (scale out) your application by adding more Managed Servers to your WebLogic cluster, depending on the user traffic or on a particular schedule. You launch new EC2 instances to deploy and configure additional Managed Servers, add them to the WebLogic cluster, and register the instances with the ELB. You can automate this process with AWS Auto Scaling and WebLogic scripting; for more information, see the Auto Scaling your Oracle WebLogic Cluster section. Using AWS Auto Scaling to scale out your WebLogic cluster also requires scripting, which can be an additional technical investment. While we recommend that you use AWS Auto Scaling, sometimes you might not have the time or the technical resources to implement it while migrating your WebLogic application to AWS. A simpler alternative might be to use standby instances.

Standby Instances
To meet extra capacity requirements, additional instances of the WebLogic Managed Servers are preinstalled and configured on EC2 instances. These standby instances can be shut down until the extra capacity is required. You do not incur compute charges
when instances are shut down; you incur only Amazon Elastic Block Store (Amazon EBS) storage charges. These preinstalled standby instances give you the flexibility to meet additional capacity needs when required.

Auto Scaling your Oracle WebLogic Cluster
You can use AWS Auto Scaling to horizontally scale your applications based on demand. This helps you to maintain steady, predictable performance at the lowest possible cost. For example, you can configure AWS Auto Scaling to automatically create and add more Managed Servers to your WebLogic cluster as traffic increases, and to stop and remove Managed Servers from the cluster as traffic decreases. For more information, see the Amazon EC2 Auto Scaling documentation. This diagram shows how AWS Auto Scaling works with Oracle WebLogic. In this example, we use Amazon EFS for shared storage.

To auto scale your WebLogic cluster on AWS, you must complete these major steps:
1. Install and configure WebLogic – The first step is to configure Amazon EFS for shared storage, install Oracle WebLogic, and configure the WebLogic domain and the WebLogic cluster. Amazon EFS is used to store the WebLogic product binaries, common files and scripts, the domain configuration files, and logs.
2. Configure AWS Auto Scaling – Next, you configure AWS Auto Scaling to launch and terminate EC2 instances (WebLogic Machines) based on the application workload.
3. Configure WebLogic scaling scripts – Finally, you create WebLogic Scripting Tool (WLST) scripts. These scripts create and add Managed Servers to, or remove them from, the WebLogic cluster when AWS Auto Scaling launches or terminates EC2 instances in the Auto Scaling group.

Configure Oracle WebLogic
To configure Oracle WebLogic and set up shared storage, you must complete these high-level steps:
1. Create the commons, domains, middleware, and logs file
systems on Amazon EFS, as described in the Shared Storage section.
2. Create an EC2 instance for deploying the WebLogic Administration Server and mount the EFS file systems. In the reference architecture, we created the following directory structure to store the WebLogic binaries, domain configurations, common scripts, and logs.
3. Install Oracle WebLogic. The ORACLE_HOME directory should be located on a shared folder (/middleware) on EFS.
4. Create the WebLogic domain. You can use the Basic WebLogic Server Domain Template in the /templates/wls/wls.jar directory to create the domain.
5. Create a WebLogic cluster in the domain and set the cluster messaging mode to Unicast.

Configure AWS Auto Scaling
To configure AWS Auto Scaling to launch and terminate EC2 instances (WebLogic Machines) based on the application load, you must complete the following high-level steps. For more details on Auto Scaling, see the Amazon EC2 Auto Scaling documentation on the AWS website.
1. Create a launch configuration and an Auto Scaling group.
2. Create the scale-in and scale-out policies. For example, you can create a scaling policy to add an instance when CPU utilization is above 80% and to remove an instance when CPU utilization is below 60%.
3. If you are using in-memory session persistence, Oracle WebLogic replicates the session data to another Managed Server in the cluster. You should ensure that the Auto Scaling scale-in process terminates only one Managed Server at a time, so that you do not destroy the primary and the replica of a session at the same time.
For detailed step-by-step instructions on how to configure Auto Scaling, see the Amazon EC2 Auto Scaling documentation on the AWS website.

Configure WebLogic Scaling Scripts
Based on the traffic to your application, Auto Scaling can create and add new EC2 instances (scaling out) or remove existing EC2 instances (scaling in) from your Auto Scaling group. You must create the
following scripts to automate the configuration of WebLogic in an auto-scaled environment:
• EC2 configuration scripts – These scripts mount the EFS file systems, invoke the WLST scripts to configure and start the WebLogic Managed Server on startup of the EC2 instance, and invoke the WLST scripts to stop the WebLogic Managed Server on shutdown of the EC2 instance. You can pass this script with the EC2 user data. For detailed information, see the Amazon EC2 documentation on the AWS website.
• WebLogic Scripting Tool (WLST) scripts – WLST is a command-line scripting interface used to manage WebLogic Server instances and domains. These scripts create and add the Managed Server to your WebLogic cluster when Auto Scaling adds a new EC2 instance to the Auto Scaling group, and stop and remove the Managed Server from your WebLogic cluster when Auto Scaling removes the EC2 instance from the group. For more information, see the Oracle WLST documentation.

Monitoring your Infrastructure
After you migrate your Oracle WebLogic applications to AWS, you can continue to use the monitoring tools you are familiar with to monitor your Oracle WebLogic environment and the applications you deployed on WebLogic. You can use Fusion Middleware Control, the Oracle WebLogic Server Administration Console, or the command line (using the WLST state command) to monitor your Oracle WebLogic infrastructure components, including WebLogic domains, Managed Servers, and clusters. You can also monitor the Java applications you have deployed and get information such as the state of your application, the number of active sessions, and response times. For more information about how to monitor Oracle WebLogic, see the Oracle WebLogic documentation. You can also use Amazon CloudWatch to monitor AWS Cloud resources and the applications you run on AWS. Amazon CloudWatch enables
you to monitor your AWS resources in near real time, including Amazon EC2 instances, Amazon EBS volumes, Amazon EFS, ELB load balancers, and Amazon RDS DB instances. Metrics such as CPU utilization, latency, and request counts are provided automatically for these AWS resources. You can also supply your own logs or custom application and system metrics, such as memory usage, transaction volumes, or error rates, which Amazon CloudWatch will also monitor. With Amazon CloudWatch alarms, you can set a threshold on metrics and trigger an action when that threshold is exceeded. For example, you can create an alarm that is triggered when the CPU utilization on an EC2 instance crosses a threshold, and you can configure a notification of the event to be sent through SMS or email. Real-time alarms for metrics and events enable you to minimize downtime and potential business impact. If your application uses a database deployed on Amazon RDS, you can use the Enhanced Monitoring feature of Amazon RDS to monitor your database. Enhanced Monitoring gives you access to over 50 metrics, including CPU, memory, file system, and disk I/O. You can also view the processes running on the DB instance and their related metrics, including percentage of CPU usage and memory usage.

AWS Security and Compliance
The AWS Cloud security infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today. Security on AWS is very similar to security in your on-premises data center, but without the costs and complexities involved in protecting facilities and hardware. AWS provides a secure global infrastructure, plus a range of features that you can use to help secure your systems and data in the cloud. To learn more about AWS security, see the AWS Security Center. AWS Compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud. AWS engages
with external certifying bodies and independent auditors to provide customers with extensive information regarding the policies, processes, and controls established and operated by AWS. To learn more about AWS Compliance, see the AWS Compliance Center.

The AWS Security Model
The AWS infrastructure has been architected to provide an extremely scalable, highly reliable platform that enables you to deploy applications and data quickly and securely. Security in the cloud is different from security in your on-premises data centers. When you move computer systems and data to the cloud, security responsibilities become shared between you and your cloud service provider. In the AWS model, AWS is responsible for securing the underlying infrastructure that supports the cloud, and you are responsible for securing the workloads that you deploy in AWS. This shared security responsibility model can reduce your operational burden in many ways and gives you the flexibility you need to implement the most applicable security controls for your business functions in the AWS environment.

Figure 6: The AWS shared responsibility model

When you deploy Oracle WebLogic applications on AWS, we recommend that you take advantage of the various security features of AWS, such as AWS Identity and Access Management, monitoring and logging, network security, and data encryption.

AWS Identity and Access Management
With AWS Identity and Access Management (IAM), you can centrally manage your users and their security credentials, such as passwords, access keys, and permissions policies, which control the AWS services and resources that users can access. IAM supports multi-factor authentication (MFA) for privileged accounts, including options for hardware-based authenticators, and supports integration and federation with corporate directories to reduce administrative overhead and improve the end user experience.

Monitoring and Logging
AWS CloudTrail is a service
that records AWS API calls for your account and delivers log files to you. The recorded information in the log files includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. This provides deep visibility into API calls, including who made calls, what calls were made, when, and from where. The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.

Network Security and Amazon Virtual Private Cloud
In each Amazon Virtual Private Cloud (VPC), you create one or more subnets. Each instance you launch in your VPC is connected to one subnet. Traditional layer 2 security attacks, including MAC spoofing and ARP spoofing, are blocked. You can configure network ACLs, which are stateless traffic filters that apply to all inbound or outbound traffic from a subnet within your VPC. These ACLs can contain ordered rules to allow or deny traffic based on IP protocol, by service port, and by source and destination IP address. Security groups are a complete firewall solution that enables filtering on both ingress and egress traffic from an instance. Traffic can be restricted by any IP protocol, by service port, as well as by source and destination IP address (an individual IP address or a Classless Inter-Domain Routing (CIDR) block).

Data Encryption
AWS offers you the ability to add a layer of security to your data at rest in the cloud by providing scalable and efficient encryption features. Data encryption capabilities are available in AWS storage and database services, such as Amazon EBS, Amazon S3, Amazon Glacier, Amazon RDS for Oracle, Amazon RDS for SQL Server, and Amazon Redshift. Flexible key management options allow you to choose whether to have AWS manage the encryption keys using the AWS Key Management Service (AWS KMS) or to maintain complete control over your keys.
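As one concrete example of encryption at rest, the EBS volumes backing a WebLogic Machine can be created with encryption enabled. The sketch below only builds the keyword arguments for the boto3 `ec2.create_volume` call; the helper name is hypothetical, the key ARN in the usage note is a placeholder, and running the commented call would require boto3 plus configured AWS credentials.

```python
from typing import Optional

def encrypted_volume_params(az: str, size_gib: int,
                            kms_key_arn: Optional[str] = None) -> dict:
    """Build ec2.create_volume parameters for an encrypted gp2 volume.

    If kms_key_arn is omitted, EBS falls back to the AWS-managed
    key for EBS in the account.
    """
    params = {
        "AvailabilityZone": az,
        "Size": size_gib,
        "VolumeType": "gp2",   # per the storage guidance in this paper
        "Encrypted": True,
    }
    if kms_key_arn:
        params["KmsKeyId"] = kms_key_arn
    return params

# Usage sketch (requires boto3 and AWS credentials; the ARN is a placeholder):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.create_volume(**encrypted_volume_params(
#     "us-east-1a", 100, "arn:aws:kms:us-east-1:111122223333:key/example"))
```

Whether to pass a customer-managed key or rely on the AWS-managed key is the key-management choice described above.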
Dedicated hardware-based cryptographic key storage options (AWS CloudHSM) are available to help you satisfy compliance requirements. For more information, see the Introduction to AWS Security and AWS Security Best Practices whitepapers.

Oracle WebLogic on AWS Use Cases
Oracle WebLogic customers use AWS for a variety of use cases, including these environments:
• Migration of existing Oracle WebLogic production environments
• Implementation of new Oracle WebLogic production environments
• Implementing disaster recovery environments
• Running Oracle WebLogic development, test, demonstration, proof-of-concept (POC), and training environments
• Temporary environments for migrations and testing upgrades
• Temporary environments for performance testing

Conclusion
AWS can be an extremely cost-effective, secure, scalable, high-performing, and flexible option for deploying Oracle WebLogic applications. By deploying Oracle WebLogic applications on the AWS Cloud, you can reduce costs and simultaneously enable capabilities that might not be possible, or cost-effective, in an on-premises data center. Some of the benefits of deploying Oracle WebLogic on AWS include:
• Low cost – Resources are billed by the hour and only for the duration they are used.
• Eliminate the need for large capital outlays – Replace large upfront expenses with low variable payments that apply only to what you use.
• High availability – Achieve high availability by deploying Oracle WebLogic in a Multi-AZ configuration.
• Flexibility – Add compute capacity elastically to cope with demand.
• Testing – Add test environments, use them for short durations, and pay only for the duration they are used.

Contributors
The following individuals and organizations contributed to this document:
Ashok Sundaram, Solutions
Architect, Amazon Web Services

Document Revisions
December 2018 – First publication
,General,consultant,Best Practices

Overview_of_Amazon_Web_Services
Overview of Amazon Web Services: AWS Whitepaper

Copyright © Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon.

Table of Contents
Overview of Amazon Web Services: Abstract; Introduction
What Is Cloud Computing?
Six Advantages of Cloud Computing
Types of Cloud Computing: Cloud Computing Models (Infrastructure as a Service (IaaS); Platform as a Service (PaaS); Software as a Service (SaaS)); Cloud Computing Deployment Models (Cloud; Hybrid; On-premises)
Global Infrastructure
Security and Compliance: Security; Benefits of AWS Security; Compliance
Amazon Web Services Cloud: AWS Management Console; AWS Command Line Interface; Software Development Kits
Analytics: Amazon Athena; Amazon CloudSearch; Amazon Elasticsearch Service; Amazon EMR; Amazon FinSpace; Amazon Kinesis; Amazon Kinesis Data Firehose; Amazon Kinesis Data Analytics; Amazon Kinesis Data Streams; Amazon Kinesis Video Streams; Amazon Redshift; Amazon QuickSight; AWS Data Exchange; AWS Data Pipeline; AWS Glue; AWS Lake Formation; Amazon Managed Streaming for Apache Kafka (Amazon MSK)
Application Integration: AWS Step Functions; Amazon AppFlow; Amazon EventBridge; Amazon Managed Workflows for Apache Airflow (MWAA); Amazon MQ; Amazon Simple Notification Service; Amazon Simple Queue
Service; Amazon Simple Workflow Service
AR and VR: Amazon Sumerian
Blockchain: Amazon Managed Blockchain
Business Applications: Alexa for Business; Amazon Chime; Amazon SES; Amazon WorkDocs; Amazon WorkMail
Cloud Financial Management: AWS Application Cost Profiler; AWS Cost Explorer; AWS Budgets; AWS Cost & Usage Report; Reserved Instance (RI) Reporting; Savings Plans
Compute Services: Amazon EC2; Amazon EC2 Auto Scaling; Amazon EC2 Image Builder; Amazon Lightsail; AWS App Runner; AWS Batch; AWS Elastic Beanstalk; AWS Fargate; AWS Lambda; AWS Serverless Application Repository; AWS Outposts; AWS Wavelength; VMware Cloud on AWS
Contact Center: Amazon Connect
Containers: Amazon Elastic Container Registry; Amazon Elastic Container Service; Amazon Elastic Kubernetes Service; AWS App2Container; Red Hat OpenShift Service on AWS
Database: Amazon Aurora; Amazon DynamoDB; Amazon ElastiCache; Amazon Keyspaces (for Apache Cassandra); Amazon Neptune; Amazon Relational Database Service; Amazon RDS on VMware; Amazon Quantum Ledger Database (QLDB); Amazon Timestream; Amazon DocumentDB (with MongoDB compatibility)
Developer Tools: Amazon Corretto; AWS Cloud9; AWS CloudShell; AWS CodeArtifact; AWS CodeBuild; AWS CodeCommit; AWS CodeDeploy; AWS CodePipeline; AWS CodeStar; AWS Fault Injection Simulator; AWS X-Ray
End User Computing: Amazon AppStream 2.0; Amazon WorkSpaces; Amazon WorkLink
Front-End Web & Mobile Services: Amazon Location Service; Amazon Pinpoint; AWS Amplify; AWS Device Farm; AWS AppSync
Game Tech: Amazon GameLift; Amazon Lumberyard
Internet of Things (IoT): AWS IoT 1-Click; AWS IoT Analytics; AWS IoT Button; AWS IoT Core; AWS IoT Device Defender; AWS IoT Device Management; AWS IoT Events; AWS IoT Greengrass; AWS IoT SiteWise; AWS IoT Things Graph; AWS Partner Device
Catalog38 FreeRTOS38 Machine Learning 39 Amazon Augmented AI40 Amazon CodeGuru40 Amazon Comprehend40 Amazon DevOps Guru40 Amazon Elastic Inference41 Amazon Forecast41 Amazon Fraud Detector42 Amazon HealthLake42 Amazon Kendra42 Amazon Lex42 Amazon Lookout for Equipment43 Amazon Lookout for Metrics43 Amazon Lookout for Vision43 Amazon Monitron43 Amazon Personalize44 Amazon Polly44 Amazon Rekognition44 Amazon SageMaker45 Amazon SageMaker Ground Truth45 Amazon Textract46 Amazon Transcribe46 Amazon Translate46 Apache MXNet on AWS46 AWS Deep Learning AMIs47 AWS DeepComposer47 AWS DeepLens47 AWS DeepRacer47 AWS Inferentia47 TensorFlow on AWS48 vOverview of Amazon Web Services AWS Whitepaper Management and Governance48 Amazon CloudWatch48 AWS Auto Scaling49 AWS Chatbot49 AWS Compute Optimizer49 AWS Control Tower49 AWS CloudFormation50 AWS CloudTrail50 AWS Config50 AWS Launch Wizard51 AWS Organizations51 AWS OpsWorks51 AWS Proton51 AWS Service Catalog51 AWS Systems Manager52 AWS Trusted Advisor53 AWS Personal Health Dashboard53 AWS Managed Services53 AWS Console Mobile Application53 AWS License Manager54 AWS WellArchitected Tool54 Media Services54 Amazon Elastic Transcoder55 Amazon Interactive Video Service55 Amazon Nimble Studio55 AWS Elemental Appliances & Software55 AWS Elemental MediaConnect55 AWS Elemental MediaConvert56 AWS Elemental MediaLive56 AWS Elemental MediaPackage56 AWS Elemental MediaStore56 AWS Elemental MediaTailor56 Migration and Transfer57 AWS Application Migration Service57 AWS Migration Hub57 AWS Application Discovery Service57 AWS Database Migration Service58 AWS Server Migration Service58 AWS Snow Family58 AWS DataSync59 AWS Transfer Family59 Networking and Content Delivery60 Amazon API Gateway60 Amazon CloudFront60 Amazon Route 5360 Amazon VPC61 AWS App Mesh61 AWS Cloud Map62 AWS Direct Connect62 AWS Global Accelerator62 AWS PrivateLink63 AWS Transit Gateway63 AWS VPN63 Elastic Load Balancing 63 Quantum Technologies64 Amazon Braket64 Robotics64 
viOverview of Amazon Web Services AWS Whitepaper AWS RoboMaker64 Satellite 65 AWS Ground Station65 Security Identity and Compliance65 Amazon Cognito66 Amazon Cloud Directory66 Amazon Detective67 Amazon GuardDuty67 Amazon Inspector67 Amazon Macie68 AWS Artifact68 AWS Audit Manager68 AWS Certificate Manager68 AWS CloudHSM69 AWS Directory Service69 AWS Firewall Manager69 AWS Identity and Access Management69 AWS Key Management Service70 AWS Network Firewall70 AWS Resource Access Manager70 AWS Secrets Manager71 AWS Security Hub71 AWS Shield71 AWS Single SignOn72 AWS WAF72 Storage 72 Amazon Elastic Block Store72 Amazon Elastic File System73 Amazon FSx for Lustre73 Amazon FSx for Windows File Server73 Amazon Simple Storage Service74 Amazon S3 Glacier74 AWS Backup74 AWS Storage Gateway74 Next Steps75 Conclusion 75 Resources76 Document Details 77 Contributors 77 Document Revisions77 AWS glossary78 viiOverview of Amazon Web Services AWS Whitepaper Abstract Overview of Amazon Web Services Publication date: August 5 2021 (Document Details (p 77)) Abstract Amazon Web Services offers a broad set of global cloudbased products including compute storage databases analytics networking mobile developer tools management tools IoT security and enterprise applications: ondemand available in seconds with payasyougo pricing From data warehousing to deployment tools directories to content delivery over 200 AWS services are available New services can be provisioned quickly without the upfront capital expense This allows enterprises startups small and mediumsized businesses and customers in the public sector to access the building blocks they need to respond quickly to changing business requirements This whitepaper provides you with an overview of the benefits of the AWS Cloud and introduces you to the services that make up the platform Introduction In 2006 Amazon Web Services (AWS) began offering IT infrastructure services to businesses as web services—now commonly known as cloud computing 
One of the key benefits of cloud computing is the opportunity to replace upfront capital infrastructure expenses with low variable costs that scale with your business. With the cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.

Today, AWS provides a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers hundreds of thousands of businesses in 190 countries around the world.

What Is Cloud Computing?

Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the Internet, with pay-as-you-go pricing. Whether you are running applications that share photos with millions of mobile users or you're supporting the critical operations of your business, a cloud services platform provides rapid access to flexible and low-cost IT resources. With cloud computing, you don't need to make large upfront investments in hardware and spend a lot of time on the heavy lifting of managing that hardware. Instead, you can provision exactly the right type and size of computing resources you need to power your newest bright idea or operate your IT department. You can access as many resources as you need, almost instantly, and only pay for what you use.

Cloud computing provides a simple way to access servers, storage, databases, and a broad set of application services over the Internet. A cloud services platform such as Amazon Web Services owns and maintains the network-connected hardware required for these application services, while you provision and use what you need via a web application.

Six Advantages of Cloud Computing

•Trade capital expense for variable expense – Instead of having to invest heavily in data centers and servers before you know how you're going to use them, you can pay only when you consume computing resources, and pay only for how much you consume.
•Benefit from massive economies of scale – By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices.
•Stop guessing capacity – Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing, these problems go away. You can access as much or as little capacity as you need, and scale up and down as required with only a few minutes' notice.
•Increase speed and agility – In a cloud computing environment, new IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.
•Stop spending money running and maintaining data centers – Focus on projects that differentiate your business, not the infrastructure. Cloud computing lets you focus on your own customers, rather than on the heavy lifting of racking, stacking, and powering servers.
•Go global in minutes – Easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at minimal cost.

Types of Cloud Computing

Cloud computing provides developers and IT departments with the ability to focus on what matters most and avoid undifferentiated work such as procurement, maintenance, and capacity planning. As
cloud computing has grown in popularity, several different models and deployment strategies have emerged to help meet the specific needs of different users. Each type of cloud service and deployment method provides you with different levels of control, flexibility, and management. Understanding the differences between Infrastructure as a Service, Platform as a Service, and Software as a Service, as well as what deployment strategies you can use, can help you decide what set of services is right for your needs.

Cloud Computing Models

Infrastructure as a Service (IaaS)

Infrastructure as a Service (IaaS) contains the basic building blocks for cloud IT and typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. IaaS provides you with the highest level of flexibility and management control over your IT resources, and is most similar to the existing IT resources that many IT departments and developers are familiar with today.

Platform as a Service (PaaS)

Platform as a Service (PaaS) removes the need for your organization to manage the underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications. This helps you be more efficient, as you don't need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application.

Software as a Service (SaaS)

Software as a Service (SaaS) provides you with a completed product that is run and managed by the service provider. In most cases, people referring to Software as a Service are referring to end-user applications. With a SaaS offering, you do not have to think about how the service is maintained or how the underlying infrastructure is managed; you only need to think about how you will use that particular piece of software. A common example of a SaaS application is web-based email, which you can use to send and receive email without having to manage feature additions to the email product or maintain the servers and operating systems that the email program is running on.

Cloud Computing Deployment Models

Cloud

A cloud-based application is fully deployed in the cloud, and all parts of the application run in the cloud. Applications in the cloud have either been created in the cloud or have been migrated from an existing infrastructure to take advantage of the benefits of cloud computing. Cloud-based applications can be built on low-level infrastructure pieces or can use higher-level services that provide abstraction from the management, architecting, and scaling requirements of core infrastructure.

Hybrid

A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud. The most common method of hybrid deployment is between the cloud and existing on-premises infrastructure, to extend and grow an organization's infrastructure into the cloud while connecting cloud resources to the internal system. For more information on how AWS can help you with your hybrid deployment, visit our Hybrid Cloud with AWS page.

On-premises

The deployment of resources on-premises, using virtualization and resource management tools, is sometimes called the "private cloud". On-premises deployment doesn't provide many of the benefits of cloud computing, but is sometimes sought for its ability to provide dedicated resources. In most cases, this deployment model is the same as legacy IT infrastructure, while using application management and virtualization technologies to try to increase resource utilization. For more information on how AWS can help, see Use case: Cloud services on-premises.

Global Infrastructure

AWS serves over a million active customers in more than 240 countries and territories. We are steadily expanding global infrastructure to
help our customers achieve lower latency and higher throughput, and to ensure that their data resides only in the AWS Region they specify. As our customers grow their businesses, AWS will continue to provide infrastructure that meets their global requirements.

The AWS Cloud infrastructure is built around AWS Regions and Availability Zones. An AWS Region is a physical location in the world where we have multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. These Availability Zones offer you the ability to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center. The AWS Cloud operates in 80 Availability Zones within 25 geographic Regions around the world, with announced plans for more Availability Zones and Regions. For more information on the AWS Cloud Availability Zones and AWS Regions, see AWS Global Infrastructure.

Each Amazon Region is designed to be completely isolated from the other Amazon Regions. This achieves the greatest possible fault tolerance and stability. Each Availability Zone is isolated, but the Availability Zones in a Region are connected through low-latency links. AWS provides you with the flexibility to place instances and store data within multiple geographic regions, as well as across multiple Availability Zones within each AWS Region.

Each Availability Zone is designed as an independent failure zone. This means that Availability Zones are physically separated within a typical metropolitan region and are located in lower-risk flood plains (specific flood zone categorization varies by AWS Region). In addition to discrete uninterruptible power supply (UPS) and onsite backup generation facilities, data centers located in different Availability Zones are designed to be supplied by independent substations to reduce the risk of an event on the power grid impacting more than one Availability Zone. Availability Zones are all redundantly connected to multiple tier-1 transit providers.

Security and Compliance

Security

Cloud security at AWS is the highest priority. As an AWS customer, you will benefit from a data center and network architecture built to meet the requirements of the most security-sensitive organizations. Security in the cloud is much like security in your on-premises data centers—only without the costs of maintaining facilities and hardware. In the cloud, you don't have to manage physical servers or storage devices. Instead, you use software-based security tools to monitor and protect the flow of information into and out of your cloud resources.

An advantage of the AWS Cloud is that it allows you to scale and innovate while maintaining a secure environment and paying only for the services you use. This means that you can have the security you need at a lower cost than in an on-premises environment. As an AWS customer, you inherit all the best practices of AWS policies, architecture, and operational processes built to satisfy the requirements of our most security-sensitive customers, and you get the flexibility and agility you need in security controls.

The AWS Cloud enables a shared responsibility model. While AWS manages security of the cloud, you are responsible for security in the cloud. This means that you retain control of the security you choose to implement to protect your own content, platform, applications, systems, and networks, no differently than you would in an onsite data center.

AWS provides you with guidance and expertise through online resources, personnel, and partners. AWS provides you with advisories for current issues, plus you have the opportunity to work with AWS when you encounter security issues. You get access to hundreds of tools and features to help you meet your security objectives. AWS provides security-specific tools and features across network security, configuration management, access control, and data encryption.

Finally, AWS environments are continuously audited, with certifications from accreditation bodies across geographies and verticals. In the AWS environment, you can take advantage of automated tools for asset inventory and privileged access reporting.

Benefits of AWS Security

•Keep Your Data Safe: The AWS infrastructure puts strong safeguards in place to help protect your privacy. All data is stored in highly secure AWS data centers.
•Meet Compliance Requirements: AWS manages dozens of compliance programs in its infrastructure. This means that segments of your compliance have already been completed.
•Save Money: Cut costs by using AWS data centers. Maintain the highest standard of security without having to manage your own facility.
•Scale Quickly: Security scales with your AWS Cloud usage. No matter the size of your business, the AWS infrastructure is designed to keep your data safe.

Compliance

AWS Cloud Compliance enables you to understand the robust controls in place at AWS to maintain security and data protection in the cloud. As systems are built on top of AWS Cloud infrastructure, compliance responsibilities will be shared. By tying together governance-focused, audit-friendly service features with applicable compliance or audit standards, AWS Compliance enablers build on traditional programs. This helps customers to establish and operate in an AWS security control environment.

The IT infrastructure that AWS provides to its customers is designed and managed in alignment with best security practices and a variety of IT security standards. The following is a partial list of assurance programs with which AWS complies:

•SOC 1/ISAE 3402, SOC 2, SOC 3
•FISMA, DIACAP, and FedRAMP
•PCI DSS Level 1
•ISO 9001, ISO 27001, ISO 27017, ISO 27018

AWS provides customers a wide range of information on its IT control environment in whitepapers, reports, certifications, accreditations, and other
third-party attestations. More information is available in the Risk and Compliance whitepaper and the AWS Security Center.

Amazon Web Services Cloud

Topics
•AWS Management Console
•AWS Command Line Interface
•Software Development Kits
•Analytics
•Application Integration
•AR and VR
•Blockchain
•Business Applications
•Cloud Financial Management
•Compute Services
•Contact Center
•Containers
•Database
•Developer Tools
•End User Computing
•Front-End Web & Mobile Services
•Game Tech
•Internet of Things (IoT)
•Machine Learning
•Management and Governance
•Media Services
•Migration and Transfer
•Networking and Content Delivery
•Quantum Technologies
•Robotics
•Satellite
•Security, Identity, and Compliance
•Storage

AWS Management Console

Access and manage Amazon Web Services through the AWS Management Console, a simple and intuitive user interface. You can also use the AWS Console Mobile Application to quickly view resources on the go.

AWS Command Line Interface

The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

Software Development Kits

Our Software Development Kits (SDKs) simplify using AWS services in your applications with an Application Program Interface (API) tailored to your programming language or platform.

Analytics

Topics
•Amazon Athena
•Amazon CloudSearch
•Amazon Elasticsearch Service
•Amazon EMR
•Amazon FinSpace
•Amazon Kinesis
•Amazon Kinesis Data Firehose
•Amazon Kinesis Data Analytics
•Amazon Kinesis Data Streams
•Amazon Kinesis Video Streams
•Amazon Redshift
•Amazon QuickSight
•AWS Data Exchange
•AWS Data Pipeline
•AWS Glue
•AWS Lake Formation
•Amazon Managed Streaming for Apache Kafka (Amazon MSK)

Amazon Athena

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

Athena is easy to use. Simply point to your data in Amazon S3, define the schema, and start querying using standard SQL. Most results are delivered within seconds. With Athena, there's no need for complex extract, transform, and load (ETL) jobs to prepare your data for analysis. This makes it easy for anyone with SQL skills to quickly analyze large-scale datasets.

Athena is out-of-the-box integrated with the AWS Glue Data Catalog, allowing you to create a unified metadata repository across various services, crawl data sources to discover schemas, populate your Catalog with new and modified table and partition definitions, and maintain schema versioning.

Amazon CloudSearch

Amazon CloudSearch is a managed service in the AWS Cloud that makes it simple and cost-effective to set up, manage, and scale a search solution for your website or application. Amazon CloudSearch supports 34 languages and popular search features such as highlighting, autocomplete, and geospatial search.

Amazon Elasticsearch Service

Amazon Elasticsearch Service makes it easy to deploy, secure, operate, and scale Elasticsearch to search, analyze, and visualize data in real time. With Amazon Elasticsearch Service, you get easy-to-use APIs and real-time analytics capabilities to power use cases such as log analytics, full-text search, application monitoring, and clickstream analytics, with enterprise-grade availability, scalability, and security. The service offers integrations with open-source
tools like Kibana and Logstash for data ingestion and visualization. It also integrates seamlessly with other AWS services such as Amazon Virtual Private Cloud (Amazon VPC), AWS Key Management Service (AWS KMS), Amazon Kinesis Data Firehose, AWS Lambda, AWS Identity and Access Management (IAM), Amazon Cognito, and Amazon CloudWatch, so that you can go from raw data to actionable insights quickly.

Amazon EMR

Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open-source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. Amazon EMR makes it easy to set up, operate, and scale your big data environments by automating time-consuming tasks like provisioning capacity and tuning clusters. With EMR, you can run petabyte-scale analysis at less than half the cost of traditional on-premises solutions, and over 3x faster than standard Apache Spark. You can run workloads on Amazon EC2 instances, on Amazon Elastic Kubernetes Service (EKS) clusters, or on-premises using EMR on AWS Outposts.

Amazon FinSpace

Amazon FinSpace is a data management and analytics service purpose-built for the financial services industry (FSI). FinSpace reduces the time you spend finding and preparing petabytes of financial data to be ready for analysis from months to minutes.

Financial services organizations analyze data from internal data stores like portfolio, actuarial, and risk management systems, as well as petabytes of data from third-party data feeds, such as historical securities prices from stock exchanges. It can take months to find the right data, get permissions to access the data in a compliant way, and prepare it for analysis. FinSpace removes the heavy lifting of building and maintaining a data management system for financial analytics. With FinSpace, you collect data and catalog it by relevant business concepts such as asset class, risk classification, or geographic region. FinSpace makes it easy to discover and share data across your organization in accordance with your compliance requirements. You define your data access policies in one place, and FinSpace enforces them while keeping audit logs to allow for compliance and activity reporting. FinSpace also includes a library of 100+ functions, like time bars and Bollinger bands, for you to prepare data for analysis.

Amazon Kinesis

Amazon Kinesis makes it easy to collect, process, and analyze real-time streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly, instead of having to wait until all your data is collected before the processing can begin. Amazon Kinesis currently offers four services: Kinesis Data Firehose, Kinesis Data Analytics, Kinesis Data Streams, and Kinesis Video Streams.

Amazon Kinesis Data Firehose

Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with the existing business intelligence tools and dashboards you're already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, transform, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. You can easily create a Firehose delivery stream
from the AWS Management Console, configure it with a few clicks, and start sending data to the stream from hundreds of thousands of data sources to be loaded continuously to AWS—all in just a few minutes. You can also configure your delivery stream to automatically convert the incoming data to columnar formats like Apache Parquet and Apache ORC before the data is delivered to Amazon S3, for cost-effective storage and analytics.

Amazon Kinesis Data Analytics

Amazon Kinesis Data Analytics is the easiest way to analyze streaming data, gain actionable insights, and respond to your business and customer needs in real time. Amazon Kinesis Data Analytics reduces the complexity of building, managing, and integrating streaming applications with other AWS services. SQL users can easily query streaming data or build entire streaming applications using templates and an interactive SQL editor. Java developers can quickly build sophisticated streaming applications using open-source Java libraries and AWS integrations to transform and analyze data in real time. Amazon Kinesis Data Analytics takes care of everything required to run your queries continuously, and scales automatically to match the volume and throughput rate of your incoming data.

Amazon Kinesis Data Streams

Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources, such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events. The data collected is available in milliseconds to enable real-time analytics use cases such as real-time dashboards, real-time anomaly detection, dynamic pricing, and more.

Amazon Kinesis Video Streams

Amazon Kinesis Video Streams makes it easy to securely stream video from connected devices to AWS for analytics, machine learning (ML), playback, and other processing. Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming video data from millions of devices. It also durably stores, encrypts, and indexes video data in your streams, and allows you to access your data through easy-to-use APIs. Kinesis Video Streams enables you to play back video for live and on-demand viewing, and to quickly build applications that take advantage of computer vision and video analytics through integration with Amazon Rekognition Video and libraries for ML frameworks such as Apache MXNet, TensorFlow, and OpenCV.

Amazon Redshift

Amazon Redshift is the most widely used cloud data warehouse. It makes it fast, simple, and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petabytes of structured and semi-structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds. You can start small for just $0.25 per hour with no commitments, and scale out to petabytes of data for $1,000 per terabyte per year, less than a tenth the cost of traditional on-premises solutions.

Amazon QuickSight

Amazon QuickSight is a fast, cloud-powered business intelligence (BI) service that makes it easy for you to deliver insights to everyone in your organization. QuickSight lets you create and publish interactive dashboards that can be accessed from browsers or mobile devices. You can embed dashboards into your applications, providing your customers with powerful self-service analytics. QuickSight easily scales to tens of thousands of users, without any software to install, servers to deploy, or infrastructure to manage.

AWS Data Exchange

AWS Data Exchange makes it easy to find, subscribe to, and use third-party data in the cloud. Qualified data providers include category-leading brands such as Reuters, who curate
data from over 2.2 million unique news stories per year in multiple languages; Change Healthcare, who process and anonymize more than 14 billion healthcare transactions and $1 trillion in claims annually; Dun & Bradstreet, who maintain a database of more than 330 million global business records; and Foursquare, whose location data is derived from 220 million unique consumers and includes more than 60 million global commercial venues.

Once subscribed to a data product, you can use the AWS Data Exchange API to load data directly into Amazon S3 and then analyze it with a wide variety of AWS analytics and machine learning services. For example, property insurers can subscribe to data to analyze historical weather patterns to calibrate insurance coverage requirements in different geographies; restaurants can subscribe to population and location data to identify optimal regions for expansion; academic researchers can conduct studies on climate change by subscribing to data on carbon dioxide emissions; and healthcare professionals can subscribe to aggregated data from historical clinical trials to accelerate their research activities. For data providers, AWS Data Exchange makes it easy to reach the millions of AWS customers migrating to the cloud, by removing the need to build and maintain infrastructure for data storage, delivery, billing, and entitling.

AWS Data Pipeline

AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. With AWS Data Pipeline, you can regularly access your data where it's stored, transform and process it at scale, and efficiently transfer the results to AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon EMR. AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. You don't have to worry about ensuring resource availability, managing inter-task dependencies, retrying transient failures or timeouts in individual tasks, or creating a failure notification system. AWS Data Pipeline also allows you to move and process data that was previously locked up in on-premises data silos.

AWS Glue

AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. You can create and run an ETL job with a few clicks in the AWS Management Console. You simply point AWS Glue to your data stored on AWS, and AWS Glue discovers your data and stores the associated metadata (e.g., table definition and schema) in the AWS Glue Data Catalog. Once cataloged, your data is immediately searchable, queryable, and available for ETL.

AWS Lake Formation

AWS Lake Formation is a service that makes it easy to set up a secure data lake in days. A data lake is a centralized, curated, and secured repository that stores all your data, both in its original form and prepared for analysis. A data lake enables you to break down data silos and combine different types of analytics to gain insights and guide better business decisions.

However, setting up and managing data lakes today involves a lot of manual, complicated, and time-consuming tasks. This work includes loading data from diverse sources, monitoring those data flows, setting up partitions, turning on encryption and managing keys, defining transformation jobs and monitoring their operation, reorganizing data into a columnar format, configuring access control settings, deduplicating redundant data, matching linked records, granting access to data sets, and auditing access over time.

Creating a data lake with Lake Formation is as simple as defining where your data resides and what data access and security policies you want to apply. Lake Formation then collects and catalogs data from databases and object storage, moves the data into your new Amazon S3 data
lake, cleans and classifies data using machine learning algorithms, and secures access to your sensitive data. Your users can then access a centralized catalog of data, which describes available data sets and their appropriate usage. Your users then leverage these data sets with their choice of analytics and machine learning services, like Amazon EMR for Apache Spark, Amazon Redshift, Amazon Athena, SageMaker, and Amazon QuickSight.

Amazon Managed Streaming for Apache Kafka (Amazon MSK)

Amazon Managed Streaming for Apache Kafka (Amazon MSK) is a fully managed service that makes it easy for you to build and run applications that use Apache Kafka to process streaming data. Apache Kafka is an open-source platform for building real-time streaming data pipelines and applications. With Amazon MSK, you can use Apache Kafka APIs to populate data lakes, stream changes to and from databases, and power machine learning and analytics applications. Apache Kafka clusters are challenging to set up, scale, and manage in production. When you run Apache Kafka on your own, you need to provision servers, configure Apache Kafka manually, replace servers when they fail, orchestrate server patches and upgrades, architect the cluster for high availability, ensure data is durably stored and secured, set up monitoring and alarms, and carefully plan scaling events to support load changes. Amazon MSK makes it easy for you to build and run production applications on Apache Kafka without needing Apache Kafka infrastructure management expertise. That means you spend less time managing infrastructure and more time building applications. With a few clicks in the Amazon MSK console, you can create highly available Apache Kafka clusters with settings and configuration based on Apache Kafka's deployment best practices. Amazon MSK automatically provisions and runs your Apache Kafka clusters. Amazon MSK continuously monitors cluster health and automatically replaces unhealthy nodes with no downtime to your application. In addition, Amazon
MSK secures your Apache Kafka cluster by encrypting data at rest.

Application Integration

Topics
• AWS Step Functions
• Amazon AppFlow
• Amazon EventBridge
• Amazon Managed Workflows for Apache Airflow (MWAA)
• Amazon MQ
• Amazon Simple Notification Service
• Amazon Simple Queue Service
• Amazon Simple Workflow Service

AWS Step Functions

AWS Step Functions is a fully managed service that makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale easily and change applications quickly. Step Functions is a reliable way to coordinate components and step through the functions of your application. Step Functions provides a graphical console to arrange and visualize the components of your application as a series of steps. This makes it simple to build and run multi-step applications. Step Functions automatically triggers and tracks each step, and retries when there are errors, so your application runs in order and as expected. Step Functions logs the state of each step, so when things do go wrong, you can diagnose and debug problems quickly. You can change and add steps without even writing code, so you can easily evolve your application and innovate faster.

Amazon AppFlow

Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between Software-as-a-Service (SaaS) applications like Salesforce, Zendesk, Slack, and ServiceNow, and AWS services like Amazon S3 and Amazon Redshift, in just a few clicks. With Amazon AppFlow, you can run data flows at enterprise scale at the frequency you choose: on a schedule, in response to a business event, or on demand. You can configure data transformation capabilities like filtering and validation to generate rich, ready-to-use data as part of the flow itself
without additional steps. Amazon AppFlow automatically encrypts data in motion, and allows users to restrict data from flowing over the public internet for SaaS applications that are integrated with AWS PrivateLink, reducing exposure to security threats.

Amazon EventBridge

Amazon EventBridge is a serverless event bus that makes it easier to build event-driven applications at scale using events generated from your applications, integrated Software-as-a-Service (SaaS) applications, and AWS services. EventBridge delivers a stream of real-time data from event sources, such as Zendesk or Shopify, to targets like AWS Lambda and other SaaS applications. You can set up routing rules to determine where to send your data to build application architectures that react in real time to your data sources, with event publisher and consumer completely decoupled.

Amazon Managed Workflows for Apache Airflow (MWAA)

Amazon Managed Workflows for Apache Airflow (MWAA) is a managed orchestration service for Apache Airflow that makes it easier to set up and operate end-to-end data pipelines in the cloud at scale. Apache Airflow is an open-source tool used to programmatically author, schedule, and monitor sequences of processes and tasks, referred to as "workflows". With Managed Workflows, you can use Airflow and Python to create workflows without having to manage the underlying infrastructure for scalability, availability, and security. Managed Workflows automatically scales its workflow execution capacity to meet your needs, and is integrated with AWS security services to help provide you with fast and secure access to data.

Amazon MQ

Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers in the cloud. Message brokers allow different software systems, often using different programming languages and on different platforms, to communicate and exchange information. Amazon MQ reduces your
operational load by managing the provisioning, setup, and maintenance of ActiveMQ and RabbitMQ, popular open-source message brokers. Connecting your current applications to Amazon MQ is easy because it uses industry-standard APIs and protocols for messaging, including JMS, NMS, AMQP, STOMP, MQTT, and WebSocket. Using standards means that in most cases, there's no need to rewrite any messaging code when you migrate to AWS.

Amazon Simple Notification Service

Amazon Simple Notification Service (Amazon SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Amazon SNS provides topics for high-throughput, push-based, many-to-many messaging. Using Amazon SNS topics, your publisher systems can fan out messages to a large number of subscriber endpoints for parallel processing, including Amazon SQS queues, AWS Lambda functions, and HTTP/S webhooks. Additionally, SNS can be used to fan out notifications to end users using mobile push, SMS, and email.

Amazon Simple Queue Service

Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. Get started with SQS in minutes using the AWS console, Command Line Interface, or SDK of your choice, and three simple commands. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.

Amazon Simple Workflow Service

Amazon Simple
Workflow Service (Amazon SWF) helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of Amazon SWF as a fully managed state tracker and task coordinator in the cloud. If your application's steps take more than 500 milliseconds to complete, you need to track the state of processing. If you need to recover or retry if a task fails, Amazon SWF can help you.

AR and VR

Topics
• Amazon Sumerian

Amazon Sumerian

Amazon Sumerian lets you create and run virtual reality (VR), augmented reality (AR), and 3D applications quickly and easily, without requiring any specialized programming or 3D graphics expertise. With Sumerian, you can build highly immersive and interactive scenes that run on popular hardware such as Oculus Go, Oculus Rift, HTC Vive, HTC Vive Pro, Google Daydream, and Lenovo Mirage, as well as Android and iOS mobile devices. For example, you can build a virtual classroom that lets you train new employees around the world, or you can build a virtual environment that enables people to tour a building remotely. Sumerian makes it easy to create all the building blocks needed to build highly immersive and interactive 3D experiences, including adding objects (e.g., characters, furniture, and landscape) and designing, animating, and scripting environments. Sumerian does not require specialized expertise, and you can design scenes directly from your browser.

Blockchain

Topics
• Amazon Managed Blockchain

Amazon Managed Blockchain

Amazon Managed Blockchain is a fully managed service that makes it easy to create and manage scalable blockchain networks using the popular open-source frameworks Hyperledger Fabric and Ethereum. Blockchain makes it possible to build applications where multiple parties can execute transactions without the need for a trusted central authority. Today, building a scalable blockchain network with existing technologies is complex to set up and hard to manage.
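The tamper evidence that lets parties transact without a central authority comes from chaining each block's hash to its predecessor. A minimal, framework-agnostic sketch of that idea (this is an illustration of the general hash-chain principle, not the Hyperledger Fabric or Ethereum data model):

```python
import hashlib
import json

def add_block(chain, payload):
    """Append a block whose hash covers both its payload and the
    previous block's hash, so later tampering is detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev_hash": prev_hash}
    block = dict(body, hash=hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest())
    chain.append(block)
    return chain

def verify(chain):
    """Recompute every hash in order; returns False if any block
    (or the link between blocks) was altered."""
    prev_hash = "0" * 64
    for block in chain:
        body = {"payload": block["payload"], "prev_hash": prev_hash}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True

chain = []
add_block(chain, {"from": "a", "to": "b", "amount": 5})
add_block(chain, {"from": "b", "to": "c", "amount": 2})
assert verify(chain)
chain[0]["payload"]["amount"] = 500   # tamper with history
assert not verify(chain)              # the chain no longer verifies
```

A real network adds consensus, membership, and certificate management on top of this core structure, which is exactly the operational work described next.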
To create a blockchain network, each network member needs to manually provision hardware, install software, create and manage certificates for access control, and configure networking components. Once the blockchain network is running, you need to continuously monitor the infrastructure and adapt to changes, such as an increase in transaction requests or new members joining or leaving the network. Amazon Managed Blockchain is a fully managed service that allows you to set up and manage a scalable blockchain network with just a few clicks. Amazon Managed Blockchain eliminates the overhead required to create the network, and automatically scales to meet the demands of thousands of applications running millions of transactions. Once your network is up and running, Managed Blockchain makes it easy to manage and maintain your blockchain network. It manages your certificates, lets you easily invite new members to join the network, and tracks operational metrics such as usage of compute, memory, and storage resources. In addition, Managed Blockchain can replicate an immutable copy of your blockchain network activity into Amazon Quantum Ledger Database (QLDB), a fully managed ledger database. This allows you to easily analyze the network activity outside the network and gain insights into trends.

Business Applications

Topics
• Alexa for Business
• Amazon Chime
• Amazon SES
• Amazon WorkDocs
• Amazon WorkMail

Alexa for Business

Alexa for Business is a service that enables organizations and employees to use Alexa to get more work done. With Alexa for Business, employees can use Alexa as their intelligent assistant to be more productive in meeting rooms, at their desks, and even with the Alexa devices they already have at home.

Amazon Chime

Amazon Chime is a communications service that transforms online meetings with a secure, easy-to-use application that you can trust. Amazon Chime works seamlessly across
your devices so that you can stay connected. You can use Amazon Chime for online meetings, video conferencing, calls, chat, and to share content, both inside and outside your organization. Amazon Chime works with Alexa for Business, which means you can use Alexa to start your meetings with your voice. Alexa can start your video meetings in large conference rooms, and automatically dial into online meetings in smaller huddle rooms and from your desk.

Amazon SES

Amazon Simple Email Service (Amazon SES) is a cost-effective, flexible, and scalable email service that enables developers to send mail from within any application. You can configure Amazon SES quickly to support several email use cases, including transactional, marketing, or mass email communications. Amazon SES's flexible IP deployment and email authentication options help drive higher deliverability and protect sender reputation, while sending analytics measure the impact of each email. With Amazon SES, you can send email securely, globally, and at scale.

Amazon WorkDocs

Amazon WorkDocs is a fully managed, secure enterprise storage and sharing service with strong administrative controls and feedback capabilities that improve user productivity. Users can comment on files, send them to others for feedback, and upload new versions without having to resort to emailing multiple versions of their files as attachments. Users can take advantage of these capabilities wherever they are, using the device of their choice, including PCs, Macs, tablets, and phones. Amazon WorkDocs offers IT administrators the option of integrating with existing corporate directories, flexible sharing policies, and control of the location where data is stored. You can get started using Amazon WorkDocs with a 30-day free trial providing 1 TB of storage per user for up to 50 users.

Amazon WorkMail

Amazon WorkMail is a secure, managed business email and calendar service with support for existing desktop and mobile email client applications. Amazon WorkMail gives users the ability to
seamlessly access their email, contacts, and calendars using the client application of their choice, including Microsoft Outlook, native iOS and Android email applications, any client application supporting the IMAP protocol, or directly through a web browser. You can integrate Amazon WorkMail with your existing corporate directory, use email journaling to meet compliance requirements, and control both the keys that encrypt your data and the location in which your data is stored. You can also set up interoperability with Microsoft Exchange Server, and programmatically manage users, groups, and resources using the Amazon WorkMail SDK.

Cloud Financial Management

Topics
• AWS Application Cost Profiler
• AWS Cost Explorer
• AWS Budgets
• AWS Cost & Usage Report
• Reserved Instance (RI) Reporting
• Savings Plans

AWS Application Cost Profiler

AWS Application Cost Profiler provides you the ability to track the consumption of shared AWS resources used by software applications and report a granular cost breakdown across your tenant base. You can achieve economies of scale with the shared infrastructure model while still maintaining a clear line of sight to detailed resource consumption information across multiple dimensions. With the proportionate cost insights of shared AWS resources, organizations running applications can establish the data foundation for an accurate cost allocation model, and ISVs selling applications can better understand their profitability and customize pricing strategies for their end customers.

AWS Cost Explorer

AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. Get started quickly by creating custom reports (including charts and tabular data) that analyze cost and usage data, both at a high level (e.g., total costs and usage across all accounts) and for highly specific requests (e.g., m2.2xlarge costs
within account Y that are tagged “project: secretProject”).

AWS Budgets

AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set RI utilization or coverage targets, and receive alerts when your utilization drops below the threshold you define. RI alerts support Amazon EC2, Amazon RDS, Amazon Redshift, and Amazon ElastiCache reservations. Budgets can be tracked at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. Budget alerts can be sent via email and/or Amazon Simple Notification Service (SNS) topic. Budgets can be created and tracked from the AWS Budgets dashboard or via the Budgets API.

AWS Cost & Usage Report

The AWS Cost & Usage Report is a single location for accessing comprehensive information about your AWS costs and usage. The AWS Cost & Usage Report lists AWS usage for each service category used by an account and its IAM users, in hourly or daily line items, as well as any tags that you have activated for cost allocation purposes. You can also customize the AWS Cost & Usage Report to aggregate your usage data to the daily or monthly level.

Reserved Instance (RI) Reporting

AWS provides a number of RI-specific cost management solutions out of the box to help you better understand and manage your RIs. Using the RI Utilization and Coverage reports available in AWS Cost Explorer, you can visualize your RI data at an aggregate level or inspect a particular RI subscription. To access the most detailed RI information available, you can leverage the AWS Cost & Usage Report. You can also set a custom RI utilization target via AWS Budgets, and receive alerts when your utilization drops below the threshold you
define.

Savings Plans

Savings Plans is a flexible pricing model offering lower prices compared to On-Demand pricing, in exchange for a specific usage commitment (measured in $/hour) for a one- or three-year period. AWS offers three types of Savings Plans: Compute Savings Plans, EC2 Instance Savings Plans, and Amazon SageMaker Savings Plans. Compute Savings Plans apply to usage across Amazon EC2, AWS Lambda, and AWS Fargate. EC2 Instance Savings Plans apply to EC2 usage, and Amazon SageMaker Savings Plans apply to Amazon SageMaker usage. You can easily sign up for a 1- or 3-year term Savings Plan in AWS Cost Explorer, and manage your plans by taking advantage of recommendations, performance reporting, and budget alerts.

Compute Services

Topics
• Amazon EC2
• Amazon EC2 Auto Scaling
• Amazon EC2 Image Builder
• Amazon Lightsail
• AWS App Runner
• AWS Batch
• AWS Elastic Beanstalk
• AWS Fargate
• AWS Lambda
• AWS Serverless Application Repository
• AWS Outposts
• AWS Wavelength
• VMware Cloud on AWS

Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. The simple web interface of Amazon EC2 allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances (called Amazon EC2 instances) to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use. Amazon EC2 provides developers and system administrators the tools to build failure-resilient applications and isolate themselves from common failure scenarios.

Instance Types

Amazon EC2 passes on to you the financial benefits of Amazon's scale. You pay a very low rate for the compute capacity you actually consume. See Amazon EC2 Instance Purchasing Options for a more detailed description.

• On-Demand Instances—With On-Demand Instances, you pay for compute capacity by the hour or the second, depending on which instances you run. No longer-term commitments or upfront payments are needed. You can increase or decrease your compute capacity depending on the demands of your application, and only pay the specified hourly rates for the instances you use. On-Demand Instances are recommended for:
  • Users that prefer the low cost and flexibility of Amazon EC2 without any upfront payment or long-term commitment
  • Applications with short-term, spiky, or unpredictable workloads that cannot be interrupted
  • Applications being developed or tested on Amazon EC2 for the first time
• Spot Instances—Spot Instances are available at up to a 90% discount compared to On-Demand prices, and let you take advantage of unused Amazon EC2 capacity in the AWS Cloud. You can significantly reduce the cost of running your applications, grow your application's compute capacity and throughput for the same budget, and enable new types of cloud computing applications. Spot Instances are recommended for:
  • Applications that have flexible start and end times
  • Applications that are only feasible at very low compute prices
  • Users with urgent computing needs for large amounts of additional capacity
• Reserved Instances—Reserved Instances provide you with a significant discount (up to 72%) compared to On-Demand Instance pricing. You have the flexibility to change families, operating system types, and tenancies while benefitting from Reserved Instance pricing when you use Convertible Reserved Instances.
• Savings Plans—Savings Plans are a flexible pricing model that offer low prices on EC2 and Fargate usage, in exchange for a commitment to a
consistent amount of usage (measured in $/hour) for a 1- or 3-year term.
• Dedicated Hosts—A Dedicated Host is a physical EC2 server dedicated for your use. Dedicated Hosts can help you reduce costs by allowing you to use your existing server-bound software licenses, including Windows Server, SQL Server, and SUSE Linux Enterprise Server (subject to your license terms), and can also help you meet compliance requirements.

Amazon EC2 Auto Scaling

Amazon EC2 Auto Scaling helps you maintain application availability and allows you to automatically add or remove EC2 instances according to conditions you define. You can use the fleet management features of Amazon EC2 Auto Scaling to maintain the health and availability of your fleet. You can also use the dynamic and predictive scaling features of Amazon EC2 Auto Scaling to add or remove EC2 instances. Dynamic scaling responds to changing demand, and predictive scaling automatically schedules the right number of EC2 instances based on predicted demand. Dynamic scaling and predictive scaling can be used together to scale faster.

Amazon EC2 Image Builder

EC2 Image Builder simplifies the building, testing, and deployment of virtual machine and container images for use on AWS or on-premises. Keeping virtual machine and container images up to date can be time-consuming, resource-intensive, and error-prone. Currently, customers either manually update and snapshot VMs, or have teams that build automation scripts to maintain images. Image Builder significantly reduces the effort of keeping images up to date and secure by providing a simple graphical interface, built-in automation, and AWS-provided security settings. With Image Builder, there are no manual steps for updating an image, nor do you have to build your own automation pipeline. Image Builder is offered at no cost, other than the cost of the underlying AWS resources used to create, store, and share the images.

Amazon Lightsail

Amazon Lightsail is
designed to be the easiest way to launch and manage a virtual private server with AWS. Lightsail plans include everything you need to jumpstart your project – a virtual machine, SSD-based storage, data transfer, DNS management, and a static IP address – for a low, predictable price.

AWS App Runner

AWS App Runner is a fully managed service that makes it easy for developers to quickly deploy containerized web applications and APIs, at scale and with no prior infrastructure experience required. Start with your source code or a container image. App Runner automatically builds and deploys the web application, and load balances traffic with encryption. App Runner also scales up or down automatically to meet your traffic needs. With App Runner, rather than thinking about servers or scaling, you have more time to focus on your applications.

AWS Batch

AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU- or memory-optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. With AWS Batch, there is no need to install and manage batch computing software or server clusters that you use to run your jobs, allowing you to focus on analyzing results and solving problems. AWS Batch plans, schedules, and runs your batch computing workloads across the full range of AWS compute services and features, such as Amazon EC2 and Spot Instances.

AWS Elastic Beanstalk

AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and Internet Information Services (IIS). You can simply upload your code, and AWS Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring.
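The capacity settings Elastic Beanstalk manages for you are exposed as namespaced configuration options, each a Namespace/OptionName/Value triple. A small sketch of building such a list, for example to bound the environment's Auto Scaling group (the namespace and option names here are taken from the Elastic Beanstalk configuration options for Auto Scaling as I recall them; verify them against the current documentation before use):

```python
def autoscaling_options(min_size, max_size):
    # Builds the OptionSettings list that Elastic Beanstalk's
    # UpdateEnvironment API accepts. All values are passed as strings.
    return [
        {"Namespace": "aws:autoscaling:asg",
         "OptionName": "MinSize", "Value": str(min_size)},
        {"Namespace": "aws:autoscaling:asg",
         "OptionName": "MaxSize", "Value": str(max_size)},
    ]

# These triples would be passed as the OptionSettings parameter of,
# e.g., boto3's elasticbeanstalk update_environment call.
opts = autoscaling_options(1, 4)
```

The same triple format also appears in .ebextensions configuration files bundled with your source, so one representation covers both deployment paths.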
At the same time, you retain full control over the AWS resources powering your application, and can access the underlying resources at any time.

AWS Fargate

AWS Fargate is a compute engine for Amazon ECS that allows you to run containers without having to manage servers or clusters. With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. AWS Fargate removes the need for you to interact with or think about servers or clusters. Fargate lets you focus on designing and building your applications, instead of managing the infrastructure that runs them. Amazon ECS has two modes: Fargate launch type and EC2 launch type. With Fargate launch type, all you have to do is package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. EC2 launch type allows you to have server-level, more granular control over the infrastructure that runs your container applications. With EC2 launch type, you can use Amazon ECS to manage a cluster of servers and schedule placement of containers on the servers. Amazon ECS keeps track of all the CPU, memory, and other resources in your cluster, and also finds the best server for a container to run on based on your specified resource requirements. You are responsible for provisioning, patching, and scaling clusters of servers. You can decide which type of server to use, which applications and how many containers to run in a cluster to optimize utilization, and when you should add or remove servers from a cluster. EC2 launch type gives you more control of your server clusters, and provides a broader range of customization options, which might be required to support some specific applications or possible compliance and government requirements.

AWS Lambda

AWS Lambda lets you run
code without provisioning or managing servers. You pay only for the compute time you consume—there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services, or you can call it directly from any web or mobile app.

AWS Serverless Application Repository

The AWS Serverless Application Repository enables you to quickly deploy code samples, components, and complete applications for common use cases such as web and mobile backends, event and data processing, logging, monitoring, IoT, and more. Each application is packaged with an AWS Serverless Application Model (SAM) template that defines the AWS resources used. Publicly shared applications also include a link to the application's source code. There is no additional charge to use the Serverless Application Repository; you only pay for the AWS resources used in the applications you deploy. You can also use the Serverless Application Repository to publish your own applications and share them within your team, across your organization, or with the community at large. To share an application you've built, publish it to the AWS Serverless Application Repository.

AWS Outposts

AWS Outposts bring native AWS services, infrastructure, and operating models to virtually any data center, colocation space, or on-premises facility. You can use the same APIs, the same tools, the same hardware, and the same functionality across on-premises and the cloud to deliver a truly consistent hybrid experience. Outposts can be used to support workloads that need to remain on-premises due to low latency or local data processing needs. AWS Outposts come in two variants: 1) VMware Cloud on AWS Outposts allows you to use the same VMware control plane and APIs you use to run your infrastructure; 2) the
AWS native variant of AWS Outposts allows you to use the same exact APIs and control plane you use to run in the AWS Cloud, but on-premises. AWS Outposts infrastructure is fully managed, maintained, and supported by AWS to deliver access to the latest AWS services. Getting started is easy: you simply log in to the AWS Management Console to order your Outposts servers, choosing from a wide range of compute and storage options. You can order one or more servers, or quarter, half, and full rack units.

AWS Wavelength

AWS Wavelength is an AWS infrastructure offering optimized for mobile edge computing applications. Wavelength Zones are AWS infrastructure deployments that embed AWS compute and storage services within communications service providers' (CSP) data centers at the edge of the 5G network, so application traffic from 5G devices can reach application servers running in Wavelength Zones without leaving the telecommunications network. This avoids the latency that would result from application traffic having to traverse multiple hops across the internet to reach its destination, enabling customers to take full advantage of the latency and bandwidth benefits offered by modern 5G networks.

VMware Cloud on AWS

VMware Cloud on AWS is an integrated cloud offering jointly developed by AWS and VMware, delivering a highly scalable, secure, and innovative service that allows organizations to seamlessly migrate and extend their on-premises VMware vSphere-based environments to the AWS Cloud, running on next-generation Amazon Elastic Compute Cloud (Amazon EC2) bare-metal infrastructure. VMware Cloud on AWS is ideal for enterprise IT infrastructure and operations organizations looking to migrate their on-premises vSphere-based workloads to the public cloud, consolidate and extend their data center capacities, and optimize, simplify, and modernize their disaster recovery solutions. VMware Cloud on AWS is delivered, sold, and supported globally
by VMware and its partners, with availability in the following AWS Regions: AWS Europe (Stockholm), AWS US East (Northern Virginia), AWS US East (Ohio), AWS US West (Northern California), AWS US West (Oregon), AWS Canada (Central), AWS Europe (Frankfurt), AWS Europe (Ireland), AWS Europe (London), AWS Europe (Paris), AWS Europe (Milan), AWS Asia Pacific (Singapore), AWS Asia Pacific (Sydney), AWS Asia Pacific (Tokyo), AWS Asia Pacific (Mumbai), AWS South America (Sao Paulo), AWS Asia Pacific (Seoul), and AWS GovCloud (US West). With each release, VMware Cloud on AWS availability will expand into additional global regions. VMware Cloud on AWS brings the broad, diverse, and rich innovations of AWS services natively to the enterprise applications running on VMware's compute, storage, and network virtualization platforms. This allows organizations to easily and rapidly add new innovations to their enterprise applications by natively integrating AWS infrastructure and platform capabilities such as AWS Lambda, Amazon Simple Queue Service (SQS), Amazon S3, Elastic Load Balancing, Amazon RDS, Amazon DynamoDB, Amazon Kinesis, and Amazon Redshift, among many others. With VMware Cloud on AWS, organizations can simplify their hybrid IT operations by using the same VMware Cloud Foundation technologies, including vSphere, vSAN, NSX, and vCenter Server, across their on-premises data centers and on the AWS Cloud, without having to purchase any new or custom hardware, rewrite applications, or modify their operating models. The service automatically provisions infrastructure, and provides full VM compatibility and workload portability between your on-premises environments and the AWS Cloud. With VMware Cloud on AWS, you can leverage AWS's breadth of services, including compute, databases, analytics, Internet of Things (IoT), security, mobile, deployment, application services, and more.

Contact Center

Topics
• Amazon Connect

Amazon Connect

Amazon Connect is a self-service, omnichannel cloud contact center service that makes it easy
for any business to deliver better customer service at lower cost. Amazon Connect is based on the same contact center technology used by Amazon customer service associates around the world to power millions of customer conversations. The self-service graphical interface in Amazon Connect makes it easy for non-technical users to design contact flows, manage agents, and track performance metrics; no specialized skills are required. There are no upfront payments or long-term commitments and no infrastructure to manage with Amazon Connect; customers pay by the minute for Amazon Connect usage plus any associated telephony services.

Containers
Topics
• Amazon Elastic Container Registry (p 25)
• Amazon Elastic Container Service (p 25)
• Amazon Elastic Kubernetes Service (p 25)
• AWS App2Container (p 25)
• Red Hat OpenShift Service on AWS (p 26)

Amazon Elastic Container Registry
Amazon Elastic Container Registry (ECR) is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon Elastic Container Service (Amazon ECS), simplifying your development-to-production workflow. Amazon ECR eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure. Amazon ECR hosts your images in a highly available and scalable architecture, allowing you to reliably deploy containers for your applications. Integration with AWS Identity and Access Management (IAM) (p 69) provides resource-level control of each repository. With Amazon ECR, there are no upfront fees or commitments. You pay only for the amount of data you store in your repositories and data transferred to the internet.
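Pushing an image to ECR means tagging it with the repository's registry URI, which follows a fixed `<account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>` pattern. A minimal sketch of that naming convention (the account ID, Region, and repository name below are placeholder values):

```python
# Sketch: build the registry URI an image must be tagged with before
# "docker push" to ECR. Account ID, Region, and repo name are placeholders.
def ecr_image_uri(account_id: str, region: str, repo: str, tag: str = "latest") -> str:
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"

uri = ecr_image_uri("123456789012", "us-east-1", "my-app", "v1")
# A typical push flow would then be (shell, not executed here):
#   aws ecr get-login-password | docker login --username AWS --password-stdin <registry>
#   docker tag my-app:v1 <uri> && docker push <uri>
```
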
Amazon Elastic Container Service
Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. Amazon ECS eliminates the need for you to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines. With simple API calls, you can launch and stop Docker-enabled applications, query the complete state of your application, and access many familiar features such as IAM roles, security groups, load balancers, Amazon CloudWatch Events, AWS CloudFormation templates, and AWS CloudTrail logs.

Amazon Elastic Kubernetes Service
Amazon Elastic Kubernetes Service (Amazon EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS Availability Zones to eliminate a single point of failure. Amazon EKS is certified Kubernetes conformant, so you can use existing tooling and plugins from partners and the Kubernetes community. Applications running on any standard Kubernetes environment are fully compatible and can be easily migrated to Amazon EKS.

AWS App2Container
AWS App2Container (A2C) is a command-line tool for modernizing .NET and Java applications into containerized applications. A2C analyzes and builds an inventory of all applications running in virtual machines, on premises or in the cloud. You simply select the application you want to containerize, and A2C packages the application artifact and identified dependencies into container images, configures the network ports, and generates the ECS task and Kubernetes pod definitions. A2C provisions, through CloudFormation, the cloud infrastructure and CI/CD pipelines required to deploy the containerized .NET or Java application into production. With A2C, you can easily modernize your existing applications and standardize their deployment and operations through containers.
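The "simple API calls" mentioned for ECS map to operations such as `RunTask`. A hedged sketch of the request shape one might pass to boto3's ECS client; the cluster and task-definition names are hypothetical, and the actual call is left commented out:

```python
# Sketch: parameters for launching a task on ECS. The cluster and task
# definition names are hypothetical; boto3.client("ecs").run_task(**kwargs)
# would start the containers (not executed here).
def run_task_request(cluster: str, task_def: str, count: int = 1) -> dict:
    return {
        "cluster": cluster,
        "taskDefinition": task_def,  # "family:revision"
        "count": count,
        "launchType": "FARGATE",     # or "EC2"
    }

kwargs = run_task_request("demo-cluster", "web-app:3")
```
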
Red Hat OpenShift Service on AWS
Red Hat OpenShift Service on AWS (ROSA) provides an integrated experience to use OpenShift. If you are already familiar with OpenShift, you can accelerate your application development process by leveraging familiar OpenShift APIs and tools for deployments on AWS. With ROSA, you can use the wide range of AWS compute, database, analytics, machine learning, networking, mobile, and other services to build secure and scalable applications faster. ROSA comes with pay-as-you-go hourly and annual billing, a 99.95% SLA, and joint support from AWS and Red Hat. ROSA makes it easier for you to focus on deploying applications and accelerating innovation by moving cluster lifecycle management to Red Hat and AWS. With ROSA, you can run containerized applications with your existing OpenShift workflows and reduce the complexity of management.

Database
Topics
• Amazon Aurora (p 26)
• Amazon DynamoDB (p 26)
• Amazon ElastiCache (p 27)
• Amazon Keyspaces (for Apache Cassandra) (p 27)
• Amazon Neptune (p 27)
• Amazon Relational Database Service (p 28)
• Amazon RDS on VMware (p 28)
• Amazon Quantum Ledger Database (QLDB) (p 28)
• Amazon Timestream (p 29)
• Amazon DocumentDB (with MongoDB compatibility) (p 29)

Amazon Aurora
Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Amazon Aurora is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases. It provides the security, availability, and reliability of commercial databases at 1/10th the cost. Amazon Aurora is fully managed by Amazon Relational Database Service (Amazon RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups. Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 128 TB per database instance. It delivers high performance and availability with up to 15
low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across three Availability Zones (AZs).

Amazon DynamoDB
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-Region, multi-master database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and support peaks of more than 20 million requests per second. Many of the world's fastest-growing businesses, such as Lyft, Airbnb, and Redfin, as well as enterprises such as Samsung, Toyota, and Capital One, depend on the scale and performance of DynamoDB to support their mission-critical workloads. Hundreds of thousands of AWS customers have chosen DynamoDB as their key-value and document database for mobile, web, gaming, ad tech, IoT, and other applications that need low-latency data access at any scale. Create a new table for your application and let DynamoDB handle the rest.
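DynamoDB's key-value model boils down to `PutItem`/`GetItem` calls keyed on a primary key. The stand-in below mimics those call shapes locally, purely to illustrate the model; real code would target a boto3 `Table` object, and the table and attribute names here are hypothetical:

```python
# In-memory stand-in for DynamoDB's put_item/get_item call shapes, showing
# the key-value model; real code would use boto3's Table("users") instead.
class FakeTable:
    def __init__(self):
        self._items = {}

    def put_item(self, Item):
        # Items are dicts addressed by their partition key ("user_id" here).
        self._items[Item["user_id"]] = Item

    def get_item(self, Key):
        item = self._items.get(Key["user_id"])
        return {"Item": item} if item else {}

t = FakeTable()
t.put_item(Item={"user_id": "u-42", "name": "Ada", "plan": "pro"})
resp = t.get_item(Key={"user_id": "u-42"})
```
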
Amazon ElastiCache
Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory caches instead of relying entirely on slower disk-based databases. Amazon ElastiCache supports two open-source in-memory caching engines:
• Redis, a fast, open-source, in-memory key-value data store for use as a database, cache, message broker, and queue. Amazon ElastiCache for Redis is a Redis-compatible in-memory service that delivers the ease of use and power of Redis along with the availability, reliability, and performance suitable for the most demanding applications. Both single-node clusters and clusters with up to 15 shards are available, enabling scalability to up to 3.55 TiB of in-memory data. ElastiCache for Redis is fully managed, scalable, and secure. This makes it an ideal candidate to power high-performance use cases such as web, mobile apps, gaming, ad tech, and IoT.
• Memcached, a widely adopted memory object caching system. ElastiCache for Memcached is protocol compliant with Memcached, so popular tools that you use today with existing Memcached environments will work seamlessly with the service.

Amazon Keyspaces (for Apache Cassandra)
Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra–compatible database service. With Amazon Keyspaces, you can run your Cassandra workloads on AWS using the same Cassandra application code and developer tools that you use today. You don't have to provision, patch, or manage servers, and you don't have to install, maintain, or operate software. Amazon Keyspaces is serverless, so you pay for only the resources you use, and the service can automatically scale tables up and down in response to application traffic. You can build applications that serve thousands of requests per second with virtually unlimited throughput and storage. Data is encrypted by default, and Amazon Keyspaces enables you to back up your table data continuously using point-in-time recovery. Amazon Keyspaces gives you the performance, elasticity, and enterprise features you need to operate business-critical Cassandra workloads at scale.

Amazon Neptune
Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with milliseconds latency. Amazon Neptune supports the popular graph models Property Graph and W3C's RDF, and their respective query languages, Apache TinkerPop Gremlin and SPARQL, allowing you to easily build queries that efficiently navigate highly connected datasets. Neptune powers graph use cases such as recommendation engines, fraud detection, knowledge
graphs, drug discovery, and network security. Amazon Neptune is highly available, with read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across Availability Zones. Neptune is secure, with support for encryption at rest, and fully managed, so you no longer need to worry about database management tasks such as hardware provisioning, software patching, setup, configuration, or backups.

Amazon Relational Database Service
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security, and compatibility they need. Amazon RDS is available on several database instance types, optimized for memory, performance, or I/O, and provides you with six familiar database engines to choose from, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. You can use the AWS Database Migration Service to easily migrate or replicate your existing databases to Amazon RDS.
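Provisioning an RDS instance comes down to declaring an engine, an instance class, and storage. A hedged sketch of the parameters one might pass to boto3's `create_db_instance`; all values are illustrative and the call itself is not executed:

```python
# Sketch: request parameters for provisioning an RDS instance. Values are
# illustrative; boto3.client("rds").create_db_instance(**kwargs) would
# provision the instance (not executed here).
def create_db_instance_request(name: str, engine: str = "postgres") -> dict:
    return {
        "DBInstanceIdentifier": name,
        "Engine": engine,               # e.g. postgres, mysql, mariadb
        "DBInstanceClass": "db.t3.micro",
        "AllocatedStorage": 20,         # GiB
        "MasterUsername": "dbadmin",    # placeholder
    }

kwargs = create_db_instance_request("demo-db")
```
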
Amazon RDS on VMware
Amazon Relational Database Service (Amazon RDS) on VMware lets you deploy managed databases in on-premises VMware environments using the Amazon RDS technology enjoyed by hundreds of thousands of AWS customers. Amazon RDS provides cost-efficient and resizable capacity while automating time-consuming administration tasks, including hardware provisioning, database setup, patching, and backups, freeing you to focus on your applications. RDS on VMware brings these same benefits to your on-premises deployments, making it easy to set up, operate, and scale databases in VMware vSphere private data centers, or to migrate them to AWS. Amazon RDS on VMware allows you to utilize the same simple interface for managing databases in on-premises VMware environments as you would use in AWS. You can easily replicate RDS on VMware databases to RDS instances in AWS, enabling low-cost hybrid deployments for disaster recovery, read replica bursting, and optional long-term backup retention in Amazon Simple Storage Service (Amazon S3).

Amazon Quantum Ledger Database (QLDB)
Amazon QLDB is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log owned by a central trusted authority. Amazon QLDB tracks each and every application data change and maintains a complete and verifiable history of changes over time.

Ledgers are typically used to record a history of economic and financial activity in an organization. Many organizations build applications with ledger-like functionality because they want to maintain an accurate history of their applications' data, for example, tracking the history of credits and debits in banking transactions, verifying the data lineage of an insurance claim, or tracing the movement of an item in a supply chain network. Ledger applications are often implemented using custom audit tables or audit trails created in relational databases. However, building audit functionality with relational databases is time consuming and prone to human error. It requires custom development, and since relational databases are not inherently immutable, any unintended changes to the data are hard to track and verify. Alternatively, blockchain frameworks such as Hyperledger Fabric and Ethereum can also be used as a ledger. However, this adds complexity, as you need to set up an entire blockchain network with multiple nodes, manage its infrastructure, and require the nodes to validate each transaction before it can be added to the ledger.

Amazon QLDB is a new class of database that eliminates the need to engage in the complex development effort of building your own ledger-like applications. With QLDB, your data's change
history is immutable; it cannot be altered or deleted, and using cryptography, you can easily verify that there have been no unintended modifications to your application's data. QLDB uses an immutable transactional log, known as a journal, that tracks each application data change and maintains a complete and verifiable history of changes over time. QLDB is easy to use because it provides developers with a familiar SQL-like API, a flexible document data model, and full support for transactions. QLDB is also serverless, so it automatically scales to support the demands of your application. There are no servers to manage and no read or write limits to configure. With QLDB, you only pay for what you use.
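The verifiable-history idea can be illustrated with a toy hash chain: each journal entry's digest covers the previous entry's digest, so editing any earlier entry invalidates every digest after it. This is a concept sketch only, not QLDB's actual journal format or API:

```python
import hashlib
import json

# Toy append-only journal with a QLDB-style hash chain: each entry's digest
# covers the previous digest, so any alteration is detectable.
def append(journal: list, doc: dict) -> None:
    prev = journal[-1]["digest"] if journal else b""
    payload = json.dumps(doc, sort_keys=True).encode()
    journal.append({"doc": doc, "digest": hashlib.sha256(prev + payload).digest()})

def verify(journal: list) -> bool:
    prev = b""
    for entry in journal:
        payload = json.dumps(entry["doc"], sort_keys=True).encode()
        if hashlib.sha256(prev + payload).digest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True

journal = []
append(journal, {"account": "a-1", "credit": 100})
append(journal, {"account": "a-1", "debit": 30})
assert verify(journal)
journal[0]["doc"]["credit"] = 999   # tampering breaks the chain
assert not verify(journal)
```
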
Amazon Timestream
Amazon Timestream is a fast, scalable, fully managed time-series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day at 1/10th the cost of relational databases. Driven by the rise of IoT devices, IT systems, and smart industrial machines, time-series data (data that measures how things change over time) is one of the fastest-growing data types. Time-series data has specific characteristics: it typically arrives in time order, the data is append-only, and queries are always over a time interval. While relational databases can store this data, they are inefficient at processing it, as they lack optimizations such as storing and retrieving data by time intervals. Timestream is a purpose-built time-series database that efficiently stores and processes this data by time intervals. With Timestream, you can easily store and analyze log data for DevOps, sensor data for IoT applications, and industrial telemetry data for equipment maintenance. As your data grows over time, Timestream's adaptive query processing engine understands its location and format, making your data simpler and faster to analyze. Timestream also automates rollups, retention, tiering, and compression of data, so you can manage your data at the lowest possible cost. Timestream is serverless, so there are no servers to manage. It handles time-consuming tasks such as server provisioning, software patching, setup, configuration, and data retention and tiering, freeing you to focus on building your applications.

Amazon DocumentDB (with MongoDB compatibility)
Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. Amazon DocumentDB (with MongoDB compatibility) is designed from the ground up to give you the performance, scalability, and availability you need when operating mission-critical MongoDB workloads at scale. Amazon DocumentDB (with MongoDB compatibility) implements the Apache 2.0 open-source MongoDB 3.6 and 4.0 APIs by emulating the responses that a MongoDB client expects from a MongoDB server, allowing you to use your existing MongoDB drivers and tools with Amazon DocumentDB (with MongoDB compatibility).

Developer Tools
Amazon Corretto
Amazon Corretto is a no-cost, multiplatform, production-ready distribution of the Open Java Development Kit (OpenJDK). Corretto comes with long-term support that will include performance enhancements and security fixes. Amazon runs Corretto internally on thousands of production services, and Corretto is certified as compatible with the Java SE standard. With Corretto, you can develop and run Java applications on popular operating systems, including Amazon Linux 2, Windows, and macOS.

AWS Cloud9
AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal. Cloud9 comes prepackaged with essential tools for popular programming languages, including JavaScript, Python, PHP, and more, so you don't need to install files or configure your development machine to start new
projects. Since your Cloud9 IDE is cloud-based, you can work on your projects from your office, home, or anywhere, using an internet-connected machine. Cloud9 also provides a seamless experience for developing serverless applications, enabling you to easily define resources, debug, and switch between local and remote execution of serverless applications. With Cloud9, you can quickly share your development environment with your team, enabling you to pair program and track each other's inputs in real time.

AWS CloudShell
AWS CloudShell is a browser-based shell that makes it easy to securely manage, explore, and interact with your AWS resources. CloudShell is pre-authenticated with your console credentials. Common development and operations tools are preinstalled, so no local installation or configuration is required. With CloudShell, you can quickly run scripts with the AWS Command Line Interface (AWS CLI), experiment with AWS service APIs using the AWS SDKs, or use a range of other tools to be productive. You can use CloudShell right from your browser and at no additional cost.

AWS CodeArtifact
AWS CodeArtifact is a fully managed artifact repository service that makes it easy for organizations of any size to securely store, publish, and share software packages used in their software development process. CodeArtifact can be configured to automatically fetch software packages and dependencies from public artifact repositories, so developers have access to the latest versions. CodeArtifact works with commonly used package managers and build tools like Maven, Gradle, npm, yarn, twine, pip, and NuGet, making it easy to integrate into existing development workflows.

AWS CodeBuild
AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don't need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a
queue. You can get started quickly by using prepackaged build environments, or you can create custom build environments that use your own build tools.

AWS CodeCommit
AWS CodeCommit is a fully managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories. AWS CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. You can use AWS CodeCommit to securely store anything from source code to binaries, and it works seamlessly with your existing Git tools.

AWS CodeDeploy
AWS CodeDeploy is a service that automates code deployments to any instance, including EC2 instances and instances running on premises. CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. You can use CodeDeploy to automate software deployments, eliminating the need for error-prone manual operations. The service scales with your infrastructure, so you can easily deploy to one instance or thousands.

AWS CodePipeline
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. This enables you to rapidly and reliably deliver features and updates. You can easily integrate CodePipeline with third-party services such as GitHub or with your own custom plugin. With AWS CodePipeline, you only pay for what you use. There are no upfront fees or long-term commitments.
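CodeBuild reads its build instructions from a buildspec file at the root of the source. A minimal illustrative `buildspec.yml` for a Python project; the phase and key names follow the buildspec format, while the commands themselves are placeholders:

```yaml
# Illustrative buildspec.yml for CodeBuild; commands are placeholders.
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.9
  build:
    commands:
      - pip install -r requirements.txt
      - pytest
artifacts:
  files:
    - '**/*'
```

In a CodePipeline release pipeline, a buildspec like this typically backs the build stage between the source (e.g., CodeCommit) and deploy (e.g., CodeDeploy) stages.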
AWS CodeStar
AWS CodeStar enables you to quickly develop, build, and deploy applications on AWS. AWS CodeStar provides a unified user interface, enabling you to easily manage your software development activities in one place. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, allowing you to start releasing code faster. AWS CodeStar makes it easy for your whole team to work together securely, allowing you to easily manage access and add owners, contributors, and viewers to your projects. Each AWS CodeStar project comes with a project management dashboard, including an integrated issue tracking capability powered by Atlassian JIRA Software. With the AWS CodeStar project dashboard, you can easily track progress across your entire software development process, from your backlog of work items to teams' recent code deployments. For more information, see AWS CodeStar features.

AWS Fault Injection Simulator
AWS Fault Injection Simulator is a fully managed service for running fault injection experiments on AWS that makes it easier to improve an application's performance, observability, and resiliency. Fault injection experiments are used in chaos engineering, which is the practice of stressing an application in testing or production environments by creating disruptive events (such as a sudden increase in CPU or memory consumption), observing how the system responds, and implementing improvements. Fault injection experiments help teams create the real-world conditions needed to uncover the hidden bugs, monitoring blind spots, and performance bottlenecks that are difficult to find in distributed systems.

Fault Injection Simulator simplifies the process of setting up and running controlled fault injection experiments across a range of AWS services, so teams can build confidence in their application behavior. With Fault Injection Simulator, teams can quickly set up experiments using prebuilt templates that generate the desired disruptions. Fault Injection Simulator provides the controls and guardrails that teams need to run experiments in production, such as automatically rolling back or stopping the experiment if specific conditions are met. With a few clicks in the console, teams can
run complex scenarios with common distributed system failures happening in parallel or building sequentially over time, enabling them to create the real-world conditions necessary to find hidden weaknesses.

AWS X-Ray
AWS X-Ray helps developers analyze and debug distributed applications in production or under development, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing, so you can identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application and shows a map of your application's underlying components. You can use X-Ray to analyze applications both in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.

End User Computing
Topics
• Amazon AppStream 2.0 (p 32)
• Amazon WorkSpaces (p 32)
• Amazon WorkLink (p 32)

Amazon AppStream 2.0
Amazon AppStream 2.0 is a fully managed application streaming service. You centrally manage your desktop applications on AppStream 2.0 and securely deliver them to any computer. You can easily scale to any number of users across the globe without acquiring, provisioning, and operating hardware or infrastructure. AppStream 2.0 is built on AWS, so you benefit from a data center and network architecture designed for the most security-sensitive organizations. Each user has a fluid and responsive experience with your applications, including GPU-intensive 3D design and engineering ones, because your applications run on virtual machines (VMs) optimized for specific use cases, and each streaming session automatically adjusts to network conditions. Enterprises can use AppStream 2.0 to simplify application delivery and complete their migration to the cloud. Educational institutions can provide every student access to the applications
they need for class on any computer. Software vendors can use AppStream 2.0 to deliver trials, demos, and training for their applications, with no downloads or installations. They can also develop a full software-as-a-service (SaaS) solution without rewriting their application.

Amazon WorkSpaces
Amazon WorkSpaces is a fully managed, secure cloud desktop service. You can use WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe. You can pay either monthly or hourly, just for the WorkSpaces you launch, which helps you save money when compared to traditional desktops and on-premises VDI solutions. WorkSpaces helps you eliminate the complexity of managing hardware inventory, OS versions and patches, and Virtual Desktop Infrastructure (VDI), which helps simplify your desktop delivery strategy. With WorkSpaces, your users get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device.

Amazon WorkLink
Amazon WorkLink is a fully managed service that lets you provide your employees with secure, easy access to your internal corporate websites and web apps using their mobile phones. Traditional solutions, such as Virtual Private Networks (VPNs) and device management software, are inconvenient to use on the go and often require the use of custom browsers that have a poor user experience. As a result, employees often forgo using them altogether. With Amazon WorkLink, employees can access internal web content as easily as they access any public website, without the hassle of connecting to their corporate network. When a user accesses an internal website, the page is first rendered in a browser running in a secure container in AWS. Amazon WorkLink then sends the contents of that page to employee phones as vector graphics, while preserving the functionality and interactivity of the page. This approach is more secure than traditional solutions because internal content
is never stored or cached by the browser on employee phones, and employee devices never connect directly to your corporate network. With Amazon WorkLink, there are no minimum fees or long-term commitments. You pay only for users that connect to the service each month, and there is no additional charge for bandwidth consumption.

Front-End Web & Mobile Services
Topics
• Amazon Location Service (p 33)
• Amazon Pinpoint (p 33)
• AWS Amplify (p 33)
• AWS Device Farm (p 34)
• AWS AppSync (p 34)

Amazon Location Service
Amazon Location Service makes it easy for developers to add location functionality to applications without compromising data security and user privacy. Location data is a vital ingredient in today's applications, enabling capabilities ranging from asset tracking to location-based marketing. However, developers face significant barriers when integrating location functionality into their applications, including cost, privacy and security compromises, and tedious and slow integration work. Amazon Location Service provides affordable data, tracking and geofencing capabilities, and native integrations with AWS services, so you can create sophisticated location-enabled applications quickly, without the high cost of custom development. You retain control of your location data with Amazon Location, and you can combine proprietary data with data from the service. Amazon Location provides cost-effective location-based services (LBS) using high-quality data from the global, trusted providers Esri and HERE.

Amazon Pinpoint
Amazon Pinpoint makes it easy to send targeted messages to your customers through multiple engagement channels. Examples of targeted campaigns are promotional alerts and customer retention campaigns; transactional messages are messages such as order confirmations and password reset messages. You can integrate Amazon Pinpoint into your mobile and web apps to capture usage data that provides you with insight into how
customers interact with your apps. Amazon Pinpoint also tracks the ways that your customers respond to the messages you send, for example, by showing you the number of messages that were delivered, opened, or clicked. You can develop custom audience segments and send them prescheduled, targeted campaigns via email, SMS, and push notifications. Targeted campaigns are useful for sending promotional or educational content to re-engage and retain your users. You can send transactional messages using the console or the Amazon Pinpoint REST API. Transactional campaigns can be sent via email, SMS, push notifications, and voice messages. You can also use the API to build custom applications that deliver campaign and transactional messages.

AWS Amplify
AWS Amplify makes it easy to create, configure, and implement scalable mobile applications powered by AWS. Amplify seamlessly provisions and manages your mobile backend and provides a simple framework to easily integrate your backend with your iOS, Android, web, and React Native frontends. Amplify also automates the application release process of both your frontend and backend, allowing you to deliver features faster.

Mobile applications require cloud services for actions that can't be done directly on the device, such as offline data synchronization, storage, or data sharing across multiple users. You often have to configure, set up, and manage multiple services to power the backend. You also have to integrate each of those services into your application by writing multiple lines of code. However, as the number of application features grows, your code and release process become more complex, and managing the backend requires more time. Amplify provisions and manages backends for your mobile applications. You just select the capabilities you need, such as authentication, analytics, or offline data sync, and Amplify will automatically provision and manage the AWS service that powers each of the
capabilities. You can then integrate those capabilities into your application through the Amplify libraries and UI components.

AWS Device Farm
AWS Device Farm is an app testing service that lets you test and interact with your Android, iOS, and web apps on many devices at once, or reproduce issues on a device in real time. View video, screenshots, logs, and performance data to pinpoint and fix issues before shipping your app.

AWS AppSync
AWS AppSync is a serverless backend for mobile, web, and enterprise applications. AWS AppSync makes it easy to build data-driven mobile and web applications by securely handling all the application data management tasks, like online and offline data access, data synchronization, and data manipulation across multiple data sources. AWS AppSync uses GraphQL, an API query language designed to build client applications by providing an intuitive and flexible syntax for describing their data requirements.

Game Tech
Topics
• Amazon GameLift (p 34)
• Amazon Lumberyard (p 34)

Amazon GameLift
Amazon GameLift is a managed service for deploying, operating, and scaling dedicated game servers for session-based multiplayer games. Amazon GameLift makes it easy to manage server infrastructure, scale capacity to lower latency and cost, match players into available game sessions, and defend from distributed denial-of-service (DDoS) attacks. You pay for the compute resources and bandwidth your games actually use, without monthly or annual contracts.

Amazon Lumberyard
Amazon Lumberyard is a free, cross-platform 3D game engine for you to create the highest-quality games, connect your games to the vast compute and storage of the AWS Cloud, and engage fans on Twitch. By starting game projects with Lumberyard, you can spend more of your time creating great gameplay and building communities of fans, and less time on the undifferentiated heavy lifting of building a game engine and managing server infrastructure.

Internet of Things (IoT)
Topics
• AWS IoT 1-Click
• AWS IoT Analytics
• AWS IoT Button
• AWS IoT Core
• AWS IoT Device Defender
• AWS IoT Device Management
• AWS IoT Events
• AWS IoT Greengrass
• AWS IoT SiteWise
• AWS IoT Things Graph
• AWS Partner Device Catalog
• FreeRTOS

AWS IoT 1-Click

AWS IoT 1-Click is a service that enables simple devices to trigger AWS Lambda functions that can execute an action. AWS IoT 1-Click supported devices enable you to easily perform actions such as notifying technical support, tracking assets, and replenishing goods or services. AWS IoT 1-Click supported devices are ready for use right out of the box and eliminate the need for writing your own firmware or configuring them for secure connectivity. AWS IoT 1-Click supported devices can be easily managed. You can easily create device groups and associate them with a Lambda function that runs your desired action when triggered. You can also track device health and activity with the prebuilt reports.

AWS IoT Analytics

AWS IoT Analytics is a fully managed service that makes it easy to run and operationalize sophisticated analytics on massive volumes of IoT data, without having to worry about the cost and complexity typically required to build an IoT analytics platform. It is the easiest way to run analytics on IoT data and get insights to make better and more accurate decisions for IoT applications and machine learning use cases. IoT data is highly unstructured, which makes it difficult to analyze with traditional analytics and business intelligence tools that are designed to process structured data. IoT data comes from devices that often record fairly noisy processes (such as temperature, motion, or sound). The data from these devices can frequently have significant gaps, corrupted messages, and false readings that must be cleaned up before analysis can occur. Also, IoT data is often only meaningful in the context of additional, third-party data inputs. For example, to
help farmers determine when to water their crops, vineyard irrigation systems often enrich moisture sensor data with rainfall data from the vineyard, allowing for more efficient water usage while maximizing harvest yield. AWS IoT Analytics automates each of the difficult steps that are required to analyze data from IoT devices. AWS IoT Analytics filters, transforms, and enriches IoT data before storing it in a time-series data store for analysis. You can set up the service to collect only the data you need from your devices, apply mathematical transforms to process the data, and enrich the data with device-specific metadata, such as device type and location, before storing the processed data. Then you can analyze your data by running ad hoc or scheduled queries using the built-in SQL query engine, or perform more complex analytics and machine learning inference. AWS IoT Analytics makes it easy to get started with machine learning by including prebuilt models for common IoT use cases. You can also use your own custom analysis, packaged in a container, to execute on AWS IoT Analytics. AWS IoT Analytics automates the execution of your custom analyses created in Jupyter Notebook or your own tools (such as MATLAB, Octave, etc.) to be executed on your schedule. AWS IoT Analytics is a fully managed service that operationalizes analyses and scales automatically to support up to petabytes of IoT data. With AWS IoT Analytics, you can analyze data from millions of devices and build fast, responsive IoT applications without managing hardware or infrastructure.

AWS IoT Button

The AWS IoT Button is a programmable button based on the Amazon Dash Button hardware. This simple Wi-Fi device is easy to configure, and it's designed for developers to get started with AWS IoT Core, AWS Lambda, Amazon DynamoDB, Amazon SNS, and many other Amazon Web Services without writing device-specific code. You can code the button's logic in the cloud to configure button
clicks to count or track items, call or alert someone, start or stop something, order services, or even provide feedback. For example, you can click the button to unlock or start a car, open your garage door, call a cab, call your spouse or a customer service representative, track the use of common household chores, medications, or products, or remotely control your home appliances. The button can be used as a remote control for Netflix, a switch for your Philips Hue light bulb, a check-in/check-out device for Airbnb guests, or a way to order your favorite pizza for delivery. You can integrate it with third-party APIs like Twitter, Facebook, Twilio, Slack, or even your own company's applications. Connect it to things we haven't even thought of yet.

AWS IoT Core

AWS IoT Core is a managed cloud service that lets connected devices easily and securely interact with cloud applications and other devices. AWS IoT Core can support billions of devices and trillions of messages, and can process and route those messages to AWS endpoints and to other devices reliably and securely. With AWS IoT Core, your applications can keep track of and communicate with all your devices, all the time, even when they aren't connected. AWS IoT Core makes it easy to use AWS services like AWS Lambda, Amazon Kinesis, Amazon S3, Amazon SageMaker, Amazon DynamoDB, Amazon CloudWatch, AWS CloudTrail, and Amazon QuickSight to build Internet of Things (IoT) applications that gather, process, analyze, and act on data generated by connected devices, without having to manage any infrastructure.

AWS IoT Device Defender

AWS IoT Device Defender is a fully managed service that helps you secure your fleet of IoT devices. AWS IoT Device Defender continuously audits your IoT configurations to make sure that they aren't deviating from security best practices. A configuration is a set of technical controls you set to help keep information secure when devices are communicating with each other and the cloud. AWS IoT Device Defender makes it easy to maintain and
enforce IoT configurations, such as ensuring device identity, authenticating and authorizing devices, and encrypting device data. AWS IoT Device Defender continuously audits the IoT configurations on your devices against a set of predefined security best practices. AWS IoT Device Defender sends an alert if there are any gaps in your IoT configuration that might create a security risk, such as identity certificates being shared across multiple devices, or a device with a revoked identity certificate trying to connect to AWS IoT Core. AWS IoT Device Defender also lets you continuously monitor security metrics from devices and AWS IoT Core for deviations from what you have defined as appropriate behavior for each device. If something doesn't look right, AWS IoT Device Defender sends out an alert so you can take action to remediate the issue. For example, spikes in outbound traffic might indicate that a device is participating in a DDoS attack. AWS IoT Greengrass and FreeRTOS automatically integrate with AWS IoT Device Defender to provide security metrics from the devices for evaluation. AWS IoT Device Defender can send alerts to the AWS IoT console, Amazon CloudWatch, and Amazon SNS. If you determine that you need to take an action based on an alert, you can use AWS IoT Device Management to take mitigating actions, such as pushing security fixes.

AWS IoT Device Management

As many IoT deployments consist of hundreds of thousands to millions of devices, it is essential to track, monitor, and manage connected device fleets. You need to ensure your IoT devices work properly and securely after they have been deployed. You also need to secure access to your devices, monitor health, detect and remotely troubleshoot problems, and manage software and firmware updates. AWS IoT Device Management makes it easy to securely onboard, organize, monitor, and remotely manage IoT devices at scale. With AWS IoT Device Management, you can
register your connected devices individually or in bulk, and easily manage permissions so that devices remain secure. You can also organize your devices, monitor and troubleshoot device functionality, query the state of any IoT device in your fleet, and send firmware updates over the air (OTA). AWS IoT Device Management is agnostic to device type and OS, so you can manage devices from constrained microcontrollers to connected cars, all with the same service. AWS IoT Device Management allows you to scale your fleets and reduce the cost and effort of managing large and diverse IoT device deployments.

AWS IoT Events

AWS IoT Events is a fully managed IoT service that makes it easy to detect and respond to events from IoT sensors and applications. Events are patterns of data identifying more complicated circumstances than expected, such as changes in equipment when a belt is stuck, or connected motion detectors using movement signals to activate lights and security cameras. Before AWS IoT Events, to detect events you had to build costly custom applications to collect data, apply decision logic to detect an event, and then trigger another application to react to the event. Using AWS IoT Events, it's simple to detect events across thousands of IoT sensors sending different telemetry data (such as temperature from a freezer, humidity from respiratory equipment, and belt speed on a motor) and hundreds of equipment management applications. You simply select the relevant data sources to ingest, define the logic for each event using simple 'if-then-else' statements, and select the alert or custom action to trigger when an event occurs. AWS IoT Events continuously monitors data from multiple IoT sensors and applications, and it integrates with other services, such as AWS IoT Core and AWS IoT Analytics, to enable early detection and unique insights into events. AWS IoT Events automatically triggers alerts and actions in response to events based on the logic you define. This helps resolve issues quickly, reduce
maintenance costs, and increase operational efficiency.

AWS IoT Greengrass

AWS IoT Greengrass seamlessly extends AWS to devices so they can act locally on the data they generate, while still using the cloud for management, analytics, and durable storage. With AWS IoT Greengrass, connected devices can run AWS Lambda functions, execute predictions based on machine learning models, keep device data in sync, and communicate with other devices securely, even when not connected to the internet. With AWS IoT Greengrass, you can use familiar languages and programming models to create and test your device software in the cloud, and then deploy it to your devices. AWS IoT Greengrass can be programmed to filter device data and only transmit necessary information back to the cloud. You can also connect to third-party applications, on-premises software, and AWS services out of the box with AWS IoT Greengrass Connectors. Connectors also jumpstart device onboarding with prebuilt protocol adapter integrations, and allow you to streamline authentication via integration with AWS Secrets Manager.

AWS IoT SiteWise

AWS IoT SiteWise is a managed service that makes it easy to collect, store, organize, and monitor data from industrial equipment at scale to help you make better, data-driven decisions. You can use AWS IoT SiteWise to monitor operations across facilities, quickly compute common industrial performance metrics, and create applications that analyze industrial equipment data to prevent costly equipment issues and reduce gaps in production. This allows you to collect data consistently across devices, identify issues with remote monitoring more quickly, and improve multi-site processes with centralized data. Today, getting performance metrics from industrial equipment is challenging because data is often locked into proprietary on-premises data stores and typically requires specialized expertise to retrieve and place in a format that is useful for
analysis. AWS IoT SiteWise simplifies this process by providing software running on a gateway that resides in your facilities and automates the process of collecting and organizing industrial equipment data. This gateway securely connects to your on-premises data servers, collects data, and sends the data to the AWS Cloud. AWS IoT SiteWise also provides interfaces for collecting data from modern industrial applications through MQTT messages or APIs. You can use AWS IoT SiteWise to model your physical assets, processes, and facilities, quickly compute common industrial performance metrics, and create fully managed web applications to help analyze industrial equipment data, reduce costs, and make faster decisions. With AWS IoT SiteWise, you can focus on understanding and optimizing your operations, rather than building costly in-house data collection and management applications.

AWS IoT Things Graph

AWS IoT Things Graph is a service that makes it easy to visually connect different devices and web services to build IoT applications. IoT applications are being built today using a variety of devices and web services to automate tasks for a wide range of use cases, such as smart homes, industrial automation, and energy management. Because there aren't any widely adopted standards, it's difficult today for developers to get devices from multiple manufacturers to connect to each other, as well as with web services. This forces developers to write lots of code to wire together all of the devices and web services they need for their IoT application. AWS IoT Things Graph provides a visual drag-and-drop interface for connecting and coordinating devices and web services, so you can build IoT applications quickly. For example, in a commercial agriculture application, you can define interactions between humidity, temperature, and sprinkler sensors with weather data services in the cloud to automate watering. You represent devices and services using prebuilt, reusable components, called models, that hide low-level
details, such as protocols and interfaces, and are easy to integrate to create sophisticated workflows. You can get started with AWS IoT Things Graph by using these prebuilt models for popular device types, such as switches and programmable logic controllers (PLCs), or create your own custom model using a GraphQL-based schema modeling language, and deploy your IoT application to AWS IoT Greengrass-enabled devices, such as cameras, cable set-top boxes, or robotic arms, in just a few clicks. AWS IoT Greengrass is software that provides local compute and secure cloud connectivity, so devices can respond quickly to local events even without internet connectivity, and it runs on a huge range of devices, from a Raspberry Pi to a server-level appliance. AWS IoT Things Graph applications run on AWS IoT Greengrass-enabled devices.

AWS Partner Device Catalog

The AWS Partner Device Catalog helps you find devices and hardware to help you explore, build, and go to market with your IoT solutions. Search for and find hardware that works with AWS, including development kits and embedded systems to build new devices, as well as off-the-shelf devices such as gateways, edge servers, sensors, and cameras for immediate IoT project integration. The choice of AWS-enabled hardware from our curated catalog of devices from APN partners can help make the rollout of your IoT projects easier. All devices listed in the AWS Partner Device Catalog are also available for purchase from our partners to get you started quickly.

FreeRTOS

FreeRTOS is an operating system for microcontrollers that makes small, low-power edge devices easy to program, deploy, secure, connect, and manage. FreeRTOS extends the FreeRTOS kernel, a popular open source operating system for microcontrollers, with software libraries that make it easy to securely connect your small, low-power devices to AWS cloud services like AWS IoT Core, or to more powerful edge devices running AWS IoT Greengrass. A microcontroller (MCU) is a
single chip containing a simple processor, and can be found in many devices, including appliances, sensors, fitness trackers, industrial automation, and automobiles. Many of these small devices could benefit from connecting to the cloud or locally to other devices. For example, smart electricity meters need to connect to the cloud to report on usage, and building security systems need to communicate locally so that a door will unlock when you badge in. Microcontrollers have limited compute power and memory capacity, and typically perform simple, functional tasks. Microcontrollers frequently run operating systems that do not have built-in functionality to connect to local networks or the cloud, making IoT applications a challenge. FreeRTOS helps solve this problem by providing both the core operating system (to run the edge device) as well as software libraries that make it easy to securely connect to the cloud (or other edge devices), so you can collect data from them for IoT applications and take action.

Machine Learning

Topics:
• Amazon Augmented AI
• Amazon CodeGuru
• Amazon Comprehend
• Amazon DevOps Guru
• Amazon Elastic Inference
• Amazon Forecast
• Amazon Fraud Detector
• Amazon HealthLake
• Amazon Kendra
• Amazon Lex
• Amazon Lookout for Equipment
• Amazon Lookout for Metrics
• Amazon Lookout for Vision
• Amazon Monitron
• Amazon Personalize
• Amazon Polly
• Amazon Rekognition
• Amazon SageMaker
• Amazon SageMaker Ground Truth
• Amazon Textract
• Amazon Transcribe
• Amazon Translate
• Apache MXNet on AWS
• AWS Deep Learning AMIs
• AWS DeepComposer
• AWS DeepLens
• AWS DeepRacer
• AWS Inferentia
• TensorFlow on AWS

Amazon Augmented AI

Amazon Augmented AI (Amazon A2I) is a machine learning service which makes it easy to build the workflows
required for human review. Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers, whether it runs on AWS or not.

Amazon CodeGuru

Amazon CodeGuru is a developer tool that provides intelligent recommendations to improve code quality and identify an application's most expensive lines of code. Integrate CodeGuru into your existing software development workflow to automate code reviews during application development, continuously monitor an application's performance in production, and get recommendations and visual clues on how to improve code quality, improve application performance, and reduce overall cost. CodeGuru Reviewer uses machine learning and automated reasoning to identify critical issues, security vulnerabilities, and hard-to-find bugs during application development, and provides recommendations to improve code quality. CodeGuru Profiler helps developers find an application's most expensive lines of code by helping them understand the runtime behavior of their applications, identify and remove code inefficiencies, improve performance, and significantly decrease compute costs.

Amazon Comprehend

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. No machine learning experience is required. There is a treasure trove of potential sitting in your unstructured data. Customer emails, support tickets, product reviews, social media, even advertising copy represent insights into customer sentiment that can be put to work for your business. The question is how to get at it?
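One way to get at it can be sketched in a few lines of Python with boto3, the AWS SDK. The helper function, the sample review text, and the decision to defer the live call (it needs AWS credentials and network access) are illustrative assumptions, not part of the whitepaper:

```python
# Minimal sketch: asking Amazon Comprehend for the sentiment of a text.
# The review text is invented; the live call is kept inside a function
# because it requires configured AWS credentials.

def build_sentiment_request(text, language_code="en"):
    """Build the parameter dict for Comprehend's DetectSentiment operation."""
    return {"Text": text, "LanguageCode": language_code}

def detect_sentiment(text):
    import boto3  # imported here so the sketch reads without boto3 installed
    comprehend = boto3.client("comprehend")
    response = comprehend.detect_sentiment(**build_sentiment_request(text))
    # response["Sentiment"] is POSITIVE, NEGATIVE, NEUTRAL, or MIXED,
    # with per-label confidence scores in response["SentimentScore"].
    return response["Sentiment"]

print(build_sentiment_request("The checkout flow was fast and painless."))
```

The same client also exposes operations such as `detect_entities` and `detect_dominant_language`, which follow the same request shape.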
As it turns out, machine learning is particularly good at accurately identifying specific items of interest inside vast swathes of text (such as finding company names in analyst reports), and can learn the sentiment hidden inside language (identifying negative reviews, or positive customer interactions with customer service agents) at almost limitless scale. Amazon Comprehend uses machine learning to help you uncover the insights and relationships in your unstructured data. The service identifies the language of the text; extracts key phrases, places, people, brands, or events; understands how positive or negative the text is; analyzes text using tokenization and parts of speech; and automatically organizes a collection of text files by topic. You can also use AutoML capabilities in Amazon Comprehend to build a custom set of entities or text classification models that are tailored uniquely to your organization's needs. For extracting complex medical information from unstructured text, you can use Amazon Comprehend Medical. The service can identify medical information, such as medical conditions, medications, dosages, strengths, and frequencies, from a variety of sources, like doctors' notes, clinical trial reports, and patient health records. Amazon Comprehend Medical also identifies the relationships among the extracted medication and test, treatment, and procedure information for easier analysis. For example, the service identifies a particular dosage, strength, and frequency related to a specific medication from unstructured clinical notes.

Amazon DevOps Guru

Amazon DevOps Guru is a machine learning (ML) powered service that makes it easy to improve an application's operational performance and availability. DevOps Guru detects behaviors that deviate from normal operating patterns, so you can identify operational issues long before they impact your customers. DevOps Guru uses machine learning models informed by years of
Amazon.com and AWS operational excellence to identify anomalous application behavior (e.g., increased latency, error rates, or resource constraints) and surface critical issues that could cause potential outages or service disruptions. When DevOps Guru identifies a critical issue, it automatically sends an alert and provides a summary of related anomalies, the likely root cause, and context about when and where the issue occurred. When possible, DevOps Guru also provides recommendations on how to remediate the issue. DevOps Guru automatically ingests operational data from your AWS applications and provides a single dashboard to visualize issues in your operational data. You can get started with DevOps Guru by selecting coverage from your CloudFormation stacks or your AWS account to improve application availability and reliability, with no manual setup or machine learning expertise.

Amazon Elastic Inference

Amazon Elastic Inference allows you to attach low-cost, GPU-powered acceleration to Amazon EC2 and Amazon SageMaker instances to reduce the cost of running deep learning inference by up to 75%. Amazon Elastic Inference supports TensorFlow, Apache MXNet, PyTorch, and ONNX models. In most deep learning applications, making predictions using a trained model (a process called inference) can drive as much as 90% of the compute costs of the application, due to two factors. First, standalone GPU instances are designed for model training and are typically oversized for inference. While training jobs batch process hundreds of data samples in parallel, most inference happens on a single input in real time that consumes only a small amount of GPU compute. Even at peak load, a GPU's compute capacity may not be fully utilized, which is wasteful and costly. Second, different models need different amounts of GPU, CPU, and memory resources. Selecting a GPU instance type that is big enough to satisfy the requirements of the least-used resource often results in underutilization of the other resources and high costs.
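The cost argument above can be made concrete with some back-of-the-envelope arithmetic. All hourly prices below are hypothetical placeholders chosen for illustration, not actual AWS rates:

```python
# Hypothetical monthly cost comparison for serving inference.
HOURS_PER_MONTH = 730

gpu_instance_per_hour = 3.06   # hypothetical standalone GPU instance (training-sized)
cpu_instance_per_hour = 0.60   # hypothetical CPU instance sized for the app's CPU/memory
accelerator_per_hour = 0.17    # hypothetical attached inference accelerator

standalone = gpu_instance_per_hour * HOURS_PER_MONTH
attached = (cpu_instance_per_hour + accelerator_per_hour) * HOURS_PER_MONTH
savings = 1 - attached / standalone

print(f"standalone GPU:    ${standalone:,.2f}/month")
print(f"CPU + accelerator: ${attached:,.2f}/month")
print(f"savings:           {savings:.0%}")
```

With these made-up numbers, the right-sized CPU-plus-accelerator combination comes out roughly 75% cheaper than the oversized standalone GPU, which is the shape of saving the service targets; actual figures depend on the instance types and accelerator sizes you choose.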
Amazon Elastic Inference solves these problems by allowing you to attach just the right amount of GPU-powered inference acceleration to any EC2 or SageMaker instance type, with no code changes. With Amazon Elastic Inference, you can now choose the instance type that is best suited to the overall CPU and memory needs of your application, and then separately configure the amount of inference acceleration that you need, to use resources efficiently and to reduce the cost of running inference.

Amazon Forecast

Amazon Forecast is a fully managed service that uses machine learning to deliver highly accurate forecasts. Companies today use everything from simple spreadsheets to complex financial planning software to attempt to accurately forecast future business outcomes, such as product demand, resource needs, or financial performance. These tools build forecasts by looking at a historical series of data, which is called time series data. For example, such tools may try to predict the future sales of a raincoat by looking only at its previous sales data, with the underlying assumption that the future is determined by the past. This approach can struggle to produce accurate forecasts for large sets of data that have irregular trends. Also, it fails to easily combine data series that change over time (such as price, discounts, web traffic, and number of employees) with relevant independent variables, like product features and store locations. Based on the same technology used at Amazon.com, Amazon Forecast uses machine learning to combine time series data with additional variables to build forecasts. Amazon Forecast requires no machine learning experience to get started. You only need to provide historical data, plus any additional data that you believe may impact your forecasts. For example, the demand for a particular color of a shirt may change with the seasons and store location. This complex relationship is hard to determine on its own, but machine learning is ideally suited to recognize it. Once you
provide your data, Amazon Forecast will automatically examine it, identify what is meaningful, and produce a forecasting model capable of making predictions that are up to 50% more accurate than looking at time series data alone. Amazon Forecast is a fully managed service, so there are no servers to provision and no machine learning models to build, train, or deploy. You pay only for what you use, and there are no minimum fees and no upfront commitments.

Amazon Fraud Detector

Amazon Fraud Detector is a fully managed service that uses machine learning (ML) and more than 20 years of fraud detection expertise from Amazon to identify potentially fraudulent activity, so customers can catch more online fraud faster. Amazon Fraud Detector automates the time-consuming and expensive steps to build, train, and deploy an ML model for fraud detection, making it easier for customers to leverage the technology. Amazon Fraud Detector customizes each model it creates to a customer's own dataset, making the accuracy of models higher than current one-size-fits-all ML solutions. And because you pay only for what you use, you avoid large upfront expenses.

Amazon HealthLake

Amazon HealthLake is a HIPAA-eligible service that healthcare providers, health insurance companies, and pharmaceutical companies can use to store, transform, query, and analyze large-scale health data. Health data is frequently incomplete and inconsistent. It's also often unstructured, with information contained in clinical notes, lab reports, insurance claims, medical images, recorded conversations, and time series data (for example, heart ECG or brain EEG traces). Healthcare providers can use HealthLake to store, transform, query, and analyze data in the AWS Cloud. Using the HealthLake integrated medical natural language processing (NLP) capabilities, you can analyze unstructured clinical text from diverse sources. HealthLake transforms unstructured data using natural language processing
models and provides powerful query and search capabilities. You can use HealthLake to organize, index, and structure patient information in a secure, compliant, and auditable manner.

Amazon Kendra

Amazon Kendra is an intelligent search service powered by machine learning. Kendra reimagines enterprise search for your websites and applications, so your employees and customers can easily find the content they are looking for, even when it's scattered across multiple locations and content repositories within your organization. Using Amazon Kendra, you can stop searching through troves of unstructured data and discover the right answers to your questions when you need them. Amazon Kendra is a fully managed service, so there are no servers to provision and no machine learning models to build, train, or deploy.

Amazon Lex

Amazon Lex is a service for building conversational interfaces into any application, using voice and text. Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging user experiences and lifelike conversational interactions. With Amazon Lex, the same deep learning technologies that power Amazon Alexa are now available to any developer, enabling you to quickly and easily build sophisticated, natural language conversational bots ("chatbots"). Speech recognition and natural language understanding are some of the most challenging problems to solve in computer science, requiring sophisticated deep learning algorithms to be trained on massive amounts of data and infrastructure. Amazon Lex democratizes these deep learning technologies by putting the power of Alexa within reach of all developers. Harnessing these technologies, Amazon Lex enables you to define entirely new categories of products made possible through conversational interfaces.
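A single text turn with a Lex bot can be sketched with boto3's Lex V2 runtime client. The bot IDs, session ID, and utterance below are placeholder values, and the live call is deferred to a function because it needs a deployed bot plus AWS credentials:

```python
# Minimal sketch: one conversational turn against an Amazon Lex V2 bot.
# All IDs and the user text are hypothetical placeholders.

def build_turn(bot_id, bot_alias_id, session_id, text, locale_id="en_US"):
    """Build the parameters for Lex V2's RecognizeText operation."""
    return {
        "botId": bot_id,
        "botAliasId": bot_alias_id,
        "localeId": locale_id,
        "sessionId": session_id,  # ties successive turns into one conversation
        "text": text,
    }

def send_turn(params):
    import boto3  # requires the AWS SDK and configured credentials
    lex = boto3.client("lexv2-runtime")
    response = lex.recognize_text(**params)
    # Lex returns the recognized intent and the bot's reply messages.
    return response.get("messages", [])

params = build_turn("BOT_ID", "ALIAS_ID", "user-123", "Book a hotel in Seattle")
print(params["text"])
```

Reusing the same `sessionId` across calls is what lets Lex carry slot values and dialog state from one turn to the next.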
Amazon Lookout for Equipment

Amazon Lookout for Equipment analyzes the data from the sensors on your equipment (e.g., pressure in a generator, flow rate of a compressor, revolutions per minute of fans) to automatically train a machine learning model based on just your data, for your equipment, with no ML expertise required. Lookout for Equipment uses your unique ML model to analyze incoming sensor data in real time and accurately identify early warning signs that could lead to machine failures. This means you can detect equipment abnormalities with speed and precision, quickly diagnose issues, take action to reduce expensive downtime, and reduce false alerts.

Amazon Lookout for Metrics

Amazon Lookout for Metrics uses machine learning (ML) to automatically detect and diagnose anomalies (i.e., outliers from the norm) in business and operational data, such as a sudden dip in sales revenue or customer acquisition rates. In a couple of clicks, you can connect Amazon Lookout for Metrics to popular data stores, like Amazon S3, Amazon Redshift, and Amazon Relational Database Service (RDS), as well as third-party SaaS applications, such as Salesforce, ServiceNow, Zendesk, and Marketo, and start monitoring metrics that are important to your business. Amazon Lookout for Metrics automatically inspects and prepares the data from these sources to detect anomalies with greater speed and accuracy than traditional methods used for anomaly detection. You can also provide feedback on detected anomalies to tune the results and improve accuracy over time. Amazon Lookout for Metrics makes it easy to diagnose detected anomalies by grouping together anomalies that are related to the same event and sending an alert that includes a summary of the potential root cause. It also ranks anomalies in order of severity, so that you can prioritize your attention on what matters most to your business.

Amazon Lookout for Vision

Amazon Lookout for Vision is a machine learning (ML) service that spots
defects and anomalies in visual representations using computer vision (CV). With Amazon Lookout for Vision, manufacturing companies can increase quality and reduce operational costs by quickly identifying differences in images of objects at scale. For example, Amazon Lookout for Vision can be used to identify missing components in products, damage to vehicles or structures, irregularities in production lines, minuscule defects in silicon wafers, and other similar problems. Amazon Lookout for Vision uses ML to see and understand images from any camera as a person would, but with an even higher degree of accuracy and at a much larger scale. Amazon Lookout for Vision allows customers to eliminate the need for costly and inconsistent manual inspection, while improving quality control, defect and damage assessment, and compliance. In minutes, you can begin using Amazon Lookout for Vision to automate inspection of images and objects, with no machine learning expertise required.

Amazon Monitron

Amazon Monitron is an end-to-end system that uses machine learning (ML) to detect abnormal behavior in industrial machinery, enabling you to implement predictive maintenance and reduce unplanned downtime. Installing sensors and the necessary infrastructure for data connectivity, storage, analytics, and alerting are foundational elements for enabling predictive maintenance. However, in order to make it work, companies have historically needed skilled technicians and data scientists to piece together a complex solution from scratch. This included identifying and procuring the right type of sensors for their use cases and connecting them together with an IoT gateway (a device that aggregates and transmits data). As a result, few companies have been able to successfully implement predictive maintenance. Amazon Monitron includes sensors to capture vibration and temperature data from equipment, a gateway device to securely transfer data to AWS, the Amazon Monitron service that analyzes the data for abnormal machine
patterns using machine learning and a companion mobile app to set up the devices 43Overview of Amazon Web Services AWS Whitepaper Amazon Personalize and receive reports on operating behavior and alerts to potential failures in your machinery You can start monitoring equipment health in minutes without any development work or ML experience required and enable predictive maintenance with the same technology used to monitor equipment in Amazon Fulfillment Centers Amazon Personalize Amazon Personalize is a machine learning service that makes it easy for developers to create individualized recommendations for customers using their applications Machine learning is being increasingly used to improve customer engagement by powering personalized product and content recommendations tailored search results and targeted marketing promotions However developing the machinelearning capabilities necessary to produce these sophisticated recommendation systems has been beyond the reach of most organizations today due to the complexity of developing machine learning functionality Amazon Personalize allows developers with no prior machine learning experience to easily build sophisticated personalization capabilities into their applications using machine learning technology perfected from years of use on Amazoncom With Amazon Personalize you provide an activity stream from your application – page views signups purchases and so forth – as well as an inventory of the items you want to recommend such as articles products videos or music You can also choose to provide Amazon Personalize with additional demographic information from your users such as age or geographic location Amazon Personalize will process and examine the data identify what is meaningful select the right algorithms and train and optimize a personalization model that is customized for your data All data analyzed by Amazon Personalize is kept private and secure and only used for your customized recommendations You can start 
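As a sketch of what that API interaction looks like, the helper below assembles the parameters for a GetRecommendations call against a deployed Personalize campaign. The campaign ARN and user ID are hypothetical placeholders, and the live boto3 call (which needs AWS credentials) is shown only in comments:

```python
def build_recommendation_request(campaign_arn, user_id, num_results=10):
    """Assemble keyword arguments for PersonalizeRuntime.GetRecommendations."""
    return {
        "campaignArn": campaign_arn,
        "userId": user_id,
        "numResults": num_results,
    }

# Placeholder ARN and user ID; a real call would be:
#   import boto3
#   client = boto3.client("personalize-runtime")
#   response = client.get_recommendations(**params)
params = build_recommendation_request(
    "arn:aws:personalize:us-east-1:111122223333:campaign/demo-campaign", "user-42"
)
```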
You can start serving your personalized predictions via a simple API call from inside the virtual private cloud that the service maintains. You pay only for what you use; there are no minimum fees and no upfront commitments. Amazon Personalize is like having your own Amazon.com machine learning personalization team at your disposal 24 hours a day.

Amazon Polly

Amazon Polly is a service that turns text into lifelike speech. Polly lets you create applications that talk, enabling you to build entirely new categories of speech-enabled products. Polly is an Amazon artificial intelligence (AI) service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice. Polly includes a wide selection of lifelike voices spread across dozens of languages, so you can select the ideal voice and build speech-enabled applications that work in many different countries. Amazon Polly delivers the consistently fast response times required to support real-time, interactive dialog. You can cache and save Polly's speech audio to replay offline or redistribute. And Polly is easy to use: you simply send the text you want converted into speech to the Polly API, and Polly immediately returns the audio stream to your application, so your application can play it directly or store it in a standard audio file format, such as MP3. With Polly, you pay only for the number of characters you convert to speech, and you can save and replay Polly's generated speech. Polly's low cost per character converted, and its lack of restrictions on storage and reuse of voice output, make it a cost-effective way to enable text-to-speech everywhere.

Amazon Rekognition

Amazon Rekognition makes it easy to add image and video analysis to your applications, using proven, highly scalable deep learning technology that requires no machine learning expertise to use. With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos.
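As an illustrative sketch of that object-and-scene detection, the helper below builds the request for Rekognition's DetectLabels API for an image stored in S3. The bucket and object names are made-up placeholders, and the boto3 call is shown only in comments:

```python
def build_detect_labels_request(bucket, key, max_labels=10, min_confidence=80.0):
    """Parameters for Rekognition's DetectLabels on an image stored in S3."""
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MaxLabels": max_labels,
        "MinConfidence": min_confidence,
    }

# Placeholder bucket/key; a real call (requires AWS credentials) would be:
#   import boto3
#   boto3.client("rekognition").detect_labels(**req)
req = build_detect_labels_request("my-bucket", "photos/assembly-line.jpg")
```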
You can also detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases. With Amazon Rekognition Custom Labels, you can identify the objects and scenes in images that are specific to your business needs. For example, you can build a model to classify specific machine parts on your assembly line, or to detect unhealthy plants. Amazon Rekognition Custom Labels takes care of the heavy lifting of model development for you, so no machine learning experience is required. You simply need to supply images of objects or scenes you want to identify, and the service handles the rest.

Amazon SageMaker

Amazon SageMaker is a fully managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. SageMaker removes all the barriers that typically slow down developers who want to use machine learning. Machine learning often feels a lot harder than it should be to most developers, because the process to build and train models, and then deploy them into production, is too complicated and too slow. First, you need to collect and prepare your training data to discover which elements of your data set are important. Then, you need to select which algorithm and framework you'll use. After deciding on your approach, you need to teach the model how to make predictions by training, which requires a lot of compute. Then, you need to tune the model so it delivers the best possible predictions, which is often a tedious and manual effort. After you've developed a fully trained model, you need to integrate the model with your application and deploy this application on infrastructure that will scale. All of this takes a lot of specialized expertise, access to large amounts of compute and storage, and a lot of time to experiment and optimize every part of the process. In the end, it's not a surprise that the whole thing feels out of reach for most developers. SageMaker removes the complexity that holds back developer success with each of these steps. SageMaker includes modules that can be used together or independently to build, train, and deploy your machine learning models.

Amazon SageMaker Ground Truth

Amazon SageMaker Ground Truth helps you build highly accurate training datasets for machine learning quickly. SageMaker Ground Truth offers easy access to public and private human labelers, and provides them with built-in workflows and interfaces for common labeling tasks. Additionally, SageMaker Ground Truth can lower your labeling costs by up to 70% using automatic labeling, which works by training Ground Truth from data labeled by humans so that the service learns to label data independently. Successful machine learning models are built on the shoulders of large volumes of high-quality training data. But the process to create the training data necessary to build these models is often expensive, complicated, and time-consuming. The majority of models created today require a human to manually label data in a way that allows the model to learn how to make correct decisions. For example, building a computer vision system that is reliable enough to identify objects such as traffic lights, stop signs, and pedestrians requires thousands of hours of video recordings that consist of hundreds of millions of video frames. Each one of these frames needs all of the important elements, like the road, other cars, and signage, to be labeled by a human before any work can begin on the model you want to develop. Amazon SageMaker Ground Truth significantly reduces the time and effort required to create datasets for training, which reduces costs. These savings are achieved by using machine learning to automatically label data. The model is able to get progressively better over time by continuously learning from labels created by human labelers.
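The confidence-based routing that drives this automatic labeling can be sketched in a few lines of plain Python. The 0.95 threshold and the (item, label, confidence) tuples are illustrative assumptions, not the service's internal format:

```python
def route_items(predictions, threshold=0.95):
    """Send high-confidence predictions to auto-labeling, the rest to humans."""
    auto, human = [], []
    for item, label, confidence in predictions:
        if confidence >= threshold:
            auto.append((item, label))   # model labels this item itself
        else:
            human.append((item, label))  # item goes to a human labeler
    return auto, human

auto, human = route_items([("frame-1", "stop sign", 0.99),
                           ("frame-2", "pedestrian", 0.60)])
# frame-1 is labeled automatically; frame-2 is routed to a human labeler
```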
When the labeling model has high confidence in its results, based on what it has learned so far, it automatically applies labels to the raw data. When the labeling model has lower confidence in its results, it passes the data to humans to do the labeling. The human-generated labels are provided back to the labeling model for it to learn from and improve. Over time, SageMaker Ground Truth can label more and more data automatically, and substantially speed up the creation of training datasets.

Amazon Textract

Amazon Textract is a service that automatically extracts text and data from scanned documents. Amazon Textract goes beyond simple optical character recognition (OCR) to also identify the contents of fields in forms and information stored in tables. Many companies today extract data from documents and forms through manual data entry, which is slow and expensive, or through simple OCR software that is difficult to customize. Rules and workflows for each document and form often need to be hard-coded and updated with each change to the form, or when dealing with multiple forms. If the form deviates from the rules, the output is often scrambled and unusable. Amazon Textract overcomes these challenges by using machine learning to instantly "read" virtually any type of document to accurately extract text and data, without the need for any manual effort or custom code. With Textract, you can quickly automate document workflows, enabling you to process millions of document pages in hours. Once the information is captured, you can take action on it within your business applications to initiate next steps for a loan application or medical claims processing. Additionally, you can create smart search indexes, build automated approval workflows, and better maintain compliance with document archival rules by flagging data that may require redaction.
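A minimal sketch of that form-and-table extraction, assuming a document already uploaded to S3 (the bucket and key are placeholders): the helper builds the request for Textract's AnalyzeDocument API, and the boto3 call appears only in comments:

```python
def build_analyze_document_request(bucket, key, features=("FORMS", "TABLES")):
    """Parameters for Textract's AnalyzeDocument on a document stored in S3."""
    return {
        "Document": {"S3Object": {"Bucket": bucket, "Name": key}},
        "FeatureTypes": list(features),  # which structures to extract
    }

# Placeholder bucket/key; a real call (requires AWS credentials) would be:
#   import boto3
#   boto3.client("textract").analyze_document(**req)
req = build_analyze_document_request("my-bucket", "claims/form-17.pdf")
```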
Amazon Transcribe

Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech. You can also send a live audio stream to Amazon Transcribe and receive a stream of transcripts in real time. Amazon Transcribe can be used for many common applications, including the transcription of customer service calls and generating subtitles on audio and video content. The service can transcribe audio files stored in common formats, like WAV and MP3, with time stamps for every word, so that you can easily locate the audio in the original source by searching for the text. Amazon Transcribe is continually learning and improving to keep pace with the evolution of language.

Amazon Translate

Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Neural machine translation is a form of language translation automation that uses deep learning models to deliver more accurate and more natural-sounding translation than traditional statistical and rule-based translation algorithms. Amazon Translate allows you to localize content, such as websites and applications, for international users, and to easily translate large volumes of text efficiently.

Apache MXNet on AWS

Apache MXNet on AWS is a fast and scalable training and inference framework with an easy-to-use, concise API for machine learning. MXNet includes the Gluon interface, which allows developers of all skill levels to get started with deep learning on the cloud, on edge devices, and on mobile apps. In just a few lines of Gluon code, you can build linear regression, convolutional networks, and recurrent LSTMs for object detection, speech recognition, recommendation, and personalization. You can get started with MXNet on AWS with a fully managed experience using SageMaker, a platform to build, train, and deploy machine learning models at scale. Or, you can use the AWS Deep Learning AMIs to build custom environments and workflows with MXNet as well as other frameworks, including TensorFlow, PyTorch, Chainer, Keras, Caffe, Caffe2, and Microsoft Cognitive Toolkit.

AWS Deep Learning AMIs

The AWS Deep Learning AMIs provide machine learning practitioners and researchers with the infrastructure and tools to accelerate deep learning in the cloud, at any scale. You can quickly launch Amazon EC2 instances pre-installed with popular deep learning frameworks, such as Apache MXNet and Gluon, TensorFlow, Microsoft Cognitive Toolkit, Caffe, Caffe2, Theano, Torch, PyTorch, Chainer, and Keras, to train sophisticated, custom AI models, experiment with new algorithms, or learn new skills and techniques.

AWS DeepComposer

AWS DeepComposer is the world's first musical keyboard powered by machine learning, designed to enable developers of all skill levels to learn generative AI while creating original music. DeepComposer consists of a USB keyboard that connects to the developer's computer, and the DeepComposer service, accessed through the AWS Management Console. DeepComposer includes tutorials, sample code, and training data that can be used to start building generative models.

AWS DeepLens

AWS DeepLens helps put deep learning in the hands of developers, literally, with a fully programmable video camera, tutorials, code, and pre-trained models designed to expand deep learning skills.

AWS DeepRacer

AWS DeepRacer is a 1/18th scale race car that gives you an interesting and fun way to get started with reinforcement learning (RL). RL is an advanced machine learning (ML) technique that takes a very different approach to training models than other machine learning methods. Its superpower is that it learns very complex behaviors without requiring any labeled training data, and it can make short-term decisions while optimizing for a longer-term goal.
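That short-term/long-term trade-off can be illustrated with a single tabular Q-learning update, shown below as a plain-Python sketch. The action set, learning rate, and discount factor are illustrative assumptions; DeepRacer itself uses more sophisticated RL algorithms:

```python
ACTIONS = ("left", "straight", "right")  # illustrative steering actions

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: blend the immediate (short-term) reward
    with the discounted value of the best action in the next state."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q

q = q_update({}, "s0", "straight", 1.0, "s1")
# Q(s0, straight) = 0.0 + 0.1 * (1.0 + 0.9 * 0.0 - 0.0) = 0.1
```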
With AWS DeepRacer, you now have a way to get hands-on with RL, experiment, and learn through autonomous driving. You can get started with the virtual car and tracks in the cloud-based 3D racing simulator, and for a real-world experience, you can deploy your trained models onto AWS DeepRacer and race your friends, or take part in the global AWS DeepRacer League. Developers, the race is on.

AWS Inferentia

AWS Inferentia is a machine learning inference chip designed to deliver high performance at low cost. AWS Inferentia will support the TensorFlow, Apache MXNet, and PyTorch deep learning frameworks, as well as models that use the ONNX format. Making predictions using a trained machine learning model (a process called inference) can drive as much as 90% of the compute costs of the application. Using Amazon Elastic Inference, developers can reduce inference costs by up to 75% by attaching GPU-powered inference acceleration to Amazon EC2 and SageMaker instances. However, some inference workloads require an entire GPU or have extremely low latency requirements. Solving this challenge at low cost requires a dedicated inference chip. AWS Inferentia provides high-throughput, low-latency inference performance at an extremely low cost. Each chip provides hundreds of TOPS (tera operations per second) of inference throughput to allow complex models to make fast predictions. For even more performance, multiple AWS Inferentia chips can be used together to drive thousands of TOPS of throughput. AWS Inferentia will be available for use with SageMaker, Amazon EC2, and Amazon Elastic Inference.

TensorFlow on AWS

TensorFlow enables developers to quickly and easily get started with deep learning in the cloud. The framework has broad support in the industry, and has become a popular choice for deep learning research and application development, particularly in areas such as computer vision, natural language understanding, and speech translation. You can get started on AWS with a fully managed TensorFlow experience with SageMaker, a platform to build, train, and deploy machine learning models at scale. Or, you can use the AWS Deep Learning AMIs to build custom environments and workflows with TensorFlow and other popular frameworks, including Apache MXNet, PyTorch, Caffe, Caffe2, Chainer, Gluon, Keras, and Microsoft Cognitive Toolkit.

Management and Governance

Topics
•Amazon CloudWatch
•AWS Auto Scaling
•AWS Chatbot
•AWS Compute Optimizer
•AWS Control Tower
•AWS CloudFormation
•AWS CloudTrail
•AWS Config
•AWS Launch Wizard
•AWS Organizations
•AWS OpsWorks
•AWS Proton
•AWS Service Catalog
•AWS Systems Manager
•AWS Trusted Advisor
•AWS Personal Health Dashboard
•AWS Managed Services
•AWS Console Mobile Application
•AWS License Manager
•AWS Well-Architected Tool

Amazon CloudWatch

Amazon CloudWatch is a monitoring and management service built for developers, system operators, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, understand and respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. You can use CloudWatch to set high-resolution alarms, visualize logs and metrics side by side, take automated actions, troubleshoot issues, and discover insights to optimize your applications and ensure they are running smoothly.

AWS Auto Scaling

AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it's easy to set up application scaling for multiple resources across multiple services in minutes. The service provides a simple, powerful user interface that lets you build scaling plans for resources including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas. AWS Auto Scaling makes scaling simple, with recommendations that allow you to optimize performance, costs, or balance between them. If you're already using Amazon EC2 Auto Scaling to dynamically scale your Amazon EC2 instances, you can now combine it with AWS Auto Scaling to scale additional resources for other AWS services. With AWS Auto Scaling, your applications always have the right resources at the right time.

AWS Chatbot

AWS Chatbot is an interactive agent that makes it easy to monitor and interact with your AWS resources in your Slack channels and Amazon Chime chat rooms. With AWS Chatbot, you can receive alerts, run commands to return diagnostic information, invoke AWS Lambda functions, and create AWS Support cases. AWS Chatbot manages the integration between AWS services and your Slack channels or Amazon Chime chat rooms, helping you to get started with ChatOps fast. With just a few clicks, you can start receiving notifications and issuing commands in your chosen channels or chat rooms, so your team doesn't have to switch contexts to collaborate. AWS Chatbot makes it easier for your team to stay updated, collaborate, and respond faster to operational events, security findings, CI/CD workflows, budget alerts, and other alerts for applications running in your AWS accounts.

AWS Compute Optimizer

AWS Compute Optimizer recommends optimal AWS resources for your workloads to reduce costs and improve performance, using machine learning to analyze historical utilization metrics. Overprovisioning resources can lead to unnecessary infrastructure cost, and underprovisioning resources can lead to poor application performance. Compute Optimizer helps you choose optimal configurations for three types of AWS resources (Amazon EC2 instances, Amazon EBS volumes, and AWS Lambda functions), based on your utilization data. By applying the knowledge drawn from Amazon's own experience running diverse workloads in the cloud, Compute Optimizer identifies workload patterns and recommends optimal AWS resources. Compute Optimizer analyzes the configuration and resource utilization of your workload to identify dozens of defining characteristics, for example, whether a workload is CPU-intensive, whether it exhibits a daily pattern, or whether it accesses local storage frequently. The service processes these characteristics and identifies the hardware resources required by the workload. Compute Optimizer infers how the workload would have performed on various hardware platforms (e.g., Amazon EC2 instance types) or using different configurations (e.g., Amazon EBS volume IOPS settings and AWS Lambda function memory sizes) to offer recommendations. Compute Optimizer is available to you at no additional charge. To get started, you can opt in to the service in the AWS Compute Optimizer console.

AWS Control Tower

AWS Control Tower automates the setup of a baseline environment, or landing zone: a secure, well-architected, multi-account AWS environment. The configuration of the landing zone is based on best practices that have been established by working with thousands of enterprise customers to create a secure environment that makes it easier to govern AWS workloads, with rules for security, operations, and compliance. As enterprises migrate to AWS, they typically have a large number of applications and distributed teams. They often want to create multiple accounts to allow their teams to work independently, while still maintaining a consistent level of security and compliance. In addition, they use AWS management and security services, like AWS Organizations, AWS Service Catalog, and AWS Config, that provide very granular controls over their workloads. They want to maintain this control, but they also want a way to centrally govern and enforce the best use of AWS services across all the accounts in their environment. Control Tower automates the setup of their landing zone and configures AWS management and security services based on established best practices in a secure, compliant, multi-account environment. Distributed teams are able to provision new AWS accounts quickly, while central teams have the peace of mind of knowing that new accounts are aligned with centrally established, company-wide compliance policies. This gives you control over your environment without sacrificing the speed and agility AWS provides your development teams.

AWS CloudFormation

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use the AWS CloudFormation sample templates, or create your own templates, to describe your AWS resources and any associated dependencies or runtime parameters required to run your application. You don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work; CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software. You can also visualize your templates as diagrams and edit them using a drag-and-drop interface with the AWS CloudFormation Designer.

AWS CloudTrail

AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. With CloudTrail, you can get a history of AWS API calls for your account.
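For example, that API-call history can be queried through CloudTrail's LookupEvents API. The helper below builds a request filtered to one user's activity over the last day; the username is a placeholder, and the boto3 call appears only in comments:

```python
from datetime import datetime, timedelta

def build_lookup_events_request(username, hours=24):
    """Parameters for CloudTrail's LookupEvents, filtered by calling user."""
    now = datetime.utcnow()
    return {
        "LookupAttributes": [
            {"AttributeKey": "Username", "AttributeValue": username}
        ],
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
    }

# Placeholder username; a real call (requires AWS credentials) would be:
#   import boto3
#   boto3.client("cloudtrail").lookup_events(**req)
req = build_lookup_events_request("alice")
```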
This includes API calls made using the AWS Management Console, AWS SDKs, command-line tools, and higher-level AWS services (such as AWS CloudFormation). The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.

AWS Config

AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. The Config Rules feature enables you to create rules that automatically check the configuration of AWS resources recorded by AWS Config. With AWS Config, you can discover existing and deleted AWS resources, determine your overall compliance against rules, and dive into the configuration details of a resource at any point in time. These capabilities enable compliance auditing, security analysis, resource change tracking, and troubleshooting.

AWS Launch Wizard

AWS Launch Wizard offers a guided way of sizing, configuring, and deploying AWS resources for third-party applications, such as Microsoft SQL Server Always On and HANA-based SAP systems, without the need to manually identify and provision individual AWS resources. To start, you input your application requirements, including performance, number of nodes, and connectivity, on the service console. Launch Wizard then identifies the right AWS resources, such as EC2 instances and EBS volumes, to deploy and run your application. Launch Wizard provides an estimated cost of deployment, and lets you modify your resources to instantly view an updated cost assessment. Once you approve the AWS resources, Launch Wizard automatically provisions and configures the selected resources to create a fully functioning, production-ready application. AWS Launch Wizard also creates CloudFormation templates that can serve as a baseline to accelerate subsequent deployments. Launch Wizard is available to you at no additional charge; you pay only for the AWS resources that are provisioned for running your solution.

AWS Organizations

AWS Organizations helps you centrally manage and govern your environment as you grow and scale your AWS resources. Using AWS Organizations, you can programmatically create new AWS accounts and allocate resources, group accounts to organize your workflows, apply policies to accounts or groups for governance, and simplify billing by using a single payment method for all of your accounts. In addition, AWS Organizations is integrated with other AWS services, so you can define central configurations, security mechanisms, audit requirements, and resource sharing across accounts in your organization. AWS Organizations is available to all AWS customers at no additional charge.

AWS OpsWorks

AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. OpsWorks has three offerings: AWS OpsWorks for Chef Automate, AWS OpsWorks for Puppet Enterprise, and AWS OpsWorks Stacks.

AWS Proton

AWS Proton is the first fully managed delivery service for container and serverless applications. Platform engineering teams can use AWS Proton to connect and coordinate all the different tools needed for infrastructure provisioning, code deployments, monitoring, and updates. Maintaining hundreds, or sometimes thousands, of microservices with constantly changing infrastructure resources and continuous integration/continuous delivery (CI/CD) configurations is a nearly impossible task for even the most capable platform teams. AWS Proton solves this by giving platform teams the tools they need to manage this complexity and enforce consistent standards, while making it easy for developers to deploy their code using containers and serverless technologies.

AWS Service Catalog

AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.

AWS Systems Manager

AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services, and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources. Systems Manager simplifies resource and application management, shortens the time to detect and resolve operational problems, and makes it easy to operate and manage your infrastructure securely at scale. AWS Systems Manager contains the following tools:

•Resource Groups: Lets you create a logical group of resources associated with a particular workload, such as different layers of an application stack, or production versus development environments. For example, you can group different layers of an application, such as the front-end web layer and the back-end data layer. Resource groups can be created, updated, or removed programmatically through the API.

•Insights Dashboard: Displays operational data that AWS Systems Manager automatically aggregates for each resource group. Systems Manager eliminates the need for you to navigate across multiple AWS consoles to view your operational data. With Systems Manager, you can view API call logs from AWS CloudTrail, resource configuration changes from AWS Config, software inventory, and patch compliance status by resource group. You can also easily integrate your Amazon CloudWatch dashboards, AWS Trusted Advisor notifications, and AWS Personal Health Dashboard performance and availability alerts into your Systems Manager dashboard. Systems Manager centralizes all relevant operational data, so you can have a clear view of your infrastructure compliance and performance.

•Run Command: Provides a simple way of automating common administrative tasks, like remotely executing shell scripts or PowerShell commands, installing software updates, or making changes to the configuration of OS software on EC2 instances and servers in your on-premises data center.

•State Manager: Helps you define and maintain consistent OS configurations, such as firewall settings and anti-malware definitions, to comply with your policies. You can monitor the configuration of a large set of instances, specify a configuration policy for the instances, and automatically apply updates or configuration changes.

•Inventory: Helps you collect and query configuration and inventory information about your instances and the software installed on them. You can gather details about your instances, such as installed applications, DHCP settings, agent detail, and custom items. You can run queries to track and audit your system configurations.

•Maintenance Window: Lets you define a recurring window of time to run administrative and maintenance tasks across your instances. This ensures that installing patches and updates, or making other configuration changes, does not disrupt business-critical operations, which helps improve your application availability.

•Patch Manager: Helps you select and deploy operating system and software patches automatically across large groups of instances. You can define a maintenance window so that patches are applied only during set times that fit your needs. These capabilities help ensure that your software is always up to date and meets your compliance policies.

•Automation: Simplifies common maintenance and deployment tasks, such as updating Amazon Machine Images (AMIs). Use the Automation feature to apply patches, update drivers and agents, or bake applications into your AMI using a streamlined, repeatable, and auditable process.

•Parameter Store: Provides an encrypted location to store important administrative information, such as passwords and database strings. The Parameter Store integrates with AWS KMS to make it easy to encrypt the information you keep in it.

•Distributor: Helps you securely distribute and install software packages, such as software agents. Systems Manager Distributor allows you to centrally store and systematically distribute software packages while you maintain control over versioning. You can use Distributor to create and distribute software packages, and then install them using Systems Manager Run Command and State Manager. Distributor can also use AWS Identity and Access Management (IAM) policies to control who can create or update packages in your account. You can use the existing IAM policy support for Systems Manager Run Command and State Manager to define who can install packages on your hosts.

•Session Manager: Provides a browser-based interactive shell and CLI for managing Windows and Linux EC2 instances, without the need to open inbound ports, manage SSH keys, or use bastion hosts. Administrators can grant and revoke access to instances through a central location by using AWS Identity and Access Management (IAM) policies. This allows you to control which users can access each instance, including the option to provide non-root access to specified users. Once access is provided, you can audit which user accessed an instance, and log each command to Amazon S3 or Amazon CloudWatch Logs using AWS CloudTrail.
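As a sketch of the Parameter Store workflow described above (the parameter name and value are placeholders), the helper builds a PutParameter request for a KMS-encrypted value; the matching boto3 put/get calls appear only in comments:

```python
def build_put_parameter_request(name, value, secure=True):
    """Parameters for Systems Manager's PutParameter; SecureString values
    are encrypted with AWS KMS."""
    return {
        "Name": name,
        "Value": value,
        "Type": "SecureString" if secure else "String",
        "Overwrite": True,
    }

# Placeholder name/value; real calls (requiring AWS credentials) would be:
#   import boto3
#   ssm = boto3.client("ssm")
#   ssm.put_parameter(**req)
#   ssm.get_parameter(Name="/prod/db/password", WithDecryption=True)
req = build_put_parameter_request("/prod/db/password", "example-secret")
```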
using AWS CloudTrail.

AWS Trusted Advisor

AWS Trusted Advisor is an online resource that helps you reduce cost, increase performance, and improve security by optimizing your AWS environment. Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices.

AWS Personal Health Dashboard

AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that might affect you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources. The dashboard displays relevant and timely information to help you manage events in progress, and provides proactive notification to help you plan for scheduled activities. With Personal Health Dashboard, alerts are automatically triggered by changes in the health of AWS resources, giving you event visibility and guidance to help quickly diagnose and resolve issues.

AWS Managed Services

AWS Managed Services provides ongoing management of your AWS infrastructure so you can focus on your applications. By implementing best practices to maintain your infrastructure, AWS Managed Services helps reduce your operational overhead and risk. It automates common activities, such as change requests, monitoring, patch management, security, and backup services, and provides full-lifecycle services to provision, run, and support your infrastructure. This rigor and control help enforce your corporate and security infrastructure policies, and enable you to develop solutions and applications using your preferred development approach. AWS Managed Services improves agility, reduces cost, and unburdens you from infrastructure operations so you can direct resources toward differentiating your business.

AWS Console Mobile Application

The AWS Console Mobile Application lets customers view and manage a select set of resources to support incident response while on the go. The Console Mobile Application allows AWS customers to monitor resources through a dedicated dashboard and to view configuration details, metrics, and alarms for select AWS services. The dashboard provides permitted users with a single view of a resource's status, with real-time data from Amazon CloudWatch, Personal Health Dashboard, and AWS Billing and Cost Management. Customers can view ongoing issues and follow through to the relevant CloudWatch alarm screen for a detailed view with graphs and configuration options. In addition, customers can check on the status of specific AWS services, view detailed resource screens, and perform select actions.

AWS License Manager

AWS License Manager makes it easier to manage licenses in AWS and on-premises servers from software vendors such as Microsoft, SAP, Oracle, and IBM. AWS License Manager lets administrators create customized licensing rules that emulate the terms of their licensing agreements, and then enforces those rules when an Amazon EC2 instance is launched. Administrators can use these rules to limit licensing violations, such as using more licenses than an agreement stipulates, or reassigning licenses to different servers on a short-term basis. The rules in AWS License Manager let you limit a licensing breach by physically stopping the instance from launching, or by notifying administrators about the infringement. Administrators gain control and visibility of all their licenses with the AWS License Manager dashboard, and reduce the risk of non-compliance, misreporting, and additional costs due to licensing overages.

AWS License Manager integrates with AWS services to simplify the management of licenses across multiple AWS accounts, IT catalogs, and on-premises environments through a single AWS account. License administrators can add rules in AWS Service Catalog, which allows them to create and manage catalogs of IT services that are approved for use
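To make License Manager's rule enforcement concrete, here is a hedged sketch of the kind of check applied at launch time. The rule shape and function names are invented for illustration; the real service attaches rules to resources and performs this enforcement for you.

```python
# Hypothetical model of a License Manager rule check: a rule caps the number
# of licenses, and a launch is either blocked (hard limit) or merely flagged
# (soft limit) when the cap would be exceeded. All names are illustrative.
def check_launch(rule, licenses_in_use, licenses_needed):
    """Return (allowed, warning) for a proposed instance launch."""
    if licenses_in_use + licenses_needed <= rule["limit"]:
        return True, None
    if rule["enforcement"] == "hard":
        return False, "launch blocked: license limit exceeded"
    return True, "launch allowed but over limit: notify administrators"

hard_rule = {"limit": 10, "enforcement": "hard"}
print(check_launch(hard_rule, 8, 2))   # within the limit: (True, None)
print(check_launch(hard_rule, 9, 2))   # over a hard limit: blocked
```

The hard/soft distinction mirrors the two behaviors described above: physically stopping the launch versus notifying administrators of the infringement.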
on all their AWS accounts. Through seamless integration with AWS Systems Manager and AWS Organizations, administrators can manage licenses across all the AWS accounts in an organization, as well as on-premises environments. AWS Marketplace buyers can also use AWS License Manager to track bring-your-own-license (BYOL) software obtained from the Marketplace and keep a consolidated view of all their licenses.

AWS Well-Architected Tool

The AWS Well-Architected Tool helps you review the state of your workloads and compares them to the latest AWS architectural best practices. The tool is based on the AWS Well-Architected Framework, developed to help cloud architects build secure, high-performing, resilient, and efficient application infrastructure. The Framework provides a consistent approach for customers and partners to evaluate architectures, has been used in tens of thousands of workload reviews conducted by the AWS solutions architecture team, and provides guidance to help implement designs that scale with application needs over time. To use this free tool, available in the AWS Management Console, just define your workload and answer a set of questions regarding operational excellence, security, reliability, performance efficiency, and cost optimization. The AWS Well-Architected Tool then provides a plan on how to architect for the cloud using established best practices.

Media Services

Topics
• Amazon Elastic Transcoder
• Amazon Interactive Video Service
• Amazon Nimble Studio
• AWS Elemental Appliances & Software
• AWS Elemental MediaConnect
• AWS Elemental MediaConvert
• AWS Elemental MediaLive
• AWS Elemental MediaPackage
• AWS Elemental MediaStore
• AWS Elemental MediaTailor

Amazon Elastic Transcoder

Amazon Elastic Transcoder is media transcoding in the cloud. It is designed to be a highly scalable, easy-to-use, and cost-effective way for developers and businesses to convert (or transcode) media files from their source format into versions that will play back on devices like smartphones, tablets, and PCs.

Amazon Interactive Video Service

Amazon Interactive Video Service (Amazon IVS) is a managed live streaming solution that is quick and easy to set up, and ideal for creating interactive video experiences. Send your live streams to Amazon IVS using streaming software, and the service does everything you need to make low-latency live video available to any viewer around the world, letting you focus on building interactive experiences alongside the live video. You can easily customize and enhance the audience experience through the Amazon IVS player SDK and timed metadata APIs, allowing you to build a more valuable relationship with your viewers on your own websites and applications.

Amazon Nimble Studio

Amazon Nimble Studio empowers creative studios to produce visual effects, animation, and interactive content entirely in the cloud, from storyboard sketch to final deliverable. Rapidly onboard and collaborate with artists globally, and create content faster with access to virtual workstations, high-speed storage, and scalable rendering across AWS's global infrastructure.

AWS Elemental Appliances & Software

AWS Elemental Appliances and Software solutions bring advanced video processing and delivery technologies into your data center, colocation space, or on-premises facility. You can deploy AWS Elemental Appliances and Software to encode, package, and deliver video assets on premises, and seamlessly connect with cloud-based video infrastructure. Designed for easy integration with AWS Cloud media solutions, AWS Elemental Appliances and Software support video workloads that need to remain on premises to accommodate physical camera and router interfaces, managed network delivery, or network bandwidth constraints. AWS Elemental Live, Server, and Conductor come in two variants: ready-to-deploy appliances, or AWS-licensed software that you install on your own hardware. AWS
Elemental Link is a compact hardware device that sends live video to the cloud for encoding and delivery to viewers.

AWS Elemental MediaConnect

AWS Elemental MediaConnect is a high-quality transport service for live video. Today, broadcasters and content owners rely on satellite networks or fiber connections to send their high-value content into the cloud or to transmit it to partners for distribution. Both satellite and fiber approaches are expensive, require long lead times to set up, and lack the flexibility to adapt to changing requirements. To be more nimble, some customers have tried to use solutions that transmit live video on top of IP infrastructure, but have struggled with reliability and security. Now you can get the reliability and security of satellite and fiber, combined with the flexibility, agility, and economics of IP-based networks, using AWS Elemental MediaConnect. MediaConnect enables you to build mission-critical live video workflows in a fraction of the time and cost of satellite or fiber services. You can use MediaConnect to ingest live video from a remote event site (like a stadium), share video with a partner (like a cable TV distributor), or replicate a video stream for processing (like an over-the-top service). MediaConnect combines reliable video transport, highly secure stream sharing, and real-time network traffic and video monitoring, allowing you to focus on your content, not your transport infrastructure.

AWS Elemental MediaConvert

AWS Elemental MediaConvert is a file-based video transcoding service with broadcast-grade features. It allows you to easily create video-on-demand (VOD) content for broadcast and multiscreen delivery at scale. The service combines advanced video and audio capabilities with a simple web services interface and pay-as-you-go pricing. With AWS Elemental MediaConvert, you can focus on delivering compelling media experiences without having to worry about the complexity of building and operating your own video processing infrastructure.

AWS Elemental MediaLive

AWS Elemental MediaLive is a broadcast-grade live video processing service. It lets you create high-quality video streams for delivery to broadcast televisions and internet-connected multiscreen devices, like connected TVs, tablets, smartphones, and set-top boxes. The service works by encoding your live video streams in real time, taking a larger-sized live video source and compressing it into smaller versions for distribution to your viewers. With AWS Elemental MediaLive, you can easily set up streams for both live events and 24x7 channels, with advanced broadcasting features, high availability, and pay-as-you-go pricing. AWS Elemental MediaLive lets you focus on creating compelling live video experiences for your viewers without the complexity of building and operating broadcast-grade video processing infrastructure.

AWS Elemental MediaPackage

AWS Elemental MediaPackage reliably prepares and protects your video for delivery over the Internet. From a single video input, AWS Elemental MediaPackage creates video streams formatted to play on connected TVs, mobile phones, computers, tablets, and game consoles. It makes it easy to implement popular video features for viewers (start-over, pause, rewind, and so on), like those commonly found on DVRs. AWS Elemental MediaPackage can also protect your content using Digital Rights Management (DRM). AWS Elemental MediaPackage scales automatically in response to load, so your viewers always get a great experience without you having to accurately predict in advance the capacity you'll need.

AWS Elemental MediaStore

AWS Elemental MediaStore is an AWS storage service optimized for media. It gives you the performance, consistency, and low latency required to deliver live streaming video content. AWS Elemental MediaStore acts as the origin store in your video workflow. Its high-performance capabilities meet the needs of the most demanding media delivery workloads, combined with long-term,
cost-effective storage.

AWS Elemental MediaTailor

AWS Elemental MediaTailor lets video providers insert individually targeted advertising into their video streams without sacrificing broadcast-level quality of service. With AWS Elemental MediaTailor, viewers of your live or on-demand video each receive a stream that combines your content with ads personalized to them. But unlike other personalized ad solutions, with AWS Elemental MediaTailor your entire stream (video and ads) is delivered with broadcast-grade video quality to improve the experience for your viewers. AWS Elemental MediaTailor delivers automated reporting based on both client-side and server-side ad delivery metrics, making it easy to accurately measure ad impressions and viewer behavior. You can easily monetize unexpected high-demand viewing events with no upfront costs using AWS Elemental MediaTailor. It also improves ad delivery rates, helping you make more money from every video, and it works with a wider variety of content delivery networks, ad decision servers, and client devices.

See also Amazon Kinesis Video Streams.

Migration and Transfer

Topics
• AWS Application Migration Service
• AWS Migration Hub
• AWS Application Discovery Service
• AWS Database Migration Service
• AWS Server Migration Service
• AWS Snow Family
• AWS DataSync
• AWS Transfer Family

AWS Application Migration Service

AWS Application Migration Service (AWS MGN) allows you to quickly realize the benefits of migrating applications to the cloud, without changes and with minimal downtime. AWS Application Migration Service minimizes time-intensive, error-prone manual processes by automatically converting your source servers from physical, virtual, or cloud infrastructure to run natively on AWS. It further simplifies your migration by enabling you to use the same automated process for a wide range of applications. And by launching non-disruptive tests before migrating, you can be confident that your most critical applications, such as SAP, Oracle, and SQL Server, will work seamlessly on AWS.

AWS Migration Hub

AWS Migration Hub provides a single location to track the progress of application migrations across multiple AWS and partner solutions. Using Migration Hub allows you to choose the AWS and partner migration tools that best fit your needs, while providing visibility into the status of migrations across your portfolio of applications. Migration Hub also provides key metrics and progress for individual applications, regardless of which tools are being used to migrate them. For example, you might use AWS Database Migration Service, AWS Server Migration Service, and partner migration tools such as ATADATA ATAmotion, CloudEndure Live Migration, or RiverMeadow Server Migration SaaS to migrate an application comprised of a database, virtualized web servers, and a bare metal server. Using Migration Hub, you can view the migration progress of all the resources in the application. This allows you to quickly get progress updates across all of your migrations, easily identify and troubleshoot any issues, and reduce the overall time and effort spent on your migration projects.

AWS Application Discovery Service

AWS Application Discovery Service helps enterprise customers plan migration projects by gathering information about their on-premises data centers. Planning data center migrations can involve thousands of workloads that are often deeply interdependent. Server utilization data and dependency mapping are important early steps in the migration process. AWS Application Discovery Service collects and presents configuration, usage, and behavior data from your servers to help you better understand your workloads. The collected data is retained in encrypted format in an AWS Application Discovery Service data store. You can export this data as a CSV
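To illustrate why dependency mapping matters for planning, the sketch below groups servers into migration waves from a discovered dependency graph, so that nothing moves before the services it depends on. The data and the layering logic are invented for illustration; Application Discovery Service supplies the underlying utilization and dependency data, not this planning algorithm.

```python
# Toy topological layering of servers into migration waves.
# deps maps each server to the set of servers it depends on.
def migration_waves(deps):
    waves, placed = [], {}
    remaining = set(deps)
    while remaining:
        # A server can move once everything it depends on has been placed.
        wave = {s for s in remaining if deps[s] <= set(placed)}
        if not wave:
            raise ValueError("circular dependency among servers")
        for s in wave:
            placed[s] = len(waves)
        waves.append(sorted(wave))
        remaining -= wave
    return waves

deps = {
    "db": set(),
    "app1": {"db"},
    "app2": {"db"},
    "web": {"app1", "app2"},
}
print(migration_waves(deps))  # [['db'], ['app1', 'app2'], ['web']]
```

The same idea scales to the thousands of interdependent workloads mentioned above: each wave is a set of servers whose dependencies have already been migrated.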
file and use it to estimate the Total Cost of Ownership (TCO) of running on AWS and to plan your migration to AWS. In addition, this data is also available in AWS Migration Hub, where you can migrate the discovered servers and track their progress as they get migrated to AWS.

AWS Database Migration Service

AWS Database Migration Service helps you migrate databases to AWS easily and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases. The service supports homogeneous migrations, such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL. It also allows you to stream data to Amazon Redshift from any of the supported sources, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, SAP ASE, and SQL Server, enabling consolidation and easy analysis of data in the petabyte-scale data warehouse. AWS Database Migration Service can also be used for continuous data replication with high availability.

AWS Server Migration Service

AWS Server Migration Service (SMS) is an agentless service that makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations.

AWS Snow Family

The AWS Snow Family helps customers that need to run operations in austere, non-data center environments and in locations that lack consistent network connectivity. The Snow Family, comprising AWS Snowcone, AWS Snowball, and AWS Snowmobile, offers a number of physical devices and capacity points, most with built-in computing capabilities. These services help physically transport up to exabytes of data into and out of AWS. Snow Family devices are owned and managed by AWS, and integrate with AWS security, monitoring, storage management, and computing capabilities.

AWS Snowcone

AWS Snowcone is the smallest member of the AWS Snow Family of edge computing, edge storage, and data transfer devices, weighing in at 4.5 pounds (2.1 kg) with 8 terabytes of usable storage. Snowcone is ruggedized, secure, and purpose-built for use outside of a traditional data center. Its small form factor makes it a perfect fit for tight spaces or where portability is a necessity and network connectivity is unreliable. You can use Snowcone in backpacks on first responders, or for IoT, vehicular, and drone use cases. You can execute compute applications at the edge, and you can ship the device with data to AWS for offline data transfer, or you can transfer data online with AWS DataSync from edge locations. Like AWS Snowball, Snowcone has multiple layers of security and encryption. You can use either of these services to run edge computing workloads, or to collect, process, and transfer data to AWS. Snowcone is designed for data migration needs of up to 8 terabytes per device, and for space-constrained environments where AWS Snowball devices will not fit.

AWS Snowball

AWS Snowball is an edge computing, data migration, and edge storage device that comes in two options. Snowball Edge Storage Optimized devices provide both block storage and Amazon S3-compatible object storage, and 40 vCPUs. They are well suited for local storage and large-scale data transfer. Snowball Edge Compute Optimized devices provide 52 vCPUs, block and object storage, and an optional GPU for use cases like advanced machine learning and full-motion video analysis in disconnected environments. You can use these devices for data collection, machine learning and processing, and storage in environments with intermittent connectivity (like manufacturing, industrial, and transportation) or in extremely remote locations (like military or
maritime operations) before shipping them back to AWS. These devices may also be rack-mounted and clustered together to build larger, temporary installations. Snowball supports specific Amazon EC2 instance types and AWS Lambda functions, so you can develop and test in the AWS Cloud, then deploy applications on devices in remote locations to collect, pre-process, and ship the data to AWS. Common use cases include data migration.

AWS Snowmobile

AWS Snowmobile is an exabyte-scale data transfer service used to move extremely large amounts of data to AWS. You can transfer up to 100 PB per Snowmobile, a 45-foot long ruggedized shipping container pulled by a semi-trailer truck. Snowmobile makes it easy to move massive volumes of data to the cloud, including video libraries, image repositories, or even a complete data center migration. Transferring data with Snowmobile is secure, fast, and cost-effective. After an initial assessment, a Snowmobile will be transported to your data center, and AWS personnel will configure it for you so it can be accessed as a network storage target. When your Snowmobile is on site, AWS personnel will work with your team to connect a removable, high-speed network switch from the Snowmobile to your local network. Then you can begin your high-speed data transfer from any number of sources within your data center to the Snowmobile. After your data is loaded, the Snowmobile is driven back to AWS, where your data is imported into Amazon S3 or S3 Glacier. AWS Snowmobile uses multiple layers of security designed to protect your data, including dedicated security personnel, GPS tracking, alarm monitoring, 24/7 video surveillance, and an optional escort security vehicle while in transit. All data is encrypted with 256-bit encryption keys managed through AWS KMS, and designed to ensure both security and full chain of custody of your data.

AWS DataSync

AWS DataSync is a data transfer service that makes it easy for you to automate moving data between on-premises storage and Amazon S3 or Amazon Elastic File System (Amazon EFS). DataSync automatically handles many of the tasks related to data transfers that can slow down migrations or burden your IT operations, including running your own instances, handling encryption, managing scripts, network optimization, and data integrity validation. You can use DataSync to transfer data at speeds up to 10 times faster than open-source tools. DataSync uses an on-premises software agent to connect to your existing storage or file systems using the Network File System (NFS) protocol, so you don't have to write scripts or modify your applications to work with AWS APIs. You can use DataSync to copy data over AWS Direct Connect or internet links to AWS. The service enables one-time data migrations, recurring data processing workflows, and automated replication for data protection and recovery. Getting started with DataSync is easy: deploy the DataSync agent on premises, connect it to a file system or storage array, select Amazon EFS or S3 as your AWS storage, and start moving data. You pay only for the data you copy.

AWS Transfer Family

AWS Transfer Family provides fully managed support for file transfers directly into and out of Amazon S3 or Amazon EFS. With support for Secure File Transfer Protocol (SFTP), File Transfer Protocol over SSL (FTPS), and File Transfer Protocol (FTP), the AWS Transfer Family helps you seamlessly migrate your file transfer workflows to AWS by integrating with existing authentication systems, and by providing DNS routing with Amazon Route 53, so nothing changes for your customers and partners, or their applications. With your data in Amazon S3 or Amazon EFS, you can use it with AWS services for processing, analytics, machine learning, and archiving, as well as for home directories and developer tools. Getting started with the AWS Transfer Family is easy; there is no infrastructure to buy and set up.

Networking and Content Delivery

Topics
• Amazon API Gateway
• Amazon CloudFront
• Amazon Route 53
• Amazon VPC
• AWS App Mesh
• AWS Cloud Map
• AWS Direct Connect
• AWS Global Accelerator
• AWS PrivateLink
• AWS Transit Gateway
• AWS VPN
• Elastic Load Balancing

Amazon API Gateway

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. With a few clicks in the AWS Management Console, you can create an API that acts as a "front door" for applications to access data, business logic, or functionality from your backend services, such as workloads running on Amazon EC2, code running on AWS Lambda, or any web application. Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

Amazon CloudFront

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS, both with physical locations that are directly connected to the AWS global infrastructure and with other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation; Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications; and Lambda@Edge to run custom code closer to your customers' users and to customize the user experience. You can get started with the content delivery network in minutes, using the same AWS tools that you're already familiar with: APIs, the AWS Management Console, AWS CloudFormation, CLIs, and SDKs. Amazon's CDN offers a simple, pay-as-you-go pricing model with no upfront fees or required long-term contracts, and support for the CDN is included in your existing AWS Support subscription.

Amazon Route 53

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating human-readable names, such as www.example.com, into the numeric IP addresses, such as 192.0.2.1, that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well. Amazon Route 53 effectively connects user requests to infrastructure running in AWS (such as EC2 instances, Elastic Load Balancing load balancers, or Amazon S3 buckets) and can also be used to route users to infrastructure outside of AWS. You can use Amazon Route 53 to configure DNS health checks to route traffic to healthy endpoints, or to independently monitor the health of your application and its endpoints. Amazon Route 53 traffic flow makes it easy for you to manage traffic globally through a variety of routing types, including latency-based routing, Geo DNS, and weighted round robin, all of which can be combined with DNS failover to enable a variety of low-latency, fault-tolerant architectures. Using Amazon Route 53 traffic flow's simple visual editor, you can easily manage how your end users are routed to your application's endpoints, whether in a single AWS Region or distributed around the globe. Amazon Route 53 also offers domain name registration: you can purchase and manage domain names such as example.com, and Amazon Route 53 will automatically configure DNS settings for your domains.

Amazon VPC

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can use both IPv4 and IPv6 in your
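Carving an IP address range into subnets, as described here, can be sketched with Python's standard ipaddress module. The CIDR ranges are examples only; note that in a real VPC, AWS reserves the first four and the last IP address in each subnet, so a /24 yields 251 usable addresses rather than 256.

```python
# Sketch of subnet planning with the standard library: split a /16 address
# range into /24 subnets, as you might for public and private tiers.
# The addresses are examples, not a recommendation.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))   # all 256 possible /24 subnets
public, private = subnets[0], subnets[1]

print(public)                  # 10.0.0.0/24
print(private)                 # 10.0.1.0/24
print(public.num_addresses)    # 256 raw addresses in a /24
```

Planning the CIDR layout up front matters because a subnet's range cannot overlap its siblings, and public and private tiers are usually separated at this level before route tables are attached.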
VPC for secure and easy access to resources and applications.

You can easily customize the network configuration for your VPC. For example, you can create a public-facing subnet for your web servers that has access to the Internet, and place your backend systems, such as databases or application servers, in a private-facing subnet with no Internet access. You can leverage multiple layers of security (including security groups and network access control lists) to help control access to EC2 instances in each subnet. Additionally, you can create a hardware virtual private network (VPN) connection between your corporate data center and your VPC, and leverage the AWS Cloud as an extension of your corporate data center.

AWS App Mesh

AWS App Mesh makes it easy to monitor and control microservices running on AWS. App Mesh standardizes how your microservices communicate, giving you end-to-end visibility and helping to ensure high availability for your applications.

Modern applications are often composed of multiple microservices that each perform a specific function. This architecture helps to increase the availability and scalability of the application by allowing each component to scale independently based on demand, and by automatically degrading functionality when a component fails, instead of going offline. Each microservice interacts with all the other microservices through an API. As the number of microservices grows within an application, it becomes increasingly difficult to pinpoint the exact location of errors, reroute traffic after failures, and safely deploy code changes. Previously, this has required you to build monitoring and control logic directly into your code and redeploy your microservices every time there are changes.

AWS App Mesh makes it easy to run microservices by providing consistent visibility and network traffic controls for every microservice in an application. App Mesh removes the need to update application code to change how monitoring data is collected or traffic is routed between microservices. App Mesh configures each microservice to export monitoring data, and implements consistent communications control logic across your application. This makes it easy to quickly pinpoint the exact location of errors and automatically reroute network traffic when there are failures or when code changes need to be deployed.

You can use App Mesh with Amazon ECS and Amazon EKS to better run containerized microservices at scale. App Mesh uses the open source Envoy proxy, making it compatible with a wide range of AWS partner and open source tools for monitoring microservices.

AWS Cloud Map

AWS Cloud Map is a cloud resource discovery service. With Cloud Map, you can define custom names for your application resources, and it maintains the updated location of these dynamically changing resources. This increases your application availability, because your web service always discovers the most up-to-date locations of its resources.

Modern applications are typically composed of multiple services that are accessible over an API and perform a specific function. Each service interacts with a variety of other resources, such as databases, queues, object stores, and customer-defined microservices, and it needs to be able to find the location of all the infrastructure resources on which it depends in order to function. You typically manage all these resource names and their locations manually within the application code. However, manual resource management becomes time-consuming and error-prone as the number of dependent infrastructure resources increases, or as the number of microservices dynamically scales up and down based on traffic. You can also use third-party service discovery products, but this requires installing and managing additional software and infrastructure.

Cloud Map allows you to register any application resources, such as databases, queues, microservices, and other cloud resources, with custom names. Cloud Map then
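The register-then-discover pattern behind Cloud Map can be modeled in a few lines. This is a local, illustrative sketch, not the Cloud Map API (which exposes operations such as RegisterInstance and DiscoverInstances); the names and addresses are invented.

```python
# Minimal model of service discovery: resources register under a custom name,
# health is tracked per instance, and lookups return only healthy locations.
class ServiceRegistry:
    def __init__(self):
        # name -> {instance_id: {"addr": ..., "healthy": ...}}
        self._instances = {}

    def register(self, name, instance_id, addr):
        self._instances.setdefault(name, {})[instance_id] = {
            "addr": addr, "healthy": True}

    def set_health(self, name, instance_id, healthy):
        self._instances[name][instance_id]["healthy"] = healthy

    def discover(self, name):
        # Callers see only the locations that are currently healthy.
        return sorted(i["addr"]
                      for i in self._instances.get(name, {}).values()
                      if i["healthy"])

reg = ServiceRegistry()
reg.register("payments-db", "i-1", "10.0.1.10")
reg.register("payments-db", "i-2", "10.0.1.11")
reg.set_health("payments-db", "i-1", False)
print(reg.discover("payments-db"))   # ['10.0.1.11']
```

The point of the model is the indirection: application code asks for "payments-db" by name instead of hard-coding addresses, so locations can change (or fail) without a code change.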
constantly checks the health of resources to make sure the location is up to date. The application can then query the registry for the location of the resources it needs, based on the application version and deployment environment.

AWS Direct Connect

AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your data center, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry-standard 802.1Q virtual LANs (VLANs), this dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources, such as objects stored in Amazon S3 using public IP address space, and private resources, such as EC2 instances running within a VPC using private IP address space, while maintaining network separation between the public and private environments. Virtual interfaces can be reconfigured at any time to meet your changing needs.

AWS Global Accelerator

AWS Global Accelerator is a networking service that improves the availability and performance of the applications that you offer to your global users. Today, if you deliver applications to your global users over the public internet, your users might face inconsistent availability and performance as their traffic traverses multiple public networks to reach your application. These public networks are often congested, and each hop can introduce availability and performance risk. AWS Global Accelerator uses the highly available and congestion-free AWS global network to direct internet traffic from your users to your applications on AWS, making your users' experience more consistent.

To improve the availability of your application, you must monitor the health of your application endpoints and route traffic only to healthy endpoints. AWS Global Accelerator improves application availability by continuously monitoring the health of your application endpoints and routing traffic to the closest healthy endpoints. AWS Global Accelerator also makes it easier to manage your global applications by providing static IP addresses that act as a fixed entry point to your application hosted on AWS, which eliminates the complexity of managing specific IP addresses for different AWS Regions and Availability Zones. AWS Global Accelerator is easy to set up, configure, and manage.

AWS PrivateLink

AWS PrivateLink simplifies the security of data shared with cloud-based applications by eliminating the exposure of data to the public Internet. AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications, securely on the Amazon network. AWS PrivateLink makes it easy to connect services across different accounts and VPCs, significantly simplifying the network architecture.

AWS Transit Gateway

AWS Transit Gateway is a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway. As you grow the number of workloads running on AWS, you need to be able to scale your networks across multiple accounts and Amazon VPCs to keep up with the growth. Today, you can connect pairs of Amazon VPCs using peering. However, managing point-to-point connectivity across many Amazon VPCs, without the ability to centrally manage the connectivity policies, can be operationally costly and cumbersome. For on-premises connectivity, you need to attach your AWS VPN to each individual Amazon VPC. This solution can be time-consuming to build and hard to manage when the number of VPCs grows into the hundreds. With AWS Transit Gateway, you only have to create
and manage a single connection from the central gateway to each Amazon VPC, on-premises data center, or remote office across your network. Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks, which act like spokes. This hub-and-spoke model significantly simplifies management and reduces operational costs because each network only has to connect to the Transit Gateway, and not to every other network. Any new VPC is simply connected to the Transit Gateway and is then automatically available to every other network that is connected to the Transit Gateway. This ease of connectivity makes it easy to scale your network as you grow.

AWS VPN

AWS Virtual Private Network solutions establish secure connections between your on-premises networks, remote offices, client devices, and the AWS global network. AWS VPN is composed of two services: AWS Site-to-Site VPN and AWS Client VPN. Each service provides a highly available, managed, and elastic cloud VPN solution to protect your network traffic. AWS Site-to-Site VPN creates encrypted tunnels between your network and your Amazon Virtual Private Clouds or AWS Transit Gateways. For managing remote access, AWS Client VPN connects your users to AWS or on-premises resources using a VPN software client.

Elastic Load Balancing

Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. Elastic Load Balancing offers four types of load balancers that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault tolerant.

•Application Load Balancer is best suited for load balancing of HTTP and HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers. Operating at the individual request level (Layer 7), Application Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request.
•Network Load Balancer is best suited for load balancing of TCP traffic where extreme performance is required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) and is capable of handling millions of requests per second while maintaining ultra-low latencies. Network Load Balancer is also optimized to handle sudden and volatile traffic patterns.
•Gateway Load Balancer makes it easy to deploy, scale, and run third-party virtual networking appliances. Providing load balancing and auto scaling for fleets of third-party appliances, Gateway Load Balancer is transparent to the source and destination of traffic. This capability makes it well suited for working with third-party appliances for security, network analytics, and other use cases.
•Classic Load Balancer provides basic load balancing across multiple Amazon EC2 instances and operates at both the request level and the connection level. Classic Load Balancer is intended for applications that were built within the EC2-Classic network.

Quantum Technologies

Amazon Braket

Amazon Braket is a fully managed quantum computing service that helps researchers and developers get started with the technology to accelerate research and discovery. Amazon Braket provides a development environment for you to explore and build quantum algorithms, test them on quantum circuit simulators, and run them on different quantum hardware technologies.

Quantum computing has the potential to solve computational problems that are beyond the reach of classical computers by harnessing the laws of quantum mechanics to process information in new ways. This approach to computing could transform areas such as chemical engineering,
material science, drug discovery, financial portfolio optimization, and machine learning. But defining those problems and programming quantum computers to solve them requires new skills, which are difficult to acquire without easy access to quantum computing hardware.

Amazon Braket overcomes these challenges so you can explore quantum computing. With Amazon Braket, you can design and build your own quantum algorithms from scratch or choose from a set of pre-built algorithms. Once you have built your algorithm, Amazon Braket provides a choice of simulators to test, troubleshoot, and run your algorithms. When you are ready, you can run your algorithm on your choice of different quantum computers, including quantum annealers from D-Wave and gate-based computers from Rigetti and IonQ. With Amazon Braket, you can now evaluate the potential of quantum computing for your organization and build expertise.

Robotics

AWS RoboMaker

AWS RoboMaker is a service that makes it easy to develop, test, and deploy intelligent robotics applications at scale. RoboMaker extends the most widely used open-source robotics software framework, Robot Operating System (ROS), with connectivity to cloud services. This includes AWS machine learning services, monitoring services, and analytics services that enable a robot to stream data, navigate, communicate, comprehend, and learn. RoboMaker provides a robotics development environment for application development, a robotics simulation service to accelerate application testing, and a robotics fleet management service for remote application deployment, update, and management.

Robots are machines that sense, compute, and take action. Robots need instructions to accomplish tasks, and these instructions come in the form of applications that developers code to determine how the robot will behave. Receiving and processing sensor data, controlling actuators for movement, and performing a specific task are all functions that are typically automated by these intelligent robotics applications. Intelligent robots are increasingly being used in warehouses to distribute inventory, in homes to carry out tedious housework, and in retail stores to provide customer service. Robotics applications use machine learning to perform more complex tasks, like recognizing an object or face, having a conversation with a person, following a spoken command, or navigating autonomously.

Until now, developing, testing, and deploying intelligent robotics applications was difficult and time-consuming. Building intelligent robotics functionality using machine learning is complex and requires specialized skills. Setting up a development environment can take each developer days, and building a realistic simulation system to test an application can take months due to the underlying infrastructure needed. Once an application has been developed and tested, a developer needs to build a deployment system to deploy the application into the robot, and later update the application while the robot is in use. AWS RoboMaker provides you with the tools to make building intelligent robotics applications more accessible, a fully managed simulation service for quick and easy testing, and a deployment service for lifecycle management. AWS RoboMaker removes the heavy lifting from each step of robotics development so you can focus on creating innovative robotics applications.

Satellite

AWS Ground Station

AWS Ground Station is a fully managed service that lets you control satellite communications, downlink and process satellite data, and scale your satellite operations quickly, easily, and cost-effectively without having to worry about building or managing your own ground station infrastructure. Satellites are used for a wide variety of use cases, including weather forecasting, surface imaging, communications, and video broadcasts. Ground stations are at the core of global satellite networks; they are facilities that provide communications between the ground and the satellites by using antennas to receive data and control systems to send radio signals to command and control the satellite.

Today, you must either build your own ground stations and antennas or obtain long-term leases with ground station providers, often in multiple countries, to provide enough opportunities to contact the satellites as they orbit the globe. Once all this data is downloaded, you need servers, storage, and networking in close proximity to the antennas to process, store, and transport the data from the satellites.

AWS Ground Station eliminates these problems by delivering a global Ground Station as a Service. We provide direct access to AWS services and the AWS Global Infrastructure, including our low-latency global fiber network, right where your data is downloaded into our AWS Ground Station. This enables you to easily control satellite communications, quickly ingest and process your satellite data, and rapidly integrate that data with your applications and other services running in the AWS Cloud. For example, you can use Amazon S3 to store the downloaded data, Amazon Kinesis Data Streams for managing data ingestion from satellites, SageMaker for building custom machine learning applications that apply to your data sets, and Amazon EC2 to command and download data from satellites. AWS Ground Station can help you save up to 80% on the cost of your ground station operations by allowing you to pay only for the actual antenna time used and by relying on our global footprint of ground stations to download data when and where you need it, instead of building and operating your own global ground station infrastructure. There are no long-term commitments, and you gain the ability to rapidly scale your satellite communications on demand when your business needs it.

Security, Identity, and Compliance

Topics
•Amazon Cognito
•Amazon Cloud Directory
•Amazon Detective
•Amazon GuardDuty
•Amazon Inspector
•Amazon Macie
•AWS Artifact
•AWS Audit Manager
•AWS Certificate Manager
•AWS CloudHSM
•AWS Directory Service
•AWS Firewall Manager
•AWS Identity and Access Management
•AWS Key Management Service
•AWS Network Firewall
•AWS Resource Access Manager
•AWS Secrets Manager
•AWS Security Hub
•AWS Shield
•AWS Single Sign-On
•AWS WAF

Amazon Cognito

Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. With Amazon Cognito, you also have the option to authenticate users through social identity providers such as Facebook, Twitter, or Amazon, with SAML identity solutions, or by using your own identity system. In addition, Amazon Cognito enables you to save data locally on users’ devices, allowing your applications to work even when the devices are offline. You can then synchronize data across users’ devices so that their app experience remains consistent regardless of the device they use. With Amazon Cognito, you can focus on creating great app experiences instead of worrying about building, securing, and scaling a solution to handle user management, authentication, and sync across devices.

Amazon Cloud Directory

Amazon Cloud Directory enables you to build flexible cloud-native directories for organizing hierarchies of data along multiple dimensions. With Cloud Directory, you can create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries. While traditional directory solutions, such as Active Directory Lightweight Directory Services (AD LDS) and other LDAP-based directories, limit you to a single hierarchy, Cloud Directory offers you the flexibility to create directories with hierarchies that span multiple dimensions. For example, you can create an organizational chart that can be navigated through separate hierarchies for reporting structure, location, and cost center. Amazon Cloud
Directory automatically scales to hundreds of millions of objects and provides an extensible schema that can be shared with multiple applications. As a fully managed service, Cloud Directory eliminates time-consuming and expensive administrative tasks, such as scaling infrastructure and managing servers. You simply define the schema, create a directory, and then populate your directory by making calls to the Cloud Directory API.

Amazon Detective

Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities. Amazon Detective automatically collects log data from your AWS resources and uses machine learning, statistical analysis, and graph theory to build a linked set of data that enables you to easily conduct faster and more efficient security investigations.

AWS security services like Amazon GuardDuty, Amazon Macie, and AWS Security Hub, as well as partner security products, can be used to identify potential security issues or findings. These services are really helpful in alerting you when something is wrong and pointing out where to go to fix it. But sometimes there might be a security finding where you need to dig a lot deeper and analyze more information to isolate the root cause and take action. Determining the root cause of security findings can be a complex process that often involves collecting and combining logs from many separate data sources, using extract, transform, and load (ETL) tools or custom scripting to organize the data, and then security analysts having to analyze the data and conduct lengthy investigations.

Amazon Detective simplifies this process by enabling your security teams to easily investigate and quickly get to the root cause of a finding. Amazon Detective can analyze trillions of events from multiple data sources, such as Virtual Private Cloud (VPC) Flow Logs, AWS CloudTrail, and Amazon GuardDuty, and automatically creates a unified, interactive view of your resources, users, and the interactions between them over time. With this unified view, you can visualize all the details and context in one place to identify the underlying reasons for the findings, drill down into relevant historical activities, and quickly determine the root cause. You can get started with Amazon Detective in just a few clicks in the AWS Console. There is no software to deploy or data sources to enable and maintain.

Amazon GuardDuty

Amazon GuardDuty is a threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. It monitors for activity such as unusual API calls or potentially unauthorized deployments that indicate a possible account compromise. GuardDuty also detects potentially compromised instances or reconnaissance by attackers.

Enabled with a few clicks in the AWS Management Console, Amazon GuardDuty can immediately begin analyzing billions of events across your AWS accounts for signs of risk. GuardDuty identifies suspected attackers through integrated threat intelligence feeds and uses machine learning to detect anomalies in account and workload activity. When a potential threat is detected, the service delivers a detailed security alert to the GuardDuty console and Amazon CloudWatch Events. This makes alerts actionable and easy to integrate into existing event management and workflow systems.

Amazon GuardDuty is cost-effective and easy. It does not require you to deploy and maintain software or security infrastructure, meaning it can be enabled quickly with no risk of negatively impacting existing application workloads. There are no upfront costs with GuardDuty, no software to deploy, and no threat intelligence feeds required. Customers pay for the events analyzed by GuardDuty, and there is a 30-day free trial available for every new account to the service.

Amazon Inspector

Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of detailed assessment reports, which are available via the Amazon Inspector console or API.

Amazon Inspector security assessments help you check for unintended network accessibility of your Amazon EC2 instances and for vulnerabilities on those EC2 instances. Amazon Inspector assessments are offered to you as predefined rules packages mapped to common security best practices and vulnerability definitions. Examples of built-in rules include checking for access to your EC2 instances from the internet, remote root login being enabled, or vulnerable software versions installed. These rules are regularly updated by AWS security researchers.

Amazon Macie

Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property, and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved. The fully managed service continuously monitors data access activity for anomalies and generates detailed alerts when it detects risk of unauthorized access or inadvertent data leaks.

AWS Artifact

AWS Artifact is your go-to central resource for compliance-related information that matters to you. It provides on-demand access to AWS’ security and compliance reports and select online agreements. Reports available in AWS Artifact include our Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies
across geographies and compliance verticals that validate the implementation and operating effectiveness of AWS security controls. Agreements available in AWS Artifact include the Business Associate Addendum (BAA) and the Nondisclosure Agreement (NDA).

AWS Audit Manager

AWS Audit Manager helps you continuously audit your AWS usage to simplify how you assess risk and compliance with regulations and industry standards. Audit Manager automates evidence collection to reduce the “all hands on deck” manual effort that often happens for audits, and enables you to scale your audit capability in the cloud as your business grows. With Audit Manager, it is easy to assess whether your policies, procedures, and activities – also known as controls – are operating effectively. When it is time for an audit, AWS Audit Manager helps you manage stakeholder reviews of your controls and enables you to build audit-ready reports with much less manual effort.

AWS Audit Manager’s prebuilt frameworks help translate evidence from cloud services into auditor-friendly reports by mapping your AWS resources to the requirements in industry standards or regulations, such as the CIS AWS Foundations Benchmark, the General Data Protection Regulation (GDPR), and the Payment Card Industry Data Security Standard (PCI DSS). You can also fully customize a framework and its controls for your unique business requirements. Based on the framework you select, Audit Manager launches an assessment that continuously collects and organizes relevant evidence from your AWS accounts and resources, such as resource configuration snapshots, user activity, and compliance check results. You can get started quickly in the AWS Management Console. Just select a prebuilt framework to launch an assessment and begin automatically collecting and organizing evidence.

AWS Certificate Manager

AWS Certificate Manager is a service that lets you easily provision, manage, and deploy Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the internet, as well as resources on private networks. AWS Certificate Manager removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates.

With AWS Certificate Manager, you can quickly request a certificate, deploy it on ACM-integrated AWS resources, such as Elastic Load Balancing, Amazon CloudFront distributions, and APIs on API Gateway, and let AWS Certificate Manager handle certificate renewals. It also enables you to create private certificates for your internal resources and manage the certificate lifecycle centrally. Public and private certificates provisioned through AWS Certificate Manager for use with ACM-integrated services are free. You pay only for the AWS resources you create to run your application. With AWS Certificate Manager Private Certificate Authority, you pay monthly for the operation of the private CA and for the private certificates you issue.

AWS CloudHSM

AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. CloudHSM offers you the flexibility to integrate with your applications using industry-standard APIs, such as PKCS#11, Java Cryptography Extensions (JCE), and Microsoft CryptoNG (CNG) libraries. CloudHSM is standards-compliant and enables you to export all of your keys to most other commercially available HSMs, subject to your configurations. It is a fully managed service that automates time-consuming administrative tasks for you, such as hardware provisioning, software patching, high availability, and backups. CloudHSM also enables you to scale quickly by adding and removing HSM capacity on demand, with no upfront costs.

AWS Directory Service

AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, enables your directory-aware workloads and AWS resources to use managed Active Directory in the AWS Cloud. AWS Managed Microsoft AD is built on actual Microsoft Active Directory and does not require you to synchronize or replicate data from your existing Active Directory to the cloud. You can use standard Active Directory administration tools and take advantage of built-in Active Directory features, such as Group Policy and single sign-on (SSO). With AWS Managed Microsoft AD, you can easily join Amazon EC2 and Amazon RDS for SQL Server instances to a domain, and use AWS enterprise IT applications such as Amazon WorkSpaces with Active Directory users and groups.

AWS Firewall Manager

AWS Firewall Manager is a security management service that makes it easier to centrally configure and manage AWS WAF rules across your accounts and applications. Using Firewall Manager, you can easily roll out AWS WAF rules for your Application Load Balancers and Amazon CloudFront distributions across accounts in AWS Organizations. As new applications are created, Firewall Manager also makes it easy to bring new applications and resources into compliance with a common set of security rules from day one. Now you have a single service to build firewall rules, create security policies, and enforce them in a consistent, hierarchical manner across your entire Application Load Balancer and Amazon CloudFront infrastructure.

AWS Identity and Access Management

AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. IAM allows you to do the following:
•Manage IAM users and their access: You can create users in IAM, assign them individual security
credentials (access keys, passwords, and multi-factor authentication devices), or request temporary security credentials to provide users access to AWS services and resources. You can manage permissions in order to control which operations a user can perform.
•Manage IAM roles and their permissions: You can create roles in IAM and manage permissions to control which operations can be performed by the entity or AWS service that assumes the role. You can also define which entity is allowed to assume the role.
•Manage federated users and their permissions: You can enable identity federation to allow existing identities (users, groups, and roles) in your enterprise to access the AWS Management Console, call AWS APIs, and access resources without the need to create an IAM user for each identity.

AWS Key Management Service

AWS Key Management Service (KMS) makes it easy for you to create and manage keys and control the use of encryption across a wide range of AWS services and in your applications. AWS KMS is a secure and resilient service that uses FIPS 140-2 validated hardware security modules to protect your keys. AWS KMS is integrated with AWS CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs.

AWS Network Firewall

AWS Network Firewall is a managed service that makes it easy to deploy essential network protections for all of your Amazon Virtual Private Clouds (VPCs). The service can be set up with just a few clicks and scales automatically with your network traffic, so you don't have to worry about deploying and managing any infrastructure. AWS Network Firewall’s flexible rules engine lets you define firewall rules that give you fine-grained control over network traffic, such as blocking outbound Server Message Block (SMB) requests to prevent the spread of malicious activity. You can also import rules you’ve already written in common open-source rule formats, as well as enable integrations with managed intelligence feeds sourced by AWS partners. AWS Network Firewall works together with AWS Firewall Manager, so you can build policies based on AWS Network Firewall rules and then centrally apply those policies across your VPCs and accounts.

AWS Network Firewall includes features that provide protections from common network threats. AWS Network Firewall’s stateful firewall can incorporate context from traffic flows, like tracking connections and protocol identification, to enforce policies such as preventing your VPCs from accessing domains using an unauthorized protocol. AWS Network Firewall’s intrusion prevention system (IPS) provides active traffic flow inspection so you can identify and block vulnerability exploits using signature-based detection. AWS Network Firewall also offers web filtering that can stop traffic to known bad URLs and monitor fully qualified domain names.

It’s easy to get started with AWS Network Firewall by visiting the Amazon VPC console to create or import your firewall rules, group them into policies, and apply them to the VPCs you want to protect. AWS Network Firewall pricing is based on the number of firewalls deployed and the amount of traffic inspected. There are no upfront commitments, and you pay only for what you use.

AWS Resource Access Manager

AWS Resource Access Manager (RAM) helps you securely share your resources across AWS accounts, within your organization or organizational units (OUs) in AWS Organizations, and with IAM roles and IAM users for supported resource types. You can use AWS RAM to share transit gateways, subnets, AWS License Manager license configurations, Amazon Route 53 Resolver rules, and more resource types. Many organizations use multiple accounts to create administrative or billing isolation and to limit the impact of errors. With AWS RAM, you don’t need to create duplicate resources in multiple AWS accounts. This reduces the operational overhead of managing resources in every account that you own. Instead, in your multi-account environment, you can create a resource once and use AWS RAM to share that resource across accounts by creating a resource share. When you create a resource share, you select the resources to share, choose an AWS RAM managed permission per resource type, and specify whom you want to have access to the resources. AWS RAM is available to you at no additional charge.

AWS Secrets Manager

AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text. Secrets Manager offers secret rotation with built-in integration for Amazon RDS for MySQL, PostgreSQL, and Amazon Aurora. Also, the service is extensible to other types of secrets, including API keys and OAuth tokens. In addition, Secrets Manager enables you to control access to secrets using fine-grained permissions, and to audit secret rotation centrally for resources in the AWS Cloud, third-party services, and on-premises.

AWS Security Hub

AWS Security Hub gives you a comprehensive view of your high-priority security alerts and compliance status across AWS accounts. There is a range of powerful security tools at your disposal, from firewalls and endpoint protection to vulnerability and compliance scanners. But oftentimes this leaves your team switching back and forth between these tools to deal with hundreds, and sometimes thousands, of security alerts every day. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from AWS Partner solutions. Your findings are visually summarized on integrated dashboards with actionable graphs and tables. You can
also continuously monitor your environment using automated compliance checks based on the AWS best practices and industry standards your organization follows Get started with AWS Security Hub just a few clicks in the Management Console and once enabled Security Hub will begin aggregating and prioritizing findings AWS Shield AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards web applications running on AWS AWS Shield provides you with alwayson detection and automatic inline mitigations that minimize application downtime and latency so there is no need to engage AWS Support to benefit from DDoS protection There are two tiers of AWS Shield: Standard and Advanced All AWS customers benefit from the automatic protections of AWS Shield Standard at no additional charge AWS Shield Standard defends against most common frequently occurring network and transport layer DDoS attacks that target your website or applications When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53 you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks For higher levels of protection against attacks targeting your applications running on Amazon Elastic Compute Cloud (Amazon EC2) Elastic Load Balancing (ELB) Amazon CloudFront and Amazon Route 53 resources you can subscribe to AWS Shield Advanced In addition to the network and transport layer protections that come with Standard AWS Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks near realtime visibility into attacks and integration with AWS WAF a web application firewall AWS Shield Advanced also gives you 24x7 access to the AWS DDoS Response Team (DRT) and protection against DDoS related spikes in your Amazon Elastic Compute Cloud (Amazon EC2) Elastic Load Balancing (ELB) Amazon CloudFront and Amazon Route 53 charges AWS Shield Advanced is available globally on all Amazon 
AWS Shield Advanced is available globally on all Amazon CloudFront and Amazon Route 53 edge locations. You can protect your web applications hosted anywhere in the world by deploying Amazon CloudFront in front of your application; your origin servers can be Amazon S3, Amazon Elastic Compute Cloud (Amazon EC2), Elastic Load Balancing (ELB), or a custom server outside of AWS. You can also enable AWS Shield Advanced directly on an Elastic IP or Elastic Load Balancing (ELB) in the following AWS Regions: Northern Virginia, Ohio, Oregon, Northern California, Montreal, São Paulo, Ireland, Frankfurt, London, Paris, Stockholm, Singapore, Tokyo, Sydney, Seoul, and Mumbai.

AWS Single Sign-On

AWS Single Sign-On (SSO) is a cloud SSO service that makes it easy to centrally manage SSO access to multiple AWS accounts and business applications. With just a few clicks, you can enable a highly available SSO service without the upfront investment and ongoing maintenance costs of operating your own SSO infrastructure. With AWS SSO, you can centrally manage SSO access and user permissions to all of your accounts in AWS Organizations. AWS SSO also includes built-in SAML integrations to many business applications, such as Salesforce, Box, and Microsoft Office 365. Further, by using the AWS SSO application configuration wizard, you can create Security Assertion Markup Language (SAML) 2.0 integrations and extend SSO access to any of your SAML-enabled applications. Your users simply sign in to a user portal with credentials they configure in AWS SSO, or using their existing corporate credentials, to access all their assigned accounts and applications from one place.

AWS WAF

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web application by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. AWS WAF also includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules.

Storage

Topics
• Amazon Elastic Block Store
• Amazon Elastic File System
• Amazon FSx for Lustre
• Amazon FSx for Windows File Server
• Amazon Simple Storage Service
• Amazon S3 Glacier
• AWS Backup
• AWS Storage Gateway

Amazon Elastic Block Store

Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. Amazon EBS volumes offer the consistent, low-latency performance needed to run your workloads. With Amazon EBS, you can scale your usage up or down within minutes, all while paying a low price for only what you provision.

Amazon Elastic File System

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, elastic file system for Linux-based workloads for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it. It is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies. Amazon EFS is a fully managed service that requires no changes to your existing applications and tools.
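Returning to the AWS WAF rules discussed above: the idea of a rule that blocks a common attack pattern can be illustrated with a toy evaluator. These regular expressions only sketch the concept of a match condition; real AWS WAF rules are defined through the service's rule statements, not application code like this.

```python
import re

# Toy signatures that illustrate the *idea* of matching SQL injection and
# cross-site scripting; real AWS WAF match conditions are far more thorough.
SQLI = re.compile(r"('|--|\bUNION\b|\bOR\b\s+1=1)", re.IGNORECASE)
XSS = re.compile(r"<\s*script", re.IGNORECASE)

def evaluate(query_string):
    """Return 'BLOCK' if a toy rule matches the request, else 'ALLOW'."""
    if SQLI.search(query_string) or XSS.search(query_string):
        return "BLOCK"
    return "ALLOW"

print(evaluate("id=42"))                        # ALLOW
print(evaluate("id=1 OR 1=1"))                  # BLOCK
print(evaluate("q=<script>alert(1)</script>"))  # BLOCK
```

The point of the managed service is that such patterns are evaluated at the edge, before traffic reaches your application, and can be updated within minutes.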
It provides access through a standard file system interface for seamless integration. Amazon EFS is a regional service, storing data within and across multiple Availability Zones (AZs) for high availability and durability. You can access your file systems across AZs and AWS Regions and share files between thousands of Amazon EC2 instances and on-premises servers via AWS Direct Connect or AWS VPN. Amazon EFS is well suited to support a broad spectrum of use cases, from highly parallelized scale-out workloads that require the highest possible throughput to single-threaded, latency-sensitive workloads. Use cases include lift-and-shift enterprise applications, big data analytics, web serving and content management, application development and testing, media and entertainment workflows, database backups, and container storage.

Amazon FSx for Lustre

Amazon FSx for Lustre is a fully managed file system that is optimized for compute-intensive workloads, such as high performance computing, machine learning, and media data processing workflows. Many of these applications require the high performance and low latencies of scale-out, parallel file systems. Operating these file systems typically requires specialized expertise and administrative overhead, requiring you to provision storage servers and tune complex performance parameters. With Amazon FSx, you can launch and run a Lustre file system that can process massive data sets at up to hundreds of gigabytes per second of throughput, millions of IOPS, and sub-millisecond latencies.

Amazon FSx for Lustre is seamlessly integrated with Amazon S3, making it easy to link your long-term data sets with your high performance file systems to run compute-intensive workloads. You can automatically copy data from S3 to FSx for Lustre, run your workloads, and then write results back to S3. FSx for Lustre also enables you to burst your compute-intensive workloads from on-premises to AWS by allowing you to access your FSx file system over AWS Direct Connect or VPN.
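To give the throughput figures above some intuition, a quick back-of-the-envelope calculation shows how long one full pass over a data set takes at a given aggregate throughput. The data set size and throughput below are hypothetical sizing inputs, not service quotas.

```python
def scan_seconds(dataset_tib, throughput_gibps):
    """Seconds to read a dataset once at a sustained aggregate throughput
    (TiB converted to GiB; a simplified model that ignores overheads)."""
    return dataset_tib * 1024 / throughput_gibps

# Hypothetical sizing: a 100 TiB corpus at 200 GiB/s aggregate throughput.
secs = scan_seconds(100, 200)
print(f"{secs:.0f} s (~{secs / 60:.1f} min)")  # 512 s (~8.5 min)
```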
FSx for Lustre helps you cost-optimize your storage for compute-intensive workloads: it provides cheap, performant, non-replicated storage for processing data, with your long-term data stored durably in Amazon S3 or other low-cost data stores. With Amazon FSx, you pay for only the resources you use; there are no minimum commitments, upfront hardware or software costs, or additional fees.

Amazon FSx for Windows File Server

Amazon FSx for Windows File Server provides a fully managed, native Microsoft Windows file system so you can easily move your Windows-based applications that require file storage to AWS. Built on Windows Server, Amazon FSx provides shared file storage with the compatibility and features that your Windows-based applications rely on, including full support for the SMB protocol and Windows NTFS, Active Directory (AD) integration, and Distributed File System (DFS). Amazon FSx uses SSD storage to provide the fast performance your Windows applications and users expect, with high levels of throughput and IOPS and consistent sub-millisecond latencies. This compatibility and performance is particularly important when moving workloads that require Windows shared file storage, like CRM, ERP, and .NET applications, as well as home directories. With Amazon FSx, you can launch highly durable and available Windows file systems that can be accessed from up to thousands of compute instances using the industry-standard SMB protocol. Amazon FSx eliminates the typical administrative overhead of managing Windows file servers. You pay for only the resources used, with no upfront costs, minimum commitments, or additional fees.

Amazon Simple Storage Service

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9s) of durability and stores data for millions of applications for companies all around the world.

Amazon S3 Glacier

Amazon S3 Glacier is a secure, durable, and extremely low-cost storage service for data archiving and long-term backup. It is designed to deliver 99.999999999% durability and provides comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements. Amazon S3 Glacier provides query-in-place functionality, allowing you to run powerful analytics directly on your archive data at rest. You can store data for as little as $1 per terabyte per month, a significant savings compared to on-premises solutions. To keep costs low yet suitable for varying retrieval needs, Amazon S3 Glacier provides three options for access to archives, from a few minutes to several hours, and S3 Glacier Deep Archive provides two access options ranging from 12 to 48 hours.

AWS Backup

AWS Backup enables you to centralize and automate data protection across AWS services. AWS Backup offers a cost-effective, fully managed, policy-based service that further simplifies data protection at scale. AWS Backup also helps you support your regulatory compliance or business policies for data protection.
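The 11-nines durability figure quoted above for Amazon S3 and S3 Glacier can be made concrete with a back-of-the-envelope calculation. This uses a simplified model that treats durability as an independent annual loss probability per object, which is an assumption for illustration rather than AWS's formal definition.

```python
def expected_annual_loss(object_count, durability=0.99999999999):
    """Expected number of objects lost per year under a simplified
    independent-loss reading of the durability figure."""
    return object_count * (1 - durability)

# Storing 10 million objects for a year:
print(expected_annual_loss(10_000_000))  # roughly 0.0001 objects on average
```

In other words, under this simplified model, a customer with 10 million objects would expect to lose a single object about once every ten thousand years.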
Together with AWS Organizations, AWS Backup enables you to centrally deploy data protection policies to configure, manage, and govern your backup activity across your organization's AWS accounts and resources, including Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Elastic Block Store (Amazon EBS) volumes, Amazon Relational Database Service (Amazon RDS) databases (including Amazon Aurora clusters), Amazon DynamoDB tables, Amazon Elastic File System (Amazon EFS) file systems, Amazon FSx for Lustre file systems, Amazon FSx for Windows File Server file systems, and AWS Storage Gateway volumes.

AWS Storage Gateway

AWS Storage Gateway is a hybrid storage service that enables your on-premises applications to seamlessly use AWS cloud storage. You can use the service for backup and archiving, disaster recovery, cloud data processing, storage tiering, and migration. Your applications connect to the service through a virtual machine or hardware gateway appliance using standard storage protocols, such as NFS, SMB, and iSCSI. The gateway connects to AWS storage services, such as Amazon S3, S3 Glacier, and Amazon EBS, providing storage for files, volumes, and virtual tapes in AWS. The service includes a highly optimized data transfer mechanism, with bandwidth management, automated network resilience, and efficient data transfer, along with a local cache for low-latency on-premises access to your most active data.

Next Steps

Reinvent how you work with IT by signing up for the AWS Free Tier, which enables you to gain hands-on experience with a broad selection of AWS products and services. Within the AWS Free Tier, you can test workloads and run applications to learn more and build the right solution for your organization. You can also contact AWS Sales and Business Development. By signing up for AWS, you have access to Amazon's cloud computing services. Note: the sign-up process requires a credit card, which will not be charged until you start using services. There are no long-term commitments, and you can stop using AWS at any time.

To help familiarize you with AWS, view these short videos that cover topics like creating an account, launching a virtual server, storing media, and more. Learn about the breadth and depth of AWS on our general AWS Channel and AWS Online Tech Talks. Get hands-on experience from our self-paced labs.

Conclusion

AWS provides building blocks that you can assemble quickly to support virtually any workload. With AWS, you'll find a complete set of highly available services that are designed to work together to build sophisticated, scalable applications. You have access to highly durable storage, low-cost compute, high-performance databases, management tools, and more. All this is available without upfront cost, and you pay for only what you use. These services help organizations move faster, lower IT costs, and scale. AWS is trusted by the largest enterprises and the hottest startups to power a wide variety of workloads, including web and mobile applications, game development, data processing and warehousing, storage, archive, and many others.

Resources
• AWS Architecture Center
• AWS Whitepapers
• AWS Architecture Monthly
• AWS Architecture Blog
• This Is My Architecture videos
• AWS Documentation

Contributors

The following individuals and organizations contributed to this document:
• Sajee Mathew, AWS Principal Solutions Architect

Document Revisions

To be notified about updates to this whitepaper, subscribe to the RSS feed.
• Whitepaper updated, August 5, 2021: added new services and updated information throughout
• Minor update, April 12, 2021: minor text updates to improve accuracy and fix links
• Minor update, November 20, 2020: minor text updates to improve accuracy
• Minor update, November 19, 2020: fixed incorrect link
• Minor update, August 11, 2020: fixed incorrect link
• Minor update, July 17, 2020: fixed incorrect link
• Minor updates, January 1, 2020: minor text updates to improve accuracy
• Minor updates, October 1, 2019: minor text updates to improve accuracy
• Whitepaper updated, December 1, 2018: added new services and updated information throughout
• Whitepaper updated, April 1, 2017: added new services and updated information throughout
• Initial publication, January 1, 2014: Overview of Amazon Web Services published

AWS Glossary

For the latest AWS terminology, see the AWS glossary in the AWS General Reference.

Overview of AWS Security: Analytics Services, Mobile and Applications Services

June 2016

THIS PAPER HAS BEEN ARCHIVED. Please consult http://aws.amazon.com/security/ for the latest version of this paper; for the latest technical content, see https://docs.aws.amazon.com/security/.

Analytics Services

Amazon Web Services provides cloud-based analytics services to help you process and analyze any volume of data, whether your need is for managed Hadoop clusters, real-time streaming data, petabyte-scale data warehousing, or orchestration.

Amazon Elastic MapReduce (Amazon EMR) Security

Amazon Elastic MapReduce (Amazon EMR) is a managed web service you can use to run Hadoop clusters that process vast amounts of data by distributing the work and data among several servers. It utilizes an enhanced version of the Apache Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3. You simply upload your input data and a data processing application into Amazon S3; Amazon EMR then launches the number of Amazon EC2 instances you specify. The service begins the job flow execution while pulling the input data from Amazon S3 into the launched Amazon EC2 instances. Once the job flow is finished, Amazon EMR transfers the output data to Amazon S3, where you can then retrieve it or use it as input in another job flow.

When launching job flows on your behalf, Amazon EMR sets up two Amazon EC2 security groups: one for the master nodes and another for the slaves. The master security group has a port open for communication with the service; it also has the SSH port open to allow you to SSH into the instances using the key specified at startup. The slaves start in a separate security group, which only allows interaction with the master instance. By default, both security groups are set up to not allow access from external sources, including Amazon EC2 instances belonging to other customers. Since these are security groups within your account, you can reconfigure them using the standard EC2 tools or dashboard. To protect customer input and output datasets, Amazon EMR transfers data to and from Amazon S3 using SSL.

Amazon EMR provides several ways to control access to the resources of your cluster. You can use AWS IAM to create user accounts and roles and configure permissions that control which AWS features those users and roles can access. When you launch a cluster, you can associate an Amazon EC2 key pair with the cluster, which you can then use when you connect to the cluster using SSH. You can also set permissions that allow users other than the default Hadoop user to submit jobs to your cluster. By default, if an IAM user launches a cluster, that cluster is hidden from other IAM users on the AWS account.
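The launch-time controls just described (an EC2 key pair for SSH access and per-user cluster visibility) can be sketched as a request builder. The names, instance types, and release label below are hypothetical, and the boto3 call is only shown in a comment so the sketch runs standalone.

```python
def build_emr_request(name, key_pair, instance_count):
    """Assemble a minimal cluster request reflecting the controls above.

    With boto3, this dict would be passed to:
        boto3.client("emr").run_job_flow(**params)
    """
    return {
        "Name": name,
        "ReleaseLabel": "emr-5.30.0",  # hypothetical release label
        "Instances": {
            "Ec2KeyName": key_pair,           # key specified at startup, for SSH access
            "InstanceCount": instance_count,  # master plus workers
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
        },
        "VisibleToAllUsers": False,  # keep the cluster hidden from other IAM users
    }

params = build_emr_request("log-processing", "analytics-keypair", 4)
print(params["Instances"]["Ec2KeyName"])
```

Flipping `VisibleToAllUsers` to `True` corresponds to the option, described below, of making a cluster visible to all IAM users under the account.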
This filtering occurs on all Amazon EMR interfaces (the console, CLI, API, and SDKs) and helps prevent IAM users from accessing and inadvertently changing clusters created by other IAM users. It is useful for clusters that are intended to be viewed by only a single IAM user and the main AWS account. You also have the option to make a cluster visible and accessible to all IAM users under a single AWS account.

For an additional layer of protection, you can launch the EC2 instances of your EMR cluster into an Amazon VPC, which is like launching it into a private subnet. This allows you to control access to the entire subnetwork. You can also launch the cluster into a VPC and enable the cluster to access resources on your internal network using a VPN connection. You can encrypt the input data before you upload it to Amazon S3 using any common data encryption tool; if you do encrypt the data before it is uploaded, you then need to add a decryption step to the beginning of your job flow when Amazon Elastic MapReduce fetches the data from Amazon S3.

Amazon Kinesis Security

Amazon Kinesis is a managed service designed to handle real-time streaming of big data. It can accept any amount of data, from any number of sources, scaling up and down as needed. You can use Kinesis in situations that call for large-scale, real-time data ingestion and processing, such as server logs, social media or market data feeds, and web clickstream data.

Applications read and write data records to Amazon Kinesis in streams. You can create any number of Kinesis streams to capture, store, and transport data. Amazon Kinesis automatically manages the infrastructure, storage, networking, and configuration needed to collect and process your data at the level of throughput your streaming applications need. You don't have to worry about provisioning, deployment, or ongoing maintenance of hardware, software, or other services to enable real-time capture and storage of large-scale data. Amazon Kinesis also synchronously replicates data across three facilities in an AWS Region, providing high availability and data durability.

In Amazon Kinesis, data records contain a sequence number, a partition key, and a data blob, which is an uninterpreted, immutable sequence of bytes. The Amazon Kinesis service does not inspect, interpret, or change the data in the blob in any way. Data records are accessible for only 24 hours from the time they are added to an Amazon Kinesis stream, and then they are automatically discarded.

Your application is a consumer of an Amazon Kinesis stream, which typically runs on a fleet of Amazon EC2 instances. A Kinesis application uses the Amazon Kinesis Client Library to read from the Amazon Kinesis stream. The Kinesis Client Library takes care of a variety of details for you, including failover, recovery, and load balancing, allowing your application to focus on processing the data as it becomes available. After processing the record, your consumer code can pass it along to another Kinesis stream; write it to an Amazon S3 bucket, a Redshift data warehouse, or a DynamoDB table; or simply discard it. A connector library is available to help you integrate Kinesis with other AWS services (such as DynamoDB, Redshift, and Amazon S3) as well as third-party products like Apache Storm.

You can control logical access to Kinesis resources and management functions by creating users under your AWS Account using AWS IAM and controlling which Kinesis operations these users have permission to perform. To facilitate running your producer or consumer applications on an Amazon EC2 instance, you can configure that instance with an IAM role. That way, AWS credentials that reflect the permissions associated with the IAM role are made available to applications on the instance, which means you don't have to use your long-term AWS security credentials. Roles have the added benefit of providing temporary credentials that expire within a short timeframe, which adds an additional measure of protection.
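The partition key mentioned in the record structure above determines which shard of a stream receives a record; Kinesis derives a hash from the key for this routing. The sketch below mimics that idea with an MD5 hash over an evenly split hash-key space. It is a simplification for illustration: the real service assigns each shard an explicit 128-bit hash-key range.

```python
import hashlib

def shard_for(partition_key, shard_count):
    """Map a partition key onto one of `shard_count` evenly split shards,
    mimicking Kinesis's hash-based routing (simplified: the real service
    tracks explicit hash-key ranges per shard)."""
    hash_key = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    space = 2 ** 128
    return min(hash_key // (space // shard_count), shard_count - 1)

# Records with the same partition key always land on the same shard,
# which preserves per-key ordering:
print(shard_for("user-42", 2) == shard_for("user-42", 2))  # True
```

This is why choosing a high-cardinality partition key matters: it spreads records, and therefore throughput, evenly across shards.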
See the Using IAM Guide for more information about IAM roles.

The Amazon Kinesis API is only accessible via an SSL-encrypted endpoint (kinesis.us-east-1.amazonaws.com) to help ensure secure transmission of your data to AWS. You must connect to that endpoint to access Kinesis, but you can then use the API to direct Amazon Kinesis to create a stream in any AWS Region.

AWS Data Pipeline Security

The AWS Data Pipeline service helps you process and move data between different data sources at specified intervals, using data-driven workflows and built-in dependency checking. When you create a pipeline, you define data sources, preconditions, destinations, processing steps, and an operational schedule. Once you define and activate a pipeline, it will run automatically according to the schedule you specified.

With AWS Data Pipeline, you don't have to worry about checking resource availability, managing inter-task dependencies, retrying transient failures or timeouts in individual tasks, or creating a failure notification system. AWS Data Pipeline takes care of launching the AWS services and resources your pipeline needs to process your data (e.g., Amazon EC2 or EMR) and transferring the results to storage (e.g., Amazon S3, RDS, DynamoDB, or EMR).

When you use the console, AWS Data Pipeline creates the necessary IAM roles and policies, including a trusted entities list, for you. IAM roles determine what your pipeline can access and the actions it can perform. Additionally, when your pipeline creates a resource, such as an EC2 instance, IAM roles determine the EC2 instance's permitted resources and actions. When you create a pipeline, you specify one IAM role that governs your pipeline and another IAM role to govern your pipeline's resources (referred to as a "resource role"), which can be the same role for both. As part of the security best practice of least privilege, we recommend that you consider the minimum permissions necessary for your pipeline to perform work and define the IAM roles accordingly. Like most AWS services, AWS Data Pipeline also provides the option of secure (HTTPS) endpoints for access via SSL.

Deployment and Management Services

Amazon Web Services provides a variety of tools to help with the deployment and management of your applications. These include services that allow you to create individual user accounts with credentials for access to AWS services. They also include services for creating and updating stacks of AWS resources, deploying applications on those resources, and monitoring the health of those AWS resources. Other tools help you manage cryptographic keys using hardware security modules (HSMs) and log AWS API activity for security and compliance purposes.

AWS Identity and Access Management (AWS IAM)

AWS IAM allows you to create multiple users and manage the permissions for each of these users within your AWS Account. A user is an identity (within an AWS Account) with unique security credentials that can be used to access AWS services. AWS IAM eliminates the need to share passwords or keys and makes it easy to enable or disable a user's access as appropriate. AWS IAM enables you to implement security best practices, such as least privilege, by granting unique credentials to every user within your AWS Account and only granting permission to access the AWS services and resources required for the users to perform their jobs. AWS IAM is secure by default; new users have no access to AWS until permissions are explicitly granted.

AWS IAM is also integrated with the AWS Marketplace, so that you can control who in your organization can subscribe to the software and services offered in the Marketplace. Since subscribing to certain software in the Marketplace launches an EC2 instance to run the software, this is an important access control feature. Using AWS IAM to control access to the AWS Marketplace also enables AWS Account owners to have fine-grained control over usage and software costs.
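The least-privilege practice recommended above, for pipeline roles and IAM users alike, comes down to writing policy documents that grant only what the job requires. The sketch below builds a minimal read-only policy for a single S3 bucket; the bucket name is hypothetical, and the policy shape follows the standard IAM policy document format.

```python
import json

def least_privilege_policy(bucket):
    """A minimal IAM policy document granting read-only access to one
    bucket: nothing beyond what the workload requires."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",      # the bucket itself (for ListBucket)
                f"arn:aws:s3:::{bucket}/*",    # the objects within it (for GetObject)
            ],
        }],
    }

policy = least_privilege_policy("pipeline-input")  # hypothetical bucket name
print(json.dumps(policy, indent=2))
```

Starting from a document like this and adding permissions only as the workload demonstrably needs them is the practical form of the least-privilege advice.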
AWS IAM enables you to minimize the use of your AWS Account credentials. Once you create AWS IAM user accounts, all interactions with AWS services and resources should occur with AWS IAM user security credentials. More information about AWS IAM is available on the AWS website.

Roles

An IAM role uses temporary security credentials to allow you to delegate access to users or services that normally don't have access to your AWS resources. A role is a set of permissions to access specific AWS resources, but these permissions are not tied to a specific IAM user or group. An authorized entity (e.g., a mobile user or an EC2 instance) assumes a role and receives temporary security credentials for authenticating to the resources defined in the role. Temporary security credentials provide enhanced security due to their short lifespan (the default expiration is 12 hours) and the fact that they cannot be reused after they expire. This can be particularly useful in providing limited, controlled access in certain situations:

• Federated (non-AWS) user access. Federated users are users (or applications) who do not have AWS Accounts. With roles, you can give them access to your AWS resources for a limited amount of time. This is useful if you have non-AWS users that you can authenticate with an external service, such as Microsoft Active Directory, LDAP, or Kerberos. The temporary AWS credentials used with the roles provide identity federation between AWS and your non-AWS users in your corporate identity and authorization system.

• SAML 2.0 federation. If your organization supports SAML 2.0 (Security Assertion Markup Language 2.0), you can create trust between your organization as an identity provider (IdP) and other organizations as service providers. In AWS, you can configure AWS as the service provider and use SAML to provide your users with federated single sign-on (SSO) to the AWS Management Console or to get federated access to call AWS APIs.

• Mobile and web applications. Roles are also useful if you create a mobile or web-based application that accesses AWS resources. AWS resources require security
credentials for programmatic requests; however, you shouldn't embed long-term security credentials in your application, because they are accessible to the application's users and can be difficult to rotate. Instead, you can let users sign in to your application using Login with Amazon, Facebook, or Google, and then use their authentication information to assume a role and get temporary security credentials.

• Cross-account access. For organizations that use multiple AWS Accounts to manage their resources, you can set up roles to provide users who have permissions in one account with access to resources under another account. For organizations with personnel who only rarely need access to resources under another account, using roles helps ensure that credentials are provided temporarily, only as needed.

• Applications running on EC2 instances that need to access AWS resources. If an application runs on an Amazon EC2 instance and needs to make requests for AWS resources, such as Amazon S3 buckets or a DynamoDB table, it must have security credentials. Using roles instead of creating individual IAM accounts for each application on each instance can save significant time for customers who manage a large number of instances or an elastically scaling fleet using AWS Auto Scaling.

The temporary credentials include a security token, an Access Key ID, and a Secret Access Key. To give a user access to certain resources, you distribute the temporary security credentials to the user you are granting temporary access to. When the user makes calls to your resources, the user passes in the token and Access Key ID and signs the request with the Secret Access Key. The token will not work with different access keys. How the user passes in the token depends on the API and version of the AWS product the user is making calls to. More information about temporary security credentials is available on the AWS website.
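The short lifespan of these temporary credentials, with the 12-hour default expiration mentioned above, is easy to model. The sketch below is an illustration of the expiry rule, not code from any AWS SDK; the timestamps are hypothetical.

```python
from datetime import datetime, timedelta, timezone

DEFAULT_LIFETIME = timedelta(hours=12)  # the default expiration noted above

def credentials_expired(issued_at, now=None, lifetime=DEFAULT_LIFETIME):
    """True once temporary credentials are past their lifetime; after that
    they cannot be reused, and the role must be assumed again."""
    now = now or datetime.now(timezone.utc)
    return now >= issued_at + lifetime

issued = datetime(2016, 6, 1, 8, 0, tzinfo=timezone.utc)
print(credentials_expired(issued, now=datetime(2016, 6, 1, 19, 0, tzinfo=timezone.utc)))   # False
print(credentials_expired(issued, now=datetime(2016, 6, 1, 20, 30, tzinfo=timezone.utc)))  # True
```

This forced turnover is the protection the text describes: even leaked temporary credentials become useless within hours.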
The use of temporary credentials means additional protection for you, because you don't have to manage or distribute long-term credentials to temporary users. In addition, the temporary credentials get automatically loaded to the target instance, so you don't have to embed them somewhere unsafe like your code. Temporary credentials are automatically rotated, or changed, multiple times a day without any action on your part, and are stored securely by default.

Amazon CloudWatch Security

Amazon CloudWatch is a web service that provides monitoring for AWS cloud resources, starting with Amazon EC2. It provides customers with visibility into resource utilization, operational performance, and overall demand patterns, including metrics such as CPU utilization, disk reads and writes, and network traffic. You can set up CloudWatch alarms to notify you if certain thresholds are crossed, or to take other automated actions, such as adding or removing EC2 instances if Auto Scaling is enabled.

CloudWatch captures and summarizes utilization metrics natively for AWS resources, but you can also have other logs sent to CloudWatch to monitor. You can route your guest OS, application, and custom log files for the software installed on your EC2 instances to CloudWatch, where they will be stored in durable fashion for as long as you'd like. You can configure CloudWatch to monitor the incoming log entries for any desired symbols or messages and to surface the results as CloudWatch metrics. You could, for example, monitor your web server's log files for 404 errors to detect bad inbound links, or for invalid user messages to detect unauthorized login attempts to your guest OS.

Like all AWS services, Amazon CloudWatch requires that every request made to its control API be authenticated, so only authenticated users can access and manage CloudWatch. Requests are signed with an HMAC-SHA1 signature calculated from the request and the user's private key. Additionally, the Amazon CloudWatch control API is only accessible via SSL-encrypted endpoints.
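The HMAC-SHA1 signing described above can be sketched with the standard library. This is a simplified illustration of signing a canonical request string; AWS's actual canonicalization rules are more involved, and the key and request string below are illustrative placeholders.

```python
import base64
import hashlib
import hmac

def sign_request(secret_key, string_to_sign):
    """HMAC-SHA1 over a canonical request string, base64-encoded, in the
    spirit of the legacy AWS signing scheme (real canonicalization rules
    are more involved than a single string)."""
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Illustrative key and canonical string, not real credentials or AWS's exact format:
sig = sign_request("wJalrXUtnFEMI/EXAMPLEKEY",
                   "GET\nmonitoring.amazonaws.com\n/\nAction=ListMetrics")
print(sig)  # deterministic for a given key and request
```

Because only the holder of the secret key can produce a matching signature, the service can authenticate the request without the key ever being transmitted.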
users under your AWS Account using AWS IAM and controlling what CloudWatch operations these users have permission to call AWS CloudHSM Security The AWS CloudHSM service provides customers with dedicated access to a hardware security module (HSM) appliance designed to provide secure cryptographic key storage and operations Archived Page 8 of 13 within an intrusion resistant tamper evident device You can generate store and manage the cryptographic keys used for data encryption so that they are accessible only by you AWS CloudHSM appliances are designed to securely store and process cryptographic key material for a wide variety of uses such as database encryption Digital Rights Management (DRM) Public Key Infrastructure (PKI) authentication and authorization document signing and transaction processing They support some of the strongest cryptographic algorithms available including AES RSA and ECC and many others The AWS CloudHSM service is designed to be used with Amazon EC2 and VPC providi ng the appliance with its own private IP within a private subnet You can connect to CloudHSM appliances from your EC2 servers through SSL/TLS which uses two way digital certificate authentication and 256 bit SSL encryption to provide a secure communicati on channel Selecting CloudHSM service in the same region as your EC2 instance decreases network latency which can improve your application performance You can configure a client on your EC2 instance that allows your applications to use the APIs provide d by the HSM including PKCS#11 MS CAPI and Java JCA/JCE (Java Cryptography Architecture/Java Cryptography Extensions) Before you begin using an HSM you must set up at least one partition on the appliance A cryptographic partition is a logical and phy sical security boundary that restricts access to your keys so only you control your keys and the operations performed by the HSM AWS has administrative credentials to the appliance but these credentials can only be used to manage the 
appliance, not the HSM partitions on the appliance. AWS uses these credentials to monitor and maintain the health and availability of the appliance. AWS cannot extract your keys, nor can AWS cause the appliance to perform any cryptographic operation using your keys.

The HSM appliance has both physical and logical tamper detection and response mechanisms that erase the cryptographic key material and generate event logs if tampering is detected. The HSM is designed to detect tampering if the physical barrier of the HSM appliance is breached. In addition, after three unsuccessful attempts to access an HSM partition with HSM Admin credentials, the HSM appliance erases its HSM partitions.

When your CloudHSM subscription ends and you have confirmed that the contents of the HSM are no longer needed, you must delete each partition and its contents as well as any logs. As part of the decommissioning process, AWS zeroizes the appliance, permanently erasing all key material.

Mobile Services
AWS mobile services make it easier for you to build, ship, run, monitor, optimize, and scale cloud-powered applications for mobile devices. These services also help you authenticate users to your mobile application, synchronize data, and collect and analyze application usage.

Amazon Cognito
Amazon Cognito provides identity and sync services for mobile and web-based applications. It simplifies the task of authenticating users and storing, managing, and syncing their data across multiple devices, platforms, and applications. It provides temporary, limited-privilege credentials for both authenticated and unauthenticated users without your having to manage any backend infrastructure.

Cognito works with well-known identity providers like Google, Facebook, and Amazon to authenticate end users of your mobile and web applications. You can take advantage of the identification and authorization features provided by these services instead of having to build and maintain your own. Your application authenticates with
one of these identity providers using the provider’s SDK. Once the end user is authenticated with the provider, an OAuth or OpenID Connect token returned from the provider is passed by your application to Cognito, which returns a new Cognito ID for the user and a set of temporary, limited-privilege AWS credentials.

To begin using Amazon Cognito, you create an identity pool through the Amazon Cognito console. The identity pool is a store of user identity information that is specific to your AWS account. During the creation of the identity pool, you will be asked to create a new IAM role or pick an existing one for your end users. An IAM role is a set of permissions to access specific AWS resources, but these permissions are not tied to a specific IAM user or group. An authorized entity (e.g., a mobile user or an EC2 instance) assumes a role and receives temporary security credentials for authenticating to the AWS resources defined in the role. Temporary security credentials provide enhanced security due to their short life span (the default expiration is 12 hours) and the fact that they cannot be reused after they expire. The role you select has an impact on which AWS services your end users will be able to access with the temporary credentials. By default, Amazon Cognito creates a new role with limited permissions; end users only have access to the Cognito Sync service and Amazon Mobile Analytics. If your application needs access to other AWS resources, such as Amazon S3 or DynamoDB, you can modify your roles directly from the IAM management console.

With Amazon Cognito, there’s no need to create individual AWS accounts or even IAM accounts for every one of your web/mobile app’s end users who will need to access your AWS resources. In conjunction with IAM roles, mobile users can securely access AWS resources and application features, and even save data to the AWS cloud, without having to create an account or log in. However, if they choose to do this later, Cognito will merge data and identification information.
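The token-exchange flow described above can be sketched as the two request bodies an application (or, in practice, the SDK on its behalf) sends to the Cognito Identity API: a GetId call that trades the provider token for a Cognito ID, then a GetCredentialsForIdentity call that trades the Cognito ID for temporary credentials. The pool ID, identity ID, and provider token below are placeholders.

```python
import json

# Step 1 (GetId): present the identity pool and the token obtained from the
# identity provider (here Facebook, keyed by its documented provider name).
POOL_ID = "us-east-1:00000000-0000-0000-0000-000000000000"  # hypothetical pool
provider_token = "token-from-provider"                       # placeholder OAuth/OIDC token

get_id_request = {
    "IdentityPoolId": POOL_ID,
    "Logins": {"graph.facebook.com": provider_token},
}

# Step 2 (GetCredentialsForIdentity): exchange the Cognito ID returned by
# step 1 for temporary, limited-privilege AWS credentials scoped by the
# identity pool's IAM role.
get_credentials_request = {
    "IdentityId": "us-east-1:11111111-1111-1111-1111-111111111111",  # placeholder
    "Logins": {"graph.facebook.com": provider_token},
}

print(json.dumps(get_id_request, indent=2))
```

The response to the second call carries an access key ID, secret key, and session token that expire on the schedule described above.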
Because Amazon Cognito stores data locally as well as in the service, your end users can continue to interact with their data even when they are offline. Their offline data may be stale, but anything they put into the dataset they can immediately retrieve, whether they are online or not. The client SDK manages a local SQLite store so that the application can work even when it is not connected. The SQLite store functions as a cache and is the target of all read and write operations. Cognito’s sync facility compares the local version of the data to the cloud version, and pushes up or pulls down deltas as needed. Note that in order to sync data across devices, your identity pool must support authenticated identities. Unauthenticated identities are tied to the device, so unless an end user authenticates, no data can be synced across multiple devices.

With Cognito, your application communicates directly with a supported public identity provider (Amazon, Facebook, or Google) to authenticate users. Amazon Cognito does not receive or store user credentials, only the OAuth or OpenID Connect token received from the identity provider. Once Cognito receives the token, it returns a new Cognito ID for the user and a set of temporary, limited-privilege AWS credentials.

Each Cognito identity has access only to its own data in the sync store, and this data is encrypted when stored. In addition, all identity data is transmitted over HTTPS. The unique Amazon Cognito identifier on the device is stored in the appropriate secure location; on iOS, for example, the Cognito identifier is stored in the iOS keychain. User data is cached in a local SQLite database within the application’s sandbox; if you require additional security, you can encrypt this identity data in the local cache by implementing encryption in your application.
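The local-store-as-cache pattern described above can be illustrated with a small SQLite sketch: all reads and writes hit the local store immediately (so they work offline), and rows changed since the last sync are flagged so a sync pass can push deltas to the cloud. This is a simplified illustration of the pattern, not Amazon Cognito’s actual on-device schema.

```python
import sqlite3

# A minimal local cache: every write is applied locally and marked dirty;
# a sync pass would push dirty rows to the cloud store and clear the flag.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE dataset (key TEXT PRIMARY KEY, value TEXT, dirty INTEGER)")

def put(key, value):
    # Writes hit the local store immediately, so they work offline.
    db.execute(
        "INSERT INTO dataset VALUES (?, ?, 1) "
        "ON CONFLICT(key) DO UPDATE SET value = ?, dirty = 1",
        (key, value, value),
    )

def get(key):
    row = db.execute("SELECT value FROM dataset WHERE key = ?", (key,)).fetchone()
    return row[0] if row else None

def pending_deltas():
    # Rows a sync pass would push up to the cloud version of the dataset.
    return db.execute("SELECT key, value FROM dataset WHERE dirty = 1").fetchall()

put("theme", "dark")
print(get("theme"))         # readable immediately, online or not
print(pending_deltas())     # the delta a sync pass would push
```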
Amazon Mobile Analytics
Amazon Mobile Analytics is a service for collecting, visualizing, and understanding mobile application usage data. It enables you to track customer behaviors, aggregate metrics, and identify meaningful patterns in your mobile applications. Amazon Mobile Analytics automatically calculates and updates usage metrics as the data is received from client devices running your app, and displays the data in the console.

You can integrate Amazon Mobile Analytics with your application without requiring users of your app to be authenticated with an identity provider (like Google, Facebook, or Amazon). For these unauthenticated users, Mobile Analytics works with Amazon Cognito to provide temporary, limited-privilege credentials. To do this, you first create an identity pool in Cognito. The identity pool uses IAM roles, which are sets of permissions not tied to a specific IAM user or group but which allow an entity to access specific AWS resources; the entity assumes a role and receives temporary security credentials for authenticating to the AWS resources defined in the role. By default, Amazon Cognito creates a new role with limited permissions; end users only have access to the Cognito Sync service and Amazon Mobile Analytics. If your application needs access to other AWS resources, such as Amazon S3 or DynamoDB, you can modify your roles directly from the IAM management console.

You can integrate the AWS Mobile SDK for Android or iOS into your application, or use the Amazon Mobile Analytics REST API to send events from any connected device or service, and visualize the data in reports. The Amazon Mobile Analytics API is only accessible via an SSL-encrypted endpoint.

Applications
AWS applications are managed services that enable you to provide your users with secure, centralized storage and work areas in the cloud.

Amazon WorkSpaces
Amazon WorkSpaces is a managed desktop service that allows you to quickly provision cloud-based desktops for your users. Simply choose a Windows 7 bundle that best meets the needs of your users and the number of WorkSpaces that you would like to launch. Once the WorkSpaces are ready, users
receive an email informing them where they can download the relevant client and log in to their WorkSpace. They can then access their cloud-based desktops from a variety of endpoint devices, including PCs, laptops, and mobile devices. However, your organization’s data is never sent to or stored on the end-user device, because Amazon WorkSpaces uses PC-over-IP (PCoIP), which provides an interactive video stream without transmitting actual data. The PCoIP protocol compresses, encrypts, and encodes the user’s desktop computing experience and transmits ‘pixels only’ across any standard IP network to end-user devices.

In order to access their WorkSpace, users must sign in using a set of unique credentials or their regular Active Directory credentials. When you integrate Amazon WorkSpaces with your corporate Active Directory, each WorkSpace joins your Active Directory domain and can be managed just like any other desktop in your organization. This means that you can use Active Directory Group Policies to manage your users’ WorkSpaces and specify configuration options that control the desktop. If you choose not to use Active Directory or another type of on-premises directory to manage your user WorkSpaces, you can create a private cloud directory within Amazon WorkSpaces that you can use for administration.

To provide an additional layer of security, you can also require the use of multi-factor authentication upon sign-in, in the form of a hardware or software token. Amazon WorkSpaces supports MFA using an on-premises Remote Authentication Dial-In User Service (RADIUS) server, or any security provider that supports RADIUS authentication. It currently supports the PAP, CHAP, MS-CHAP1, and MS-CHAP2 protocols, along with RADIUS proxies.

Each WorkSpace resides on its own EC2 instance within a VPC. You can create WorkSpaces in a VPC you already own, or have the WorkSpaces service create one for you automatically using the WorkSpaces Quick Start option. When you use the Quick Start
option, WorkSpaces not only creates the VPC, but it also performs several other provisioning and configuration tasks for you, such as creating an Internet gateway for the VPC, setting up a directory within the VPC that is used to store user and WorkSpace information, creating a directory administrator account, creating the specified user accounts and adding them to the directory, and creating the WorkSpace instances. Alternatively, the VPC can be connected to an on-premises network using a secure VPN connection to allow access to an existing on-premises Active Directory and other intranet resources. You can add a security group that you create in your Amazon VPC to all the WorkSpaces that belong to your directory. This allows you to control network access from Amazon WorkSpaces in your VPC to other resources in your Amazon VPC and on-premises network.

Persistent storage for WorkSpaces is provided by Amazon EBS and is automatically backed up twice a day to Amazon S3. If WorkSpaces Sync is enabled on a WorkSpace, the folder a user chooses to sync will be continuously backed up and stored in Amazon S3. You can also use WorkSpaces Sync on a Mac or PC to sync documents to or from your WorkSpace, so that you always have access to your data regardless of the desktop computer you are using.

Because it’s a managed service, AWS takes care of several security and maintenance tasks like daily backups and patching. Updates are delivered automatically to your WorkSpaces during a weekly maintenance window. You can control how patching is configured for a user’s WorkSpace: Windows Update is turned on by default and configured to install updates weekly, but you can customize these settings (for example, to perform updates at a time of your choosing) or use an alternative patch-management approach if you desire.

You can use IAM to control who on your team can perform administrative functions like creating or deleting WorkSpaces or setting up user directories.
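As a sketch of the IAM control just described, the following policy document would allow a team member to create, terminate, and describe WorkSpaces; the action names come from the WorkSpaces API, while the broad `"Resource": "*"` scope is illustrative and would normally be narrowed.

```python
import json

# Sketch: an IAM policy granting WorkSpaces administrative actions.
# Action names are from the WorkSpaces API; the resource scope is illustrative.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "workspaces:CreateWorkspaces",
                "workspaces:TerminateWorkspaces",
                "workspaces:DescribeWorkspaces",
            ],
            "Resource": "*",  # illustrative; restrict to specific ARNs in practice
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Attaching this policy to an IAM group lets you grant the administrative functions to some users while withholding them from others.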
You can also set up a WorkSpace for directory administration, install your favorite Active Directory administration tools, and create organizational units and Group Policies in order to more easily apply Active Directory changes for all your WorkSpaces users.

Amazon WorkDocs
Amazon WorkDocs is a managed enterprise storage and sharing service with feedback capabilities for user collaboration. Users can store any type of file in a WorkDocs folder and allow others to view and download them. Commenting and annotation capabilities work on certain file types, such as MS Word, without requiring the application that was used to originally create the file. WorkDocs notifies contributors about review activities and deadlines via email, and performs versioning of files that you have synced using the WorkDocs Sync application.

User information is stored in an Active Directory-compatible network directory. You can either create a new directory in the cloud, or connect Amazon WorkDocs to your on-premises directory. When you create a cloud directory using WorkDocs’ quick start setup, it also creates a directory administrator account with the administrator email as the username. An email is sent to your administrator with instructions to complete registration. The administrator then uses this account to manage your directory.

When you create a cloud directory using WorkDocs’ quick start setup, it also creates and configures a VPC for use with the directory. If you need more control over the directory configuration, you can choose the standard setup, which allows you to specify your own directory domain name as well as one of your existing VPCs to use with the directory. If you want to use one of your existing VPCs, the VPC must have an Internet gateway and at least two subnets, each in a different Availability Zone.

Using the Amazon WorkDocs
Management Console, administrators can view audit logs to track file and user activity by time, IP address, and device, and choose whether to allow users to share files with others outside their organization. Users can then control who can access individual files and disable downloads of files they share.

All data in transit is encrypted using industry-standard SSL. The WorkDocs web and mobile applications and desktop sync clients transmit files directly to Amazon WorkDocs using SSL. WorkDocs users can also utilize multi-factor authentication (MFA) if their organization has deployed a RADIUS server. MFA uses the following factors: username, password, and methods supported by the RADIUS server. The protocols supported are PAP, CHAP, MS-CHAPv1, and MS-CHAPv2.

You choose the AWS Region where each WorkDocs site’s files are stored. Amazon WorkDocs is currently available in the US East (Virginia), US West (Oregon), and EU (Ireland) AWS Regions. All files, comments, and annotations stored in WorkDocs are automatically encrypted with AES-256 encryption.

Further Reading
https://aws.amazon.com/security/security-resources/
• Introduction to AWS Security Processes
• Overview of AWS Security – Storage Services
• Overview of AWS Security – Database Services
• Overview of AWS Security – Compute Services
• Overview of AWS Security – Application Services
• Overview of AWS Security – Analytics, Mobile and Application Services
• Overview of AWS Security – Network Services

Overview of AWS Security – Application Services
June 2016
(Please consult http://aws.amazon.com/security/ for the latest version of this paper)

THIS PAPER HAS BEEN ARCHIVED
For the latest technical content, see https://docs.aws.amazon.com/security/

© 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS’ current product offerings and
practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’ products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Application Services
Amazon Web Services offers a variety of managed services to use with your applications, including services that provide application streaming, queueing, push notification, email delivery, search, and transcoding.

Amazon CloudSearch Security
Amazon CloudSearch is a managed service in the cloud that makes it easy to set up, manage, and scale a search solution for your website. Amazon CloudSearch enables you to search large collections of data such as web pages, document files, forum posts, or product information. It enables you to quickly add search capabilities to your website without having to become a search expert or worry about hardware provisioning, setup, and maintenance. As your volume of data and traffic fluctuates, Amazon CloudSearch automatically scales to meet your needs.

An Amazon CloudSearch domain encapsulates a collection of data you want to search, the search instances that process your search requests, and a configuration that controls how your data is indexed and searched. You create a separate search domain for each collection of data you want to make searchable. For each domain, you configure indexing options that describe the fields you want to include in your index and how you want to use them, text options that define domain-specific stopwords, stems, and
synonyms, rank expressions that you can use to customize how search results are ranked, and access policies that control access to the domain’s document and search endpoints.

Access to your search domain’s endpoints is restricted by IP address so that only authorized hosts can submit documents and send search requests. IP address authorization is used only to control access to the document and search endpoints; all Amazon CloudSearch configuration requests must be authenticated using standard AWS authentication.

Amazon CloudSearch provides separate endpoints for accessing the configuration, search, and document services:
• You use the configuration service to create and manage your search domains. The region-specific configuration service endpoints are of the form cloudsearch.region.amazonaws.com, for example, cloudsearch.us-east-1.amazonaws.com. For a current list of supported regions, see Regions and Endpoints in the AWS General Reference.
• The document service endpoint is used to submit documents to the domain for indexing and is accessed through a domain-specific endpoint: http://doc-domainname-domainid.us-east-1.cloudsearch.amazonaws.com
• The search endpoint is used to submit search requests to the domain and is accessed through a domain-specific endpoint: http://search-domainname-domainid.us-east-1.cloudsearch.amazonaws.com

Note that if you do not have a static IP address, you must re-authorize your computer whenever your IP address changes. If your IP address is assigned dynamically, it is also likely that you’re sharing that address with other computers on your network. This means that when you authorize the IP address, all computers that share it will be able to access your search domain’s document service endpoint.

Like all AWS services, Amazon CloudSearch requires that every request made to its control API be authenticated, so only authenticated users can access and manage your CloudSearch domain. API requests are signed with an HMAC-SHA1 or HMAC-SHA256 signature calculated from the request and the user’s AWS Secret Access Key.
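The HMAC signing scheme described above can be sketched as follows: the request is canonicalized to a string, an HMAC is computed over it with the caller’s secret access key, and the service recomputes the same HMAC to authenticate the request. Real AWS request signing (Signature Version 2/4) canonicalizes far more of the request than shown here; the key below is the well-known example key from the AWS documentation, and the request string is illustrative.

```python
import base64
import hashlib
import hmac

# Simplified sketch of HMAC-SHA256 request signing. The secret key is the
# published AWS documentation example key; the string-to-sign is illustrative
# and omits most of the canonicalization a real signer performs.
secret_key = b"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
string_to_sign = "POST\ncloudsearch.us-east-1.amazonaws.com\n/\nAction=DescribeDomains"

signature = base64.b64encode(
    hmac.new(secret_key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
).decode("ascii")
print(signature)  # deterministic for a given key and request
```

Because only holders of the secret key can produce a matching signature, the service can reject any tampered or replayed-and-modified request.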
Additionally, the Amazon CloudSearch control API is accessible via SSL-encrypted endpoints. You can control access to Amazon CloudSearch management functions by creating users under your AWS Account using AWS IAM, and controlling which CloudSearch operations these users have permission to perform.

Amazon Simple Queue Service (Amazon SQS) Security
Amazon SQS is a highly reliable, scalable message queuing service that enables asynchronous message-based communication between distributed components of an application. The components can be computers or Amazon EC2 instances, or a combination of both. With Amazon SQS, you can send any number of messages to an Amazon SQS queue at any time from any component. The messages can be retrieved from the same component or a different one, right away or at a later time (within 14 days). Messages are highly durable; each message is persistently stored in highly available, highly reliable queues. Multiple processes can read from and write to an Amazon SQS queue at the same time without interfering with each other.

Amazon SQS access is granted based on an AWS Account or a user created with AWS IAM. Once authenticated, the AWS Account has full access to all user operations. An AWS IAM user, however, only has access to the operations and queues for which they have been granted access via policy. By default, access to each individual queue is restricted to the AWS Account that created it. However, you can allow other access to a queue, using either an SQS-generated policy or a policy you write.

Amazon SQS is accessible via SSL-encrypted endpoints. The encrypted endpoints are accessible both from the Internet and from within Amazon EC2. Data stored within Amazon SQS is not encrypted by AWS; however, the user can encrypt data before it is uploaded to Amazon SQS, provided that the application utilizing the queue has a means to decrypt the message when retrieved. Encrypting messages before sending them to Amazon SQS helps protect against access to sensitive customer data by unauthorized persons, including AWS.
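As a sketch of the queue policies mentioned above, the following policy document would grant a second AWS account permission to send messages to a queue; the account IDs and queue ARN are placeholders.

```python
import json

# Sketch: an SQS access policy allowing another account to send messages to a
# queue owned by account 444455556666. All identifiers are placeholders.
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # the other account
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:us-east-1:444455556666:my-queue",
        }
    ],
}
print(json.dumps(queue_policy))
```

Without a statement like this, only the queue owner’s account can access the queue, as described above.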
Amazon Simple Notification Service (Amazon SNS) Security
Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications.

Amazon SNS provides a simple web services interface that can be used to create topics that customers want to notify applications (or people) about, subscribe clients to these topics, publish messages, and have these messages delivered over the clients’ protocol of choice (i.e., HTTP/HTTPS, email, etc.). Amazon SNS delivers notifications to clients using a “push” mechanism that eliminates the need to periodically check or “poll” for new information and updates. Amazon SNS can be leveraged to build highly reliable, event-driven workflows and messaging applications without the need for complex middleware and application management. The potential uses for Amazon SNS include monitoring applications, workflow systems, time-sensitive information updates, mobile applications, and many others.

Amazon SNS provides access control mechanisms so that topics and messages are secured against unauthorized access. Topic owners can set policies for a topic that restrict who can publish or subscribe to a topic. Additionally, topic owners can encrypt transmission by specifying that the delivery mechanism must be HTTPS.

Amazon SNS access is granted based on an AWS Account or a user created with AWS IAM. Once authenticated, the AWS Account has full access to all user operations. An AWS IAM user, however, only has access to the operations and topics for which they have been granted access via policy. By default, access to each individual topic is restricted to the AWS Account that created it. However, you can allow other access to SNS, using either an SNS-generated policy or a policy you write.
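As a sketch of the topic controls described above, the following policy would allow a second account to subscribe to a topic while requiring HTTPS delivery; the ARNs are placeholders, and `sns:Protocol` is a documented SNS condition key.

```python
import json

# Sketch: an SNS topic policy restricting who may subscribe and requiring the
# HTTPS delivery protocol. Account ID and topic ARN are placeholders.
topic_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "sns:Subscribe",
            "Resource": "arn:aws:sns:us-east-1:444455556666:my-topic",
            # Only allow subscriptions whose delivery protocol is HTTPS.
            "Condition": {"StringEquals": {"sns:Protocol": "https"}},
        }
    ],
}
print(json.dumps(topic_policy))
```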
Amazon Simple Workflow Service (Amazon SWF) Security
The Amazon Simple Workflow Service (SWF) makes it easy to build applications that coordinate work across distributed components. Using Amazon SWF, you can structure the various processing steps in an application as “tasks” that drive work in distributed applications, and Amazon SWF coordinates these tasks in a reliable and scalable manner. Amazon SWF manages task execution dependencies, scheduling, and concurrency based on a developer’s application logic. The service stores tasks, dispatches them to application components, tracks their progress, and keeps their latest state.

Amazon SWF provides simple API calls that can be executed from code written in any language and run on your EC2 instances, or on any of your machines located anywhere in the world that can access the Internet. Amazon SWF acts as a coordination hub with which your application hosts interact. You create desired workflows, with their associated tasks and any conditional logic you wish to apply, and store them with Amazon SWF.

Amazon SWF access is granted based on an AWS Account or a user created with AWS IAM. All actors that participate in the execution of a workflow (deciders, activity workers, and workflow administrators) must be IAM users under the AWS Account that owns the Amazon SWF resources. You cannot grant users associated with other AWS Accounts access to your Amazon SWF workflows. An AWS IAM user, however, only has access to the workflows and resources for which they have been granted access via policy.

Amazon Simple Email Service (Amazon SES) Security
Amazon Simple Email Service (SES) is an outbound-only email-sending service built on Amazon’s reliable and scalable infrastructure. Amazon SES helps you maximize email deliverability and stay informed of the delivery status of your emails. Amazon SES integrates with other AWS services, making it easy to send emails from
applications hosted on services such as Amazon EC2.

With other email systems, it’s possible for a spammer to falsify an email header and spoof the originating email address so that it appears as though the email originated from a different source. To mitigate these problems, Amazon SES requires users to verify their email address or domain in order to confirm that they own it and to prevent others from using it. To verify a domain, Amazon SES requires the sender to publish a DNS record that Amazon SES supplies as proof of control over the domain. Amazon SES periodically reviews domain verification status and revokes verification in cases where it is no longer valid.

Amazon SES takes proactive steps to prevent questionable content from being sent, so that ISPs receive consistently high-quality email from our domains and therefore view Amazon SES as a trusted email origin. Below are some of the features that maximize deliverability and dependability for all of our senders:
• Amazon SES uses content-filtering technologies to help detect and block messages containing viruses or malware before they can be sent.
• Amazon SES maintains complaint feedback loops with major ISPs. Complaint feedback loops indicate which emails a recipient marked as spam. Amazon SES provides you access to these delivery metrics to help guide your sending strategy.
• Amazon SES uses a variety of techniques to measure the quality of each user’s sending. These mechanisms help identify and disable attempts to use Amazon SES for unsolicited mail, and detect other sending patterns that would harm Amazon SES’s reputation with ISPs, mailbox providers, and anti-spam services.
• Amazon SES supports authentication mechanisms such as Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM). When you authenticate an email, you provide evidence to ISPs that you own the domain. Amazon SES makes it easy for you to authenticate your emails. If you configure your account to use Easy DKIM, Amazon SES will DKIM-sign your emails on your behalf, so you can focus on other aspects of your email-sending strategy. To ensure optimal deliverability, we recommend that you authenticate your emails.
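As a sketch of sending through the SES SMTP interface over an encrypted connection, the following builds a message and upgrades the SMTP session to TLS with STARTTLS before authenticating. The endpoint and port follow SES’s region-specific SMTP naming, but the addresses and credentials are placeholders; the send function is defined but not executed here, since it requires verified identities and SES SMTP credentials (which are distinct from AWS access keys).

```python
import smtplib
from email.mime.text import MIMEText

SES_SMTP_HOST = "email-smtp.us-east-1.amazonaws.com"  # region-specific SES endpoint
SES_SMTP_PORT = 587                                   # port commonly used with STARTTLS

def build_message(sender, recipient, subject, body):
    # Assemble a simple plain-text message.
    msg = MIMEText(body)
    msg["From"], msg["To"], msg["Subject"] = sender, recipient, subject
    return msg

def send_via_ses(msg, smtp_user, smtp_password):
    # Not executed in this sketch: requires real SES SMTP credentials.
    with smtplib.SMTP(SES_SMTP_HOST, SES_SMTP_PORT) as server:
        server.starttls()                 # upgrade the connection to TLS
        server.login(smtp_user, smtp_password)
        server.sendmail(msg["From"], [msg["To"]], msg.as_string())

msg = build_message("sender@example.com", "user@example.com", "Hello", "Hi there")
print(msg["Subject"])
```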
As with other AWS services, you use security credentials to verify who you are and whether you have permission to interact with Amazon SES. For information about which credentials to use, see Using Credentials with Amazon SES. Amazon SES also integrates with AWS IAM, so that you can specify which Amazon SES API actions a user can perform.

If you choose to communicate with Amazon SES through its SMTP interface, you are required to encrypt your connection using TLS. Amazon SES supports two mechanisms for establishing a TLS-encrypted connection: STARTTLS and TLS Wrapper. If you choose to communicate with Amazon SES over HTTP, then all communication will be protected by TLS through Amazon SES’s HTTPS endpoint. When delivering email to its final destination, Amazon SES encrypts the email content with opportunistic TLS, if supported by the receiver.

Amazon Elastic Transcoder Service Security
The Amazon Elastic Transcoder service simplifies and automates what is usually a complex process of converting media files from one format, size, or quality to another. The Elastic Transcoder service converts standard-definition (SD) and high-definition (HD) video files, as well as audio files. It reads input from an Amazon S3 bucket, transcodes it, and writes the resulting file to another Amazon S3 bucket. You can use the same bucket for input and output, and the buckets can be in any AWS Region.

The Elastic Transcoder accepts input files in a wide variety of web, consumer, and professional formats. Output file types include the MP3, MP4, OGG, TS, WebM, HLS using MPEG-2 TS, and Smooth Streaming using fmp4 container types, storing H.264 or VP8 video and AAC, MP3, or Vorbis audio.

You’ll start with one or more input files, and create transcoding jobs in a type of workflow called a transcoding pipeline, for each
file. When you create the pipeline, you’ll specify input and output buckets as well as an IAM role. Each job must reference a media conversion template called a transcoding preset, and will result in the generation of one or more output files. A preset tells the Elastic Transcoder what settings to use when processing a particular input file. You can specify many settings when you create a preset, including the sample rate, bit rate, resolution (output height and width), the number of reference frames and keyframes, a video bit rate, some thumbnail creation options, etc.

A best effort is made to start jobs in the order in which they’re submitted, but this is not a hard guarantee, and jobs typically finish out of order since they are worked on in parallel and vary in complexity. You can pause and resume any of your pipelines if necessary.

Elastic Transcoder supports the use of SNS notifications when it starts and finishes each job, and when it needs to tell you that it has detected an error or warning condition. The SNS notification parameters are associated with each pipeline. It can also use the List Jobs By Status function to find all of the jobs with a given status (e.g., “Completed”) or the Read Job function to retrieve detailed information about a particular job.

Like all other AWS services, Elastic Transcoder integrates with AWS Identity and Access Management (IAM), which allows you to control access to the service and to other AWS resources that Elastic Transcoder requires, including Amazon S3 buckets and Amazon SNS topics. By default, IAM users have no access to Elastic Transcoder or to the resources that it uses. If you want IAM users to be able to work with Elastic Transcoder, you must explicitly grant them permissions.

Amazon Elastic Transcoder requires that every request made to its control API be authenticated, so only authenticated processes or users can create, modify, or delete their own Amazon Transcoder pipelines and presets. Requests are signed with an HMAC-SHA256 signature calculated from the request and a key derived from the user’s secret key.
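The pipeline, job, and preset concepts above can be sketched as the parameters for a single job; in practice a dict like this would be passed to the Elastic Transcoder CreateJob API (for example via `boto3.client("elastictranscoder").create_job(**job)`). The pipeline ID, object keys, and preset ID are placeholders for illustration.

```python
# Sketch: parameters for one Elastic Transcoder job. The pipeline already
# binds the input/output S3 buckets and the IAM role; the job names the input
# object, the output object, and the preset that holds the conversion settings.
job = {
    "PipelineId": "1111111111111-abcde1",        # hypothetical pipeline ID
    "Input": {"Key": "inputs/raw-video.mov"},    # object in the pipeline's input bucket
    "Outputs": [
        {
            "Key": "outputs/video-720p.mp4",     # written to the output bucket
            "PresetId": "1351620000001-000010",  # illustrative preset ID
        }
    ],
}
print(job["Outputs"][0]["Key"])
```

Multiple entries in `Outputs` would produce several renditions of the same input in one job.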
Additionally, the Amazon Elastic Transcoder API is only accessible via SSL-encrypted endpoints.

Durability is provided by Amazon S3, where media files are redundantly stored on multiple devices across multiple facilities in an Amazon S3 region. For added protection against users accidentally deleting media files, you can use the Versioning feature in Amazon S3 to preserve, retrieve, and restore every version of every object stored in an Amazon S3 bucket. You can further protect versions using Amazon S3 Versioning’s MFA Delete feature. Once enabled for an Amazon S3 bucket, each version deletion request must include the six-digit code and serial number from your multi-factor authentication device.

Amazon AppStream Security
The Amazon AppStream service provides a framework for running streaming applications, particularly applications that require lightweight clients running on mobile devices. It enables you to store and run your application on powerful, parallel-processing GPUs in the cloud, and then stream input and output to any client device. This can be a pre-existing application that you modify to work with Amazon AppStream, or a new application that you design specifically to work with the service.

The Amazon AppStream SDK simplifies the development of interactive streaming applications and client applications. The SDK provides APIs that connect your customers’ devices directly to your application, capture and encode audio and video, stream content across the Internet in near real time, decode content on client devices, and return user input to the application. Because your application’s processing occurs in the cloud, it can scale to handle extremely large computational loads.

Amazon AppStream deploys streaming applications on Amazon EC2. When you add a streaming application through the AWS Management Console, the service creates the AMI required to host your application and makes your application available to
streaming clients. The service scales your application as needed within the capacity limits you have set to meet demand. Clients using the Amazon AppStream SDK automatically connect to your streamed application. In most cases, you'll want to ensure that the user running the client is authorized to use your application before letting him obtain a session ID. We recommend that you use some sort of entitlement service, which is a service that authenticates clients and authorizes their connection to your application. In this case, the entitlement service will also call into the Amazon AppStream REST API to create a new streaming session for the client. After the entitlement service creates a new session, it returns the session identifier to the authorized client as a single-use entitlement URL. The client then uses the entitlement URL to connect to the application. Your entitlement service can be hosted on an Amazon EC2 instance or on AWS Elastic Beanstalk. Amazon AppStream utilizes an AWS CloudFormation template that automates the process of deploying a GPU EC2 instance that has the AppStream Windows Application and Windows Client SDK libraries installed; is configured for SSH, RDC, or VPN access; and has an Elastic IP address assigned to it. By using this template to deploy your standalone streaming server, all you need to do is upload your application to the server and run the command to launch it. You can then use the Amazon AppStream Service Simulator tool to test your application in standalone mode before deploying it into production. Amazon AppStream also utilizes the STX Protocol to manage the streaming of your application from AWS to local devices. The Amazon AppStream STX Protocol is a proprietary protocol used to stream high-quality application video over varying network conditions; it monitors network conditions and automatically adapts the video stream to provide a low-latency and high-resolution experience to your customers. It minimizes latency while
syncing audio and video, as well as capturing input from your customers to be sent back to the application running in AWS.

Further Reading
https://aws.amazon.com/security/security-resources/
Introduction to AWS Security Processes
Overview of AWS Security – Storage Services
Overview of AWS Security – Database Services
Overview of AWS Security – Compute Services
Overview of AWS Security – Application Services
Overview of AWS Security – Analytics, Mobile, and Application Services
Overview of AWS Security – Network Services

Overview of AWS Security – Compute Services
June 2016
This paper has been archived. For the latest technical content, see https://docs.aws.amazon.com/security/

© 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS' current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS' products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions or assurances from AWS, its affiliates, suppliers or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

AWS Service-Specific Security
Not only is security built into every layer of the AWS infrastructure, but also into each of the services available on that infrastructure. AWS services are architected to work efficiently and securely with all AWS networks and platforms
Each service provides extensive security features to enable you to protect sensitive data and applications.

Compute Services
Amazon Web Services provides a variety of cloud-based computing services that include a wide selection of compute instances that can scale up and down automatically to meet the needs of your application or enterprise.

Amazon Elastic Compute Cloud (Amazon EC2) Security
Amazon Elastic Compute Cloud (EC2) is a key component in Amazon's Infrastructure as a Service (IaaS), providing resizable computing capacity using server instances in AWS' data centers. Amazon EC2 is designed to make web-scale computing easier by enabling you to obtain and configure capacity with minimal friction. You create and launch instances, which are collections of platform hardware and software.

Multiple Levels of Security
Security within Amazon EC2 is provided on multiple levels: the operating system (OS) of the host platform, the virtual instance OS or guest OS, a firewall, and signed API calls. Each of these items builds on the capabilities of the others. The goal is to prevent data contained within Amazon EC2 from being intercepted by unauthorized systems or users, and to provide Amazon EC2 instances themselves that are as secure as possible without sacrificing the flexibility in configuration that customers demand.

The Hypervisor
Amazon EC2 currently utilizes a highly customized version of the Xen hypervisor, taking advantage of paravirtualization (in the case of Linux guests). Because paravirtualized guests rely on the hypervisor to provide support for operations that normally require privileged access, the guest OS has no elevated access to the CPU. The CPU provides four separate privilege modes, 0–3, called rings. Ring 0 is the most privileged and Ring 3 the least. The host OS executes in Ring 0. However, rather than executing in Ring 0 as most operating systems do, the guest OS runs in a lesser-privileged Ring 1 and applications in the least-privileged Ring 3. This explicit
virtualization of the physical resources leads to a clear separation between guest and hypervisor, resulting in additional security separation between the two.

Instance Isolation
Different instances running on the same physical machine are isolated from each other via the Xen hypervisor. AWS is active in the Xen community, which provides awareness of the latest developments. In addition, the AWS firewall resides within the hypervisor layer, between the physical network interface and the instance's virtual interface. All packets must pass through this layer; thus an instance's neighbors have no more access to that instance than any other host on the Internet, and can be treated as if they are on separate physical hosts. The physical RAM is separated using similar mechanisms. Customer instances have no access to raw disk devices, but instead are presented with virtualized disks. In addition, memory allocated to guests is scrubbed (set to zero) by the hypervisor when it is unallocated to a guest. The memory is not returned to the pool of free memory available for new allocations until the memory scrubbing is complete. AWS recommends customers further protect their data using appropriate means. One common solution is to run an encrypted file system on top of the virtualized disk device.

Figure 3: Amazon EC2 Multiple Layers of Security

Host Operating System: Administrators with a business need to access the management plane are required to use multi-factor authentication to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured, and hardened to protect the management plane of the cloud. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems can be revoked.

Guest Operating System: Virtual instances are completely controlled by you, the customer. You have full root
access or administrative control over accounts, services, and applications. AWS does not have any access rights to your instances or the guest OS. AWS recommends a base set of security best practices, including disabling password-only access to your guests and utilizing some form of multi-factor authentication to gain access to your instances (or at a minimum certificate-based SSH Version 2 access). Additionally, you should employ a privilege escalation mechanism with logging on a per-user basis. For example, if the guest OS is Linux, after hardening your instance you should utilize certificate-based SSHv2 to access the virtual instance, disable remote root login, use command-line logging, and use 'sudo' for privilege escalation. You should generate your own key pairs in order to guarantee that they are unique and not shared with other customers or with AWS. AWS also supports the use of the Secure Shell (SSH) network protocol to enable you to log in securely to your UNIX/Linux EC2 instances. Authentication for SSH used with AWS is via a public/private key pair, to reduce the risk of unauthorized access to your instance. You can also connect remotely to your Windows instances using Remote Desktop Protocol (RDP) by utilizing an RDP certificate generated for your instance. You also control the updating and patching of your guest OS, including security updates. AWS-provided Windows- and Linux-based AMIs are updated regularly with the latest patches, so if you do not need to preserve data or customizations on your running Amazon AMI instances, you can simply relaunch new instances with the latest updated AMI. In addition, updates are provided for the Amazon Linux AMI via the Amazon Linux yum repositories.

Firewall: Amazon EC2 provides a complete firewall solution; this mandatory inbound firewall is configured in a default deny-all mode, and Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic. The traffic may be restricted by protocol, by service port, as well as by source IP
address (individual IP or Classless Inter-Domain Routing (CIDR) block). The firewall can be configured in groups, permitting different classes of instances to have different rules. Consider, for example, the case of a traditional three-tiered web application. The group for the web servers would have port 80 (HTTP) and/or port 443 (HTTPS) open to the Internet. The group for the application servers would have port 8000 (application specific) accessible only to the web server group. The group for the database servers would have port 3306 (MySQL) open only to the application server group. All three groups would permit administrative access on port 22 (SSH), but only from the customer's corporate network. Highly secure applications can be deployed using this expressive mechanism. See the diagram below:

Figure 4: Amazon EC2 Security Group Firewall

The firewall isn't controlled through the guest OS; rather, it requires your X.509 certificate and key to authorize changes, thus adding an extra layer of security. AWS supports the ability to grant granular access to different administrative functions on the instances and the firewall, therefore enabling you to implement additional security through separation of duties. The level of security afforded by the firewall is a function of which ports you open, and for what duration and purpose. The default state is to deny all incoming traffic, and you should plan carefully what you will open when building and securing your applications. Well-informed traffic management and security design are still required on a per-instance basis. AWS further encourages you to apply additional per-instance filters with host-based firewalls such as iptables or the Windows Firewall, and VPNs. These can restrict both inbound and outbound traffic.

API Access: API calls to launch and terminate instances, change firewall parameters, and perform other functions are all signed by your Amazon Secret Access Key, which could be either the AWS Account's Secret Access Key or
the Secret Access Key of a user created with AWS IAM. Without access to your Secret Access Key, Amazon EC2 API calls cannot be made on your behalf. In addition, API calls can be encrypted with SSL to maintain confidentiality. AWS recommends always using SSL-protected API endpoints.

Permissions: AWS IAM also enables you to further control what APIs a user has permissions to call.

Elastic Block Storage (Amazon EBS) Security
Amazon Elastic Block Storage (EBS) allows you to create storage volumes from 1 GB to 16 TB that can be mounted as devices by Amazon EC2 instances. Storage volumes behave like raw, unformatted block devices, with user-supplied device names and a block device interface. You can create a file system on top of Amazon EBS volumes, or use them in any other way you would use a block device (like a hard drive). Amazon EBS volume access is restricted to the AWS Account that created the volume, and to the users under the AWS Account created with AWS IAM if the user has been granted access to the EBS operations, thus denying all other AWS Accounts and users the permission to view or access the volume. Data stored in Amazon EBS volumes is redundantly stored in multiple physical locations as part of normal operation of those services and at no additional charge. However, Amazon EBS replication is stored within the same Availability Zone, not across multiple zones; therefore, it is highly recommended that you conduct regular snapshots to Amazon S3 for long-term data durability. For customers who have architected complex transactional databases using EBS, it is recommended that backups to Amazon S3 be performed through the database management system so that distributed transactions and logs can be checkpointed. AWS does not perform backups of data that are maintained on virtual disks attached to running instances on Amazon EC2. You can make Amazon EBS volume snapshots publicly available to other AWS Accounts to use as the basis for creating their own volumes.
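As a concrete illustration of snapshot sharing, the following is a minimal boto3-style sketch of the request that grants another AWS account permission to create volumes from a snapshot via the ModifySnapshotAttribute API. The snapshot ID, account ID, and the helper function itself are illustrative placeholders, not values from this paper.

```python
# Hypothetical sketch: build the arguments for EC2's ModifySnapshotAttribute
# call that shares an EBS snapshot with specific AWS accounts.

def share_snapshot_params(snapshot_id, account_ids):
    """Return kwargs granting the listed accounts createVolumePermission
    on the snapshot (the owning account keeps alter/delete rights)."""
    return {
        "SnapshotId": snapshot_id,
        "Attribute": "createVolumePermission",
        "OperationType": "add",
        "UserIds": list(account_ids),
    }

# Placeholder identifiers for illustration only.
params = share_snapshot_params("snap-0123456789abcdef0", ["111122223333"])

# To apply the change (requires boto3 and AWS credentials):
#   import boto3
#   boto3.client("ec2").modify_snapshot_attribute(**params)
```

Note that, as the paper goes on to explain, this grant only lets the other account create volumes from the snapshot; it does not transfer the right to alter or delete the original.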
When shared, Amazon EBS volume snapshots do not give other AWS Accounts the permission to alter or delete the original snapshot, as that right is explicitly reserved for the AWS Account that created the volume. An EBS snapshot is a block-level view of an entire EBS volume. Note that data that is not visible through the file system on the volume, such as files that have been deleted, may be present in the EBS snapshot. If you want to create shared snapshots, you should do so carefully. If a volume has held sensitive data or has had files deleted from it, a new EBS volume should be created. The data to be contained in the shared snapshot should be copied to the new volume and the snapshot created from the new volume. Amazon EBS volumes are presented to you as raw, unformatted block devices that have been wiped prior to being made available for use. Wiping occurs immediately before reuse, so that you can be assured that the wipe process completed. If you have procedures requiring that all data be wiped via a specific method, such as those detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual") or NIST 800-88 ("Guidelines for Media Sanitization"), you have the ability to do so on Amazon EBS. Encryption of sensitive data is generally a good security practice, and AWS provides the ability to encrypt EBS volumes and their snapshots with AES-256. The encryption occurs on the servers that host the EC2 instances, providing encryption of data as it moves between EC2 instances and EBS storage. In order to be able to do this efficiently and with low latency, the EBS encryption feature is only available on EC2's more powerful instance types (e.g., M3, C3, R3, G2).

Auto Scaling Security
Auto Scaling allows you to automatically scale your Amazon EC2 capacity up or down according to conditions you define, so that the number of Amazon EC2 instances you are using scales up
seamlessly during demand spikes to maintain performance, and scales down automatically during demand lulls to minimize costs. Like all AWS services, Auto Scaling requires that every request made to its control API be authenticated, so only authenticated users can access and manage Auto Scaling. Requests are signed with an HMAC-SHA1 signature calculated from the request and the user's private key. However, getting credentials out to new EC2 instances launched with Auto Scaling can be challenging for large or elastically scaling fleets. To simplify this process, you can use roles within IAM, so that any new instances launched with a role will be given credentials automatically. When you launch an EC2 instance with an IAM role, temporary AWS security credentials with permissions specified by the role will be securely provisioned to the instance and will be made available to your application via the Amazon EC2 Instance Metadata Service. The Metadata Service will make new temporary security credentials available prior to the expiration of the current active credentials, so that valid credentials are always available on the instance. In addition, the temporary security credentials are automatically rotated multiple times per day, providing enhanced security. You can further control access to Auto Scaling by creating users under your AWS Account using AWS IAM, and controlling what Auto Scaling APIs these users have permission to call.

Further Reading
https://aws.amazon.com/security/security-resources/
Introduction to AWS Security Processes
Overview of AWS Security – Storage Services
Overview of AWS Security – Database Services
Overview of AWS Security – Compute Services
Overview of AWS Security – Application Services
Overview of AWS Security – Analytics, Mobile, and Application Services
Overview of AWS Security – Network Services

Overview of AWS Security – Database Services
June 2016
This paper has been archived. For
the latest technical content, see https://docs.aws.amazon.com/security/

Database Services
Amazon Web Services provides a number of database solutions for developers and businesses: from managed relational and NoSQL database services, to in-memory caching as a service, to a petabyte-scale data warehouse service.

Amazon DynamoDB Security
Amazon DynamoDB is a managed NoSQL database service that provides fast and predictable performance with seamless scalability. Amazon DynamoDB enables you to offload the administrative burdens of operating and scaling distributed databases to AWS, so you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. You can create a database table that can store and retrieve any amount of data and serve any level of request traffic. DynamoDB automatically spreads the data and traffic for the table over a sufficient number of servers to handle the request capacity you specified and the amount of data stored, while maintaining consistent, fast performance. All data items are stored on Solid State Drives
(SSDs) and are automatically replicated across multiple Availability Zones in a region to provide built-in high availability and data durability. You can set up automatic backups using a special template in AWS Data Pipeline that was created just for copying DynamoDB tables. You can choose full or incremental backups to a table in the same region or a different region. You can use the copy for disaster recovery (DR) in the event that an error in your code damages the original table, or to federate DynamoDB data across regions to support a multi-region application. To control who can use the DynamoDB resources and API, you set up permissions in AWS IAM. In addition to controlling access at the resource level with IAM, you can also control access at the database level: you can create database-level permissions that allow or deny access to items (rows) and attributes (columns) based on the needs of your application. These database-level permissions are called fine-grained access controls, and you create them using an IAM policy that specifies under what circumstances a user or application can access a DynamoDB table. The IAM policy can restrict access to individual items in a table, access to the attributes in those items, or both at the same time. You can optionally use web identity federation to control access by application users who are authenticated by Login with Amazon, Facebook, or Google. Web identity federation removes the need for creating individual IAM users; instead, users can sign in to an identity provider and then obtain temporary security credentials from AWS Security Token Service (AWS STS). AWS STS returns temporary AWS credentials to the application and allows it to access the specific DynamoDB table. In addition to requiring database and user permissions, each request to the DynamoDB service must contain a valid HMAC-SHA256 signature, or the request is rejected. The AWS SDKs automatically sign your requests; however, if you want to write your own
HTTP POST requests, you must provide the signature in the header of your request to Amazon DynamoDB. To calculate the signature, you must request temporary security credentials from the AWS Security Token Service. Use the temporary security credentials to sign your requests to Amazon DynamoDB. Amazon DynamoDB is accessible via SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2.

Amazon Relational Database Service (Amazon RDS) Security
Amazon RDS allows you to quickly create a relational database (DB) instance and flexibly scale the associated compute resources and storage capacity to meet application demand. Amazon RDS manages the database instance on your behalf by performing backups, handling failover, and maintaining the database software. Currently, Amazon RDS is available for the Amazon Aurora, MySQL, PostgreSQL, Oracle, Microsoft SQL Server, and MariaDB database engines. Amazon RDS has multiple features that enhance reliability for critical production databases, including DB security groups, permissions, SSL connections, automated backups, DB snapshots, and Multi-AZ deployments. DB instances can also be deployed in an Amazon VPC for additional network isolation.

Access Control
When you first create a DB Instance within Amazon RDS, you will create a master user account, which is used only within the context of Amazon RDS to control access to your DB Instance(s). The master user account is a native database user account that allows you to log on to your DB Instance with all database privileges. You can specify the master user name and password you want associated with each DB Instance when you create the DB Instance. Once you have created your DB Instance, you can connect to the database using the master user credentials. Subsequently, you can create additional user accounts so that you can restrict who can access your DB Instance. Using AWS IAM, you can further control access to your RDS DB instances. AWS IAM enables you to control
what RDS operations each individual AWS IAM user has permission to call.

Network Isolation
For additional network access control, you can run your DB Instances in an Amazon VPC. Amazon VPC enables you to isolate your DB Instances by specifying the IP range you wish to use, and to connect to your existing IT infrastructure through an industry-standard encrypted IPsec VPN. Running Amazon RDS in a VPC enables you to have a DB instance within a private subnet. You can also set up a virtual private gateway that extends your corporate network into your VPC and allows access to the RDS DB instance in that VPC. Refer to the Amazon VPC User Guide for more details. DB Instances deployed within an Amazon VPC can be accessed from the Internet, or from Amazon EC2 Instances outside the VPC, via VPN or bastion hosts that you can launch in your public subnet. To use a bastion host, you will need to set up a public subnet with an EC2 instance that acts as an SSH bastion. This public subnet must have an Internet gateway and routing rules that allow traffic to be directed via the SSH host, which must then forward requests to the private IP address of your Amazon RDS DB instance. DB Security Groups can be used to help secure DB Instances within an Amazon VPC. In addition, network traffic entering and exiting each subnet can be allowed or denied via network ACLs. All network traffic entering or exiting your Amazon VPC via your IPsec VPN connection can be inspected by your on-premises security infrastructure, including network firewalls and intrusion detection systems.

Encryption
You can encrypt connections between your application and your DB Instance using SSL. For MySQL and SQL Server, RDS creates an SSL certificate and installs the certificate on the DB instance when the instance is provisioned. For MySQL, you launch the mysql client using the ssl_ca parameter to reference the public key in order to encrypt connections. For SQL Server, download the public key and import the certificate
into your Windows operating system. Oracle RDS uses Oracle native network encryption with a DB instance. You simply add the native network encryption option to an option group and associate that option group with the DB instance. Once an encrypted connection is established, data transferred between the DB Instance and your application will be encrypted during transfer. You can also require your DB instance to only accept encrypted connections. Amazon RDS supports Transparent Data Encryption (TDE) for SQL Server (SQL Server Enterprise Edition) and Oracle (part of the Oracle Advanced Security option available in Oracle Enterprise Edition). The TDE feature automatically encrypts data before it is written to storage and automatically decrypts data when it is read from storage. If you require your MySQL data to be encrypted while "at rest" in the database, your application must manage the encryption and decryption of data. Note that SSL support within Amazon RDS is for encrypting the connection between your application and your DB Instance; it should not be relied on for authenticating the DB Instance itself. While SSL offers security benefits, be aware that SSL encryption is a compute-intensive operation and will increase the latency of your database connection. To learn more about how SSL works with MySQL, you can refer directly to the MySQL documentation. To learn how SSL works with SQL Server, you can read more in the RDS User Guide.

Automated Backups and DB Snapshots
Amazon RDS provides two different methods for backing up and restoring your DB Instance(s): automated backups and database snapshots (DB Snapshots). Turned on by default, the automated backup feature of Amazon RDS enables point-in-time recovery for your DB Instance. Amazon RDS will back up your database and transaction logs and store both for a user-specified retention period. This allows you to restore your DB Instance to any second during your retention period, up to the last 5 minutes.
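The point-in-time recovery described above can be sketched as a boto3-style request for RDS's RestoreDBInstanceToPointInTime API; the instance identifiers and timestamp below are placeholders, and the helper function is our own illustration rather than part of the paper.

```python
# Hypothetical sketch: build the arguments for RDS's
# RestoreDBInstanceToPointInTime call.
import datetime

def restore_params(source_id, target_id, restore_time=None):
    """Return kwargs restoring a DB instance to a specific second, or to
    the latest restorable time when no timestamp is given."""
    params = {
        "SourceDBInstanceIdentifier": source_id,
        "TargetDBInstanceIdentifier": target_id,
    }
    if restore_time is not None:
        # Restore to an exact second within the retention period.
        params["RestoreTime"] = restore_time
    else:
        # Fall back to the most recent restorable time (within ~5 minutes).
        params["UseLatestRestorableTime"] = True
    return params

# Placeholder identifiers and timestamp for illustration only.
params = restore_params(
    "mydb", "mydb-restored",
    datetime.datetime(2016, 6, 1, 12, 30, 15),
)

# To run the restore (requires boto3 and AWS credentials):
#   import boto3
#   boto3.client("rds").restore_db_instance_to_point_in_time(**params)
```

Note that the restore creates a new DB instance under the target identifier; the source instance is left untouched.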
Your automatic backup retention period can be configured to up to 35 days. During the backup window, storage I/O may be suspended while your data is being backed up. This I/O suspension typically lasts a few minutes. This I/O suspension is avoided with Multi-AZ DB deployments, since the backup is taken from the standby. DB Snapshots are user-initiated backups of your DB Instance. These full database backups are stored by Amazon RDS until you explicitly delete them. You can copy DB snapshots of any size and move them between any of AWS' public regions, or copy the same snapshot to multiple regions simultaneously. You can then create a new DB Instance from a DB Snapshot whenever you desire.

DB Instance Replication
Amazon cloud computing resources are housed in highly available data center facilities in different regions of the world, and each region contains multiple distinct locations called Availability Zones. Each Availability Zone is engineered to be isolated from failures in other Availability Zones, and to provide inexpensive, low-latency network connectivity to other Availability Zones in the same region. Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Multi-AZ deployments for Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon technology, while SQL Server DB instances use SQL Server Mirroring. Note that Amazon Aurora stores copies of the data in a DB cluster across multiple Availability Zones in a single region, regardless of whether the instances in the DB cluster span multiple Availability Zones. In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. In the event of DB instance or Availability Zone failure, Amazon RDS will automatically
fail over to the standby so that database operations can resume quickly without administrative intervention. Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption. Amazon RDS also uses the PostgreSQL, MySQL, and MariaDB DB engines' built-in replication functionality to create a special type of DB instance, called a Read Replica, from a source DB instance. Updates made to the source DB instance are asynchronously copied to the Read Replica. You can reduce the load on your source DB instance by routing read queries from your applications to the Read Replica. Read Replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.

Automatic Software Patching
Amazon RDS will make sure that the relational database software powering your deployment stays up-to-date with the latest patches. When necessary, patches are applied during a maintenance window that you can control. You can think of the Amazon RDS maintenance window as an opportunity to control when DB Instance modifications (such as scaling DB Instance class) and software patching occur, in the event either are requested or required. If a "maintenance" event is scheduled for a given week, it will be initiated and completed at some point during the 30-minute maintenance window you identify. The only maintenance events that require Amazon RDS to take your DB Instance offline are scale compute operations (which generally take only a few minutes from start to finish) or required software patching. Required patching is automatically scheduled only for patches that are security and durability related. Such patching occurs infrequently (typically once every few months) and should seldom require more than a fraction of your maintenance window. If you do not specify a preferred weekly maintenance window when
creating your DB Instance, a 30-minute default value is assigned. If you wish to modify when maintenance is performed on your behalf, you can do so by modifying your DB Instance in the AWS Management Console or by using the ModifyDBInstance API. Each of your DB Instances can have different preferred maintenance windows if you so choose. Running your DB Instance as a Multi-AZ deployment can further reduce the impact of a maintenance event, as Amazon RDS will conduct maintenance via the following steps: 1) perform maintenance on the standby, 2) promote the standby to primary, and 3) perform maintenance on the old primary, which becomes the new standby. When an Amazon RDS DB Instance deletion API (DeleteDBInstance) is run, the DB Instance is marked for deletion. Once the instance no longer indicates 'deleting' status, it has been removed. At this point, the instance is no longer accessible, and unless a final snapshot copy was asked for, it cannot be restored and will not be listed by any of the tools or APIs.

Event Notification
You can receive notifications of a variety of important events that can occur on your RDS instance, such as whether the instance was shut down, a backup was started, a failover occurred, the security group was changed, or your storage space is low. The Amazon RDS service groups events into categories that you can subscribe to, so that you can be notified when an event in that category occurs. You can subscribe to an event category for a DB instance, DB snapshot, DB security group, or DB parameter group. RDS events are published via AWS SNS and sent to you as an email or text message. For more information about RDS notification event categories, refer to the RDS User Guide.

Amazon Redshift Security
Amazon Redshift is a petabyte-scale SQL data warehouse service that runs on highly optimized and managed AWS compute and storage resources. The service has been architected to not only scale up or down rapidly, but to significantly improve query speeds, even on
extremely large datasets. To increase performance, Redshift uses techniques such as columnar storage, data compression, and zone maps to reduce the amount of I/O needed to perform queries. It also has a massively parallel processing (MPP) architecture, parallelizing and distributing SQL operations to take advantage of all available resources.

When you create a Redshift data warehouse, you provision a single-node or multi-node cluster, specifying the type and number of nodes that will make up the cluster. The node type determines the storage size, memory, and CPU of each node. Each multi-node cluster includes a leader node and two or more compute nodes. A leader node manages connections, parses queries, builds execution plans, and manages query execution in the compute nodes. The compute nodes store data, perform computations, and run queries as directed by the leader node. The leader node of each cluster is accessible through ODBC and JDBC endpoints, using standard PostgreSQL drivers. The compute nodes run on a separate, isolated network and are never accessed directly. After you provision a cluster, you can upload your dataset and perform data analysis queries by using common SQL-based tools and business intelligence applications.

Cluster Access
By default, clusters that you create are closed to everyone. Amazon Redshift enables you to configure firewall rules (security groups) to control network access to your data warehouse cluster. You can also run Redshift inside an Amazon VPC to isolate your data warehouse cluster in your own virtual network, and connect it to your existing IT infrastructure using an industry-standard encrypted IPsec VPN. The AWS account that creates the cluster has full access to the cluster. Within your AWS account, you can use AWS IAM to create user accounts and manage permissions for those accounts. By using IAM, you can grant different users permission to perform only the cluster operations that are necessary for their work. Like all databases, you must grant permission in Redshift at the database level in addition to granting access at the resource level.
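Those database-level grants use standard PostgreSQL GRANT syntax, issued over a normal PostgreSQL connection to the cluster's leader node. A minimal sketch, in which the user, group, and table names are made up for the example:

```python
# Illustrative only: these statements would be run through any standard
# PostgreSQL client connected to the Redshift leader node endpoint.
statements = [
    "CREATE USER analyst PASSWORD 'Example-Passw0rd'",   # named database user
    "CREATE GROUP reporting WITH USER analyst",          # add the user to a group
    "GRANT SELECT ON TABLE sales TO GROUP reporting",    # read-only access via group
    "GRANT SELECT, INSERT ON TABLE staging TO analyst",  # per-user grant
]
for stmt in statements:
    print(stmt + ";")
```

Resource-level permissions (who may reboot, resize, or delete the cluster itself) are managed separately through IAM policies, as described above.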
Database users are named user accounts that can connect to a database and are authenticated when they log in to Amazon Redshift. In Redshift, you grant database user permissions on a per-cluster basis instead of on a per-table basis. However, a user can see data only in the table rows that were generated by his own activities; rows generated by other users are not visible to him. The user who creates a database object is its owner. By default, only a superuser or the owner of an object can query, modify, or grant permissions on the object. For users to use an object, you must grant the necessary permissions to the user or the group that contains the user, and only the owner of an object can modify or delete it.

Data Backups
Amazon Redshift distributes your data across all compute nodes in a cluster. When you run a cluster with at least two compute nodes, data on each node will always be mirrored on disks on another node, reducing the risk of data loss. In addition, all data written to a node in your cluster is continuously backed up to Amazon S3 using snapshots. Redshift stores your snapshots for a user-defined period, which can be from one to thirty-five days. You can also take your own snapshots at any time; these snapshots leverage all existing system snapshots and are retained until you explicitly delete them. Amazon Redshift continuously monitors the health of the cluster, and automatically re-replicates data from failed drives and replaces nodes as necessary. All of this happens without any effort on your part, although you may see a slight performance degradation during the re-replication process. You can use any system or user snapshot to restore your cluster using the AWS Management Console or the Amazon Redshift APIs. Your cluster is available as soon as the system metadata has been restored, and you can start running queries while user data is spooled down in the background.
Data Encryption
When creating a cluster, you can choose to encrypt it in order to provide additional protection for your data at rest. When you enable encryption in your cluster, Amazon Redshift stores all data in user-created tables in an encrypted format, using hardware-accelerated AES-256 block encryption keys. This includes all data written to disk as well as any backups.

Amazon Redshift uses a four-tier, key-based architecture for encryption. These keys consist of data encryption keys, a database key, a cluster key, and a master key:
• Data encryption keys encrypt data blocks in the cluster. Each data block is assigned a randomly generated AES-256 key. These keys are encrypted by using the database key for the cluster.
• The database key encrypts data encryption keys in the cluster. The database key is a randomly generated AES-256 key. It is stored on disk in a separate network from the Amazon Redshift cluster and encrypted by a master key. Amazon Redshift passes the database key across a secure channel and keeps it in memory in the cluster.
• The cluster key encrypts the database key for the Amazon Redshift cluster. You can use either AWS or a hardware security module (HSM) to store the cluster key. HSMs provide direct control of key generation and management, and make key management separate and distinct from the application and the database.
• The master key encrypts the cluster key if it is stored in AWS. The master key encrypts the cluster-key-encrypted database key if the cluster key is stored in an HSM.

You can have Redshift rotate the encryption keys for your encrypted clusters at any time. As part of the rotation process, keys are also updated for all of the cluster's automatic and manual snapshots. Note that enabling encryption in your cluster will impact performance, even though it is hardware accelerated. Encryption also applies to backups; when restoring from an encrypted snapshot, the new cluster will be encrypted as well. To encrypt your table load data files when you upload them to Amazon S3, you can use Amazon S3 server-side encryption.
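The wrapping relationship between the four tiers can be sketched as envelope encryption. The toy below is only a model of the structure: real Redshift uses hardware-accelerated AES-256, whereas here each "encrypt" is a stand-in keystream XOR built from SHA-256, purely to show how each key protects the one below it:

```python
import hashlib
import secrets

def toy_encrypt(key, data):
    # Stand-in cipher: XOR the data against a SHA-256-derived keystream.
    # This is NOT AES and is not secure; it only models key wrapping.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # a XOR keystream is its own inverse

master_key = secrets.token_bytes(32)    # tier 4: held by AWS (or the cluster key sits in an HSM)
cluster_key = secrets.token_bytes(32)   # tier 3: encrypts the database key
database_key = secrets.token_bytes(32)  # tier 2: encrypts the per-block keys
block_key = secrets.token_bytes(32)     # tier 1: encrypts one data block

# Each tier is stored only in wrapped (encrypted) form:
wrapped_cluster_key = toy_encrypt(master_key, cluster_key)
wrapped_database_key = toy_encrypt(cluster_key, database_key)
wrapped_block_key = toy_encrypt(database_key, block_key)
ciphertext = toy_encrypt(block_key, b"row data in one block")

# Reading a block means unwrapping back down the hierarchy:
ck = toy_decrypt(master_key, wrapped_cluster_key)
dk = toy_decrypt(ck, wrapped_database_key)
bk = toy_decrypt(dk, wrapped_block_key)
assert toy_decrypt(bk, ciphertext) == b"row data in one block"
```

Rotating the cluster key in this model only requires re-wrapping the database key, not re-encrypting every data block, which is what makes key rotation on a large cluster practical.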
When you load the data from Amazon S3, the COPY command will decrypt the data as it loads the table.

Database Audit Logging
Amazon Redshift logs all SQL operations, including connection attempts, queries, and changes to your database. You can access these logs using SQL queries against system tables, or choose to have them downloaded to a secure Amazon S3 bucket. You can then use these audit logs to monitor your cluster for security and troubleshooting purposes.

Automatic Software Patching
Amazon Redshift manages all the work of setting up, operating, and scaling your data warehouse, including provisioning capacity, monitoring the cluster, and applying patches and upgrades to the Amazon Redshift engine. Patches are applied only during specified maintenance windows.

SSL Connections
To protect your data in transit within the AWS cloud, Amazon Redshift uses hardware-accelerated SSL to communicate with Amazon S3 or Amazon DynamoDB for COPY, UNLOAD, backup, and restore operations. You can encrypt the connection between your client and the cluster by specifying SSL in the parameter group associated with the cluster. To have your clients also authenticate the Redshift server, you can install the public key (.pem file) for the SSL certificate on your client and use the key to connect to your clusters.

Amazon Redshift offers the newer, stronger cipher suites that use the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol. ECDHE allows SSL clients to provide Perfect Forward Secrecy between the client and the Redshift cluster. Perfect Forward Secrecy uses session keys that are ephemeral and not stored anywhere, which prevents the decoding of captured data by unauthorized third parties, even if the secret long-term key itself is compromised. You do not need to configure anything in Amazon Redshift to enable ECDHE; if you connect from a SQL client tool that uses ECDHE to encrypt communication between the client and server, Amazon Redshift will use the provided cipher list to make the appropriate connection.
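On the client side, restricting a TLS context to ECDHE suites is one way to insist on forward secrecy. A minimal sketch using Python's standard ssl module (configuring the context requires no network access; the context would then be used when opening the connection to the cluster endpoint):

```python
import ssl

# Build a client-side TLS context limited to ECDHE key exchange with
# AES-GCM, so any TLS 1.2 connection made with it negotiates forward
# secrecy. set_ciphers raises ssl.SSLError if no suite matches.
ctx = ssl.create_default_context()
ctx.set_ciphers("ECDHE+AESGCM")

ecdhe_suites = [c["name"] for c in ctx.get_ciphers() if c["name"].startswith("ECDHE")]
print(len(ecdhe_suites) > 0)  # True: at least one ECDHE suite is enabled
```

Whether a given SQL client exposes this level of control depends on the driver; the point is that the cipher preference comes from the client, and Redshift honors it.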
Amazon ElastiCache Security
Amazon ElastiCache is a web service that makes it easy to set up, manage, and scale distributed in-memory cache environments in the cloud. The service improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory caching system, instead of relying entirely on slower disk-based databases. It can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing, and Q&A portals) or compute-intensive workloads (such as a recommendation engine). Caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of I/O-intensive database queries or the results of computationally intensive calculations.

The Amazon ElastiCache service automates time-consuming management tasks for in-memory cache environments, such as patch management, failure detection, and recovery. It works in conjunction with other Amazon Web Services (such as Amazon EC2, Amazon CloudWatch, and Amazon SNS) to provide a secure, high-performance, managed in-memory cache. For example, an application running in Amazon EC2 can securely access an Amazon ElastiCache cluster in the same region with very low latency.

Using the Amazon ElastiCache service, you create a Cache Cluster, which is a collection of one or more Cache Nodes. A Cache Node is a fixed-size chunk of secure, network-attached RAM. Each Cache Node runs an instance of the Memcached or Redis protocol-compliant service, and has its own DNS name and port. Multiple types of Cache Nodes are supported, each with varying amounts of associated memory. A Cache Cluster can be set up with a specific number of Cache Nodes and a Cache Parameter Group that controls the properties for each Cache Node. All Cache Nodes within a Cache Cluster are designed to be of the same node type and have the same parameter and security group settings.
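The read-heavy usage pattern described above is usually implemented as cache-aside: check the cache first, and only fall back to the database on a miss. A minimal sketch, with a plain dict standing in for a Memcached or Redis cache node (in a real deployment the get/set calls would go to the cluster's node endpoints, and `expensive_query` is a made-up stand-in for a database call):

```python
import time

cache = {}          # stand-in for an in-memory cache node
TTL_SECONDS = 300   # how long a cached entry stays fresh

def expensive_query(key):
    # Stand-in for an I/O-intensive database query or computation.
    return f"result-for-{key}"

def cached_lookup(key, now=None):
    now = time.monotonic() if now is None else now
    hit = cache.get(key)
    if hit is not None and now < hit[1]:
        return hit[0]                        # cache hit: served from memory
    value = expensive_query(key)             # cache miss: go to the database
    cache[key] = (value, now + TTL_SECONDS)  # store with an expiry time
    return value

print(cached_lookup("user:42"))  # first call misses and populates the cache
print(cached_lookup("user:42"))  # second call is served from the cache
```

The TTL keeps stale data from being served indefinitely; choosing it is a trade-off between freshness and how much load is shed from the database.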
Amazon ElastiCache allows you to control access to your Cache Clusters using Cache Security Groups. A Cache Security Group acts like a firewall, controlling network access to your Cache Cluster. By default, network access is turned off to your Cache Clusters. If you want your applications to access your Cache Cluster, you must explicitly enable access from hosts in specific EC2 security groups. Once ingress rules are configured, the same rules apply to all Cache Clusters associated with that Cache Security Group.

To allow network access to your Cache Cluster, create a Cache Security Group and link the desired EC2 security groups (which in turn specify the EC2 instances allowed) to it. The Cache Security Group can be associated with your Cache Cluster at the time of creation, or by using the "Modify" option on the AWS Management Console. IP-range-based access control is currently not enabled for Cache Clusters; all clients to a Cache Cluster must be within the EC2 network and authorized via Cache Security Groups.

ElastiCache for Redis provides backup and restore functionality, where you can create a snapshot of your entire Redis cluster as it exists at a specific point in time. You can schedule automatic, recurring daily snapshots, or you can create a manual snapshot at any time. For automatic snapshots you specify a retention period; manual snapshots are retained until you delete them. The snapshots are stored in Amazon S3 with high durability, and can be used for warm starts, backups, and archiving.

Further Reading
https://aws.amazon.com/security/security-resources/
Introduction to AWS Security Processes
Overview of AWS Security – Storage Services
Overview of AWS Security – Database Services
Overview of AWS Security – Compute Services
Overview of AWS Security – Application Services
Overview of AWS Security – Analytics, Mobile and Application Services
Overview of AWS Security – Network
Services

Overview of AWS Security – Network Services
August 2016
(Please consult http://aws.amazon.com/security/ for the latest version of this paper.)
This paper has been archived; for the latest technical content, see https://docs.aws.amazon.com/security/
© 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Network Security
The AWS network has been architected to permit you to select the level of security and resiliency appropriate for your workload. To enable you to build geographically dispersed, fault-tolerant web architectures with cloud resources, AWS has implemented a world-class network infrastructure that is carefully monitored and managed.

Secure Network Architecture
Network devices, including firewall and other boundary devices, are in place to monitor and control communications at the external boundary of the network and at key internal boundaries within the network. These boundary devices employ rule sets, access control lists (ACLs), and configurations to enforce the flow of information to specific information system services. ACLs, or traffic flow policies, are established on each managed interface, which manage and enforce the flow of traffic.
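Conceptually, such an interface applies its ordered rules to each flow and falls back to an implicit deny. The evaluator below is a sketch of that idea only; the rule format is invented for illustration and is not AWS's internal ACL format:

```python
from ipaddress import ip_address, ip_network

# Ordered rule set: (action, source network, destination port or None for any).
rules = [
    ("allow", ip_network("10.0.0.0/16"), 443),  # internal HTTPS traffic
    ("deny",  ip_network("0.0.0.0/0"),   None), # everything else is denied
]

def permitted(src, port):
    # First matching rule wins; no match means implicit default deny.
    for action, net, rule_port in rules:
        if ip_address(src) in net and (rule_port is None or rule_port == port):
            return action == "allow"
    return False

print(permitted("10.0.3.7", 443))     # True: matches the allow rule
print(permitted("203.0.113.9", 443))  # False: caught by the default deny
```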
ACL policies are approved by Amazon Information Security. These policies are automatically pushed using AWS' ACL-Manage tool, to help ensure these managed interfaces enforce the most up-to-date ACLs.

Secure Access Points
AWS has strategically placed a limited number of access points to the cloud to allow for more comprehensive monitoring of inbound and outbound communications and network traffic. These customer access points are called API endpoints, and they allow secure HTTP access (HTTPS), which allows you to establish a secure communication session with your storage or compute instances within AWS. To support customers with FIPS cryptographic requirements, the SSL-terminating load balancers in AWS GovCloud (US) are FIPS 140-2 compliant. In addition, AWS has implemented network devices that are dedicated to managing interfacing communications with Internet service providers (ISPs). AWS employs a redundant connection to more than one communication service at each Internet-facing edge of the AWS network. These connections each have dedicated network devices.

Transmission Protection
You can connect to an AWS access point via HTTP or HTTPS using Secure Sockets Layer (SSL), a cryptographic protocol that is designed to protect against eavesdropping, tampering, and message forgery. For customers who require additional layers of network security, AWS offers the Amazon Virtual Private Cloud (VPC), which provides a private subnet within the AWS cloud, and the ability to use an IPsec Virtual Private Network (VPN) device to provide an encrypted tunnel between the Amazon VPC and your data center. For more information about VPC configuration options, refer to the Amazon Virtual Private Cloud (Amazon VPC) Security section below.

Amazon Corporate Segregation
Logically, the AWS Production network is segregated from the Amazon Corporate network by means of a complex set of network
security/segregation devices. AWS developers and administrators on the corporate network who need to access AWS cloud components in order to maintain them must explicitly request access through the AWS ticketing system. All requests are reviewed and approved by the applicable service owner. Approved AWS personnel then connect to the AWS network through a bastion host that restricts access to network devices and other cloud components, logging all activity for security review. Access to bastion hosts requires SSH public-key authentication for all user accounts on the host. For more information on AWS developer and administrator logical access, see AWS Access below.

Fault-Tolerant Design
AWS' infrastructure has a high level of availability and provides you with the capability to deploy a resilient IT architecture. AWS has designed its systems to tolerate system or hardware failures with minimal customer impact. Data centers are built in clusters in various global regions. All data centers are online and serving customers; no data center is "cold." In case of failure, automated processes move customer data traffic away from the affected area. Core applications are deployed in an N+1 configuration, so that in the event of a data center failure there is sufficient capacity to enable traffic to be load-balanced to the remaining sites.

AWS provides you with the flexibility to place instances and store data within multiple geographic regions, as well as across multiple Availability Zones within each region. Each Availability Zone is designed as an independent failure zone. This means that Availability Zones are physically separated within a typical metropolitan region, and are located in lower-risk flood plains (specific flood zone categorization varies by region). In addition to utilizing discrete uninterruptible power supply (UPS) and on-site backup generators, they are each fed via different grids from independent utilities, to further reduce single points of failure. Availability Zones are
all redundantly connected to multiple tier-1 transit providers. You should architect your AWS usage to take advantage of multiple regions and Availability Zones. Distributing applications across multiple Availability Zones provides the ability to remain resilient in the face of most failure scenarios, including natural disasters or system failures. However, you should be aware of location-dependent privacy and compliance requirements, such as the EU Data Privacy Directive. Data is not replicated between regions unless proactively done so by the customer, thus allowing customers with these types of data placement and privacy requirements the ability to establish compliant environments. It should be noted that all communications between regions are across public Internet infrastructure; therefore, appropriate encryption methods should be used to protect sensitive data.

As of this writing, there are thirteen regions: US East (Northern Virginia), US West (Oregon), US West (Northern California), AWS GovCloud (US), EU (Ireland), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Mumbai), South America (São Paulo), and China (Beijing).

AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move workloads into the cloud by helping them meet certain regulatory and compliance requirements. The AWS GovCloud (US) framework allows US government agencies and their contractors to comply with US International Traffic in Arms Regulations (ITAR) regulations, as well as the Federal Risk and Authorization Management Program (FedRAMP) requirements. AWS GovCloud (US) has received an Agency Authorization to Operate (ATO) from the US Department of Health and Human Services (HHS), utilizing a FedRAMP-accredited Third Party Assessment Organization (3PAO) for several AWS services. The AWS GovCloud (US) Region provides the same fault-tolerant design as other regions, with two
Availability Zones. In addition, the AWS GovCloud (US) region mandates use of the AWS Virtual Private Cloud (VPC) service by default, to create an isolated portion of the AWS cloud in which you can launch Amazon EC2 instances that have private (RFC 1918) addresses. More information about GovCloud is available on the AWS website: http://aws.amazon.com/govcloud-us/

Figure 2: Regions and Availability Zones. (Note that the number of Availability Zones may change.)

Network Monitoring and Protection
AWS utilizes a wide variety of automated monitoring systems to provide a high level of service performance and availability. AWS monitoring tools are designed to detect unusual or unauthorized activities and conditions at ingress and egress communication points. These tools monitor server and network usage, port-scanning activities, application usage, and unauthorized intrusion attempts. The tools have the ability to set custom performance-metric thresholds for unusual activity.

Systems within AWS are extensively instrumented to monitor key operational metrics. Alarms are configured to automatically notify operations and management personnel when early-warning thresholds are crossed on key operational metrics. An on-call schedule is used so personnel are always available to respond to operational issues. This includes a pager system, so alarms are quickly and reliably communicated to operations personnel.

Documentation is maintained to aid and inform operations personnel in handling incidents or issues. If the resolution of an issue requires collaboration, a conferencing system is used that supports communication and logging capabilities. Trained call leaders facilitate communication and progress during the handling of operational issues that require collaboration. Post-mortems are convened after any significant operational issue, regardless of external impact, and Cause of Error (COE) documents are drafted so the root cause is captured and preventative actions are taken in the future.
Implementation of the preventative measures is tracked during weekly operations meetings.

AWS security monitoring tools help identify several types of denial-of-service (DoS) attacks, including distributed, flooding, and software/logic attacks. When DoS attacks are identified, the AWS incident response process is initiated. In addition to the DoS prevention tools, redundant telecommunication providers at each region, as well as additional capacity, protect against the possibility of DoS attacks.

The AWS network provides significant protection against traditional network security issues, and you can implement further protection. The following are a few examples:
• Distributed Denial of Service (DDoS) attacks: AWS API endpoints are hosted on large, Internet-scale, world-class infrastructure that benefits from the same engineering expertise that has built Amazon into the world's largest online retailer. Proprietary DDoS mitigation techniques are used. Additionally, AWS' networks are multi-homed across a number of providers to achieve Internet access diversity.
• Man-in-the-middle (MITM) attacks: All of the AWS APIs are available via SSL-protected endpoints, which provide server authentication. Amazon EC2 AMIs automatically generate new SSH host certificates on first boot and log them to the instance's console. You can then use the secure APIs to call the console and access the host certificates before logging into the instance for the first time. We encourage you to use SSL for all of your interactions with AWS.
• IP spoofing: Amazon EC2 instances cannot send spoofed network traffic. The AWS-controlled, host-based firewall infrastructure will not permit an instance to send traffic with a source IP or MAC address other than its own.
• Port scanning: Unauthorized port scans by Amazon EC2 customers are a violation of the AWS Acceptable Use Policy. Violations of the AWS Acceptable Use Policy are taken seriously, and every reported violation is investigated. Customers can report
suspected abuse via the contacts available on our website at http://aws.amazon.com/contact-us/report-abuse/. When unauthorized port scanning is detected by AWS, it is stopped and blocked. Port scans of Amazon EC2 instances are generally ineffective because, by default, all inbound ports on Amazon EC2 instances are closed and are only opened by you. Your strict management of security groups can further mitigate the threat of port scans. If you configure a security group to allow traffic from any source to a specific port, then that specific port will be vulnerable to a port scan. In these cases, you must use appropriate security measures to protect listening services that may be essential to your application from being discovered by an unauthorized port scan. For example, a web server must clearly have port 80 (HTTP) open to the world, and the administrator of this server is responsible for the security of the HTTP server software, such as Apache. You may request permission to conduct vulnerability scans as required to meet your specific compliance requirements. These scans must be limited to your own instances and must not violate the AWS Acceptable Use Policy.
• Packet sniffing by other tenants: It is not possible for a virtual instance running in promiscuous mode to receive or "sniff" traffic that is intended for a different virtual instance. While you can place your interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer, located on the same physical host, cannot listen to each other's traffic. Attacks such as ARP cache poisoning do not work within Amazon EC2 and Amazon VPC. While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another's data, as a standard practice you should encrypt sensitive traffic.

In addition to monitoring, regular vulnerability scans are performed on the host operating system, web application, and databases in the AWS environment, using a variety of tools.
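The default-closed model described under port scanning can be sketched as follows: an instance's port is reachable only where an explicit security-group ingress rule opens it. The rule structure below is simplified for illustration and is not the EC2 API's actual shape:

```python
from ipaddress import ip_address, ip_network

# Hypothetical ingress rules for one security group.
ingress_rules = [
    {"port": 80, "cidr": ip_network("0.0.0.0/0")},        # web server: open to the world
    {"port": 22, "cidr": ip_network("198.51.100.0/24")},  # SSH: admin range only
]

def port_reachable(port, source_ip):
    # A port is open only if some rule matches both port and source;
    # with no matching rule, the port is closed by default.
    return any(
        r["port"] == port and ip_address(source_ip) in r["cidr"]
        for r in ingress_rules
    )

print(port_reachable(80, "203.0.113.9"))   # True: port 80 is deliberately world-open
print(port_reachable(22, "203.0.113.9"))   # False: scanner is outside the admin range
print(port_reachable(443, "203.0.113.9"))  # False: no rule exists, closed by default
```

In this model a port scan from an arbitrary address only ever discovers the ports you chose to expose broadly, which is why the services listening on those ports carry the remaining security burden.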
AWS Security teams also subscribe to newsfeeds for applicable vendor flaws, and proactively monitor vendors' websites and other relevant outlets for new patches. AWS customers also have the ability to report issues to AWS via the AWS Vulnerability Reporting website at http://aws.amazon.com/security/vulnerability-reporting/

Further Reading
https://aws.amazon.com/security/security-resources/
Introduction to AWS Security Processes
Overview of AWS Security – Storage Services
Overview of AWS Security – Database Services
Overview of AWS Security – Compute Services
Overview of AWS Security – Application Services
Overview of AWS Security – Analytics, Mobile and Application Services
Overview of AWS Security – Network Services

Overview of AWS Security – Storage Services
June 2016
(Please consult http://aws.amazon.com/security/ for the latest version of this paper.)
This paper has been archived; for the latest technical content, see https://docs.aws.amazon.com/security/
© 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved.
Storage Services
Amazon Web Services provides low-cost data storage with high durability and availability. AWS offers storage choices for backup, archiving, and disaster recovery, as well as block and object storage.

Amazon Simple Storage Service (Amazon S3) Security
Amazon Simple Storage Service (S3) allows you to upload and retrieve data at any time, from anywhere on the web. Amazon S3 stores data as objects within buckets. An object can be any kind of file: a text file, a photo, a video, etc. When you add a file to Amazon S3, you have the option of including metadata with the file and setting permissions to control access to the file. For each bucket, you can control access to the bucket (who can create, delete, and list objects in the bucket), view access logs for the bucket and its objects, and choose the geographical region where Amazon S3 will store the bucket and its contents.

Data Access
Access to data stored in Amazon S3 is restricted by default; only bucket and object owners have access to the Amazon S3 resources they create. (Note that a bucket/object owner is the AWS account owner, not the user who created the bucket/object.) There are multiple ways to control access to buckets and objects:
• Identity and Access Management (IAM) policies: AWS IAM enables organizations with many employees to create and manage multiple users under a single AWS account. IAM policies are attached to the users, enabling centralized control of permissions for users under your AWS account to access buckets or objects. With IAM policies, you can only grant users within your own AWS account permission to access your Amazon S3 resources.
• Access Control Lists (ACLs): Within Amazon S3, you can use ACLs to give read or write access on buckets or objects to groups of users. With ACLs, you can only grant other AWS accounts (not specific users) access to your Amazon S3 resources.
• Bucket policies: Bucket policies in Amazon S3 can be used to add or deny
permissions across some or all of the objects within a single bucket. Policies can be attached to users, groups, or Amazon S3 buckets, enabling centralized management of permissions. With bucket policies, you can grant users within your AWS account or other AWS accounts access to your Amazon S3 resources.

Type of Access Control    AWS Account-Level Control?    User-Level Control?
IAM Policies              No                            Yes
ACLs                      Yes                           No
Bucket Policies           Yes                           Yes

You can further restrict access to specific resources based on certain conditions. For example, you can restrict access based on request time (Date condition), whether the request was sent using SSL (Boolean condition), a requester's IP address (IP Address condition), or the requester's client application (String conditions). To identify these conditions, you use policy keys. For more information about action-specific policy keys available within Amazon S3, refer to the Amazon Simple Storage Service Developer Guide.

Amazon S3 also gives developers the option to use query string authentication, which allows them to share Amazon S3 objects through URLs that are valid for a predefined period of time. Query string authentication is useful for giving HTTP or browser access to resources that would normally require authentication. The signature in the query string secures the request.

Data Transfer
For maximum security, you can securely upload/download data to Amazon S3 via the SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2, so that data is transferred securely both within AWS and to and from sources outside of AWS.

Data Storage
Amazon S3 provides multiple options for protecting data at rest. Customers who prefer to manage their own encryption can use a client encryption library, like the Amazon S3 Encryption Client, to encrypt data before uploading to Amazon S3. Alternatively, you can use Amazon S3 Server Side Encryption (SSE) if you prefer to have Amazon S3 manage the encryption process for you.
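The query string authentication described above can be sketched with the legacy Signature Version 2 scheme, in which the URL carries an expiry time and an HMAC-SHA1 signature over a canonical string. The credentials below are the fake documentation placeholders, and current SDKs generate presigned URLs with Signature Version 4 instead, so treat this as an illustration of the mechanism rather than a recipe:

```python
import base64
import hashlib
import hmac
import urllib.parse

# Fake placeholder credentials (AWS's standard documentation examples).
access_key = "AKIAIOSFODNN7EXAMPLE"
secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
bucket, key, expires = "example-bucket", "photo.jpg", 1490000000  # Unix expiry time

# Canonical string for a presigned GET with no extra headers:
# verb, Content-MD5, Content-Type, Expires, canonicalized resource.
string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
signature = base64.b64encode(digest).decode()

url = (
    f"https://{bucket}.s3.amazonaws.com/{key}"
    f"?AWSAccessKeyId={access_key}"
    f"&Expires={expires}"
    f"&Signature={urllib.parse.quote(signature, safe='')}"
)
print(url)  # anyone holding this URL can GET the object until it expires
```

Because the signature covers the expiry time, tampering with either the path or the Expires parameter invalidates the URL, which is what makes time-limited sharing safe.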
Data is encrypted with a key generated by AWS or with a key you supply, depending on your requirements. With Amazon S3 SSE, you can encrypt data on upload simply by adding an additional request header when writing the object. Decryption happens automatically when data is retrieved.

Note that metadata, which you can include with your object, is not encrypted. Therefore, AWS recommends that customers not place sensitive information in Amazon S3 metadata.

Amazon S3 SSE uses one of the strongest block ciphers available: 256-bit Advanced Encryption Standard (AES-256). With Amazon S3 SSE, every protected object is encrypted with a unique encryption key. This object key itself is then encrypted with a regularly rotated master key. Amazon S3 SSE provides additional security by storing the encrypted data and encryption keys in different hosts. Amazon S3 SSE also makes it possible for you to enforce encryption requirements. For example, you can create and apply bucket policies that require that only encrypted data can be uploaded to your buckets.

For long-term storage, you can automatically archive the contents of your Amazon S3 buckets to AWS' archival service, Amazon Glacier. You can have data transferred at specific intervals to Glacier by creating lifecycle rules in Amazon S3 that describe which objects you want to be archived to Glacier, and when. As part of your data management strategy, you can also specify how long Amazon S3 should wait after objects are put into Amazon S3 to delete them.

When an object is deleted from Amazon S3, removal of the mapping from the public name to the object starts immediately, and is generally processed across the distributed system within several seconds. Once the mapping is removed, there is no remote access to the deleted object. The underlying storage area is then reclaimed for use by the system.

Data Durability and Reliability
Amazon S3 is designed to provide 99.999999999% durability and 99.99%
Data Durability and Reliability

Amazon S3 is designed to provide 99.999999999% durability and 99.99% availability of objects over a given year. Objects are redundantly stored on multiple devices across multiple facilities in an Amazon S3 region. To help provide durability, Amazon S3 PUT and COPY operations synchronously store customer data across multiple facilities before returning SUCCESS. Once stored, Amazon S3 helps maintain the durability of objects by quickly detecting and repairing any lost redundancy. Amazon S3 also regularly verifies the integrity of stored data using checksums; if corruption is detected, it is repaired using redundant data. In addition, Amazon S3 calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.

Amazon S3 provides further protection via Versioning. You can use Versioning to preserve, retrieve, and restore every version of every object stored in an Amazon S3 bucket. With Versioning, you can easily recover from both unintended user actions and application failures. By default, requests will retrieve the most recently written version; older versions of an object can be retrieved by specifying a version in the request. You can further protect versions using Amazon S3 Versioning's MFA Delete feature: once enabled for an Amazon S3 bucket, each version-deletion request must include the six-digit code and serial number from your multi-factor authentication device.

Access Logs

An Amazon S3 bucket can be configured to log access to the bucket and the objects within it. The access log contains details about each access request, including the request type, the requested resource, the requester's IP address, and the time and date of the request. When logging is enabled for a bucket, log records are periodically aggregated into log files and delivered to the specified Amazon S3 bucket.

Cross-Origin Resource Sharing (CORS)

AWS customers who use Amazon S3 to host static web pages or store objects used by other web pages can load content securely by configuring an Amazon S3 bucket to explicitly enable cross-origin requests.
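Cross-origin access is granted through a CORS configuration document attached to the bucket. A minimal rule (the origin shown is hypothetical) might look like:

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://www.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
```

Each rule names the origins, HTTP methods, and headers that the bucket will accept from cross-origin callers; requests that match no rule are blocked by the browser's same-origin enforcement.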
Modern browsers use the Same-Origin policy to block JavaScript or HTML5 from loading content from another site or domain, as a way to help ensure that malicious content is not loaded from a less reputable source (such as during cross-site scripting attacks). With a Cross-Origin Resource Sharing (CORS) policy enabled, assets such as web fonts and images stored in an Amazon S3 bucket can be safely referenced by external web pages, style sheets, and HTML5 applications.

Amazon Glacier Security

Like Amazon S3, the Amazon Glacier service provides low-cost, secure, and durable storage. But where Amazon S3 is designed for rapid retrieval, Amazon Glacier is meant to be used as an archival service for data that is not accessed often and for which retrieval times of several hours are suitable.

Amazon Glacier stores files as archives within vaults. Archives can be any data, such as a photo, video, or document, and can contain one or several files. You can store an unlimited number of archives in a single vault and can create up to 1,000 vaults per region. Each archive can contain up to 40 TB of data.

Data Upload

To transfer data into Amazon Glacier vaults, you can upload an archive in a single upload operation or in a multipart operation. In a single upload operation you can upload archives up to 4 GB in size; however, customers can achieve better results using the Multipart Upload API for archives greater than 100 MB. Using the Multipart Upload API allows you to upload large archives, up to about 40 TB. The Multipart Upload API call is designed to improve the upload experience for larger archives: it enables the parts to be uploaded independently, in any order, and in parallel. If a multipart upload fails, you only need to upload the failed part again, not the entire archive.

When you upload data to Amazon Glacier, you must compute and supply a tree hash. Amazon Glacier checks the hash against the data to help ensure that it has not been altered en route. A tree hash is generated by computing a hash for each megabyte-sized segment of the data and then combining the hashes in tree fashion to represent ever-growing adjacent segments of the data.
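The tree-hash construction described above can be sketched in a few lines of Python. This is an illustrative implementation (SHA-256 over 1 MiB segments, combined pairwise up the tree), not AWS's code:

```python
import hashlib

MIB = 1024 * 1024  # one megabyte-sized segment

def tree_hash(data: bytes) -> bytes:
    """Compute a Glacier-style tree hash: hash each 1 MiB segment,
    then repeatedly combine adjacent hashes until one root remains."""
    # Leaf hashes, one per 1 MiB segment (empty input yields one leaf).
    leaves = [hashlib.sha256(data[i:i + MIB]).digest()
              for i in range(0, len(data), MIB)] or [hashlib.sha256(b"").digest()]
    # Combine adjacent pairs; an odd leftover leaf is promoted unchanged.
    while len(leaves) > 1:
        paired = [hashlib.sha256(leaves[i] + leaves[i + 1]).digest()
                  for i in range(0, len(leaves) - 1, 2)]
        if len(leaves) % 2:
            paired.append(leaves[-1])
        leaves = paired
    return leaves[0]
```

For payloads of 1 MiB or less the tree hash collapses to the plain SHA-256 of the data, which makes the function easy to sanity-check.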
As an alternative to using the Multipart Upload feature, customers with very large uploads to Amazon Glacier may consider using the AWS Import/Export service instead to transfer the data. AWS Import/Export facilitates moving large amounts of data into AWS using portable storage devices for transport. AWS transfers your data directly off of storage devices using Amazon's high-speed internal network, bypassing the Internet.

You can also set up Amazon S3 to transfer data to Amazon Glacier at specific intervals: you can create lifecycle rules in Amazon S3 that describe which objects you want archived to Amazon Glacier, and when, and you can also specify how long Amazon S3 should wait after the objects are put into Amazon S3 before deleting them.

To achieve even greater security, you can securely upload/download data to Amazon Glacier via the SSL-encrypted endpoints. The encrypted endpoints are accessible both from the Internet and from within Amazon EC2, so that data is transferred securely within AWS as well as to and from sources outside of AWS.

Data Retrieval

Retrieving archives from Amazon Glacier requires the initiation of a retrieval job, which is generally completed in 3 to 5 hours. You can then access the data via HTTP GET requests; the data will remain available to you for 24 hours.

You can retrieve an entire archive or several files from an archive. If you want to retrieve only a subset of an archive, you can use one retrieval request to specify the range of the archive that contains the files you are interested in, or you can initiate multiple retrieval requests, each with a range for one or more files. You can also limit the number of vault inventory items retrieved by filtering on an archive creation date range or by setting a maximum items limit. Whichever method you choose, when you retrieve portions of your archive you can use the supplied checksum to help ensure the integrity of the files, provided that the range that is retrieved is aligned with the tree hash of the overall archive.
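A retrieved range can only be checked against the archive's tree hash if it lines up with megabyte boundaries. The helper below is a simplified sketch that snaps a requested byte range outward to 1 MiB boundaries; the full tree-hash alignment rules are stricter, so treat this as an illustration rather than a complete implementation:

```python
MIB = 1024 * 1024

def mib_aligned_range(start, end, archive_size):
    """Widen an inclusive byte range [start, end] to 1 MiB boundaries
    so a checksum over the retrieved part can be verified (simplified;
    see the Amazon Glacier documentation for full alignment rules)."""
    aligned_start = (start // MIB) * MIB              # snap start down
    aligned_end = ((end // MIB) + 1) * MIB - 1        # snap end up
    return aligned_start, min(aligned_end, archive_size - 1)
```

The widened range always contains the bytes originally requested, at the cost of transferring up to two extra partial megabytes.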
Data Storage

Amazon Glacier automatically encrypts the data using AES-256 and stores it durably in an immutable form. Amazon Glacier is designed to provide average annual durability of 99.999999999% for an archive. It stores each archive across multiple facilities and multiple devices. Unlike traditional systems, which can require laborious data verification and manual repair, Amazon Glacier performs regular, systematic data integrity checks and is built to be automatically self-healing.

Data Access

Only your account can access your data in Amazon Glacier. To control access to your data in Amazon Glacier, you can use AWS IAM to specify which users within your account have rights to operations on a given vault.

AWS Storage Gateway Security

The AWS Storage Gateway service connects your on-premises software appliance with cloud-based storage to provide seamless and secure integration between your IT environment and AWS's storage infrastructure. The service enables you to securely upload data to AWS's scalable, reliable, and secure Amazon S3 storage service for cost-effective backup and rapid disaster recovery.

AWS Storage Gateway transparently backs up data off-site to Amazon S3 in the form of Amazon EBS snapshots. Amazon S3 redundantly stores these snapshots on multiple devices across multiple facilities, detecting and repairing any lost redundancy. The Amazon EBS snapshot provides a point-in-time backup that can be restored on-premises or used to instantiate new Amazon EBS volumes. Data is stored within a single region that you specify.

AWS Storage Gateway offers three options:

• Gateway-Stored Volumes (where the cloud is backup). In this option, your volume data is stored locally and then pushed to Amazon S3, where it is stored in redundant, encrypted form and made available in the form of Elastic Block Storage (EBS) snapshots.
When you use this model, the on-premises storage is primary, delivering low-latency access to your entire dataset, and the cloud storage is the backup.

• Gateway-Cached Volumes (where the cloud is primary). In this option, your volume data is stored encrypted in Amazon S3, visible within your enterprise's network via an iSCSI interface. Recently accessed data is cached on-premises for low-latency local access. When you use this model, the cloud storage is primary, but you get low-latency access to your active working set in the cached volumes on-premises.

• Gateway-Virtual Tape Library (VTL). In this option, you can configure a Gateway-VTL with up to 10 virtual tape drives per gateway, 1 media changer, and up to 1,500 virtual tape cartridges. Each virtual tape drive responds to the SCSI command set, so your existing on-premises backup applications (either disk-to-tape or disk-to-disk-to-tape) will work without modification.

No matter which option you choose, data is asynchronously transferred from your on-premises storage hardware to AWS over SSL. The data is stored encrypted in Amazon S3 using Advanced Encryption Standard (AES) 256, a symmetric-key encryption standard using 256-bit encryption keys. The AWS Storage Gateway only uploads data that has changed, minimizing the amount of data sent over the Internet.

The AWS Storage Gateway runs as a virtual machine (VM) that you deploy on a host in your data center running VMware ESXi Hypervisor v4.1 or v5, or Microsoft Hyper-V (you download the VMware software during the setup process). You can also run it within EC2 using a gateway AMI. During the installation and configuration process, you can create up to 12 stored volumes, 20 cached volumes, or 1,500 virtual tape cartridges per gateway. Once installed, each gateway will automatically download, install, and deploy updates and patches. This activity takes place during a maintenance window that you can set on a per-gateway basis.
The iSCSI protocol supports authentication between targets and initiators via CHAP (Challenge-Handshake Authentication Protocol). CHAP provides protection against man-in-the-middle and playback attacks by periodically verifying the identity of an iSCSI initiator as authenticated to access a storage volume target. To set up CHAP, you must configure it both in the AWS Storage Gateway console and in the iSCSI initiator software you use to connect to the target.

After you deploy the AWS Storage Gateway VM, you must activate the gateway using the AWS Storage Gateway console. The activation process associates your gateway with your AWS account. Once you establish this connection, you can manage almost all aspects of your gateway from the console. In the activation process, you specify the IP address of your gateway, name your gateway, identify the AWS region in which you want your snapshot backups stored, and specify the gateway time zone.

AWS Import/Export Security

AWS Import/Export is a simple, secure method for physically transferring large amounts of data to Amazon S3, Amazon EBS, or Amazon Glacier storage. This service is typically used by customers who have over 100 GB of data and/or slow connection speeds that would result in very slow transfer rates over the Internet. With AWS Import/Export, you prepare a portable storage device that you ship to a secure AWS facility. AWS transfers the data directly off of the storage device using Amazon's high-speed internal network, thus bypassing the Internet. Conversely, data can also be exported from AWS to a portable storage device.

Like all other AWS services, the AWS Import/Export service requires that you securely identify and authenticate your storage device. In this case, you will submit a job request to AWS that includes your Amazon S3 bucket, Amazon EBS region, AWS Access Key ID, and return shipping address. You then receive a unique identifier for the job, a digital signature for authenticating your device, and an AWS address to ship the storage device to.
For Amazon S3, you place the signature file in the root directory of your device. For Amazon EBS, you tape the signature barcode to the exterior of the device. The signature file is used only for authentication and is not uploaded to Amazon S3 or EBS.

For transfers to Amazon S3, you specify the buckets to which the data should be loaded and ensure that the account doing the loading has write permission for those buckets. You should also specify the access control list to be applied to each object loaded to Amazon S3.

For transfers to EBS, you specify the target region for the EBS import operation. If the storage device is less than or equal to the maximum volume size of 1 TB, its contents are loaded directly into an Amazon EBS snapshot. If the storage device's capacity exceeds 1 TB, a device image is stored within the specified S3 log bucket. You can then create a RAID of Amazon EBS volumes using software such as Logical Volume Manager, and copy the image from S3 to this new volume.

For added protection, you can encrypt the data on your device before you ship it to AWS. For Amazon S3 data, you can use a PIN-code device with hardware encryption, or TrueCrypt software, to encrypt your data before sending it to AWS. For EBS and Amazon Glacier data, you can use any encryption method you choose, including a PIN-code device. AWS will decrypt your Amazon S3 data before importing, using the PIN code and/or TrueCrypt password you supply in your import manifest. AWS uses your PIN to access a PIN-code device, but does not decrypt software-encrypted data for import to Amazon EBS or Amazon Glacier.

AWS Import/Export Snowball uses appliances designed for security, together with the Snowball client, to accelerate petabyte-scale data transfers into and out of AWS. You start by using the AWS Management Console to create one or more jobs requesting one or multiple Snowball appliances (depending on how much data you need to transfer), and then download and install the Snowball client.
Once the appliance arrives, connect it to your local network, set the IP address either manually or with DHCP, and use the client to identify the directories you want to copy. The client will automatically encrypt and copy the data to the appliance and notify you when the transfer job is complete.

After the import is complete, AWS Import/Export erases the contents of your storage device to safeguard the data during return shipment. AWS overwrites all writable blocks on the storage device with zeroes. If AWS is unable to erase the data on the device, it will be scheduled for destruction, and our support team will contact you using the email address specified in the manifest file you ship with the device.

When shipping a device internationally, the customs option and certain required subfields must be included in the manifest file sent to AWS. AWS Import/Export uses these values to validate the inbound shipment and prepare the outbound customs paperwork. Two of these options are whether the data on the device is encrypted or not, and the encryption software's classification. When shipping encrypted data to or from the United States, the encryption software must be classified as 5D992 under the United States Export Administration Regulations.

Further Reading

https://aws.amazon.com/security/security-resources/

Introduction to AWS Security Processes
Overview of AWS Security – Storage Services
Overview of AWS Security – Database Services
Overview of AWS Security – Compute Services
Overview of AWS Security – Application Services
Overview of AWS Security – Analytics, Mobile, and Application Services
Overview of AWS Security – Network Services

Overview of Deployment Options on AWS

AWS Whitepaper

Copyright © Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon.
Table of Contents

• Abstract
• Introduction
• AWS Deployment Services
  • AWS CloudFormation
  • AWS Elastic Beanstalk
  • AWS CodeDeploy
  • Amazon Elastic Container Service
  • Amazon Elastic Kubernetes Service
  • AWS OpsWorks
  • Additional Deployment Services
• Deployment Strategies
  • Prebaking vs. Bootstrapping AMIs
  • Blue/Green Deployments
  • Rolling Deployments
  • In-Place Deployments
  • Combining Deployment Services
• Conclusion
• Contributors
• Further Reading
• Document Revisions
• Notices

Abstract

Publication date: June 3, 2020 (see Document Revisions)

Amazon Web Services (AWS) offers multiple options for provisioning infrastructure and deploying your applications. Whether your application architecture is a simple three-tier web application or a complex set of workloads, AWS offers deployment services to meet the requirements of your application and your organization. This whitepaper is intended for those individuals looking for an overview of the different deployment services offered by AWS. It lays out common features available in these deployment services and articulates basic strategies for deploying and updating application stacks.

Introduction

Designing a deployment solution for your application is a critical part of building a well-architected application on AWS. Based on the nature of your application and the underlying services (compute, storage, database, etc.) that it requires, you can use AWS services to create a flexible deployment solution that can be tailored to fit the needs of both your application and your organization.
The constantly growing catalog of AWS services not only complicates the process of deciding which services will compose your application architecture, but also the process of deciding how you will create, manage, and update your application. When designing a deployment solution on AWS, you should consider how your solution will address the following capabilities:

• Provision: create the raw infrastructure (Amazon EC2, Amazon Virtual Private Cloud [Amazon VPC], subnets, etc.) or managed-service infrastructure (Amazon Simple Storage Service [Amazon S3], Amazon Relational Database Service [Amazon RDS], Amazon CloudFront, etc.) required for your application.
• Configure: customize your infrastructure based on environment, runtime, security, availability, performance, network, or other application requirements.
• Deploy: install or update your application component(s) onto infrastructure resources, and manage the transition from a previous application version to a new application version.
• Scale: proactively or reactively adjust the amount of resources available to your application based on a set of user-defined criteria.
• Monitor: provide visibility into the resources that are launched as part of your application architecture. Track resource usage, deployment success/failure, application health, application logs, configuration drift, and more.

This whitepaper highlights the deployment services offered by AWS and outlines strategies for designing a successful deployment architecture for any type of application.

AWS Deployment Services

The task of designing a scalable, efficient, and cost-effective deployment solution should not be limited to the question of how you will update your application version; it should also consider how you will manage supporting infrastructure throughout the complete application lifecycle. Resource provisioning, configuration management, application deployment, software updates, monitoring, access control, and other concerns are all important factors to consider when designing a deployment solution.
AWS provides a number of services that offer management capabilities for one or more aspects of your application lifecycle. Depending on your desired balance of control (i.e., manual management of resources) versus convenience (i.e., AWS management of resources), and on the type of application, these services can be used on their own or combined to create a feature-rich deployment solution. This section provides an overview of the AWS services that enable organizations to more rapidly and reliably build and deliver applications.

AWS CloudFormation

AWS CloudFormation is a service that enables customers to provision and manage almost any AWS resource using a custom template language expressed in YAML or JSON. A CloudFormation template creates infrastructure resources in a group called a "stack" and allows you to define and customize all components needed to operate your application, while retaining full control of these resources. Using templates introduces the ability to implement version control on your infrastructure and to quickly and reliably replicate your infrastructure.

CloudFormation offers granular control over the provisioning and management of all application infrastructure components, from low-level components such as route tables or subnet configurations to high-level components such as CloudFront distributions. CloudFormation is commonly used with other AWS deployment services or third-party tools, combining CloudFormation with more specialized deployment services to manage deployments of application code onto infrastructure components.

AWS offers extensions to the CloudFormation service in addition to its base features:

• AWS Cloud Development Kit (AWS CDK) is an open-source software development kit (SDK) to programmatically model AWS infrastructure with TypeScript, Python, Java, or .NET.
• AWS Serverless Application Model (SAM) is an open-source framework to simplify building serverless applications on AWS.

Table 1: AWS CloudFormation deployment features

Provision — CloudFormation will automatically create and update infrastructure components that are defined in a template. Refer to AWS CloudFormation Best Practices for more details on creating infrastructure using CloudFormation templates.

Configure — CloudFormation templates offer extensive flexibility to customize and update all infrastructure components. Refer to CloudFormation Template Anatomy for more details on customizing templates.

Deploy — Update your CloudFormation templates to alter the resources in a stack. Depending on your application architecture, you may need to use an additional deployment service to update the application version running on your infrastructure. Refer to Deploying Applications on EC2 with AWS CloudFormation for more details on how CloudFormation can be used as a deployment solution.

Scale — CloudFormation will not automatically handle infrastructure scaling on your behalf; however, you can configure auto scaling policies for your resources in a CloudFormation template.

Monitor — CloudFormation provides native monitoring of the success or failure of updates to infrastructure defined in a template, as well as "drift detection" to flag when resources defined in a template do not meet specifications. Additional monitoring solutions will need to be in place for application-level monitoring and metrics. Refer to Monitoring the Progress of a Stack Update for more details on how CloudFormation monitors infrastructure updates.

The following diagram shows a common use case for CloudFormation. Here, CloudFormation templates are created to define all infrastructure components necessary to create a simple three-tier web application. In this example, we are using bootstrap scripts defined in CloudFormation to deploy the latest version of our application onto EC2 instances.
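A template for the stack model described above can be very small. The following illustrative template (the logical name and bucket name are hypothetical) defines a one-resource stack:

```yaml
# Minimal illustrative CloudFormation template.
AWSTemplateFormatVersion: '2010-09-09'
Description: Example stack containing a single S3 bucket
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-deployment-artifacts
Outputs:
  BucketArn:
    Value: !GetAtt ExampleBucket.Arn
```

Creating, updating, or deleting the stack from this template provisions, modifies, or removes the bucket as a unit, which is what makes templates practical to keep under version control.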
However, it is also a common practice to combine additional deployment services with CloudFormation, using CloudFormation only for its infrastructure management and provisioning capabilities. Note that more than one CloudFormation template is used to create the infrastructure.

Figure 1: AWS CloudFormation use case

AWS Elastic Beanstalk

AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, or Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. Elastic Beanstalk is a complete application management solution that handles all infrastructure and platform tasks on your behalf. With Elastic Beanstalk you can quickly deploy, manage, and scale applications without the operational burden of managing infrastructure. Elastic Beanstalk reduces management complexity for web applications, making it a good choice for organizations that are new to AWS or wish to deploy a web application as quickly as possible.

When using Elastic Beanstalk as your deployment solution, simply upload your source code, and Elastic Beanstalk will provision and operate all necessary infrastructure, including servers, databases, load balancers, networks, and auto scaling groups. Although these resources are created on your behalf, you retain full control of them, allowing developers to customize as needed.

Table 2: AWS Elastic Beanstalk deployment features

Provision — Elastic Beanstalk will create all infrastructure components necessary to operate a web application or service that runs on one of its supported platforms. If you need additional infrastructure, it will have to be created outside of Elastic Beanstalk.
Refer to Elastic Beanstalk Platforms for more details on the web application platforms supported by Elastic Beanstalk.

Configure — Elastic Beanstalk provides a wide range of options for customizing the resources in your environment. Refer to Configuring Elastic Beanstalk Environments for more information about customizing the resources that are created by Elastic Beanstalk.

Deploy — Elastic Beanstalk automatically handles application deployments and creates an environment that runs a new version of your application without impacting existing users. Refer to Deploying Applications to AWS Elastic Beanstalk for more details on application deployments with Elastic Beanstalk.

Scale — Elastic Beanstalk will automatically handle scaling of your infrastructure with managed auto scaling groups for your application instances. Refer to Auto Scaling Group for Your Elastic Beanstalk Environment for more details about auto scaling with Elastic Beanstalk.

Monitor — Elastic Beanstalk offers built-in environment monitoring for applications, including deployment successes/failures, environment health, resource performance, and application logs. Refer to Monitoring an Environment for more details on full-stack monitoring with Elastic Beanstalk.

Elastic Beanstalk makes it easy for web applications to be quickly deployed and managed in AWS. The following example shows a general use case for Elastic Beanstalk as it is used to deploy a simple web application.

Figure 2: AWS Elastic Beanstalk use case

AWS CodeDeploy

AWS CodeDeploy is a fully managed deployment service that automates application deployments to compute services such as Amazon EC2, Amazon Elastic Container Service (Amazon ECS), AWS Lambda, or on-premises servers. Organizations can use CodeDeploy to automate deployments of an application and remove error-prone manual operations from the deployment process. CodeDeploy can be used with a wide variety of application content, including code, serverless functions, configuration files, and more.
CodeDeploy is intended to be used as a "building block" service focused on helping application developers deploy and update software running on existing infrastructure. It is not an end-to-end application management solution; it is intended to be used in conjunction with other AWS deployment services, such as AWS CodeStar, AWS CodePipeline, other AWS Developer Tools, and third-party services (see AWS CodeDeploy Product Integrations for a complete list), as part of a complete CI/CD pipeline. Additionally, CodeDeploy does not manage the creation of resources on behalf of the user.

Table 3: AWS CodeDeploy deployment features

Provision — CodeDeploy is intended for use with existing compute resources and does not create resources on your behalf. CodeDeploy requires compute resources to be organized into a construct called a "deployment group" in order to deploy application content. Refer to Working with Deployment Groups in CodeDeploy for more details on linking CodeDeploy to compute resources.

Configure — CodeDeploy uses an application specification file to define customizations for compute resources. Refer to the CodeDeploy AppSpec File Reference for more details on resource customizations with CodeDeploy.

Deploy — Depending on the type of compute resource that CodeDeploy is used with, CodeDeploy offers different strategies for deploying your application. Refer to Working with Deployments in CodeDeploy for more details on the types of deployment processes that are supported.

Scale — CodeDeploy does not support scaling of your underlying application infrastructure; however, depending on your deployment configurations, it may create additional resources to support blue/green deployments.

Monitor — CodeDeploy offers monitoring of the success or failure of deployments, as well as a history of all deployments, but does not provide performance or application-level metrics. Refer to Monitoring Deployments in CodeDeploy for more details on the types of monitoring capabilities offered by CodeDeploy.
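The application specification file mentioned in Table 3 is a YAML document named appspec.yml at the root of the application revision. A minimal EC2/on-premises example (the paths and hook script shown are hypothetical) might look like:

```yaml
version: 0.0
os: linux
files:
  - source: /app                      # path within the revision bundle
    destination: /var/www/app         # where it lands on the instance
hooks:
  AfterInstall:
    - location: scripts/restart_server.sh
      timeout: 60
      runas: root
```

The files section maps revision content onto the instance, while the hooks section runs lifecycle scripts (here, after installation) so each deployment follows the same repeatable steps.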
The following diagram illustrates a general use case for CodeDeploy as part of a complete CI/CD solution. In this example, CodeDeploy is used in conjunction with additional AWS Developer Tools, namely AWS CodePipeline (automate CI/CD pipelines), AWS CodeBuild (build and test application components), and AWS CodeCommit (source code repository), to deploy an application onto a group of EC2 instances.

Figure 3: AWS CodeDeploy use case

Amazon Elastic Container Service

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that supports Docker containers and allows you to easily run applications on a managed cluster. Amazon ECS eliminates the need to install, operate, and scale container management infrastructure, and it simplifies the creation of environments with familiar AWS core features like security groups, Elastic Load Balancing, and AWS Identity and Access Management (IAM). When running applications on Amazon ECS, you can choose to provide the underlying compute power for your containers with Amazon EC2 instances or with AWS Fargate, a serverless compute engine for containers. In either case, Amazon ECS automatically places and scales your containers onto your cluster according to configurations defined by the user.

Although Amazon ECS does not create infrastructure components such as load balancers or IAM roles on your behalf, the Amazon ECS service provides a number of APIs to simplify the creation and use of these resources in an Amazon ECS cluster. Amazon ECS allows developers to have direct, fine-grained control over all infrastructure components, allowing for the creation of custom application architectures. Additionally, Amazon ECS supports different deployment strategies to update your application container images.
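In Amazon ECS, the runtime conditions mentioned above (image, ports, memory, environment variables) are captured in a task definition. An illustrative fragment (the names and values are hypothetical) might look like:

```json
{
  "family": "example-web",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:1.25",
      "memory": 256,
      "portMappings": [
        { "containerPort": 80 }
      ],
      "environment": [
        { "name": "APP_ENV", "value": "production" }
      ]
    }
  ]
}
```

Registering a new revision of the task definition and updating the service to reference it is how a new container image is rolled out across the cluster.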
Table 4: Amazon ECS deployment features

Provision — Amazon ECS will provision new application container instances and compute resources based on scaling policies and Amazon ECS configurations. Infrastructure resources such as load balancers will need to be created outside of Amazon ECS. Refer to Getting Started with Amazon ECS for more details on the types of resources that can be created with Amazon ECS.

Configure — Amazon ECS supports customization of the compute resources created to run a containerized application, as well as the runtime conditions of the application containers (e.g., environment variables, exposed ports, reserved memory/CPU). Customization of underlying compute resources is only available if using Amazon EC2 instances. Refer to Creating a Cluster for more details on how to customize an Amazon ECS cluster to run containerized applications.

Deploy — Amazon ECS supports several deployment strategies for your containerized applications. Refer to Amazon ECS Deployment Types for more details on the types of deployment processes that are supported.

Scale — Amazon ECS can be used with auto scaling policies to automatically adjust the number of containers running in your Amazon ECS cluster. Refer to Service Auto Scaling for more details on configuring auto scaling for your containerized applications on Amazon ECS.

Monitor — Amazon ECS supports monitoring compute resources and application containers with CloudWatch. Refer to Monitoring Amazon ECS for more details on the types of monitoring capabilities offered by Amazon ECS.

The following diagram illustrates Amazon ECS being used to manage a simple containerized application. In this example, infrastructure components are created outside of Amazon ECS, and Amazon ECS is used to manage the deployment and operation of application containers on the cluster.

Figure 4: Amazon ECS use case
Amazon ECS use case

Amazon Elastic Kubernetes Service

Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed, Certified Kubernetes-conformant service that simplifies the process of building, securing, operating, and maintaining Kubernetes clusters on AWS. Amazon EKS integrates with core AWS services such as CloudWatch, Auto Scaling Groups, and IAM to provide a seamless experience for monitoring, scaling, and load balancing your containerized applications. Amazon EKS also integrates with AWS App Mesh and provides a Kubernetes-native experience to consume service mesh features and bring rich observability, traffic controls, and security features to applications. Amazon EKS provides a scalable, highly available control plane for Kubernetes workloads. When running applications on Amazon EKS, as with Amazon ECS, you can choose to provide the underlying compute power for your containers with EC2 instances or with AWS Fargate.

Table 5: Amazon EKS deployment features

Provision: Amazon EKS provisions certain resources to support containerized applications:
•Load Balancers, if needed
•Compute resources (“workers”); Amazon EKS supports Windows and Linux
•Application container instances (“pods”)
Refer to Getting Started with Amazon EKS for more details on Amazon EKS cluster provisioning.
Configure: Amazon EKS supports customization of the compute resources (“workers”) if using EC2 instances to supply compute power. EKS also supports customization of the runtime conditions of the application containers (“pods”). Refer to the Worker Nodes and Fargate Pod Configuration documentation for more details.
Deploy: Amazon EKS supports the same deployment strategies as Kubernetes; see Writing a Kubernetes Deployment Spec > Strategy for more details.
Scale: Amazon EKS scales workers with the Kubernetes Cluster Autoscaler, and pods with the Kubernetes Horizontal Pod Autoscaler and Kubernetes Vertical Pod Autoscaler.
Monitor: The Amazon EKS control plane logs provide audit and diagnostic information directly to CloudWatch Logs. The Amazon EKS control plane also integrates with AWS CloudTrail to record actions taken in Amazon EKS. Refer to Logging and Monitoring Amazon EKS for more details.

Amazon EKS allows organizations to leverage open source Kubernetes tools and plugins, and can be a good choice for organizations migrating to AWS with existing Kubernetes environments. The following diagram illustrates Amazon EKS being used to manage a general containerized application.

Figure 5: Amazon EKS use case

AWS OpsWorks

AWS OpsWorks is a configuration management service that enables customers to construct, manage, and operate a wide variety of application architectures, from simple web applications to highly complex custom applications. Organizations deploying applications with OpsWorks use the automation platforms Chef or Puppet to manage key operational activities like server provisioning, software configurations, package installations, database setups, scaling, and code deployments. There are three ways to use OpsWorks:
•AWS OpsWorks for Chef Automate: a fully managed configuration management service that hosts Chef Automate
•AWS OpsWorks for Puppet Enterprise: a fully managed configuration management service that hosts Puppet Enterprise
•AWS OpsWorks Stacks: an application and server management service that supports modeling applications using the abstractions of “stacks” and “layers”, and that depends on Chef recipes for configuration management
With OpsWorks for Chef Automate and OpsWorks for Puppet Enterprise, AWS creates a fully managed instance of Chef or Puppet running on Amazon EC2. This instance manages configuration, deployment, and monitoring of nodes in your environment that are registered to the instance. When using OpsWorks with Chef Automate or Puppet Enterprise, additional services (e.g., CloudFormation) may need to be used
to create and manage infrastructure components that are not supported by OpsWorks.

OpsWorks Stacks provides a simple and flexible way to create and manage application infrastructure. When working with OpsWorks Stacks, you model your application as a “stack” containing different “layers”. A layer contains the infrastructure components necessary to support a particular application function, such as load balancers, databases, or application servers. OpsWorks Stacks does not require the creation of a Chef server, but uses Chef recipes for each layer to handle tasks such as installing packages on instances, deploying applications, and managing other resource configurations. OpsWorks Stacks will create and provision infrastructure on your behalf, but does not support all AWS services.

Provided that a node is network reachable from an OpsWorks Puppet or Chef instance, any node can be registered with OpsWorks, making this solution a good choice for organizations already using Chef or Puppet and working in a hybrid environment. With OpsWorks Stacks, an on-premises node must be able to communicate with public AWS endpoints.

Table 6: AWS OpsWorks deployment features

Provision: OpsWorks Stacks can create and manage certain AWS services as part of your application using Chef recipes. With OpsWorks for Chef Automate or Puppet Enterprise, infrastructure must be created elsewhere and registered to the Chef or Puppet instance. Refer to Create a New Stack for more details on creating resources with OpsWorks Stacks.
Configure: All OpsWorks operating models support configuration management of registered nodes. OpsWorks Stacks supports customization of other infrastructure in your environment through layer customization. Refer to OpsWorks Layer Basics for more details on customizing resources with OpsWorks Layers.
Deploy: All OpsWorks operating models support deployment and update of applications running on registered nodes.
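As an illustration, a deployment to registered nodes in OpsWorks Stacks can be triggered programmatically with the OpsWorks `create_deployment` API. The following is a minimal sketch using boto3; the stack and app IDs are placeholders, and `build_deploy_request` is our own helper for assembling the request, not part of the OpsWorks API.

```python
# Sketch: assembling and (optionally) sending an OpsWorks Stacks
# "deploy" command. Only create_deployment and its parameters come
# from the real API; the helper and IDs below are illustrative.

def build_deploy_request(stack_id, app_id, comment=""):
    """Return keyword arguments for opsworks.create_deployment."""
    request = {
        "StackId": stack_id,
        "AppId": app_id,
        # The "deploy" command runs each layer's deploy recipes
        # on the stack's registered instances.
        "Command": {"Name": "deploy"},
    }
    if comment:
        request["Comment"] = comment
    return request

params = build_deploy_request("stack-1234", "app-5678", comment="release v2")
# To actually run the deployment (requires AWS credentials):
#   import boto3
#   boto3.client("opsworks").create_deployment(**params)
print(params["Command"]["Name"])  # -> deploy
```

Restricting instance scope (for example, a rolling update over a subset of nodes) would be done by adding the optional `InstanceIds` parameter to the same request.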
Refer to Deploying Apps for more details on how to deploy applications with OpsWorks Stacks.
Scale: OpsWorks Stacks can handle automatically scaling instances in your environment based on changes in incoming traffic. Refer to Using Automatic Load-based Scaling for more details on auto scaling with OpsWorks Stacks.
Monitor: OpsWorks provides several features to monitor your application infrastructure and deployment success. In addition to Chef/Puppet logs, OpsWorks provides a set of configurable Amazon CloudWatch and AWS CloudTrail metrics for full-stack monitoring. Refer to Monitoring Stacks using Amazon CloudWatch for more details on resource monitoring in OpsWorks.

OpsWorks provides a complete, flexible, and automated solution that works with existing and popular tools while allowing application owners to maintain full-stack control of an application. The following example shows a typical use case for AWS OpsWorks Stacks as it is used to create and manage a three-tier web application.

Figure 6: AWS OpsWorks Stacks use case

This next example shows a typical use case for AWS OpsWorks for Chef Automate or Puppet Enterprise as it is used to manage the compute instances of a web application.

Figure 7: AWS OpsWorks with Chef Automate or Puppet Enterprise use case

Additional Deployment Services

Amazon Simple Storage Service (Amazon S3) can be used as a web server for static content and single-page applications (SPA). Combined with Amazon CloudFront to increase performance in static content delivery, using Amazon S3 can be a simple and powerful way to deploy and update static content. More details on this approach can be found in the Hosting Static Websites on AWS whitepaper.

Deployment Strategies

In addition to selecting the right tools to update your application code and supporting infrastructure, implementing the right
deployment processes is a critical part of a complete, well-functioning deployment solution. The deployment processes that you choose to update your application can depend on your desired balance of control, speed, cost, risk tolerance, and other factors. Each AWS deployment service supports a number of deployment strategies. This section provides an overview of general-purpose deployment strategies that can be used with your deployment solution.

Prebaking vs. Bootstrapping AMIs

If your application relies heavily on customizing or deploying applications onto Amazon EC2 instances, then you can optimize your deployments through bootstrapping and prebaking practices. Installing your application dependencies or customizations whenever an Amazon EC2 instance is launched is called bootstrapping an instance. If you have a complex application or require large downloads, this can slow down deployments and scaling events. An Amazon Machine Image (AMI) provides the information required to launch an instance (operating system, storage volumes, permissions, software packages, etc.). You can launch multiple identical instances from a single AMI. Whenever an EC2 instance is launched, you select the AMI that is to be used as a template. Prebaking is the process of embedding a significant portion of your application artifacts within an AMI. Prebaking application components into an AMI can speed up the time to launch and operationalize an Amazon EC2 instance. Prebaking and bootstrapping practices can be combined during the deployment process to quickly create new instances that are customized to the current environment. Refer to Best practices for building AMIs for more details on creating optimized AMIs for your application.

Blue/Green Deployments

A blue/green deployment is a deployment strategy in which you create two separate but identical environments. One environment (blue) is running the current application version, and one environment (green) is running the new application version. Using a blue/green
deployment strategy increases application availability and reduces deployment risk by simplifying the rollback process if a deployment fails. Once testing has been completed on the green environment, live application traffic is directed to the green environment and the blue environment is deprecated. A number of AWS deployment services support blue/green deployment strategies, including Elastic Beanstalk, OpsWorks, CloudFormation, CodeDeploy, and Amazon ECS. Refer to Blue/Green Deployments on AWS for more details and strategies for implementing blue/green deployment processes for your application.

Rolling Deployments

A rolling deployment is a deployment strategy that slowly replaces previous versions of an application with new versions of an application by completely replacing the infrastructure on which the application is running. For example, in a rolling deployment in Amazon ECS, containers running previous versions of the application will be replaced one-by-one with containers running new versions of the application. A rolling deployment is generally faster than a blue/green deployment; however, unlike a blue/green deployment, in a rolling deployment there is no environment isolation between the old and new application versions. This allows rolling deployments to complete more quickly, but it also increases risk and complicates the process of rollback if a deployment fails. Rolling deployment strategies can be used with most deployment solutions. Refer to CloudFormation Update Policies for more information on rolling deployments with CloudFormation; Rolling Updates with Amazon ECS for more details on rolling deployments with Amazon ECS; Elastic Beanstalk Rolling Environment Configuration Updates for more details on rolling deployments with Elastic Beanstalk; and Using a Rolling Deployment in AWS OpsWorks for more details on rolling deployments with OpsWorks.

In-Place Deployments

An in-place deployment is a
deployment strategy that updates the application version without replacing any infrastructure components. In an in-place deployment, the previous version of the application on each compute resource is stopped, the latest application is installed, and the new version of the application is started and validated. This allows application deployments to proceed with minimal disturbance to the underlying infrastructure. An in-place deployment allows you to deploy your application without creating new infrastructure; however, the availability of your application can be affected during these deployments. This approach also minimizes the infrastructure costs and management overhead associated with creating new resources. Refer to Overview of an In-Place Deployment for more details on using in-place deployment strategies with CodeDeploy.

Combining Deployment Services

There is not a “one size fits all” deployment solution on AWS. In the context of designing a deployment solution, it is important to consider the type of application, as this can dictate which AWS services are most appropriate. To deliver complete functionality to provision, configure, deploy, scale, and monitor your application, it is often necessary to combine multiple deployment services. A common pattern for applications on AWS is to use CloudFormation (and its extensions) to manage general-purpose infrastructure and to use a more specialized deployment solution for managing application updates. In the case of a containerized application, CloudFormation could be used to create the application infrastructure, and Amazon ECS or Amazon EKS could be used to provision, deploy, and monitor containers. AWS deployment services can also be combined with third-party deployment services. This allows organizations to easily integrate AWS deployment services into their existing CI/CD pipelines or infrastructure management solutions. For example, OpsWorks can be used to synchronize configurations between on-premises and AWS nodes, and CodeDeploy can be used with a
number of third-party CI/CD services as part of a complete pipeline.

Conclusion

AWS provides a number of tools to simplify and automate the provisioning of infrastructure and the deployment of applications; each deployment service offers different capabilities for managing applications. To build a successful deployment architecture, evaluate the available features of each service against the needs of your application and your organization.

Contributors

Contributors to this document include:
•Bryant Bost, AWS ProServe Consultant

Further Reading

For additional information, see:
•AWS Whitepapers page

Document Revisions

To be notified about updates to this whitepaper, subscribe to the RSS feed.
Minor update: Blue/Green Deployments section revised for clarity. April 8, 2021
Whitepaper updated: Updated with latest services and features. June 3, 2020
Initial publication: Whitepaper first published. March 1, 2015

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.,General,consultant,Best Practices
Overview_of_Oracle_EBusiness_Suite_on_AWS,This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/overview-oracle-e-business-suite/overview-oracle-e-business-suite.html

Overview of Oracle E-Business Suite on AWS

First Published May 2017
Updated September 10, 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
AWS overview
Amazon Web Services concepts
Regions and Availability Zones
Elastic Load Balancing
Amazon Elastic Block Store (Amazon EBS)
Amazon Machine Image (AMI)
Amazon Simple Storage Service (Amazon S3)
Amazon Route 53
Amazon Virtual Private Cloud (Amazon VPC)
Amazon Elastic File System (Amazon EFS)
AWS security and compliance
Oracle E-Business Suite on AWS
Oracle E-Business Suite components
Oracle E-Business Suite architecture on AWS
Benefits of Oracle E-Business Suite on AWS
Oracle E-Business Suite on AWS use cases
Conclusion
Contributors
Further reading
Document versions

Abstract

Oracle E-Business Suite is a popular suite of integrated business applications for automating enterprise-wide processes like customer relationship management, financial management, and supply chain management. This is the first whitepaper in a series focused on Oracle E-Business Suite on Amazon Web Services (AWS). It provides an architectural overview for running Oracle E-Business Suite 12.2 on AWS. The whitepaper series is intended for customers and partners who want to learn about the benefits and options for running Oracle E-Business Suite on AWS. Subsequent whitepapers in this series will discuss advanced topics and outline best practices for high availability, security, scalability, performance, migration, disaster recovery, and management of Oracle E-Business Suite systems on AWS.

Introduction

Almost all large enterprises use enterprise resource planning (ERP) systems for managing and optimizing enterprise-wide business processes. Cloud adoption among enterprises is growing rapidly, with many adopting a cloud-first strategy for new projects and migrating their existing systems from on-premises to AWS. ERP systems such as Oracle E-Business Suite are mission-critical for most enterprises and figure prominently in considerations for planning an enterprise cloud migration. This whitepaper provides a brief overview of Oracle E-Business Suite and a reference architecture for deploying Oracle E-Business Suite on AWS. It also discusses the benefits of running Oracle E-Business Suite on AWS and various use cases.

AWS overview

AWS provides on-demand computing resources and services in the cloud with pay-as-you-go pricing. As of the date of this publication, AWS serves over a million active customers in more than 190 countries and is available in 25 AWS Regions worldwide. You can run a server on AWS and log in, configure, secure, and operate it just as you would operate a server in your own data center. Using AWS resources for your compute needs is like purchasing electricity from a power company instead of running your own generator, and it provides many of the same benefits:
• The capacity you get exactly matches your needs
• You pay only for what you use
• Economies of scale result in lower costs
• The service is provided by a vendor who is experienced in running large-scale compute and network systems

Amazon Web Services concepts

This section describes the AWS infrastructure and services that are part of the reference architecture for running Oracle E-Business Suite on AWS.

Regions and Availability Zones

Each Region is a separate geographic area, isolated from the other Regions. Regions provide you the ability to place resources, such as Amazon Elastic Compute Cloud (Amazon EC2) instances and data, in multiple locations. Resources aren't replicated across Regions unless you do so specifically. An AWS account provides multiple Regions, so you can launch your application in locations that meet your
requirements. For example, you might want to launch your application in Europe to be closer to your European customers or to meet legal requirements. Each Region has multiple isolated locations known as Availability Zones. Each Availability Zone runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable. Common points of failure, such as generators and cooling equipment, are not shared across Availability Zones. Because Availability Zones are physically separate, even extremely uncommon disasters such as fires, tornados, or flooding would only affect a single Availability Zone. Each Availability Zone is isolated, but the Availability Zones in a Region are connected through low-latency links. The following figure illustrates the relationship between Regions and Availability Zones.

Relationship between AWS Regions and Availability Zones

The following figure shows the Regions, and the number of Availability Zones in each Region, provided by an AWS account at the time of this publication. For the most current list of Regions and Availability Zones, see Global Infrastructure.

Note: You can't describe or access additional Regions from the AWS GovCloud (US) Region or China (Beijing) Region.

Map of AWS Regions and Availability Zones

Amazon Elastic Compute Cloud (Amazon EC2)

Amazon EC2 is a web service that provides resizable compute capacity in the cloud, billed by the hour or second (minimum of 60 seconds). You can run virtual machines (EC2 instances) ranging in size from one vCPU and one GB of memory to 448 vCPUs and 6 TB of memory. You have a choice of operating systems, including Windows Server 2008/2012/2016/2019, Oracle Linux, Red Hat Enterprise Linux, and SUSE Linux.

Elastic Load Balancing

Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances, containers, and IP addresses in one or more Availability Zones on the AWS Cloud. It enables you to achieve greater levels of fault tolerance in your applications, seamlessly providing the required amount of load balancing capacity needed to distribute application traffic. Elastic Load Balancing can be used for load balancing web server traffic.

Amazon Elastic Block Store (Amazon EBS)

Amazon EBS provides persistent block-level storage volumes for use with EC2 instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. EBS volumes offer the consistent, low-latency performance needed to run your workloads.

Amazon Machine Image (AMI)

An Amazon Machine Image (AMI) is simply a packaged-up environment that includes all the necessary bits to set up and boot your instance. Your AMIs are your unit of deployment. Amazon EC2 uses Amazon EBS and Amazon Simple Storage Service (Amazon S3) to provide reliable, scalable storage of your AMIs so they can boot when you need them.

Amazon Simple Storage Service (Amazon S3)

Amazon S3 provides developers and IT teams with secure, durable, highly scalable object storage. Amazon S3 is easy to use. It provides a simple web services interface you can use to store and retrieve any amount of data from anywhere on the web. With Amazon S3, you pay only for the storage you actually use. There is no minimum fee and no setup cost.

Amazon Route 53

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an
extremely reliable and cost-effective way to route end users to internet applications by translating names like www.example.com into numeric IP addresses.

Amazon Virtual Private Cloud (Amazon VPC)

Amazon VPC enables you to provision a logically isolated section of the AWS Cloud in which you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own private IP address range, creation of subnets, and configuration of route tables and network gateways. You can use multiple layers of security, including security groups and network access control lists, to help control access to EC2 instances in each subnet. Additionally, you can create a hardware Virtual Private Network (VPN) connection between your corporate data center and your VPC, and use the AWS Cloud as an extension of your corporate data center.

Amazon Elastic File System (Amazon EFS)

Amazon EFS is a file storage service for EC2 instances. Amazon EFS supports the NFS v4 protocol, so the applications and tools that you use today work seamlessly with Amazon EFS. Multiple EC2 instances can access an Amazon EFS file system at the same time, providing a common data source for workloads and applications running on more than one instance. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need when they need it.

AWS security and compliance

The AWS Cloud security infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today. Security on AWS is very similar to security in your on-premises data center, but without the costs and complexities involved in protecting facilities and hardware. AWS provides a secure global infrastructure, plus a range of features that you can use to help secure your systems and data in the cloud. To learn more, see AWS Cloud Security. AWS compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud. AWS engages with external certifying bodies and independent auditors to provide customers with extensive information regarding the policies, processes, and controls established and operated by AWS. To learn more, see AWS Compliance.

Oracle E-Business Suite on AWS

This section covers the major components of Oracle E-Business Suite and its architecture on AWS. It is important to have a good understanding of Oracle E-Business Suite architecture and its major components to successfully deploy and configure it on AWS.

Oracle E-Business Suite components

Oracle E-Business Suite has a three-tier architecture consisting of client, application, and database (DB) tiers.

Oracle E-Business Suite three-tier architecture

The client tier contains the client user interface, which is provided through HTML, or through Java applets in a web browser for forms-based applications. The application tier consists of Oracle Fusion Middleware (Oracle HTTP Server and Oracle WebLogic Server) and the concurrent processing server. The Fusion Middleware server has HTTP, Java, and Forms services that process the business logic and talk to the database tier. The Oracle HTTP Server (OHS) accepts incoming HTTP requests from clients and routes the requests to the Oracle WebLogic Server (WLS), which hosts the business logic and other server-side components. The HTTP services, forms services, and concurrent processing
server can be installed on multiple application tier nodes and load balanced. The database tier consists of an Oracle database that stores the data for Oracle E-Business Suite. This tier has the Oracle database run items and the Oracle database files that physically store the tables, indexes, and other database objects in the system. See the Oracle E-Business Suite Concepts guide for a deeper dive on the Oracle E-Business Suite architecture components.

Oracle E-Business Suite architecture on AWS

The following reference diagram illustrates how Oracle E-Business Suite can be deployed on AWS. The application and database tiers are deployed across multiple Availability Zones for high availability.

Sample Oracle E-Business Suite deployment on AWS

User requests from the client tier are routed using Amazon Route 53 DNS to the Oracle E-Business Suite application servers deployed on EC2 instances through Application Load Balancer. The OHS and the Oracle WLS are deployed on each application tier instance. The OHS accepts the requests from Application Load Balancer and routes them to the Oracle WLS. The Oracle WLS runs the appropriate business logic and communicates with the Oracle database. The various modules and functions within Oracle E-Business Suite share a common data model. There is only one Oracle database instance for multiple application tier nodes.

Load balancing and high availability

Application Load Balancer is used to distribute incoming traffic across multiple application tier instances deployed across multiple Availability Zones. You can add and remove application tier instances from your load balancer as your needs change, without disrupting the overall flow of information. Application Load Balancer ensures that only healthy instances receive traffic by detecting unhealthy instances and rerouting traffic across the remaining healthy instances. If an application tier instance fails, Application Load Balancer automatically reroutes the traffic to the remaining running application tier instances. In the unlikely event of an Availability Zone failure, user traffic is routed to the remaining application tier instances in the other Availability Zone. Other third-party load balancers, like the F5 BIG-IP, are available on AWS Marketplace and can be used as well. See My Oracle Support document 1375686.1 for more details on using load balancers with Oracle E-Business Suite (sign-in required).

The database tier is deployed on Oracle running on two EC2 instances in different Availability Zones. Oracle Data Guard replication (maximum protection or maximum availability mode) is configured between the primary database in one Availability Zone and a standby database in another Availability Zone. In case of failure of the primary database, the standby database is promoted as the primary, and the application tier instances will connect to it. For more details on deploying Oracle Database on AWS, see the Oracle Database on AWS Quick Start.

Scalability

When using AWS, you can scale your application easily due to the elastic nature of the cloud. You can scale up the Oracle E-Business Suite application tier and database tier instances simply by changing the instance type to a larger instance type. For example, you can start with an r5.large instance with two vCPUs and 16 GiB RAM and scale up all the way to an x1e.32xlarge instance with 128 vCPUs and 3,904 GiB RAM. After selecting a new instance type, only a restart is required for the changes to take effect. Typically, the
Typically, the resizing operation is completed in a few minutes; the EBS volumes remain attached to the instances, and no data migration is required. You can scale out the application tier by adding and configuring more application tier instances when required. You can launch a new EC2 instance in a few minutes. However, additional work is required to ensure that the AutoConfig files are correct and the new application tier instance is correctly configured and registered with the database. Although it might be possible to automate scaling out the application tier using scripting, this requires an additional technical investment. A simpler alternative might be to use standby EC2 instances, as explained in the next section.

Standby EC2 instances

To meet extra capacity requirements, additional application tier instances of Oracle E-Business Suite can be preinstalled and configured on EC2 instances. These standby instances can be shut down until extra capacity is required. Charges are not incurred when EC2 instances are shut down; only EBS storage charges are incurred. At the time of this publication, EBS General Purpose (gp2) volumes are priced at $0.10 per GB per month in the US East (Ohio) Region. Therefore, for an EC2 instance with 120 GB of disk space, the storage charge is only $12 per month. These preinstalled standby instances provide you the flexibility to meet additional capacity needs as and when required. In this model, you need to ensure that any configuration changes, patching, or maintenance activities are also applied to the standby node, to avoid inconsistencies.

Storage options and backup

AWS offers a complete range of cloud storage services to support both application and archival compliance
requirements. You can choose from object, file, block, and archival services. The following table lists some of the storage options and how they can be used when deploying Oracle E-Business Suite on AWS.

Table 1 – Storage options and how they can be used

• Amazon EBS – gp2/gp3 volumes. Characteristics: SSD-based block storage with up to 16,000 input/output operations per second (IOPS) per volume. Use case: boot volumes, operating system and software binaries, Oracle database archive logs.
• Amazon EBS – io1/io2/io2 Block Express volumes. Characteristics: SSD-based block storage with up to 64,000 IOPS per volume; multiple volumes can be striped together for higher IOPS; by attaching io2 volumes to r5b instance types, you can achieve up to 256,000 IOPS per volume. Use case: storage for the database tier (ASM disks), Oracle data files, redo logs.
• Amazon EFS. Characteristics: highly durable, NFSv4.1-compatible file system. Use case: PCP out and log files, media staging.
• Amazon S3. Characteristics: object store with 99.999999999% durability. Use case: backups, archives, media staging.
• Amazon Glacier. Characteristics: extremely low-cost and highly durable storage for long-term backup and archival. Use case: long-term backup and archival.
• Amazon EC2 instance storage. Characteristics: ephemeral or temporary storage; data persists only for the lifetime of the instance. Use case: swap, temporary files, reports cache, web server cache.

The application and database servers use EBS volumes for persistent block storage. Amazon EBS has two types of solid-state drive (SSD) backed volumes: Provisioned IOPS SSD (io1, io2, io2 Block Express) for latency-sensitive database and application workloads, and General Purpose SSD (gp2, gp3), which balances price and performance for a wide variety of transactional
workloads, including development and test environments, and boot volumes. General Purpose SSD volumes provide a good balance between price and performance and can be used for boot volumes, the Oracle E-Business Suite application tier file system, and logs. They are designed to offer single-digit millisecond latencies and deliver a consistent baseline performance of 3 IOPS/GB for gp2, and 3,000 IOPS regardless of volume size for gp3, up to a maximum of 16,000 IOPS per volume. Provisioned IOPS volumes are the highest-performance EBS storage option and should be used along with Oracle Automatic Storage Management (ASM) for storing the Oracle database data and log files. You can provision up to 64,000 IOPS per io1/io2 volume and 256,000 IOPS per io2 Block Express volume. These volumes are designed to achieve single-digit millisecond latencies and to deliver the provisioned IOPS 99.9% of the time for io1 and 99.999% of the time for io2 and io2 Block Express. You can use Oracle ASM to stripe the data across multiple EBS volumes for higher IOPS and to scale the database storage. To maximize the performance of EBS volumes, use EBS-optimized EC2 instances and instances based on the AWS Nitro System.

EC2 instances have temporary SSD-based block storage called instance storage. Instance storage persists only for the lifetime of the instance and should not be used to store valuable long-term data. Instance storage can be used as swap space and for storing temporary files like the report cache or web server cache. If you are using Oracle Linux as the operating system for the database server, you can use the instance storage for the Oracle Database Smart Flash Cache to improve database performance.
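The General Purpose SSD baseline figures above can be expressed as a small calculation. This is a sketch of the sizing rule as described here; the 100 IOPS floor for gp2 comes from the Amazon EBS documentation rather than this paper, and gp3 IOPS beyond the 3,000 baseline are provisioned separately.

```python
# Baseline IOPS for the two General Purpose SSD volume types:
# gp2 scales linearly at 3 IOPS/GB (100 IOPS floor, per the EBS docs)
# up to 16,000 IOPS; gp3 delivers a flat 3,000 IOPS baseline.
GP2_MAX_IOPS = 16_000
GP2_MIN_IOPS = 100
GP3_BASELINE_IOPS = 3_000

def gp2_baseline_iops(volume_gb: int) -> int:
    return min(max(3 * volume_gb, GP2_MIN_IOPS), GP2_MAX_IOPS)

def gp3_baseline_iops(volume_gb: int) -> int:
    # Size-independent; up to 16,000 IOPS can be provisioned on top.
    return GP3_BASELINE_IOPS

print(gp2_baseline_iops(1000))  # 3000 - matches gp3's baseline at 1 TB
print(gp2_baseline_iops(8000))  # 16000 - capped at the per-volume maximum
```

The crossover is worth noting when choosing between the two: below roughly 1 TB, a gp3 volume delivers more baseline IOPS than a gp2 volume of the same size.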
Parallel Concurrent Processing (PCP) allows you to distribute concurrent managers across multiple nodes so that you can use the available capacity and provide failover. You can use a shared file system such as Amazon EFS for storing the log and out files while implementing PCP in Oracle E-Business Suite. However, this configuration may not be ideal for environments with an extremely large number of log and out files. Oracle E-Business Suite Release 12.2 introduced a new environment variable, APPLLDM, to specify whether log and out files are stored in a single directory for all Oracle E-Business Suite products or in one subdirectory per product. APPLLDM can be set to 'single' or 'product'; setting it to 'product' avoids a high concentration of log and out files in a single directory and may avoid performance issues.

Amazon S3 provides low-cost, scalable, and highly durable storage and should be used for storing backups. You can use Oracle Recovery Manager (RMAN) to back up your database, then copy the data to Amazon S3. Alternatively, you can use the Oracle Secure Backup (OSB) Cloud Module to back up your database. The OSB Cloud Module is fully integrated with RMAN features and functionality, and the backups are sent directly to Amazon S3 for storage.

Benefits of Oracle E-Business Suite on AWS

The following sections discuss some of the key benefits of running Oracle E-Business Suite on AWS.

Agility and speed

Traditional deployment involves a long procurement process in which each stage is time-intensive and requires large capital outlay and multiple approvals. With AWS, you can provision new infrastructure and Oracle E-Business Suite environments in minutes, compared to waiting weeks or months to procure and deploy traditional infrastructure.

Lower total cost of ownership

In an on-premises environment, you typically pay
hardware support costs, virtualization licensing and support, data center costs, and so on. You can eliminate or reduce all of these costs by moving to AWS. You benefit from the economies of scale and efficiencies provided by AWS, and pay only for the compute, storage, and other resources you use.

Cost savings for nonproduction environments

You can shut down your nonproduction environments when you are not using them and save costs. For example, if you are using a development environment for only 40 hours a week (eight hours a day, five days a week), you can shut down the environment when it's not in use. You pay for only 40 hours of Amazon EC2 compute charges instead of 168 hours (24 hours a day, seven days a week) for an on-premises environment running all the time; this can result in a saving of about 75% on EC2 compute charges.

Replace capital expenditure (CapEx) with operating expenditure (OpEx)

You can start an Oracle E-Business Suite implementation or project on AWS without any upfront cost or commitment for compute, storage, or network infrastructure.

Unlimited environments

In an on-premises environment, you usually have a limited set of environments to work with; provisioning additional environments takes a long time or might not be possible at all. You do not face these restrictions when using AWS; you can create virtually any number of new environments in minutes, as required. You can have a different environment for each major project so that each team can work independently with the resources they need without interfering with other teams; the teams can then converge at a common integration environment when they are ready. You can shut down these environments when the project finishes and stop paying for them.
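The nonproduction savings example above is simple enough to verify directly: EC2 compute charges stop while instances are shut down, so only the hours used are billed. A quick calculation (exact arithmetic, no assumptions beyond the 8x5 schedule in the text):

```python
# Compute-cost saving from running a development environment only during
# working hours (8 hours x 5 days) instead of around the clock.
HOURS_PER_WEEK = 24 * 7  # 168

def weekly_saving_pct(hours_used: float) -> float:
    """Percentage of weekly EC2 compute cost avoided by shutting down."""
    return (1 - hours_used / HOURS_PER_WEEK) * 100

# An 8x5 schedule uses 40 of 168 hours: roughly the 75% saving cited above.
print(round(weekly_saving_pct(40), 1))  # 76.2
```

The exact figure is about 76%, which the whitepaper rounds down to 75%.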
Have Moore's Law work for you instead of against you

Moore's Law refers to the observation that the number of transistors on a microchip doubles every two years. In an on-premises environment, you end up owning hardware that depreciates in value every year. You are locked into the price and capacity of the hardware after it is acquired, plus you have ongoing hardware support costs. With AWS, you can switch your underlying instances to the faster, more powerful next-generation AWS instance types as they become available.

Right size anytime

Customers often oversize environments for initial phases and are then unable to cope with growth in later phases. With AWS, you can scale your usage up or down at any time. You pay only for the computing capacity you use, for the duration you use it. Instance sizes can be changed in minutes through the AWS Management Console, the AWS application programming interface (API), or the Command Line Interface (CLI). Assess the resource usage on the current system and launch with appropriately sized instances for the enterprise resource planning (ERP) environment to reduce cost overhead.

Low-cost disaster recovery

You can build extremely low-cost standby disaster recovery environments for your existing deployments and incur costs only for the duration of the outage. CloudEndure Disaster Recovery for Oracle brings significant savings on disaster recovery total cost of ownership (TCO) compared to traditional disaster recovery solutions.

Ability to test application performance

Although performance testing is recommended prior to any major change to an Oracle E-Business Suite environment, most customers only performance test their Oracle E-Business Suite application during the initial launch, in the yet-to-be-deployed production hardware. Later releases are usually never performance tested, due to the expense and lack of an environment required for performance testing. With AWS, you can minimize the risk of discovering performance issues later in production. An AWS Cloud
environment can be created easily and quickly, just for the duration of the performance test, and only used when needed. Again, you are charged only for the hours the environment is used.

No end of life for hardware or platform

All hardware platforms have end-of-life dates, at which point the hardware is no longer supported and you are forced to buy new hardware again. In the AWS Cloud, you can simply upgrade the platform instances to new AWS instance types in a single click, at no cost for the upgrade.

Oracle E-Business Suite on AWS use cases

Oracle E-Business Suite customers are using AWS for a variety of use cases, including the following environments:

• Migration of existing Oracle E-Business Suite production environments
• Implementation of new Oracle E-Business Suite production environments
• Implementing disaster recovery environments
• Running Oracle E-Business Suite development, test, demonstration, proof of concept (POC), and training environments
• Temporary environments for migrations and testing upgrades
• Temporary environments for performance testing

Conclusion

AWS can be an extremely cost-effective, secure, scalable, high-performing, and flexible option for deploying Oracle E-Business Suite. This whitepaper outlines some of the benefits and use cases for deploying Oracle E-Business Suite on AWS. If you are looking for migration-specific guidance, see the Migrating Oracle E-Business Suite on AWS whitepaper. Subsequent whitepapers in this series will cover advanced topics and outline best practices for high availability, security, scalability, performance, disaster recovery, and management of Oracle E-Business Suite systems on AWS.

Contributors

Contributors to this document include:

• Ejaz Sayyed, Sr. Partner Solutions Architect,
Amazon Web Services
• Praveen Katari, Partner Management Solutions Architect, Amazon Web Services
• Ashok Sundaram, Principal Solutions Architect, Amazon Web Services

Further reading

For additional information, see:

• AWS Whitepapers & Guides
• AWS Cloud Security
• AWS Compliance
• Oracle R12.2 documentation
• Using Load Balancers with Oracle EBS (sign in to Oracle required)
• Oracle Database on AWS
• Amazon EBS-optimized instances
• Oracle APPLLDM document (sign in to Oracle required)

Document versions

September 10, 2021 – Updated logos, new EBS storage and EC2 instance types, performance metrics
May 2017 – First publication,General,consultant,Best Practices Overview_of_the_Samsung_Push_to_Talk_PTT_Solution_on_AWS,For the latest technical content, refer to: https://docs.aws.amazon.com/whitepapers/latest/samsung-ptt-aws/samsung-ptt-aws.html

Overview of the Samsung Push to Talk (PTT) Solution on AWS

First published October 2017; updated March 30, 2021

This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document
is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
AWS Overview
AWS Infrastructure and Services for Samsung PTT Solution
Regions and Availability Zones
Amazon Elastic Compute Cloud
Elastic Load Balancing
Amazon Elastic Block Store
Amazon Machine Image
Amazon Simple Storage Service
Amazon Virtual Private Cloud
AWS Security and Compliance
AWS Features Enabling Virtualization of Samsung PTT Solution
Samsung PTT Solution on AWS
Samsung PTT Solution Components
Samsung PTT Architecture on AWS
Benefits of Samsung PTT Solution on AWS
Samsung PTT on AWS Use Cases
Conclusion
Contributors
Document Revisions

Abstract

The Samsung Push to Talk (PTT) solution is a popular suite of integrated components that enables mobile workforce communication. This whitepaper provides an architectural overview for running the Samsung PTT solution suite on AWS.

Amazon Web Services – Overview of Samsung Push to Talk Solution on AWS

Introduction

All major enterprises, public safety, and communications service organizations with mobile workforces can benefit from a Push to Talk (PTT) solution. The PTT solution is a two-way radio type service that enables customers to push a button and instantly communicate with large audiences over a variety of devices and networks. Sectors such as construction, hospitality, security, oil and gas, utilities, manufacturing, field services, education, and transportation already rely on
previous-generation technologies to perform this function. However, cloud adoption among enterprises is growing rapidly, with many adopting a cloud-first strategy for new projects and migrating their existing systems from on premises to Amazon Web Services (AWS). Enterprises can deploy the Samsung PTT solution on AWS. This whitepaper provides an overview of the Samsung PTT solution and a reference architecture for deploying Samsung PTT on AWS. We also discuss the benefits of running the Samsung PTT solution on AWS and various use cases.

AWS Overview

AWS provides on-demand computing resources and services in the cloud with pay-as-you-go pricing. As of this publication, AWS serves over a million active customers in more than 190 countries and is available in 16 AWS Regions worldwide. You can access servers on AWS and log in, configure, secure, and operate them just as you would operate servers in your own data center. When you use AWS resources for your compute needs, it's like purchasing electricity from a power company instead of running your own generator, and it provides many of the same benefits, including:

• The capacity you get exactly matches your needs
• You pay only for what you use
• Economies of scale result in lower costs
• The service is provided by a vendor who is experienced in running large-scale compute and network systems

AWS Infrastructure and Services for Samsung PTT Solution

This section describes the AWS infrastructure and services that are part of the reference architecture that you need to use to run the Samsung PTT solution on AWS.

Regions and Availability Zones

Each AWS Region is a separate geographic area that is isolated from the other Regions. Regions provide you the ability to place resources, such as Amazon Elastic Compute Cloud
(Amazon EC2) instances and data, in multiple locations. Resources aren't replicated across Regions unless you do so specifically. An AWS account provides multiple Regions so that you can launch your applications in locations that meet your requirements. For example, you might want to launch your applications in Europe to be closer to your European customers or to meet legal requirements.

Each Region has multiple isolated locations known as Availability Zones. Each Availability Zone runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable. Common points of failure, such as generators and cooling equipment, aren't shared across Availability Zones. Each Availability Zone is isolated, but the Availability Zones in a Region are connected through low-latency links. For more information about Regions and Availability Zones, see Regions and Availability Zones in the Amazon EC2 User Guide for Linux Instances. For the most current list of Regions and Availability Zones, see AWS Global Infrastructure.

Amazon Elastic Compute Cloud

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud that is billed by the hour. You can run virtual machines (EC2 instances) ranging in size from 1 vCPU and 1 GB memory to 128 vCPU and 2 TB memory. You have a choice of operating systems, including Windows Server 2008/2012, Oracle Linux, Red Hat Enterprise Linux, and SUSE Linux.

Elastic Load Balancing

Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances in the cloud. It enables you to achieve greater levels of fault tolerance in your applications, seamlessly providing the required amount of load balancing capacity needed to distribute
application traffic. Elastic Load Balancing can be used for load balancing web server traffic.

Amazon Elastic Block Store

Amazon Elastic Block Store (Amazon EBS) provides persistent block-level storage volumes for use with EC2 instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. EBS volumes offer the consistent, low-latency performance needed to run your workloads.

Amazon Machine Image

An Amazon Machine Image (AMI) is simply a packaged-up environment that includes all the necessary bits to set up and boot your EC2 instance. Your AMIs are your unit of deployment. Amazon EC2 uses Amazon EBS and Amazon S3 to provide reliable, scalable storage of your AMIs so that we can boot them when you ask us to do so.

Amazon Simple Storage Service

Amazon Simple Storage Service (Amazon S3) provides developers and IT teams with secure, durable, highly scalable object storage. Amazon S3 is easy to use. It provides a simple web services interface you can use to store and retrieve any amount of data from anywhere on the web. With Amazon S3, you pay only for the storage you actually use. There is no minimum fee and no setup cost.

Amazon Virtual Private Cloud

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud in which you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own private IP address range, creation of subnets, and configuration of route tables and network gateways. You can leverage multiple layers of security, including security groups and network access control lists, to help
control access to EC2 instances in each subnet. Additionally, you can create a hardware Virtual Private Network (VPN) connection between your corporate data center and your VPC, and then you can leverage the AWS Cloud as an extension of your corporate data center.

AWS Security and Compliance

The AWS Cloud security infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today. Security on AWS is similar to security in your on-premises data center, but without the costs and complexities involved in protecting facilities and hardware. AWS provides a secure global infrastructure, plus a range of features that you can use to help secure your systems and data in the cloud. To learn more about AWS Security, see the AWS Cloud Security Center.

AWS Compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud. AWS engages with external certifying bodies and independent auditors to provide customers with extensive information regarding the policies, processes, and controls established and operated by AWS. To learn more about AWS Compliance, see the AWS Compliance Center.

AWS Features Enabling Virtualization of Samsung PTT Solution

The features used to support the function virtualization of the Push to Talk solution from Samsung on the AWS Cloud include the following:

• Elastic Network Adapter (ENA) – ENA is the next-generation network interface and accompanying drivers that provide enhanced networking on EC2 instances. ENA is a custom AWS network interface optimized to deliver high throughput and packet-per-second (PPS) performance, and consistently low latencies, on EC2 instances. Using ENA, customers can use up to 20 Gbps of network bandwidth on
specific EC2 instance types. Open-source licensed ENA drivers are currently available for Linux and the Intel Data Plane Development Kit (Intel DPDK). The latest Amazon Linux AMI includes ENA Linux driver support by default. ENA Linux driver source code is also available on GitHub for developers to integrate in their AMIs. There is no additional fee to use ENA. For more information, see Enhanced Networking on Linux in the Amazon EC2 User Guide for Linux Instances.

• Support for single root I/O virtualization (SR-IOV) – The single root I/O virtualization (SR-IOV) interface is an extension to the PCI Express (PCIe) specification. SR-IOV allows a device, such as a network adapter, to separate access to its resources among various PCIe hardware functions.

• Support for Data Plane Development Kit (DPDK) – The DPDK is a set of data plane libraries and network interface controller drivers for fast packet processing. The DPDK provides a programming framework and enables faster development of high-speed data packet networking applications.

• Support for non-uniform memory access (NUMA) – NUMA is a design where a cluster of microprocessors in a multiprocessing system is configured so that they can share memory locally. This design improves performance and enables expansion of the system. NUMA is used in symmetric multiprocessing (SMP) systems.

• Support for huge pages – Huge pages is a mechanism that allows the Linux kernel to use the multiple page-size capabilities of modern hardware architectures. Linux uses pages as the basic unit of memory, where physical memory is partitioned and accessed using the basic page unit.

• Support for static IP addresses – Amazon EC2 instances can use static IP addresses (which survive reboot), and these addresses can be associated with or
dissociated from a different EC2 instance in any Availability Zone within a Region.

Samsung PTT Solution on AWS

This section covers the major components of the Samsung PTT solution and its architecture on AWS that you can use to deploy and configure it on AWS.

Samsung PTT Solution Components

The Samsung PTT solution offers advanced 3GPP Rel-13 MCPTT (Mission-Critical Push-to-Talk) features, centralized online address book management, and security, all delivered over 4G LTE, 3G WCDMA/HSPA, and Wi-Fi networks. With PTT, users can carry a single device to conveniently access instant broadband data, voice service, workforce management, and mobile productivity applications. The Samsung PTT solution leverages eMBMS broadcast technology to transmit data to up to several thousand users within range of a given LTE base station. This method allows an extremely rapid flow of information during crisis situations, without slowing down traffic on the network.

Figure 1 — Push to Talk (PTT) network architecture

The solution consists of three main components:

• Samsung PTT server solution sends multimedia, such as video or high-quality images, to thousands of devices simultaneously using a single transmission channel. Each device seamlessly receives the incoming data at the same time, allowing real-time video communication among thousands of users. In contrast, traditional unicast methods need a single channel for each device in order to send multimedia to different devices, consuming unnecessary air-link capacity, significantly degrading video quality, and potentially causing video buffering or stuttering issues.

• Samsung Call Session Control Function (CSCF) is a collection of functional capabilities that play an essential role in the IP Multimedia Core
Network Subsystem (IMS). The CSCF is responsible for the signaling that controls the communication of IMS User Equipment (UE) with IMS-enhanced services across different network access technologies and domains.

• Samsung Home Subscriber Server (HSS) is the main IMS database that also acts as a database in the Evolved Packet Core (EPC). The HSS is a super home location register (HLR) that combines legacy HLR and authentication center (AuC) functions together for the circuit-switched (CS) and packet-switched (PS) domains.

This component architecture integrates with Long Term Evolution (LTE) handsets, eNodeB, and EPC components. We integrated this component architecture with AWS services via the public internet to create a test network. The next sections describe how we set it up.

Samsung PTT Architecture on AWS

The Samsung PTT solution setup included setting up a VPC with a public subnet that has a bastion host, and three private subnets for the CSCF, HSS, and PTT servers. The bearer packet processing acceleration was powered by the AWS ENA with DPDK applications and SR-IOV network port capabilities. The EC2 instances within each of the private subnets reside in their respective placement groups, as shown in the following diagram.

Figure 2 — Push to Talk (PTT) deployment architecture on AWS

Effective and accurate dimensioning of the solution is critical for the virtual PTT solution. It's always advisable to contact your Samsung team and get their input before implementing a solution for your organization. The configuration used for validation of the PTT solution on AWS is outlined in the following table, which lists each function, plane, number of instances, instance type, and features that are enabled.

Table 1 — EC2 configuration used for Samsung PTT solution validation
• CSCF, control plane: 1 × c4.4xlarge, DPDK and SR-IOV enabled
• CSCF, user plane: 1 × m4.2xlarge, DPDK and SR-IOV enabled
• PTT, control plane: 1 × c4.4xlarge, DPDK and SR-IOV enabled
• PTT, user plane: 1 × m4.2xlarge, DPDK and SR-IOV enabled
• HSS, control plane: 1 × m4.2xlarge
• OSS, operations and maintenance: 2 × m4.xlarge, not applicable
• Bastion, management: 1 × t2.micro, not applicable

Contact the Samsung team for accurate dimensioning of the solution for your organization.

Benefits of Samsung PTT Solution on AWS

The following sections outline the benefits of using Samsung PTT on AWS.

Cost Savings for Non-Production Environments

You can shut down your non-production environments when you aren't using them and save costs. For example, if you are using a development environment for only 40 hours a week (8 hours a day, 5 days a week), you can shut down the environment when it's not in use. You pay only for 40 hours of Amazon EC2 compute charges instead of 168 hours (24 hours a day, 7 days a week) for an on-premises environment running all the time. This can result in a saving of about 75% on Amazon EC2 compute charges.

Unlimited On-Demand Environments

In an on-premises environment, you usually have a limited set of environments to work with. Provisioning additional environments takes a long time or might not be possible at all. You don't face these restrictions when using AWS. You can create virtually any number of new environments in minutes, as necessary. You can have a different environment for each major project so that each team can work independently with the resources they need without interfering with other teams. Then the teams can converge at a common integration environment when they are ready. You can terminate these environments when the project finishes and stop paying for them.

Lower Total Cost of Ownership

In an on-premises environment, you typically pay hardware support costs, virtualization licensing and support, data center costs, and so on. You can eliminate or reduce all of these costs by moving to AWS. You benefit from the economies of scale and efficiencies provided by AWS, and pay only for the compute, storage, and other resources that you use.

Right Size Anytime

Often, customers oversize environments for initial phases and then are not able to cope with growth in later phases. With AWS, you can scale your organization's usage up or down at any time. You only pay for the computing capacity you use, for the duration that you use it. Instance sizes can be changed in minutes through the AWS Management Console, the AWS application programming interface (API), or the AWS Command Line Interface (AWS CLI).

Replace CapEx with OpEx

You can start a Samsung PTT solution implementation or project on AWS without any upfront cost or commitment for compute, storage, or network infrastructure.

No Hardware Costs

In an on-premises environment, you end up owning hardware that depreciates in value every year. You are locked into the price and capacity of the hardware once it is acquired, plus you have ongoing hardware support costs. With AWS, you can switch your underlying instances to faster, more powerful next-generation AWS instance types as they become available.

Low-Cost Disaster Recovery

You can build low-cost standby disaster recovery environments for your existing deployments and incur costs only for the duration of the outage.

Amazon Web Services – Overview of
Samsung Push to Talk Solution on AWS Page 11 No End of Life for Hardware or Platform All hardware platforms have end oflife dates at which point the hardware is no longer supported and you are forced to buy new hardware again In the AWS Cloud you can simply upgrade the platform instances to new AWS instance types in a single click at no c ost for the upgrade Samsung PTT on AWS Use Cases Samsung PTT partners and customers are using AWS for a variety of use cases including the following: • Implement ing new Samsung PTT production environments • Implement ing disaster recovery environments • Running Samsung PTT development test demonstration proof of concept (POC) and training environments • Scaling existing Samsung PTT production environments for incremental traffic • Setting up temporary environments for migrations and testing upgrades • Setting up temporary environments for performance testing Conclusion AWS can be an extremely cost effective secure scalable high performing and flexible option for deploying the Samsung PTT solution This whitepaper outlines some of the benefits and use cases for deploying the Samsung PTT solution on AWS Contributors The following individuals and organizations contributed to this document: • Jeong S hang Ohn Principal Engineer Samsung Network Division • Robin Harwani Global Strategic Partner Solution Lead for Telecommunications Amazon Web Services • Andy Kim Solution Architect Amazon Web Services This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Overview of Samsung Push to Talk Solution on AWS Page 12 Document Revisions Date Description March 30 2021 Reviewed for technical accuracy October 2017 First publication,General,consultant,Best Practices Performance_at_Scale_with_Amazon_ElastiCache,"This paper has been archived For the latest technical content refer t o: https://docsawsamazoncom/whitepapers/latest/scale 
performanceelasticache/scaleperformanceelasticachehtml Performance at Scale with Amazon ElastiCache Published May 2015 Updated March 30 2021 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction 1 ElastiCache Overview 2 Alternatives t o ElastiCache 2 Memcached vs Redis 3 ElastiCache for Memcached 5 Architecture with ElastiCache for Memcached 5 Selecting the Right Cache Node Size 9 Security Groups and VPC 10 Caching Design Patterns 12 How to Apply Caching 12 Consistent Hashing (Sharding) 13 Client Libraries 15 Be Lazy 16 Write On Through 18 Expiration Date 19 The Thundering Herd 20 Cache (Almost) Everything 21 ElastiCache for Redis 22 Architecture with ElastiCache for Redis 22 Distributing Reads and Writes 24 Multi AZ with Auto Failover 25 Sharding with Redis 26 Advanced Datasets with Redis 29 Game Leaderboards 29 Recommendation Engines 30 Chat and Messaging 31 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & 
Guides page: https://aws.amazon.com/whitepapers
Queues 31
Client Libraries and Consistent Hashing 32
Monitoring and Tuning 33
Monitoring Cache Efficiency 33
Watching for Hot Spots 34
Memcached Memory Optimization 35
Redis Memory Optimization 36
Redis Backup and Restore 36
Cluster Scaling and Auto Discovery 37
Auto Scaling Cluster Nodes 37
Auto Discovery of Memcached Nodes 38
Cluster Reconfiguration Events from Amazon SNS 39
Conclusion 40
Contributors 41
Document Revisions 41

Abstract

In-memory caching improves application performance by storing frequently accessed data items in memory, so that they can be retrieved without access to the primary data store. Properly leveraging caching can result in an application that not only performs better, but also costs less at scale. Amazon ElastiCache is a managed service that reduces the administrative burden of deploying an in-memory cache in the cloud. Beyond caching, an in-memory data layer also enables advanced use cases, such as analytics and recommendation engines. This whitepaper lays out common ElastiCache design patterns, performance tuning tips, and important operational considerations to get the most out of an in-memory layer.

Introduction

An effective caching strategy is perhaps the single biggest factor in creating an app that performs well at scale. A brief look at the largest web, gaming, and mobile apps reveals that all apps at significant scale have a considerable investment in caching. Despite this, many developers fail to exploit caching to its full potential. This oversight can result in running larger database and application instances than needed. Not only does this approach decrease performance and add cost, but it also limits your ability to scale.

The in-memory caching provided by Amazon ElastiCache improves application performance by storing critical pieces of data in memory for fast access. You can use this caching to significantly improve latency and throughput for many read-heavy application workloads, such as social networking, gaming, media sharing, and Q&A portals. Cached information can include the results of database queries, computationally intensive calculations, or even remote API calls. In addition, compute-intensive workloads that manipulate datasets, such as recommendation engines and high-performance computing simulations, also benefit from an in-memory data layer. In these applications, very large datasets must be accessed in real time across clusters of machines that can span hundreds of nodes. Manipulating this data in a disk-based store would be a significant bottleneck for these applications.

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. Amazon ElastiCache manages the work involved in setting up an in-memory service, from provisioning the AWS resources you request to installing the software. Using Amazon ElastiCache, you can add an in-memory caching layer to your application in a matter of minutes, with a few API calls. Amazon ElastiCache integrates with other AWS services such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS), as well as deployment management solutions such as AWS CloudFormation, AWS Elastic Beanstalk, and AWS OpsWorks.

In this whitepaper, we'll walk through best practices for working with ElastiCache. We'll demonstrate common in-memory data design patterns, compare the two open-source engines that ElastiCache supports, and show how ElastiCache fits into real-world application architectures such as web apps and online games. By the end of this paper, you should have a clear grasp of which caching strategies apply to your use case, and how you can use ElastiCache to deploy an in-memory caching layer for your app.

ElastiCache Overview

The Amazon ElastiCache architecture is based on the concept of deploying one or more cache clusters for your application. After your cache cluster is up and running, the service automates common administrative tasks, such as resource provisioning, failure detection and recovery, and software patching. Amazon ElastiCache provides detailed monitoring metrics associated with your cache nodes, enabling you to diagnose and react to issues very quickly. For example, you can set up thresholds and receive alarms if one of your cache nodes is overloaded with requests.

ElastiCache works with both the Redis and Memcached engines. You can launch an ElastiCache cluster by following the steps in the appropriate User Guide:
• Getting Started with Amazon ElastiCache for Redis
• Getting Started with Amazon ElastiCache for Memcached

It's important to understand that Amazon ElastiCache is not coupled to your database tier. As far as Amazon ElastiCache nodes are concerned, your application is just setting and getting keys in a slab of memory. That being the case, you can use Amazon ElastiCache with relational databases such as MySQL or Microsoft SQL Server; with NoSQL databases such as Amazon DynamoDB or MongoDB; or with no database tier at all, which is common for distributed computing applications. Amazon ElastiCache gives you the flexibility to deploy one, two, or more different cache clusters with your application, which you can use for differing types of datasets.

Alternatives to ElastiCache

In addition to using ElastiCache, you can cache data in AWS in other ways, each of which has its own pros and cons. To review some of the alternatives:

• Amazon CloudFront content delivery network (CDN): this approach is used to cache webpages, image assets, videos, and other static data at the edge, as close to end users as possible. In addition to using CloudFront with static assets, you can also place CloudFront in front of dynamic content, such as web apps. The important caveat here is that CloudFront only caches rendered page output. In web apps, games, and mobile apps, it's very common to have thousands of fragments of data, which are reused in multiple sections of the app. CloudFront is a valuable component of scaling a website, but it does not obviate the need for application caching.

• Amazon RDS Read Replicas: some database engines, such as MySQL, support the ability to attach asynchronous read replicas. Although useful, this ability is limited to providing data in a duplicate format of the primary database. You cannot cache calculations, aggregates, or arbitrary custom keys in a replica. Also, read replicas are not as fast as in-memory caches. Read replicas are more interesting for distributing data to remote sites or apps.

• On-host caching: a simplistic approach to caching is to store data on each Amazon EC2 application instance, so that it's local to the server for fast lookup. Don't do this. First, you get no efficiency from your cache in this case. As application instances scale up, they start with an empty cache, meaning they end up hammering the data tier. Second, cache invalidation becomes a nightmare. How are you going to reliably signal 10 or 100 separate EC2 instances to delete a given cache key?
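The fleet-wide invalidation problem behind that question can be sketched in a few lines of Python, with plain dictionaries standing in for per-instance caches and for a shared cache tier (all names here are hypothetical illustrations, not ElastiCache API calls):

```python
class AppInstance:
    """One application server with its own on-host cache (the anti-pattern)."""
    def __init__(self, db):
        self.db = db
        self.local_cache = {}

    def get_price(self, item_id):
        # Cache-aside read: fall through to the primary store on a miss.
        if item_id not in self.local_cache:
            self.local_cache[item_id] = self.db[item_id]
        return self.local_cache[item_id]

db = {"item-1": 100}
fleet = [AppInstance(db) for _ in range(10)]
for app in fleet:
    app.get_price("item-1")      # every instance now caches the price

db["item-1"] = 120               # the price changes in the primary store

# With on-host caching, all 10 instances keep serving the stale value
# until each one is individually signaled to drop the key.
stale = sum(1 for app in fleet if app.get_price("item-1") == 100)
print(stale)                     # -> 10

# With a shared cache tier (one dict standing in for a cache node),
# a single delete invalidates the key for the whole fleet at once.
shared_cache = {"item-1": 100}
shared_cache.pop("item-1", None)
print("item-1" in shared_cache)  # -> False
```

After the single delete, the next read against the shared cache misses and repopulates from the primary store, so the whole fleet converges on the new price with one operation instead of ten.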
Finally, you rule out interesting use cases for in-memory caches, such as sharing data at high speed across a fleet of instances.

Let's turn our attention back to ElastiCache, and how it fits into your application.

Memcached vs Redis

Amazon ElastiCache currently supports two different in-memory key-value engines. You can choose the engine you prefer when launching an ElastiCache cache cluster:

• Memcached: a widely adopted in-memory key store, and historically the gold standard of web caching. ElastiCache is protocol-compliant with Memcached, so popular tools that you use today with existing Memcached environments will work seamlessly with the service. Memcached is also multithreaded, meaning it makes good use of larger Amazon EC2 instance sizes with multiple cores.

• Redis: an increasingly popular open-source key-value store that supports more advanced data structures, such as sorted sets, hashes, and lists. Unlike Memcached, Redis has disk persistence built in, meaning that you can use it for long-lived data. Redis also supports replication, which can be used to achieve Multi-AZ redundancy, similar to Amazon RDS.

Although both Memcached and Redis appear similar on the surface, in that they are both in-memory key stores, they are quite different in practice. Because of the replication and persistence features of Redis, ElastiCache manages Redis more as a relational database. Redis ElastiCache clusters are managed as stateful entities that include failover, similar to how Amazon RDS manages database failover. Conversely, because Memcached is designed as a pure caching solution with no persistence, ElastiCache manages Memcached nodes as a pool that can grow and shrink, similar to an Amazon EC2 Auto Scaling group. Individual nodes are expendable, and ElastiCache provides additional capabilities here, such as automatic node replacement and Auto Discovery.

When deciding between Memcached and Redis, here are a few questions to consider:

• Is object caching your primary goal, for example to offload your database? If so, use Memcached.
• Are you interested in as simple a caching model as possible? If so, use Memcached.
• Are you planning on running large cache nodes, and require multithreaded performance with utilization of multiple cores? If so, use Memcached.
• Do you want the ability to scale your cache horizontally as you grow? If so, use Memcached.
• Does your app need to atomically increment or decrement counters? If so, use either Redis or Memcached.
• Are you looking for more advanced data types, such as lists, hashes, bit arrays, HyperLogLogs, and sets? If so, use Redis.
• Does sorting and ranking datasets in memory help you, such as with leaderboards? If so, use Redis.
• Are publish and subscribe (pub/sub) capabilities of use to your application? If so, use Redis.
• Is persistence of your key store important? If so, use Redis.
• Do you want to run in multiple AWS Availability Zones (Multi-AZ) with failover? If so, use Redis.
• Is geospatial support important to your applications? If so, use Redis.
• Is encryption and compliance with standards such as PCI DSS, HIPAA, and FedRAMP required for your business? If so, use Redis.

Although it's tempting to look at Redis as a more evolved Memcached, due to its advanced data types and atomic operations, Memcached has a longer track record and the ability to leverage multiple CPU cores. Because Memcached and Redis are so different in practice, we're going to address them separately in most of this paper. We will focus on using Memcached as an in-memory cache pool, and using Redis for advanced datasets such as game leaderboards and activity streams.

ElastiCache for Memcached

The primary goal of caching is typically to offload reads from your database or other primary data source. In most apps, you have hot spots of data that are regularly queried, but only updated periodically. Think of the front page of a blog or news site, or the top 100 leaderboard in an online game. In this type of case, your app can receive dozens, hundreds, or even thousands of requests for the same data before it's updated again.

Having your caching layer handle these queries has several advantages. First, it's considerably cheaper to add an in-memory cache than to scale up to a larger database cluster. Second, an in-memory cache is also easier to scale out, because it's easier to distribute an in-memory cache horizontally than a relational database. Last, a caching layer provides a request buffer in the event of a sudden spike in usage. If your app or game ends up on the front page of Reddit or the App Store, it's not unheard of to see a spike that is 10–100 times your normal application load. Even if you autoscale your application instances, a 10x request spike will likely make your database very unhappy.

Let's focus on ElastiCache for Memcached first, because it is the best fit for a caching-focused solution. We'll revisit Redis later in the paper, and weigh its advantages and disadvantages.

Architecture with ElastiCache for Memcached

When you deploy an ElastiCache Memcached cluster, it sits in your application as a separate tier alongside your database. As mentioned previously, Amazon ElastiCache does not directly communicate with your database tier, or indeed have any particular knowledge of your database. A simplified deployment for a web application looks similar to the following diagram.

A simplified deployment for a web application

In this architecture diagram, the Amazon EC2 application instances are in an Auto Scaling group, located behind a load balancer using Elastic Load Balancing, which distributes requests among the instances. As requests come into a given EC2 instance, that EC2 instance is responsible for communicating with ElastiCache and the database tier. For development purposes, you can begin with a single ElastiCache node to test your application, and then scale to additional cluster nodes by modifying the ElastiCache cluster. As you add additional cache nodes, the EC2 application instances are able to distribute cache keys across multiple ElastiCache nodes. The most common practice is to use client-side sharding to distribute keys across cache nodes, discussed later in this paper.

EC2 application instances in an Auto Scaling group

When you launch an ElastiCache cluster, you can choose the Availability Zones where the cluster lives. For best performance, you should configure your cluster to use the same Availability Zones as your application servers. To launch an ElastiCache cluster in a specific Availability Zone, make sure to specify the Preferred Zone(s) option during cache cluster creation. The Availability Zones that you specify will be where ElastiCache launches your cache nodes. AWS recommends that you select Spread Nodes Across Zones, which tells ElastiCache to distribute cache nodes across these zones as evenly as possible. This distribution will mitigate the impact of an Availability Zone disruption on your ElastiCache nodes. The trade-off is that some of the requests from your application to ElastiCache will go to a node in a different Availability Zone, meaning latency will be slightly higher. For more details, see Creating a Cluster in the Amazon ElastiCache for Memcached User Guide.

As mentioned at the outset, ElastiCache can be coupled with a wide variety of databases. Here is an example architecture that uses Amazon DynamoDB instead of Amazon RDS and MySQL:

Example architecture using Amazon DynamoDB instead of Amazon RDS and MySQL

This combination of DynamoDB and ElastiCache is very popular with mobile and game companies, because DynamoDB allows for higher write throughput at lower cost than traditional relational databases. In addition, DynamoDB uses a key-value access pattern similar to ElastiCache, which also simplifies the programming model. Instead of using relational SQL for the primary database but then key-value patterns for the cache, both the primary database and cache can be programmed similarly. In this architecture pattern, DynamoDB remains the source of truth for data, but application reads are offloaded to ElastiCache
for a speed boost.

Selecting the Right Cache Node Size

ElastiCache supports a variety of cache node types. We recommend choosing a cache node from the M5 or R5 families, because the newest node types support the latest-generation CPUs and networking capabilities. These instance families can deliver up to 25 Gbps of aggregate network bandwidth with enhanced networking based on the Elastic Network Adapter (ENA), and over 600 GiB of memory. The R5 node types provide 5% more memory per vCPU and a 10% price-per-GiB improvement over R4 node types. In addition, R5 node types deliver a ~20% CPU performance improvement over R4 node types.

If you don't know how much capacity you need, AWS recommends starting with one cache.m5.large node. Use the ElastiCache metrics published to CloudWatch to monitor memory usage, CPU utilization, and the cache hit rate. If your cluster does not have the desired hit rate, or you notice that keys are being evicted too often, choose another node type with more CPU and memory capacity. For production and large workloads, the R5 nodes typically provide the best performance and memory cost value.

You can get an approximate estimate of the amount of cache memory you'll need by multiplying the size of the items you want to cache by the number of items you want to keep cached at once. Unfortunately, calculating the size of your cached items can be trickier than it sounds. You can arrive at a slight overestimate by serializing your cached items and then counting characters. Here's an example that flattens a Ruby object to JSON, counts the number of characters, and then multiplies by 2 because there are typically 2 bytes per character:

irb(main):010:0> user = User.find(4)
irb(main):011:0> user.to_json.size * 2
=> 580

In addition to the size of your data, Memcached adds approximately 50–60 bytes of internal bookkeeping data to each element. The cache key also consumes space, up to 250 characters at 2 bytes each. In this example, it's probably safest to overestimate a little and guess 1–2 KB per cached object. Keep in mind that this approach is just for illustration purposes. Your cached objects can be much larger if you are caching rendered page fragments, or if you use a serialization library that expands strings.

Because Amazon ElastiCache is a pay-as-you-go service, make your best guess at the node instance size, and then adjust after getting some real-world data. Make sure that your application is set up for consistent hashing, which will enable you to add additional Memcached nodes to scale your in-memory layer horizontally. For additional tips, see Choosing Your Node Size in the Amazon ElastiCache for Memcached User Guide.

Security Groups and VPC

Like other AWS services, ElastiCache supports security groups. You can use security groups to define rules that limit access to your instances based on IP address and port. ElastiCache supports both subnet security groups in Amazon Virtual Private Cloud (Amazon VPC) and classic Amazon EC2 security groups. We strongly recommend that you deploy ElastiCache and your application in Amazon VPC, unless you have a specific need otherwise (such as for an existing application). Amazon VPC offers several advantages, including fine-grained access rules and control over private IP addressing. For an overview of how ElastiCache integrates with Amazon VPC, see Understanding ElastiCache and Amazon VPCs in the Amazon ElastiCache for Memcached User Guide.

When launching your ElastiCache cluster in a VPC, launch it in a private subnet with no public connectivity for best security. Memcached does not have any serious authentication or encryption capabilities, but Redis does support encryption. Following is a simplified version of the previous architecture diagram that includes an example VPC subnet design.

Example VPC subnet design

To keep your cache nodes as secure as possible, only allow access to your cache cluster from your application tier, as shown preceding. ElastiCache does not need connectivity to or from your database tier, because your database does not directly interact with ElastiCache. Only application instances that are making calls to your cache cluster need connectivity to it. The way ElastiCache manages connectivity in Amazon VPC is through standard VPC subnets and security groups. To securely launch an ElastiCache cluster in Amazon VPC, follow these steps:

1. Create VPC private subnet(s) that will house your ElastiCache cluster, in the same VPC as the rest of your application. A given VPC subnet maps to a single Availability Zone. Given this mapping, create a private VPC subnet for each Availability Zone where you have application instances. Alternatively, you can reuse another private VPC subnet that you already have. For more information, refer to VPCs and Subnets in the Amazon Virtual Private Cloud User Guide.
2. Create a VPC security group for your new cache cluster. Make sure it is also in the same VPC as the preceding subnet. For more details, see Security Groups for Your VPC in the Amazon Virtual Private Cloud User Guide.
3. Create a single access rule for this security group, allowing inbound access on port 11211 for Memcached or on port 6379 for Redis.
4. Create an ElastiCache subnet group that contains the VPC private subnets that you created in step 1. This subnet group is how ElastiCache knows which VPC subnets to use when launching the cluster. For instructions, see Creating a Cache Subnet Group in the Amazon ElastiCache for Memcached User Guide.
5. When you launch your ElastiCache cluster, make sure to place it in the correct VPC, and choose the correct ElastiCache subnet group. For instructions, see Creating a Cluster in the Amazon ElastiCache for Memcached User Guide.

A correct VPC security group for your cache cluster should look like the following. Notice the single inbound rule allowing access to the cluster from the application tier:

VPC security group for your cache cluster

To test connectivity from an application instance to your cache cluster in VPC, you can use netcat, a Linux command-line utility. Choose one of your cache cluster nodes and attempt to connect to the node on either port 11211 (Memcached) or port 6379 (Redis):

$ nc -z -w5 mycache-2b.z2vq55.0001.usw2.cache.amazonaws.com 11211
$ echo $?
0

If the connection is successful, netcat will exit with status 0. If netcat appears to hang or exits with a nonzero status, check your VPC security group and subnet settings.

Caching Design Patterns

With an ElastiCache cluster deployed, let's now dive into how to best apply caching in your application.

How to Apply Caching

When deciding whether to cache a given piece of data, consider the following questions:

• Is it safe to use a cached value? The same piece of data can have different consistency requirements in different contexts. For example, during online checkout, you need the authoritative price of an item, so caching might not be appropriate. On other pages, however, the price might be a few minutes out of date without a negative impact on users.

• Is caching effective for that data?
Some applications generate access patterns that are not suitable for caching —for example sweeping through the key space of a large dataset that is changing frequently In this case keeping the cache up todate could offset any advantage caching cou ld offer • Is the data structured well for caching? Simply caching a database record can often be enough to offer significant performance advantages However other times data is best cached in a format that combines multiple records together Because cach es are simple key value stores you might also need to cache a data record in multiple different formats so you can access it by different attributes in the record You don’t need to make all of these decisions up front As you expand your usage of cachin g keep these guidelines in mind when deciding whether to cache a given piece of data Consistent Hashing (Sharding) In order to make use of multiple ElastiCache nodes you need a way to efficiently spread your cache keys across your cache nodes The naïve approach to distributing cache keys often found in blogs looks like this: cache_node_list = [ ’mycache2az2vq550001usw2cacheamazonawscom:11211’ ’my cache2az2vq550002usw2cacheamazonawscom:11211’ ] This approach applies a hash function (suc h as CRC32) to the key to add some randomization and then uses a math modulo of the number of cache nodes to distribute the key to a random node in the list This approach is easy to understand and most importantly for any key hashing scheme it is determ inistic in that the same cache key always maps to the same cache node Unfortunately this particular approach suffers from a fatal flaw due to the way that modulo works As the number of cache nodes scales up most hash keys will get This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Performance at Scale with Amazon ElastiCache Page 14 remapped to new nodes with empty caches as a side 
effect of using modulo. You can calculate the fraction of keys that would be remapped to a new cache node by dividing the old node count by the new node count. For example, scaling from 1 to 2 nodes remaps half (1/2) of your cache keys; scaling from 3 to 4 nodes remaps three quarters (3/4) of your keys; and scaling from 9 to 10 nodes remaps 90 percent of your keys to empty caches.

This approach is bad for obvious reasons. Think of the scenario where you're scaling rapidly due to a spike in demand. Just at the point when your application is getting overwhelmed, you add an additional cache node to help alleviate the load. Instead, you effectively wipe 90 percent of your cache, causing a huge spike of requests to your database. Your dashboard goes red, and you start getting those alerts that nobody wants to get.

Luckily, there is a well-understood solution to this dilemma, known as consistent hashing. The theory behind consistent hashing is to create an internal hash ring with a preallocated number of partitions that can hold hash keys. As cache nodes are added and removed, they are slotted into positions on that ring. The following illustration, taken from Benjamin Erb's thesis on Concurrent Programming for Scalable Web Architectures, illustrates consistent hashing graphically.

Consistent hashing

The downside to consistent hashing is that there's quite a bit of math involved; at least, it's more complicated than a simple modulo. Basically, you preallocate a set of random integers and assign cache nodes to those random integers. Then, rather than using modulo, you find the closest integer in the ring for a given cache key and use the cache node associated with that integer. A concise yet complete explanation can be found in the article Consistent Hashing by Tom White.

Luckily, many modern
client libraries include consistent hashing. Although you shouldn't need to write your own consistent hashing solution from scratch, it's important that you are aware of consistent hashing so that you can ensure it's enabled in your client. For many libraries, it's still not the default behavior, even when supported by the library.

Client Libraries

Mature Memcached client libraries exist for all popular programming languages. Any of the following Memcached libraries will work with Amazon ElastiCache:

Language   Memcached Library
Ruby       Dalli, Dalli::ElastiCache
Python     Memcache Ring, django-elasticache, python-memcached, pylibmc
Node.js    node-memcached
PHP        ElastiCache Cluster Client, memcached
Java       ElastiCache Cluster Client, spymemcached
C#/.NET    ElastiCache Cluster Client, Enyim Memcached

For Memcached with Java, .NET, or PHP, AWS recommends using the ElastiCache Clients with Auto Discovery, because they support Auto Discovery of new ElastiCache nodes as they are added to the cache cluster. For Java, this library is a simple wrapper around the popular spymemcached library that adds Auto Discovery support. For PHP, it is a wrapper around the built-in Memcached PHP library. For .NET, it is a wrapper around Enyim Memcached.

Auto Discovery only works for Memcached, not Redis. When ElastiCache repairs or replaces a cache node, the Domain Name Service (DNS) name of the cache node remains the same, meaning your application doesn't need to use Auto Discovery to deal with common failures. You only need Auto Discovery support if you dynamically scale the size of your cache cluster on the fly while your application is running. Dynamic scaling is only required if your application load fluctuates significantly. For more details, see Automatically Identify Nodes in your Memcached Cluster in the Amazon
ElastiCache for Memcached User Guide.

As mentioned, you should choose a client library that includes native support for consistent hashing. Many of the libraries in the preceding table support consistent hashing, but we recommend that you check the documentation, because this support can change over time. Also, you might need to enable consistent hashing by setting an option in the client library. In PHP, for example, you need to explicitly set Memcached::OPT_LIBKETAMA_COMPATIBLE to true to enable consistent hashing:

$cache_nodes = array(
  array('mycache-2a.z2vq55.0001.usw2.cache.amazonaws.com', 11211),
  array('mycache-2a.z2vq55.0002.usw2.cache.amazonaws.com', 11211)
);
$memcached = new Memcached();
$memcached->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);
$memcached->addServers($cache_nodes);

This code snippet tells PHP to use consistent hashing by using libketama. Otherwise, the default in PHP is to use modulo, which suffers from the drawbacks outlined preceding.

Following are some common and effective caching strategies. If you've done a good amount of caching before, some of this might be familiar.

Be Lazy

Lazy caching, also called lazy population or cache-aside, is the most prevalent form of caching. Laziness should serve as the foundation of any good caching strategy. The basic idea is to populate the cache only when an object is actually requested by the application. The overall application flow goes like this:

1. Your app receives a query for data, for example the top 10 most recent news stories.
2. Your app checks the cache to see if the object is in cache.
3. If so (a cache hit), the cached object is returned, and the call flow ends.
4. If not (a cache miss), the database is queried for the object. The cache is populated, and the object is returned.

This approach has several
advantages over other methods:

• The cache only contains objects that the application actually requests, which helps keep the cache size manageable. New objects are only added to the cache as needed. You can then manage your cache memory passively, by simply letting Memcached automatically evict (delete) the least-accessed keys as your cache fills up, which it does by default.

• As new cache nodes come online, for example as your application scales up, the lazy population method will automatically add objects to the new cache nodes when the application first requests them.

• Cache expiration, which we will cover in depth later, is easily handled by simply deleting the cached object. A new object will be fetched from the database the next time it is requested.

• Lazy caching is widely understood, and many web and app frameworks include support out of the box.

Here is an example of lazy caching in Python pseudocode:

# Python
def get_user(user_id):
    # Check the cache
    record = cache.get(user_id)
    if record is None:
        # Run a DB query
        record = db.query("select * from users where id = ?", user_id)
        # Populate the cache
        cache.set(user_id, record)
    return record

# App code
user = get_user(17)

You can find libraries in many popular programming frameworks that encapsulate this pattern. But regardless of programming language, the overall approach is the same. Apply a lazy caching strategy anywhere in your application where you have data that is going to be read often but written infrequently. In a typical web or mobile app, for example, a user's profile rarely changes but is accessed throughout the app. A person might only update his or her profile a few times a year, but the profile might be accessed dozens or hundreds of times a day, depending on the user. Because Memcached will
automatically evict the less frequently used cache keys to free up memory, you can apply lazy caching liberally with little downside.

Write On Through

In a write-through cache, the cache is updated in real time when the database is updated. So, if a user updates his or her profile, the updated profile is also pushed into the cache. You can think of this as being proactive to avoid unnecessary cache misses, in the case that you have data that you absolutely know is going to be accessed. A good example is any type of aggregate, such as a top 100 game leaderboard, or the top 10 most popular news stories, or even recommendations. Because this data is typically updated by a specific piece of application or background job code, it's straightforward to update the cache as well.

The write-through pattern is also easy to demonstrate in pseudocode:

# Python
def save_user(user_id, values):
    # Save to DB
    record = db.query("update users where id = ?", user_id, values)
    # Push into cache
    cache.set(user_id, record)
    return record

# App code
user = save_user(17, {"name": "Nate Dogg"})

This approach has certain advantages over lazy population:

• It avoids cache misses, which can help the application perform better and feel snappier.

• It shifts any application delay to the user updating data, which maps better to user expectations. By contrast, a series of cache misses can give a random user the impression that your app is just slow.

• It simplifies cache expiration. The cache is always up to date.

However, write-through caching also has some disadvantages:

• The cache can be filled with unnecessary objects that aren't actually being accessed. Not only could this consume extra memory, but unused items can evict more useful items out of the cache.

• It can result in a lot of cache churn if certain
records are updated repeatedly.

• When (not if) cache nodes fail, those objects will no longer be in the cache. You need some way to repopulate the cache of missing objects, for example by lazy population.

As might be obvious, you can combine lazy caching with write-through caching to help address these issues, because they are associated with opposite sides of the data flow. Lazy caching catches cache misses on reads, and write-through caching populates data on writes, so the two approaches complement each other. For this reason, it's often best to think of lazy caching as a foundation that you can use throughout your app, and write-through caching as a targeted optimization that you apply to specific situations.

Expiration Date

Cache expiration can become complex quickly. In our previous examples, we were only operating on a single user record. In a real app, a given page or screen often caches a whole bunch of different stuff at once: profile data, top news stories, recommendations, comments, and so forth, all of which are being updated by different methods. Unfortunately, there is no silver bullet for this problem, and cache expiration is a whole arm of computer science. But there are a few simple strategies that you can use:

• Always apply a time to live (TTL) to all of your cache keys, except those you are updating by write-through caching. You can use a long time, say hours or even days. This approach catches application bugs where you forget to update or delete a given cache key when updating the underlying record. Eventually, the cache key will auto-expire and get refreshed.

• For rapidly changing data such as comments, leaderboards, or activity streams, rather than adding write-through caching or complex expiration logic, just set a short TTL of a few seconds. If you have a
database query that is getting hammered in production, it's just a few lines of code to add a cache key with a 5-second TTL around the query. This code can keep your application up and running while you evaluate more elegant solutions.

• A newer pattern, Russian doll caching, has come out of work done by the Ruby on Rails team. In this pattern, nested records are managed with their own cache keys, and then the top-level resource is a collection of those cache keys. Say that you have a news webpage that contains users, stories, and comments. In this approach, each of those is its own cache key, and the page queries each of those keys respectively.

• When in doubt, just delete a cache key if you're not sure whether it's affected by a given database update or not. Your lazy caching foundation will refresh the key when needed. In the meantime, your database will be no worse off than it was without Memcached.

For a good overview of cache expiration and Russian doll caching, see the blog post The performance impact of "Russian doll" caching in the Basecamp Signal vs Noise blog.

The Thundering Herd

Also known as dog piling, the thundering herd effect is what happens when many different application processes simultaneously request a cache key, get a cache miss, and then each hits the same database query in parallel. The more expensive this query is, the bigger impact it has on the database. If the query involved is a top 10 query that requires ranking a large dataset, the impact can be a significant hit.

One problem with adding TTLs to all of your cache keys is that it can exacerbate this problem. For example, let's say millions of people are following a popular user on your site. That user hasn't updated his profile or published any new messages, yet his profile cache still expires due to a TTL. Your database might suddenly be swamped with a series of identical queries.

TTLs aside, this effect is also common when adding a new cache node, because the new cache node's memory is empty. In both cases, the
solution is to prewarm the cache by following these steps:

1. Write a script that performs the same requests that your application will. If it's a web app, this script can be a shell script that hits a set of URLs.
2. If your app is set up for lazy caching, cache misses will result in cache keys being populated, and the new cache node will fill up.
3. When you add new cache nodes, run your script before you attach the new node to your application. Because your application needs to be reconfigured to add a new node to the consistent hashing ring, insert this script as a step before triggering the app reconfiguration.
4. If you anticipate adding and removing cache nodes on a regular basis, prewarming can be automated by triggering the script to run whenever your app receives a cluster reconfiguration event through Amazon Simple Notification Service (Amazon SNS).

Finally, there is one last subtle side effect of using TTLs everywhere. If you use the same TTL length (say 60 minutes) consistently, then many of your cache keys might expire within the same time window, even after prewarming your cache. One strategy that's easy to implement is to add some randomness to your TTL:

ttl = 3600 + (rand() * 120) /* up to 2 minutes of jitter */

The good news is that only sites at large scale typically have to worry about this level of scaling problem. It's good to be aware of, but it's also a good problem to have.

Cache (Almost) Everything

Finally, it might seem as if you should only cache your heavily hit database queries and expensive calculations, but that other parts of your app might not benefit from caching. In practice, in-memory caching is widely useful, because it is much faster to retrieve a flat cache key from memory than to perform even the most highly optimized database query or remote API
call. Just keep in mind that cached data is stale data by definition, meaning there may be cases where it's not appropriate, such as accessing an item's price during online checkout. You can monitor statistics like cache misses to determine whether your cache is effective, which we will cover in Monitoring and Tuning, later in the paper.

ElastiCache for Redis

So far, we've been talking about ElastiCache for Memcached as a passive component in our application: a big slab of memory in the cloud. Choosing Redis as our engine can unlock more interesting possibilities for our application, due to its higher-level data structures such as lists, hashes, sets, and sorted sets.

Deploying Redis makes use of familiar concepts such as clusters and nodes. However, Redis has a few important differences compared with Memcached:

• Redis data structures cannot be horizontally sharded. As a result, Redis ElastiCache clusters are always a single node, rather than the multiple nodes we saw with Memcached.

• Redis supports replication, both for high availability and to separate read workloads from write workloads. A given ElastiCache for Redis primary node can have one or more replica nodes. A Redis primary node can handle both reads and writes from the app. Redis replica nodes can only handle reads, similar to Amazon RDS Read Replicas.

• Because Redis supports replication, you can also fail over from the primary node to a replica in the event of failure. You can configure ElastiCache for Redis to automatically fail over by using the Multi-AZ feature.

• Redis supports persistence, including backup and recovery. However, because Redis replication is asynchronous, you cannot completely guard against data loss in the event of a failure. We will go into detail on this topic in our discussion of Multi-AZ.
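To see why asynchronous replication implies possible data loss, consider the following toy simulation. This is plain Python, not the Redis protocol or API: a hypothetical `Primary` acknowledges every write immediately, while a `Replica` trails it by a few writes at the moment of failure, so promoting the replica loses the unreplicated tail.

```python
# Toy illustration of asynchronous replication (NOT the Redis protocol).
# The primary acknowledges writes before they reach the replica, so a
# failover can lose the most recent acknowledged writes.

class Primary:
    def __init__(self):
        self.log = []           # acknowledged writes, in order

    def write(self, value):
        self.log.append(value)  # acknowledged now, replicated later
        return "OK"

class Replica:
    def __init__(self):
        self.log = []

    def replicate_from(self, primary, lag):
        # Asynchronous replication: the replica may trail the primary
        # by `lag` writes at the instant the primary fails.
        self.log = primary.log[: len(primary.log) - lag]

primary, replica = Primary(), Replica()
for i in range(100):
    primary.write(f"event-{i}")     # every write was acknowledged

replica.replicate_from(primary, lag=3)  # primary dies 3 writes ahead

# Failover promotes the replica; the unreplicated writes are gone.
lost = primary.log[len(replica.log):]
print(len(lost))    # -> 3
print(lost[0])      # -> event-97
```

This is exactly why the paper recommends keeping crucial data (transactions, purchases) in a durable database as well, rather than only in Redis.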
Architecture with ElastiCache for Redis

As with Memcached, when you deploy an ElastiCache for Redis cluster, it is an additional tier in your app. Unlike Memcached, ElastiCache clusters for Redis only contain a single primary node. After you create the primary node, you can configure one or more replica nodes and attach them to the primary Redis node. An ElastiCache for Redis replication group consists of a primary and up to five read replicas. Redis asynchronously replicates the data from the primary to the read replicas.

Because Redis supports persistence, it is technically possible to use Redis as your only data store. In practice, customers find that a managed database such as Amazon DynamoDB or Amazon RDS is a better fit for most use cases of long-term data storage.

Amazon ElastiCache for Redis

ElastiCache for Redis has the concept of a primary endpoint, which is a DNS name that always points to the current Redis primary node. If a failover event occurs, the DNS entry will be updated to point to the new Redis primary node. To take advantage of this functionality, make sure to configure your Redis client so that it uses the primary endpoint DNS name to access your Redis cluster.

Keep in mind that the number of Redis replicas you attach will affect the performance of the primary node. Resist the urge to spin up lots of replicas just for durability. One or two replicas in a different Availability Zone are sufficient for availability. When scaling read throughput, monitor your application's performance and add replicas as needed. Be sure to monitor your
ElastiCache cluster's performance as you add replica nodes. For more details, see Monitoring and Tuning, later in this paper.

Distributing Reads and Writes

Using read replicas with Redis, you can separate your read and write workloads. This separation lets you scale reads by adding additional replicas as your application grows. In this pattern, you configure your application to send writes to the primary endpoint. Then you read from one of the replicas, as shown in the following diagram. With this approach, you can scale your read and write loads independently, so your primary node only has to deal with writes.

Distributing reads and writes

The main caveat to this approach is that reads can return data that is slightly out of date compared to the primary node, because Redis replication is asynchronous. For example, if you have a global counter of "total games played" that is being continuously incremented (a good fit for Redis), your primary might show 51,782. However, a read from a replica might only return 51,775. In many cases, this is just fine. But if the counter is a basis for a crucial application state, such as the number of seconds remaining to vote on the most popular pop singer, this approach won't work.

When deciding whether data can be read from a replica, here are a few questions to consider:

• Is the value being used only for display purposes? If so, being slightly out of date is probably okay.

• Is the value a cached value, for example a page fragment? If so, again, being slightly out of date is likely fine.

• Is the value being used on a screen where the user might have just edited it? In this case, showing an old value might look like an application bug.

• Is the value being used for application logic?
If so, using an old value can be risky.

• Are multiple processes using the value simultaneously, such as a lock or queue? If so, the value needs to be up to date and needs to be read from the primary node.

In order to split reads and writes, you will need to create two separate Redis connection handles in your application: one pointing to the primary node, and one pointing to the read replica(s). Configure your application to write to the DNS primary endpoint, and then read from the other Redis nodes.

Multi-AZ with Auto-Failover

During certain types of planned maintenance, or in the unlikely event of ElastiCache node failure or Availability Zone failure, Amazon ElastiCache can be configured to automatically detect the failure of the primary node, select a read replica, and promote it to become the new primary. ElastiCache auto-failover will then update the DNS primary endpoint with the IP address of the promoted read replica. If your application is writing to the primary node endpoint as recommended earlier, no application change will be needed.

Depending on how in sync the promoted read replica is with the primary node, the failover process can take several minutes. First, ElastiCache needs to detect the failover, then suspend writes to the primary node, and finally complete the failover to the replica. During this time, your application cannot write to the Redis ElastiCache cluster. Architecting your application to limit the impact of these types of failover events will ensure greater overall availability.

Unless you have a specific need otherwise, all production deployments should use Multi-AZ with auto-failover. Keep in mind that Redis replication is asynchronous, meaning if a failover occurs, the read replica that is selected might be slightly behind the primary. Bottom line:
Some data loss might occur if you have rapidly changing data. This effect is currently a limitation of Redis replication itself. If you have crucial data that cannot be lost (for example, transactional or purchase data), we recommend that you also store that in a durable database such as Amazon DynamoDB or Amazon RDS.

Sharding with Redis

Redis has two categories of data structures: simple keys and counters, and multidimensional sets, lists, and hashes. The bad news is that the second category cannot be sharded horizontally. But the good news is that simple keys and counters can.

In the simplest case, you can treat a single Redis node just like a single Memcached node. Just like you might spin up multiple Memcached nodes, you can spin up multiple Redis clusters, and each Redis cluster is responsible for part of the sharded dataset.

Sharding with Redis

In your application, you'll then need to configure the Redis client to shard between those two clusters. Here is an example from the Jedis Sharded Java Client:

List<JedisShardInfo> shards = new ArrayList<>();
shards.add(new JedisShardInfo("redis-cluster1", 6379));
shards.add(new JedisShardInfo("redis-cluster2", 6379));

ShardedJedisPool pool = new ShardedJedisPool(shards);
ShardedJedis jedis = pool.getResource();

You can also combine horizontal sharding with split reads and writes. In this setup, you have two or more Redis clusters, each of which stores part of the key space. You configure your application with two separate sets of Redis handles: a write handle that points to the sharded masters, and a read handle that points to the sharded
replicas. Following is an example architecture, this time with Amazon DynamoDB rather than MySQL, just to illustrate that you can use either one:

Example architecture with DynamoDB

For the purpose of simplification, the preceding diagram shows replicas in the same Availability Zone as the primary node. In practice, you should place the replicas in a different Availability Zone. From an application perspective, continuing with our Java example, you configure two Redis connection pools as follows:

List<JedisShardInfo> masters = new ArrayList<>();
masters.add(new JedisShardInfo("redis-masterA", 6379));
masters.add(new JedisShardInfo("redis-masterB", 6379));
ShardedJedisPool write_pool = new ShardedJedisPool(masters);
ShardedJedis write_jedis = write_pool.getResource();

List<JedisShardInfo> replicas = new ArrayList<>();
replicas.add(new JedisShardInfo("redis-replicaA", 6379));
replicas.add(new JedisShardInfo("redis-replicaB", 6379));
ShardedJedisPool read_pool = new ShardedJedisPool(replicas);
ShardedJedis read_jedis = read_pool.getResource();

In designing your application, you need to make decisions as to whether a given value can be read from the replica pool, which might be slightly out of date, or from the primary write node. Be aware that reading from the primary node will ultimately limit the throughput of your entire Redis layer, because it takes I/O away from writes.

Using multiple clusters in this fashion is the most advanced configuration of Redis possible. In practice, it is overkill for most applications. However, if you design your application so that it can leverage a split read/write Redis layer, you can apply this design in the future, if your application grows to the scale where it is needed.

Advanced Datasets with Redis

Let's briefly look at some use cases that ElastiCache for Redis can support.
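Before diving into specific datasets, here is a minimal sketch of the read/write routing decision described above. It is illustrative only: plain Python dicts stand in for the two connection pools (primary and replica), and the helper names are hypothetical, but the routing rule follows the questions listed earlier, with display-only reads tolerating replica lag and logic-critical reads going to the primary.

```python
# Hypothetical sketch of the read/write split. `write_pool` and
# `read_pool` stand in for two Redis connection handles: one for the
# primary endpoint, one for the replica(s). Plain dicts keep it runnable.

def make_router(write_pool, read_pool):
    def write(key, value):
        write_pool[key] = value          # all writes go to the primary

    def read(key, needs_fresh=False):
        # Values that drive application logic (locks, queues, counters
        # used for decisions) must come from the primary; display-only
        # or cached values can tolerate replica lag.
        pool = write_pool if needs_fresh else read_pool
        return pool.get(key)

    return write, read

primary, replica = {}, {}                # stand-ins for Redis connections
write, read = make_router(primary, replica)

write("games_played", 51782)             # replica hasn't caught up yet
replica["games_played"] = 51775          # simulate asynchronous lag

print(read("games_played"))                    # -> 51775 (stale, fine for display)
print(read("games_played", needs_fresh=True))  # -> 51782 (authoritative)
```

In a real application, the two dicts would be the `write_jedis` and `read_jedis` handles from the Java example above, and `needs_fresh` would be decided per call site.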
Game Leaderboards

If you've played online games, you're probably familiar with top 10 leaderboards. What might not be obvious is that calculating a top-n leaderboard in near-real time is actually quite complex. An online game can easily have thousands of people playing concurrently, each with stats that are changing continuously. Re-sorting these users and reassigning a numeric position is computationally expensive.

Sorted sets are particularly interesting here, because they simultaneously guarantee both the uniqueness and ordering of elements. Redis sorted set commands all start with Z. When an element is inserted in a Redis sorted set, it is reranked in real time and assigned a numeric position. Here is a complete game leaderboard example in Redis:

ZADD "leaderboard" 556 "Andy"
ZADD "leaderboard" 819 "Barry"
ZADD "leaderboard" 105 "Carl"
ZADD "leaderboard" 1312 "Derek"

ZREVRANGE "leaderboard" 0 -1
1) "Derek"
2) "Barry"
3) "Andy"
4) "Carl"

ZREVRANK "leaderboard" "Barry"
(integer) 1

When a player's score is updated, the Redis command ZADD overwrites the existing value with the new score. The list is instantly re-sorted, and the player receives a new rank. For more information, refer to the Redis documentation on ZADD, ZRANGE, and ZRANK.

Recommendation Engines

Similarly, calculating recommendations for users based on other items they've liked requires very fast access to a large dataset. Some algorithms, such as Slope One, are simple and effective, but require in-memory access to every item ever rated by anyone in the system. Even if this data is kept in a relational database, it has to be loaded in memory somewhere to run the algorithm.

Redis data structures are a great fit for recommendation data. You can use Redis counters to increment or decrement the number of likes or dislikes
for a given item. You can use Redis hashes to maintain a list of everyone who has liked or disliked that item, which is the type of data that Slope One requires. Here is a brief example of storing item likes and dislikes:

INCR "item:38923:likes"
HSET "item:38923:ratings" "Susan" 1
INCR "item:38923:dislikes"
HSET "item:38923:ratings" "Tommy" -1

From this simple data, not only can we use Slope One or Jaccard similarity to recommend similar items, but we can use the same counters to display likes and dislikes in the app itself. In fact, a number of open source projects use Redis in exactly this manner, such as Recommendify and Recommendable. In addition, because Redis supports persistence, this data can live solely in Redis. This placement eliminates the need for any data loading process, and also offloads an intensive process from your main database.

Chat and Messaging

Redis provides a lightweight pub/sub mechanism that is well suited to simple chat and messaging needs. Use cases include in-app messaging, web chat windows, online game invites and chat, and real-time comment streams (such as you might see during a live streaming event). Two basic Redis commands are involved, PUBLISH and SUBSCRIBE:

SUBSCRIBE "chat:114"
PUBLISH "chat:114" "Hello all"
["message", "chat:114", "Hello all"]
UNSUBSCRIBE "chat:114"

Unlike other Redis data structures, pub/sub messaging doesn't get persisted to disk. Redis pub/sub messages are not written as part of the RDB or AOF backup files that Redis creates. If you want to save these pub/sub messages, you will need to add them to a Redis data structure, such as a list. For more details, see Using Pub/Sub for Asynchronous Communication in the Redis Cookbook.

Also, because Redis pub/sub is not persistent, you can lose data if
a cache node fails. If you're looking for a reliable topic-based messaging system, consider evaluating Amazon SNS.

Queues

Although we offer a managed queue service in the form of Amazon Simple Queue Service (Amazon SQS), and we encourage customers to use it, you can also use Redis data structures to build queuing solutions. The Redis documentation for RPOPLPUSH covers two well-documented queuing patterns. In these patterns, Redis lists are used to hold items in a queue. When a process takes an item from the queue to work on it, the item is pushed onto an "in progress" queue, and then deleted when the work is done.

Open source solutions such as Resque use Redis as a queue; GitHub uses Resque. Redis does have certain advantages over other queue options, such as very fast speed, once-and-only-once delivery, and guaranteed message ordering. However, pay careful attention to ElastiCache for Redis backup and recovery options (which we will cover shortly) if you intend to use Redis as a queue. If a Redis node terminates and you have not properly configured its persistence options, you can lose the data for the items in your queue. Essentially, you need to view your queue as a type of database, and treat it appropriately, rather than as a disposable cache.

Client Libraries and Consistent Hashing

As with Memcached, you can find Redis client libraries for the currently popular programming languages. Any of these will work with ElastiCache for Redis:

Language   Redis Library
Ruby       redis-rb, Redis::Objects
Python     redis-py
Node.js    node_redis, ioredis
PHP        phpredis, Predis
Java       Jedis, Lettuce, Redisson
C#/.NET    ServiceStack.Redis, StackExchange.Redis
Go         go-redis/redis, Radix, Redigo

Unlike with Memcached, it is uncommon for Redis libraries to support consistent hashing. Redis libraries rarely support
Redis libraries rarely support consistent hashing, because the advanced data types that we discussed preceding cannot simply be horizontally sharded across multiple Redis nodes. This point leads to another very important one: Redis, as a technology, cannot be horizontally scaled easily. Redis can only scale up to a larger node size, because its data structures must reside in a single memory image in order to perform properly.

Note that Redis Cluster was first made available in Redis version 3.0. It aims to provide scale-out capability with certain data types. Redis Cluster currently only supports a subset of Redis functionality, and has some important caveats about possible data loss. For more details, see the Redis Cluster Specification.

Monitoring and Tuning

Before we wrap up, let's spend some time talking about monitoring and performance tuning.

Monitoring Cache Efficiency

To begin, see the Monitoring Use with CloudWatch topic for Redis and Memcached, as well as the Which Metrics Should I Monitor?
topic for Redis and Memcached, in the Amazon ElastiCache User Guide. Both topics are excellent resources for understanding how to measure the health of your ElastiCache cluster using the metrics that ElastiCache publishes to Amazon CloudWatch. Most importantly, watch CPU usage. A consistently high CPU usage indicates that a node is overtaxed, either by too many concurrent requests, or by performing dataset operations in the case of Redis.

For Redis, ElastiCache provides two different types of metrics for monitoring CPU usage: CPUUtilization and EngineCPUUtilization. Because Redis is single-threaded, you need to multiply the CPU percentage by the number of cores to get an accurate measure of CPUUtilization. For smaller node types with one or two vCPUs, use the CPUUtilization metric to monitor your workload. For larger node types with four or more vCPUs, we recommend monitoring the EngineCPUUtilization metric, which reports the percentage of usage on the Redis engine core.

After Redis maxes out a single CPU core, that node is fully utilized, and further scaling is needed. If your main workload is from read requests, add more replicas to distribute the read workload across the replicas and reader endpoints. If your main workload is from write requests, add more shards to distribute the write workload across more primary nodes.

In addition to CPU, here is some additional guidance for monitoring cache memory utilization. Each of these metrics is available in CloudWatch for your ElastiCache cluster:

• Evictions: both Memcached and Redis manage cache memory internally, and when memory starts to fill up, they evict (delete) unused cache keys to free space. A small number of evictions shouldn't alarm you, but a large number means that your cache is running out of space.

• CacheMisses:
the number of times a key was requested but not found in the cache. This number can be fairly large if you're using lazy population as your main strategy. If this number is remaining steady, it's likely nothing to worry about. However, a large number of cache misses combined with a large eviction number can indicate that your cache is thrashing due to lack of memory.

• BytesUsedForCacheItems: this value is the actual amount of cache memory that Memcached or Redis is using. Both Memcached and Redis attempt to allocate as much system memory as possible, even if it's not used by actual cache keys. Thus, monitoring the system memory usage on a cache node doesn't tell you how full your cache actually is.

• SwapUsage: in normal usage, neither Memcached nor Redis should be performing swaps.

• CurrConnections: this is a cache engine metric representing the number of clients connected to the engine. We recommend that you determine your own alarm threshold for this metric based on your application needs. An increasing number of CurrConnections might indicate a problem with your application; you'll need to investigate the application's behavior to address this issue.

A well-tuned cache node will show the number of cache bytes used to be almost equal to the maxmemory parameter in Redis, or the max_cache_memory parameter in Memcached. In steady state, most cache counters will increase, with cache hits increasing faster than misses. You also will probably see a low number of evictions. However, a rising number of evictions indicates that cache keys are getting pushed out of memory, which means you can benefit from larger cache nodes with more memory.

The one exception to the evictions rule is if you follow a strict definition of Russian doll caching, which says that you should never cause cache items to expire, but instead let Memcached and Redis evict unused keys as needed. If you follow this approach, keep a close watch on cache misses and bytes used to detect potential problems.
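The eviction and cache-miss guidance above can be folded into a simple, illustrative health check. The metric names match the CloudWatch metrics just described, but the thresholds and the labels returned are arbitrary assumptions for the sketch; tune them to your own workload.

```ruby
# Toy health check over sampled metric values (hash of metric name => value).
# Thresholds are illustrative assumptions, not AWS recommendations.
def cache_health(metrics)
  hits      = metrics.fetch('CacheHits', 0)
  misses    = metrics.fetch('CacheMisses', 0)
  evictions = metrics.fetch('Evictions', 0)
  hit_rate  = (hits + misses) > 0 ? hits.to_f / (hits + misses) : 1.0

  # Many misses plus many evictions: the cache is likely thrashing.
  return :thrashing if misses > hits && evictions > 1_000
  # Evictions alone suggest the working set no longer fits in memory.
  return :memory_pressure if evictions > 1_000
  # A low hit rate with no memory pressure points at population strategy.
  return :degraded if hit_rate < 0.8
  :healthy
end

cache_health('CacheHits' => 90_000, 'CacheMisses' => 5_000, 'Evictions' => 12)
# => :healthy
```

In practice you would feed this from CloudWatch (for example, via the GetMetricData API) rather than hard-coded samples, and alarm on the non-healthy states.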
Watching for Hot Spots

In general, if you are using consistent hashing to distribute cache keys across your cache nodes, your access patterns should be fairly even across nodes. However, you still need to watch out for hot spots, which are nodes in your cache that receive higher load than other nodes. This pattern is caused by hot keys, which are cache keys that are accessed more frequently than others. Think of a social website, where you have some users that might be 10,000 times more popular than an average user. That user's cache keys will be accessed much more often, which can put an uneven load onto the cache nodes that house that user's keys.

If you see uneven CPU usage among your cache nodes, you might have a hot spot. This pattern often appears as one cache node having a significantly higher operation count than other nodes. One way to confirm this is by keeping a counter in your application of your cache key gets and puts. You can push these as custom metrics into CloudWatch or another monitoring service. Don't do this unless you suspect a hot spot, however, because logging every key access will decrease the overall performance of your application.

In the most common case, a few hot keys will not necessarily create any significant hot spot issues. If you have a few hot keys on each of your cache nodes, then those hot keys are themselves evenly distributed, and are producing an even load on your cache nodes. If you have three cache nodes and each of them has a few hot keys, then you can continue sizing your cache cluster as if those hot keys did not exist. In practice, even a well-designed application will have some degree of unevenness in cache key access. In extreme cases, a single hot cache key can create a hot spot that overwhelms a single cache node.
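The application-side counter suggested above can be sketched as follows. The class name, key names, and sampling rate are illustrative assumptions; in production you would sample a small fraction of requests (to limit overhead) and flush the tallies to CloudWatch or another monitoring service.

```ruby
# Tally cache-key accesses and report the hottest keys. Sampling keeps
# the overhead low; sample_rate: 1.0 counts every access (only sensible
# in a test or while actively investigating a suspected hot spot).
class HotKeyTracker
  def initialize(sample_rate: 0.01)
    @sample_rate = sample_rate
    @counts = Hash.new(0)
  end

  def record(key)
    @counts[key] += 1 if rand < @sample_rate
  end

  # The n most frequently seen keys, hottest first.
  def top(n)
    @counts.sort_by { |_key, count| -count }.first(n).map(&:first)
  end
end

tracker = HotKeyTracker.new(sample_rate: 1.0)
1_000.times { tracker.record('user:popular') }
10.times { tracker.record('user:average') }
tracker.top(1)  # => ["user:popular"]
```

Wrapping your cache client's get and put calls with `record` is usually enough; the `top` output tells you which keys (and therefore which nodes) deserve a closer look.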
In this case, having good metrics about your cache, especially your most popular cache keys, is crucial to designing a solution. One solution is to create a mapping table that remaps very hot keys to a separate set of cache nodes. Although this approach provides a quick fix, you will still face the challenge of scaling those new cache nodes. Another solution is to add a secondary layer of smaller caches in front of your main nodes to act as a buffer. This approach gives you more flexibility, but introduces additional latency into your caching tier.

The good news is that these concerns only hit applications of a significant scale. We recommend being aware of this potential issue and monitoring for it, but not spending time trying to engineer around it up front. Hot spots are a fast-moving area of computer science research, and there is no one-size-fits-all solution. As always, our team of Solutions Architects is available to work with you to address these issues if you encounter them. For more research on this topic, refer to papers such as Relieving Hot Spots on the World Wide Web and Characterizing Load Imbalance in Real-World Networked Caches.

Memcached Memory Optimization

Memcached uses a slab allocator, which means that it allocates memory in fixed chunks and then manages those chunks internally. Using this approach, Memcached can be more efficient and predictable in its memory access patterns than if it used the system malloc(). The downside of the Memcached slab allocator is that memory chunks are rigidly allocated once and cannot be changed later. This approach means that if you choose the wrong number of the wrong size slabs, you might run out of Memcached chunks while still having plenty of system memory available.

When you launch an ElastiCache cluster, the max_cache_memory parameter is set for you automatically, along with several other parameters.
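To see why slab sizing matters, here is an illustrative calculation of slab chunk sizes. The 48-byte minimum and 1.25 growth factor mirror stock Memcached defaults; treat both as assumptions and check your cluster's parameter group for the actual values.

```ruby
# Sketch: compute Memcached-style slab chunk sizes, then find which
# chunk class an item of a given size lands in. Defaults (48 bytes,
# 1.25x growth) are assumptions based on stock Memcached.
def slab_chunk_sizes(min_chunk: 48, growth: 1.25, max_item: 1_048_576)
  sizes = [min_chunk]
  sizes << (sizes.last * growth).ceil while sizes.last < max_item
  sizes
end

# First chunk class large enough to hold the item (nil if it is too big).
def chunk_for(item_bytes, sizes)
  sizes.find { |s| s >= item_bytes }
end

sizes = slab_chunk_sizes
chunk_for(100, sizes)  # smallest chunk class that fits a 100-byte item
```

The gap between an item's size and its chunk class is wasted memory, which is why a poor chunk-size choice can exhaust chunks of one class while plenty of system memory remains free.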
For a list of default values, see Memcached Specific Parameters in the Amazon ElastiCache for Memcached User Guide. The key parameters to keep in mind are chunk_size and chunk_size_growth_factor, which work together to control how memory chunks are allocated.

Redis Memory Optimization

Redis has a good write-up on memory optimization that can come in handy for advanced use cases. Redis exposes a number of configuration variables that affect how Redis balances CPU and memory for a given dataset. These directives can be used with ElastiCache for Redis as well.

Redis Backup and Restore

Redis clusters support persistence by using backup and restore. When Redis backup and restore is enabled, ElastiCache can automatically take snapshots of your Redis cluster and save them to Amazon Simple Storage Service (Amazon S3). The Amazon ElastiCache User Guide includes excellent coverage of this function in the topic ElastiCache for Redis Backup and Restore.

Because of the way Redis backups are implemented in the Redis engine itself, you need to have more memory available than your dataset consumes. This requirement is because Redis forks a background process that writes the backup data. To do so, it makes a copy of your data using Linux copy-on-write semantics. If your data is changing rapidly, this approach means that those data segments will be copied, consuming additional memory. For more details, refer to Amazon ElastiCache Backup Best Practices.

For production use, we strongly recommend that you always enable Redis backups, and retain them for a minimum of 7 days. In practice, retaining them for 14 or 30 days will provide better safety in the event of an application bug that ends up corrupting data. Even if you plan to
use Redis primarily as a performance optimization or caching layer, persisting the data means you can prewarm a new Redis node, which avoids the thundering herd issue that we discussed earlier. To create a new Redis cluster from a backup snapshot, see Seeding a New Cluster with an Externally Created Backup in the Amazon ElastiCache for Redis User Guide.

You can also use a Redis snapshot to scale up to a larger Amazon EC2 instance type. To do so, follow this process:

1. Suspend writes to your existing ElastiCache cluster. Your application can continue to do reads.
2. Take a snapshot by following the procedure in the Creating a Manual Snapshot section in the Amazon ElastiCache for Redis User Guide. Give it a distinctive name that you will remember.
3. Create a new ElastiCache Redis cluster, and specify the snapshot you took preceding to seed it.
4. Once the new ElastiCache cluster is online, reconfigure your application to start writing to the new cluster.

Currently, this process will interrupt your application's ability to write data into Redis. If you have writes that are only going into Redis and that cannot be suspended, you can put those into Amazon SQS while you are resizing your ElastiCache cluster. Then, once your new ElastiCache Redis cluster is ready, you can run a script that pulls those records off Amazon SQS and writes them to your new Redis cluster.

Cluster Scaling and Auto Discovery

Scaling your application in response to changes in demand is one of the key benefits of working with AWS. Many customers find that configuring their client with a list of node DNS endpoints for ElastiCache works perfectly fine. But let's look at how to scale your ElastiCache Memcached cluster while your application is running, and how to set up your application to detect changes to your cache layer dynamically.

Auto Scaling Cluster Nodes

Amazon ElastiCache does not currently support using Auto Scaling to scale the number of cache nodes in a cluster. To change the number of cache nodes, you can use
either the AWS Management Console or the AWS API to modify the cluster. For more information, refer to Modifying an ElastiCache Cache Cluster in the Amazon ElastiCache for Memcached User Guide.

In practice, you usually don't want to regularly change the number of cache nodes in your Memcached cluster. Any change to your cache nodes will result in some percentage of cache keys being remapped to new (empty) nodes, which means a performance impact to your application. Even with consistent hashing, you will see an impact on your application when adding or removing nodes.

Auto Discovery of Memcached Nodes

The ElastiCache Clients with Auto Discovery for Java, .NET, and PHP support Auto Discovery of new ElastiCache Memcached nodes. For Ruby, the open source library dalli-elasticache provides auto-discovery support, and django-elasticache is available for Python Django. In other languages, you'll need to implement auto-discovery yourself. Luckily, this implementation is very easy.

The overall Auto Discovery mechanism is outlined in the How Auto Discovery Works topic in the Amazon ElastiCache for Memcached User Guide. Basically, ElastiCache adds a special Memcached configuration variable called cluster that contains the DNS names of the current cache nodes. To access this list, your application connects to your cache cluster configuration endpoint, which is a hostname ending in cfg.<region>.cache.amazonaws.com. After you retrieve the list of cache node host names, your application configures its Memcached client to connect to the list of cache nodes, using consistent hashing to balance across them. Here is a complete working example in Ruby:

    require 'socket'
    require 'dalli'

    socket = TCPSocket.new('mycache-2a.z2vq55.cfg.usw2.cache.amazonaws.com', 11211)
    socket.puts("config get cluster")
    header = socket.gets
    version = socket.gets
    nodelist = socket.gets.chomp.split(/\s+/).map { |l| l.split('|').first }
    socket.close

    # Configure Memcached client
    cache = Dalli::Client.new(nodelist)

Using Linux utilities, you can even do this from the command line using netcat, which can be useful in a script:

    ec2-host$ echo "config get cluster" | \
      nc mycache-2a.z2vq55.cfg.usw2.cache.amazonaws.com 11211 | \
      grep 'cache.amazonaws.com' | tr ' ' '\n' | cut -d'|' -f1
    mycache-2a.z2vq55.0001.usw2.cache.amazonaws.com
    mycache-2a.z2vq55.0002.usw2.cache.amazonaws.com

Using Auto Discovery, your Amazon EC2 application servers can locate Memcached nodes as they are added to a cache cluster. However, once your application has an open socket to a Memcached instance, it won't necessarily detect any changes to the cache node list that might happen later. To make this a complete solution, two more things are needed:

• The ability to scale cache nodes as needed
• The ability to trigger an application reconfiguration on the fly

Cluster Reconfiguration Events from Amazon SNS

Amazon ElastiCache publishes a number of notifications to Amazon SNS when a cluster change happens, such as a configuration change or replacement of a node. Because these notifications are sent through Amazon SNS, you can route them to multiple endpoints, including email, Amazon SQS, or other Amazon EC2 instances. For a complete list of Amazon SNS events that ElastiCache publishes, see the Event Notifications and Amazon SNS topic for Redis or Memcached in the Amazon ElastiCache User Guide. If you want your application to dynamically detect nodes
that are being added or removed, you can use these notifications as follows. Note that the following process is not required to deal with cache node failures. If a cache node fails and is replaced by ElastiCache, the DNS name will remain the same. Most client libraries should automatically reconnect once the cache node becomes available again.

The two most interesting events that ElastiCache publishes, at least for the purposes of scaling our cache, are ElastiCache:AddCacheNodeComplete and ElastiCache:RemoveCacheNodeComplete. These events are published when cache nodes are added or removed from the cluster. By listening for these events, your application can dynamically reconfigure itself to detect the new cache nodes. The basic process for using Amazon SNS with your application is as follows:

1. Create an Amazon SNS topic for your ElastiCache alerts, as described in Managing ElastiCache Amazon SNS Notifications in the Amazon ElastiCache User Guide for Redis or Memcached.
2. Modify your application code to subscribe to this Amazon SNS topic. All of your application instances will listen to the same topic. See the blog post Receiving Amazon SNS Messages in PHP for details and code examples.
3. When a cache node is added or removed, you will receive a corresponding Amazon SNS message. At that point, your application needs to be able to rerun the Auto Discovery code we discussed preceding to get the updated cache node list.
4. After your application has the new list of cache nodes, it also reconfigures its Memcached client accordingly.

Again, this workflow is not needed for cache node recovery, only if nodes are added or removed dynamically and you want your application to dynamically detect them. Otherwise, you can simply add the new cache nodes to your application's configuration and restart your application servers. To accomplish this with zero downtime to your app, you can leverage solutions such as zero downtime deploys with Elastic Beanstalk.
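The event-handling steps above can be sketched as a small dispatcher. The two ElastiCache event names are the real ones listed above, but `discover_nodes` and `rebuild_client` are hypothetical callbacks standing in for your own Auto Discovery and client setup code.

```ruby
# Sketch of an SNS-driven reconfiguration handler. The ElastiCache event
# names are real; the two lambdas are hypothetical hooks you would
# implement (for example, by rerunning the "config get cluster" code).
RECONFIGURE_EVENTS = %w[
  ElastiCache:AddCacheNodeComplete
  ElastiCache:RemoveCacheNodeComplete
].freeze

def handle_elasticache_event(event_name, discover_nodes:, rebuild_client:)
  # Other events (snapshots, failovers, and so on) need no client changes.
  return :ignored unless RECONFIGURE_EVENTS.include?(event_name)

  nodes = discover_nodes.call   # rerun Auto Discovery
  rebuild_client.call(nodes)    # point the Memcached client at the new list
  :reconfigured
end

handle_elasticache_event(
  'ElastiCache:AddCacheNodeComplete',
  discover_nodes: -> { ['node1:11211', 'node2:11211'] },  # stand-in
  rebuild_client: ->(nodes) { nodes }                     # stand-in
)
# => :reconfigured
```

In a real deployment this function would be invoked by whatever consumes your SNS subscription (an HTTP endpoint, an SQS poller, or a Lambda function), with the event name extracted from the SNS message body.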
Conclusion

Proper use of in-memory caching can result in an application that performs better and costs less at scale. Amazon ElastiCache greatly simplifies the process of deploying an in-memory cache in the cloud. By following the steps outlined in this paper, you can easily deploy an ElastiCache cluster running either Memcached or Redis on AWS, and then use the caching strategies we discussed to increase the performance and resiliency of your application. You can change the configuration of ElastiCache to add, remove, or resize nodes as your application's needs change over time, in order to get the most out of your in-memory data tier.

Contributors

Contributors to this document include:

• Marcelo França, Sr. Partner Solutions Architect, Amazon Web Services
• Nate Wiger, Amazon Web Services
• Rajan Timalsina, Cloud Support Engineer, Amazon Web Services

Document Revisions

    Date            Description
    March 30, 2021  Reviewed for technical accuracy
    July 2019       Corrected broken links, added links to libraries, and incorporated minor text updates throughout
    May 2015        First publication",General,consultant,Best Practices

Practicing_Continuous_Integration_and_Continuous_Delivery_on_AWS,This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/practicing-continuous-integration-continuous-delivery/welcome.html

Practicing Continuous Integration and Continuous Delivery on AWS
Accelerating Software Delivery with DevOps
First Published June 1, 2017
Updated October 27, 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a)
is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

The challenge of software delivery
What is continuous integration and continuous delivery/deployment?
Continuous integration
Continuous delivery and deployment
Continuous delivery is not continuous deployment
Benefits of continuous delivery
Implementing continuous integration and continuous delivery
A pathway to continuous integration/continuous delivery
Teams
Testing stages in continuous integration and continuous delivery
Building the pipeline
Pipeline integration with AWS CodeBuild
Pipeline integration with Jenkins
Deployment methods
All at once (in-place deployment)
Rolling deployment
Immutable and blue/green deployments
Database schema changes
Summary of best practices
Conclusion
Further reading
Contributors
Document revisions

Abstract

This paper explains the features and benefits of using continuous integration and continuous delivery (CI/CD), along with Amazon Web Services (AWS) tooling, in your
software development environment. Continuous integration and continuous delivery are best practices, and a vital part of a DevOps initiative.

The challenge of software delivery

Enterprises today face the challenges of rapidly changing competitive landscapes, evolving security requirements, and performance scalability. Enterprises must bridge the gap between operations stability and rapid feature development. Continuous integration and continuous delivery (CI/CD) are practices that enable rapid software changes while maintaining system stability and security.

Amazon realized early on that the business needs of delivering features for Amazon.com retail customers, Amazon subsidiaries, and Amazon Web Services (AWS) would require new and innovative ways of delivering software. At the scale of a company like Amazon, thousands of independent software teams must be able to work in parallel to deliver software quickly, securely, reliably, and with zero tolerance for outages. By learning how to deliver software at high velocity, Amazon and other forward-thinking organizations pioneered DevOps.

DevOps is a combination of cultural philosophies, practices, and tools that increase an organization's ability to deliver applications and services at high velocity. Using DevOps principles, organizations can evolve and improve products at a faster pace than organizations that use traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.

Some of these principles, such as two-pizza teams and microservices/service-oriented architecture (SOA), are out of the scope of this whitepaper. This whitepaper discusses the CI/CD
capability that Amazon has built and continuously improved. CI/CD is key to delivering software features rapidly and reliably. AWS now offers these CI/CD capabilities as a set of developer services: AWS CodeStar, AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, AWS CodeDeploy, and AWS CodeArtifact. Developers and IT operations professionals practicing DevOps can use these services to rapidly, safely, and securely deliver software. Together, they help you securely store and apply version control to your application's source code. You can use AWS CodeStar to rapidly orchestrate an end-to-end software release workflow using these services. For an existing environment, CodePipeline has the flexibility to integrate each service independently with your existing tools. These are highly available, easily integrated services that can be accessed through the AWS Management Console, AWS application programming interfaces (APIs), and AWS software development kits (SDKs), like any other AWS service.

What is continuous integration and continuous delivery/deployment?
This section discusses the practices of continuous integration and continuous delivery, and explains the difference between continuous delivery and continuous deployment.

Continuous integration

Continuous integration (CI) is a software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. CI most often refers to the build or integration stage of the software release process, and requires both an automation component (for example, a CI or build service) and a cultural component (for example, learning to integrate frequently). The key goals of CI are to find and address bugs more quickly, improve software quality, and reduce the time it takes to validate and release new software updates.

Continuous integration focuses on smaller commits and smaller code changes to integrate. A developer commits code at regular intervals, at minimum once a day. The developer pulls code from the code repository to ensure the code on the local host is merged before pushing to the build server. At this stage, the build server runs the various tests and either accepts or rejects the code commit.

The basic challenges of implementing CI include more frequent commits to the common codebase, maintaining a single source code repository, automating builds, and automating testing. Additional challenges include testing in similar environments to production, providing visibility of the process to the team, and allowing developers to easily obtain any version of the application.

Continuous delivery and deployment

Continuous delivery (CD) is a software development practice where code changes are automatically built, tested, and prepared for production release. It expands on continuous integration by deploying all code changes to a testing environment, a production environment, or both, after the build stage has been completed. Continuous delivery can be fully automated with a workflow process, or partially automated with manual
steps at critical points. When continuous delivery is properly implemented, developers always have a deployment-ready build artifact that has passed through a standardized test process.

With continuous deployment, revisions are deployed to a production environment automatically, without explicit approval from a developer, making the entire software release process automated. This, in turn, allows for a continuous customer feedback loop early in the product lifecycle.

Continuous delivery is not continuous deployment

One misconception about continuous delivery is that it means every change committed is applied to production immediately after passing automated tests. However, the point of continuous delivery is not to apply every change to production immediately, but to ensure that every change is ready to go to production. Before deploying a change to production, you can implement a decision process to ensure that the production deployment is authorized and audited. This decision can be made by a person, and then executed by the tooling.

Using continuous delivery, the decision to go live becomes a business decision, not a technical one. The technical validation happens on every commit.

Rolling out a change to production is not a disruptive event. Deployment doesn't require the technical team to stop working on the next set of changes, and it doesn't need a project plan, handover documentation, or a maintenance window. Deployment becomes a repeatable process that has been carried out and proven multiple times in testing environments.

Benefits of continuous delivery

CD provides numerous benefits for your software development team, including automating the process, improving developer productivity, improving code quality, and
delivering updates to your customers faster.

Automate the software release process

CD provides a method for your team to check in code that is automatically built, tested, and prepared for release to production, so that your software delivery is efficient, resilient, rapid, and secure.

Improve developer productivity

CD practices help your team's productivity by freeing developers from manual tasks, untangling complex dependencies, and returning focus to delivering new features in software. Instead of integrating their code with other parts of the business and spending cycles on how to deploy this code to a platform, developers can focus on coding logic that delivers the features you need.

Improve code quality

CD can help you discover and address bugs early in the delivery process, before they grow into larger problems later. Your team can easily perform additional types of code tests, because the entire process has been automated. With the discipline of more testing more frequently, teams can iterate faster, with immediate feedback on the impact of changes. This enables teams to drive quality code with a high assurance of stability and security. Developers will know through immediate feedback whether the new code works and whether any breaking changes or bugs were introduced. Mistakes caught early on in the development process are the easiest to fix.

Deliver updates faster

CD helps your team deliver updates to customers quickly and frequently. When CI/CD is implemented, the velocity of the entire team, including the release of features and bug fixes, is increased. Enterprises can respond faster to market changes, security challenges, customer needs, and cost pressures. For example, if a new security feature is required, your
team can implement CI/CD with automated testing to introduce the fix quickly and reliably to production systems with high confidence. What used to take weeks and months can now be done in days or even hours.

Implementing continuous integration and continuous delivery

This section discusses the ways in which you can begin to implement a CI/CD model in your organization. This whitepaper doesn't discuss how an organization with a mature DevOps and cloud transformation model builds and uses a CI/CD pipeline. To help you on your DevOps journey, AWS has a number of certified DevOps Partners who can provide resources and tooling. For more information on preparing for a move to the AWS Cloud, refer to Building a Cloud Operating Model.

A pathway to continuous integration/continuous delivery

CI/CD can be pictured as a pipeline (refer to the following figure), where new code is submitted on one end, tested over a series of stages (source, build, staging, and production), and then published as production-ready code. If your organization is new to CI/CD, it can approach this pipeline in an iterative fashion. This means that you should start small and iterate at each stage, so that you can understand and develop your code in a way that will help your organization grow.

CI/CD pipeline

Each stage of the CI/CD pipeline is structured as a logical unit in the delivery process. In addition, each stage acts as a gate that vets a certain aspect of the code. As the code progresses through the pipeline, the assumption is that the quality of the code is higher in the later stages, because more aspects of it continue to be verified. Problems uncovered in an early stage stop the code from progressing through the pipeline. Results
from the tests are immediately sent to the team, and all further builds and releases are stopped if the software does not pass the stage.

These stages are suggestions. You can adapt the stages based on your business needs. Some stages can be repeated for multiple types of testing, security, and performance. Depending on the complexity of your project and the structure of your teams, some stages can be repeated several times at different levels. For example, the end product of one team can become a dependency in the project of the next team. This means that the first team's end product is subsequently staged as an artifact in the next team's project.

The presence of a CI/CD pipeline will have a large impact on maturing the capabilities of your organization. The organization should start with small steps, and not try to build a fully mature pipeline, with multiple environments, many testing phases, and automation in all stages, at the start. Keep in mind that even organizations that have highly mature CI/CD environments still need to continuously improve their pipelines.

Building a CI/CD-enabled organization is a journey, and there are many destinations along the way. The next section discusses a possible pathway that your organization could take, starting with continuous integration through the levels of continuous delivery.

Continuous integration

Continuous integration: source and build

The first phase in the CI/CD journey is to develop maturity in continuous integration. You should make sure that all of the developers regularly commit their code to a central repository (such as one hosted in CodeCommit or GitHub) and merge all changes to a release branch for the application. No developer should be holding code in isolation.
If a feature branch is needed for a certain period of time, it should be kept up to date by merging from upstream as often as possible. Frequent commits and merges with complete units of work are recommended for the team to develop discipline, and are encouraged by the process. A developer who merges code early and often will likely have fewer integration issues down the road.

You should also encourage developers to create unit tests as early as possible for their applications, and to run these tests before pushing the code to the central repository. Errors caught early in the software development process are the cheapest and easiest to fix.

When the code is pushed to a branch in a source code repository, a workflow engine monitoring that branch will send a command to a builder tool to build the code and run the unit tests in a controlled environment. The build process should be sized appropriately to handle all activities, including pushes and tests, that might happen during the commit stage, for fast feedback. Other quality checks, such as unit test coverage, style checks, and static analysis, can happen at this stage as well. Finally, the builder tool creates one or more binary builds and other artifacts, like images, stylesheets, and documents, for the application.

Continuous delivery: creating a staging environment

Continuous delivery: staging

Continuous delivery (CD) is the next phase, and entails deploying the application code in a staging environment, which is a replica of the production stack, and running more functional tests. The staging environment could be a static environment premade for testing, or you could provision and configure a dynamic environment with committed infrastructure and configuration code
for testing and deploying the application code.

Continuous delivery: creating a production environment

Continuous delivery: production

In the deployment/delivery pipeline sequence, after the staging environment is the production environment, which is also built using infrastructure as code (IaC).

Continuous deployment

The final phase in the CI/CD deployment pipeline is continuous deployment, which may include full automation of the entire software release process, including deployment to the production environment. In a fully mature CI/CD environment, the path to the production environment is fully automated, which allows code to be deployed with high confidence.

Maturity and beyond

As your organization matures, it will continue to develop the CI/CD model to include more of the following improvements:

• More staging environments for specific performance, compliance, security, and user interface (UI) tests
• Unit tests of infrastructure and configuration code, along with the application code
• Integration with other systems and processes, such as code review, issue tracking, and event notification
• Integration with database schema migration (if applicable)
• Additional steps for auditing and business approval

Even the most mature organizations that have complex multi-environment CI/CD pipelines continue to look for improvements. DevOps is a journey, not a destination. Feedback about the pipeline is continuously collected, and improvements in speed, scale, security, and reliability are achieved as a collaboration between the different parts of the development teams.

Teams

AWS recommends organizing three developer teams for implementing a CI/CD environment: an application team, an infrastructure team, and a tools team (refer to the following figure). This organization represents a set of best practices that have been developed and applied in fast-moving startups, large enterprise organizations, and in Amazon itself. The teams should be no larger than groups that two pizzas can feed, or about 10 to 12 people. This follows the communication rule that meaningful conversations hit limits as group sizes increase and lines of communication multiply.

Application, infrastructure, and tools teams

Application team

The application team creates the application. Application developers own the backlog, stories, and unit tests, and they develop features based on a specified application target. This team's organizational goal is to minimize the time these developers spend on non-core application tasks. In addition to having functional programming skills in the application language, the application team should have platform skills and an understanding of system configuration. This will enable them to focus solely on developing features and hardening the application.

Infrastructure team

The infrastructure team writes the code that both creates and configures the infrastructure needed to run the application. This team might use native AWS tools, such as AWS CloudFormation, or generic tools, such as Chef, Puppet, or Ansible. The infrastructure team is responsible for specifying what resources are needed, and it works closely with the application team. The
infrastructure team might consist of only one or two people for a small application. The team should have skills in infrastructure provisioning methods, such as AWS CloudFormation or HashiCorp Terraform. The team should also develop configuration automation skills with tools such as Chef, Ansible, Puppet, or Salt.

Tools team

The tools team builds and manages the CI/CD pipeline. They are responsible for the infrastructure and tools that make up the pipeline. They are not part of the two-pizza team; however, they create a tool that is used by the application and infrastructure teams in the organization. The organization needs to continuously mature its tools team, so that the tools team stays one step ahead of the maturing application and infrastructure teams. The tools team must be skilled in building and integrating all parts of the CI/CD pipeline. This includes building source control repositories, workflow engines, build environments, testing frameworks, and artifact repositories. This team may choose to implement software such as AWS CodeStar, AWS CodePipeline, AWS CodeCommit, AWS CodeDeploy, AWS CodeBuild, and AWS CodeArtifact, along with Jenkins, GitHub, Artifactory, TeamCity, and other similar tools. Some organizations might call this a DevOps team, but AWS discourages this, and instead encourages thinking of DevOps as the sum of the people, processes, and tools in software delivery.

Testing stages in continuous integration and continuous delivery

The three CI/CD teams should incorporate testing into the software development lifecycle at the different stages of the CI/CD pipeline. Overall, testing should start as early as possible. The testing pyramid that follows is a concept provided by Mike Cohn in the book Succeeding with Agile. It shows the various software tests in relation to their cost and the speed at which they run.

CI/CD testing pyramid

Unit tests are at the bottom of the pyramid. They are both the fastest to run and the least expensive. Therefore, unit tests should make up the bulk of your testing strategy; a good rule of thumb is about 70 percent. Unit tests should have near-complete code coverage, because bugs caught in this phase can be fixed quickly and cheaply.

Service, component, and integration tests are above unit tests on the pyramid. These tests require detailed environments, and therefore are more costly in infrastructure requirements and slower to run. Performance and compliance tests are the next level. They require production-quality environments and are more expensive yet. UI and user acceptance tests are at the top of the pyramid, and require production-quality environments as well.

All of these tests are part of a complete strategy to assure high-quality software. However, for speed of development, the emphasis is on the number of tests and the coverage in the bottom half of the pyramid. The following sections discuss the CI/CD stages.

Setting up the source

At the beginning of the project, it's essential to set up a source where you can store your raw code and configuration and schema changes. In the source stage, choose a source code repository, such as one hosted in GitHub or AWS CodeCommit.

Setting up and running builds

Build automation is essential to the CI process. When setting up build automation, the first task is to choose the right build tool. There are many build tools, such as:

• Ant, Maven, and Gradle for Java
• Make for C/C++
• Grunt for JavaScript
• Rake for Ruby

The build tool that will work best
for you depends on the programming language of your project and the skill set of your team. After you choose the build tool, all the dependencies need to be clearly defined in the build scripts, along with the build steps. It's also a best practice to version the final build artifacts, which makes it easier to deploy and to keep track of issues.

Building

In the build stage, the build tools will take as input any change to the source code repository, build the software, and run the following types of tests:

Unit testing – Tests a specific section of code to ensure that the code does what it is expected to do. The unit testing is performed by software developers during the development phase. At this stage, static code analysis, data flow analysis, code coverage, and other software verification processes can be applied.

Static code analysis – This test is performed without actually executing the application, after the build and unit testing. This analysis can help to find coding errors and security holes, and it can also ensure conformance to coding guidelines.

Staging

In the staging phase, full environments are created that mirror the eventual production environment. The following tests are performed:

Integration testing – Verifies the interfaces between components against the software design. Integration testing is an iterative process and facilitates building robust interfaces and system integrity.

Component testing – Tests message passing between various components and their outcomes. A key goal of this testing could be idempotency in component testing. Tests can include extremely large data volumes, or edge situations and abnormal inputs.

System testing – Tests the system end-to-end and verifies if the software
satisfies the business requirement. This might include testing the user interface (UI), API, backend logic, and end state.

Performance testing – Determines the responsiveness and stability of a system as it performs under a particular workload. Performance testing is also used to investigate, measure, validate, or verify other quality attributes of the system, such as scalability, reliability, and resource usage. Types of performance tests might include load tests, stress tests, and spike tests. Performance tests are used for benchmarking against predefined criteria.

Compliance testing – Checks whether the code change complies with the requirements of a non-functional specification and/or regulations. It determines if you are implementing and meeting the defined standards.

User acceptance testing – Validates the end-to-end business flow. This testing is executed by an end user in a staging environment, and confirms whether the system meets the requirements of the requirement specification. Typically, customers employ alpha and beta testing methodologies at this stage.

Production

Finally, after passing the previous tests, the staging phase is repeated in a production environment. In this phase, a final canary test can be completed by deploying the new code on only a small subset of servers, or even one server, or one AWS Region, before deploying code to the entire production environment. Specifics on how to safely deploy to production are covered in the Deployment methods section.

The next section discusses building the pipeline to incorporate these stages and tests.

Building the pipeline

This section discusses building the pipeline. Start by establishing a pipeline with just the components needed for CI, and then transition later to a continuous delivery pipeline with more components and stages. This section also discusses how you can consider using AWS Lambda functions and manual approvals, and, for large projects, plan for multiple teams, branches, and AWS Regions.

Starting with a minimum viable pipeline for continuous integration

Your organization's journey toward continuous delivery begins with a minimum viable pipeline (MVP). As discussed in Implementing continuous integration and continuous delivery, teams can start with a very simple process, such as implementing a pipeline that performs a code style check or a single unit test, without deployment.

A key component is a continuous delivery orchestration tool. To help you build this pipeline, Amazon developed AWS CodeStar. AWS CodeStar uses AWS CodePipeline, AWS CodeBuild, AWS CodeCommit, and AWS CodeDeploy with an integrated setup process, tools, templates, and dashboard. AWS CodeStar provides everything you need to quickly develop, build, and deploy applications on AWS. This allows you to start releasing code faster. Customers who are already familiar with the AWS Management Console and seek a higher level of control can manually configure their developer tools of choice, and can provision individual AWS services as needed.

AWS CodeStar setup page

AWS CodePipeline is a CI/CD service that can be used through AWS CodeStar or through the AWS Management Console for fast and reliable application and infrastructure updates. AWS CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define. This enables you to rapidly and reliably deliver features and updates. You can easily build out an end-to-end solution by using our pre-built
plugins for popular third-party services like GitHub, or by integrating your own custom plugins into any stage of your release process. With AWS CodePipeline, you only pay for what you use. There are no upfront fees or long-term commitments.

The steps of AWS CodeStar and AWS CodePipeline map directly to the source, build, staging, and production CI/CD stages. While continuous delivery is desirable, you could start out with a simple two-step pipeline that checks the source repository and performs a build action:

AWS CodeStar dashboard

AWS CodePipeline: source and build stages

For AWS CodePipeline, the source stage can accept inputs from GitHub, AWS CodeCommit, and Amazon Simple Storage Service (Amazon S3). Automating the build process is a critical first step for implementing continuous delivery and moving toward continuous deployment. Eliminating human involvement in producing build artifacts removes the burden from your team, minimizes errors introduced by manual packaging, and allows you to start packaging consumable artifacts more often.

AWS CodePipeline works seamlessly with AWS CodeBuild, a fully managed build service, to make it easier to set up a build step within your pipeline that packages your code and runs unit tests. With AWS CodeBuild, you don't need to provision, manage, or scale your own build servers. AWS CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. AWS CodePipeline also integrates with build servers such as Jenkins, Solano CI, and TeamCity.

For example, in the following build stage, three actions (unit testing, code style checks, and code metrics collection) run in parallel. Using AWS CodeBuild, these steps can be added as new projects
without any further effort in building or installing build servers to handle the load.

CodePipeline: build functionality

The source and build stages shown in the figure AWS CodePipeline: source and build stages, along with supporting processes and automation, support your team's transition toward continuous integration. At this level of maturity, developers need to regularly pay attention to build and test results. They need to grow and maintain a healthy unit test base as well. This, in turn, bolsters the entire team's confidence in the CI/CD pipeline and furthers its adoption.

AWS CodePipeline stages

Continuous delivery pipeline

After the continuous integration pipeline has been implemented and supporting processes have been established, your teams can start transitioning toward the continuous delivery pipeline. This transition requires teams to automate both building and deploying applications. A continuous delivery pipeline is characterized by the presence of staging and production steps, where the production step is performed after a manual approval.

In the same manner the continuous integration pipeline was built, your teams can gradually start building a
continuous delivery pipeline by writing their deployment scripts. Depending on the needs of an application, some of the deployment steps can be abstracted by existing AWS services. For example, AWS CodePipeline directly integrates with AWS CodeDeploy, a service that automates code deployments to Amazon EC2 instances and instances running on premises; with AWS OpsWorks, a configuration management service that helps you operate applications using Chef; and with AWS Elastic Beanstalk, a service for deploying and scaling web applications and services. AWS has detailed documentation on how to implement and integrate AWS CodeDeploy with your infrastructure and pipeline.

After your team successfully automates the deployment of the application, deployment stages can be expanded with various tests. For example, you can add other out-of-the-box integrations with services like Ghost Inspector, Runscope, and others, as shown in the following figure.

AWS CodePipeline: code tests in deployment stages

Adding Lambda actions

AWS CodeStar and AWS CodePipeline support integration with AWS Lambda. This integration enables the implementation of a broad set of tasks, such as creating custom resources in your environment, integrating with third-party systems (such as Slack), and performing checks on your newly deployed environment. Lambda functions can be used in CI/CD pipelines to do the following tasks:

• Roll out changes to your environment by applying or updating an AWS CloudFormation template
• Create resources on demand in one stage of a pipeline using AWS CloudFormation, and delete them in another stage
• Deploy application versions with zero downtime in AWS Elastic Beanstalk with a Lambda function that swaps Canonical Name record (CNAME)
values
• Deploy to Amazon EC2 Container Service (ECS) Docker instances
• Back up resources before building or deploying by creating an AMI snapshot
• Add integration with third-party products to your pipeline, such as posting messages to an Internet Relay Chat (IRC) client

Manual approvals

Add an approval action to a stage in a pipeline at the point where you want the pipeline processing to stop, so that someone with the required AWS Identity and Access Management (IAM) permissions can approve or reject the action.

If the action is approved, the pipeline processing resumes. If the action is rejected, or if no one approves or rejects the action within seven days of the pipeline reaching the action and stopping, the result is the same as an action failing, and the pipeline processing does not continue.

AWS CodeDeploy: manual approvals

Deploying infrastructure code changes in a CI/CD pipeline

AWS CodePipeline lets you select AWS CloudFormation as a deployment action in any stage of your pipeline. You can then choose the specific action you would like AWS CloudFormation to perform, such as creating or deleting stacks, and creating or executing change sets. A stack is an AWS CloudFormation concept that represents a group of related AWS resources. While there are many ways of provisioning infrastructure as code, AWS CloudFormation is a comprehensive tool recommended by AWS as a scalable, complete solution that can describe the most comprehensive set of AWS resources as code. AWS recommends using AWS CloudFormation in an AWS CodePipeline project to track infrastructure changes and tests.

CI/CD for serverless applications

You can also use AWS CodeStar, AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation to build CI/CD pipelines for serverless applications. Serverless applications integrate managed services, such as Amazon Cognito, Amazon S3, and Amazon DynamoDB, with event-driven compute services such as AWS Lambda, to deploy applications in a manner that doesn't require managing servers.

If you are a serverless application developer, you can use the combination of AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation to automate the building, testing, and deployment of serverless applications that are expressed in templates built with the AWS Serverless Application Model (SAM). For more information, refer to the AWS Lambda documentation for Automating Deployment of Lambda-based Applications.

You can also create secure CI/CD pipelines that follow your organization's best practices with AWS Serverless Application Model Pipelines (AWS SAM Pipelines). AWS SAM Pipelines are a feature of the AWS SAM CLI that gives you access to the benefits of CI/CD in minutes, such as accelerating deployment frequency, shortening lead time for changes, and reducing deployment errors. AWS SAM Pipelines come with a set of default pipeline templates for AWS CodeBuild/CodePipeline that follow AWS deployment best practices. For more information, and to view the tutorial, refer to the blog post Introducing AWS SAM Pipelines.

Pipelines for multiple teams, branches, and AWS Regions

For a large project, it's not uncommon for multiple project teams to work on different components. If multiple teams use a single code repository, it can be mapped so that each team has its own branch. There should also be an integration or release branch for the final merge of the project. If a service-oriented or microservice architecture is used, each team could have its own code repository. In
the first scenario, if a single pipeline is used, it's possible that one team could affect the other teams' progress by blocking the pipeline. AWS recommends that you create specific pipelines for team branches and another release pipeline for the final product delivery.

Pipeline integration with AWS CodeBuild

AWS CodeBuild is designed to enable your organization to build a highly available build process with almost unlimited scale. AWS CodeBuild provides quick-start environments for a number of popular languages, plus the ability to run any Docker container that you specify.

With the advantages of tight integration with AWS CodeCommit, AWS CodePipeline, and AWS CodeDeploy, as well as Git and CodePipeline Lambda actions, the CodeBuild tool is highly flexible. Software can be built through the inclusion of a buildspec.yml file that identifies each of the build steps, including pre- and post-build actions, or through actions specified in the CodeBuild tool. You can view the detailed history of each build using the CodeBuild dashboard. Events are stored as Amazon CloudWatch Logs log files.

CloudWatch Logs log files in AWS CodeBuild

Pipeline integration with Jenkins

You can use the Jenkins build tool to create delivery pipelines. These pipelines use standard jobs that define steps for implementing continuous delivery stages. However, this approach might not be optimal for larger projects, because the current state of the pipeline doesn't persist between Jenkins restarts, implementing manual approval is not straightforward, and tracking the state of a complex pipeline can be complicated.

Instead, AWS recommends that you implement continuous delivery with Jenkins by using the Pipeline plugin. This plugin allows complex
workflows to be described using Groovy like domain specific language and can be us ed to orchestrate This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Integration and Continuous Delivery on AWS 24 complex pipelines The AWS Code Pipeline plugin’s functionality can be enhanced by the use of satellite plugins such as the Pipeline Stage View Plugin which visualizes the current progress of stages defined in a pipeline or Pipeline Multibranch Plugin which groups builds from different branches AWS recommend s that you store your pipeline configuration in Jenkinsfile and have it checked into a source code repository This allows for tracking changes to pipeline code and becomes even more important when working with the Pipeline Multibranch Plugin AWS also reco mmend s that you divide your pipeline into stages This logically groups the pipeline steps and also enables the Pipeline Stage View Plugin to visualize the current state of the pipeline The following figure shows a sample Jenkins pipeline with four defin ed stages visualized by the Pipeline Stage View Plugin Defined stages of Jenkins pipeline visualized by the Pipeline Stage View Plugin Deployment methods You can consider multiple deployment strategies and variations for rolling out new versions of so ftware in a continuous delivery process This section discusses the most common deployment methods: all at once (deploy in place) rolling immutable and blue/green AWS indicates which of these methods are supported by AWS CodeDeploy and AWS Elastic Bean stalk The following table summarizes the characteristics of each deployment method This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing 
Table 1: Characteristics of deployment methods

Method | Impact of failed deployment | Zero downtime | No DNS change | Rollback process | Code deployed to
Deploy in place | Downtime | No | Yes | Redeploy | Existing instances
Rolling | Single batch out of service; any successful batches prior to failure run the new application version (†) | Yes | Yes | Redeploy | Existing instances
Rolling with additional batch (Beanstalk) | Minimal if first batch fails; otherwise similar to rolling (†) | Yes | Yes | Redeploy | New and existing instances
Immutable | Minimal | Yes | Yes | Redeploy | New instances
Traffic splitting | Minimal | Yes | Yes | Reroute traffic and terminate new instances | New instances
Blue/green | Minimal | Yes | No | Switch back to old environment | New instances

† Deploy time varies depending on batch size.

All at once (in-place deployment)

All at once (in-place deployment) is a method you can use to roll out new application code to an existing fleet of servers. This method replaces all the code in one deployment action. It requires downtime, because all servers in the fleet are updated at once. There is no need to update existing DNS records. In case of a failed deployment, the only way to restore operations is to redeploy the code on all servers again. In AWS Elastic Beanstalk this deployment is called all at once, and it is available for single-instance and load-balanced applications. In AWS CodeDeploy this deployment method is called in-place deployment, with a deployment configuration of AllAtOnce.

Rolling deployment

With rolling deployment, the fleet is divided into portions so that all of the fleet isn't upgraded at once. During the deployment process, two software versions, new and old, are running on the same fleet. This method allows a zero
downtime update. If the deployment fails, only the updated portion of the fleet will be affected.

A variation of the rolling deployment method, called canary release, involves deployment of the new software version on a very small percentage of servers at first. This way you can observe how the software behaves in production on a few servers, while minimizing the impact of breaking changes. If there is an elevated rate of errors from a canary deployment, the software is rolled back. Otherwise, the percentage of servers with the new version is gradually increased.

AWS Elastic Beanstalk follows the rolling deployment pattern with two deployment options, rolling and rolling with additional batch. These options allow the application to first scale up before taking servers out of service, preserving full capability during the deployment. AWS CodeDeploy accomplishes this pattern as a variation of an in-place deployment, with deployment configurations such as OneAtATime and HalfAtATime.

Immutable and blue/green deployments

The immutable pattern specifies a deployment of application code by starting an entirely new set of servers with a new configuration or version of application code. This pattern leverages the cloud capability that new server resources are created with simple API calls. The blue/green deployment strategy is a type of immutable deployment that also requires creation of another environment. Once the new environment is up and has passed all tests, traffic is shifted to this new deployment. Crucially, the old environment, that is, the "blue" environment, is kept idle in case a rollback is needed. AWS Elastic Beanstalk supports immutable and blue/green deployment patterns; AWS CodeDeploy also supports the blue/green pattern.
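The rolling and canary mechanics described above come down to simple batch arithmetic. The following sketch is an illustration only, not AWS tooling; the fleet size, ramp percentages, and error threshold are all assumed values:

```python
def plan_canary_rollout(fleet_size, ramp_percents=(5, 25, 50, 100)):
    """Return how many servers should run the new version at each
    canary step, ending with the full fleet on the new version."""
    plan = []
    for pct in ramp_percents:
        count = max(1, fleet_size * pct // 100)  # always at least one canary server
        plan.append(count)
    return plan

def next_action(error_rate, threshold=0.05):
    """Canary gate: roll back on an elevated error rate, else keep ramping."""
    return "rollback" if error_rate > threshold else "continue"

print(plan_canary_rollout(40))       # [2, 10, 20, 40]
print(next_action(error_rate=0.12))  # rollback
```

In a real pipeline, the error-rate input would come from production monitoring of the canary servers, and the ramp schedule would be tuned to the workload and batch-size trade-offs shown in Table 1.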
For more information on how AWS services accomplish these immutable patterns, refer to the Blue/Green Deployments on AWS whitepaper.

Database schema changes

It's common for modern software to have a database layer. Typically a relational database is used, which stores both data and the structure of the data. It's often necessary to modify the database in the continuous delivery process, and handling changes in a relational database requires special consideration: it offers different challenges than those present when deploying application binaries. Usually, when you upgrade an application binary, you stop the application, upgrade it, and then start it again. You don't really bother about the application state, which is handled outside of the application. When upgrading databases, you do need to consider state, because a database contains much state but comparatively little logic and structure.

The database schema before and after a change is applied should be considered different versions of the database. You can use tools such as Liquibase and Flyway to manage the versions. In general, those tools employ some variant of the following methods:

• Add a table to the database where the database version is stored.
• Keep track of database change commands and bunch them together in versioned change sets. In the case of Liquibase, these changes are stored in XML files. Flyway employs a slightly different method, where the change sets are handled as separate SQL files, or occasionally as separate Java classes for more complex transitions.
• When Liquibase is asked to upgrade a database, it looks at the metadata table and determines which change sets to run in order to bring the database up to date with the latest version.
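The metadata-table approach that these tools share can be sketched in a few lines. This is a simplified illustration of the general idea, not Liquibase's or Flyway's actual implementation, and the change sets are invented examples:

```python
# A "metadata table" records which versioned change sets have already been
# applied; an upgrade runs only the pending ones, in version order.
applied_versions = {1, 2}  # stands in for the version table in the database

change_sets = {
    1: "CREATE TABLE customer (id INT PRIMARY KEY)",
    2: "ALTER TABLE customer ADD COLUMN name VARCHAR(100)",
    3: "CREATE INDEX idx_customer_name ON customer (name)",
}

def pending_change_sets(applied, available):
    """Determine which change sets must run to bring the schema up to date."""
    return [v for v in sorted(available) if v not in applied]

def upgrade(applied, available):
    """Apply pending change sets in order, recording each new version."""
    executed = []
    for version in pending_change_sets(applied, available):
        executed.append(available[version])  # a real tool would execute this SQL
        applied.add(version)                 # record the new schema version
    return executed

print(upgrade(applied_versions, change_sets))
# Only change set 3 runs; the database is then recorded as up to date.
```

Running the same upgrade twice is a no-op, which is what makes this pattern safe to embed in a continuous delivery pipeline.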
Summary of best practices

The following are some best-practice dos and don'ts for CI/CD.

Do:
• Treat your infrastructure as code:
  o Use version control for your infrastructure code.
  o Make use of bug tracking/ticketing systems.
  o Have peers review changes before applying them.
  o Establish infrastructure code patterns/designs.
  o Test infrastructure changes like code changes.
• Put developers into integrated teams of no more than 12 self-sustaining members.
• Have all developers commit code to the main trunk frequently, with no long-running feature branches.
• Consistently adopt a build system such as Maven or Gradle across your organization, and standardize builds.
• Have developers build unit tests toward 100% coverage of the code base.
• Ensure that unit tests are 70% of the overall testing in duration, number, and scope.
• Ensure that unit tests are up to date and not neglected; unit test failures should be fixed, not bypassed.
• Treat your continuous delivery configuration as code.
• Establish role-based security controls (that is, who can do what and when):
  o Monitor/track every resource possible.
  o Alert on service availability and response times.
  o Capture, learn, and improve.
  o Share access with everyone on the team.
  o Plan metrics and monitoring into the lifecycle.
• Keep and track standard metrics:
  o Number of builds.
  o Number of deployments.
  o Average time for changes to reach production.
  o Average time from first pipeline stage to each stage.
  o Number of changes reaching production.
  o Average build time.
• Use multiple distinct pipelines for each branch and team.

Don't:
• Have long-running branches with large, complicated merges.
• Have manual tests.
• Have manual approval processes, gates, code reviews, and security reviews.
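As a small illustration of the "keep and track standard metrics" item above, average time for changes to reach production can be derived from commit and deployment timestamps. The timestamps below are fabricated sample data:

```python
from datetime import datetime

# Fabricated (commit_time, production_deploy_time) pairs for three changes.
changes = [
    (datetime(2021, 10, 1, 9, 0),  datetime(2021, 10, 1, 11, 0)),
    (datetime(2021, 10, 2, 14, 0), datetime(2021, 10, 2, 15, 30)),
    (datetime(2021, 10, 3, 8, 0),  datetime(2021, 10, 3, 8, 30)),
]

def average_lead_time_minutes(pairs):
    """Average time for changes to reach production, in minutes."""
    total = sum((deployed - committed).total_seconds()
                for committed, deployed in pairs)
    return total / len(pairs) / 60

print(average_lead_time_minutes(changes))  # 80.0
```

In practice these timestamps would be pulled from the pipeline's own history (for example, build and deployment events) rather than hard-coded.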
Conclusion

Continuous integration and continuous delivery provide an ideal scenario for your organization's application teams. Your developers simply push code to a repository. This code is integrated, tested, deployed, tested again, merged with infrastructure, taken through security and quality reviews, and made ready to deploy with extremely high confidence. When CI/CD is used, code quality is improved and software updates are delivered quickly and with high confidence that there will be no breaking changes. The impact of any release can be correlated with data from production and operations, and that data can be used for planning the next cycle, too: a vital DevOps practice in your organization's cloud transformation.

Further reading

For more information on the topics discussed in this whitepaper, refer to the following AWS whitepapers:
• Overview of Deployment Options on AWS
• Blue/Green Deployments on AWS
• Setting up a CI/CD pipeline by integrating Jenkins with AWS CodeBuild and AWS CodeDeploy
• Implementing Microservices on AWS
• Docker on AWS: Running Containers in the Cloud

Contributors

The following individuals and organizations contributed to this document:
• Amrish Thakkar, Principal Solutions Architect, AWS
• David Stacy, Senior Consultant DevOps, AWS Professional Services
• Asif Khan, Solutions Architect, AWS
• Xiang Shen, Senior Solutions Architect, AWS

Document revisions

October 27, 2021: Updated content
June 1, 2017: First publication
,General,consultant,Best Practices

Provisioning_Oracle_Wallets_and_Accessing_SSLTLSBased_Endpoints_on_Amazon_RDS_for_Oracle,"Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle

February 2018

Copyright 2018, Amazon.com, Inc. or its
affiliates. All Rights Reserved.

Notices

Licensed under the Apache License, Version 2.0 (the ""License""). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the ""license"" file accompanying this file. This file is distributed on an ""AS IS"" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Introduction
Creating and Uploading Custom Oracle Wallets
Creating and Uploading a Wallet with an Amazon S3 Certificate
Uploading a Customized Wallet Bundle
Examples of Using Oracle Wallets to Establish SSL/TLS Outbound Connections
Using UTL_HTTP over an SSL/TLS Endpoint
Establishing Database Links between RDS Oracle DB Instances over an SSL/TLS Endpoint
Sending Emails Using UTL_SMTP and Amazon Simple Email Service (Amazon SES)
Downloading a File from Amazon S3 to an RDS Oracle DB Instance
Uploading a File from RDS Oracle DB Instance to Amazon S3
Conclusion
Appendix
Sample PL/SQL Procedure to Download Artifacts from Amazon S3
Sample PL/SQL Procedure to Send an Email
Through Amazon SES

Abstract

This paper explains how to extend outbound network access on your Amazon Relational Database Service (Amazon RDS) for Oracle database instances to connect securely to remote SSL/TLS based endpoints. SSL/TLS endpoints require one or more valid Certificate Authority (CA) certificates, which can be bundled within an Oracle wallet. By uploading Oracle wallets to your Amazon RDS for Oracle DB instances, certain outbound network calls can be made aware of the uploaded Oracle wallets. This enables outbound network traffic to access any SSL/TLS based endpoint that can be validated using the CA certificate bundle within the Oracle wallets.

Introduction

Amazon Relational Database Service (Amazon RDS) is a managed relational database service that provides you with six familiar database engines to choose from, including Amazon Aurora, MySQL, MariaDB, Oracle, Microsoft SQL Server, and PostgreSQL [1]. You can use your existing database code, applications, and tools with Amazon RDS, and RDS will handle routine database tasks such as provisioning, patching, backup, recovery, failure detection, and repair. With Amazon RDS you can use replication to enhance availability and reliability for production workloads. Using the Multi-AZ deployment option, you can run mission-critical workloads with high availability and built-in automated failover from your primary database to a synchronously replicated secondary database.

Amazon RDS for Oracle provides scalability, performance monitoring, and backup and restore support. Multi-AZ deployment for Oracle DB instances simplifies creating a highly available architecture, because a Multi-AZ deployment contains built-in support for automated failover from your primary database to a synchronously replicated secondary database in a different Availability Zone. Amazon RDS for Oracle provides the latest version of Oracle
Database with the latest Patch Set Updates (PSUs). Amazon RDS manages the database upgrade process on your schedule, eliminating manual database upgrade and patching tasks.

Amazon Virtual Private Cloud (Amazon VPC) is a virtual network dedicated to your AWS account [2]. It is logically isolated from other virtual networks in the AWS Cloud. You can launch AWS resources, such as Amazon RDS DB instances or Amazon Elastic Compute Cloud (Amazon EC2) instances, into your VPC [3]. When you create a VPC, you specify IP address ranges, subnets, routing tables, and network gateways to your own data center and to the internet. You can move RDS DB instances that are not already in a VPC into an existing VPC [4].

Outbound network access is only supported for Oracle DB instances in a VPC [5]. Using outbound network access, you can use PL/SQL code inside the database to initiate connections to servers elsewhere on the network. This lets you use utilities such as UTL_HTTP, UTL_TCP, and UTL_SMTP to connect your DB instance to remote endpoints. For example, you can use UTL_MAIL or UTL_SMTP to send emails, or UTL_HTTP to communicate with external web servers. By default, an Amazon DNS server provides name resolution for outbound traffic from the instances in your VPC. Should you choose to resolve private domain names for outbound traffic, you can configure a custom DNS server [6].

Always take care when enabling outbound networking, as attackers can use it as a vector to remove data from your systems. In addition to other security best practices, keep the following in mind:
• Carefully configure VPC security groups to only allow ingress from, and egress to, known networks.
• Use in-database network access control lists (ACLs) to allow only trusted users to initiate connections out of the database.
• Always upgrade to the latest release of Amazon RDS for Oracle to ensure you have the latest Oracle PSU and
security fixes.

To protect the integrity and content of your data, you should use Transport Layer Security (TLS, also referred to as Secure Sockets Layer, or SSL) to provide encryption and server verification. By default, outbound network access supports external traffic only over and to non-TLS/SSL mediums. For TLS/SSL based traffic, you can use Oracle wallets to store Certificate Authority (CA) certificates, which enable the verification of remote entities. You can make utilities that use outbound network access traffic (such as UTL_HTTP and UTL_SMTP) aware of these wallets. This enables outbound communication from your DB instance to remote endpoints over SSL.

In this paper we discuss how to create Oracle wallets and copy them to an Amazon RDS for Oracle DB instance using Amazon S3. We also demonstrate how to use a wallet to protect calls made using the UTL_HTTP and UTL_SMTP utilities.

Creating and Uploading Custom Oracle Wallets

To enable SSL/TLS connections from PL/SQL, you can upload custom Oracle wallets to your Amazon RDS for Oracle DB instances. These wallets can contain public and private certificates to access SSL/TLS based endpoints from your RDS Oracle DB instances. First, you create an initial Oracle wallet containing an Amazon S3 certificate as a one-time setup. Then you can securely upload any number of wallets to Amazon RDS for Oracle DB instances through Amazon S3.

Creating and Uploading a Wallet with an Amazon S3 Certificate

1. Download the Baltimore CyberTrust Root certificate [7].

2. Convert the certificate to the x509 PEM format:

   openssl x509 -inform der -in BaltimoreCyberTrustRoot.crt -outform pem -out BaltimoreCyberTrustRoot.pem

3. Using the orapki utility [8], create a wallet and add the certificate. This exports the wallet to a file named cwallet.sso. Alternatively, if you don't specify an auto-login wallet, you can use ewallet.p12. In this case, PL/SQL
applications must provide a password when opening the wallet.

   orapki wallet create -wallet . -auto_login_only
   orapki wallet add -wallet . -trusted_cert -cert BaltimoreCyberTrustRoot.pem -auto_login_only
   orapki wallet display -wallet .

4. Using high-level aws s3 commands with the AWS Command Line Interface (CLI) [9], create an S3 bucket (or use an existing bucket) and upload the wallet artifact:

   aws s3 mb s3://<bucket_name>
   aws s3 cp cwallet.sso s3://<bucket_name>/

5. Generate a presigned URL for the wallet artifact. By default, presigned URLs are valid for an hour; however, you can set the expiration explicitly [10].

   aws s3 presign s3://<bucket_name>/cwallet.sso

6. Import the procedure provided in the Appendix into your RDS for Oracle DB instance.

7. Using this procedure, download the wallet from the S3 bucket.

a. Create a directory for this initial wallet. (Be sure to always store each wallet in its own directory.)

   exec rdsadmin.rdsadmin_util.create_directory('S3_SSL_WALLET');

b. Whitelist outbound traffic on Oracle's ACL (using the 'user' defined earlier):

   BEGIN
     DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
       acl => 's3.xml',
       description => 'AWS S3 ACL',
       principal => UPPER('&user'),
       is_grant => TRUE,
       privilege => 'connect');
     COMMIT;
   END;
   /
   BEGIN
     DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
       acl => 's3.xml',
       host => '*.amazonaws.com');
     COMMIT;
   END;
   /

c. Using the procedure above, fetch the wallet artifact uploaded earlier to the S3 bucket. Replace the p_s3_url value with the presigned URL generated in step 5 (after stripping it to be HTTP instead of HTTPS). Although access to this S3 wallet artifact is presigned, it must be over HTTP.

   set define #;
   BEGIN
     s3_download_presigned_url (
       p_s3_url => '<presigned_url>',
       p_local_filename => 'cwallet.sso',
       p_local_directory => 'S3_SSL_WALLET'
     );
   END;
   /

8. Set the S3_SSL_WALLET path above for utl_http transactions:
   DECLARE
     l_wallet_path all_directories.directory_path%type;
   BEGIN
     select directory_path into l_wallet_path
       from all_directories
      where upper(directory_name)='S3_SSL_WALLET';
     utl_http.set_wallet('file:/' || l_wallet_path);
   END;
   /

At this point, you can use the wallet to access any artifact (not limited to Oracle wallets) from Amazon S3 over SSL/TLS, as long as you're pointing to the wallet directory specified above.

Uploading a Customized Wallet Bundle

With the capability described in the previous procedure, you can also download customized Oracle wallets (containing customized selections of public or private CA certificates). For example, you can create a new Oracle wallet containing a wallet bundle of your choice, upload it to an S3 bucket, and use one of the previous procedures to securely download this wallet to an Amazon RDS for Oracle DB instance.

1. Create a new directory (named MY_WALLET, for example) for this new wallet bundle:

   exec rdsadmin.rdsadmin_util.create_directory('MY_WALLET');

2. Download the new wallet artifacts from the S3 bucket to the new directory. Notice that we've passed the S3_SSL_WALLET directory from the initial setup above to validate against the S3 bucket certificate. The download is requested over HTTPS.

   BEGIN
     s3_download_presigned_url (
       p_s3_url => '<presigned_url>',
       p_local_filename => 'cwallet.sso',
       p_local_directory => 'MY_WALLET',
       p_wallet_directory => 'S3_SSL_WALLET'
     );
   END;
   /

3. Run this procedure to use the newly uploaded wallet (for example, with UTL_HTTP):

   DECLARE
     l_wallet_path all_directories.directory_path%type;
   BEGIN
     select directory_path into l_wallet_path
       from all_directories
      where upper(directory_name)='MY_WALLET';
     utl_http.set_wallet('file:/' || l_wallet_path);
   END;
   /

Similarly, you can upload and use any generic wallet where it's needed.

Examples of Using Oracle Wallets to Establish SSL/TLS Outbound Connections

Oracle wallets
containing CA certificate bundles allow SSL/TLS based outbound traffic to access any endpoint that can validate itself against one of the CA certificates in the bundle. Here are a few examples of how you can use wallets to establish SSL/TLS outbound connections.

Using UTL_HTTP over an SSL/TLS Endpoint

Once you create a wallet, accessing an endpoint over SSL/TLS requires setting the wallet path. In this example, robots.txt from status.aws.amazon.com is accessed with an Oracle wallet containing Amazon's CA certificate (obtained from https://www.amazontrust.com/repository):

   BEGIN
     utl_http.set_wallet('file:/rdsdbdata/userdirs/02');
   END;
   /
   select utl_http.request('https://status.aws.amazon.com/robots.txt') as ROBOTS_TXT from dual;

   ROBOTS_TXT
   User-agent: *
   Allow: /

Establishing Database Links between RDS Oracle DB Instances over an SSL/TLS Endpoint

Database links can be established between RDS Oracle DB instances over an SSL/TLS endpoint, as long as the SSL option is configured for each instance [11]. No further setup is required.

Sending Emails Using UTL_SMTP and Amazon Simple Email Service (Amazon SES)

You can use Amazon SES to send emails on UTL_SMTP over SSL/TLS.

1. Obtain the relevant AWS Region endpoint and credentials from Amazon SES [12].

2. Obtain the Verisign/Symantec-based CA certificates [13].

3. Create or update an existing wallet containing the relevant certificate. For this example, assume that the wallet has been uploaded to a directory called SES_SSL_WALLET, created through the RDSADMIN utility.

Using your Amazon SES SMTP credentials, send an email through UTL_SMTP using the sample code snippet in the Appendix.

Downloading a File from Amazon S3 to an RDS Oracle DB Instance

Using a utility similar to the s3_download_presigned_url procedure, you can download files from
Amazon S3. For example:

   BEGIN
     s3_download_presigned_url (
       p_s3_url => 'https://s3.amazonaws.com/<bucket_name>/<key>?AWSAccessKeyId=<access_key>',
       p_local_filename => '<filename>',
       p_local_directory => '<directory>',
       p_wallet_directory => 'S3_SSL_WALLET'
     );
   END;
   /

Uploading a File from RDS Oracle DB Instance to Amazon S3

Uploading an artifact from your database instance to Amazon S3 is possible through HTTP PUT multipart requests using AWS Signature Version 4 signing [14].

Conclusion

In this paper we explained how to create Oracle wallets containing CA certificate bundles and copy them to Amazon RDS for Oracle DB instances. We also provided a few examples that showed how you can use wallets to establish SSL/TLS based outbound connections. You can extend the steps highlighted in this paper to access any secure endpoint from your Amazon RDS Oracle DB instances.

Appendix

Sample PL/SQL Procedure to Download Artifacts from Amazon S3

-- Define your user here
define user='admin';

-- Direct-grant required privileges
BEGIN
  rdsadmin.rdsadmin_util.grant_sys_object('DBA_DIRECTORIES', UPPER('&user'));
END;
/
BEGIN
  rdsadmin.rdsadmin_util.grant_sys_object('UTL_HTTP', UPPER('&user'));
END;
/
BEGIN
  rdsadmin.rdsadmin_util.grant_sys_object('UTL_FILE', UPPER('&user'));
END;
/

-- Example download procedure
CREATE OR REPLACE PROCEDURE s3_download_presigned_url (
  p_s3_url IN VARCHAR2,
  p_local_filename IN VARCHAR2,
  p_local_directory IN VARCHAR2,
  p_wallet_directory IN VARCHAR2 DEFAULT NULL
) AS
  -- Local variables
  l_req utl_http.req;
  l_wallet_path VARCHAR2(4000);
  l_fh utl_file.file_type;
  l_resp utl_http.resp;
  l_data raw(32767);
  l_file_size NUMBER;
  l_file_exists BOOLEAN;
  l_block_size BINARY_INTEGER;
  l_http_status NUMBER;
  -- User-defined exceptions
  e_https_requires_wallet EXCEPTION;
  e_wallet_dir_invalid EXCEPTION;
  e_http_exception EXCEPTION;
BEGIN
  -- Validate
input
  IF (regexp_like(p_s3_url, '^https:', 'i') AND p_wallet_directory IS NULL) THEN
    raise e_https_requires_wallet;
  END IF;

  -- Use wallet if specified
  IF (p_wallet_directory IS NOT NULL) THEN
    BEGIN
      SELECT directory_path INTO l_wallet_path
        FROM dba_directories
       WHERE upper(directory_name)=upper(p_wallet_directory);
      utl_http.set_wallet('file:' || l_wallet_path);
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        raise e_wallet_dir_invalid;
    END;
  END IF;

  -- Do HTTP request
  BEGIN
    l_req := utl_http.begin_request(p_s3_url, 'GET', 'HTTP/1.1');
    l_fh := utl_file.fopen(p_local_directory, p_local_filename, 'wb', 32767);
    l_resp := utl_http.get_response(l_req);

    -- If we get an HTTP error code, write that instead
    l_http_status := l_resp.status_code;
    IF (l_http_status != 200) THEN
      dbms_output.put_line('WARNING: HTTP response ' || l_http_status || ' ' ||
        l_resp.reason_phrase || ' Details in ' || p_local_filename);
    END IF;

    -- Loop over response and write to file
    BEGIN
      LOOP
        utl_http.read_raw(l_resp, l_data, 32766);
        utl_file.put_raw(l_fh, l_data, true);
      END LOOP;
    EXCEPTION
      WHEN utl_http.end_of_body THEN
        utl_http.end_response(l_resp);
    END;

    -- Get file attributes to see what we did
    utl_file.fgetattr(
      location => p_local_directory,
      filename => p_local_filename,
      fexists => l_file_exists,
      file_length => l_file_size,
      block_size => l_block_size
    );
    utl_file.fclose(l_fh);
    dbms_output.put_line('wrote ' || l_file_size || ' bytes');
  EXCEPTION
    WHEN OTHERS THEN
      utl_http.end_response(l_resp);
      utl_file.fclose(l_fh);
      dbms_output.put_line(dbms_utility.format_error_stack());
      dbms_output.put_line(dbms_utility.format_error_backtrace());
      raise;
  END;
EXCEPTION
  WHEN e_https_requires_wallet THEN
    dbms_output.put_line('ERROR: HTTPS requires a valid wallet location');
  WHEN e_wallet_dir_invalid THEN
    dbms_output.put_line('ERROR: wallet directory
not found');
  WHEN others THEN
    raise;
END s3_download_presigned_url;
/

Sample PL/SQL Procedure to Send an Email Through Amazon SES

declare
  l_smtp_server varchar2(1024) := 'email-smtp.us-west-2.amazonaws.com';
  l_smtp_port number := 587;
  l_wallet_dir varchar2(128) := 'SES_SSL_WALLET';
  l_from varchar2(128) := 'user@lorem-ipsum.dolar';
  l_to varchar2(128) := 'user@lorem-ipsum.dolar';
  l_user varchar2(128) := '';
  l_password varchar2(128) := '';
  l_subject varchar2(128) := 'Test subject';
  l_wallet_path varchar2(4000);
  l_conn utl_smtp.connection;
  l_reply utl_smtp.reply;
  l_replies utl_smtp.replies;
begin
  select 'file:/' || directory_path into l_wallet_path
    from dba_directories
   where directory_name=l_wallet_dir;

  -- open a connection
  l_reply := utl_smtp.open_connection(
    host => l_smtp_server,
    port => l_smtp_port,
    c => l_conn,
    wallet_path => l_wallet_path,
    secure_connection_before_smtp => false
  );
  dbms_output.put_line('opened connection, received reply ' || l_reply.code || '/' || l_reply.text);

  -- get supported configs from server
  l_replies := utl_smtp.ehlo(l_conn, 'localhost');
  for r in 1..l_replies.count loop
    dbms_output.put_line('ehlo (server config): ' || l_replies(r).code || '/' || l_replies(r).text);
  end loop;

  -- STARTTLS
  l_reply := utl_smtp.starttls(l_conn);
  dbms_output.put_line('starttls, received reply ' || l_reply.code || '/' || l_reply.text);

  l_replies := utl_smtp.ehlo(l_conn, 'localhost');
  for r in 1..l_replies.count loop
    dbms_output.put_line('ehlo (server config): ' || l_replies(r).code || '/' || l_replies(r).text);
  end loop;

  utl_smtp.auth(l_conn, l_user, l_password, utl_smtp.all_schemes);
  utl_smtp.mail(l_conn, l_from);
  utl_smtp.rcpt(l_conn, l_to);
  utl_smtp.open_data(l_conn);
  utl_smtp.write_data(l_conn, 'Date: ' || to_char(SYSDATE, 'DD-MON-YYYY HH24:MI:SS') || utl_tcp.crlf);
  utl_smtp.write_data(l_conn, 'From: ' || l_from || utl_tcp.crlf);
  utl_smtp.write_data(l_conn, 'To: ' || l_to || utl_tcp.crlf);
  utl_smtp.write_data(l_conn, 'Subject: ' || l_subject || utl_tcp.crlf);
  utl_smtp.write_data(l_conn, '' || utl_tcp.crlf);
  utl_smtp.write_data(l_conn, 'Test message' || utl_tcp.crlf);
  utl_smtp.close_data(l_conn);
  l_reply := utl_smtp.quit(l_conn);
exception
  when others then
    utl_smtp.quit(l_conn);
    raise;
end;
/

Notes

1. https://aws.amazon.com/rds/
2. https://aws.amazon.com/vpc/
3. https://aws.amazon.com/ec2/
4. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html#USER_VPC.Non-VPC2VPC
5. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Oracle.html#Oracle.Concepts.ONA
6. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.CommonDBATasks.System.html#Appendix.Oracle.CommonDBATasks.CustomDNS
7. https://www.digicert.com/digicert-root-certificates.htm
8. https://docs.oracle.com/database/121/DBSEG/asoappf.htm#DBSEG610
9. http://docs.aws.amazon.com/cli/latest/userguide/using-s3-commands.html
10. http://docs.aws.amazon.com/cli/latest/reference/s3/presign.html
11. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.SSL.html
12. https://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email-smtp.html
13. https://www.symantec.com/theme/roots
14. https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-authentication-HTTPPOST.html
",General,consultant,Best Practices

RealTime_Communication_on_AWS,Real-Time Communication on AWS

Best Practices for Designing Highly Available and Scalable Real-Time Communication (RTC) Workloads on AWS

February 2020

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does
not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
Fundamental Components of RTC Architecture
Softswitch/PBX
Session Border Controller (SBC)
PSTN Connectivity
Media Gateway (Transcoder)
WebRTC and WebRTC Gateway
High Availability and Scalability on AWS
Floating IP Pattern for HA Between Active-Standby Stateful Servers
Load Balancing for Scalability and HA with WebRTC and SIP
Cross-Region DNS-Based Load Balancing and Failover
Data Durability and HA with Persistent Storage
Dynamic Scaling with AWS Lambda, Amazon Route 53, and AWS Auto Scaling
Highly Available WebRTC with Kinesis Video Streams
Highly Available SIP Trunking with Amazon Chime Voice Connector
Best Practices from the Field
Create a SIP Overlay
Perform Detailed Monitoring
Use DNS for Load Balancing and Floating IPs for Failover
Use Multiple Availability Zones
Keep Traffic within One Availability Zone and Use EC2 Placement Groups
Use Enhanced Networking EC2 Instance Types
Security Considerations
Conclusion
Contributors
Document Revisions

Abstract

Today many organizations are looking to reduce cost and attain scalability for real-time voice, messaging, and multimedia workloads. This paper outlines the best practices for managing real-time communication workloads on AWS and includes reference architectures to meet these requirements. This paper serves as a guide for individuals familiar with real-time communication on how to achieve high availability and scalability for these workloads.
Services – Real-Time Communication on AWS

Introduction

Telecommunication applications using voice, video, and messaging as channels are a key requirement for many organizations and their end users. These real-time communication (RTC) workloads have specific latency and availability requirements that can be met by following relevant design best practices. In the past, RTC workloads were deployed in traditional on-premises data centers with dedicated resources. However, due to a mature and burgeoning set of features, RTC workloads can now be deployed on Amazon Web Services (AWS) despite stringent service-level requirements, while also benefiting from scalability, elasticity, and high availability. Today, several customers are using AWS, its partners, and open-source solutions to run RTC workloads with reduced cost, faster agility, the ability to go global in minutes, and rich features from AWS services.

Customers leverage AWS features such as enhanced networking with an Elastic Network Adapter (ENA) and the latest generation of Amazon Elastic Compute Cloud (EC2) instances to benefit from the Data Plane Development Kit (DPDK), single root I/O virtualization (SR-IOV), huge pages, NVM Express (NVMe), and non-uniform memory access (NUMA) support, as well as bare metal instances, to meet RTC workload requirements. These instances offer network bandwidth of up to 100 Gbps and commensurate packets per second, delivering increased performance for network-intensive applications. For scaling, Elastic Load Balancing offers the Application Load Balancer, which offers WebSocket support, and the Network Load Balancer, which can handle millions of requests per second. For network acceleration, AWS Global Accelerator provides static IP addresses that act as a fixed entry point to your application endpoints in AWS, including static IP addresses for the load balancer. For reduced latency, lower cost, and increased bandwidth throughput, AWS Direct Connect establishes a dedicated network connection from on premises to AWS. Highly
available, managed SIP trunking is provided by Amazon Chime Voice Connector, and Amazon Kinesis Video Streams with WebRTC makes it easy to stream real-time, two-way media with high availability.

This paper includes reference architectures that show how to set up RTC workloads on AWS, along with best practices to optimize the solutions to meet end-user requirements while optimizing for the cloud. The evolved packet core (EPC) is out of scope for this whitepaper, but the best practices detailed here can be applied to virtual network functions (VNFs).

Fundamental Components of RTC Architecture

In the telecommunications industry, real-time communication (RTC) commonly refers to live media sessions between two endpoints with minimum latency. These sessions could be related to:

• A voice session between two parties (e.g., telephone system, mobile VoIP)
• Instant messaging (e.g., chatting, IRC)
• A live video session (e.g., videoconferencing, telepresence)

Each of the preceding solutions has some components in common (e.g., components that provide authentication, authorization, and access control; transcoding; buffering and relay; and so on) and some components unique to the type of media transmitted (e.g., broadcast service, messaging server and queues, and so on). This section focuses on defining a voice- and video-based RTC system and all of the related components illustrated in Figure 1.

Figure 1: Essential architectural components for RTC

Softswitch/PBX

A softswitch or PBX is the brain of a voice telephone system and provides the intelligence for establishing, maintaining, and routing a voice call within or outside the enterprise by using different components. All of the subscribers of the enterprise are required to register with the softswitch to receive or make a call.
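This registration bookkeeping can be illustrated with a toy in-memory registrar. This is purely a sketch of the concept; real softswitches such as Asterisk maintain far richer state (expirations, multiple contacts per subscriber, authentication), and all names below are hypothetical:

```python
# Illustrative sketch only: a softswitch keeps a binding from each
# subscriber's address-of-record (AOR) to the contact where that
# subscriber can currently be reached.

class Registrar:
    def __init__(self):
        self._bindings = {}  # AOR -> current contact URI

    def register(self, aor: str, contact: str) -> None:
        """Record where a subscriber can be reached (a SIP REGISTER)."""
        self._bindings[aor] = contact

    def lookup(self, aor: str):
        """Where should a call to this subscriber be routed?"""
        return self._bindings.get(aor)

registrar = Registrar()
registrar.register("sip:alice@example.com", "sip:alice@10.0.1.25:5060")
# A call to alice is now routable to her registered contact.
```

The `lookup` result is what the softswitch hands to the other voice-network components when routing a call.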
An important function of the softswitch is to keep track of each subscriber and how to reach them by using the other components within the voice network.

Session Border Controller (SBC)

A session border controller (SBC) sits at the edge of a voice network and keeps track of all incoming and outgoing traffic (both control and data planes). One of the key responsibilities of an SBC is to protect the voice system from malicious use. The SBC can be used to interconnect with Session Initiation Protocol (SIP) trunks for external connectivity. Some SBCs also provide transcoding capabilities for converting codecs from one format to another. Finally, most SBCs also provide NAT traversal capabilities, which helps ensure that calls are established even across firewalled networks.

PSTN Connectivity

Voice over IP (VoIP) solutions use PSTN gateways and SIP trunks to connect with legacy PSTN networks.

PSTN Gateway

The public switched telephone network (PSTN) gateway converts the signaling (between SIP and SS7) and the media (between RTP and time-division multiplexing [TDM], using codec transcoding). PSTN gateways always sit at the edge, close to the PSTN network.

SIP Trunk

With a SIP trunk, the enterprise does not terminate its calls onto a TDM (SS7-based) network; rather, the flows between the enterprise and the telco remain over IP. Most SIP trunks are established by using SBCs. The enterprise must agree to predefined security rules from the telco, such as allowing a certain range of IP addresses, ports, and so on.

Media Gateway (Transcoder)

A typical voice solution allows various types of codecs. Some of the common codecs are G.711 µ-law for North America, G.711 A-law for outside of North America, G.729, and G.722. When two devices that are using two different codecs communicate with each other, a media server translates the codec flow between the devices. In other words, a media gateway processes media and ensures that the end devices are able to communicate with each other.

WebRTC and WebRTC Gateway

Web real-time communication (WebRTC) allows you to establish a call from a web
browser, or to request resources from the backend server by using an API. The technology is designed with cloud technology in mind and therefore provides various APIs that can be used to establish a call. Because not all voice solutions (including SIP) support these APIs, a WebRTC gateway is required to translate API calls into SIP messages and vice versa.

Figure 2 shows a design pattern for a highly available WebRTC architecture. The incoming traffic from WebRTC clients is balanced by an Application Load Balancer, with WebRTC running on EC2 instances that are part of an Auto Scaling group.

Figure 2: A basic topology of an RTC system for voice

Another design pattern for SIP and RTP traffic is to use pairs of SBCs on Amazon EC2 in active–passive mode across Availability Zones (Figure 3). Here, an Elastic IP address can be dynamically moved between instances upon failure where DNS cannot be used.

Figure 3: RTC architecture using Amazon EC2 in a VPC

High Availability and Scalability on AWS

Most providers of real-time communications align with service levels that provide availability from 99.9% to 99.999%. Depending on the degree of high availability (HA) that you want, you must take increasingly sophisticated measures along the full lifecycle of the application. We recommend following these guidelines to achieve a robust degree of high availability:

• Design the system to have no single point of failure. Use automated monitoring, failure detection, and failover mechanisms for both stateless and stateful components.
  o Single points of failure (SPOF) are commonly eliminated with an N+1 or 2N redundancy configuration, where N+1 is achieved via load balancing among active–active nodes and 2N is achieved by a pair of nodes in an active–standby configuration.
  o AWS has several methods for achieving HA through both approaches, such as through a scalable, load-
balanced cluster or an active–standby pair.
• Correctly instrument and test system availability.
• Prepare operating procedures for manual mechanisms to respond to, mitigate, and recover from failures.

This section focuses on how to achieve no single point of failure by using capabilities available on AWS. Specifically, it describes a subset of core AWS capabilities and design patterns that allow you to build highly available real-time communication applications on the platform.

Floating IP Pattern for HA Between Active–Standby Stateful Servers

The floating IP design pattern is a well-known mechanism to achieve automatic failover between an active and standby pair of hardware nodes (media servers). A static secondary virtual IP address is assigned to the active node. Continuous monitoring between the active and standby nodes detects failure. If the active node fails, the monitoring script assigns the virtual IP to the ready standby node, and the standby node takes over the primary active function. In this way, the virtual IP floats between the active and standby nodes.

Applicability in RTC solutions

It is not always possible to have multiple active instances of the same component in service, such as an active–active cluster of N nodes. In such cases, an active–standby configuration provides the best mechanism for HA. For example, the stateful components in an RTC solution, such as the media server, conferencing server, or even an SBC or database server, are well suited for an active–standby setup. An SBC or media server has several long-running sessions or channels active at a given time, and if the active SBC instance fails, the endpoints can reconnect to the standby node without any client-side configuration because of the floating IP.

Implementation on AWS

You can implement this pattern on AWS by using core capabilities in Amazon Elastic Compute Cloud (Amazon EC2): the Amazon EC2 API, Elastic IP addresses, and
support on Amazon EC2 for secondary private IP addresses:

1. Launch two EC2 instances to assume the roles of primary and secondary nodes, where the primary is assumed to be in the active state by default.
2. Assign an additional secondary private IP address to the primary EC2 instance.
3. Associate an Elastic IP address, which is similar to a virtual IP (VIP), with the secondary private address. This secondary private address is the address used by external endpoints to access the application.
4. Perform the OS configuration required to add the secondary IP address as an alias to the primary network interface.
5. Bind the application to this Elastic IP address. In the case of Asterisk software, you can configure the binding through advanced Asterisk SIP settings.
6. Run a monitoring script (a custom script, KeepAlive on Linux, Corosync, and so on) on each node to monitor the state of the peer node. If the current active node fails, the peer detects the failure and invokes the Amazon EC2 API to reassign the secondary private IP address to itself.
7. As a result, the application that was listening on the VIP associated with the secondary private IP address becomes available to endpoints via the standby node.

Figure 4: Failover between stateful EC2 instances using an Elastic IP address

Benefits

This approach is a reliable, low-budget solution that protects against failures at the EC2 instance, infrastructure, or application level.

Limitations and extensibility

This design pattern is typically limited to a single Availability Zone. It can be implemented across two Availability Zones, but with a variation: the floating Elastic IP address is reassociated between the active and standby nodes in different Availability Zones via the reassociate Elastic IP address API. In the failover implementation shown in Figure 4, calls in progress are dropped and endpoints must reconnect.
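The reassignment in step 6 can be sketched with the EC2 API via boto3. The interface ID and addresses below are placeholders, and the call itself requires AWS credentials when actually run:

```python
# Sketch of step 6: the standby node detects that the active peer has
# failed and claims the floating secondary private IP for itself.
# IDs and addresses are placeholders.

def reassignment_params(interface_id: str, floating_ip: str) -> dict:
    """Build the EC2 AssignPrivateIpAddresses request that moves the
    floating address onto this node's network interface."""
    return {
        "NetworkInterfaceId": interface_id,
        "PrivateIpAddresses": [floating_ip],
        # AllowReassignment lets EC2 take the address away from the
        # failed peer's interface.
        "AllowReassignment": True,
    }

def take_over(ec2_client, interface_id: str, floating_ip: str) -> None:
    """Invoked by the monitoring script once the peer stops responding."""
    ec2_client.assign_private_ip_addresses(
        **reassignment_params(interface_id, floating_ip)
    )

# Usage (requires credentials and real IDs):
#   import boto3
#   take_over(boto3.client("ec2"), "eni-0123456789abcdef0", "10.0.1.100")
```

Because the Elastic IP stays associated with the secondary private address, endpoints keep dialing the same VIP; only the network interface behind it changes.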
It is possible to extend this implementation with replication of the underlying session data to provide seamless failover of sessions or media continuity as well.

Load Balancing for Scalability and HA with WebRTC and SIP

Load balancing a cluster of active instances based on predefined rules (such as round robin, affinity, or latency) is a design pattern widely popularized by the stateless nature of HTTP requests. In fact, load balancing is a viable option for many RTC application components. The load balancer acts as the reverse proxy or entry point for requests to the desired application, which itself is configured to run on multiple active nodes simultaneously. At any given point in time, the load balancer directs a user request to one of the active nodes in the defined cluster. Load balancers perform health checks against the nodes in their target cluster and do not send an incoming request to a node that fails the health check. Therefore, a fundamental degree of high availability is achieved by load balancing. Also, because a load balancer performs active and passive health checks against all cluster nodes in sub-second intervals, the time for failover is near instantaneous. The decision on which node to direct a request to is based on rules defined in the load balancer, including:

• Round robin
• Session or IP affinity, which ensures that multiple requests within a session or from the same IP are sent to the same node in the cluster
• Latency based
• Load based

Applicability in RTC Architectures

The WebRTC protocol makes it possible for WebRTC gateways to be easily load balanced via an HTTP-based load balancer, such as the Elastic Load Balancing Application Load Balancer or Network Load Balancer. With most SIP implementations relying on transport over both TCP and UDP, network- or connection-level load balancing with support for both TCP- and UDP-based traffic is needed.

Load Balancing on AWS for WebRTC Using Application Load Balancer and Auto
Scaling

In the case of WebRTC-based communications, Elastic Load Balancing provides a fully managed, highly available, and scalable load balancer to serve as the entry point for requests, which are then directed to a target cluster of EC2 instances associated with Elastic Load Balancing. Also, because WebRTC requests are stateless, you can use Amazon EC2 Auto Scaling to provide fully automated and controllable scalability, elasticity, and high availability.

The Application Load Balancer provides a fully managed load balancing service that is highly available (using multiple Availability Zones) and scalable. It supports the load balancing of WebSocket requests, which handle the signaling for WebRTC applications, and bidirectional communication between the client and server using a long-running TCP connection. The Application Load Balancer also supports content-based routing and sticky sessions, routing requests from the same client to the same target using load-balancer-generated cookies. If you enable sticky sessions, the same target receives the request and can use the cookie to recover the session context. Figure 5 shows the target topology.

Figure 5: WebRTC scalability and high availability architecture

Implementation for SIP Using Network Load Balancer or an AWS Marketplace Product

In the case of SIP-based communications, the connections are made over TCP or UDP, with the majority of RTC applications using UDP. If SIP/TCP is the signaling protocol of choice, then it is feasible to use the Network Load Balancer for fully managed, highly available, scalable, and performant load balancing. A Network Load Balancer operates at the connection level (Layer 4), routing connections to targets such as Amazon EC2 instances, containers, and IP addresses based on IP protocol data. Ideal for TCP or UDP traffic load balancing, the Network Load Balancer is capable of handling millions of requests per second while maintaining ultra-low latencies.
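As a sketch of what fronting SIP with a Network Load Balancer involves, the request payloads below show a UDP target group and listener in boto3 `elbv2` naming. The names, port, and ARNs are placeholders, and the actual calls would be `elbv2.create_target_group(**...)` and `elbv2.create_listener(**...)`:

```python
# Sketch: Layer 4 (UDP) load balancing for SIP on a Network Load
# Balancer. Values are illustrative placeholders.

def sip_target_group_params(vpc_id: str) -> dict:
    """Target group of SIP nodes receiving UDP traffic."""
    return {
        "Name": "sip-media-nodes",
        "Protocol": "UDP",        # NLB operates at the connection level
        "Port": 5060,             # conventional SIP port
        "VpcId": vpc_id,
        "TargetType": "instance",
    }

def sip_listener_params(load_balancer_arn: str, target_group_arn: str) -> dict:
    """UDP listener that forwards SIP traffic to the target group."""
    return {
        "LoadBalancerArn": load_balancer_arn,
        "Protocol": "UDP",
        "Port": 5060,
        "DefaultActions": [
            {"Type": "forward", "TargetGroupArn": target_group_arn},
        ],
    }
```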
It is integrated with other popular AWS services such as AWS Auto Scaling, Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), and AWS CloudFormation.

If SIP connections are initiated over UDP, another option is to use commercial off-the-shelf (COTS) software from the AWS Marketplace. The AWS Marketplace offers many products that can handle UDP and other types of Layer 4 connection load balancing. These COTS products typically include support for high availability and are commonly integrated with features such as AWS Auto Scaling to further enhance availability and scalability. Figure 6 shows the target topology.

Figure 6: SIP-based RTC scalability with an AWS Marketplace product

Cross-Region DNS-Based Load Balancing and Failover

Amazon Route 53 provides a global DNS service that can be used as a public or private endpoint for RTC clients to register and connect with media applications. With Amazon Route 53, DNS health checks can be configured to route traffic to healthy endpoints or to independently monitor the health of your application. The Amazon Route 53 Traffic Flow feature makes it easy for you to manage traffic globally through a variety of routing types, including latency-based routing, geo DNS, geoproximity, and weighted round robin, all of which can be combined with DNS failover to enable a variety of low-latency, fault-tolerant architectures. The Amazon Route 53 Traffic Flow simple visual editor lets you manage how your end users are routed to your application's endpoints, whether in a single AWS Region or distributed around the globe.

In the case of global deployments, the latency-based routing policy in Route 53 is especially useful to direct customers to the nearest point of presence for a media server, improving the quality of service associated with real-time media exchanges.
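A latency-based routing policy of this kind can be sketched as a Route 53 change batch. The hosted zone, domain, Regions, and addresses below are placeholders; the batch would be submitted with `route53.change_resource_record_sets(HostedZoneId=..., ChangeBatch=batch)`:

```python
# Sketch: latency-based A records that send callers to the nearest
# media entry point. All values are illustrative placeholders.

def latency_record(name: str, region: str, ip: str, ttl: int = 60) -> dict:
    """One latency-routed record for a media endpoint in a Region."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "SetIdentifier": f"media-{region}",  # must be unique per record
            "Region": region,                    # latency-based routing key
            "TTL": ttl,                          # short TTL limits caching lag
            "ResourceRecords": [{"Value": ip}],
        },
    }

batch = {
    "Changes": [
        latency_record("sip.example.com", "us-west-2", "198.51.100.10"),
        latency_record("sip.example.com", "us-east-1", "203.0.113.10"),
    ]
}
```

A Route 53 health check can additionally be attached to each record set so that an unhealthy media endpoint is withdrawn from DNS answers.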
Note that to enforce a failover to a new DNS address, client caches must be flushed. Also, DNS changes may lag as they are propagated across global DNS servers. You can manage the refresh interval for DNS lookups with the Time to Live (TTL) attribute, which is configurable when setting up DNS policies.

To reach global users quickly, or to meet a requirement for a single public IP, AWS Global Accelerator can also be used for cross-Region failover. AWS Global Accelerator is a networking service that improves availability and performance for applications with both local and global reach. It provides static IP addresses that act as a fixed entry point to your application endpoints, such as your Application Load Balancers, Network Load Balancers, or Amazon EC2 instances, in single or multiple AWS Regions. It uses the AWS global network to optimize the path from your users to your applications, improving performance characteristics such as the latency of your TCP and UDP traffic. AWS Global Accelerator continually monitors the health of your application endpoints and automatically redirects traffic to the nearest healthy endpoints when current endpoints turn unhealthy. For additional security requirements, Accelerated Site-to-Site VPN uses AWS Global Accelerator to improve the performance of VPN connections by intelligently routing traffic through the AWS global network and AWS edge locations.

Figure 7: Inter-Region high availability design using AWS Global Accelerator or Amazon Route 53

Data Durability and HA with Persistent Storage

Most RTC applications rely on persistent storage to store and access data for authentication, authorization, and accounting (session data, call detail records, etc.), as well as operational monitoring and logging. In a traditional data center, ensuring high availability and durability for the persistent storage components (databases, file systems, and so on) typically requires heavy lifting via the setup of a SAN, RAID design, and processes
for backup, restore, and failover processing. The AWS Cloud greatly simplifies and enhances traditional data center practices around data durability and availability. For object storage and file storage, AWS services like Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS) provide managed high availability and scalability; Amazon S3 has a data durability of 11 nines. For transactional data storage, customers have the option to take advantage of the fully managed Amazon Relational Database Service (Amazon RDS), which supports Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server with high availability deployments. For the registrar function, subscriber profiles, or accounting records storage (e.g., CDRs), Amazon RDS provides a fault-tolerant, highly available, and scalable option.

Dynamic Scaling with AWS Lambda, Amazon Route 53, and AWS Auto Scaling

AWS allows the chaining of features and the ability to incorporate custom serverless functions as a service based on infrastructure events. One design pattern that has many versatile uses in RTC applications is the combination of Auto Scaling lifecycle hooks with Amazon CloudWatch Events, Amazon Route 53, and AWS Lambda functions. AWS Lambda functions can embed any action or logic. Figure 8 demonstrates how these features, chained together, can enhance system reliability and scalability with automation.

Figure 8: Automatic scaling with dynamic updates to Amazon Route 53

Highly Available WebRTC with Kinesis Video Streams

Amazon Kinesis Video Streams offers real-time media streaming via WebRTC, allowing users to capture, process, and store media streams for playback, analytics, and machine learning. These streams are highly available, scalable, and compliant with WebRTC standards.
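A sketch of how a client obtains the managed signaling endpoints for such a channel via the Kinesis Video Streams API follows (boto3 naming; the channel ARN is a placeholder, and the call requires credentials when actually run):

```python
# Sketch: fetch the WebRTC signaling endpoints for a Kinesis Video
# Streams signaling channel. The ARN is a placeholder.

def endpoint_request(channel_arn: str, role: str = "VIEWER") -> dict:
    """Build the GetSignalingChannelEndpoint request for a WebRTC peer."""
    return {
        "ChannelARN": channel_arn,
        "SingleMasterChannelEndpointConfiguration": {
            "Protocols": ["WSS", "HTTPS"],  # WSS for signaling traffic
            "Role": role,                   # "MASTER" (e.g., camera) or "VIEWER"
        },
    }

def signaling_endpoints(kvs_client, channel_arn: str) -> dict:
    """Return a protocol -> endpoint URL map for the channel."""
    resp = kvs_client.get_signaling_channel_endpoint(
        **endpoint_request(channel_arn)
    )
    return {e["Protocol"]: e["ResourceEndpoint"] for e in resp["ResourceEndpointList"]}

# Usage (requires credentials and a real channel ARN):
#   import boto3
#   eps = signaling_endpoints(boto3.client("kinesisvideo"), channel_arn)
```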
Amazon Kinesis Video Streams includes a WebRTC signaling endpoint for fast peer discovery and secure connection establishment, as well as managed Session Traversal Utilities for NAT (STUN) and Traversal Using Relays around NAT (TURN) endpoints for real-time exchange of media between peers. It also includes a free, open-source SDK that integrates directly with camera firmware to enable secure communication with Kinesis Video Streams endpoints, allowing for peer discovery and media streaming. Finally, it provides client libraries for Android, iOS, and JavaScript that allow WebRTC-compliant mobile and web players to securely discover and connect with a camera device for media streaming and two-way communication.

Highly Available SIP Trunking with Amazon Chime Voice Connector

Amazon Chime Voice Connector delivers a pay-as-you-go SIP trunking service that enables companies to make and/or receive secure and inexpensive phone calls with their phone systems. Amazon Chime Voice Connector is a low-cost alternative to service provider SIP trunks or Integrated Services Digital Network (ISDN) Primary Rate Interfaces (PRIs). Customers have the option to enable inbound calling, outbound calling, or both. The service leverages the AWS network to deliver a highly available calling experience across multiple AWS Regions. You can stream audio from SIP trunking telephone calls, or forwarded SIP-based media recording (SIPREC) feeds, to Amazon Kinesis Video Streams to gain insights from business calls in real time. You can quickly build applications for audio analytics through integration with Amazon Transcribe and other common machine learning libraries.

Best Practices from the Field

This section summarizes the best practices that have been implemented by some of the largest and most successful AWS customers that run large real-time Session Initiation Protocol (SIP) workloads. AWS customers wanting to run their own SIP infrastructure in the public cloud will find these best practices valuable, as they can help increase the reliability and resiliency of the system in the face of different
kinds of failures. Although some of these best practices are SIP specific, most of them are applicable to any real-time communication application running on AWS.

Create a SIP Overlay

AWS has a robust, scalable, and redundant network backbone that provides connectivity between different Regions. When a network event such as a fiber cut degrades an AWS backbone link, traffic is quickly failed over to redundant paths using network-level routing protocols such as BGP. This network-level traffic engineering is a black box to AWS customers, and most do not even notice these failover events. However, customers that run real-time workloads such as voice, high-quality video, and low-latency messaging do sometimes notice these events. So how can an AWS customer implement their own traffic engineering on top of what is provided by AWS at the network level? The solution is to deploy SIP infrastructure in many different AWS Regions. As part of its call control features, SIP also provides the ability to route calls through specific SIP proxies.

Figure 9: Using SIP routing to override network routing

In Figure 9, SIP infrastructure (represented by green dots) is running in all four US Regions. The blue lines represent a fictional depiction of the AWS backbone. If no SIP routing is implemented, a call originating on the US west coast and destined for the US east coast goes over the backbone link that directly connects the Oregon and Virginia Regions. The diagram shows how a customer might override the network-level routing and make the same call between Oregon and Virginia route through California using SIP routing. This type of SIP traffic engineering can be implemented using SIP proxies and media gateways, based on network metrics such as SIP retransmissions and customer-specific business preferences.

Perform Detailed Monitoring

End users of real-time voice and video applications
expect the same level of performance as they achieve with traditional telephony services, so when they experience issues with an application, it ends up hurting the provider's reputation. To be proactive rather than reactive, it is imperative that detailed monitoring be deployed at every part of the system that serves end users.

Figure 10: Using SIPp to monitor VoIP infrastructure

Many open-source tools, such as iPerf, SIPp, and VoIPmonitor, are available that can be used to monitor SIP/RTP traffic. In the preceding example, nodes running SIPp in client and server modes measure SIP metrics, such as successful calls and SIP retransmits, between all four US AWS Regions. These metrics can then be exported into Amazon CloudWatch using a custom script. Using CloudWatch, customers can create alarms on these custom metrics based on a certain threshold value. Automatic or manual remediation actions can then be taken based on the state of these CloudWatch alarms; an example of a remediation action is changing the SIP routing based on increased SIP retransmits. For customers not wanting to allocate the engineering resources needed to develop and maintain a custom monitoring system, many good VoIP monitoring solutions, such as ThousandEyes, are available on the market.

Use DNS for Load Balancing and Floating IPs for Failover

IP telephony clients that support DNS SRV capability can efficiently use the redundancy built into the infrastructure by load balancing clients to different SBCs/PBXs.

Figure 11: Using DNS SRV records to load balance SIP clients

Figure 11 shows how customers can use SRV records to load balance SIP traffic. Any IP telephony client that supports the SRV standard will look for the sip_ prefix in an SRV-type DNS record. In the example, the answer section from DNS contains both of the PBXs running in different AWS Availability Zones.
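Client-side selection among SRV answers like those in Figure 11 follows the standard SRV rules (RFC 2782): prefer the lowest priority value, then pick among equal-priority targets in proportion to their weights. A sketch of that selection logic (record values are illustrative):

```python
import random

# Sketch: choose a PBX/SBC target from DNS SRV answers.
# Lowest priority wins; ties are broken weight-proportionally.

def pick_target(srv_records, rng=random):
    """srv_records: list of (priority, weight, port, target) tuples."""
    best_priority = min(r[0] for r in srv_records)
    candidates = [r for r in srv_records if r[0] == best_priority]
    total_weight = sum(r[1] for r in candidates)
    point = rng.uniform(0, total_weight)
    running = 0
    for priority, weight, port, target in candidates:
        running += weight
        if point <= running:
            return target, port
    return candidates[-1][3], candidates[-1][2]

answers = [
    (1, 10, 5060, "pbx-az1.example.com"),
    (1, 10, 5060, "pbx-az2.example.com"),
]
# Equal priority and equal weight: calls split roughly evenly between PBXs.
```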
However, in addition to the endpoint URIs, the SRV record contains three additional pieces of information:

• The first number is the Priority (1 in the example above). A lower priority is preferred over a higher one.
• The second number is the Weight (10 in the example above).
• The third number is the Port to be used (5060).

Since the priority is the same (1) for both PBX servers, the clients use the weight to load balance between the two PBXs. In this case, since the weights are also the same, SIP traffic should be load balanced equally between the two PBXs.

DNS can be a good solution for client load balancing, but what about implementing failover by changing or updating DNS 'A' records? This method is discouraged because of the inconsistency found in DNS caching behavior within clients and intermediate nodes. A better approach for intra-AZ failover between a cluster of SIP nodes is to use EC2 IP reassignment, where an impaired host's IP address is instantly reassigned to a healthy host by using the EC2 API. Paired with a detailed monitoring and health check solution, IP reassignment of a failed node ensures that traffic is moved over to a healthy host in a timely manner that minimizes end-user disruption.

Use Multiple Availability Zones

Each AWS Region is subdivided into separate Availability Zones. Each Availability Zone has its own power, cooling, and network connectivity, and thus forms an isolated failure domain. Within the constructs of AWS, customers are always encouraged to run their workloads in more than one Availability Zone. This ensures that customer applications can withstand even a complete Availability Zone failure, a very rare event in itself. This recommendation stands for real-time SIP infrastructure as well.

Figure 12: Handling Availability Zone failure

Let's assume that a catastrophic event (such as a Category 5 hurricane) causes a complete Availability Zone outage in the us-east-1 Region. With the infrastructure running as shown in
the diagram, all SIP clients that were originally registered with the nodes in the failed Availability Zone should re-register with the SIP nodes running in Availability Zone #2. (Test this behavior with your SIP clients/phones to make sure it is supported.) Although the active SIP calls at the time of the Availability Zone outage are lost, any new calls are routed through Availability Zone #2.

To summarize, DNS SRV records should point the client to multiple 'A' records, one in each Availability Zone. Each of those 'A' records should in turn point to multiple IP addresses of SBCs/PBXs in that Availability Zone, providing both intra- and inter-AZ resiliency. Both intra- and inter-AZ failover can be implemented by using IP reassignment if the IPs are public. Private IPs, however, cannot be reassigned across Availability Zones. If a customer is using private IP addressing, then they have to rely on the SIP clients re-registering with the backup SBC/PBX for inter-AZ failover.

Keep Traffic within One Availability Zone and Use EC2 Placement Groups

Also known as Availability Zone affinity, this best practice also applies to the rare event of a complete Availability Zone failure. It is recommended that you eliminate any cross-AZ traffic, such that any SIP or RTP traffic that enters one Availability Zone remains in that Availability Zone until it exits the Region.

Figure 13: Availability Zone affinity (at most 50% of active calls are lost)

Figure 13 shows a simplified architecture that uses Availability Zone affinity. The comparative advantage of this approach becomes clear if one accounts for the effects of a complete Availability Zone outage. As depicted in the diagram, if Availability Zone #2 is lost, at most 50% of active calls are affected (assuming equal load balancing between Availability Zones). Had Availability Zone affinity not been implemented, some calls would flow between Availability Zones in one Region, and a
failure would most likely affect more than 50% of active calls. Furthermore, to minimize latency, we also recommend that you consider using EC2 placement groups within each Availability Zone. Instances launched within the same EC2 placement group have higher bandwidth and reduced latency between them, as EC2 ensures the network proximity of these instances relative to each other.

Use Enhanced Networking EC2 Instance Types

Choosing the right instance type on Amazon EC2 ensures system reliability as well as efficient usage of infrastructure. EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity, and give you the flexibility to choose the appropriate mix of resources for your applications. Enhanced networking instance types ensure that the SIP workloads running on them have access to consistent bandwidth and comparatively lower aggregate latency. A recent addition to Amazon EC2 is the Elastic Network Adapter (ENA), which provides up to 100 Gbps of bandwidth. The latest catalog of EC2 instance types and associated features can be found on the EC2 instance types page.

For most customers, the latest generation of compute-optimized instances should provide the best value for the cost. For example, the C5n supports the Elastic Network Adapter with bandwidth up to 100 Gbps and millions of packets per second (PPS). Most real-time applications would also benefit from using the Intel Data Plane Development Kit (DPDK), which can greatly boost network packet processing. However, it is always a best practice to benchmark the various EC2 instance types against your requirements to see which instance type works best for you. Benchmarking also enables you to find other configuration parameters, such as the maximum number of calls a certain instance type can process at a time.
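Combining the two recommendations above, the request payloads for a cluster placement group and an ENA-capable media node can be sketched as follows (boto3 `ec2` naming; the AMI, names, and instance size are placeholders, and the actual calls would be `ec2.create_placement_group(**...)` and `ec2.run_instances(**...)`):

```python
# Sketch: keep media nodes network-adjacent with a cluster placement
# group, and launch an enhanced-networking instance type into it.
# All identifiers are illustrative placeholders.

def placement_group_params(name: str) -> dict:
    """Cluster strategy packs instances close together for low latency
    and high per-flow bandwidth within one Availability Zone."""
    return {"GroupName": name, "Strategy": "cluster"}

def media_node_params(ami_id: str, group_name: str) -> dict:
    """Launch parameters for a SIP/RTP media node in the group."""
    return {
        "ImageId": ami_id,
        "InstanceType": "c5n.9xlarge",  # C5n: ENA-based enhanced networking
        "MinCount": 1,
        "MaxCount": 1,
        "Placement": {"GroupName": group_name},
    }
```

Note that cluster placement groups reinforce Availability Zone affinity by design: all members of the group reside in a single Availability Zone.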
internet-facing Amazon EC2 instances and, in addition to TCP flows, use protocols like UDP and SIP. In these cases, AWS Shield Standard protects Amazon EC2 instances from common infrastructure layer (Layer 3 and 4) DDoS attacks, such as UDP reflection attacks, DNS reflection, NTP reflection, SSDP reflection, and so on. AWS Shield Standard uses various techniques, like priority-based traffic shaping, that are automatically engaged when a well-defined DDoS attack signature is detected. AWS also provides advanced protection against large and sophisticated DDoS attacks for these applications by enabling AWS Shield Advanced on Elastic IP addresses. AWS Shield Advanced provides enhanced DDoS detection that automatically detects the type of AWS resource and size of EC2 instance and applies appropriate predefined mitigations, with protections against SYN or UDP floods. With AWS Shield Advanced, customers can also create their own custom mitigation profiles by engaging the 24x7 AWS DDoS Response Team (DRT). AWS Shield Advanced also ensures that, during a DDoS attack, all of your Amazon VPC Network Access Control Lists (ACLs) are automatically enforced at the border of the AWS network, providing you with access to additional bandwidth and scrubbing capacity to mitigate large volumetric DDoS attacks.

Conclusion

Real-time communication (RTC) workloads can be deployed on Amazon Web Services (AWS) to attain scalability, elasticity, and high availability while meeting the key requirements. Today, several customers are using AWS, its partners, and open source solutions to run RTC workloads with reduced cost and faster agility, as well as a reduced global footprint. The reference architectures and best practices provided in this whitepaper can help customers successfully set up RTC workloads on AWS and optimize the solutions to meet end-user requirements while optimizing for the cloud.

Contributors

The following individuals and organizations
contributed to this document:

• Ahmad Khan, Senior Solutions Architect, Amazon Web Services
• Tipu Qureshi, Principal Engineer, AWS Support, Amazon Web Services
• Hasan Khan, Senior Technical Account Manager, Amazon Web Services
• Shoma Chakravarty, WW Technical Leader, Telecom, Amazon Web Services

Amazon Web Services – Real-Time Communication on AWS

Document Revisions

February 2020 – Updated for latest services and features
October 2018 – First publication,General,consultant,Best Practices
Regulation_Systems_Compliance_and_Integrity_Considerations_for_the_AWS_Cloud,"This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Regulation Systems Compliance and Integrity Considerations for the AWS Cloud

November 2017

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS’s current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Introduction
Security and Shared Responsibility
Governance and Monitoring
AWS Regions
Business Continuity and Disaster Recovery
Conclusion
Reg SCI Workbook
Document Revisions

Abstract

This document provides information to assist SCI entities with running applications and services on the AWS Cloud.

Introduction

The US Securities and Exchange Commission adopted Regulation Systems Compliance and Integrity (Reg SCI) to strengthen the technology infrastructure of the US securities markets. Reg SCI applies to entities that operate the core components of the securities markets, including national securities exchanges, clearing agencies, securities information processors, and alternative trading systems. These SCI entities are required to adopt an IT governance framework and system controls that ensure an adequate level of integrity, availability, resiliency, capacity, and security for systems that are necessary to maintain a fair and orderly securities market. SCI entities must monitor systems for disruptions, intrusions, and compliance events, and report these instances to the SEC and impacted market participants. You should review the full text of Reg SCI, available here: https://www.sec.gov/rules/final/2014/34-73639.pdf. This document is not legal advice.

Security and Shared Responsibility

Cloud security is a shared responsibility. While AWS manages security of the cloud by ensuring that its infrastructure complies with global and regional regulatory requirements and best practices, security in the cloud is the responsibility of the customer. This means that customers retain control of the security program they choose to
implement to protect their own content, platform, applications, systems, and networks, no differently than they would for applications in an on-site data center. To help customers establish, operate, and leverage the AWS security control environment, AWS has developed a security assurance program that uses global privacy and data protection best practices. These security protections and control processes are independently validated by multiple third-party assessments. Customers can review and download reports and details about more than 2,500 security controls by using AWS Artifact, the automated compliance reporting tool available in the AWS Management Console. The AWS Artifact portal provides on-demand access to AWS’ security and compliance documents, including Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, the AWS MAS TRM Workbook, and certifications from accreditation bodies across geographies and compliance verticals.

Governance and Monitoring

While SCI entities are ultimately responsible for establishing a governance framework and monitoring their own environments, AWS provides many tools to help customers efficiently achieve compliance. For example, AWS Config helps customers continuously monitor and record their AWS resource configurations and automate the evaluation of recorded configurations against desired configurations. Amazon CloudWatch allows customers to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in their AWS resources. Customers use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. AWS provides up-to-the-minute information on the AWS services that customers use to power their
applications via the publicly available Service Health Dashboard. Customers can configure a Personal Health Dashboard to receive a personalized view of the performance and availability of the AWS services underlying their resources and applications. The dashboard displays relevant and timely information to help customers manage events in progress, and it provides proactive notification to help customers plan for scheduled activities. With Personal Health Dashboard, changes in the health of AWS resources automatically trigger alerts, providing event visibility and guidance to help quickly diagnose and resolve issues. Customers can use these insights to react and keep their applications running smoothly.

AWS Regions

The AWS Cloud infrastructure is built around Regions and Availability Zones (“AZs”). A Region is a physical location in the world where we have multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. These Availability Zones offer customers the ability to operate production applications and databases which are more highly available, fault tolerant, and scalable than would be possible from a single data center. The AWS Cloud operates 42 Availability Zones within 16 geographic Regions around the world. For current information on AWS Regions and AZs, see https://aws.amazon.com/about-aws/global-infrastructure/

Business Continuity and Disaster Recovery

SCI entities must implement policies and procedures to ensure that their applicable systems have high levels of resiliency and availability. Customers utilize AWS to enable faster disaster recovery of their IT systems without incurring the infrastructure expense of a second physical site. With data
centers in regions all around the world, AWS provides a set of cloud-based disaster recovery services that enable rapid recovery of customers’ IT infrastructure and data. The AWS Cloud supports many popular disaster recovery architectures, from “pilot light” environments that are ready to scale up at a moment’s notice to “hot standby” environments that enable rapid failover.

Conclusion

Proper Reg SCI implementation depends on the customer’s ability to leverage the resilient, secure, and elastic solutions that AWS provides. Customers can decrease their operational risk and increase the security, availability, and resiliency of their systems by running well-architected applications on the AWS Cloud. Customers can optionally enroll in an Enterprise Agreement with AWS, which customers can use to tailor agreements that best suit their needs. For additional information on Enterprise Agreements, please contact a sales representative.

Reg SCI Workbook

The Reg SCI Workbook provides additional information to help customers map their alignment to Reg SCI. This is not legal or compliance advice; customers should consult with their legal and compliance teams. Each entry below lists the requirement reference, the requirement text, the implementation model, and implementation considerations.

Obligations related to policies and procedures of SCI entities

Requirement Reference: § 242.1001(a)(1)
Requirement: Each SCI entity shall establish, maintain, and enforce written policies and procedures reasonably designed to ensure that its SCI systems and, for purposes of security standards, indirect SCI systems have levels of capacity, integrity, resiliency, availability, and security adequate to maintain the SCI entity’s operational capability and promote the maintenance of fair and orderly markets. Policies and procedures required by this section shall include, at a minimum:
Implementation: Shared
Responsibility
Implementation Considerations: AWS has established an information security management program with designated roles and responsibilities that are appropriately aligned within the organization. AWS management reviews and evaluates the risks identified in the risk management program at least annually. Detailed information is provided in the AWS Security Whitepaper: https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Whitepaper.pdf. Customers are responsible for properly implementing contingency planning, training, and testing for their systems hosted on AWS. AWS provides customers with the capability to implement a robust continuity plan, including the utilization of frequent server instance backups, data redundancy replication, and the flexibility to place instances and store data within multiple geographic regions, as well as across multiple Availability Zones within each region. Each Availability Zone is designed as an independent failure zone; in the case of failure, automated processes move customer data traffic away from the affected area. This means that Availability Zones are typically physically separated within a metropolitan region and are in different flood plains. Customers utilize AWS to enable faster disaster recovery of their critical IT systems without incurring the infrastructure expense of a second physical site. The AWS Cloud supports many popular disaster recovery (DR) architectures, from “pilot light” environments that are ready to scale up at a moment’s notice to “hot standby” environments that enable rapid failover. To learn more about AWS Disaster Recovery, see http://media.amazonwebservices.com/AWS_Disaster_Recovery.pdf
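The multi-AZ guidance in this entry can be made concrete with a simple placement plan that spreads a fleet evenly across zones, so that losing any one zone removes only about 1/N of total capacity. A minimal sketch; the zone names are illustrative:

```python
def spread_across_azs(instance_count, azs):
    """Round-robin an instance fleet across Availability Zones so that
    a single-AZ failure affects only ~1/len(azs) of total capacity."""
    plan = {az: 0 for az in azs}
    for i in range(instance_count):
        plan[azs[i % len(azs)]] += 1
    return plan

# Example: 7 instances over three zones; no zone holds more than 3
plan = spread_across_azs(7, ["us-east-1a", "us-east-1b", "us-east-1c"])
```

In practice an Auto Scaling group with subnets in multiple AZs performs this balancing for you; the sketch only illustrates the capacity reasoning behind it.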
Requirement Reference: § 242.1001(a)(2)(i)
Requirement: The establishment of reasonable current and future technological infrastructure capacity planning estimates.
Implementation: Shared Responsibility
Implementation Considerations: AWS continuously monitors service usage to project infrastructure needs to support availability commitments and requirements. AWS maintains a capacity planning model to assess infrastructure usage and demands at least monthly, and usually more frequently (e.g., weekly). In addition, the AWS capacity planning model supports the planning of future demands to acquire and implement additional resources based upon current resources and forecasted requirements. Customers are responsible for capacity planning for their application. In addition to on-demand capacity, AWS offers Reserved Instances (RIs); RIs can provide a capacity reservation, offering additional confidence in your ability to launch the number of instances you have reserved when you need them.

Requirement Reference: § 242.1001(a)(2)(ii)
Requirement: Periodic capacity stress tests of such systems to determine their ability to process transactions in an accurate, timely, and efficient manner.
Implementation: Shared Responsibility
Implementation Considerations: Customers should consider using Elastic Load Balancing (ELB). ELB automatically distributes incoming application traffic across multiple Amazon EC2 instances. It enables you to achieve fault tolerance in your applications, seamlessly providing the required amount of load balancing capacity needed to route application traffic.

Requirement Reference: § 242.1001(a)(2)(iii)
Requirement: A program to review and keep current systems development and testing methodology for such systems.
Implementation: Shared Responsibility
Implementation Considerations: AWS employs a shared responsibility model for data ownership and security. AWS operates, manages, and controls the infrastructure components from the host
operating system and virtualization layer down to the physical security of the facilities in which the services operate. AWS services in production operations are managed in a manner that preserves their confidentiality, integrity, and availability. AWS has implemented secure software development procedures that are followed to ensure appropriate security controls are incorporated into the application design. As part of the application design process, new applications must participate in an AWS security review, including registering the application, initiating the application risk classification, participating in the architecture review and threat modeling, performing code review, and performing a penetration test. Customers assume responsibility and management of the guest operating system (including updates and security patches), other associated application software, as well as the configuration of the AWS-provided security group firewalls and other security, change management, and logging features.

Requirement Reference: § 242.1001(a)(2)(iv)
Requirement: Regular reviews and testing, as applicable, of such systems, including backup systems, to identify vulnerabilities pertaining to internal and external threats, physical hazards, and natural or manmade disasters.
Implementation: Shared Responsibility
Implementation Considerations: AWS tests the Business Continuity plan and its associated procedures at least annually to ensure effectiveness of the plan and the organization’s readiness to execute the plan. Testing consists of engagement drills that execute activities that would be performed in an actual outage. AWS documents the results, including lessons learned and any corrective actions that were completed. As previously stated, customers are responsible for properly implementing contingency planning, training, and testing for their systems hosted on AWS. Customers can request permission to conduct penetration testing to or
originating from any AWS resources, as long as they are limited to the customer’s instances and do not violate the AWS Acceptable Use Policy. Penetration tests should include customer IP addresses and not AWS endpoints; AWS endpoints are tested as part of AWS compliance vulnerability scans. Advance approval for these types of scans can be initiated by submitting a request using the AWS Vulnerability / Penetration Testing Request Form found here: https://aws.amazon.com/security/penetration-testing/

Requirement Reference: § 242.1001(a)(2)(v)
Requirement: Business continuity and disaster recovery plans that include maintaining backup and recovery capabilities sufficiently resilient and geographically diverse and that are reasonably designed to achieve next business day resumption of trading and two-hour resumption of critical SCI systems following a wide-scale disruption.
Implementation: Shared Responsibility
Implementation Considerations: Learn how to architect DR in the AWS Cloud based on your specific requirements: https://media.amazonwebservices.com/AWS_Disaster_Recovery.pdf. Also consider the use of ELB health checks on target EC2 instances to detect whether an instance and the app running on it are healthy, combined with Auto Scaling groups to identify failing instances and cycle them out automatically with limited downtime.

Requirement Reference: § 242.1001(a)(2)(vi)
Requirement: Standards that result in such systems being designed, developed, tested, maintained, operated, and surveilled in a manner that facilitates the successful collection, processing, and dissemination of market data.
Implementation: Customer Responsibility

Requirement Reference: § 242.1001(a)(2)(vii)
Requirement: Monitoring of such
systems to identify potential SCI events.
Implementation: Shared Responsibility
Implementation Considerations: One way to monitor your systems is to use Amazon CloudWatch, a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health, and use these insights to react and keep your application running smoothly. Visit here to learn more: https://aws.amazon.com/cloudwatch/

Requirement Reference: § 242.1001(a)(3)
Requirement: Each SCI entity shall periodically review the effectiveness of the policies and procedures required by this paragraph (a), and take prompt action to remedy deficiencies in such policies and procedures.
Implementation: Customer Responsibility

Requirement Reference: § 242.1001(a)(4)
Requirement: For purposes of this paragraph (a), such policies and procedures shall be deemed to be reasonably designed if they are consistent with current SCI industry standards, which shall be comprised of information technology practices that are widely available to information technology professionals in the financial sector and issued by an authoritative body that is a US governmental entity or agency, association of US governmental entities or agencies, or widely recognized organization. Compliance with such current SCI industry standards, however, shall not be the exclusive
means to comply with the requirements of this paragraph (a).
Implementation: Customer Responsibility

Requirement Reference: § 242.1001(b)
Requirement: Each SCI entity shall establish, maintain, and enforce written policies and procedures reasonably designed to ensure that its SCI systems operate in a manner that complies with the Act and the rules and regulations thereunder and the entity’s rules and governing documents, as applicable.
Implementation: Customer Responsibility

Requirement Reference: § 242.1001(c)
Requirement: Each SCI entity shall establish, maintain, and enforce reasonably designed written policies and procedures that include the criteria for identifying responsible SCI personnel, the designation and documentation of responsible SCI personnel, and escalation procedures to quickly inform responsible SCI personnel of potential SCI events.
Implementation: Customer Responsibility

Obligations related to SCI events

Requirement Reference: § 242.1002(a)
Requirement: Upon any responsible SCI personnel having a reasonable basis to conclude that an SCI event has occurred, each SCI entity shall begin to take appropriate corrective action, which shall include, at a minimum, mitigating potential harm to investors and market integrity resulting from the SCI event and devoting adequate resources to remedy the SCI event as soon as reasonably practicable.
Implementation: Shared Responsibility
Implementation Considerations: The AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you. The Service Health Dashboard is publicly available and displays the general status of AWS services. Personal Health Dashboard gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources. The dashboard displays relevant and timely information to help you manage events in progress, and provides
proactive notification to help you plan for scheduled activities. With Personal Health Dashboard, alerts are automatically triggered by changes in the health of AWS resources, giving you event visibility and guidance to help quickly diagnose and resolve issues.

Requirement Reference: § 242.1002(b)
Requirement: Commission notification and recordkeeping of SCI events. Each SCI entity shall: (1) notify the Commission of such SCI event immediately; (2) within 24 hours of any responsible SCI personnel having a reasonable basis to conclude that the SCI event has occurred, submit a written notification pertaining to such SCI event to the Commission, which shall be made on a good faith basis; (3) until such time as the SCI event is resolved and the SCI entity’s investigation of the SCI event is closed, provide updates pertaining to such SCI event to the Commission on a regular basis, or at such frequency as reasonably requested by a representative of the Commission; (4) continue to communicate with the Commission until a final report is issued; and (5) make, keep, and preserve records relating to all such SCI events.
Implementation: Customer Responsibility
Implementation Considerations: Amazon Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup. Customers can reliably store large or small amounts of data for as little as $0.004 per gigabyte per month, a significant savings compared to on-premises solutions. To keep costs low yet suitable for varying retrieval needs, Amazon Glacier provides three options for access to archives, from a few minutes to several hours. Learn more here: https://aws.amazon.com/glacier/details/#Vault_Lock

Requirement Reference: § 242.1002(c)
Requirement: Promptly after any responsible SCI personnel has a reasonable basis to conclude that an SCI
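The quoted archive rate makes recordkeeping retention budgeting straightforward to estimate. A sketch using the whitepaper's ~$0.004/GB-month figure (pricing is region-dependent and changes over time, so check current Amazon S3 Glacier pricing before relying on it):

```python
def monthly_archive_cost_usd(total_gb, price_per_gb_month=0.004):
    """Estimated monthly archive storage cost at the quoted rate.
    Excludes retrieval and request charges, which bill separately."""
    return round(total_gb * price_per_gb_month, 2)

# Example: a 5 TB compliance archive (~5,000 GB)
cost = monthly_archive_cost_usd(5000)
```

At this rate, even a multi-year, multi-terabyte retention obligation stays in the tens of dollars per month, which is why archive tiers suit long recordkeeping windows.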
event that is a systems disruption or systems compliance issue has occurred, follow the requirements set forth within for dissemination of SCI events.
Implementation: Customer Responsibility

Obligations related to systems changes; SCI review

Requirement Reference: § 242.1003(a)
Requirement: Within 30 calendar days after the end of each calendar quarter, each SCI entity shall submit to the Commission a report describing completed, ongoing, and planned material changes to its SCI systems and the security of indirect SCI systems during the prior, current, and subsequent calendar quarters, including the dates or expected dates of commencement and completion. An SCI entity shall establish reasonable written criteria for identifying a change to its SCI systems and the security of indirect SCI systems as material, and report such changes in accordance with such criteria.
Implementation: Customer Responsibility
Implementation Considerations: Customers can use the AWS Service Health Dashboard for detailed information on service disruptions.

Requirement Reference: § 242.1003(b)
Requirement: Each SCI entity shall: (1) conduct an SCI review of the SCI entity’s compliance with Regulation SCI not less than once each calendar year; provided, however, that: (i) penetration test reviews of the network, firewalls, and production systems shall be conducted at a frequency of not less than once every three years; and (ii) assessments of SCI systems directly supporting market regulation or market surveillance shall be conducted at a frequency based upon the risk assessment conducted as part of the SCI review, but in no case less than once every three years; (2) submit a report of the SCI review required by paragraph (b)(1) of this section to senior management of the SCI entity for review no more than 30 calendar days after completion of such SCI review; and (3) submit to the Commission, and to the board of directors of the SCI entity or the equivalent of such board, a report of the SCI review required by paragraph (b)(1) of this section, together with any response by senior management, within 60 calendar days after its submission to senior management of the SCI entity.
Implementation: Shared Responsibility
Implementation Considerations: AWS has established a formal audit program that includes continual independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment. Internal and external audits are planned and performed according to the documented audit schedule to review the continued performance of AWS against standards-based criteria and to identify general improvement opportunities. Compliance reports from these assessments are made available to customers to enable them to evaluate AWS. The AWS compliance reports identify the scope of AWS services and regions assessed, as well as the assessor’s attestation of compliance. A vendor or supplier evaluation can be performed by leveraging these reports and certifications. Included in these audit reports is Vulnerability Management. The AWS Security team notifies and coordinates with the appropriate service teams when conducting security-related activities within the system boundary. Activities include vulnerability scanning, contingency testing, and incident response exercises. AWS performs external vulnerability assessments at least quarterly, and identified issues are investigated and tracked to resolution. Additionally, AWS performs unannounced penetration tests by engaging independent third parties to probe the defenses and device configuration settings within the system. AWS Security teams also subscribe to newsfeeds for applicable vendor flaws and proactively monitor vendors’ websites and other relevant outlets for new patches. AWS customers also have the ability to report issues to AWS via the AWS Vulnerability Reporting website at:
http://aws.amazon.com/security/vulnerability-reporting/

SCI entity business continuity and disaster recovery plans: testing requirements for members or participants

Requirement Reference: § 242.1004
Requirement: With respect to an SCI entity’s business continuity and disaster recovery plans, including its backup systems, each SCI entity shall: (a) establish standards for the designation of those members or participants that the SCI entity reasonably determines are, taken as a whole, the minimum necessary for the maintenance of fair and orderly markets in the event of the activation of such plans; (b) designate members or participants pursuant to the standards established in paragraph (a) of this section and require participation by such designated members or participants in scheduled functional and performance testing of the operation of such plans, in the manner and frequency specified by the SCI entity, provided that such frequency shall not be less than once every 12 months; and (c) coordinate the testing of such plans on an industry- or sector-wide basis with other SCI entities.
Implementation: Customer Responsibility

Recordkeeping requirements related to compliance with Regulation SCI

Requirement Reference: § 242.1005
Requirement: (a) An SCI SRO shall make, keep, and preserve all documents relating to its compliance with Regulation SCI as prescribed in § 240.17a-1 of this chapter. An SCI entity that is not an SCI SRO shall: (1) make, keep, and preserve at least one copy of all documents, including correspondence, memoranda, papers, books, notices, accounts, and other such records, relating to its compliance with Regulation SCI, including but not limited to records relating to any changes to its SCI systems and indirect SCI systems; (2) keep all such documents for a period of not less than
five years, the first two years in a place that is readily accessible to the Commission or its representatives for inspection and examination.
Implementation: Customer Responsibility
Implementation Considerations: Amazon Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup. Customers can reliably store large or small amounts of data for as little as $0.004 per gigabyte per month, a significant savings compared to on-premises solutions. To keep costs low yet suitable for varying retrieval needs, Amazon Glacier provides three options for access to archives, from a few minutes to several hours. Learn more here: https://aws.amazon.com/glacier/details/#Vault_Lock

Electronic filing and submission

Requirement Reference: § 242.1006
Requirement: (a) Except with respect to notifications to the Commission made pursuant to § 242.1002(b)(1) or updates to the Commission made pursuant to § 242.1002(b)(3), any notification, review, description, analysis, or report to the Commission required to be submitted under Regulation SCI shall be filed electronically on Form SCI (§ 249.1900 of this chapter), include all information as prescribed in Form SCI and the instructions thereto, and contain an electronic signature; and (b) the signatory to an electronically filed Form SCI shall manually sign a signature page or document, in the manner prescribed by Form SCI, authenticating, acknowledging, or otherwise adopting his or her signature that appears in typed form within the electronic filing. Such document shall be executed before or at the time Form SCI is electronically filed and shall be retained by the SCI entity in accordance with § 242.1005.
Implementation: Customer Responsibility

Requirements for service bureaus

Requirement Reference: § 242.1007
Requirement: If records required to be
filed or kept by an SCI entity under Regulation SCI are prepared or maintained by a service bureau or other recordkeeping service on behalf of the SCI entity the SCI entity shall ensure that the records are available for review by the Commission and its representatives by submitting a written undertaking in a form acceptable to the Commission by such service bureau or other recordkeeping service signed by a duly authorized person at such service bureau or other recordkeeping service Such a written undertaking shall include an agreement by the service bureau to permit the Commission and its representatives to examine such records at any time or from time Customer Responsibility This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 15 Requirement Reference Requirement Implementation Implementation Considerations to time during business hours and to promptly furnish to the Commission and its representatives true correct and current electronic files in a form acceptable to the Commission or its representatives or hard copies of any or all or any part of such records upon request periodically or continuously and in any case within t he same time periods as would apply to the SCI entity for such records The preparation or maintenance of records by a service bureau or other recordkeeping service shall not relieve an SCI entity from its obligation to prepare maintain and provide the Commission and its representatives access to such records This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 16 Document Revisions Date Description November 2017 First publication",General,consultant,Best Practices Right_Sizing_Provisioning_Instances_to_Match_Workloads,Right Sizing 
Provisioning Instances to Match Workloads

January 2020

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 1
Right Size Before Migrating 1
Right Sizing is an Ongoing Process 1
Overview of Amazon EC2 and Amazon RDS Instance Families 2
Identifying Opportunities to Right Size 4
Tools for Right Sizing 4
Tips for Developing Your Own Right Sizing Tools 5
Tips for Right Sizing 6
Right Size Using Performance Data 6
Right Size Based on Usage Needs 8
Right Size by Turning Off Idle Instances 8
Right Size by Selecting the Right Instance Family 9
Right Size Your Database Instances 10
Conclusion 10
Contributors 11
Document Revisions 11

Abstract

This is the seventh in a series of whitepapers designed to support your cloud journey. This paper seeks to empower you to maximize value from your investments, improve forecasting accuracy and cost predictability, create a culture of ownership and cost transparency, and continuously measure your optimization status. This paper discusses how to provision instances to match your workload performance and capacity requirements to optimize costs.

Amazon Web Services – Right Sizing: Provisioning Instances to Match Workloads

Introduction

Right sizing is the process of matching instance types and sizes to your workload performance and capacity requirements at the lowest possible cost. It's also the process of looking at deployed instances and identifying opportunities to eliminate or downsize without compromising capacity or other requirements, which results in lower costs.

Right sizing is a key mechanism for optimizing AWS costs, but it is often ignored by organizations when they first move to the AWS Cloud. They lift and shift their environments and expect to right size later. Speed and performance are often prioritized over cost, which results in oversized instances and a lot of wasted spend on unused resources.

Right Size Before Migrating

One reason for the waste is the mindset to overprovision that many IT professionals bring with them when they build their cloud infrastructure. Historically, IT departments have had to provision for peak demand. However, cloud environments minimize costs because capacity is provisioned based on average usage rather than peak usage. When you learn how to right size, you can save up to 70 percent on your monthly bill.

The key to right sizing is to understand precisely your organization's usage needs and patterns, and to know how to take advantage of the elasticity of the AWS Cloud to respond to those needs. By right sizing before a migration, you can significantly reduce your infrastructure costs. If you skip right sizing to save time, your migration speed might be faster, but you will end up with higher cloud infrastructure spend for a potentially long time.

Right Sizing is an Ongoing Process

To achieve cost optimization, right sizing must become an ongoing process within your organization. It's important to right size when you first consider moving to the cloud and calculate total cost of ownership, but it's equally important to right size periodically once you're in the cloud to ensure ongoing cost-performance optimization. Why is it necessary to right size
continually? Even if you right size workloads initially, performance and capacity requirements can change over time, which can result in underused or idle resources. Additionally, new projects and workloads require additional cloud resources, and overprovisioning is the likely outcome if there is no process in place to support right sizing and other cost optimization efforts.

You should right size your workloads at least once a month to control costs. You can make right sizing a smooth process by:

• Having each team set up a right sizing schedule, and then report the savings to management
• Monitoring costs closely using AWS cost and reporting tools, such as Cost Explorer, budgets, and detailed billing reports in the Billing and Cost Management console
• Enforcing tagging for all instances so that you can quickly identify attributes such as the instance owner, application, and environment (development/testing or production)
• Understanding how to right size

We first describe the types of instances that AWS offers, and then discuss key considerations for right sizing your instances.

Overview of Amazon EC2 and Amazon RDS Instance Families

Picking an Amazon Elastic Compute Cloud (Amazon EC2) instance for a given workload means finding the instance family that most closely matches the CPU and memory needs of your workload. Amazon EC2 provides a wide selection of instances, which gives you lots of flexibility to right size your compute resources to match capacity needs at the lowest cost. There are five families of EC2 instances, with different options for CPU, memory, and network resources:

• General purpose (includes T2, M3, and M4 instance types) – T2 instances are a very low-cost option that provide a small amount of CPU resources that can be increased in short bursts when additional cycles are available. They are well suited for lower-throughput applications, such as administrative applications or low-traffic websites. M3 and M4 instances provide a balance of CPU, memory, and network resources, and are ideal for running small and midsize databases, more memory-intensive data processing tasks, caching fleets, and backend servers.

• Compute optimized (includes the C3 and C4 instance types) – Have a higher ratio of virtual CPUs to memory than the other families, and the lowest cost per virtual CPU of all the EC2 instance types. Consider compute optimized instances first if you are running CPU-bound, scale-out applications, such as frontend fleets for high-traffic websites, on-demand batch processing, distributed analytics, web servers, video encoding, and high-performance science and engineering applications.

• Memory optimized (includes the X1, R3, and R4 instance types) – Designed for memory-intensive applications, these instances have the lowest cost per GiB of RAM of all EC2 instance types. Use these instances if your application is memory bound.

• Storage optimized (includes the I3 and D2 instance types) – Optimized to deliver tens of thousands of low-latency, random input/output (I/O) operations per second (IOPS) to applications. Storage optimized instances are best for large deployments of NoSQL databases. I3 instances are designed for I/O-intensive workloads and equipped with super-efficient NVMe SSD storage. These instances can deliver up to 3.3 million IOPS in 4 KB blocks and up to 16 GB/second of sequential disk throughput. D2, or dense storage, instances are designed for workloads that require high sequential read and write access to very large data sets, such as Hadoop distributed computing, massively parallel processing data warehousing, and log-processing applications.

• Accelerated computing (includes the P2, G3, and F1 instance types) – Provide access to hardware-based compute accelerators such as graphics processing units (GPUs) or field-programmable gate arrays (FPGAs). Accelerated computing instances enable more parallelism for higher throughput on compute-intensive workloads.

Amazon Relational Database Service (Amazon RDS) database instances are similar to Amazon EC2 instances in that there are different families to suit different workloads. These database instance families are optimized for memory, performance, or I/O:

• Standard performance (includes the M3 and M4 instance types) – Designed for general-purpose database workloads that don't run many in-memory functions. This family has the most options for provisioning increased IOPS.

• Burstable performance (includes T2 instance types) – For workloads that require burstable performance capacity.

• Memory optimized (includes the R3 and R4 instance types) – Optimized for in-memory functions and big data analysis.

Identifying Opportunities to Right Size

The first step in right sizing is to monitor and analyze your current use of services to gain insight into instance performance and usage patterns. To gather sufficient data, observe performance over at least a two-week period (ideally, over a one-month period) to capture the workload and business peak. The most common metrics that define instance performance are vCPU utilization, memory utilization, network utilization, and ephemeral disk use. In rare cases where instances are selected for reasons other than these metrics, it is important for the technical owner to review the right sizing effort.

Tools for Right Sizing

You can use the following tools to evaluate costs and monitor and analyze instance usage for right sizing:

• Amazon CloudWatch – Lets you observe CPU utilization, network throughput, and disk I/O, and match the observed peak metrics to a new and cheaper instance type. You can also regularly monitor Amazon EC2 Usage Reports, which are updated several times a day and provide in-depth usage data for all your EC2 instances. Typically, this is feasible only for
small environments, given the time and effort required.

• AWS Cost Explorer – This free tool lets you dive deeper into your cost and usage data to identify trends, pinpoint cost drivers, and detect anomalies. It includes Amazon EC2 Usage Reports, which let you analyze the cost and usage of your EC2 instances over the last 13 months.

• AWS Trusted Advisor – Lets you inspect your AWS environment to identify idle and underutilized resources, and provides real-time insight into service usage to help you improve system performance and reliability, increase security, and look for opportunities to save money.

• Third-party monitoring tools, such as CloudHealth, Cloudability, and CloudCheckr, are also an option to automatically identify opportunities and suggest alternate instances. These tools have years of development effort and customer feedback points built into them. They also provide additional cost management and optimization functionality.

Tips for Developing Your Own Right Sizing Tools

You can also develop your own tools for monitoring and analyzing performance. The following guidelines can help if you are considering this option:

• Focus on instances that have run for at least half the time you're looking at.
• Focus on instances with lower Reserved Instance coverage.
• Exclude resources that have been switched off (reducing search effort).
• Avoid conversions to older-generation instances where possible.
• Apply a savings threshold below which right sizing is not worth considering.
• Make sure the following conditions are met before you switch to a new instance:

  o The vCPU of the new instance is equal to that of the old instance, or the application's observed vCPU is less than 80% of the vCPU capacity of the new instance.

  o The memory of the new instance is equal to that of the old instance, or the application's observed memory peak is less than 80% of the memory capacity of the new instance.

    Note: You can capture memory utilization metrics by using monitoring scripts that report these metrics to Amazon CloudWatch. For more information, see Monitoring Memory and Disk Metrics for Amazon EC2 Linux Instances.

  o The network throughput of the new instance is equal to that of the old instance, or the application's network peak is less than the network capacity of the new instance.

    Note: Maximum NetworkIn and NetworkOut values are measured in bytes per minute. Use the following formula to convert these metrics to megabits per second:

    Maximum NetworkIn (or NetworkOut) × 8 (bytes to bits) / 1024 / 1024 / 60 = Number of Mbps

  o If the ephemeral storage disk I/O is less than 3,000, you can use Amazon Elastic Block Store (Amazon EBS) storage. If not, use instance families that have ephemeral storage. For more information, see Amazon EBS Volume Types.

Tips for Right Sizing

This section offers tips to help you right size your EC2 instances and RDS DB instances.

Right Size Using Performance Data

Analyze performance data to right size your EC2 instances. Identify idle instances and ones that are underutilized. Key metrics to look for are CPU usage and memory usage. Identify instances with a maximum CPU usage and memory usage of less than 40% over a four-week period. These are the instances that you will want to right size to reduce costs.

For compute optimized instances, keep the following in mind:

• Focus on very recent instance data (old data may not be actionable).
• Focus on instances that have run for at least half the time you're looking at.
• Ignore burstable instance families (T2 instance types), because these families are designed to typically run at low CPU percentages for significant periods of time.

For storage optimized instances (I2 and D2 instance types), where the key feature is high data IOPS, focus on IOPS to see whether instances are overprovisioned. Keep the following in mind for storage
optimized instances:

• Different-size instances have different IOPS ratings, so tailor your reports to each instance type. Start with your most commonly used storage optimized instance type.

• Peak NetworkIn and NetworkOut values are measured in bytes per minute. Use the following formula to convert these metrics to megabits per second:

  Maximum NetworkIn (or NetworkOut) × 8 (bytes to bits) / 1024 / 1024 / 60 = Number of Mbps

• Take note of how I/O and CPU percentage metrics change during the day, and whether there are peaks that need to be accommodated.

Right size against memory if you find that maximum memory utilization over a four-week period is less than 40%. AWS provides sample scripts for monitoring memory and disk space utilization on your EC2 instances running Linux. You can configure the scripts to report the metrics to Amazon CloudWatch.

When analyzing performance data for Amazon RDS DB instances, focus on the following metrics to determine whether actual usage is lower than instance capacity:

• Average CPU utilization
• Maximum CPU utilization
• Minimum available RAM
• Average number of bytes read from disk per second
• Average number of bytes written to disk per second

Right Size Based on Usage Needs

As you monitor current performance, identify the following usage needs and patterns so that you can take advantage of potential right sizing options:

• Steady state – The load remains at a relatively constant level over time, and you can accurately forecast the likely compute load. For this usage pattern, you might consider Reserved Instances, which can provide significant savings.

• Variable, but predictable – The load changes, but on a predictable schedule. Auto Scaling is well suited for applications that have stable demand patterns, with hourly, daily, or weekly variability in usage. You can use this feature to scale Amazon EC2 capacity up or down when you experience spiky traffic or predictable fluctuations in traffic.

• Dev/test/production – Development, testing, and production environments are typically used only during business hours, and can be turned off during evenings, weekends, and holidays. (You'll need to rely on tagging to identify dev/test/production instances.)

• Temporary – For temporary workloads that have flexible start times and can be interrupted, you can consider placing a bid for an Amazon EC2 Spot Instance instead of using an On-Demand Instance.

Right Size by Turning Off Idle Instances

The easiest way to reduce operational costs is to turn off instances that are no longer being used. If you find instances that have been idle for more than two weeks, it's safe to stop or even terminate them. Before terminating an instance that's been idle for two weeks or less, consider:

• Who owns the instance?
• What is the potential impact of terminating the instance?
• How hard will it be to re-create the instance if you need to restore it?

Stopping an EC2 instance leaves any attached EBS volumes operational. You will continue to be charged for these volumes until you delete them. If you need the instance again, you can easily turn it back on. Terminating an instance, however, automatically deletes attached EBS volumes, and requires effort to re-provision should the instance be needed again. If you decide to delete an EBS volume, consider storing a snapshot of the volume so that it can be restored later if needed.

Another simple way to reduce costs is to stop instances used in development and production during hours when these instances are not in use, and then start them again when their capacity is needed. Assuming a 50-hour work week, you can save 70% by automatically stopping dev/test/production instances during non-business hours. Many tools are available to automate scheduling, including Amazon EC2 Scheduler, AWS Lambda, and AWS Data Pipeline, as well as third-party tools, such
as CloudHealth and Skeddly.

Right Size by Selecting the Right Instance Family

You can right size an instance by migrating to a different model within the same instance family, or by migrating to another instance family. When migrating within the same instance family, you only need to consider vCPU, memory, network throughput, and ephemeral storage. A good general rule for EC2 instances is that if your maximum CPU and memory usage is less than 40% over a four-week period, you can safely cut the machine in half. For example, if you were using a c4.8xlarge EC2 instance, you could move to a c4.4xlarge, which would save $190 every 10 days.

When migrating to a different instance family, make sure the current instance type and the new instance type are compatible in terms of virtualization type, network, and platform:

• Virtualization type – The instances must have the same Linux AMI virtualization type (PV AMI versus HVM) and platform (EC2-Classic versus EC2-VPC). For more information, see Linux AMI Virtualization Types.

• Network – Some instances are not supported in EC2-Classic and must be launched in a virtual private cloud (VPC). For more information, see Instance Types Available Only in a VPC.

• Platform – If your current instance type supports 32-bit AMIs, make sure to select a new instance type that also supports 32-bit AMIs (not all EC2 instance types do). To check the platform of your instance, go to the Instances screen in the Amazon EC2 console and choose Show/Hide Columns, Architecture.

When you resize an EC2 instance, the resized instance usually has the same number of instance store volumes that you specified when you launched the original instance. You cannot attach instance store volumes to an instance after you've launched it, so if you want to add instance store volumes, you will need to migrate to a new instance type that contains the higher number of volumes.

Right Size Your Database Instances

You can scale your database instances by adjusting memory or compute power up or down as performance and capacity requirements change. The following are some things to consider when scaling a database instance:

• Storage and instance type are decoupled. When you scale your database instance up or down, your storage size remains the same and is not affected by the change.

• You can separately modify your Amazon RDS DB instance to increase the allocated storage space or improve the performance by changing the storage type (such as General Purpose SSD to Provisioned IOPS SSD).

• Before you scale, make sure you have the correct licensing in place for commercial engines (SQL Server, Oracle), especially if you Bring Your Own License (BYOL).

• Determine when you want to apply the change. You have an option to apply it immediately, or during the maintenance window specified for the instance.

Conclusion

Right sizing is the most effective way to control cloud costs. It involves continually analyzing instance performance and usage needs and patterns, and then turning off idle instances and right sizing instances that are either overprovisioned or poorly matched to the workload. Because your resource needs are always changing, right sizing must become an ongoing process to continually achieve cost optimization. You can make right sizing a smooth process by establishing a right sizing schedule for each team, enforcing tagging for all instances, and taking full advantage of the powerful tools that AWS and others provide to simplify resource monitoring and analysis.

Contributors

Contributors to this document include:

• Amilcar Alfaro, Sr. Product Marketing Manager, AWS
• Erin Carlson, Marketing Manager, AWS
• Keith Jarrett, WW BD Lead – Cost Optimization, AWS Business Development

Document Revisions

Date | Description
January 2020 | Minor revisions
March 2018 | First publication,General,consultant,Best Practices
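The 80% headroom checks and the bytes-per-minute conversion from the right sizing tips above can be sketched in code. This is a minimal, hypothetical illustration, not an AWS tool: the function names, dictionary keys, and the example instance specifications are all assumptions made for the sketch.

```python
# Sketch of the downsizing checks described in "Tips for Developing Your
# Own Right Sizing Tools". Function names and keys are illustrative only.

def bytes_per_minute_to_mbps(max_bytes_per_minute):
    """Convert a CloudWatch NetworkIn/NetworkOut maximum (bytes/minute) to Mbps:
    bytes x 8 (bits) / 1024 / 1024 (megabits) / 60 (per second)."""
    return max_bytes_per_minute * 8 / 1024 / 1024 / 60

def fits_smaller_instance(observed, candidate, headroom=0.80):
    """Return True if observed peaks fit the candidate instance under the
    whitepaper's conditions: observed vCPU and memory peaks below 80% of the
    candidate's capacity, and network peak within the candidate's capacity."""
    return (
        observed["vcpus_used"] < headroom * candidate["vcpus"]
        and observed["memory_peak_gib"] < headroom * candidate["memory_gib"]
        and observed["network_peak_mbps"] <= candidate["network_mbps"]
    )

# Example with made-up peak observations and candidate specs.
observed = {"vcpus_used": 10, "memory_peak_gib": 20.0, "network_peak_mbps": 900.0}
candidate = {"vcpus": 16, "memory_gib": 30.0, "network_mbps": 2000.0}
print(fits_smaller_instance(observed, candidate))  # True: all three checks pass
```

In practice the observed peaks would come from CloudWatch metrics gathered over the four-week window the paper recommends; the dictionaries here stand in for that data.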
Robust_Random_Cut_Forest_Based_Anomaly_Detection_on_Streams,"Robust Random Cut Forest Based Anomaly Detection On Streams

Sudipto Guha (sudipto@cis.upenn.edu), University of Pennsylvania, Philadelphia, PA 19104
Nina Mishra (nmishra@amazon.com), Amazon, Palo Alto, CA 94303
Gourav Roy (gouravr@amazon.com), Amazon, Bangalore, India 560055
Okke Schrijvers (okkes@cs.stanford.edu), Stanford University, Palo Alto, CA 94305

Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 48. Copyright 2016 by the author(s).

Abstract

In this paper we focus on the anomaly detection problem for dynamic data streams through the lens of random cut forests. We investigate a robust random cut data structure that can be used as a sketch or synopsis of the input stream. We provide a plausible definition of nonparametric anomalies based on the influence of an unseen point on the remainder of the data, i.e., the externality imposed by that point. We show how the sketch can be efficiently updated in a dynamic data stream. We demonstrate the viability of the algorithm on publicly available real data.

1. Introduction

Anomaly detection is one of the cornerstone problems in data mining. Even though the problem has been well studied over the last few decades, the emerging explosion of data from the internet of things and sensors leads us to reconsider the problem. In most of these contexts the data is streaming, and well-understood prior models do not exist. Furthermore, the input streams need not be append-only: there may be corrections, updates, and a variety of other dynamic changes. Two central questions in this regard are (1) how do we define anomalies? and (2) what data structure do we use to efficiently detect anomalies over dynamic data streams?

In this paper we initiate the formal study of both of these questions. For (1), we view the problem from the perspective of model complexity and say that a point is an anomaly if the complexity of the model increases substantially with the inclusion of the point. The labeling of a point is data dependent and corresponds to the externality imposed by the point in explaining the remainder of the data. We extend this notion of externality to handle "outlier masking" that often arises from duplicates and near-duplicate records. Note that the notion of model complexity has to be amenable to efficient computation in dynamic data streams. This relates question (1) to question (2), which we discuss in greater detail next. However, it is worth noting that anomaly detection is not well understood even in the simpler context of static batch processing, and (2) remains relevant in the batch setting as well.

For question (2), we explore a randomized approach akin to (Liu et al., 2012), due in part to the practical success reported in (Emmott et al., 2013). Randomization is a powerful tool and known to be valuable in supervised learning (Breiman, 2001). But its technical exploration in the context of anomaly detection is not well understood, and the same comment applies to the algorithm put forth in (Liu et al., 2012). Moreover, that algorithm has several limitations, as described in Section 4.1. In particular, we show that in the presence of irrelevant dimensions, crucial anomalies are missed. In addition, it is unclear how to extend this work to a stream. Prior work attempted solutions (Tan et al., 2011) that extend to streaming; however, those were not found to be effective (Emmott et al., 2013). To address these limitations we put forward a sketch or synopsis, termed robust random cut forest (RRCF), formally defined as follows.

Definition 1. A robust random cut tree (RRCT) on point set $S$ is
generated as follows: 1 Choose a random dimension proportional to /lscripti/summationtext j/lscriptj where/lscripti=m a x x∈Sxi−minx∈Sxi 2 Choose Xi∼Uniform[min x∈Sximaxx∈Sxi] 3 LetS1={x|x∈Sxi≤Xi}andS2=S\S1and recurse on S1andS2This document has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers ArchivedRobust Random Cut Forest Based Anomaly Detection On Streams A robust random cut forest (RRCF) is a collection of inde pendent RRCTs The approach in (Liu et al 2012) differs from the above procedure in Step (1) and chooses the dimension to cut uni formly at random We discuss this algorithm in more detailin Section 41and provide extensive comparison Following question (2) we ask: Does the RRCF data structure contain sufficient information that is independent ofthe specifics of the tree construction algorithm? In this pa per we prove that the RRCF data structure approximatelypreserves distances in the following sense: Theorem 1 Consider the algorithm in Definition 1 Let the weight of a node in a tree be the corresponding sum of dimensions/summationtext i/lscripti Given two points uv∈S define the tree distance between uandvto be the weight of the least common ancestor of uv Then the tree distance is always at least the Manhattan distance L1(uv) and in expectation at most O/parenleftBig dlog|S| L1(uv )/parenrightBig timesL1(uv) Theorem 1provides a low stretch distance preserving em bedding reminiscent of the JohnsonLindenstrauss Lemma(Johnson & Lindenstrauss 1984) using random projections forL 2()distances (which has much better dependence on d) The theorem is interesting because it implies that ifa point is far from others (as is the case with anomalies)that it will continue to be at least as far in a random cuttree in expectation The proof of Theorem 1follows along the same lines of the proof of approximating finite metric spaces by a collection of trees (Charikar et al 1998) Most of the proofs appear 
in the supplementary material The theorem shows that if there is a lot of empty space around a point ie γ=m i n vL1(uv)is large then we will isolate the point within O(dlog|S|/γ)levels from the root Moreover since for any p≥1 thepnormed dis tance satisfies d1−1/pLp(uv)≥L1(uv)≥Lp(uv)and therefore the early isolation applies to all large Lp()dis tances simultaneously This provides us a pointer towards the success of the original isolation forest algorithm in lowto moderate dimensional data because dis small and the probability of choosing a dimension is not as important if they are small in number Thus the RRCF ensemble contains sufficient information that allows us to determine dis tance based anomalies without focusing on the specificsof the distance function Moreover the distance scales are adjusted appropriately based on the empty spaces betweenthe points since the two bounding boxes may shrink afterthe cut Suppose that we are interested in the sample maintenance problem of producing a tree at random (with the correct probability) from T(S−{x}) or fromT(S∪{x})I n this paper we prove that we can efficiently insert and deletepoints into a random cut tree Theorem 2 (Section 3)Given a tree Tdrawn accordingtoT(S); if we delete the node containing the isolated point xand its parent (adjusting the grandparent accordingly see Figure 2) then the resulting tree T /primehas the same proba bility as if being drawn from T(S−{x}) Likewise we can produce a tree T/prime/primeas if drawn at random from T(S∪{x}) is time which is O(d)times the maximum depth of T which is typically sublinear in |T| Theorem 2demonstrates an intuitively natural behavior when points are deleted — as shown in the schematic in Figure 1 In effect if we insert x perform a few more op erations and then delete x then not only do we preserve distributions but the trees remain very close to each other — as if the insertion never happened This behavior is a classic desiderata of sketching algorithms xa b c (a) 
Before: Ta bc (b) After: T/prime Figure 1 Decremental maintenance of trees The natural behavior of deletions is not true if we do not choose the dimensions as in Step (1) of RRCF construction For example if we choose the dimensions uniformlyat random as in (Liu et al 2012) suppose we build a tree for(10)(/epsilon1/epsilon1)(01)where1/greatermuch/epsilon1>0and then delete (10) The probability of getting a tree over the two re maining points that uses a vertical separator is 3/4−/epsilon1/2 and not1/2 as desired The probability of getting that tree in the RRCF process (after applying Theorem 2)i s1−/epsilon1 as desired This natural behavior under deletions is also nottrue of most space partitioning methods –such as quadtrees(Finkel & Bentley 1974) kdtrees (Bentley 1975) and R trees (Guttman 1984) The dynamic maintenance of a dis tribution over trees in a streaming setting is a novel contri bution to the best of our knowledge and as a consequence we can efficiently maintain a tree over a sample of a stream: Theorem 3 We can maintain a random tree over a sample Seven as the sample Sis updated dynamically for stream ing data using sublinear update time and O(d|S|)space We can now use reservoir sampling (Vitter 1985) to main tain a uniform random sample of size |S|or a recency biased weighted random sample of size |S|(Efraimidis & Spirakis 2006) in space proportional to |S|on the fly In effect the random sampling process is now orthogo nal from the robust random cut forest construction For example to produce a sample of size ρ|S|forρ<1 in an uniform random sampling we can perform straight forward rejection sampling; in the recency biased sample ArchivedRobust Random Cut Forest Based Anomaly Detection On Streams in (Efraimidis & Spirakis 2006) we need to delete the (1−ρ)|S|lowest priority points This notion of downsam pling via deletions is supported perfectly by Theorem 2– even for downsampling rates that are determined after the trees have been constructed during 
postprocessing. Thus:

Theorem 4. Given a tree T(S) for sample S, if there exists a procedure that downsamples via deletion, then we have an algorithm that simultaneously provides us a downsampled tree for every downsampling rate.

Theorems 3 and 4, taken together, separate the notion of sampling from the analysis task, and therefore eliminate the need to fine-tune the sample size as an initial parameter. Moreover, the dynamic maintenance of trees in Theorem 3 provides a mechanism to answer counterfactual questions, as given in Theorem 5.

Theorem 5. Given a tree T(S) for sample S and a point p, we can efficiently compute a random tree in T(S ∪ {p}), and therefore answer questions such as: what would have been the expected depth had p been included in the sample?

The ability to answer these counterfactual questions is critical to determining anomalies. Intuitively, we label a point p as an anomaly when the joint distribution including the point is significantly different from the distribution that excludes it. Theorem 5 allows us to efficiently (pretend to) sketch the joint distribution including the point p. However, instead of measuring the effect of the sampled data points on p to determine its label (as is measured by notions such as expected depth), it stands to reason that we should measure the effect of p on the sampled points. This leads us to the definition of anomalies used in this paper.

2 Defining Anomalies

Consider the hypotheses:

(a) An anomaly is often easy to describe. Consider Waldo wearing a red fedora in a sea of dark felt hats. While it may be difficult for us to find Waldo in a crowd, if we could forget the faces and see the color (as is the case when Waldo is revealed by someone else), then the recognition of the anomaly is fairly simple.

(b) An anomaly makes it harder to describe the remainder of the data. If Waldo were not wearing the red fedora, we may not have admitted the possibility that hats can be colored. In essence, an anomaly displaces our attention from the normal observation
to this new one. The fundamental task is therefore to quantify the shift in attention.

Suppose that we assign left branches the bit 0 and right branches the bit 1 in a tree in a random cut forest. Now consider the bits that specify a point (excluding the bits that are required to store the attribute values of the point itself). This defines the complexity of a random model M(T), which in our case corresponds to a tree T that fits the initial data. Therefore the number of bits required to express a point corresponds to its depth in the tree. Given a set of points Z and a point y ∈ Z, let f(y, Z, T) be the depth of y in tree T. Consider now the tree produced by deleting x as in Theorem 2, T(Z − {x}). Note that given T and x, the tree T(Z − {x}) is uniquely¹ determined. Let the depth of y in T(Z − {x}) be f(y, Z − {x}, T) (we drop the qualification of the tree in this notation since it is uniquely defined).

Figure 2: A correspondence of trees. (a) Tree T(Z). (b) Tree T(Z − {x}).

Consider now a point y in the subtree c in Figure 2a. Its bit representation in T would be q0qr00. The model complexity, denoted |M(T)|, i.e., the number of bits required to write down the description of all points y in tree T, therefore will be

|M(T)| = Σ_{y∈Z} f(y, Z, T).

If we were to remove x, then the new model complexity is

|M(T′)| = Σ_{y∈Z−{x}} f(y, Z − {x}, T′),

where T′ = T(Z − {x}) is a tree over Z − {x}. Now consider the expected change in model complexity under a random model. Since we have a many-to-one mapping from T(Z) to T(Z − {x}) as a consequence of Theorem 2, we can express the second sum over T(Z) instead of T′ = T(Z − {x}), and we get

E_{T(Z)}[|M(T)|] − E_{T(Z−{x})}[|M(T(Z − {x}))|]
= Σ_T Σ_{y∈Z−{x}} Pr[T] ( f(y, Z, T) − f(y, Z − {x}, T′) ) + Σ_T Pr[T] f(x, Z, T).   (1)

Definition 2. Define the bit-displacement, or displacement, of a point x to be the increase in the model complexity of all other points; i.e., for a set Z, to capture the externality introduced by x, define
DISP(x, Z) = Σ_T Σ_{y∈Z−{x}} Pr[T] ( f(y, Z, T) − f(y, Z − {x}, T′) ),

where T′ = T(Z − {x}).

¹The converse is not true; this is a many-to-one mapping.

Note that the total change in model complexity is DISP(x, Z) + g(x, Z), where g(x, Z) = Σ_T Pr[T] f(x, Z, T) is the expected depth of the point x in a random model. Instead of postulating that anomalies correspond to large g(), we focus on larger values of DISP(). The name displacement is clearer based on this lemma:

Lemma 1. The expected displacement caused by a point x is the expected number of points in the sibling node of the leaf node containing x, when the partitioning is done according to the algorithm in Definition 1.

Shortcomings. While Definition 2 points towards a possible definition of an anomaly, the definition as stated is not robust to duplicates or near-duplicates. Consider one dense cluster and a point p far away from the cluster. The displacement of p will be large. But if there is a point q very close to p, then q's displacement in the presence of p is small. This phenomenon is known as outlier masking. Duplicates and near-duplicates are natural, and therefore the semantics of any anomaly detection algorithm has to accommodate them.

Duplicate Resilience. Consider the notion that Waldo has a few friends who help him hide; these friends are colluders, and if we were to get rid of all the colluders, then the description changes significantly. Specifically, instead of just removing the point x, we remove a set C with x ∈ C. Analogous to Equation (1),

E_{T(Z)}[|M(T)|] − E_{T(Z−C)}[|M(T(Z − C))|] = DISP(C, Z) + Σ_T Σ_{y∈C} Pr[T] f(y, Z, T),   (2)

where DISP(C, Z) is the notion of displacement extended to subsets:

DISP(C, Z) = Σ_T Σ_{y∈Z−C} Pr[T] ( f(y, Z, T) − f(y, Z − C, T″) ),   (3)

where T″ = T(Z − C). Absent any domain knowledge, it appears that the displacement should be attributed equally to all the
points in C. Therefore a natural choice for determining C seems to be max DISP(C, Z)/|C| subject to x ∈ C ⊆ Z. However, two problems arise: first, there are too many subsets C, and second, in a streaming setting it is likely we would be using a sample S ⊂ Z. Therefore the supposedly natural choice does not extend to samples. To avoid both issues we allow the choice of C to be different for different samples S; in effect, we are allowing Waldo to collude with different members in different tests! This motivates the following:

Definition 3. The collusive displacement of a point x, denoted CODISP(x, Z, |S|), is defined as

E_{S⊆Z, T} [ max_{x∈C⊆S} (1/|C|) Σ_{y∈S−C} ( f(y, S, T) − f(y, S − C, T″) ) ].

Lemma 2. CODISP(x, Z, |S|) can be estimated efficiently.

While CODISP(x, Z, |S|) is dependent on |S|, the dependence is not severe. We envision using the largest sample size permitted under the resource constraints. We arrive at the central characterization we use in this paper:

Definition 4. Outliers correspond to large CODISP().

3 Forest Maintenance on a Stream

In this section we discuss how Robust Random Cut Trees can be dynamically maintained. In the following, let RRCF(S) be the distribution over trees obtained by running Definition 1 on S. Consider the following operations:

Insertion: Given T drawn from distribution RRCF(S) and p ∉ S, produce a T′ drawn from RRCF(S ∪ {p}).

Deletion: Given T drawn from distribution RRCF(S) and p ∈ S, produce a T′ drawn from RRCF(S − {p}).

We need the following simple observation.

Observation 1. Separating a point set S and a point p using an axis-parallel cut is possible if and only if it is possible to separate the minimal axis-aligned bounding box B(S) and p using an axis-parallel cut.

The next lemma provides a structural property of RRCF trees. We are interested in incremental updates with as few changes as possible to a set of trees. Note that given a specific tree, we have two exhaustive cases: (i) the new point, which is to be deleted (respectively, inserted), is
not separated by the first cut, and (ii) the new point to be deleted (respectively, inserted) is separated by the first cut. Lemma 3 addresses these for collections of trees (not just a single tree) that satisfy (i) and (ii), respectively.

Lemma 3. Given a point p and a set of points S with axis-parallel minimal bounding box B(S) such that p ∉ B(S):

(i) For any dimension i, the probability of choosing an axis-parallel cut in dimension i that splits S using the weighted isolation forest algorithm is exactly the same as the conditional probability of choosing an axis-parallel cut that splits S ∪ {p} in dimension i, conditioned on not isolating p from all points of S.

(ii) Given a random tree of RRCF(S ∪ {p}), conditioned on the fact that the first cut isolates p from all points of S, the remainder of the tree is a random tree in RRCF(S).

3.1 Deletion of Points

We begin with Algorithm 1, which is deceptively simple.

Algorithm 1: ForgetPoint
1. Find the node v in the tree where p is isolated in T.
2. Let u be the sibling of v. Delete the parent of v (and of u) and replace that parent with u (i.e., we short-circuit the path from u to the root).
3. Update all bounding boxes starting from u's (new) parent upwards. This state is not necessary for deletions but is useful for insertions.
4. Return the modified tree T′.

Lemma 4. If T were drawn from the distribution RRCF(S), then Algorithm 1 produces a tree T′ which is drawn at random from the probability distribution RRCF(S − {p}).

Lemma 5. The deletion operation can be performed in time O(d) times the depth of point p.

Observe that if we delete a random point from the tree, then the running time of the deletion operation is O(d) times the expected depth of any point. Likewise, if we delete points whose depth is shallower than that of most points in the tree, then we can improve the running time of Lemma 5.

3.2 Insertion of Points

Given a tree T from RRCF(S), we produce a tree T′ from the distribution
RRCF(S ∪ {p}). The algorithm is provided in Algorithm 2. Once again we couple the decisions; that is, we mirror the same split in T′ as in T, as long as p is not outside a bounding box in T. Up to this point we are performing the same steps as in the construction of the forest on S ∪ {p}, with the same probability.

Algorithm 2: InsertPoint
1. We have a set of points S′ and a tree T(S′). We want to insert p and produce the tree T′(S′ ∪ {p}).
2. If S′ = ∅, then we return a node containing the single point p.
3. Otherwise, S′ has a bounding box B(S′) = [x^ℓ_1, x^h_1] × [x^ℓ_2, x^h_2] × ··· × [x^ℓ_d, x^h_d], with x^ℓ_i ≤ x^h_i for all i.
4. For all i, let x̂^ℓ_i = min{p_i, x^ℓ_i} and x̂^h_i = max{x^h_i, p_i}.
5. Choose a random number r ∈ [0, Σ_i (x̂^h_i − x̂^ℓ_i)].
6. This r corresponds to a specific choice of a cut in the construction of RRCF(S′ ∪ {p}). For instance, we can compute argmin{ j | Σ^j_{i=1} (x̂^h_i − x̂^ℓ_i) ≥ r }, and the cut corresponds to choosing x̂^ℓ_j + Σ^j_{i=1} (x̂^h_i − x̂^ℓ_i) − r in dimension j.
7. If this cut separates S′ and p (i.e., it is not in the interval [x^ℓ_j, x^h_j]), then we can use it as the first cut for T′(S′ ∪ {p}). We create a node; one side of the cut is p, and the other side of the node is the tree T(S′).
8. If this cut does not separate S′ and p, then we throw away the cut! We choose the exact same dimension as T(S′) in T′(S′ ∪ {p}) and the exact same value of the cut chosen by T(S′), and perform the split. The point p goes to one of the sides, say the one with subset S″. We repeat this procedure with the smaller bounding box B(S″) of S″. For the other side, we use the same subtree as in T(S′).
9. In either case, we update the bounding box of T′.

Lemma 6. If T were drawn from the distribution RRCF(S), then Algorithm 2 produces a tree T′ which is drawn at random from the probability distribution RRCF(S ∪ {p}).

4 Isolation Forest and Other Related Work

4.1 The Isolation Forest Algorithm

Recall that the isolation forest algorithm uses an ensemble of trees similar to those constructed in Definition 1, with the modification that the dimension to cut is chosen uniformly at random. Given a new point p, the algorithm follows the cuts and computes the average depth of the point across the collection of trees. The point is labeled an anomaly if its score exceeds a threshold, which corresponds to the average depth being small compared to log |S|, where S is a suitably sized sample of the data.

The advantage of the isolation forest is that different dimensions are treated independently and the algorithm is invariant to scaling different dimensions differently. However, consider the following example.

Example 1 (Irrelevant Dimensions). Suppose we have two clusters of 1000 points each, corresponding to x1 = ±5 in the first dimension and xi = 0 in all remaining dimensions i. In all coordinates (including x1) we add random Gaussian noise with mean 0 and standard deviation 0.01, simulating white noise. Now consider 10 points with x1 = 0 and the same behavior in all the other coordinates. When d = 2, the small cluster of points in the center is easily separated by the isolation forest algorithm, which treats the dimensions independently. When d = 30, the vast majority of cuts are in irrelevant dimensions and the algorithm fails (when run on the entire data), as shown in Figure 3a for a single trial over 100 trees. For 10 trials (on the same data set), the algorithm determined that 430, 270, 147, 220, 48, 244, 193, 158, 250, and 103 points had the same or higher anomaly score than the point with the highest anomaly score among the 10 points (the identity of this point varied across the trials). In essence, the algorithm either produces too many false alarms or does not have good recall. Note that AUC is not a relevant measure here, since the class sizes between anomalous and non-anomalous points are skewed 1:200. The results were consistent across multiple data sets generated according to the example. Figure 3b shows a
corresponding single trial using CODISP(). The CODISP() measure places the 10 points among the largest 20 values most of the time. Example 1 shows that scale independence can therefore be a negative feature if distance is a meaningful concept in the dataset. However, in many tasks that depend on detecting anomalies, the relevance of different dimensions is often unknown. The question of determining the appropriate scale of measurement often has far-reaching consequences in data analysis.

Figure 3: The result of running isolation forest and CODISP() on the input in Example 1 for d = 30. (a) Performance of isolation forest (Liu et al., 2012): note that the score never exceeds 0.3, whereas a score of 0.5 corresponds to an outlier; note also that the two clusters are not distinguishable from the 10 points near the origin in depth values (color). (b) Performance of CODISP(x, Z, |Z|): observe that the clusters and outliers are separated; some of the extremal points in the clusters have the same (collusive) displacement as the 10 points near the origin, which is expected.

A modified version of the above example is also helpful in arguing why the depth of a point is not always helpful in characterizing anomalies, even in low dimensions. Consider:

Example 2 (Held-Out Data). Consider the same dataset as in Example 1 in d = 2 dimensions. Suppose that we have only sampled 100 points and all the samples correspond to x1 = ±5. Suppose we now want to evaluate: is the point (0,0) an anomaly? Based on the samples, the natural answer is yes. The scoring mechanism of the isolation forest algorithm fails because, once the two clusters are separated, this new point (0,0) behaves as a point in one of the two clusters! The situation, however, changes completely if we include
The situation however changes completely if we include (00)to build the trees The example explains why the isolation forest algorithm is sensitive to sample size However most anomalies are not usually seen in samples – anomaly detection algorithmsshould be measured on held out data Note that Theorem 5 can efficiently solve the issue raised in Example 2 by an swering the contrafactual question of what is the expectedheight has we observed (00)in the sample (without re building the trees) However expected depth seems to gen erate more false alarms as we investigate this issue further in the supplementary material42 Other Related Work The problem of (unsupervised) outlier detection has a rich literature We survey some of the work here; for an extensive survey see (Aggarwal 2013; Chandola et al 2009) and references therein We discuss some of techniqueswhich are unrelated to the concepts already discussed Perhaps the most obvious definition of an anomaly is density based outlier detection which posits that a low probability events are likely anomalous This has led to different approaches based on estimating the density of datasets For points in R nKnorr & Ng (1997; 1998; 1999); Knorr et al (2000) estimate the density by looking at the number of points that are within a ball of radius do fag i v e n data point The lower this number the more anomalous thedata point is This approach may break down when different parts of the domain have different scales To remedythis there a methods (Breunig et al 1999; 2000) that look at the density around a data point compared to its neighborhood A variation of the previous approach is to consider a fixed knumber of nearest neighbors and base the anomaly score on this (Eskin et al 2002; Zhang & Wang 2006) Here the anomaly score is monotonically increasing in the distances to the knearestneighbors Taking the idea of density one step further some authors have looked at finding structure in the data through clustering The intuition here is 
that for points that cannot easily be assigned to a cluster, there is no good explanation for their existence. There are several clustering algorithms that work well to cluster part of the data, such as DBSCAN (Ester et al., 1996) and STREAM (Guha et al., 2003). Additionally, FindOut (Yu et al., 2002) removes points it cannot cluster and then recurses. Finally, the notion of sketching used in this paper is orthogonal to the notion used in (Huang & Kasiviswanathan, 2015), which uses streaming low-rank approximation of the data.

5 Experiments

In the experiments we focus on datasets where anomalies are visually verifiable and interpretable. We begin with a synthetic dataset that captures the classic diurnal rhythm of human activity. We then move to a real dataset reflecting taxi ridership in New York City. In both cases we compare the performance of RRCF with IF.

A technique that turns out to be useful for detecting anomalies in streams is shingling. If a shingle of size 4 is passed over a stream, the first 4 values of the stream, received at times t1, t2, t3, t4, are treated as a 4-dimensional point. Then, at time t5, the values at times t2, t3, t4, t5 are treated as the next four-dimensional point. The window slides over one unit at each time step. A shingle encapsulates a typical shape of a curve; a departure from the typical shape could be an anomaly.

5.1 Synthetic Data

Many real datasets implicitly reflect human circadian rhythms. For example, an eCommerce site may monitor the number of orders it receives per hour. Search engines may monitor search queries or ad clicks per minute. Content delivery networks may monitor requests per minute. In these cases there is a natural tendency to expect higher values during the day and lower values at night. An anomaly may reflect an unexpected dip or spike in activity. In order to test our algorithm, we synthetically generated a sine wave in which a dip is artificially injected around timestamp 500 that lasts for 20 time
units. The goal is to determine whether our anomaly detection algorithm can spot the beginning and end of the injected anomaly. The experiments were run with a shingle of length four and one hundred trees in the forest, where each tree is constructed with a uniform random reservoir sample of 256 points. We treat the dataset as a stream, scoring a new point at time t + 1 with the data structure built up until time t.

Figure 4: The top blue curve represents a sine wave with an artificially injected anomaly; the bottom red curve shows the anomaly score over time. (a) The anomaly score produced by IF; note that the start of the anomaly is missed. (b) The anomaly score produced by RRCF; both the beginning and end of the anomaly are caught.

In Figure 4a we show the result of running IF on the sine wave. For anomalies, detecting the onset is critical, and even more important than detecting the end of an anomaly. Note that IF misses the start of the anomaly at time 500. The end of the anomaly is detected; however, by then the system has come back to its normal state, and it is not useful to fire an alarm once the anomaly has ended. Next consider Figure 4b, which shows the result of running RRCF on the same sine wave. Observe that the two highest-scoring moments in the stream are the end and the beginning of the anomaly. The anomaly is successfully detected by RRCF. While the result of only a single run is shown, the experiment was repeated many times, and the picture shown in Figure 4 is consistent across all runs.

5.2 Real-Life Data: NYC Taxicabs

Next we conduct a streaming experiment using taxi ridership data from the NYC Taxi Commission.² We consider a stream of the total number of passengers aggregated over a 30-minute time window. Data is collected over a 7-month time period from 7/14 to 1/15. Note that while this is a 1-dimensional dataset, we treat it as a 48-dimensional dataset, where each point in the stream is represented by a sliding window, or shingle, of the last day of data (ignoring the first day of data). The intuition is that the last day of activity captures a typical shape of passenger ridership.

The following dates were manually labeled as anomalies based on knowledge of holidays and events in NYC (Lavin & Ahmad, 2015): Independence Day (7/4/14–7/6/14), Labor Day (9/1/14), Labor Day Parade (9/6/14), NYC Marathon (11/02/14), Thanksgiving (11/27/14), Christmas (12/25/14), New Year's Day (1/1/15), and the North American Blizzard (1/26/15–1/27/15). For simplicity, we label a 30-minute window an anomaly if it overlaps one of these days.

Stream. We treat the data as a stream: after observing points 1, ..., i, our goal is to score the (i+1)st point. The score that we produce for (i+1) is based only on the previous data points 1, ..., i, but not their labels. We use IF as the baseline. While a streaming version was subsequently published (Tan et al., 2011), since it was not found to improve over IF (Emmott et al., 2013), we consider a more straightforward adaptation. Since each tree in the forest is created based on a random sample of data, we simply build each tree based on a random sample of the stream, e.g., uniform or time-decayed, as previously referenced. Our aim here is to compare to the baseline with respect to accuracy, not running time. Each tree can be updated in an embarrassingly parallel manner for a faster implementation.

Metrics. To quantitatively evaluate our approach, we report on a number of precision/recall-related metrics. We learn a threshold for a good score on a training set and report the effectiveness on a held-out test set. The training set contains all points before time t and the test set all points after time t. The threshold is chosen to optimize the F1
measure (the harmonic mean of precision and recall). We focus our attention on positive precision and positive recall to avoid "boy who cried wolf" effects (Tsien & Fackler, 1997; Lawless, 1994).

²http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml

Table 1: Comparison of baseline isolation forest (IF) to the proposed robust random cut forest (RRCF). Standard deviations are in parentheses.

Method | Sample Size | Positive Precision | Positive Recall | Negative Precision | Negative Recall | Accuracy | AUC
IF | 256 | 0.42 (0.05) | 0.37 (0.02) | 0.96 (0.00) | 0.97 (0.01) | 0.93 (0.01) | 0.83 (0.01)
RRCF | 256 | 0.87 (0.02) | 0.44 (0.04) | 0.97 (0.00) | 1.00 (0.00) | 0.96 (0.00) | 0.86 (0.00)
IF | 512 | 0.48 (0.05) | 0.37 (0.01) | 0.97 (0.01) | 0.96 (0.00) | 0.94 (0.00) | 0.86 (0.00)
RRCF | 512 | 0.84 (0.04) | 0.50 (0.03) | 0.99 (0.00) | 0.97 (0.00) | 0.96 (0.00) | 0.89 (0.00)
IF | 1024 | 0.51 (0.03) | 0.37 (0.01) | 0.96 (0.00) | 0.98 (0.00) | 0.94 (0.00) | 0.87 (0.00)
RRCF | 1024 | 0.77 (0.03) | 0.57 (0.02) | 0.97 (0.00) | 0.99 (0.00) | 0.96 (0.00) | 0.90 (0.00)

Table 2: Segment-level metrics and Precision@K. Times are in 30-minute increments; standard deviations are in parentheses.

Method | Segment Precision | Segment Recall | Time to Detect Onset | Time to Detect End | Prec@5 | Prec@10 | Prec@15 | Prec@20
IF | 0.40 (0.09) | 0.80 (0.09) | 22.68 (3.05) | 23.30 (1.54) | 0.52 (0.10) | 0.50 (0.00) | 0.34 (0.02) | 0.28 (0.03)
RRCF | 0.65 (0.14) | 0.80 (0.00) | 13.53 (2.05) | 10.85 (3.89) | 0.58 (0.06) | 0.49 (0.03) | 0.39 (0.02) | 0.30 (0.00)

For the finer-granularity data in the taxi cab data set, we view the ground truth as segments of time when the data is in an anomalous state. Our goal is to quickly and reliably identify these segments. We say that a segment is identified in the test set if the algorithm produces a score over the learned threshold at any time during the segment (including the sliding window, if applicable).

Results. In the experiments there were 200 trees in the forest, each computed based on a random sample of 1K points. Note that varying the sample size does not alter the nature of our conclusions. Since ridership today is likely similar to ridership tomorrow, we set our time-decayed sampling parameter to the last two months of ridership. All results are averaged over multiple runs.
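The threshold-learning step described above can be sketched as follows (our illustration, not the authors' code): scan candidate thresholds over the training scores and keep the one that maximizes F1; the learned threshold is then applied unchanged to the held-out test scores.

```python
def f1_score(labels, preds):
    """F1 = harmonic mean of precision and recall for boolean labels/predictions."""
    tp = sum(1 for l, p in zip(labels, preds) if l and p)
    fp = sum(1 for l, p in zip(labels, preds) if not l and p)
    fn = sum(1 for l, p in zip(labels, preds) if l and not p)
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def learn_threshold(train_scores, train_labels):
    """Return the anomaly-score threshold that maximizes F1 on the training split."""
    best_t, best_f1 = None, -1.0
    for t in sorted(set(train_scores)):
        preds = [s >= t for s in train_scores]
        f1 = f1_score(train_labels, preds)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t
```

For example, with training scores [0.1, 0.2, 0.9, 0.8] and labels [False, False, True, True], the learned threshold is 0.8, which flags exactly the two labeled anomalies.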
Ten runs were used; the standard deviation is also reported. Figure 5 shows the anomaly scores returned by CODISP().

Figure 5: NYC taxi data and CODISP(). Note that Thanksgiving is not captured.

In a more detailed evaluation, the first set of results (Table 1) shows that the proposed RRCF method is more accurate than the baseline. Particularly noteworthy is RRCF's higher positive precision, which implies a lower false alarm rate. In Table 2 we show the segment-based results. Whereas Table 1 may give more credit for catching a long anomaly over a short one, the segment metric weighs each alarm equally. The proposed RRCF method not only catches more alarms but also catches them more quickly. The units are measured in 30-minute increments, so it takes 11 hours on average to catch an alarm with the baseline and 7 hours with the RRCF method. The actual numbers are not as important here, since anomaly start/end times are labeled somewhat loosely; the difference in time to catch does matter. Precision@K is also reported in Table 2.

Discussion. Shingle size, if used, matters in the sense that shingles that are too small may catch naturally varying noise in the signal and trigger false alarms. On the other hand, shingles that are too large may increase the time it takes to find an alarm, or miss the alarm altogether. Time decay requires knowledge of the domain. Sample size choice had less effect: with varying sample sizes of 256, 512, and 1K, the conclusions are unchanged on this dataset.

6 Conclusions and Future
Work

We introduced the robust random cut forest sketch and proved that it approximately preserves pairwise distances. If the data is recorded in the correct scale, distance is crucially important to preserve for computations, and not just for anomaly detection. We adopted a model-based definition of an anomaly that captures the differential effect of adding/removing a point on the size of the sketch. Experiments suggest that the algorithm holds great promise for fighting alarm fatigue, as well as for catching more missed alarms.

We believe that the random cut forest sketch is more beneficial than what we have established. For example, it may also be helpful for clustering, since pairwise distances are approximately preserved. In addition, it may help detect changepoints in a stream. A changepoint is a moment in time t where before time t the data is drawn from a distribution D1 and after time t the data is drawn from a distribution D2, and D1 is sufficiently different from D2 (Kifer et al., 2004; Dasu et al., 2006). By maintaining a sequence of sketches over time, one may be able to compare two sketches to determine if the distribution has changed.

Acknowledgments

We thank Roger Barga, Charles Elkan, and Rajeev Rastogi for many insightful discussions. We also thank Dan Blick, Praveen Gattu, Gaurav Ghare, and Ryan Nienhuis for their help and support.

References

Aggarwal, Charu C. Outlier Analysis. Springer, New York, 2013.

Bentley, Jon Louis. Multidimensional binary search trees used for associative searching. Commun. ACM, 18(9):509–517, September 1975.

Breiman, Leo. Random forests. Machine Learning, pp. 5–32, 2001.

Breunig, Markus M., Kriegel, Hans-Peter, Ng, Raymond T., and Sander, Jörg. OPTICS-OF: Identifying local outliers. In PKDD, pp. 262–270, 1999.

Breunig, Markus M., Kriegel, Hans-Peter, Ng, Raymond T., and Sander, Jörg. LOF: identifying density-based local outliers. In ACM SIGMOD Record, volume 29, pp. 93–104, 2000.

Chandola, Varun, Banerjee, Arindam, and Kumar, Vipin. Anomaly
detection: A survey. ACM Computing Surveys (CSUR), 41(3):15, 2009.

Charikar, Moses, Chekuri, Chandra, Goel, Ashish, Guha, Sudipto, and Plotkin, Serge. Approximating a finite metric by a small number of tree metrics. In Proceedings of Foundations of Computer Science, pp. 379–388, 1998.

Dasu, Tamraparni, Krishnan, Shankar, Venkatasubramanian, Suresh, and Yi, Ke. An information-theoretic approach to detecting changes in multidimensional data streams. In Proc. Symp. on the Interface of Statistics, Computing Science, and Applications, 2006.

Efraimidis, Pavlos S. and Spirakis, Paul G. Weighted random sampling with a reservoir. Information Processing Letters, 97(5):181–185, 2006.

Emmott, Andrew F., Das, Shubhomoy, Dietterich, Thomas, Fern, Alan, and Wong, Weng-Keen. Systematic construction of anomaly detection benchmarks from real data. In ACM SIGKDD Workshop on Outlier Detection and Description, pp. 16–21, 2013.

Eskin, Eleazar, Arnold, Andrew, Prerau, Michael, Portnoy, Leonid, and Stolfo, Sal. A geometric framework for unsupervised anomaly detection. In Barbará, Daniel and Jajodia, Sushil (eds.), Applications of Data Mining in Computer Security, pp. 77–101, Boston, MA, 2002.

Ester, Martin, Kriegel, Hans-Peter, Sander, Jörg, and Xu, Xiaowei. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, volume 96, pp. 226–231, 1996.

Finkel, R. A. and Bentley, J. L. Quad trees: a data structure for retrieval on composite keys. Acta Informatica, 4(1):1–9, 1974.

Guha, Sudipto, Meyerson, Adam, Mishra, Nina, Motwani, Rajeev, and O'Callaghan, Liadan. Clustering data streams: Theory and practice. IEEE Trans. Knowl. Data Eng., 15(3):515–528, 2003.

Guttman, Antonin. R-trees: A dynamic index structure for spatial searching. In SIGMOD, pp. 47–57, 1984.

Huang, Hao and Kasiviswanathan, Shiva Prasad. Streaming anomaly detection using randomized matrix sketching. Proceedings of the VLDB Endowment, 9(3):192–203, 2015.

Johnson, William B. and Lindenstrauss, Joram. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26. Providence, RI: American
Mathematical Society, 1984.

Kifer, Daniel, Ben-David, Shai, and Gehrke, Johannes. Detecting change in data streams. In VLDB, pp. 180–191, 2004.

Knorr, Edwin M. and Ng, Raymond T. A unified notion of outliers: Properties and computation. In KDD, pp. 219–222, 1997.

Knorr, Edwin M. and Ng, Raymond T. Algorithms for mining distance-based outliers in large datasets. In VLDB, pp. 392–403, 1998.

Knorr, Edwin M. and Ng, Raymond T. Finding intensional knowledge of distance-based outliers. In VLDB, volume 99, pp. 211–222, 1999.

Knorr, Edwin M., Ng, Raymond T., and Tucakov, Vladimir. Distance-based outliers: algorithms and applications. VLDB Journal, 8(3-4):237–253, 2000.

Lavin, Alexander and Ahmad, Subutai. Evaluating real-time anomaly detection algorithms: the Numenta anomaly benchmark. arXiv:1510.03336, 2015.

Lawless, Stephen T. Crying wolf: false alarms in a pediatric intensive care unit. Critical Care Medicine, 22(6):981–985, 1994.

Lindvall, T. Lectures on the Coupling Method. Wiley, New York, 1992.

Liu, Fei Tony, Ting, Kai Ming, and Zhou, Zhi-Hua. Isolation-based anomaly detection. ACM Trans. Knowl. Discov. Data, 6(1):3:1–3:39, March 2012.

Tan, Swee Chuan, Ting, Kai Ming, and Liu, Fei Tony. Fast anomaly detection for streaming data. In IJCAI, pp. 1511–1516, 2011.

Tsien, Christine L. and Fackler, James C. Poor prognosis for existing monitors in the intensive care unit. Critical Care Medicine, 25(4):614–619, 1997.

Vitter, Jeffrey S. Random sampling with a reservoir. ACM Transactions on Mathematical Software, 11(1):37–57, 1985.

Yu, Dantong, Sheikholeslami, Gholamhosein, and Zhang, Aidong. FindOut: finding outliers in very large datasets. Knowledge and Information Systems, 4(4):387–412, 2002.

Zhang, Ji and Wang, Hai. Detecting outlying subspaces for high-dimensional data: the new task, algorithms and performance. Knowledge and Information Systems, 10(3):333–355, 2006.

Running Adobe Experience Manager on AWS

First published July 2016
Updated November
25 202 0 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Why use AEM on AWS? 1 Adobe Experien ce Manager Overview 3 AEM Platform Overview 3 Repositories 4 AEM Implementation on AWS 6 Self or Partner Managed Deployment 6 AEM Managed Services 6 Architecture Options 7 Reference Architecture 7 Reference Ar chitecture Components 7 AEM OpenCloud 11 Security 15 Compliance and GovCloud 17 Digital Asset Management 18 Automate d Deployment 18 Automated Operations 19 Additional AWS Services 20 Conclusion 20 Contributors 20 Further Reading 21 Document Revisions 21 Abstract This whitepaper outlines the benefits and strategy for hosting for Adobe Experience Manager ( AEM ) on Amazon Web Services ( AWS ) It discusses various migration strategies architecture choices and deployment strategies including a reference architecture for self hosting on AWS It also provides guidance for disaster recovery DevO ps and high compliance workloads su ch as government finance and healthcare This whitepaper is for technical leaders and business leaders responsible for deploying and managing AEM on AWS Amazon Web Services Running Adobe Experience Manager on AWS 1 Introduction Delivering a fast secure and seamless experience is essential i n today’s digital 
marketing environment. The need to reach a broader audience across all devices is essential, and a shorter time to market can be a differentiator. Companies are turning to cloud-based solutions to boost business agility, harness new opportunities, and gain cost efficiencies. Adobe Experience Manager (AEM) is a comprehensive content management solution for building websites, mobile apps, and forms. AEM makes it easy to manage your marketing content and assets. Adopting AWS for running AEM presents many benefits, such as increased business agility, added flexibility, and reduced costs.

This whitepaper provides technical guidance for running AEM on AWS. With any deployment on AWS there are many different considerations and options, so your approach might differ from the approach we walk through in this paper. The whitepaper concludes by discussing security and compliance, architectural components, connectivity, and a strategy you can employ for migration.

Why use AEM on AWS?
Hosting AEM on AWS offers some key benefits, such as global capacity, security, reliability, fault tolerance, programmability, and usability. This section discusses several ways in which deploying AEM on AWS differs from deploying it to an on-premises infrastructure.

Flexible Capacity
One of the benefits of using the AWS Cloud is the ability to scale up and down as needed. When using AEM, you have full freedom to scale all of your environments quickly and cost-effectively, giving you opportunities to establish new development, quality assurance (QA), and performance testing environments. AEM is frequently used in scenarios that have unknown or significant variations in traffic volume. The on-demand nature of the AWS platform allows you to scale your workloads to support your unique traffic peaks during key events such as holiday shopping seasons, major sporting events, and large sale events. Flexible capacity also streamlines upgrades and deployments.
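As a minimal illustration of how quickly a throwaway environment can be stood up, the sketch below builds the parameters for launching a publish-tier test instance from an existing AMI with boto3. The AMI ID, subnet ID, and tag values are hypothetical placeholders, not values from this paper:

```python
# Sketch: parameters for launching a parallel AEM test environment on EC2.
# The AMI ID, subnet ID, and tag values below are hypothetical placeholders.

def build_test_env_params(ami_id, subnet_id, instance_type="m5.xlarge"):
    """Build EC2 RunInstances keyword arguments for a temporary test copy
    of an AEM publish instance, tagged so it is easy to find and terminate."""
    return {
        "ImageId": ami_id,              # AMI baked from the production publish tier
        "InstanceType": instance_type,  # general-purpose M5, per the sizing guidance
        "MinCount": 1,
        "MaxCount": 1,
        "SubnetId": subnet_id,
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "Environment", "Value": "aem-upgrade-test"}],
        }],
    }

params = build_test_env_params("ami-0123456789abcdef0", "subnet-0abc1234")
# With AWS credentials configured, the instance could then be launched with:
#   import boto3
#   boto3.client("ec2").run_instances(**params)
```

Because the environment is disposable, the same tag can later drive a cleanup step that terminates every instance carrying it.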
AWS makes it very easy to set up a parallel environment so you can migrate and test your application and content in a production-like environment. Performing the actual production upgrade itself can then be as simple as a change of a Domain Name System (DNS) entry.

Broad Set of Capabilities
As a leading web content management system solution, AEM is often used by customers as the foundation of their digital marketing platform. Running AEM on AWS provides customers with the benefits of easily integrating third-party solutions for auxiliary experiences, such as blogs, and providing additional tools for supporting mobile delivery, analytics, and big data management. You can integrate the open and extensible APIs of both AWS and AEM to create powerful new combinations for your firm. AEM can also be used to augment or create headless commerce architectures seamlessly. With services like Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS), and AWS Lambda, AEM functionality can easily be integrated with other third-party functionality in a decoupled fashion. AWS can also provide a clean, manageable, and auditable approach to decoupled integration with backend systems such as customer relationship management (CRM) and commerce systems.

Benefits of Cloud and Global Availability
Organizations considering a transition to the cloud are often driven by their need to become more agile and innovative. The traditional capital expenditure (CapEx) funding model makes it difficult to quickly test new ideas. The AWS Cloud model gives you the agility to quickly spin up new instances on AWS and the ability to try out new services without large, upfront sunk costs (that is, costs that have already been incurred and can't be recovered). AWS helps to lower customer costs through its pay-for-what-you-use pricing model. Also, as of this writing, the AWS Global Infrastructure spans 24 geographic Regions around the world, enabling customers to deploy on a global
footprint quickly and easily.

Security and High Compliance Workloads
Using AWS, you will gain the control and confidence you need to safely run your business with the most flexible and secure cloud computing environment available today. With AWS, you can improve your ability to meet core security and compliance requirements with a comprehensive set of services and features. The AWS Compliance Programs will help you understand the robust controls in place at AWS to maintain security and compliance in the cloud. Compliance certifications and attestations are assessed by a third-party, independent auditor. Running AEM on AWS provides customers with the benefits of leveraging the compliance and security capabilities of AWS, along with the ability to monitor and audit access to AEM using AWS Security, Identity, and Compliance services. AWS also offers the AWS GovCloud (US) Regions, which are designed to host sensitive data and regulated workloads and to address the most stringent US government security and compliance requirements.

Adobe Experience Manager Overview
This section highlights some of the key technical elements of AEM and offers some best practice recommendations. This whitepaper focuses on AEM 6.5 (released April 2019).

AEM Platform Overview
A standard AEM architecture consists of three environments: author, publish, and dispatcher. Each of these environments consists of one or more instances.

Figure 1 – Sample AEM architecture

The author environment is used for creating and managing the content and layout of an AEM experience. It provides functionality for reviewing and approving content updates and publishing approved versions of content to the publish environment. The publish environment delivers the experience to the intended audience. It renders the actual pages, with an ability to personalize the experience based on audience characteristics or targeted
messaging.

The author and publish instances are Java web applications that have identical installed software. They are differentiated by configuration only. The dispatcher environment is a caching and/or load-balancing tool that helps realize a fast and dynamic web authoring environment. For caching, the dispatcher works as part of an HTTP server, such as Apache HTTP Server, with the aim of storing (caching) as much of the static website content as possible and accessing the website's publisher layout engine as infrequently as possible. The dispatcher module uses the web server's ability to serve static content, placing the cached documents in the document root of the web server.

Repositories
Within AEM, everything is content and is stored in the underlying repository. AEM's repository is called CRX; it implements the Content Repository API for Java (JCR) and is based on Apache Jackrabbit Oak.

Figure 2 – AEM storage options

The Oak storage layer provides an abstraction layer for the actual storage of the content. MicroKernels act as persistence managers in AEM. There are two primary storage implementations available in AEM 6: Tar storage and MongoDB storage. Tar storage uses tar files; it stores the content as various types of records within larger segments, and journals are used to track the latest state of the repository. MongoDB
instance deployments a cold standby TarMK instance can be configured in another availability zone to provide backup in case the primary author instance fails although the failover is not automatic TarMK is the default persistence system in AEM for both author and publish configurations Although AEM can be configured to use a different persistence system (such as MongoDB ) TarMK is performance optimized for typical JCR use cases and is very fast TarMK uses an industry standard data format that can be quickly and easily backed up providing high performance and reliable data storage with minimal operational overhead and lower total cost of ownership (TCO) MongoDB is recommended for AEM author deployments when there are more than 1000 unique users per day 100 concurrent users or high volumes of page edits (For details r efer to When to use Mongo DB ) MongoDB provides high availability redundancy and automated failovers for author instances although performance can be lower than TarMK A minimum deployment with MongoDB typically involves a MongoDB replica consisting of one primary node and two secondary nodes with each node running in its separate availability zone In AEM binary data can be stored independently from the content nodes The binary data is stored in a data store whereas content nodes are stored in a node store You can use Amazon Simple Storage Service (Amazon S3) as a shared datasto re between publish and author instances to store binary files This approach makes the cluster high performant For details see How to configure S3 as a datastore Amazon Web Services Running Adobe Experience Manager on AWS 6 AEM Implementation on AWS This section outline s the following two deployment options and the key design elements to consider for deploying AEM on AWS • Self or partner managed deployment • AEM Managed Services by Adobe Self or Partner Managed Deployment In a self managed deployment the organization itself is responsible for the deployment and maintenance of 
AEM and the underlying AWS infrastructure. In a partner-managed deployment, the organization engages a partner from the AWS Partner Network (APN) for the deployment and maintenance of AEM and the underlying AWS infrastructure. AEM customizations in both models can be done by the organization or the partner.

For organizations that cannot manage their own deployment of AEM on AWS (either because they do not have the resources or because they are not comfortable doing so), there are several APN Partners that specialize in providing managed hosting deployments of AEM on AWS. These companies take care of all aspects of deploying, securing, patching, and maintaining AEM. Some partners also provide design services and custom development for AEM. You can use AWS Partner Finder to find and compare providers that specialize in Adobe products on AWS.

AEM Managed Services
AEM Managed Services by Adobe enables customers to launch faster by deploying on the AWS Cloud and by leaning on best practices and support from Adobe. Organizations and business users can engage customers in minimal time, drive market share, and focus on creating innovative marketing campaigns, while reducing the burden on IT. Cloud Manager, part of the AEM Managed Services offering, is a self-service portal that further enables organizations to self-manage AEM in the cloud. It includes a continuous integration and continuous delivery (CI/CD) pipeline that lets IT teams and implementation partners speed up the delivery of customizations or updates without compromising performance or security. Cloud Manager is only available to Adobe Managed Services customers.

Architecture Options
This section presents a reference architecture for running AEM on AWS, along with various architectural options to consider when planning an AEM on AWS deployment. Alternatively, you can consider adopting AEM OpenCloud, an open-source framework for running AEM on AWS.

Reference
Architecture
The following reference architecture is recommended for both self- and partner-managed deployment methods. For reference architecture details, see Hosting Adobe Experience Manager on AWS.

Figure 3 – AEM on AWS reference architecture

Reference Architecture Components

Architecture Sizing
For AEM, the right instance type depends on the usage scenario. For AEM author and publish instances in the most common publishing scenario, a solid mix of memory, CPU, and I/O performance is necessary. Therefore, the Amazon EC2 General Purpose M5 family of instances are good candidates for these environments, depending upon sizing. Amazon EC2 M5 instances are the next generation of Amazon EC2 General Purpose compute instances, offering a balance of compute, memory, and networking resources for a broad range of workloads. Additionally, M5d, M5dn, and M5ad instances have local storage, offering up to 3.6 TB of NVMe-based SSDs.

AEM Dispatcher is installed on a web server (Apache httpd on an Amazon EC2 instance), and it is a key caching layer providing caching, load balancing, and application security. Therefore, sizing memory and compute is important, but optimization for I/O is critical for this tier; Amazon Elastic Block Store (Amazon EBS) I/O-optimized volumes are recommended. Each dispatcher instance is mapped to a publish instance in a one-to-one fashion in each Availability Zone.

For all of these instances, Amazon EBS optimization is important. EBS volumes on which AEM is installed should use either General Purpose SSD (gp2) volumes or Provisioned IOPS (input/output operations per second) volumes. This configuration provides a specific level of performance and lower latency for operations.

Adobe recommends an Intel Xeon or AMD Opteron CPU with at least 4 cores and 16 GB of RAM for AEM environments, which translates to the Amazon EC2 m5.xlarge instance type. Typically, you can start with the m5.2xlarge instance type and then adjust based
on your workload needs. For guidance on selecting the right instance, refer to the Adobe hardware sizing guide.

The specific number of servers you need depends on your AEM use case (for example, experience management or digital asset management) and the level of caching that should be applied. At minimum, you need five servers in total for a high-availability configuration utilizing two Availability Zones: this architecture places a dispatcher-publisher pair in each of the two Availability Zones and a single author node in one Availability Zone (fronting each of the publish instances with a dispatcher instance). For guidelines on calculating the number of servers required, refer to the Adobe support site.

Load Balancing
In an AEM setup, Elastic Load Balancing is configured to balance traffic to the dispatchers. By default, a load balancer distributes incoming requests evenly across its enabled Availability Zones (AZs). To ensure that a load balancer distributes incoming requests evenly across all backend instances (regardless of the Availability Zone that they are in), enable cross-zone load balancing.

For authenticated AEM experiences, authentication is maintained by a login token. When a user logs in, the token information is stored under the tokens node of the corresponding user node in the repository. The value of the token (that is, the session ID) is also stored in the browser as a cookie named login-token. In this case, the load balancer should be configured for sticky sessions, routing requests carrying the login-token cookie to the same instance. AEM can be configured to recognize the authentication cookie across all publish instances; however, this also requires that all relevant user session information (for example, a shopping cart) is available across all publish instances.

Elastic Load Balancing can be used in front of the dispatchers to provide a single CNAME URL for the application. The load balancer
, in conjunction with AWS Certificate Manager, can be used to provide HTTPS access and to offload SSL. By using the load balancer, you can further secure your website deployment by moving the publisher instances into a private subnet, allowing access only from the load balancer. The load balancer can also translate the port, accepting traffic on port 80 and forwarding it to the default publish port 4503.

High Availability
For a highly available AEM architecture, the setup should leverage AWS strengths. Configure each instance in the AEM cluster for Amazon EC2 auto recovery. Additionally, when the cluster is built in conjunction with a load balancer, you can use AWS Auto Scaling to automatically provision nodes across multiple Availability Zones. We recommend that you provision nodes across multiple Availability Zones for high availability, and use multiple AWS Regions to address global deployment considerations as needed. In a multi-Region deployment, you can set up Amazon Route 53 to perform DNS failover based on health checks.

Scaling
A simple way to accomplish scaling is to create separate Amazon Machine Images (AMIs) for the publish instance, the dispatcher instance (mapped to publish), and the dispatcher instance (mapped to author, if in use). Three separate launch configurations can be created using these AMIs and included in separate Auto Scaling groups. Newly launched dispatcher instances require a corresponding publish instance and need to be known to the author instance to receive future invalidation calls. AWS Lambda can provide scaling logic in response to scale-up/down events from Auto Scaling groups. The scaling logic consists of pairing/unpairing the newly launched dispatcher instance to an available publish instance (or the other way around), updating the replication agent (reverse replication, if applicable) between the newly launched publish instance and the author instance, and updating AEM content health check alarms. Each dispatcher
instance is mapped to a publish instance in a one-to-one fashion, in separate Availability Zones.

For faster startup and synchronization, you can place the AEM installation on a separate Amazon EBS volume. By taking frequent snapshots of the volume and attaching those snapshots to newly launched instances, the need to replicate large amounts of data from the author can be cut down. In the startup process, the publish instance can then trigger author-publish replication to fully ensure the latest content.

Content Delivery
AEM can use a content delivery network (CDN), such as Amazon CloudFront, as a caching layer in addition to the standard AEM dispatcher. When you use a CDN, you need to consider how content is invalidated and refreshed in the CDN when content is updated. Explicit configuration regarding how long particular resources are held in the CloudFront cache, along with expiration and cache-control headers sent by the dispatcher, can help in controlling the CDN cache. Cache-control headers can be managed using the mod_expires Apache module. For API-based invalidation associated with content replication, one approach is to build a custom invalidation workflow and set up an AEM replication agent that uses your own ContentBuilder and TransportHandler to invalidate the Amazon CloudFront cache through its API. For more details, refer to Using Dispatcher with a CDN.

Dynamic Content
The dispatcher is the caching layer that ships with the AEM product. It allows for defining caching rules at the web server layer. To realize the full benefit of the dispatcher, pages should be fully cacheable; any element that isn't cacheable will "break" the cache functionality. To incorporate dynamic elements in a static page, the recommended approach is to use client-side JavaScript, Edge Side Includes (ESIs), or web-server-level Server Side Includes (SSIs). Within an AWS environment, ESIs can be configured using a solution such as Varnish, replacing the dispatcher; however, such a configuration may not be
supported by Adobe.

Amazon S3 Data Store
Binary data can be stored independently from the content nodes in AEM. When deploying on AWS, the binary data store can be Amazon S3, simplifying management and backups. The binary data store can then also be shared across author instances, and even between author and publish instances, reducing overall storage and data transfer requirements. Refer to the Amazon S3 Data Store documentation by Adobe to learn how to configure S3 for AEM.

AEM OpenCloud
AEM OpenCloud is an open-source platform for running AEM on AWS. It provides an out-of-the-box solution for provisioning a highly available AEM architecture, implementing auto scaling, auto recovery, chaos testing, CDN, multi-level backup, blue-green deployment, repository upgrade, security, and monitoring capabilities by leveraging a multitude of AWS services. The AEM OpenCloud code base is open source and available on GitHub under an Apache 2 license. The code base is maintained by Shine Solutions Group, an APN Partner. You are free to use AEM OpenCloud on your own or engage Shine Solutions Group for custom use cases and implementation support.

AEM OpenCloud supports multiple AEM versions, from 6.2 to 6.5, using the Amazon Linux 2 or RHEL 7 operating system, with two architecture options: full-set and consolidated. The platform can be built and run in multiple AWS Regions. It is highly configurable and provides a number of customization points where users can provision various other software into their AEM environment through provisioning automation. AEM OpenCloud is available through the AEM OpenCloud on AWS Quick Start, an architecture based on AWS best practices that you can easily launch in a few clicks.

AEM OpenCloud Full-Set Architecture
A full-set architecture is a full-featured environment suitable for production and staging environments. It includes AEM Publish, Author Dispatcher, and Publish Dispatcher EC2 instances within Auto Scaling groups, which
(combined with an Orchestrator application) provide the capability to manage AEM capacity as the instances scale out and scale in, corresponding to the load on the dispatcher instances. The Orchestrator application manages AEM replication and flush agents as instances are created and terminated.

This architecture also includes a chaos testing capability using Netflix Chaos Monkey, which can be configured to randomly terminate instances within the Auto Scaling groups; allowing it to run continuously in production verifies that AEM OpenCloud can automatically recover from failure. AEM Author Primary and Author Standby are managed separately: a failure of the Author Primary instance can be mitigated by promoting an Author Standby to become the new Author Primary as soon as possible, while a new environment is built in parallel and then takes over, replacing the environment that lost its Author Primary.

The full-set architecture uses Amazon CloudFront as the CDN in front of the AEM Publish Dispatcher load balancer, providing global distribution of AEM cached content. The full-set architecture offers three types of content backup mechanisms: AEM package backup, live AEM repository EBS snapshots (taken when all AEM instances are up and running), and offline AEM repository EBS snapshots (taken when AEM Author and Publish are stopped). You can use any of these backups for blue-green deployment, providing the capability to replicate a complete environment or to restore an environment from any point in time.

Figure 4 – AEM OpenCloud full-set architecture

On the security front, this architecture provides a minimal attack surface, with one public entry point to either the Amazon CloudFront distribution or the AEM Publish Dispatcher load balancer, whereas the other entry point is the AEM Author Dispatcher load balancer. AEM OpenCloud
supports encryption using AWS Key Management Service (AWS KMS) keys across its AWS resources. The full-set architecture also includes an Amazon CloudWatch monitoring dashboard, which visualizes the capacity of the AEM Author Dispatcher, Author Primary, Author Standby, Publish, and Publish Dispatcher instances, along with their CPU, memory, and disk consumption. Amazon CloudWatch alarms are also configured across the most important AWS resources, allowing notification via an SNS topic.

Consolidated Architecture
A consolidated architecture is a cut-down environment where an AEM Author Primary, an AEM Publish, and an AEM Dispatcher all run on a single Amazon EC2 instance. This architecture is a low-cost alternative suitable for development and testing environments. It offers the same three types of backup as the full-set architecture, and the backup AEM packages and EBS snapshots are interchangeable between consolidated and full-set environments. This is useful, for example, when you want to restore a production backup from a full-set environment to multiple development environments running the consolidated architecture. Another example is when you want to upgrade an AEM repository to a newer version in a development environment and then push it through testing, staging, and eventually production.

Figure 5 – AEM OpenCloud consolidated architecture

Environment Management
To manage multiple environments with a mixture of full-set and consolidated architectures, AEM OpenCloud has a Stack Manager that handles command executions within AEM instances via AWS Systems Manager. These commands include taking backups, checking environment readiness, running the AEM security checklist, enabling and disabling CRXDE and SAML, deploying multiple AEM packages configured in a descriptor, flushing the AEM Dispatcher cache, and promoting the AEM Author Standby instance to Primary. Other than the Stack Manager, there is
also AEM OpenCloud Manager, which currently provides Jenkins pipelines for creating and terminating AEM full-set and consolidated architectures, baking AEM Amazon Machine Images (AMIs), executing operational tasks via the Stack Manager, and upgrading an AEM repository between versions (for example, from AEM 6.2 to 6.4, or from AEM 6.4 to 6.5).

Figure 6 – AEM OpenCloud Stack Manager

Security
The security of the AEM hosting environment can be broken down into two areas: application security and infrastructure security. A crucial first step for application security is to follow the Security Checklist for AEM and the Dispatcher Security Checklist. These checklists cover various aspects of security, from running AEM in production mode to using the mod_rewrite and mod_security modules from Apache to help prevent distributed denial of service (DDoS) attacks and cross-site scripting (XSS) attacks.

At the infrastructure level, AWS provides several security services to secure your environment. These services are grouped into five main categories: network security; data protection; access control; detection, audit, monitoring, and logging; and incident response.

Network Security
One of the core components of network security is Amazon Virtual Private Cloud (Amazon VPC). This service provides multiple layers of network security for your application, such as public and private subnets, security groups, and network access
AWS WAF Since AWS WAF integrates with Amazon CloudFront CDN this enables earlier detection minimizing overall traffic and impact AWS WAF provides centralized c ontrol automated administration and real time metrics Additionally AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS AWS Shield pr ovides always on detection and automatic inline mitigations that minimize application downtime and latency so there is no need to engage AWS Support to benefit from DDoS protection There are two tiers of AWS Shield : Standard and Advanced All AWS custome rs benefit from the automatic protections of AWS Shield Standard at no additional charge Data Protection Organizations should encrypt data at rest and in transit AEM provides SSL wizard to easily configure SSL certificates AWS data protection services provide encryption and key management and threat detection that continuously monitors and protects your AWS infrastructure For exam ple AWS Certificate Manager can p rovision manage and deploy public and private SSL/TLS certificates ; AWS KMS can help with Key storage and management ; and Amazon Macie can d iscover and protect your sensitive data at scale Access Control AWS Identity & Access Management (IAM) helps securely manage access to AWS services and resources In addition AWS provides identity services to connect your on prem directory service or use AWS Directory Service as a managed Microsoft Active Directory to provide access to AEM infrastructure as needed within your organization Detection Audit Monitoring and Logging Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads With AWS Security Hub you have a single place that aggregates organizes and prioritizes your security alerts or findings from multiple AWS services Amazon Web Services Running Adobe Experience Manager on AWS 17 such as 
Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from APN Partner solutions. AWS also provides audit tools such as AWS Trusted Advisor, which inspects your AWS environment and makes recommendations for saving cost and improving system performance, reliability, and security. Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed report with prioritized steps for remediation. This can support system management and gives security professionals the necessary visibility into vulnerabilities that need to be fixed. In addition to Amazon Inspector, you can use other third-party products such as Burp Suite, or Qualys SSL Test (for certificate validation).

Finally, having an audit log of all API actions and configuration changes can be useful in determining what changed and who changed it. AWS CloudTrail and AWS Config provide you with the capability to capture extensive audit logs. We recommend that you enable these services in your hosting environment.

Incident Response
AWS provides services such as AWS Lambda and AWS Config rules, which can evaluate whether your AWS resources comply with your desired settings and either set them back into compliance or notify you. Amazon Detective is another service that simplifies the process of investigating security findings and identifying the root cause. Amazon Detective analyzes events from multiple data sources, such as VPC Flow Logs, AWS CloudTrail logs, and Amazon GuardDuty findings, and automatically creates a graph model that provides you with a unified, interactive view of your resources, users, and the interactions between them over time.

Compliance and GovCloud
AWS GovCloud (US) gives government customers and their partners the flexibility to architect secure cloud solutions that comply with many compliance programs (FedRAMP High, FISMA, DoD SRG, ITAR, and CJIS, to name a few). The AWS GovCloud (US-East) and (US-West) Regions
are operated by employees who are US citizens, on US soil. AWS GovCloud (US) is only accessible to US entities and root account holders who pass a screening process. Service mapping to compliance programs is detailed on the AWS Services in Scope by Compliance Program page.

Digital Asset Management

AEM includes a Digital Asset Management (DAM) solution called AEM Assets. AEM Assets enables your enterprise users to manage and distribute digital assets such as images, videos, documents, audio clips, 3D files, and rich media. When planning your AWS architecture, you should evaluate the potential use of the AEM Assets solution. With AEM Assets, the number of large files usually increases, and workloads often involve resource-intensive processes such as image transformations and renditions. Various architecture best practices should be considered depending on the scenario; they are described in detail in Best Practices for Assets.

Automated Deployment

AWS provides API access to all AWS services, and Adobe does this for AEM as well. Many of the various commands to deploy code or content, or to create backups, can be invoked through an HTTP service interface. This allows for a very clean organization of the continuous integration and deployment process, with Jenkins used as a central hub that invokes AEM functionality through curl or similar commands. Jenkins can support manual, scheduled, and triggered deployments, and can be the central point for your AEM on AWS deployment. If necessary, you can enable additional automation using Jenkins with AWS CodeBuild and AWS CodeDeploy, enabling the creation of a complete environment from the Jenkins console. Refer to Set up a Jenkins Build Server on AWS to set up Jenkins.

Figure 7 – Example CI Setup for an AEM Jenkins Architecture

Automated Operations

One of the key benefits of running AEM on AWS is the streamlined AEM operations process. To provision instances, AWS CloudFormation or AWS OpsWorks can be leveraged to fully automate the deployment process, from setting up the architecture to provisioning the necessary instances. Using the AWS CloudFormation nested stacks functionality, scripts can be organized to support the different architectures outlined in the earlier sections. Also, AEM OpenCloud Manager provides automated operations functionality out of the box with little effort.

When using AEM's Tar storage repository, content is stored on the file system. To create an AEM backup, you must create a file system snapshot. You can make a file system snapshot on AWS through Amazon Data Lifecycle Manager. Alternatively, you can create a centralized backup plan using AWS Backup. You should use Amazon Data Lifecycle Manager when you want to automate the creation, retention, and deletion of EBS snapshots. You should use AWS Backup to manage and monitor backups across the AWS services you use, including EBS volumes, from a single place.

Lastly, review the best practices and checks (such as log file monitoring, AEM performance monitoring, and replication agent monitoring) outlined in the Monitoring and Maintaining AEM guide to ensure smooth operations of your AEM environment.

Additional AWS Services

You can use additional services and capabilities from both AWS and the AEM platform to add further value to your AEM deployment on AWS. With AEM, you can integrate with a variety of third-party services out of the box, as well as with Amazon SNS for mobile notifications relating to changes in the AEM environment. AEM offers tools to manage targeting within experiences delivered through the solution. Adobe also has complementary products (which integrate well with AEM) that further personalize and target the experience for customers. Combined with AWS services such as Amazon Personalize, Amazon Kinesis, and AWS Lambda, you can create a powerful
targeting engine to deliver one-to-one personalization.

Conclusion

This paper presented the business and technology drivers for running AEM on AWS, along with the associated strategies and considerations. Running AEM on AWS provides a secure and scalable foundation for delivering great digital experiences for customers. As you prepare for your AEM migration to AWS, we recommend that you consider the guidance outlined in this document.

Contributors

Contributors to this document include:
• Anuj Ratra, Sr. Solutions Architect, Amazon Web Services
• Cliffano Subagio, Principal Engineer, Shine Solutions Group
• Michael Bloch, Senior DevOps Engineer, Shine Solutions Group
• Matthew Holloway, Manager, Solutions Architects, Amazon Web Services
• Pawan Agnihotri, Sr. Mgr. Solution Architecture, Amazon Web Services
• Martin Jacobs, GVP Technology, Razorfish

Further Reading

For additional information, see:
• Hosting Adobe Experience Manager on AWS Reference Architecture

Document Revisions

Date – Description
November 2020 – Updated reference architecture for AEM 6.5; added the AEM OpenCloud framework as an alternative option
July 2016 – First publication,General,consultant,Best Practices Running_Containerized_Microservices_on_AWS,"Running Containerized Microservices on AWS

First Published November 1, 2017
Updated August 5, 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does
it modify, any agreement between AWS and its customers.

© 2021, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 1
Componentization Via Services 2
Organized Around Business Capabilities 4
Products Not Projects 7
Smart Endpoints and Dumb Pipes 8
Decentralized Governance 10
Decentralized Data Management 12
Infrastructure Automation 14
Design for Failure 17
Evolutionary Design 20
Conclusion 22
Contributors 23
Document Revisions 23

Abstract

This whitepaper is intended for architects and developers who want to run containerized applications at scale in production on Amazon Web Services (AWS). This document provides guidance for application lifecycle management, security, and architectural software design patterns for container-based applications on AWS. We also discuss architectural best practices for the adoption of containers on AWS, and how traditional software design patterns evolve in the context of containers. We leverage Martin Fowler's principles of microservices and map them to the twelve-factor app pattern and real-life considerations. This whitepaper gives you a starting point for building microservices using best practices and software design patterns.

Amazon Web Services Running Containerized Microservices on AWS 1

Introduction

As modern microservices-based applications gain popularity, containers are an attractive building block for creating agile, scalable, and efficient microservices architectures. Whether you are considering a legacy system or a greenfield application for containers, there are well-known, proven software design patterns that you can apply. Microservices are an architectural and organizational approach to software development in which software is composed of small, independent services that communicate with each other. There are different ways microservices can communicate, but the two commonly used protocols are HTTP request/response over well-defined APIs, and lightweight asynchronous messaging.1 These services are owned
by small, self-contained teams. Microservices architectures make applications easier to scale and faster to develop. This enables innovation and accelerates time-to-market for new features. Containers also provide isolation and packaging for software, and help you achieve more deployment velocity and resource density.

As proposed by Martin Fowler,2 the characteristics of a microservices architecture include the following:
• Componentization via services
• Organized around business capabilities
• Products not projects
• Smart endpoints and dumb pipes
• Decentralized governance
• Decentralized data management
• Infrastructure automation
• Design for failure
• Evolutionary design

These characteristics tell us how a microservices architecture is supposed to behave. To help achieve these characteristics, many development teams have adopted the twelve-factor app pattern methodology. The twelve factors are a set of best practices for building modern applications that are optimized for cloud computing. The twelve factors cover four key areas: deployment, scale, portability, and architecture:

1. Codebase – One codebase tracked in revision control, many deploys
2. Dependencies – Explicitly declare and isolate dependencies
3. Config – Store configurations in the environment
4. Backing services – Treat backing services as attached resources
5. Build, release, run – Strictly separate build and run stages
6. Processes – Execute the app as one or more stateless processes
7. Port binding – Export services via port binding
8. Concurrency – Scale out via the process model
9. Disposability – Maximize robustness with fast startup and graceful shutdown
10. Dev/prod parity – Keep development, staging, and production as similar as possible
11. Logs – Treat logs as event streams
12. Admin processes – Run admin/management tasks as one-off processes

After reading this whitepaper, you will know how to map the microservices design characteristics to twelve-factor app
patterns, down to the design pattern to be implemented.

Componentization Via Services

In a microservices architecture, software is composed of small, independent services that communicate over well-defined APIs. These small components are divided so that each of them does one thing, and does it well, while cooperating to deliver a full-featured application. An analogy can be drawn to the Walkman portable audio cassette players that were popular in the 1980s: batteries bring power, audio tapes are the medium, headphones deliver output, while the main tape player takes input through key presses. Using them together plays music. Similarly, microservices need to be decoupled, and each should focus on one functionality.

Additionally, a microservices architecture allows for replacement or upgrade. Using the Walkman analogy, if the headphones are worn out, you can replace them without replacing the tape player. If an order management service in our store-keeping application is falling behind and performing too slowly, you can swap it for a more performant, more streamlined component. Such a permutation would not affect or interrupt other microservices in the system.

Through modularization, microservices offer developers the freedom to design each feature as a black box. That is, microservices hide the details of their complexity from other components. Any communication between services happens by using well-defined APIs to prevent implicit and hidden dependencies. Decoupling increases agility by removing the need for one development team to wait for another team to finish work that the first team depends on. When containers are used, container images can be swapped for other container images. These can be either different versions of the same image or different images altogether, as long as the functionality and boundaries are conserved. Containerization is an operating-system-level virtualization method for deploying and running
distributed applications without launching an entire virtual machine (VM) for each application.

Container images allow for modularity in services. They are constructed by building functionality onto a base image. Developers, operations teams, and IT leaders should agree on base images that have the security and tooling profile that they want. These images can then be shared throughout the organization as the initial building block. Replacing or upgrading these base images is as simple as updating the FROM field in a Dockerfile and rebuilding, usually through a Continuous Integration/Continuous Delivery (CI/CD) pipeline.

Here are the key factors from the twelve-factor app pattern methodology that play a role in componentization:
• Dependencies (explicitly declare and isolate dependencies) – Dependencies are self-contained within the container and not shared with other services.
• Disposability (maximize robustness with fast startup and graceful shutdown) – Disposability is leveraged and satisfied by containers that are easily pulled from a repository and discarded when they stop running.
• Concurrency (scale out via the process model) – Concurrency consists of tasks or pods (made of containers working together) that can be auto scaled in and out in a memory- and CPU-efficient manner.

As each business function is implemented as its own service, the number of containerized services grows. Each service should have its own integration and its own deployment pipeline. This increases agility. Since containerized services are subject to frequent deployments, you need to introduce a coordination layer that tracks which containers are running on which hosts. Eventually you will want a system that provides the state of containers, the resources available in a cluster, and so on. Container orchestration and scheduling systems enable you to define applications by assembling a set of containers that work together. You can think of
the definition as the blueprint for your applications. You can specify various parameters, such as which containers to use and which repositories they belong in, which ports should be opened on the container instance for the application, and what data volumes should be mounted. Container management systems enable you to run and maintain a specified number of instances of a container set: containers that are instantiated together and collaborate using links or volumes. Amazon ECS refers to these as Tasks; Kubernetes refers to them as Pods. Schedulers maintain the desired count of container sets for the service. Additionally, the service infrastructure can be run behind a load balancer to distribute traffic across the container set associated with the service.

Organized Around Business Capabilities

Defining exactly what constitutes a microservice is very important for development teams to agree on. What are its boundaries? Is an application a microservice? Is a shared library a microservice? Before microservices, system architecture would be organized around technological capabilities such as user interface, database, and server-side logic. In a microservices-based approach, as a best practice, each development team owns the lifecycle of its service all the way to the customer. For example, a recommendations team might own development, deployment, production support, and collection of customer feedback. In a microservices-driven organization, small teams act autonomously to build, deploy, and manage code in production. This allows teams to work at their own pace to deliver features. Responsibility and accountability foster a culture of ownership, allowing teams to better align with the goals of their organization and be more productive. Microservices are as much an organizational attitude as a technological approach. This principle is known as Conway's Law:

""Organizations which design systems are constrained to produce designs
which are copies of the communication structures of these organizations"" – M. Conway3

When architecture and capabilities are organized around atomic business functions, dependencies between components are loosely coupled. As long as there is a communication contract between services and teams, each team can run at its own speed. With this approach, the stack can be polyglot, meaning that developers are free to use the programming languages that are optimal for their component. For example, the user interface can be written in JavaScript or HTML5, the backend in Java, and data processing can be done in Python. This means that business functions can drive development decisions. Organizing around capabilities means that each API team owns the function, data, and performance completely.

The following are key factors from the twelve-factor app pattern methodology that play a role in organizing around capabilities:
• Codebase (one codebase tracked in revision control, many deploys) – Each microservice owns its own codebase in a separate repository, and throughout the lifecycle of the code change.
• Build, release, run (strictly separate build and run stages) – Each microservice has its own deployment pipeline and deployment frequency. This enables the development teams to run microservices at varying speeds so they can be responsive to customer needs.
• Processes (execute the app as one or more stateless processes) – Each microservice does one thing and does that one thing really well. The microservice is designed to solve the problem at hand in the best possible manner.
• Admin processes (run admin/management tasks as one-off processes) – Each microservice has its own administrative or management tasks so that it functions as designed.

To achieve a microservices architecture that is organized around business capabilities, use popular microservices design patterns. A design pattern is a general, reusable solution to a commonly occurring problem within a given context.
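The "Processes" factor listed above can be made concrete with a minimal sketch. This is an illustrative example only, assuming a hypothetical handler and a dict standing in for a backing service (it is not part of any AWS or whitepaper API): the handler keeps no state between requests, so any number of identical copies can run in parallel.

```python
# Sketch of the "Processes" factor: a stateless request handler.
# The function name and dict-based backing store are hypothetical
# stand-ins for a real service endpoint and data store.

def handle_request(request: dict, backing_store: dict) -> dict:
    """Compute a response only from the inputs; state that must
    survive the request is written to the backing service, never
    held in the process itself."""
    user = request["user"]
    visits = backing_store.get(user, 0) + 1
    backing_store[user] = visits  # persistence is delegated to the store
    return {"user": user, "visits": visits}
```

Because the process holds no state of its own, identical copies can serve traffic behind a load balancer, which is also what makes the "Concurrency" factor's scale-out model work.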
Popular microservice design patterns4 5 6:
• Aggregator Pattern – A basic service which invokes other services to gather the required information or achieve the required functionality. This is beneficial when you need an output that combines data from multiple microservices.
• API Gateway Design Pattern – The API Gateway acts as the entry point for all the microservices and creates fine-grained APIs for different types of clients. It can fan out the same request to multiple microservices and, similarly, aggregate the results from multiple microservices.
• Chained or Chain of Responsibility Pattern – The Chained, or Chain of Responsibility, design pattern produces a single output which is a combination of multiple chained outputs.
• Asynchronous Messaging Design Pattern – In this type of microservices design pattern, all the services can communicate with each other, but they do not have to communicate with each other sequentially; they usually communicate asynchronously.
• Database or Shared Data Pattern – This design pattern enables you to use a database per service, or a shared database per service, to solve various problems. These problems can include duplication of data and inconsistency, different services having different kinds of storage requirements, business transactions that need to query data across multiple services, and denormalization of data.
• Event Sourcing Design Pattern – This design pattern helps you to create events according to changes of your application state.
• Command Query Responsibility Segregator (CQRS) Design Pattern – This design pattern enables you to divide the command and query. In the common CQRS pattern, the command part handles all the requests related to CREATE, UPDATE, and DELETE, while the query part takes care of the materialized views.
• Circuit Breaker Pattern – This design pattern enables you to stop the process of the request and response when the service is not working. For
example, when you need to redirect the request to a different service after a certain number of failed request attempts.
• Decomposition Design Pattern – This design pattern enables you to decompose an application based on business capability or based on subdomains.

Products Not Projects

Companies that have mature applications with successful software adoption, and who want to maintain and expand their user base, will likely be more successful if they focus on the experience for their customers and end users. To stay healthy, simplify operations, and increase efficiency, your engineering organization should treat software components as products that can be iteratively improved and that are constantly evolving. This is in contrast to the strategy of treating software as a project, which is completed by a team of engineers and then handed off to an operations team that is responsible for running it.

When software architecture is broken into small microservices, it becomes possible for each microservice to be an individual product. For an internal microservice, the end user of the product is another team or service. For an external microservice, the end user is the customer.

The core benefit of treating software as a product is improved end-user experience. When your organization treats its software as an always-improving product, rather than a one-off project, it will produce code that is better architected for future work. Rather than taking shortcuts that will cause problems in the future, engineers will plan software so that they can continue to maintain it in the long run. Software planned in this way is easier to operate, maintain, and extend. Your customers appreciate such dependable software because they can trust it. Additionally, when engineers are responsible for building, delivering, and running software, they gain more visibility into how their software is performing in real-world scenarios, which accelerates the
feedback loop. This makes it easier to improve the software or fix issues.

The following are key factors from the twelve-factor app pattern methodology that play a role in adopting a product mindset for delivering software:
• Build, release, run – Engineers adopt a DevOps culture that allows them to optimize all three stages.
• Config – Engineers build better configuration management for software due to their involvement with how that software is used by the customer.
• Dev/prod parity – Software treated as a product can be iteratively developed in smaller pieces that take less time to complete and deploy than long-running projects, which enables development and production to be closer in parity.

Adopting a product mindset is driven by culture and process, the two factors that drive change. The goal of your organization's engineering team should be to break down any walls between the engineers who build the code and the engineers who run the code in production. The following concepts are crucial:
• Automated provisioning – Operations should be automated rather than manual. This increases velocity as well as integrates engineering and operations.
• Self-service – Engineers should be able to configure and provision their own dependencies. This is enabled by containerized environments that allow engineers to build their own container that has anything they require.
• Continuous Integration – Engineers should check in code frequently so that incremental improvements are available for review and testing as quickly as possible.
• Continuous Build and Delivery – The process of building code that's been checked in and delivering it to production should be automated so that engineers can release code without manual intervention.

Containerized microservices help engineering organizations implement these best-practice patterns by creating a standardized format for software delivery that allows automation to be built easily and
used across a variety of different environments, including local, quality assurance, and production.

Smart Endpoints and Dumb Pipes

As your engineering organization transitions from building monolithic architectures to building microservices architectures, it will need to understand how to enable communications between microservices. In a monolith, the various components are all in the same process. In a microservices environment, components are separated by hard boundaries. At scale, a microservices environment will often have the various components distributed across a cluster of servers so that they are not even necessarily collocated on the same server.

This means there are two primary forms of communication between services:
• Request/Response – One service explicitly invokes another service by making a request to either store data in it or retrieve data from it. For example, when a new user creates an account, the user service makes a request to the billing service to pass off the billing address from the user's profile so that the billing service can store it.
• Publish/Subscribe – Event-based architecture where one service implicitly invokes another service that was watching for an event. For example, when a new user creates an account, the user service publishes this new-user-signup event, and the email service that was watching for it is triggered to email the user asking them to verify their email.

One architectural pitfall that generally leads to issues later on is attempting to solve communication requirements by building your own complex enterprise service bus for routing messages between microservices. AWS recommends using a message broker such as Amazon MSK, Amazon Simple Notification Service (Amazon SNS), or Amazon Simple Queue Service (Amazon SQS). Microservices architectures favor these tools because they enable a decentralized approach in which the endpoints that produce and consume messages are
smart, but the pipe between the endpoints is dumb. In other words, concentrate the logic in the containers, and refrain from leveraging (and coupling to) sophisticated buses and messaging services.

Network communication often plays a central role in distributed systems. Service meshes strive to address this issue. Here you can leverage the idea of externalizing selected functionalities. Service meshes work on a sidecar pattern, where you add containers to extend the behavior of existing containers. Sidecar is a microservices design pattern in which a companion service runs next to your primary microservice, augmenting its abilities or intercepting the resources it is utilizing. In AWS App Mesh, a sidecar container, Envoy, is used as a proxy for all ingress and egress traffic to the primary microservice. Using this sidecar pattern with Envoy, you can create the backbone of the service mesh without impacting your applications. A service mesh is composed of a control plane and a data plane. In current implementations of service meshes, the data plane is made up of proxies sitting next to your applications or services, intercepting any network traffic that is under the management of the proxies. Envoy can be used as a communication bus for all traffic internal to a service-oriented architecture (SOA).

Sidecars can also be used to build monitoring solutions. When you are running microservices using Kubernetes, there are multiple observability strategies; one of them is using sidecars. Due to the modular nature of sidecars, you can use them for your logging and monitoring needs. For example, you can set up Fluent Bit or FireLens for Amazon ECS to send logs from containers to Amazon CloudWatch Logs. AWS Distro for OpenTelemetry can also be used for gathering metrics and sending them off to other services. Recently, AWS has launched managed Prometheus and Grafana offerings for monitoring and visualization use cases.

The core benefit of building smart
endpoints and dumb pipes is the ability to decentralize the architecture, particularly when it comes to how endpoints are maintained, updated, and extended. One goal of microservices is to enable parallel work on different edges of the architecture that will not conflict with each other. Building dumb pipes enables each microservice to encapsulate its own logic for formatting its outgoing responses or supplementing its incoming requests.

The following are the key factors from the twelve-factor app pattern methodology that play a role in building smart endpoints and dumb pipes:
• Port binding – Services bind to a port to watch for incoming requests and send requests to the port of another service. The pipe in between is just a dumb network protocol such as HTTP.
• Backing services – Dumb pipes allow a background microservice to be attached to another microservice in the same way that you attach a database.
• Concurrency – A properly designed communication pipeline between microservices allows multiple microservices to work concurrently. For example, several observer microservices may respond and begin work in parallel in response to a single event produced by another microservice.

Decentralized Governance

As your organization grows and establishes more code-driven business processes, one challenge it could face is the necessity to scale the engineering team and enable it to work efficiently in parallel on a large and diverse codebase. Additionally, your engineering organization will want to solve problems using the best available tools. Decentralized governance is an approach that works well alongside microservices to enable engineering organizations to tackle this challenge.

Traffic lights are a great example of decentralized governance. City traffic lights may be timed individually or in small groups, or they may react to sensors in the pavement. However, for the city as a whole, there is no need for a primary traffic control center in order to keep cars moving. Separately
implemented local optimizations work together to provide a city-wide solution. Decentralized governance helps remove potential bottlenecks that would prevent engineers from being able to develop the best code to solve business problems.

When a team kicks off its first greenfield project, it is generally just a small team of a few people working together on a common codebase. After the greenfield project has been completed, the business will quickly discover opportunities to expand on their first version. Customer feedback generates ideas for new features to add and ways to expand the functionality of existing features. During this phase, engineers will start growing the codebase, and your organization will start dividing the engineering organization into service-focused teams.

Decentralized governance means that each team can use its expertise to choose the best tools to solve their specific problem. Forcing all teams to use the same database or the same runtime language isn't reasonable, because the problems they're solving aren't uniform. However, decentralized governance is not without boundaries. It is helpful to use standards throughout the organization, such as a standard build and code review process, because this helps each team continue to function together.

Source control plays an important role in decentralized governance. Git can be used as a source of truth to operate the deployment and governance strategies. For example, version control history, peer review, and rollback can happen through Git without needing to use additional tools. With GitOps, automated delivery pipelines roll out changes to your infrastructure when changes are made by pull request to Git. GitOps also makes use of tools that compare the production state of your application with what's under source control, and alert you if your running cluster doesn't match your desired state.

The following are the principles for GitOps to work in
practice:
1. Your entire system described declaratively
2. The desired system state version controlled in Git
3. The ability for changes to be automatically applied
4. Software agents that verify correct system state and alert on divergence

Most CI/CD tools available today use a push-based model. A push-based pipeline means that code starts with the CI system and then continues its path through a series of encoded scripts in your CD system to push changes to the destination. The reason you don't want to use your CI/CD system as the basis for your deployments is the potential to expose credentials outside of your cluster. While it is possible to secure your CI/CD scripts, you are still working outside the trust domain of your cluster, which is not recommended. With a pipeline that pulls an image from the repository, your cluster credentials are not exposed outside of your production environment.

The following are the key factors from the twelve-factor app pattern methodology that play a role in enabling decentralized governance:
• Dependencies – Decentralized governance allows teams to choose their own dependencies, so dependency isolation is critical to make this work properly.
• Build, release, run – Decentralized governance should allow teams with different build processes to use their own toolchains, yet should allow releasing and running the code to be seamless, even with differing underlying build tools.
• Backing services – If each consumed resource is treated as if it were a third-party service, then decentralized governance allows the microservice resources to be refactored or developed in different ways, as long as they obey an external contract for communication with other services.

Centralized governance was favored in the past because it was hard to efficiently deploy a polyglot application. Polyglot applications need different build mechanisms for each language, and an underlying infrastructure
that can run multiple languages and frameworks. Polyglot architectures had varying dependencies, which could sometimes conflict. Containers solve these problems by allowing the deliverable for each individual team to be a common format: a Docker image that contains their component. The contents of the container can be any type of runtime written in any language. However, the build process will be uniform, because all containers are built using the common Dockerfile format. In addition, all containers can be deployed the same way and launched on any instance, since they carry their own runtime and dependencies with them. An engineering organization that chooses to employ decentralized governance, and to use containers to ship and deploy this polyglot architecture, will see that its engineering teams are able to build performant code and iterate more quickly.

Decentralized Data Management

Monolithic architectures often use a shared database, which can be a single data store for the whole application or many applications. This leads to complexities in changing schemas, upgrades, downtime, and dealing with backward compatibility risks. A service-based approach mandates that each service gets its own data storage and doesn't share that data directly with anybody else. All data-bound communication should be enabled via services that encompass the data. As a result, each service team chooses the most optimal data store type and schema for their application. The choice of the database type is the responsibility of the service teams. It is an example of decentralized decision making, with no central group enforcing standards apart from minimal guidance on connectivity. AWS offers many fully managed storage services, such as object stores, key-value stores, file stores, block stores, and traditional databases. Options include Amazon S3, Amazon DynamoDB, Amazon Relational Database Service (Amazon RDS), and Amazon Elastic Block Store (Amazon EBS).

Decentralized data management enhances application design by allowing the best data store for the job to be used. It also removes the arduous task of a shared database upgrade, which could be weekends' worth of downtime and work, if all goes well. Since each service team owns its own data, its decision making becomes more independent. The teams can be self-composed and follow their own development paradigm.

A secondary benefit of decentralized data management is the disposability and fault tolerance of the stack. If a particular data store is unavailable, the complete application stack does not become unresponsive. Instead, the application goes into a degraded state, losing some capabilities while still servicing requests. This enables the application to be fault tolerant by design.

The following are the key factors from the twelve-factor app pattern methodology that play a role in decentralized data management:

• Disposability (maximize robustness with fast startup and graceful shutdown) – The services should be robust and not dependent on externalities. This principle further allows the services to run in a limited capacity if one or more components fail.
• Backing services (treat backing services as attached resources) – A backing service is any service that the app consumes over the network, such as data stores and messaging systems. Typically, backing services are managed by operations. The app should make no distinction between a local and an external service.
• Admin processes (run admin/management tasks as one-off processes) – The processes required to do the app's regular business, for example, running database migrations. Admin processes should be run in a similar manner, irrespective of environment.

To achieve a microservices architecture with decoupled data management, the following software design patterns can be used:

• Controller – Helps direct the request to the appropriate data store using the appropriate mechanism.
• Proxy – Helps provide a surrogate or placeholder for another object to control access to it.
• Visitor – Helps represent an operation to be performed on the elements of an object structure.
• Interpreter – Helps map a service to data store semantics.
• Observer – Helps define a one-to-many dependency between objects, so that when one object changes state, all of its dependents are notified and updated automatically.
• Decorator – Helps attach additional responsibilities to an object dynamically. Decorators provide a flexible alternative to subclassing for extending functionality.
• Memento – Helps capture and externalize an object's internal state so that the object can be returned to this state later.

Infrastructure Automation

Contemporary architectures, whether monolithic or based on microservices, can greatly benefit from infrastructure-level automation. With the introduction of virtual machines, IT teams were able to easily replicate environments and create templates of the operating system states they wanted. The host operating system became immutable and disposable. With cloud technology, the idea bloomed, and scale was added to the mix. There is no need to predict the future when you can simply provision on demand for what you need and pay for what you use. If an environment isn't needed anymore, you can shut down the resources. On-demand provisioning can be combined with spot compute [7], which enables you to request unused compute capacity at steep discounts.

One useful mental image for infrastructure as code is to picture an architect's drawing come to life. Just as a blueprint with walls, windows, and doors can be transformed into an actual building, so load balancers, databases, or network equipment can be written in source code and then instantiated. Microservices not only need disposable infrastructure as code, they also need to be built, tested, and deployed automatically.
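The blueprint analogy can be made concrete with a minimal sketch of infrastructure as code: resources are described as plain data in source code, rendered to a template, and handed to a provisioning engine. This is only an illustration in Python; the resource types, property names, and the `resource` helper are assumptions made for the example, not the API of any particular IaC tool.

```python
import json

def resource(rtype, **props):
    """Build a declarative resource description: a type plus its properties."""
    return {"Type": rtype, "Properties": props}

# Describe infrastructure as plain data: versionable, diffable, and reviewable
# in Git just like application code. The resource types and property names
# below are illustrative, not tied to any specific IaC engine.
template = {
    "Resources": {
        "AppLoadBalancer": resource("LoadBalancer", Scheme="internet-facing"),
        "OrdersService": resource(
            "ContainerService", Image="orders:1.4.2", DesiredCount=3
        ),
    }
}

# Render the description to JSON; a provisioning engine would instantiate it,
# and re-rendering after a code change yields a reviewable diff.
rendered = json.dumps(template, indent=2, sort_keys=True)
print(rendered)
```

Because the description lives in the service repository, a pull request that changes `DesiredCount` produces an ordinary reviewable diff, and rolling the infrastructure back is a Git revert.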
Continuous integration and continuous delivery are important for monoliths, but they are indispensable for microservices. Each service needs its own pipeline, one that can accommodate the various and diverse technology choices made by the team.

An automated infrastructure provides repeatability for quickly setting up environments. These environments can each be dedicated to a single purpose: development, integration, user acceptance testing (UAT) or performance testing, and production. Infrastructure that is described as code and then instantiated can easily be rolled back. This drastically reduces the risk of change and, in turn, promotes innovation and experiments.

The following are the key factors from the twelve-factor app pattern methodology that play a role in infrastructure automation:

• Codebase (one codebase tracked in revision control, many deploys) – Because the infrastructure can be described as code, treat all code similarly and keep it in the service repository.
• Config (store configurations in the environment) – The environment should hold and share its own specificities.
• Build, release, run (strictly separate build and run stages) – One environment for each purpose.
• Disposability (maximize robustness with fast startup and graceful shutdown) – This factor transcends the process layer and bleeds into such downstream layers as containers, virtual machines, and virtual private clouds.
• Dev/prod parity – Keep development, staging, and production as similar as possible.

Successful applications use some form of infrastructure as code. Resources such as databases, container clusters, and load balancers can be instantiated from a description. To wrap the application with a CI/CD pipeline, you should choose a code repository, an integration pipeline, an artifact-building solution, and a mechanism for deploying these artifacts.

A microservice should do one thing and do it well. This implies that when you build a full application, there will potentially be a large number of services. Each of these needs its own integration and deployment pipeline. Keeping infrastructure automation in mind, architects who face this challenge of proliferating services will be able to find common solutions and replicate pipelines that have made a particular service successful.

An image repository should be used in the CI/CD pipeline to push the containerized image of the microservice. Various popular image repositories are available, such as Amazon ECR, Red Hat Quay, Docker Hub, and JFrog. Container registries can be used as part of the infrastructure automation. As previously described in the Decentralized Governance section, GitOps is a popular operational framework for achieving continuous delivery. Git is used as the single source of truth for deploying into your cluster. Tools such as Flux run in your cluster and implement changes based on monitoring Git and image repositories. Flux keeps an eye on image repositories, detects new images, and updates the running configurations based on a configurable policy. Continuous delivery (CD) tools such as Argo CD and Spinnaker can also be leveraged for immediate, autonomous deployment to production environments. Ultimately, the goal is to enable developers to push code updates to container image repositories and have the updated container images of the application sent to multiple environments in minutes.

There are many ways to successfully deploy in phases, including the blue/green and canary methods. With a blue/green deployment, two environments live side by side, with one of them running a newer version of the application. Traffic is sent to the older version until a switch happens that routes all traffic to the new environment. You can see an example of this happening in this reference architecture.

Blue/green deployment

In this case, we use a switch of target groups behind a load balancer in order to redirect traffic from the
old to the new resources. Another way to achieve this is to use services fronted by two load balancers and operate the switch at the DNS level.

Design for Failure

"Everything fails, all the time." – Werner Vogels

This adage is no less true in the container world than it is for the cloud. Achieving high availability is a top priority for workloads, but it remains an arduous undertaking for development teams. Modern applications running in containers should not be tasked with managing the underlying layers, from physical infrastructure, like electricity sources or environmental controls, all the way to the stability of the underlying operating system. If a set of containers fails while tasked with delivering a service, these containers should be re-instantiated automatically and with no delay. Similarly, as microservices interact with each other over the network more than they do locally and synchronously, connections need to be monitored and managed. Latency and timeouts should be assumed and gracefully handled. More generally, microservices need to apply the same error retries and exponential backoff with jitter as advised for applications running in a networked environment [8].

Designing for failure also means testing the design and watching services cope with deteriorating conditions. Not all technology departments need to apply this principle to the extent that Netflix does [9][10], but we encourage you to test these mechanisms often. Designing for failure yields a self-healing infrastructure that acts with the maturity that is expected of recent workloads. Preventing emergency calls guarantees a base level of satisfaction for the service-owning team. It also removes a level of stress that can otherwise grow into accelerated attrition. Designing for failure will deliver greater uptime for your products. It can shield a company from outages that could erode customer trust.

Here are the key factors from the twelve-factor app pattern methodology that play a role in designing for failure:

• Disposability (maximize robustness with fast startup and graceful shutdown) – Produce lean container images and strive for processes that can start and stop in a matter of seconds.
• Logs (treat logs as event streams) – If part of a system fails, troubleshooting is necessary. Ensure that material for forensics exists.
• Dev/prod parity – Keep development, staging, and production as similar as possible.

AWS recommends that container hosts be part of a self-healing group. Ideally, container management systems are aware of different data centers and the microservices that span across them, mitigating possible events at the physical level. Containers offer an abstraction from operating system management. You can treat container instances as immutable servers. Containers will behave identically on a developer's laptop or on a fleet of virtual machines in the cloud.

One very useful container pattern for hardening an application's resiliency is the circuit breaker. With circuit breakers such as Resilience4j and Hystrix, an application container is proxied by a container in charge of monitoring connection attempts from the application container. If connections are successful, the circuit breaker container remains in closed status, letting communication happen. When connections start failing, the circuit breaker logic triggers. If a predefined threshold for the failure/success ratio is breached, the container enters an open status that prevents more connections. This mechanism offers a predictable and clean breaking point, a departure from partially failing situations that can render recovery difficult. The application container can move on and switch to a backup service or enter a degraded state.

Another useful container pattern for application resilience is using a service mesh, which forms a network of microservices communicating with each other. Tools such as AWS App Mesh and Istio are available to manage and monitor such service meshes. Service meshes have sidecars, which refers to a separate process that is installed along with the service in a container set. An important feature of the sidecar is that all communication to and from the service is routed through the sidecar process. This redirection of communication is completely transparent to the service. Service meshes offer several resilience patterns that can be activated by rules in the sidecar: timeout, retry, and circuit breaker.

Modern container management services allow developers to retrieve near real-time, event-driven updates on the state of containers. Docker supports multiple logging drivers (list as of Docker v20.10) [11][12]:

Driver – Description
none – No logs are available for the container, and docker logs will not return any output.
json-file – The logs are formatted as JSON. The default logging driver for Docker.
syslog – Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine.
journald – Writes log messages to journald. The journald daemon must be running on the host machine.
gelf – Writes log messages to a Graylog Extended Log Format (GELF) endpoint, such as Graylog or Logstash.
fluentd – Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine.
awslogs – Writes log messages to Amazon CloudWatch Logs.
splunk – Writes log messages to Splunk using the HTTP Event Collector.
etwlogs – Writes log messages as Event Tracing for Windows (ETW) events. Only available on Windows platforms.
gcplogs – Writes log messages to Google Cloud Platform (GCP) Logging.
local – Logs are stored in a custom format designed for minimal overhead.
logentries – Writes log messages to Rapid7 Logentries.

Sending these logs to the appropriate destination becomes as easy as specifying it in a key/value manner. You can then define appropriate metrics and alarms in your monitoring solution. Another way to
collect telemetry and troubleshooting material from containers is to link a logging container to the application container, in a pattern generically referred to as a sidecar. More specifically, in the case of a container working to standardize and normalize the output, the pattern is known as an adapter.

Container monitoring is another approach for tracking the operation of a containerized application. These systems collect metrics to ensure that applications running on containers are performing properly. Container monitoring solutions use metric capture, analytics, transaction tracing, and visualization. Container monitoring covers basic metrics like memory utilization, CPU usage, CPU limit, and memory limit. Container monitoring also offers the real-time streaming logs, tracing, and observability that containers need.

Containers can also be leveraged to ensure that various environments are as similar as possible. Infrastructure as code can be used to turn infrastructure into templates and easily replicate one footprint.

Evolutionary Design

In modern systems architecture design, you need to assume that you don't have all the requirements up front. As a result, having a detailed design phase at the beginning of a project becomes impractical. The services have to evolve through various iterations of the software. As services are consumed, there are learnings from real-world usage that help evolve their functionality. An example of this could be a silent in-place software update on a device. While the feature is rolled out, an alpha/beta testing strategy can be used to understand the behavior in real time. The feature can then be rolled out more broadly, or rolled back and worked on using the feedback gained. Using deployment techniques such as a canary release, a new feature can be tested in an accelerated fashion against its target audience. This provides early feedback to the development team.

As a result of the evolutionary design principle, a service team can build the minimum viable set of features needed to stand up the stack and roll it out to users. The development team doesn't need to cover edge cases to roll out features. Instead, the team can focus on the needed pieces and evolve the design as customer feedback comes in. At a later stage, the team can decide to refactor after they feel confident that they have enough feedback. Conducting periodic product workshops also helps in the evolution of the product design.

The following are the key factors from the twelve-factor app pattern methodology that play a role in evolutionary design:

• Codebase (one codebase tracked in revision control, many deploys) – Helps evolve features faster, since new feedback can be quickly incorporated.
• Dependencies (explicitly declare and isolate dependencies) – Enables quick iterations of the design, since features are not tightly coupled with externalities.
• Configuration (store configurations in the environment) – Everything that is likely to vary between deploys (staging, production, developer environments, etc.). Config varies substantially across deploys; code does not. With configurations stored outside code, the design can evolve irrespective of the environment.
• Build, release, run (strictly separate build and run stages) – Helps roll out new features using various deployment techniques. Each release has a specific ID and can be used to gain design efficiency and user feedback.

The following software design patterns can be used to achieve an evolutionary design:

• Sidecar – Extends and enhances the main service.
• Ambassador – Creates helper services that send network requests on behalf of a consumer service or application.
• Chain – Provides a defined order of starting and stopping containers.
• Proxy – Provides a surrogate or placeholder for another object to control access to it.
• Strategy – Defines a family of algorithms, encapsulates each one, and makes them
interchangeable. Strategy lets the algorithm vary independently from the clients that use it.
• Iterator – Provides a way to access the elements of an aggregate object sequentially without exposing its underlying representation.
• Service mesh – A dedicated infrastructure layer for facilitating service-to-service communications between microservices, using a proxy.

Containers provide additional tools to evolve the design at a faster rate with image layers. As the design evolves, each image layer can be added while keeping the integrity of the existing layers unaffected. Using Docker, an image layer is a change to an image, or an intermediate image. Every command (FROM, RUN, COPY, etc.) in the Dockerfile causes the previous image to change, thus creating a new layer. Docker will build only the layer that was changed and the ones after it. This is called layer caching. Using layer caching, deployment times can be reduced.

Deployment strategies such as a canary release provide added agility to evolve the design based on user feedback. A canary release is a technique that's used to reduce the risk inherent in a new software version release. In a canary release, the new software is slowly rolled out to a small subset of users before it's rolled out to the entire infrastructure and made available to everybody. In the diagram that follows, a canary release can easily be implemented with containers using AWS primitives. As a container announces its health via a health check API, the canary directs more traffic to it. The state of the canary and the execution is maintained using Amazon DynamoDB, Amazon Route 53, Amazon CloudWatch, Amazon Elastic Container Service (Amazon ECS), and AWS Step Functions.

Canary deployment with containers

Finally, usage monitoring mechanisms ensure that development teams can evolve the design as the usage patterns change.

Conclusion

Microservices can be designed using the twelve-factor app pattern methodology, and software design patterns enable you to achieve this easily. These software design patterns are well known. If applied in the right context, they can enable the design benefits of microservices. AWS provides a wide range of primitives that can be used to enable containerized microservices.

Contributors

The following individuals contributed to this document:

• Asif Khan, Technical Business Development Manager, Amazon Web Services
• Pierre Steckmeyer, Solutions Architect, Amazon Web Services
• Nathan Peck, Developer Advocate, Amazon Web Services
• Elamaran Shanmugam, Cloud Architect, Amazon Web Services
• Suraj Muraleedharan, Senior DevOps Consultant, Amazon Web Services
• Luis Arcega, Technical Account Manager, Amazon Web Services

Document Revisions

Date – Description
August 5, 2021 – Whitepaper updated with latest technical content.
November 1, 2017 – First publication.

Notes

1. https://docs.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/communication-in-microservice-architecture
2. https://martinfowler.com/articles/microservices.html
3. https://en.wikipedia.org/wiki/Conway's_law
4. https://microservices.io/patterns/microservices.html
5. https://dzone.com/articles/design-patterns-for-microservices
6. https://docs.aws.amazon.com/prescriptive-guidance/latest/modernization-integrating-microservices/welcome.html
7. https://aws.amazon.com/blogs/containers/running-airflow-workflow-jobs-on-amazon-eks-spot-nodes/
8. https://docs.aws.amazon.com/general/latest/gr/api-retries.html
9. https://github.com/netflix/chaosmonkey
10. https://github.com/Netflix/SimianArmy
11. https://docs.docker.com/engine/admin/logging/overview/
12. https://www.eksworkshop.com/intermediate/230_logging/",General,consultant,Best Practices
Running_Neo4j_Graph_Databases_on_AWS,"Running Neo4j Graph Databases on AWS
May 2017

This paper has been archived. For the latest technical content, see
the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Introduction
Transacting with the Graph
Deployment Patterns for Neo4j on AWS
Basics
Networking
Clustering
Database Storage Considerations
Operations
Disaster Recovery
Conclusion
Contributors
Further Reading
Notes

Abstract

Amazon Web Services (AWS) is a flexible, cost-effective, and easy-to-use cloud computing platform. Neo4j is the leading NoSQL graph database, widely deployed in the AWS Cloud. Running your own Neo4j deployment on Amazon Elastic Compute Cloud (Amazon EC2) is a great solution for users whose applications require high-performance operations on large datasets. This whitepaper provides an overview of Neo4j and its implementation on the AWS Cloud. It also discusses best practices and implementation characteristics such as performance, durability, cost optimization, and security.

Introduction

NoSQL refers to a subset of structured storage software that is optimized for high-performance operations on large datasets. As the name implies, querying of these systems is not based on the SQL language; instead, each product provides its own interface for accessing the system and its features. A common way to understand the spectrum of NoSQL databases is by looking at their underlying data models:

• Column stores – Data is organized into columns and column families, providing a nested, hashmap-like structure accessed by key.
• Key-value stores – Data is organized as key-value relationships and accessed by primary key.
• Document databases – Data is organized as documents (e.g., JSON, XML) and accessed by fields within the document.

Neo4j provides a far richer data model than other NoSQL databases. Instead of working with isolated values, columns, or documents, Neo4j supports relationships between data, so that webs of interconnected data can be created and queried. We see this kind of data every day, in use cases from social networks to transport, road, and rail networks. Graph databases are already widely applied in fields as diverse as healthcare, finance, education, IT infrastructure, identity management, the Internet of Things (IoT), and many more.

In this whitepaper, we'll discuss how to run Neo4j effectively on AWS. The on-demand nature of Amazon Elastic Compute Cloud (Amazon EC2) and the power of Neo4j together provide a great way for you to deploy graph data to support your use case, while avoiding the undifferentiated heavy lifting typically associated with purchasing, deploying, and managing traditional infrastructure.

Transacting with the Graph

To make traversing the graph efficient and safe from physical and semantic corruption, the graph model demands strong consistency with its underlying storage. That is, if a relationship exists between two nodes, it must be reachable from both of them.

Graph Consistency

If the records in a graph database disagree about connectivity, a non-deterministic structure will result. Traversing the graph in one direction leads to different actions being taken than if the graph were traversed in the other direction. This in turn leads to different decisions being recorded in the graph, which may lead to semantic corruption spreading throughout the graph, compounding the initial physical corruption.

In order to preserve the rigorous consistency required for graphs, Neo4j uses atomic, consistent, isolated, and durable (ACID) transactions when modifying the graph. In the case of read-only transactions, the cost of transactions is minimal, because read locks do not block other reads and there is no need to flush to disk. To ensure safe, recoverable write transactions, the system takes write locks, which block reads, and flushes data to the transaction log before completing the transaction. To reduce the impact of a physical flush, Neo4j amortizes the cost of flushing across multiple small, concurrent transactions. This means thousands of ACID transactions per second can be processed in a well-tuned system, while preserving safety.

With Amazon EC2, there are multiple instance types that feature high-performance solid state drives (SSDs) that vastly reduce the cost of writing to disk. Therefore, it's possible to tune for a greater number of transactions per second based on high-performance block storage. Such performance is available in the I2 instance family, which is designed to perform up to 300,000 input/output operations per second (IOPS).

Deployment Patterns for Neo4j on AWS

Neo4j differs from other database management system (DBMS) engines in that it can either be deployed as a traditional database server or embedded within an application. This bimodal operation provides the same APIs, the same transactional guarantees, and the same level of cluster support either way. In the following sections, we describe the deployment model of a traditional database server that is deployed on Amazon EC2.

Basics
Neo4j was originally conceived as an embedded Java library, intended to provide idiomatic access to connected data through a graph API. While Neo4j retains the ability to be embedded in JVM-based applications, it has grown in sophistication since those days, adding an excellent query language, practical programmatic APIs, and support for high availability (HA) via clusters of Neo4j instances. That functionality can be invoked over the network from any platform, despite the "4j" naming!

Neo4j is a transactional database that supports high concurrency, while ensuring that concurrent transactions do not interfere with each other. Even deadlocking transactions are automatically detected and rolled back. When data is written to Neo4j, it's guaranteed durable on disk. In the event of a fault, no partially written records will exist after restart and recovery.

A single instance of the database is resilient right up to the point where the disk is lost. To protect against the failure of a disk, Neo4j has an HA mode, in which multiple instances of Neo4j can collaborate to store and query the same graph data. The loss of any individual Neo4j instance can be tolerated, since others will remain available. In fact, work proceeds as usual when a majority of the Neo4j cluster is available. Neo4j is able to capitalize on the robust features that AWS offers, not only to detect failures, but also to provide automated recovery mechanisms.

Networking

Neo4j HA trusts the network, and so it's important to physically secure it against intrusion and tampering. Conversely, for application-database interactions, Neo4j supports Transport Layer Security (TLS) out of the box, for privacy and integrity.

AWS offers a high-performance networking environment in a customer-controlled VPC, created with Amazon Virtual Private Cloud (Amazon VPC). Within your VPC, you can create and manage the logical network components that you need to deploy your application infrastructure. The VPC enables you to create your own network address space, subnets, firewall rules, and route tables, as well as extend connectivity to your own data centers and the Internet.

The network design for a Neo4j cluster can be easily customized to the specific application on AWS. Most customers choose to keep the database on a private subnet that has strict network controls in place to prevent unauthorized network access. There are two different types of firewalls built into the AWS Cloud that provide a high level of network isolation.

The first type is a security group, which is a stateful firewall that is applied at the instance level, in both the inbound and outbound directions. The security group defines which protocols, ports, and Classless Inter-Domain Routing (CIDR) IP address ranges have access to a specific instance. Security groups have an implicit deny, which means that there is no network access by default. To be granted network access, a security group must specifically allow the traffic through. Deploying your Neo4j cluster into a VPC with a private subnet and configuring your security group to permit ingress over the appropriate TCP ports builds another layer of network security. The following table shows the default TCP port numbers for Neo4j:

Port – Process
7474 – The Neo4j REST API and web frontend are available at this port.
7687 – The binary protocol endpoint. Used by application drivers to query and transact with the database.

The second type of firewall is a network access control list (NACL). A NACL is defined at the subnet level and is a stateless firewall. A NACL is an ordered set of rules that is evaluated with the lowest-numbered rule first. By default, the NACL has an explicit rule to allow all traffic to flow in both directions on the subnet. However, NACL rules can also be applied to allow traffic to flow in either the inbound or outbound direction.

Every Amazon EC2 instance is allocated bandwidth that corresponds to its size, which currently, in the
X1 instance family can be up to 20 Gbps of network bandwidth As instance size decreases so does the bandwidth allocated to the instance If you r application requires a high level of network communication between hosts ensure that the instance size selected will deliver the bandwidth ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 5 needed In Neo4j this t ypical ly corresponds to systems that sustain high write loads VPC Design Figure 1: Sample VPC design To optimize network performance we suggest using EC2 instances that support enhanced networking which uses single root I/O virtualization (SR IOV) to ensure that your instances can achieve greater packets per second reduced latency and reduced jitter AWS recommends that you use a multi Availability Zone ( AZ)1 design for your applications in order to achieve a high level of fault tolerance By using multiple Availability Zones you can mitigate the risk of an entire Availability Zone failing by replicating to another instance in a separate Availability Zone With Neo4j the network latency between instances will increase because the Availability Zones are in separate physical locations Clustering Neo4j HA is available both to server based and embedded instances of Neo4j as part of Neo4j Enterprise Edition 2 The clustering architecture has been designed with two features in mind: • Optimized for graph workloads • Simple to understand and operate ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 6 A beneficial side effect of the high availability of Neo4j Enterprise Edition is that it can scale horizontally for graph operations while scaling vertically for transaction processing This topology is favorable for graph workloads since graphs are intrinsically read heavy (even when writing to a graph it must first be traversed to find the right part of the structure to update) On that basis Neo4j has opted for a clustering system similar to that found in mature relational 
databases, in which the cluster members can have either a master or slave role.

A Neo4j HA cluster operates cooperatively: each database instance contains the logic it needs to coordinate with the other members of the cluster. On startup, a Neo4j HA database instance tries to connect to an existing cluster specified by configuration. If the cluster exists, the instance joins it as a slave; otherwise, the cluster is created and the instance becomes the current master.

Note that the master role is transitory. A master is elected via an instance of the Paxos algorithm embedded in Neo4j. Any machine can instigate an election if it thinks it has detected a fault, but a majority of the machines in the cluster must participate in the election. After the election is complete, one master remains or becomes elected, and all other machines in the cluster become slaves.

Whenever a Neo4j instance becomes unavailable, the other database instances in the cluster detect that and mark it as temporarily failed. A database instance that becomes available after being unavailable will automatically catch up with the latest cluster updates.

Figure 2: Neo4j HA clusters showing current master

If the master fails, another (best-suited) member will be elected and have its role switched from slave to master after a quorum has been reached within the cluster. When the role switch has been performed, the new master broadcasts its availability to all the other cluster members. A new master is typically elected and started within just a few seconds; during this time, no writes can take place.

Be aware that during the transition period, if an old master had changes that did not get replicated to any other member before becoming unavailable, and a new master is elected and performs changes before the old master recovers, there will be two "branches" of the database. The old master will move aside its database (its "branch") and download a full copy from the new master, to become available as a slave in the cluster. An operator can then choose to replay the transactions in the branched data to the cluster.

Neo4j High Availability

In the Neo4j HA architecture, the cluster is typically fronted by load balancers provided by Elastic Load Balancing or HAProxy. Elastic Load Balancing (ELB) is an AWS service that offers load balancers that automatically distribute traffic across multiple EC2 instances and across multiple Availability Zones. An ELB load balancer is elastic because it automatically scales its request-handling capacity to support network traffic and doesn't cap the number of connections that it can establish with EC2 instances. If an instance fails, the load balancer automatically reroutes the traffic to the remaining EC2 instances that are running. If the failed EC2 instance is restored, the load balancer restores the traffic to that instance.

Data Integrity

Integrity is crucial for a transactional database. The master role imposes a total ordering of transactions on the system. This has the beneficial side effect that all replicas in the cluster apply transactions in exactly the same order and are therefore kept identical.

Elastic Load Balancing controls and distributes traffic to your EC2 instances and serves as a first line of defense to mitigate network attacks. You can offload the work of encryption and decryption to your load balancer so that your EC2 instances can focus on their main work. Elastic Load Balancing has configurable health checks that can be used in conjunction with Amazon CloudWatch to send alerts and take action when specified thresholds are reached. If Auto Scaling is used with Elastic Load Balancing, instances that are launched by Auto Scaling are automatically registered with the load balancer, and instances that are terminated by Auto Scaling are automatically deregistered from
the load balancer.

An ELB load balancer can be internet-facing or internal and can accept HTTP, HTTPS, SSL, and TCP connections, with the ability to terminate SSL to offload that burden from the backend EC2 instances. Elastic Load Balancing can also bring an extra level of security to your network design, because the security groups applied to the Neo4j servers can be configured to accept traffic only from the load balancer, which might help prevent unauthorized access to the instances.

To maintain static connection points from the application to the master database server and to the read replicas, we suggest using two separate load balancers. By doing this, your application will not need to be updated when a new master database is elected, a new slave is added, or one of the nodes fails. Neo4j advertises separate REST endpoints for the master node and the slave nodes so that the load balancers can determine what role each instance in a cluster plays. By creating two load balancers and adding all of the Neo4j instances to both load balancers, we can ensure that during an election the master node load balancer will properly redirect requests to the proper nodes.

Figure 3: Neo4j cluster REST endpoints for the master node and the slave nodes

Master Node Elastic Load Balancer

When the load balancer health check references /db/manage/server/ha/master, the master node will respond with a 200 status code and a body text of "true", and the slaves will return a 404 Not Found with a body text of "false".

"HealthCheck": {
    "HealthyThreshold": 2,
    "Interval": 10,
    "Target": "HTTP:7474/db/manage/server/ha/master",
    "Timeout": 5,
    "UnhealthyThreshold": 2
}

Slave Node Elastic Load Balancer

When the load balancer health check references /db/manage/server/ha/slave, the master node will respond with a 404 status code and a body text of "false", and the slaves will return a 200 status code with a body text of "true".

"HealthCheck": {
    "HealthyThreshold": 2,
    "Interval": 10,
    "Target": "HTTP:7474/db/manage/server/ha/slave",
    "Timeout": 5,
    "UnhealthyThreshold": 2
}

This functionality allows both the master and slave load balancers to respond to Neo4j cluster events without any application changes or administrator involvement.

Database Storage Considerations

AWS provides two fundamental kinds of storage: Amazon Elastic Block Store (EBS) and the EC2 ephemeral instance store. Several EC2 instance types expose multiple ephemeral instance stores that can be used for mirroring data for fault tolerance. However, if the instance stops, fails, or is terminated, all data on the instance store is lost, so strategies need to be in place to address those risks.

High-speed ephemeral storage is beneficial for graph databases that are larger than the physical memory limit of a single EC2 instance. When the database is larger than main memory, specific considerations apply, since it will not be possible for a single Neo4j instance to cache the whole database in RAM. This means that the portions of the graph that are not frequently accessed will have to be served from a storage device rather than from memory. The preference would be to maintain extremely rapid in-memory traversals for the whole graph, no matter its size.

• Amazon EC2 X1 Instances – X1 instances have the lowest price per GB of RAM and are ideally suited for in-memory databases. With up to 1,952 GB of DDR-based memory, 128 vCPUs, and 3,840 GB of SSD storage, the X1 instance is the most performant choice for the largest Neo4j use cases.

• Amazon EC2 I2 Instances – High I/O (I2) instances are optimized to deliver more than 300,000 low-latency IOPS to applications by utilizing up to 8 SSD drives to minimize access time, with a capacity of up to 6,400 GB.

• Amazon EC2
D2 Instances – Dense storage (D2) instances provide an array of up to 24 internal drives with a capacity of 2 TB each. These disks can be configured with multiple RAID types and partition sizes as needed. A D2 instance can provide up to 3.5 GB/s read and 3.1 GB/s write disk throughput with a 2 MB block size, and a capacity of 48 TB.

Neo4j is a shared-nothing architecture and can therefore happily consume instance-based storage. The data loss that would otherwise be inevitable when instances are stopped or terminated can be prevented by clustering. We mostly focus on instance storage here, but other uses exist for Neo4j on Amazon EBS that we explore at the end of this section.

EC2 instances can also use Amazon EBS, which provides persistent block-level storage volumes. Amazon EBS volumes are highly available and reliable storage volumes that can be attached to any running instance in the same Availability Zone. EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance.

Besides being persistent data stores, EBS volumes support point-in-time snapshots, which are persisted to Amazon Simple Storage Service (S3). Snapshots protect data for long-term durability, and they can be used as the starting point for new EBS volumes; the same snapshot can be used to instantiate as many volumes as you want. These snapshots can also be copied across AWS Regions.

For a large database, bringing up a cluster from scratch can take time, because all of the data must be transferred between existing and new Neo4j instances. Amazon EBS provides the ability to mount the data store files from a snapshot of another instance and then recover the Neo4j instance atop that store file before it rejoins the cluster. This reduces the overall time that it takes to bring a new Neo4j instance into the cluster.

Storage Scaling

Now that you have learned the fundamentals of clustering Neo4j, let's look at how the platform can be used to scale out the database. For scaling Neo4j, you need to consider the performance of the database under load and the physical volume of the graph being stored. These two concerns are almost, but not entirely, orthogonal: there are subtle interplays between operational load and low-latency data access as you scale both.

Scaling for Volume

Let's start with understanding scaling for volume, since scaling for performance arises naturally from there. In the Neo4j world, large datasets are those that are substantially larger than main memory. This presents an interesting challenge for performance engineering, since RAM provides the best balance of performance and size for database operations (that is, CPU cache is tiny but faster, and disks are larger but slower). Databases love RAM, and Neo4j is no exception to that rule: the more RAM available to the database, the lower the possibility that it runs at disk speed rather than at memory speed. The majority of this memory can be used by the database, in particular consumed by Neo4j's page cache.

Data Consistency

Neo4j is an ACID transactional database. Any abrupt shutdown of an instance, such as when an EC2 instance unexpectedly dies, will leave the database files in an inconsistent but repairable state. Hence, when booting a new Neo4j instance using files on an existing EBS volume, the database first has to recover to a consistent state before joining the cluster. This process may be much quicker than a full sync from scratch.

Although scaling vertically is always an option, thanks to rapid growth in affordable large-memory machines in the AWS ecosystem and the ease of switching from one instance type to another, scaling horizontally offers its own advantages. Neo4j uses an HA cluster with a pattern called "cache sharding" to maintain high-performance traversals with a dataset that substantially exceeds main memory space. Cache sharding isn't sharding in the traditional sense, since we expect a full data set
to be present on each database instance, both for impeccable fault tolerance and to maintain excellent performance when the memory-to-disk ratio is lopsided. But cache sharding allows Neo4j to aggregate the RAM of the individual instances by consistently routing like queries to the same database endpoint.

In the typical case, where a server supports multiple concurrent clients, access patterns tend to be noisy at first glance, approximating random walks over the graph overall. Yet even at large scale, randomness is never truly dominant. After all, since the graph is structured, it makes sense that queries would be structured too. With multiple concurrent clients, it's possible to discern commonality between them. Whether by geography, username, or another application-specific feature, it's almost always possible to discern a coarse feature of the access pattern on the wire so that like requests can be consistently routed to the same server instance. The solution architecture for this setup is shown in Figure 4.

The technique of consistent routing has been implemented by high-volume web properties for a long time: it is simple to implement, scales well, and is very robust. The strategy used to implement consistent routing will typically vary by domain. Sometimes it's fine just to use session affinity (commonly called "sticky sessions") implemented by Elastic Load Balancing. At other times we'll want to route based on the characteristics of the data set. A simple strategy is that the instance that first serves requests for a particular user continues to serve that user's subsequent requests; by doing this, there is a greater chance that a warm cache will process the requests. Other domain-specific approaches will also work. For example, in a geographical data system, we can route requests about particular locations to specific database instances that will be warm for that location.
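The consistent-routing idea can be sketched in a few lines. This is an illustrative example rather than code from the whitepaper; the instance endpoints and the choice of SHA-256 as the routing hash are assumptions:

```python
# Sketch of consistent routing ("cache sharding") in front of a Neo4j HA
# cluster: requests sharing a coarse routing key (username, region, etc.)
# always land on the same instance, keeping that instance's cache warm
# for its slice of the graph. Endpoints are illustrative placeholders.
import hashlib

INSTANCES = [
    "http://neo4j-a.internal:7474",
    "http://neo4j-b.internal:7474",
    "http://neo4j-c.internal:7474",
]

def route(routing_key: str) -> str:
    """Deterministically map a routing key to one cluster instance."""
    digest = hashlib.sha256(routing_key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(INSTANCES)
    return INSTANCES[index]
```

Because the mapping is deterministic, repeated requests with the same key are served by the same instance; a finer-grained key (or more routing buckets) can be introduced as the cluster grows.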
Figure 4: Solution architecture for consistent routing

Either way, we're increasing the likelihood of the required graph data already being cached in RAM, which makes traversals extremely performant. Adding high-performance block storage to the mix means that even where the cache is cold (e.g., when a machine is restarted or a new part of the graph is being traversed), the cache miss penalty is minimized.

Scaling for Performance

Now that you have seen how Neo4j scales for volume, scaling for performance is simplified by adding more instances. Provided that you can identify suitable workloads that do not decrease cache performance, you can simply add more instances to the Neo4j cluster to support more graph operations at an approximately linear rate.

Cache Sharding

Assuming a uniform distribution, usernames work well with this scheme, and sticky sessions with round-robin load balancing work in almost all domains. In practice, to get the best performance, your choice of routing key for cache sharding must be able to become finer grained as the number of servers grows. For example, you could change the routing based on the names of countries beginning A–G, H–N, and then O–Z, ultimately to a separate Neo4j database instance for each group. By designing the database this way, it's possible to gain more throughput by adding more machines and using each of those machines as efficiently as possible.

Operations

Operating a Neo4j cluster at scale is similar to running other database servers on Amazon EC2. Like all databases, Neo4j uses working files, such as logs, as it executes. Logs can be kept for troubleshooting purposes; however, we recommend limiting them in size or temporal scope. The transaction log is important because it is where Neo4j transactions are made durable before being applied to the data model. This log is particularly important in the backup process. Although you can keep the logical
logs forever (and therefore rebuild your database from scratch merely by replaying all the transactions in those log files), in practice this would require a lot of storage for a database that has run in production for a reasonable amount of time. In practice, logs are maintained on a schedule that is suitable for troubleshooting and that takes the incremental backup schedule into consideration. (We discuss incremental backups in more detail later.)

Monitoring

AWS offers a monitoring service called Amazon CloudWatch, which provides a reliable, scalable, and flexible monitoring solution for EC2 instances and AWS services. CloudWatch enables near-real-time monitoring of multiple EC2 metrics, as well as the ability to monitor customer-supplied metrics. With CloudWatch, alarms and notifications can be triggered based on events, which can quickly alert you to issues and can apply automation to resolve them. Additionally, Amazon CloudWatch Logs provides the ability to collect, store, monitor, and troubleshoot application-level issues. CloudWatch Logs can greatly simplify aggregating the system and application logs from all of the nodes in the Neo4j cluster. CloudWatch Logs is agent-based and enables every EC2 instance in the cluster to perform comprehensive logging.

Configuring CloudWatch with Neo4j

CloudWatch can be configured on an existing EC2 instance[3] or on a new EC2 instance.[4] Once the agent is installed, the /etc/awslogs/awslogs.conf file is configured to monitor the Neo4j log. At the bottom of the awslogs.conf file, the following section will be added:

[neo4j.log]
datetime_format = %Y-%m-%d %H:%M:%S.%f%z
file = /home/ec2-user/neo4j3/neo4j-enterprise-3.0.0-RC1/logs/neo4j.log
log_stream_name = {instance_id}
initial_position = start_of_file
log_group_name = /neo4j/logs

After the awslogs service is started, check /var/log/awslogs.log for any errors. Configuring metrics and alerts for Neo4j is addressed in this Neo4j
knowledge base article.[5]

Online Backup

Neo4j can be backed up while it continues to serve user traffic (called "online" backup). Neo4j offers two backup options: full or incremental. These strategies can be combined to provide the best mix of safety and efficiency. Depending on the risk profile of the system, a typical strategy might be daily full backups with hourly incremental backups, or weekly full backups with daily incremental backups.

As the name suggests, a full backup clones an entire database. These are the characteristics of a full backup:

• Copies database store files
• Does not take locks
• Replays transactions run after the backup started, until the end of the store file copy

At the end of a full backup, there is a consistent database image on disk. This backup file can be safely stored away, and recovering to this backup is as simple as copying the database files back into the Neo4j data directory (typically /data/graphdb).

After the backup has been created, the recommendation is for the backup to be copied from the EC2 instance that ran the process into stable long-term storage. Amazon S3 provides a range of suitable archive storage platforms, depending on your needs. The backup can be copied to Amazon S3 directly, or you can achieve the same level of durability by using an EBS snapshot, which is stored in Amazon S3 automatically.

Amazon EBS is a network-shared storage service that can be mounted from any EC2 instance. Amazon EBS provides persistent block-level storage volumes that are automatically replicated within their Availability Zone to protect from component failure, offering high availability and durability. A snapshot can be created from an EBS volume, which not only provides the ability to restore data in the future but also provides the ability to mount that volume to another EC2 instance. This process can greatly decrease the time that it takes to add an additional Neo4j node to
the cluster. A side benefit of EBS snapshots is that they are persisted to Amazon S3, which means that they are protected for long-term durability. Volumes can be created from snapshots in any Availability Zone in the Region, and snapshots can also be copied across Regions to provide an even greater level of durability.

Amazon S3 provides three tiers of storage, optimized for cost versus frequency of access. Amazon also provides lifecycle policies that can automatically transition objects from Amazon S3 Standard to Amazon S3 Infrequent Access and AWS Glacier (for long-term archive) after a specific amount of time has elapsed. Lifecycle policies streamline the archival and cost-saving process so that you don't have to manually transition objects or pay increased storage fees for cold data. In addition to simplifying storage maintenance, Amazon S3 also supports versioning, which can help organize redundant backups based on timestamp.

• Standard – Amazon S3 Standard offers high-durability, high-availability, and high-performance object storage for frequently accessed data. Because it delivers low latency and high throughput, Standard is perfect for a wide variety of use cases.

• Infrequent Access – Infrequent Access (Standard-IA) is an Amazon S3 storage class for data that is accessed less frequently but requires rapid access when needed. It offers high durability, throughput, and low latency like Amazon S3 Standard, with a low per-GB storage price and a per-GB retrieval fee. This combination of low cost and high performance makes it a sensible option for backups and as a data store for disaster recovery.

• Archive – AWS Glacier is a low-cost, long-term storage service that provides secure, durable storage intended for data backup and archival. AWS Glacier provides reliable long-term storage for your data and eliminates the administrative burdens of operating and scaling storage. Using AWS Glacier, Neo4j backup operators never have to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and repair, or time-consuming hardware migrations. Long-term storage on AWS Glacier is the least expensive storage tier per GB; however, the SLA for retrieving data has a much longer latency, typically in the 3- to 5-hour range, whereas the other storage tiers have retrieval times measured in milliseconds.

By default, a consistency check is run at the end of each full backup to ensure that the files being moved to long-term storage will be usable upon recovery. The consistency checker is a relatively intensive operation, since it makes a thorough check of the graph structure at the individual record level. Running the consistency checker on the same EC2 instances as your production cluster will therefore result in performance degradation, so it is advisable to run this process on another instance. It favors EC2 instances with high I/O capacity and large RAM, such as the i2.8xlarge instance. However, this instance doesn't need to be continuously active; it needs only to be instantiated for the duration of the backup and consistency check. Any failure during backup (such as the unscheduled termination of the underlying EC2 instance) means that the backup must be repeated.

After you have a full backup, you can then take incremental backups against that state. An incremental backup is performed whenever an existing backup directory is specified, which the backup tool will automatically detect. The backup tool will then copy any new transactions from the Neo4j instance and apply them to the backup. The result will be an updated backup that is consistent with the current server state; specifically, it is one that:

• Requires a full backup to be completed first
• Replays logs of transactions since the last backup

Restoring from a backup is very easy. This is an important operational affordance, since
restoring is typically done when a catastrophe has occurred. To restore, you simply do the following tasks:

1. Make sure Neo4j is not running.
2. Replace the /graphdb directory with the contents of the backup.
3. Start Neo4j. (If clustered, start the first instance and then rolling-start the remaining instances.)

Now that you have seen the nuts and bolts of a Neo4j backup on AWS, you can focus on having the appropriate backup hygiene by following these recommendations:

• Take regular, periodic full backups with an I/O- and RAM-optimized EC2 instance. Repeat these backups if they fail. Move the backup to Amazon S3 (or to Amazon EBS).
• Take incremental backups several times a day, ensuring the Neo4j log files are kept for longer than this period. Ensure that the backups are transmitted to Amazon S3 (or to Amazon EBS).

Disaster Recovery

After you have a Neo4j backup on stable long-term storage, disaster recovery (DR) is greatly simplified. If an incident occurs that, for whatever reason, wipes out all your active Neo4j instances and irrevocably wipes all instance storage, then you must quickly work to restore service. Fortunately, DR with Neo4j on AWS is straightforward: you can place backups in long-term stable storage and restore them by a simple file copy in the event of a disaster. From there, you can seed a new cluster of Neo4j instances and resume service. Any transactions that occurred between the backup and the disaster will have been lost.

Neo4j clusters can easily span multiple Availability Zones within the same VPC to create private, logically isolated networks. We recommend a design that deploys Neo4j across multiple Availability Zones. ELB load balancers can operate across multiple Availability Zones, which enables these high-availability designs to function seamlessly.

In addition to using a multiple-Availability Zone design, it is also possible to use multiple Regions. One useful DR pattern is to host an instance or instances of Neo4j in other AWS Regions in slave and read-only mode. All slaves in Neo4j, whether they are read-write, read-only, or slave-only, are replicated asynchronously from the master. This asynchronous replication allows for regional diversity and availability of the database. Asynchronous replication across Regions is quite normal with Neo4j; however, typically one Region is designated as the master Region, and other Regions are designated as slave Regions that contain only slave-only, read-only instances. In the extremely rare event of a regional failure, there is an administrative procedure to change one of the slave-only Regions to be the master.

It is important to note that slave and read-only instances never volunteer to take on important roles in the Neo4j HA cluster, but they are fed a stream of transactions from that cluster. This means that such instances can be used as a means of keeping a live backup of a cluster, with a minimal downtime window between disaster and recovery. On disaster, we simply take the data store directory from one of the remote DR instances and seed a new cluster.

Conclusion

The AWS Cloud provides a unique platform for running Neo4j clusters at scale. With capacity that can meet dynamic needs, costs based on usage, and easy integration with other AWS services such as Amazon CloudWatch, AWS CloudFormation, Amazon EBS, and Amazon S3, the AWS Cloud enables you to reliably run Neo4j at full scale without having to manage the hardware yourself. By using AWS services to complement the Neo4j graph database, AWS provides a convenient platform for developing scalable, high-performance applications atop Neo4j. Customers who are interested in deploying Neo4j Enterprise on AWS now have access to a broad set of services beyond Amazon EC2, such as Elastic Load Balancing, Amazon EBS, Amazon CloudWatch, and Amazon S3. The combination of these services enables the creation of a reliable, secure, cost-effective, and performance-oriented graph database.
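As a concrete closing illustration, the three restore steps described earlier under Online Backup (stop Neo4j, replace the data directory with the backup, restart) can be sketched as follows. This is an assumption-laden sketch, not code from the whitepaper: the directory paths are placeholders, and the start/stop commands depend on your installation, so they appear only as comments.

```python
# Hypothetical sketch of restoring a Neo4j data directory from a backup.
# The directory layout is an assumption for illustration only.
import shutil
from pathlib import Path

def restore_from_backup(backup_dir: str, data_dir: str) -> None:
    backup, data = Path(backup_dir), Path(data_dir)
    # 1. Make sure Neo4j is not running before touching its files
    #    (e.g., `neo4j stop`; the command depends on your installation).
    # 2. Replace the data directory with the contents of the backup.
    if data.exists():
        shutil.rmtree(data)
    shutil.copytree(backup, data)
    # 3. Start Neo4j again; if clustered, start the first instance and
    #    then rolling-start the remaining instances.
```

In a cluster, the same file copy seeds the first instance of the new cluster, after which the remaining instances can be rolling-started and will catch up as described in the Clustering section.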
Contributors

The following individuals and organizations contributed to this document:

• Justin De Castri, Solutions Architect, AWS
• David Fauth, Field Engineer, Neo Technology
• Ian Robinson, Engineer, Neo Technology
• Jim Webber, Chief Scientist, Neo Technology

Further Reading

In addition to the depth of high-quality information available on AWS, there are several books on Neo4j that can help you get started with the database:

• Graph Databases (O'Reilly): full e-book version available for free at http://graphdatabases.com
• Learning Neo4j (Packt): http://neo4j.com/books/learning-neo4j/
• The Neo4j manual (http://neo4j.com/docs/stable/) has a wealth of information about Neo4j, the Cypher query language, the programmatic APIs, and the operational surface.

Notes

1. Availability Zones are distinct geographical locations that are engineered to be insulated from failures in other Availability Zones. They use separate power grids, ISPs, and cooling systems, and they are placed on different fault lines and flood plains when possible. All of this separation and isolation is designed to deliver a level of protection, from the failure of a single instance to the failure of an entire Availability Zone.

2. This may change in future versions of Neo4j. Distributed transaction processing, which is at the heart of Neo4j clustering, is a fast-moving area in computer science, and the Neo4j team is very much involved with developing novel protocols for future releases.

3. http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/QuickStartEC2Instance.html

4. http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/EC2NewInstanceCWL.html

5. https://neo4j.com/developer/kb/amazon-cloudwatch-configuration-for-neo4j-logs/

SaaS Solutions on AWS Tenant Isolation
Architectures January 2016 This paper has been archived For the most update content see https://d1awsstaticcom/whitepapers/saastenant isolationstrategiespdfArchived © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its c ustomers Archived Contents Introduction 1 Common Solution Components 1 Security and Networking (Tenant Isolation Modeling) 1 Identity Management User Authentication and Authorization 2 Monitoring Logging and Application Performance Management 2 Analytics 3 Configuration Management and Provisioning 4 Storage Backup and Restore Capabilities 4 AWS Tagging Strategy 5 Chargeback Module 6 SaaS Solutions – Tenant Isolation Architecture Patterns 7 Model # 1 – Tenant Isolation at the AWS Account Layer 8 Model # 2 – Tenant Isolation at the Amazon VPC Layer 11 Model # 3 – Tenant Isolation at Amazon VPC Subnet Layer 14 Model # 4 – Tenant Isolation at the Container Layer 15 Model # 5 – Tenant Isolation at the Application Layer 17 General Recommendations 20 Conclusion 21 Contributors 22 Further Reading 22 APN Partner Solutions 22 Additional Resources 23 Archived Abstract Increasingly the mode of delivery for enterprise solutions is turning toward the software as a service 
(SaaS) model, but architecting a SaaS solution can be challenging. There are multiple aspects that need to be taken care of, and a variety of options for deploying SaaS solutions on AWS. This paper covers the different SaaS deployment models and the combination of AWS services and AWS Partner Network (APN) partner solutions that can be used to achieve a scalable, available, secure, performant, and cost-effective SaaS offering.

AWS now offers a structured AWS SaaS Partner Program to help you build, launch, and grow SaaS solutions on AWS. As your business evolves, AWS will be there to provide the business and technical enablement support you need. Please review the SaaS Partner Program website for more details.1

Amazon Web Services – SaaS Solutions on AWS: Tenant Isolation Architectures Page 1 of 26

Introduction

There are a variety of solutions that can be deployed in a SaaS model, and these share a number of similarities and common patterns. In this paper we will discuss:

• Common solution components – These are aspects that we recommend handling separately from the core solution-related functional components, such as billing, monitoring, and analytics. We will discuss these components in detail.

• SaaS solution tenant isolation architecture patterns – A solution can be deployed in multiple ways on AWS. We will discuss typical models that help with the requirements around a multi-tenant SaaS deployment, along with considerations for each of those cases.

This white paper focuses on the technology and architecture aspects of SaaS deployments and does not attempt to address business- and process-related aspects such as software vendor licensing, SLAs, pricing models, and DevOps practice considerations.

Common Solution Components

In addition to building the core functional components of your SaaS solution, we highly recommend that you build additional supporting components that will help in future-proofing your solution and making it easier to manage. Building additional supporting
components will also enable you to easily grow and add more tenants over time. The following sections discuss some of the recommended supporting components for SaaS solution setups.

Security and Networking (Tenant Isolation Modeling)

The first step in any multi-tenant system design is to define a strategy to keep the tenants secure and isolated from one another. This may include security considerations such as defining segregation at the network/storage layer, encrypting data at rest or in transit, managing keys and certificates safely, and even managing application-level security constructs. There are a number of AWS services you can use to help address security considerations at each level, including AWS CloudHSM, AWS CloudTrail, Amazon VPC, AWS WAF, Amazon Inspector, Amazon CloudWatch, and Amazon CloudWatch Logs.2 By using native AWS services such as these, you can define a model that matches the solution's security and networking requirements. In addition to AWS native services, many customers also make use of APN Partner offerings in the infrastructure security space to augment their security posture and add capabilities like intrusion detection systems (IDS)/intrusion prevention systems (IPS).3

Identity Management, User Authentication, and Authorization

It's important to decide on the strategy for authenticating and authorizing users to manage both the AWS services and the SaaS application itself. For AWS services, you can use AWS Identity and Access Management (IAM) users, IAM roles, Amazon Elastic Compute Cloud (Amazon EC2) roles, social identities, directory/LDAP users, and even federated identities using SAML-based integrations.4 Likewise, for your application you have multiple ways to authenticate users. We recommend building a layer that supports your application authentication requirements. You might consider Amazon Cognito-based authentication for mobile users, and you can also look to APN Partner offerings in the identity and access control space for managing authentication across different identity providers.5
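Whatever identity provider you choose, the application layer still has to enforce that an authenticated user only touches their own tenant's data. The following is a minimal, illustrative sketch of that check; the function name and token-claim layout are hypothetical and not part of any AWS API.

```python
# Illustrative only: a minimal application-layer tenant check.
# The claim names ("sub", "tenant_id") are hypothetical examples of what an
# identity layer (e.g., a SAML or Cognito token) might surface.

def authorize_tenant_access(token_claims: dict, resource_owner_tenant: str) -> bool:
    """Allow access only when the caller's tenant claim matches the tenant
    that owns the requested resource."""
    caller_tenant = token_claims.get("tenant_id")
    return caller_tenant is not None and caller_tenant == resource_owner_tenant

# Example: a user from tenant-a must not be able to read tenant-b's data.
claims = {"sub": "user-123", "tenant_id": "tenant-a"}
assert authorize_tenant_access(claims, "tenant-a")
assert not authorize_tenant_access(claims, "tenant-b")
```

Placing this check in one shared authorization layer, rather than in each service, keeps the tenant-isolation rules consistent as the solution grows.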
Monitoring, Logging, and Application Performance Management

You should have monitoring enabled at multiple layers, not only to help diagnose issues but also to enable proactive measures that avoid issues down the road. You can benefit from utilizing the data from Amazon CloudWatch, which enables detailed monitoring for critical infrastructure and lets you configure alarms to notify you of any issues.6 You could also make use of AWS Config, which provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance.7 For application-level monitoring, you could use the Amazon CloudWatch Logs functionality to stream logs to the service in real time; in addition, you can search for patterns, track the number of errors that occur in your application logs, and configure Amazon CloudWatch to send you a notification whenever the rate of errors exceeds a threshold you specify. Many companies also use APN Partner offerings in the logging and monitoring space to monitor application performance aspects.8

Analytics

Most SaaS solutions have a wealth of raw data, including application logs, user access logs, and billing-related data, which can generally provide a lot of insight if properly analyzed. In addition to batch-oriented analysis, you can do real-time analytics to see what kinds of actions are being invoked by various tenants on the platform, or look at real-time infrastructure-related metrics to detect any unexpected behavior and preempt any future problems. You can use AWS services such as Amazon Elastic MapReduce (Amazon EMR), Amazon Redshift, Amazon Kinesis, Amazon Machine Learning, Amazon QuickSight, Amazon Simple Storage Service (Amazon S3), and Amazon EC2 Spot Instances to build
these types of capabilities.9

Analytics is normally an ancillary function of a platform in the early stages, but as soon as multiple tenants are onboarded to a SaaS platform, analytics quickly becomes a core function for detecting and understanding usage patterns, providing recommendations, and driving decisions. We recommend that you plan for this layer early in the solution development cycle. Figure 1 shows some of the AWS big data services and their capabilities, ranging from data ingestion to storage to data analytics/processing.

Figure 1: AWS big data and analytics services

Configuration Management and Provisioning

AWS provides a number of possibilities for automating solution deployments. You have the ability to bake some deployment tasks into the Amazon Machine Images (AMIs) themselves, and you can automate more configurable or frequent changes using various other means:

One-time tasks, like OS hardening, setting up specific versions of runtime environments that do not change without an application recertification process (like a Java upgrade), or even time-consuming installations (like middleware/database setup), can be baked into the AMI itself.

To handle more frequently changing aspects of deployment, like code updates from a code repository, boot-time tasks (like joining a domain/cluster), and certain environment-specific configurations (like different parameters for dev/test/production), you can use custom scripts in the EC2 instance's user data section, or AWS services such as AWS CodeCommit, AWS CodePipeline, and AWS CodeDeploy.10

For complete stack spin-up, a higher level of automation can be achieved by using AWS CloudFormation, which gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, and enables them to provision and update those resources in an orderly and predictable fashion.11 Depending on your
requirements, AWS Elastic Beanstalk and AWS OpsWorks can also help with quick deployments and automation.12 With the right mix of segregation across different types of tasks, you can achieve the correct balance between faster boot time (often needed for auto-scaled layers) and a configurable, automated setup (needed for flexible deployments).

Storage, Backup, and Restore Capabilities

Most AWS services have mechanisms in place to perform backup so that you can revert to a last known stable state if any newer changes need to be backed out. Features such as Amazon EC2 AMI creation or snapshotting (Amazon EBS, Amazon RDS, and Amazon Redshift snapshots) can potentially support a majority of backup requirements. However, for advanced needs, such as the need to quiesce a file system and then take a consistent snapshot of an active database, you can use third-party backup tools, many of which are available on AWS Marketplace.13

AWS Tagging Strategy

To help you manage instances, images, and other Amazon EC2 resources, you can assign your own metadata to each resource in the form of tags. We recommend that you adopt a tagging strategy before you begin to roll out your SaaS solution. Each tag consists of a key and an optional value, both of which you define. You can also have multiple tags on a single resource. There are two main uses of tags:

1. General management of resources: Tags enable you to categorize your AWS resources in different ways, such as by purpose, owner, or environment. This can simplify filtering and searching across different resources. You can also use resource groups to create a custom console that organizes and consolidates the information you need based on your project and the resources you use.14 You can also create a resource group to view resources from different regions on the same screen, as shown in Figure 2.

Figure 2: AWS resource groups
2. Billing segregation: Tags enable cost allocation reports and allow you to get cost segregation based on a particular business unit or environment, depending on the tagging strategy used.15 This, along with AWS Cost Explorer, can greatly simplify billing-data visibility and reports.16

Chargeback Module

Another important aspect of a multi-tenant system is cost segregation across tenants based on their usage. From an AWS resources perspective, tagging can be a great resource to help you separate out usage at a macro level. However, for most SaaS solutions, greater controls are needed for usage monitoring, so we recommend that you build your own custom billing module as needed. A billing module could look like the high-level generic example shown in Figure 3.

Figure 3: Sample metering and chargeback module

• All of the resources that are launched, stopped, and terminated are tracked, and the data is then sent to an Amazon Kinesis stream.

• Granular measurements, such as the number of API requests made or the time taken to process any request, are tracked, and the data is then fed into the Kinesis stream in real time.

• Two types of consumer applications can process the data stored in Amazon Kinesis:

o A consumer fleet that generates real-time metrics on how the system is being utilized by various tenants. This may help you make decisions such as whether to throttle a particular tenant's usage or perform other corrective actions based on real-time feeds.

o A second Kinesis consumer fleet could aggregate the continuous feed and generate monthly or quarterly usage reports for billing. It could also provide usage analytics for each tenant by processing the raw data and storing it in Amazon Redshift. For historical data processing or transformation, Amazon EMR can be used.
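The aggregation step that the billing consumer fleet performs can be sketched in a few lines. This is illustrative only: the record fields (tenant_id, metric, units) are a hypothetical shape for the usage events read from the stream, not a defined AWS format.

```python
from collections import defaultdict

# Illustrative only: the per-tenant aggregation a chargeback consumer might
# apply to usage records read from the Kinesis stream. The field names are
# hypothetical.

def aggregate_usage(records):
    """Sum metered units per tenant and per metric (e.g., API requests,
    instance-hours) to produce a chargeback summary."""
    summary = defaultdict(lambda: defaultdict(float))
    for rec in records:
        summary[rec["tenant_id"]][rec["metric"]] += rec["units"]
    return {tenant: dict(metrics) for tenant, metrics in summary.items()}

records = [
    {"tenant_id": "t1", "metric": "api_requests", "units": 120},
    {"tenant_id": "t1", "metric": "api_requests", "units": 80},
    {"tenant_id": "t2", "metric": "instance_hours", "units": 1.5},
]
assert aggregate_usage(records) == {
    "t1": {"api_requests": 200.0},
    "t2": {"instance_hours": 1.5},
}
```

In a real deployment this logic would run inside the consumer fleet, with the monthly summaries persisted to Amazon Redshift for reporting, as described above.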
SaaS Solutions – Tenant Isolation Architecture Patterns

There are multiple approaches to deploying a packaged solution on AWS, ranging from a fully isolated deployment to a completely shared SaaS-type architecture, with many other deployment options in between. In order to support any of the deployment options, the solution or application itself should be able to support that SaaS multi-tenancy model, which is the basic assumption we will make here before diving deep into the AWS-specific components of the different deployment models. The decision to pick a particular AWS deployment model depends on multiple criteria, including:

• Level of segregation across tenants and deployments
• Application scalability aspects across tenant-specific stacks
• Level of tenant-specific application customizations
• Cost of deployment
• Operations and management efforts
• End-tenant metering and billing aspects

The different choices are a "Rubik's cube" of options that impact one another in potentially unforeseen ways. The goal of this paper is to help with these multi-dimensional, unforeseen impacts. The following sections describe some of the SaaS deployment models on AWS and include pros and cons for each option to help guide you to the optimal solution given your business and technical requirements:

Model #1 – Tenant Isolation at the AWS Account Layer
Model #2 – Tenant Isolation at the Amazon VPC Layer
Model #3 – Tenant Isolation at the Amazon VPC Subnet Layer
Model #4 – Tenant Isolation at the Container Layer
Model #5 – Tenant Isolation at the Application Layer

Model #1 – Tenant Isolation at the AWS Account Layer

In this model, all the tenants have their own individual AWS accounts and are isolated to that extent. In essence, this is not truly a multi-tenant SaaS solution, but it can be treated as a managed solution on AWS.

Figure 4: Tenant isolation at the AWS account layer

Pros:

• Tenants are completely separated out, and they do not
have any overlap, which can provide each tenant with a greater sense of security.

• Solution or general configuration customizations are easy, because every deployment is specific to a tenant (or organization).

• It's easy to track AWS usage, because a separate monthly bill is generated for each tenant (or organization).

Cons:

• This option lacks the resource and cost optimizations that can be achieved through the economies of scale provided by a multi-tenant SaaS model.

• With a large number of tenants, it can become challenging to manage separate AWS accounts and individual tenant deployments from an operations perspective.

• As a best practice, all the AWS account root logins should have multi-factor authentication (MFA) enabled. With ever-increasing individual tenant accounts, it becomes difficult to manage all the MFA devices.

Best Practices:

• Centralized operations and management – IAM supports delegating access across AWS accounts that you own by using IAM roles.17 Using this functionality, you can manage all tenants' AWS accounts through your own common AWS account by assuming roles to perform various actions (such as launching a new stack using AWS CloudFormation or updating a security group configuration), instead of having to log in to each AWS account individually. You can utilize this functionality through the AWS Management Console, the AWS Command Line Interface (AWS CLI), and the API.18 Figures 5 and 6 provide a snapshot of how to set this up from the AWS Management Console.

Figure 5: Cross-account IAM role-based access setup

Figure 6: Cross-account IAM role-based access switching

• Consolidated AWS billing – You can use the Consolidated Billing feature to consolidate payment for multiple AWS accounts within your organization by designating one of them to be the payer account.19
With Consolidated Billing, you can see a combined view of AWS charges incurred by all accounts, and you can get a detailed cost report for each individual AWS account associated with your payer account.

Figure 7: AWS consolidated billing

• VPC peering – If you would like to have a central set of services (say, for backup, anti-virus, OS patching, and so on), you can use a VPC peering connection in the same AWS region between the common AWS account that hosts these shared services and the respective tenant's AWS account. However, note that you are charged for data transfer over a VPC peering connection at the same rate as data transfer across Availability Zones. Therefore, you should factor this cost into the solution's overall cost modeling exercise.

Model #2 – Tenant Isolation at the Amazon VPC Layer

In this model, all the tenant solution deployments are in the same AWS account, but the level of separation is at the VPC layer. For every tenant deployment there's a separate VPC, which provides logical separation between tenants.

Figure 8: Tenant isolation at the VPC layer

Pros:

• Everything is in a single account, so this model is easier to manage than a multi-account setup.

• There's appropriate isolation between different tenants, because each one lives in a different VPC.

• Compared with the previous model, this model provides better economies of scale and improved utilization of Amazon EC2 Reserved Instances, because all reservations and volume pricing constructs apply to the same AWS account. However, if Consolidated Billing is used, this model provides no advantage over the previous model, because Consolidated Billing treats all the accounts on the consolidated bill as one account.

Cons:

• Amazon VPC-related limits will have to be closely monitored, both from an overall account
perspective and from each tenant's VPC perspective.

• If all the VPCs need connectivity back to an on-premises setup, then managing individual VPN connections may become a challenge.

• Even though it's the same account, if a shared set of services needs to be provided (such as backups, anti-virus updates, OS updates, and so forth), then VPC peering will need to be set up from the shared services VPC to all tenant VPCs.

• Security groups are tied to a VPC, so depending on the deployment architecture, you may have to create and manage multiple security groups for each VPC.

• AWS supports tagging, as described in the Amazon EC2 documentation.20 However, if you need to separate usage and costs for services and resources beyond the available tagging support, you should either build a custom chargeback layer or have a separate AWS account strategy to help clearly demarcate individual tenant usage.

Best Practices:

In this setup, use tags to separate out AWS costs for each of the tenant deployments. You can define resource groups and manage tags there instead of managing them at the individual resource level.21 Once you have defined the tagging strategy, you can use the monthly cost allocation reports to view a breakdown of AWS costs by tags and segregate it as per your needs (see the sample report in Figure 9).22

Figure 9: Sample cost allocation report

Model #3 – Tenant Isolation at the Amazon VPC Subnet Layer

In this model, we discuss the case where there is a single AWS account and a single VPC for all tenant deployments. The isolation happens at the subnet level, and each tenant has their own separate version of an application or solution, with no sharing across tenants. Figure 10 illustrates this type of deployment.

Figure 10: Tenant isolation at the VPC subnet layer

Pros:

• You don't need to set up
VPC peering for intercommunication.

• VPN and AWS Direct Connect connectivity to a single on-premises site is simplified, as there is a single VPC.23

Cons:

• Isolation between tenants has to be managed at the subnet level, so Amazon VPC network access control lists (NACLs) and security groups need to be carefully managed.

• VPC limits are harder to manage as the number of tenants increases. Furthermore, you can provision only a limited number of subnets under the VPC CIDR (Classless Inter-Domain Routing) block, depending on its size, and the CIDR cannot be resized once created.

• Changing a VPC-level setting (say, the DHCP options set) affects all tenants, even though they have their individual deployments.

• There are limits on the number of security groups and the number of rules per security group at the VPC level, so managing those limits with multiple tenants in the same VPC may be complicated.

Best Practices:

• To access public AWS service endpoints (like Amazon S3), utilize VPC endpoints. This will scale better than routing the traffic for multiple tenants through a network address translation (NAT) instance.

• To avoid hitting security group-related limits in a VPC:

o Consolidate security groups to stay under the limit.

o Don't use security group cross-references; instead, refer to CIDR ranges.

Model #4 – Tenant Isolation at the Container Layer

With the advent of container-based deployment, it is now possible to have a single instance and slice it for multiple tenant applications based on requirements. The Amazon EC2 Container Service (Amazon ECS) helps you easily set up and manage Docker container-based deployments, and it could be used to deploy tenant-specific solution components in individual containers.24 Figure 11 illustrates a scenario where different tenants' containers are deployed in the same VPC.

Figure 11: Tenant isolation at the container layer

Pros:

• You can achieve a higher level of resource utilization by having a container-based model on shared instances.

• It's easier to manage clusters at scale, as Amazon ECS takes away the heavy lifting involved in cluster management and general fault tolerance.

• Simplified deployments are possible by testing a Docker image in any test/development environment and then using simple CLI-based options to put it directly into production.

• Amazon ECS deploys images on your own Amazon EC2 instances, which can be further segmented and controlled using VPC-based controls. This, along with Docker's own isolation model, meets the security requirements of most multi-tenant applications.

Cons:

• You can use Amazon EC2 and VPC security groups to limit the traffic on an Amazon EC2 instance. However, you need to manage the container configuration to control which ports are open. Managing those aspects may become a little tedious at scale.

• Tags do not work at the Amazon ECS task (container) level, so separating costs based on tags will not work, and a custom billing layer will be needed.

Best Practices:

• To secure container communication beyond the controls provided by VPC security groups, you could create a software-defined network for the containers, using point-to-point tunneling with Generic Routing Encapsulation (GRE) to route traffic between the container-based subnets.

• To architect auto-scaling functionality using Amazon ECS, use a combination of Amazon CloudWatch and AWS Lambda-based container deployment.25 In this setup, an AWS Lambda function is triggered by an Amazon CloudWatch alarm to automatically add another Amazon ECS task and scale dynamically, as shown in Figure 12.

Figure 12: Auto-scaling architecture for container-based deployment
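The scale-out step that the Lambda function performs in this setup can be sketched as follows. This is a minimal sketch, not a production implementation: the cluster and service names are hypothetical, and the client calls mirror the boto3 ECS API (describe_services, update_service). The ECS client is passed in as a parameter so the scaling logic can be unit tested with a stub.

```python
# Illustrative only: the scale-out step an AWS Lambda function might perform
# when triggered by a CloudWatch alarm. "saas-cluster" and "tenant-web" are
# hypothetical names; the calls mirror the boto3 ECS client API.

def scale_out(ecs_client, cluster="saas-cluster", service="tenant-web",
              step=1, max_count=10):
    """Raise the ECS service's desired task count by `step`, capped at
    `max_count`, and return the new target count."""
    current = ecs_client.describe_services(
        cluster=cluster, services=[service]
    )["services"][0]["desiredCount"]
    target = min(current + step, max_count)
    if target != current:
        ecs_client.update_service(cluster=cluster, service=service,
                                  desiredCount=target)
    return target

def handler(event, context):
    # Entry point wired to the CloudWatch alarm; boto3 is imported here so
    # the scaling logic above can be tested without AWS access.
    import boto3
    return scale_out(boto3.client("ecs"))
```

A matching scale-in function, triggered by a low-utilization alarm, would decrement the desired count with a floor of one task per tenant service.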
Model #5 – Tenant Isolation at the Application Layer

This model represents a major shift from the earlier discussed models; now the application or solution deployment is shared across different tenants. This is a radical change and a movement toward a true multi-tenant SaaS model. However, to achieve this model, the application itself must be designed to support multi-tenancy. For example, if we take a typical 3-tier application with shared web and application layers, there can be some subtle variations at the database layer (which, for example, could be either Amazon RDS or a database on an Amazon EC2 instance):

1. Separate databases: Each tenant has a different database, for maximum isolation. To enable the application layers to pick up the right database for each tenant's request, you will need to maintain metadata in a separate store (such as Amazon DynamoDB) where the mapping of a tenant to its database is managed.

2. Separate tables/schemas: Different database flavors have different constructs, but another possible deployment model is that all tenants' data resides in the same database, with the data tied to different schemas or tables to provide a level of isolation.

3. Shared database, shared schema/tables: In this model, all tenants' data is placed together. A unique tenant ID column separates the data records for each tenant. Whenever a new tenant needs to be added to the system, a new tenant ID is generated, additional capacity is provisioned, and traffic routing is started to an existing or new stack.

Pros:

• You can achieve economies of scale and better resource usage and optimization across the entire stack. As a result, this can often be the cheapest option to operate at scale when you have shared components across the architecture.

o For example, having one large multi-tenant Amazon DynamoDB table that can absorb request spikes can be much cheaper than having separately provisioned, higher-throughput Amazon DynamoDB tables for individual tenants.

• It's easy to manage and operate the stack
because it is a single deployment. Any changes or enhancements that need to be made are rolled out at once, rather than having to manage n different environments.

• Network connectivity is simplified, and the challenges around VPC limits in the other models are also reduced, because it's a single VPC deployment (although it may be bigger in size).

• All shared services (such as patching, OS updates, and anti-virus) are also centralized and deployed as a single unit for all the tenants.

Cons:

• Applications need to be multi-tenant aware, so existing applications may have to be re-architected.

• Depending on certain compliance and security requirements, co-hosting tenants with different security profiles may not be possible.

Best Practices:

To implement this model successfully, consider the following important aspects:

• Oftentimes, different tenants have their own specific needs for certain features or customizations:

o Try to group tenants according to their requirements; tenants with similar needs should be put on the same deployment.

o Try to build the most asked-for features into the core platform or application itself, and avoid customizations at the tenant level for long-term maintainability.

• Closely monitor the stack for each tenant's activities. If necessary, you should be able to throttle or deprioritize any particular tenant's actions to avoid affecting other tenants adversely.

• Ensure that you have the ability to scale the stacks up and down automatically to address the changing needs of the tenants on a particular stack. This should be built into the architecture rather than being done by manual updates.

• Use role-based and fine-grained access controls to limit a tenant's access across the entire stack. Amazon DynamoDB provides fine-grained access controls, which enable you to determine who can access individual data items and attributes in Amazon DynamoDB
tables and indexes, and the actions that can be performed on them. Using Amazon DynamoDB in SaaS architectures can greatly reduce complexity.

• Another important aspect to handle is AWS cost management across tenants according to their usage. To handle this, we recommend that you design a custom billing layer (as explained and outlined in previous sections) and incorporate it into the solution.

General Recommendations

Consider the following general best practices for packaged SaaS solution design and delivery on AWS:

• Instead of building large monolithic application architectures, it's often helpful to create smaller, independent, single-responsibility services that can be combined to achieve the overall business functionality. These smaller, microservices-based architectures can be easier to manage and can scale independently. You could use services like Amazon ECS and AWS Lambda to create these smaller components. Amazon Simple Queue Service (Amazon SQS) could also potentially help decouple microservices by introducing a queuing layer in between for communication.26 You can also use Amazon API Gateway to enable API-based interactions between the layers, thereby keeping them integrated just at the interface layer.27 To learn more about this microservices-based architecture pattern, see the blog post SquirrelBin: A Serverless Microservice Using AWS Lambda.28

• Build abstraction at each layer so that you can future-proof your solution by being able to change the underlying implementation without affecting the public interfaces. Consider aspects such as where you want the solution to be in the next few years, and think about technology trends. For example, mobile was not as big five years ago as it is today. Plan for the future and design your solution in a manner that is scalable and extensible to meet future needs.

• Define a release management process to enable frequent quality
updates to the solution. AWS CodeCommit, AWS CodePipeline, and AWS CodeDeploy can help with this aspect of your deployment.

• Keep tenant-specific customizations to a minimum and try to build most of the features into the platform itself. For tenant-specific configuration metadata, Amazon DynamoDB can be useful.

• Build an API for your solution or platform if it needs to integrate with third-party systems.

• Use IAM roles for Amazon EC2 instead of using hard-coded credentials within the various application components.

• Find ways to cost-optimize your solution. For instance, you can use Reserved or Spot Instances, adopt AWS Lambda to design an event-driven architecture, or use Amazon ECS to containerize smaller functional blocks.

• Utilize Auto Scaling to dynamically scale your environment up and down as per load.

• Benchmark application performance to right-size your Amazon EC2 instances and their count.

• Make use of AWS Trusted Advisor recommendations to further optimize your AWS deployment.29

• There are often custom capabilities that you may want to build into your platform that could be supplied by a packaged solution from an APN Technology Partner. Look for opportunities to pick and choose what to build on your own versus utilizing an existing solution. Leverage various APN Partner solutions and offerings and AWS Marketplace to augment the features and functionalities provided by AWS services.

• Enroll in the AWS SaaS Partner Program to learn, build, and grow your SaaS business on AWS.30

• It's important to ensure that your solution can be effectively managed on AWS by your firm. Another option is to work with an AWS MSP Consulting Partner.31

• Validate your operational model using the AWS operational checklist.32

• Validate your security model using the AWS auditing security checklist.33
Conclusion

Every packaged SaaS solution is different in nature, but they share common ingredients. You can use the practices and architecture methodologies described in this paper to deploy a scalable, secure, optimized SaaS solution on AWS. The paper describes different models you can adopt. Depending on the type of SaaS solution you're building, using multiple models or even a hybrid approach may suit your needs.

Contributors

The following individuals and organizations contributed to this document:

• Kamal Arora, Solutions Architect, Amazon Web Services
• Tom Laszewski, Sr. Manager, Solutions Architects, Amazon Web Services
• Matt Yanchyshyn, Sr. Manager, Solutions Architects, Amazon Web Services

Further Reading

APN Partner Solutions

In order to build out various functions in a custom SaaS solution, you will likely want to integrate with popular ISV solutions across various functions. To make your selection easy, the APN has developed the AWS Competency Program, designed to highlight APN Partners who have demonstrated technical proficiency and proven customer success in specialized solution areas.34 Below are some of the AWS Competency solution pages, which you can refer to for more details:

• DevOps: https://aws.amazon.com/devops/partner-solutions/
• Mobile: https://aws.amazon.com/mobile/partner-solutions/
• Security: https://aws.amazon.com/security/partner-solutions/
• Digital Media: https://aws.amazon.com/partners/competencies/digital-media/
• Marketing & Commerce: https://aws.amazon.com/digital-marketing/partner-solutions/
• Big Data: https://aws.amazon.com/partners/competencies/big-data/
• Storage: https://aws.amazon.com/backup-recovery/partner-solutions/
• Healthcare: https://aws.amazon.com/partners/competencies/healthcare/
• Life Sciences: https://aws.amazon.com/partners/competencies/life-sciences/
• Microsoft Solutions: https://aws.amazon.com/partners/competencies/microsoft/
• SAP Solutions: https://aws.amazon.com/partners/competencies/sap/
• Oracle Solutions: https://aws.amazon.com/partners/competencies/oracle/
• AWS Managed Service Program: http://aws.amazon.com/partners/managed-service/
• AWS SaaS Partner Program: http://aws.amazon.com/partners/saas/

Additional Resources

• Details on various AWS usage and billing reports: http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-what-is.html
• Amazon EC2 IAM roles: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
• Auto-scaling Amazon ECS services using Amazon CloudWatch and AWS Lambda: https://aws.amazon.com/blogs/compute/scaling-amazon-ecs-services-automatically-using-amazon-cloudwatch-and-aws-lambda/
• Working with Tag Editor: http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/tag-editor.html
• Working with resource groups: http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/resource-groups.html
• Backup, archive, and restore approaches on AWS: https://d0.awsstatic.com/whitepapers/Storage/Backup_Archive_and_Restore_Approaches_Using_AWS.pdf

Notes

1 http://aws.amazon.com/partners/saas/
2 https://aws.amazon.com/cloudhsm/
https://aws.amazon.com/cloudtrail/
https://aws.amazon.com/vpc/
https://aws.amazon.com/waf/
https://aws.amazon.com/inspector/
https://aws.amazon.com/cloudwatch/
http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
3 https://aws.amazon.com/security/partner-solutions/#infrastructure
4 https://aws.amazon.com/iam/
5 https://aws.amazon.com/cognito/
https://aws.amazon.com/security/partner-solutions/#iac
6 https://aws.amazon.com/cloudwatch/
7 https://aws.amazon.com/config/
8 https://aws.amazon.com/security/partner-solutions/#log-monitor
9 https://aws.amazon.com/elasticmapreduce/
https://aws.amazon.com/redshift/
https://aws.amazon.com/kinesis/
https://aws.amazon.com/machine-learning/
https://aws.amazon.com/quicksight/
https://aws.amazon.com/s3/
https://aws.amazon.com/ec2/spot/
10 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
https://aws.amazon.com/codecommit/
https://aws.amazon.com/codepipeline/
https://aws.amazon.com/codedeploy/
11 https://aws.amazon.com/cloudformation/
12 https://aws.amazon.com/elasticbeanstalk/
https://aws.amazon.com/opsworks/
13 https://aws.amazon.com/marketplace/
14 http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/what-are-resource-groups.html
15 http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
16 http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-explorer-what-is.html
17 http://docs.aws.amazon.com/IAM/latest/UserGuide/roles-walkthrough-crossacct.html
18 https://aws.amazon.com/console/
https://aws.amazon.com/cli/
19 http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html
20 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-restrictions
21 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_Resources.html
22 http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
23 https://aws.amazon.com/directconnect/
24 https://aws.amazon.com/ecs/
25 https://aws.amazon.com/lambda/
26 https://aws.amazon.com/ecs/
https://aws.amazon.com/lambda/
27 https://aws.amazon.com/api-gateway/
28 https://aws.amazon.com/blogs/compute/the-squirrelbin-architecture-a-serverless-microservice-using-aws-lambda/
29 https://aws.amazon.com/trusted-advisor/
30 http://aws.amazon.com/partners/saas/
31 http://aws.amazon.com/partners/managed-service/
32 https://media.amazonwebservices.com/AWS_Operational_Checklists.pdf
33 https://d0.awsstatic.com/whitepapers/compliance/AWS_Auditing_Security_Checklist.pdf
34
https://aws.amazon.com/partners/competencies/

SaaS Storage Strategies: Building a multitenant storage model on AWS

This paper has been archived. For the latest technical content, refer to the HTML version: https://docs.aws.amazon.com/whitepapers/latest/multi-tenant-saas-storage-strategies/multi-tenant-saas-storage-strategies.html, or to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Copyright © Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon.

Table of Contents

Abstract and introduction
Abstract
Are you Well-Architected?
Introduction
SaaS partitioning models
Silo model
Bridge model
Pool model
Setting the backdrop
Finding the right fit
Assessing tradeoffs
Pros
Cons
Pool model tradeoffs
Pros
Cons
Hybrid: The business compromise
Data migration
Migration and multitenancy
Minimizing invasive changes
Security considerations
Isolation and security
Management and monitoring
Aggregating storage trends
Tenant-centric views of activity
Policies and alarms
Tiered storage models
The developer experience
Linked account silo model
Multitenancy on DynamoDB
Silo model
Bridge model
Pool model
Managing shard distribution
Dynamically optimizing IOPS
Supporting multiple environments
Migration efficiencies
Weighing the tradeoffs
Multitenancy on RDS
Silo model
Bridge model
Pool model
Factoring in single instance limits
Weighing the tradeoffs
Multitenancy on Amazon Redshift
Silo model
Bridge model
Pool model
Keeping an eye on agility
Conclusion
Contributors
Document revisions
Notices
AWS glossary

Abstract

Publication date: May 6, 2021

Multitenant storage represents one of the more challenging aspects of building and delivering Software as a Service (SaaS) solutions. There are a variety of strategies that can be used to partition tenant data, each with a unique set of nuances that shape your approach to multitenancy. Adding to this complexity is the need to map each of these strategies to the different storage models offered by AWS, such as Amazon DynamoDB, Amazon Relational Database Service (Amazon RDS), and Amazon Redshift. Although there are high-level themes you can apply universally to these technologies, each storage model has its own approach to scoping, managing, and securing data in a multitenant environment. This paper offers SaaS developers insights into a range of data partitioning options, allowing them to determine which combination of strategies and storage technologies best aligns with the needs of their SaaS environment.

Are you Well-Architected?

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.

In the SaaS Lens, we focus on best practices for architecting your software as a service (SaaS) workloads on AWS.

For more expert guidance and best practices for your cloud architecture—reference architecture deployments, diagrams, and whitepapers—refer to the AWS Architecture Center.

Introduction

AWS offers Software as a Service (SaaS) developers a rich collection of storage solutions, each with its own approach to scoping, provisioning, managing, and securing data. The way that each service represents, indexes, and stores data adds a unique set of considerations to your multitenant strategy. As a SaaS developer, the diversity of these storage options represents an opportunity to align the storage needs of your SaaS solution with the storage technologies that best match your business and customer needs.

As you weigh AWS storage options, you must also consider how the multitenant model of your SaaS solution fits with each storage technology. Just as there are multiple flavors of storage, there are also multiple flavors of multitenant partitioning strategies. The goal is to find the best intersection of your storage and tenant partitioning needs.
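Before the models are formally introduced, it can help to see what "partitioning" means in code. The following minimal Python sketch is purely illustrative (the class and method names are invented for this example, and plain dictionaries stand in for real storage services): a siloed layout keeps a dedicated store per tenant, while a pooled layout keeps one shared store whose records are scoped by a tenant ID partitioning key.

```python
# Illustrative sketch only: dictionaries stand in for real databases/tables.

class SiloStorage:
    """Silo flavor: one fully isolated store per tenant."""
    def __init__(self):
        self.stores = {}  # tenant_id -> dedicated store

    def put(self, tenant_id, key, value):
        self.stores.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id, key):
        return self.stores[tenant_id][key]


class PoolStorage:
    """Pool flavor: one shared store; the tenant ID acts as a partitioning key."""
    def __init__(self):
        self.table = {}  # (tenant_id, key) -> value

    def put(self, tenant_id, key, value):
        self.table[(tenant_id, key)] = value

    def query(self, tenant_id):
        # Every read must be scoped by tenant_id to avoid cross-tenant leakage.
        return {k: v for (t, k), v in self.table.items() if t == tenant_id}


pool = PoolStorage()
pool.put("tenant-1", "plan", "gold")
pool.put("tenant-2", "plan", "silver")
print(pool.query("tenant-1"))  # prints {'plan': 'gold'}
```

The pooled `query` hints at the pattern discussed later in this paper: in a shared representation, the partitioning key is what stands between one tenant's data and another's, which is why every read path must be tenant-scoped.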
This paper explores all the moving parts of this puzzle. It examines and classifies the models that are typically used to achieve multitenancy and helps you weigh the pros and cons that shape your selection of a partitioning model. It also outlines how each model is realized on Amazon RDS, Amazon DynamoDB, and Amazon Redshift. As you dig into each storage technology, you'll learn how to use the AWS constructs to scope and manage your multitenant storage.

Although this paper gives you general guidance for selecting a multitenant partitioning strategy, it's important to recognize that the business, technical, and operational dimensions of your environment will often introduce factors that will also shape the approach you select. In many cases, SaaS organizations adopt a hybrid of the variations described in this paper.

SaaS partitioning models

To get started, you need a well-defined conceptual model to help you understand the various implementation strategies. The following figure shows the three basic models—silo, bridge, and pool—that are commonly used when partitioning tenant data in a SaaS environment. Each partitioning model takes a very different approach to managing, accessing, and separating tenant data. The following sections give a quick breakdown of the models, giving you the ability to explore the values and tenets of each model outside of the context of any specific storage technology.

SaaS partitioning models

Silo model

In the silo model, storage of tenant data is fully isolated from any other tenant data. All constructs that are used to represent the tenant's data are considered logically "unique" to that client, meaning that each tenant will generally have a distinct representation, monitoring, management, and security footprint.

Bridge model

The bridge model often represents an appealing compromise for SaaS developers. Bridge moves all of the tenants' data into a single database while still allowing some degree of variation and separation for each tenant. Typically, you achieve this by creating separate tables for each tenant, each of which is allowed to have its own representation of data (schema).

Pool model

The pool model represents the all-in multitenant model, where tenants share all of the system's storage constructs. Tenant data is placed into a common database, and all tenants share a common representation (schema). This requires the introduction of a partitioning key that is used to scope and control access to tenant data. This model tends to simplify a SaaS solution's provisioning, management, and update experience. It also fits well with the continuous delivery and agility goals that are essential to SaaS providers.

Setting the backdrop

The silo, bridge, and pool models provide the backdrop for our discussion. As you dig into each AWS storage technology, you'll discover how the conceptual elements of these models are realized on a specific AWS storage technology. Some map very directly to these models; others require a bit more creativity to achieve each type of tenant isolation.

It's worth noting that these models are all equally valid. Although we'll discuss the merits of each, the regulatory, business, and legacy dimensions of a given environment often play a big role in shaping the approach you ultimately select. The goal here is simply to bring visibility to the mechanics
and tradeoffs associated with each approach.

Finding the right fit

Selecting a multitenant storage partitioning strategy is influenced by many different factors. If you are migrating from an existing solution, you might favor adopting a silo model because it creates the simplest and cleanest way to transition to multitenancy without rewriting your SaaS application. If, on the other hand, you have no regulatory or industry dynamics that demand a more isolated model, the efficiency and agility of the pool model might unlock your path to an environment that embraces rapid and continual releases. The key here is to acknowledge that the strategy you select will be driven by a combination of the business and technical considerations in your environment.

In the following sections, we highlight the strengths and weaknesses of each model and provide you with a well-defined set of data points to use as part of your broader assessment. You'll learn how each model influences your ability to align with the agility goals that are often at the core of adopting a SaaS model. When selecting an architectural strategy for your SaaS environment, consider how that strategy impacts your ability to rapidly build, deliver, and deploy versions in a zero-downtime environment.

Assessing tradeoffs

If you were to put the three partitioning models—silo, bridge, and pool—on a spectrum, you'd see the natural tensions associated with adopting any one of these strategies. The qualities that are listed as strengths for one model are often represented as weaknesses in another model. For example, the tenets and value system of the silo model are often in opposition to those of the pool model.

Partitioning model tradeoffs

The preceding figure highlights these competing tenets. Across the top of the diagram, you'll see the three partitioning models represented. On the left are the pros and cons associated with the silo model. On the right, we provide similar lists for the pool model. The bridge model is a bit of a hybrid of these considerations and, as such, represents a mix of the pros and cons shown at the extremes.

Silo model tradeoffs

Representing tenant data in completely separate databases can be appealing. In addition to simplifying migration of existing single-tenant solutions, this approach also addresses concerns some tenants might have about operating on fully shared infrastructure.

Pros

• Silo is appealing for SaaS solutions that have strict regulatory and security constraints — In these environments, your SaaS customers have very specific expectations about how their data must be isolated from other tenants. The silo model lets you offer your tenants an option to create a more concrete boundary between tenant data and provides your customers with a sense that their data is stored in a more dedicated model.
• Cross-tenant impacts can be limited — The idea here is that, via the isolation of the silo model, you can ensure that the activity of one tenant does not impact another tenant. This model allows for tenant-specific tuning, where the database performance SLAs of your system can be tailored to the needs of a given tenant. The knobs and dials that are used to tune the database also generally have a more natural mapping to the silo model, which makes it simpler to configure a tenant-centric experience.
• Availability is managed at the tenant level, minimizing tenant exposure to outages — With each tenant in their own database, you don't have to be concerned that a database outage might cascade across all of your tenants. If one tenant has data issues, they are unlikely to adversely impact any of the other tenants of the system.

Cons

• Provisioning and management are more complex — Any time you introduce a per-tenant piece of infrastructure, you're also introducing another moving part that must be provisioned and managed on a tenant-by-tenant basis. You can imagine, for example, how a siloed database solution might impact the tenant onboarding experience for your system. Your signup process will require automation that creates and configures a database during the onboarding process. It's certainly achievable, but it adds a layer of complexity and a potential point of failure in your SaaS environment.
• Your ability to view and react to tenant activity is undermined — With SaaS, you might want a management and monitoring experience that provides a cross-tenant view of system health. You want to proactively anticipate database performance issues and react with policies in a more holistic way. However, the silo model makes you work harder to find and introduce tooling to create an aggregated, system-wide view of health that spans all tenants.
• The distributed nature of a silo model impacts your ability to effectively analyze and assess performance trends across tenants — With each tenant storing data in its own silo, you can only manage and tune service loads in a tenant-centric model. This essentially leads to the introduction of a set of one-off settings and policies that you have to manage and tune independently. This can be inefficient, and it could impose overhead that undermines your ability to respond quickly to customer needs.
• Silo limits cost optimization — Perhaps the most significant downside: the one-off nature of the silo model tends to limit your ability to tune your consumption of storage resources.

Pool model tradeoffs

The pool model represents the ultimate all-in commitment to the SaaS lifestyle. With the pool model, your focus is squarely on having a unified approach to your tenants that lets you streamline tenant storage provisioning, migration,
management, and monitoring.

Pros

• Agility — Once all of your tenant data is centralized in one storage construct, you are in a much better position to create tooling and a lifecycle that supports a streamlined, universal approach to rapidly deploying storage solutions for all of your tenants. This agility also extends to your onboarding process. With the pool model, you don't need to provision separate storage infrastructure for each tenant that signs up for your SaaS service. You can simply provision your new tenant and use that tenant's ID as the index to access the tenant's data from the shared storage model used by all of your tenants.
• Storage monitoring and management are simpler — In the pool model, it's much more natural to put tooling and aggregated analytics into place to summarize tenant storage activity. The everyday tools you'd use to manage a single storage model can be leveraged here to build a comprehensive, cross-tenant view of your system's health. With the pool model, you are in a much better position to introduce global policies that can be used to proactively respond to system events. Generally, the unification of data into a single database and shared representation simplifies many aspects of the multitenant storage deployment and management experience.
• Additional options help optimize the cost footprint of your SaaS solutions — The cost opportunities often show up in the form of performance tuning. You might, for example, have throughput optimization that is applied across all tenants as one policy (instead of managing separate policies on a tenant-by-tenant basis).
• Pool improves deployment automation and operational agility — The shared nature of the pool model generally reduces the overall complexity of your database deployment automation, which aligns nicely with the SaaS demand for continual and frequent releases of new product capabilities.

Cons

• Shared storage means a higher bar for managing scale and availability — Imagine the impact of a storage outage in a pooled multitenant environment. Now, instead of having one customer down, all of your customers are down. This is why organizations that adopt a pool model also tend to invest much more heavily in the automation and testing of their environments. A pooled solution demands proactive monitoring solutions and robust versioning, data, and schema migration. Releases must go smoothly, and tenant issues need to be captured and surfaced efficiently.
• Pool challenges management of tenant data distribution — In some instances, the size and distribution of tenant data can also become a challenge with pooled storage. Tenants tend to impose widely varying levels of load on your system, and these variations can undermine your storage performance. The pool model requires more thought about the mechanisms that you will employ to account for these variations in tenant load. The size and distribution of data can also influence how you approach data migration. These issues are typically unique to a given storage technology and need to be addressed on a case-by-case basis.
• The shared nature of the pooled environment can meet resistance in some domains — For some SaaS products, customers will demand a silo model to address their regulatory and internal data protection requirements.

Hybrid: The business compromise

For many organizations, the choice of a strategy is not as simple as selecting the silo, bridge, or pool model. Your tenants and your business are going to have a significant influence on how you approach selection of a storage strategy. In some cases, a team might identify a small collection of their tenants that require the silo or bridge model. Once they've made this determination, they assume that they have to implement all of the storage with that model. This artificially limits your ability to embrace those tenants that may be open to a pool model. In fact, it may add cost or complexity for a tier of tenants that aren't demanding the attributes of the silo or bridge model.

One possible compromise is to build a solution that fully supports pooled storage as your foundation. Then you can carve out a separate database for those tenants that demand a siloed storage solution. The following figure provides an example of this approach in action.

Hybrid silo/pool storage

Here we have two tenants (Tenant 1 and Tenant 2) that are leveraging a silo model, while the remaining tenants are running in a pooled storage model. This is all abstracted away by a data access layer that hides the underlying storage model from developers. Although this can add a level of complexity to your data access layer and management profile, it can also offer your business a way to tier your offering to represent the best of both worlds.

Data migration

Data migration is one of those areas that is often left out of the evaluation of competing SaaS storage models. However, with SaaS, consider how your architectural choices will influence your ability to continually deploy new features and capabilities. Although performance and general tenant experience are important to emphasize, it's also essential to consider how your storage solution will accommodate ongoing changes in the underlying representation of your data.

Migration and multitenancy

Each of the multitenant storage models requires its own unique approach
to tackling data migration. In the silo and bridge models, you can migrate data on a tenant-by-tenant basis. Your organization may find this appealing because it allows you to carefully migrate each SaaS tenant without exposing all tenants to the possibility of a migration error. However, this approach can introduce more complexity into the overall orchestration of your deployment lifecycle.

Migrating data in the pool model can be both appealing and challenging. In one respect, migration in a pool model provides a single point that, once migrated, has all tenants successfully transitioned to your new data model. On the other hand, any problem introduced during a pool migration could impact all of your tenants.

From the outset, you should be thinking about how data migration fits into your overall multitenant SaaS strategy. If you bake this migration orchestration into your delivery pipeline early, you tend to achieve a greater degree of agility in your release process.

Minimizing invasive changes

As a rule of thumb, you should have clear policies and tenets to follow as you consider how the data in your systems will evolve. Wherever possible, teams should favor data changes that have backward compatibility with earlier versions. If you can find ways to minimize changes to your application's data representation, you will limit the high overhead of transforming your data into a new representation. You can leverage commonly used tools and techniques to orchestrate the migration process. In reality, while minimizing invasive changes is often of great importance to SaaS developers, it's not unique to the SaaS domain. As such, it's beyond the scope of what we'll cover in this paper.

Security considerations

Data security must be a top priority for SaaS providers. When adopting a multitenant strategy, your organization needs a robust security strategy to ensure that tenant data is effectively protected from unauthorized access. Protecting this data and conveying that your system has employed the appropriate security measures is essential to gaining the trust of your SaaS customers.

The storage strategies you choose are likely to use common security patterns supported on AWS. Encrypting data at rest, for example, is a horizontal strategy that can be applied universally across any of the models. This provides a foundational level of security, which ensures that—even if there is unauthorized access to data—it would be useless without the keys needed to decrypt the information.

Now, as you look at the security profiles of the silo, bridge, and pool models, you will notice additional variations in how security is realized with each one. You'll discover that AWS Identity and Access Management (IAM), for example, has nuances in how it can scope and control access to tenant data. In general, the silo and bridge models have a more natural fit with IAM policies because they can be applied to limit access to entire databases or tables. Once you cross over to a pool model, you may not be in a position to leverage IAM to scope access to the data. Instead, more responsibility shifts to the authorization models of your application's services. These services must use a user's identity to resolve the scope and control they have over data in a shared representation.

Isolation and security

Supporting tenant isolation is fundamental for some organizations and domains. The notion that data is separated—even in a virtualized environment—can be seen as essential to SaaS providers that have specific regulatory or security requirements. As you consider each AWS storage solution, think about how isolation is achieved on each of the AWS storage services. As you will see, achieving isolation on RDS looks very different from how it does on DynamoDB. Consider these differences as
you select your storage strategy and assess the security considerations of your customers.

Management and monitoring

The approach you adopt for multitenant storage can have a significant impact on the management and monitoring profile of your SaaS solution. In fact, the complexity and approach you take to aggregate and analyze system health can vary significantly for each storage model and AWS technology.

Aggregating storage trends

To build an effective operational view of SaaS storage, you need metrics and dashboards that provide you with an aggregated view of tenant activity. You have to be able to proactively identify storage trends that could be influencing the experience spanning all of your tenants. The mechanisms you need to create this aggregated view look very different in the silo and pool models. With siloed storage, you must put tooling in place to collect the data from each isolated database and surface that information in an aggregated model. In contrast, the pool model, by its nature, already has an aggregated view of tenant activity.

Tenant-centric views of activity

Your management and monitoring storage solution should provide a way to create tenant-centric views of your storage activity. If a particular tenant is experiencing a storage issue, you'll want to be able to drill into the storage metrics and profile data to identify what could be impacting that individual tenant. Here, the silo model aligns more naturally with constructing a tenant-centric view of storage activity. A pooled storage strategy will require some tenant filtering mechanism to extract storage activity for a given tenant.

Policies and alarms

Each AWS storage service has its own mechanisms for evaluating and tuning your application's storage performance. Because storage can often represent a key bottleneck of your system, you should introduce monitoring policies and alarms that will allow you to surface and respond to changes in the health of your application's storage.

The partitioning model you choose will also impact the complexity and manageability of your storage monitoring strategy. The more siloed your solution, the more moving parts there are to manage and maintain on a tenant-by-tenant basis. In contrast, the shared nature of a pooled storage strategy makes it simpler to have a more centralized, cross-tenant collection of policies and alarms. The overall goal with these storage policies is to put in place a set of proactive rules that can help you anticipate and react to health events. As you select a multitenant storage model, consider how each approach might influence how you implement your system's storage policies and alarms.

Tiered storage models

AWS provides developers with a wide range of storage services, each of which can be applied in combinations to address the varying cost and performance requirements of SaaS tenants. The key here is not to artificially constrain your storage strategy to any one AWS service or storage technology. As you profile your application's storage needs, take a more granular approach to matching the strengths of a given storage service with the specific requirements of the various components of your application. DynamoDB, for example, might be a great fit for one application service, while RDS might be a better fit for another. If you use a microservice architecture for your solution, where each service has its own view of storage, think about which storage technology best fits each service's profile. It's not uncommon to find a spectrum of different storage solutions in use across the set of
microservices that make up your application This strategy also creates an opportunity to use storage as another way of tiering your SaaS solution Each tier could essentially leverage a separate storage strategy offering varying levels of performance and SLAs that would distinguish the value proposition of your solution’s tiers By using this approach you can better align the tenant tiers with the cost and load they are imposing on your infrastructure 12 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS The developer experience As a general architectural principle developers typically attempt to introduce layers or frameworks that centralize and abstract away horizontal aspects of their applications The goal here is to centralize and standardize policies and tenant resolution strategies You might for example introduce a data access layer that would inject tenant context into data access requests This would simplify development and limit a developer’s awareness of how tenant identity flows through the system Having this layer in place also provides you with more options for policies and strategies that might vary on a tenantbytenant basis It also creates a natural opportunity to centralize configuration and tracking of storage activity 13 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Linked account silo model Before digging into specifics of each storage service let’s look at how you can use AWS Linked Accounts to implement the silo model on top of any of the AWS storage solutions To achieve a silo with this approach your solution needs to provision a separate Linked Account for every tenant This can truly achieve a silo because the entire 
infrastructure for a tenant is completely isolated from other tenants. The Linked Account approach relies on the Consolidated Billing feature, which allows customers to associate child accounts with an overall payer account. The idea here is that, even with separate linked accounts for each tenant, the billing for these tenants is still aggregated and presented as part of a single bill to the payer account. The following figure shows a conceptual view of how Linked Accounts are used to implement the silo model. Here you have two tenants with separate accounts, each of which is associated with a payer account. With this flavor of isolation, you have the freedom to leverage any of the available AWS storage technologies to house your tenants' data.

Silo model with linked accounts

At first blush, this can seem like a very appealing strategy for those SaaS providers that require a silo environment. It certainly can simplify some aspects of management and migration of individual tenants. Assembling a view of your tenant costs would also be more straightforward because you can summarize the AWS expenses at the Linked Account level. Even with these advantages, the Linked Account silo model has important limitations. Provisioning, for example, is certainly more complex. In addition to creating the tenant's infrastructure, you need to automate the creation of each Linked Account and adjust any limits that need it. The larger challenge, however, is scale. AWS has constraints on the number of Linked Accounts you can create, and these limits aren't likely to align with environments that will be creating a large number of new SaaS tenants.

Multitenancy on DynamoDB

The nature of how data is scoped and managed by DynamoDB adds some new twists to how you approach multitenancy. Although some storage services align nicely with the traditional data partitioning strategies, DynamoDB has a slightly less direct mapping to the silo, bridge, and pool models. With DynamoDB, you have to consider some additional factors when selecting your multitenant strategy. The sections that follow explore the AWS mechanisms that are commonly used to realize each of the multitenant partitioning schemes on DynamoDB.

Silo model

Before looking at how you might implement the silo model on DynamoDB, you must first consider how the service scopes and controls access to data. Unlike RDS, DynamoDB has no notion of a database instance. Instead, all tables created in DynamoDB are global to an account within a region. That means every table name in that region must be unique for a given account.

Silo model with DynamoDB tables

If you implement a silo model on DynamoDB, you have to find some way to create a grouping of one or more tables that are associated with a specific tenant. The approach must also create a secure, controlled view of these tables to satisfy the security requirements of silo customers, preventing any possibility of cross-tenant data access. The preceding figure shows one example of how you might achieve this tenant-scoped grouping of tables. Notice that two tables are created for each tenant (Account and Customer). These tables also have a tenant identifier that is prepended to the table names. This addresses DynamoDB's table naming requirements and creates the necessary binding between the tables and their associated tenants.

Access to these tables is also achieved through the introduction of IAM policies. Your provisioning process needs to automate the creation of a policy for each tenant and apply that policy to the tables owned by a given tenant.
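As a concrete sketch of this idea, the snippet below builds the tenant-prefixed table names and a per-tenant IAM policy document of the general shape IAM expects. The region, account ID, action list, and table names are illustrative assumptions, not values from this paper, and a real provisioning process would attach the resulting document through your deployment tooling.

```python
# Sketch: tenant-scoped DynamoDB table names plus a per-tenant IAM
# policy document (account ID and region are placeholders).

REGION = "us-east-1"          # assumed region
ACCOUNT_ID = "123456789012"   # placeholder AWS account ID

def tenant_table_name(tenant_id, base_table):
    """Prepend the tenant identifier to satisfy unique table naming."""
    return f"{tenant_id}_{base_table}"

def tenant_policy(tenant_id, base_tables):
    """Build an IAM policy limiting access to this tenant's tables."""
    arns = [
        f"arn:aws:dynamodb:{REGION}:{ACCOUNT_ID}:table/"
        f"{tenant_table_name(tenant_id, t)}"
        for t in base_tables
    ]
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                       "dynamodb:Query"],
            "Resource": arns,
        }],
    }

policy = tenant_policy("Tenant1", ["Account", "Customer"])
```

Because the policy's Resource list names only the tenant's own tables, a principal scoped by it cannot read or write another tenant's tables.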
This approach achieves the fundamental isolation goals of the silo model, defining clear boundaries between each tenant's data. It also allows for tuning and optimization on a tenant-by-tenant basis. You can tune two specific areas:

• Amazon CloudWatch metrics can be captured at the table level, simplifying the aggregation of tenant metrics for storage activity.
• Table write and read capacity, measured as input and output operations per second (IOPS), is applied at the table level, allowing you to create distinct scaling policies for each tenant.

The disadvantages of this model tend to be more on the operational and management side. Clearly, with this approach, your operational views of a tenant require some awareness of the tenant table naming scheme to filter and present information in a tenant-centric context. The approach also adds a layer of indirection for any code that needs to interact with these tables. Each interaction with a DynamoDB table requires you to insert the tenant context to map each request to the appropriate tenant table.

SaaS providers that adopt a microservice-based architecture also have another layer of considerations. With microservices, teams typically distribute storage responsibilities to individual services. Each service is given the freedom to determine how it stores and manages data. This can complicate your isolation story on DynamoDB, requiring you to expand your population of tables to accommodate the needs of each service. It also adds another dimension of scoping, where each table for each service identifies its binding to a service. To offset some of these challenges and better align with DynamoDB best practices, consider having a single table for all of your tenant data. This approach offers several efficiencies and simplifies the provisioning, management, and migration profile of your solution.

In most cases, using separate DynamoDB tables and IAM policies to isolate your tenant data addresses the needs of your silo model. Your only other option is to consider the Linked Account silo model described earlier. However, as outlined previously, the Linked Account isolation model comes with additional limitations and considerations.

Bridge model

For DynamoDB, the line between the bridge model and the silo model is very blurry. Essentially, if your goal in using the bridge model is to have a single account with one-off schema variation for each client, you can see how that can be achieved with the silo model described earlier. For bridge, the only question would be whether you might relax some of the isolation requirements described with the silo model. You can achieve this by eliminating the introduction of any table-level IAM policies. Assuming your tenants aren't requiring full isolation, you could argue that removing the IAM policies could simplify your provisioning scheme. However, even in bridge, there are merits to the isolation. So although dropping the IAM isolation might be appealing, it's still a good SaaS practice to leverage constructs and policies that can constrain cross-tenant access.

Pool model

Implementing the pool model on DynamoDB requires you to step back and consider how the service manages data. As data is stored in DynamoDB, the service must continually assess and partition the data to achieve scale. If the profile of your data is evenly distributed, you could simply rely on this underlying partitioning scheme to optimize the performance and cost profile of your SaaS tenants.

The challenge here is that data in a multitenant SaaS environment doesn't typically have a uniform distribution. SaaS tenants come in all shapes and sizes, and as such, their data is anything but uniform. It's very common for SaaS vendors to end up with a handful of tenants that consume the largest portion of their data footprint. Knowing this, you can see
how it creates problems for implementing the pool model on top of DynamoDB. If you simply map tenant identifiers to a DynamoDB partition key, you'll quickly discover that you also create partition "hot spots." Imagine having one very large tenant who would undermine how DynamoDB effectively partitions your data. These hot spots can impact the cost and performance of your solution. With the suboptimal distribution of your keys, you need to increase IOPS to offset the impact of your hot partitions. This need for higher IOPS translates directly into higher costs for your solution.

To solve this problem, you have to introduce some mechanism to better control the distribution of your tenant data. You'll need an approach that doesn't rely on a single tenant identifier to partition your data. These factors all lead down a single path: you must create a secondary sharding model to associate each tenant with multiple partition keys.

Let's look at one example of how you might bring such a solution to life. First, you need a separate table, which we'll call the "tenant lookup table," to capture and manage the mapping of tenants to their corresponding DynamoDB partition keys. The following figure represents an example of how you might structure your tenant lookup table.

Introducing a tenant lookup table

This table includes mappings for two tenants. The items associated with these tenants have attributes that contain sharding information for each table that is associated with a tenant. Here, our tenants both have sharding information for their Customer and Account tables. Also notice that for each tenant-table combination, there are three pieces of information that represent the current sharding profile for a table. These are:

• ShardCount – An indication of how many shards are currently associated with the table
• ShardSize – The current size of each of the shards
• ShardId – A list of partition keys mapped to a tenant (for a table)

With this mechanism in place, you can control how data is distributed for each table.
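A minimal sketch of this lookup-and-shard mechanism might look like the following. The code is pure Python, with the shard assignments hard-coded as stand-ins for items read from the tenant lookup table; all names and key values are illustrative assumptions.

```python
import random

# Hypothetical tenant lookup data: for each (tenant, table) pair, the
# list of partition keys (ShardIds) currently assigned to that tenant.
TENANT_LOOKUP = {
    ("Tenant1", "Customer"): ["t1-c-1", "t1-c-2", "t1-c-3"],
    ("Tenant2", "Customer"): ["t2-c-1"],
}

def shard_ids(tenant_id, table):
    """Return every partition key mapped to this tenant/table pair."""
    return TENANT_LOOKUP[(tenant_id, table)]

def write_shard(tenant_id, table):
    """Spread writes by picking one of the tenant's assigned shards."""
    return random.choice(shard_ids(tenant_id, table))

def read_keys(tenant_id, table):
    """Reads must query the union of the tenant's shard keys."""
    return set(shard_ids(tenant_id, table))
```

Queries for a tenant then fan out across each key in `read_keys`, and growing a hot tenant is just a matter of appending new ShardIds to its lookup item.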
The indirection of the lookup table gives you a way to dynamically adjust a tenant's sharding scheme based on the amount of data it is storing. Tenants with a particularly large data footprint will be given more shards. Because the model configures sharding on a table-by-table basis, you have much more granular control over mapping a tenant's data needs to a specific sharding configuration. This allows you to better align your partitioning with the natural variations that often show up in your tenants' data profiles.

Although introducing a tenant lookup table provides you with a way to address tenant data distribution, it does not come without a cost. This model now introduces a level of indirection that you have to address in your solution's data access layer. Instead of using a tenant identifier to directly access your data, you must first consult the shard mappings for that tenant and use the union of those identifiers to access your tenant data. The following sample Customer table shows how data would be represented in this model.

Customer table with shard IDs

In this example, the ShardID is a direct mapping from the tenant lookup table. That tenant lookup table included two separate lists of shard identifiers for the Customer table, one for Tenant1 and one for Tenant2. These shard identifiers correlate directly to the values you see in this sample Customer table. Notice that the actual tenant identifier never appears in this Customer table.

Managing shard distribution

The mechanics of this model aren't particularly complex. The problem gets more interesting when you think about how to implement a strategy that effectively distributes your data. How do you detect when a tenant requires additional shards? Which metrics and criteria can you collect to automate this process? How do the characteristics of your data and domain influence your data profile? There is no single approach that universally resolves these questions for every solution. Some SaaS organizations manually tune this based on their customer insights. Others have more natural criteria that guide their approach. The approach outlined here is one way you might choose to handle the distribution of your data. Ultimately, you'll likely find a hybrid of the principles we describe that best aligns with the needs of your environment. The key takeaway is that if you adopt the pool model, be aware of how DynamoDB partitions data. Moving in data blindly, without considering how the data will be distributed, will likely undermine the performance and cost profile of your SaaS solution.

Dynamically optimizing IOPS

The IOPS needs of a SaaS environment can be challenging to manage. The load tenants place on your system can vary significantly. Setting the IOPS to some worst-case maximum level undermines the desire to optimize costs based on actual load. Instead, consider implementing a dynamic model where the IOPS of your tables are adjusted in real time based on the load profile of your application. Dynamic DynamoDB is one configurable open-source solution you can use to address this problem.

Supporting multiple environments

As you think about the strategies outlined for DynamoDB, consider how each of these models will be realized in the presence of multiple environments (QA, development, production, and so on). The need for multiple environments impacts how you further partition your experience to separate out each of your storage strategies on AWS. With the bridge and pool models, for example, you can end up adding a qualifier to your table names to provide environment context. This adds a bit of indirection that you must factor into your provisioning and runtime resolution of table names.

Migration efficiencies

The schemaless nature of DynamoDB offers real advantages for SaaS providers, allowing you to apply updates to your application and migrate tenant data without introducing new tables or replication. DynamoDB simplifies the process of migrating tenants between your SaaS versions and allows you to simultaneously host agile tenants on the latest version of your SaaS solution while allowing other tenants to continue using an earlier version.

Weighing the tradeoffs

Each of the models has tradeoffs to consider as you determine which model best aligns with your business needs. The silo pattern may seem appealing, but the provisioning and management add a dimension of complexity that undermines the agility of your solution. Supporting separate environments and creating unique groups of tables will undoubtedly impact the complexity of your automated deployment. The bridge represents a slight variation of the silo model on DynamoDB. As such, it mirrors most of what we find with the silo model. The pool model on DynamoDB offers some significant advantages. The consolidated footprint of the data simplifies the provisioning, migration, and management and monitoring experiences. It also allows you to take a more multitenant approach to optimizing consumption and tenant experience by tuning the read and write IOPS on a cross-tenant basis. This allows you to react more broadly to performance issues and introduces opportunities to minimize cost. These factors tend to make the pool model very appealing to SaaS organizations.
Multitenancy on RDS

With so many early SaaS systems delivered on relational databases, the developer community has established some common patterns for addressing multitenancy in these environments. In fact, RDS has a more natural mapping to the silo, bridge, and pool models. The construct and representation of data in RDS is very much an extension of unmanaged relational environments. The basic mechanisms that are available in MySQL, for example, are also available to you in RDS. This makes the realization of multitenancy on all of the RDS flavors relatively straightforward. The following sections outline the various strategies that are commonly employed to realize the partitioning models on RDS.

Silo model

You can achieve the silo pattern on AWS in multiple ways. However, the most common and simplest approach for achieving isolation is to create separate database instances for each tenant. Through instances, you can achieve a level of separation that typically satisfies the compliance needs of customers without the overhead of provisioning entirely separate accounts.

RDS instances as silos

The preceding figure shows a basic silo model as it could be realized on top of RDS. Here, separate instances are provisioned for each of the two tenants. The diagram depicts a master database and two read replicas for each tenant instance. This is an optional concept to highlight how you can use this approach to set up and configure an optimized, highly available strategy for each tenant.

Bridge model

Achieving the bridge model on RDS fits the same themes we see across all the storage models. The basic approach is to leverage a single instance for all
tenants, while creating separate representations for each tenant within that database. This introduces the need to have provisioning and runtime table resolution to map each table to a given tenant. The bridge model offers you the opportunity to have tenants with different schemas and some flexibility when migrating tenant data. You could, for example, have different tenants running different versions of the product at a given moment in time and gradually migrate schema changes on a tenant-by-tenant basis. The following figure provides an example of one way you can implement the bridge model on RDS. In this diagram, you have a single RDS database instance that contains separate Customer tables for Tenant1 and Tenant2.

Example of a bridge model on RDS

This example highlights the ability to have schema variation at the tenant level. Tenant1's schema has a Status column, while that column is removed and replaced by the Gender column used by Tenant2. Another option here would be to introduce the notion of separate databases for each tenant within an instance. The terminology varies for each flavor of RDS; some RDS storage containers refer to this as a database, while others label it as a schema.

RDS bridge with separate tables/schemas

The preceding figure provides an illustration of this alternate bridge model. Notice that we created databases for each of the tenants, and the tenants then have their own collection of tables. For some SaaS organizations, this scopes the management of their tenant data more naturally, avoiding the need to propagate the naming to individual tables. This model is appealing, but it may not be the best fit for all flavors of RDS. Some RDS containers limit the number of databases/schemas that you can create for an instance. The SQL Server container, for example, allows only 30 databases per instance, which is likely unacceptable for most SaaS environments.

Although the bridge model allows for variation from tenant to tenant, it's important to know that you should typically still adopt policies that try to limit schema changes. Each time you introduce a schema change, you take on the challenge of successfully migrating your SaaS tenants to the new model without absorbing any downtime. So although this model simplifies those migrations, it doesn't promote one-off tenant schemas or regular changes to the representation of your tenants' data.

Pool model

The pool model for RDS relies on traditional relational indexing schemes to partition tenant data. As part of moving all the tenant data into a shared infrastructure model, you store the tenant data in a single RDS instance, and the tenants share common tables. These tables are indexed with a unique tenant identifier that is used to access and manage each tenant's data.

RDS pool model with shared schema

The preceding figure provides an example of the pool model in action. Here, a single RDS instance with one Customer table holds data for all of the application's tenants. RDS is an RDBMS, so all tenants must use the same schema version. RDS is not like DynamoDB, which has a flexible schema that allows each tenant to have a unique schema within a single table.

Factoring in single instance limits

Many of the models we described concentrate heavily on storing data in a single instance and partitioning data within that instance. Depending on the size and performance needs of your SaaS environment, using a single instance might not fit the profile of your tenant data. RDS has limits on the amount of data that can be stored in a single instance. The
following is a breakdown of the limits:

• MySQL, MariaDB, Oracle, PostgreSQL – 6 TB
• SQL Server – 4 TB
• Aurora – 64 TB

In addition, a single instance introduces resource contention issues (CPU, memory, I/O). In scenarios where a single instance is impractical, the natural extension is to introduce a sharding scheme where your tenant data is distributed across multiple instances. With this approach, you start with a small collection of sharded instances. Then you continually observe the profile of your tenant data and expand the number of instances to ensure that no single instance reaches its limits or becomes a bottleneck.

Weighing the tradeoffs

The tradeoffs of using RDS are fairly straightforward. The primary theme is often more about trading management and provisioning complexity for agility. Overall, the pain points of provisioning automation are likely lower with the silo model on RDS. However, the cost and management efficiency associated with the pool model is often compelling. This is especially significant as you think about how these models will align with your continuous delivery environment.

Multitenancy on Amazon Redshift

Amazon Redshift introduces additional twists to factor into your multitenant thinking. Amazon Redshift focuses on building high-performance clusters to house large-scale data warehouses. Amazon Redshift also places some limits on the constructs that you can create within each cluster. Consider the following limits:

• 60 databases per cluster
• 256 schemas per database
• 500 concurrent connections per database
• 50 concurrent queries
• Access to a cluster enables access to all databases in the cluster

You can imagine how these limits influence the scale and performance delivered by Amazon Redshift. You can also see how these limits can impact your approach to multitenancy with Amazon Redshift. If you are targeting a modest tenant count, these limits might have little influence on your solution. However, if you're targeting a large number of tenants, you'd need to factor these limits into your overall strategy. The following sections highlight the strategies that are commonly used to realize each multitenant storage model on Amazon Redshift.

Silo model

Achieving true silo model isolation of tenants on Amazon Redshift requires you to provision separate clusters for each tenant. Via clusters, you can create the well-defined boundary between tenants that is commonly required to assure customers that their data is successfully isolated from cross-tenant access. This approach best leverages the natural security mechanisms in Amazon Redshift, so you can control and restrict tenant access to a cluster using a combination of IAM policies and database privileges. IAM controls overall cluster management, and the database privileges are used to control access to data within the cluster.

The silo model gives you the opportunity to create a tuned experience for each tenant. With Amazon Redshift, you can configure the number and type of nodes in your cluster so that you can create environments that target the load profile of each individual tenant. You can also use this as a strategy for optimizing costs. The challenge of this model, as we've seen with other silo models, is that each tenant's cluster must be provisioned as part of the onboarding process. Automating this process and absorbing the extra time and overhead associated with provisioning adds a layer of complexity to your deployment footprint. It also has some impact on the speed with which a new tenant can be allocated.

Bridge model

The bridge model does not have a natural mapping on Amazon Redshift. Technically, you could create separate schemas for each tenant. However, you would likely run into issues with the Amazon Redshift limit of 256 schemas. In environments with any significant number of tenants, this simply doesn't scale. Security is also a challenge for Amazon Redshift in the bridge model. When you are authorized as a user of an Amazon Redshift cluster, you are granted access to all the databases within that cluster. This pushes the responsibility for enforcing finer-grained access controls to your SaaS application. Given the motives for the bridge model and these technical considerations, it seems impractical for most SaaS providers to consider using this approach on Amazon Redshift. Even if the limits are manageable for your solution, the isolation profile is likely unacceptable to your customers. Ultimately, the best answer is to simply use the silo model for any tenant that requires isolation.

Pool model

Building the pool model on Amazon Redshift looks very much like the other storage models we've discussed. The basic idea is to store data for all tenants in a single Amazon Redshift cluster with shared databases and tables. In this approach, the data for tenants is partitioned via the introduction of a column that represents a unique tenant identifier. This approach gives you most of the goodness that we saw with the other pool models. Certainly, the overall management, monitoring, and agility are improved by housing all of the tenant data in a single Amazon Redshift cluster.

The limit on concurrent connections is the area that adds a degree of difficulty to implementing the pool model on Amazon Redshift. With an upper limit of 500 concurrent connections, many multitenant SaaS
environments can quickly exceed this limit. This doesn't eliminate the pool model from contention. Instead, it pushes more responsibility to the SaaS developer to put an effective strategy in place to manage how and when these connections are consumed and released. There are some common ways to address connection management. Developers often leverage client-based caching to limit their need for actual connections to Amazon Redshift. Connection pooling can also be applied in this model. Developers need to select a strategy that ensures that the data access patterns of their application can be met effectively without exceeding the Amazon Redshift connection limit.

Adopting the pool model also means keeping your eye on the typical issues that come up any time you're operating in a shared infrastructure. The security of your data, for example, requires some application-level policies to limit cross-tenant access. Also, you likely need to continually tune and refine the performance of your environment to prevent any one tenant from degrading the experience of others.

Keeping an eye on agility

The matrix of multitenant storage options can be daunting. It can be challenging to identify the solution that represents the best mix of flexibility, isolation, and manageability. Although it's important to consider all the options, it's also essential to continually factor agility into your multitenant storage thinking. The success of SaaS organizations is often heavily influenced by the amount of agility that is baked into their solution. The storage technology and isolation model you select directly impacts your ability to easily deploy new features and functionality. The shape of your structure and the content of your data often change to support new features, and this means your underlying storage model must accommodate these changes without requiring downtime. Each isolation model has pros and cons when it comes to supporting this seamless migration. As you consider your options, give these factors the appropriate weight.

While the silo, bridge, and pool models all have an agility footprint, you can apply common tenets to help you remain as nimble as possible. A key tenet is the rather obvious, but occasionally violated, need to minimize one-off variations for tenant data. The silo and bridge models, for example, can lead to storage variations that can complicate your ability to push out new features to all of your SaaS customers as part of a single automated event. Teams often use automation and continuous deployment to limit the amount of friction introduced by their multitenant storage strategy. As you settle into a storage strategy, expect and embrace the reality that your storage requirements will continually evolve. The needs of SaaS customers are a moving target, and the storage model you pick today might not be a good fit tomorrow. AWS also continues to introduce new features and services that can represent new opportunities to enhance your approach to storage.

Conclusion

The storage needs of SaaS customers aren't simple. The reality of SaaS is that your business's domain, customers, and legacy considerations affect how you determine which combination of multitenant storage options best meets the needs of your business. Although there is no single strategy that universally fits every environment, it is clear that some models do align better with the core tenets of the SaaS delivery model. In general, the pool-based approaches to storage, on any AWS storage technology, align well with the need for a unified approach to managing and operating a multitenant environment.
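A minimal sketch makes this concrete. The example below uses SQLite purely as a stand-in for a pooled relational store such as RDS; the table, column, and tenant names are illustrative. One shared Customer table is partitioned logically by a tenant_id column, so tenant-scoped access and cross-tenant operational views both come from the same repository.

```python
import sqlite3

# Stand-in for a pooled database: one shared Customer table,
# partitioned logically by a tenant_id column.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customer ("
    " tenant_id TEXT NOT NULL,"
    " customer_id INTEGER,"
    " name TEXT)"
)
conn.executemany(
    "INSERT INTO customer VALUES (?, ?, ?)",
    [("Tenant1", 1, "Ann"), ("Tenant1", 2, "Bob"), ("Tenant2", 1, "Cal")],
)

def tenant_customers(tenant_id):
    """Every data access path scopes queries by the tenant identifier."""
    cur = conn.execute(
        "SELECT customer_id, name FROM customer WHERE tenant_id = ?",
        (tenant_id,))
    return cur.fetchall()

def rows_by_tenant():
    """Cross-tenant usage views fall out of the shared table directly."""
    cur = conn.execute(
        "SELECT tenant_id, COUNT(*) FROM customer GROUP BY tenant_id")
    return dict(cur.fetchall())
```

The same aggregate query that drives a cross-tenant dashboard here would require per-silo collection tooling in the silo model.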
Having all your tenants in one shared repository and representation streamlines and unifies your approach's operational and deployment footprint, enabling cross-tenant views of health and performance. The silo and bridge models certainly have their place, and for some SaaS providers they are absolutely required. The key here is that if you head down this path, agility can get more complicated. Some AWS storage technologies are better positioned than others to support isolated tenant storage schemes. Building a silo model on RDS, for example, is less complex than it is on DynamoDB. Generally, whenever you rely on linked accounts as your partitioning model, you will tackle more provisioning, management, and scaling challenges.

Beyond the mechanics of achieving multitenancy, think about how the profile of each AWS storage technology can fit the varying needs of your multitenant application's functionality. Consider how tenants will access the data and how the shape of that data will need to evolve to meet the needs of your tenants. The more you can decompose your application into autonomous services, the better positioned you are to pick and choose separate storage strategies for each service.

After exploring these services and partitioning schemes, you should have a much better sense of the patterns and inflection points that will guide your selection of a multitenant storage strategy. AWS equips SaaS providers with a rich palette of services and constructs that can be combined to address any number of multitenant storage needs.

Contributors

The following individuals and organizations contributed to this document:
• Tod Golding, Partner Solutions Architect, AWS Partner Program
• Clinton Ford, Senior Product Marketing Manager, DynamoDB
• Zach Christopherson, Database Engineer, Amazon Redshift
• Brian Welker, Principal Product Owner, RDS MySQL and MariaDB

Document revisions

To be notified about updates to this whitepaper, subscribe to the RSS feed.
• Whitepaper updated – Updated for latest technical accuracy – May 6, 2020
• Initial publication – Whitepaper published – November 6, 2016

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2022 Amazon Web Services, Inc. or its affiliates. All rights reserved.

AWS glossary

For the latest AWS terminology, see the AWS glossary in the AWS General Reference.,General,consultant,Best Practices
SAP_HANA_on_AWS_Operations_Overview_Guide,"SAP HANA on AWS Operations Overview Guide, December 2017. The PDF
version of the paper has been archived. For the latest HTML version of the paper, see: https://docs.aws.amazon.com/sap/latest/sap-hana/sap-hana-on-aws-operations.html

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Introduction
Administration
Starting and Stopping EC2 Instances Running SAP HANA Hosts
Tagging SAP Resources on AWS
Monitoring
Automation
Patching
Backup/Recovery
Creating an Image of an SAP HANA System
AWS Services and Components for Backup Solutions
Backup Destination
AWS Command Line Interface
Backup Example
Scheduling and Executing Backups Remotely
Restoring SAP HANA Backups and Snapshots
Networking
EBS-Optimized Instances
Elastic Network Interfaces (ENIs)
Security Groups
Network Configuration for SAP HANA System Replication (HSR)
Configuration Steps for Logical Network Separation
SAP Support Access
Support Channel Setup with SAProuter on AWS
Support Channel Setup with SAProuter On Premises
Security
OS Hardening
Disabling HANA Services
API Call Logging
Notifications on Access
High Availability and Disaster Recovery
Conclusion
Contributors
Appendix A – Configuring Linux to Recognize Ethernet Devices for Multiple ENIs
Notes

Abstract

Amazon Web Services (AWS) offers you the ability to run your SAP HANA systems of various sizes, on a choice of operating systems. Running SAP systems on AWS is very similar to running SAP systems in your data center: to an SAP Basis or NetWeaver administrator, there are minimal differences between the two environments. There are, however, a number of AWS Cloud considerations relating to security, storage, compute configurations, management, and monitoring that will help you get the most out of your SAP HANA implementation on AWS. This whitepaper provides best practices for the deployment, operations, and management of SAP HANA systems on AWS. The target audience is SAP Basis and NetWeaver administrators who have experience running SAP HANA systems in an on-premises environment and want to run their SAP HANA systems on AWS.

Introduction

This guide provides best practices for operating SAP HANA systems that have been deployed on Amazon Web Services (AWS), either by using the SAP HANA Quick Start reference deployment process [1] or manually, by following the instructions in Setting up AWS Resources and the SLES Operating System for SAP HANA Installation [2]. This guide is not intended to replace any of the standard SAP documentation. See the following SAP guides and notes:

o SAP Library (help.sap.com), SAP HANA Administration Guide [3]
o SAP installation guides [4] (these require SAP Support Portal access)
o SAP notes [5] (these require SAP Support Portal access)

This guide assumes that you have a basic knowledge of AWS. If you are new to AWS, read the following guides before continuing with this guide:

o Getting Started with AWS [6]
o What is Amazon EC2? [7]

In addition, the following SAP on AWS guides are available [8]:

o The SAP on AWS Implementation and Operations Guide provides best practices for achieving optimal performance,
availability, and reliability, and a lower total cost of ownership (TCO) while running SAP solutions on AWS [9]
o The SAP on AWS High Availability Guide explains how to configure SAP systems on Amazon Elastic Compute Cloud (Amazon EC2) to protect your application from various single points of failure [10]
o The SAP on AWS Backup and Recovery Guide explains how to back up SAP systems running on AWS, in contrast to backing up SAP systems on traditional infrastructure [11]

Administration

This section provides guidance on common administrative tasks required to operate an SAP HANA system, including information about starting, stopping, and cloning systems.

Starting and Stopping EC2 Instances Running SAP HANA Hosts

At any time, you can stop one or multiple SAP HANA hosts. Before stopping the EC2 instance of an SAP HANA host, first stop SAP HANA on that instance. When you resume the instance, it will automatically start with the same IP address, network, and storage configuration as before. You also have the option of using the EC2 Scheduler to schedule starts and stops of your EC2 instances [12]. The EC2 Scheduler relies on the native shutdown and startup mechanisms of the operating system; these native mechanisms invoke the orderly shutdown and startup of your SAP HANA instance. Here is an architectural diagram of how the EC2 Scheduler works:

Figure 1: EC2 Scheduler

Tagging SAP Resources on AWS

Tagging your SAP resources on AWS can significantly simplify identification, security, manageability, and billing of those resources. You can tag your resources using the AWS Management Console or by using the create-tags functionality of the AWS Command Line Interface (AWS CLI). This table lists some example tag names and tag values:

Tag Name | Tag Value
Name | SAP server's virtual (host) name
Environment | SAP server's landscape role, such as SBX, DEV, QAT, STG, PRD, etc.
Application | SAP solution or product, such as ECC, CRM, BW, PI, SCM, SRM, EP, etc.
Owner | SAP point of contact
Service Level | Known uptime and downtime schedule

After you have tagged your resources, you can then apply specific security restrictions to them, for example, access control based on the tag values. Here is an example of such a policy from our AWS blog [13]:

{
  ""Version"": ""2012-10-17"",
  ""Statement"": [
    {
      ""Sid"": ""LaunchEC2Instances"",
      ""Effect"": ""Allow"",
      ""Action"": [""ec2:Describe*"", ""ec2:RunInstances""],
      ""Resource"": [""*""]
    },
    {
      ""Sid"": ""AllowActionsIfYouAreTheOwner"",
      ""Effect"": ""Allow"",
      ""Action"": [""ec2:StopInstances"", ""ec2:StartInstances"", ""ec2:RebootInstances"", ""ec2:TerminateInstances""],
      ""Condition"": {
        ""StringEquals"": {
          ""ec2:ResourceTag/PrincipalId"": ""${aws:userid}""
        }
      },
      ""Resource"": [""*""]
    }
  ]
}

This AWS Identity and Access Management (IAM) policy only allows specific permissions based on the tag value: in this scenario, the current user ID must match the tag value in order for the permissions to be granted. For more information on tagging, refer to our AWS documentation and our AWS blog [14, 15].

Monitoring

There are various AWS, SAP, and third-party solutions that you can leverage for monitoring your SAP workloads. Here are some of the core AWS monitoring services:

• Amazon CloudWatch – CloudWatch is a monitoring service for AWS resources [16]. It's critical for SAP workloads, where it's used to collect resource utilization logs and to create alarms that automatically react to changes in AWS resources.
• AWS CloudTrail – CloudTrail keeps track of all API calls made within your AWS account. It captures key metrics about the API calls and can be useful for automating trail creation for your SAP resources.

Configuring CloudWatch detailed monitoring for SAP resources is mandatory for getting AWS and SAP support. You can use native AWS monitoring
services in a complementary fashion with SAP Solution Manager. Third-party monitoring tools can be found on AWS Marketplace [17].

Automation

AWS offers multiple options for programmatically scripting your resources so that you can operate or scale them in a predictable and repeatable manner. You can leverage AWS CloudFormation to automate and operate SAP systems on AWS. Here are some examples of automating your SAP environment on AWS:

Area | Activities | AWS Services
Infrastructure Deployment | Provision a new SAP environment; SAP system cloning | AWS CloudFormation [18]; AWS CLI [19]
Capacity Management | Automate scale-up/scale-out of SAP application servers | AWS Lambda [20]; AWS CloudFormation
Operations | SAP backup automation (see the Backup Example); performing monitoring and visualization | Amazon CloudWatch; Amazon EC2 Systems Manager

Patching

There are two ways for you to patch your SAP HANA database, with alternatives for minimizing cost and/or downtime. With AWS, you can provision additional servers as needed to minimize downtime for patching in a cost-effective manner. You can also minimize risks by creating on-demand copies of your existing production SAP HANA databases for lifelike production-readiness testing. This table summarizes the tradeoffs of the two patching methods:

Patching Method | Benefits and Tradeoffs | Technologies Available
Patch an existing server | [x] Patch existing OS and DB; [x] Longest downtime to existing server and DB; [ ] No costs for additional on-demand instances; [ ] Lowest levels of relative complexity and setup tasks involved | Native OS patching tools; Patch Manager [21]; Native SAP HANA patching tools [22]
Provision and patch a new server | [ ] Leverage latest AMIs (only DB patch needed); [ ] Shortest downtime to existing server and DB; [ ] Can patch and test OS and DB separately and together; [x] More costs for additional on-demand instances; [x] More complexity and setup tasks involved | Amazon Machine Image (AMI) [23]; AWS CLI [24]; AWS CloudFormation [25]; SAP HANA System Replication [26]; SAP HANA System Cloning [27]; SAP HANA backups [28]; SAP Note 1984882 [29], Using HANA System Replication for Hardware Exchange with minimum/zero downtime; SAP Note 1913302 [30], HANA: Suspend DB connections for short maintenance tasks

The first method (patch an existing server) involves patching the operating system (OS) and database (DB) components of your existing SAP HANA server. The goal of this method is to minimize any additional server costs and to avoid any tasks needed to set up additional systems or tests. This method may be most appropriate if you have a well-defined patching process and are satisfied with your current downtime and costs. With this method, you must use the correct OS update process and tools for your Linux distribution. See this SUSE blog [31] and Red Hat FAQ page [32], or check each vendor's documentation for their specific processes and procedures.

In addition to the patching tools provided by our Linux partners, AWS offers a free-of-charge patching service [33] called Patch Manager [34]. At the time of this writing, Patch Manager supports Red Hat [35]. Patch Manager is an automated tool that helps you simplify your OS patching process. You can scan your EC2 instances for missing patches and automatically install them, select the timing for patch rollouts, control instance reboots, and perform many other tasks. You can also define auto-approval rules for patches, with an added ability to blacklist or whitelist specific patches, control how the patches are deployed on the target instances (for example, stop services before applying the patch), and schedule the automatic rollout through maintenance windows.

The second method (provision and patch a new server) involves provisioning a new EC2 instance that will receive a copy of your source system and database. The goals of this method are to minimize downtime, to minimize risks (by having
production data and executing production-like testing), and to have repeatable processes. This method may be most appropriate if you are looking for higher degrees of automation to enable these goals and are comfortable with the tradeoffs. This method is more complex and has many more options to fit your requirements. Certain options are not exclusive and can be used together. For example, your AWS CloudFormation template can include the latest Amazon Machine Images (AMIs), which you can then use to automate the provisioning, setup, and configuration of a new SAP HANA server.

Here is an example of a process that can be used to automate an OS/HANA patch or upgrade:

1. Download the AWS CloudFormation template offered in the SAP HANA Quick Start [36].
2. Update the CloudFormation template with the latest OS AMI ID, and execute the updated template to provision a new SAP HANA server. The latest OS AMI ID has the specific security patches that your organization needs. As part of the provisioning process, you need to provide the latest SAP HANA installation binaries to get to the required version. This allows you to provision a new HANA system with the required OS version and security patches, along with the required SAP HANA software version.
3. After the new SAP HANA system is available, use one of the following methods to copy the data from the original SAP HANA instance to the newly created system:
o SAP HANA native backup/restore.
o Use SAP HANA System Replication (HSR) technology to replicate the data, and then perform an HSR takeover.
o Take snapshots of the old system's Amazon Elastic Block Store (Amazon EBS) volumes, create new EBS volumes from them, and mount the new volumes in the new environment. (Make sure that the HANA SID stays the same for minimal post-processing.)
o Use new SAP HANA 2.0 functionality such as SAP HANA Cloning [37]; the new system will become a clone of the original system.

At the end of this process, you will have a new SAP HANA system that is ready to test. SAP Note 1984882 [38] (Using HANA System Replication for Hardware Exchange with Minimum/Zero Downtime) has specific recommendations and guidelines on the process for promoting the new system to production.

Backup and Recovery

This section provides an overview of the AWS services used in the backup and recovery of SAP HANA systems and provides an example backup and recovery scenario. This guide does not include detailed instructions on how to execute database backups using native HANA backup and recovery features or third-party backup tools; refer to the standard OS, SAP, and SAP HANA documentation, or to the documentation provided by your backup software vendor. In addition, backup schedules, frequency, and retention periods might vary with your system type and business requirements. See the following standard SAP documentation for guidance on these topics. (SAP notes require SAP Support Portal access.)

Note: Both general and advanced backup and recovery concepts for SAP systems on AWS are covered in detail in the SAP on AWS Backup and Recovery Guide [39].

SAP Note | Description
1642148 [40] | FAQ: SAP HANA Database Backup & Recovery
1821207 [41] | Determining required recovery files
1869119 [42] | Checking backups using hdbbackupcheck
1873247 [43] | Checking recoverability with hdbbackupdiag --check
1651055 [44] | Scheduling SAP HANA Database Backups in Linux
2484177 [45] | Scheduling backups for multi-tenant SAP HANA Cockpit 2.0

Creating an Image of an SAP HANA System

You can use the AWS Management Console or the command line to create your own AMI based on an existing instance [46]. For more information, see the AWS documentation [47]. You can use an AMI of your SAP HANA instance for the following purposes:

o To create a full offline system backup (of the OS, /usr/sap, HANA shared, backup, data, and log files) – AMIs are automatically saved in multiple Availability Zones within the same Region.
o To
move a HANA system from one Region to another – You can create an image of an existing EC2 instance and move it to another Region by following the instructions in the AWS documentation [48]. Once the AMI has been copied to the target Region, the new instance can be launched there.
o To clone an SAP HANA system – You can create an AMI of an existing SAP HANA system to create an exact clone of the system. See the following section for additional information.

Note: See the restore section later in this whitepaper for the recommended restore steps for production environments.

Tip: The SAP HANA system should be in a consistent state before you create an AMI. To achieve this, stop the SAP HANA instance before creating the AMI, or follow the instructions in SAP Note 1703435 [49] (requires SAP Support Portal access).

AWS Services and Components for Backup Solutions

AWS provides a number of services and options for storage and backup, including Amazon Simple Storage Service (Amazon S3), AWS Identity and Access Management (IAM), and Amazon Glacier.

Amazon S3

Amazon S3 is the center of any SAP backup and recovery solution on AWS [50]. It provides a highly durable storage infrastructure designed for mission-critical and primary data storage. It is designed to provide 99.999999999% durability and 99.99% availability over a given year. See the Amazon S3 documentation for detailed instructions on how to create and configure an S3 bucket to store your SAP HANA backup files [51].

AWS IAM

With IAM, you can securely control access to AWS services and resources for your users [52]. You can create and manage AWS users and groups, and use permissions to grant users access to AWS resources. You can create roles in IAM and manage permissions to control which operations can be performed by the entity, or AWS service, that assumes the role. You can also define which entity is allowed to assume the role. During the deployment process, CloudFormation creates an IAM role that allows access to get objects from and/or put objects into Amazon S3. That role is subsequently assigned to each EC2 instance that is hosting SAP HANA master and worker nodes, at launch time, as the nodes are deployed.

Figure 2: IAM role example

To ensure security that applies the principle of least privilege, permissions for this role are limited to only the actions that are required for backup and recovery:

{
  ""Statement"": [
    {
      ""Resource"": ""arn:aws:s3::: /*"",
      ""Action"": [""s3:GetObject"", ""s3:PutObject"", ""s3:DeleteObject"", ""s3:ListBucket"", ""s3:Get*"", ""s3:List*""],
      ""Effect"": ""Allow""
    },
    {
      ""Resource"": ""*"",
      ""Action"": [""s3:List*"", ""ec2:Describe*"", ""ec2:AttachNetworkInterface"", ""ec2:AttachVolume"", ""ec2:CreateTags"", ""ec2:CreateVolume"", ""ec2:RunInstances"", ""ec2:StartInstances""],
      ""Effect"": ""Allow""
    }
  ]
}

To add functions later, you can use the AWS Management Console to modify the IAM role.

Amazon Glacier

Amazon Glacier is an extremely low-cost service that provides secure and durable storage for data archiving and backup [53]. Amazon Glacier is optimized for data that is infrequently accessed, and it provides multiple options, such as expedited, standard, and bulk methods, for data retrieval. With standard and bulk retrievals, data is available in 3–5 hours or 5–12 hours, respectively. However, with expedited retrieval, Amazon Glacier provides you with an option to retrieve data in 3–5 minutes, which can be ideal for occasional urgent requests. With Amazon Glacier, you can reliably store large or small amounts of data for as little as $0.01 per gigabyte per month, a significant savings compared to on-premises solutions. You can use lifecycle policies, as explained in the Amazon S3 Developer Guide, to push SAP HANA backups to Amazon Glacier for long-term archiving [54].

Backup Destination

The primary difference
between backing up SAP systems on AWS and backing up SAP systems on traditional on-premises infrastructure is the backup destination. Tape is the typical backup destination used with on-premises infrastructure. On AWS, backups are stored in Amazon S3 instead. Amazon S3 has many benefits over tape, including the ability to automatically store backups “offsite” from the source system, since data in Amazon S3 is replicated across multiple facilities within the AWS Region.

SAP HANA systems provisioned using the SAP HANA Quick Start reference deployment are configured with a set of EBS volumes to be used as an initial local backup destination. HANA backups are first stored on these local EBS volumes and then copied to Amazon S3 for long-term storage. You can use SAP HANA Studio, SQL commands, or the DBA Cockpit to start or schedule SAP HANA data backups. Log backups are written automatically, unless disabled. The /backup file system is configured as part of the deployment process.

Figure 3: SAP HANA file system layout

The SAP HANA global.ini configuration file is customized by the SAP HANA Quick Start reference deployment process as follows: database backups go directly to /backup/data/, while automatic log archival files go to /backup/log/.

[persistence]
basepath_shared = no
savepoint_intervals = 300
basepath_datavolumes = /hana/data/
basepath_logvolumes = /hana/log/
basepath_databackup = /backup/data/
basepath_logbackup = /backup/log/

Some third-party backup tools, such as Commvault, NetBackup, and TSM, are integrated with Amazon S3 capabilities and can be used to trigger and save SAP HANA backups directly into Amazon S3, without needing to store the backups on EBS volumes first.

AWS Command Line Interface

The AWS CLI, which is a unified tool to manage AWS services, is installed as part of the base image [55]. Using various commands, you can control multiple AWS services from the command line directly and automate them through scripts. Access to your S3 bucket is available through the IAM role assigned to the instance (discussed earlier). Using the AWS CLI commands for Amazon S3, you can list the contents of the previously created bucket, back up files, and restore files, as explained in the AWS CLI documentation [56]:

imdbmaster:/backup # aws s3 ls --region us-east-1 s3://node2-hanas3bucket-gcynh5v2nqs3
Bucket: node2-hanas3bucket-gcynh5v2nqs3
Prefix:
LastWriteTime  Length  Name

Backup Example

Here are the steps you might take for a typical backup task:

1. In the SAP HANA Backup Editor, choose Open Backup Wizard. You can also open the Backup Wizard by right-clicking the system that you want to back up and choosing Back Up.
a. Select destination type File. This backs up the database to files in the specified file system.
b. Specify the backup destination (/backup/data/) and the backup prefix.

Figure 4: SAP HANA backup example

c. Choose Next and then Finish. A confirmation message appears when the backup is complete.
d. Verify that the backup files are available at the OS level.

The next step is to push or synchronize the backup files from the /backup file system to Amazon S3 by using the aws s3 sync command [57]:

imdbmaster:/ # aws s3 sync backup s3://node2-hanas3bucket-gcynh5v2nqs3 --region us-east-1

2. Use the AWS Management Console to verify that the files have been pushed to Amazon S3. You can also use the aws s3 ls command shown previously in the AWS Command Line Interface section [58].

Figure 5: Amazon S3 bucket contents after backup

Tip: The aws s3 sync command only uploads new files that don't exist in Amazon S3. Use a periodically scheduled cron job to sync, and then delete, files that have been uploaded. See SAP Note 1651055 for scheduling periodic backup jobs in Linux, and extend the supplied scripts with aws s3 sync commands [59].

Scheduling and
Executing Backups Remotely

The Amazon EC2 Systems Manager Run Command, along with Amazon CloudWatch Events, can be leveraged to schedule backups for your SAP HANA system remotely, without the need to log in to the EC2 instances. You can also leverage cron or any other instance-level scheduling mechanism.

The Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. A managed instance is any EC2 instance, or on-premises machine in your hybrid environment, that has been configured for Systems Manager. The Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale. You can use the Run Command from the Amazon EC2 console, the AWS CLI, Windows PowerShell, or the AWS SDKs.

Systems Manager Prerequisites

Systems Manager has the following prerequisites.

Supported operating system (Linux) – Instances must run a supported version of Linux.

64-bit and 32-bit systems:
• Amazon Linux 2014.09, 2014.03, or later
• Ubuntu Server 16.04 LTS, 14.04 LTS, or 12.04 LTS
• Red Hat Enterprise Linux (RHEL) 6.5 or later
• CentOS 6.3 or later

64-bit systems only:
• Amazon Linux 2015.09, 2015.03, or later
• Red Hat Enterprise Linux (RHEL) 7.x or later
• CentOS 7.1 or later
• SUSE Linux Enterprise Server (SLES) 12 or higher

Roles for Systems Manager – Systems Manager requires an IAM role for instances that will process commands, and a separate role for users executing commands. Both roles require permission policies that enable them to communicate with the Systems Manager API. You can choose to use Systems Manager managed policies, or you can create your own roles and specify permissions. For more information, see Configuring Security Roles for Systems Manager [60]. If you are configuring on-premises servers or virtual machines (VMs) that you want to configure using Systems Manager, you must also configure an IAM service role. For more information, see Create an IAM Service Role [61].

SSM Agent (EC2 Linux instances) – SSM Agent processes Systems Manager requests and configures your machine as specified in the request. You must download and install SSM Agent on your EC2 Linux instances. For more information, see Installing SSM Agent on Linux.

To schedule remote backups, here are the high-level steps:

1. Install and configure the Systems Manager agent on the EC2 instance. For detailed installation steps, see http://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html
2. Provide SSM access to the EC2 instance role that is assigned to the SAP HANA instance. For detailed information on how to assign SSM access to a role, see http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-access.html
3. Create an SAP HANA backup script. A sample script is shown below; you can use it as a starting point and then modify it to meet your requirements:

#!/bin/sh
set -x
S3Bucket_Name=<>
TIMESTAMP=$(date +%F_%H%M)
exec 1>/backup/data/${SAPSYSTEMNAME}/${TIMESTAMP}_backup_log.out 2>&1
echo ""Starting to take backup of Hana Database and Upload the backup files to S3""
echo ""Backup Timestamp for $SAPSYSTEMNAME is $TIMESTAMP""
BACKUP_PREFIX=${SAPSYSTEMNAME}_${TIMESTAMP}
echo $BACKUP_PREFIX
# source HANA environment
source $DIR_INSTANCE/hdbenv.sh
# execute command with user key
hdbsql -U BACKUP ""backup data using file ('$BACKUP_PREFIX')""
echo ""HANA Backup is completed""
echo ""Continue with copying the backup files in to S3""
echo $BACKUP_PREFIX
sudo -u root /usr/local/bin/aws s3 cp --recursive /backup/data/${SAPSYSTEMNAME}/ s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}/data/ --exclude ""*"" --include ""${BACKUP_PREFIX}*""
echo ""Copying HANA Database log files in to S3""
sudo -u root /usr/local/bin/aws s3 sync /backup/log/${SAPSYSTEMNAME}/ s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}/log/ --exclude ""*"" --include
""log_backup*"" sudo u root /usr/local/bin/aws s3 cp /backup/data/${SAPSYSTEMNAME}/${TIMESTAMP}_backup_logout s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME} ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 17 Note : This script takes into consideration that hdbuserstore has a key named Backup 4 At this point you can test an one time backup by executing an ssm command directly : aws ssm send command instance ids <> document name AWS RunShellScript parameters commands=""sudo u adm TIMESTAMP=$(date +\ %F\_%H\%M) SAPSYSTEMNAME= DIR_INSTANCE=/hana/shared/${SAPSYSTEMNAME}/HDB00 i /usr/sap/HDB/HDB00/hana_backupsh"" Note : For this command to execute successfully you will have to enable adm login using sudo 5 Using CloudWatch E vents you can schedule backups remotely at any desired frequency Navigate to the Cloud Watch Events page and create a rule ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 18 Figure 6 : Amazon CloudWatch event rule creation When configuring the rule : • Choose Schedule • Select SSM Run Command as the Target • Select AWS RunShellScript (Linux) as the D ocument type • Choose InstanceIds or Tags as Target Keys ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 19 • Choose Constant under Configure Parameters and type the run command Restoring SAP HANA Backups and Snapshots Restor ing SAP Backups To restore your SAP HANA database from a backup perform the following steps : 1 If the backup files are not already available in the /backup file system but are in Amazon S3 restore the files from Amazon S3 by using the aws s3 cp command62 This command has the following syntax: aws region cp recursive * For e xample : imdbmaster:/backup/data/YYZ # aws region us east1 s3 cp s3://node2 hanas3bucket gcynh5v2nqs3/data/YYZ recursive include COMPLETE* 2 Recover the SAP HANA database by using the R ecovery Wizard as outlined in the SAP HANA Administration Guide 63 Specify File as the 
Destination Type, and enter the correct Backup Prefix.

Figure 7: Restore example

3. When the recovery is complete, you can resume normal operations and clean up backup files from the /backup//* directories.

Restoring EBS/AMI Snapshots

To restore EBS snapshots, perform the following steps:

1. Create a new volume from the snapshot:

aws ec2 create-volume --region us-west-2 --availability-zone us-west-2a --snapshot-id snap-1234abc123a12345a --volume-type gp2

2. Attach the newly created volume to your EC2 host:

aws ec2 attach-volume --region us-west-2 --volume-id vol-4567c123e45678dd9 --instance-id i-03add123456789012 --device /dev/sdf

3. Mount the logical volume associated with SAP HANA data on the host:

mount /dev/sdf /hana/data

4. Start your SAP HANA instance.

Note: For large, mission-critical systems, we highly recommend that you execute the volume initialization command on the database data and log volumes after the AMI restore but before starting the database. Executing the volume initialization command will help you avoid extensive wait times before the database is available. Here is a sample fio command that you can use:

sudo fio --filename=/dev/xvdf --rw=read --bs=128K --iodepth=32 --ioengine=libaio --direct=1 --name=volume-initialize

For more information about initializing Amazon EBS volumes, see the AWS documentation.64

Restoring AMI Snapshots

You can restore your SAP HANA AMI snapshots through the AWS Management Console. On the EC2 Dashboard, select AMIs in the left-hand navigation. Choose the AMI that you want to restore, expand Actions, and select Launch.

Figure 8: Restore AMI snapshot

Networking

SAP HANA components communicate over the following logical network zones:

• Client zone – to communicate with different clients such as SQL clients, SAP Application Server, SAP HANA Extended Application Services (XS), SAP HANA Studio, etc. •
Internal zone – to communicate with hosts in a distributed SAP HANA system, as well as for SAP HSR.
• Storage zone – to persist SAP HANA data in the storage infrastructure for resumption after start or recovery after failure.

Separating network zones for SAP HANA is considered both an AWS and an SAP best practice, because it enables you to isolate the traffic required for each communication channel.

In a traditional bare-metal setup, these different network zones are set up by having multiple physical network cards or virtual LANs (VLANs). On the AWS Cloud, by contrast, this network isolation can be achieved simply through the use of elastic network interfaces (ENIs) combined with security groups. Amazon EBS-optimized instances can also be used for further isolation of storage I/O.

EBS-Optimized Instances

Many newer Amazon EC2 instance types, such as the X1, use an optimized configuration stack and provide additional dedicated capacity for Amazon EBS I/O. These are called EBS-optimized instances.65 This optimization provides the best performance for your EBS volumes by minimizing contention between Amazon EBS I/O and other traffic from your instance.

Figure 9: EBS-optimized instances

Elastic Network Interfaces (ENIs)

An ENI is a virtual network interface that you can attach to an EC2 instance in an Amazon Virtual Private Cloud (Amazon VPC). With ENIs, you can create different logical networks by specifying multiple private IP addresses for your instances. For more information about ENIs, see the AWS documentation.66

In the following example, two ENIs are attached to each SAP HANA node, as well as a separate communication channel for storage.

Figure 10: ENIs attached to SAP HANA nodes

Security Groups

A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an
instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time, and the new rules are automatically applied to all instances that are associated with the security group. To learn more about security groups, see the AWS documentation.67

In the following example, ENI-1 of each instance shown is a member of the same security group that controls inbound and outbound network traffic for the client network.

Figure 11: ENIs and security groups

Network Configuration for SAP HANA System Replication (HSR)

You can configure additional ENIs and security groups to further isolate inter-node communication as well as SAP HSR network traffic. In Figure 12, ENI-2 is dedicated to inter-node communication, with its own security group (not shown) to secure client traffic from inter-node communication. ENI-3 is configured to secure SAP HSR traffic to another Availability Zone within the same Region. In this example, the target SAP HANA cluster would be configured with additional ENIs similar to the source environment, and ENI-3 would share a common security group.

Figure 12: Further isolation with additional ENIs and security groups

Configuration Steps for Logical Network Separation

To configure your logical network for SAP HANA, follow these steps:

1. Create new security groups to allow for isolation of client, internal communication, and, if applicable, SAP HSR network traffic. See Ports and Connections in the SAP HANA documentation to learn about the list of ports used for different network zones.68 For more information about how to create and configure security groups, see the AWS documentation.69

2. Use Secure Shell (SSH) to connect to your EC2 instance at the OS level. Follow the
steps described in Appendix A to configure the OS to properly recognize and name the Ethernet devices associated with the new elastic network interfaces (ENIs) you will be creating.

3. Create new ENIs from the AWS Management Console or through the AWS CLI. Make sure that the new ENIs are created in the subnet where your SAP HANA instance is deployed. As you create each new ENI, associate it with the appropriate security group you created in step 1. For more information about how to create a new ENI, see the AWS documentation.70

4. Attach the ENIs you created to your EC2 instance where SAP HANA is installed. For more information about how to attach an ENI to an EC2 instance, see the AWS documentation.71

5. Create virtual host names, and map them to the IP addresses associated with the client, internal, and replication network interfaces. Ensure that hostname-to-IP-address resolution is working by creating entries in all applicable host files or in the Domain Name System (DNS). When complete, test that the virtual host names can be resolved from all SAP HANA nodes, clients, etc.

6. For scale-out deployments, configure SAP HANA inter-service communication to let SAP HANA communicate over the internal network. To learn more about this step, see Configuring SAP HANA Inter-Service Communication in the SAP HANA documentation.72

7. Configure SAP HANA hostname resolution to let SAP HANA communicate over the replication network for SAP HSR. To learn more about this step, see Configuring Hostname Resolution for SAP HANA System Replication in the SAP HANA documentation.73

SAP Support Access

In some situations, it may be necessary to allow an SAP support engineer to access your SAP HANA systems on AWS. The following information serves only as a supplement to the information contained in the “Getting Support” section of the SAP HANA Administration Guide.74

A few steps are required to configure proper connectivity to SAP. These
steps differ depending on whether you want to use an existing remote network connection to SAP, or you are setting up a new connection directly with SAP from systems on AWS.

Support Channel Setup with SAProuter on AWS

When setting up a direct support connection to SAP from AWS, consider the following steps:

1. For the SAProuter instance, create and configure a specific SAProuter security group, which allows only the required inbound and outbound access to the SAP support network. This should be limited to a specific IP address that SAP gives you to connect to, along with TCP port 3299. See the Amazon EC2 security group documentation for additional details about creating and configuring security groups.75

2. Launch the instance that the SAProuter software will be installed on into a public subnet of the Amazon VPC, and assign it an Elastic IP address (EIP).

3. Install the SAProuter software, and create a saprouttab file that allows access from SAP to your SAP HANA system on AWS.

4. Set up the connection with SAP. For your internet connection, use Secure Network Communication (SNC). For more information, see the SAP Remote Support – Help page.76

5. Modify the existing SAP HANA security groups to trust the new SAProuter security group you have created.

Tip: For added security, shut down the EC2 instance that hosts the SAProuter service when it is not needed for support purposes.

Figure 13: Support connectivity with SAProuter on AWS

Support Channel Setup with SAProuter On-Premises

In many cases, you may already have a support connection configured between your data center and SAP. This can easily be extended to support SAP systems on AWS. This scenario assumes that connectivity between your data center and AWS has already been established, either by way of a secure VPN tunnel over the internet or by using AWS Direct Connect.77 You can
extend this connectivity as follows:

1. Ensure that the proper saprouttab entries exist to allow access from SAP to resources in the Amazon VPC.

2. Modify the SAP HANA security groups to allow access from the on-premises SAProuter IP address.

3. Ensure that the proper firewall ports are open on your gateway to allow traffic to pass over TCP port 3299.

Figure 14: Support connectivity with SAProuter on-premises

Security

This section discusses additional security topics that you may want to consider and that are not covered in the SAP HANA Quick Start reference deployment guide. Here are additional AWS security resources to help you achieve the level of security you require for your SAP HANA environment on AWS:

• AWS Cloud Security Center78
• CIS AWS Foundation whitepaper79
• AWS Cloud Security whitepaper80
• AWS Cloud Security Best Practices whitepaper81

OS Hardening

You may want to lock down the OS configuration further, for example, to avoid providing a DB administrator with root credentials when logging into an instance. You can also refer to the following SAP notes:

• 1730999: Configuration changes in HANA appliance82
• 1731000: Unrecommended configuration changes83

Disabling HANA Services

HANA services such as HANA XS are optional and should be deactivated if they are not needed. For instructions, see SAP Note 1697613: Remove XS Engine out of SAP HANA database.84 In case of service deactivation, you should also remove the corresponding TCP ports from the SAP HANA AWS security groups for complete security.

API Call Logging

AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you.85 The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service.
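The API call logging described above can be switched on with a short sketch using the AWS CLI. The trail and bucket names below are hypothetical placeholders, and the AWS calls are guarded behind a check for the aws binary so the sketch reads as a dry run rather than a definitive procedure; the S3 bucket must already exist with a bucket policy that permits CloudTrail writes.

```shell
#!/bin/sh
# Sketch: enable API call logging with AWS CloudTrail.
# TRAIL_NAME and BUCKET are hypothetical placeholders.
TRAIL_NAME=sap-hana-audit-trail
BUCKET=my-cloudtrail-logs-bucket

# Guard: only call AWS when the CLI is installed and configured.
if command -v aws >/dev/null 2>&1; then
  aws cloudtrail create-trail \
    --name $TRAIL_NAME \
    --s3-bucket-name $BUCKET \
    --is-multi-region-trail
  # A trail records nothing until logging is explicitly started.
  aws cloudtrail start-logging --name $TRAIL_NAME
fi
echo trail configured: $TRAIL_NAME
```

Once the trail is logging, API calls made against the account, including the SSM and EC2 calls used elsewhere in this guide, are delivered to the bucket as compressed JSON log files.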
With CloudTrail, you can get a history of AWS API calls for your account, including API calls made via the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as CloudFormation). The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.

Notifications on Access

You can use Amazon Simple Notification Service (Amazon SNS) or third-party applications to set up notifications on SSH login to your email address or mobile phone.86

High Availability and Disaster Recovery

For details and best practices for high availability and disaster recovery of SAP HANA systems running on AWS, see High Availability and Disaster Recovery Options for SAP HANA on AWS.87

Conclusion

This whitepaper discusses best practices for the operation of SAP HANA systems on the AWS Cloud. The best practices provided in this paper will help you efficiently manage and achieve maximum benefit from running your SAP HANA systems on the AWS Cloud. For feedback or questions, please contact us at saponaws@amazon.com.

Contributors

The following individuals and organizations contributed to this document:

• Rahul Kabra, Partner Solutions Architect, AWS
• Somckit Khemmanivanh, Partner Solutions Architect, AWS
• Naresh Pasumarthy, Partner Solutions Architect, AWS

Appendix A – Configuring Linux to Recognize Ethernet Devices for Multiple ENIs

Follow these steps to configure the Linux operating system to recognize and name the Ethernet devices associated with the new elastic network interfaces (ENIs) created for logical network separation, which was discussed earlier in this paper.

1. Use SSH to connect to your SAP HANA host as ec2-user, and sudo to root.

2. Remove the existing udev rule; for example:

hanamaster:# rm -f /etc/udev/rules.d/70-persistent-net.rules

Create a new udev rule that writes rules based on MAC address rather than other
device attributes. This will ensure that on reboot, eth0 is still eth0, eth1 is still eth1, and so on. For example:

hanamaster:# cat <<EOF > /etc/udev/rules.d/75-persistent-net-generator.rules
# Copyright (C) 2012 Amazon.com, Inc. or its affiliates.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the ""License"").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the ""license"" file accompanying this file. This file is
# distributed on an ""AS IS"" BASIS, WITHOUT WARRANTIES OR CONDITIONS
# OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the
# License.

# these rules generate rules for persistent network device naming
SUBSYSTEM!=""net"", GOTO=""persistent_net_generator_end""
KERNEL!=""eth*"", GOTO=""persistent_net_generator_end""
ACTION!=""add"", GOTO=""persistent_net_generator_end""
NAME==""?*"", GOTO=""persistent_net_generator_end""

# do not create rule for eth0
ENV{INTERFACE}==""eth0"", GOTO=""persistent_net_generator_end""

# read MAC address
ENV{MATCHADDR}=""\$attr{address}""

# do not use empty address
ENV{MATCHADDR}==""00:00:00:00:00:00"", GOTO=""persistent_net_generator_end""

# discard any interface name not generated by our rules
ENV{INTERFACE_NAME}==""?*"", ENV{INTERFACE_NAME}=""""

# default comment
ENV{COMMENT}=""elastic network interface""

# write rule
IMPORT{program}=""write_net_rules""

# rename interface if needed
ENV{INTERFACE_NEW}==""?*"", NAME=""\$env{INTERFACE_NEW}""

LABEL=""persistent_net_generator_end""
EOF

3. Ensure proper interface properties. For example:

hanamaster:# cd /etc/sysconfig/network/
hanamaster:# cat <<EOF > /etc/sysconfig/network/ifcfg-ethN
BOOTPROTO='dhcp4'
MTU=""9000""
REMOTE_IPADDR=''
STARTMODE='onboot'
LINK_REQUIRED=no
LINK_READY_WAIT=5
EOF
4. Ensure that you can accommodate up to seven more Ethernet devices/ENIs, and restart wicked. For example:

hanamaster:# for dev in eth{1..7}; do ln -sf ifcfg-ethN /etc/sysconfig/network/ifcfg-${dev}; done
hanamaster:# systemctl restart wicked

5. Create and attach a new ENI to the instance.

6. Reboot.

7. After reboot, modify /etc/iproute2/rt_tables.

Important: Repeat the following for each ENI that you attach to your instance. For example:

hanamaster:# cd /etc/iproute2
hanamaster:/etc/iproute2 # echo ""2 eth1_rt"" >> rt_tables
hanamaster:/etc/iproute2 # ip route add default via 172.16.11.22 dev eth1 table eth1_rt
hanamaster:/etc/iproute2 # ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
hanamaster:/etc/iproute2 # ip rule add from <eth1-ip-address> lookup eth1_rt prio 1000
hanamaster:/etc/iproute2 # ip rule
0: from all lookup local
1000: from <eth1-ip-address> lookup eth1_rt
32766: from all lookup main
32767: from all lookup default

Notes

1. http://docs.aws.amazon.com/quickstart/latest/sap-hana/ or https://s3.amazonaws.com/quickstart-reference/sap/hana/latest/doc/SAP+HANA+Quick+Start.pdf
2. http://d0.awsstatic.com/enterprise-marketing/SAP/SAP-HANA-on-AWS-Manual-Setup-Guide.pdf
3. https://help.sap.com/hana/SAP_HANA_Administration_Guide_en.pdf
4. http://service.sap.com/instguides
5. http://service.sap.com/notes
6. http://docs.aws.amazon.com/gettingstarted/latest/awsgsg-intro/intro.html
7. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html
8. http://aws.amazon.com/sap/whitepapers/
9. http://d0.awsstatic.com/enterprise-marketing/SAP/SAP_on_AWS_Implementation_Guide.pdf
10. http://d0.awsstatic.com/enterprise-marketing/SAP/SAP_on_AWS_High_Availability_Guide_v3.2.pdf
11. http://d0.awsstatic.com/enterprise-marketing/SAP/sap-on-aws-backup-and-recovery-guide-v2-2.pdf
12. https://aws.amazon.com/answers/infrastructure-management/ec2-scheduler/
13. https://aws.amazon.com/blogs/security/how-to-automatically-tag-amazon-ec2-resources-in-response-to-api-events/
14. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
15. https://aws.amazon.com/blogs/aws/new-aws-resource-tagging-api/
16. https://aws.amazon.com/cloudwatch/
17. https://aws.amazon.com/marketplace
18. http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/GettingStarted.html
19. http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html
20. http://docs.aws.amazon.com/lambda/latest/dg/getting-started.html
21. https://aws.amazon.com/ec2/systems-manager/patch-manager/
22. https://help.sap.com/viewer/2c1988d620e04368aa4103bf26f17727/2.0.00/en-US/9731208b85fa4c2fa68c529404ffa75a.html
23. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
24. http://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-launch.html
25. https://aws.amazon.com/cloudformation/
26. https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.00/en-US/38ad53e538ad41db9d12d22a6c8f2503.html
27. https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.00/en-US/c622d640e47e4c0ebca8cbe74ff9550a.html
28. https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.00/en-US/ea70213a0e114ec29724e4a10b6bb176.html
29. https://launchpad.support.sap.com/#/notes/1984882/E
30. https://launchpad.support.sap.com/#/notes/1913302/E
31. https://www.suse.com/communities/blog/upgrading-running-demand-instances-public-cloud/
32. https://aws.amazon.com/partners/redhat/faqs/
33. https://aws.amazon.com/about-aws/whats-new/2016/12/amazon-ec2-systems-manager-now-offers-patch-management/
34. https://aws.amazon.com/ec2/systems-manager/patch-manager/
35. http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html
36. https://docs.aws.amazon.com/quickstart/latest/sap-hana/welcome.html
37. https://help.sap.com/doc/6b94445c94ae495c83a19646e7c3fd56/2.0.01/en-US/c622d640e47e4c0ebca8cbe74ff9550a.html
38. https://launchpad.support.sap.com/#/notes/1984882/E
39. http://d0.awsstatic.com/enterprise-marketing/SAP/sap-on-aws-backup-and-recovery-guide-v2-2.pdf
40. http://service.sap.com/sap/support/notes/1642148
41. http://service.sap.com/sap/support/notes/1821207
42. http://service.sap.com/sap/support/notes/1869119
43. http://service.sap.com/sap/support/notes/1873247
44. http://service.sap.com/sap/support/notes/1651055
45. http://service.sap.com/sap/support/notes/2484177
46. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
47. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html
48. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
49. https://service.sap.com/notes/1703435
50. http://aws.amazon.com/s3/
51. http://aws.amazon.com/documentation/s3/
52. http://aws.amazon.com/iam/
53. http://aws.amazon.com/glacier/
54. http://docs.aws.amazon.com/AmazonS3/latest/dev/object-archival.html
55. http://aws.amazon.com/cli/
56. http://docs.aws.amazon.com/cli/latest/reference/s3/
57. http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
58. http://docs.aws.amazon.com/cli/latest/reference/s3/ls.html
59. http://service.sap.com/sap/support/notes/1651055
60. http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-access.html
61. http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managedinstances.html#sysman-service-role
62. http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
63. https://help.sap.com/hana/SAP_HANA_Administration_Guide_en.pdf
64. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html
65. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html
66. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
67. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html
68. https://help.sap.com/saphelp_hanaplatform/helpdata/en/a9/326f20b39342a7bc3d08acb8ffc68a/frameset.htm
69. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#creating-security-group
70. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#create_eni
71. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#attach_eni_running_stopped
72. https://help.sap.com/saphelp_hanaplatform/helpdata/en/bb/cb76c7fa7f45b4adb99e60ad6c85ba/frameset.htm
73. http://help.sap.com/saphelp_hanaplatform/helpdata/en/9a/cd6482a5154b7e95ce72e83b04f94d/frameset.htm
74. https://help.sap.com/hana/SAP_HANA_Administration_Guide_en.pdf
75. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
76. https://support.sap.com/remote-support/help.html
77. http://aws.amazon.com/directconnect/
78. http://aws.amazon.com/security/
79. https://d0.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf
80. http://d0.awsstatic.com/whitepapers/Security/AWS%20Security%20Whitepaper.pdf
81. http://d0.awsstatic.com/whitepapers/aws-security-best-practices.pdf
82. https://service.sap.com/sap/support/notes/1730999
83. https://service.sap.com/sap/support/notes/1731000
84. https://service.sap.com/sap/support/notes/1697613
85. https://aws.amazon.com/cloudtrail/
86. https://aws.amazon.com/sns/
87. http://d0.awsstatic.com/enterprise-marketing/SAP/sap-hana-on-aws-high-availability-disaster-recovery-guide.pdf
",General,consultant,Best Practices
Secure_Content_Delivery_with_CloudFront,Secure Content Delivery with Amazon CloudFront: Improve the Security and Performance of Your Applications While Lowering Your Content Delivery Costs. November 2016.

This paper has been archived. For the latest technical content about secure content delivery with Amazon CloudFront, see https://docs.aws.amazon.com/whitepapers/latest/secure-content-delivery-amazon-cloudfront/secure-content-delivery-with-amazon-cloudfront.html

© 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS’s current product offerings and practices as of the date of issue of this
document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Introduction
Enabling Easy SSL/TLS Adoption
Using Custom SSL Certificates with SNI Custom SSL
Meeting Requirements for PCI Compliance and Industry Standard Apple iOS ATS
Improving Performance of SSL/TLS Connections
Terminating SSL Connections at the Edge
Supporting Session Tickets and OCSP Stapling
Balancing Security and Performance with Half Bridge and Full Bridge TLS Termination
Ensuring Asset Availability
Making SSL/TLS Adoption Economical
Conclusion
Further Reading
Notes

Abstract

As companies respond to cybercrime, compliance requirements, and a commitment to securing customer data, their adoption of Secure Sockets Layer/Transport Layer Security (SSL/TLS) protocols increases. This whitepaper explains how Amazon CloudFront improves the security and performance of your APIs and applications while helping you lower your content delivery costs. It focuses on three specific benefits of using CloudFront: easy SSL adoption with AWS Certificate Manager (ACM) and Server Name Indication (SNI) Custom SSL support; improved SSL performance, with SSL termination available at all CloudFront edge locations globally; and economical adoption of SSL, thanks to free custom SSL certificates with ACM and SNI support at no additional charge.

Amazon Web Services – Secure Content Delivery with Amazon
CloudFront

Introduction

The adoption of Secure Sockets Layer/Transport Layer Security (SSL/TLS) protocols to encrypt Internet traffic has increased in response to more cybercrime, compliance requirements (PCI v3.2), and a commitment to secure customer data. A survey of the top 140,000 websites revealed that more than 40 percent were secured by SSL.1 As measured by Alexa (an amazon.com company), 32 percent of the top million URLs were encrypted using HTTPS (also called HTTP over TLS, HTTP over SSL, and HTTP Secure) in September 2016,2 an increase of 45 percent from the same month in 2015. Amazon CloudFront is moving in this direction, with a rapidly increasing share of global content traffic on CloudFront delivered over SSL/TLS. CloudFront integrates with AWS Certificate Manager (ACM) for SSL/TLS-level support to ensure secure data transmission using the most modern ciphers and handshakes. Figure 1 shows how this secure content delivery works.

Figure 1: Secure content delivery with CloudFront and the AWS Certificate Manager

SSL/TLS on CloudFront offers these key benefits (summarized in Table 1):

• Ease of use
• Improved performance
• Lower costs

The integration of CloudFront with ACM reduces the time to set up and deploy SSL/TLS certificates, and translates to improved HTTPS availability and performance. Finally, certificates and encrypted data rates are offered at very low charge. These benefits are discussed in detail in the following sections.

Table 1: Summary of the key benefits of SSL/TLS on CloudFront

Ease of use:
• Integrated with ACM – procurement of a new certificate directly from the CloudFront console, automatic certificate distribution globally, automatic certificate renewal, and revocation management
• SNI Custom SSL support
• Support for standards (e.g., Apple iOS ATS and PCI)
• SSL management in the AWS environment

Improved performance:
• HTTPS capability at all global edge locations
• SSL/TLS termination close to viewers
• Latency reduction with Session Tickets and OCSP stapling

Lower costs:
• Free custom SSL/TLS certificate with ACM
• SNI Custom SSL/TLS at no additional charge
• No setup fees, no hosting fees, and no extra charges for the HTTPS bytes transferred
• Standard (or discounted with a signed contract) CloudFront rates for data transfer and HTTPS requests

Enabling Easy SSL/TLS Adoption

All browsers have the capability to interact with secured web servers using the SSL/TLS protocol. However, both browser and server need an SSL certificate to establish a secure connection. Support for SSL certificate management requires working with a Certificate Authority (CA), which is a third party that is trusted by both the subject of the certificate (e.g., the content owner) and the party that relies on the certificate (e.g., the content viewer). The entire manual process of purchasing, uploading, and renewing valid certificates through third-party CAs can be quite lengthy.

AWS provides seamless integration between CloudFront and ACM to reduce the creation and deployment time of a new, free custom SSL certificate and make certificate management a simpler, more automatic process, as shown in Figure 2.

Custom SSL certificates allow you to deliver secure content using your own domain name (e.g., www.example.com). Although it typically takes a couple of minutes for a certificate to be issued after receiving approval, it could take longer.3 Once a certificate is issued or imported into ACM, it is immediately available for use via the CloudFront console, and it is automatically propagated to the global network of CloudFront edge locations when it is associated with distributions.

ACM automatically handles certificate renewal, which makes configuring and maintaining SSL/TLS for your secure website or application easier and less error prone than a manual process. In turn, this helps you avoid downtime due to
misconfigured, revoked, or expired certificates. ACM-provided certificates are valid for 13 months, and renewal starts 60 days prior to expiration. If a certificate is compromised, it can be revoked and replaced via ACM at no additional charge. AWS ensures that private keys are never exported, which removes the need to secure and track them.

Figure 2: CloudFront integration with ACM

Using SSL Certificates with SNI Custom SSL

You can use your own SSL certificates with CloudFront at no additional charge with Server Name Indication (SNI) Custom SSL. SNI is an extension of the TLS protocol that provides an efficient way to deliver content over HTTPS using your
AES128 GCM SHA256) and offers them to the clie nt in preferential sequence Export ciphers are not supported Patching Dedicated teams are responsible for monitoring the threat landscape handling security events and patching software Under t he shared security model AWS will take the necessary meas ures to remediate vulnerabilities with methods such as patching deprecation and revocation DDoS attacks CloudFront has extensive mitigation techniques for standard flood type attacks against SSL To thwart SSL renegotiation type attacks CloudFront dis ables renegotiation Table 3 : Amazon CloudFront support of Apple iOS ATS requirements Apple iOS ATS Requirement CloudFront Support TLS/SSL version must be TLS 12 CloudFront supports TLS 12 ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 5 of 11 Apple iOS ATS Requirement CloudFront Support TLS Cipher Suite must be from the following with Perfect Forward Secrecy : CloudFront supports Perfect Forward Secr ecy with the following ciphers: ECDSA Certificates: RSA Certificates: TLS_ECDHE_ECDSA_WITH_AES_ 256_GCM_ SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDH E_ECDSA_WITH_AES_128_GCM_ SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDH E_ECDSA_WITH_AES_256_CBC_SHA384 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 TLS_E CDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 TLS_ECDH E_ECDSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES _128_CBC_SHA TLS_E CDHE_ECDSA_WITH_AES_128_CBC_SHA RSA Certificates: TLS_ECDHE_RSA_WITH_AES_256_G CM_SHA384 TLS_EC DHE_RSA_WITH_AES_128_GCM_SHA256 TLS_EC DHE_RSA_WITH_AES_256_CBC_SHA384 TLS_EC DHE_RSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA Leaf server certs must be signed with the following : Server certificates signed with the following type of key: Rivest Shamir Adleman (RSA) key with a length of at least 2048 bits Rivest Shamir Adleman (RSA) key with a length of 2048 bits Elliptic Curve Cryptography (ECC) key with a size of at least 
256 bits.

Improving Performance of SSL/TLS Connections
You may see a degradation in the performance of your API or application when clients connect directly to your origin servers using SSL. Setting up an SSL/TLS connection adds up to three round trips between the client and server, introducing additional latency in the connection setup. Once the connection is established, additional CPU resources are required to encrypt the data that is transmitted.

Terminating SSL Connections at the Edge
When you enable SSL with CloudFront, all global edge locations are used for handling your SSL traffic. Clients terminate SSL connections at a nearby CloudFront edge location, thus reducing network latency in setting up an SSL connection. In addition, moving the SSL termination to CloudFront helps you offload encryption to CloudFront servers that are specifically designed to be highly scalable and performance optimized. These factors boost the performance of not only static content but also dynamic content. For example, Slack improved its performance when it migrated the delivery of its dynamic content to HTTPS with CloudFront. The worldwide average response time to slack.com dropped from 488 milliseconds to 199 milliseconds (see Figure 3). A large portion of these performance benefits came from the decreased SSL negotiation time, as the worldwide average for SSL connection times decreased from 215 milliseconds to 52 milliseconds.

Figure 3: Slack improved its performance by delivering its dynamic content via HTTPS with CloudFront

Supporting Session Tickets and OCSP Stapling
CloudFront further improves the performance of SSL connections with the support of Session Tickets and Online Certificate Status Protocol (OCSP) stapling (see Figure 4). Session Tickets help decrease the time spent restarting or resuming an SSL session. CloudFront encrypts SSL session information and
stores it in a ticket that the client can use to resume a secure connection instead of repeating the SSL handshake process. OCSP stapling improves the time taken for individual SSL handshakes by moving the OCSP check (a call used to obtain the revocation status of an SSL certificate) from the client to a periodic secure check by the CloudFront servers. With OCSP stapling, the CloudFront engineering team measured up to a 30 percent performance improvement in the initial connection between the client and the server.

Figure 4: Session Tickets decrease the time spent restarting or resuming an SSL session

Balancing Security and Performance with Half Bridge and Full Bridge TLS Termination
With CloudFront, you can strike a balance between security and performance by choosing between half bridge and full bridge TLS termination (see Figure 5). By defining different cache behaviors in the same distribution, you can define which connections to the origin use HTTPS and which use HTTP. You can configure objects that need secure connections to the origin to use HTTPS (e.g., login pages, sensitive data) and configure objects that do not need secure connections to use HTTP (e.g., logos, images). Thus, everything can be securely transmitted to the client, and origin fetches can be optimized to use HTTP to reduce the overall latency of the transaction.

Figure 5: Balancing security and performance on the same distribution

For full secure delivery, you can configure CloudFront to require HTTPS for communication between viewers and CloudFront, and optionally between CloudFront and your origin.5 Also, you can configure CloudFront to require viewers to interact with your content over an HTTPS connection using the HTTP to HTTPS Redirect feature. When you enable HTTP to HTTPS Redirect, CloudFront will respond to an HTTP request with a 301 redirect response that requires the
viewer to resend the request over HTTPS.

Ensuring Asset Availability
CloudFront puts significant focus on and dedication to maintaining the availability of your assets. Availability is calculated based on how often an attempt was made to download a single object and how often the download failed. As shown in Table 4, CloudFront SSL availability (as measured from real clients) across multiple regions is consistently high when compared to other top CDNs.6

Table 4: SSL/TLS traffic – availability by geography for July 2016 to August 2016

#  CDN             United States  Europe  Japan  Korea
1  CloudFront SSL  99.14          99.35   99.35  99.22
2  CDN A           98.70          97.53   98.64  98.98
3  CDN B           96.77          94.44   91.67  98.19

Making SSL/TLS Adoption Economical
CloudFront enables you to generate custom SSL/TLS certificates with ACM and support them with SNI at no additional charge. These features are offered with no setup fees, no hosting fees, and no extra charges for the HTTPS bytes transferred. You simply pay standard (or discounted with a signed contract) CloudFront rates for data transfer and HTTPS requests. For more information, see the Amazon CloudFront pricing page.7 For dedicated IP custom SSL, there is an additional charge per month. This additional charge is associated with dedicating multiple IPv4 addresses (a finite resource) for each SSL certificate at each CloudFront edge location.

Conclusion
You can deliver your secure APIs or applications via SSL/TLS with Amazon CloudFront in an easy way, at no additional charge, and with improved SSL performance. You can create free custom SSL/TLS certificates with AWS ACM in minutes and immediately add them to your CloudFront distributions at no additional charge, with automatic SNI support. You don’t have to manage certificate renewal because ACM takes care of it automatically, and if any certificate is compromised, you can revoke it and replace it via ACM. You can do all of this while benefiting from
improved SSL/TLS performance, because of SSL/TLS terminations near your end user and CloudFront support of Session Tickets and OCSP stapling. This also applies if you want to deliver dynamic content, as CloudFront provides a way to increase performance and security at no additional charge.

Further Reading
There is a wealth of information available in the following whitepapers, blog posts, user guides, presentations, and slides to help customers get a deeper understanding of CloudFront, ACM, and how SSL is used.

Amazon CloudFront Custom SSL
• Amazon CloudFront Custom SSL
• List of browsers supported by SNI Custom SSL

AWS Certificate Manager
• Getting started
• Managed certificate renewal
• FAQs

Blogs
• Amazon CloudFront What’s New
• HTTP and TLS v1.1, v1.2 to the origin
• AWS Certificate Manager – Deploy SSL/TLS-Based Apps on AWS

Developer’s Guide
• Introduction to Amazon CloudFront
• Using an HTTPS Connection to Access Your Objects

Slack Performance Improvement with Amazon CloudFront
• Video
• Slides

re:Invent Presentations
• SSL with Amazon Web Services (SEC316), 11/2014
• Using Amazon CloudFront For Your Websites & Apps (STG206), 10/2015
• Secure Content Delivery Using Amazon CloudFront (STG205), 10/2015

re:Invent Slides
• Secure Content Delivery Using Amazon CloudFront and AWS WAF

Notes
1 https://www.trustworthyinternet.org/ssl-pulse/
2 http://httparchive.org/trends.php#perHttps
3 https://aws.amazon.com/certificate-manager/faqs/
4 https://en.wikipedia.org/wiki/Server_Name_Indication
5 http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html#SecureConnectionsHowToRequireCustomProcedure
6 http://www.cedexis.com/get-the-data/country-report/?report=secure_object_delivery_response_time
7 https://aws.amazon.com/cloudfront/pricing/

Archived,General,consultant,Best Practices
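The ATS cipher-suite requirement in Table 3 reduces to one mechanical check: every accepted suite uses ephemeral elliptic-curve Diffie-Hellman (ECDHE) key exchange, which is what provides Perfect Forward Secrecy. A minimal sketch of that check in Python (the suite names come from Table 3; the helper itself is illustrative and not part of any AWS SDK):

```python
# Illustrative helper: a TLS cipher-suite name offers Perfect Forward
# Secrecy when its key exchange is ephemeral ECDHE, as Apple iOS ATS
# requires. Suite names below are taken from Table 3, plus one non-PFS
# suite for contrast.
ATS_CIPHERS = [
    'TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384',
    'TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384',
    'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256',
    'TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256',
    'TLS_RSA_WITH_AES_128_CBC_SHA',  # static RSA key exchange: no PFS
]

def has_forward_secrecy(suite: str) -> bool:
    # Ephemeral elliptic-curve Diffie-Hellman key exchange provides PFS.
    return suite.startswith('TLS_ECDHE_')

pfs_only = [s for s in ATS_CIPHERS if has_forward_secrecy(s)]
```

Filtering a server's offered suites this way mirrors how ATS itself rejects any negotiated suite whose key exchange is not ephemeral.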
Securely_Access_Services_Over_AWS_PrivateLink,"This paper has been archived. For the latest technical content, refer to the HTML version: https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/aws-privatelink.html

Securely Access Services Over AWS PrivateLink
First published January 2019
Updated June 3, 2021

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
• Introduction
• What Is AWS PrivateLink?
  • Why use AWS PrivateLink?
  • What are VPC Endpoints?
  • Interface endpoints
  • Gateway endpoints
• How does AWS PrivateLink work?
• Creating Highly Available Endpoint Services
  • Endpoint Specific Regional DNS Hostname
  • Zonal specific DNS Hostname
  • Private DNS Hostname
  • Private IP Address of the Endpoint Network Interface
• Deploying AWS PrivateLink
  • AWS PrivateLink Considerations
  • AWS PrivateLink Configuration
• Use-Case Examples
  • Private Access to SaaS Applications
  • Shared Services
  • Hybrid Services
  • Presenting Microservices
  • Inter-Region Endpoint Services
  • Inter-Region Access to Endpoint Services
• Conclusion
• Contributors
• Further Reading
• Document Revisions

Abstract
Amazon Virtual Private Cloud (Amazon VPC) gives AWS customers the ability to define a virtual private network within the AWS Cloud. Customers can build services securely within an Amazon VPC and provide access to these services internally and externally using traditional methods such as an internet gateway, VPC peering, network address translation (NAT), a virtual private network (VPN), and AWS Direct Connect. This whitepaper presents how AWS PrivateLink keeps network traffic private and allows connectivity from Amazon VPCs to services and data hosted on AWS in a secure and scalable manner. This paper is intended for IT professionals who are familiar with the basic concepts of networking and AWS. Each section has links to relevant AWS documentation.

Introduction
The introduction of Amazon Virtual Private Cloud (Amazon VPC) in 2009 made it possible for customers to provision a logically isolated section of the AWS Cloud and launch AWS resources in a virtual network that they define.
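In practice, "a virtual network that they define" starts with choosing a CIDR block and carving it into subnets, typically one per Availability Zone. A minimal sketch using Python's standard ipaddress module (the CIDR values and Availability Zone names are illustrative examples, not AWS defaults):

```python
# Illustrative sketch only: a VPC is declared as a CIDR block, then
# carved into per-Availability-Zone subnets. Values are examples.
import ipaddress

vpc_cidr = ipaddress.ip_network('10.0.0.0/16')
azs = ['us-east-1a', 'us-east-1b', 'us-east-1c']

# Split the /16 into /24 subnets and assign one per Availability Zone.
subnets = dict(zip(azs, vpc_cidr.subnets(new_prefix=24)))
```

Spreading subnets across Availability Zones in this way is the foundation for the multi-AZ resilience that later sections rely on.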
Traditional methods to access third-party applications or public AWS services from an Amazon VPC include using an internet gateway, a virtual private network (VPN), AWS Direct Connect with a virtual private gateway, and VPC peering. Figure 1 illustrates an example Amazon VPC and its associated components.

Figure 1: Traditional access from an Amazon VPC

What is AWS PrivateLink?
AWS PrivateLink provides secure, private connectivity between Amazon VPCs, AWS services, and on-premises applications on the AWS network. As a result, customers can simply and securely access services on AWS using Amazon’s private network, powering connectivity to AWS services through interface Amazon VPC endpoints. Refer to Figure 2 for Amazon VPC-to-VPC connectivity using AWS PrivateLink.

Figure 2: Amazon VPC-to-VPC connectivity with AWS PrivateLink

AWS PrivateLink also allows customers to create an application in their Amazon VPC, referred to as a service provider VPC, and offer that application as an AWS PrivateLink-enabled service, or VPC endpoint service. A VPC endpoint service lets customers host a service and have it accessed by other consumers using AWS PrivateLink.

Why use AWS PrivateLink?
Prior to the availability of AWS PrivateLink, services residing in a single Amazon VPC were connected to multiple Amazon VPCs either (1) through public IP addresses using each VPC’s internet gateway, or (2) by private IP addresses using VPC peering. With AWS PrivateLink, service connectivity over Transmission Control Protocol (TCP) can be established from the service provider’s VPC to the service consumers’ VPCs in a secure and scalable manner. AWS PrivateLink provides the following three main benefits:

Use Private IP Addresses for Traffic
AWS PrivateLink provides Amazon VPCs with a secure and scalable way to privately connect to AWS-hosted services. AWS PrivateLink traffic does not use public internet protocol (IP) addresses, nor does it traverse the internet. AWS PrivateLink uses private IP addresses and security groups within an Amazon VPC so that services function as though they were hosted directly within an Amazon VPC.

Simplify Network Management
AWS PrivateLink helps avoid both (1) security policies that limit benefits of internet gateways and (2) complex networking across a large number of Amazon VPCs. AWS PrivateLink is easy to use and manage because it removes the need to whitelist public IPs and manage internet connectivity with internet gateways, NAT gateways, or firewall proxies. AWS PrivateLink allows for connectivity to services across different accounts and Amazon VPCs with no need for route table modifications. There is no longer a need to configure an internet gateway, VPC peering connection, or Transit VPC to enable connectivity. A Transit VPC connects multiple Amazon Virtual Private Clouds, which might be geographically disparate or running in separate AWS accounts, to a common Amazon VPC that serves as a global network transit center. This network topology
simplifies network management and minimizes the number of connections that you need to set up and manage. It is implemented virtually and does not require any physical network gear or a physical presence in a colocation transit hub.

Facilitate Your Cloud Migration
AWS PrivateLink gives on-premises networks private access to AWS services via AWS Direct Connect. Customers can more easily migrate traditional on-premises applications to services hosted in the cloud and use cloud services with the confidence that traffic remains private.

What are VPC Endpoints?
A VPC endpoint enables customers to privately connect to supported AWS services and VPC endpoint services powered by AWS PrivateLink. Amazon VPC instances do not require public IP addresses to communicate with resources of the service. Traffic between an Amazon VPC and a service does not leave the Amazon network.

VPC endpoints are virtual devices. They are horizontally scaled, redundant, and highly available Amazon VPC components that allow communication between instances in an Amazon VPC and services without imposing availability risks or bandwidth constraints on network traffic. There are two types of VPC endpoints: (1) interface endpoints and (2) gateway endpoints.

Interface endpoints
Interface endpoints enable connectivity to services over AWS PrivateLink. These services include some AWS managed services, services hosted by other AWS customers and partners in their own Amazon VPCs (referred to as endpoint services), and supported AWS Marketplace partner services. The owner of a service is a service provider. The principal creating the interface endpoint and using that service is a service consumer. An interface endpoint is a collection of one or more elastic network interfaces with a private IP address that serves as
an entry point for traffic destined to a supported service. Interface endpoints currently support over 17 AWS managed services. Check the AWS documentation for VPC endpoints for a list of AWS services that are available over AWS PrivateLink.

Gateway endpoints
A gateway endpoint targets specific IP routes in an Amazon VPC route table, in the form of a prefix list, used for traffic destined to Amazon DynamoDB or Amazon Simple Storage Service (Amazon S3). Gateway endpoints do not enable AWS PrivateLink. More information about gateway endpoints is in the Amazon VPC User Guide.

Instances in an Amazon VPC do not require public IP addresses to communicate with VPC endpoints, as interface endpoints use local IP addresses within the consumer Amazon VPC. Gateway endpoints are destinations that are reachable from within an Amazon VPC through prefix lists within the Amazon VPC’s route table. Refer to Figure 3, which shows connectivity to AWS services using VPC endpoints.

Figure 3: Connectivity to AWS services using VPC endpoints

How does AWS PrivateLink work?
AWS PrivateLink uses Network Load Balancers to connect interface endpoints to services. A Network Load Balancer functions at the network transport layer (layer 4) and can handle millions of requests per second. In the case of AWS PrivateLink, it is represented inside the consumer Amazon VPC as an endpoint network interface. Customers can specify multiple subnets in different Availability Zones to ensure that their service is resilient to an Availability Zone service disruption. To achieve this, they can create endpoint network interfaces in multiple subnets mapping to multiple Availability Zones. An endpoint network interface can be viewed in the account, but customers cannot manage it themselves. For more information, see Elastic Network Interfaces.

Creating Highly Available Endpoint Services
The creation of VPC endpoint services goes through four stages, which we develop here: the generation of a DNS hostname, the use of a private IP address, the deployment of the endpoint, and its configuration. In Figure 4, the account owner of VPC B is a service provider and has a service running on instances in subnet B. The owner of VPC B has a service endpoint (vpce-svc-1234) with an associated Network Load Balancer that points to the instances in subnet B as targets. Instances in subnet A of VPC A use an interface endpoint to access the services in subnet B.

Figure 4: Detailed Amazon VPC-to-VPC connectivity with AWS PrivateLink

When an interface endpoint is created, endpoint-specific Domain Name System (DNS) hostnames are generated that can be used to communicate with the service. After creating the endpoint, requests can be submitted to the provider’s service through one of the following methods.

Endpoint Specific Regional DNS Hostname
Customers generate an endpoint-specific DNS hostname,
which includes all zonal DNS hostnames generated for the interface endpoint. The hostname includes a unique endpoint identifier, a service identifier, the Region, and vpce.amazonaws.com in its name; for example: vpce-0fe5b17a0707d6abc-29p5708s.ec2.us-east-1.vpce.amazonaws.com

Zonal specific DNS Hostname
Customers generate a zonal-specific DNS hostname for each Availability Zone in which the endpoint is available. The hostname includes the Availability Zone in its name; for example: vpce-0fe5b17a0707d6abc-29p5708s-us-east-1a.ec2.us-east-1.vpce.amazonaws.com

Private DNS Hostname
If enabled, customers can use a private DNS hostname to alias the automatically created zonal-specific or regional-specific DNS hostnames into a friendly hostname, such as: myservice.example.com

Private IP Address of the Endpoint Network Interface
The private IP address of the endpoint network interface in the VPC is directly reachable to access the service in and across Availability Zones, in the same way the zonal-specific DNS hostname is.

Service providers that use zonal DNS hostnames to access the service can help achieve high availability by enabling cross-zone load balancing. Cross-zone load balancing enables the load balancer to distribute traffic across the registered targets in all enabled Availability Zones. Regional data transfer charges may apply to a service provider’s account when they enable cross-zone load balancing, as data could potentially transfer between Availability Zones.

In Figure 5, the owner of VPC B is the service provider and has configured a Network Load Balancer with targets in two different Availability Zones. The service consumer (VPC A) has created interface endpoints in the same two Availability Zones in their Amazon VPC. Requests to the service from instances in VPC A can use either
interface endpoint. The DNS name resolution of the Endpoint Specific Regional DNS Hostname will alternate between the two IP addresses.

Figure 5: Round robin DNS load balancing

Deploying AWS PrivateLink

AWS PrivateLink Considerations
When deploying an endpoint, customers should consider the following:
• Traffic will be sourced from the Network Load Balancer inside the service provider Amazon VPC. When service consumers send traffic to a service through an interface endpoint, the source IP addresses provided to the application are the private IP addresses of the Network Load Balancer nodes, and not the IP addresses of the service consumers.
• Proxy Protocol v2 can be enabled to gain insight into the network traffic. Network Load Balancers use Proxy Protocol v2 to send additional connection information, such as the source and destination. This may require changes to the application.
• Proxy Protocol v2 can be enabled on the load balancer, and the client IP addresses can be obtained from the Proxy Protocol header, when the IP addresses of the service consumers and their corresponding interface endpoint IDs are needed.
• Customers can create an Amazon Simple Notification Service (SNS) topic to receive alerts for specific events that occur on the endpoints that are attached, or that attempt to attach, to their endpoint service. For example, one can receive an email when an endpoint request is accepted or rejected for the endpoint service.
• The Amazon SNS topic that a customer uses for notifications must have a topic policy that allows the VPC endpoint service to publish
notifications on your behalf. Include the following statement in the topic policy:

{
  ""Version"": ""2012-10-17"",
  ""Statement"": [
    {
      ""Effect"": ""Allow"",
      ""Principal"": {
        ""Service"": ""vpce.amazonaws.com""
      },
      ""Action"": ""SNS:Publish"",
      ""Resource"": ""arn:aws:sns:region:account:topic-name""
    }
  ]
}

For more information, see the documentation on Authentication and Access Control for Amazon SNS.
• Endpoint services cannot be tagged.
• The private DNS of the endpoint does not resolve outside of the Amazon VPC. For more information, read about accessing a service through an interface endpoint. Note that private DNS hostnames can be configured to point to endpoint network interface IP addresses directly. Endpoint services are available in the AWS Region in which they are created and can be accessed in remote AWS Regions using Inter-Region VPC Peering.
• If an endpoint service is associated with multiple Network Load Balancers, then for a specific Availability Zone, an interface endpoint will establish a connection with one load balancer only.
• Availability Zone names in a customer account might not map to the same locations as Availability Zone names in another account. For example, the Availability Zone US-EAST-1A might not be the same Availability Zone as US-EAST-1A for another account. An endpoint service gets configured in Availability Zones according to their mapping in a customer’s account.
• For low latency and fault tolerance, we recommend creating a Network Load Balancer with targets in each available Availability Zone of the AWS Region.

AWS PrivateLink Configuration
Full details on how to configure AWS PrivateLink can be found in the documentation on interface VPC endpoints.

Use-Case Examples
This section showcases some of the most common use cases for consuming and providing AWS PrivateLink endpoint services.

Private Access to SaaS Applications
AWS PrivateLink enables Software-as-a-Service (SaaS) providers to build highly scalable and secure services on AWS. Service providers can privately expose their service to thousands of customers on AWS with ease. A SaaS (or service) provider can use a Network Load Balancer to target instances in their Amazon VPC, which will represent their endpoint service. Customers in AWS can then be granted access to the endpoint service and create an interface VPC endpoint in their own Amazon VPC that is associated with the endpoint service. This allows customers to access the SaaS provider’s service privately from within their own Amazon VPC. Follow the best practice of creating an AWS PrivateLink endpoint in each Availability Zone within the Region that the service is deployed into. This provides a highly available and low-latency experience for service consumers.

Service consumers who are not already on AWS and want to access a SaaS service hosted on AWS can utilize AWS Direct Connect for private connectivity to the service provider. Customers can use an AWS Direct Connect connection to access service provider services hosted in AWS. For example, a customer is interested in understanding their log data and selects a logging analytics SaaS offering hosted on AWS to ingest their logs in order to create visual dashboards. One way of transferring the logs into the SaaS provider’s service is to send them to the public-facing AWS endpoints of the SaaS service for ingestion. With AWS PrivateLink, the service provider can create an endpoint service by placing their service instances behind a Network Load Balancer, enabling customers to create an interface VPC endpoint in their Amazon VPC that is
associated with their endpoint service. As a result, customers can privately and securely transfer log data to an interface VPC endpoint in their Amazon VPC, and not over public-facing AWS endpoints. See the following figure for an illustration.

Figure 6: Private connectivity to cloud-based SaaS services

Shared Services
As customers deploy their workloads on AWS, common service dependencies will often begin to emerge among the workloads. These shared services include security services, logging, monitoring, DevOps tools, and authentication, to name a few. These common services can be abstracted into their own Amazon VPC and shared among the workloads that exist in their own separate Amazon VPCs. The Amazon VPC that contains and shares the common services is often referred to as a Shared Services VPC.

Traditionally, workloads inside Amazon VPCs use VPC peering to access the common services in the Shared Services VPC. Customers can implement VPC peering effectively; however, there are caveats. VPC peering allows instances from one Amazon VPC to talk to any instance in the peered VPC. Customers are responsible for implementing fine-grained network access controls to ensure that only the specific resources intended to be consumed from within the Shared Services VPC are accessible from the peered VPCs. In some cases, a customer running at scale can have hundreds of Amazon VPCs, and VPC peering has a limit of 125 peering connections to a single Amazon VPC.

AWS PrivateLink provides a secure and scalable mechanism that allows common services in the Shared Services VPC to be
exposed as an endpoin t service and consumed by workloads in separate Amazon VPCs The actor exposing an endpoint service is called a service provider AWS PrivateLink endpoint services are scalable and can be consumed by thousands of Amazon VPCs The service provider creates an AWS PrivateLink endpoint service using a Network Load Balancer that then only targets specific ports on specific instances in the Shared Services VPC For high availability and low latency we recommend using a Network Load Balancer with targets in at least two Availability Zones within a region A service consumer is the actor consuming the AWS PrivateLink endpoint service from the service provider When a service consumer has been granted permission to consume the endpoint service they create an interface endpoint in their VPC that connects to the endpoint service from the Shared Services VPC As an architectural best practice to achieve low latency and high availability we recommend creating an Interface VPC endpoint in each available Availab ility Zones supported by the endpoint service Service consumer VPC instances can use a VPC’s available endpoints to access the endpoint service via one of the following ways: (1) the private endpoint specific DNS hostnames that are generated for the inte rface VPC endpoints or (2) the Interface VPC endpoint’s IP addresses Onpremises resources can also access AWS PrivateLink endpoint services over AWS Direct Connect Create an Amazon VPC with up to 20 interface VPC endpoints and associate with the endpoin t services from the Shared Services VPC Terminate the AWS Direct Connect connection’s private virtual interface to a virtual private gateway Next attach the virtual private gateway to the newly created Amazon VPC Resources onpremises are then able to access and consume AWS PrivateLink endpoint services over the AWS Direct connection The following figure illustrates a shared services Amazon VPC using AWS PrivateLink This paper has been archived For the latest 
Figure 7: Shared Services VPC using AWS PrivateLink

Hybrid Services

As customers start their migration to the cloud, a common architecture pattern is a hybrid cloud environment: customers migrate their workloads into AWS over time, while also beginning to use native AWS services to serve their clients. In a Shared Services VPC, the instances behind the endpoint service exist in the AWS Cloud. AWS PrivateLink allows you to extend the resource targets for an endpoint service to resources in an on-premises data center. The Network Load Balancer for the AWS PrivateLink endpoint service can use resources in an on-premises data center as well as instances in AWS. Service consumers on AWS still access the endpoint service by creating an interface VPC endpoint associated with the endpoint service in their VPC, but the requests they make over the interface VPC endpoint are forwarded to resources in the on-premises data center.

The Network Load Balancer enables the extension of a service architecture to load balance workloads across resources in AWS and on premises, and makes it easy to migrate to cloud, burst to cloud, or fail over to cloud. As customers complete their migration to the cloud, on-premises targets are replaced by target instances in AWS, and the hybrid scenario converts to a Shared Services VPC solution. See the following figure for a diagram of hybrid connectivity to services over AWS Direct Connect.

Figure 8: Hybrid connectivity to services over AWS Direct Connect

Presenting Microservices

Customers are
continuing to adopt modern, scalable architecture patterns for their workloads. A microservice is a variant of the service-oriented architecture (SOA) that structures an application as a collection of loosely coupled services, each doing one specialized job and doing it well. AWS PrivateLink is well suited to a microservices environment. Customers can give teams that own a particular service an Amazon VPC in which to develop and deploy it. Once a team is ready to make the service available for consumption by other services, they can create an endpoint service. For example, the endpoint service may consist of a Network Load Balancer that targets Amazon Elastic Compute Cloud (Amazon EC2) instances or containers on Amazon Elastic Container Service (Amazon ECS). Service teams can deploy their microservices on either of these platforms, and the Network Load Balancer provides access to the service. A service consumer then requests access to the endpoint service and creates an interface VPC endpoint associated with the endpoint service in their Amazon VPC. The service consumer can then consume the microservice over the interface VPC endpoint.

The architecture in Figure 9 shows microservices segmented into different Amazon VPCs, and potentially different service providers. Each consumer that has been granted access to the endpoint services simply creates, in its own Amazon VPC, an interface VPC endpoint associated with each of the microservices it wishes to consume. The service consumers communicate with the AWS PrivateLink endpoints via endpoint-specific DNS hostnames that are generated when the endpoints are created in the service consumers' Amazon VPCs. The nature of a microservice is to have a call stack of various microservices throughout the lifecycle of a request.
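On the consumer side, a simple registry can map each microservice to the endpoint-specific DNS hostname its interface VPC endpoint exposes. This is a hedged sketch: the hostnames and service names below are hypothetical placeholders, not the exact names AWS generates.

```python
# Sketch: a consumer-side registry mapping microservices to the DNS
# hostnames of their interface VPC endpoints. Hostnames are illustrative
# placeholders only.

ENDPOINTS = {
    "billing":  "vpce-0abc-billing.example.internal",
    "identity": "vpce-0def-identity.example.internal",
}

def service_url(name, path="/", port=443):
    """Build the HTTPS URL a consumer calls over its interface endpoint."""
    host = ENDPOINTS.get(name)
    if host is None:
        raise KeyError(f"no interface endpoint registered for {name!r}")
    return f"https://{host}:{port}{path}"
```

A service that aggregates several such calls and then exposes its own endpoint service becomes, in turn, a provider, which is exactly the call-stack pattern described above.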
What is illustrated as a service consumer in Figure 9 can also become a service provider: the service consumer can aggregate what it needs from the services it consumes and present itself as a higher-level microservice.

Figure 9: Presenting microservices via AWS PrivateLink

Inter-Region Endpoint Services

Customers and SaaS providers who host their service in a single Region can extend their service to additional Regions through Inter-Region VPC Peering. Service providers can leverage a Network Load Balancer in the remote Region and create an IP target group that uses the IP addresses of the instance fleet hosting the service in its home Region. Inter-Region VPC Peering traffic uses Amazon's private fiber network, ensuring that services communicate privately with the AWS PrivateLink endpoint service in the remote Region. This allows the service consumer to use local interface VPC endpoints to connect to an endpoint service in an Amazon VPC in a remote Region.

Figure 10 shows inter-Region endpoint services. A service provider is hosting an AWS PrivateLink endpoint service in the US-EAST-1 Region. Service consumers of the endpoint service require the service provider to provide a local interface VPC endpoint, associated with the endpoint service, in the EU-WEST-2 Region. Service providers can use Inter-Region VPC Peering to provide local endpoint service access to their customers in remote Regions. This approach can help service providers gain the agility to provide the access their customers want without having to immediately deploy service resources in the remote Regions, instead deploying them when they are ready.
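The provider-side step of pointing a remote-Region Network Load Balancer at the home-Region fleet can be sketched as a parameter builder. This is an illustration under stated assumptions: the ARN and IP addresses are hypothetical, and in practice the dict would be passed to the Elastic Load Balancing RegisterTargets API, with the target group created using the "ip" target type. The same IP-target approach underlies the hybrid pattern earlier in this paper, where targets are on-premises addresses.

```python
# Sketch: provider-side target registration for an NLB in the remote
# Region. With an "ip" target-type group, the NLB can point at instance
# IPs reached over Inter-Region VPC Peering, or at on-premises IPs in
# the hybrid pattern. (Per the Elastic Load Balancing documentation, IP
# targets outside the load balancer's VPC may also need an
# "AvailabilityZone": "all" field; omitted here for brevity.)

def ip_target_registration(target_group_arn, ip_addresses, port=443):
    """Build RegisterTargets-style parameters for a set of IP targets."""
    return {
        "TargetGroupArn": target_group_arn,
        "Targets": [{"Id": ip, "Port": port} for ip in ip_addresses],
    }
```

Swapping this target list is also how a provider later migrates the service: remote-Region IPs are removed and replaced with local targets once resources are deployed there.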
If the service provider later chooses to expand their service resources into remote Regions that are currently served through Inter-Region VPC Peering, the service provider will have to remove the targets from the Network Load Balancer in the remote Region and point them to targets in the local Region. Because the remote endpoint service communicates with resources in a remote Region, additional latency is incurred when the service consumer communicates with the endpoint service. The service provider will also have to cover the costs of the Inter-Region VPC Peering data transfer. Depending on the workload, this could be a long-term approach for some service providers, so long as they evaluate the pros and cons for the service consumer experience and their own operating model.

Figure 10: Inter-Region endpoint services

Inter-Region Access to Endpoint Services

As customers expand their global footprint by deploying workloads in multiple AWS Regions across the globe, they need to ensure that services that depend on AWS PrivateLink endpoint services have connectivity from the Region they are hosted in. Customers can leverage Inter-Region VPC Peering to enable services in another Region to communicate with the interface VPC endpoint terminating the endpoint service, which directs traffic to the AWS PrivateLink endpoint service hosted in the remote Region. Inter-Region VPC Peering traffic is transported over Amazon's network, ensuring that your services communicate privately with the AWS PrivateLink endpoint service in the remote Region.

Figure 11 visualizes inter-Region access to endpoint services. A customer has deployed a workload in the EU-WEST-1 Region that needs to access an AWS PrivateLink endpoint service hosted in the US-EAST-1 Region.
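The consumer-side setup for this scenario can be summarized as an ordered checklist. This is a small sketch using the Region names from the scenario; the service name is a hypothetical placeholder, and each step would correspond to the relevant EC2 API call in practice.

```python
# Sketch: the ordered consumer-side steps for reaching an endpoint
# service hosted in a remote Region, as in the Figure 11 scenario.
# The service name passed in is a hypothetical placeholder.

def remote_access_plan(home_region, service_region, service_name):
    """Return the ordered steps for inter-Region access to an endpoint service."""
    return [
        f"create an Amazon VPC in {service_region}, the Region hosting {service_name}",
        f"create an Inter-Region VPC Peering connection from {home_region} to {service_region}",
        f"create an interface VPC endpoint for {service_name} in the {service_region} VPC",
        f"route requests from the {home_region} workload over the peering connection",
    ]
```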
The service consumer first needs to create an Amazon VPC in the Region where the AWS PrivateLink endpoint service is hosted. They then create an Inter-Region VPC Peering connection from the Amazon VPC in their Region to the Amazon VPC in the remote Region. Finally, the service consumer creates an interface VPC endpoint, associated with the endpoint service, in the Amazon VPC in the remote Region. The workload in the service consumer's Amazon VPC can now communicate with the endpoint service in the remote Region by leveraging Inter-Region VPC Peering. The service consumer will have to consider the additional latency of communicating with an endpoint service hosted in the remote Region, as well as the inter-Region data transfer costs between the two Regions.

Figure 11: Inter-Region access to endpoint services

Conclusion

The AWS PrivateLink scenarios and best practices outlined in this paper can help you build secure, scalable, and highly available architectures for your services on AWS. Consider your application's connectivity requirements before choosing an Amazon VPC connectivity architecture for your internal or external customers.

Contributors

Contributors to this document include:
• Ahsan Ali, Global Accounts Solutions Architect, Amazon Web Services
• David Murray, Strategic Solutions Architect, Amazon Web Services
• James Devine, Senior Solutions Architect, Amazon Web Services
• Ikenna
Izugbokwe, Senior Solutions Architect, Amazon Web Services
• Matt Lehwess, Principal Solutions Architect, Amazon Web Services
• Tom Clavel, Senior Product Marketing Manager, Amazon Web Services
• Puneet Konghot, Senior Product Manager, Amazon Web Services

Further Reading

For additional information, see:
• Network-to-Amazon VPC Connectivity Options
• AWS PrivateLink

Document Revisions

June 3, 2021 – Updates
November 2020 – Updates to Figures 6, 7, and 8 for clarity
January 2019 – First publication",General,consultant,Best Practices Security_at_Scale_Governance_in_AWS,"Amazon Web Services – Security at Scale: Governance in AWS, October 2015

Security at Scale: Governance in AWS
Analysis of AWS features that can alleviate on-premises challenges
October 2015

This paper has been archived. For the most recent security content, see Best Practices for Security, Identity, and Compliance at https://aws.amazon.com/architecture/security-identity-compliance/

Table of Contents
Abstract
Introduction
Manage IT resources
• Manage IT assets
• Control IT costs
Manage IT security
• Control physical access to IT resources
• Control logical access to IT resources
• Secure IT resources
• Manage logging around IT resources
Manage IT performance
• Monitor and respond to events
• Achieve resiliency
Service-Governance Feature Index
Conclusion
References and Further Reading

Abstract

You can run nearly anything on AWS that you would run on-premises: websites, applications, databases, mobile apps, email campaigns, distributed data analysis, media storage, and private networks. The services AWS provides are designed to work together so that you can build complete solutions. An often overlooked benefit of migrating workloads to AWS is the ability to achieve a higher level of security at scale
by utilizing the many governance-enabling features offered. For the same reasons that delivering infrastructure in the cloud has benefits over on-premises delivery, cloud-based governance offers a lower cost of entry, easier operations, and improved agility by providing more oversight, security control, and central automation. This paper describes how you can achieve a high level of governance of your IT resources using AWS. In conjunction with the AWS Risk and Compliance whitepaper and the Auditing Security Checklist whitepaper, this paper can help you understand the security and governance features built into AWS services, so you can incorporate security benefits and best practices in building your integrated environment with AWS.

Introduction

Industry and regulatory bodies have created a complex array of new and legacy laws and regulations mandating a wide range of security and organizational governance measures. Research firms estimate that many companies spend as much as 75% of their IT dollars managing infrastructure and only 25% on IT aspects directly related to the business their companies are providing. One of the key ways to improve this metric is to address back-end IT governance requirements more efficiently, and an easy and effective way to do that is by leveraging AWS's out-of-the-box governance features. While AWS offers a variety of IT governance-enabling features, it can be hard to decide how to start and what to implement. This paper looks at the common IT governance domains by providing the use case (or the on-premises challenge), the AWS enabling features, and the associated governance value propositions of using those features. This document is designed to help you achieve the objectives of each IT governance domain.¹ This paper follows the approach of the major domains of commonly implemented IT governance frameworks (e.g., COBIT, ITIL, COSO, CMMI); however, the IT governance domains through which the paper is
organized are generic, to allow any customer to use it to evaluate the governance features of AWS against what can be done with on-premises resources and tools. The following IT governance domains are discussed through a “use case” approach, each framed as “I want to better …”:
• Manage my IT resources: manage my IT assets; control my IT costs
• Manage my IT security: control logical access; control physical access; secure IT resources; log IT activities
• Manage my IT performance: monitor IT events; achieve IT resiliency

¹ While this paper features a robust list of governance-enabling features, new features are consistently being developed, so it is not inclusive of all the features available. Additional tutorials, developer tools, and documentation can be found at http://aws.amazon.com/resources/

Manage IT resources

Manage IT assets

Identifying and managing your IT assets is the first step in effective IT governance. IT assets can range from high-end routers, switches, servers, hosts, and firewalls to the applications, services, operating systems, and other software assets deployed in your network. An updated inventory of hardware and software assets is vital for decisions on upgrades and purchases, for tracking warranty status, and for troubleshooting and security reasons. It is becoming a business imperative to have an accurate asset inventory listing that provides on-demand views and comprehensive reports. Moreover, comprehensive asset inventories are specifically required by certain compliance regimes; FISMA, SOX, PCI DSS, and HIPAA all mandate accurate asset inventories as part of their requirements. However, the nature of pieced-together on-premises resources can make maintaining this listing arduous at best and impossible at worst. Often organizations have to employ third-party solutions to automate the asset inventory listing, and even then it is not always possible to see a detailed inventory of every type of asset on a single console.
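A single-console inventory of the kind described becomes straightforward once resource metadata is available from an API. As a hedged sketch, the records below are simplified stand-ins for the response shape a call such as EC2 DescribeInstances returns; they are not the real API structure.

```python
# Sketch: summarizing an asset inventory from instance records. The
# record shape here is a simplified stand-in for what an API such as
# EC2 DescribeInstances provides.
from collections import Counter

def inventory_by_type(instances):
    """Count running instances per instance type for an on-demand report."""
    running = (i for i in instances if i.get("State") == "running")
    return Counter(i["InstanceType"] for i in running)
```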
Using AWS, there are multiple features available for you to quickly and easily obtain an accurate inventory of your AWS IT resources. Those features and how they deliver security at scale are provided below:
• Account Activity page: provides a summarized listing of IT resources by detailing usage of each service, by Region.
• Amazon Glacier vault inventory: provides a Glacier data inventory by showing all IT resources stored in Glacier.
• AWS CloudHSM: provides virtual and physical control over encryption keys by providing customer-dedicated HSMs for key storage.
• AWS Data Pipeline Task Runner: provides automated processing of tasks by polling AWS Data Pipeline for tasks and then performing and reporting status on those tasks.
• AWS Management Console: provides a real-time inventory of assets and data by showing all IT resources running in AWS, by service.
• AWS Storage Gateway APIs: provide the capability to programmatically inventory assets and data through programming interfaces, tools, and scripts.

Control IT costs

You can better control your IT costs by obtaining resources in the most cost-effective way and by understanding the costs of your IT services. However, managing and tracking the costs and ROI associated with on-premises IT resource spend can be difficult and inaccurate because the calculations are so complex: capacity planning, predictions of use, purchasing costs, depreciation, cost of capital, and facilities costs are just a few aspects that make total cost of ownership difficult to calculate. Using AWS, there are multiple features available for you to easily and accurately understand and control your IT resource costs. Using AWS, you can achieve cost savings of up to 80%
compared to equivalent on-premises deployments.² Those features and how they deliver security at scale are provided below:
• Account Activity page: provides an anytime view of spending on IT resources by showing resources used, by service.
• Amazon EC2 idempotent instance launch: helps prevent erroneous launches and additional costs by preventing timeouts or connection errors from launching additional instances.
• Amazon EC2 resource tagging: associates resource expenditures with business units by applying custom, searchable labels to compute resources.
• AWS Account Billing: provides easy-to-use billing features that help you monitor and pay your bill by detailing the resources used and the actual compute costs incurred.
• AWS Management Console: provides a single view of cost drivers by showing all IT resources running in AWS, by service, including actual costs and run rate.
• AWS service pricing: provides definitive awareness of AWS IT resource rates by publishing pricing for each AWS product and its specific pricing characteristics.
• AWS Trusted Advisor: helps optimize the cost of IT resources by identifying unused and idle resources.
• Billing alarms: provide proactive alerts on IT resource spend by sending notifications of spending activity.
• Consolidated billing: provides centralized cost control and cross-account cost visibility by combining multiple AWS accounts into one bill.
• Pay-as-you-go pricing: provides computing resources and services you can use to build applications within minutes, at pay-as-you-go pricing with no upfront purchase costs or ongoing maintenance costs, by automatically scaling into multiple servers when demand for your application increases.

² See the Total Cost of Ownership whitepaper for more information on overall cost savings using AWS.
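To illustrate how resource tagging supports the cost association described in the Amazon EC2 resource tagging item above, here is a small sketch that rolls up billing line items by a cost-allocation tag. The line-item shape and the `CostCenter` tag key are hypothetical, chosen only for illustration.

```python
# Sketch: associating resource spend with business units via
# cost-allocation tags. Line items are simplified stand-ins for
# billing-report rows; the tag key is a hypothetical convention.

def cost_by_tag(line_items, tag_key="CostCenter"):
    """Total cost per tag value, with untagged spend called out separately."""
    totals = {}
    for item in line_items:
        unit = item.get("Tags", {}).get(tag_key, "untagged")
        totals[unit] = totals.get(unit, 0.0) + item["Cost"]
    return totals
```

Surfacing the "untagged" bucket explicitly is the point of the design: spend that cannot be attributed to a business unit is usually the first thing a cost review chases down.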
Manage IT security

Control physical access to IT resources

Physical access management is a key component of IT governance programs. In addition to the locks, security alarms, access controls, and surveillance video that define the traditional components of physical security, the electronic controls over physical access are also paramount. The traditional physical security industry is in rapid transition, and areas of specialization are surfacing, making physical security vastly more complex. As on-premises physical security considerations and controls have become more complex, there is an increased need for uniquely qualified and specialized IT security professionals to manage the significant effort required to achieve effective physical control over access credentials for cards and card readers, controllers, and the system servers hosting physical security data. Using AWS, you can easily and effectively outsource controls related to the physical security of your AWS infrastructure to AWS specialists with the skill sets and resources needed to secure the physical environment. AWS has multiple independent auditors validate data center physical security throughout the year, attesting to the design and detailed testing of the effectiveness of our physical security controls. Learn more about the AWS audit programs and associated physical security controls below:
• AWS SOC 1 physical access controls: provide transparency into the controls in place that prevent unauthorized access to data centers. Controls are properly designed, tested
and audited by an independent audit firm.
• AWS SOC 2 Security physical access controls: provide transparency into the controls in place that prevent unauthorized access to data centers. Controls are properly designed, tested, and audited by an independent audit firm.
• AWS PCI DSS physical access controls: provide transparency into the controls in place that prevent unauthorized access to data centers, relevant to the Payment Card Industry Data Security Standard. Controls are properly designed, tested, and audited by an independent audit firm.
• AWS ISO 27001 physical access controls: provide transparency into the controls and processes in place that prevent unauthorized access to data centers, relevant to the ISO 27002 security best practice standard. Controls are properly designed, tested, and audited by an independent audit firm.
• AWS FedRAMP physical access controls: provide transparency into the controls and processes in place that prevent unauthorized access to data centers, relevant to the NIST 800-53 best practice standard. Controls are properly designed, tested, and audited by a government-accredited independent audit firm.

Control logical access to IT resources

One of the primary objectives of IT governance is to effectively manage logical access to computer systems and data. However, many organizations struggle to scale their on-premises solutions to meet the growing and continuously changing considerations and complexities around logical access, including the ability to establish a rule of least privilege, manage permissions to resources, address changes in roles and information needs, and cope with the growth of sensitive data. Major persistent challenges for managing logical access in an on-premises environment include providing users with access based on:
• Role (i.e., internal users, contractors, outsiders, partners, etc.)
• Data classification (i.e., confidential, internal use only, private, public, etc.)
• Data type (i.e., credentials, personal data, contact information, work-related data, digital certificates, cognitive passwords, etc.)

There are multiple control features AWS offers to help you effectively manage your logical access, based on a matrix of use cases anchored in least privilege.
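As one concrete example of a conditional, least-privilege control of the kind just described, the sketch below emits an Amazon S3 bucket policy that denies access from outside a given IP range. The bucket name and CIDR range are hypothetical placeholders; the output follows the standard IAM JSON policy document format.

```python
# Sketch: a conditional S3 bucket policy restricting access by
# originating IP address. Bucket name and CIDR are hypothetical; the
# Deny + NotIpAddress pairing blocks requests from outside the range.
import json

def ip_restricted_bucket_policy(bucket, cidr):
    """Return a JSON policy denying all S3 actions from outside `cidr`."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            "Condition": {"NotIpAddress": {"aws:SourceIp": cidr}},
        }],
    })
```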
Those features and how they deliver security at scale are provided below:
• Amazon S3 Access Control Lists (ACLs): provide central permissions and conditions by adding specific conditions to control how a user can use AWS, such as time of day, originating IP address, whether they are using SSL, or whether they have authenticated with a multi-factor authentication device.
• Amazon S3 bucket policies: provide the ability to create conditional rules for managing access to buckets and objects, allowing you to restrict access based on account as well as request-based attributes such as HTTP referrer and IP address.
• Amazon S3 query string authentication: provides the ability to give HTTP or browser access to resources that would normally require authentication, using a signature in the query string to secure the request.
• AWS CloudTrail: provides logging of API and console actions (e.g., logging when someone changes a bucket policy or stops an instance), enabling advanced monitoring capabilities.
• AWS IAM multi-factor authentication (MFA): provides enforcement of MFA across all resources by requiring a token to sign in and access resources.
• AWS
IAM Policies Enables you to achieve detailed least privilege access management by allowing you to create multiple users within your AWS account assign them security credentials and manage their permissions Learn more AWS IAM Roles Provides the ability to temporarily delegate access to users or services that normally don't have access to your AWS resources by defining a set of permissions to access the resources that a user or service needs Learn more AWS Trusted Advisor Provides automated security management assessment by identifying and escalating possible security and permission issues Learn more Secure IT resources Securing IT resources is the cornerstone of IT governance programs However for onpremise environments there is a litany of security steps that must be taken when a new server is brought online For example firewall and access control policies must be updated the newly created server image must be verified to be in compliance with security policy and all software packages have to be up to date Unless these security tasks are automated and delivered in a way that can keep up with the highly dynamic needs of the business organizations working solely with traditional governance approaches will either cause users to work around the security controls or will cause costly delays for the business AWS provides multiple security features that enable you to easily and effectively secure your IT resources Those features associated ‘how to’ guidance and links to learn more about the feature are provided below: AWS governance enabling feature How you get security at scale Amazon Linux AMIs Provides the ability to c onsistently deploy a "" gold"" (hardened) image by developing a private image to be used in all instance deployments Learn more Amazon EC2 Dedicated Instances Provides a private isolated virtual network and ensures that your Amazon EC2 compute instances are be isolated at the hardware level and launching these instances into a VPC Learn more Amazon EC2 
instance launch wizard: enables a consistent launch process by providing restrictions around the machine images available when launching instances.
• Amazon EC2 security groups: provide granular control over inbound and outbound traffic by acting as a firewall that controls the traffic for one or more instances.
• Amazon Glacier archives: provide an inexpensive long-term storage service for secure and durable data archiving and backup, using AES 256-bit encryption by default.
• Amazon S3 client-side encryption: provides the ability to encrypt your data before sending it to Amazon S3, by building your own library that encrypts object data on the client side before upload; the AWS SDK for Java can also automatically encrypt your data before uploading it to Amazon S3.
• Amazon S3 server-side encryption: provides encryption of objects at rest, with keys managed by AWS, using AES 256-bit encryption for Amazon S3 data.
• Amazon VPC: provides a virtual network closely resembling a traditional on-premises network, but with the benefits of the scalable AWS infrastructure; allows you to create logically isolated sections of AWS where you can launch AWS resources in a virtual network that you define.
• Amazon VPC logical isolation: provides virtual isolation of resources by allowing machine images to be isolated from other networked resources.
• Amazon VPC network ACLs: provide firewall-type isolation for associated subnets by controlling inbound and outbound traffic at the subnet level.
• Amazon VPC private IP addresses: help protect private IP addresses from internet exposure by routing their traffic through a Network Address Translation (NAT) instance in a public subnet.
• Amazon VPC security groups: provide firewall-type isolation for associated Amazon EC2
instances by controlling inbound and outbound traffic at the instance level.
• AWS CloudFormation templates: provide the ability to consistently deploy a specific machine image, along with other resources and configurations, by provisioning infrastructure with scripts.
• AWS Direct Connect: removes the need for a public internet connection to AWS by establishing a dedicated network connection from your premises to an AWS data center.
• On-premises hardware/software VPN connections: provide granular control over network security by allowing secure connections from an existing network to AWS.
• Virtual private gateways: provide granular control over network security by providing a way to create a hardware VPN connection to your VPC.

Manage logging around IT resources

A major enabler of securing IT is the logging around IT resources. Logging is critically important to IT governance for a variety of use cases, including but not limited to: detecting and tracking suspicious behavior, supporting forensic analysis, meeting compliance requirements, supporting IT and networking maintenance and operations, managing and reducing IT security costs, monitoring service levels, and supporting internal business processes. Organizations are increasingly dependent on effective log management to support core governance functions, including cost management, service-level and line-of-business application monitoring, and other IT security and compliance-focused activities. The SANS Log Management Survey consistently shows that organizations are continuously seeking more uses for their logs but encounter friction in achieving those use cases when using on-premises resources to collect and analyze them. With more log types to collect and analyze from different IT resources, organizations are challenged by the manual overhead associated with normalizing log data that is in
widely different formats, as well as with searching, correlating, and reporting functionality. Log management is a key capability for security monitoring, compliance, and effective decision-making across the tens or hundreds of thousands of activities each day. Using AWS, there are multiple logging features that enable you to effectively log and track the use of your IT resources. Those features and how they deliver security at scale are provided below:
• Amazon CloudFront access logs: provide log files with information about end-user access to your objects; logs can be delivered directly to a specific Amazon S3 bucket.
• Amazon RDS database logs: provide a way to monitor a number of log files generated by your Amazon RDS DB instances; used to diagnose, troubleshoot, and fix database configuration or performance issues.
• Amazon S3 Object Expiration: provides automated log expiration by scheduling removal of objects after a defined time period.
• Amazon S3 server access logs: provide logs of access requests with details such as the request type, the resource the request was made against, and the time and date the request was processed.
• AWS CloudTrail: provides logs of security actions taken via the AWS Management Console or APIs.

Manage IT performance

Monitor and respond to events

IT performance management and monitoring has become a strategically important part of any IT governance program. IT monitoring is an essential element of governance that allows you to prevent, detect, and correct IT issues that may impact performance and/or security. The key governance challenge in on-premises environments around IT performance management is that you are faced with multiple monitoring systems to manage every layer of your IT resources
and the mix of proprietary management tools and IT processes results in a systemic complexity that can, at best, slow response times and, at worst, impact the effectiveness of your IT performance monitoring and management. Moreover, the increasing complexity and sophistication of security threats mean that event monitoring and response capabilities need to evolve continuously and rapidly to address emerging threats. As such, on-premises performance management faces growing challenges around infrastructure procurement, scalability, the ability to simulate test conditions across multiple geographies, and so on. AWS provides multiple monitoring features that enable you to easily and effectively monitor and manage your IT resources. Those features, associated "how to" guidance, and links to learn more are listed below:

AWS governance-enabling feature: How you get security at scale

- Amazon CloudWatch: Provides statistical data you can use to view, analyze, and set alarms on the operational behavior of your instances. These metrics include CPU utilization, network traffic, I/O, and latency. Learn more.
- Amazon CloudWatch alarms: Provide consistent alarming for critical events through custom metrics, alarms, and notifications. Learn more.
- Amazon EC2 instance status: Provides instance status checks that summarize the results of automated tests and information about certain activities scheduled for your instances. Uses automated checks to detect whether specific issues are affecting your instances. Learn more.
- Amazon Incident Management Team: Provides continuous incident detection, monitoring, and management, with 24/7/365 staff operators to support detection, diagnostics, and resolution of certain security events. Learn more.
- Amazon S3 TCP selective acknowledgement: Provides the ability to improve recovery time after a large number of packet losses. Learn more.
- Amazon Simple Notification Service: Provides consistent alarming for critical events by
managing the delivery of messages to subscribing endpoints or clients. Learn more.
- AWS Elastic Beanstalk: Provides the ability to monitor application deployments, including details of capacity provisioning, load balancing, auto scaling, and application health. Learn more.
- Elastic Load Balancing: Provides the ability to automatically distribute your incoming application traffic across multiple Amazon EC2 instances by detecting over-utilized instances and rerouting traffic to under-utilized instances. Learn more.

Achieve Resiliency

Data protection and disaster recovery (DR) planning should be a priority component of IT governance for all organizations. Arguably, the value of DR is not in question; every organization is concerned about its ability to get back up and running after an event or disaster. But implementing governance around IT resource resiliency can be expensive and complex, as well as tedious and time-consuming. Organizations face a growing number of events that can cause unplanned downtime and operational blockers. These events can be caused by technical problems (e.g., viruses, data corruption, human error) or natural phenomena (e.g., fires, floods, power failures, weather-related outages). As such, organizations face increasing costs and complexity in planning, testing, and operating on-premises failover sites because of continual data growth. In the face of these challenges, the server virtualization underlying cloud computing makes quality resiliency programs feasible and cost-effective. AWS provides multiple features that enable you to easily and effectively achieve resiliency for your IT resources. Those features, associated "how to" guidance, and links to learn more are listed below:

AWS governance-enabling feature: How you get security at scale

- Amazon EBS snapshots: Provide highly available, highly reliable, predictable storage volumes with incremental
point-in-time backup control of server data. Learn more.
- Amazon RDS Multi-AZ deployments: Provide the ability to safeguard your data in the event of a failure through automated availability controls and a homogeneous, resilient architecture. Learn more.
- AWS Import/Export: Provides the ability to move massive amounts of data by quickly creating import and export jobs that use Amazon's high-speed internal network. Learn more.
- AWS Storage Gateway: Provides seamless and secure integration between your on-premises IT environment and AWS's storage infrastructure by scheduling snapshots that the gateway stores in Amazon S3 in the form of Amazon EBS snapshots. Learn more.
- AWS Trusted Advisor: Provides automated performance management and availability control by identifying options to increase the availability and redundancy of your AWS application. Learn more.
- Extensive third-party solutions: Provide secure data storage and automated availability control by easily connecting you with a marketplace of applications and tools. Learn more.
- Managed AWS NoSQL/SQL database services: Provide secure and durable data storage, automatically replicating data items across multiple Availability Zones in a Region for built-in high availability and data durability. Learn more: Amazon DynamoDB, Amazon RDS.
- Multi-region deployment: Provides geographic diversity in compute locations (power grids, fault lines, etc.) by offering a variety of locations. Learn more.
- Route 53 health checks and DNS failover: Monitor the availability of your backup data by allowing you to configure DNS failover in active-active, active-passive, and mixed configurations to improve the availability of your application. Learn more.

Service Governance Feature Index

The information above is presented by governance domain. For your reference, a summary of governance features by major AWS service is provided in the table below:

AWS Service: Governance Feature

- Amazon EC2: Amazon EC2
idempotent instance launch, Amazon EC2 resource tagging, Amazon Linux AMIs, Amazon EC2 Dedicated Instances, Amazon EC2 instance launch wizard, Amazon EC2 security groups
- Elastic Load Balancing: Elastic Load Balancing traffic distribution
- Amazon VPC: Amazon VPC, Amazon VPC logical isolation, Amazon VPC network ACLs, Amazon VPC private IP addresses, Amazon VPC security groups, on-premises hardware/software VPN connections
- Amazon Route 53: Amazon Route 53 latency resource record sets, Route 53 health checks and DNS failover
- AWS Direct Connect: AWS Direct Connect
- Amazon S3: Amazon S3 Access Control Lists (ACLs), Amazon S3 bucket policies, Amazon S3 query string authentication, Amazon S3 client-side encryption, Amazon S3 server-side encryption, Amazon S3 Object Expiration, Amazon S3 server access logs, Amazon S3 TCP selective acknowledgement, Amazon S3 TCP window scaling
- Amazon Glacier: Amazon Glacier vault inventory, Amazon Glacier archives
- Amazon EBS: Amazon EBS snapshots
- AWS Import/Export: AWS Import/Export bulk datano…
- AWS Storage Gateway: AWS Storage Gateway integration, AWS Storage Gateway APIs
- Amazon CloudFront: Amazon CloudFront, Amazon CloudFront access logs
- Amazon RDS: Amazon RDS database logs, Amazon RDS Multi-AZ deployments
- Managed AWS NoSQL/SQL database services: Amazon DynamoDB
- AWS Management Console: Account Activity page
- AWS Account Billing: AWS service pricing, AWS Trusted Advisor, billing alarms, consolidated billing, pay-as-you-go pricing
- AWS CloudTrail: AWS CloudTrail, Amazon Incident Management Team, Amazon Simple Notification Service, multi-region deployment
- AWS Identity and Access Management (IAM): AWS IAM Multi-Factor Authentication (MFA), AWS IAM password policy, AWS IAM permissions, AWS IAM policies, AWS IAM roles
- Amazon CloudWatch: AWS CloudWatch Dashboard, Amazon CloudWatch alarms
- AWS
Elastic Beanstalk: AWS Elastic Beanstalk monitoring
- AWS CloudFormation: AWS CloudFormation templates
- AWS Data Pipeline: AWS Data Pipeline Task Runner
- AWS CloudHSM: CloudHSM key storage
- AWS Marketplace: extensive third-party solutions
- Data centers: AWS SOC 1 physical access controls, AWS SOC 2 Security physical access controls, AWS PCI DSS physical access controls, AWS ISO 27001 physical access controls, AWS FedRAMP physical access controls

Conclusion

The primary focus of IT governance is managing resources, security, and performance in order to deliver value in strategic alignment with the goals of the business. Given the rate of growth and increasing complexity of technology, it is increasingly challenging for on-premises environments to scale to provide the granular controls and features needed to deliver quality IT governance in a cost-efficient manner. For the same reasons that delivering infrastructure in the cloud has benefits over on-premises delivery, cloud-based governance offers a lower cost of entry, easier operations, and improved agility by providing more oversight and automation, enabling organizations to focus on their business.

References and Further Reading

What can I do with AWS? http://aws.amazon.com/solutions/awssolutions/
How can I get started with AWS?
http://docs.aws.amazon.com/gettingstarted/latest/awsgsgintro/gsgawsintro.html

Security at Scale: Logging in AWS
How AWS CloudTrail can help you achieve compliance by logging API calls and changes to resources
October 2015

This paper has been archived. For the latest technical content, refer to: https://docs.aws.amazon.com/wellarchitected/latest/securitypillar/detection.html

Amazon Web Services – Security at Scale: Logging in AWS, October 2015

Table of Contents
Abstract
Introduction
Control Access to Log Files
Obtain Alerts on Log File Creation and Misconfiguration
Manage Changes to AWS Resources and Log Files
Storage of Log Files
Generate Customized Reporting of Log Data
Conclusion
Additional Resources
Appendix: Compliance Program Index

Abstract

The logging and monitoring of API calls are key components of security and operational best practices, as well as requirements for industry and regulatory compliance. AWS CloudTrail is a web service that records API calls to supported AWS services in your AWS account and delivers a log file to your Amazon Simple Storage Service (Amazon S3) bucket. AWS CloudTrail alleviates common challenges experienced in an on-premises environment and, in addition to making it easier for you to demonstrate compliance with policies or regulatory standards, makes it easier for you to enhance your security and operational processes. This paper provides an overview of common compliance requirements related to logging and details how AWS CloudTrail features can help satisfy these requirements. There is no additional charge for AWS CloudTrail aside from
standard charges for Amazon S3 log storage and, optionally, Amazon SNS usage for notifications.

Introduction

Amazon Web Services (AWS) provides a wide variety of on-demand IT resources and services that you can launch and manage with pay-as-you-go pricing. Recording AWS API calls and the associated changes in resource configuration is a critical component of IT governance, security, and compliance. AWS CloudTrail provides a simple solution for recording AWS API calls and resource changes that helps alleviate the burden of on-premises infrastructure and storage challenges, while helping you build enhanced preventative and detective security controls for your AWS environment. On-premises logging solutions require installing agents, setting up configuration files and centralized log servers, and building and maintaining expensive, highly durable data stores. AWS CloudTrail eliminates this burdensome infrastructure setup, allows you to turn on logging in as little as two clicks, and gives you increased visibility into all API calls in your AWS account. CloudTrail continuously captures API calls from multiple servers into a highly available processing pipeline. To turn on CloudTrail, you simply sign in to the AWS Management Console, navigate to the CloudTrail console, and click to enable logging. Learn more about the services and regions available for use with AWS CloudTrail on the AWS CloudTrail website. This paper was developed by taking an inventory of logging requirements across common compliance frameworks (e.g., ISO 27001:2005, PCI DSS v2.0, FedRAMP) and combining them into generalized controls and logging domains. You may leverage this paper for a variety of use cases, such as security and operational best practices, compliance with internal policies, industry standards, legal regulations, and more. The paper is written generically so that anyone can understand how AWS CloudTrail can enhance existing logging and monitoring activities.
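The two-click console flow described above can also be scripted. The sketch below is illustrative only: it assembles the keyword arguments one might pass to boto3's CloudTrail client (`client.create_trail(**params)`); the trail, bucket, and topic names are placeholders, not values from the paper.

```python
# Hypothetical helper: builds the parameter dictionary for enabling a trail
# programmatically. No AWS call is made here; the dict mirrors the shape of
# boto3's create_trail keyword arguments.
def trail_params(trail_name, bucket_name, sns_topic=None):
    """Build kwargs for a create_trail call that records all regions."""
    params = {
        "Name": trail_name,
        "S3BucketName": bucket_name,          # log files are delivered here
        "IsMultiRegionTrail": True,           # capture API calls in every region
        "IncludeGlobalServiceEvents": True,   # include global services such as IAM
    }
    if sns_topic:
        params["SnsTopicName"] = sns_topic    # optional delivery notifications
    return params

params = trail_params("audit-trail", "my-cloudtrail-logs", sns_topic="trail-alerts")
```

After `create_trail`, a `start_logging` call with the trail name begins recording; equivalent operations exist in the AWS CLI as `aws cloudtrail create-trail` and `aws cloudtrail start-logging`.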
Control Access to Log Files

To maintain the integrity of your log data, it is important to carefully manage access around the generation and storage of your log files. The ability to view or modify your log data should be restricted to authorized users. A common log-related challenge in on-premises environments is demonstrating to regulators that access to log data is restricted to authorized users. This control can be time-consuming and complicated to demonstrate effectively, because most on-premises environments do not have a single logging solution or consistent logging security across all systems. With AWS CloudTrail, access to Amazon S3 log files is centrally controlled in AWS, which allows you to easily control access to your log files and helps you demonstrate the integrity and confidentiality of your log data.

Common logging requirement: How AWS CloudTrail can help you achieve compliance

- Controls exist to prevent unauthorized access to logs: AWS CloudTrail provides the ability to restrict access to your log files. You can prevent and control changes to your log file data by configuring your AWS Identity and Access Management (IAM) roles and Amazon S3 bucket policies to enforce read-only access to your log files. Learn more. Additionally, you can fortify your authentication and authorization controls by enabling AWS Multi-Factor Authentication (AWS MFA) on the Amazon S3 bucket(s) that store your AWS CloudTrail logs. Learn more.
- Controls exist to ensure access to log records is role-based: AWS CloudTrail provides the ability to control user access to your log files through detailed role-based provisioning. AWS Identity and Access Management (IAM) enables you to securely control access to AWS CloudTrail for your users, and using IAM roles and Amazon S3 bucket policies you can enforce role-based access to the S3 bucket that stores your AWS CloudTrail log files. Learn more.
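To make the read-only pattern above concrete, the sketch below builds an illustrative S3 bucket policy document in Python. The bucket name and statement IDs are placeholders, and the statements assume the standard `cloudtrail.amazonaws.com` service principal: CloudTrail may deliver new log objects, while delete actions are denied for every principal.

```python
import json

BUCKET = "my-cloudtrail-logs"  # placeholder bucket name

# Illustrative policy: the CloudTrail service principal may write new log
# files under AWSLogs/, while object deletion is denied to all principals,
# keeping delivered logs effectively read-only.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudTrailDelivery",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::%s/AWSLogs/*" % BUCKET,
        },
        {
            "Sid": "DenyLogDeletion",
            "Effect": "Deny",
            "Principal": "*",
            "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
            "Resource": "arn:aws:s3:::%s/AWSLogs/*" % BUCKET,
        },
    ],
}

policy_json = json.dumps(policy, indent=2)
```

The resulting JSON could be attached via the S3 console or a put-bucket-policy call; pairing it with IAM policies that grant auditors only `s3:GetObject` completes the read-only stance described in the table.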
Obtain Alerts on Log File Creation and Misconfiguration

Near-real-time alerts about misconfiguration of the logs detailing API calls or resource changes are critically important to effective IT governance and adherence to internal and external compliance requirements. Even from an operational perspective, it is imperative that logging is configured properly so that you can oversee the activities of your users and resources. However, the variability and breadth of logging infrastructure in on-premises environments can make it overwhelming to actively monitor for, and alert on, misconfigurations or changes to your logging configuration. Once you enable AWS CloudTrail for your account, the service delivers log files to your S3 bucket. Optionally, CloudTrail publishes notifications for log file deliveries to an SNS topic so that you can take action upon delivery. These alerts include the Amazon S3 bucket log file address, allowing you to quickly access object metadata about the event from the source log files. Moreover, the AWS Management Console will alert you if your log files are misconfigured and logging is no longer taking place.

Receive Alerts for Log File Creation and Misconfiguration

Common logging requirement: How AWS CloudTrail can help you achieve compliance

- Provide alerts when logs are created or fail, and follow organization-defined actions in the event of a misconfiguration: AWS CloudTrail provides immediate notification of problems with your logging configuration through the AWS Management Console. Learn more.
- Alerts related to log misconfiguration direct users to relevant logs for additional details (without divulging an unnecessary amount of detail): AWS CloudTrail records the Amazon S3 bucket log file address every time a new log file is written. AWS CloudTrail publishes notifications for log file creation so that
customers can take near-real-time action when log files are created. The notification is delivered to your Amazon S3 bucket and is shown in the AWS Management Console. Optionally, Amazon SNS messages can be pushed to mobile devices or distributed services configured via API or the AWS Management Console. The SNS message for log file creation provides the log file address, which limits the information divulged to only the necessary amount while also enabling you to easily link to additional event details. Learn more.

Manage Changes to AWS Resources and Log Files

Understanding the changes made to your resources is a critical component of IT governance and security. Moreover, preventing changes and unauthorized access to this log data directly impacts the integrity of your change management processes and your ability to comply with internal, industry, and regulatory requirements around change management. A major challenge in on-premises environments is the ability to log resource changes, or changes to the logs themselves, because there are only finite resources at your disposal to monitor what feels like an infinite amount of data. AWS CloudTrail allows you to track the changes made to an AWS resource, including creation, modification, and deletion. Additionally, by reviewing the log history of API calls, AWS CloudTrail helps you investigate an event to determine whether unauthorized or unexpected changes occurred, who initiated them, when they occurred, and where they originated. Optionally, CloudTrail publishes notifications to an SNS topic so that you can take action upon delivery of each new log file to your Amazon S3 bucket.

Manage Changes to IT Resources and Log Files

Common logging requirement: How AWS CloudTrail can help you achieve compliance

- Provide a log of changes to system components (including creation and deletion of system-level objects): AWS CloudTrail produces log data on system change events to enable tracking of changes made to your AWS resources. AWS CloudTrail provides visibility into any changes made to your AWS resources, from creation to deletion, by logging changes made using API calls via the AWS Management Console, the AWS Command Line Interface (CLI), or the AWS Software Development Kits (SDKs). Learn more.
- Controls exist to prevent modifications to logs of changes, or failures associated with logs: By default, API call log files are encrypted using S3 Server-Side Encryption (SSE) and placed into your S3 bucket. Modifications to log data can be controlled through the use of IAM and MFA to enforce read-only access to the Amazon S3 bucket that stores your AWS CloudTrail log files. Learn more.

Storage of Log Files

Industry standards and legal regulations may require that log files be stored for varying periods of time. For example, PCI DSS requires logs to be stored for one year, HIPAA requires records to be retained for at least six years, and other requirements mandate longer or variable storage periods depending on the data being logged. As such, managing log file storage requirements for different data on different systems can be an administrative and technological burden. Moreover, storing and archiving large volumes of log data in a persistent and secure way can be a challenge for many organizations. AWS CloudTrail is designed to integrate seamlessly with Amazon S3 and Amazon Glacier, allowing customization of S3 buckets and lifecycle rules to suit your storage needs. AWS CloudTrail provides an indefinite default expiration period on your logs, so you can customize the period of time you store your logs to meet your regulators' requirements.

Common logging requirement: How AWS CloudTrail can help you achieve compliance

- Logs are stored for at least one year: For ease of log file storage, you
can configure AWS CloudTrail to aggregate your log files across all regions and/or across multiple accounts into a single S3 bucket. AWS CloudTrail provides the ability to customize your log storage period by configuring the desired expiration period(s) on log files written to your Amazon S3 bucket. You control the retention policies for your CloudTrail log files: you can retain log files for a time period of your choice, or indefinitely. By default, log files are stored indefinitely. You can also move your log file data to Amazon Glacier for the additional cost savings associated with cold storage. Learn more.
- Store logs for an organization-defined period of time; store logs in real time for resiliency: AWS CloudTrail provides log file resiliency by leveraging Amazon S3, a highly durable storage infrastructure. Amazon S3 standard storage is designed for 99.999999999% durability and 99.99% availability of objects over a given year. Learn more.

Generate Customized Reporting of Log Data

From an operational and security perspective, API call logging provides the data and context required to analyze user behavior and understand events. API calls and IT resource change logs can also be used to demonstrate that only authorized users have performed certain tasks in your environment, in alignment with compliance requirements. However, given the volume and variability of logs from different systems, it can be challenging in an on-premises environment to gain a clear understanding of the activities users have performed and the changes made to your IT resources. AWS CloudTrail produces data you can use to detect abnormal behavior, retrieve event activities associated with specific objects, or provide a simple audit trail for your account. You can evolve your current logging analytics by using the 25+ fields in the event data that AWS CloudTrail provides to build queries and create customized reports focused on internal investigations, external compliance, and more. AWS
CloudTrail enables you to monitor API calls for specific known undesired behaviors and raise alarms using your log management or security information and event management (SIEM) solutions. The enriched data provided by AWS CloudTrail can accelerate your investigation time and decrease your incident response time. Additionally, data provided by AWS CloudTrail may enable you to perform deeper security analysis of API calls to identify suspicious behavior and latent patterns that don't trigger immediate alarms but may represent a security issue. Finally, AWS CloudTrail works with an extensive range of partners offering ready-to-run solutions for security analytics and alerting. Learn more about our partner solutions on the AWS CloudTrail website.

Generate Customized Reporting of Log Data

Common logging requirement: How AWS CloudTrail can help you achieve compliance

- Log individual user access to resources by system accessed and actions taken ("individual user access" includes access by system administrators and system operators; "resources" includes audit trail logs): AWS CloudTrail provides the ability to generate comprehensive and detailed API call reports by logging the activities performed by all users who access your logged AWS resources, including root, IAM users, federated users, and any users or services performing activities on behalf of users, using any access method. Learn more.
- Produce logs at an organization-defined frequency: AWS CloudTrail provides the ability to use log analysis tools to retrieve log file data at customized frequencies, creating logs in near real time and generally delivering the log data to your Amazon S3 bucket within 15 minutes of the API call. You can use the log files as input into industry-leading log management and analysis solutions to perform analytics. Learn more.
- Provide a log of when logging activity was initiated: AWS CloudTrail
logs all API calls, including the calls that enable and disable AWS CloudTrail logging itself. This allows you to track when CloudTrail was turned on or off. Learn more.
- Generate logs synchronized to a single internal system clock to provide consistent timestamp information: AWS CloudTrail produces log data from a single internal system clock, generating event timestamps in Coordinated Universal Time (UTC), consistent with the ISO 8601 basic time and date format standard. Learn more.
- Provide logs that can show whether inappropriate or unusual activity has occurred: AWS CloudTrail enables you to monitor API calls by recording authorization failures in your AWS account, allowing you to track attempted access to restricted resources or other unusual activity. Learn more.
- Provide logs with adequate event details: AWS CloudTrail delivers API call records with detailed information such as event type, date and time, location, source/origin, outcome (including exceptions, faults, and security event information), affected resource (data, system, etc.), and associated user. AWS CloudTrail can help you identify the user, the time of the event, the IP address of the user, the request parameters provided by the user, the response elements returned by the service, and optional error codes and error messages. Learn more.

Conclusion

You can run nearly anything on AWS that you would run on-premises: websites, applications, databases, mobile apps, email campaigns, distributed data analysis, media storage, and private networks. The services AWS provides are designed to work together so that you can build complete solutions. AWS CloudTrail provides a simple solution for logging user activity that helps alleviate the burden of running a complex logging system. Another benefit of migrating workloads to AWS is the ability to achieve a higher level of security, at scale, by utilizing the many governance-enabling features offered. For the same reasons that delivering infrastructure in the cloud has benefits over on-premises delivery, cloud-based governance offers a lower cost of entry,
easier operations, and improved agility by providing more visibility, security control, and central automation. AWS CloudTrail is one of the services you can use to achieve a high level of governance of your IT resources using AWS.

Additional Resources

Below are links in response to commonly asked questions related to logging in AWS:
- What can I do with AWS? Learn more.
- How can I get started with AWS? Learn more.
- How can I get started with AWS CloudTrail? Learn more.
- Does AWS CloudTrail have a list of FAQs? Learn more.
- How can I achieve compliance while using AWS? Learn more.
- How can I prepare for an audit while using AWS? Learn more.

This document is provided for informational purposes only. It represents AWS's current product offerings as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Appendix: Compliance Program Index

The information in the whitepaper above was presented by logging requirement domain. For your reference, the logging requirements of common compliance frameworks are listed in the table below:

AWS Compliance Program: Compliance Requirement

Payment Card Industry (PCI) Data Security Standard (DSS) Level 1: AWS is Level 1 compliant under the PCI DSS. You can run
applications on our PCI-compliant technology infrastructure for storing, processing, and transmitting credit card information in the cloud. Learn more.
- PCI 5.2: Ensure that all anti-virus mechanisms are current, actively running, and generating audit logs.
- PCI 10.1: Establish a process for linking all access to system components (especially access done with administrative privileges such as root) to each individual user.
- PCI 10.2: Implement automated audit trails for all system components to reconstruct the following events:
  - 10.2.1: All individual accesses to cardholder data
  - 10.2.2: All actions taken by any individual with root or administrative privileges
  - 10.2.3: Access to all audit trails
  - 10.2.4: Invalid logical access attempts
  - 10.2.5: Use of identification and authentication mechanisms
  - 10.2.6: Initialization of the audit logs
  - 10.2.7: Creation and deletion of system-level objects
- PCI 10.3: Record at least the following audit trail entries for all system components for each event:
  - 10.3.1: User identification
  - 10.3.2: Type of event
  - 10.3.3: Date and time
  - 10.3.4: Success or failure indication
  - 10.3.5: Origination of the event
  - 10.3.6: Identity or name of affected data, system component, or resource
- PCI 10.4.2: Time data is protected.
- PCI 10.5: Secure audit trails so they cannot be altered.
- PCI 10.5.1: Limit viewing of audit trails to those with a job-related need.
- PCI 10.5.2: Protect audit trail files from unauthorized modifications.
- PCI 10.5.3: Promptly back up audit trail files to a centralized log server or media that is difficult to alter.
- PCI 10.5.4: Write logs for external-facing technologies onto a log server on the
internal LAN PCI 1055: Use file integrity monitoring or change detection software on logs to ensure that existing log data cannot be changed without generating alerts (although new data being added should not cause an alert) PCI 106: Review logs for all system components at least daily Log reviews must include those servers that perform security functions like intrusion detection system (IDS) and aut hentication authorization and accounting protocol (AAA) servers (for example RADIUS) PCI 107: Retain audit trail history for at least one year with a minimum of three months immediately available for analysis (for example online archived or rest orable from back up) PCI 115: Deploy file integrity monitoring tools to alert personnel to unauthorized modification of critical system files configuration files or content files; and configure the software to perform critical file comparisons at lea st weekly PCI 122: Develop daily operational security procedures that are consistent with requirements in this specification (for example user account maintenance procedures and log review procedures) PCI A12d: Restrict each entity’s access and privileges to its own cardholder data environment only PCI A13: Ensure logging and audit trails are enabled and unique to each entity’s cardholder data environment and consistent with PCI DSS Requirement 10 PCI 114: Use intrusion detection system s and/or intrusion prevention systems to monitor all traffic at the perimeter of the cardholder data environment as well as at critical points inside of the cardholder data environment and alert personnel to suspected compromises Keep all intrusion dete ction and prevention engines baselines and signatures uptodate ArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 12 of 16 AWS Compliance Program Compliance Requirement Payment Card Industry (PCI) Data Security Standard (DSS) Level 1 AWS is Level 1 compliant under the PCI DSS You can run applications on our PCIcompliant 
technology infrastructure for storing, processing, and transmitting credit card information in the cloud. Learn more. PCI 11.5: Deploy file-integrity monitoring tools to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly. Service Organization Controls 2 (SOC 2): The SOC 2 report is an attestation report that expands the evaluation of controls to the criteria set forth by the American Institute of Certified Public Accountants (AICPA) Trust Services Principles. These principles define leading-practice controls relevant to security, availability, processing integrity, confidentiality, and privacy applicable to service organizations such as AWS. Learn more. SOC 2 Security 3.2.g: Procedures exist to restrict logical access to the defined system, including, but not limited to, the following matters: restriction of access to system configurations, superuser functionality, master passwords, powerful utilities, and security devices (for example, firewalls). SOC 2 Security 3.3: Procedures exist to restrict physical access to the defined system, including, but not limited to, facilities, backup media, and other system components such as firewalls, routers, and servers. SOC 2 Security 3.7: Procedures exist to identify, report, and act upon system security breaches and other incidents. SOC 2 Availability 3.5.f: Procedures exist to restrict logical access to the defined system, including, but not limited to, the following matters: restriction of access to system configurations, superuser functionality, master passwords, powerful utilities, and security devices (for example, firewalls). SOC 2 Availability 3.6: Procedures exist to restrict physical access to the defined system, including, but not limited to, facilities, backup media, and other system components such as firewalls, routers, and servers.
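Several of the controls above (PCI DSS 10.5.5 and 11.5, and the SOC 2 logical-access criteria) come down to the same mechanism: compare the current state of critical files against a trusted baseline and alert on any mismatch. The following Python sketch illustrates that baseline-and-compare idea; it is a toy illustration with invented function names, not a PCI-validated file-integrity monitoring tool.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline(paths) -> dict:
    """Record a known-good digest for each monitored file (the trusted baseline)."""
    return {str(p): sha256_of(Path(p)) for p in paths}

def detect_changes(baseline: dict) -> list:
    """Return files whose current digest differs from the baseline, or that are missing.

    A real FIM tool would run this comparison on a schedule (PCI 11.5 asks for
    at least weekly critical-file comparisons) and raise an alert per entry.
    """
    changed = []
    for path, expected in baseline.items():
        p = Path(path)
        if not p.exists() or sha256_of(p) != expected:
            changed.append(path)
    return changed
```

Note that the baseline itself must be protected from tampering (for example, stored on a separate system), since an attacker who can rewrite the baseline defeats the check.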
SOC 2 Availability 3.10: Procedures exist to identify, report, and act upon system availability issues and related security breaches and other incidents. SOC 2 Confidentiality 3.3: The system procedures related to confidentiality of data processing are consistent with the documented confidentiality policies. SOC 2 Confidentiality 3.8.1: Procedures exist to restrict logical access to the system and the confidential information resources maintained in the system, including, but not limited to, the following matters: restriction of access to system configurations, superuser functionality, master passwords, powerful utilities, and security devices (for example, firewalls). SOC 2 Confidentiality 3.13: Procedures exist to identify, report, and act upon system confidentiality and security breaches and other incidents. SOC 2 Confidentiality 4.2: There is a process to identify and address potential impairments to the entity’s ongoing ability to achieve its objectives in accordance with its system confidentiality and related security policies. SOC 2 Integrity 3.6.g: Procedures exist to restrict logical access to the defined system, including, but not limited to, the following matters: restriction of access to system configurations, superuser functionality, master passwords, powerful utilities, and security devices (for example, firewalls). SOC 2 Integrity 4.1: System processing integrity and security performance are periodically reviewed and compared with the defined system processing integrity and related security policies. SOC 2
Integrity 4.2: There is a process to identify and address potential impairments to the entity’s ongoing ability to achieve its objectives in accordance with its defined system processing integrity and related security policies. International Organization for Standardization (ISO) 27001: ISO 27001 is a widely adopted global security standard that outlines the requirements for information security management systems. It provides a systematic approach to managing company and customer information that’s based on periodic risk assessments. Learn more. Due to copyright laws, AWS cannot provide the requirement descriptions for ISO 27001. You may purchase a copy of the ISO 27001 standard online from various sources, including ISO.org. Federal Risk and Authorization Management Program (FedRAMP): FedRAMP is a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services up to the Moderate level. Learn more. FedRAMP NIST 800-53 Rev 3 AU-2: The organization: a. Determines, based on a risk assessment and mission/business needs, that the information system must be capable of auditing the following events: [Assignment: organization-defined list of auditable events]; b. Coordinates the security audit function with other organizational entities requiring audit-related information to enhance mutual support and to help guide the selection of auditable events; c. Provides a rationale for why the list of auditable events are deemed to be adequate to support after-the-fact investigations of security incidents; and d. Determines, based on current threat information and ongoing assessment of risk, that the following events are to be audited within the information system: [Assignment: organization-defined subset of the auditable events defined in AU-2 a. to be audited along with
the frequency of (or situation requiring) auditing for each identified event]. FedRAMP NIST 800-53 Rev 4 AU-2: The organization: a. Determines that the information system must be capable of auditing the following events: [Assignment: organization-defined auditable events]; b. Coordinates the security audit function with other organizational entities requiring audit-related information to enhance mutual support and to help guide the selection of auditable events; c. Provides a rationale for why the auditable events are deemed to be adequate to support after-the-fact investigations of security incidents; and d. Determines that the following events are to be audited within the information system: [Assignment: organization-defined subset of the auditable events defined in AU-2 a. to be audited along with the frequency of (or situation requiring) auditing for each identified event]. FedRAMP NIST 800-53 Rev 3 AU-3: The information system produces audit records that contain sufficient information to, at a minimum, establish what type of event occurred, when (date and time) the event occurred, where the event occurred, the source of the event, the outcome (success or failure) of the event, and the identity of any user/subject associated with the event. FedRAMP NIST 800-53 Rev 4 AU-3: The information system produces audit records containing information that, at a minimum, establishes what type of event occurred, when the event occurred, where the event occurred, the source of the event, the outcome of the event, and the identity of any user or subject associated with the event. FedRAMP NIST 800-53 Rev 3 AU-4: The organization allocates audit record storage capacity and configures auditing to reduce the likelihood of such capacity being exceeded. FedRAMP NIST 800-53 Rev 4 AU-4: The organization allocates audit record storage capacity in accordance with
[Assignment: organization-defined audit record storage requirements]. Federal Risk and Authorization Management Program (FedRAMP): FedRAMP is a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services up to the Moderate level. Learn more. FedRAMP NIST 800-53 Rev 3 AU-5: The information system: a. Alerts designated organizational officials in the event of an audit processing failure; and b. Takes the following additional actions: [Assignment: organization-defined actions to be taken (e.g., shut down information system, overwrite oldest audit records, stop generating audit records)]. FedRAMP NIST 800-53 Rev 4 AU-5: The information system: a. Alerts [Assignment: organization-defined personnel] in the event of an audit processing failure; and b. Takes the following additional actions: [Assignment: organization-defined actions to be taken (e.g., shut down information system, overwrite oldest audit records, stop generating audit records)]. FedRAMP NIST 800-53 Rev 3 AU-6: The organization: a. Reviews and analyzes information system audit records [Assignment: organization-defined frequency] for indications of inappropriate or unusual activity, and reports findings to designated organizational officials; and b. Adjusts the level of audit review, analysis, and reporting within the information system when there is a change in risk to organizational operations, organizational assets, individuals, other organizations, or the Nation based on law enforcement information, intelligence information, or other credible sources of information. FedRAMP NIST 800-53 Rev 4 AU-6: The organization: a. Reviews and analyzes information system audit records [Assignment: organization-defined frequency] for indications of [Assignment: organization-defined inappropriate or unusual activity]; and b. Reports findings to [Assignment: organization-defined personnel or roles]. FedRAMP NIST 800-53 Rev 3 AU-8: The information system
uses internal system clocks to generate time stamps for audit records. FedRAMP NIST 800-53 Rev 4 AU-8: The information system: a. Uses internal system clocks to generate time stamps for audit records; and b. Generates time in the time stamps that can be mapped to Coordinated Universal Time (UTC) or Greenwich Mean Time (GMT) and meets [Assignment: organization-defined granularity of time measurement]. FedRAMP NIST 800-53 Rev 3 AU-9: The information system protects audit information and audit tools from unauthorized access, modification, and deletion. FedRAMP NIST 800-53 Rev 4 AU-9: The information system protects audit information and audit tools from unauthorized access, modification, and deletion. FedRAMP NIST 800-53 Rev 3 AU-10: The information system protects against an individual falsely denying having performed a particular action. FedRAMP NIST 800-53 Rev 4 AU-10: The information system protects against an individual (or process acting on behalf of an individual) falsely denying having performed [Assignment: organization-defined actions to be covered by non-repudiation]. FedRAMP NIST 800-53 Rev 3 AU-11: The organization retains audit records for [Assignment: organization-defined time period consistent with records retention policy] to provide support for after-the-fact investigations of security incidents and to meet regulatory and organizational information retention requirements. FedRAMP NIST 800-53 Rev 4 AU-11: The organization retains audit records for [Assignment: organization-defined time period consistent with records retention policy] to provide support for after-the-fact investigations of security incidents and to meet regulatory and organizational information retention requirements,General,consultant,Best Practices Security_of_AWS_CloudHSM_Backups,Security of AWS CloudHSM Backups: Fully Managed Hardware Security Modules (HSMs) in the AWS Cloud. First published December 2017. Updated March 24, 2021. Notices: Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. © 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved. Contents: Abstract; Introduction; AWS CloudHSM: Managed by AWS, controlled by you; High availability; CloudHSM cluster backups; Creating a backup; Archiving a backup; Restoring a backup; Security of backups; Key hierarchy; Restoration of backups; Conclusion; Contributors; Further reading; Document revisions. Abstract: AWS CloudHSM clusters provide high availability and redundancy by distributing cryptographic operations across all hardware security modules (HSMs) in the cluster. Backup and restore is the mechanism by which a new HSM in a cluster is synchronized. This whitepaper provides details on the cryptographic mechanisms supporting backup and restore functionality and the security mechanisms protecting the Amazon Web Services (AWS)-managed backups. This whitepaper also provides in-depth information on how backups are protected in all three phases of the CloudHSM
backup lifecycle process: Creation, Archive, and Restore. For the purposes of this whitepaper, it is assumed that you have a basic understanding of AWS CloudHSM and cluster architecture. Amazon Web Services – Security of AWS CloudHSM Backups. Introduction: AWS offers two options for securing cryptographic keys in the AWS Cloud: AWS Key Management Service (AWS KMS) and AWS CloudHSM. AWS KMS is a managed service that uses hardware security modules (HSMs) to protect the security of your encryption keys. AWS CloudHSM delivers fully managed HSMs in the AWS Cloud, which allows you to add secure, validated key storage and high-performance crypto acceleration to your AWS applications. CloudHSM offers you the option of single-tenant access and control over your HSMs. CloudHSM is based on Federal Information Processing Standards (FIPS) 140-2 Level 3 validated hardware. CloudHSM delivers all the benefits of traditional HSMs, including secure generation, storage, and management of cryptographic keys used for data encryption that are controlled and accessible only by you. As a managed service, it also automates time-consuming administrative tasks such as hardware provisioning, software patching, high availability, and backups. HSM capacity can be scaled quickly by adding and removing HSMs from your cluster on demand. The backup and restore functionality of CloudHSM is what enables scalability, reliability, and high availability in CloudHSM. A key aspect of the backup and restore feature is a secure backup protocol that CloudHSM uses to back up your cluster. This paper takes an in-depth look at the security mechanisms in place around this feature. AWS CloudHSM: Managed by AWS, controlled by you. AWS CloudHSM provides HSMs in a cluster. A cluster is a collection of individual HSMs that AWS CloudHSM keeps in sync. You can think of a cluster as one logical HSM. When you perform a key generation task or operation on one HSM in a cluster, the other
HSMs in that cluster are automatically kept up to date. Each HSM in a cluster is a single-tenant HSM under your control. At the hardware level, each HSM includes hardware-enforced isolation of crypto operations and key storage. Each HSM runs on dedicated cryptographic cores. Each HSM appears as a network resource in your virtual private cloud (VPC). AWS manages the HSM on your behalf, performing functions such as health checks, backups, and synchronization of HSMs within a cluster. However, you alone control the user accounts, passwords, login policies, key rotation procedures, and all aspects of configuring and using the HSMs. The implication of this control is that your cryptographic data is secure from external compromise. This is important to financial applications subject to PCI regulations, healthcare applications subject to HIPAA regulations, and streaming video solutions subject to contractual DRM requirements. You interact with the HSMs in a cluster via the AWS CloudHSM client. Communication occurs over an end-to-end encrypted channel. AWS does not have visibility into your communication with your HSM, which occurs within this end-to-end encrypted channel. High availability: Historically, deploying and maintaining traditional HSMs in a high-availability configuration has been a manual process that is cumbersome and expensive. CloudHSM makes scalability and high availability simple without compromising security. When you use CloudHSM, you begin by creating a cluster in a particular AWS Region. A cluster can contain multiple individual HSMs. For idle workloads, you can delete all HSMs and simply retain the empty cluster. For production workloads, you should have at least two HSMs spread across multiple Availability Zones. CloudHSM automatically synchronizes and load-balances the HSMs within a cluster. The CloudHSM client load-balances cryptographic operations across all HSMs in the cluster based on the capacity of each HSM
for additional processing. If a cluster requires additional throughput, you can expand your cluster by adding more HSMs through a single API call or a click in the CloudHSM console. When you expand a cluster, CloudHSM automatically provisions a new HSM as a clone of the other HSMs in the cluster. This is done by taking a backup of an existing HSM and restoring it to the newly added HSM. When you delete an HSM from a cluster, a backup is automatically taken. This way, when you create a new HSM later, you can pick up where you left off. Should an HSM fail for any reason, the service will automatically replace it with a new, healthy HSM. This HSM is restored from a backup of another HSM in the cluster, if available. Otherwise, the new HSM is restored from the last available backup taken for the cluster. When you don’t need to use a cluster any more, you can delete all its HSMs as well as the cluster. Later, when you need to use the HSMs again, you can create a new cluster from the backup, effectively restoring your previous HSM. In the next section, we will take a deeper look at the contents of the backup and the security mechanisms used to protect it. CloudHSM cluster backups: Backups are initiated, archived, and restored by CloudHSM. A backup is a complete, encrypted snapshot of the HSM. Each AWS-managed backup contains the entire contents of the HSM, including keys, certificates, users, policies, quorum settings, and configuration options. This includes: • Certificates on the HSM, including the cluster certificate • All HSM users (COs, CUs, and AU) • All key material on the HSM • HSM configurations and policies. Backups are stored in Amazon Simple Storage Service (Amazon S3) within the same Region as the cluster. You can view backups available for your cluster from the CloudHSM console. Backups can only be restored to a genuine HSM running in the AWS Cloud. The restored HSM retains all the configurations and policies you put in place on
the original HSM. Creating a backup: CloudHSM triggers backups in the following scenarios: • CloudHSM automatically backs up your HSM clusters periodically • When adding an HSM to a cluster, CloudHSM takes a backup from an active HSM in that cluster and restores it to the newly provisioned HSM • When deleting an HSM from a cluster, CloudHSM takes a backup of the HSM before deleting it. A backup is a unified, encrypted object combining certificates, users, keys, and policies. It is created and encrypted as a single, tightly bound object. The individual components are not separable from each other. The key used to encrypt the backup is derived using a combination of persistent and ephemeral secret keys. Backups are encrypted and decrypted within your HSM only and can only be restored to a genuine HSM running within the AWS Cloud. This is discussed in further detail in the Security of Backups: Restoration of Backups section of this document. CloudHSM uses FIPS 140-2 Level 3 validated HSMs. Your cryptographic material is never accessible in the clear outside the hardware. Archiving a backup: CloudHSM stores the cluster backups in a service-controlled Amazon S3 location in the same AWS Region as your cluster. The following figure illustrates an encrypted backup of an HSM cluster in a service-controlled Amazon S3 bucket. Figure: Encrypted backup of an HSM cluster in a service-controlled S3 bucket. Restoring a backup: Backups are used in two scenarios: • When you provision a new cluster using an existing backup • When a second (or subsequent) HSM is added to a cluster, or when CloudHSM automatically replaces an unhealthy HSM. In both scenarios, the backup is restored to a newly created HSM. During restoration, the backup is decrypted within an HSM using the process described in the next section. The decryption relies on a set of keys available only within an authentic
hardware instance from the original manufacturer installed in the AWS Cloud. Therefore, CloudHSM can restore backups onto only authentic HSMs within the AWS Cloud. Recall that each backup contains all users, keys, access policies, and configuration from the original HSM. Therefore, the restored HSM contains the same protections and access controls as the original and is equivalently secure to the original. When your application or cryptographic officer seeks to use the HSM, you can verify that the HSM is a clone of the one you originally established a trust relationship with. You do so by confirming that the cluster certificate is signed using the same key you used when initially claiming the HSM. This ensures that you are talking to your HSM. Note that while CloudHSM manages backups, the service does not have any access to the data, cryptographic material, user information, and the keys encapsulated within the backup. Specifically, AWS has no way to recover your keys if you lose your access credentials to log in to the HSM. Security of backups: The CloudHSM backup mechanism has been validated under FIPS 140-2 Level 3. A backup taken by an HSM configured in FIPS mode cannot be restored to an HSM that is not also in FIPS mode. Operation in FIPS mode is a required configuration for CloudHSM. An HSM in FIPS mode is running production firmware provided by the manufacturer and signed with a FIPS production key. This ensures other parties cannot forge the firmware. Furthermore, each backup contains a complete copy of everything in the HSM. Specifically, each AWS-managed backup contains the entire contents of the HSM, including keys, claiming certificates, users, policies, quorum settings, and configuration options. Accordingly, you can demonstrate – for example, during a compliance audit – that each HSM with a restored backup is protected at exactly the same level and with the same policies and controls as when the backup was first created. Key
hierarchy: As discussed earlier, a backup is encrypted within the HSM before it is provided to CloudHSM for archival. The backup is encrypted using a backup encryption key (BEK), described in the following sections. Manufacturer’s key backup key (MKBK): The manufacturer’s key backup key (MKBK) exists in the HSM hardware provided by the manufacturer. This key is common to all HSMs provided by the manufacturer to AWS. The MKBK cannot be accessed or used by any user or for any purpose other than the generation of the backup encryption key. Specifically, AWS does not have access to or visibility into the MKBK. AWS key backup key (AKBK): The AWS key backup key (AKBK) is securely installed by the CloudHSM service when the hardware is placed into operation within the CloudHSM fleet. This key is unique to hardware installed by AWS within our CloudHSM infrastructure. The AKBK is generated securely within an offline, FIPS-compliant hardware security module and loaded under two-person control into newly commissioned CloudHSM hardware. Backup encryption key (BEK): The backup of the HSM is encrypted using a backup encryption key (BEK). The BEK is an AES-256 key that is generated within the HSM when a backup is requested. The HSM uses the BEK to encrypt its backup. The encrypted backup includes a wrapped copy of the BEK. The BEK is wrapped with an AES 256-bit wrapping key using a FIPS-approved AES key wrapping method. This method complies with NIST Special Publication 800-38F. The wrapping key is derived from the MKBK and AKBK via a key derivation function (KDF). This same wrapping key must be derived again to recover the BEK prior to decrypting the backup. This implies that both the MKBK and AKBK are required to decrypt a customer backup. Put another way, the BEK cannot be discovered or derived using a secret managed by AWS or by the manufacturer alone. Once encrypted,
the backup is ready to be archived. Recall that each backup is stored on Amazon S3. Restoration of backups: CloudHSM backups can only be decrypted by an HSM that is able to derive the same wrapping key used to secure the BEK when the backup was created. Recall that this wrapping key is derived from the manufacturer’s key backup key (MKBK) and the AWS key backup key (AKBK). The MKBK is only embedded in genuine hardware by the manufacturer, and the AKBK is only installed on genuine hardware within the AWS fleet. Therefore, the BEK cannot be unwrapped outside of an AWS-managed HSM. This, in turn, implies that the backup cannot be decrypted outside of an AWS-managed HSM. Conclusion: AWS CloudHSM provides a secure, FIPS-validated HSM backup and restore mechanism that enables high-availability and failure-management capabilities without sacrificing security or privacy. You retain complete control over your HSM and the data within. Backups are encrypted strongly at creation, stored securely, and never decrypted outside an HSM. Backups can only be restored to genuine hardware in the AWS Cloud running firmware signed with a FIPS production key. As backups include user accounts and security policy configurations in addition to cryptographic material, restored HSMs retain all the security policies and controls from the original HSM. With CloudHSM, you can demonstrate – for example, during a compliance audit – that an HSM restored from backup is protected at exactly the same level and with the same policies and controls as the HSM from which the backup was originally created. Contributors: The following individuals and organizations contributed to this document: • Ben Grubin, General Manager, AWS Cryptography • Balaji Iyer, Senior Professional Services Consultant, AWS • Avni Rambhia, Senior Product Manager, AWS Cryptography. Further reading: • CloudHSM documentation: https://aws.amazon.com/documentation/cloudhsm/ • CloudHSM product details:
https://aws.amazon.com/cloudhsm/details/ • Blog – “Cost-Effective Hardware Key Management at Cloud Scale for Sensitive & Regulated Workloads”: https://aws.amazon.com/blogs/aws/aws-cloudhsm-update-cost-effective-hardware-key-management/ • Webinar – “Secure, Scalable Key Storage in AWS”: https://www.youtube.com/watch?v=hEVks207ALM • Verify the Identity and Authenticity of Your Cluster’s HSM: http://docs.aws.amazon.com/cloudhsm/latest/userguide/verify-hsm-identity.html • AWS CloudHSM Client Tools and Software Libraries: http://docs.aws.amazon.com/cloudhsm/latest/userguide/client-tools-and-libraries.html#client Document revisions: March 24, 2021 – Reviewed for technical accuracy. December 2017 – First publication,General,consultant,Best Practices Security_Overview_of_AWS_Lambda,Security Overview of AWS Lambda: An In-Depth Look at AWS Lambda Security. January 2021. This paper has been archived. For the latest version of this document, see: https://docs.aws.amazon.com/whitepapers/latest/security-overview-aws-lambda/welcome.html Notices: Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents AWS’s current product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS’s products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. AWS’s responsibilities and liabilities to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. © 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved. Contents: Abstract; Introduction; About AWS Lambda; Benefits of Lambda; Cost for Running Lambda-Based Applications; The
Shared Responsibility Model; Lambda Functions and Layers; Lambda Invoke Modes; Lambda Executions; Lambda Execution Environments; Execution Role; Lambda MicroVMs and Workers; Lambda Isolation Technologies; Storage and State; Runtime Maintenance in Lambda; Monitoring and Auditing Lambda Functions; Amazon CloudWatch; AWS CloudTrail; AWS X-Ray; AWS Config; Architecting and Operating Lambda Functions; Lambda and Compliance; Lambda Event Sources; Conclusion; Contributors; Further Reading; Document Revisions. Abstract: This whitepaper presents a deep dive into the AWS Lambda service through a security lens. It provides a well-rounded picture of the service, which is useful for new adopters, and deepens understanding of Lambda for current users. The intended audience for this whitepaper is Chief Information Security Officers (CISOs), information security groups, security engineers, enterprise architects, compliance teams, and any others interested in understanding the underpinnings of AWS Lambda. Amazon Web Services – Security Overview of AWS Lambda. Introduction: Today, more workloads use AWS Lambda to achieve scalability, performance, and cost efficiency without managing the underlying computing. These workloads scale to thousands of concurrent requests per second. Lambda is used by hundreds of thousands of Amazon Web Services (AWS) customers to serve trillions of requests every month. Lambda is suitable for mission-critical applications in many industries. A broad variety of customers, from media and entertainment to financial services and other regulated industries, take advantage of Lambda. These customers decrease time to market, optimize costs, and improve agility by focusing on what they do best: running their business. The managed runtime environment model enables Lambda to manage much of the implementation details of running serverless workloads. This model further reduces the attack surface while making cloud security simpler.
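To make the managed runtime model concrete, the sketch below shows the shape of a minimal Python Lambda handler that processes an Amazon S3 event notification. The `Records[].s3.object.key` structure follows the standard S3 event format; the handler logic itself is a hypothetical illustration, not an example taken from this whitepaper.

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda handler: summarize the S3 objects named in the event.

    `event` follows the S3 notification shape ({"Records": [...]});
    `context` carries runtime metadata and is unused here.
    """
    keys = [
        record["s3"]["object"]["key"]
        for record in event.get("Records", [])
    ]
    # Lambda serializes this return value as the invocation result.
    return {
        "statusCode": 200,
        "body": json.dumps({"objects_seen": keys}),
    }
```

Everything outside this function (provisioning, patching, scaling the hosts that run it) is handled by the service, which is what narrows the customer's share of the responsibility model to the code and its permissions.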
This whitepaper presents the underpinnings of that model, along with best practices, to developers, security analysts, security and compliance teams, and other stakeholders. About AWS Lambda: Lambda is an event-driven, serverless compute service that extends other AWS services with custom logic, or creates backend services that operate with scale, performance, and security in mind. Lambda can be configured to automatically run code in response to multiple events, such as HTTP requests through Amazon API Gateway, modifications to objects in Amazon Simple Storage Service (Amazon S3) buckets, table updates in Amazon DynamoDB, and state transitions in AWS Step Functions. Lambda runs code on a highly available compute infrastructure and performs all the administration of the underlying platform, including server and operating system maintenance, capacity provisioning and automatic scaling, patching, code monitoring, and logging. With Lambda, you can just upload your code and configure when to invoke it; Lambda takes care of everything else required to run your code. Lambda integrates with many other AWS services and enables you to create serverless applications or backend services, ranging from periodically triggered simple automation tasks to full-fledged microservices applications. Lambda can be configured to access resources within your Amazon Virtual Private Cloud (Amazon VPC) and, by extension, your on-premises resources. Lambda integrates with AWS Identity and Access Management (IAM), which you can leverage to protect your data and configure fine-grained access controls using a variety of access management strategies, while maintaining a high level of security and auditing to help you meet your compliance needs. Benefits of Lambda: Customers who want to unleash the creativity and speed of their development organizations without compromising their IT team’s ability to provide a scalable, cost-effective, and manageable
infrastructure find that Lambda lets them trade operational complexity for agility and better pricing, without compromising on scale or reliability. Lambda offers many benefits, including the following:

• No Servers to Manage – Lambda runs your code on highly available, fault-tolerant infrastructure spread across multiple Availability Zones (AZs) in a single Region, seamlessly deploying code and providing all the administration, maintenance, and patches of the infrastructure. Lambda also provides built-in logging and monitoring, including integration with Amazon CloudWatch, CloudWatch Logs, and AWS CloudTrail.
• Continuous Scaling – Lambda precisely manages scaling of your functions (or application) by running event-triggered code in parallel and processing each event individually.
• Millisecond Metering – With Lambda, you are charged for every 1 millisecond (ms) your code executes and for the number of times your code is triggered. You pay for execution duration and throughput actually consumed, instead of by server unit.
• Increases Innovation – Lambda frees up your programming resources by taking over the infrastructure management, allowing you to focus on innovation and development of business logic.
• Modernize your Applications – Lambda enables you to use functions with pre-trained machine learning models to inject artificial intelligence into applications easily. A single application programming interface (API) request can classify images, analyze videos, convert speech to text, perform natural language processing, and more.
• Rich Ecosystem – Lambda supports developers through the AWS Serverless Application Repository for discovering, deploying, and publishing serverless applications; the AWS Serverless Application Model for building serverless applications; and integrations with various integrated development environments (IDEs) like AWS Cloud9, AWS Toolkit for Visual Studio, AWS Tools for Visual Studio Team Services, and several others. Lambda is integrated
with additional AWS services to provide you a rich ecosystem for building serverless applications.

Cost for Running Lambda-Based Applications

Lambda offers a granular pay-as-you-go pricing model. With this model, you are charged based on the number of function invocations and their duration (the time it takes for the code to run). In addition to this flexible pricing model, Lambda also offers 1 million perpetually free requests per month, which enables many customers to automate their processes without any costs.

The Shared Responsibility Model

At AWS, security and compliance is a shared responsibility between AWS and the customer. This shared responsibility model can help relieve your operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. For Lambda, AWS manages the underlying infrastructure and application platform, the operating system, and the execution environment. You are responsible for the security of your code and for identity and access management (IAM) to the Lambda service and within your function. Figure 1 shows the shared responsibility model as it applies to the common and distinct components of Lambda. AWS responsibilities appear in orange and customer responsibilities appear in blue.

Figure 1 – Shared Responsibility Model for AWS Lambda

Lambda Functions and Layers

With Lambda, you can run code with virtually zero administration of the underlying infrastructure. You are responsible only for the code that you provide Lambda, and for the configuration of how Lambda runs that code on your behalf. Today, Lambda supports two types of code resources: Functions and Layers. A function is a resource which can be invoked to run your code in Lambda. Functions can include a common or shared resource called Layers. Layers can be used to share common code or data across different
functions or AWS accounts. You are responsible for the management of all the code contained within your functions or layers.

When Lambda receives the function or layer code from a customer, Lambda protects access to it by encrypting it at rest using AWS Key Management Service (AWS KMS), and in transit by using TLS 1.2+. You can manage access to your functions and layers through AWS IAM policies or through resource-based permissions. For a full list of supported IAM features on Lambda, see AWS Services that work with IAM. You can also control the entire lifecycle of your functions and layers through Lambda's control plane APIs. For example, you can choose to delete your function by calling DeleteFunction, or revoke permissions from another account by calling RemovePermission.

Lambda Invoke Modes

The Invoke API can be called in two modes: event mode and request-response mode.

• Event mode queues the payload for an asynchronous invocation.
• Request-response mode synchronously invokes the function with the provided payload and returns a response immediately.

In both cases, the function execution is always performed in a Lambda execution environment, but the payload takes different paths. For more information, see Lambda Execution Environments in this document. You can also use other AWS services that perform invocations on your behalf. Which invoke mode is used depends on which AWS service you are using and how it is configured. For additional information on how other AWS services integrate with Lambda, see Using AWS Lambda with other services.

When Lambda receives a request-response invoke, it is passed to the invoke service directly. If the invoke service is unavailable, callers may temporarily queue the payload client-side to retry the invocation a set number of times. If the invoke service receives the payload, the service then attempts to identify an available execution environment for the request and passes the payload
to that execution environment to complete the invocation. If no existing or appropriate execution environment exists, one is dynamically created in response to the request. While in transit, invoke payloads sent to the invoke service are secured with TLS 1.2+. Traffic within the Lambda service (from the load balancer down) passes through an isolated internal virtual private cloud (VPC), owned by the Lambda service, within the AWS Region to which the request was sent.

Figure 2 – Invocation model for AWS Lambda: request-response

Event invocation mode payloads are always queued for processing before invocation. All payloads are queued for processing in an Amazon Simple Queue Service (Amazon SQS) queue. Queued events are always secured in transit with TLS 1.2+, but they are not currently encrypted at rest. The Amazon SQS queues used by Lambda are managed by the Lambda service and are not visible to you as a customer. Queued events can be stored in a shared queue, but may be migrated or assigned to dedicated queues depending on a number of factors that cannot be directly controlled by customers (for example, rate of invokes, size of events, and so on).

Queued events are retrieved in batches by Lambda's poller fleet. The poller fleet is a group of EC2 instances whose purpose is to process queued event invocations which have not yet been processed. When the poller fleet retrieves a queued event that it needs to process, it does so by passing it to the invoke service, just like a customer would in a request-response mode invoke. If the invocation cannot be performed, the poller fleet temporarily stores the event in memory on the host until it is either able to successfully complete the execution, or until the number of retry attempts has been exceeded. No payload data is ever written to disk on the poller fleet itself. The poller fleet can be tasked across AWS customers, allowing for the shortest time to invocation. For
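Callers choose between these two modes through the Invoke API's InvocationType parameter. A sketch with the AWS SDK for Python (boto3); the function name and payload are hypothetical, and the network call itself is left commented out so the parameter construction stands alone:

```python
import json

def build_invoke_params(function_name, payload, asynchronous=False):
    # "Event" queues the payload for asynchronous processing by Lambda;
    # "RequestResponse" invokes synchronously and returns the result.
    return {
        "FunctionName": function_name,
        "InvocationType": "Event" if asynchronous else "RequestResponse",
        "Payload": json.dumps(payload).encode("utf-8"),
    }

# Hypothetical function name; in a real call the params feed client.invoke.
params = build_invoke_params("my-function", {"order_id": 42}, asynchronous=True)
# import boto3
# response = boto3.client("lambda").invoke(**params)
```

Either way, the invocation runs in a Lambda execution environment; only the path the payload takes differs.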
more information about which services may use the event invocation mode, see Using AWS Lambda with other services.

Lambda Executions

When Lambda executes a function on your behalf, it manages both provisioning and configuring the underlying systems necessary to run your code. This enables your developers to focus on business logic and writing code, not on administering and managing underlying systems.

The Lambda service is split into the control plane and the data plane. Each plane serves a distinct purpose in the service. The control plane provides the management APIs (for example, CreateFunction, UpdateFunctionCode, PublishLayerVersion, and so on) and manages integrations with all AWS services. Communications to Lambda's control plane are protected in transit by TLS. All customer data stored within Lambda's control plane is encrypted at rest through the use of AWS KMS, which is designed to protect it from unauthorized disclosure or tampering.

The data plane is Lambda's Invoke API, which triggers the invocation of Lambda functions. When a Lambda function is invoked, the data plane allocates an execution environment on an AWS Lambda Worker (or simply Worker, a type of Amazon EC2 instance) to that function version, or chooses an existing execution environment that has already been set up for that function version, which it then uses to complete the invocation. For more information, see the Lambda MicroVMs and Workers section of this document.

Lambda Execution Environments

Each invocation is routed by Lambda's invoke service to an execution environment on a Worker that is able to service the request. Other than through the data plane, customers and other users cannot directly initiate inbound/ingress network communications with an execution environment. This helps to ensure that communications to your execution environment are authenticated and authorized.

Execution environments are reserved for a specific function version and
cannot be reused across function versions, functions, or AWS accounts. This means that a single function which has two different versions would result in at least two unique execution environments. Each execution environment may only be used for one concurrent invocation at a time, and environments may be reused across multiple invocations of the same function version for performance reasons. Depending on a number of factors (for example, rate of invocation, function configuration, and so on), one or more execution environments may exist for a given function version. With this approach, Lambda is able to provide function-version-level isolation for its customers.

Lambda does not currently isolate invokes within a function version's execution environment. What this means is that one invoke may leave state that affects the next invoke (for example, files written to /tmp, or data in memory). If you want to ensure that one invoke cannot affect another invoke, Lambda recommends that you create additional, distinct functions. For example, you could create distinct functions for complex parsing operations, which are more error-prone, and reuse functions which do not perform security-sensitive operations. Lambda does not currently limit the number of functions that customers can create. For more information about limits, see the Lambda quotas page.

Execution environments are continuously monitored and managed by Lambda, and they may be created or destroyed for any number of reasons, including but not limited to:

• A new invoke arrives and no suitable execution environment exists
• An internal runtime or Worker software deployment occurs
• A new provisioned concurrency configuration is published
• The lease time on the execution environment or the Worker is approaching or has exceeded its maximum lifetime
• Other internal workload rebalancing processes

Customers can manage the number of pre-provisioned execution environments that exist for a function version by configuring provisioned concurrency on their
function configuration. When configured to do so, Lambda will create and manage the configured number of execution environments, ensuring they always exist. This gives customers greater control over the start-up performance of their serverless applications at any scale. Other than through a provisioned concurrency configuration, customers cannot deterministically control the number of execution environments that are created or managed by Lambda in response to invocations.

Execution Role

Each Lambda function must also be configured with an execution role, which is an IAM role that is assumed by the Lambda service when performing control plane and data plane operations related to the function. The Lambda service assumes this role to fetch temporary security credentials, which are then available as environment variables during a function's invocation. For performance reasons, the Lambda service caches these credentials, and may reuse them across different execution environments which use the same execution role. To ensure adherence to the least-privilege principle, Lambda recommends that each function have its own unique role, and that it be configured with the minimum set of permissions it requires.

The Lambda service may also assume the execution role to perform certain control plane operations, such as those related to creating and configuring elastic network interfaces (ENIs) for VPC functions, sending logs to Amazon CloudWatch, sending traces to AWS X-Ray, or other non-invoke-related operations. Customers can always review and audit these use cases by reviewing audit logs in AWS CloudTrail. For more information on this subject, see the AWS Lambda execution role documentation page.

Lambda MicroVMs and Workers

Lambda creates its execution environments on a fleet of EC2 instances called AWS Lambda Workers. Workers are bare metal EC2 Nitro instances which are launched and managed by Lambda in a separate, isolated AWS
account which is not visible to customers. Workers host one or more hardware-virtualized Micro Virtual Machines (MVMs) created by Firecracker. Firecracker is an open-source Virtual Machine Monitor (VMM) that uses Linux's Kernel-based Virtual Machine (KVM) to create and manage MVMs. It is purpose-built for creating and managing secure, multi-tenant container and function-based services that provide serverless operational models. For more information about Firecracker's security model, see the Firecracker project website.

As a part of the shared responsibility model, Lambda is responsible for maintaining the security configuration, controls, and patching level of the Workers. The Lambda team uses Amazon Inspector to discover known potential security issues, as well as other custom security issue notification mechanisms and pre-disclosure lists, so that customers don't need to manage the underlying security posture of their execution environment.

Figure 3 – Isolation model for AWS Lambda Workers

Workers have a maximum lease lifetime of 14 hours. When a Worker approaches its maximum lease time, no further invocations are routed to it, MVMs are gracefully terminated, and the underlying Worker instance is terminated. Lambda continuously monitors and alarms on lifecycle activities of its fleet.

All data plane communications to Workers are encrypted using the Advanced Encryption Standard with Galois/Counter Mode (AES-GCM). Other than through data plane operations, customers cannot directly interact with a Worker, as it is hosted in a network-isolated Amazon VPC managed by Lambda in Lambda's service accounts.

When a Worker needs to create a new execution environment, it is given time-limited authorization to access customer function artifacts. These artifacts are specifically optimized for Lambda's execution environments and Workers. Function code which is uploaded using the ZIP format is optimized once, and then is stored in an
encrypted format using an AWS managed key and AES-GCM. Functions uploaded to Lambda using the container image format are also optimized. The container image is first downloaded from its original source, optimized into distinct chunks, and then stored as encrypted chunks using an authenticated convergent encryption method, which uses a combination of AES-CTR, AES-GCM, and a SHA-256 MAC. The convergent encryption method allows Lambda to securely deduplicate encrypted chunks. All keys required to decrypt customer data are protected using a customer managed KMS Customer Master Key (CMK). CMK usage by the Lambda service is available to customers in AWS CloudTrail logs for tracking and auditing.

Lambda Isolation Technologies

Lambda uses a variety of open-source and proprietary isolation technologies to protect Workers and execution environments. Each execution environment contains a dedicated copy of the following items:

• The code of the particular function version
• Any AWS Lambda Layers selected for your function version
• The chosen function runtime (for example, Java 11, Node.js 12, Python 3.8, and so on) or the function's custom runtime
• A writeable /tmp directory
• A minimal Linux user space based on Amazon Linux 2

Execution environments are isolated from one another using several container-like technologies built into the Linux kernel, along with AWS proprietary isolation technologies. These technologies include:

• cgroups – Used to constrain the function's access to CPU and memory
• namespaces – Each execution environment runs in a dedicated namespace, with unique group process IDs, user IDs, network interfaces, and other resources managed by the Linux kernel
• seccomp-bpf – Used to limit the system calls (syscalls) that can be used from within the execution environment
• iptables and routing tables – Used to prevent ingress network communications and to isolate network connections between MVMs
• chroot –
Provides scoped access to the underlying filesystem
• Firecracker configuration – Used to rate-limit block device and network device throughput
• Firecracker security features – For more information about Firecracker's current security design, review Firecracker's latest design document

Along with AWS proprietary isolation technologies, these mechanisms provide strong isolation between execution environments.

Storage and State

Execution environments are never reused across different function versions or customers, but a single environment can be reused between invocations of the same function version. This means data and state can persist between invocations. Data and/or state may continue to persist for hours before it is destroyed as part of normal execution environment lifecycle management. For performance reasons, functions can take advantage of this behavior to improve efficiency by keeping and reusing local caches or long-lived connections between invocations. Inside an execution environment, these multiple invocations are handled by a single process, so any process-wide state (such as static state in Java) can be available for future invocations to reuse, if the invocation occurs on a reused execution environment.

Each Lambda execution environment also includes a writeable filesystem, available at /tmp. This storage is not accessible or shared across execution environments. As with the process state, files written to /tmp remain for the lifetime of the execution environment. This allows expensive transfer operations, such as downloading machine learning (ML) models, to be amortized across multiple invocations. Functions that don't want to persist data between invocations should either not write to /tmp, or delete their files from /tmp between invocations. The /tmp directory is backed by an EC2 instance store and is encrypted at rest. Customers that want to persist data to the file system outside of the
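The reuse behavior described above is what makes warm-start caching effective. A sketch of the pattern (the download helper, file name, and payload are hypothetical stand-ins for an expensive transfer such as fetching an ML model from Amazon S3):

```python
import os

CACHE_PATH = "/tmp/model.bin"   # /tmp persists for the execution environment's lifetime
_model = None                   # module-level state survives warm invocations

def _download_model(path):
    # Hypothetical stand-in for an expensive transfer (e.g., from Amazon S3).
    with open(path, "wb") as f:
        f.write(b"model-bytes")

def lambda_handler(event, context):
    global _model
    if _model is None:                      # miss only on a cold start
        if not os.path.exists(CACHE_PATH):  # /tmp may survive across invocations
            _download_model(CACHE_PATH)
        with open(CACHE_PATH, "rb") as f:
            _model = f.read()
    return {"model_size": len(_model)}      # warm invokes reuse the cached copy
```

Functions that must not carry state between invocations should instead avoid module-level variables and clear /tmp, as recommended above.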
execution environment should consider using Lambda's integration with Amazon Elastic File System (Amazon EFS). For more information, see Using Amazon EFS with AWS Lambda.

If customers don't want to persist data or state across invocations, Lambda recommends that they not use the execution context or execution environment to store data or state. If customers want to actively prevent data or state leaking across invocations, Lambda recommends that they create distinct functions for each state. Lambda does not recommend that customers use or store security-sensitive state in the execution environment, as it may be mutated between invocations. We recommend recalculating the state on each invocation instead.

Runtime Maintenance in Lambda

Lambda provides support for multiple programming languages through the use of runtimes, including Java 11, Python 3.8, Go 1.x, Node.js 12, .NET Core 3.1, and others. For a complete list of currently supported runtimes, see AWS Lambda Runtimes. Lambda provides support for these runtimes by continuously scanning for and deploying compatible updates and security patches, and by performing other runtime maintenance activity. This enables customers to focus on just the maintenance and security of any code included in their Function and Layer. The Lambda team uses Amazon Inspector to discover known security issues, as well as other custom security issue notification mechanisms and pre-disclosure lists, to ensure that runtime languages and execution environments remain patched. If any new patches or updates are identified, Lambda tests and deploys the runtime updates without any involvement from customers. For more information about Lambda's compliance program, see the Lambda and Compliance section of this document.

Typically, no action is required to pick up the latest patches for supported Lambda runtimes, but sometimes action might be required to test patches before they are deployed (for example,
known incompatible runtime patches). If any action is required by customers, Lambda contacts them through the Personal Health Dashboard, through the AWS account's email, or through other means, with the specific actions required to be taken.

Customers can use other programming languages in Lambda by implementing a custom runtime. For custom runtimes, maintenance of the runtime becomes the customer's responsibility, including making sure that the custom runtime includes the latest security patches. For more information, see Custom AWS Lambda runtimes in the AWS Lambda Developer Guide.

When upstream runtime language maintainers mark a language version End-of-Life (EOL), Lambda honors this by no longer supporting that runtime language version. When runtime versions are marked as deprecated in Lambda, Lambda stops supporting the creation of new functions, and updates to existing functions, that were authored in the deprecated runtime. To alert customers of upcoming runtime deprecations, Lambda sends out notifications of the upcoming deprecation date and what they can expect. Lambda does not provide security updates, technical support, or hotfixes for deprecated runtimes, and reserves the right to disable invocations of functions configured to run on a deprecated runtime at any time. If customers want to continue to run deprecated or unsupported runtime versions, they can create their own custom AWS Lambda runtime. For details on when runtimes are deprecated, see the AWS Lambda Runtime support policy.

Monitoring and Auditing Lambda Functions

You can monitor and audit Lambda functions with many AWS services and methods, including the following services.

Amazon CloudWatch

Lambda automatically monitors Lambda functions on your behalf. Through Amazon CloudWatch, it reports metrics such as the number of requests, the execution duration per request, and the number of requests resulting in an error. These metrics are exposed at the function level, which you can then leverage to set CloudWatch
alarms. For a list of metrics exposed by Lambda, see Working with AWS Lambda function metrics.

AWS CloudTrail

Using AWS CloudTrail, you can implement governance, compliance, operational auditing, and risk auditing of your entire AWS account, including Lambda. CloudTrail enables you to log, continuously monitor, and retain account activity related to actions across your AWS infrastructure, providing a complete event history of actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. Using CloudTrail, you can optionally encrypt log files using KMS, and can also leverage CloudTrail log file integrity validation for positive assertion.

AWS X-Ray

Using AWS X-Ray, you can analyze and debug production, distributed Lambda-based applications. This enables you to understand the performance of your application and its underlying services, so you can identify and troubleshoot the root cause of performance issues and errors. X-Ray's end-to-end view of requests as they travel through your application shows a map of the application's underlying components, so you can analyze applications during development and in production.

AWS Config

With AWS Config, you can track configuration changes to Lambda functions (including deleted functions), runtime environments, tags, handler name, code size, memory allocation, timeout settings, and concurrency settings, along with Lambda IAM execution role, subnet, and security group associations. This gives you a holistic view of the Lambda function's lifecycle, and enables you to surface that data for potential audit and compliance requirements.

Architecting and Operating Lambda Functions

Now that we have discussed the foundations of the Lambda service, we move on to architecture and operations. For information about standard best practices for serverless applications, see the Serverless Application Lens whitepaper, which defines and explores the pillars of
the AWS Well-Architected Framework in a serverless context:

• Operational Excellence Pillar – The ability to run and monitor systems to deliver business value, and to continually improve supporting processes and procedures
• Security Pillar – The ability to protect information, systems, and assets while delivering business value through risk assessment and mitigation strategies
• Reliability Pillar – The ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues
• Performance Efficiency Pillar – The efficient use of computing resources to meet requirements, and the maintenance of that efficiency as demand changes and technologies evolve

The Serverless Application Lens whitepaper includes topics such as logging, metrics and alarming, throttling and limits, assigning permissions to Lambda functions, and making sensitive data available to Lambda functions.

Lambda and Compliance

As mentioned in The Shared Responsibility Model section of this document, you are responsible for determining which compliance regime applies to your data. After you have determined your compliance regime needs, you can use the various Lambda features to match those controls. You can contact AWS experts (such as solutions architects, domain experts, technical account managers, and other human resources) for assistance. However, AWS cannot advise customers on whether (or which) compliance regimes are applicable to a particular use case.

As of November 2020, Lambda is in scope for SOC 1, SOC 2, and SOC 3 reports, which are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives. In addition, Lambda maintains compliance with PCI DSS and the US Health Insurance Portability and Accountability Act (HIPAA), among other compliance programs. For an up-to-date list of compliance information, see the AWS Services in Scope by Compliance Program page. Because of the sensitive nature of some compliance reports, they cannot be shared publicly. For access to these reports, you can sign in to your AWS console and use AWS Artifact, a no-cost, self-service portal for on-demand access to AWS compliance reports.

Lambda Event Sources

Lambda integrates with more than 140 AWS services, via direct integration and the Amazon EventBridge event bus. The commonly used Lambda event sources are:

• Amazon API Gateway
• Amazon CloudWatch Events
• Amazon CloudWatch Logs
• Amazon DynamoDB Streams
• Amazon EventBridge
• Amazon Kinesis Data Streams
• Amazon S3
• Amazon SNS
• Amazon SQS
• AWS Step Functions

With these event sources, you can:

• Use AWS IAM to manage access to the service and resources securely
• Encrypt your data at rest1; all services encrypt data in transit
• Access the services from your Amazon VPC using VPC endpoints (powered by AWS PrivateLink)
• Use Amazon CloudWatch to collect, report, and alarm on metrics
• Use AWS CloudTrail to log, continuously monitor, and retain account activity related to actions across your AWS infrastructure, providing a complete event history of actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services

Conclusion

AWS Lambda offers a powerful toolkit for building secure and scalable applications. Many of the best practices for security and compliance in Lambda are the same as in all AWS services, but some are particular to Lambda. This whitepaper describes the benefits of Lambda, its suitability for applications, and the Lambda managed runtime environment. It also includes information about monitoring and auditing, and about security and compliance best practices. As you think about your next implementation, consider what you learned about Lambda and how it might improve your next workload solution.

Contributors

Contributors
to this document include:

• Mayank Thakkar, Senior Solutions Architect
• Marc Brooker, Senior Principal Engineer
• Osman Surkatty, Senior Security Engineer

Further Reading

For additional information, see:

• Shared Responsibility Model, which explains how AWS thinks about security in general
• Security best practices in IAM, which covers recommendations for the AWS Identity and Access Management (IAM) service
• Serverless Application Lens, which covers the AWS Well-Architected Framework and identifies key elements to help ensure your workloads are architected according to best practices
• Introduction to AWS Security, which provides a broad introduction to thinking about security in AWS
• Amazon Web Services: Risk and Compliance, which provides an overview of compliance in AWS

Document Revisions

March 2019 – First publication
January 2021 – Republished with significant updates

Notes

1. At the time of publishing, encryption of data at rest was not available for Amazon EventBridge. Continue to monitor the service homepages for updates on these capabilities.

Archived,General,consultant,Best Practices
Serverless_Architectures_with_AWS_Lambda,"This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Serverless Architectures with AWS Lambda
Overview and Best Practices
November 2017

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document, and any use of AWS's products or
services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Introduction: What Is Serverless?
AWS Lambda—the Basics
AWS Lambda—Diving Deeper
Lambda Function Code
Lambda Function Event Sources
Lambda Function Configuration
Serverless Best Practices
Serverless Architecture Best Practices
Serverless Development Best Practices
Sample Serverless Architectures
Conclusion
Contributors

Abstract

Since its introduction at AWS re:Invent in 2014, AWS Lambda has continued to be one of the fastest growing AWS services. With its arrival, a new application architecture paradigm was created, referred to as serverless. AWS now provides a number of different services that allow you to build full application stacks without the need to manage any servers. Use cases like web or mobile backends, real-time data processing, chatbots and virtual assistants, Internet of Things (IoT) backends, and more can all be fully serverless. For the logic layer of a serverless application, you can execute your business logic using AWS Lambda. Developers and organizations are finding that AWS Lambda is enabling much faster development speed and experimentation than is possible when deploying applications in a traditional server-based environment. This whitepaper is meant to provide you with a broad overview of AWS
Amazon Web Services – Serverless Architectures with AWS Lambda Page 1
Introduction
What Is Serverless?
Serverless most often refers to serverless applications. Serverless applications are ones that don't require you to provision or manage any servers. You can focus on your core product and business logic instead of responsibilities like operating system (OS) access control, OS patching, provisioning, right-sizing, scaling, and availability. By building your application on a serverless platform, the platform manages these responsibilities for you.
For a service or platform to be considered serverless, it should provide the following capabilities:
• No server management – You don't have to provision or maintain any servers. There is no software or runtime to install, maintain, or administer.
• Flexible scaling – You can scale your application automatically, or by adjusting its capacity through toggling the units of consumption (for example, throughput, memory) rather than units of individual servers.
• High availability – Serverless applications have built-in availability and fault tolerance. You don't need to architect for these capabilities because the services running the application provide them by default.
• No idle capacity – You don't have to pay for idle capacity. There is no need to pre-provision or over-provision capacity for things like compute and storage. There is no charge when your code isn't running.
The AWS Cloud provides many different services that can be components of a serverless application. These include capabilities for:
• Compute – AWS Lambda1
• APIs – Amazon API Gateway2
• Storage – Amazon Simple Storage Service (Amazon S3)3
• Databases – Amazon DynamoDB4
• Interprocess messaging – Amazon Simple Notification Service (Amazon SNS)5 and Amazon Simple Queue Service (Amazon SQS)6
• Orchestration – AWS Step Functions7 and Amazon CloudWatch Events8
• Analytics – Amazon Kinesis9
This whitepaper will focus on AWS Lambda, the compute layer of your serverless application where your code is executed, and the AWS developer tools and services that enable best practices when building and maintaining serverless applications with Lambda.
AWS Lambda—the Basics
Lambda is a high-scale, provision-free serverless compute offering based on functions. It provides the cloud logic layer for your application. Lambda functions can be triggered by a variety of events that occur on AWS or on supporting third-party services. They enable you to build reactive, event-driven systems. When there are multiple simultaneous events to respond to, Lambda simply runs more copies of the function in parallel. Lambda functions scale precisely with the size of the workload, down to the individual request. Thus, the likelihood of having an idle server or container is extremely low. Architectures that use Lambda functions are designed to reduce wasted capacity.
Lambda can be described as a type of serverless Function-as-a-Service (FaaS). FaaS is one approach to building event-driven computing systems. It relies on functions as the unit of deployment and execution. Serverless FaaS is a type of FaaS where no virtual machines or containers are present in the programming model and where the vendor provides provision-free scalability and built-in reliability. Figure 1 shows the relationship among event-driven computing, FaaS, and serverless FaaS.
Figure 1: The relationship among event-driven computing, FaaS, and serverless FaaS
With Lambda, you can run code for virtually any type of application or backend service. Lambda runs and scales your code with high availability. Each Lambda function you create contains the code you want to execute, the configuration that defines how your code is executed, and, optionally, one or more event sources that detect events and invoke your function as they occur. These elements are covered in more detail in the next section.
An example event source is API Gateway, which can invoke a Lambda function anytime an API method created with API Gateway receives an HTTPS request. Another example is Amazon SNS, which has the ability to invoke a Lambda function anytime a new message is posted to an SNS topic. Many event source options can trigger your Lambda function. For the full list, see this documentation.10
Lambda also provides a RESTful service API, which includes the ability to directly invoke a Lambda function.11 You can use this API to execute your code directly without configuring another event source.
You don't need to write any code to integrate an event source with your Lambda function, manage any of the infrastructure that detects events and delivers them to your function, or manage scaling your Lambda function to match the number of events that are delivered. You can focus on your application logic and configure the event sources that cause your logic to run.
Your Lambda function runs within a (simplified) architecture that looks like the one shown in Figure 2.
Figure 2: Simplified architecture of a running Lambda function
Once you configure an event source for your function, your code is invoked when the event occurs. Your code can execute any business logic, reach out to external web services, integrate with other AWS services, or do anything else your application requires. All of the same capabilities and software design principles that you're used to for your language of choice will apply when using Lambda. Also, because of the inherent decoupling that is enforced in serverless applications through integrating Lambda functions and event sources, it's a natural fit to build microservices using Lambda functions.
With a basic understanding of serverless principles and Lambda, you might be ready to start writing some code. The following resources will help you get started with Lambda immediately:
• Hello World tutorial: http://docs.aws.amazon.com/lambda/latest/dg/get-started-create-function.html12
• Serverless workshops and walkthroughs for building sample applications: https://github.com/awslabs/aws-serverless-workshops13
AWS Lambda—Diving Deeper
The remainder of this whitepaper will help you understand the components and features of Lambda, followed by best practices for various aspects of building and owning serverless applications using Lambda.
Let's begin our deep dive by further expanding and explaining each of the major components of Lambda that we described in the introduction: function code, event sources, and function configuration.
Lambda Function Code
At its core, you use Lambda to execute code. This can be code that you've written in any of the languages supported by Lambda (Java, Node.js, Python, or C# as of this publication), as well as any code or packages you've uploaded alongside the code that you've written. You're free to bring any libraries, artifacts, or compiled native binaries that can execute on top of the runtime environment as part of your function code package. If you want, you can
even execute code you've written in another programming language (PHP, Go, Smalltalk, Ruby, etc), as long as you stage and invoke that code from within one of the supported languages in the AWS Lambda runtime environment (see this tutorial).14
The Lambda runtime environment is based on an Amazon Linux AMI (see current environment details here), so you should compile and test the components you plan to run inside of Lambda within a matching environment.15 To help you perform this type of testing prior to running within Lambda, AWS provides a set of tools called AWS SAM Local to enable local testing of Lambda functions.16 We discuss these tools in the Serverless Development Best Practices section of this whitepaper.
The Function Code Package
The function code package contains all of the assets you want to have available locally upon execution of your code. A package will, at minimum, include the code function you want the Lambda service to execute when your function is invoked. However, it might also contain other assets that your code will reference upon execution: for example, additional files, classes, and libraries that your code will import, binaries that you would like to execute, or configuration files that your code might reference upon invocation. The maximum size of a function code package is 50 MB compressed and 250 MB extracted at the time of this publication. (For the full list of AWS Lambda limits, see this documentation.17)
When you create a Lambda function (through the AWS Management Console or using the CreateFunction API), you can reference the S3 bucket and object key where you've uploaded the package.18 Alternatively, you can upload the code package directly when you create the function; Lambda will then store your code package in an S3 bucket managed by the service. The same options are available when you publish updated code to existing Lambda functions (through the UpdateFunctionCode API).19 As events occur, your code package will be downloaded from the S3 bucket, installed in the Lambda runtime environment, and invoked as needed. This happens on demand, at the scale required by the number of events triggering your function, within an environment managed by Lambda.
The Handler
When a Lambda function is invoked, code execution begins at what is called the handler. The handler is a specific code method (Java, C#) or function (Node.js, Python) that you've created and included in your package. You specify the handler when creating a Lambda function. Each language supported by Lambda has its own requirements for how a function handler can be defined and referenced within the package. The following examples will help you get started with each of the supported languages:
• Java:20 MyOutput output handlerName(MyEvent event, Context context) { ... }
• Node.js:21 exports.handlerName = function(event, context, callback) { ... } (the callback parameter is optional)
• Python:22 def handler_name(event, context): return some_value
• C#:23 myOutput HandlerName(MyEvent event, ILambdaContext context) { ... }
Once the handler is successfully invoked inside your Lambda function, the runtime environment belongs to the code you've written. Your Lambda function is free to execute any logic you see fit, driven by the code you've written that starts in the handler. This means your handler can call other methods and functions within the files and classes you've uploaded. Your code can import third-party libraries that you've uploaded, and install and execute native binaries that you've uploaded (as long as they can run on Amazon Linux).
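The Python signature above can be exercised locally without any AWS infrastructure. As a minimal, self-contained sketch (the function names, the event field name, and the return shape are all illustrative, not an AWS-defined contract), a handler that delegates to an ordinary helper function in the same package:

```python
# handler.py - a minimal, illustrative Python handler.
# The event and context parameters are supplied by Lambda at invocation time.

def make_greeting(name):
    # An ordinary helper; the handler can call any code in the package.
    return "Hello, %s!" % name

def handler_name(event, context):
    # Pull a field out of the event object supplied by the event source.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": make_greeting(name)}
```

Invoked locally, `handler_name({"name": "Lambda"}, None)` returns `{"statusCode": 200, "body": "Hello, Lambda!"}`; this is the same code path Lambda exercises when an event source supplies the arguments.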
It can also interact with other AWS services, or make API requests to web services that it depends on, etc.
The Event Object
When your Lambda function is invoked in one of the supported languages, one of the parameters provided to your handler function is an event object. The event differs in structure and contents depending on which event source created it. The contents of the event parameter include all of the data and metadata your Lambda function needs to drive its logic. For example, an event created by API Gateway will contain details related to the HTTPS request that was made by the API client (for example, path, query string, request body), whereas an event created by Amazon S3 when a new object is created will include details about the bucket and the new object.
The Context Object
Your Lambda function is also provided with a context object. The context object allows your function code to interact with the Lambda execution environment. The contents and structure of the context object vary based on the language runtime your Lambda function is using, but at minimum it will contain:
• AWS RequestId – Used to track specific invocations of a Lambda function (important for error reporting or when contacting AWS Support).
• Remaining time – The amount of time in milliseconds that remains before your function timeout occurs. (Lambda functions can run a maximum of 300 seconds as of this publishing, but you can configure a shorter timeout.)
• Logging – Each language runtime provides the ability to stream log statements to Amazon CloudWatch Logs. The context object contains information about which CloudWatch Logs stream your log statements will be sent to. For more information about how logging is handled in each language runtime, see the following:
• Java24
• Node.js25
• Python26
• C#27
Writing Code for AWS Lambda—Statelessness and Reuse
It's important to understand the central tenet when writing code for Lambda: your code cannot make assumptions about state. This is because Lambda fully manages when a new function container will be created and invoked for the first time. A container could be getting invoked for the first time for a number of reasons. For example, the events triggering your Lambda function are increasing in concurrency beyond the number of containers previously created for your function, an event is triggering your Lambda function for the first time in several minutes, etc. While Lambda is responsible for scaling your function containers up and down to meet actual demand, your code needs to be able to operate accordingly. Although Lambda won't interrupt the processing of a specific invocation that's already in flight, your code doesn't need to account for that level of volatility.
This means that your code cannot make any assumptions that state will be preserved from one invocation to the next. However, each time a function container is created and invoked, it remains active and available for subsequent invocations for at least a few minutes before it is terminated. When subsequent invocations occur on a container that has already been active and invoked at least once before, we say that invocation is running on a warm container. When an invocation occurs for a Lambda function that requires your function code package to be created and invoked for the first time, we say the invocation is experiencing a cold start.
Figure 3: Invocations of warm function containers and cold function containers
Depending on the logic your code is executing, understanding how your code can take advantage of a warm container can result in faster code execution inside of Lambda.
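A common way to take advantage of warm containers is to perform expensive initialization (SDK clients, configuration parsing, connection setup) once at module load, outside the handler, so that warm invocations reuse the result instead of repeating the work. A minimal sketch in plain Python (the variable and field names are illustrative; in a real function the module-level work would typically be creating an SDK client or opening a connection):

```python
import time

# Module-level code runs once per container, at cold start; its results are
# then reused by every subsequent (warm) invocation on the same container.
EXPENSIVE_CONFIG = {"loaded_at": time.time(), "retries": 3}

def handler(event, context):
    # Per-invocation logic only; EXPENSIVE_CONFIG was built at cold start
    # and is shared by all invocations that land on this container.
    return {"retries": EXPENSIVE_CONFIG["retries"],
            "config_age_s": time.time() - EXPENSIVE_CONFIG["loaded_at"]}
```

Because state in `EXPENSIVE_CONFIG` survives only as long as the container does, it must be treated as a cache, never as a source of truth.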
This, in turn, results in quicker responses and lower cost. For more details and examples of how to improve your Lambda function performance by taking advantage of warm containers, see the Best Practices section later in this whitepaper.
Overall, each language that Lambda supports has its own model for packaging source code and possibilities for optimizing it. Visit this page to get started with each of the supported languages.28
Lambda Function Event Sources
Now that you know what goes into the code of a Lambda function, let's look at the event sources, or triggers, that invoke your code. While Lambda provides the Invoke API that enables you to directly invoke your function, you will likely only use it for testing and operational purposes.29 Instead, you can associate your Lambda function with event sources occurring within AWS services that will invoke your function as needed. You don't have to write, scale, or maintain any of the software that integrates the event source with your Lambda function.
Invocation Patterns
There are two models for invoking a Lambda function:
• Push Model – Your Lambda function is invoked every time a particular event occurs within another AWS service (for example, a new object is added to an S3 bucket).
• Pull Model – Lambda polls a data source and invokes your function with any new records that arrive at the data source, batching new records together in a single function invocation (for example, new records in an Amazon Kinesis or Amazon DynamoDB stream).
Also, a Lambda function can be executed synchronously or asynchronously. You choose this using the parameter InvocationType that's provided when invoking a Lambda function. This parameter has three possible values:
• RequestResponse – Execute synchronously.
• Event – Execute asynchronously.
• DryRun – Test that the invocation is permitted for the caller, but don't execute the function.
Each event source dictates how your function can be invoked. The event source is also responsible for crafting its own event parameter, as we discussed earlier. The following tables provide details about how some of the more popular event sources can integrate with your Lambda functions. You can find the full list of supported event sources here.30
Push Model Event Sources
Amazon S3
Invocation Model: Push
Invocation Type: Event
Description: S3 event notifications (such as ObjectCreated and ObjectRemoved) can be configured to invoke a Lambda function as they are published.
integrate with the Lambda Invoke API providing the Event InvocationType in the request header This is a great option if your API clients don’t need any information back from the request and you want the fastest response time possible (This option is great for pushing user interactions on a website or app to a service backend for analysis ) Example Use Cases Web service backends (web application mobile app microservice architectures etc) Legacy service integration (a Lambda function to transform a legacy SOAP backend into a new modern REST API) This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 12 Any other use cases where HTTPS is the appropriat e integration mechanism between application components Amazon SNS Invocation Model Push Invocation Type Event Description Messages that are published to an SNS topic can be delivered as events to a Lambda function Example Use Cases Automated responses to CloudWatch alarms Processing of events from other services (AWS or otherwise) that can natively publish to SNS topics AWS CloudFormation Invocation Model Push Invocation Type RequestResponse Description As part of deploying AWS CloudFormation stacks you can specify a Lambda function as a custom resource to execute any custom commands and provide data back to the ongoing stack creation Example Use Cases Extend AWS CloudFormation capabilities to include AWS service features not yet natively supported by AWS CloudFormation Perform custom validation or reporting at key stages of the stack creation/update/delete process Amazon CloudWatch Events Invocation Model Push This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 13 Invocation Type Event Description Many AWS 
services publish resource state changes to CloudWatch Events Those events can then be filtered and routed to a Lambda function for automated responses Example Use Cases Event driven operations automation (for example take action each time a new EC2 instance is launched notify an appropriate mailing list when AWS Trusted Advisor reports a new status change) Replacement for tasks previously accomplished with cron (CloudWatch Events supports scheduled events) Amazon Alexa Invocation Model Push Invocation Type RequestResponse Description You can write Lambda f unctions that act as the service backend for Amazon Alexa Skills When an Alexa user interacts with your skill Alexa’s Natural Language Understand and Processing capabilities will deliver their interactions to your Lambda functions Example Use Cases An Alexa skill of your own Pull Model Event Source s Amazon DynamoDB Invocation Model Pull Invocation Type Request/Response Description Lambda will poll a DynamoDB stream multiple times per second and invoke your Lambda function with the batch of updates that have been published to the stream since the last batch You can configure the batch size of each invocation This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 14 Example Use Cases Application centric workflows that should be triggered as changes occur in a DynamoDB table (for example a new user registered an order was placed a friend request was accepted etc) Replication of a DynamoDB table to another region (for disaster recover y) or another service (shipping as logs to an S3 bucket for backup or analysis) Amazon Kinesis Streams Invocation Model Pull Invocation Type Request/Response Description Lambda will poll a Kinesis stream once per second for each stream shard and invoke your Lambda function with the next records in the shard You can define the batch 
size for the number of records delivered to your function at a time as well as the number of Lambda function containers executing concurrently (number of stream shards = number of concurrent function containers) Example Use Cases Realtime data processing for big data pipelines Realtime alerting/monitoring of streaming log statements or other application events Lambda Function Configuration After you write and package your Lambda function code on top of choosing which event sources will trigger your function you have various configuration options to set that define how your code is executed within Lambda Function Memory To define the resources allocated to y our executing Lambda function you’re provided with a single dial to increase/decrease function resources: memory/RAM You can allocate 128 MB of RAM up to 15 GB of RAM to your Lambda function Not only will this dictate the amount of memory available to This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 15 your function code during execution but the same dial will also influence the CPU and n etwork resources available to your function Selecting the appropriate m emory allocation is a very important step when optimizing the price and performance of any Lambd a function Please review the best practices later in this whitepaper for more specifics on optimizing performance Versions and Aliases There are times where you might need to reference or revert your Lambda function back to code that was previously deployed Lambda lets you version your AWS Lambda f unctions Each and every Lambda f unction has a default version built in: $LATEST You can address the most recent code that has been uploaded to your Lambda function through the $LATEST version You can ta ke a snapshot of the code that’s currently referred to by $LATEST and create a numbered version through 
the PublishVersion API31 Also when updating your function code thro ugh the UpdateFunctionCode API there is an optional Boolean parameter publish32 By setting publish: true in your request Lambda will create a new Lambda function version incremented from the last published version You can invoke each version of your Lambda function independently at any time Each version has its own Amazon Resource Name (ARN) referenced like this: arn:aws:lambda:[region]:[account] :function:[fn_name] :[version] When calling the Invoke API or creating an event source for your Lambda function you can also specify a specific version of the Lambda function to be executed33 If you don ’t provide a version number or use the ARN that doesn’t contain the version number $LATEST is invoked by default It’s important to know that a Lambda f unction container is specific to a particular version of your function So for example if there are already several function containers deployed and available in the Lambda runtime environment for version 5 of the f unction version 6 of the same function will not be able to execute on top of the existing version 5 containers —a different set of containers will be installed and managed for each function version This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 16 Invoking your Lambda functions by their version number s can be useful during testing and operational activities However we don’t recommend having your Lambda function be triggered by a specific version number for real application traffic Doing so would require you to update all of the triggers and clients invoking your Lambda function to point at a new function version each time you wanted to update your code Lambda aliases should be used here instead A function alias allows you to invoke and point event sources to a specific Lambda function 
version However you can update what version that alias refers to at any time For example your event sources and clients that are invoking version number 5 through the alias live may cut over to version number 6 of your function as soon as you update the live alias to instead point at version number 6 Each alias can be referred to within the ARN similar to when referring to a function version number: arn:aws:lambda:[region]:[account] :function:[fn_name] :[alias] Note : An alias is simply a pointer to a specific version number This means that if you have multiple different aliases pointed to the same Lambda function version at once requests to each alias are executed on top of the same set of installed function containers This is important to understand so that you don’ t mistakenly point multiple aliases at the same function version number if requests for each alias are intended to be processed separately Here are s ome example suggestions for Lambda aliases and how you might use them: • live/prod/active – This could represent the Lambda function version that your production triggers or that clients are integrating with • blue/green – Enable the blue/green deployment pattern through use of aliases • debug – If you’ve created a testing stack to debug your applications it can integrate with an alias like this when you need to perform a deeper analysis This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 17 Creating a good documented strategy for your use of function aliases en able s you to have sophisticated serverless deployment and operations practices IAM Role AWS Identity and Access Management (IAM) provides the capability to create IAM policies that define permissions for interacting with AWS s ervices and APIs34 Policies can be associated with IAM roles Any access key ID and secret access key generate d 
for a particular role is authorized to perform the actions defined in the policies attached to that role For more information about IAM best practices see this documentation 35 In the context of Lambda you assign an IAM role (called an execution role) to each of your Lambda functions The IAM p olicies attached to that role define what AWS s ervice APIs your function code is authorized to interact with There are t wo benefits: • Your source code is n’t required to perform any AWS credential management or rotation to interact with the AWS APIs Simply using the AWS SDKs and the default credential provider result s in your Lambda function automatically using temporary cre dentials associated with the execution role assigned to the function • Your source code is decoupled from its own security posture If a developer attempts to change your Lambda function code to integrate with a service that the function doesn’t have access to that integration will fail due to the IAM role assigned to your function (Unless they have used IAM credentials that are separate from the execution role you should use static code analysis tools to ensure that no AWS credentials are present in your source code) It’s important to assign each of your Lambda functions a specific separate and least privilege IAM role This strategy ensures that each Lambda f unction can evolve independently without increasing the authorization scope of any other Lambda functions Lambda Function Permissions You can define which push model event sources are allowed to invoke a Lambda function through a concept called permissions With permissions you declare a function policy that lists the AWS Resource Names (ARNs) that are allowed to invoke a function This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 18 For pull model event sources (for example Kinesis streams 
and DynamoDB streams) you need to ensure that the appropriate actions are permitted by the IAM execution role assig ned to your Lambda function AWS provides a set of managed IAM roles associated with each of the pull based event sources if you don’t want to manage the permissions required However to ensure least privilege IAM policies you should create your own IAM roles with resource specific policies to permit access to just the intended event source Network Configuration Executing your Lambda function occurs through the use of the Invoke API that is part of the AWS Lambda service API s; so there is no direct inbo und network access to your function to manage However y our function code might need to integrate with external dependencies (internal or publically hosted web services AWS services databases etc) A Lambda function has two broad options for outbound network connectivity: • Default – Your Lambda function communicate s from inside a virtual private cloud (VPC) that is managed by Lambda It can connect to the internet but not to any privately deployed resources running within your own VPCs • VPC – Your Lamb da function communicate s through an Elastic Network Interface (ENI) that is provisioned within the VPC and subnets you choose with in your own account These ENIs can be assigned security groups and traffic will route based on the route tables of the subne ts those ENIs are placed within —just the same as if an EC2 instance were placed in the same subnet If your Lambda function does n’t require connectivity to any privately deployed resources we recommend you select the d efault networking option Choosing the VPC option will require you to manage: • Selecting appropriate subnets to ensure multiple Availability Zones are being used for the purposes of high availability • Allocating the appropriate number of IP a ddresses to each subnet to manage capacity • Implementing a VPC network design that will permit your Lambda functions to have the connectivity 
and security required.

• An increase in Lambda cold start times if your Lambda function invocation patterns require a new ENI to be created just in time (ENI creation can take many seconds today).

However, if your use case requires private connectivity, use the VPC option with Lambda. For deeper guidance if you plan to deploy your Lambda functions within your own VPC, see this documentation [36].

Environment Variables

Software development life cycle (SDLC) best practice dictates that developers separate their code and their config. You can achieve this by using environment variables with Lambda. Environment variables for Lambda functions enable you to dynamically pass data to your function code and libraries without making changes to your code. Environment variables are key-value pairs that you create and modify as part of your function configuration. By default, these variables are encrypted at rest. For any sensitive information that will be stored as a Lambda function environment variable, we recommend you encrypt those values using the AWS Key Management Service (AWS KMS) prior to function creation, storing the encrypted ciphertext as the variable value. Then have your Lambda function decrypt that variable in memory at execution time.

Here are some examples of how you might decide to use environment variables:

• Log settings (FATAL, ERROR, INFO, DEBUG, etc.)

• Dependency and/or database connection strings and credentials

• Feature flags and toggles

Each version of your Lambda function can have its own environment variable values. However, once the values are established for a numbered Lambda function version, they cannot be changed. To make changes to your Lambda function environment variables, you can change them in the $LATEST version and then publish a new version
that contains the new environment variable values. This enables you to always keep track of which environment variable values are associated with a previous version of your function. This is often important during a rollback procedure or when triaging the past state of an application.

Dead Letter Queues

Even in the serverless world, exceptions can still occur. (For example, perhaps you've uploaded new function code that doesn't allow the Lambda event to be parsed successfully, or there is an operational event within AWS that is preventing the function from being invoked.) For asynchronous event sources (the Event invocation type), AWS owns the client software that is responsible for invoking your function, and AWS does not have the ability to synchronously notify you whether each invocation succeeded as it occurs. If an exception occurs when trying to invoke your function in these models, the invocation will be attempted two more times (with back-off between the retries). After the third attempt, the event is either discarded or placed onto a dead letter queue, if you configured one for the function. A dead letter queue is either an SNS topic or SQS queue that you have designated as the destination for all failed invocation events. If a failure event occurs, the use of a dead letter queue allows you to retain just the messages that failed to be processed during the event. Once your function is able to be invoked again, you can target those failed events in the dead letter queue for reprocessing. The mechanism for reprocessing/retrying the function invocation attempts placed onto your dead letter queue is up to you. For more information about dead letter queues, see this tutorial [37]. You should use dead letter queues if it's important to your application
that all invocations of your Lambda function complete eventually, even if execution is delayed.

Timeout

You can designate the maximum amount of time a single function execution is allowed to complete before a timeout is returned. The maximum timeout for a Lambda function is 300 seconds at the time of this publication, which means a single invocation of a Lambda function cannot execute longer than 300 seconds. You should not always set the timeout for a Lambda function to the maximum. There are many cases where an application should fail fast. Because your Lambda function is billed based on execution time in 100 ms increments, avoiding lengthy timeouts for functions can prevent you from being billed while a function is simply waiting to time out (perhaps an external dependency is unavailable, you've accidentally programmed an infinite loop, or another similar scenario).

Also, once execution completes or a timeout occurs for your Lambda function and a response is returned, all execution ceases. This includes any background processes, subprocesses, or asynchronous processes that your Lambda function might have spawned during execution. So you should not rely on background or asynchronous processes for critical activities. Your code should ensure those activities are completed prior to timeout or returning a response from your function.

Serverless Best Practices

Now that we've covered the components of a Lambda-based serverless application, let's cover some recommended best practices. There are many SDLC and server-based architecture best practices that are also true for serverless architectures: eliminate single points of failure, test changes prior to deployment, encrypt sensitive data, etc. However, achieving best practices for serverless architectures can be a different task
because of how different the operating model is. You don't have access to, or concerns about, an operating system or any lower-level components in the infrastructure. Because of this, your focus is solely on your own application code/architecture, the development processes you follow, and the features of the AWS services your application leverages that enable you to follow best practices.

First, we review a set of best practices for designing your serverless architecture according to the AWS Well-Architected Framework. Then we cover some best practices and recommendations for your development process when building serverless applications.

Serverless Architecture Best Practices

The AWS Well-Architected Framework includes strategies to help you compare your workload against our best practices and obtain guidance to produce stable and efficient systems so you can focus on functional requirements [38]. It is based on five pillars: security, reliability, performance efficiency, cost optimization, and operational excellence. Many of the guidelines in the framework apply to serverless applications. However, there are specific implementation steps or patterns that are unique to serverless architectures. In the following sections, we cover a set of recommendations that are serverless-specific for each of the Well-Architected pillars.

Security Best Practices

Designing and implementing security into your applications should always be priority number one; this doesn't change with a serverless architecture. The major difference for securing a serverless application compared to a server-hosted application is obvious: there is no server for you to secure. However, you still need to think about your application's security. There is still a shared responsibility model for serverless security.
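Under that shared model, scoping what each function is permitted to do remains your responsibility even though there is no server to harden. As a hedged illustration (the Region, account ID, and table name below are placeholders invented for this sketch, not values from this paper), a least-privilege execution role policy for a function that only writes items to a single DynamoDB table might look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:PutItem"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable"
    }
  ]
}
```

Granting only the specific actions and resources a function actually uses, rather than wildcard permissions, keeps each role's scope from growing beyond what its function requires.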
With Lambda and serverless architectures, rather than implementing application security through things like antivirus/malware software, file integrity monitoring, intrusion detection/prevention systems, firewalls, etc., you ensure security best practices through writing secure application code, tight access control over source code changes, and following AWS security best practices for each of the services that your Lambda functions integrate with. The following is a brief list of serverless security best practices that should apply to many serverless use cases, although your own specific security and compliance requirements should be well understood and might include more than we describe here.

• One IAM Role per Function – Each and every Lambda function within your AWS account should have a 1:1 relationship with an IAM role. Even if multiple functions begin with exactly the same policy, always decouple your IAM roles so that you can ensure least-privilege policies for the future of your function. For example, if you shared the IAM role of a Lambda function that needed access to an AWS KMS key across multiple Lambda functions, then all of those functions would now have access to the same encryption key.

• Temporary AWS Credentials – You should not have any long-lived AWS credentials included within your Lambda function code or configuration. (This is a great use for static code analysis tools to ensure that it never occurs in your code base!)
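A minimal sketch of that static analysis idea, assuming only that IAM access key IDs follow their documented AKIA-prefixed, 20-character format (the function name and sample strings here are hypothetical, and a dedicated scanning tool would cover far more secret formats):

```python
import re

# Long-lived IAM access key IDs begin with "AKIA" followed by 16
# uppercase alphanumeric characters. The lookarounds prevent matching
# inside a longer alphanumeric run.
ACCESS_KEY_PATTERN = re.compile(r"(?<![A-Z0-9])AKIA[0-9A-Z]{16}(?![A-Z0-9])")

def find_embedded_credentials(source: str) -> list:
    """Return any substrings of `source` that look like AWS access key IDs."""
    return ACCESS_KEY_PATTERN.findall(source)

# A function body that wrongly embeds a (fake, AWS-documented example) credential,
# versus one that relies on the execution role as recommended above.
bad_source = 'client = boto3.client("s3", aws_access_key_id="AKIAIOSFODNN7EXAMPLE")'
clean_source = 'client = boto3.client("s3")  # credentials come from the execution role'

print(find_embedded_credentials(bad_source))    # ['AKIAIOSFODNN7EXAMPLE']
print(find_embedded_credentials(clean_source))  # []
```

A check like this can run as a pre-commit hook or CI step, failing the build whenever a candidate credential appears in the code base.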
For most cases, the IAM execution role is all that's required to integrate with other AWS services. Simply create AWS service clients within your code through the AWS SDK without providing any credentials. The SDK automatically manages the retrieval and rotation of the temporary credentials generated for your role. The following is an example using Java:

AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
Table myTable = new Table(client, "MyTable");

This code snippet is all that's required for the AWS SDK for Java to create an object for interacting with a DynamoDB table that automatically signs its requests to the DynamoDB APIs using the temporary IAM credentials assigned to your function [39].

However, there might be cases where the execution role is not sufficient for the type of access your function requires. This can be the case for some cross-account integrations your Lambda function might perform, or if you have user-specific access control policies through combining Amazon Cognito [40] identity roles and DynamoDB fine-grained access control [41]. For cross-account use cases, your execution role should be granted access to the AssumeRole API within the AWS Security Token Service and integrated to retrieve temporary access credentials [42]. For user-specific access control policies, your function should be provided with the user identity in question and then integrate with the Amazon Cognito API GetCredentialsForIdentity [43]. In this case, it's imperative that you ensure your code appropriately manages these credentials so that you are leveraging the correct credentials for each user associated with that invocation of your Lambda function. It's common for an application to encrypt and store these per-user credentials in a place like
DynamoDB or Amazon ElastiCache as part of user session data so that they can be retrieved with reduced latency and more scalability than regenerating them for subsequent requests from a returning user [44].

• Persisting Secrets – There are cases where you may have long-lived secrets (for example, database credentials, dependency service access keys, encryption keys, etc.) that your Lambda function needs to use. We recommend a few options for the lifecycle of secrets management in your application:

o Lambda Environment Variables with Encryption Helpers [45]
Advantages – Provided directly to your function runtime environment, minimizing the latency and code required to retrieve the secret.
Disadvantages – Environment variables are coupled to a function version. Updating an environment variable requires a new function version (more rigid, but does provide stable version history as well).

o Amazon EC2 Systems Manager Parameter Store [46]
Advantages – Fully decoupled from your Lambda functions to provide maximum flexibility for how secrets and functions relate to each other.
Disadvantages – A request to Parameter Store is required to retrieve a parameter/secret. While not substantial, this does add latency over environment variables, as well as an additional service dependency, and requires writing slightly more code.

• Using Secrets – Secrets should only ever exist in memory and never be logged or written to disk. Write code that manages the rotation of secrets in the event a secret needs to be revoked while your application remains running.

• API Authorization – Using API Gateway as the event source for your Lambda function is unique from the other AWS service event source options in that you have ownership of authentication and authorization of your API clients. API Gateway can perform
much of the heavy lifting by providing things like native AWS SigV4 authentication [47], generated client SDKs [48], and custom authorizers [49]. However, you're still responsible for ensuring that the security posture of your APIs meets the bar you've set. For more information about API security best practices, see this documentation [50].

• VPC Security – If your Lambda function requires access to resources deployed inside a VPC, you should apply network security best practices through use of least-privilege security groups, Lambda function-specific subnets, network ACLs, and route tables that allow traffic coming only from your Lambda functions to reach intended destinations. Keep in mind that these practices and policies impact the way that your Lambda functions connect to their dependencies. Invoking a Lambda function still occurs through event sources and the Invoke API (neither is affected by your VPC configuration).

• Deployment Access Control – A call to the UpdateFunctionCode API is analogous to a code deployment. Moving an alias through the UpdateAlias API to that newly published version is analogous to a code release. Treat access to the Lambda APIs that enable changes to function code and aliases with extreme sensitivity. As such, you should eliminate direct user access to these APIs for any functions (production functions at a minimum) to remove the possibility of human error. Making code changes to a Lambda function should be achieved through automation. With that in mind, the entry point for a deployment to Lambda becomes the place where your continuous integration/continuous delivery (CI/CD) pipeline is initiated. This may be a release branch in a repository, an S3 bucket where a new code package is uploaded that triggers an AWS CodePipeline pipeline, or somewhere else that's
specific to your organization and processes [51]. Wherever it is, it becomes a new place where you should enforce stringent access control mechanisms that fit your team structure and roles.

Reliability Best Practices

Serverless applications can be built to support mission-critical use cases. Just as with any mission-critical application, it's important that you architect with the mindset that Werner Vogels, CTO of Amazon.com, advocates for: "Everything fails all the time." For serverless applications, this could mean introducing logic bugs into your code, failing application dependencies, and other similar application-level issues that you should try to prevent and account for using existing best practices that will still apply to your serverless applications. For infrastructure-level service events, where you are abstracted away from the event, you should understand how you have architected your application to achieve high availability and fault tolerance.

High Availability

High availability is important for production applications. The availability posture of your Lambda function depends on the number of Availability Zones it can be executed in. If your function uses the default network environment, it is automatically available to execute within all of the Availability Zones in that AWS Region. Nothing else is required to configure high availability for your function in the default network environment. If your function is deployed within your own VPC, the subnets (and their respective Availability Zones) define whether your function remains available in the event of an Availability Zone outage. Therefore, it's important that your VPC design includes subnets in multiple Availability Zones. In the event that an Availability Zone outage occurs, it's
important that your remaining subnets continue to have adequate IP addresses to support the number of concurrent functions required. For information on how to calculate the number of IP addresses your functions require, see this documentation [52].

Fault Tolerance

If the application availability you need requires you to take advantage of multiple AWS Regions, you must take this into account up front in your design. It's not a complex exercise to replicate your Lambda function code packages to multiple AWS Regions. What can be complex, like most multi-region application designs, is coordinating a failover decision across all tiers of your application stack. This means you need to understand and orchestrate the shift to another AWS Region, not just for your Lambda functions, but also for your event sources (and dependencies further upstream of your event sources) and persistence layers. In the end, a multi-region architecture is very application-specific. The most important thing to do to make a multi-region design feasible is to account for it in your design up front.

Recovery

Consider how your serverless application should behave in the event that your functions cannot be executed. For use cases where API Gateway is used as the event source, this can be as simple as gracefully handling error messages and providing a viable, if degraded, user experience until your functions can be successfully executed again. For asynchronous use cases, it can be very important to still ensure that no function invocations are lost during the outage period. To ensure that all received events are processed after your function has recovered, you should take advantage of dead letter queues and implement how to process events placed on that queue after recovery occurs.

Performance Efficiency Best
Practices

Before we dive into performance best practices, keep in mind that if your use case can be achieved asynchronously, you might not need to be concerned with the performance of your function (other than to optimize costs). You can leverage one of the event sources that will use the Event invocation type, or use the pull-based invocation model. Those methods alone might allow your application logic to proceed while Lambda continues to process the event separately. If Lambda function execution time is something you want to optimize, the execution duration of your Lambda function will be primarily impacted by three things (in order of simplest to optimize): the resources you allocate in the function configuration, the language runtime you choose, and the code you write.

Choosing the Optimal Memory Size

Lambda provides a single dial to turn up and down the amount of compute resources available to your function: the amount of RAM allocated to your function. The amount of allocated RAM also impacts the amount of CPU time and network bandwidth your function receives. Simply choosing the smallest resource amount that runs your function adequately fast is an anti-pattern. Because Lambda is billed in 100 ms increments, this strategy might not only add latency to your application, it might even be more expensive overall if the added latency outweighs the resource cost savings. We recommend that you test your Lambda function at each of the available resource levels to determine what the optimal level of price/performance is for your application. You'll discover that the performance of your function should improve logarithmically as resource levels are increased. The logic you're executing will define the lower bound for function execution time. There will also be a resource threshold where any additional RAM/CPU/bandwidth available to your function no longer provides any substantial performance gain. However, pricing increases linearly as the resource levels increase in Lambda. Your
tests should find where the logarithmic function bends to choose the optimal configuration for your function. The following graph shows how the ideal memory allocation to an example function can allow for both better cost and lower latency. Here, the additional compute cost per 100 ms for using 512 MB over the lower memory options is outweighed by the amount of latency reduced in the function by allocating more resources. But after 512 MB, the performance gains are diminished for this particular function's logic, so the additional cost per 100 ms now drives the total cost higher. This leaves 512 MB as the optimal choice for minimizing total cost.

Figure 4: Choosing the optimal Lambda function memory size

The memory usage for your function is determined per invocation and can be viewed in CloudWatch Logs [53]. On each invocation, a REPORT: entry is made, as shown below:

REPORT RequestId: 3604209a-e9a3-11e6-939a-754dd98c7be3 Duration: 12.34 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 18 MB

By analyzing the Max Memory Used: field, you can determine if your function needs more memory or if you over-provisioned your function's memory size.

Language Runtime Performance

Choosing a language runtime is obviously dependent on your level of comfort and skills with each of the supported runtimes. But if performance is the driving consideration for your application, the performance characteristics of each language on Lambda are what you might expect in another runtime environment: the compiled languages (Java and .NET) incur the largest initial startup cost for a container's first invocation, but show the best performance for subsequent invocations. The interpreted languages (Node.js and Python) have very fast initial invocation times
compared to the compiled languages, but can't reach the same level of maximum performance as the compiled languages do. If your application use case is both very latency-sensitive and susceptible to incurring the initial invocation cost frequently (very spiky traffic or very infrequent use), we recommend one of the interpreted languages. If your application does not experience large peaks or valleys within its traffic patterns, or does not have user experiences blocked on Lambda function response times, we recommend you choose the language you're already most comfortable with.

Optimizing Your Code

Much of the performance of your Lambda function is dictated by what logic you need your Lambda function to execute and what its dependencies are. We won't cover what all those optimizations could be, because they vary from application to application. But there are some general best practices to optimize your code for Lambda. These are related to taking advantage of container reuse (as described in the previous overview) and minimizing the initial cost of a cold start.

Here are a few examples of how you can improve the performance of your function code when a warm container is invoked:

• After initial execution, store and reference locally any externalized configuration or dependencies that your code retrieves.

• Limit the reinitialization of variables/objects on every invocation (use global/static variables, singletons, etc.).

• Keep alive and reuse connections (HTTP, database, etc.) that were established during a previous invocation.

Finally, you should do the following to limit the amount of time that a cold start takes for your Lambda function:

1. Always use the default network environment unless connectivity to a resource within a VPC via private IP is required. This is
because there are additional cold start scenarios related to the VPC configuration of a Lambda function (related to creating ENIs within your VPC).

2. Choose an interpreted language over a compiled language.

3. Trim your function code package to only its runtime necessities. This reduces the amount of time that it takes for your code package to be downloaded from Amazon S3 ahead of invocation.

Understanding Your Application Performance

To get visibility into the various components of your application architecture, which could include one or more Lambda functions, we recommend that you use AWS X-Ray [54]. X-Ray lets you trace the full lifecycle of an application request through each of its component parts, showing the latency and other metrics of each component separately, as shown in the following figure.

Figure 5: A service map visualized by AWS X-Ray

To learn more about X-Ray, see this documentation [55].

Operational Excellence Best Practices

Creating a serverless application removes many operational burdens that a traditional application brings with it. This doesn't mean you should reduce your focus on operational excellence. It means that you can narrow your operational focus to a smaller number of responsibilities and, hopefully, achieve a higher level of operational excellence.

Logging

Each language runtime for Lambda provides a mechanism for your function to deliver logged statements to CloudWatch Logs. Making adequate use of logs goes without saying and isn't new to Lambda and serverless architectures. Even though it's not considered best practice today, many operational teams depend
on viewing logs as they are generated, on top of the server an application is deployed on. That simply isn't possible with Lambda, because there is no server. You also don't have the ability to "step through" the code of a live, running Lambda function today (although you can do this with AWS SAM Local prior to deployment) [56]. For deployed functions, you depend heavily on the logs you create to inform an investigation of function behavior. Therefore, it's especially important that the logs you do create find the right balance of verbosity to help track/triage issues as they occur, without demanding too much additional compute time to create them. We recommend that you make use of Lambda environment variables to create a LogLevel variable that your function can refer to so that it can determine which log statements to create during runtime. Appropriate use of log levels can ensure that you have the ability to selectively incur the additional compute cost and storage cost only during an operational triage.

Metrics and Monitoring

Lambda, just like other AWS services, provides a number of CloudWatch metrics out of the box. These include metrics related to the number of invocations a function has received, the execution duration of a function, and others. It's best practice to create alarm thresholds (high and low) for each of your Lambda functions on all of the provided metrics through CloudWatch. A major change in how your function is invoked or how long it takes to execute could be your first indication of a problem in your architecture. For any additional metrics that your application needs to gather (for example, application error codes, dependency-specific latency, etc.), you have two options to get those custom metrics stored in CloudWatch or your monitoring solution of choice:

• Create a custom metric and integrate directly with the API required from your Lambda function as it's executing. This has the fewest dependencies and will record the
metric as fast as possible. However, it does require you to spend Lambda execution time and resources integrating with another service dependency. If you follow this path, ensure that your code for capturing metrics is modularized and reusable across your Lambda functions, instead of tightly coupled to a specific Lambda function.

• Capture the metric within your Lambda function code and log it using the provided logging mechanisms in Lambda. Then create a CloudWatch Logs metric filter on the function streams to extract the metric and make it available in CloudWatch. Alternatively, create another Lambda function as a subscription filter on the CloudWatch Logs stream to push filtered log statements to another metrics solution. This path introduces more complexity and is not as near-real-time as the previous solution for capturing metrics. However, it allows your function to more quickly create metrics through logging, rather than making an external service request.

Deployment

Performing a deployment in Lambda is as simple as uploading a new function code package, publishing a new version, and updating your aliases. However, these steps should be only pieces of your deployment process with Lambda. Each deployment process is application-specific. To design a deployment process that avoids negatively disrupting your users or application behavior, you need to understand the relationship between each Lambda function and its event sources and dependencies. Things to consider are:

• Parallel version invocations – Updating an alias to point to a new version of a Lambda function happens asynchronously on the service side. There will be a short period of time during which existing function containers containing the previous source code package will continue to be invoked alongside the new
function version the alias has been updated to. It's important that your application continues to operate as expected during this process. An artifact of this might be that any stack dependencies being decommissioned after a deployment (for example, database tables, a message queue, etc.) not be decommissioned until after you've observed all invocations targeting the new function version.

• Deployment schedule – Performing a Lambda function deployment during a peak traffic time could result in more cold start times than desired. You should always perform your function deployments during a low-traffic period to minimize the immediate impact of the new/cold function containers being provisioned in the Lambda environment.

• Rollback – Lambda provides details about Lambda function versions (for example, created time, incrementing numbers, etc.). However, it doesn't logically track how your application lifecycle has been using those versions. If you need to roll back your Lambda function code, it's important for your processes to roll back to the function version that was previously deployed.

Load Testing

Load test your Lambda function to determine an optimum timeout value. It's important to analyze how long your function runs so that you can better determine any problems with a dependency service that might increase the concurrency of the function beyond what you expect. This is especially important when your Lambda function makes network calls to resources that may not handle Lambda's scaling.

Triage and Debugging

Both logging to enable investigations and using X-Ray to profile applications are useful for operational triage. Additionally, consider creating Lambda function aliases that represent operational activities such as integration testing, performance testing, debugging,
It's common for teams to build out test suites or segmented application stacks that serve an operational purpose. You should build these operational artifacts to also integrate with Lambda functions via aliases. However, keep in mind that aliases don't enforce a wholly separate Lambda function container: an alias like PerfTest that points at function version number N will use the same function containers as all other aliases pointing at version N. You should define appropriate versioning and alias-updating processes to ensure separate containers are invoked where required.

Cost Optimization Best Practices

Because Lambda charges are based on function execution time and the resources allocated, optimizing your costs is a matter of optimizing those two dimensions.

Right Sizing

As covered in Performance Efficiency, it's an anti-pattern to assume that the smallest resource size available to your function will provide the lowest total cost. If your function's resource size is too small, you could pay more, because of a longer execution time, than if more resources were available that allowed your function to complete more quickly. See the section Choosing the Optimal Memory Size for more details.

Distributed and Asynchronous Architectures

You don't need to implement all use cases through a series of blocking/synchronous API requests and responses. If you are able to design your application to be asynchronous, you might find that each decoupled component of your architecture takes less compute time to conduct its work than tightly coupled components that spend CPU cycles awaiting responses to synchronous requests. Many of the Lambda event sources fit well with distributed systems and can be used to integrate your modular and decoupled functions in a more cost-effective manner.
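The Right Sizing trade-off above comes down to simple arithmetic: Lambda bills on allocated memory multiplied by execution time. The sketch below uses assumed duration measurements and an example per-GB-second rate rather than current AWS pricing:

```python
PRICE_PER_GB_SECOND = 0.00001667  # example rate only; check current Lambda pricing

def compute_cost(memory_mb, duration_ms):
    # Cost of one invocation, ignoring the per-request charge and billing
    # granularity for simplicity.
    gb_seconds = (memory_mb / 1024.0) * (duration_ms / 1000.0)
    return gb_seconds * PRICE_PER_GB_SECOND

# Hypothetical measurements for a CPU-bound function: more memory also grants
# more CPU, so the function finishes sooner at the larger size.
small = compute_cost(memory_mb=128, duration_ms=1000)  # 0.125 GB-seconds
large = compute_cost(memory_mb=512, duration_ms=200)   # 0.100 GB-seconds
print(large < small)  # True: the larger size is both faster and cheaper here
```

Measure real durations at several memory sizes before concluding which configuration minimizes your own cost.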
Batch Size

Some Lambda event sources allow you to define the batch size, the number of records that are delivered on each function invocation (for example, Kinesis and DynamoDB). You should test to find the optimal number of records for each batch size so that the polling frequency of each event source is tuned to how quickly your function can complete its task.

Event Source Selection

The variety of event sources available to integrate with Lambda means that you often have a variety of solution options available to meet your requirements. Depending on your use case and requirements (request scale, volume of data, latency required, etc.), there might be a non-trivial difference in the total cost of your architecture based on which AWS services you choose as the components that surround your Lambda function.

Serverless Development Best Practices

Creating applications with Lambda can enable a development pace that you haven't experienced before. The amount of code you need to write for a working and robust serverless application will likely be a small percentage of the code you would need to write for a server-based model. But with the new application delivery model that serverless architectures enable come new dimensions and constructs that your development processes must make decisions about: organizing your code base with Lambda functions in mind, moving code changes from a developer laptop into a production serverless environment, and ensuring code quality through testing even though you can't simulate the Lambda runtime environment or your event sources outside of AWS. The following are some development-centric best practices to help you work through these aspects of owning a serverless application.

Infrastructure as Code – the AWS
Serverless Application Model (AWS SAM)

Representing your infrastructure as code brings many benefits in terms of the auditability, automatability, and repeatability of managing the creation and modification of infrastructure. Even though you don't need to manage any infrastructure when building a serverless application, many components still play a role in the architecture: IAM roles, Lambda functions and their configurations, their event sources, and other dependencies. Representing all of these things natively in AWS CloudFormation would require a large amount of JSON or YAML, much of it almost identical from one serverless application to the next.

The AWS Serverless Application Model (AWS SAM) enables a simpler experience when building serverless applications while keeping the benefits of infrastructure as code. AWS SAM is an open-specification abstraction layer on top of AWS CloudFormation.57 It provides a set of command line utilities that enable you to define a full serverless application stack with only a handful of lines of JSON or YAML, package your Lambda function code together with that infrastructure definition, and then deploy them together to AWS. We recommend using AWS SAM combined with AWS CloudFormation to define and make changes to your serverless application environment.

There is a distinction, however, between changes that occur at the infrastructure/environment level and application code changes occurring within existing Lambda functions. AWS CloudFormation and AWS SAM aren't the only tools required to build a deployment pipeline for your Lambda function code changes. See the CI/CD section of this whitepaper for more recommendations about managing code changes for your Lambda functions.

Local Testing – AWS SAM Local

Along with AWS SAM, AWS SAM Local offers additional command line tools that you can use to test your serverless functions and applications locally before deploying them to AWS.58 AWS SAM Local uses Docker to enable you to quickly
test your developed Lambda functions using popular event sources (for example, Amazon S3, DynamoDB, etc.). You can locally test an API you define in your SAM template before it is created in API Gateway, and you can validate the AWS SAM template that you created. By running these capabilities against Lambda functions still residing on your developer workstation, you can view logs locally, step through your code in a debugger, and quickly iterate changes without having to deploy a new code package to AWS.

Coding and Code Management Best Practices

When developing code for Lambda functions, there are some specific recommendations about how you should write and organize code so that managing many Lambda functions doesn't become a complex task.

Coding Best Practices

Whatever Lambda runtime language you build with, continue to follow the best practices already established for that language. While the environment that surrounds how your code is invoked has changed with Lambda, the language runtime environment is the same as anywhere else; coding standards and best practices still apply. The following recommendations are specific to writing code for Lambda, beyond those general best practices for your language of choice.

Business Logic outside the Handler

Your Lambda function starts execution at the handler function you define within your code package. Within your handler function, you should receive the parameters provided by Lambda, pass those parameters to another function to parse into new variables/objects that are contextualized to your application, and then reach out to the business logic that sits outside the handler function and file. This enables you to create a code package that is as decoupled from the Lambda runtime environment as possible.
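A minimal sketch of this separation (the event shape, function names, and pricing logic are illustrative, not tied to any specific event source):

```python
# handler.py -- thin entry point: translate the Lambda event into
# application-level objects, then delegate to business logic.

def parse_order(event):
    # Parse the event-source-specific shape into contextual variables.
    record = event["Records"][0]
    return {"order_id": record["orderId"], "quantity": int(record["quantity"])}

def handler(event, context):
    order = parse_order(event)
    return process_order(order)

# business_logic.py -- knows nothing about Lambda or its event sources, so it
# can be unit tested and reused outside of Lambda.
UNIT_PRICE = 5

def process_order(order):
    return {"order_id": order["order_id"], "total": order["quantity"] * UNIT_PRICE}

sample_event = {"Records": [{"orderId": "o-1", "quantity": "3"}]}
print(handler(sample_event, None))  # {'order_id': 'o-1', 'total': 15}
```

Only `parse_order` knows the event's shape; `process_order` can be exercised directly in tests or reused behind a different entry point entirely.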
This will greatly benefit your ability to test your code within the context of the objects and functions you've created, and to reuse the business logic you've written in environments outside of Lambda. It is a poor practice, by contrast, for the core business logic of an application to be created within the handler method and to depend directly on Lambda event source objects.

Warm Containers — Caching/Keep-Alive/Reuse

As mentioned earlier, you should write code that takes advantage of a warm function container. This means scoping your variables in a way that they and their contents can be reused on subsequent invocations where possible. This is especially impactful for things like bootstrapping configuration, keeping external dependency connections open, or one-time initialization of large objects that can persist from one invocation to the next.

Control Dependencies

The Lambda execution environment contains many libraries, such as the AWS SDK for the Node.js and Python runtimes. (For a full list, see the Lambda Execution Environment and Available Libraries.59) To enable the latest set of features and security updates, Lambda periodically updates these libraries. These updates can introduce subtle changes to the behavior of your Lambda function. To have full control of the dependencies your function uses, we recommend packaging all of your dependencies with your deployment package.

Trim Dependencies

Lambda function code packages are permitted to be at most 50 MB when compressed and 250 MB when extracted in the runtime environment. If you are including large dependency artifacts with your function code, you may need to
trim the included dependencies down to just the runtime essentials. This also allows your Lambda function code to be downloaded and installed in the runtime environment more quickly for cold starts.

Fail Fast

Configure reasonably short timeouts for any external dependencies, as well as a reasonably short overall Lambda function timeout. Don't allow your function to spin helplessly while waiting for a dependency to respond. Because Lambda is billed based on the duration of your function execution, you don't want to incur higher charges than necessary when your function dependencies are unresponsive.

Handling Exceptions

You might decide to throw and handle exceptions differently depending on your use case for Lambda. If you're placing an API Gateway API in front of a Lambda function, you may decide to throw an exception back to API Gateway, where it can be transformed, based on its contents, into the appropriate HTTP status code and message for the error that occurred. If you're building an asynchronous data processing system, you might decide that some exceptions within your code base should send the invocation to the dead letter queue for reprocessing, while other errors can just be logged and not placed on the dead letter queue. You should decide what your failure behaviors are, and ensure that you are creating and throwing the correct types of exceptions within your code to achieve that behavior. To learn more about handling exceptions, see the following for details about how exceptions are defined for each language runtime environment:

• Java60
• Node.js61
• Python62
• C#63

Code Management Best Practices

Now that the code you've written for your Lambda functions follows best practices, how should you manage that code?
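The exception-handling guidance above amounts to defining distinct exception types and mapping each one to a failure behavior. A minimal sketch (the exception names and the mapping are hypothetical, not an AWS API):

```python
class ValidationError(Exception):
    """Client error: surface to API Gateway to become an HTTP 4xx response."""

class TransientDependencyError(Exception):
    """Retryable failure: let the invocation fail so Lambda can route the
    event to the dead letter queue for reprocessing."""

def failure_behavior(exc):
    # One place that decides what each exception type should mean.
    if isinstance(exc, ValidationError):
        return "http-400"
    if isinstance(exc, TransientDependencyError):
        return "dead-letter-queue"
    return "log-only"

print(failure_behavior(ValidationError("missing field")))     # http-400
print(failure_behavior(TransientDependencyError("timeout")))  # dead-letter-queue
print(failure_behavior(KeyError("oops")))                     # log-only
```

Throwing the correct exception type from your business logic is then all that is needed to get the intended behavior at the integration layer.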
With the development speed that Lambda enables, you might be able to complete code changes at a pace that is unfamiliar to your typical processes. And the reduced amount of code that serverless architectures require means that your Lambda function code represents a large portion of what makes your entire application stack function. So good source code management of your Lambda function code will help ensure secure, efficient, and smooth change management processes.

Code Repository Organization

We recommend that you organize your Lambda function source code to be very fine-grained within your source code management solution of choice. This usually means having a 1:1 relationship between Lambda functions and code repositories or repository projects. (The lexicon differs from one source code management tool to another.) However, if you are following a strategy of creating separate Lambda functions for different lifecycle stages of the same logical function (that is, you have two Lambda functions, one called MyLambdaFunction-DEV and another called MyLambdaFunction-PROD), it makes sense to have those separate Lambda functions share a code base (perhaps deploying from separate release branches).

The main purpose of organizing your code this way is to help ensure that all of the code that contributes to the code package of a particular Lambda function is independently versioned and committed to, and defines its own dependencies and those dependencies' versions. Each Lambda function should be fully decoupled, from a source code perspective, from other Lambda functions, just as it will be when it's deployed. You don't want to go through the process of modernizing an application architecture to be modular and decoupled with Lambda only to be left with a monolithic, tightly coupled code base.
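As an illustration of the 1:1 organization described above (directory and function names are hypothetical), each function lives in its own repository or project and declares its own dependency versions:

```
order-service-ingest/            # repository for one Lambda function
├── src/handler.py
├── requirements.txt             # this function's own pinned dependencies
└── template.yaml                # its SAM/CloudFormation definition

order-service-report/            # a separate repository for another function
├── src/handler.py
├── requirements.txt
└── template.yaml
```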
Release Branches

We recommend that you create a repository or project branching strategy that enables you to correlate Lambda function deployments with incremental commits on a release branch. If you don't have a way to confidently correlate source code changes within your repository with the changes that have been deployed to a live Lambda function, an operational investigation will always begin with trying to identify which version of your code base is the one currently deployed. You should build a CI/CD pipeline (more recommendations on this later) that allows you to correlate Lambda code package creation and deployment times with the code changes that have occurred on your release branch for that Lambda function.

Testing

Time spent developing thorough testing of your code is the best way to ensure quality within a serverless architecture. However, serverless architectures will enforce proper unit testing practices, perhaps more than you're used to. Many developers use unit test tools and frameworks to write tests that cause their code to also test its dependencies. This is a single test that combines a unit test and an integration test, but that doesn't perform either very well. It's important to scope all of your unit test cases down to a single code path within a single logical function, mocking all inputs from upstream and outputs from downstream. This allows you to isolate your test cases to only the code that you own. When writing unit tests, you can and should assume that your dependencies behave properly based on the contracts your code has with them as APIs, libraries, etc. It's similarly important for your integration tests to test the integration of your code with its dependencies in an environment that mimics the live environment.
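The unit-testing guidance above, mocking all inputs from upstream and outputs from downstream, might look like the following sketch (the `order_total` function and its price client are hypothetical):

```python
from unittest import mock

def order_total(order, price_client):
    # Business logic under test: it calls a downstream dependency to look up
    # a unit price, then applies logic that we own.
    unit_price = price_client.get_price(order["sku"])
    return unit_price * order["quantity"]

# Unit test: mock the downstream contract instead of calling a live service,
# isolating the test to a single code path in code that we own.
price_client = mock.Mock()
price_client.get_price.return_value = 4

assert order_total({"sku": "abc", "quantity": 3}, price_client) == 12
price_client.get_price.assert_called_once_with("abc")
print("unit test passed")
```

The test assumes the dependency honors its contract (returning a price for a SKU); verifying that contract against a real, live-like environment is the job of the integration tests.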
Testing whether a developer laptop or build server can integrate with a downstream dependency doesn't fully test whether your code will integrate successfully once in the live environment. This is especially true of the Lambda environment, where your code doesn't have ownership of the events that are going to be delivered by event sources, and you don't have the ability to create the Lambda runtime environment outside of Lambda.

Unit Tests

With the above in mind, we recommend that you unit test your Lambda function code thoroughly, focusing mostly on the business logic outside your handler function. You should also unit test your ability to parse sample/mock objects for the event sources of your function. However, the bulk of your logic and tests should occur with mocked objects and functions that you have full control over within your code base. If you feel that there are important things inside your handler function that need to be unit tested, it can be a sign that you should further encapsulate and externalize the logic in your handler function. Also, to supplement the unit tests you've written, you should create local test automation using AWS SAM Local that can serve as local end-to-end testing of your function code (note that this isn't a replacement for unit testing).

Integration Testing

For integration tests, we recommend that you create lower-lifecycle versions of your Lambda functions where your code packages are deployed and invoked through sample events that your CI/CD pipeline can trigger and inspect the results of. (Implementation depends on your application and architecture.)

Continuous Delivery

We recommend that you programmatically manage all of your serverless deployments through CI/CD pipelines. This is because the speed with which
you will be able to develop new features and push code changes with Lambda will allow you to deploy much more frequently. Manual deployments, combined with a need to deploy more frequently, often result in the manual process becoming a bottleneck and prone to error. AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, AWS SAM, and AWS CodeStar provide a set of capabilities that you can natively combine into a holistic, automated serverless CI/CD pipeline (where the pipeline itself also has no infrastructure that you need to manage). Here is how each of these services plays a role in a well-defined continuous delivery strategy.

AWS CodeCommit – Provides hosted private Git repositories that enable you to host your serverless source code, create a branching strategy that meets our recommendations (including fine-grained access control), and integrate with AWS CodePipeline to trigger a new pipeline execution when a new commit occurs in your release branch.

AWS CodePipeline – Defines the steps in your pipeline. Typically, an AWS CodePipeline pipeline begins where your source code changes arrive. Then you execute a build phase, execute tests against your new build, and perform a deployment and release of your build into the live environment. AWS CodePipeline provides native integration options for each of these phases with other AWS services.

AWS CodeBuild – Can be used for the build stage of your pipeline. Use it to build your code, execute unit tests, and create a new Lambda code package. Then integrate with AWS SAM to push your code package to Amazon S3 and push the new package to Lambda via AWS CloudFormation. After your new version is published to your Lambda function through AWS CodeBuild, you can automate the subsequent steps in your
AWS CodePipeline pipeline by creating deployment-centric Lambda functions. They will own the logic for performing integration tests, updating function aliases, determining whether immediate rollbacks are necessary, and any other application-centric steps that need to occur during a deployment for your application (like cache flushes, notification messages, etc.). Each one of these deployment-centric Lambda functions can be invoked in sequence as a step within your AWS CodePipeline pipeline using the Invoke action. For details on using Lambda within AWS CodePipeline, see the documentation.64 In the end, each application and organization has its own requirements for moving source code from repository to production. The more automation you can introduce into this process, the more agility you can achieve using Lambda.

AWS CodeStar – A unified user interface for creating a serverless application (and other types of applications) that helps you follow these best practices from the beginning. When you create a new project in AWS CodeStar, you automatically begin with a fully implemented and integrated continuous delivery toolchain (using the AWS CodeCommit, AWS CodePipeline, and AWS CodeBuild services mentioned earlier). You will also have a place where you can manage all aspects of the SDLC for your project, including team member management, issue tracking, development, deployment, and operations. For more information, see AWS CodeStar.65

Sample Serverless Architectures

There are a number of sample serverless architectures and instructions for recreating them in your own AWS account. You can find them on GitHub.66

Conclusion

Building serverless applications on AWS relieves you of the responsibilities and constraints that servers introduce. Using AWS Lambda as your serverless logic layer enables you to build faster and focus your development efforts on what differentiates your application. Alongside Lambda, AWS provides additional serverless capabilities so that you can build robust, performant, event-driven, reliable, secure, and cost-effective applications. Understanding the capabilities and recommendations described in this whitepaper can help ensure your success when building serverless applications of your own. To learn more on related topics, see Serverless Computing and Applications.67

Contributors

The following individuals and organizations contributed to this document:

• Andrew Baird, Sr. Solutions Architect, AWS
• George Huang, Sr. Product Marketing Manager, AWS
• Chris Munns, Sr. Developer Advocate, AWS
• Orr Weinstein, Sr. Product Manager, AWS

Notes

1. https://aws.amazon.com/lambda/
2. https://aws.amazon.com/api-gateway/
3. https://aws.amazon.com/s3/
4. https://aws.amazon.com/dynamodb/
5. https://aws.amazon.com/sns/
6. https://aws.amazon.com/sqs/
7. https://aws.amazon.com/step-functions/
8. https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html
9. https://aws.amazon.com/kinesis/
10. http://docs.aws.amazon.com/lambda/latest/dg/invoking-lambda-function.html
11. http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html
12. http://docs.aws.amazon.com/lambda/latest/dg/get-started-create-function.html
13. https://github.com/awslabs/aws-serverless-workshops
14. https://aws.amazon.com/blogs/compute/scripting-languages-for-aws-lambda-running-php-ruby-and-go/
15. http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
16. https://github.com/awslabs/aws-sam-local
17. http://docs.aws.amazon.com/lambda/latest/dg/limits.html
18. http://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html
19. http://docs.aws.amazon.com/lambda/latest/dg/API_UpdateFunctionCode.html
20.
http://docs.aws.amazon.com/lambda/latest/dg/java-programming-model.html
21. http://docs.aws.amazon.com/lambda/latest/dg/programming-model.html
22. http://docs.aws.amazon.com/lambda/latest/dg/python-programming-model.html
23. http://docs.aws.amazon.com/lambda/latest/dg/dotnet-programming-model.html
24. http://docs.aws.amazon.com/lambda/latest/dg/java-logging.html
25. http://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-logging.html
26. http://docs.aws.amazon.com/lambda/latest/dg/python-logging.html
27. http://docs.aws.amazon.com/lambda/latest/dg/dotnet-logging.html
28. http://docs.aws.amazon.com/lambda/latest/dg/programming-model-v2.html
29. http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html
30. http://docs.aws.amazon.com/lambda/latest/dg/invoking-lambda-function.html
31. http://docs.aws.amazon.com/lambda/latest/dg/API_PublishVersion.html
32. http://docs.aws.amazon.com/lambda/latest/dg/API_UpdateFunctionCode.html
33. http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html
34. http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
35. http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
36. http://docs.aws.amazon.com/lambda/latest/dg/vpc.html
37. https://aws.amazon.com/blogs/compute/robust-serverless-application-design-with-aws-lambda-dlq/
38. http://d0.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf
39. https://aws.amazon.com/sdk-for-java/
40. https://aws.amazon.com/cognito/
41. http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/specifying-conditions.html
42. http://docs.aws.amazon.com/cognitoidentity/latest/APIReference/API_GetCredentialsForIdentity.html
44. https://aws.amazon.com/elasticache/
45. http://docs.aws.amazon.com/lambda/latest/dg/env_variables.html#env_encrypt
46. http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html
47.
http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html
48. http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-generate-sdk.html
49. http://docs.aws.amazon.com/apigateway/latest/developerguide/use-custom-authorizer.html
50. http://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-to-api.html
51. https://aws.amazon.com/codepipeline/
52. http://docs.aws.amazon.com/lambda/latest/dg/vpc.html#vpc-setup-guidelines
53. http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatchLogs.html
54. http://docs.aws.amazon.com/lambda/latest/dg/lambda-x-ray.html
55. http://docs.aws.amazon.com/lambda/latest/dg/lambda-x-ray.html
56. https://github.com/awslabs/serverless-application-model
57. https://github.com/awslabs/serverless-application-model
58. https://github.com/awslabs/aws-sam-local
59. http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
60. http://docs.aws.amazon.com/lambda/latest/dg/java-exceptions.html
61. http://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-mode-exceptions.html
62. http://docs.aws.amazon.com/lambda/latest/dg/python-exceptions.html
63. http://docs.aws.amazon.com/lambda/latest/dg/dotnet-exceptions.html
64. http://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html
65. https://aws.amazon.com/codestar/
66. https://github.com/awslabs/aws-serverless-workshops
67. https://aws.amazon.com/serverless/

Serverless Streaming Architectures and Best Practices

June 2018

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers/

© 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for
informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Introduction
  What is serverless computing and why use it?
  What is streaming data?
Who Should Read this Document
Stream Processing Application Scenarios
Serverless Stream Processing
  Three Patterns We'll Cover
  Cost Considerations of Server-Based vs. Serverless Architectures
Example Use Case
  Sensor Data Collection (Best Practices, Cost Estimates)
  Streaming Ingest-Transform-Load (ITL) (Best Practices, Cost Estimates)
  Real-Time Analytics (Best Practices, Cost Estimates)
Customer Case Studies
Conclusion
Contributors
Further Reading
Document Revisions
Appendix A – Detailed Cost Estimates
  Common Cost Assumptions
  Appendix A1 – Sensor Data Collection
  Appendix A2 – Streaming Ingest-Transform-Load (ITL)
  Appendix A3 – Real-Time Analytics
Appendix B – Deploying and Testing Patterns
  Common Tasks
  Appendix B1 – Sensor Data Collection
  Appendix B2 – Streaming Ingest-Transform-Load (ITL)
  Appendix B3 – Real-Time Analytics

Executive Summary

Serverless computing allows you to build and run applications and services without thinking about servers. This means you can focus on writing business logic instead of managing or
provisioning infrastructure. AWS Lambda, our serverless compute offering, allows you to write code in discrete units called functions, which are triggered to run by events. Lambda will automatically run and scale your code in response to these events, such as modifications to Amazon S3 buckets, table updates in Amazon DynamoDB, or HTTP requests from custom applications. AWS Lambda is also pay-per-use, which means you pay only for when your code is running. Using a serverless approach allows you to build applications faster, at a lower cost, and with less ongoing management.

AWS Lambda and serverless architectures are well suited for stream processing workloads, which are often event-driven and have spiky or variable compute requirements. Stream processing architectures are increasingly deployed to process high-volume events and generate insights in near real time. In this whitepaper, we will explore three stream processing patterns using a serverless approach. For each pattern, we'll describe how it applies to a real-world use case, the best practices and considerations for implementation, and cost estimates. Each pattern also includes a template which enables you to easily and quickly deploy these patterns in your AWS accounts.

Introduction

What is serverless computing and why use it?
Serverless computing allows you to build and run applications and services without thinking about servers. Serverless applications don't require you to provision, scale, and manage any servers. You can build them for nearly any type of application or backend service, and everything required to run and scale your application with high availability is handled for you. Building serverless applications means that your developers can focus on their core product instead of worrying about managing and operating servers or runtimes, either in the cloud or on-premises. This reduced overhead lets developers reclaim time and energy that can be spent on developing great products that scale and are reliable. Serverless applications have three main benefits:

• No server management
• Flexible scaling
• Automated high availability

In this paper, we will focus on serverless stream processing applications built with our serverless compute service, AWS Lambda. AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume; there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

What is streaming data?
Streaming data is data that is generated continuously by thousands of data sources, which typically send in the data records simultaneously and in small sizes (on the order of kilobytes). Streaming data includes a wide variety of data, such as log files generated by mobile or web applications, e-commerce purchases, in-game player activity, information from social networks, financial trading floors, or geospatial services, and telemetry from connected devices or instrumentation in data centers. Streaming data can be processed in real time or near real time, providing actionable insights that respond to changing conditions and customer behavior more quickly than ever before. This is in contrast to the traditional database model, where data is stored and then processed or analyzed at a later time, sometimes leading to insights derived from data that is out of date.

Who Should Read this Document

This document is targeted at architects and engineers seeking a deeper understanding of serverless patterns for stream processing, along with best practices and considerations. We assume a working knowledge of stream processing. For an introduction to stream processing, please see the whitepaper Streaming Data Solutions on AWS with Amazon Kinesis.

Stream Processing Application Scenarios

Streaming data processing is beneficial in most scenarios where new, dynamic data is generated on a continual basis. It applies to most big data use cases and can be found across diverse industry verticals, as shown in Table 1. In this whitepaper we focus on the Internet of Things (IoT) industry vertical to provide examples of how to apply stream processing architectures to real-world challenges.

Scenarios/Verticals       | Accelerated Ingest-Transform-Load        | Continuous Metrics Generation                            | Responsive Data Analysis
IoT                       | Sensor device telemetry data ingestion   | Operational metrics and dashboards                       | Device operational intelligence and alerts
Digital Ad Tech/Marketing | Publisher-bidder data aggregation        | Advertising metrics like coverage, yield, and conversion | User engagement with ads, optimized bid/buy engines
Gaming                    | Online data aggregation, e.g., top 10 players | Massively multiplayer online game (MMOG) live dashboard | Leaderboard generation, player-skill match
Consumer Online           | Clickstream analytics                    | Metrics like impressions and page views                  | Recommendation engines, proactive care

Table 1: Streaming Data Scenarios Across Verticals

There are several characteristics of a stream processing or real-time analytics workload:

- It must be reliable enough to handle critical updates, such as replicating the changelog of a database to a replica store like a search index, delivering this data in order and without loss.
- It must support throughput high enough to handle large-volume log or event data streams.
- It must be able to buffer or persist data for long periods of time to support integration with batch systems that may only perform their loads and processing periodically.
- It must provide data with latency low enough for real-time applications.
- It must be possible to operate it as a central system that can scale to carry the full load of the organization and operate with hundreds of applications built by disparate teams, all plugged into the same central nervous system.
- It has to support close integration with stream processing systems.

Serverless Stream Processing

Traditionally, stream processing architectures have used frameworks like Apache Kafka to ingest and store the data and a technology like Apache Spark or Storm to process the data in near real time. These software components are deployed to clusters of servers, along with supporting infrastructure to manage the clusters, such as Apache ZooKeeper. Today, companies taking advantage of the public cloud no longer need to purchase and maintain their own hardware. However, any server-based
architecture still requires them to architect for scalability and reliability and to own the challenges of patching and deploying to those server fleets as their applications evolve. Moreover, they must scale their server fleets to account for peak load and then attempt to scale them down when and where possible to lower costs, all while protecting the experience of end users and the integrity of internal systems.

Serverless compute offerings like AWS Lambda are designed to address these challenges by offering companies a different way of approaching application design: an approach with inherently lower costs and faster time to market that eliminates the complexity of dealing with servers at all levels of the technology stack. Eliminating infrastructure and moving to a pay-per-request model offers dual economic advantages:

- Problems like cold servers and underutilized storage simply cease to exist, along with their cost consequences. It is simply impossible for a serverless compute system like AWS Lambda to be cold, because charges accrue only when useful work is being performed, with millisecond-level billing granularity.
- Fleet management, including the security patching, deployments, and monitoring of servers, disappears, along with the challenge of maintaining the associated tools, processes, and on-call rotations required to support 24x7 server fleet uptime.

Without the burden of server management, companies can direct their scarce IT resources to what matters: their business. With greatly reduced infrastructure costs, more agile and focused teams, and faster time to market, companies that have already adopted serverless approaches are gaining a key advantage over their competitors.

Three Patterns We'll Cover

In this whitepaper we consider three serverless stream processing patterns:

- Sensor Data Collection with Simple Transformation – in this pattern, IoT sensor
devices transmit measurements into an ingest service. As data is ingested, simple transformations can be performed to make the data suitable for downstream processing. Example use case: medical sensor devices generate patient data streams that must be de-identified to mask Protected Health Information (PHI) and Personally Identifiable Information (PII) to meet HIPAA compliance.

- Stream Ingest-Transform-Load (ITL) – this pattern extends the prior pattern to add field-level enrichment from relatively small and static data sets. Example use case: add data fields to medical device sensor data, such as location information or device details looked up from a database. This is also a common pattern used for log data enrichment and transformation.

- Real-Time Analytics – this pattern builds upon the prior patterns and adds the computation of windowed aggregations and anomaly detection. Example use cases: tracking user activity, performing log analytics, fraud detection, recommendation engines, and maintenance alerts in near real time.

In the sections that follow, we provide an example use case of each pattern. We discuss the implementation choices and provide an estimate of the costs. Each sample pattern described in the paper is also available in GitHub (please see Appendix B), so you can quickly and easily deploy them into your AWS account.

Cost Considerations of Server-Based vs. Serverless Architectures

When comparing the cost of a serverless solution against server-based approaches, you must consider several indirect cost elements that are in addition to the server infrastructure costs. These indirect costs include the additional patching, monitoring, and other responsibilities of maintaining server-based applications, which can require additional resources to manage. A number of these cost considerations are listed in Table 2.

Cost Consideration: Patching
Server-based architectures: All servers in the environment must be regularly patched; this includes the operating system (OS) as well as the suite of applications needed for the workload to function.
Serverless architectures: As there are no servers to manage in a serverless approach, these patching tasks are largely absent. You are responsible only for updating your function code when using AWS Lambda.

Cost Consideration: Security stack
Server-based architectures: Servers will often include a security stack, including products for malware protection, log monitoring, host-based firewalls, and IDS, that must be configured and managed.
Serverless architectures: Equivalent firewall and IDS controls are largely taken care of by the AWS service, and service-specific security logs such as CloudTrail are provided for auditing purposes without requiring setup and configuration of agents and log collection mechanisms.

Cost Consideration: Monitoring
Server-based architectures: Server-based monitoring may surface lower-level metrics that must be monitored, correlated, and translated to higher service-level metrics. For example, in a stream ingestion pipeline, individual server metrics like CPU utilization, network utilization, disk I/O, and disk space utilization must all be monitored and correlated to understand the performance of the pipeline.
Serverless architectures: In the serverless approach, each AWS service provides CloudWatch metrics that can be directly used to understand the performance of the pipeline. For example, Kinesis Firehose publishes CloudWatch metrics for IncomingBytes, IncomingRecords, and S3 DataFreshness that let an operator understand more directly the performance of the streaming application.

Cost Consideration: Supporting infrastructure
Server-based architectures: Server-based clusters often need supporting infrastructure, such as cluster management software and centralized log collection, that must also be managed.
Serverless architectures: AWS manages the clusters providing AWS services and removes this burden from the customer. Further, services like AWS Lambda deliver log records to CloudWatch Logs, allowing centralized log collection, processing, and analysis.

Cost Consideration: Software licenses
Server-based architectures: Customers must consider the cost of
licenses and commercial support for software, such as the operating systems, streaming platforms, application servers, and packages for security management and monitoring.
Serverless architectures: The AWS service prices include software licenses, and no additional packages are needed for security management and monitoring of these services.

Table 2: Cost considerations when comparing serverless and server-based architectures

Example Use Case

For this whitepaper we focus on a use case of medical sensor devices that are wired to a patient receiving treatment at a hospital. First, sensor data must be ingested securely at scale. Next, the patient's protected health information (PHI) is de-identified so that it can be processed in an anonymized way. As part of the processing, the data may need to be enriched with additional fields, or the data may be transformed. Finally, the sensor data is analyzed in real time to derive insights, such as detecting anomalies or developing trend patterns. In the sections that follow, we detail this use case with example realizations of the three patterns.

Sensor Data Collection

Wearable devices for health monitoring are a fast-growing IoT use case that allows real-time monitoring of a patient's health. To do this, the sensor data must first be ingested securely and at scale. It must then be de-identified to remove the patient's protected health information (PHI) so that the anonymized data can be processed in other systems downstream. An example solution that meets these requirements is shown in Figure 1.

Figure 1: Overview of Medical Device Use Case – Sensor or Device Data Collection

In Point 1 of Figure 1, one or more medical devices ("IoT sensors") are wired to a patient in a hospital. The devices transmit sensor data to the hospital IoT gateway, which then forwards the data securely, using the MQTT protocol, to the AWS IoT gateway service for processing. A sample record at this point is:
{
  "timestamp": "2018-01-27T05:11:50",
  "device_id": "device8401",
  "patient_id": "patient2605",
  "name": "Eugenia Gottlieb",
  "dob": "08/27/1977",
  "temperature": 100.3,
  "pulse": 108.6,
  "oxygen_percent": 48.4,
  "systolic": 110.2,
  "diastolic": 75.6
}

Next, the data must be de-identified so that it can be processed in an anonymized way. AWS IoT is configured with an IoT rule that selects measurements for a specific set of patients and an IoT action that delivers these selected measurements to a Lambda de-identification function. The Lambda function performs three tasks. First, the function removes the PHI and PII attributes (patient name and patient date of birth) from the records. Second, for the purpose of future cross-reference, the function encrypts and stores the patient name and date of birth attributes in a DynamoDB table, along with the patient ID. Finally, the function sends the de-identified records to a Kinesis Data Firehose delivery stream (Point 2 in Figure 1). A sample record at this point is shown below; note that the date of birth ("dob") and "name" fields are removed:

{
  "timestamp": "2018-01-27T05:11:50",
  "device_id": "device8401",
  "patient_id": "patient2605",
  "temperature": 100.3,
  "pulse": 108.6,
  "oxygen_percent": 48.4,
  "systolic": 110.2,
  "diastolic": 75.6
}

Best Practices

Consider the following best practices when deploying this pattern:

- Separate the Lambda handler entry point from the core logic. This allows you to create a more unit-testable function.
- Take advantage of container re-use to improve the performance of your Lambda function. Make sure any externalized configuration or dependencies that your code retrieves are stored and referenced locally after initial execution. Limit the re-initialization of variables and objects on every invocation; instead, use static initialization/constructors, global/static variables, and singletons.
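Putting the first two practices together, the de-identification function described earlier could be sketched in Python along the following lines. This is a minimal sketch, not the whitepaper's actual code: the table name, delivery stream name, and event shape are illustrative assumptions, and the KMS encryption step the paper describes is noted but elided.

```python
import json

# Fields treated as PHI/PII in the sample records shown above.
PHI_FIELDS = ("name", "dob")

def deidentify(record):
    """Split a sensor record into a de-identified record and its PHI attributes.

    Kept separate from the handler so it can be unit tested in isolation.
    """
    clean = {k: v for k, v in record.items() if k not in PHI_FIELDS}
    phi = {k: record[k] for k in PHI_FIELDS if k in record}
    return clean, phi

def lambda_handler(event, context):
    """Invoked by the IoT rule action with the selected measurement record."""
    import boto3  # deferred import so the pure logic above is testable locally

    clean, phi = deidentify(event)

    # Store the PHI in DynamoDB for future cross-reference, keyed by patient
    # ID. The whitepaper encrypts these attributes with KMS before storage;
    # that call is omitted here for brevity. Table name is illustrative.
    boto3.resource("dynamodb").Table("PatientCrossReference").put_item(
        Item={"patient_id": event["patient_id"], **phi}
    )

    # Forward the de-identified record to the Firehose delivery stream.
    boto3.client("firehose").put_record(
        DeliveryStreamName="deidentified-stream",  # illustrative name
        Record={"Data": (json.dumps(clean) + "\n").encode("utf-8")},
    )
    return {"status": "ok"}
```

Because `deidentify` is a plain function with no AWS dependencies, it can be exercised directly with the sample records above before the handler is ever deployed.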
- When delivering data to S3, tune the Kinesis Data Firehose buffer size and buffering interval to achieve the desired object size. With small objects, the cost of PUT and GET actions on the objects will be higher.
- Use a compression format to further reduce storage and data transfer costs. Kinesis Data Firehose supports GZIP, Snappy, and Zip data compression.

Cost Estimates

The monthly cost of the AWS services, from the ingestion of the sensor data into the AWS IoT gateway, through de-identification in a Lambda function, to storing cross-reference data in a DynamoDB table, can be $117.19 for the Small scenario, $1,132.01 for the Medium scenario, and $4,977.99 for the Large scenario. Please refer to Appendix A1 – Sensor Data Collection for a detailed breakdown of the costs per service.

Streaming Ingest-Transform-Load (ITL)

After sensor data has been ingested, it may need to be enriched or modified with simple transformations, such as field-level substitutions and data enrichment from relatively small and static data sets. In the example use case, it may be important to associate sensor measurements with information on the device model and manufacturer. A solution to meet this need is shown in Figure 2. De-identified records from the prior pattern are ingested into a Kinesis Data Firehose delivery stream (Point 2 in Figure 2).

Figure 2: Overview of Medical Device Use Case – Stream Ingest-Transform-Load (ITL)

The solution introduces a Lambda function that is invoked by Kinesis Data Firehose as records are received by the delivery stream. The Lambda function looks up information about each device from a DynamoDB table and adds these as fields to the measurement records. Firehose then buffers and sends the
modified records to the configured destinations (Point 3 in Figure 2). A copy of the source records is saved in S3 as a backup and for future analysis. A sample record at this point is shown below, with the enriched fields being "manufacturer" and "model":

{
  "timestamp": "2018-01-27T05:11:50",
  "device_id": "device8401",
  "patient_id": "patient2605",
  "temperature": 100.3,
  "pulse": 108.6,
  "oxygen_percent": 48.4,
  "systolic": 110.2,
  "diastolic": 75.6,
  "manufacturer": "Manufacturer 09",
  "model": "Model 02"
}

Using AWS Lambda functions for transformations in this pattern removes the conventional hassle of setting up and maintaining infrastructure. Lambda runs more copies of the function in parallel in response to concurrent transformation invocations and scales precisely with the size of the workload, down to the individual request. As a result, the problem of idle infrastructure and wasted infrastructure cost is eliminated.

Once data is ingested into Firehose, a Lambda function is invoked that performs simple transformations:

- Replace the numeric timestamp with a human-readable string that allows us to query the data based on day, month, or year. For example, the timestamp "1508039751778" is converted to the timestamp string "2017-10-15T03:55:51.778000".
- Enrich the data record by querying a table (stored in DynamoDB) using the device ID to get the corresponding device manufacturer and device model. The function caches the device details in memory to avoid querying DynamoDB frequently and to reduce the number of Read Capacity Units (RCUs) consumed. This design takes advantage of container re-use in AWS Lambda to opportunistically cache data when a container is reused.
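A minimal Python sketch of this transformation function is shown below. The record envelope (recordId/result/data) follows the Kinesis Data Firehose transformation contract; the DynamoDB query is replaced by a stub (`fetch_device_details`) so the module-level caching pattern stands out, and all names and values are illustrative assumptions.

```python
import base64
import json
from datetime import datetime, timezone

# Module-level cache: it survives across invocations when the Lambda
# container is reused, so repeated device lookups hit memory, not DynamoDB.
DEVICE_CACHE = {}

def fetch_device_details(device_id):
    """Stand-in for the DynamoDB lookup-table query (illustrative values)."""
    return {"manufacturer": "Manufacturer 09", "model": "Model 02"}

def transform(record):
    """Convert the epoch-millisecond timestamp and enrich with device details."""
    ts_ms = int(record["timestamp"])
    record["timestamp"] = datetime.fromtimestamp(
        ts_ms / 1000.0, tz=timezone.utc
    ).strftime("%Y-%m-%dT%H:%M:%S.%f")

    device_id = record["device_id"]
    if device_id not in DEVICE_CACHE:  # opportunistic caching on container reuse
        DEVICE_CACHE[device_id] = fetch_device_details(device_id)
    record.update(DEVICE_CACHE[device_id])
    return record

def lambda_handler(event, context):
    """Kinesis Data Firehose invokes this with a batch of buffered records."""
    output = []
    for rec in event["records"]:
        data = json.loads(base64.b64decode(rec["data"]))
        transformed = transform(data)
        output.append({
            "recordId": rec["recordId"],
            "result": "Ok",
            "data": base64.b64encode(
                (json.dumps(transformed) + "\n").encode("utf-8")
            ).decode("utf-8"),
        })
    return {"records": output}
```

Note that the cache is best-effort only: each concurrent container holds its own copy, and a cold start begins empty, which is exactly the trade-off the bullet above describes.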
Best Practices

Consider the following best practices when deploying this pattern:

- When delivering data to S3, tune the Kinesis Data Firehose buffer size and buffer interval to achieve your desired object size. With small objects, the cost of object actions (PUTs and GETs) will be higher.
- Use a compression format to reduce your storage and data transfer costs. Kinesis Data Firehose supports GZIP, Snappy, and Zip data compression.
- When delivering data to Redshift, consider the best practices for loading data into Redshift.
- When transforming data in the Firehose delivery stream using an AWS Lambda function, consider enabling Source Record Backup for the delivery stream. This feature backs up all untransformed records to S3 while delivering transformed records to the destinations. Though this increases your storage size on S3, the backup data can come in handy if you have an error in your transformation Lambda function.
- Firehose buffers records up to the buffer size or 3 MB, whichever is smaller, and invokes the transformation Lambda function with each buffered batch. Thus, the buffer size determines the number of Lambda function invocations and the amount of work sent in each invocation. A small buffer size means a large number of Lambda function invocations and a larger invocation cost. A large buffer size means fewer invocations but more work per invocation; depending on the complexity of the transformation, the function may exceed the 5-minute maximum invocation duration.
- The lookup during the transformation happens at the rate at which records are ingested. Consider using Amazon DynamoDB Accelerator (DAX) to cache results, reducing the latency of lookups and increasing lookup throughput.

Cost Estimates

The monthly cost of the AWS services, from the ingestion of the streaming data into Kinesis Data Firehose, transformations in a Lambda
function and delivery of both the source records and transformed records into S3, can be as little as $18.11 for the Small scenario, $138.16 for the Medium scenario, and $672.06 for the Large scenario. Please refer to Appendix A2 – Streaming Ingest-Transform-Load (ITL) for a detailed breakdown of the costs per service.

Real-Time Analytics

Once streaming data is ingested and enriched, it can be analyzed to derive insights in real time. In the example use case, the de-identified and enriched records need to be analyzed in real time to detect anomalies with any of the devices in the hospital and to notify the appropriate device manufacturers. By assessing the condition of the devices, the manufacturer can start to spot patterns that indicate when a failure is likely to arise. In addition, by monitoring information in near real time, the hospital provider can quickly react to concerns before anything goes wrong. Should an anomaly be detected, the device is immediately pulled out and sent for inspection. The benefits of this approach include a reduction in device downtime, increased device monitoring, lower labor costs, and more efficient maintenance scheduling. This also allows the device manufacturers to start offering hospitals more performance-based maintenance contracts. A solution to meet this requirement is shown in Figure 3.

Figure 3: Overview of Medical Device Use Case – Real-Time Analytics

Copies of the enriched records from
the prior pattern (Point 4 in Figure 3) are delivered to a Kinesis Data Analytics application that detects anomalies in the measurements across all devices for a manufacturer. The anomaly scores (Point 5 in Figure 3) are sent to a Kinesis data stream and processed by a Lambda function. A sample record with the added anomaly score is shown below:

{
  "timestamp": "2018-01-27T05:11:50",
  "device_id": "device8401",
  "patient_id": "patient2605",
  "temperature": 100.3,
  "pulse": 108.6,
  "oxygen_percent": 48.4,
  "systolic": 110.2,
  "diastolic": 75.6,
  "manufacturer": "Manufacturer 09",
  "model": "Model 02",
  "anomaly_score": 0.9845
}

Based on a range or threshold of anomalies detected, the Lambda function sends a notification to the manufacturer with the model number, the device ID, and the set of measurements that caused the anomaly.

The Kinesis Analytics application code consists of a pre-built anomaly detection function, RANDOM_CUT_FOREST. This function is the crux of the anomaly detection. It takes the numeric data in the message (in our case "temperature", "pulse", "oxygen_percent", "systolic", and "diastolic") to determine the anomaly score. To learn more about RANDOM_CUT_FOREST, see the Amazon Kinesis Analytics documentation: https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/sqlrf-random-cut-forest.html

The following is an example of anomaly detection. The diagram shows three clusters and a few anomalies randomly interjected. The red squares show the records that received the highest anomaly score according to the RANDOM_CUT_FOREST function; the blue diamonds represent the remaining records. Note how the highest-scoring records tend to be outside the clusters.

Figure 4: Example of anomaly detection

Below is the Kinesis Analytics application code. The first block of the code stores the output of the anomaly score generated by the RANDOM_CUT_FOREST
function. The second block creates a pump ("STREAM_PUMP") that reads the incoming sensor data stream and calls the pre-built anomaly detection function RANDOM_CUT_FOREST.

-- Create a temporary stream and define a schema
CREATE OR REPLACE STREAM "TEMP_STREAM" (
    "device_id"       VARCHAR(16),
    "manufacturer"    VARCHAR(16),
    "model"           VARCHAR(16),
    "temperature"     INTEGER,
    "pulse"           INTEGER,
    "oxygen_percent"  INTEGER,
    "systolic"        INTEGER,
    "diastolic"       INTEGER,
    "ANOMALY_SCORE"   DOUBLE);

-- Compute an anomaly score for each record in the source stream
-- using Random Cut Forest
CREATE OR REPLACE PUMP "STREAM_PUMP" AS
    INSERT INTO "TEMP_STREAM"
    SELECT STREAM "device_id", "manufacturer", "model", "temperature",
        "pulse", "oxygen_percent", "systolic", "diastolic", "ANOMALY_SCORE"
    FROM TABLE(RANDOM_CUT_FOREST(
        CURSOR(SELECT STREAM "device_id", "manufacturer", "model",
            "temperature", "pulse", "oxygen_percent", "systolic", "diastolic"
        FROM "SOURCE_SQL_STREAM_001")));

The post-processing Lambda function in this use case performs the following simple tasks on the analytics data records with the anomaly scores:

- The Lambda function uses two environment variables, ANOMALY_THRESHOLD_SCORE and SNS_TOPIC_ARN. You set ANOMALY_THRESHOLD_SCORE after running initial testing with controlled data to determine the appropriate value. SNS_TOPIC_ARN is the SNS topic to which the Lambda function delivers the anomaly records.
- The Lambda function iterates through the batch of analytics data records, looking at the anomaly score, and finds the records whose anomaly score exceeds the threshold.
- The Lambda function then publishes the threshold-exceeding records to the SNS topic defined in the environment variable.

In the deployment script referred to in Appendix B3, under Package and Deploy, you set the variable NotificationEmailAddress to the email address that will be used to subscribe to the SNS topic.
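A minimal Python sketch of this post-processing function might look like the following. The environment variable names come from the description above; the event shape assumes the standard Kinesis stream trigger, and the filtering logic is factored out so it can be tested separately.

```python
import base64
import json
import os

def filter_anomalies(records, threshold):
    """Return the records whose anomaly score exceeds the threshold."""
    return [r for r in records if float(r.get("anomaly_score", 0.0)) > threshold]

def lambda_handler(event, context):
    """Triggered by the Kinesis data stream carrying the scored records."""
    threshold = float(os.environ["ANOMALY_THRESHOLD_SCORE"])

    # Decode the batch of Kinesis records delivered to the function.
    records = [
        json.loads(base64.b64decode(r["kinesis"]["data"]))
        for r in event["Records"]
    ]

    anomalies = filter_anomalies(records, threshold)
    if anomalies:
        import boto3  # deferred so the filtering logic is testable locally
        boto3.client("sns").publish(
            TopicArn=os.environ["SNS_TOPIC_ARN"],
            Subject="Device anomaly detected",  # illustrative subject line
            Message=json.dumps(anomalies),
        )
    return {"anomalies": len(anomalies)}
```

Publishing the full matching records, as sketched here, gives the manufacturer the model number, device ID, and the measurements that caused the anomaly in a single notification.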
The sensor data is also stored in S3, making the data available for all sorts of future analysis by different data scientists working on different domains. The streaming sensor data is passed to a Kinesis Firehose delivery stream, where it is buffered and zipped before a PUT operation into S3.

Best Practices

Consider the following best practices when deploying this pattern:

- Set up Amazon CloudWatch alarms using the CloudWatch metrics that Amazon Kinesis Data Analytics provides: input bytes and input records (the number of bytes and records entering the application), output bytes and output records, and MillisBehindLatest (which tracks how far behind the application is in reading from the streaming source).
- Defining the input schema: Adequately test the inferred schema. The discovery process uses only a sample of records on the streaming source to infer a schema. If your streaming source has many record types, there is a possibility that the discovery API missed sampling one or more record types, which can result in a schema that does not accurately reflect the data on the streaming source.
- Connecting to outputs: We recommend that every application have at least two outputs. Use the first destination to insert the results of your SQL queries. Use the second destination to insert the entire error stream and send it to an S3 bucket through an Amazon Kinesis Firehose delivery stream.
- Authoring application code:
  - During development, keep the window size small in your SQL statements so that you can see the results faster. When you deploy the application to your production environment, you can set the window size as appropriate.
  - Instead of a single complex SQL statement, consider breaking it into multiple statements, saving results in intermediate in-application streams at each step. This might help you debug faster.
  - When using tumbling windows, we recommend that you use two windows, one for processing time and one for your logical time (ingest time or event time). For more information, see Timestamps and the ROWTIME Column.
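The window recommendations above hinge on bucketing records into fixed, non-overlapping intervals keyed by a time column. The bucketing itself is easy to illustrate in a few lines of Python; field names are illustrative, and in Kinesis Analytics SQL the equivalent grouping is expressed over ROWTIME (processing time) or your own event-time column.

```python
from collections import defaultdict

def tumbling_windows(records, window_seconds=60):
    """Bucket records into fixed, non-overlapping windows by event time
    and compute a per-window average pulse (an illustrative aggregation).

    Each record is assumed to carry an "event_time" in epoch seconds;
    a record belongs to exactly one window, unlike a sliding window.
    """
    buckets = defaultdict(list)
    for rec in records:
        # Align the record to the start of its window.
        window_start = rec["event_time"] - (rec["event_time"] % window_seconds)
        buckets[window_start].append(rec["pulse"])
    return {
        start: sum(pulses) / len(pulses)
        for start, pulses in sorted(buckets.items())
    }
```

Running the same bucketing once over processing time and once over event time, as the bullet above recommends, makes late-arriving records visible as a divergence between the two sets of windows.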
Cost Estimates

The monthly cost of the AWS services, covering anomaly detection in Kinesis Analytics, reporting of the anomaly scores via the Lambda function to an SNS topic, and storing the anomaly score data in an S3 bucket for future analysis, can be $705.81 for the Small scenario, $817.09 for the Medium scenario, and $1,312.05 for the Large scenario. Please refer to Appendix A3 – Real-Time Analytics for a detailed breakdown of the costs per service.

Customer Case Studies

Customers of different sizes and across different business segments are using a serverless approach for data processing and analytics. Below are some of their stories. To see more serverless case studies and customer talks, go to our AWS Lambda Resources page.

Thomson Reuters is a leading source of information, including one of the world's most trusted news organizations, for the world's businesses and professionals. In 2016, Thomson Reuters decided to build a solution that would enable it to capture, analyze, and visualize analytics data generated by its offerings, providing insights to help product teams continuously improve the user experience. This solution, called Product Insights, ingests and delivers data to a streaming data pipeline using AWS Lambda, Amazon Kinesis Streams, and Amazon Kinesis Data Firehose. The data is then piped into permanent storage or into an Elasticsearch cluster for real-time data analysis. Thomson Reuters can now process up to 25 billion events per month. Read the case study »

iRobot, a leading global consumer robot company, designs and builds robots that empower people to do more both inside and outside the home. iRobot created the home-cleaning robot category with the introduction of its Roomba Vacuuming Robot
in 2002. Today, iRobot reports that connected Roomba vacuums operate in more than 60 countries, with total sales of connected robots projected to reach more than 2 million by the end of 2017. To handle such scale at a global level, iRobot implemented a completely serverless architecture for its mission-critical platform. At the heart of this solution are AWS Lambda, the AWS IoT platform, and Amazon Kinesis. With serverless, iRobot is able to keep the cost of the cloud platform low and manage the solution with fewer than 10 people. Read the case study »

Nextdoor is a free, private social network for neighborhoods. The Systems team at Nextdoor is responsible for managing the data ingestion pipeline, which services 25 billion syslog and tracking events per day. As the data volumes grew, keeping the data ingestion pipeline stable became a full-time endeavor that distracted the team from core responsibilities like developing the product. Rather than continue running a large infrastructure to power data pipelines, Nextdoor decided to implement a serverless ETL built on AWS Lambda. See Nextdoor's 2017 AWS re:Invent talk to learn more about Nextdoor's serverless solution and how you can leverage Nextdoor-scale serverless ETL through their open source project, Bender. Hear the Nextdoor talk »

Conclusion

Serverless computing eliminates the undifferentiated heavy lifting associated with building and managing server infrastructure at all levels of the technology stack, and it introduces a pay-per-request billing model in which there are no costs from idle compute capacity. With data stream processing, you can evolve your applications from traditional batch processing to real-time analytics, which allows you to extract deeper insights into how your business performs. In this whitepaper, we reviewed how, by combining these two powerful concepts, developers can work with a clean application model that helps them deliver
complex data processing applications faster, and organizations pay only for useful work.

To learn more about serverless computing, visit our Serverless Computing and Applications page. You can also see more resources, customer talks, and tutorials on our Serverless Data Processing page.

Further Resources

For more serverless data processing resources, including tutorials, documentation, customer case studies, talks, and more, visit our Serverless Data Processing page. For more resources on serverless and AWS Lambda, please see the AWS Lambda Resources page.

Read related whitepapers about serverless computing and data processing:

- Streaming Data Solutions on AWS with Amazon Kinesis
- Serverless: Changing the Face of Business Economics
- Optimizing Enterprise Economics with Serverless Architectures

Contributors

The following individuals and organizations contributed to this document:

- Akhtar Hossain, Sr. Solutions Architect, Global Life Science, Amazon Web Services
- Maitreya Ranganath, Solutions Architect, Amazon Web Services
- Linda Lian, Product Marketing Manager, Amazon Web Services
- David Nasi, Product Manager, Amazon Web Services

Document Revisions

Date: Month YYYY – Brief description of revisions
Date: Month YYYY – First publication

Appendix A – Detailed Cost Estimates

In this appendix we provide the detailed cost estimates that were summarized in the main text.

Common Cost Assumptions

We estimate the monthly cost of the resources required to implement each pattern for three traffic scenarios:

- Small – peak rate of 50 records/second, average 1 KB per record
- Medium – peak rate of 1,000 records/second, average 1 KB per record
- Large – peak rate of 5,000 records/second, average 1 KB per record

We assume that there are 4 peak hours in a day, when records are ingested at the peak rate for the scenario. In the remaining 20 hours, the rate falls to 20% of the peak rate. This is a simple variable-rate model used to estimate the volume of data ingested monthly.
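The variable-rate model can be written out directly. The sketch below reproduces the daily and monthly record counts that appear in the appendix tables, which list peak rates of 100, 1,000, and 5,000 records per second:

```python
SECONDS_PER_HOUR = 3600
PEAK_HOURS = 4          # hours per day at the full peak rate
OFF_PEAK_HOURS = 20     # remaining hours of the day
OFF_PEAK_FRACTION = 0.2 # off-peak rate is 20% of the peak rate

def daily_records(peak_rate):
    """Records ingested per day under the simple variable-rate model."""
    peak = peak_rate * PEAK_HOURS * SECONDS_PER_HOUR
    off_peak = peak_rate * OFF_PEAK_FRACTION * OFF_PEAK_HOURS * SECONDS_PER_HOUR
    return int(peak + off_peak)

def monthly_records(peak_rate, days=30):
    """Records ingested per 30-day month."""
    return daily_records(peak_rate) * days

# With a 100 records/second peak, daily_records(100) gives 2,880,000 and
# monthly_records(100) gives 86,400,000, matching the "Daily Records" and
# "Monthly Records" rows of Table 3.
```

Interestingly, under this particular model the 4 peak hours and the 20 off-peak hours each contribute exactly half of the daily volume.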
a simple variable-rate model used to estimate the volume of data ingested monthly.

Appendix A1 – Sensor Data Collection

The detailed monthly cost of the Sensor Data Collection pattern is estimated in Table 3 below. The services are configured as follows:

- AWS IoT Gateway Service: connectivity per day is assumed at 25% for the small use case, 50% for the medium use case, and 75% for the large use case
- Kinesis Firehose buffer size is 100 MB
- Kinesis Firehose buffer interval is 5 minutes (300 seconds)

                                        Small          Medium          Large
Peak Rate (messages/sec)                100            1,000           5,000
Record Size (KB)                        1              1               1
Daily Records                           2,880,000      28,800,000      144,000,000
Monthly Records                         86,400,000     864,000,000     4,320,000,000
Monthly Volume (KB)                     86,400,000     864,000,000     4,320,000,000
Monthly Volume (GB)                     82.39746094    823.9746094     4,119.873047
Monthly Volume (TB)                     0.08046627     0.804662704     4.023313522

AWS IoT Costs
Number of Devices                       1              1               1
Connectivity (% of time per day)        25             50              75
Messaging (messages/day)                2,880,000      28,800,000      144,000,000
Rules Engine (number of rules)          1              1               1
Device Shadow                           0              0               0
Device Registry                         0              0               0
Total Cost (AWS IoT Core calculator)    $112.00        $1,123.00       $4,952.00

Amazon Kinesis Firehose Delivery Stream
Record Size Rounded Up to 5 KB          5              5               5
Monthly Volume for Firehose (KB)        432,000,000    4,320,000,000   21,600,000,000
Monthly Volume for Firehose (GB)        411.9873047    4,119.873047    20,599.36523
Firehose Monthly Cost                   $11.94763184   $119.4763184    $597.3815918

Amazon DynamoDB
RCU                                     1              1               1
WCU                                     10             10              10
Size (MB)                               1              1               1
RCU Cost                                $0.0936        $0.0936         $0.0936
WCU Cost                                $4.68          $4.68           $4.68
Size Cost                               $0             $0              $0
DynamoDB Monthly Cost                   $4.7736        $4.7736         $4.7736

AWS Key Management Service (KMS) Cost
Monthly Record Number                   86,400,000     864,000,000     4,320,000,000
Encryption Requests (20,000 free)       86,380,000     863,980,000     4,319,980,000
Encryption Cost                         $259.14        $2,591.94       $12,959.94
KMS Monthly Cost                        $259.14        $2,591.94       $12,959.94

AWS Lambda
Invocations                             59,715         597,149         2,985,745
Duration (ms)                           16,496,242     164,962,419     824,812,095
Memory (MB)                             1,536          1,536           1,536
Memory Duration (GB-seconds)            24,744.35      247,443.63      1,237,218.14
Lambda Monthly Cost                     $0.42          $4.24           $21.22

Estimated Total Monthly Cost            $388.28        $3,843.43       $18,535.32

Table 3: Sensor data collection – details of estimated costs

Appendix A2 – Streaming Ingest Transform Load (ITL)

The detailed monthly cost of the Streaming Ingest Transform Load (ITL) pattern is estimated in Table 4 below. The services are configured as follows:

- Kinesis Firehose buffer size is 100 MB
- Kinesis Firehose buffer interval is 5 minutes (300 seconds)
- Buffered records are stored in S3, compressed using GZIP, assuming a 1/4 compression ratio

                                        Small          Medium          Large
Peak Rate (records/second)              100            1,000           5,000
Record Size (KB)                        1              1               1

Amazon Kinesis Firehose
Monthly Volume (GB) (Note 1)            411.987        4,119.87        20,599.37
Kinesis Monthly Cost                    $11.95         $119.48         $597.38

Amazon S3
Source Record Backup Storage (GB)       21.02          210.22          1,051.08
Transformed Records Storage (GB)        21.02          210.22          1,051.08
PUT API Calls (Note 2)                  17,280         17,280          84,375
S3 Monthly Cost                         $2.47          $23.87          $119.35

AWS Lambda
Invocations                             59,715         597,149         2,985,745
Duration (ms)                           16,496,242     164,962,419     824,812,095
Function Memory (MB)                    1,536          1,536           1,536
Memory Duration (GB-seconds)            24,744.36      247,443.63      1,237,218.14
Lambda Monthly Cost (Note 3)            $0.42          $4.24           $21.22

Amazon DynamoDB
Read Capacity Units (Note 4)            50             50              50
DynamoDB Monthly Cost                   $4.68          $4.68           $4.68

Total Monthly Cost                      $18.11         $138.16         $672.06

Table 4: Streaming Ingest Transform Load (ITL) – details of estimated costs

Notes:
1. Kinesis Firehose rounds up the record size to the nearest 5 KB. In the three scenarios above, each 1 KB record is rounded up to 5 KB when calculating the monthly volume.
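The monthly volumes in these tables follow directly from the variable-rate traffic model in the Common Cost Assumptions, combined with the 5 KB rounding described in Note 1. The arithmetic can be sketched as follows (a minimal, illustrative sketch; the function names are ours, and per-GB service prices are deliberately omitted because they change over time):

```python
import math

# Assumptions from "Common Cost Assumptions": 4 peak hours/day at the peak
# rate, 20 off-peak hours/day at 20% of the peak rate, 30-day month.
PEAK_HOURS, OFF_PEAK_HOURS, OFF_PEAK_FACTOR, DAYS_PER_MONTH = 4, 20, 0.20, 30

def daily_records(peak_rate_per_sec: float) -> float:
    """Records ingested per day under the variable-rate traffic model."""
    peak = peak_rate_per_sec * 3600 * PEAK_HOURS
    off_peak = peak_rate_per_sec * OFF_PEAK_FACTOR * 3600 * OFF_PEAK_HOURS
    return peak + off_peak

def monthly_firehose_billed_kb(peak_rate_per_sec: float, record_kb: float = 1.0) -> float:
    """Monthly Firehose volume: each record is rounded UP to the nearest 5 KB (Note 1)."""
    billed_kb_per_record = math.ceil(record_kb / 5.0) * 5  # a 1 KB record bills as 5 KB
    return daily_records(peak_rate_per_sec) * DAYS_PER_MONTH * billed_kb_per_record

# Small scenario (100 records/second): 2,880,000 records/day,
# 86,400,000 records/month, and 432,000,000 KB/month billed by Firehose,
# matching the Small column of Table 3.
```

The same two functions reproduce the Medium and Large columns when called with peak rates of 1,000 and 5,000 records/second.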
2. S3 PUT API calls were estimated assuming one PUT call per S3 object created by the Firehose delivery stream. At low record rates, the number of S3 objects is determined by the Firehose buffer interval (5 minutes). At high record rates, the number of S3 objects is determined by the Firehose buffer size (100 MB).
3. The AWS Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month. The monthly cost estimated above is before the free tier is applied.
4. The DynamoDB Read Capacity Units (RCU) estimated above were the result of caching lookups in memory and taking advantage of container reuse, which reduces the number of RCU required on the table.

Appendix A3 – Real-Time Analytics

The detailed monthly cost of the Real-Time Analytics pattern is estimated in Table 5 below.

                                        Small          Medium          Large
Peak Rate (messages/sec)                100            1,000           5,000
Record Size (KB)                        1              1               1
Daily Records                           2,880,000      28,800,000      144,000,000
Monthly Records                         86,400,000     864,000,000     4,320,000,000
Monthly Volume (KB)                     86,400,000     864,000,000     4,320,000,000
Monthly Volume (GB)                     82.39746094    823.9746094     4,119.873047
Monthly Volume (TB)                     0.08046627     0.804662704     4.023313522

Amazon Kinesis Analytics
Peak Hours in a Day (hrs)               4              4               4
Average Hours in a Day (hrs)            20             20              20
Kinesis Processing Units (KPU)/hr, Peak 2              2               2
Kinesis Processing Units (KPU)/hr, Avg  1              1               1
Kinesis Analytics Monthly Cost          $692.40        $692.40         $692.40

Amazon Kinesis Firehose Delivery Stream
Record Size Rounded Up to 5 KB          5              5               5
Monthly Volume for Firehose (KB)        432,000,000    4,320,000,000   21,600,000,000
Monthly Volume for Firehose (GB)        411.9873047    4,119.873047    20,599.36523
Kinesis Firehose Monthly Cost           $11.94763184   $119.4763184    $597.3815918

Amazon S3
S3 PUTs per Month, Based on Size Only   84,375         84,375          84,375
S3 PUTs per Month, Based on Time Only   8,640          8,640           8,640
Expected S3 PUTs (max of size & time)   8,640          8,640           8,640
Total PUTs (source backup + analytics)  17,280         17,280          17,280
Analytics Data Compressed (GB)          21.02150444    21.02           21.02
Source Data Compressed (GB)             21.02          21.02           21.02
Source Record Backup Cost               $0.48346       $0.48346        $0.48346
PUTs Cost                               $0.0864        $0.0864         $0.0864
Analytics Data Records Cost             $0.4834946     $0.48346        $0.48346
S3 Monthly Cost                         $1.0534        $1.0533         $1.0533

AWS Lambda
Invocations                             59,715         597,149         2,985,745
Duration (ms)                           16,496,242     164,962,419     824,812,095
Memory (MB)                             1,536          1,536           1,536
Memory Duration (GB-seconds)            24,744.35      247,443.63      1,237,218.14
Lambda Monthly Cost                     $0.42          $4.24           $21.22

Estimated Total Monthly Cost            $705.82        $817.17         $1,312.05

Table 5: Real-time analytics – details of estimated costs

Appendix B – Deploying and Testing Patterns

Common Tasks

Implementation details of the three patterns are described in the following sections. Each pattern can be deployed, run, and tested independently of the other patterns. To deploy each pattern, we provide links to an AWS Serverless Application Model (AWS SAM) template that can be deployed to any AWS Region. AWS SAM extends AWS CloudFormation to provide a simplified syntax for defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application. The solutions for the three patterns can be downloaded from the public GitHub repositories below:

https://github.com/aws-samples/aws-serverless-stream-ingest-transform-load
https://github.com/aws-samples/aws-serverless-realtime-analytics
https://github.com/awslabs/aws-serverless-sensor-data-collection

Create or Identify an S3 Bucket for Artifacts

To use the AWS Serverless Application Model (SAM), you need an S3 bucket where your code and template artifacts are uploaded. If you already have a suitable bucket in your AWS account, you can simply note the S3 bucket name and skip this step. If
you instead choose to create a new bucket, follow the steps below:

1. Log into the S3 console.
2. Choose Create Bucket and type a bucket name. Ensure that the name is globally unique – we suggest a name like stream-artifacts. Choose the AWS Region where you want to deploy the pattern.
3. Choose Next on the following pages to accept the defaults. On the last page, choose Create Bucket to create the bucket. Note the name of the bucket, as you'll need it to deploy the three patterns below.

Create an Amazon Cognito User for Kinesis Data Generator

To simulate hospital devices to test the Streaming Ingest Transform Load (ITL) and Real-Time Analytics patterns, you will use the Amazon Kinesis Data Generator (KDG) tool. You can learn more about the KDG tool in this blog post. You can access the Amazon Kinesis Data Generator here. Click the Help menu and follow the instructions to create a Cognito username and password that you will use to log into the KDG.

Appendix B1 – Sensor Data Collection

This section describes how you can deploy the use case into your AWS account, and then run and test it.

Review SAM Template

Review the Serverless Application Model (SAM) template in the file 'SAM-For-SesorDataCollection.yaml' by opening the file in an editor of your choice. You can use Notepad++, which renders the file nicely. This template creates the following resources in your AWS account:

- An S3 bucket that is used to store the de-identified records
- A Firehose delivery stream and associated IAM role, used to buffer and collect the de-identified records, compressed and stored in the S3 bucket
- An AWS Lambda function that performs the de-identification of the incoming messages by removing the PHI/PII data. The function also stores the PHI/PII data in DynamoDB, along with the PatientID for cross-reference. The PHI/PII data is encrypted using AWS KMS keys.
- An AWS Lambda
function that performs hospital device simulation for the use case. The Lambda function generates simulated sensor data and publishes it to an IoT MQTT topic.
- A DynamoDB table that stores encrypted cross-reference data: Patient ID, Timestamp, Patient Name, and Patient Date of Birth

Package and Deploy

Follow these steps to package and deploy the Sensor Data Collection scenario:

1. Clone and download the files from the GitHub folder here to a folder on your local machine. On your local machine, make sure you have the following files:
   1.1. DeIdentification.zip
   1.2. PublishIoTData.zip
   1.3. SAM-For-SesorDataCollection.yaml
   1.4. deployer-sensordatacollection.sh
2. Create an S3 deployment bucket in the AWS Region where you intend to deploy the solution. Note down the S3 bucket name; you will need it later.
3. From your local machine, upload the following Lambda code zip files into the S3 deployment bucket you just created in Step 2:
   3.1. DeIdentification.zip
   3.2. PublishIoTData.zip
4. In the AWS Management Console, launch an EC2 Linux instance that will be used to run the CloudFormation template. Launch an instance of type Amazon Linux AMI 2018.03.0 (HVM), SSD Volume Type (t2.micro) in the AWS Region where you want to deploy the solution. Make sure you enable SSH access to the instance. For details on how to launch an EC2 instance and enable SSH access, see https://aws.amazon.com/ec2/getting-started/
5. On your local machine, open the deployer-sensordatacollection.sh file in a text editor and update the three variables indicated as PLACE_HOLDER: S3ProcessedDataOutputBucket (the S3 bucket name where the processed output data will be stored), LamdaCodeUriBucket (the S3 bucket name you created in Step 2 and uploaded the Lambda code files to), and the environment variable REGION (the AWS Region where you intend to deploy the solution). Save the deployer-sensordatacollection.sh file.
6. Once the EC2
instance you just launched is in the running state, use SSH to log into the EC2 Linux box. Create a folder called samdeploy under /home/ec2-user/. Upload the following files into the folder /home/ec2-user/samdeploy:
   6.1. SAM-For-SesorDataCollection.yaml
   6.2. deployer-sensordatacollection.sh
7. On the EC2 instance, change directory to /home/ec2-user/samdeploy. Next you will run two CloudFormation CLI commands, called package and deploy. Both steps are in a single script file, deployer-sensordatacollection.sh. Review the script file. You can now package and deploy the SAM template by running the following command at the command prompt:

$ sh ./deployer-sensordatacollection.sh

View Stack Details

You can view the progress of the stack creation by logging into the CloudFormation console. Ensure you choose the AWS Region where you deployed the stack. Locate the stack named SensorDataCollectionStack in the list of stacks, choose the Events tab, and refresh the page to see the progress of resource creation. The stack creation takes approximately 3-5 minutes. The stack's state changes to CREATE_COMPLETE once all resources are successfully created.

Test the Pipeline

The SensorDataCollectionStack includes an IoT device simulator Lambda function called PublishToIoT. The Lambda function is triggered by a CloudWatch Events rule, which invokes the function on a schedule of every 5 minutes. The Lambda function generates simulated sensor device messages matching the pattern discussed earlier and publishes them to the MQTT topic. The function takes a JSON string as input, called the SimulatorConfig, to set the number of messages to generate per invocation. In our example we have set 10 messages per invocation of the Lambda function; the input parameter is set to the JSON string {"NumberOfMsgs": "10"}. The solution will start immediately after the stack has
deployed successfully. You will observe the following:

1. The CloudWatch Events rule triggers every 5 minutes to invoke the device simulator Lambda function. The Lambda function is configured to generate, by default, 10 sensor data messages per invocation and publish them to the IoT topic "LifeSupportDevice/Sensor".
2. The processed data (without the PHI/PII) appears in the S3 processed data bucket.
3. In the DynamoDB console, you will see the cross-reference data, composed of the PatientID, PatientName, and PatientDOB, in the table PatientReferenceTable.

To stop testing the pattern, simply go to the CloudWatch console and disable the Events rule called SensorDataCollectionStack-IoTDeviceSimmulatorFunct-XXXXXXX.

NOTE: At the time of writing this whitepaper, the AWS Solutions team has created a robust IoT Device Simulator to help customers more easily test device integration and IoT backend services. This solution provides a web-based graphical user interface (GUI) console that enables customers to create and simulate hundreds of virtual connected devices without having to configure and manage physical devices or develop time-consuming scripts. More details can be found at https://aws.amazon.com/answers/iot/iot-device-simulator/. In our simple pattern, however, you will use the IoT device simulator Lambda function that is invoked by the CloudWatch Events rule. By default, the rule is scheduled to trigger every 5 minutes.

Cleaning up Resources

Once you have tested this pattern, you can delete and clean up the resources created so that you are not charged for them:

1. On the CloudWatch console, in the AWS Region where you deployed the pattern, under Events / Rules, disable the rule SensorDataCollectionStack-IoTDeviceSimmulatorFunct-XXXXXXX.
2. On the S3 console, choose the output S3 processed data bucket and choose Empty Bucket.
3. On the CloudFormation console, choose the
SensorDataCollectionStack stack and choose Delete Stack.
4. Finally, on the EC2 console, terminate the EC2 Linux instance you created to run the CloudFormation template.

Appendix B2 – Streaming Ingest Transform Load (ITL)

In this section, we'll describe how you can deploy the pattern in your AWS account, test the transformation function, and monitor the performance of the pipeline.

Review SAM Template

Review the Serverless Application Model (SAM) template in the file 'streaming_ingest_transform_load.template'. This template creates the following resources in your AWS account:

- An S3 bucket that is used to store the transformed records and the source records from Kinesis Firehose
- A Firehose delivery stream and associated IAM role, used to ingest records
- An AWS Lambda function that performs the transformation and enrichment described above
- A DynamoDB table that stores device details that are looked up by the transformation function
- An AWS Lambda function that inserts sample device detail records into the DynamoDB table. This function is invoked once, as a custom CloudFormation resource, to populate the table when the stack is created.
- A CloudWatch dashboard that makes it easy to monitor the processing pipeline

Package and Deploy

In this step, you'll use the CloudFormation package command to upload local artifacts to the artifacts S3 bucket you chose or created in the previous step. This command also returns a copy of the SAM template after replacing references to local artifacts with the S3 locations where the package command uploaded your artifacts. After this, you will use the CloudFormation deploy command to create the stack and associated resources. Both steps are included in a single script, deployer.sh, in the GitHub repository. Before executing this script, you need to set the artifact S3 bucket name and Region in the script. Edit the script in
any text editor and replace PLACE_HOLDER with the name of the S3 bucket and Region from the previous section, then save the file. You can package and deploy the SAM template by running the following command:

$ sh ./deployer.sh

View Stack Details

You can view the progress of the stack creation by logging into the CloudFormation console. Ensure you choose the AWS Region where you deployed the stack. Locate the stack named StreamingITL in the list of stacks, choose the Events tab, and refresh the page to see the progress of resource creation. The stack creation takes approximately 3-5 minutes. The stack's state changes to CREATE_COMPLETE once all resources are successfully created.

Test the Pipeline

Follow the steps below to test the pipeline:

1. Log into the CloudFormation console and locate the stack for the Kinesis Data Generator Cognito user you created in Create an Amazon Cognito User for Kinesis Data Generator above.
2. Choose the Outputs tab and click the value for the key KinesisDataGeneratorUrl.
3. Log in with the username and password you used when creating the Cognito user CloudFormation stack earlier.
4. In the Kinesis Data Generator, choose the Region where you created the serverless application resources, then choose the IngestStream delivery stream from the drop-down.
5. Set Records per second to 100 to test the first traffic scenario.
6. Set the Record template as follows to generate test data:

{
  "timestamp": {{date.now("x")}},
  "device_id": "device{{helpers.replaceSymbolWithNumber("####")}}",
  "patient_id": "patient{{helpers.replaceSymbolWithNumber("####")}}",
  "temperature": {{random.number({"min":96,"max":104})}},
  "pulse": {{random.number({"min":60,"max":120})}},
  "oxygen_percent": {{random.number(100)}},
  "systolic": {{random.number({"min":40,"max":120})}},
  "diastolic": {{random.number({"min":40,"max":120})}},
  "text": "{{lorem.sentence(1, 40)}}"
}

We are using a text field in the template to ensure that our test records are approximately 1 KB in size, as required by the scenarios.

7. Choose Send Data to send generated data at the chosen rate to the Kinesis Firehose stream.

Monitor the Pipeline

Follow the steps below to monitor the performance of the pipeline and verify the resulting objects in S3:

1. Switch to the CloudWatch console and choose Dashboards from the menu on the left.
2. Choose the dashboard named StreamingITL.
3. View the metrics for Lambda, Kinesis Firehose, and DynamoDB on the dashboard. Choose the duration to zoom into a period of interest.

Figure 7: CloudWatch Dashboard for Streaming ITL

4. After around 5-8 minutes, you will see transformed records arrive in the output S3 bucket under the prefix transformed/.
5. Download a sample object from S3 to verify its contents. Note that objects are stored GZIP-compressed to reduce space and data transfers.
6. Verify that the transformed records contain a human-readable timestamp string, device model, and manufacturer. These are enriched fields looked up from the DynamoDB table.
7. Verify that a copy of the untransformed source records is also delivered to the same bucket under the prefix source_records/.

Once you have verified the pipeline is working correctly for the first traffic scenario, you can increase the rate of messages to 1,000 records/second and then to 5,000 records/second.

Cleaning up Resources

Once you have tested this pattern, you can delete and clean up the resources created so that you are not charged for them:

1. Stop sending data from the Kinesis Data Generator.
2. On the S3 console, choose the output S3 bucket and choose Empty Bucket.
3. On the CloudFormation console, choose the StreamingITL stack and choose Delete Stack.

Appendix B3 – Real-Time Analytics
This section describes how you can deploy the use case into your AWS account, and then run and test it.

Review SAM Template

Review the Serverless Application Model (SAM) template in the file 'SAM-For-RealTimeAnalytics.yaml' by opening the file in a text editor of your choice. You can use Notepad++, which renders the file nicely. This template creates the following resources in your AWS account:

- An S3 bucket (S3ProcessedDataOutputBucket) that is used to store the real-time analytics records containing the anomaly score
- A Firehose delivery stream and the associated IAM role, used as an input stream to the Kinesis Analytics service
- A Kinesis Analytics application named DeviceDataAnalytics, with one input stream (the Firehose delivery stream), application code (SQL statements), a destination connection (Kinesis Analytics application output) as a Lambda function, and a second destination connection (Kinesis Analytics output) as a Kinesis Firehose delivery stream
- An SNS topic named publishtomanufacturer, and an email subscription to the SNS topic. You configure the email in the deployment script deployer-realtimeanalytics.sh; the variable to set your email is named NotificationEmailAddress in the deployment script.
- An AWS Lambda function that interrogates the data record set received from the analytics stream, picking up and publishing to an SNS topic any record whose anomaly score is higher than a defined threshold (in this case, set in a Lambda function environment variable)
- A second AWS Lambda function, named KinesisAnalyticsHelper, that is used to start the Kinesis Analytics application DeviceDataAnalytics immediately after the application is created
- A Kinesis Firehose delivery stream that aggregates the records from the analytics destination stream, buffers the records, compresses them, and puts the compressed file into the S3 bucket (S3ProcessedDataOutputBucket)

Package and Deploy

Follow these steps to package and deploy the Real-Time Analytics scenario:

1. Clone and download the files from the GitHub folder here to a folder on your local machine. On your local machine, make sure you have the following files:
   1.1. KinesisAnalyticsOuputToSNS.zip
   1.2. SAM-For-RealTimeAnalytics.yaml
   1.3. deployer-realtimeanalytics.sh
2. Create an S3 deployment bucket in the AWS Region where you intend to deploy the solution. Note down the S3 bucket name; you will need it later.
3. From your local machine, upload the following Lambda code zip file into the S3 deployment bucket you just created in Step 2:
   3.1. KinesisAnalyticsOuputToSNS.zip
4. In the AWS Management Console, launch an EC2 Linux instance that will be used to run the CloudFormation template. Launch an instance of type Amazon Linux AMI 2018.03.0 (HVM), SSD Volume Type (t2.micro) in the AWS Region where you want to deploy the solution. Make sure you enable SSH access to the instance. For details on how to launch an EC2 instance and enable SSH access, see https://aws.amazon.com/ec2/getting-started/
5. On your local machine, open the deployer-realtimeanalytics.sh file in a text editor and update the five variables indicated as PLACE_HOLDER: S3ProcessedDataOutputBucket (the S3 bucket name where the processed output data will be stored), NotificationEmailAddress (the email address you specify to receive notifications that the anomaly score has exceeded a threshold value), AnomalyThresholdScore (the threshold value that the Lambda function will use to identify the records to send for notification), LamdaCodeUriBucket (the S3 bucket name you created in Step 2 and uploaded the Lambda code files to), and the variable REGION (the AWS Region where you intend to deploy the solution). Save the deployer-realtimeanalytics.sh file.
6. Once the EC2 instance you just launched is in the running state, use SSH to log
into the EC2 Linux box. Create a folder called samdeploy under /home/ec2-user/. Upload the following files into the folder /home/ec2-user/samdeploy:
   6.1. SAM-For-RealTimeAnalytics.yaml
   6.2. deployer-realtimeanalytics.sh
7. On the EC2 instance, change directory to /home/ec2-user/samdeploy. Next you will run two CloudFormation CLI commands, called package and deploy. Both steps are in a single script file, deployer-realtimeanalytics.sh. Review the script file. You can now package and deploy the SAM template by running the following commands at the command prompt:

$ sudo yum install dos2unix
$ dos2unix deployer-realtimeanalytics.sh
$ sh ./deployer-realtimeanalytics.sh

8. As part of the deployment of the pattern, an email subscription (for the email address specified in Step 5) is set up to the SNS topic. Check your email inbox for a message requesting subscription confirmation. Open the email and confirm the subscription. Subsequently, you will receive email notifications for the device data records that have exceeded the specified threshold.

View Stack Details

You can view the progress of the stack creation by logging into the CloudFormation console. Ensure you choose the AWS Region where you deployed the stack. Locate the stack named DeviceDataRealTimeAnalyticsStack in the list of stacks, choose the Events tab, and refresh the page to see the progress of resource creation. The stack creation takes approximately 3-5 minutes. The stack's state changes to CREATE_COMPLETE once all resources are successfully created.

Test the Pipeline

To test this pattern, you will use the Kinesis Data Generator (KDG) tool to generate and publish test data. Refer to the Create an Amazon Cognito User for Kinesis Data Generator section of this whitepaper. Using the username and password that you generated during the configuration, log into the KDG tool and provide the following information:

1.
Region: Select the Region where you have installed the DeviceDataRealTimeAnalyticsStack.
2. Stream / Delivery Stream: Select the delivery stream called DeviceData-Input-DeliveryStream.
3. Records per Second: Enter the record generation/submission rate for simulating the hospital device data.
4. Record template: KDG uses a record template to generate random data for each of the record fields. We will use the following JSON template to generate the records that will be submitted to the Kinesis delivery stream DeviceData-Input-DeliveryStream:

{
  "timestamp": "{{date.now("x")}}",
  "device_id": "device{{helpers.replaceSymbolWithNumber("####")}}",
  "patient_id": "patient{{helpers.replaceSymbolWithNumber("####")}}",
  "temperature": "{{random.number({"min":96,"max":104})}}",
  "pulse": "{{random.number({"min":60,"max":120})}}",
  "oxygen_percent": "{{random.number(100)}}",
  "systolic": "{{random.number({"min":40,"max":120})}}",
  "diastolic": "{{random.number({"min":40,"max":120})}}",
  "manufacturer": "Manufacturer {{helpers.replaceSymbolWithNumber("#")}}",
  "model": "Model {{helpers.replaceSymbolWithNumber("##")}}"
}

To run the Real-Time Analytics application, click the Send Data button located towards the bottom of the KDG tool. As the KDG begins to pump device data records into the Kinesis delivery stream, the records are streamed into the Kinesis Analytics application. The application code analyzes the streaming data and applies the algorithm to generate the anomaly score for each of the rows. You can view the data stream in the Kinesis Analytics console; the diagram below shows a sampling of the data stream.

The Kinesis Analytics application is configured with two destination connections. The first destination connector (or output) is a Lambda function. The
Lambda function iterates through a batch of records delivered by the application's DESTINATION_SQL_STREAM_001 and interrogates the anomaly score field for each record. If the anomaly score exceeds the threshold defined in the Lambda function environment variable ANOMALY_THRESHOLD_SCORE, the Lambda function publishes the record to a Simple Notification Service (SNS) topic named publishtomanufacturer. The second destination connection is configured to a Kinesis Firehose delivery stream, DeviceDataOutputDeliveryStream. The delivery stream buffers the records and compresses the buffered records into a file before putting it into the S3 bucket S3ProcessedDataOutputBucket.

Observe the following:

1. In the inbox of the email address that you specified in the deployment script, you will receive email for the device data records whose anomaly score has exceeded the specified threshold.
2. In the AWS Kinesis Data Analytics console, select the application named DeviceDataAnalytics and click the Application details button towards the bottom; this takes you to the DeviceDataAnalytics application detail page. Towards the middle of the page, under Real-Time Analytics, click the button "Go to the SQL Results". On the Real-Time Analytics page, observe the Source Data, Real-Time Analytics Data, and Destination Data using the tabs.
3. Records with the anomaly score are stored in the S3 processed data bucket. Review the records, which include the anomaly score.

To stop testing the pattern, simply go to the browser where you are running the KDG tool and click the "Stop Sending Data to Kinesis" button.

Cleaning up Resources

Once you have tested this pattern, you can delete and clean up the resources created so that you are not charged for them:

1. Go back to the browser where you launched the KDG tool and click the stop button. The tool will stop sending any additional data to the input Kinesis stream.
2. On the
S3 console, choose the output S3 processed data bucket and choose Empty Bucket.
3. On the Kinesis console, stop the Kinesis Data Analytics application DeviceDataAnalytics.
4. On the CloudFormation console, choose the DeviceDataRealTimeAnalyticsStack stack and choose Delete Stack.
5. Finally, on the EC2 console, terminate the EC2 Linux instance you created to run the CloudFormation template to deploy the solution.",General,consultant,Best Practices
Setting_Up_Multiuser_Environments_for_Classroom_Training_and_Research,"This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/setting-up-multiuser-environments/setting-up-multiuser-environments.html

Setting Up Multi-User Environments in AWS: For Classroom Training and Research

First published October 2013; updated September 15, 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 1
Scenario
1: Individual server environments 2
Scenario 2: Limited user access to the AWS Management Console within a single account 2
Scenario 3: Separate AWS accounts for each user 3
Comparing the scenarios 4
Setting up Scenario 1: Individual server environments 5
Account setup 6
Cost tracking 6
Monitoring resources 7
Reporting 7
Runtime environment 7
Clean up the environment 7
Setting up scenario 2: Limited user access to AWS Management Console within a single account 8
Account setup 10
Cost tracking 11
Monitoring resources 11
Reporting 12
Runtime environment 12
Clean up the environment 12
Setting up Scenario 3: Separate AWS account for each user 13
Account setup 14
Cost tracking 17
Monitoring resources 17
Reporting 17
Runtime environment 17
Clean up the environment 18
Keeping accounts alive 18
Conclusion 18
Contributors 20
Further reading 20
Appendix A: Adding IAM user policies 21
Appendix B: Example IAM user policies 24
Example policies for professor (administrator) 24
Example policies for students 25
Document versions 28

Abstract
Amazon Web Services (AWS) can provide the ideal environment for classroom training and research. Educators can use AWS for student labs, training applications, individual IT environments, and cloud computing courses. This whitepaper provides an overview of how to create and manage multi-user environments in the AWS Cloud so professors and researchers can leverage cloud computing in their projects.
Introduction
With AWS, you can requisition compute, storage, and other services on demand, gaining access to a suite of secure, scalable, and flexible IT infrastructure services as your organization needs them. This enables educators, academic researchers, and students to tap into the on-demand infrastructure of AWS to teach advanced courses, tackle research endeavors, and explore new projects – tasks that previously would have required expensive upfront and ongoing investments in infrastructure. For more information, see Cloud Computing for Education and Cloud Products.

To access any AWS service, you need an AWS account. Each AWS account is typically associated with a payment instrument (credit card or invoicing). You can create an AWS account for any entity, such as a professor, student, class, department, or institution. When you create an AWS account, you can sign in to the AWS Management Console and access a variety of AWS services. In addition to creating an AWS account with a user name and password, you can also create a set of access keys that you can use to access services via APIs or command line tools. Protect these security credentials and do not share them publicly. For more information, see AWS security credentials and AWS Management Console.

If you require more than one person to access your AWS account, AWS Identity and Access Management (IAM) enables you to create multiple users and manage the permissions for each of these users within your AWS account. A user is a unique identity recognized by AWS services and applications. Similar to user logins in an operating system like Windows or Linux, users each have a unique name and can identify themselves using various kinds of security credentials, such as a user name and password or an access key ID and accompanying secret access key. A user can be an individual, such as a student or teaching assistant, or an application, such as a research application that requires access to AWS services.
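Creating a batch of IAM users as described above can be scripted. The following is a minimal sketch, not an official procedure: the naming scheme and helper are illustrative, and the commented boto3 calls assume boto3 is installed and administrator credentials are configured.

```python
def roster_to_iam_names(roster, prefix="student"):
    """Derive unique, IAM-safe user names from a class roster.

    IAM user names allow alphanumerics and a few symbols; this sketch
    keeps only letters and digits from each name, plus a numeric prefix.
    """
    names = []
    for i, full_name in enumerate(roster, start=1):
        slug = "".join(c for c in full_name.lower() if c.isalnum())
        names.append(f"{prefix}-{i:02d}-{slug}")
    return names

# To actually create the users (assumption: boto3 installed, administrator
# credentials configured), something like the following could be used:
#
#   import boto3
#   iam = boto3.client("iam")
#   for name in roster_to_iam_names(["Ana Lopez", "Ben Kim"]):
#       iam.create_user(UserName=name)
#       iam.create_login_profile(UserName=name,
#                                Password="<initial-password>",
#                                PasswordResetRequired=True)
```

The pure helper keeps the roster-to-name mapping deterministic, so the same roster always yields the same user names.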
You can create users, groups, roles, and federation capabilities using the AWS Management Console, APIs, or a variety of partner products. For instructions on how to create new users and manage AWS credentials, see Creating an IAM user in your AWS account in the AWS Identity and Access Management documentation.

Depending on your teaching or research needs, there are several ways to set up a multi-user environment in the AWS Cloud. The following sections introduce three possible scenarios.

Scenario 1: Individual server environments
The "Individual Server Environments" scenario is excellent for labs and other class work that requires users to access their own pre-provisioned Linux or Windows servers running in the AWS Cloud. The servers are Amazon Elastic Compute Cloud (Amazon EC2) instances. The instances can be created by an administrator with a customized configuration that includes the applications and the data needed to perform tasks for labs or assignments. This scenario is easy to set up and manage. It does not require users to have their own AWS accounts or credentials for anything more than their individual servers. Users do not have access to allocate additional resources on the AWS Cloud.

Example: Consider a class with 25 students. The administrator creates 25 private keys and launches 25 Amazon EC2 instances, one instance for each student. The administrator shares the appropriate key or password with each student and provides instructions on how to log in to their instance. In this case, students do not have access to the AWS Management Console, APIs, or any other AWS service. Each student gets a unique private key (Linux) or a user name and password (Windows), along with the public hostname or IP address of the instance that they can use to log in.
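The 25-instance setup in the example above could be scripted along these lines. This is a sketch under stated assumptions: the AMI ID, instance type, key-pair naming, and Student tag are illustrative, and the commented boto3 calls assume boto3 and administrator credentials are available.

```python
def provision_plan(student_count, ami_id, instance_type="t3.micro"):
    """Build a launch plan: one key pair name and one tagged instance per student."""
    plan = []
    for i in range(1, student_count + 1):
        plan.append({
            "KeyName": f"student-key-{i:02d}",   # unique SSH key per student
            "ImageId": ami_id,                    # customized lab AMI (assumption)
            "InstanceType": instance_type,
            "TagSpecifications": [{
                "ResourceType": "instance",
                "Tags": [{"Key": "Student", "Value": f"student-{i:02d}"}],
            }],
        })
    return plan

# Actual launch sketch (assumption: boto3 installed and configured):
#   import boto3
#   ec2 = boto3.client("ec2")
#   for spec in provision_plan(25, "ami-0123456789abcdef0"):
#       key = ec2.create_key_pair(KeyName=spec["KeyName"])
#       # persist key["KeyMaterial"] somewhere safe before sharing with the student
#       ec2.run_instances(MinCount=1, MaxCount=1, **spec)
```

Tagging each instance with the student identifier also feeds the cost-tracking approach discussed later in this scenario.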
Scenario 2: Limited user access to the AWS Management Console within a single account
This scenario is excellent for users who require control of AWS resources, such as students in cloud computing or high performance computing (HPC) classes. With this scenario, users are given restricted access to AWS services through their IAM credentials.

Example: Consider a class with 25 students. The administrator creates 25 IAM users using the AWS Management Console or APIs and provides each student with their IAM credentials (user name and password) and a login URL for the AWS Management Console. The administrator also creates a permissions policy that can be attached to a user group or an individual user to allow or deny access to different services. Each student (IAM user) has access to resources and services as defined by the access control policies set by the administrator. Students can log in to the AWS Management Console to access different AWS services as defined by the policy. For example, they could launch Amazon EC2 instances and store objects in Amazon Simple Storage Service (Amazon S3).

Scenario 3: Separate AWS accounts for each user
This scenario, with optional consolidated billing, provides an excellent environment for users who need a completely separate account environment, such as researchers or graduate students. It is similar to Scenario 2, except that each IAM user is created in a separate AWS account, eliminating the risk of users affecting each other's services.

Example: Consider a research lab with ten graduate students. The administrator creates one management AWS account, which will own the AWS Organization. Then the administrator provisions separate AWS accounts
for each student within the AWS Organization. For each account, the administrator creates an IAM user in each of the accounts, or manages the permissions through single sign-on users for each student, and applies access control policies. Users receive access to an IAM user/role within their AWS account. Users can log in to the AWS Management Console to launch and access different AWS services, subject to the access control policies applied to their account. Students don't see resources provisioned by other students, because each account is isolated from the others.

A key advantage of this scenario is that students can keep their accounts after the completion of the course. Each account can be set up as a standalone account outside the AWS Organization. If the students have used AWS resources as part of a startup course, they can continue to use what they have built on AWS after the class semester or course is over.

Comparing the scenarios
The scenario you should select depends on your requirements. Table 1 provides a comparison of key features of these three scenarios.

Table 1: Comparison of scenarios

Scenarios compared: (1) Individual server environments; (2) Limited user access to AWS Management Console; (3) Separate AWS account for each user.

Examples: (1) Undergraduate labs; (2) Graduate classes; (3) Graduate research labs.
Example uses: (1) Labs or course work requiring a virtual server, AWS service, or separate application instance; (2) Courses in cloud computing or labs requiring variable resource needs (such as HPC); (3) Courses for startups, thesis, or research projects.
Separate AWS accounts required for each user: (1) No; (2) No; (3) Yes.
Major steps for setup: (1) Create and allocate Amazon EC2 resources and associated credentials; (2) Create IAM users, create policies, and distribute credentials; (3) Create separate member
AWS accounts, plus the steps in the Setting up scenario 2: Limited user access to AWS Management Console section.
Users can provision additional AWS resources, resulting in additional charges: (1) No; (2) Yes, depending on IAM services provided to users; (3) Yes, depending on IAM services provided to users.
Users have access to AWS Management Console or APIs: (1) No; (2) Yes; (3) Yes.
User charges paid by the management AWS account: (1) Yes; (2) Yes; (3) Yes, if consolidated billing is used.
Separation between user environments: (1) Yes, based on resource access configuration; (2) Yes, if optional resource-based permissions are configured; (3) Yes.
Individual user credit cards or invoicing required: (1) No; (2) No; (3) No, if consolidated billing is used.
Billing alerts can be used to monitor charges: (1) Yes; (2) Yes; (3) Yes.

A large number of real-world use cases can benefit from implementing these scenarios. This section focuses on the education sector, where multi-user shared environments are required for setting up online classes, labs, and workshops for students. Both user and resource management are critical in these scenarios. Depending on your specific requirements, any of these scenarios can be used for setting up classrooms in the AWS Cloud. The following sections describe each of these scenarios in more detail.

Setting up Scenario 1: Individual server environments
With this scenario, users are provided access credentials to AWS resources. Users cannot access the AWS Management Console or launch new services. They receive the credentials to access specific AWS services that have already been launched by an administrator. This scenario is a good match for simpler use cases in which users do not need to launch
new AWS services. The following figure shows the architecture for this scenario.

Individual server environments

An administrator can give users their own unique access keys (SSH keys for Linux and passwords for Windows) for security and separation between users. For labs that do not require security among users (such as collaborative labs), the administrator can keep the keys or access credentials common for all the servers and provide the unique public DNS names of the instances to the users. The administrator can choose the level of security and management appropriate for their needs.

Account setup
The administrator creates an AWS account for the user group. For example, this can be a shared account for a professor, class, department, or school. The administrator can also use an existing AWS account. New AWS account signup and access to existing AWS accounts is available on your Account page. The administrator launches the required AWS services for each user and provides resource access credentials to the users.

Cost tracking
If needed, the administrator tags the resources launched for different users. Cost allocation and resource tagging can help track usage by different users. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management documentation.

Monitoring resources
The administrator can set up AWS Budgets to monitor AWS resources, creating billing alerts that automatically notify the designated recipient whenever the estimated AWS charges reach a specified threshold.
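Such a budget and alert could also be defined programmatically. This is a sketch, not the documented setup procedure: the budget name, limit, threshold, and email address are illustrative, and the commented boto3 call assumes boto3 and suitable credentials.

```python
def monthly_budget_alert(name, limit_usd, email, threshold_pct=80.0):
    """Build the Budget and Notification structures used by AWS Budgets."""
    budget = {
        "BudgetName": name,
        "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    }
    notification = {
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": threshold_pct,      # percent of the budget limit
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": email}],
    }
    return budget, notification

# Creating the budget (assumption: boto3 installed and configured; the
# account ID is a placeholder):
#   import boto3
#   budgets = boto3.client("budgets")
#   budget, notif = monthly_budget_alert("class-budget", 100, "admin@example.edu")
#   budgets.create_budget(AccountId="111122223333", Budget=budget,
#                         NotificationsWithSubscribers=[notif])
```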
The administrator can choose to receive an alert on the total AWS charges or on charges for a specific AWS product or service. If the account has any limits, the administrator can use these as the threshold for receiving billing alerts. For more information about setting up billing alerts with AWS Budgets, see Best practices for controlling access to AWS Budgets.

Reporting
Detailed usage reports are available for the administrator from the AWS Management Console. Reports are available for monthly charges and also for account activity in hourly increments. For more information, see Detailed Billing Reports in the AWS Billing and Cost Management documentation.

Runtime environment
After the administrator provisions the account and launches the required AWS services, users can access their AWS resources using the provided credentials. For example, if Amazon EC2 instances are part of the class, users would be given keys or passwords to SSH (into Linux instances) or RDP (into Windows instances). Users would not have the credentials to log in to the AWS Management Console or to launch any new services.

Clean up the environment
When users have finished their work, or when the account limits are reached, the administrator can end the AWS resources. Because student users do not have their own AWS accounts, ending the launched services ensures that user work is deleted and further charges are discontinued.

Setting up scenario 2: Limited user access to AWS Management Console within a single account
For this scenario, the administrator creates IAM users and gives each one access credentials. With IAM, an administrator can securely control access to AWS services and resources. The administrator can create and manage AWS users
and groups and use permissions to allow and deny access to AWS resources. Users can log in to the AWS Management Console and launch and access different AWS services, subject to the access control policies applied to their account. Users have direct control over the access credentials for their resources.

By default, when you create IAM users, they don't have access to any AWS resources. You must explicitly grant them permissions to access the resources that they need for their work. Permissions are rights that you grant to a user or group to let the user perform tasks in AWS. Permissions are attached to an IAM principal or an AWS Single Sign-On (SSO) permission set and let the administrator specify what that user can do. Depending on the context, administrators may be able to construct resource-level permissions for users that control the actions the user is allowed to take on specific resources (for example, limiting which instance the user is allowed to end). For an overview of IAM permissions, see Controlling access to AWS resources using policies in the AWS Identity and Access Management documentation, and read Resource Level Permissions for EC2 – Controlling Management Access on Specific Instances on the AWS Security Blog.

To define permissions, administrators use policies, which are documents in JSON format. A policy consists of one or more statements, each of which describes one set of permissions. Policies can be attached to IAM users, groups, or roles. AWS Policy Generator is a handy tool that lets administrators create policies easily. For example policies that are relevant to multi-user environments, see Appendix B. For more information about policies, see Policies and permissions in IAM.

A useful option in this scenario is for the administrator to tag resources and write appropriate resource-level permissions to limit IAM users to specific actions and resources.
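As an illustration of such a policy document (a sketch, not one of the Appendix B examples; the tag key and value scheme are assumptions), the following builds a statement that allows starting and stopping only the EC2 instances tagged for a given student:

```python
import json

def student_instance_policy(student_tag):
    """Policy allowing start/stop only on EC2 instances tagged Student=<student_tag>."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                # Matches only instances carrying this student's tag
                "StringEquals": {"aws:ResourceTag/Student": student_tag}
            },
        }],
    }

print(json.dumps(student_instance_policy("student-01"), indent=2))
```

The resulting JSON can be attached to the student's IAM user or group; a corresponding Deny statement on other actions would further narrow what the user can do.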
A tag is a label you assign to an AWS resource. For services that support tagging, apply tags using the AWS Management Console or API requests. This enables fine-grained control over which resources a user can access and what actions they can take on those resources. The administrator will also need to write policies to prevent users from manipulating the resource tags. For example, for Amazon EC2 tags, the administrator should disable the ec2:CreateTags and ec2:DeleteTags actions.

This scenario is also good for use cases that require collaboration among users. As described previously, a user can give other IAM users access to specific actions on their resources using a mix of user-level and resource-level permissions. A good example is a collaborative research project where students allow other members of their team access to software on their Amazon EC2 instances and data stored in their Amazon S3 buckets.

This scenario can be useful when users need to access the AWS Management Console, launch new services, interact with services for complicated cloud-based application architectures, or exercise more control over accessing and sharing resources. The following figure shows the architecture for this scenario.

Limited user access to AWS Management Console

As shown in the preceding figure, this scenario works well with a single AWS account. The administrator needs to create IAM users and groups to apply access control policies for the environment. Example IAM user policies for setting up this scenario are in Appendix B.

Account setup
The administrator creates one AWS account for the group. For
example, this can be a shared account for a professor, class, department, or school. An existing AWS account can also be used. New AWS account signup and access to existing AWS accounts is available on the Account page.

The administrator then creates an IAM user for each user with the AWS Management Console or the API. These IAM users can belong to one or more IAM groups within a single AWS account. Alternatively, the administrator can deploy SSO and create an SSO user for each student, teaching assistant, or professor, which allows users to log in to the account through federation. Each SSO user can have one or more permission sets assigned to them, depending on the role they need to assume to log in to the account. Based on environment requirements, the administrator attaches custom policies to IAM users or IAM groups to restrict which AWS resources can be launched and used. Thus, users can only launch AWS services for which permissions have been granted. Users are provided credentials for their IAM user, which can be used to log in to the AWS Management Console, access AWS services, and call APIs.

Information required for account setup
To create an account and set up IAM-based access control, an administrator needs the following information:
• An AWS account for the group. This account could belong to the school, department, or professor. If no account exists, a new account must be created.
• Name and email address of users.
• Required AWS resources and services and the operations permitted on them. This is required to determine the access control policies to be applied to each IAM user.
• Contact information for the billing reports and alerts.
• Contact information for the usage reports and alerts.

Providing access to users
With SSO, the
administrator can use the example IAM policies from Appendix B to create custom permission sets to assign to each group of users. Next, the administrator needs to create an AWS SSO user for each of the students and assign the user to the relevant permission set. Students can then log in using the AWS SSO sign-in URL. See the Basic AWS SSO Configuration video tutorial. For basic instructions on how to add IAM user policies, see Appendix A. For example IAM user policies for setting up this scenario, see Appendix B.

If the administrator decides not to use SSO, they add IAM users with roles and custom policies to the AWS account directly to implement the required access control logic for the different kinds of users in the group. The administrator then provides IAM user login information to the corresponding members of the group.

Cost tracking
All users can tag their resources for services with tagging capability. With the cost allocation feature of AWS Account Billing, the administrator can track AWS costs for each user. For more information, see Using Cost Allocation Tags in the AWS Account Billing documentation.

Monitoring resources
AWS Budgets can help monitor AWS resources. Billing alerts automatically notify users whenever the estimated charges on their current AWS bill reach a threshold they define. Users can choose to receive an alert on their total AWS charges or on charges for a specific AWS product or service. If the account has any limits, the administrator can use these as the threshold for sending billing alerts. For more information about setting up billing alerts with AWS Budgets, see Best practices for controlling access to AWS Budgets.

Reporting
Detailed usage reports are available for the administrator
from the AWS Management Console. Reports are available for monthly charges and also for account activity in hourly increments. For more information, see Detailed Billing Reports.

Runtime environment
Users can log in to the AWS Management Console (as an IAM user or with an AWS SSO user) with the login information provided to them by the administrator. They can launch and use resources defined by the rules and policies set by the administrator. For example, if they have the appropriate permissions, they can launch new Amazon EC2 instances or create new Amazon S3 buckets, upload data to them, and share them with others. An IAM user might be granted access to create a resource, but the user's permissions, even for that resource, are limited to what's been explicitly granted by the administrator. The administrator can also revoke the user's permissions at any time.

Setting proper resource- and user-based permissions helps prevent an IAM user from taking actions on resources belonging to other IAM users in the AWS account. For example, an IAM user can be prevented from terminating instances belonging to other IAM users in the AWS account. For more information, see Controlling access to AWS resources using policies.

Clean up the environment
When users have finished their work, or when the account limits are reached, they (or the administrator) can end the AWS resources. Administrators can also delete the IAM users. If an instance of SSO was created for the users to log in, the directory should be disabled. The users will lose their work unless they take steps to save it (a procedure that is beyond the scope of this whitepaper).

Setting up Scenario 3: Separate AWS account for each user
In this scenario, an administrator creates separate AWS accounts
for each user who needs a new AWS account. These accounts can optionally be grouped together under an AWS Organization, and a single AWS account can be designated as the management account using AWS Organizations. Once the student accounts become members of the Organization, the management account becomes the payer account, and all the accounts can benefit from consolidated billing, which provides a single bill for multiple AWS accounts. The administrator then creates an IAM user in each AWS account and applies an access control policy to each user. Users are given access to the IAM user within their AWS account but do not have access to the AWS account root user. The administrator should deploy SSO in the management account to create users and grant access to each account centrally through federation. This allows the accounts to be managed by an administrator consistent with the required policies for the user environment. Users can log in to the AWS Management Console with their IAM credentials and then launch and access different AWS services, subject to the access control policies applied to their account. Since students have access to their individual accounts, they have direct control over the access credentials for their resources (creation/deletion of SSH keys), and they can also share these resources with other users and accounts as needed.

This scenario is good for setting up collaborative multi-user work environments. To implement it, users can create an IAM role, which is an entity that includes permissions but isn't associated with a specific user. Users from other accounts can then assume the role and access resources according to the permissions assigned to the role. For more information, see Roles terms and concepts.

This scenario offers maximum flexibility for users and is helpful when they need to access the AWS Management Console to launch new services. It also gives users flexibility in working with complicated cloud-based application architectures and more control over accessing and sharing their resources.
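A cross-account role like the one described above needs a trust policy naming the account whose users may assume it. The following is a sketch following the standard IAM trust-policy format; the account ID and role name are placeholders.

```python
import json

def cross_account_trust_policy(trusted_account_id):
    """Trust policy letting principals in another account assume this role."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            # The :root principal delegates to the trusted account's own IAM policies
            "Principal": {"AWS": f"arn:aws:iam::{trusted_account_id}:root"},
            "Action": "sts:AssumeRole",
        }],
    }

# Creating the role (assumption: boto3 installed and configured):
#   import boto3
#   iam = boto3.client("iam")
#   iam.create_role(RoleName="lab-shared-role",
#                   AssumeRolePolicyDocument=json.dumps(
#                       cross_account_trust_policy("111122223333")))
```

Permissions policies attached to the role then determine what the assuming teammates can actually do with the shared resources.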
Having separate AWS accounts for each user works well for both short-term and long-term usage. For short-term usage, AWS resources, IAM users, and even AWS accounts can be terminated after the work is done. For long-term usage, the AWS accounts for some or all users are kept alive at the end of the current engagement. All work done can be easily preserved for future use. Users can also be given full administrator access to their AWS account (beyond the IAM-based access they initially had) to continue their work. An example scenario is an entrepreneurship class where some students might develop new solutions or intellectual property using AWS resources that they want to retain for future use or for immediate deployment. Their work can easily be turned over to them by giving them full access to their AWS account. Another benefit of this scenario is that in the AWS Management Console, users cannot see resources belonging to any other users in the group, since each user is working from their own AWS account. The following figure shows the architecture for this scenario.

Separate AWS account for each user

Account setup
The administrator deploys AWS Organizations on the management account, then provisions an AWS account for each user in the group. Independent AWS accounts (with unique AWS IDs) are created for each user. The administrator creates the accounts using the AWS Organizations CreateAccount API, which creates an account in which the credentials of the AWS account root user need to be reset.
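Account provisioning through this API can be scripted. This is a sketch under stated assumptions: the email pattern and account names are illustrative, boto3 and management-account credentials are assumed, and because CreateAccount is asynchronous, real code would poll the returned request status.

```python
def account_requests(students, email_pattern="aws-lab+{}@example.edu"):
    """Build one CreateAccount request per student for AWS Organizations."""
    return [
        {
            "Email": email_pattern.format(name),  # unique root email per account
            "AccountName": f"lab-{name}",
        }
        for name in students
    ]

# Submitting the requests (assumption: boto3 installed, management-account
# credentials configured):
#   import boto3
#   orgs = boto3.client("organizations")
#   for req in account_requests(["ana", "ben"]):
#       resp = orgs.create_account(**req)
#       # poll orgs.describe_create_account_status(
#       #     CreateAccountRequestId=resp["CreateAccountStatus"]["Id"])
```

The plus-addressing email pattern is one common way to give each member account a distinct root email that still routes to a single administrator mailbox.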
In the management account, the administrator deploys AWS SSO and creates an AWS SSO user for each user that requires access to the AWS accounts. Based on the environment requirements, the AWS SSO users are assigned to their relevant AWS SSO permission sets, which are customized with IAM policies allowing each user to use only the services for which permissions have been granted. Alternatively, the administrator can create an IAM user in each user's AWS account. Based on environment requirements, custom policies are attached to IAM users individually to constrain the AWS resources that can be launched and used. Users can only launch AWS services for which permissions have been granted. Users are provided credentials to log in to the AWS Management Console, access AWS services, and call APIs. Users do not have access to the root credentials of the AWS account and cannot change the IAM access policies enforced on the account.

Using AWS Organizations, an administrator can set up consolidated billing for the group. Consolidated billing (offered at no additional charge by AWS) enables consolidation of payment for multiple AWS accounts by designating a single payer account, the management account, within the AWS Organization. Consolidated billing provides a combined view of AWS charges incurred by all accounts, as well as a detailed cost report for each individual member AWS account associated with the management account. For detailed information about how to set up consolidated billing, see Consolidated billing for AWS Organizations.

Another benefit of this scenario is that the administrator can set controls across every account using service control policies (SCPs) to restrict access to specific resources and services, independently of the user's permissions.

Information required for account setup
The following information is required for creating accounts and setting up IAM-based access control:
• AWS management account for the group. This account could belong to the school, department, or professor. If no account exists, a new account is created. This account is necessary for setting up AWS Organizations and consolidated billing.
• Name and email addresses of users.
• AWS account credentials for users who have existing AWS accounts that they want to use in this environment. These accounts will join the AWS Organization. Users who do not have an AWS account, or do not want to use their existing account, will need new accounts provisioned for them.
• Required AWS resources and services and the operations permitted on them. This is required to determine the access control policies to be applied to each IAM user.
• Contact information for the billing reports and alerts.
• Contact information for the usage reports and alerts.

Providing access to users
Using SSO, the administrator creates different permission sets, which are custom IAM policies that can be used to grant a specific user access to resources in the accounts within the AWS Organization. Then the administrator creates an associated SSO user, assigns it to the user's account, and attaches a permission set to that user based on the level of privileges the user needs. In this case, there could be a permission set for an administrator, a permission set for the teaching assistant, and a permission set for the students. Changes to the permissions apply immediately to all the users using the same permission set in their account. Finally, the administrator can generate login credentials for each user so they can access their accounts through the SSO portal. For more information on how to manage access to your accounts and assign policies to your permission sets, see Manage SSO to
See this SSO Configuration tutorial video to understand AWS SSO better.

Alternatively, the administrator can add IAM users with roles and custom policies for each user in each AWS account to implement the required access control logic for the different types of users in the group. Login information for the IAM users is provided to the corresponding users in the group. For basic instructions on how to add IAM user policies, see Appendix A. See Appendix B for an example of setting up IAM user policies for this scenario.

Cost tracking

Consolidated billing makes it easy to track AWS costs because it shows the administrator a combined view of charges incurred by all AWS accounts, as well as a detailed cost report for each individual AWS account within the organization. Consolidated billing is included with AWS Organizations, where the management account pays the charges of all member accounts. All users can also tag their resources for services with tagging capability. An administrator can then use the cost allocation feature of AWS Account Billing to track AWS costs for each user. For more information, see Using Cost Allocation Tags and Viewing your bill in the AWS Billing and Cost Management documentation.

Monitoring resources

AWS Budgets alerts can help monitor AWS resources. Billing alerts automatically notify users whenever the estimated charges on their current AWS bill reach a threshold they define. Users can choose to receive an alert on their total AWS charges, or on charges for a specific AWS product or service. If the account has any limits, the administrator can use these as the threshold for sending billing alerts. For more information about setting up billing alerts with AWS Budgets, see Best practices for controlling access to AWS Budgets.
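The cost allocation tags described above are only useful if resources actually carry them. One way to enforce this, sketched below under the assumption of a tag key named purpose (hypothetical for this example), is an IAM policy statement that denies launching an EC2 instance unless the request supplies that tag.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyLaunchWithoutCostTag",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "Null": {
          "aws:RequestTag/purpose": "true"
        }
      }
    }
  ]
}
```

The Null condition evaluates to true when the request contains no purpose tag at all, so the launch is denied unless the tag is supplied, which keeps the cost allocation reports complete.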
Reporting

Detailed usage reports are available to administrators from the AWS Management Console. Reports are available for monthly charges, as well as for account activity in hourly increments. For more information, see Detailed Billing Reports.

Runtime environment

Users can log into the AWS Management Console as an IAM user with the login information provided to them by the administrator. They can launch and use resources as defined by the rules and policies set by the administrator. For example, if they have the appropriate permissions, users can launch new Amazon EC2 instances, or create new Amazon S3 buckets, upload data to them, and share them with others. Because accounts are independent, each user sees only their own AWS resources in the AWS Management Console.

Clean up the environment

When the users have finished their work, or when the account limits are reached, they (or the administrator) can optionally terminate the AWS services. The administrator can also delete the IAM users or the SSO users and revoke access to the account. When the account is no longer in use, it can be closed, ending all the resources within the account.

Keeping accounts alive

If the users want to retain their AWS accounts, they can request the root account credentials from their administrator. The administrator would remove their account from the organization, and the users would need to provide their own billing information. The users will get login and security credentials to their AWS account.

Conclusion

Multi-user shared environments with custom access control policies are a common use case for AWS customers. Typical requirements include both user and resource management to allow controlled access to AWS resources
for multiple users. This whitepaper presented three scenarios that cover a wide array of use cases with these requirements:

• The "Individual Server Environments" scenario provides access to customized work environments on AWS, and is suitable for use cases like undergraduate labs.
• The "Limited User Access to AWS Management Console" scenario provides IAM user access to users from a single AWS account, suitable for use cases like graduate classes.
• The "Separate AWS Account for Each User" scenario provides independent AWS accounts for each user (with consolidated billing), which is suitable for graduate research and entrepreneurship courses.

In this whitepaper, we focused on short- to medium-term education and research environments as the example domain, but the same or similar scenarios may also be implemented for other use cases.

Contributors

Contributors to this document include:

• KD Singh, Amazon Web Services
• Leo Zhadanovsky, Chief Technologist, Education, Amazon Web Services
• Alex Torres, Solutions Developer, Amazon Web Services

Further reading

For additional information, see:

• IAM documentation
• IAM policies for Amazon EC2
• Granting IAM users required permissions for Amazon EC2 resources
• Amazon Resource Names (ARNs)
• Organizing Your AWS Environment Using Multiple Accounts
Appendix A: Adding IAM user policies

This section describes how to add IAM user policies to an AWS account. For more information, see Creating an IAM user in your AWS account.

1. In the AWS Management Console, choose Services > IAM.
2. Choose Users.
3. Choose Add Users.
4. Enter the name of the IAM user to be created.
5. Choose Next.
6. Choose Next to attach a user policy.
7. If none of these policies work for your use case, you can Create a policy and attach it. You can create the policy using the interface, or you can create a custom JSON policy with one of the examples from Appendix B.
8. Paste the proper policy from Appendix B.
9. Choose Apply Policy.
10. On the previous screen, refresh the policy list and attach the policy you just created.
11. Choose Download Credentials. Save the downloaded file in a secure location, as it contains the user's access key ID and secret access key. They will need these to use the AWS API.

Appendix B: Example IAM user policies

This section provides example IAM user policies for a class that uses AWS services, including policies for the professor, teaching assistant, and students. These policies are useful for setting up the "Limited
User Access to AWS Management Console" and "Separate AWS Account for Each User" scenarios described earlier in this whitepaper. For more information about policies, see Policies and permissions in IAM.

Example policies for professor (administrator)

• Full administrator access:

{
  "Statement": [{
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*"
  }]
}

• Billing access:

{
  "Statement": [{
    "Effect": "Allow",
    "Action": ["aws-portal:ViewBilling"],
    "Resource": "*"
  }]
}

• Usage access:

{
  "Statement": [{
    "Effect": "Allow",
    "Action": ["aws-portal:ViewUsage"],
    "Resource": "*"
  }]
}

Example policies for teaching assistant

• Full administrator access, but no access to billing or usage information:

{
  "Statement": [{
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*"
  },
  {
    "Effect": "Deny",
    "Action": "aws-portal:*",
    "Resource": "*"
  }]
}

Example policies for students

• Permission to create and describe Amazon EBS volumes:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ec2:DescribeVolumes",
      "ec2:DescribeAvailabilityZones",
      "ec2:CreateVolume",
      "ec2:DescribeInstances"
    ],
    "Resource": "*"
  },
  {
    "Effect": "Allow",
    "Action": [
      "ec2:AttachVolume",
      "ec2:DetachVolume"
    ],
    "Resource": "arn:aws:ec2:region:111122223333:instance/*",
    "Condition": {
      "StringEquals": {
        "ec2:ResourceTag/purpose": "test"
      }
    }
  },
  {
    "Effect": "Allow",
    "Action": [
      "ec2:AttachVolume",
      "ec2:DetachVolume"
    ],
    "Resource": "arn:aws:ec2:region:111122223333:volume/*"
  }]
}

• Permission to create and modify Amazon EC2 instances:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ec2:DescribeInstances",
      "ec2:DescribeImages",
      "ec2:DescribeInstanceTypes",
      "ec2:DescribeKeyPairs",
      "ec2:DescribeVpcs",
      "ec2:DescribeSubnets",
      "ec2:DescribeSecurityGroups",
      "ec2:CreateSecurityGroup",
      "ec2:AuthorizeSecurityGroupIngress",
      "ec2:CreateKeyPair"
    ],
    "Resource": "*"
  },
  {
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": "*"
  }]
}

• Prevents modifying resource tags:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Action": [
      "ec2:CreateTags",
      "ec2:DeleteTags"
    ],
    "Resource": ["*"],
    "Effect": "Deny"
  }]
}

• For instances with a Student tag, allows students to start, stop, and reboot instances, and to attach and detach volumes. If the professor or teaching assistant applies a Student tag to specific instances, with its value set to the IAM user name of a specific student, then that student can stop and reboot those instances, and attach volumes to and detach volumes from them. Students can also start instances that they stopped (that still have the Student tag on them), but they cannot launch new ones.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Action": [
      "ec2:StartInstances",
      "ec2:StopInstances",
      "ec2:RebootInstances",
      "ec2:AttachVolume",
      "ec2:DetachVolume"
    ],
    "Condition": {
      "StringEquals": {
        "ec2:ResourceTag/Student": "${aws:username}"
      }
    },
    "Resource": [
      "arn:aws:ec2:region:account:instance/*",
      "arn:aws:ec2:region:account:volume/*"
    ],
    "Effect": "Allow"
  }]
}

Document versions

Date                  Description
September 15, 2021    Updated for technical accuracy
October 2013          First publication
",General,consultant,Best Practices Single_SignOn_Integrating_AWS_OpenLDAP_and_Shibboleth,"
Single Sign On: Integrating AWS, OpenLDAP, and Shibboleth
A Step-by-Step Walkthrough

Matthew Berry
AWS Identity and Access Management
April 2015

This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2015 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions or assurances from AWS, its affiliates, suppliers or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Abstract
Introduction
Step 1: Prepare the Operating System
Step 2: Install and Configure OpenLDAP
Step 3: Install Tomcat and Shibboleth IdP
Step 4: Configure IAM
Step 5: Configure Shibboleth IdP
Step 6: Test Shibboleth Federation
Conclusion
Further Reading
Notes

Abstract

AWS Identity and Access Management (IAM) is a web service from Amazon Web Services (AWS) for managing users and user permissions in AWS. Outside the AWS cloud, administrators of corporate systems rely on the Lightweight Directory Access Protocol (LDAP) to manage identities. By using role-based access control (RBAC) and Security Assertion Markup Language (SAML) 2.0, corporate IT systems administrators can bridge the IAM and LDAP systems and simplify identity and permissions management across on-premises and cloud-based infrastructures.

Introduction

In November 2013, the IAM team expanded identity federation to support SAML 2.0. Instead of recreating existing user data in AWS so that users in your organization can access AWS, you can use AWS support for SAML to federate user identities into AWS. For example, in many universities, professors can help students take advantage of AWS resources via the students' university accounts.

Step-by-step instructions walk you through the use of AWS SAML 2.0 support with OpenLDAP, which is an implementation of LDAP. This walkthrough depicts a fictitious university moving to OpenLDAP. Because the university makes heavy use of Shibboleth identity provider (IdP) software, you will learn how to use Shibboleth as the IdP. You will also learn the entire process of setting up LDAP. If your organization already has a functional LDAP implementation, you can review the schema and then skip to the Install Tomcat and Install Shibboleth
IdP sections. Likewise, if your organization already has Shibboleth in production, you can skip to the Configure Shibboleth IdP section.

Assumptions and Prerequisites

This walkthrough describes using a Linux Ubuntu operating system, and makes the following assumptions about your familiarity with Ubuntu and with services from AWS, such as Amazon Elastic Compute Cloud (Amazon EC2):

• You know enough about Linux to move between directories, use an editor (such as Vim), and run script commands.
• You have a Secure Shell (SSH) tool, such as OpenSSH or PuTTY, installed on your computer, and you know how to connect to a running Amazon EC2 instance. For a list of SSH tools, see Connect to Your Linux Instance in the Amazon EC2 documentation.
• You have a basic understanding of what LDAP is and what an LDAP schema looks like.

LDAP Schema and Roles

A fictitious university called Example University is organized as shown in Figure 1. This university assigns a unique identifier (uid), more commonly referred to as a user name, to each individual. Each individual is also part of one or more organizational units (OU or OrgUnit). In our fictitious university, OUs correspond to departments, and one special OU named "People" contains everyone. Each individual has a primary OU. The primary OU for everyone except managers is the People OU. The primary OU for managers is the department they manage.

Figure 1: Schema for Example University

Software

For the example, use the following software. Although Ubuntu 14.04 Long Term Support (LTS) is illustrated, the instructions apply to most versions of Ubuntu and Linux (perhaps with minor modifications). In general, the procedures work in Microsoft Windows or OS X from Apple, but they require alternate installation and configuration
guides for OpenLDAP and the Java virtual machine, which this walkthrough does not address.

Function               Software and version
Operating system       Ubuntu 14.04 LTS
Java virtual machine   OpenJDK 7u25 (IcedTea 2.3.10)
Web server             Apache Tomcat 7.0.59
Identity provider      Shibboleth IdP 2.4
Directory              SLAPD (OpenLDAP 2.4.28)

Step 1: Prepare the Operating System

These steps begin with an Amazon EC2 instance so that you can see a completely clean installation of all components. The demo uses a t2.micro instance because it is free tier eligible (it will not cost you anything) and because this example installation does not serve any production traffic. You can complete this walkthrough with a t2.micro instance and stay in the free tier. You can use a larger instance size if you want; it makes no difference to the illustrated functionality, and larger sizes run faster. But note that you will be charged at standard rates if you use instances that are not in the free tier. If you are new to Amazon EC2, you might want to read Getting Started with Amazon EC2 Linux Instances for context before you begin.

Launch a New Amazon EC2 Instance

1. Sign in to the AWS Management Console, and then go to the Amazon EC2 console.
2. Click Launch Instance, find Ubuntu Server 14.04 LTS (HVM), SSD Volume Type, and then click Select.
3. Select the t2.micro instance, which is the default.
4. Click through the Next buttons until you get to Step 6: Configure Security Group. Note: Restrict the IP address range in this step to match your organization's IP address prefix, or use the My IP option.
5. Click Add Rule, and then select HTTPS. This opens up port 443 for SSL traffic. Note: Restrict the IP address range in this step to match your organization's IP address prefix, or use the My IP option.
6. When you are
finished, click Review and Launch, and then click Launch.
7. When prompted, create a new key pair for logging in to the Ubuntu instance. Give it a name (for example, ShibbolethDemo), and then download and save the key pair. See Figure 2. Then click Launch Instances.

Figure 2: Select an Existing Key Pair or Create a New Key Pair

Important: Be sure to download your key pair. Otherwise, you will not be able to access your instance. For information about how to connect to an Amazon EC2 instance using SSH, see Connect to Your Linux Instance.

8. Click View Instances. When the instance is running, find and copy the following values for the instance, which you'll need later:
• The instance ID
• The public DNS of the instance
• The public IP address of the instance

You can find all of these values in the Amazon EC2 console when you select your instance, as shown in Figure 3.

Figure 3: EC2 Instance Details Showing Instance ID, Public DNS, and IP Address

Update Local Hosts File

In this walkthrough, various configuration values reference the DNS example.com or idp.example.com. Each Amazon EC2 instance has a unique IP address and DNS that are assigned when the instance starts, so you must update the hosts file on your local computer so that example.com and idp.example.com resolve to the IP address of your Amazon EC2 instance.

1. Make sure you know the public IP address of your Amazon EC2 instance, as explained in the previous section.
2. Open the hosts file on your local computer. Editing this file requires administrative privileges. These are the usual locations of the hosts file:
• Windows: %windir%\System32\drivers\etc\hosts
• Linux: /etc/hosts
• Mac: /private/etc/hosts
3. Add the following mappings to the hosts file, using the public IP address of your own Amazon EC2
instance. When you are done, save and close the file.

nnn.nnn.nnn.nnn example.com
nnn.nnn.nnn.nnn idp.example.com

Create Directories

Using your SSH tool (OpenSSH, PuTTY, etc.), connect to your Amazon EC2 instance. Create directories for Tomcat, Shibboleth, and the demo files by running the following commands:

cd /home/ubuntu/
mkdir -p /home/ubuntu/server/tomcat/conf/Catalina/localhost
mkdir -p /home/ubuntu/server/tomcat/endorsed
mkdir /home/ubuntu/server/shibidp
mkdir -p /home/ubuntu/installers/shibidp

Step 2: Install and Configure OpenLDAP

OpenLDAP is an open-source implementation of the Lightweight Directory Access Protocol (LDAP). This walkthrough assumes basic knowledge of LDAP, and explains only what is required to complete it.

About LDAP

LDAP is defined by a small set of primitives that can be combined into a complex hierarchy of objects and attributes. The core element of the LDAP system is an object, which consists of a key-value pair. Objects can represent anything that needs an identity in the LDAP system, such as people, printers, or buildings. Because you can reuse keys, sets of key-value pairs are grouped into object classes. These object classes are included by using special object class attributes, as shown in Figure 4.

Figure 4: Including Object Classes with Special Object Class Attributes

Object classes make LDAP extensible. All the people at an organization have a core set of attributes that they share, such as name, address, phone, office, department, and job level. You can wrap these attributes into an object class, so that the definition of a person in the directory can reference the object class and automatically get all the common attributes defined by it. Figure 5 shows an example of an object class.
Figure 5: An Example of an Object Class

Install OpenLDAP

For this walkthrough, you need to install OpenLDAP on the Amazon EC2 instance that you launched.

1. Log in to the Amazon EC2 instance and enter the following commands to download and install OpenLDAP:

sudo apt-get -y update && sudo apt-get -y upgrade

This command updates the package list on the host. The second half of the command updates all the packages on the host to the newest versions.

2. Type the following command to install OpenLDAP:

sudo apt-get -y install slapd ldap-utils

3. Type the following commands to set up shortcuts (aliases) for working with OpenLDAP:

echo "alias ldapsearch='ldapsearch -H ldapi:/// -x -W '" >> ~/.bashrc
echo "alias ldapmodify='ldapmodify -H ldapi:/// -x -W '" >> ~/.bashrc
# Adding $LDAP_ADMIN to either of the ldap commands binds to admin account
echo "export LDAP_ADMIN='-D cn=admin,dc=example,dc=com '" >> ~/.bashrc
source ~/.bashrc

These commands add aliases to the ~/.bashrc file, which is a file that contains commands that run each time the user signs in. The shortcuts add some common parameters to ldapsearch and ldapmodify, the two most common LDAP utilities. The parameters for these commands are as follows:

• -H ldapi:/// tells the command where the directory is located.
• -x tells the command to use simple authentication.
• -W tells the command to ask for the password (instead of listing it on the command line).
• -D cn=admin,dc=example,dc=com is a set of parameters to indicate that LDAP should run as an administrator.

4. Type the following command to tell the package manager
to reconfigure OpenLDAP:

sudo dpkg-reconfigure slapd

When the command runs, you see the following prompts. Respond as noted.

• Omit OpenLDAP server configuration? Type No. You want to have a blank directory created.
• DNS domain name: Type example.com. You use this to construct the hierarchy of the LDAP directory. Use this domain for this walkthrough, because other aspects of the configuration depend on this domain name.
• Organization name: Type any name. This value is not used.
• Administrator password (and confirmation): This is the LDAP administrator password. For the purposes of this walkthrough, use password. For production systems, consult your security best practices. You will need the password when you make changes to the LDAP configuration later.
• Database backend to use: This lets you specify the storage back end for LDAP information. Type HDB.
• Do you want the database to be removed when slapd is purged? Type Yes. This is a safety measure in case you purge a setup and start over. In that case, if you type Yes, the directory is backed up rather than deleted.
• Move old database? Type Yes. This is part of the safety measure from the previous prompt. By answering Yes, you cause OpenLDAP to make a backup of the existing directory before wiping it.
• Allow LDAPv2 protocol?
Type No. LDAPv2 is deprecated.

Download LDAP Sample Data

For this walkthrough, you need some data in the LDAP data store. For convenience, the walkthrough provides files that contain sample data. To download these directly to the Amazon EC2 instance, run the following script inside your instance:

wget -O '/home/ubuntu/examples.tar.gz' 'https://s3.amazonaws.com/awsiammedia/public/sample/OpenLDAPandShibboleth/examples.tar.gz'
tar xf /home/ubuntu/examples.tar.gz

Configure OpenLDAP

Because LDAP is text based, it is easy to back up the directory and share attribute definitions (called schemas). However, this paper does not focus on LDAP, so it does not go into detail about the text format used to interact with LDAP. You just need to know that Lightweight Directory Interchange Format (LDIF) is a text-based export/import format for LDAP, and that you can find the sample LDIFs for populating the directory in the files that you downloaded.

After you have downloaded the sample data files as described in the previous section, run the following script to insert information from the example files into the LDAP database. You need the LDAP administrator password that you specified when you installed and configured OpenLDAP.

sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f examples/eduPerson201310.ldif
# Schema installation requires root, but all other changes only require admin
ldapmodify $LDAP_ADMIN -f examples/PEOPLE.ldif
ldapmodify $LDAP_ADMIN -f examples/BIO.ldif
ldapmodify $LDAP_ADMIN -f examples/CSE.ldif
ldapmodify $LDAP_ADMIN -f examples/HR.ldif

Step 3: Install Tomcat and Shibboleth IdP

The next step is to install Shibboleth. Because Shibboleth is a construction of Java Server Pages, it needs a container in which to run. We are using Apache Tomcat. You do not have to know much
about Tomcat in order to use it in this walkthrough; we will show you the installation and configuration steps.

Install Tomcat

The Tomcat installation is simple: you just need to download and unzip a tarball. In order to run Tomcat, a Java SE Development Kit (JDK) is required. Log in to the Amazon EC2 instance and run the following script in order to install the JDK, download Tomcat, and extract it:

sudo apt-get -y install openjdk-7-jre-headless
wget -O 'installers/tomcat7.tar.gz' 'http://www.us.apache.org/dist/tomcat/tomcat-7/v7.0.59/bin/apache-tomcat-7.0.59.tar.gz'
# Tomcat installation is simply to extract the tarball
tar xzf installers/tomcat7.tar.gz -C server/tomcat/ --strip-components=1

Install Shibboleth IdP

You can install Shibboleth by downloading a tarball and extracting it. You then need to set an environment variable and run the Shibboleth installer script. In the Amazon EC2 instance, run the following script:

wget -O 'installers/shibidp24.tar.gz' 'http://shibboleth.net/downloads/identity-provider/2.4.0/shibboleth-identityprovider-2.4.0-bin.tar.gz'
tar xzf installers/shibidp24.tar.gz -C installers/shibidp --strip-components=1
# This is needed for Tomcat and Shibboleth scripts
echo "export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/" >> ~/.bashrc
source ~/.bashrc
# Installation directory: /home/ubuntu/server/shibidp
# (don't use ~)
# Domain: idp.example.com
cd installers/shibidp; ./install.sh && cd

Use the following answers when prompted.

• Where should the Shibboleth Identity Provider software be installed? Type /home/ubuntu/server/shibidp
• (This question may not appear) The directory '/home/ubuntu/server/shibidp' already exists. Would you like to overwrite this Shibboleth configuration?
(yes [no]) Type yes.
• What is the fully qualified hostname of the Shibboleth Identity Provider server? [idp.example.org] Type idp.example.com. (Use com, not org, because that is what the LDAP installation uses.) Note that this response assumes that you typed example.com as the domain earlier.
• A keystore is about to be generated for you. Please enter a password that will be used to protect it. This password protects a key pair that is used to sign SAML assertions. It is stored in a file in the Shibboleth directory. For the purposes of this walkthrough, use password everywhere you are prompted. In a production system, be sure to consult your security best practices.

Configure Tomcat

Tomcat's default configuration does not quite suit our needs for this example IdP, so you need to edit the server's configuration file.

1. In the Amazon EC2 instance, use an editor such as Vim to edit the following file:
/home/ubuntu/server/tomcat/conf/server.xml
2. Comment out the block that starts with
4. Create the following file:
/home/ubuntu/server/tomcat/conf/Catalina/localhost/idp.xml
5. Add the following to the file you just created, and then save and close the file. This tells Tomcat where Shibboleth's files are and how to use them.
6. Run the following command:

cp ~/installers/shibidp/endorsed/* ~/server/tomcat/endorsed

This command tells Tomcat that it can run the Shibboleth library files by copying the contents of Shibboleth's endorsed directory to Tomcat's endorsed directory.

7. Edit the Tomcat user store file that is in the following location:
/home/ubuntu/server/tomcat/conf/tomcat-users.xml
8. Add a root user by adding the following line just before the closing tag (inside the tomcat-users element). This configures an administrative user so that Tomcat can start and stop Shibboleth.
9. Start the server by running the following startup commands:
    sudo /home/ubuntu/server/tomcat/bin/startup.sh
    tail -f /home/ubuntu/server/tomcat/logs/catalina.out

10. Wait for a line that says "INFO: Server startup in ### ms", and then press CTRL+C.
11. To verify that Tomcat and Shibboleth started properly, from your main computer (not the Amazon EC2 instance) navigate to https://idp.example.com. If the server is working, Tomcat displays a welcome page after a brief warning about certificates and host names.
12. Click Manager App and type the root credentials. Verify that the Shibboleth software is running.

Step 4: Configure IAM

Now that you have set up Shibboleth as an IdP, configure AWS IAM so that it can act as a SAML service provider. This involves two tasks: the first is to create an IAM SAML provider that describes the IdP, and the second is to create an IAM role (in our case, several roles) that a federated user can assume in order to get temporary security credentials for accessing AWS resources, such as signing in to the AWS Management Console.

Create an IAM SAML Provider

In order to support SAML identity federation from an external IdP, IAM must first establish a trust relationship with the provider. To do this, create an IAM SAML provider. SAML 2.0 describes a document, called a metadata document, that contains all the required information to configure communication and trust between two entities. You can get the metadata document by asking Shibboleth, running on your instance, to generate it.

1. In your Amazon EC2 instance, navigate to the following URL, download the metadata document, and save it with the name idp.example.com.xml (use this name because later steps assume it): https://idp.example.com/idp/profile/Metadata/SAML
2. Sign in to AWS and navigate to the IAM console.12
3. In the navigation pane, click Identity Providers, and then click Create Provider. The Create Provider wizard starts.
4. Choose SAML as the provider type.
5. Type ShibDemo as the name.
6. Upload the metadata
document you saved in Step 1 of this procedure as the Metadata Document, as shown in Figure 6.

Figure 6: The Create Provider Wizard

7. Click Next Step.
8. Review the Provider Name and Provider Type, and then click Create.

Create IAM Roles

Next, you create the IAM roles that federated users can assume. You create three roles for Example University: one for the biology department, one for the computer science and computer engineering departments to share, and one for the human resources department. Shibboleth controls access to the first two roles. The third role includes a condition, so that Shibboleth and AWS together manage access control (authorization).

In the IAM console, follow these steps:

1. In the navigation pane, click Roles, and then click Create New Role.
2. Type BIO for the name of the first role, and then click Next Step.
3. For the role type, select Role for Identity Provider Access.
4. Select Grant Web Single Sign-On (WebSSO) access to SAML providers, as shown in Figure 7.

Figure 7: Grant WebSSO Access to SAML Providers

By default, the wizard selects the SAML provider that you created earlier (see Figure 8). The wizard also shows that the Value field is set to https://signin.aws.amazon.com/saml. This is a required value.

Figure 8: Create Role Wizard

5. Click Next Step and verify that the role trust policy matches the following example (except that your policy includes your AWS account number instead of 000000000000). When you have verified the policy, click Next Step.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRoleWithSAML",
      "Principal": {
        "Federated": "arn:aws:iam::000000000000:saml-provider/ShibDemo"
      },
      "Condition": {
        "StringEquals": {
          "SAML:aud": "https://signin.aws.amazon.com/saml"
        }
      }
    }
  ]
}

6. In the Attach Policy step, do not select any options; for this exercise, the role does not actually need to have any permissions. Instead, click Next Step. You see a summary of the role, as shown in Figure 9.

Figure 9: Summary of the Created Role

Note the role's Amazon Resource Name, or ARN (arn:aws:iam::000000000000:role/BIO). Later parts of this walkthrough assume that the ARNs of the roles you create in this procedure match the suggested names (BIO, CSE, and HR).

7. Click Create Role to finish creating this role.
8. Repeat steps 1–7 to create another role, named CSE.
9. Repeat the steps again to create another role, named HR. For the HR role, you need to add a condition to check that at least one of the values of the SAML:eduPersonPrimaryOrgUnitDN attribute matches a required string. When you get to the Verify Role Trust step, copy and paste the following policy. Remember to replace 000000000000 with your AWS account number.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRoleWithSAML",
      "Principal": {
        "Federated": "arn:aws:iam::000000000000:saml-provider/ShibDemo"
      },
      "Condition": {
        "StringEquals": {
          "SAML:aud": "https://signin.aws.amazon.com/saml"
        },
        "ForAnyValue:StringEquals": {
          "SAML:eduPersonPrimaryOrgUnitDN": "ou=hr,dc=example,dc=com"
        }
      }
    }
  ]
}

The extra condition restricts the HR role to the manager of HR, because Example University uses the eduPersonPrimaryOrgUnitDN attribute to denote managers.

10. As with the BIO and CSE roles, do not select any policies to attach as the role's access policy, because no permissions are needed for this walkthrough.

Step 5: Configure Shibboleth IdP

Shibboleth IdP consumes data from a variety of sources and uses that data both to authenticate a user and to communicate the authenticated identity to external entities. You can configure nearly every part of the process, and you can extend with code the portions of the IdP that do not support configuration settings.

About Shibboleth Data Connectors

The basic flow for attribute data through Shibboleth is the same regardless of whether the data comes from a database, LDAP, or another source. A component called a data connector fetches attribute data from its source. The data connector defines a query or filter used to get the identity data. Predefined data connectors exist for relational databases, LDAP, and configuration files.

The results returned by the data connector persist into the next step in the process, which is the attribute definition. In this step, you can process the identity data pulled from the store (and potentially from other attributes defined earlier in the configuration) to produce attributes with the format you need. For example, an attribute definition can pull several columns of a relational database together with appropriate delimiters and format an email address. Like data connectors, Shibboleth supports predefined attribute definitions. One definition passes identity values through with no modification. With the mapped attribute definition, you can use regular expressions to transform the format of attributes. A number of special attribute definitions expose some of Shibboleth's internal mechanisms, which are interesting but will not be used here. However, these attributes are still in a
Shibboleth-specific internal format. You can attach attribute encoders to the attribute definitions so that you can serialize the internal attributes into whatever wire format you need. This walkthrough uses the SAML 2.0 string encoder to create the required XML for the SAML authentication responses.

After you have fetched, transformed, and encoded data into the correct format, you can use attribute filters to dictate which attributes to include in communication with various relying parties. Predefined attribute filter policies give you great flexibility in releasing attributes to relying parties. You can use filters to release attributes to specific relying parties only, and to release only specific values of the attributes, for specific users, or for specific authentication methods. You can also string together Boolean combinations of all of the above. A complete overview of the process appears in Figure 10.

Figure 10: Attribute Pipeline in Shibboleth

Fetch Attributes from OpenLDAP

Much of the configuration for getting Shibboleth to communicate with OpenLDAP is already in the existing files and just needs to be uncommented.

1. In your Amazon EC2 instance, open this file in your text editor: /home/ubuntu/server/shib-idp/conf/attribute-resolver.xml
2. In the file, find the section with the following heading: …
3. Uncomment that section. (The commented-out section ends before an element that has the ID eduPersonTargetedID.)
4. If you are using a newer schema that includes the definitions for eduPersonPrincipalNamePrior or eduPersonUniqueId (the eduPerson object class specification 2013-10), you can optionally add the following block after the block that you just uncommented. …
5. Find the section that begins with the following (this section has been commented out): …
6. Replace that
entire commented-out section with the following block, and then save and close the file. …

About Attribute Definitions

The most relevant part of an LDAP data connector block is the filter template near the bottom of the definition. When Shibboleth requests attributes for a user, it runs this query on the OpenLDAP database. OpenLDAP needs to authenticate and needs to know where to search; this is what the authenticationType and baseDN attributes define. The reference myLDAP is used to refer to this specific OpenLDAP query. If there are other attributes in OpenLDAP that require a different query, you can copy this block, give it a different ID, and change the query.

The block contains the following eduPerson attribute definition. …

The xsi:type="ad:Simple" attribute in these definitions indicates that these attributes simply copy their values from the data connector as is. This is appropriate for attributes that map directly to single columns of a database, to single attributes from OpenLDAP, or to static configuration data.

The id="eduPersonAffiliation" portion gives this configuration section an internal name that can be referenced elsewhere in the configuration. It is never released to relying parties.

The sourceAttributeID="eduPersonAffiliation" portion defines the name of the attribute released by the data connector to use as the source of data for this attribute definition. Because this attribute definition gets data from OpenLDAP, the configuration specifies a dependency on myLDAP, which is the ID that you assigned to the OpenLDAP data connector.

Finally, a number of encoders are attached. In the SAML 2.0 string encoder, the name and friendlyName are used to set the same portions of a SAML2 attribute.
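To make the encoder's output concrete, the following short Python sketch builds the kind of SAML 2.0 Attribute element that a string encoder produces for a resolved attribute. It is illustrative only and not part of the walkthrough; the OID shown is the one commonly registered for eduPersonAffiliation, and the NameFormat value is an assumption.

```python
import xml.etree.ElementTree as ET

# SAML 2.0 assertion namespace
SAML2 = "urn:oasis:names:tc:SAML:2.0:assertion"

def encode_attribute(name, friendly_name, values):
    """Build a SAML 2.0 Attribute element, roughly the shape a
    SAML 2.0 string encoder emits for a resolved attribute."""
    attr = ET.Element(
        f"{{{SAML2}}}Attribute",
        {
            "Name": name,
            "FriendlyName": friendly_name,
            "NameFormat": "urn:oasis:names:tc:SAML:2.0:attrname-format:uri",
        },
    )
    # A multivalued attribute gets one AttributeValue child per value
    for value in values:
        val = ET.SubElement(attr, f"{{{SAML2}}}AttributeValue")
        val.text = value
    return ET.tostring(attr, encoding="unicode")

# Example: eduPersonAffiliation with two values
xml = encode_attribute(
    "urn:oid:1.3.6.1.4.1.5923.1.1.1.1",  # eduPersonAffiliation OID
    "eduPersonAffiliation",
    ["member", "staff"],
)
print(xml)
```

Shibboleth performs this serialization for you; the sketch only shows why the encoder needs both the name and the friendlyName, and how multiple values of a multivalued attribute end up as sibling AttributeValue elements.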
Configuring AWS-Specific Attribute Definitions

To use SAML identity federation with AWS, you must configure two AWS-specific attributes. The first is a simple attribute that sets the name of the session granted to users. This value is captured in logs and displayed in the console when the user signs in. Good candidates for this value are a user's login name or email address. Some format restrictions exist for the value:

• It must be between 2 and 32 characters in length.
• It can contain only alphanumeric characters, underscores, and the following characters: +=,.@-
• It is typically a user ID (bobsmith) or an email address (bobsmith@example.com).
• It should not include blank spaces, such as often appear in a user's display name (Bob Smith).

This example uses the uid of the user from OpenLDAP, by setting the sourceAttributeID to uid and adding a dependency on the OpenLDAP data connector.

The other attribute that needs to be set is the list of roles that the user can assume. This could be as simple as a static value attached to all users in an organization, or as complex as a per-user, per-department ACL (access control list)–based value. This example uses a flexible option that is not difficult to implement.

To configure the attributes, follow these steps:

1. Edit the following file: /home/ubuntu/server/shib-idp/conf/attribute-resolver.xml
2. Insert the following block immediately after the heading "Attribute Definitions" and before …

Note: Replace 000000000000 with your AWS account number. Note also that the block includes the ARNs of the roles that you created earlier (for example, arn:aws:iam::000000000000:role/BIO).
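The session-name format rules listed above are easy to get wrong, so it can help to sanity-check candidate values before wiring up the attribute. The following illustrative Python check is not part of the walkthrough; the regular expression is an assumption derived directly from the rules as stated, not an official AWS validator.

```python
import re

# Derived from the stated rules: 2-32 characters, alphanumerics,
# underscores, and the characters + = , . @ - (assumed charset).
SESSION_NAME_RE = re.compile(r"^[\w+=,.@-]{2,32}$")

def is_valid_session_name(name: str) -> bool:
    """Return True if `name` satisfies the stated format rules."""
    return bool(SESSION_NAME_RE.match(name))

print(is_valid_session_name("bobsmith"))              # True: a user ID
print(is_valid_session_name("bobsmith@example.com"))  # True: an email address
print(is_valid_session_name("Bob Smith"))             # False: contains a space
```

A check like this is useful when the value comes from a directory attribute you do not fully control, such as a display name that may contain spaces.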
    arn:aws:iam::000000000000:role/BIO,arn:aws:iam::000000000000:saml-provider/ShibDemo
        *ou=biology*
    arn:aws:iam::000000000000:role/CSE,arn:aws:iam::000000000000:saml-provider/ShibDemo
        *ou=computersci*
        *ou=computereng*
    arn:aws:iam::000000000000:role/HR,arn:aws:iam::000000000000:saml-provider/ShibDemo
        *ou=hr*

With the mapped attribute definition, you can use a regular expression to map input values into output values. This example maps eduPersonOrgUnitDN to an IAM role (depending on the organizational unit) in order to give entire departments access to resources by using existing access control rules.

The attribute definition contains several value maps, each with its own pattern. Each of the values associated with eduPersonOrgUnitDN (because it is multivalued) is checked against the patterns specified in the SourceValue nodes. If the check finds a match, the ReturnValue value is added to the attribute definition. The format of the ReturnValue is a role ARN and a provider ARN, separated by a comma; the order of the two ARNs does not matter. If you are using regular expressions in the SourceValue fields, you can use back references in the ReturnValue, so that you can simplify the configuration by capturing the organizational unit and using a back reference, although delving into further possibilities of pattern matching is beyond our scope.

Release Attributes to Relying Parties

Sometimes attributes can contain sensitive data that is useful for authentication within the organization; no one should release that sensitive data outside of the organization. The first part of an attribute filter defines to whom the filter applies. By using an AttributeRequesterString filter policy
an administrator can choose the relying parties to whom to release the attributes. This example uses the entity ID of AWS: "urn:amazon:webservices".

This walkthrough uses a simple directory, so all possible values of all the eduPerson and AWS attributes are released to AWS. This allows you to write policies in IAM that can include conditions based on attributes that represent OpenLDAP information. You do this by including an AttributeRule element for each eduPerson or AWS attribute and setting its PermitValueRule to basic:ANY.

1. Edit the following file: /home/ubuntu/server/shib-idp/conf/attribute-filter.xml
2. Add the following block inside the AttributeFilterPolicyGroup element (before the closing tag and after the comments). When you are done, save and close the file. …

Enable Login Using OpenLDAP as a User Store

Shibboleth supports several authentication methods. By default, remote user authentication is configured, which passes through authentication from Tomcat. To authenticate against OpenLDAP, you must disable remote user authentication and enable user name/password authentication. User name/password authentication via JAAS and the login.config file is already defined in the configuration file; you just need to uncomment it. Follow these steps:

1. In the Amazon EC2 instance, edit the following file: /home/ubuntu/server/shib-idp/conf/handler.xml
2. Comment out the following block: …
3. Uncomment the following block, and then save and close the file: …
4. Edit the following file in order to configure the
OpenLDAP connection parameters: /home/ubuntu/server/shib-idp/conf/login.config

5. Find the block that begins with "Example LDAP authentication". Replace the entire commented section (which begins with edu.vt.middleware) with the following block:

    edu.vt.middleware.ldap.jaas.LdapLoginModule required
      ldapUrl="ldap://localhost"
      baseDn="ou=People,dc=example,dc=com"
      bindDn="cn=admin,dc=example,dc=com"
      bindCredential="password"
      userFilter="uid={0}";

Configure Shibboleth to Talk to AWS

Now you have an OpenLDAP directory, Shibboleth configured to use that identity store, and the IAM entities that AWS needs in order to establish trust with Shibboleth. The only thing left is to establish trust between Shibboleth (as the IdP) and AWS (as a service provider). You do this by configuring Shibboleth with the location of the AWS SAML 2.0 metadata document. A metadata document contains all the information needed for two parties to communicate, such as Internet endpoints and public keys.

Shibboleth can automatically refresh the AWS metadata when AWS changes it, by using a FileBackedHTTPMetadataProvider object. Alternatively, if an administrator wants to control the relationship manually, the administrator can manually download the metadata and use a FileSystemMetadataProvider.

1. In your Amazon EC2 instance, edit the following file: /home/ubuntu/server/shib-idp/conf/relying-party.xml
2. In the Metadata Configuration section, just below the IdPMD entry, add the following: …

The file contains settings that cause Shibboleth to apply a set of default configurations to AWS. You can find these settings inside the DefaultRelyingParty and AnonymousRelyingParty blocks.

3. To change the configuration for a specific relying party, insert the following block after the
DefaultRelyingParty block (after the closing tag). …

With this configuration, you can specify the following:

• defaultSigningCredentialRef – The keys used to sign and encrypt requests.
• ProfileConfiguration – Which SAML 1.x or SAML 2.0 profiles to respond to. Keep in mind that AWS supports only SAML2SSOProfile.
• assertionLifetime – The length of time (expiration) for the user to provide the authentication information to AWS before it is no longer valid.
• signResponses/signAssertions – The portions of the response to sign.
• maximumSPSessionLifetime – The length of the session that AWS should provide, based on the authentication information provided.

Test Configuration Changes by Using AACLI

You have configured Shibboleth! To apply the Shibboleth configuration changes, you must restart Tomcat. However, before you do that, it is best to test the configuration. You can use the attribute authority command line interface (AACLI) tool to simulate Shibboleth's attribute construction based on an arbitrary configuration directory. This allows you to copy a working configuration to a test directory, modify it, test it, and then copy it back. For the sake of this example, you set up AACLI to test the live configuration.

1. Edit the following file: ~/.bashrc
2. Add the following line to the file, and then save and close the file:

    alias aacli='sudo -E /home/ubuntu/server/shib-idp/bin/aacli.sh --configDir=/home/ubuntu/server/shib-idp/conf'

3. Run the following source command:

    source ~/.bashrc

4. Run the following AACLI command:

    aacli --requester "urn:amazon:webservices" --principal bobby

The attributes that are constructed for a given principal can be tested by filling in a principal's OpenLDAP uid. (In this case, you use the principal bobby, which exists in
the example LDAP database.) If all goes well, the command displays XML information that could be directly injected into a SAML 2.0 attribute statement block. If you see a series of stack traces instead, a misconfiguration is present; check the settings for the OpenLDAP data connector and the syntax of all the XML configuration files.

5. After the AACLI begins returning attributes, stop and then restart Tomcat by using the following commands:

    sudo /home/ubuntu/server/tomcat/bin/shutdown.sh
    sudo /home/ubuntu/server/tomcat/bin/startup.sh

Ensure that no stack traces occur in Tomcat or in the Shibboleth logs.

Step 6: Test Shibboleth Federation

As soon as the previous testing is working, you can test federation to AWS. In the Amazon EC2 instance, open a browser and navigate to the following URL:

    https://idp.example.com/idp/profile/SAML2/Unsolicited/SSO?providerId=urn:amazon:webservices

This initiates the SSO flow to AWS, and you see the page shown in Figure 11.

Figure 11: The Custom Login Page for the AWS Management Console

Type the user name bobby and use password for the password. (In the sample LDAP data, all the passwords are password.) You then go to the AWS Management Console, as shown in Figure 12.

Figure 12: Console for a User Logged In as Charlie, Using a Role Named CSE

To try a different user, log out by navigating to https://idp.example.com/idp/profile/Logout. Then try logging in as the user Dean. Notice that this user is unable to federate. This is because the HR role policy specifies that SAML:eduPersonPrimaryOrgUnitDN must be ou=hr,dc=example,dc=com. The user bobby has this value and can federate as a member of the HR department; however, Dean's primary organizational unit is ou=People,dc=example,dc=com. As noted earlier, administrators have the flexibility
to control access in two places. The first place is on the Shibboleth side, in the attribute resolver, by attaching specific AWS role attributes to specific users; the role that is associated with a user then determines what the user can do in AWS. The second place is in the IAM role trust policy, where you can add conditions based on SAML attributes that limit who can assume the role. It is up to you to choose which of these two strategies to use (or both). For a complete list of attributes that you can use in role trust policies, see the IAM documentation.13

Conclusion

Now that you have integrated your on-premises LDAP infrastructure with IAM, you can spend less time synchronizing permissions between on-premises systems and the cloud. The combination of SAML attributes and RBAC means that you can author fine-grained access control policies that address both your LDAP user data and your AWS resources.

Further Reading

For more information about installing and configuring OpenLDAP and Shibboleth, see the following:

• Installing an OpenLDAP server14
• How To Install and Configure a Basic LDAP Server on an Ubuntu 12.04 VPS15
• LDIF examples16
• Edit the Tomcat Configuration File17
• Preparing Apache Tomcat for the Shibboleth Identity Provider18

For Shibboleth attributes and authentication responses, the Shibboleth documentation wiki provides extensive information. These topics contributed to the creation of this tutorial:

• LDAP Data Connector19
• Shibboleth attributes:
  o Define and Release a New Attribute in an IdP20
  o Simple Attribute Definition21
  o Mapped Attribute Definition22
  o Define a New Attribute Filter23
• Shibboleth User Name/Password Handler24
• Adding Metadata providers25
• Per-Service Provider Configuration26

Notes

1.
http://en.wikipedia.org/wiki/Ldap
2. http://aws.amazon.com/about-aws/whats-new/2013/11/11/aws-identity-and-access-management-iam-adds-support-for-saml-security-assertion-markup-language-2-0/
3. See the "Install Tomcat" section.
4. See the "Install Shibboleth IdP" section.
5. See the "Configure Shibboleth IdP" section.
6. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html
7. http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-free-tier.html
8. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html
9. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html
10. http://en.wikipedia.org/wiki/Ldap
11. http://tomcat.apache.org/
12. https://console.aws.amazon.com/iam/home?#home
13. http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_ElementDescriptions.html#conditionkeys-saml
14. https://help.ubuntu.com/lts/serverguide/openldap-server.html#openldap-server-installation
15. https://www.digitalocean.com/community/articles/how-to-install-and-configure-a-basic-ldap-server-on-an-ubuntu-12-04-vps
16. http://www.zytrax.com/books/ldap/ch5/step4.html#step4-ldif
17. http://tomcat.apache.org/tomcat-7.0-doc/ssl-howto.html#Edit_the_Tomcat_Configuration_File
18. https://wiki.shibboleth.net/confluence/display/SHIB2/IdPApacheTomcatPrepare
19. https://wiki.shibboleth.net/confluence/display/SHIB2/ResolverLDAPDataConnector
20. https://wiki.shibboleth.net/confluence/display/SHIB2/IdPAddAttribute
21. https://wiki.shibboleth.net/confluence/display/SHIB2/ResolverSimpleAttributeDefinition
22. https://wiki.shibboleth.net/confluence/display/SHIB2/ResolverMappedAttributeDefinition
23. https://wiki.shibboleth.net/confluence/display/SHIB2/IdPAddAttributeFilter
24. https://wiki.shibboleth.net/confluence/display/SHIB2/IdPAuthUserPass
25.
https://wiki.shibboleth.net/confluence/display/SHIB2/IdPMetadataProvider
26. https://wiki.shibboleth.net/confluence/display/SHIB2/IdPRelyingParty",General,consultant,Best Practices
Sizing_Cloud_Data_Warehouses,Sizing Cloud Data Warehouses

Recommended Guidelines to Sizing a Cloud Data Warehouse

January 2019

This document has been archived. For the latest technical content about cloud data warehouses, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents AWS's current product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS's products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. AWS's responsibilities and liabilities to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 1
Sizing Guidelines 2
Redshift Cluster Resize 4
Conclusion 5
Contributors 6
Document Revisions 6

Abstract

This whitepaper describes a process to determine an appropriate configuration for your migration to a cloud data warehouse. The process is appropriate for typical data migrations to a cloud data warehouse such as Amazon Redshift. The intended audience includes database administrators, data engineers, data architects, and other data warehouse stakeholders. Whether you are performing a PoC (proof of concept) evaluation or a production migration, and whether you are migrating from an on-premises appliance or another cloud data warehouse, you can follow the simple guidelines in this whitepaper to help you increase the chances
of delivering a data warehouse cluster with the desired storage, performance, and cost profile.

Introduction

One of the first tasks in migrating to any data warehouse is sizing the data warehouse appropriately: determining the appropriate number of cluster nodes and their compute and storage profiles. Fortunately, with cloud data warehouses such as Amazon Redshift, it is a relatively straightforward task to make immediate course corrections and resize your cluster up or down. However, sizing a cloud data warehouse based on the wrong type of information can lead to your PoC evaluations and production environments being executed on suboptimal cluster configurations. Resizing a cluster might be easy, but repeating PoCs and dealing with change control procedures for production environments can potentially be more time consuming, risky, and costly, which puts your project milestones at risk.

Migrations of several petabytes to exabytes of uncompressed data typically benefit from a more holistic sizing approach that factors in existing data warehouse properties, data profiles, and workload profiles. Holistic sizing approaches are more involved and fall under the category of a professional services engagement; for more information, contact AWS.

Sizing Guidelines

For migrations of approximately one petabyte or less of uncompressed data, you can use a simple storage-centric sizing approach to identify an appropriate data warehouse cluster configuration. With the simple sizing approach, your organization's uncompressed data size is the key input for sizing your Redshift cluster. However, you must refine that size a little. Redshift typically achieves 3x–4x data compression, which means that the data persisted in Redshift is typically 3–4 times smaller than the amount of uncompressed data. In addition, it is always a best practice to maintain 20% of free
capacity in a Redshift cluster, so you should increase your compressed data size by a factor of 1.25 to ensure a healthy amount (20%) of free space.

The simple sizing approach can be represented by this equation:

    cluster storage capacity = (uncompressed data size / compression ratio) × 1.25

This equation is appropriate for typical data migrations, but it is important to note that suboptimal data modelling practices could artificially lead to insufficient storage capacity.

Amazon Redshift has four basic node types, or instance types, with different storage capacities. For more information on Redshift instance types, see the Amazon Redshift Clusters documentation.

Basic Redshift cluster information:

    Instance Family    Instance Name    vCPUs    Memory       Storage        # Slices
    Dense storage      ds2.xlarge       4        31 GiB       2 TB HDD       2
    Dense storage      ds2.8xlarge      36       244 GiB      16 TB HDD      16
    Dense compute      dc2.large        2        15.25 GiB    160 GB SSD     2
    Dense compute      dc2.8xlarge      32       244 GiB      2.56 TB SSD    16

In an example scenario, the fictitious company Example.com would like to migrate 100 TB of uncompressed data from its on-premises data warehouse to Amazon Redshift. Using a conservative compression ratio of 3x, you can expect the compressed data profile in Redshift to decrease from 100 TB to approximately 33 TB. You factor in an additional 20% size increase to ensure a healthy amount of free space, which gives you approximately 42 TB of required storage capacity in your Redshift cluster.

You now have your target storage capacity of 42 TB. There are multiple Redshift cluster configurations that can satisfy that requirement. The Example.com VP of Data Analytics wants to start out small, select the least expensive option for their cloud data warehouse, and then scale up as necessary. With that extra requirement, you can configure your Redshift cluster using the dense storage ds2.xlarge node type, which has 2 TB of storage capacity per node. With this information, your simple sizing equation is:

    number of nodes = 42 TB / 2 TB per node = 21 nodes

You should also consider the following information
You should also consider the following information about this example Redshift cluster configuration:

Cluster Type  | Instance Type | Nodes | Memory   | Compute   | Cluster Capacity | Cost ($/month)
Dense storage | ds2.xlarge    | 21    | 651 GiB  | 84 cores  | 42 TB            | $x
Dense storage | ds2.8xlarge   | 3     | 732 GiB  | 108 cores | 48 TB            | $1.2x
Dense compute | dc2.8xlarge   | 17    | 4148 GiB | 544 cores | 44 TB            | $4.52x

If initial testing shows that the Redshift cluster you selected is underpowered or overpowered, you can use the straightforward resizing capabilities available in Redshift to scale the Redshift cluster configuration up or down for the necessary price and performance.

Redshift Cluster Resize

After your data migration from your on-premises data warehouse to the cloud is complete, over time it is normal to make incremental node additions or removals from your cloud data warehouse. These changes help you to maintain the cost, storage, and performance profiles you need for your data warehouse. With Amazon Redshift, there are two main methods to resize your cluster:

• Elastic resize – Your existing Redshift cluster is modified to add or remove nodes, either manually or with an API call. This resize typically requires approximately 15 minutes to complete. Some tasks might continue to run in the background, but your Redshift cluster is fully available during that time.

• Classic resize – Enables a Redshift cluster to be reconfigured with a different node count and instance type. The original cluster enters read-only mode during the resize, which can take multiple hours.

In addition, Amazon Redshift supports concurrency-based scaling, which is a feature that adds transient capacity to your cluster during concurrency spikes. This in effect temporarily increases the number of Amazon Redshift nodes processing your queries. With concurrency scaling, Redshift automatically adds transient clusters to your Redshift cluster to handle concurrent requests with consistently fast performance. This means that your Redshift cluster is temporarily scaled up with additional compute nodes to provide increased concurrency and consistent performance.
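The elastic resize described above can be triggered through the Redshift ResizeCluster API. The sketch below only builds the parameters one might pass to boto3's `resize_cluster` call; the cluster identifier and node count are hypothetical, and the actual call is shown commented out so the parameter shape is the focus.

```python
def elastic_resize_params(cluster_id, num_nodes):
    """Build keyword arguments for redshift.resize_cluster().
    Classic=False requests an elastic resize (~15 minutes, cluster
    stays available); Classic=True would request a classic resize
    (read-only mode, potentially multiple hours)."""
    return {
        "ClusterIdentifier": cluster_id,
        "NumberOfNodes": num_nodes,
        "Classic": False,
    }

# Hypothetical cluster name; e.g., growing the example cluster to 24 nodes.
params = elastic_resize_params("examplecom-dw", 24)
print(params)

# import boto3
# redshift = boto3.client("redshift")
# redshift.resize_cluster(**params)
```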
For more information about resizing a Redshift cluster, see:

• Resizing a Cluster (Redshift Documentation): https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#cluster-resize-intro
• Elastic Resize (What's New): https://aws.amazon.com/about-aws/whats-new/2018/11/amazon-redshift-elastic-resize/
• Elastic Resize (Blog Post): https://aws.amazon.com/blogs/big-data/scale-your-amazon-redshift-clusters-up-and-down-in-minutes-to-get-the-performance-you-need-when-you-need-it/
• Concurrency Scaling (Blog Post): https://www.allthingsdistributed.com/2018/11/amazon-redshift-performance-optimization.html

Conclusion

It is important that you size your cloud data warehouse using the right information and approach. Although it is easy to resize a cloud data warehouse (such as Amazon Redshift) up or down to achieve a different cost or performance profile, the change control procedures for modifying a production environment, repeating a PoC evaluation, and so on could pose significant challenges to project milestones. You can follow the simple sizing approach outlined in this whitepaper to help you identify the appropriate cluster configurations for your data migration.

Contributors

Contributors to this document include:

• Asser Moustafa, Solutions Architect Specialist, Data Warehousing
• Thiyagarajan Arumugam, Solutions Architect Specialist, Data Warehousing

Document Revisions

Date         | Description
January 2019 | First publication

SoftNAS Architecture on AWS
April 2017

This paper has been archived. For the latest technical content about the AWS Cloud, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents
AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. SoftNAS and the SoftNAS logo are trademarks or registered trademarks of SoftNAS, Inc. All rights reserved.

Contents

• Introduction
• About SoftNAS Cloud
• Architecture Considerations
• Application and Data Security
• Performance
• Using Amazon S3 with SoftNAS Cloud
• Network Security
• Data Protection Considerations
• SoftNAS Cloud is a Copy-on-Write (COW) File System
• Automatic Error Detection and Correction
• SoftNAS Cloud Snapshots
• SoftNAS SnapClones™
• Amazon EBS Snapshots
• Deployment Scenarios
• High-Availability Architecture
• Single Controller Architecture
• Hybrid Cloud Architecture
• Automation Options
• Conclusion
• Contributors
• Further Reading
• SoftNAS References
• Amazon Web Services References

Abstract

Network Attached Storage (NAS) software is commonly deployed to provide shared file services, data protection, and high availability to users and applications. SoftNAS Cloud, a popular NAS solution that can be deployed from the Amazon Web Services (AWS) Marketplace, is designed to support a variety of market verticals, use cases, and workload types. Increasingly, SoftNAS Cloud is deployed on the AWS platform to enable block and file storage services through Common Internet File System (CIFS), Network File System (NFS), Apple File
Protocol (AFP), and Internet Small Computer System Interface (iSCSI). This paper addresses architectural considerations when deploying SoftNAS Cloud on AWS. It also provides best practice guidance for security, performance, high availability, and backup.

Introduction

Network Attached Storage (NAS) systems enable data and file sharing and are used for business-critical applications and data management. NAS systems are optimized to balance performance, interoperability, data reliability, and recoverability. Although widely deployed by IT in traditional data center environments, NAS software is increasingly used on AWS, a flexible, cost-effective, easy-to-use cloud-computing platform. Deploying NAS on Amazon Elastic Compute Cloud (Amazon EC2) provides a solution for applications that require the benefits of NAS storage in a software form factor.

About SoftNAS Cloud

SoftNAS Cloud is a software-defined NAS filer delivered as a virtual appliance running within Amazon EC2. SoftNAS Cloud provides NAS capabilities suitable for the enterprise, including Multi-Availability Zone (Multi-AZ) high availability with automatic failover in the AWS Cloud. SoftNAS Cloud, which runs within the customer's AWS account, offers business-critical data protection required for nonstop operation of applications, websites, and IT infrastructure on AWS. This paper doesn't cover all SoftNAS Cloud features. For more information, see www.softnas.com.

Architecture Considerations

This section provides information critical to a successful SoftNAS Cloud installation. This information includes application and data security, performance, interaction with Amazon Simple Storage Service (Amazon S3), and network security.

Application and Data Security

Security and protection of customer data are the highest priorities when working with SoftNAS Cloud on AWS. When you use SoftNAS Cloud in conjunction with AWS security features such as Amazon Virtual Private Cloud (Amazon VPC),
Amazon VPC Security Groups, and AWS Identity and Access Management (IAM) roles, you can deploy a secure data storage solution.

SoftNAS Cloud uses the CentOS Linux distribution, which is managed, updated, and patched as part of a normal release cycle. You can use SoftNAS StorageCenter™, the web-accessible SoftNAS Cloud administration console, to check the current software revision and apply available updates. For security and compliance reasons, the SoftNAS technical support team should approve any custom package before it is installed on a SoftNAS Cloud instance.

Web-based administration through SoftNAS StorageCenter is SSL-encrypted and password-protected by default. Optional two-factor authentication is also available for use. You can administer SoftNAS Cloud through SSH and a secure REST API. On AWS, all SSH sessions use public/private key access control. Logging in as root is prohibited. Administrative access through the API and command line interface (CLI) over SSH is SSL-encrypted and authenticated. Iptables, a commonly used software firewall, is included with SoftNAS Cloud and can be customized to accommodate more restrictive and finer-grained security controls.

Data access is performed across a private network by Network File System (NFS), Common Internet File System (CIFS), Apple File Protocol (AFP), and Internet Small Computer System Interface (iSCSI). You can also restrict the list or range of client addresses allowed to perform data access. SoftNAS Cloud offers encryption options for data security, both in flight and at rest. If NFS is used, all Linux authentication schemes are available, including Network Information Service (NIS), Lightweight Directory Access Protocol (LDAP), Kerberos, and restrictions based on the user ID (UID) and group ID (GID). Using CIFS, you manage security through SoftNAS StorageCenter, facilitating basic Windows user and group permissions. Active Directory integration is supported for more advanced
user and permissions management in Windows environments.

The SnapReplicate™ feature provides block-level replication between two SoftNAS Cloud instances. SnapReplicate between source and target SoftNAS Cloud instances sends all data through encrypted SSH tunnels and authenticates using RSA (public key infrastructure, PKI). Data is encrypted in transit using industry-standard ciphers. The default cipher for encryption is Blowfish-CBC, selected for its balance of speed and security. However, you can use any cipher supported by SSH, including AES 256-bit CBC.

SoftNAS Cloud uses the IAM service to control the SoftNAS Cloud appliance's access to other AWS services. IAM roles are designed to allow applications to securely make API calls from an instance without requiring the explicit management and storage of access keys. When an IAM role is applied to an EC2 instance, the role handles key management, rotating keys periodically and making them available to applications through Amazon EC2 metadata.

Performance

The performance of a NAS system on Amazon EC2 depends on many factors, including the Amazon EC2 instance type, the number and configuration of Amazon Elastic Block Store (Amazon EBS) volumes, the type of Amazon EBS volume, the use of Provisioned IOPS with Amazon EBS, and the application workload. Benchmark your application on several Amazon EC2 instance types and storage configurations to select the most appropriate configuration.

SoftNAS Cloud provides Amazon Machine Images (AMIs) that support both paravirtual (PV) and hardware virtual machine (HVM) virtualization. To take advantage of special hardware extensions (CPU, network, and storage) and for optimal performance, SoftNAS recommends that you use a current generation instance type and an HVM AMI with single root input/output virtualization (SR-IOV) support.

To increase the performance of your system, you need to know which of the server's resources is the performance
constraint. If CPU or memory limits your system performance, you can scale up the memory, compute, and network resources available to the software by choosing a larger Amazon EC2 instance type. Use StorageCenter dashboard performance charts and Amazon CloudWatch to monitor your performance and throughput metrics.

Depending on the instance type and size chosen, EC2 instances are allocated varying amounts of CPU, memory, and network capabilities. Some instance families have higher ratios of CPU to memory or higher ratios of memory to CPU. In general, to achieve the best performance from your SoftNAS Cloud virtual appliance, select an instance with large amounts of memory, up to 70 percent of which will be dedicated to high-speed dynamic random-access memory (DRAM) cache. If you require more than 120 MB/s NAS throughput for more demanding use cases, select an instance with advanced networking. AWS provides instances that support 10 and 20 Gbps network interfaces. If available, choose an EBS-optimized instance, which uses a dedicated network path to EBS storage.

For production workloads, SoftNAS recommends starting with a larger EC2 instance size, coupled with monitoring of CloudWatch metrics as workloads are increased to their typical levels. This ensures applications have sufficient IOPS and throughput as they're brought online. Continue monitoring the application using SoftNAS StorageCenter and CloudWatch metrics, in particular CPU and network usage, to determine how well the chosen instance size is serving your unique workloads. After a period of time (e.g., 30 days) with your workload in production, it will become apparent if the instance is well matched to the production workloads. As your load increases, if CPU or network usage reaches 75 percent or higher, you might need to increase instance size to achieve full throughput at low latencies. If CPU and network usage are below 40 to 50 percent, you can consider decreasing the instance
size during a maintenance window to reduce operating costs. SoftNAS does not recommend using T1 or T2 instances, as they are designed for burstable workloads and can run out of CPU credits during sustained heavy usage. At the time of this writing, SoftNAS recommends the m4.2xlarge as a minimum default AWS instance size, the m4.4xlarge for medium workloads, and the m4.10xlarge for heavier workloads, as seen in Figure 1 below. A SoftNAS representative can help with further sizing guidance.

About RAM Usage

SoftNAS Cloud allocates 50 percent of available RAM for use as Zettabyte File System (ZFS) file system cache. Remaining RAM is used by the Linux operating system, SoftNAS Cloud processes, and NAS services. It's typical to see 80 to 90 percent of RAM allocated and in use.

Figure 1: AWS instance to workload (later instance families are also supported)

If your performance is limited by disk I/O, you can make configuration changes to improve the performance of your disk and caching resources.

Multilevel Cache

Read-intensive workloads benefit from additional RAM as level 1 cache (ZFS ARC) plus level 2 cache (ZFS L2ARC). Leverage the ephemeral SSD disks attached to certain EC2 instances to provide additional high-speed read cache. Because data on ephemeral disks can be lost whenever an EC2 instance stops and restarts, or if underlying hardware changes or fails, use ephemeral disks only for read cache purposes and never as a write log.

Amazon EBS Performance Optimizations

Because Amazon EBS is connected to an EC2 instance over the network, instances with higher network bandwidth can provide more Amazon EBS performance. Some instance types support the Amazon EBS-optimized flag (ec2:EbsOptimized). This flag provides a dedicated network interface for Amazon EBS-bound traffic (storage I/O) and is designed to reduce variability in storage performance due to contention with network I/O.
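The scale-up/scale-down guidance earlier in this section (sustained utilization at or above 75 percent, or below roughly 40 to 50 percent) can be expressed as a simple decision rule. This is an illustrative sketch, not a SoftNAS tool; the function name, the conservative 40 percent scale-down threshold, and the return strings are assumptions.

```python
def instance_size_recommendation(cpu_pct, network_pct):
    """Apply the monitoring guidance above: sustained CPU or network
    usage at or above 75% suggests scaling up; both metrics below ~40%
    suggest the instance can be scaled down during a maintenance
    window to reduce operating costs."""
    busiest = max(cpu_pct, network_pct)
    if busiest >= 75:
        return "scale up"
    if busiest < 40:  # conservative end of the 40-50% range
        return "scale down"
    return "keep current size"

print(instance_size_recommendation(80, 30))  # scale up
print(instance_size_recommendation(25, 35))  # scale down
```

In practice you would feed this from CloudWatch averages over a representative period (e.g., 30 days), not from a single sample.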
The chart here provides an outline of expected bandwidth, throughput, and Max IOPS per instance type and size. For SSD-based volume types, Amazon EBS measures an I/O operation as one that is 256 KB or smaller. I/O operations larger than 256 KB are counted in 256 KB increments. For example, a 1024 KB I/O would count as four 256 KB IOPs. Amazon EBS also combines smaller I/O operations into a single operation where possible to achieve higher performance for all volume types.

Benefits of Each EBS Volume Type and Relevant Storage Application

Magnetic Backed

Magnetic-backed volume types support higher block sizes, up to 1024 KB. Throughput Optimized HDD (st1) and Cold HDD (sc1) Amazon EBS volume types are based on magnetic storage technology. The Throughput Optimized HDD (st1) volume type is designed for sequential read/write workloads (e.g., big data). It can achieve very high throughput (500 MB/s) for sequential read/write workloads (compared to 160 MB/s and 320 MB/s for SSD-backed gp2 and io1, respectively). Generally, big data workloads operate on very large sequential datasets and generate data for storage in a similar way. The st1 volume type has a baseline performance of 40 MB/s per terabyte (TB) of allocated storage and, like gp2, can burst beyond the baseline performance for a short period of time.

The Cold HDD (sc1) volume type is designed for high density and infrequent access workloads. This volume type is suitable for cold storage (infrequent access) applications where low cost is important. Unlike st1, the baseline performance of an sc1 volume is 12 MB/s per TB of allocated storage. It's important to note that Amazon S3 achieves high availability (HA) by default within a single region, whereas sc1 volumes have to be mirrored across Availability Zones to achieve parity with Amazon S3 in durability and availability of the data. (This doubles and triples the cost of sc1 when compared to Amazon S3.) Nevertheless, depending on certain access patterns (e.g., cold versus warm) of the data, the cost of sc1 volumes can be cheaper for certain workloads.
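The accounting rules described in this section can be sketched as two small helpers: the 256 KB I/O counting for SSD-backed volumes, and the per-TB baseline throughput for magnetic-backed volumes. This is an illustrative sketch; the function names are assumptions, and the baseline calculation ignores any per-volume maximum throughput cap.

```python
import math

def ssd_io_count(io_size_kb):
    """SSD-backed EBS counts an I/O of 256 KB or smaller as one
    operation; larger I/Os are counted in 256 KB increments."""
    return max(1, math.ceil(io_size_kb / 256))

def hdd_baseline_mbps(allocated_tb, per_tb_mbps):
    """st1 volumes have a 40 MB/s-per-TB baseline; sc1 volumes,
    12 MB/s per TB (both scale with allocated storage)."""
    return allocated_tb * per_tb_mbps

print(ssd_io_count(1024))        # a 1024 KB I/O counts as 4 IOPs
print(hdd_baseline_mbps(5, 40))  # a 5 TB st1 volume: 200 MB/s baseline
```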
SSD Backed

General Purpose (gp2) and Provisioned IOPS (io1) SSD volumes can achieve faster IOPS performance and very high throughput on random read/write workloads when compared to magnetic disks, but at a higher price point. However, gp2 and io1 volume types are limited to a throughput of 320 MB/s (160 MB/s for gp2, 320 MB/s for io1).

General Purpose (gp2) volumes provide a fixed 1:3 ratio between gigabytes and IOPS provisioned, so a 100 GB General Purpose volume provides a baseline of 300 IOPS. Gp2 volumes less than 1 TB in size can also burst for short periods up to 3000 IOPS. You can provision General Purpose volumes up to 16 TB and 10000 IOPS.

Provisioned IOPS (io1) volumes are intended for workloads that demand consistent performance, such as databases. You can create Provisioned IOPS volumes up to 16 TB and 20000 IOPS. Over a year, Amazon EBS Provisioned IOPS volumes are designed to deliver within 10 percent of the Provisioned IOPS performance 99.9 percent of the time. There are differences in total throughput capabilities between Provisioned IOPS (io1) and General Purpose SSD (gp2) volumes. Io1 volumes are designed to provide up to 320 MB/second of throughput, while gp2 volumes are designed to provide up to 160 MB/second.

RAID

If you need more I/O capabilities than a single volume can provide, you can create an array of volumes with redundant array of independent disks (RAID) software to aggregate the performance capabilities of each volume in the array. For example, a stripe of two 4000 IOPS volumes allows for a theoretical maximum of 8000 IOPS. RAID 0 and RAID 10 are the two RAID levels recommended for use with Amazon EBS. RAID 0, or striping, has the advantage of providing a linear performance increase with every volume added to the array (up to the maximum capabilities of the host instance). Two 4000 IOPS volumes provide 8000 IOPS, three 4000 IOPS volumes provide 12000 IOPS, and so on.
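The gp2 baseline and RAID 0 aggregation arithmetic above can be sketched as follows. This is an illustrative sketch under the figures quoted in this paper (3 IOPS per GB, a 10000 IOPS gp2 maximum); the function names are assumptions, and small-volume minimums are ignored.

```python
def gp2_baseline_iops(size_gb):
    """gp2 provisions a fixed 3 IOPS per GB (the 1:3 GB:IOPS ratio
    above), capped at the 10000 IOPS maximum cited in this paper."""
    return min(size_gb * 3, 10000)

def raid0_iops(per_volume_iops, volume_count):
    """RAID 0 striping aggregates IOPS linearly across volumes
    (up to the host instance's own limits)."""
    return per_volume_iops * volume_count

print(gp2_baseline_iops(100))  # 300 IOPS baseline for a 100 GB volume
print(raid0_iops(4000, 2))     # two 4000 IOPS volumes -> 8000 IOPS
```

Note that the linear scaling only models performance; as the surrounding text explains, RAID 0 also aggregates the failure rate of every volume in the stripe.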
However, because RAID 0 does not provide redundancy, it has less durability than a single volume. It also aggregates the failure rate of each volume in the array.

RAID 10 is a good compromise because it provides increased redundancy, aggregates the read performance of all volumes in the array, and provides a mirror of stripes in the array. However, RAID 10 isn't without drawbacks. There is a 50 percent penalty to write performance and a 50 percent reduction in available storage capacity. This penalty is due to half of the disks in the array being reserved for a mirror. RAID 10 has the same write penalty as RAID 1.

RAID 5 and 6 are not recommended because parity calculations incur significant overhead without dramatically increasing the durability of the volume set. Such a large write penalty makes these RAID levels very expensive to run in terms of both dollars and I/O. In general, RAID using mirroring or parity for increased durability adds extra steps and reduces performance while not necessarily increasing the data's durability. Amazon EBS has its own durability mechanisms. It can be supplemented with Amazon S3-backed snapshots and SoftNAS replication to more than one Availability Zone.

DRAM cache can dramatically increase read IOPS performance. Choose instances with more memory for the best read IOPS and throughput. For an even larger read cache, choose instance types with ephemeral SSD locally attached disks and attach an SSD cache device to each storage pool. To ensure their availability, attach local SSD ephemeral disks to the SoftNAS instance when you create the instance.

Many instance types provide instance store, or "ephemeral," volumes. Although SoftNAS doesn't support the use of these volumes for dataset storage, you can use them as a read cache for storage pools. These volumes are located physically inside the underlying host of the instance and are not affected by performance variability from network overhead.
Although this varies by instance type, most instance-store volumes (especially on newer instance types) are SSD volumes. However, stopping and starting an instance can move it to another underlying host, which causes all data on these volumes to be lost. This isn't an issue for caching but is detrimental for dataset storage.

If you require additional write caching or IOPS, you can attach SSD-backed Amazon EBS volumes to a storage pool. The use of locally attached ephemeral disks for write cache isn't recommended. Consider your workload requirements and priorities. If the amount of storage and cost take priority over speed, magnetic EBS volumes might be the right choice. General Purpose SSD or Provisioned IOPS volumes offer the best mix of price, performance, and total storage space. With AWS and SoftNAS Cloud, you can add more storage or configure a different type of storage on the fly to address a variety of price or performance needs.

Using Amazon S3 with SoftNAS Cloud

SoftNAS Cloud provides support for a feature known as SoftNAS S3 Cloud Disks. These are abstractions of Amazon S3 storage presented as block devices. By leveraging Amazon S3 storage, SoftNAS Cloud can scale cloud storage to practically unlimited capacity. You can provision each cloud disk to hold up to four petabytes (PB) of data. If a larger data store is required, you can use RAID to aggregate multiple cloud disks.

Each SoftNAS S3 Cloud Disk occupies a single Amazon S3 bucket in AWS. The administrator chooses the AWS Region in which to create the S3 bucket and cloud disk. For best performance, choose the same region for both the SoftNAS Cloud EC2 instance and its S3 buckets. SoftNAS Cloud storage pools and volumes using cloud disks have the full enterprise-grade NAS features (for example, deduplication, compression, caching, storage snapshots, and so on) available and can be readily published for shared access through NFS, CIFS, AFP, and iSCSI.
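The capacity arithmetic above (up to 4 PB per cloud disk, with RAID used to aggregate disks beyond that) can be sketched as a small helper. This is an illustrative sketch only; the function name is an assumption.

```python
import math

def s3_cloud_disks_needed(dataset_pb, max_disk_pb=4):
    """Each SoftNAS S3 Cloud Disk can hold up to 4 PB of data; larger
    data stores aggregate multiple cloud disks (e.g., with RAID).
    Each cloud disk occupies a single S3 bucket."""
    return math.ceil(dataset_pb / max_disk_pb)

print(s3_cloud_disks_needed(10))  # a 10 PB data store needs 3 cloud disks
print(s3_cloud_disks_needed(4))   # 4 PB still fits on a single cloud disk
```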
When you use a cloud disk, use a block device local to the SoftNAS Cloud virtual appliance as a read cache to reduce Amazon S3 I/O charges and improve IOPS and performance for read-intensive workloads. For best S3 cloud disk performance and security, specify an S3 endpoint within the VPC in which you deploy SoftNAS Cloud. The S3 endpoint ensures S3 traffic is optimally routed through the VPC and not across the NAT gateway or Internet, which is slower and less secure. You can also encrypt S3 cloud disks to protect all Amazon S3 I/O should it need to travel over the Internet or outside a VPC (e.g., from on premises or across regions).

Network Security

Amazon VPC is a logically separated section of the AWS Cloud that provides you with complete control over the networking configuration. This includes the provisioning of an IP space, subnet size and scope, access control lists, and route tables. You can configure subnets inside an Amazon VPC as either public or private. The difference between public and private subnets is that a public subnet has a direct route to the Internet; a private one does not. When you configure an Amazon VPC for use with SoftNAS Cloud, consider the level of access that your use case requires. If the SoftNAS Cloud virtual appliance doesn't need to be accessed from the Internet, consider placing it in private Amazon VPC subnets. To leverage SoftNAS S3 Cloud Disks, the SoftNAS Cloud virtual appliance must have a way to access the S3 bucket, either through the Internet or a configured VPC endpoint.

A VPC Security Group acts as a virtual firewall for your instance to control inbound and outbound traffic. For each Security Group, you add rules that control the inbound traffic to instances and a separate set of rules that control the outbound traffic. Open only those ports that are required for the operation of your application. Restrict access to specific remote subnets or hosts.
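The "open only required ports, restrict the source" guidance above can be sketched as the `IpPermissions` structure one might pass to EC2's `authorize_security_group_ingress` call (e.g., via boto3). The management CIDR and security group ID here are hypothetical placeholders, and the actual API call is shown commented out.

```python
MGMT_CIDR = "10.0.1.0/24"  # hypothetical management subnet

def ingress_rule(protocol, port, cidr):
    """Build one IpPermissions entry restricting a single port
    to a specific source range, rather than opening it broadly."""
    return {
        "IpProtocol": protocol,
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": cidr}],
    }

# Management access only: SSH and SSL-encrypted StorageCenter (HTTPS).
management_rules = [ingress_rule("tcp", p, MGMT_CIDR) for p in (22, 443)]
print(management_rules)

# import boto3
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(
#     GroupId="sg-0123456789abcdef0",  # hypothetical security group
#     IpPermissions=management_rules)
```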
For a SoftNAS Cloud installation, determine which ports must be opened to allow access to required services. These ports can be divided into three categories: management, file services, and high availability.

Open the following ports to manage SoftNAS Cloud through the SoftNAS StorageCenter and SSH. As the following table indicates, you should limit the source to hosts and subnets where management clients are located.

Management

Type  | Protocol | Port | Source
SSH   | TCP      | 22   | Management
HTTPS | TCP      | 443  | Management

When providing file services, first determine which services you will provide. The following tables show which ports to open for security group configuration. As the tables indicate, the source should be limited to the clients and subnets that consume these services.

AFP

Type            | Protocol | Port | Source
Custom TCP Rule | TCP      | 548  | Clients
Custom TCP Rule | TCP      | 427  | Clients

NFS

Type            | Protocol | Port | Source
Custom TCP Rule | TCP      | 111  | Clients
Custom TCP Rule | TCP      | 2010 | Clients
Custom TCP Rule | TCP      | 2011 | Clients
Custom TCP Rule | TCP      | 2013 | Clients
Custom TCP Rule | TCP      | 2014 | Clients
Custom TCP Rule | TCP      | 2049 | Clients
Custom UDP Rule | UDP      | 111  | Clients
later in this whitepaper As the table indicates you should limit the source to the IP addresses of the SoftNAS Cloud virtual appliance High Availability with SNAP HA™ Type Protocol Port Source Custom ICMP Rule Echo Reply 22 SoftNAS Cloud IPs or Security Group ID* Custom ICMP Rule Echo Request 443 SoftNAS Cloud IPs or Security Group ID* * http://docsawsamazoncom/AWSEC2/latest/UserGuide/usingnetwork securityhtml Data Protection Considerations Creating a comprehensive strategy for backing up and restoring data is complex In some industries you must consider regulatory requirements for data security privacy and records retention SoftNAS Cloud provides multiple capabilities for data redundancy Always have one or more independent data backups beyond the data redundancy provided by SoftNAS Cloud You can back up data disks using EBS snapshots and thirdparty backup tools to create offsite or other backup copies to protect data SoftNAS Cloud provides multiple levels of data protection and redundancy but it isn’t intended to replace normal data backup processes Instead these levels of redundancy and data protection reduce risks associated with data loss or data ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 14 integrity and provide features that enable rapid recovery often without the need to restore from a backup copy SoftNAS Cloud is CopyOn Write (COW ) File Syst em SoftNAS Cloud leverages the reliable mature ZFS ZFS is a copy onwrite file System which means that existing data is never directly overwritten Instead new data blocks are appended to each file conceptually similar to a tape Figure 2 depicts how the file System inside SoftNAS Cloud maintains multiple versions known as storage snapshots without overwriting the existing data Figure 2: Copy onwrite file system Automatic Error Detection and Correction SoftNAS Cloud automatically detects and corrects unforeseeable data errors These errors can occur over time for many different reasons including bad 
sectors network or other I/O errors SoftNAS Cloud also provides protection against potential “bit rot” disk media deterioration over time caused by magnetism decay cosmic ray effects and other sporadic issues that can cause data storage or retrieval errors Proven ZFS data integrity measures are implemented by SoftNAS Cloud to detect errors repair them automatically and ensure data integrity is maintained Each read is validated against a 256bit checksum code When ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 15 errors are detected the system automatically repairs the block with the corrected data transparently so applications aren’t affected and data integrity is maintained Periodically administrators can “ scrub ” storage pools to provide even higher levels of data integrity SoftNAS Cloud Snapshots SoftNAS Cloud snapshots are volumebased point intime copies of data StorageCenter provides a rich set of snapshot scheduling and ondemand capabilities As Figure 3 shows snapshots provide file system versioning Figure 3: SoftNAS Cloud volumebased snapshots SoftNAS Cloud snapshots are integrated with Windows Previous Versions which is provided through the Microsoft Volume Shadow Copy Service (VSS ) API This feature is accessible to Windows operating system users through the Previous Versions tab so IT administrators don’t need to assist in file recovery Microsoft server and desktop operating system users can use scheduled snapshots to recover deleted files view or restore a version of a file that was overwritten and compare file versions side by side Operating systems that are supported include Windows 7 Windows 8 Windows Server 2008 and Windows Server 2012 Snapshots consume storage pool capacity so you must monitor usage for over consumption Storage snapshots grow incrementally as file system data is modified over a period of time SoftNAS Cloud automatically manages snapshots based on snapshot policies to prevent snapshots from consuming all available 
space Several snapshot policies are provided as a starting point and you can also create custom snapshot policies Snapshot policies are independent ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 16 of each volume so when a snapshot policy is changed it’s applied across all volumes that reference that policy When allocating storage pool space and choosing snapshot policies be sure to plan for enough additional storage to hold the snapshot data for the retention period you require SoftNAS SnapClones™ SnapClones provide read/write clones of SoftNAS Cloud snapshots They’re created instantaneously because of the spaceefficient copy onwrite model Initially SnapClones take up no capacity and grow only when writes are made to the SnapClone as shown in Figure 4 You can create any number of SnapClones from a storage snapshot Figure 4: SoftNAS SnapClones You can mount SnapClones as external NFS or CIFS shares They’re good for manipulating copies of data that are too large or complex to be practically copied For example testing new application versions against real data and selective recovery of files and folders using the native file browsers of the client operating system You can create a SnapClone instantly even for very large datasets in the tens to hundreds of TBs ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 17 Amazon EBS Snapshots SoftNAS Cloud has a builtin capability to leverage Amazon EBS point intime snapshots to back up EBS based storage pools The Amazon EBS snapshot copies the entire SoftNAS Cloud storage pool for backup and recovery purposes Advantages include the ability to use the AWS Management Console to manage the snapshots Capacity for the Amazon EBS snapshots isn’t counted against the storage pool capacity You can use Amazon EBS snapshots for longerterm data retention Deployment Scenarios The design of your SoftNAS Cloud installation on Amazon EC2 depends on the amount of usable storage and your requirements for IOPS and 
availability.

High-Availability Architecture

To realize high availability for storage infrastructure on AWS, SoftNAS strongly recommends implementing SNAP HA in a high-availability configuration. The SNAP HA functionality in SoftNAS Cloud provides high availability with automatic and seamless failover across Availability Zones. SNAP HA leverages secure block-level replication, provided by SoftNAS SnapReplicate, to provide a secondary copy of data to a controller in another Availability Zone. SNAP HA also provides both automatic and manual failover. High availability and cross-zone replication eliminates or minimizes downtime. It is not, however, intended to replace regular data backups, which are always required to fully protect important data, especially in disaster recovery scenarios.

There are two methods for achieving high availability across zones: Elastic IP (EIP) addresses and SoftNAS Cloud Private Virtual IP-based HA. The use of Private Virtual IP-based HA is recommended for best security, performance, and lowest cost. All NAS traffic remains inside the VPC. Support for EIP is available for situations that require a “routable” IP address, or the rare cases where data shares must be made available over the Internet. Of course, access via EIP addresses can be locked down using Security Groups.

Figure 5: Task creation and result aggregation

Multi-AZ HA operates within a VPC. Optionally, you can route NAS traffic through a floating EIP combined with SoftNAS patented9 HA technology. That is, NFS, CIFS, AFP, and iSCSI traffic are routed to a primary SoftNAS controller in one Availability Zone, and a secondary controller operates in a different Availability Zone. NAS clients can be located in any Availability Zone. SnapReplicate performs block replication from the primary controller A to the backup controller B, keeping the secondary updated with the latest changed data blocks once per minute. In the event of a failure in Availability Zone 1
(shown in Figure 5), the Elastic HA™ IP address automatically fails over to controller B in Availability Zone 2 in less than 30 seconds. Upon failover, all NFS, CIFS, AFP, and iSCSI sessions reconnect with no impact on NAS clients (that is, no stale file handles and no need to restart).

HA with Private Virtual IP Addresses

The patented9 Virtual IP-based HA technology in SoftNAS Cloud enables you to deploy two SoftNAS Cloud instances across different Availability Zones inside the private subnet of a VPC. Then you can configure the SoftNAS Cloud instances with private IP addresses, which are completely isolated from the Internet. This allows for more flexible deployment options and greater control over access to the appliance. In addition, using private IP addresses enables faster failover, because waiting for an EIP to switch instances isn’t required. Further, Virtual IP HA is less costly because there is no I/O flowing across an EIP. Instead, all traffic remains completely within the VPC.

For most use cases, Multi-AZ HA using private virtual IP addresses is the recommended method. Failover usually takes place in 15 to 20 seconds from the time a failure is detected. SoftNAS Cloud uses patented9 techniques that allow NAS clients to stay connected via NFS, CIFS, iSCSI, and AFP in case of a failover, ensuring that services are not interrupted and continue to operate without downtime.

Figure 6: Cross-zone HA with virtual private IP addresses

For more information about implementation and HA design best practices, see the SoftNAS High Availability Guide.10

Single Controller Architecture

In scenarios where you don’t require high availability, you can deploy a single controller. Figure 7 shows a basic SoftNAS Cloud instance running within a VPC.

Figure 7: Basic SoftNAS Cloud instance running within a VPC

In
these scenarios, you can combine EBS volumes into a RAID 10 array for the storage pool to provide usable storage space with no drive failure redundancy. You can also configure storage pools using a SoftNAS S3 Cloud Disk for RAID 0 (striping) for improved performance and IOPS. These examples are for illustration purposes only. Typically, RAID 0 is sufficient, as the underlying EBS and S3 storage devices already provide redundancy. Volumes are provisioned from the storage pools and then shared through NFS, CIFS/SMB, AFP, or iSCSI.

Hybrid Cloud Architecture

You can deploy SoftNAS Cloud in a Hybrid Cloud architecture, in which a SoftNAS Cloud virtual appliance is installed both in Amazon EC2 and on premises. This architecture enables replication of data from on premises to Amazon EC2, and vice versa, providing synchronized data access to users and applications. Hybrid Cloud architectures are also useful for backup and disaster recovery scenarios, in which AWS can be used as an offsite backup location.

Replication

You can deploy SoftNAS Cloud in Amazon EC2 as a replication target using SnapReplicate. This enables scenarios such as data replicas, disaster recovery, and development environments by copying onsite production data into Amazon EC2, as shown in Figure 8.

Figure 8: Hybrid Cloud backup and disaster recovery

File Gateway to Amazon S3

You can deploy SoftNAS Cloud in file gateway use cases, where SoftNAS Cloud operates on premises, deployed in local data centers on popular hypervisors such as VMware vSphere. SoftNAS Cloud connects to Amazon S3 storage, treating Amazon S3 as a disk device. The Amazon S3 disk device is added to a storage pool, where volumes can export CIFS, NFS, AFP, and iSCSI. Amazon S3 is cached with block disk devices for read and write I/O. Write I/O is cached at primary storage speeds and then flushed to Amazon S3 at the speed of the WAN. When using Amazon S3-based volumes with backup software, the write cache
dramatically shortens the backup window.

Figure 9: SoftNAS Cloud

Automation Options

This section describes how the SoftNAS Cloud REST API, CLI, and AWS CloudFormation can be used for automation.

API and CLI

SoftNAS Cloud provides a secure REST API and CLI. The REST API provides access to the same storage administration capabilities from any programming language, using HTTPS and REST verb commands and returning JSON-formatted response strings. The CLI provides command line access to the API set for quick and easy storage administration. Both methods are available for programmatic storage administration by DevOps teams who want to design storage into automated processes. For more information, see the SoftNAS API and CLI Guide.11

AWS CloudFormation

The AWS CloudFormation service enables developers and businesses to create a collection of related AWS resources and provision them in an orderly and predictable way.12 SoftNAS Cloud provides sample CloudFormation templates that you can use for automation. You can find these templates here and in the Further Reading section of this paper. When you work with CloudFormation templates, pay careful attention to the Instance Type Mappings and User Data sections, which are shown in the following examples:

• List all the instance types that you want to appear. Edit this section with the latest instance types available.
• Map to the appropriate AMIs here. (SoftNAS regularly updates AMIs, so this section must be updated accordingly.)
• This section is used to pass variables to the SoftNAS Cloud CLI for additional configuration.

Conclusion

SoftNAS Cloud is a popular NAS option on the AWS Cloud computing platform. By following the implementation considerations and best practices highlighted in this paper, you will maximize the performance, durability, and security of your
SoftNAS Cloud implementation on AWS. For more information about SoftNAS Cloud, see www.softnas.com. Get a free 30-day trial of SoftNAS Cloud now.13

Contributors

The following individuals and organizations contributed to this document:
• Eric Olson, VP Development, SoftNAS
• Kevin Brown, Solutions Architect, SoftNAS
• Brandon Chavis, Solutions Architect, Amazon Web Services
• Juan Villa, Solutions Architect, Amazon Web Services
• Ian Scofield, Solutions Architect, Amazon Web Services

Further Reading

SoftNAS References
SoftNAS Cloud Installation Guide
SoftNAS Reference Guide
SoftNAS Cloud High Availability Guide
SoftNAS Cloud API and CLI Guide
AWS CloudFormation Templates for HVM

Amazon Web Services References
Amazon Elastic Block Store
Amazon EC2 Instances
AWS Security Best Practices
Amazon Virtual Private Cloud Documentation
Amazon EC2 SLA

Notes
1 http://aws.amazon.com/ec2/
2 http://www.softnas.com/
3 http://aws.amazon.com/s3/
4 http://aws.amazon.com/vpc/
5 http://aws.amazon.com/iam/
6 http://aws.amazon.com/ebs/
7 http://aws.amazon.com/cloudwatch/
8 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html#ebs-optimization-support
9 US Pat. Nos. 9378262; 9584363. Other patents pending.
10 https://www.softnas.com/docs/softnas/v3/snaphahtml/index.htm
11 https://www.softnas.com/docs/softnas/v3/apihtml/
12 http://aws.amazon.com/cloudformation/
13 http://softnas.com/trynow?utm_source=aws&utm_medium=whitepaper&utm_campaign=aws-wp2017,General,consultant,Best Practices
Strategies_for_Managing_Access_to_AWS_Resources_in_AWS_Marketplace,"Strategies for Managing Access to AWS Resources in AWS Marketplace, July 2016

This paper has been archived. For the latest technical content about the AWS Cloud, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Amazon Web Services – Managing Access to Resources in AWS Marketplace, July 2016 – Page 2 of 13

© 2016
Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS’s current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Abstract 3
Overview 4
Accessing Application-Specific Resources 4
The EC2 Instance Role 5
Accessing Resources on Behalf of Users 7
The EC2 Instance Role 8
The Account Access Role 8
Switching Roles 11
AWS Marketplace Considerations 12
Using External IDs 12
Using Wildcards for IAM Roles 12
Great Documentation 13
Summary 13
Contributors 13
Notes 13

Abstract

This paper discusses how applications in AWS Marketplace that require access to AWS resources can use AWS Identity and Access Management (IAM) roles for authentication, to help protect customers from potential security vulnerabilities.

Overview

Applications in AWS Marketplace that require access to Amazon Web Services (AWS) resources must follow security best practices when accessing AWS to help protect customers from potential security vulnerabilities. Typically, application authors will use a combination of access and secret keys to authenticate against AWS resources.
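As a minimal Node.js sketch of the alternative this paper recommends: an instance role's temporary credentials are served by the EC2 instance metadata service, so nothing needs to be stored in code. The role name 'my-marketplace-app' below is a hypothetical example, and the metadata endpoint is only reachable from inside an EC2 instance, so this helper merely computes the URL:

```javascript
'use strict';

// Sketch only: on an EC2 instance with an attached instance role, the AWS
// SDKs resolve temporary credentials automatically from the instance
// metadata service. This helper just computes where those credentials are
// served. The role name below is a hypothetical example.
var METADATA_BASE =
  'http://169.254.169.254/latest/meta-data/iam/security-credentials/';

function credentialsUrlFor(roleName) {
  // One credential document is served per attached role name.
  return METADATA_BASE + encodeURIComponent(roleName);
}

var url = credentialsUrlFor('my-marketplace-app');
// url === METADATA_BASE + 'my-marketplace-app'
```

Because the SDKs follow this chain automatically, application code never handles the keys directly and rotation happens without redeployment.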
However, for AWS Marketplace,1 we require application authors to use AWS Identity and Access Management (IAM)2 roles, and we do not permit the use of access or secret keys. This requirement affects two types of applications: applications that interact with AWS resources to operate, and applications that interact with AWS resources on behalf of specific users, either in the same or in different AWS accounts.

When an application requires access to AWS resources to operate, temporary credentials can be obtained by using IAM roles for Amazon Elastic Compute Cloud (Amazon EC2) instances.3 Applications can then interact with AWS resources without needing to store, secure, and manage a user’s access keys. When an application needs to access AWS resources on behalf of different users, either in the same or in different AWS accounts, the same technique can be applied. IAM roles can be used to access both the resources required by the application and the resources the application may access on behalf of a user.

By using IAM roles instead of IAM users for both application-specific and user-specific access, you can remove the need for customers to distribute and manage access keys. The following sections explain how you can adopt this strategy.

Accessing Application-Specific Resources

When an application needs to interact with AWS resources, access should be provided by using IAM roles and not IAM users. For example, if an application needs to access an Amazon DynamoDB4 database and an Amazon Simple Storage Service (S3)5 bucket, access to these resources is not user-specific.

Figure 1: Sample architecture for accessing application-specific resources

The EC2 Instance Role

The EC2 instance is started with an instance role attached. This role has a policy that grants access to the DynamoDB database and the S3 bucket within the same account. When making API calls to Amazon S3, your application must retrieve
the temporary credentials from the IAM role and use those credentials. You can retrieve these credentials from the instance metadata (http://169.254.169.254/latest/meta-data/iam/security-credentials/role-name). If you are using an AWS SDK, the AWS Command Line Interface (AWS CLI),6 or AWS Tools for Windows PowerShell,7 these credentials will be obtained automatically.

Using roles in this way has several benefits. Because role credentials are temporary and rotated automatically, you don't have to manage credentials, and you don't have to worry about long-term security risks.

To create and use an IAM instance role:
1. Create a new instance role.
2. Add a trust relationship that allows ec2.amazonaws.com to assume the role.
3. Create a new policy that specifies the permissions required.
4. Add the new policy to the new instance role.
5. Create a new EC2 instance that specifies the IAM role.
6. Build your app by using one of the AWS SDKs. Do not specify credentials when calling methods, because temporary credentials will be automatically added by the SDK.

For more detailed instructions, see IAM Roles for Amazon EC2 in the IAM documentation.8

Note: You can also configure launch settings used by Auto Scaling groups to use IAM roles.

In our example, we’ll create the instance role with the following trust relationship:

{
  ""Version"": ""2008-10-17"",
  ""Statement"": [
    {
      ""Effect"": ""Allow"",
      ""Principal"": {
        ""Service"": ""ec2.amazonaws.com""
      },
      ""Action"": ""sts:AssumeRole""
    }
  ]
}

Add the AmazonDynamoDBFullAccess and AmazonS3FullAccess policies to the IAM role, and then create the EC2 instance by specifying the role.

Accessing Resources on Behalf of Users

To illustrate the scenario of accessing AWS resources on behalf of specific users, consider an application that processes images stored in S3 buckets on behalf of a
user. The application itself might use services such as DynamoDB for storing configuration and job status. The following diagram shows the architecture.

Figure 2: Sample architecture for accessing AWS resources on behalf of users

In this scenario, the EC2 instance hosting the application would use an instance profile that gives specific permissions to DynamoDB. When accessing Amazon S3 resources on behalf of the user, the application would switch to a different IAM role: a role that was set up by the user with specific permission to access the S3 buckets. This method would allow an application to access resources on behalf of different users without the need to store credentials. Users would still need to create IAM policies and IAM roles, but this is no different from creating IAM users and IAM roles for the same reason.

There are two IAM roles in play:
• EC2 instance role (application role) – This is the role the application uses to obtain temporary credentials to access application-specific resources, such as the DynamoDB database.
• Account access roles (user roles) – These are the roles the application uses to obtain temporary credentials to access resources for specific users of the application.

Figure 3: Roles and policies

The EC2 Instance Role

The EC2 instance role would be configured in the same way as in the first scenario.

The Account Access Role

Since the application can also access S3 buckets and objects from other AWS accounts, it is tempting to maintain a list of credentials to access these AWS resources; however, the same technique of using roles and temporary credentials is preferred. This strategy again removes the need for the application to store anything but benign information or handle key rotation scenarios.

Using roles across accounts is no more difficult to set up than creating users and assigning policies, but it requires a few extra steps:

1. In the
target account (the account that contains the AWS resources):
   a. Create a new IAM role.
   b. Add a trust relationship that specifies the root of the application hosting account as the principal. Include a condition that specifies an external ID.
   c. Create a new policy that specifies the permissions required, and attach it to the role.
2. In the application hosting account (the account where the application is hosted):
   a. Create a new policy that specifies that the sts:AssumeRole action is allowed to the role defined in the target account.
   b. Attach the new policy to the instance role.

In the target account, we can create a role named myuserrole with the following trust relationship:

{
  ""Version"": ""2012-10-17"",
  ""Statement"": [
    {
      ""Effect"": ""Allow"",
      ""Principal"": {
        ""AWS"": [ ""arn:aws:iam::111111111111:root"" ]
      },
      ""Action"": ""sts:AssumeRole"",
      ""Condition"": {
        ""StringEquals"": { ""sts:ExternalId"": ""myapp"" }
      }
    }
  ]
}

Note that the account number 111111111111 is used in the principal Amazon Resource Name (ARN) to ensure that only IAM users and roles from that account can assume this role. Furthermore, the inclusion of an sts:ExternalId condition means that the caller also needs this information to complete the AssumeRole function. See the code sample later in this paper for information on how this condition is used.

The permissions added to the role permit access to specific S3 buckets. It is good practice to be explicit in permissions rather than using wildcards. The following is an example of the permissions added:

{
  ""Version"": ""2012-10-17"",
  ""Statement"": [
    {
      ""Effect"": ""Allow"",
      ""Action"": [ ""s3:ListBucket"" ],
      ""Resource"": [
        ""arn:aws:s3:::myBucket1"",
        ""arn:aws:s3:::myBucket2""
      ]
    }
  ]
}

Back in the application hosting account, we need to add a new permission to the role
to allow it to assume the role in the target account:

{
  ""Version"": ""2012-10-17"",
  ""Statement"": {
    ""Effect"": ""Allow"",
    ""Action"": ""sts:AssumeRole"",
    ""Resource"": ""arn:aws:iam::222222222222:role/myuserrole""
  }
}

You can use a wildcard in the application hosting account, since the permissions need to be explicitly defined in the target account. This also allows you to access roles across multiple AWS accounts:

{
  ""Version"": ""2012-10-17"",
  ""Statement"": {
    ""Effect"": ""Allow"",
    ""Action"": ""sts:AssumeRole"",
    ""Resource"": ""arn:aws:iam::*:role/myuserrole""
  }
}

Switching Roles

In the application, we do not need to code anything special to use the instance role and the permissions that gives us. However, to access the S3 buckets in the other AWS accounts, we will need to assume the new role and use the temporary credentials for that role in our SDK calls. The following code snippet shows a Node.js example:

var AWS = require('aws-sdk');
var accountid = '222222222222';                                  // 1
var rolename = 'myuserrole';                                     // 2
var externalId = 'myapp';                                        // 3
var sts = new AWS.STS();                                         // 4
var stsparams = {                                                // 5
  RoleArn: 'arn:aws:iam::' + accountid + ':role/' + rolename,    // 6
  RoleSessionName: 'myappsession',                               // 7
  ExternalId: externalId,                                        // 8
  DurationSeconds: 3600                                          // 9
};                                                               // 10
                                                                 // 11
AWS.config.credentials = new AWS.EC2MetadataCredentials();       // 12
var tempCredentials = new AWS.TemporaryCredentials(stsparams);   // 13
var options = {                                                  // 14
  credentials: tempCredentials                                   // 15
};                                                               // 16
var s3 = new AWS.S3(options);                                    // 17

Lines 5–10 define the parameters (stsparams) for obtaining the temporary credentials on line 13. We build the RoleArn from the parameters defined in lines 1 and 2, along with the externalId defined on line 3. Once we have the temporary credentials, we use them on line 17 to access the S3 resource.

AWS Marketplace Considerations

There are a few things to consider when using IAM roles for AWS Marketplace.

Using External IDs

It is important not to just rely on the role
name; you must specify an external ID to be used by the application. Furthermore, you should allow the customer deploying your application to define the external ID value. You should use a different external ID for each AWS account to limit exposure.

Using Wildcards for IAM Roles

Since users will be supplying roles in different accounts, you can use wildcards to designate target accounts in the application hosting account. You should use a well-known role name, but you can substitute a wildcard for the account number. The following example is a good use of a wildcard:

arn:aws:iam::*:role/myuserrole

The following example is not an acceptable use of a wildcard:

arn:aws:iam::*

Great Documentation

Customers need to create IAM roles and policies in the AWS accounts they want to access, so you should provide explicit documentation to walk customers through creating the correct roles and policies.

Summary

Applications in AWS Marketplace that require access to AWS resources must implement authentication using IAM roles, as discussed in this guide. This helps reduce the potential vulnerabilities within a customer’s AWS account by providing access only to temporary credentials.

Contributors

The following individuals and organizations contributed to this document:
• David Aiken, partner solutions architect, AWS Marketplace

Notes
1 https://aws.amazon.com/marketplace/
2 https://aws.amazon.com/iam/
3 https://aws.amazon.com/ec2/
4 https://aws.amazon.com/dynamodb/
5 https://aws.amazon.com/s3/
6 https://aws.amazon.com/cli/
7 https://aws.amazon.com/powershell/
8 https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html",General,consultant,Best Practices
Strategies_for_Migrating_Oracle_Databases_to_AWS,"This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/strategies-migrating-oracle-db-to-aws/strategies-
migrating-oracle-db-to-aws.html

Strategies for Migrating Oracle Databases to AWS
First Published December 2014
Updated January 27, 2022

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 7
Data migration strategies 7
One-step migration 8
Two-step migration 8
Minimal downtime migration 9
Nearly continuous data replication 9
Tools used for Oracle Database migration 9
Creating a database on Amazon RDS, Amazon EC2, or VMware Cloud on AWS 10
Amazon RDS 11
Amazon EC2 11
Data migration methods 12
Migrating data for small Oracle databases 13
Oracle SQL Developer database copy 14
Oracle materialized views 15
Oracle SQL*Loader 17
Oracle Export and Import utilities 21
Migrating data for large Oracle databases 22
Data migration using Oracle Data Pump 23
Data migration using Oracle external tables 34
Data migration using Oracle RMAN 35
Data replication using AWS Database
Migration Service 37
Data replication using Oracle GoldenGate 38
Setting up Oracle GoldenGate Hub on Amazon EC2 41
Setting up the source database for use with Oracle GoldenGate 43
Setting up the destination database for use with Oracle GoldenGate 43
Working with the Extract and Replicat utilities of Oracle GoldenGate 44
Running the Extract process of Oracle GoldenGate 44
Transferring files to AWS 47
AWS DataSync 47
AWS Storage Gateway 47
Amazon RDS integration with S3 48
Tsunami UDP 48
AWS Snow Family 48
Conclusion 49
Contributors 49
Further reading 49
Document versions 50

Abstract

Amazon Web Services (AWS) provides a comprehensive set of services and tools for deploying enterprise-grade solutions in a rapid, reliable, and cost-effective manner. Oracle Database is a widely used relational database management system that is deployed in enterprises of all sizes. It manages various forms of data in many phases of business transactions. This whitepaper describes the preferred methods for migrating an Oracle Database to AWS and helps you choose the method that is best for your business.

Introduction

This whitepaper presents best practices and methods for migrating Oracle Database from servers that are on premises or in your data center to AWS. Data, unlike application binaries, cannot be recreated or reinstalled, so you should
carefully plan your data migration and base it on proven best practices. AWS offers its customers the flexibility of running Oracle Database on Amazon Relational Database Service (Amazon RDS), the managed database service in the cloud, as well as Amazon Elastic Compute Cloud (Amazon EC2):

• Amazon RDS makes it simple to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizable capacity for an open standard relational database and manages common database administration tasks.
• Amazon EC2 provides scalable computing capacity in the cloud. Using Amazon EC2 removes the need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Running the database on Amazon EC2 is very similar to running the database on your own servers.

Depending on whether you choose to run your Oracle Database on Amazon EC2 or Amazon RDS, the process for data migration can differ. For example, users don’t have OS-level access in Amazon RDS instances. It’s important to understand the different possible strategies so you can choose the one that best fits your needs.

Data migration strategies

The migration strategy you choose depends on several factors:
• The size of the database
• Network connectivity between the source server and AWS
• The version and edition of your Oracle Database software
• The database options, tools, and utilities that are available
• The amount of time that is available for migration
• Whether the migration and switchover to AWS will be done in one step or a sequence of steps over time

The following sections describe some common
migration strategies.

One-step migration

One-step migration is a good option for small databases that can be shut down for 24 to 72 hours. During the shutdown period, all the data from the source database is extracted, and the extracted data is migrated to the destination database in AWS. The destination database in AWS is tested and validated for data consistency with the source. Once all validations have passed, the database is switched over to AWS.

Two-step migration

Two-step migration is a commonly used method because it requires only minimal downtime and can be used for databases of any size:

1. The data is extracted from the source database at a point in time (preferably during non-peak usage) and migrated while the database is still up and running. Because there is no downtime at this point, the migration window can be sufficiently large. After you complete the data migration, you can validate the data in the destination database for consistency with the source, and test the destination database on AWS for performance, connectivity to the applications, and any other criteria as needed.
2. Data changed in the source database after the initial data migration is propagated to the destination before switchover. This step synchronizes the source and destination databases. This should be scheduled for a time when the database can be shut down (usually over a few hours, late at night on a weekend). During this process, there won’t be any more changes to the source database, because it will be unavailable to the applications. Normally, the amount of data that is changed after the first step is small compared to the total size of the database, so this step will be quick and requires only minimal downtime. After all the changed data is migrated, you can validate the data in the destination database, perform necessary tests, and, if all tests are passed, switch over to the database in AWS.

This version has been archived. For the latest version of this document, visit:
https://docs.aws.amazon.com/whitepapers/latest/strategies-migrating-oracle-db-to-aws/strategies-migrating-oracle-db-to-aws.html

Minimal downtime migration

Some business situations require database migration with little to no downtime. This requires detailed planning and the necessary data replication tools for proper completion. These migration methodologies typically involve two components: an initial bulk extract/load, followed by the application of any changes that occurred during the time the bulk step took to run. After the changes have been applied, you should validate the migrated data and conduct any necessary testing. The replication process synchronizes the destination database with the source database and continues to replicate all data changes at the source to the destination. Synchronous replication can have an effect on the performance of the source database, so if a few minutes of downtime for the database is acceptable, then you should set up asynchronous replication instead. You can switch over to the database in AWS at any time, because the source and destination databases will always be in sync.

There are a number of tools available to help with minimal downtime migration. The AWS Database Migration Service (AWS DMS) supports a range of database engines, including Oracle running on premises, in EC2, or on RDS. Oracle GoldenGate is another option for real-time data replication. There are also third-party tools available to do the replication.

Nearly continuous data replication

You can use nearly continuous data replication if the destination database in AWS is used as a clone for reporting and business intelligence (BI) or for disaster recovery (DR) purposes. In this case, the process is exactly the same as minimal downtime migration, except that there is no switchover and the replication never stops.

Tools used for Oracle Database migration

A number of tools and technologies are available for data migration.
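The strategy choices described above can be summarized as a toy decision helper. This Node.js sketch is purely illustrative: the function shape, inputs, and the 24-hour threshold are assumptions drawn loosely from the strategy descriptions, not AWS guidance.

```javascript
'use strict';

// Illustrative only: maps the downtime a business can tolerate to one of
// the migration strategies described above. Real decisions also weigh
// database size, network connectivity, versions, and available tooling.
function chooseStrategy(maxDowntimeHours, needsStandingReplica) {
  if (needsStandingReplica) {
    // Destination kept as a reporting/BI or DR clone: never switch over.
    return 'nearly continuous data replication';
  }
  if (maxDowntimeHours >= 24) {
    // One-step migration suits small databases that can be shut down
    // for 24 to 72 hours.
    return 'one-step migration';
  }
  if (maxDowntimeHours > 0) {
    // Minimal downtime, works for databases of any size.
    return 'two-step migration';
  }
  // Little to no downtime: replication tools such as AWS DMS or
  // Oracle GoldenGate are required.
  return 'minimal downtime migration';
}
```

The tools discussed next (AWS DMS, RMAN, Data Pump, GoldenGate) are what turn whichever strategy is chosen into an actual migration plan.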
You can use some of these tools interchangeably, or you can use other third-party tools or open-source tools available in the market.

• AWS DMS helps you move databases to and from AWS easily and securely. It supports most commercial and open-source databases, and facilitates both homogeneous and heterogeneous migrations. AWS DMS offers change data capture technology to keep databases in sync and minimize downtime during a migration. It is a managed service, with no client install required.

• Oracle Recovery Manager (RMAN) is a tool available from Oracle for performing and managing Oracle Database backups and restorations. RMAN allows full hot or cold backups plus incremental backups. RMAN maintains a catalog of the backups, making the restoration process simple and dependable. RMAN can also duplicate or clone a database from a backup or from an active database.

• Oracle Data Pump Export is a versatile utility for exporting and importing data and metadata from or to Oracle databases. You can perform Data Pump export/import on an entire database, selective schemas, tablespaces, or database objects. Data Pump export/import also has powerful data filtering capabilities for selective export or import of data.

• Oracle GoldenGate is a tool for replicating data between a source and one or more destination databases. You can use it to build high-availability architectures. You can also use it to perform real-time data integration, transactional change data capture, and replication in heterogeneous IT environments.

• Oracle SQL Developer is a no-cost GUI tool available from Oracle for data manipulation, development, and management. This Java-based tool is available for Microsoft Windows, Linux, or Mac OS X.

• Oracle SQL*Loader is a bulk
data data-load utility available from Oracle for loading data from external files into a database. SQL*Loader is included as part of the full database client installation.

Creating a database on Amazon RDS, Amazon EC2, or VMware Cloud on AWS

To migrate your data to AWS, you need a source database (either on premises or in a data center) and a destination database in AWS. Based on your business needs, you can choose between using Amazon RDS for Oracle or installing and managing the database on your own on an Amazon EC2 instance. To help you choose the service that's best for your business, see the following sections.

Amazon RDS

Many customers prefer Amazon RDS for Oracle because it frees them to focus on application development. Amazon RDS automates time-consuming database administration tasks, including provisioning, backups, software patching, monitoring, and hardware scaling. Amazon RDS simplifies the task of running a database by eliminating the need to plan and provision the infrastructure, as well as install, configure, and maintain the database software. Amazon RDS for Oracle makes it easy to use replication to enhance availability and reliability for production workloads. By using the Multi-Availability Zone (Multi-AZ) deployment option, you can run mission-critical workloads with high availability and built-in automated failover from your primary database to a synchronously replicated secondary database. As with all AWS services, no upfront investments are required, and you pay only for the resources you use. For more information, see Amazon RDS for Oracle.

To use Amazon RDS, log in to your AWS account and start an Amazon RDS Oracle instance from the AWS Management Console. A good strategy is to treat this as an interim migration database
from which the final database will be created. Do not enable the Multi-AZ feature until the data migration is completely done, because replication for Multi-AZ will hinder data migration performance. Be sure to give the instance enough space to store the import data files. Typically, this requires you to provision twice as much capacity as the size of the database.

Amazon EC2

Alternatively, you can run an Oracle database directly on Amazon EC2, which gives you full control over setup of the entire infrastructure and database environment. This option provides a familiar approach, but also requires you to set up, configure, manage, and tune all the components, such as Amazon EC2 instances, networking, storage volumes, scalability, and security as needed (based on AWS architecture best practices). For guidance about the appropriate architecture to choose, and for installation and configuration instructions, see the Advanced Architectures for Oracle Database on Amazon EC2 whitepaper.

VMware Cloud on AWS

VMware Cloud on AWS is the preferred service for AWS for all vSphere-based workloads. VMware Cloud on AWS brings the VMware software-defined data center (SDDC) software to the AWS Cloud, with optimized access to native AWS services. If your Oracle workload runs on VMware on premises, you can easily migrate the Oracle workloads to the AWS Cloud using VMware Cloud on AWS. VMware Cloud on AWS has the capability to run Oracle Real Application Clusters (RAC) workloads. It allows multicast protocols and provides shared storage capability across VMs running in the VMware Cloud on AWS SDDC. VMware provides native migration capabilities, such as VMware vMotion and VMware HCX, to move virtual machines (VMs) from on premises
to the VMware Cloud on AWS. Depending on Oracle workload performance patterns, service level agreement (SLA), and the bandwidth availability, you can choose to migrate the VM either live or using cold migration methods.

Data migration methods

The remainder of this whitepaper provides details about each method for migrating data from Oracle Database to AWS. Before you get to the details, you can scan the following table for a quick summary of each method. Each method depends upon the business recovery point objective (RPO), recovery time objective (RTO), and overall availability SLA. Migration administrators must evaluate and map these business agreements to the appropriate methods. Choose the method depending upon your application SLA, RTO, RPO, and tool and license availability.

Table 1 – Migration methods and tools

Data migration method | Database size | Works for | Recommended for
AWS Database Migration Service | Any size | Amazon RDS, Amazon EC2 | Minimal downtime migration; database size limited by internet bandwidth
Oracle SQL Developer database copy | Up to 200 MB | Amazon RDS, Amazon EC2 | Small databases with any number of objects
Oracle Materialized Views | Up to 500 MB | Amazon RDS, Amazon EC2 | Small databases with a limited number of objects
Oracle SQL*Loader | Up to 10 GB | Amazon RDS, Amazon EC2 | Small to medium size databases with a limited number of objects
Oracle Export and Import utilities | Up to 10 GB | Amazon RDS, Amazon EC2 | Small to medium size databases with a large number of objects
Oracle Data Pump | Up to 5 TB | Amazon RDS, Amazon EC2, VMware Cloud on AWS | Preferred method for any database from 10 GB to 5 TB
External tables | Up to 1 TB | Amazon RDS, Amazon EC2, VMware Cloud on AWS | Scenarios where this is the standard method in use
Oracle RMAN | Any size | Amazon EC2, VMware Cloud on AWS | Databases over 5 TB, or if the database backup is already in Amazon Simple Storage Service (Amazon S3)
Oracle GoldenGate | Any size | Amazon RDS, Amazon EC2, VMware Cloud on AWS | Minimal downtime migration

Migrating data for small Oracle databases

You should base your strategy for data migration on the database size, the reliability and bandwidth of your network connection to AWS, and the amount of time available for migration. Many Oracle databases tend to be medium to large in size, ranging anywhere from 10 GB to 5 TB, with some as large as 20 TB or more. However, you also might need to migrate smaller databases. This is especially true for phased migrations, where the databases are broken up by schema, making each migration effort small in size.

If the source database is under 10 GB, and if you have a reliable, high-speed internet connection, you can use one of the following methods for your data migration. All the methods discussed in this section work with Amazon RDS for Oracle or Oracle Database running on Amazon EC2.

Note: The 10 GB size is just a guideline; you can use the same methods for larger databases as well. The migration time varies based on the data size and the network throughput. However, if your database size exceeds 50 GB, you should use one of the methods listed in the Migrating data for large Oracle databases section of this whitepaper.

Oracle SQL Developer database copy

If the total size of the data you are migrating is under 200 MB, the simplest solution is to use the Oracle SQL Developer Database Copy function. Oracle SQL Developer is a no-cost GUI tool available from Oracle for data manipulation, development, and
management. This easy-to-use, Java-based tool is available for Microsoft Windows, Linux, or Mac OS X.

With this method, data transfer from a source database to a destination database is done directly, without any intermediary steps. Because SQL Developer can handle a large number of objects, it can comfortably migrate small databases even if the database contains numerous objects. You will need a reliable network connection between the source database and the destination database to use this method. Keep in mind that this method does not encrypt data during transfer.

To migrate a database using the Oracle SQL Developer Database Copy function, perform the following steps:

1. Install Oracle SQL Developer.
2. Connect to your source and destination databases.
3. From the Tools menu of Oracle SQL Developer, choose the Database Copy command to copy your data to your Amazon RDS or Amazon EC2 instance.
4. Follow the steps in the Database Copy Wizard. You can choose the objects you want to migrate and use filters to limit the data. The Database Copy Wizard in Oracle SQL Developer guides you through your data transfer.

Oracle materialized views

You can use Oracle Database materialized views to migrate data to Oracle databases on AWS, for either Amazon RDS or Amazon EC2. This method is well suited for databases under 500 MB. Because materialized views are available only in Oracle Database Enterprise Edition, this method works only if Oracle Database Enterprise Edition is used for both the source database and the destination database. With materialized view replication, you can do a one-time migration of data to AWS while keeping the destination tables continuously in sync with the
source. The result is a minimal-downtime cutover. Replication occurs over a database link between the source and destination databases. For the initial load, you must do a full refresh so that all the data in the source tables gets moved to the destination tables.

Important: Because the data is transferred over a database link, the source and destination databases must be able to connect to each other over SQL*Net. If your network security design doesn't allow such a connection, then you cannot use this method.

Unlike the preceding method (the Oracle SQL Developer Database Copy function), in which you copy an entire database, for this method you must create a materialized view for each table that you want to migrate. This gives you the flexibility of selectively moving tables to the database in AWS. However, it also makes the process more cumbersome if you need to migrate a large number of tables. For this reason, this method is better suited for migrating a limited number of tables.

For best results with this method, complete the following steps. Assume the source database user ID is SourceUser with password PASS:

1. Create a new user in the Amazon RDS or Amazon EC2 database with sufficient privileges.

Create user MV_DBLink_AWSUser identified by password

2. Create a database link to the source database.

CREATE DATABASE LINK SourceDB_lnk CONNECT TO SourceUser IDENTIFIED BY PASS USING '(description=(address=(protocol=tcp) (host=crmdb.acmecorp.com) (port=1521)) (connect_data=(sid=ORCLCRM)))'

3. Test the database link to make sure you can access the tables in the source database from the database in AWS through the database link.

Select * from tab@SourceDB_lnk

4. Log in to the source database and create a materialized view
log for each table that you want to migrate.

CREATE MATERIALIZED VIEW LOG ON customers

5. In the destination database in AWS, create materialized views for each table for which you set up a materialized view log in the source database.

CREATE MATERIALIZED VIEW customer BUILD IMMEDIATE REFRESH FAST AS SELECT * FROM customer@SourceDB_lnk

Oracle SQL*Loader

Oracle SQL*Loader is well suited for small to moderate databases under 10 GB that contain a limited number of objects. Because the process involved in exporting from a source database and loading to a destination database is specific to a schema, you should use this process for one schema at a time. If the database contains multiple schemas, you need to repeat the process for each schema. This method can be a good choice even if the total database size is large, because you can do the import in multiple phases (one schema at a time).

You can use this method for Oracle Database on either Amazon RDS or Amazon EC2, and you can choose between the following two options:

Option 1

1. Extract data from the source database, such as into flat files with column and row delimiters.
2. Create tables in the destination database exactly like the source (use a generated script).
3. Using SQL*Loader, connect to the destination database from the source machine and import the data.

Option 2

1. Extract data from the source database, such as into flat files with column and row delimiters.
2. Compress and encrypt the files.
3. Launch an Amazon EC2 instance and install the full Oracle client on it (for SQL*Loader). For the database on Amazon EC2, this can be the same instance where the destination database is located. For Amazon RDS, this is a temporary instance.
4. Transport the files to the Amazon EC2
instance.
5. Decompress and unencrypt the files on the Amazon EC2 instance.
6. Create tables in the destination database exactly like the source (use a generated script).
7. Using SQL*Loader, connect to the destination database from the temporary Amazon EC2 instance and import the data.

Use the first option if your database size is small, if you have direct SQL*Net access to the destination database in AWS, and if data security is not a concern. Otherwise, use the second option, because you can use encryption and compression during the transportation phase. Compression substantially reduces the size of the files, making data transportation much faster.

You can use either SQL*Plus or SQL Developer to perform data extraction, which is the first step in both options. For SQL*Plus, use a query in a SQL script file and send the output directly to a text file, as shown in the following example:

set pagesize 0
set head off
set feed off
set line 200
SELECT col1|| '|' ||col2|| '|' ||col3|| '|' ||col4|| '|' ||col5 from SCHEMA.TABLE;
exit;

To create encrypted and compressed output in the second option (see step 2 of the preceding Option 2 procedure), you can directly pipe the output to a zip utility.

You can also extract data by using Oracle SQL Developer:

1. In the Connections pane, select the tables you want to extract data from.
2. From the Tools menu, choose the Database Export command.
3. On the Source/Destination page
of the Export Wizard, select the Export DDL option to generate the script for creating the table, which will simplify the entire process.
4. In the Format dropdown on the same page, choose loader.
5. In the Save As box on the same page, choose Separate Files.

Continue to follow the Export Wizard steps to complete the export. The Export Wizard helps you create the data file, control file, and table creation script in one step for multiple tables in a schema, making it easier than using Oracle SQL*Plus to do the same tasks.

If you use Option 1 as specified, you can run Oracle SQL*Loader from the source environment, using the extracted data and control files, to import data into the destination database. To do this, use the following command:

sqlldr userid=userID/password@$service control=control.ctl log=load.log bad=load.bad discard=load.dsc data=load.dat direct=y skip_index_maintenance=true errors=0

If you use Option 2, then you need an Amazon EC2 instance with the full Oracle client installed. Additionally, you need to upload the data files to that Amazon EC2 instance. For the database on Amazon EC2, this could be the same Amazon EC2 instance where the destination database is located. For Amazon RDS, this will be a temporary Amazon EC2 instance. Before you do the upload, we recommend that you compress and encrypt your files. To do this, you can use a combination of TAR and ZIP/GZIP in Linux, or a
third-party utility such as WinZip or 7-Zip. After the Amazon EC2 instance is up and running, and the files are compressed and encrypted, upload the files to the Amazon EC2 instance using Secure File Transfer Protocol (SFTP). From the Amazon EC2 instance, connect to the destination database using Oracle SQL*Plus to ensure you can establish the connection. Run the sqlldr command shown in the preceding example for each control file that you have from the extract. You can also create a shell/bat script that will run sqlldr for all control files, one after the other.

Note: Enabling skip_index_maintenance=true significantly increases data-load performance. However, table indexes are not updated, so you will need to rebuild all indexes after the data load is complete.

Oracle Export and Import utilities

Despite being replaced by Oracle Data Pump, the original Oracle Export and Import utilities are useful for migrations of databases with sizes less than 10 GB where the data lacks binary float and double data types. The import process creates the schema objects, so you do not need to run a script to create them beforehand. This makes the process well suited for databases with a large number of small tables. You can use this method for Amazon RDS for Oracle and Oracle Database on Amazon EC2.

The first step is to export the tables from the source database by using the following command. Substitute the user name and password as appropriate:

exp userID/password@$service FILE=exp_file.dmp LOG=exp_file.log

The export process creates a binary dump file that contains both the schema and data for the specified tables. You can import the schema and data into a destination database. Choose one of the following two options for the next steps:
Option 1

1. Export data from the source database into a binary dump file using exp.
2. Import the data into the destination database by running imp directly from the source server.

Option 2

1. Export data from the source database into a binary dump file using exp.
2. Compress and encrypt the files.
3. Launch an Amazon EC2 instance and install the full Oracle client on it (for the exp/imp utility). For the database on Amazon EC2, this could be the same instance where the destination database is located. For Amazon RDS, this will be a temporary instance.
4. Transport the files to the Amazon EC2 instance.
5. Decompress and unencrypt the files on the Amazon EC2 instance.
6. Import the data into the destination database by running imp.

If your database size is larger than a gigabyte, use Option 2, because it includes compression and encryption. This method will also have better import performance.

For both Option 1 and Option 2, use the following command to import into the destination database:

imp userID/password@$service FROMUSER=cust_schema TOUSER=cust_schema FILE=exp_file.dmp LOG=imp_file.log

There are many optional arguments that can be passed to the exp and imp commands based on your needs. For details, see the Oracle documentation.

Migrating data for large Oracle databases

For larger databases, use one of the methods described in this section rather than one of the methods described in Migrating data for small Oracle databases. For the purpose of this whitepaper, define a large database as any database 10 GB or more. This section describes three methods for migrating large databases:

• Data migration using Oracle Data Pump – Oracle Data Pump is an excellent tool for migrating large amounts of data, and it can be used with databases
on either Amazon RDS or Amazon EC2.

• Data migration using Oracle external tables – The process involved in data migration using Oracle external tables is very similar to that of Oracle Data Pump. Use this method if you already have processes built around it; otherwise, it is better to use the Oracle Data Pump method.

• Data migration using Oracle RMAN – Migration using RMAN can be useful if you are already backing up the database to AWS or using the AWS Import/Export service to bring the data to AWS. Oracle RMAN can be used only for databases on Amazon EC2, not Amazon RDS.

Data migration using Oracle Data Pump

When the size of the data to be migrated exceeds 10 GB, Oracle Data Pump is probably the best tool to use for migrating data to AWS. This method allows flexible data-extraction options, a high degree of parallelism, and scalable operations, which enables high-speed movement of data and metadata from one database to another. Oracle Data Pump was introduced with Oracle 10g as a replacement for the original Import/Export tools. It is available only on Oracle Database 10g Release 1 or later.

You can use the Oracle Data Pump method for both Amazon RDS for Oracle and Oracle Database running on Amazon EC2. The process involved is similar for both, although Amazon RDS for Oracle requires a few additional steps.

Unlike the original Import/Export utilities, the Oracle Data Pump import requires the data files to be available in the database server instance to import them into the database. You cannot access the file system in the Amazon RDS instance directly, so you need to use one or more Amazon EC2 instances (bridge instances) to transfer files from the source to the Amazon RDS instance, and then import them into the Amazon RDS database. You need these temporary Amazon EC2 bridge instances only for the duration of the import; you can end the instances soon after the import is done. Use Amazon Linux-based instances for this purpose. You do not need an Oracle Database installation for an
Amazon EC2 bridge instance; you only need to install the Oracle Instant Client.

Note: To use this method, your Amazon RDS database must be version 11.2.0.3 or later.

The following is the overall process for data migration using Oracle Data Pump for Oracle Database on Amazon EC2 and on Amazon RDS.

Migrating data to a database in Amazon EC2

1. Use Oracle Data Pump to export data from the source database as multiple compressed and encrypted files.
2. Use Tsunami UDP to move the files to an Amazon EC2 instance running the destination Oracle database in AWS.
3. Import that data into the destination database using the Oracle Data Pump import feature.

Migrating data to a database in Amazon RDS

1. Use Oracle Data Pump to export data from the source database as multiple files.
2. Use Tsunami UDP to move the files to Amazon EC2 bridge instances in AWS.
3. Using the provided Perl script that makes use of the UTL_FILE package, move the data files to the Amazon RDS instance.
4. Import the data into the Amazon RDS database using a PL/SQL script that utilizes the DBMS_DATAPUMP package (an example is provided at the end of this section).

Using Oracle Data Pump to export data on the source instance

When you export data from a large database, you should run multiple threads in parallel and specify a size for each file. This speeds up the export and also makes files available quickly for the next step of the process. There is no need to wait for the entire database to be exported before moving to the next step; as each file completes, it can be moved to the next step. You can enable compression by using the parameter COMPRESSION=ALL, which substantially reduces the size of the extract files. You can encrypt files by providing a password or
by using an Oracle wallet and specifying the parameter ENCRYPTION=all. To learn more about the compression and encryption options, see the Oracle Data Pump documentation.

The following example shows the export of a 500 GB database running eight threads in parallel, with each output file up to a maximum of 20 GB. This creates 22 files totaling 175 GB. The total file size is significantly smaller than the actual source database size because of the compression option of Oracle Data Pump:

expdp demoreinv/demo full=y dumpfile=data_pump_exp1:reinvexp1%U.dmp data_pump_exp2:reinvexp2%U.dmp data_pump_exp3:reinvexp3%U.dmp filesize=20G parallel=8 logfile=data_pump_exp1:reinvexpdp.log compression=all ENCRYPTION=all ENCRYPTION_PASSWORD=encryption_password job_name=reInvExp

Spreading the output files across different disks enhances input/output (I/O) performance. In this example, the parallel threads write their dump files to three different disks to avoid I/O contention.

The most time-consuming part of this entire process is the file transportation to AWS, so
optimizing the file transport significantly reduces the time required for the data migration. The following steps show how to optimize the file transport:

1. Compress the dump files during the export.
2. Serialize the file transport in parallel. Serialization here means sending the files one after the other; you don't need to wait for the export to finish before uploading the files to AWS. Uploading many of these files in parallel (if enough bandwidth is available) further improves the performance. We recommend that you upload as many files in parallel as there are disks being used, and use the same number of Amazon EC2 bridge instances to receive those files in AWS.
3. Use Tsunami UDP or a commercial wide area network (WAN) accelerator to upload the data files to the Amazon EC2 instances.

Using Tsunami to upload files to Amazon EC2

The following example shows how to install Tsunami on both the source database server and the Amazon EC2 instance:

yum -y install make
yum -y install automake
yum -y install gcc
yum -y install autoconf
yum -y install cvs
wget http://sourceforge.net/projects/tsunami-udp/files/latest/download?_test=goal
tar -xzf tsunami*gz
cd tsunami-udp*
./recompile.sh
make install

After you've installed Tsunami, open port 46224 to enable Tsunami communication. On the source database server, start a Tsunami server, as shown in the following example. If you do parallel uploads, then you need to start multiple Tsunami servers:

cd /mnt/expdisk1
tsunamid *

On the destination Amazon EC2 instances, start a Tsunami server, as shown in the following example. If you do multiple parallel file uploads, then you need to start a Tsunami server on each Amazon EC2 bridge instance. If you do not use parallel file uploads, and if the
migration is to an Oracle database on Amazon EC2 (not Amazon RDS), then you can avoid the Amazon EC2 bridge instance. Instead, you can upload the files directly to the Amazon EC2 instance where the database is running. If the destination database is Amazon RDS for Oracle, then the bridge instances are necessary, because a Tsunami server cannot be run on the Amazon RDS server:

cd /mnt/data_files
tsunami
tsunami> connect sourcedbserver
tsunami> get *

From this point forward, the process differs for a database on Amazon EC2 versus a database on Amazon RDS. The following sections show the processes for each service.

Next steps for a database on an Amazon EC2 instance

If you used one or more Amazon EC2 bridge instances in the preceding steps, then bring all the dump files from the Amazon EC2 bridge instances into the Amazon EC2 database instance. The easiest way to do this is to detach the Amazon Elastic Block Store (Amazon EBS) volumes that contain the files from the Amazon EC2 bridge instances and connect them to the Amazon EC2 database instance.

Once all the dump files are available in the Amazon EC2 database instance, use the Oracle Data Pump import feature to get the data into the destination Oracle database on Amazon EC2, as shown in the following example:

impdp demoreinv/demo full=y DIRECTORY=DPUMP_DIR dumpfile=reinvexp1%U.dmp,reinvexp2%U.dmp,reinvexp3%U.dmp parallel=8 logfile=DPimp.log ENCRYPTION_PASSWORD=encryption_password job_name=DPImp

This imports all data into the database. Check the log file to make sure everything went well, and validate the data to confirm that all the data was migrated as expected.

Next steps for a database on Amazon RDS

Because Amazon RDS is a managed service, the Amazon RDS instance does
not provide access to the file system. However, an Oracle RDS instance has an externally accessible Oracle directory object named DATA_PUMP_DIR, which you can find by querying dba_directories. You can copy Oracle Data Pump dump files to this directory by using the Oracle UTL_FILE package. Amazon RDS supports S3 integration as well: you can transfer files between an S3 bucket and the Amazon RDS instance through the S3 integration feature of Amazon RDS. The S3 integration option is recommended when you want to transfer moderately large files to the RDS instance. Alternatively, you can use a Perl script to move the files from the bridge instances to the DATA_PUMP_DIR of the Amazon RDS instance.

Preparing a bridge instance

To prepare a bridge instance, make sure that the Perl DBI and Oracle DBD modules are installed so that Perl can connect to the database. You can use the following commands to verify whether the modules are installed:

$ perl -e 'use DBI; print $DBI::VERSION,""\n"";'
$ perl -e 'use DBD::Oracle; print $DBD::Oracle::VERSION,""\n"";'

If the modules are not already installed, use the following process to install them before proceeding further:

1. Download Oracle Database Instant Client from the Oracle website and unzip it into ORACLE_HOME.
2. Set up the environment variables, as shown in the following example:

$ export ORACLE_BASE=$HOME/oracle
$ export ORACLE_HOME=$ORACLE_BASE/instantclient_11_2
$ export PATH=$ORACLE_HOME:$PATH
$ export TNS_ADMIN=$HOME/etc
$ export LD_LIBRARY_PATH=$ORACLE_HOME:$LD_LIBRARY_PATH

3. Download and unzip DBD::Oracle, as shown in the following example:

$ wget http://www.cpan.org/authors/id/P/PY/PYTHIAN/DBD-Oracle-1.74.tar.gz
$ tar -xzf DBD-Oracle-1.74.tar.gz
$ cd DBD-Oracle-1.74

4. Install DBD::Oracle, as shown in the following example:

$ mkdir $ORACLE_HOME/log
$ perl Makefile.PL
$ make
$ make install

Transferring files to an Amazon RDS instance

To transfer files to an Amazon RDS instance, you need an Amazon RDS instance with at least twice as much storage as the actual database, because it needs to have space for both the database and the Oracle Data Pump dump files. After the import is successfully completed, you can delete the dump files so that the space can be utilized. It might be a better approach to use an Amazon RDS instance solely for data migration: once the data is fully imported, take a snapshot of the RDS DB instance, create a new Amazon RDS instance using the snapshot, and then decommission the data migration instance. Use a single Availability Zone instance for data migration.

The following example shows a basic Perl script to transfer files to an Amazon RDS instance. Make changes as necessary. Because this script runs in a single thread, it uses only a small portion of the network bandwidth. You can run multiple instances of the script in parallel for a quicker file transfer to the Amazon RDS instance, but make sure to load only one file per process so that there won't be any overwriting and data corruption. If you have used multiple bridge instances, you can run this script from all of the bridge instances in parallel, thereby expediting file transfer into the Amazon RDS instance:

# RDS instance info
my $RDS_PORT=4080;
my $RDS_HOST=""myrdshost.xxx.us-east-1.devo.rds-dev.amazonaws.com"";
my $RDS_LOGIN=""orauser/orapwd"";
my $RDS_SID=""myoradb"";
my $dirname = ""DATA_PUMP_DIR"";
my $fname = $ARGV[0];
my $data = ""dummy"";
my $chunk = 8192;
my $sql_open = ""BEGIN perl_global.fh := utl_file.fopen(:dirname, :fname, 'wb', :chunk); END;"";
my $sql_write = ""BEGIN utl_file.put_raw(perl_global.fh, :data, true); END;"";
my $sql_close = ""BEGIN utl_file.fclose(perl_global.fh); END;"";
my $sql_global = ""create or replace package perl_global as fh utl_file.file_type; end;"";
my $conn = DBI->connect('dbi:Oracle:host='.$RDS_HOST.';sid='.$RDS_SID.';port='.$RDS_PORT, $RDS_LOGIN, '') || die ( $DBI::errstr . ""\n"");
my $updated = $conn->do($sql_global);
my $stmt = $conn->prepare($sql_open);
$stmt->bind_param_inout("":dirname"", \$dirname, 12);
$stmt->bind_param_inout("":fname"", \$fname, 12);
$stmt->bind_param_inout("":chunk"", \$chunk, 4);
$stmt->execute() || die ( $DBI::errstr . ""\n"");
open (INF, $fname) || die ""\nCan't open $fname for reading: $!\n"";
binmode(INF);
$stmt = $conn->prepare($sql_write);
my %attrib = ('ora_type','24');
my $val = 1;
while ($val > 0) {
  $val = read(INF, $data, $chunk);
  $stmt->bind_param("":data"", $data, \%attrib);
  $stmt->execute() || die ( $DBI::errstr . ""\n"");
};
die ""Problem copying: $!\n"" if $!;
close INF || die ""Can't close $fname: $!\n"";
$stmt = $conn->prepare($sql_close);
$stmt->execute() || die ( $DBI::errstr . ""\n"");

You can check the list of files in the DATA_PUMP_DIR directory using the following query:

SELECT * FROM table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR'));

Once all the files are successfully transferred to the Amazon RDS instance, connect to the Amazon RDS database as a database administrator (DBA) user and submit a job that uses DBMS_DATAPUMP to import the files into the database, as shown in the following PL/SQL script. Make any changes as necessary:

DECLARE
  h1 NUMBER;
BEGIN
  h1 := dbms_datapump.open(operation => 'IMPORT', job_mode => 'FULL', job_name => 'REINVIMP', version => 'COMPATIBLE');
  dbms_datapump.set_parallel(handle => h1, degree => 8);
  dbms_datapump.add_file(handle => h1, filename => 'IMPORT.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
  dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
  dbms_datapump.add_file(handle => h1, filename => 'reinvexp1%U.dmp', directory => 'DATA_PUMP_DIR', filetype => 1);
  dbms_datapump.add_file(handle => h1, filename => 'reinvexp2%U.dmp', directory => 'DATA_PUMP_DIR', filetype => 1);
  dbms_datapump.add_file(handle => h1, filename => 'reinvexp3%U.dmp', directory => 'DATA_PUMP_DIR', filetype => 1);
  dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
  dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
  dbms_datapump.set_parameter(handle => h1, name => 'REUSE_DATAFILES', value => 0);
  dbms_datapump.set_parameter(handle => h1, name => 'SKIP_UNUSABLE_INDEXES', value => 0);
  dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
  dbms_datapump.detach(handle => h1);
END;
/

Once the job is complete, check the Amazon RDS database to make sure all the data has been successfully imported. At this point, you can delete all the dump files using UTL_FILE.FREMOVE to reclaim disk space.

Data migration using Oracle external tables

Oracle external tables are a feature of Oracle Database that allows you to query data in a flat file as if the file were an Oracle table. The process for using Oracle external tables for data migration to AWS is almost exactly the same as the one used for Oracle Data Pump. The Oracle Data Pump-based method is better for large database migrations; the external tables method is useful if your current process uses it and you don't want to switch to the Oracle Data Pump-based method. Following are the main steps:

1. Move the external table files to the RDS DATA_PUMP_DIR.
2. Create external tables using the files loaded.
3. Import data from the external tables to the database tables.

Depending on the size of the data file, you can choose to either write the file directly to the RDS DATA_PUMP_DIR from an on-premises server or use an Amazon EC2 bridge instance, as in the case of the Data Pump-based method. If the file size is large and you choose to use a bridge instance, use compression and encryption on the files, as well as Tsunami UDP or a WAN accelerator, exactly as described for the Data Pump-based migration. To learn more about Oracle external tables, see External Tables Concepts in the Oracle documentation.

Data migration using Oracle RMAN

If you are planning to migrate the entire database and your destination database is self-managed on Amazon EC2, you can use Oracle RMAN to migrate data. Data migration by using Oracle Data Pump is faster and more flexible than data migration using Oracle RMAN; however, Oracle RMAN is a better option for the following cases:

• You already have an RMAN backup available in Amazon S3 that can be used. If you are looking for options to migrate RMAN backups to S3, consider the AWS Storage Gateway or AWS DataSync services.
• The database is very large (greater than 5 TB) and you are planning to use AWS Import/Export.
• You need to make numerous incremental data changes before switching over to the database on AWS.

Note: This method is for Amazon EC2 and VMware Cloud on AWS. You cannot use this method if your destination database is Amazon RDS.

To migrate data using Oracle RMAN:

1. Create a full backup of the source database using RMAN.
2. Encrypt and compress the files.
3. Transport the files to AWS using the most optimal method.
4. Restore the RMAN backup to the destination database.
5. Capture incremental backups from the source and apply them to the destination database until switchover can be performed.

Creating a full backup of the source database using RMAN

Create a backup of the source database using RMAN:

$ rman target=/
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

If you have a license for the compression and encryption option, then you already have the RMAN backups created as encrypted and compressed files. Otherwise, after the backup files are created, encrypt and compress them using tools such as ZIP, 7-Zip, or GZIP. All subsequent actions occur on the server running
the destination database.

Transporting files to AWS

Depending on the size of the database and the time available for migration, you can choose the most optimal method for file transportation to AWS. For small files, consider AWS DataSync. For moderate to large databases between 100 GB and 5 TB, Tsunami UDP is an option, as described in Using Tsunami to upload files to Amazon EC2. You can achieve the same results using commercial third-party WAN acceleration tools. For very large databases over 5 TB, consider using AWS Storage Gateway or AWS Snow Family devices for offline file transfer.

Migrating data to Oracle Database on AWS

There are two ways to migrate data to a destination database: you can create a new database and restore from the RMAN backup, or you can create a duplicate database from the RMAN backup. Creating a duplicate database is easier to perform.

To create a duplicate database, move the transported files to a location accessible to the Oracle Database instance on Amazon EC2. Start the target instance in NOMOUNT mode. Now use RMAN to connect to the destination database. For this example, we are not connecting to the source database or the RMAN catalog, so use the following command:

$ rman AUXILIARY /
DUPLICATE TARGET DATABASE TO DBONEC2 SPFILE NOFILENAMECHECK;

The duration of this process varies based on the size of the database and the type of Amazon EC2 instance. For better performance, use Amazon Elastic Block Store (Amazon EBS) General Purpose (SSD) volumes for the RMAN backup files. For more information about SSD volume types, see Introducing the Amazon EBS General Purpose (SSD) volume type. Once the process is finished, RMAN produces a completion message, and you now have your duplicate instance. After verification, you can delete the Amazon EBS volumes containing the RMAN backup files. If you might need the backups later, we recommend that you take a snapshot of the volumes before deleting them.

Data replication using AWS Database Migration Service

AWS Database Migration Service (AWS DMS) can support a number of migration and replication strategies, including a bulk upload at a point in time, a minimal-downtime migration leveraging change data capture (CDC), or migration of only a subset of the data. AWS DMS supports sources and targets in Amazon EC2, Amazon RDS, and on premises. Because no client install is required, the following steps are the same for any combination of the above. AWS DMS also offers the ability to migrate data between different database engines as easily as from Oracle to Oracle.

The following steps show how to migrate data between Oracle databases using AWS DMS with minimal downtime:

1. Ensure supplemental logging is enabled on the source database.
2. Create the target database, and ensure database backups and Multi-AZ are turned off if the target is on Amazon RDS.
3. Perform a no-data export of the schema using Oracle SQL Developer or the tool of your choice, then apply the schema to the target database.
4. Disable triggers, foreign keys, and secondary indexes (optional) on the target.
5. Create a DMS replication instance.
6. Specify the source and target endpoints.
7. Create a “Migrate existing data and replicate ongoing changes” task, mapping your source tables to your target tables. (The default task includes all tables.)
8. Start the task.
9. After the full-load portion of the task is complete and the transactions reach a steady state, enable triggers, foreign keys, and secondary indexes.
10. Turn on backups and Multi-AZ.
11. Turn off any applications using the original source database.
12. Let the final transactions flow through.
13. Point any applications at the new database in AWS and start them.

An alternative method is to use Oracle Data Pump for the initial load and AWS DMS to replicate changes from the Oracle System Change Number (SCN) point where the data dump stopped. More details on using AWS DMS can be found in the documentation. To improve the performance of DMS replication, the schemas and tables can be grouped into multiple DMS tasks; DMS tasks support wildcard entries for the names of the schemas and tables.

Data replication using Oracle GoldenGate

Oracle GoldenGate is a tool for real-time change data capture and replication. Oracle GoldenGate creates trail files that contain the most recently changed data from the source database, then pushes these files to the destination database. You can use Oracle GoldenGate to perform minimal-downtime data migration, and you can also use it for nearly continuous data replication. Oracle GoldenGate is licensed software from Oracle. You can use Oracle GoldenGate with both Amazon RDS for Oracle and Oracle Database running on Amazon EC2.

The following steps show how to migrate data using Oracle GoldenGate:

1. The Oracle GoldenGate Extract process extracts all the existing data for the first load. (The Extract, Pump, and Replicat processes here refer to the GoldenGate integrated capture mode.)
2. The Oracle GoldenGate Pump process transports the extracted data to the Replicat process running in Amazon EC2.
3. The Replicat process applies the data to the destination database.
4. After the first load, the process runs continually to capture changed data and apply it to the destination database.

GoldenGate Replicat is a key part of the entire system. You can run it from a server in the source environment, but AWS recommends that you run the Replicat process in an Amazon EC2 instance within AWS for better performance. This Amazon EC2 instance is referred to as a GoldenGate Hub. You can have multiple GoldenGate Hubs, especially if you are migrating data from one source to multiple destinations.

Oracle GoldenGate replication data flow process

Reference architecture for EC2: Oracle GoldenGate replication from on-premises to Oracle Database on Amazon EC2

Reference architecture for RDS: Oracle GoldenGate replication from on-premises to RDS Oracle Database on AWS

Setting up Oracle GoldenGate Hub on Amazon EC2

To create an Oracle GoldenGate Hub on Amazon EC2, create an Amazon EC2 instance with a full client installation of Oracle DBMS 12c version 12.2.0.3 and Oracle GoldenGate 12.3.1.4. Additionally, apply Oracle patch 13328193. For more information about installing GoldenGate, see the Oracle GoldenGate documentation. This GoldenGate Hub stores and processes all the data from your source database, so make sure that there is enough storage available in this instance to store the trail files. It is a good practice to choose the largest instance type that your GoldenGate license allows. Use appropriate Amazon EBS storage volume types, depending on the database change rate and replication performance. The following process sets up a GoldenGate Hub on an Amazon EC2 instance.
1. Add the following entry to the tnsnames.ora file to create an alias. For more information about the tnsnames.ora file, see the Oracle GoldenGate documentation.

$ cat /example/config/tnsnames.ora
TEST=
 (DESCRIPTION=
  (ENABLE=BROKEN)
  (ADDRESS_LIST=
   (ADDRESS=(PROTOCOL=TCP)(HOST=ec2-dns)(PORT=8200))
  )
  (CONNECT_DATA=
   (SID=ORCL)
  )
 )

2. Next, create subdirectories in the GoldenGate directory by using the Amazon EC2 command line shell and ggsci, the GoldenGate command interpreter. The subdirectories are created under the gg directory and include directories for parameter, report, and checkpoint files:

prompt$ cd /gg
prompt$ ./ggsci
GGSCI> CREATE SUBDIRS

3. Create a GLOBALS parameter file using the Amazon EC2 command line shell. Parameters that affect all GoldenGate processes are defined in the GLOBALS parameter file. The following example creates the necessary file:

prompt$ cd $GGHOME
prompt$ vi GLOBALS
CheckpointTable oggadm1.oggchkpt

4. Configure the manager. Add the following lines to the GLOBALS file, and then start the manager by using ggsci:

PORT 8199
PurgeOldExtracts /dirdat/*, UseCheckpoints, MINKEEPDAYS

When you have completed this process, the GoldenGate Hub is ready for use. Next, you set up the source and destination databases.

Setting up the source database for use with Oracle GoldenGate

To replicate data to the destination database in AWS, you need to set up a source database for GoldenGate. Use the following procedure to set up the source database. This process is the same for both Amazon RDS and Oracle Database on Amazon EC2.

1. Set the compatible parameter to the same as your destination database (for Amazon RDS as the destination).
2. Enable supplemental logging and force logging.
3. Verify that the database is in archivelog mode.
4. Set the ENABLE_GOLDENGATE_REPLICATION parameter to TRUE.
5. Set the retention period for archived redo logs for the GoldenGate source database.
6. Create a GoldenGate user account on the source database.

Setting up the destination database for use with Oracle GoldenGate

The following steps must be performed on the target database for GoldenGate replication to work. These steps are the same for both Amazon RDS and Oracle Database on Amazon EC2.

1. Create a GoldenGate user account on the destination database.
2. Grant the necessary privileges listed in the following example to the GoldenGate user:

CREATE SESSION
ALTER SESSION
CREATE CLUSTER
CREATE INDEXTYPE
CREATE OPERATOR
CREATE PROCEDURE
CREATE SEQUENCE
CREATE TABLE
CREATE TRIGGER
CREATE TYPE
SELECT ANY DICTIONARY
CREATE ANY TABLE
ALTER ANY TABLE
LOCK ANY TABLE
SELECT ANY TABLE
INSERT ANY TABLE
UPDATE ANY TABLE
DELETE ANY TABLE

Working with the Extract and Replicat utilities of Oracle GoldenGate

The Oracle GoldenGate Extract and Replicat utilities work together to keep the source and destination databases synchronized by means of incremental transaction replication using trail files. All changes that occur on the source database are automatically detected by Extract, then formatted and transferred to trail files on the GoldenGate Hub, on premises or on the Amazon EC2 instance. After the initial load is completed, the Replicat process reads the data from these files and replicates the data to the destination database nearly continuously.

Running the Extract process of Oracle GoldenGate

The Extract process of Oracle GoldenGate retrieves, converts, and outputs data from the source database to trail files. Extract queues transaction details to memory or to temporary disk storage. When the transaction is committed to the source database, Extract flushes all of the transaction details to a trail file for routing to the GoldenGate Hub, on premises or on the Amazon EC2 instance, and then to the destination database.

The following process enables and starts the Extract process.

1. First, configure the Extract parameter file on the GoldenGate Hub. The following example shows an Extract parameter file:

EXTRACT EABC
SETENV (ORACLE_SID=ORCL)
SETENV (NLSLANG=AL32UTF8)
USERID oggadm1@TEST, PASSWORD XXXXXX
EXTTRAIL /path/to/goldengate/dirdat/ab
IGNOREREPLICATES
GETAPPLOPS
TRANLOGOPTIONS EXCLUDEUSER OGGADM1
TABLE EXAMPLE.TABLE;

2. On the GoldenGate Hub, launch the GoldenGate command line interface (ggsci). Log in to the source database. The following example shows the format for logging in:

dblogin userid <user>@<database alias>

3. Next, add a checkpoint table for the database:

add checkpointtable

Add trandata to turn on supplemental logging for the database table:

add trandata <schema>.<table>

Alternatively, you can add trandata to turn on supplemental logging for all tables in the database:

add trandata <schema>.*

4. Using the ggsci command line, use the following commands to enable the Extract process:

add extract <extract name> tranlog, INTEGRATED tranlog, begin now
add exttrail <trail path> extract <extract name>, MEGABYTES Xm

5. Register the Extract process with the database so that the archive logs are not deleted. This lets you recover old, uncommitted transactions if necessary. To register the Extract process with the database, use the following command:

register EXTRACT <extract name> DATABASE

6. To start the Extract process, use the following command:

start <extract name>

Running the Replicat process of Oracle GoldenGate

The Replicat process of Oracle GoldenGate is used to push transaction information in the trail files to the destination database.

The following process enables and starts the Replicat process.

1. First, configure the Replicat parameter file on the GoldenGate Hub (on premises or on an Amazon EC2 instance). The following listing shows an example Replicat parameter file:

REPLICAT RABC
SETENV (ORACLE_SID=ORCL)
SETENV (NLSLANG=AL32UTF8)
USERID oggadm1@TARGET, PASSWORD XXXXXX
ASSUMETARGETDEFS
MAP EXAMPLE.TABLE, TARGET EXAMPLE.TABLE;

2. Launch the Oracle GoldenGate command line interface (ggsci). Log in to the destination database. The following example shows the format for logging in:

dblogin userid <user>@<database alias>

3. Using the ggsci command line, add a checkpoint table. Note that the user indicates the Oracle GoldenGate user account, not the owner of the destination table schema. The following example creates a checkpoint table named gg_checkpoint:

add checkpointtable gg_checkpoint

4. To enable the Replicat process, use the following command:

add replicat <replicat name> EXTTRAIL <trail path> CHECKPOINTTABLE gg_checkpoint

5. To start the Replicat process, use the following command:

start <replicat name>

Transferring files to AWS

Migrating databases to AWS requires the transfer of files to AWS. There are multiple methods of transferring files to AWS. This section describes the methods you can adopt during the
migration process.

AWS DataSync

AWS DataSync is an online data transfer service that can accelerate moving data between an on-premises storage system and AWS storage services such as Amazon S3, Amazon EFS, or Amazon FSx for Windows File Server. The AWS DataSync agent connects to the on-premises storage and copies data and metadata securely to AWS. AWS DataSync is the recommended option when you have a large volume of small files, 100 MB or less.

AWS Storage Gateway

AWS Storage Gateway is a service connecting an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization's on-premises IT environment and the AWS storage infrastructure. The service allows you to securely store data in the AWS Cloud for scalable and cost-effective storage. AWS Storage Gateway supports open-standard storage protocols that work with your existing applications. It provides low-latency performance by maintaining frequently accessed data on premises while securely storing all of your data encrypted in Amazon S3 or Amazon S3 Glacier. AWS Storage Gateway works with moderate or large file sizes.

The AWS Storage Gateway S3 File Gateway interface provides a Network File System/Server Message Block (NFS/SMB) file share in your on-premises environment. It runs as a local VM in your on-premises data center. Files can be copied at the on-premises location to this local file share, and these files are then copied to the designated S3 bucket in AWS. If your workload uses the Windows OS, you can use Amazon FSx File Gateway to copy files from on premises via SMB clients to Amazon FSx for Windows File Server.

Amazon RDS integration with S3

You can use S3 integration to transfer files between an Amazon S3 bucket and an Amazon RDS instance. The Amazon RDS instance accesses the S3 bucket via a defined IAM role, so you can have granular bucket- or object-level policies for the Amazon RDS instance. S3 integration is useful when you have to use Oracle utilities like utl_file or Data Pump. The Amazon RDS for Oracle rdsadmin package supports both upload to and download from S3 buckets.

Tsunami UDP

Tsunami UDP is an open-source file transfer protocol that uses TCP control and UDP data for transfer over long-distance networks at a very fast rate. When you use UDP for transfer, you gain more throughput than is possible with TCP over the same networks. You can download Tsunami UDP from the Tsunami UDP Protocol page at SourceForge.net. For moderate to large databases between 100 GB and 5 TB, Tsunami UDP is an option, as described in Using Tsunami to upload files to Amazon EC2. You can achieve the same results using commercial third-party WAN acceleration tools. For very large databases over 5 TB, using AWS Snow Family devices might be a better option. For smaller databases, you can also use the Amazon S3 multipart upload capability to keep it simple and efficient.

AWS Snow Family

The AWS Snow Family offers a number of physical devices and capacity points to transport up to exabytes of data into and out of AWS. Snow Family devices are owned and managed by AWS and integrate with AWS security, monitoring, storage management, and computing capabilities. For example, AWS Snowball Edge has 80 TB of usable capacity and can be mounted as an NFS mount point in the on-premises location. For smaller capacity, AWS Snowcone offers 8 TB of storage and has the capability to run the AWS DataSync agent.

Conclusion

This whitepaper described the preferred methods for migrating Oracle Database to AWS, for both Amazon EC2 and Amazon RDS. Depending on your business needs and your migration strategy, you will probably use a combination of methods to migrate your database. For best performance during migration, it is critical to choose the appropriate level of resources on AWS, especially for Amazon EC2 instances and Amazon EBS General Purpose (SSD) volume types.

Contributors

Contributors to this document include:

• Jayaraman Vellore Sampathkumar, AWS Solution Architect – Database, Amazon Web Services
• Praveen Katari, AWS Partner Solution Architect, Amazon Web Services

Further reading

For additional information on data migration with AWS services, consult the following resources:

Oracle Database on AWS:
• Advanced Architectures for Oracle Database on Amazon EC2
• Choosing the Operating System for Oracle Workloads on Amazon EC2
• Determining the IOPS Needs for Oracle Database on AWS
• Best Practices for Running Oracle Database on AWS
• AWS Case Study: Amazon.com Oracle DB Backup to Amazon S3

Oracle on AWS:
• Oracle and Amazon Web Services
• Amazon RDS for Oracle

AWS Database Migration Service (AWS DMS):
• AWS Database Migration Service

Oracle licensing on AWS:
• Licensing Oracle Software in the Cloud Computing Environment

AWS service details:
• Cloud Products
• AWS Documentation Index
• AWS Whitepapers & Guides

AWS pricing information:
• AWS Pricing
• AWS Pricing Calculator

VMware Cloud on AWS:
• VMware Cloud on AWS

Document versions

January 27, 2022: Update to text on page 30 for clarity
October 8, 2021: General updates and inclusion of AWS Snowcone and AWS DataSync services for migration
August 2018: General updates
December 2014: First publication",General,consultant,Best Practices
Streaming_Data_Solutions_on_AWS_with_Amazon_Kinesis,Streaming Data Solutions on AWS

First Published September 13, 2017
Updated September 1, 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
Real-time and near-real-time application scenarios
Difference between batch and stream processing
Stream processing challenges
Streaming data solutions: examples
Scenario 1: Internet offering based on location
  Processing streams of data with AWS Lambda
  Summary
Scenario 2: Near-real-time data for security teams
  Amazon Kinesis Data Firehose
  Summary
Scenario 3: Preparing clickstream data for data insights processes
  AWS Glue and AWS Glue streaming
  Amazon DynamoDB
  Amazon SageMaker and Amazon SageMaker service endpoints
  Inferring data insights in real time
  Summary
Scenario 4: Device sensors, real-time anomaly detection, and notifications
  Amazon Kinesis Data Analytics
  Summary
Scenario 5: Real-time telemetry data monitoring with Apache Kafka
  Amazon Managed Streaming for Apache Kafka (Amazon MSK)
  Amazon EMR with Spark Streaming
  Summary
Conclusion
Contributors
Document versions

Abstract

Data engineers, data analysts, and big data developers are looking to process and analyze their data in real time, so their companies can learn what their customers, applications, and products are doing right now and react promptly. This whitepaper describes how services such as Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, Amazon EMR, Amazon Kinesis Data Analytics, Amazon Managed Streaming for Apache Kafka (Amazon MSK), and other services can be used to implement real-time applications, and provides common design patterns using these services.

Introduction

Businesses today receive data at massive scale and speed due to the explosive growth of data sources that continuously generate streams of data. Whether it is log data from application servers, clickstream data from websites and mobile apps, or telemetry data from Internet of Things (IoT) devices, it all contains information that can help you learn what your customers, applications, and products are doing right now. Having the ability to process and analyze this data in real time is essential for tasks such as continuously monitoring your applications to ensure high service uptime and personalizing promotional offers and product recommendations. Real-time and near-real-time processing can also make other common use cases, such as website analytics and machine learning, more accurate and actionable by making data available to these applications in seconds or minutes instead of hours or days.

Real-time and near-real-time application scenarios

You can use streaming data services for real-time and near-real-time applications such as application monitoring, fraud detection, and live leaderboards. Real-time use cases require millisecond end-to-end latencies, from ingestion, to processing, all the way to emitting the results to target data stores and other systems. For example, Netflix uses Amazon Kinesis Data Streams to monitor the communications between all of its applications so it can detect and fix issues quickly, ensuring high service uptime and availability to its customers.
While the most commonly applicable use case is application performance monitoring, there are an increasing number of real-time applications in ad tech, gaming, and IoT that fall under this category.

Common near-real-time use cases include analytics on data stores for data science and machine learning (ML). You can use streaming data solutions to continuously load real-time data into your data lakes. You can then update ML models more frequently as new data becomes available, ensuring accuracy and reliability of the outputs. For example, Zillow uses Kinesis Data Streams to collect public record data and multiple listing service (MLS) listings, and then provides home buyers and sellers with the most up-to-date home value estimates in near real time. ZipRecruiter uses Amazon MSK for its event logging pipelines, critical infrastructure components that collect, store, and continually process over six billion events per day from the ZipRecruiter employment marketplace.

Difference between batch and stream processing

You need a different set of tools to collect, prepare, and process real-time streaming data than the tools you have traditionally used for batch analytics. With traditional analytics, you gather the data, load it periodically into a database, and analyze it hours, days, or weeks later. Analyzing real-time data requires a different approach. Stream processing applications process data continuously, in real time, even before it is stored. Streaming data can come in at a blistering pace, and data volumes can vary up and down at any time. Stream data processing platforms have to be able to handle the speed and variability of incoming data and process it as it arrives, often millions to hundreds of millions of events per hour.

Stream processing challenges

Processing real-time data as it arrives can enable you to make decisions much faster than is possible with traditional data analytics technologies. However, building and operating your own custom streaming data pipelines is complicated and resource intensive:

• You have to build a system that can cost-effectively collect, prepare, and transmit data coming simultaneously from thousands of data sources.
• You need to fine-tune the storage and compute resources so that data is batched and transmitted efficiently for maximum throughput and low latency.
• You have to deploy and manage a fleet of servers to scale the system so you can handle the varying speeds of data you are going to throw at it. Version upgrades are a complex and costly process.

After you have built this platform, you have to monitor the system and recover from any server or network failures by catching up on data processing from the appropriate point in the stream, without creating duplicate data. You also need a dedicated team for infrastructure management. All of this takes valuable time and money, and, at the end of the day, most companies never get there; they settle for the status quo and operate their business with information that is hours or days old.

Streaming data solutions: examples

To better understand how organizations are doing real-time data processing using AWS services, this whitepaper uses several examples. Each example reviews a scenario and discusses in detail how AWS real-time data streaming services are used to solve the problem.

Scenario 1: Internet offering based on location

Company InternetProvider provides internet services with a variety of bandwidth options to users across the world. When a user signs up for internet service, company InternetProvider offers the user different bandwidth options based on the user's geographic location. Given these requirements, company InternetProvider implemented an Amazon Kinesis data stream to consume user details and location. The user details and location are enriched with different bandwidth options prior to publishing back to the application. AWS Lambda enables this real-time enrichment.
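The ingest half of this scenario can be sketched with the SDK route. The sketch below only builds the PutRecords request entries, so it runs without AWS credentials; the stream name and event fields are illustrative assumptions, and the actual boto3 call is shown commented out:

```python
import json

# Illustrative user-signup events; field names are assumptions, not from the paper.
events = [
    {"user_id": "u-100", "lat": 47.6, "lon": -122.3},
    {"user_id": "u-101", "lat": 51.5, "lon": -0.1},
]

def build_put_records_batch(events):
    """Build the request entries for a Kinesis PutRecords call.

    Each entry needs a Data blob and a PartitionKey; records with the
    same partition key land on the same shard, preserving their order.
    """
    return [
        {
            "Data": json.dumps(e).encode("utf-8"),
            "PartitionKey": e["user_id"],
        }
        for e in events
    ]

records = build_put_records_batch(events)

# With credentials configured, the batch would be sent like this:
# import boto3
# kinesis = boto3.client("kinesis")
# response = kinesis.put_records(StreamName="user-signups", Records=records)
# Check response["FailedRecordCount"] and retry any failed entries.
```

PutRecords accepts up to 500 records (5 MiB total) per request, so a production producer would also chunk its batches and retry any entries reported in FailedRecordCount.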
Amazon Kinesis Data Streams

Amazon Kinesis Data Streams enables you to build custom real-time applications using popular stream processing frameworks and load streaming data into many different data stores. A Kinesis stream can be configured to continuously receive events from hundreds of thousands of data producers, delivered from sources like website clickstreams, IoT sensors, social media feeds, and application logs. Within milliseconds, data is available to be read and processed by your application.

When implementing a solution with Kinesis Data Streams, you create custom data processing applications known as Kinesis Data Streams applications. A typical Kinesis Data Streams application reads data from a Kinesis stream as data records. Data put into Kinesis Data Streams is highly available and elastic, and is available in milliseconds. You can continuously add various types of data, such as clickstreams, application logs, and social media, to a Kinesis stream from hundreds of thousands of sources. Within seconds, the data will be available for your Kinesis applications to read and process from the stream.

Amazon Kinesis Data Streams is a fully managed streaming data service. It manages the infrastructure, storage, networking, and configuration needed to stream your data at the level of your data throughput.

Sending data to Amazon Kinesis Data Streams

There are several ways to send data to Kinesis Data Streams, providing flexibility in the design of your solutions:

• You can write code using one of the AWS SDKs, which are available for multiple popular languages.
• You can use the Amazon Kinesis Agent, a tool for sending data to Kinesis Data Streams.

The Amazon Kinesis Producer Library (KPL) simplifies producer application development by enabling developers to achieve high write throughput to one or more Kinesis data streams. The KPL is an easy-to-use, highly configurable library that you install on your hosts. It acts as an intermediary between your producer application code and the Kinesis Data Streams API actions. For more information about the KPL and its ability to produce events synchronously and asynchronously, with code examples, see Writing to your Kinesis Data Stream Using the KPL.

There are two different operations in the Kinesis Data Streams API that add data to a stream: PutRecords and PutRecord. The PutRecords operation sends multiple records to your stream per HTTP request, while PutRecord submits one record per HTTP request. To achieve higher throughput for most applications, use PutRecords. For more information about these APIs, see Adding Data to a Stream. The details for each API operation can be found in the Amazon Kinesis Streams API Reference.

Processing data in Amazon Kinesis Data Streams

To read and process data from Kinesis streams, you need to create a consumer application. There are various ways to create consumers for Kinesis Data Streams, including using Amazon Kinesis Data Analytics to analyze streaming data, using the Kinesis Client Library (KCL), using AWS Lambda, using AWS Glue streaming ETL jobs, and using the Kinesis Data Streams API directly.

Consumer applications for Kinesis Data Streams can be developed using the KCL, which helps you consume and process data from Kinesis streams. The KCL takes care of many of the complex tasks associated with distributed computing, such as load balancing across multiple instances, responding to instance failures, checkpointing processed records, and reacting to resharding. The KCL lets you focus on writing the record processing logic. For more information on how to build your own KCL application, see Using the Kinesis Client Library.

You can subscribe Lambda functions to automatically read batches of records off your Kinesis stream and process them if records are detected on the stream. AWS Lambda periodically polls the stream, approximately once per second, for new records.
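A consumer function for such a subscription might look like the following minimal sketch. The event shape follows the Kinesis-to-Lambda integration, which delivers each record's payload base64-encoded; the enrichment step itself is an illustrative placeholder:

```python
import base64
import json

def handler(event, context):
    """Process a batch of Kinesis records delivered to Lambda.

    Each entry in event["Records"] carries the payload base64-encoded
    under kinesis.data, exactly as the producer wrote it.
    """
    processed = 0
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Illustrative enrichment step: tag the record with a bandwidth tier.
        payload["bandwidth_tier"] = "standard"
        processed += 1
    return {"processed": processed}

# A trimmed-down example of the event shape Lambda delivers:
sample_event = {
    "Records": [
        {"kinesis": {"data": base64.b64encode(
            json.dumps({"user_id": "u-100"}).encode("utf-8")).decode("ascii")}}
    ]
}
```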
When Lambda detects new records, it invokes your function, passing the new records as parameters; the function runs only when new records are detected. You can map a Lambda function to a shared-throughput consumer (standard iterator). When you require dedicated throughput that you do not want to contend for with other consumers reading from the stream, you can build a consumer that uses a feature called enhanced fan-out. This feature enables consumers to receive records from a stream with throughput of up to 2 MB of data per second per shard.

For most cases, Kinesis Data Analytics, the KCL, AWS Glue, or AWS Lambda should be used to process data from a stream. However, if you prefer, you can create a consumer application from scratch using the Kinesis Data Streams API. The API provides the GetShardIterator and GetRecords methods to retrieve data from a stream. In this pull model, your code extracts data directly from the shards of the stream. For more information about writing your own consumer application using the API, see Developing Custom Consumers with Shared Throughput Using the AWS SDK for Java. Details about the API can be found in the Amazon Kinesis Streams API Reference.

Processing streams of data with AWS Lambda

AWS Lambda enables you to run code without provisioning or managing servers. With Lambda, you can run code for virtually any type of application or backend service with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

AWS Lambda integrates natively with Amazon Kinesis Data Streams. The polling, checkpointing, and error-handling complexities are abstracted when you use this native integration, which allows the Lambda function code to focus on business logic processing. You can map a Lambda function to a shared-throughput consumer (standard iterator) or to a dedicated-throughput consumer with enhanced fan-out. With a standard iterator, Lambda polls each shard in your Kinesis stream for records over HTTP. To minimize latency and maximize read throughput, you can instead create a data stream consumer with enhanced fan-out. Stream consumers in this architecture get a dedicated connection to each shard without competing with other applications reading from the same stream; Amazon Kinesis Data Streams pushes records to Lambda over HTTP/2.

By default, AWS Lambda invokes your function as soon as records are available in the stream. To buffer the records for batch scenarios, you can implement a batch window of up to five minutes at the event source. If your function returns an error, Lambda retries the batch until processing succeeds or the data expires.

Summary

Company InternetProvider leveraged Amazon Kinesis Data Streams to stream user details and location. The stream of records was consumed by AWS Lambda, which enriched the data with bandwidth options stored in the function's library. After the enrichment, AWS Lambda published the bandwidth options back to the application. Amazon Kinesis Data Streams and AWS Lambda handled provisioning and management of servers, enabling company InternetProvider to focus more on business application development.

Scenario 2: Near-real-time data for security teams

Company ABC2Badge provides sensors and badges for corporate or large-scale events, such as AWS re:Invent. Users sign up for the event and receive unique badges that the sensors pick up across the campus. As users pass by a sensor, their anonymized information is recorded into a relational database. For an upcoming event, due to the high volume of attendees, the event security team has asked ABC2Badge to gather data on the most concentrated areas of the campus every 15 minutes. This will give the security team enough time to react and disperse security personnel in proportion to the concentrated areas.
Given this new requirement from the security team, and its inexperience in building a streaming solution to process data in near real time, ABC2Badge is looking for a simple yet scalable and reliable solution. Their current data warehouse solution is Amazon Redshift. While reviewing the features of the Amazon Kinesis services, they recognized that Amazon Kinesis Data Firehose can receive a stream of data records, batch the records based on buffer size and/or time interval, and insert them into Amazon Redshift. They created a Kinesis Data Firehose delivery stream and configured it to copy data to their Amazon Redshift tables every five minutes. As part of this new solution, they used the Amazon Kinesis Agent on their servers. Every five minutes, Kinesis Data Firehose loads data into Amazon Redshift, where the business intelligence (BI) team can perform its analysis and send the data to the security team every 15 minutes.

Amazon Kinesis Data Firehose

Amazon Kinesis Data Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon Kinesis Data Analytics, Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elasticsearch Service (Amazon ES), and Splunk. Additionally, Kinesis Data Firehose can load streaming data into any custom HTTP endpoint, or into HTTP endpoints owned by supported third-party service providers.

Kinesis Data Firehose enables near-real-time analytics with the existing business intelligence tools and dashboards you're already using today. It's a fully managed, serverless service that automatically scales to match the throughput of your data and requires no ongoing administration. Kinesis Data Firehose can batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. It can also transform the source data using AWS Lambda and deliver the transformed data to destinations. You configure your data producers to send data to Kinesis Data Firehose, which automatically delivers the data to the destination that you specify.

Sending data to a Firehose delivery stream

There are several options for sending data to your delivery stream. AWS offers SDKs for many popular programming languages, each of which provides APIs for Amazon Kinesis Data Firehose. AWS also has a utility to help send data to your delivery stream, and Kinesis Data Firehose has been integrated with other AWS services to send data directly from those services into your delivery stream.

Using the Amazon Kinesis Agent

The Amazon Kinesis Agent is a standalone software application that continuously monitors a set of log files for new data to be sent to the delivery stream. The agent automatically handles file rotation, checkpointing, and retries upon failure, and it emits Amazon CloudWatch metrics for monitoring and troubleshooting of the delivery stream. Additional configurations, such as data pre-processing, monitoring multiple file directories, and writing to multiple delivery streams, can be applied to the agent. The agent can be installed on Linux-based or Windows-based servers, such as web servers, log servers, and database servers. Once the agent is installed, you simply specify the log files it will monitor and the delivery stream it will send to, and the agent will durably and reliably send new data to the delivery stream.

Using the API with an AWS SDK, and AWS services as a source

The Kinesis Data Firehose API offers two operations for sending data to your delivery stream: PutRecord sends one data record within one call, and PutRecordBatch sends multiple data records within one call, which can achieve higher throughput per producer. In each operation, you must specify the name of the delivery stream and the data record, or array of data records, to send.
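As a sketch of the PutRecordBatch path, the helper below only assembles the request entries so it stays runnable without credentials; the delivery stream name and log lines are illustrative assumptions:

```python
log_lines = ["sensor=12 zone=A", "sensor=97 zone=C"]

def build_firehose_batch(lines):
    """Build PutRecordBatch entries.

    Unlike Kinesis Data Streams, Firehose records carry only a Data
    blob (no partition key). A trailing newline keeps records
    separated once they are concatenated at the destination.
    """
    return [{"Data": (line + "\n").encode("utf-8")} for line in lines]

batch = build_firehose_batch(log_lines)

# With credentials configured, the batch would be sent like this:
# import boto3
# firehose = boto3.client("firehose")
# resp = firehose.put_record_batch(
#     DeliveryStreamName="badge-events",  # hypothetical stream name
#     Records=batch)
# PutRecordBatch can partially fail: inspect resp["FailedPutCount"] and
# resend only the entries whose RequestResponses item has an ErrorCode.
```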
For more information and sample code for the Kinesis Data Firehose API operations, see Writing to a Firehose Delivery Stream Using the AWS SDK.

Kinesis Data Firehose also works with Kinesis Data Streams, CloudWatch Logs, CloudWatch Events, Amazon Simple Notification Service (Amazon SNS), Amazon API Gateway, and AWS IoT. You can scalably and reliably send your streams of data, logs, events, and IoT data directly into a Kinesis Data Firehose destination.

Processing data before delivery to the destination

In some scenarios, you might want to transform or enhance your streaming data before it is delivered to its destination. For example, data producers might send unstructured text in each data record, and you need to transform it to JSON before delivering it to Amazon ES. Or you might want to convert the JSON data into a columnar file format, such as Apache Parquet or Apache ORC, before storing the data in Amazon S3. Kinesis Data Firehose has a built-in data format conversion capability, with which you can easily convert your streams of JSON data into Apache Parquet or Apache ORC file formats.

Data transformation flow

To enable streaming data transformations, Kinesis Data Firehose uses a Lambda function that you create to transform your data. Kinesis Data Firehose buffers incoming data up to a specified buffer size for the function and then invokes the specified Lambda function asynchronously. The transformed data is sent from Lambda back to Kinesis Data Firehose, and Kinesis Data Firehose delivers the data to the destination.

Data format conversion

You can also enable Kinesis Data Firehose data format conversion, which converts your stream of JSON data to Apache Parquet or Apache ORC. This feature can only convert JSON to Apache Parquet or Apache ORC. If you have data in CSV, you can transform that data to JSON via a Lambda function and then apply the data format conversion.
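The transformation Lambda has a fixed contract: every record must come back with its original recordId, a result of Ok, Dropped, or ProcessingFailed, and base64-encoded data. A minimal sketch, with an illustrative text-to-JSON transform:

```python
import base64
import json

def handler(event, context):
    """Firehose data-transformation Lambda.

    Firehose invokes the function with a batch of records and expects
    every recordId back with a result status and base64-encoded data.
    """
    output = []
    for record in event["records"]:
        text = base64.b64decode(record["data"]).decode("utf-8")
        # Illustrative transform: wrap the raw line in a JSON envelope.
        transformed = json.dumps({"message": text.strip()}) + "\n"
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(transformed.encode("utf-8")).decode("ascii"),
        })
    return {"records": output}

# A trimmed-down example of the batch Firehose delivers:
sample_event = {"records": [
    {"recordId": "1", "data": base64.b64encode(b"badge 42 seen\n").decode("ascii")}
]}
```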
Data delivery

As a near-real-time delivery stream, Kinesis Data Firehose buffers incoming data. After your delivery stream's buffering thresholds have been reached, your data is delivered to the destination you've configured. There are some differences in how Kinesis Data Firehose delivers data to each destination, which this paper reviews in the following sections.

Amazon S3

Amazon S3 is object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web. It's designed to deliver 99.999999999% durability and to scale past trillions of objects worldwide.

Data delivery to Amazon S3

For data delivery to S3, Kinesis Data Firehose concatenates multiple incoming records based on the buffering configuration of your delivery stream, and then delivers them to Amazon S3 as a single S3 object. The frequency of data delivery to S3 is determined by the S3 buffer size (1 MB to 128 MB) or buffer interval (60 seconds to 900 seconds), whichever comes first.

Data delivery to your S3 bucket might fail for various reasons. For example, the bucket might not exist anymore, or the AWS Identity and Access Management (IAM) role that Kinesis Data Firehose assumes might not have access to the bucket. Under these conditions, Kinesis Data Firehose keeps retrying for up to 24 hours until the delivery succeeds. The maximum data storage time of Kinesis Data Firehose is 24 hours; if data delivery fails for more than 24 hours, your data is lost.

Amazon Redshift

Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing BI tools. It allows you to run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution.

Data delivery to Amazon Redshift

For data delivery to Amazon Redshift, Kinesis Data Firehose first delivers incoming data to your S3 bucket in the format described earlier. Kinesis Data Firehose then issues an Amazon Redshift COPY command to load the data from your S3 bucket to your Amazon Redshift cluster. The frequency of data COPY operations from S3 to Amazon Redshift is determined by how fast your Amazon Redshift cluster can finish the COPY command. For an Amazon Redshift destination, you can specify a retry duration (0–7200 seconds) when creating a delivery stream, to handle data delivery failures. Kinesis Data Firehose retries for the specified duration and skips that particular batch of S3 objects if unsuccessful. The skipped objects' information is delivered to your S3 bucket as a manifest file in the errors/ folder, which you can use for manual backfill.

Following is an architecture diagram of the Kinesis Data Firehose to Amazon Redshift data flow. Although this data flow is unique to Amazon Redshift, Kinesis Data Firehose follows similar patterns for the other destination targets.

Figure: Data flow from Kinesis Data Firehose to Amazon Redshift

Amazon Elasticsearch Service (Amazon ES)

Amazon ES is a fully managed service that delivers Elasticsearch's easy-to-use APIs and real-time capabilities, along with the availability, scalability, and security required by production workloads. Amazon ES makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full-text search, and application monitoring.

Data delivery to Amazon ES

For data delivery to Amazon ES, Kinesis Data Firehose buffers incoming records based on the buffering configuration of your delivery stream, and then generates an Elasticsearch bulk request to index multiple records to your Elasticsearch cluster. The frequency of data delivery to Amazon ES is determined by the Elasticsearch buffer size (1 MB to 100 MB) and buffer interval (60 seconds to 900 seconds) values, whichever comes first. For the Amazon ES destination, you can specify a retry duration (0–7200 seconds) when creating a delivery stream.
Kinesis Data Firehose retries for the specified duration and then skips that particular index request. The skipped documents are delivered to your S3 bucket in the elasticsearch_failed/ folder, which you can use for manual backfill. Amazon Kinesis Data Firehose can also rotate your Amazon ES index based on a time duration. Depending on the rotation option you choose (NoRotation, OneHour, OneDay, OneWeek, or OneMonth), Kinesis Data Firehose appends a portion of the Coordinated Universal Time (UTC) arrival timestamp to your specified index name.

Custom HTTP endpoints and supported third-party service providers

Kinesis Data Firehose can send data either to custom HTTP endpoints or to supported third-party providers, such as Datadog, Dynatrace, LogicMonitor, MongoDB, New Relic, Splunk, and Sumo Logic.

Data delivery to custom HTTP endpoints

For Kinesis Data Firehose to successfully deliver data to custom HTTP endpoints, these endpoints must accept requests and send responses using certain Kinesis Data Firehose request and response formats. When delivering data to an HTTP endpoint owned by a supported third-party service provider, you can use the integrated AWS Lambda service to create a function that transforms the incoming records to the format that the provider's integration expects. For data delivery frequency, each service provider has a recommended buffer size; work with your service provider for more information. For data delivery failure handling, Kinesis Data Firehose first establishes a connection with the HTTP endpoint and waits for a response from the destination. Kinesis Data Firehose continues attempting to establish the connection until the retry duration expires; after that, it considers the attempt a data delivery failure and backs up the data to your S3 bucket.

Summary

Kinesis Data Firehose can persistently deliver your streaming data to a supported destination. It's a fully managed solution, requiring little or no development. For company ABC2Badge, using Kinesis Data Firehose was a natural choice: they were already using Amazon Redshift as their data warehouse solution, and because their data sources continuously wrote to transaction logs, they were able to leverage the Amazon Kinesis Agent to stream that data without writing any additional code. Now that company ABC2Badge has created a stream of sensor records and is receiving these records via Kinesis Data Firehose, it can use this as the basis for the security team use case.

Scenario 3: Preparing clickstream data for data insights processes

Fast Sneakers is a fashion boutique with a focus on trendy sneakers. The price of any given pair of shoes can go up or down depending on inventory and trends, such as which celebrity or sports star was spotted wearing brand-name sneakers on TV last night. It is important for Fast Sneakers to track and analyze those trends to maximize their revenue. Fast Sneakers does not want to introduce additional overhead into the project with new infrastructure to maintain. They want to be able to split the development among the appropriate parties, where the data engineers can focus on data transformation and their data scientists can work on their ML functionality independently.

To react quickly and automatically adjust prices according to demand, Fast Sneakers streams significant events (like click interest and purchasing data), transforming and augmenting the event data and feeding it to an ML model. Their ML model is able to determine whether a price adjustment is required. This allows Fast Sneakers to automatically modify their pricing to maximize profit on their products.

Figure: Fast Sneakers real-time price adjustments

This architecture diagram shows the real-time streaming solution Fast Sneakers created using Kinesis Data Streams, AWS Glue, and DynamoDB Streams. By taking advantage of these services, they have a solution that is elastic and reliable without needing to spend time on setting up and maintaining the supporting infrastructure. They can spend their time on what brings value to their company, by focusing on a streaming extract, transform, load (ETL) job and their machine learning model. To better understand the architecture and technologies used in their workload, the following sections provide some details of the services used.

AWS Glue and AWS Glue streaming

AWS Glue is a fully managed ETL service that you can use to catalog your data, clean it, enrich it, and move it reliably between data stores. With AWS Glue, you can significantly reduce the cost, complexity, and time spent creating ETL jobs. AWS Glue is serverless, so there is no infrastructure to set up or manage; you pay only for the resources consumed while your jobs are running. Using AWS Glue, you can create a consumer application with an AWS Glue streaming ETL job, which lets you use Apache Spark and other Spark-based modules to consume and process your event data. The next section of this document goes into more depth about this scenario.

AWS Glue Data Catalog

The AWS Glue Data Catalog contains references to data that is used as sources and targets of your ETL jobs in AWS Glue. The AWS Glue Data Catalog is an index to the location, schema, and runtime metrics of your data, and you can use the information in it to create and monitor your ETL jobs. Information in the Data Catalog is stored as metadata tables, where each table specifies a single data store. By setting up a crawler, you can automatically assess numerous types of data stores, including DynamoDB, S3, and Java Database Connectivity (JDBC)-connected stores, extract metadata and schemas, and then create table definitions in the AWS Glue Data Catalog.

To work with Amazon Kinesis Data Streams in AWS Glue streaming ETL jobs, it is best practice to define your stream in a table in an AWS Glue Data Catalog database.
You define a stream-sourced table with the Kinesis stream in one of the many supported formats (CSV, JSON, ORC, Parquet, Avro, or a custom format with Grok). You can enter a schema manually, or you can leave that step for your AWS Glue job to determine during the job's runtime.

AWS Glue streaming ETL job

AWS Glue runs your ETL jobs in an Apache Spark serverless environment, on virtual resources that it provisions and manages in its own service account. In addition to being able to run Apache Spark-based jobs, AWS Glue provides an additional level of functionality on top of Spark with DynamicFrames. DynamicFrames are distributed tables that support nested data, such as structures and arrays. Each record is self-describing, designed for schema flexibility with semi-structured data: a record in a DynamicFrame contains both the data and the schema describing that data. Both Apache Spark DataFrames and DynamicFrames are supported in your ETL scripts, and you can convert between them. DynamicFrames provide a set of advanced transformations for data cleaning and ETL.

By using Spark Streaming in your AWS Glue job, you can create streaming ETL jobs that run continuously and consume data from streaming sources like Amazon Kinesis Data Streams, Apache Kafka, and Amazon MSK. The jobs can clean, merge, and transform the data, then load the results into stores including Amazon S3, Amazon DynamoDB, and JDBC data stores. AWS Glue processes and writes out data in 100-second windows by default. This allows data to be processed efficiently and permits aggregations to be performed on data arriving later than expected. You can configure the window size, adjusting it to trade responsiveness against the accuracy of your aggregations. AWS Glue streaming jobs use checkpoints to track the data that has been read from the Kinesis data stream. For a walkthrough on creating a streaming ETL job in AWS Glue, see Adding Streaming ETL Jobs in AWS Glue.
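Glue expresses this windowing through Spark, but the tumbling-window idea itself can be illustrated in plain Python, independent of the Glue API; the 100-second width matches the default mentioned above, and the event data is invented:

```python
from collections import defaultdict

WINDOW_SECONDS = 100  # matches AWS Glue's default streaming window size

def bucket_by_window(events, window=WINDOW_SECONDS):
    """Group (timestamp, value) events into tumbling windows.

    Each window covers [k*window, (k+1)*window). A late-arriving event
    still lands in the window its timestamp belongs to, which is how a
    windowed aggregation can absorb data arriving later than expected.
    """
    windows = defaultdict(list)
    for ts, value in events:
        windows[(ts // window) * window].append(value)
    return dict(windows)

# Invented (timestamp-in-seconds, event) pairs:
events = [(5, "click"), (99, "click"), (100, "buy"), (250, "click")]
result = bucket_by_window(events)
# result == {0: ["click", "click"], 100: ["buy"], 200: ["click"]}
```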
For a walkthrough on creating a streaming ETL job in AWS Glue, see Adding Streaming ETL Jobs in AWS Glue.

Amazon DynamoDB

Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-Region, multi-active, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than ten trillion requests per day and can support peaks of more than 20 million requests per second.

Change data capture for DynamoDB Streams

A DynamoDB stream is an ordered flow of information about changes to items in a DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table. DynamoDB integrates with AWS Lambda so that you can create triggers: pieces of code that automatically respond to events in DynamoDB streams. With triggers, you can build applications that react to data modifications in DynamoDB tables. When a stream is enabled on a table, you can associate the stream Amazon Resource Name (ARN) with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records.

Amazon SageMaker and Amazon SageMaker service endpoints

Amazon SageMaker is a fully managed platform that enables developers and data scientists to build, train, and deploy ML models quickly and at any scale. SageMaker includes modules that can be used together or independently to build, train, and deploy your ML models. With Amazon SageMaker service endpoints, you can create a managed, hosted endpoint for real-time inference with a deployed model that you developed within or outside of Amazon SageMaker. By using the AWS SDK, you can invoke a SageMaker endpoint, passing content-type information along with the content, and then
receive real-time predictions based on the data passed. This enables you to keep the design and development of your ML models separate from the code that acts on the inferred results. It lets your data scientists focus on ML, and the developers who use the ML model focus on how they use it in their code. For more information on how to invoke an endpoint in SageMaker, see InvokeEndpoint in the Amazon SageMaker API Reference.

Inferring data insights in real time

The previous architecture diagram shows that Fast Sneakers' existing web application added a Kinesis data stream containing clickstream events, which provides traffic and event data from the website. The product catalog, which contains information such as categorization, product attributes, and pricing, and the order table, which has data such as items ordered, billing, shipping, and so on, are separate DynamoDB tables. The data stream source and the appropriate DynamoDB tables have their metadata and schemas defined in the AWS Glue Data Catalog, to be used by the AWS Glue streaming ETL job. By using Apache Spark, Spark Streaming, and DynamicFrames in their AWS Glue streaming ETL job, Fast Sneakers is able to extract data from either data stream and transform it, merging data from the product and order tables. With the hydrated data from the transformation, the datasets from which to get inference results are submitted to a DynamoDB table. The DynamoDB stream for the table triggers a Lambda function for each new record written. The Lambda function submits the previously transformed records to a SageMaker endpoint with the AWS SDK, to infer what, if any, price adjustments are necessary for a product. If the ML model identifies that an adjustment to the price is required, the Lambda function writes the price change to the product in the catalog DynamoDB table.
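The trigger flow just described (stream record in, inference, conditional price write) can be sketched as a Lambda-style handler. This is a simplified sketch, not Fast Sneakers' actual code: the record shape is a trimmed DynamoDB Streams event, and `infer_price` and `write_price` are injected stand-ins for the SageMaker endpoint invocation and the DynamoDB catalog update.

```python
# Simplified sketch of the price-adjustment trigger. The record shape is a
# trimmed DynamoDB Streams event; `infer_price` and `write_price` are
# injected stand-ins for the SageMaker endpoint call and the DynamoDB update.

def handler(event, infer_price, write_price):
    """Apply model-suggested price changes for newly written records."""
    adjusted = 0
    for record in event.get("Records", []):
        if record.get("eventName") != "INSERT":
            continue  # only react to newly inserted, transformed records
        image = record["dynamodb"]["NewImage"]
        product_id = image["product_id"]["S"]
        current_price = float(image["price"]["N"])
        suggested = infer_price(image)  # real-time inference on the record
        if suggested is not None and suggested != current_price:
            write_price(product_id, suggested)  # update the catalog table
            adjusted += 1
    return {"adjusted": adjusted}
```

Injecting the inference and write steps keeps the handler testable without AWS access; in a deployed function those arguments would wrap the sagemaker-runtime and DynamoDB clients.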
Summary

Amazon Kinesis Data Streams makes it easy to collect, process, and analyze real-time streaming data, so you can get timely insights and react quickly to new information. Combined with the AWS Glue serverless data integration service, you can create real-time event streaming applications that prepare and combine data for ML. Because both Kinesis Data Streams and AWS Glue are fully managed, AWS takes away the undifferentiated heavy lifting of managing infrastructure for your big data platform, letting you focus on generating insights from your data. Fast Sneakers can use real-time event processing and ML to enable their website to make fully automated, real-time price adjustments to maximize their product stock. This brings the most value to their business while avoiding the need to create and maintain a big data platform.

Scenario 4: Device sensors, real-time anomaly detection, and notifications

Company ABC4Logistics transports highly flammable petroleum products such as gasoline, liquid propane (LPG), and naphtha from the port to various cities. There are hundreds of vehicles which have multiple sensors installed on them for monitoring things such as location, engine temperature, temperature inside the container, driving speed, parking location, road conditions, and so on. One of the requirements ABC4Logistics has is to monitor the temperatures of the engine and the container in real time and alert the driver and the fleet monitoring team in case of any anomaly. To detect such conditions and generate alerts in real time, ABC4Logistics implemented the following architecture on AWS.

ABC4Logistics's device sensors, real-time anomaly detection, and notifications architecture

Data from device sensors is ingested by AWS IoT Gateway, where the AWS IoT rules engine makes the streaming data available in Amazon Kinesis Data Streams. Using Amazon Kinesis Data Analytics, ABC4Logistics can perform real-time analytics on streaming data in Kinesis Data Streams. Using Kinesis Data Analytics, ABC4Logistics can detect
if temperature readings from the sensors deviate from the normal readings over a period of ten seconds, and ingest the record onto another Kinesis data stream, identifying the anomalous records. Amazon Kinesis Data Streams then invokes AWS Lambda functions, which can send the alerts to the driver and the fleet monitoring team through Amazon SNS. Data in Kinesis Data Streams is also pushed down to Amazon Kinesis Data Firehose. Amazon Kinesis Data Firehose persists this data in Amazon S3, allowing ABC4Logistics to perform batch or near-real-time analytics on sensor data. ABC4Logistics uses Amazon Athena to query data in S3, and Amazon QuickSight for visualizations. For long-term data retention, the S3 lifecycle policy is used to archive data to Amazon S3 Glacier. Important components of this architecture are detailed next.

Amazon Kinesis Data Analytics

Amazon Kinesis Data Analytics enables you to transform and analyze streaming data and respond to anomalies in real time. It is a serverless service on AWS, which means Kinesis Data Analytics takes care of provisioning and elastically scales the infrastructure to handle any data throughput. This takes away the undifferentiated heavy lifting of setting up and managing the streaming infrastructure and enables you to spend more time on writing streaming applications. With Amazon Kinesis Data Analytics, you can interactively query streaming data using multiple options, including standard SQL, Apache Flink applications in Java, Python, and Scala, and Apache Beam applications built using Java to analyze data streams. These options give you the flexibility to choose a specific approach depending on the complexity of the streaming application and its source and target support. The following section discusses the Kinesis Data Analytics for Apache Flink applications option.
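The ten-second deviation check ABC4Logistics performs can be sketched in plain Python (in the architecture above this logic would be expressed as SQL or Flink code inside Kinesis Data Analytics); the normal reading and tolerance values are hypothetical.

```python
# Plain-Python sketch of the ten-second deviation check; in the architecture
# above, this logic runs inside Kinesis Data Analytics. The normal reading
# and tolerance values are hypothetical.

WINDOW_SECONDS = 10

def anomalous_windows(readings, normal_temp, tolerance):
    """Return start times of windows whose average reading deviates from normal."""
    windows = {}
    for ts, temp in readings:
        # group each reading into its ten-second tumbling window
        windows.setdefault((ts // WINDOW_SECONDS) * WINDOW_SECONDS, []).append(temp)
    return [
        start
        for start, temps in sorted(windows.items())
        if abs(sum(temps) / len(temps) - normal_temp) > tolerance
    ]

# Engine temperature samples as (timestamp_seconds, reading) pairs
readings = [(0, 90), (4, 92), (12, 130), (15, 128)]
```

Each flagged window start would correspond to a record written to the anomaly stream, which in turn drives the Lambda and SNS notification path described above.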
Amazon Kinesis Data Analytics for Apache Flink applications

Apache Flink is a popular open-source framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Apache Flink is designed to perform computations at in-memory speed and at scale, with support for exactly-once semantics. Apache Flink-based applications help achieve low latency with high throughput in a fault-tolerant manner. With Amazon Kinesis Data Analytics for Apache Flink, you can author and run code against streaming sources to perform time-series analytics, feed real-time dashboards, and create real-time metrics, without managing the complex distributed Apache Flink environment. You can use the high-level Flink programming features in the same way that you use them when hosting the Flink infrastructure yourself. Kinesis Data Analytics for Apache Flink enables you to create applications in Java, Scala, Python, or SQL to process and analyze streaming data. A typical Flink application reads data from an input stream or data location (the source), transforms, filters, or joins the data using operators or functions, and stores the data on an output stream or data location (the sink). The following architecture diagram shows some of the supported sources and sinks for the Kinesis Data Analytics Flink application. In addition to the pre-bundled connectors for sources and sinks, you can also bring custom connectors to a variety of other sources and sinks for Flink applications on Kinesis Data Analytics.

Apache Flink application on Kinesis Data Analytics for real-time stream processing

Developers can use their preferred IDE to develop Flink applications and deploy them on Kinesis Data Analytics from the AWS Management Console or DevOps tools.

Amazon Kinesis Data Analytics Studio

As part of the Kinesis Data Analytics service, Kinesis Data Analytics Studio is available for customers to interactively query data streams in real time and easily build and run stream processing applications using SQL, Python, and Scala. Studio notebooks are powered by Apache
Zeppelin. Using a Studio notebook, you can develop your Flink application code in a notebook environment, view the results of your code in real time, and visualize it within your notebook. You can create a Studio notebook powered by Apache Zeppelin and Apache Flink with a single click from the Kinesis Data Streams and Amazon MSK consoles, or launch one from the Kinesis Data Analytics console. Once you develop the code iteratively as part of Kinesis Data Analytics Studio, you can deploy a notebook as a Kinesis data analytics application, to run in streaming mode continuously: reading data from your sources, writing to your destinations, maintaining long-running application state, and scaling automatically based on the throughput of your source streams. Earlier, customers used Kinesis Data Analytics for SQL Applications for such interactive analytics of real-time streaming data on AWS. Kinesis Data Analytics for SQL Applications is still available, but for new projects AWS recommends that you use the new Kinesis Data Analytics Studio. Kinesis Data Analytics Studio combines ease of use with advanced analytical capabilities, which makes it possible to build sophisticated stream processing applications in minutes. To make Kinesis Data Analytics Flink applications fault-tolerant, you can make use of checkpointing and snapshots, as described in Implementing Fault Tolerance in Kinesis Data Analytics for Apache Flink. Kinesis Data Analytics Flink applications are useful for writing complex streaming analytics applications with exactly-once semantics of data processing and checkpointing capabilities, processing data from sources such as Kinesis Data Streams, Kinesis Data Firehose, Amazon MSK, RabbitMQ, and Apache Cassandra, including custom connectors.
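The read-transform-write shape of a typical Flink application, described earlier in this section, can be sketched in plain Python. These generator functions are illustrative stand-ins, not the Flink API: `source` plays the role of a source connector, the operators filter and map events, and `sink` stands in for an output stream.

```python
# Illustrative stand-ins for the source -> operator -> sink shape of a
# typical Flink application; these plain-Python generators are not the
# Flink API.

def source(events):
    yield from events  # stands in for a Kinesis or Kafka source connector

def filter_hot(stream, limit):
    return (e for e in stream if e["temp"] > limit)  # a filter operator

def tag_anomaly(stream):
    return ({**e, "status": "ANOMALY"} for e in stream)  # a map operator

def run_pipeline(events, limit, sink):
    """Wire source, operators, and sink together and run to completion."""
    for event in tag_anomaly(filter_hot(source(events), limit)):
        sink.append(event)  # stands in for an output stream or data store

sink = []
run_pipeline([{"temp": 80}, {"temp": 130}], limit=120, sink=sink)
```

Because the operators are lazily chained, records flow through one at a time, which mirrors how a streaming engine processes an unbounded input rather than materializing the whole dataset.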
After processing streaming data in the Flink application, you can persist the data to various sinks or destinations, such as Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, Amazon DynamoDB, Amazon Elasticsearch Service, Amazon Timestream, Amazon S3, and so on. The Kinesis Data Analytics Flink application also provides sub-second performance guarantees.

Apache Beam applications for Kinesis Data Analytics

Apache Beam is a programming model for processing streaming data. Apache Beam provides a portable API layer for building sophisticated data-parallel processing pipelines that may be run across a diversity of engines, or runners, such as Flink, Spark Streaming, Apache Samza, and so on. You can use the Apache Beam framework with your Kinesis data analytics application to process streaming data. Kinesis data analytics applications that use Apache Beam use the Apache Flink runner to run Beam pipelines.

Summary

By making use of the AWS streaming services Amazon Kinesis Data Streams, Amazon Kinesis Data Analytics, and Amazon Kinesis Data Firehose, ABC4Logistics can detect anomalous patterns in temperature readings and notify the driver and the fleet management team in real time, preventing major accidents such as complete vehicle breakdown or fire.

Scenario 5: Real-time telemetry data monitoring with Apache Kafka

ABC1Cabs is an online cab booking services company. All the cabs have IoT devices that gather telemetry data from the vehicles. Currently, ABC1Cabs is running Apache Kafka clusters that are designed for real-time event consumption, gathering system health metrics, activity tracking, and feeding the data into an Apache Spark Streaming platform built on a Hadoop cluster on premises. ABC1Cabs uses Kibana dashboards for business metrics, debugging, alerting, and creating other dashboards. They are interested in Amazon MSK, Amazon EMR with Spark Streaming, and Amazon ES with Kibana dashboards. Their requirement is to reduce the administrative overhead of maintaining Apache Kafka and Hadoop clusters while using familiar open-source software and APIs to orchestrate their data pipeline. The following
architecture diagram shows their solution on AWS.

Real-time processing with Amazon MSK and stream processing using Apache Spark Streaming on EMR, with Amazon Elasticsearch Service and Kibana for dashboards

The cab IoT devices collect telemetry data and send it to a source hub. The source hub is configured to send data in real time to Amazon MSK. Using the Apache Kafka producer library APIs, Amazon MSK is configured to stream the data into an Amazon EMR cluster. The Amazon EMR cluster has a Kafka client and Spark Streaming installed, to be able to consume and process the streams of data. Spark Streaming has sink connectors which can write data directly to defined indexes of Elasticsearch. Elasticsearch clusters with Kibana can be used for metrics and dashboards. Amazon MSK, Amazon EMR with Spark Streaming, and Amazon ES with Kibana dashboards are all managed services, where AWS manages the undifferentiated heavy lifting of infrastructure management for the different clusters, which enables you to build your application using familiar open-source software with a few clicks. The next section takes a closer look at these services.

Amazon Managed Streaming for Apache Kafka (Amazon MSK)

Apache Kafka is an open-source platform that enables customers to capture streaming data like clickstream events, transactions, IoT events, and application and machine logs. With this information, you can develop applications that perform real-time analytics, run continuous transformations, and distribute this data to data lakes and databases in real time. You can use Kafka as a streaming data store to decouple applications from producers and consumers, and enable reliable data transfer between the two components. While Kafka is a popular enterprise data streaming and messaging platform, it can be difficult to set up, scale, and manage in production. Amazon MSK takes care of these management tasks and makes it easy to set up, configure, and run Kafka, along with Apache
Zookeeper, in an environment following best practices for high availability and security. You can still use Kafka's control-plane operations and data-plane operations to manage producing and consuming data. Because Amazon MSK runs and manages open-source Apache Kafka, it makes it easy for customers to migrate and run existing Apache Kafka applications on AWS without needing to make changes to their application code.

Scaling

Amazon MSK offers scaling operations, so that users can scale the cluster actively while it is running. When creating an Amazon MSK cluster, you can specify the instance type of the brokers at cluster launch. You can start with a few brokers within an Amazon MSK cluster. Then, using the AWS Management Console or AWS CLI, you can scale up to hundreds of brokers per cluster. Alternatively, you can scale your clusters by changing the size or family of your Apache Kafka brokers. Changing the size or family of your brokers gives you the flexibility to adjust your MSK cluster's compute capacity for changes in your workloads. Use the Amazon MSK Sizing and Pricing spreadsheet (file download) to determine the correct number of brokers for your Amazon MSK cluster. This spreadsheet provides an estimate for sizing an Amazon MSK cluster and the associated costs of Amazon MSK, compared to a similar self-managed, EC2-based Apache Kafka cluster. After creating the MSK cluster, you can increase the amount of EBS storage per broker, but you cannot decrease it. Storage volumes remain available during this scaling-up operation. Amazon MSK offers two types of scaling operations: automatic scaling and manual scaling. Amazon MSK supports automatic expansion of your cluster's storage in response to increased usage, using Application Auto Scaling policies. Your automatic scaling policy sets the target disk utilization and the maximum scaling capacity.
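An automatic storage-scaling setup of the kind described above might look like the following hedged sketch. The cluster ARN, target utilization, and capacity numbers are placeholders; the identifiers follow the documented "kafka" service namespace for Application Auto Scaling, and the `apply` function is illustrative only.

```python
# Hedged sketch of an Application Auto Scaling target-tracking configuration
# for MSK broker storage. The cluster ARN and numbers are hypothetical.

def storage_scaling_config(cluster_arn, target_utilization_pct, max_gib):
    """Build the scalable-target and policy settings for broker storage."""
    return {
        "target": {
            "ServiceNamespace": "kafka",
            "ResourceId": cluster_arn,
            "ScalableDimension": "kafka:broker-storage:VolumeSize",
            "MinCapacity": 1,
            "MaxCapacity": max_gib,  # the maximum scaling capacity
        },
        "policy": {
            "PolicyType": "TargetTrackingScaling",
            "TargetTrackingScalingPolicyConfiguration": {
                "TargetValue": target_utilization_pct,  # target disk utilization
                "PredefinedMetricSpecification": {
                    "PredefinedMetricType": "KafkaBrokerStorageUtilization"
                },
            },
        },
    }

def apply(cluster_arn):
    """Illustrative only: requires boto3 and AWS credentials."""
    import boto3
    cfg = storage_scaling_config(cluster_arn, 60, 4000)
    aas = boto3.client("application-autoscaling")
    aas.register_scalable_target(**cfg["target"])
    aas.put_scaling_policy(
        PolicyName="msk-broker-storage",  # hypothetical policy name
        ServiceNamespace="kafka",
        ResourceId=cluster_arn,
        ScalableDimension="kafka:broker-storage:VolumeSize",
        **cfg["policy"],
    )
```

The `TargetValue` here is the storage-utilization threshold that triggers expansion, and `MaxCapacity` bounds how far the storage can grow.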
The storage utilization threshold helps Amazon MSK trigger an automatic scaling operation. To increase storage using manual scaling, wait for the cluster to be in the ACTIVE state. Storage scaling has a cooldown period of at least six hours between events. Even though the operation makes additional storage available right away, the service performs optimizations on your cluster that can take up to 24 hours or more. The duration of these optimizations is proportional to your storage size. Additionally, Amazon MSK offers multi-Availability Zone replication within an AWS Region to provide high availability.

Configuration

Amazon MSK provides a default configuration for brokers, topics, and Apache Zookeeper nodes. You can also create custom configurations and use them to create new MSK clusters or update existing clusters. When you create an MSK cluster without specifying a custom MSK configuration, Amazon MSK creates and uses a default configuration. For a list of default values, see Apache Kafka Configuration. For monitoring purposes, Amazon MSK gathers Apache Kafka metrics and sends them to Amazon CloudWatch, where you can view them. The metrics that you configure for your MSK cluster are automatically collected and pushed to CloudWatch. Monitoring consumer lag enables you to identify slow or stuck consumers that aren't keeping up with the latest data available in a topic. When necessary, you can then take remedial actions, such as scaling or rebooting those consumers.

Migrating to Amazon MSK

Migrating from on premises to Amazon MSK can be achieved by one of the following methods:

• MirrorMaker 2.0 — MirrorMaker 2.0 (MM2) is a multi-cluster data replication engine based on the Apache Kafka Connect framework. MM2 is a combination of an Apache Kafka source connector and a sink connector. You can use a single MM2 cluster to migrate data between multiple clusters. MM2 automatically detects new topics and partitions, while also ensuring that topic configurations are synced between clusters. MM2 supports migrating ACLs,
topics, configurations, and offset translation. For more details related to migration, see Migrating Clusters Using Apache Kafka's MirrorMaker. MM2 is used for use cases related to replicating topics and configurations, and translating offsets, automatically.

• Apache Flink — MM2 supports at-least-once semantics, so records can be duplicated to the destination, and consumers are expected to be idempotent to handle duplicate records. In scenarios where exactly-once semantics are required, customers can use Apache Flink, which provides an alternative way to achieve exactly-once semantics. Apache Flink can also be used for scenarios where data requires mapping or transformation actions before submission to the destination cluster. Apache Flink provides connectors for Apache Kafka, with sources and sinks that can read data from one Apache Kafka cluster and write to another. Apache Flink can be run on AWS by launching an Amazon EMR cluster, or by running Apache Flink as an application using Amazon Kinesis Data Analytics.

• AWS Lambda — With support for Apache Kafka as an event source for AWS Lambda, customers can now consume messages from a topic via a Lambda function. The AWS Lambda service internally polls for new records or messages from the event source, and then synchronously invokes the target Lambda function to consume these messages. Lambda reads the messages in batches and provides the message batches to your function in the event payload for processing. Consumed messages can then be transformed and/or written directly to your destination Amazon MSK cluster.
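With the Lambda option above, the function receives Kafka messages in batches, with each record's value base64-encoded. A minimal handler sketch follows; `forward` is an injected stand-in for writing the transformed message to the destination cluster, and the event layout is a simplified version of the Kafka event-source payload.

```python
import base64
import json

# Minimal sketch of a Lambda consumer for a Kafka/MSK event source. Lambda
# delivers records in batches keyed by topic-partition, with each record's
# value base64-encoded. `forward` stands in for writing the transformed
# message to a destination (for example, another MSK topic).

def handler(event, forward):
    """Decode and forward each Kafka record in the batch."""
    processed = 0
    for topic_partition, records in event.get("records", {}).items():
        for record in records:
            payload = json.loads(base64.b64decode(record["value"]))
            forward(topic_partition, payload)
            processed += 1
    return {"processed": processed}
```

Because Lambda invokes the function synchronously per batch, any transformation or write done inside `forward` happens before the next batch is polled.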
Amazon EMR with Spark Streaming

Amazon EMR is a managed cluster platform that simplifies running big data frameworks such as Apache Hadoop and Apache Spark on AWS to process and analyze vast amounts of data. Amazon EMR provides the capabilities of Spark and can be used to start Spark Streaming to consume data from Kafka. Spark Streaming is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams. You can create an Amazon EMR cluster using the AWS Command Line Interface (AWS CLI) or the AWS Management Console, and select Spark and Zeppelin in the advanced configurations while creating the cluster. As shown in the following architecture diagram, data can be ingested from many sources, such as Apache Kafka and Kinesis Data Streams, and can be processed using complex algorithms expressed with high-level functions such as map, reduce, join, and window. For more information, see Transformations on DStreams. Processed data can be pushed out to file systems, databases, and live dashboards.

Real-time streaming flow from Apache Kafka to the Hadoop ecosystem

By default, Apache Spark Streaming has a micro-batch run model. However, since Spark 2.3 came out, Apache has introduced a new low-latency processing mode called Continuous Processing, which can achieve end-to-end latencies as low as one millisecond with at-least-once guarantees. Without changing the Dataset/DataFrame operations in your queries, you can choose the mode based on your application requirements. Some of the benefits of Spark Streaming are:

• It brings Apache Spark's language-integrated API to stream processing, letting you write streaming jobs the same way you write batch jobs.

• It supports Java, Scala, and Python.

• It can recover both lost work and operator state (such as sliding windows) out of the box, without any extra code on your part.

• By running on Spark, Spark Streaming lets you reuse the same code for batch processing, join streams against historical data, or run ad hoc queries on the stream state, and build powerful interactive applications, not just analytics.

• After the data stream is processed with Spark Streaming, the Elasticsearch Sink Connector can be used to write data to the Amazon ES cluster, and in turn, Amazon ES with Kibana dashboards can be used as the consumption layer.

Amazon
Elasticsearch Service with Kibana

Amazon ES is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analysis. Kibana is an open-source data visualization and exploration tool used for log and time-series analytics, application monitoring, and operational intelligence use cases. It offers powerful and easy-to-use features such as histograms, line graphs, pie charts, heat maps, and built-in geospatial support. Kibana provides tight integration with Elasticsearch, a popular analytics and search engine, which makes Kibana the default choice for visualizing data stored in Elasticsearch. Amazon ES provides an installation of Kibana with every Amazon ES domain. You can find a link to Kibana on your domain dashboard on the Amazon ES console.

Summary

With Apache Kafka offered as a managed service on AWS, you can focus on consumption rather than on managing the coordination between the brokers, which usually requires a detailed understanding of Apache Kafka. Features such as high availability, broker scalability, and granular access control are managed by the Amazon MSK platform. ABC1Cabs utilized these services to build a production application without needing infrastructure management expertise. They could focus on the processing layer to consume data from Amazon MSK and further propagate it to the visualization layer. Spark Streaming on Amazon EMR can help with real-time analytics of streaming data, publishing to Kibana on Amazon Elasticsearch Service for the visualization layer.

Conclusion

This document reviewed several scenarios for streaming workflows. In these scenarios, streaming data processing provided the example companies with the ability to add new features and functionality. By analyzing data as it gets created, you will gain insights into
what your business is doing right now. AWS streaming services enable you to focus on your application and make time-sensitive business decisions, rather than deploying and managing the infrastructure.

Contributors

The following individuals and organizations contributed to this document:

• Amalia Rabinovitch, Sr. Solutions Architect, AWS
• Priyanka Chaudhary, Data Lake, Data Architect, AWS
• Zohair Nasimi, Solutions Architect, AWS
• Rob Kuhr, Solutions Architect, AWS
• Ejaz Sayyed, Sr. Partner Solutions Architect, AWS
• Allan MacInnis, Solutions Architect, AWS
• Chander Matrubhutam, Product Marketing Manager, AWS

Document versions

September 01, 2021: Updated for technical accuracy
September 07, 2017: First publication,General,consultant,Best Practices Tagging_Best_Practices_Implement_an_Effective_AWS_Resource_Tagging_Strategy,This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/tagging-best-practices/tagging-best-practices.html

Tagging Best Practices: Implement an Effective AWS Resource Tagging Strategy

December 2018

© 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are
controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Introduction: Tagging Use Cases 1
Tags for AWS Console Organization and Resource Groups 1
Tags for Cost Allocation 1
Tags for Automation 1
Tags for Operations Support 2
Tags for Access Control 2
Tags for Security Risk Management 2
Best Practices for Identifying Tag Requirements 2
Employ a Cross-Functional Team to Identify Tag Requirements 2
Use Tags Consistently 3
Assign Owners to Define Tag Value Propositions 3
Focus on Required and Conditionally Required Tags 3
Start Small; Less is More 4
Best Practices for Naming Tags and Resources 4
Adopt a Standardized Approach for Tag Names 4
Standardize Names for AWS Resources 5
EC2 Instances 6
Other AWS Resource Types 6
Best Practices for Cost Allocation Tags 7
Align Cost Allocation Tags with Financial Reporting Dimensions 7
Use Both Linked Accounts and Cost Allocation Tags 8
Avoid Multi-Valued Cost Allocation Tags 9
Tag Everything 9
Best Practices for Tag Governance and Data Management 9
Integrate with Authoritative Data Sources 9
Use Compound Tag Values Judiciously 10
Use Automation to Proactively Tag Resources 12
Constrain Tag Values with AWS Service Catalog 12
Propagate Tag Values Across Related Resources 13
Lock Down Tags Used for Access Control 13
Remediate Untagged Resources 14
Implement a Tag Governance Process 14
Conclusion 15
Contributors 15
References 15
Tagging Use Cases 15
Align Tags with Financial Reporting Dimensions 16
Use Both Linked Accounts and Cost Allocation Tags 16
Tag Everything 16
Integrate with Authoritative Data Sources 16
Use
Compound Tag Values Judiciously 16
Use Automation to Proactively Tag Resources 17
Constrain Tag Values with AWS Service Catalog 17
Propagate Tag Values Across Related Resources 17
Lock Down Tags Used for Access Control 17
Remediate Untagged Resources 17
Document Revisions 18

Abstract

Amazon Web Services allows customers to assign metadata to their AWS resources in the form of tags. Each tag is a simple label consisting of a customer-defined key and an optional value that can make it easier to manage, search for, and filter resources. Although there are no inherent types of tags, they enable customers to categorize resources by purpose, owner, environment, or other criteria. Without the use of tags, it can become difficult to manage your resources effectively as your utilization of AWS services grows. However, it is not always evident how to determine what tags to use, and for which types of resources. The goal of this whitepaper is to help you develop a tagging strategy that enables you to manage your AWS resources more effectively.

Amazon Web Services – Tagging Best Practices Page 1

Introduction: Tagging Use Cases

Amazon Web Services allows customers to assign metadata to their AWS resources in the form of tags. Each tag is a simple label consisting of a customer-defined key and an optional value that can make it easier to manage, search for, and filter resources by purpose, owner, environment, or other criteria. AWS tags can be used for many purposes.

Tags for AWS Console Organization
and Resource Groups

Tags are a great way to organize AWS resources in the AWS Management Console. You can configure tags to be displayed with resources, and can search and filter by tag. By default, the AWS Management Console is organized by AWS service. However, the Resource Groups tool allows customers to create a custom console that organizes and consolidates AWS resources based on one or more tags, or portions of tags. Using this tool, customers can consolidate and view data for applications that consist of multiple services and resources in one place.

Tags for Cost Allocation

AWS Cost Explorer and the Cost and Usage Report support the ability to break down AWS costs by tag. Typically, customers use business tags such as cost center, business unit, or project to associate AWS costs with traditional financial reporting dimensions within their organization. However, a cost allocation report can include any tag. This allows customers to easily associate costs with technical or security dimensions, such as specific applications, environments, or compliance programs. Table 1 shows a partial cost allocation report.

Table 1: Partial cost allocation report

Tags for Automation

Resource- or service-specific tags are often used to filter resources during infrastructure automation activities.
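The opt-in start/stop automation that this section describes can be sketched as follows. The `auto-stop` tag key is a hypothetical convention, and the boto3 calls in `stop_tagged_instances` are illustrative only.

```python
# Hedged sketch of tag-driven stop automation. The `auto-stop` tag key is a
# hypothetical convention; `stop_tagged_instances` shows the illustrative
# boto3 calls and is not invoked here.

def opted_in_instance_ids(instances, key="auto-stop", value="true"):
    """Select instance IDs whose tags opt them in to the stop schedule."""
    return [
        inst["InstanceId"]
        for inst in instances
        if any(t["Key"] == key and t["Value"] == value
               for t in inst.get("Tags", []))
    ]

def stop_tagged_instances():
    """Illustrative only: requires boto3 and AWS credentials."""
    import boto3
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:auto-stop", "Values": ["true"]}]
    )["Reservations"]
    instances = [i for r in reservations for i in r["Instances"]]
    ids = opted_in_instance_ids(instances)
    if ids:
        ec2.stop_instances(InstanceIds=ids)
```

A single well-known tag key, applied consistently, is what makes this kind of scheduled automation reliable; the same selection logic can drive the matching start script.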
integrate support for AWS resources into day-to-day operations, including IT Service Management (ITSM) processes such as Incident Management. For example, Level 1 support teams could use tags to direct workflow and perform business service mapping as part of the triage process when a monitoring system triggers an alarm. Many customers also use tags to support processes such as backup/restore and operating system patching.

Tags for Access Control

AWS Identity and Access Management (IAM) policies support tag-based conditions, enabling customers to constrain permissions based on specific tags and their values. For example, IAM user or role permissions can include conditions to limit access to specific environments (for example, development, test, or production) or Amazon Virtual Private Cloud (Amazon VPC) networks based on their tags.

Tags for Security Risk Management

Tags can be assigned to identify resources that require heightened security risk management practices, for example, Amazon EC2 instances hosting applications that process sensitive or confidential data. This can enable automated compliance checks to ensure that proper access controls are in place, patch compliance is up to date, and so on.

The sections that follow identify recommended best practices for developing a comprehensive tagging strategy.

Best Practices for Identifying Tag Requirements

Employ a Cross-Functional Team to Identify Tag Requirements

As noted in the introduction, tags can be used for a variety of purposes. In order to develop a comprehensive strategy, it's best to assemble a cross-functional team to identify tagging requirements. Tag stakeholders in an organization typically include IT Finance, Information Security, application owners, cloud automation teams, middleware and database
administration teams, and process owners for functions such as patching, backup/restore, monitoring, job scheduling, and disaster recovery. Rather than meeting with each of these functional areas separately to identify their tagging needs, conduct tagging requirements workshops with representation from all stakeholder groups, so that each can hear the perspectives of the others and integrate their requirements more effectively into the overall strategy.

Use Tags Consistently

It's important to employ a consistent approach in tagging your AWS resources. If you intend to use tags for specific use cases, as illustrated by the examples in the introduction, you will need to rely on the consistent use of tags and tag values. For example, if a significant portion of your AWS resources are missing tags used for cost allocation, your cost analysis and reporting process will be more complicated and time consuming, and probably less accurate. Likewise, if resources are missing a tag that identifies the presence of sensitive data, you may have to assume that all such resources contain sensitive data as a precautionary measure.

A consistent approach is warranted even for tags identified as optional. For example, if you employ an opt-in approach for automatically stopping development environments during non-working hours, identify a single tag for this purpose rather than allowing different teams or departments to use their own, which results in many different tags all serving the same purpose.

Assign Owners to Define Tag Value Propositions

Consider tags from a cost/benefit perspective when deciding on a list of required tags. While AWS does not charge a fee for the use of tags, there may be indirect costs (for example, the labor needed to assign and maintain correct tag values for each relevant AWS resource). To ensure tags are useful, identify an owner for each one. The tag owner has the responsibility to clearly articulate its value proposition. Having tag owners may help avoid unnecessary costs related
to maintaining tags that are not used.

Focus on Required and Conditionally Required Tags

Tags can be required, conditionally required, or optional. Conditionally required tags are only mandatory under certain circumstances (for example, if an application processes sensitive data, you may require a tag to identify the corresponding data classification, such as Personally Identifiable Information or Protected Health Information). When identifying tagging requirements, focus on required and conditionally required tags. Allow for optional tags, as long as they conform to your tag naming and governance policies, to empower your organization to define new tags for unforeseen or bespoke application requirements.

Start Small; Less is More

Tagging decisions are reversible, giving you the flexibility to edit or change as needed in the future. However, there is one exception: cost allocation tags, which are included in AWS monthly cost allocation reports. The data for these reports is based on AWS services utilization and captured monthly. As a result, when you introduce a new cost allocation tag, it takes effect starting from that point in time; the new tag will not apply to past cost allocation reports.

Tags help you identify sets of resources, and tags can be removed when no longer needed. A new tag can be applied to a set of resources in bulk; however, you need to identify the resources requiring the new tag and the value to assign those resources. Start with a smaller set of tags that are known to be needed, and create new tags as the need arises. This approach is recommended over specifying an overabundance of tags that are anticipated to be needed in the future.

Best Practices for Naming Tags and Resources

Adopt a Standardized Approach for Tag Names

Keep in mind that names for
AWS tags are case sensitive, so ensure that they are used consistently. For example, the tags CostCenter and costcenter are different, so one might be configured as a cost allocation tag for financial analysis and reporting, and the other one might not be. Similarly, the Name tag appears in the AWS Console for many resources, but the name tag does not.

A number of tags are predefined by AWS or created automatically by various AWS services. Many AWS-defined tags are named using all lowercase, with hyphens separating words in the name, and prefixes to identify the source service for the tag. For example:

• aws:ec2spot:fleet-request-id identifies the Amazon EC2 Spot Instance Request that launched the instance
• aws:cloudformation:stack-name identifies the AWS CloudFormation stack that created the resource
• lambda-console:blueprint identifies the blueprint used as a template for an AWS Lambda function
• elasticbeanstalk:environment-name identifies the application that created the resource

Consider naming your tags using all lowercase, with hyphens separating words, and a prefix identifying the organization name or abbreviated name. For example, for a fictitious company named AnyCompany, you might define tags such as:

• anycompany:cost-center to identify the internal Cost Center code
• anycompany:environment-type to identify whether the environment is development, test, or production
• anycompany:application-id to identify the application the resource was created for

The prefix ensures that tags are clearly identified as having been defined by your organization, and not by AWS or a third-party tool that you may be using. Using all lowercase, with hyphens for separators, avoids confusion about how to capitalize a tag name. For example, anycompany:project-id is simpler
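A convention like this is easy to check mechanically. The following is a minimal sketch, assuming the anycompany prefix and the lowercase-words-with-hyphens rule described above; the function name is ours, not an AWS API.

```python
import re

# Accept tag names of the form "anycompany:word-word-...", all lowercase,
# with hyphens separating words (the paper's fictitious convention).
TAG_NAME_PATTERN = re.compile(r"^anycompany:[a-z0-9]+(?:-[a-z0-9]+)*$")

def is_valid_tag_name(name):
    return TAG_NAME_PATTERN.match(name) is not None

print(is_valid_tag_name("anycompany:cost-center"))  # → True
print(is_valid_tag_name("ANYCOMPANY:ProjectID"))    # → False
```

Such a check could run in a tag governance pipeline to reject organization-defined tag names that drift from the standard, while leaving AWS-defined (aws:*) tags alone.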
to remember than ANYCOMPANY:ProjectID, anycompany:projectID, or Anycompany:ProjectId.

Standardize Names for AWS Resources

Assigning names to AWS resources is another important dimension of tagging that should be considered. This is the value that is assigned to the predefined AWS Name tag (or, in some cases, by other means) and is mainly used in the AWS Management Console. To state the idea plainly: it's probably not helpful to have dozens of EC2 instances all named MyWebServer. Developing a naming standard for AWS resources will help you keep your resources organized, and can be used in AWS Cost and Usage Reports for grouping related resources together (see also Propagate Tag Values Across Related Resources below).

EC2 Instances

Naming for EC2 instances is a good place to start. Most organizations have already recognized the need to standardize on server hostnames and have existing practices in effect. For example, an organization might create hostnames based on several components, such as physical location, environment type (development, test, production), role/purpose, application ID, and a unique identifier. First, note that the various components of a hostname construction process like this are great candidates for individual AWS tags; if they were important in the past, they'll likely be important in the future. Even if these elements are captured as separate individual tags, it's still reasonable to continue to use this style of server naming to maintain consistency, substituting a different physical location code to represent AWS or an AWS region. However, if you're moving away from treating your virtual instances like pets and more like cattle (which is recommended), you'll want to automate the assignment of server names to avoid having to assign them
manually. For example, the hostname phlpwcspweb3 decomposes into phl (Philadelphia data center) + p (production) + w (web tier) + csp (Customer Service Portal) + web3 (unique identifier). As an alternative, you could simply use the AWS instance id (which is globally unique) for your server names. In either case, if you're also creating DNS names for servers, it's a good idea to associate the value used for the Name tag with the Fully Qualified Domain Name (FQDN) for the EC2 instance. So if your instance name is phlpwcspweb3, the FQDN for the server could be phlpwcspweb3.anycompany.com. If you'd rather use the instance id for the Name tag, then you should use that in your FQDN (for example, i-06599a38675.anycompany.com).

Other AWS Resource Types

For other types of AWS resources, one approach is to adopt a dot notation consisting of the following name components:

1. account name prefix: for example, production, development, shared services, audit, etc.
2. resource name: freeform field for the logical name of the resource
3. type suffix: for example, subnet, sg, role, policy, kmskey, etc.

See Table 2 for examples of tag names for other AWS resource types.

Table 2: Sample tag names for other AWS resource types

Resource Type | Example AWS Resource Name | account name | resource name | type
Subnet | prod.public-az1.subnet | Production | public-az1 | subnet
Subnet | services.az2.subnet | Shared Services | az2 | subnet
Security Group | prod.webserver.sg | Production | webserver | sg
Security Group | dev.webserver.sg | Development | webserver | sg
Security Group | services.dmz.sg | Shared Services | dmz | sg
IAM Role | prod.ec2-s3-access.role | Production | ec2-s3-access | role
IAM Role | dr.ec2-s3-access.role | Disaster Recovery | ec2-s3-access | role
KMS Key | prod.anycompany.kmskey | Production | AnyCompany | kmskey

Some resource types limit the character set that can be used for the name. In such cases, the dot
characters can be replaced with hyphens.

Best Practices for Cost Allocation Tags

Align Cost Allocation Tags with Financial Reporting Dimensions

AWS provides detailed cost reports and data extracts to help you monitor and manage your AWS spend. When you designate specific tags as cost allocation tags in the AWS Billing and Cost Management Console, billing data for AWS resources will include them. Remember, billing information is point-in-time data, so cost allocation tags appear in your billing data only after you have (1) specified them in the Billing and Cost Management Console, and (2) tagged resources with them.

A natural place to identify the cost allocation tags you need is your current IT financial reporting practices. Typically, financial reporting covers a variety of dimensions, such as business unit, cost center, product, geographic area, or department. Aligning cost allocation tags with these financial reporting dimensions simplifies and streamlines your AWS cost management.

Use Both Linked Accounts and Cost Allocation Tags

AWS resources are created within accounts, and billing reports and extracts contain the AWS account number for all billable resources, regardless of whether or not the resources have tags. You can have multiple accounts, so creating different accounts for different financial entities within your organization is a way to clearly segregate costs. AWS provides options for consolidated billing by associating payer accounts and linked accounts. You can also use AWS Organizations to create master accounts with associated member accounts to take advantage of the additional centralized management and governance capabilities.

Organizations may design their account structure based on a number of factors, including fiscal isolation,
administrative isolation, access isolation, blast radius isolation, and engineering and cost considerations (refer to the References section for links to relevant articles on AWS Answers). Examples include:

• Creating separate accounts for production and non-production to segregate communications and access for these environments
• Creating a separate account for shared services components and utilities
• Creating a separate audit account to capture log files for security forensics and monitoring
• Creating separate accounts for disaster recovery

Understand your organization's account structure when developing your tagging strategy, since alignment of some of the financial reporting dimensions may already be captured by your account structure.

Avoid Multi-Valued Cost Allocation Tags

For shared resources, you may need to allocate costs to several applications, projects, or departments. One approach to allocating costs is to create multi-valued tags that contain a series of allocation codes, possibly with corresponding allocation ratios, for example:

anycompany:cost-center = 1600|025|1625|020|1731|050|1744|005

If designated as a cost allocation tag, such tag values appear in your billing data. However, there are two challenges with this approach: (1) the data will have to be post-processed to parse the multi-valued tag values and produce more detailed records, and (2) you will need to establish a process to accurately set and maintain the tag values. If possible, consider identifying existing cost-sharing or chargeback mechanisms within your organization, or create new ones, and associate shared AWS resources to individual cost allocation codes defined by that mechanism.

Tag Everything

When developing a tagging strategy, be wary of focusing only on the set of
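Returning to the multi-valued cost allocation example above: the post-processing burden in challenge (1) can be illustrated with a small parser. This sketch assumes the tag value alternates cost-center codes with percentage ratios (025 meaning 25%), which is our reading of the example, not a defined AWS format.

```python
# Parse a pipe-delimited multi-valued cost allocation tag into
# {cost_center_code: allocation_fraction}, e.g. "025" -> 0.25.
def parse_allocation(tag_value):
    parts = tag_value.split("|")
    codes, ratios = parts[::2], parts[1::2]
    return {code: int(ratio) / 100 for code, ratio in zip(codes, ratios)}

alloc = parse_allocation("1600|025|1625|020|1731|050|1744|005")
print(alloc)  # → {'1600': 0.25, '1625': 0.2, '1731': 0.5, '1744': 0.05}
```

Every consumer of the billing data has to reimplement (and agree on) this parsing, which is exactly why single-valued cost allocation tags, or an external chargeback mechanism, are preferred.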
tags needed for your EC2 instances. Remember that AWS allows you to tag most types of resources that generate costs on your billing reports. Apply your cost allocation tags across all resource types that support tagging to get the most accurate data for your financial analysis and reporting.

Best Practices for Tag Governance and Data Management

Integrate with Authoritative Data Sources

You may decide to include tags on your AWS resources for which data is already available within your organization. For example, if you are using a Configuration Management Database (CMDB), you may already have a process in place to store and maintain metadata about your applications, databases, and environments. Configuration Items (CIs) in your CMDB may have attributes including application or server owner, technical issue resolver groups, cost center or charge code, data classification, etc. Rather than redundantly capturing and maintaining such existing metadata in AWS tags, consider integrating your CMDB with AWS. The integration can be bi-directional, meaning that data sourced from the CMDB can be copied to tags on AWS resources, and data that can be sourced from AWS (for example, IP addresses, instance IDs, and instance types) can be stored as attributes in your Configuration Items.

If you integrate your CMDB with AWS in this way, extend your AWS tag naming convention to include an additional prefix to identify tags that have externally sourced values, for example:

• anycompany:cmdb:application-id – the CMDB Configuration Item ID for the application that owns the resource
• anycompany:cmdb:cost-center – the Cost Center code associated with the owning application, sourced from the CMDB
• anycompany:cmdb:application-owner – the individual or group that owns the application
associated with this resource, sourced from the CMDB

This makes it clear that the tags are provided for convenience, and that the authoritative source of the data is the CMDB. Referencing authoritative data sources, rather than redundantly maintaining the same data in multiple systems, is a general data management best practice.

Use Compound Tag Values Judiciously

Initially, AWS limited the number of tags for a given resource to 10, resulting in some organizations combining several data elements into a single tag, using delimiters to segregate the different attributes, as in:

EnvironmentType = Development;Webserver;Tomcat 6.2;Tier 2

In 2016, the number of tags per resource was increased to 50 (with a few exceptions, such as S3 objects). Because of this, it's generally recommended to follow good data management practice by including only one data attribute per tag. However, there are some situations where it may make sense to combine several related attributes together. Some examples include:

1. For contact information, as shown in Table 3.

Table 3: Examples of compound and single tag values

Compound Tag Values
anycompany:business-contact = John Smith;johnsmith@anycompany.com;+12015551212
anycompany:technical-contact = Susan Jones;suejones@anycompany.com;+12015551213

Single Tag Values
anycompany:business-contact-name = John Smith
anycompany:business-contact-email = johnsmith@anycompany.com
anycompany:business-contact-phone = +12015551212
anycompany:technical-contact-name = Susan Jones
anycompany:technical-contact-email = suejones@anycompany.com
anycompany:technical-contact-phone = +12015551213

2. For multi-valued tags, where a single attribute can have several homogenous values. For example, a resource supporting multiple applications might use a
pipe-delimited list:

anycompany:cmdb:application-ids = APP012|APP045|APP320|APP450

However, before introducing multi-valued tags, consider the source of the information and how the information will be used if captured in an AWS tag. If there is an authoritative source for the data in question, then any processes requiring the information may be better served by referencing the authoritative source directly, rather than a tag. Also, as recommended in this paper, avoid multi-valued cost allocation tags if possible.

3. For tags used for automation purposes. Such tags typically capture opt-in and automation status information. For example, if you implement an AWS Lambda function to automatically back up EBS volumes by taking snapshots, you might use a tag that contains a short JSON document:

anycompany:auto-snapshot = { "frequency": "daily", "last-backup": "2018-04-19T21:18:00.000+0000" }

There are many automation solutions available at AWS Labs (https://github.com/awslabs) and the AWS Marketplace (https://aws.amazon.com/marketplace) that make use of compound tag values in their implementations.

Use Automation to Proactively Tag Resources

AWS offers a variety of tools to help you implement proactive tag governance practices, by ensuring that tags are consistently applied when resources are created.

AWS CloudFormation provides a common language for provisioning all the infrastructure resources in your cloud environment. CloudFormation templates are simple text files that create AWS resources in an automated and secure manner. When you create AWS resources using AWS CloudFormation templates, you can use the CloudFormation Resource Tags property to apply tags to certain resource types upon creation.

AWS Service Catalog allows organizations to create and manage
catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application environments. AWS Service Catalog enables a self-service capability for users, allowing them to provision the services they need, while also helping you to maintain consistent governance, including the application of required tags and tag values.

AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow or deny their access to AWS resources. When you create IAM policies, you can specify resource-level permissions, which include specific permissions for creating and deleting tags. In addition, you can include condition keys such as aws:RequestTag and aws:TagKeys, which will prevent resources from being created if specific tags or tag values are not present.

Constrain Tag Values with AWS Service Catalog

Tags are not useful if they contain missing or invalid data values. If tag values are set by automation, the automation code can be reviewed, tested, and enhanced to ensure that valid tag values are used. When tags are entered manually, there is the opportunity for human error. One way to reduce human error is by using AWS Service Catalog. One of the key features of AWS Service Catalog is TagOption libraries. With TagOption libraries, you can specify required tags as well as their range of allowable values. AWS Service Catalog organizes your approved AWS service offerings, or products, into multiple portfolios. You can use TagOption libraries at the portfolio level, or even at the individual product level, to specify the range of allowable values
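The aws:RequestTag condition mentioned above can be used to refuse resource creation when a required tag is missing. The following policy document is a hedged sketch using the paper's fictitious tag key; check the IAM documentation for the exact condition semantics before using anything like it.

```python
import json

# Deny ec2:RunInstances unless an "anycompany:cost-center" tag is supplied
# in the request. The Null condition evaluates to true when the key is absent.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRunInstancesWithoutCostCenter",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {"Null": {"aws:RequestTag/anycompany:cost-center": "true"}},
    }],
}
print(json.dumps(policy, indent=2))
```

Because the statement is an explicit Deny, it overrides any Allow elsewhere, so instances can only be launched when the tag is provided at creation time.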
for each tag.

Propagate Tag Values Across Related Resources

Many AWS resources are related. For example, an EC2 instance may have several Elastic Block Storage (EBS) volumes and one or more Elastic Network Interfaces (ENIs). For each EBS volume, many EBS snapshots may be created over time. For consistency, best practice is to propagate tags and tag values across related resources. If resources are created by AWS CloudFormation templates, they are created together in groups called stacks, from a common automation script, which can be configured to set tag values across all resources in the stack. For resources not created via AWS CloudFormation, you can still implement automation to automatically propagate tags from related resources. For example, when EBS snapshots are created, you can copy any tags present on the EBS volume to the snapshot. Similarly, you can use CloudWatch Events to trigger a Lambda function to copy tags from an S3 bucket to objects within the bucket any time S3 objects are created.

Lock Down Tags Used for Access Control

If you decide to use tags to supplement your access control policies, you will need to ensure that you restrict access to creating, deleting, and modifying those tags. For example, you can create IAM policies that use conditional logic to grant access to (1) EC2 instances for an IAM group created for developers, and (2) EC2 instances tagged as development. This could be further restricted to developers for a particular application, based on a condition in the IAM policy that identifies the relevant application ID. While the use of tags for this purpose is convenient, it can be easily circumvented if users have the ability to modify tag values in order to gain access that they should not have. Take preventative measures against this by ensuring that your IAM policies include deny rules for actions such as ec2:CreateTags and ec2:DeleteTags. Even with this preventative measure, IAM policies that grant access to resources based on tag values should be
used with caution and approved by your Information Security team. You may decide to use this approach for convenience in certain situations. For example, use strict IAM policies (without conditions based on tags) for restricting access to production environments; but for development environments, grant access to application-specific resources via tags to help developers avoid inadvertently affecting each other's work.

Remediate Untagged Resources

Automation and proactive tag management are important, but are not always effective. Many customers also employ reactive tag governance approaches to identify resources that are not properly tagged and correct them. Reactive tag governance approaches include (1) programmatically, using tools such as the Resource Tagging API, AWS Config rules, and custom scripts; or (2) manually, using Tag Editor and detailed billing reports.

Tag Editor is a feature of the AWS Management Console that allows you to search for resources using a variety of search criteria and add, modify, or delete tags in bulk. Search criteria can include resources with or without the presence of a particular tag or value. The AWS Resource Tagging API allows you to perform these same functions programmatically.

AWS Config enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS Config continuously monitors and records your AWS resource configurations, and allows you to automate the evaluation of recorded configurations against optimal configurations. With AWS Config, you can create rules to check resources for required tags, and it will continuously monitor your resources against those rules. Any non-compliant resources are identified on the AWS Config Dashboard and via notifications. In the case where resources are initially
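The reactive approaches above reduce to one core check: which resources are missing which required tags. A minimal sketch follows, with hypothetical resource records; a real implementation would page through the Resource Groups Tagging API or use an AWS Config required-tags rule instead.

```python
# Report resources that are missing required tags. Resource records are
# hypothetical dicts with an ARN and a {key: value} tag map.
REQUIRED_TAGS = {"anycompany:cost-center", "anycompany:environment-type"}

def missing_tags_report(resources):
    report = {}
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("Tags", {}))
        if missing:
            report[res["ResourceARN"]] = sorted(missing)
    return report

resources = [
    {"ResourceARN": "arn:aws:ec2:us-east-1:111122223333:instance/i-aaa",
     "Tags": {"anycompany:cost-center": "1600",
              "anycompany:environment-type": "production"}},
    {"ResourceARN": "arn:aws:s3:::example-bucket", "Tags": {}},
]
print(missing_tags_report(resources))
```

The output maps each non-compliant ARN to its missing tag keys, which is the shape a remediation workflow (or a notification to the resource owner) would consume.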
tagged properly, but their tags are subsequently changed or deleted, AWS Config will find them for you. You can use AWS Config with CloudWatch Events to trigger automated responses to missing or incorrect tags. An extreme example would be to automatically stop or quarantine non-compliant EC2 instances. The most suitable governance approach for an organization primarily depends on its AWS maturity model, but even experienced organizations use a combination of proactive and reactive governance techniques.

Implement a Tag Governance Process

Keep in mind that once you've settled on a tagging strategy for your organization, you will need to adapt it as you progress through your cloud journey. In particular, it's likely that requests for new tags will surface and need to be addressed. A basic tag governance process should include:

• impact analysis, approval, and implementation for requests to add, change, or deprecate tags;
• application of existing tagging requirements as new AWS services are adopted by your organization;
• monitoring and remediation of missing or incorrect tags; and
• periodic reporting on tagging metrics and key process indicators.

Conclusion

AWS resource tags can be used for a wide variety of purposes, from implementing a cost allocation process to supporting automation or authorizing access to AWS resources. Implementing a tagging strategy can be challenging for some organizations, due to the number of stakeholder groups involved and considerations such as data sourcing and tag governance. This whitepaper recommends a way forward based on a set of best practices to get you started quickly with a tagging strategy that you can adapt as your organization's needs evolve over time.

Contributors

The following individuals and organizations contributed
to this document:

Brian Yost, Senior Consultant, AWS Professional Services

References

Tagging Use Cases
• AWS Tagging Strategies
• Tagging Your Amazon EC2 Resources
• Centralized multi-account and multi-Region patching with AWS Systems Manager Automation

Align Tags with Financial Reporting Dimensions
• Monthly Cost Allocation Report
• User-Defined Cost Allocation Tags
• Cost Allocation for EBS Snapshots
• AWS-Generated Cost Allocation Tags

Use Both Linked Accounts and Cost Allocation Tags
• Consolidated Billing for Organizations
• AWS Multiple Account Billing Strategy
• AWS Multiple Account Security Strategy
• What Is AWS Organizations?

Tag Everything
• User-Defined Cost Allocation Tags

Integrate with Authoritative Data Sources
• ITIL Asset and Configuration Management in the Cloud

Use Compound Tag Values Judiciously
• Now Organize Your AWS Resources by Using up to 50 Tags per Resource

Use Automation to Proactively Tag Resources
• How can I use IAM policy tags to restrict how an EC2 instance or EBS volume can be created?
• How to Automatically Tag Amazon EC2 Resources in Response to API Events
• Supported Resource-Level Permissions for Amazon EC2 API Actions: Resource-Level Permissions for Tagging
• Example Policies for Working with the AWS CLI or an AWS SDK: Tagging Resources
• Resource Tag

Constrain Tag Values with AWS Service Catalog
• AWS Service Catalog Announces AutoTags for Automatic Tagging of Provisioned Resources
• AWS Service Catalog TagOption Library

Propagate Tag Values Across Related Resources
• CloudWatch Events for EBS Snapshots

Lock Down Tags Used for Access Control
• AWS Services That Work with IAM
• How do I create an IAM policy to control access to Amazon EC2 resources using tags?
• Controlling Access to Amazon VPC Resources

Remediate Untagged Resources
• Resource Groups and Tagging for AWS
• AWS Resource Tagging API

Document Revisions

Date | Description
December 2018 | First Publication,General,consultant,Best Practices The_Total_Cost_of_Non_Ownership_of_a_NoSQL_Database_Cloud_Service,Archived The Total Cost of (Non) Ownership of a NoSQL Database Cloud Service, Jinesh Varia and Jose Papo, March 2012. This paper has been archived. To find the latest technical content about the AWS Cloud, go to the AWS Whitepapers & Guides page on the AWS website: https://aws.amazon.com/whitepapers/,General,consultant,Best Practices U.S._Securities_and_Exchange_Commissions_SEC_Office_of_Compliance_Inspections_and_Examinations_OCIE_Cybersecurity_Initiative_Audit_Guide,"Archived US Securities and Exchange Commission's (SEC) Office of Compliance Inspections and Examinations (OCIE) Cybersecurity Initiative Audit Guide, October 2015. This paper has been archived. For the latest technical guidance on Security and Compliance, refer to https://aws.amazon.com/architecture/security
identity-compliance/ Amazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 2 of 21 © 2015 Amazon Web Services, Inc. or its affiliates. All rights reserved. Notices This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. Contents Executive Summary 4 Approaches for using AWS Audit Guides 4 Examiners 4 AWS Provided Evidence 4 OCIE Cybersecurity Audit Checklist for AWS 6 1 Governance 6 2 Network Configuration and Management 8 3 Asset Configuration and Management 9 4 Logical Access Control 10 5 Data Encryption 12 6 Security Logging and Monitoring 13 7 Security Incident Response 14 8 Disaster Recovery 15 9 Inherited Controls 16 Appendix A: References and Further Reading 18 Appendix B: Glossary of Terms 19 Appendix C: API Calls 20 Executive Summary This AWS US Securities and Exchange Commission (SEC) Office of Compliance Inspections and Examinations (OCIE) Cybersecurity Initiative audit guide has been designed by AWS to guide financial institutions that are subject to SEC audits on the use and security architecture of AWS services. This document is intended for
use by AWS financial institution customers, their examiners, and audit advisors to understand the scope of the AWS services, provide guidance for implementation, and discuss examination when using AWS services as part of the financial institution's environment for customer data. Approaches for using AWS Audit Guides Examiners When assessing organizations that use AWS services, it is critical to understand the "Shared Responsibility" model between AWS and the customer. The audit guide organizes the requirements into common security program controls and control areas. Each control references the applicable audit requirements. In general, AWS services should be treated similarly to on-premises infrastructure services that customers have traditionally used for their operating services and applications. Policies and processes that apply to devices and servers should also apply when those functions are supplied by AWS services. Controls pertaining solely to policy or procedure generally are entirely the responsibility of the customer. Similarly, management of access to AWS services, whether via the AWS Console or the command line API, should be treated like other privileged administrator access. See the appendix and referenced points for more information. AWS Provided Evidence AWS services are regularly assessed against industry standards and requirements. To support a variety of industries, including federal agencies, retailers, international organizations, health care providers, and financial institutions, AWS elects to have a variety of assessments performed against its services and infrastructure. For a complete list and information on assessments performed by third parties, refer to the AWS Compliance website. OCIE Cybersecurity Audit Checklist for AWS The AWS compliance program ensures that AWS
services are regularly audited against applicable standards. Some control statements may be satisfied by the customer's use of AWS (for instance, physical access to sensitive data). However, most controls have either shared responsibilities between AWS and the customer or are entirely the customer's responsibility. This audit checklist describes the customer responsibilities specific to the OCIE Cybersecurity Initiative when utilizing AWS services. 1 Governance Definition: Governance includes the elements required to provide senior management assurance that its direction and intent are reflected in the security posture of the customer. This is achieved by utilizing a structured approach to implementing an information security program. For the purposes of this audit plan, it means understanding which AWS services the customer has purchased, what kinds of systems and information the customer plans to use with the AWS service, and what policies, procedures, and plans apply to these services. Major audit focus: Understand what AWS services and resources are being used by the customer and ensure that the customer's security or risk management program has taken into account their use of the public cloud environment. Audit approach: As part of this audit, determine who within the customer's organization is an AWS account owner and resource owner, and what kinds of AWS services and resources they are using. Verify that the customer's policies, plans, and procedures include cloud concepts and that cloud is included in the scope of the customer's audit program. Governance Checklist Checklist Item Documentation and Inventory Verify that the customer's AWS network is fully documented and all AWS critical systems are included in their inventory documentation, with limited access to this documentation  Review AWS Config for AWS resource inventory and configuration history of resources (Example API Call 1)  Ensure that resources are appropriately tagged with a customer's application and/or
customer data  Review application architecture to identify data flows, planned connectivity between application components, and resources that contain customer data  Review all connectivity between the customer's network and the AWS platform by reviewing the following:  VPN connections where the customer's on-premises public IPs are mapped to customer gateways in any VPCs owned by the customer (Example API Calls 2 & 3)  Direct Connect private connections, which may be mapped to 1 or more VPCs owned by the customer (Example API Call 4) Risk Assessment Ensure the customer's risk assessment for AWS services includes potential cybersecurity threats, vulnerabilities, and business consequences  Verify that AWS services were included in the customer's risk assessment and privacy impact assessment  Verify that system characterization was documented for AWS services as part of the risk assessment to identify and rank information assets IT Security Program and Policy Verify that the customer includes AWS services in its security policies and procedures, including AWS account-level best practices as highlighted within the AWS service Trusted Advisor, which provides best practices and guidance across 4 topics: Security, Cost, Performance, and Fault Tolerance  Review the customer's information security policies and ensure that they include AWS services and reflect the Identity Theft Red Flags Rules (17 CFR § 248, Subpart C, Regulation S-ID)  Confirm that the customer has assigned an employee (or employees) as an authority for the use and security of AWS services and that there are defined roles for those noted key roles, including a Chief Information Security Officer  Note any published cybersecurity risk management process standards the customer has used to model their information security architecture and processes  Ensure the customer maintains documentation to support the audits conducted
for their AWS services, including its review of AWS third-party certifications  Verify that the customer's internal training records include AWS security topics such as Amazon IAM usage, Amazon EC2 Security Groups, and remote access to Amazon EC2 instances  Confirm that the customer maintains a cybersecurity response policy and training for AWS services  Note any insurance specifically related to the customer's use of AWS services and any claims related to losses and expenses attributed to cybersecurity events as a result Checklist Item Service Provider Oversight Verify that the customer's contract with AWS includes a requirement to implement and maintain privacy and security safeguards for cybersecurity requirements 2 Network Configuration and Management Definition: Network management in AWS is very similar to network management on-premises, except that network components such as firewalls and routers are virtual. Customers must ensure that their network architecture follows the security requirements of their organization, including the use of DMZs to separate public and private (untrusted and trusted) resources, the segregation of resources using subnets and routing tables, the secure configuration of DNS, whether additional transmission protection is needed in the form of a VPN, and whether to limit inbound and outbound traffic. Customers who must perform monitoring of their network can do so using host-based intrusion detection and monitoring systems. Major audit focus: Missing or inappropriately configured security controls related to external access/network security that could result in a security exposure. Audit approach: Understand the network architecture of the customer's AWS resources and how the resources are configured to allow external access from the public Internet and the customer's private networks. Note: AWS Trusted Advisor can be leveraged to validate and verify AWS
configuration settings. Network Configuration and Management Checklist Checklist Item Network Controls Identify how network segmentation is applied within the customer's AWS environment  Review AWS Security Group implementation, AWS Direct Connect, and Amazon VPN configuration for proper implementation of network segmentation and ACL and firewall settings on AWS services (Example API Calls 5-8)  Verify that the customer has a procedure for granting remote internet or VPN access to employees for AWS Console access and remote access to Amazon EC2 networks and systems  Review the following to ensure the customer maintains an environment for testing and development of software and applications that is separate from its business environment:  VPC isolation is in place between the business environment and environments used for test and development  VPC peering connectivity is between VPCs; this ensures network isolation is in place between VPCs  Subnet isolation is in place between the business environment and environments used for test and development  NACLs are associated with subnets in which business and test/development environments are located, to ensure network isolation is in place between subnets  Amazon EC2 instance isolation is in place between the business environment and environments used for test and development  Security Groups associated with 1 or more instances within the business, test, or development environments ensure network isolation between Amazon EC2 instances Review the customer's layered DDoS defense solution that operates directly on AWS, where the following services are leveraged as part of a DDoS solution:  Amazon CloudFront configuration  Amazon S3 configuration  Amazon Route 53  ELB configuration  The above services do not use customer-owned public IP addresses and offer AWS-inherited DoS mitigation features  Usage of Amazon EC2 for proxy or WAF
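Parts of the security-group review above can be scripted. The following is a minimal sketch in pure Python (no AWS calls are made; the function name is hypothetical) that takes a security group in the JSON shape returned by `aws ec2 describe-security-groups` and flags ingress rules open to the entire internet:

```python
# CIDR ranges that mean "any address" -- rules matching these are the ones
# an auditor would typically want justified or removed.
WORLD_CIDRS = {"0.0.0.0/0", "::/0"}

def open_to_world(security_group):
    """Return ((from_port, to_port), cidr) pairs for world-open ingress rules.

    `security_group` is one element of the "SecurityGroups" list returned by
    `aws ec2 describe-security-groups`.
    """
    findings = []
    for perm in security_group.get("IpPermissions", []):
        ports = (perm.get("FromPort"), perm.get("ToPort"))
        cidrs = [r.get("CidrIp") for r in perm.get("IpRanges", [])]
        cidrs += [r.get("CidrIpv6") for r in perm.get("Ipv6Ranges", [])]
        for cidr in cidrs:
            if cidr in WORLD_CIDRS:
                findings.append((ports, cidr))
    return findings

if __name__ == "__main__":
    sg = {"GroupId": "sg-12345678",
          "IpPermissions": [
              {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
               "IpRanges": [{"CidrIp": "0.0.0.0/0"}], "Ipv6Ranges": []},
              {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
               "IpRanges": [{"CidrIp": "10.0.0.0/8"}], "Ipv6Ranges": []}]}
    print(open_to_world(sg))  # [((22, 22), '0.0.0.0/0')]
```

In practice the input would come from the CLI (`aws ec2 describe-security-groups --output json`) or an SDK call; the check itself is a plain data transformation, so it can be tested and versioned independently of any AWS account.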
Further guidance can be found within the "AWS Best Practices for DDoS Resiliency" whitepaper. Malicious Code Controls Assess the implementation and management of anti-malware for Amazon EC2 instances in a similar manner as with physical systems 3 Asset Configuration and Management Definition: AWS customers are responsible for maintaining the security of anything they install on or connect to their AWS resources. Secure management of the customer's AWS resources means knowing what resources the customer is using (asset inventory), securely configuring the guest OS and applications on the customer's resources (secure configuration settings, patching, and anti-malware), and controlling changes to the customer's resources (change management). Major audit focus: Customers must manage their operating system and application security vulnerabilities to protect the security, stability, and integrity of the asset. Audit approach: Validate that the customer's OS and applications are designed, configured, patched, and hardened in accordance with the customer's policies, procedures, and standards. All OS and application management practices can be common between on-premises and AWS systems and services. Asset Configuration and Management Checklist Checklist Item Assess configuration management Verify the use of the customer's configuration management practices for all AWS system components and validate that these standards meet the customer baseline configurations  Review the customer's procedure for conducting a specialized wipe prior to deleting a volume, for compliance with their established requirements  Review the customer's identity and access management system, which may be used to allow authenticated access to the customer's applications hosted on top of AWS services  Confirm the customer completed penetration testing, including the scope for the tests Change Management Controls Ensure the customer's
use of AWS services follows the same change control processes as internal services  Verify that AWS services are included within the customer's internal patch management process. Review documented processes for configuration and patching of Amazon EC2 instances:  Amazon Machine Images (AMIs) (Example API Calls 9 & 10)  Operating systems  Applications  Review the customer's API calls for in-scope services for delete calls to ensure the customer has properly disposed of IT assets 4 Logical Access Control Definition: Logical access controls determine not only who or what can have access to a specific system resource but also the type of actions that can be performed on the resource (read, write, etc.). As part of controlling access to AWS resources, users and processes must present credentials to confirm that they are authorized to perform specific functions or have access to specific resources. The credentials required by AWS vary depending on the type of service and the access method, and include passwords, cryptographic keys, and certificates. Access to AWS resources can be enabled through the AWS account, individual AWS Identity and Access Management (IAM) user accounts created under the AWS account, or identity federation with the customer's corporate directory (single sign-on). AWS IAM enables a customer's users to securely control access to AWS services and resources. Using IAM, a customer can create and manage AWS users and groups, and use permissions to allow and deny access to AWS resources. Major audit focus: This portion of the audit focuses on identifying how users and permissions are set up in AWS for the services being used by the customer. It is also important to ensure that the credentials associated with all of the customer's AWS accounts are being managed securely by the customer. Audit approach: Validate that permissions for AWS assets are being managed in accordance
with organizational policies, procedures, and processes. Note: AWS Trusted Advisor can be leveraged to validate and verify IAM user, group, and role configurations. Logical Access Control Checklist Checklist Item Access Management, Authentication, and Authorization Ensure there are internal policies and procedures for managing access to AWS services and Amazon EC2 instances Ensure the customer documents their use and configuration of AWS access controls; examples and options are outlined below:  Description of how Amazon IAM is used for access management  List of controls that Amazon IAM is used to manage: resource management, Security Groups, VPN, object permissions, etc.  Use of native AWS access controls, or whether access is managed through federated authentication, which leverages the open standard Security Assertion Markup Language (SAML) 2.0  List of AWS accounts, roles, groups, and users; policies and policy attachments to users, groups, and roles (Example API Call 11)  A description of Amazon IAM accounts and roles and monitoring methods  A description and configuration of systems within EC2 Checklist Item Remote Access Ensure there is an approval process, logging process, or controls to prevent unauthorized remote access. Note: All access to AWS and Amazon EC2 instances is "remote access" by definition unless Direct Connect has been configured. Review the customer's process for preventing unauthorized access, which may include:  AWS CloudTrail for logging of service-level API calls  AWS CloudWatch Logs to meet logging objectives  IAM policies, S3 bucket policies, and Security Groups for controls to prevent unauthorized access Review the customer's connectivity between the customer's network and AWS:  VPN connection between VPC and the firm's network  Direct Connect (cross connect and private interfaces) between customer and AWS  Defined Security Groups, Network Access Control Lists, and
routing tables in order to control access between AWS and the customer's network Personnel Control Ensure that the customer restricts users to those AWS services strictly required for their business function (Example API Call 12)  Review the type of access control the customer has in place as it relates to AWS services  AWS access control at an AWS level – using IAM with tagging to control management of Amazon EC2 instances (start/stop/terminate) within networks  Customer access control – using the customer's IAM (LDAP solution) to manage access to resources which exist in networks at the operating system / application layers  Network access control – using AWS Security Groups (SGs), Network Access Control Lists (NACLs), routing tables, VPN connections, and VPC peering to control network access to resources within customer-owned VPCs 5 Data Encryption Definition: Data stored in AWS is secure by default; only AWS owners have access to the AWS resources they create. However, some customers who have sensitive data may require additional protection by encrypting the data when it is stored on AWS. Only the Amazon S3 service currently provides an automated server-side encryption function, in addition to allowing customers to encrypt on the customer side before the data is stored. For other AWS data storage options, the customer must perform encryption of the data. Major audit focus: Data at rest should be encrypted in the same way as the customer protects on-premises data. Also, many security policies consider the Internet an insecure communications medium and would require the encryption of data in transit. Improper protection of customers' data could create a security exposure for the customer. Audit approach: Understand where the data resides and validate the methods used to protect the data at rest and in transit (also referred to as "data in flight"). Note: AWS Trusted Advisor can be leveraged to
validate and verify permissions and access to data assets. Data Encryption Checklist Checklist Item Encryption Controls Ensure there are appropriate controls in place to protect confidential customer information in transport while using AWS services  Review methods for connection to AWS Console management, API, S3, RDS, and Amazon EC2 VPN for enforcement of encryption  Review internal policies and procedures for key management, including AWS services and Amazon EC2 instances  Review encryption methods used, if any, to protect customer PINs at rest – AWS offers a number of key management services, such as KMS, AWS CloudHSM, and Server-Side Encryption for S3, which could be used to assist with data-at-rest encryption (Example API Calls 13-15) 6 Security Logging and Monitoring Definition: Audit logs record a variety of events occurring within a customer's information systems and networks. Audit logs are used to identify activity that may impact the security of those systems, whether in real time or after the fact, so the proper configuration and protection of the logs is important. Major audit focus: Systems must be logged and monitored just as they are for on-premises systems. If AWS systems are not included in the overall company security plan, critical systems may be omitted from the scope of monitoring efforts. Audit approach: Validate that audit logging is being performed on the guest OS and critical applications installed on the customer's Amazon EC2 instances, and that implementation is in alignment with the customer's policies and procedures, especially as it relates to the storage, protection, and analysis of the logs. Security Logging and Monitoring Checklist: Checklist Item Logging Assessment, Trails, and Monitoring Review logging and monitoring policies and procedures for adequacy, retention, defined thresholds, and secure maintenance, specifically for detecting unauthorized activity within AWS
services  Review the customer's logging and monitoring policies and procedures and ensure their inclusion of AWS services, including Amazon EC2 instances, for security-related events  Verify that logging mechanisms are configured to send logs to a centralized server, and ensure that for Amazon EC2 instances the proper type and format of logs are retained in a similar manner as with physical systems  For customers using AWS CloudWatch, review the customer's process and record of their use of network monitoring  Ensure the customer utilizes analytics of events to improve their defensive measures and policies  Review the AWS IAM credential report for unauthorized users, and AWS Config and resource tagging for unauthorized devices (Example API Call 16)  Confirm the customer aggregates and correlates event data from multiple sources. The customer may use AWS services such as: a) VPC Flow Logs to identify accepted/rejected network packets entering a VPC b) AWS CloudTrail to identify authenticated and unauthenticated API calls to AWS services c) ELB logging – load balancer logging d) Amazon CloudFront logging – logging of CDN distributions Intrusion Detection and Response Review host-based IDS on Amazon EC2 instances in a similar manner as with physical systems  Review AWS-provided evidence on where information on intrusion detection processes can be reviewed 7 Security Incident Response Definition: Under a Shared Responsibility Model, security events may be monitored by the interaction of both AWS and AWS customers. AWS detects and responds to events impacting the hypervisor and the underlying infrastructure. Customers manage events from the guest operating system up through the application. The customer should understand incident response responsibilities and adapt existing security monitoring/alerting/audit tools and processes for their AWS resources. Major audit focus: Security events
should be monitored regardless of where the assets reside. The auditor can assess consistency of deploying incident management controls across all environments and validate full coverage through testing. Audit approach: Assess the existence and operational effectiveness of the incident management controls for systems in the AWS environment. Security Incident Response Checklist: Checklist Item Incident Reporting Ensure that the customer's incident response plan and policy for cybersecurity incidents includes AWS services and addresses controls that mitigate cybersecurity incidents and recovery  Ensure the customer is leveraging existing incident monitoring tools, as well as AWS-available tools, to monitor the use of AWS services  Verify that the incident response plan undergoes a periodic review and that changes related to AWS are made as needed  Note if the incident response plan has customer notification procedures and how the customer addresses responsibility for losses associated with attacks or intrusions impacting customers 8 Disaster Recovery Definition: AWS provides a highly available infrastructure that allows customers to architect resilient applications and quickly respond to major incidents or disaster scenarios. However, customers must ensure that they configure systems that require high availability or quick recovery times to take advantage of the multiple Regions and Availability Zones that AWS offers. Major audit focus: An unidentified single point of failure and/or inadequate planning to address disaster recovery scenarios could result in a significant impact to the customer. While AWS provides service level agreements (SLAs) at the individual instance/service level, these should not be confused with a customer's business continuity (BC) and disaster recovery (DR) objectives, such as Recovery Time Objective (RTO) and Recovery Point Objective (RPO). The BC/DR parameters are associated with solution design. A more resilient design would often utilize multiple
components in different AWS Availability Zones and involve data replication. Audit approach: Understand the DR strategy for the customer's environment and determine the fault-tolerant architecture employed for the customer's critical assets. Note: AWS Trusted Advisor can be leveraged to validate and verify some aspects of the customer's resiliency capabilities. Disaster Recovery Checklist: Checklist Item Business Continuity Plan (BCP) Ensure there is a comprehensive BCP for the AWS services utilized that addresses mitigation of the effects of a cybersecurity incident and/or recovery from such an incident  Within the plan, ensure that AWS is included in the customer's emergency preparedness and crisis management elements, senior manager oversight responsibilities, and the testing plan Backup and Storage Controls Review the customer's periodic test of their backup system for AWS services (Example API Calls 17 & 18)  Review inventory of data backed up to AWS services as off-site backup 9 Inherited Controls Definition: Amazon has many years of experience in designing, constructing, and operating large-scale data centers. This experience has been applied to the AWS platform and infrastructure. AWS data centers are housed in nondescript facilities. Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff. AWS only provides data center access and information to employees and contractors who have a legitimate business need for such privileges. When an employee no longer has a business need for these privileges, his or her
access is immediately revoked, even if they continue to be an employee of Amazon or Amazon Web Services. All physical access to data centers by AWS employees is logged and audited routinely. Major audit focus: The purpose of this audit section is to demonstrate that the customer conducted the appropriate due diligence in selecting service providers. Audit approach: Understand how the customer can request and evaluate third-party attestations and certifications in order to gain reasonable assurance of the design and operating effectiveness of control objectives and controls. Inherited Controls Checklist Checklist Item Physical Security & Environmental Controls Review the AWS-provided evidence for details on where information on intrusion detection processes, managed by AWS for physical security controls, can be reviewed Appendix A: References and Further Reading 1 Amazon Web Services: Introduction to AWS Security https://d0.awsstatic.com/whitepapers/Security/Intro_to_AWS_Security.pdf 2 Amazon Web Services Risk and Compliance Whitepaper https://d0.awsstatic.com/whitepapers/compliance/AWS_Risk_and_Compliance_Whitepaper.pdf 3 Using Amazon Web Services for Disaster Recovery http://d36cz9buwru1tt.cloudfront.net/AWS_Disaster_Recovery.pdf 4 Identity federation sample application for an Active Directory use case http://aws.amazon.com/code/1288653099190193 5 Single Sign-on with Windows ADFS to Amazon EC2 .NET Applications http://aws.amazon.com/articles/3698 6 Authenticating Users of AWS Mobile Applications with a Token Vending Machine http://aws.amazon.com/articles/4611615499399490 7 Client-Side Data Encryption with the
AWS SDK for Java and Amazon S3 http://aws.amazon.com/articles/2850096021478074 8 AWS Command Line Interface http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html 9 Amazon Web Services Acceptable Use Policy http://aws.amazon.com/aup/ Appendix B: Glossary of Terms API: Application Programming Interface (API) in the context of AWS. These customer access points are called API endpoints, and they allow secure HTTP access (HTTPS), which allows you to establish a secure communication session with your storage or compute instances within AWS. AWS provides SDKs and a CLI reference, which allow customers to programmatically manage AWS services via API. Authentication: Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be. Availability Zone: Amazon EC2 locations are composed of Regions and Availability Zones. Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive, low-latency network connectivity to other Availability Zones in the same Region. EC2: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Hypervisor: A hypervisor, also called a Virtual Machine Monitor (VMM), is software/hardware platform virtualization software that allows multiple operating systems to run on a host computer concurrently. IAM: AWS Identity and Access Management (IAM) enables a customer to create multiple users and manage the permissions for each of these users within their AWS account. Object: The fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The data portion is opaque to Amazon S3. The metadata is a set of name-value pairs that describe the object. These include some default metadata, such as the date last
modified, and standard HTTP metadata, such as Content-Type. The developer can also specify custom metadata at the time the object is stored. Service: Software or computing ability provided across a network (e.g., EC2, S3, VPC, etc.) Appendix C: API Calls The AWS Command Line Interface is a unified tool to manage your AWS services. Read more: http://docs.aws.amazon.com/cli/latest/reference/ and http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html 1 List all resources with tags: aws ec2 describe-tags (http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-tags.html) 2 List all customer gateways on the customer's AWS account: aws ec2 describe-customer-gateways --output table 3 List all VPN connections on the customer's AWS account: aws ec2 describe-vpn-connections 4 List all customer Direct Connect connections: aws directconnect describe-connections / aws directconnect describe-interconnects / aws directconnect describe-connections-on-interconnect / aws directconnect describe-virtual-interfaces 5 List all customer gateways on the customer's AWS account: aws ec2 describe-customer-gateways --output table 6 List all VPN connections on the customer's AWS account: aws ec2 describe-vpn-connections 7 List all customer Direct Connect connections: aws directconnect describe-connections / aws directconnect describe-interconnects / aws directconnect describe-connections-on-interconnect / aws directconnect describe-virtual-interfaces 8 Alternatively, use the Security Group focused CLI: aws ec2 describe-security-groups 9 List AMIs currently owned/registered by the customer: aws ec2 describe-images --owners self 10 List all instances launched with a specific AMI: aws ec2 describe-instances --filters "Name=image-id,Values=XXXXX" (where XXXXX = image-id value, e.g. ami-12345a12) 11 List IAM roles/groups/users: aws iam list-roles / aws iam list-groups / aws iam list-users 12
List Policies assigned to Groups/Roles/Users: aws iam listattachedrolepolicies rolename XXXX aws iam listattachedgrouppolicies groupname XXXX aws iam listattacheduserpolicies username XXXX where XXXX is a resource name within the Customers AWS Account 13 List KMS Keys aws kms listaliases 14 List Key Rotation Policy aws kms getkeyrotationstatus –keyid XXX (where XXX = keyid In AWS account 15 List EBS Volumes encrypted with KMS Keys aws ec2 describevolumes ""Name=encryptedValues=true"" targeted eg useast 1) 16 Credential Report aws iam generatecredentialreport aws iam getcredentialreport 17 Create Snapshot/Backup of EBS volume aws ec2 createsnapshot volumeid XXXXXXX (where XXXXXX = ID of volume within the AWS Account) 18 Confirm Snapshot/Backup completed aws ec2 describesnapshots filters “Name=volume idValues=XXXXXX)",General,consultant,Best Practices Understanding_T2_Standard_Instance_CPU_Credits,Understanding T2 Standard Instance CPU Credits March 8 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents AWS’s current product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS’s products or services are provid ed “as is” without warranties representations or conditions of any kind whether express or implied AWS’s responsibilities and liabilities to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agree ment between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Earned CPU Credits 1 Launch CPU Credits 1 CPU Utilization and CPU Credits 2 CPU Credit Earn Rates and CPU Utilization Rates 3 CPU Credit Earn Rates and Instance Sizes 4 Baseline Rates and Instance Sizes 5 CPU Credit Accrual Limits and 
the Discarding of Credits
The Five Phases in the CPU Credit System
Example: Tracking CPU Credit Usage
Period A — Balance at Maximum
Period B — Balance Stable
Period C — Balance Decreasing
Period D — Balance Decreasing
Period E — Balance Stable
Period F — Balance Decreases to Almost Zero
Period G — Balance at Minimum
Period H — Balance Increasing
Period I — Balance Increasing
Period J — Balance at Maximum
T2 Standard Instance Launch Credits
Launch Credit Allocation Limits
The Effects of Launch Credits on the CPU Credit Balance
Example: Tracking CPU Credit Accrual and Usage with Launch Credits
Period A — Launch Credits + 24 Hours of Earned Credits
Period B — Maximum Earned and Launch Credits
Period C — Spending Earned Credits
Period D — Balance Stable, 24 Hours of Earned Credits
Period E — Spending Earned Credits
Period F — Accruing Earned Credits
Period G — Balance Stable, 24 Hours of Earned Credits
Comparing T2 Instance Sizes With Identical Workloads
Scenario 1: Consuming CPU Credits at Different Rates
Scenario 2: Consuming 72 Credits Every 24 Hours
Scenario 3: Consuming 76 Credits Every 24 Hours
Scenario 4: Steady and Gradual Depletion of Credit Balance
Scenario 5: Variable CPU Utilization Rate
Scenario 6: Variable CPU Utilization Duration
Scenario 7: Consuming CPU Credits Immediately After Launch
Instances with Multiple vCPUs
Conclusion
Contributors
Further Reading
Document Revisions

Abstract
Choosing the best Amazon EC2 instance type for your workload can be a challenge, especially if you are considering a burstable instance type such as a T2 Standard instance. This document describes how a T2 Standard instance earns CPU credits, how launch credits are allocated, and how those launch and earned CPU credits are spent.

Introduction
Most Amazon Elastic Compute Cloud (Amazon EC2)
instance types provide a fixed level of CPU performance. However, the burstable performance instance types, T2 and T3, provide a baseline level of CPU performance with the ability to burst to a higher level (above that baseline) as required. The ability to use vCPUs at a rate higher than the baseline CPU utilization rate is governed by a CPU credit system. Unlike the T2 Unlimited and T3 instance types, T2 Standard instances can also be allocated launch credits in addition to earned credits. These two types of credits are treated differently, and because the credit balance is presented as a single numeric value, it can be difficult to understand how the credits work.

Earned CPU Credits
While a burstable instance type is running, it earns CPU credits. The rate at which an instance earns credits is based on the instance size: larger instance sizes earn CPU credits at a faster rate. CPU credits are earned in fractions of credits and are allocated at 5-minute intervals. Up to 24 hours of earned credits can be accrued in the credit balance, to be used later to burst above the baseline CPU utilization rate.

Launch CPU Credits
A T2 Standard instance is allocated launch CPU credits during the instance launch, provided that the AWS account has not exceeded its launch credit limit. (See the Launch Credit Allocation Limits section for details.) These launch credits enable the instance to burst above the baseline CPU utilization rate immediately after launch, before any earned CPU credits have been accrued by the instance. Launch credits are spent before earned CPU credits. Any unspent launch credits in the balance do not affect the accumulation of earned CPU credits.

Note: When a T2 instance is stopped (shut down), all CPU credits remaining in the CPU credit balance are forfeited.

CPU Utilization and CPU Credits
During periods of CPU utilization (above 0%), CPU credits are redeemed for CPU time used. The utilization and
corresponding CPU credit costs are calculated at millisecond granularity. The following three vCPU utilization scenarios all result in the usage of 1 CPU credit:
• 1 vCPU @ 100% utilization for 60 seconds
• 1 vCPU @ 50% utilization for 120 seconds
• 2 vCPUs @ 25% utilization for 120 seconds
The following three vCPU utilization scenarios all result in the usage of 0.5 CPU credits:
• 1 vCPU @ 100% utilization for 30 seconds
• 1 vCPU @ 50% utilization for 60 seconds
• 2 vCPUs @ 25% utilization for 60 seconds

Table 1: vCPU Utilization Rate vs. Credit Utilization Rate

vCPU Utilization Rate   Credits per Minute   Credits per Hour
100%                    1                    60
75%                     0.75                 45
50%                     0.5                  30
30%                     0.3                  18
25%                     0.25                 15
20%                     0.2                  12
15%                     0.15                 9
10%                     0.1                  6
5%                      0.05                 3
0%                      0                    0

CPU Credit Earn Rates and CPU Utilization Rates
The CPU credit earn rate for an instance depends on the instance size and is directly related to the CPU utilization baseline. For example, a t2.small instance has a baseline CPU utilization rate of 20% and earns 12 CPU credits per hour. The next three examples show the effect of three different CPU utilization rates for a t2.small instance: below the baseline (10%), at the baseline (20%), and above the baseline (30%).

CPU Utilization Rate   CPU Credits Spent   Description
10%                    6 per hour          Credits are spent at a slower rate than they are earned.
20%                    12 per hour         Credits are spent at the same rate as they are earned.
30%                    18 per hour         Credits are spent at a faster rate than they are earned.
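The credit arithmetic above can be sketched as a small helper, a minimal illustration of the one-credit definition stated earlier (the `credits_spent` function is hypothetical, not part of any AWS SDK):

```python
def credits_spent(vcpus: int, utilization: float, seconds: float) -> float:
    """CPU credits consumed: 1 credit = one vCPU at 100% for one minute.

    `utilization` is the average per-vCPU utilization as a fraction (0.0-1.0).
    """
    return vcpus * utilization * seconds / 60.0

# The one-credit and half-credit scenarios from the bullets above:
assert credits_spent(1, 1.00, 60) == 1.0
assert credits_spent(2, 0.25, 120) == 1.0
assert credits_spent(1, 1.00, 30) == 0.5
```

At 100% utilization a single vCPU spends 60 credits per hour, which matches the top row of Table 1.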
CPU Credit Earn Rates and Instance Sizes
T2 Standard instances are available in multiple sizes to match different workloads. The number of vCPUs, the CPU credit earn rate, and the amount of memory vary by instance size, as shown in the following table and graph.

Instance Size   Credits Earned per 24 Hours   Credits Earned per Hour   Maximum Credit Balance *1   Baseline CPU Utilization *2   Launch Credits Granted   vCPUs   Memory (GiB)
t2.nano         72                            3                         102                         5%                            30                       1       0.5
t2.micro        144                           6                         174                         10%                           30                       1       1
t2.small        288                           12                        318                         20%                           30                       1       2
t2.medium       576                           24                        636                         40%                           60                       2       4
t2.large        864                           36                        924                         60%                           60                       2       8
t2.xlarge       1296                          54                        1416                        90%                           120                      4       16
t2.2xlarge      1944                          81                        2184                        135%                          240                      8       32

*1 – The maximum CPU credit balance includes launch credits. Launch credits are allocated at launch and are not replenished after they are spent.
*2 – Baseline CPU utilization is based on the equivalent utilization rate for a single vCPU. See "Instances with Multiple vCPUs" for details.

Baseline Rates and Instance Sizes
The baseline CPU utilization rate for an instance is determined by the instance size: larger instance sizes have a higher baseline rate. The per-vCPU utilization rate, and the associated CPU credits spent, do not vary with T2 Standard instance size; one minute of 100% vCPU utilization on a t2.nano, t2.micro, or t2.small equates to 1 CPU credit. (See "Instances with Multiple vCPUs" for information on CPU credit usage for t2.medium and larger instances.) In the following example, the CPU utilization rate for both instances is 15% (9 CPU credits per hour). This utilization rate is above the baseline rate for a t2.micro instance but below the baseline rate for a t2.small instance:
t2.micro — CPU base rate 10%, credit earn rate 6 per hour. At a 15% CPU utilization rate (9 credits per hour), credits are being spent at a faster rate than they are being earned.
t2.small — CPU base rate 20%, credit earn rate 12 per hour. At a 15% CPU utilization rate (9 credits per hour), credits are being spent at a slower rate than they are being earned.
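The Maximum Credit Balance column (*1) in the instance-size table can be cross-checked from the other columns: it is 24 hours of earned credits plus the launch credits granted. A small sketch using values copied from the table (the `max_balance` helper is illustrative, not an AWS API):

```python
# (credits earned per hour, launch credits granted), from the table above
T2_SIZES = {
    "t2.nano": (3, 30), "t2.micro": (6, 30), "t2.small": (12, 30),
    "t2.medium": (24, 60), "t2.large": (36, 60),
    "t2.xlarge": (54, 120), "t2.2xlarge": (81, 240),
}

def max_balance(size: str) -> int:
    """Maximum credit balance = 24 hours of earned credits + launch credits."""
    per_hour, launch = T2_SIZES[size]
    return per_hour * 24 + launch

assert max_balance("t2.small") == 318      # 288 earned + 30 launch
assert max_balance("t2.2xlarge") == 2184   # 1944 earned + 240 launch
```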
CPU Credit Accrual Limits and the Discarding of Credits
The maximum number of earned CPU credits that can be accrued by a T2 Standard instance varies by instance size. As the following diagram shows for a t2.micro and a t2.small, a larger instance size has a larger bucket for accruing CPU credits. During time periods when the CPU credit spend rate is lower than the earn rate, once the maximum number of earned CPU credits has been accrued, any additional earned credits are discarded.

Note: To avoid the complexity associated with launch credits, the next examples describing CPU credits exclude launch credits. See "T2 Standard Instance Launch Credits" for a complete discussion.

The Five Phases in the CPU Credit System
• Balance Increasing — During periods when the CPU credit spend rate is less than the earn rate, you accumulate credits.
• Balance Decreasing — During periods when the CPU credit spend rate is greater than the earn rate, your credit balance declines.
• Balance Stable — During periods when the CPU credit spend rate is the same as the earn rate, the number of accumulated credits remains unchanged.
• Balance at Maximum — During periods when the CPU spend rate is less than the earn rate and you have the maximum number of CPU credits accrued, additional earned credits are discarded.
• Balance at Minimum — During periods when the credit balance is nearly depleted, the maximum utilization rate is restricted to the base rate. (The credit balance does not reach zero.)
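Ignoring launch credits, the five phases all follow from one balance-update rule per time step: earned minus spent, capped at the 24-hour maximum, with any overflow discarded. A minimal sketch of that rule (the `next_balance` helper is hypothetical, not an AWS API):

```python
def next_balance(balance, earn_rate, spend_rate, max_earned, hours=1.0):
    """Advance the (launch-credit-free) balance by one step.

    Returns (new_balance, credits_discarded). The balance is capped at
    `max_earned`; credits earned above the cap are discarded, and the
    balance never drops below zero (utilization is throttled instead).
    """
    new = balance + (earn_rate - spend_rate) * hours
    discarded = max(0.0, new - max_earned)
    return min(max(new, 0.0), max_earned), discarded

# t2.small (earn 12/hour, cap 288): at 40% CPU (24 credits/hour) the balance
# falls 12 per hour; idle at the cap, every newly earned credit is discarded.
assert next_balance(288, 12, 24, 288) == (276.0, 0.0)
assert next_balance(288, 12, 0, 288) == (288, 12.0)
```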
Example: Tracking CPU Credit Usage
In this section we illustrate CPU credit usage over time, and its effect on the CPU credit balance, for a t2.small instance over 3 days. The 3 days are divided into 10 separate periods, identified by the letters A through J, and each period is described individually in the following sections. At the start of this example, assume the following:
• The credit balance contains the maximum number of earned CPU credits (288) that can be accrued by a t2.small instance.
• The credit balance consists only of earned CPU credits. There are no launch credits in the balance. (A later example includes launch credits.)

Period A — Balance at Maximum
During this first period the credit utilization rate is zero and the number of earned credits is at the maximum limit of 24 hours of earned credits (288). Any newly earned credits are discarded.
  Credit Spend Rate: 0 credits per hour (0% of credit earn rate; 0% CPU utilization)
  Credit Earn Rate: 12 credits per hour
  Credit Discard Rate: 12 credits per hour (100% of credit earn rate)
  Credit Balance: stable at 288 credits (0 launch credits and 288 earned CPU credits)

Period B — Balance Stable
During this period the credit utilization rate is equal to the credit earn rate, so credits are replaced as they are spent. As a result, the balance remains unchanged at 288 credits.
  Credit Spend Rate: 12 credits per hour (100% of credit earn rate; 20% CPU utilization)
  Credit Earn Rate: 12 credits per hour
  Credit Discard Rate: 0 credits per hour (0% of credit earn rate)
  Credit Balance: stable at 288 credits (0 launch credits and 288 earned CPU credits)

Period C — Balance Decreasing
During this period the credit utilization rate is two times the credit earn rate, so credits are consumed from the credit balance faster than they can be replenished by earned credits.
  Credit Spend Rate: 24 credits per hour (200% of credit earn rate; 40% CPU utilization)
  Credit Earn Rate: 12 credits per hour
  Credit Discard Rate: 0 credits per hour (0% of credit earn rate)
  Credit Balance:
Balance decreases at a rate of 12 credits per hour. Change rate = earn rate (12) − spend rate (24) = −12.

Period D — Balance Decreasing
During this period the credit utilization rate is three times the credit earn rate, so credits are consumed from the credit balance at a faster rate than during period C.
  Credit Spend Rate: 36 credits per hour (300% of credit earn rate; 60% CPU utilization)
  Credit Earn Rate: 12 credits per hour
  Credit Discard Rate: 0 credits per hour (0% of credit earn rate)
  Credit Balance: decreases at a rate of 24 credits per hour. Change rate = earn rate (12) − spend rate (36) = −24.

Period E — Balance Stable
During this period, as in period B, the credit utilization rate is equal to the credit earn rate. Credits are therefore replaced as they are spent, and the balance remains stable.
  Credit Spend Rate: 12 credits per hour (100% of credit earn rate; 20% CPU utilization)
  Credit Earn Rate: 12 credits per hour
  Credit Discard Rate: 0 credits per hour (0% of credit earn rate)
  Credit Balance: stable at 72 credits (0 launch credits and 72 earned CPU credits)

Period F — Balance Decreases to Almost Zero
During this period the instance consumed CPU credits two times faster than they were being earned. Because there were enough CPU credits in the credit balance, the workload was able to run unrestricted for most of the period. However, near the end of the period, when the credit balance was nearly depleted, the CPU credit system restricted the maximum attainable CPU utilization to the base rate for a t2.small instance: 20%.
  Credit Spend Rate: 24 credits per hour (200% of credit earn rate)
40% CPU utilization
  Credit Earn Rate: 12 credits per hour
  Credit Discard Rate: 0 credits per hour (0% of credit earn rate)
  Credit Balance: decreases at a rate of 12 credits per hour. Change rate = earn rate (12) − spend rate (24) = −12. At the end of period F the credit balance is nearly depleted and the CPU utilization is limited to the base rate.

Period G — Balance at Minimum
During this period the credit balance remains stable near zero, as CPU credits are spent as fast as they are earned. When the credit balance is near zero, the maximum attainable CPU utilization is restricted to the baseline for the instance size, which is 20% in the case of a t2.small. Even if the workload required a vCPU utilization rate similar to what it had in periods C, D, and F, the T2 Standard CPU credit system limits it to the base rate.
  Credit Spend Rate: 12 credits per hour (100% of credit earn rate; 20% CPU utilization)
  Credit Earn Rate: 12 credits per hour
  Credit Discard Rate: 0 credits per hour (0% of credit earn rate)
  Credit Balance: stable at almost zero credits (0 launch credits and almost zero earned CPU credits)

Period H — Balance Increasing
During this period the credit utilization rate is half the credit earn rate, and CPU credits are added to the credit balance at a rate of 6 per hour.
  Credit Spend Rate: 6 credits per hour (50% of credit earn rate; 10% CPU utilization)
  Credit Earn Rate: 12 credits per hour
  Credit Discard Rate: 0 credits per hour (0% of credit earn rate)
  Credit Balance: increases at a rate of 6 credits per hour. Change rate = earn rate (12) − spend rate (6) = +6.

Period I — Balance Increasing
During
this period the credit utilization rate is zero, and all earned CPU credits are added to the credit balance at a rate of 12 per hour, double that of period H. By the end of the period the credit balance contains the maximum number of earned credits allowed.
  Credit Spend Rate: 0 credits per hour (0% of credit earn rate; 0% CPU utilization)
  Credit Earn Rate: 12 credits per hour
  Credit Discard Rate: 0 credits per hour (0% of credit earn rate)
  Credit Balance: increases at a rate of 12 credits per hour. Change rate = earn rate (12) − spend rate (0) = +12.

Period J — Balance at Maximum
During this period, as in period A, the credit utilization rate is zero and the credit balance contains the maximum number of earned credits allowed (288). Any newly earned and unspent credits are discarded.
  Credit Spend Rate: 0 credits per hour (0% of credit earn rate; 0% CPU utilization)
  Credit Earn Rate: 12 credits per hour
  Credit Discard Rate: 12 credits per hour (100% of credit earn rate)
  Credit Balance: stable at 288 credits. Change rate = earn rate (12) − spend rate (0) − discard rate (12) = 0.

T2 Standard Instance Launch Credits
Launch credits enable a T2 Standard instance to burst above the baseline level of CPU utilization immediately after launch, before it has earned CPU credits and accrued them in the credit balance. Launch credits apply only to T2 Standard instances.
Launch credit features:
• Launch credits are added to the overall CPU credit balance.
• Launch credits are spent before earned CPU credits.
• Launch credits do not affect the accumulation of earned CPU credits.
• Launch credits are not replenished while the instance is running.
• Launch credits are not allocated when the allocation limit is exceeded.
If you don't take these
features into account, the CPU credit balance can, under certain circumstances, seem to behave in ways that you might not expect. For example:
• The CPU credit balance can plateau at different values.
• The CPU credit balance can exhibit different behavior over time, even if the workload's CPU utilization rate is unchanged.
To better understand the effect of launch credits on the overall CPU credit balance, picture the credit balance as comprising two buckets of credits instead of one:
• A bucket for the accrued earned CPU credits, which is filled during times when the spend rate is lower than the earn rate.
• A second bucket for the launch credits, which is filled at launch time but is not replenished while the instance is running.
For a t2.micro, for example: overall credit balance = 174 = (144 earned) + (30 launch).

Launch Credit Allocation Limits
Launch credits are only allocated to T2 Standard instances during launch if the particular instance launch is within the account's launch credit allocation limit. The default limit is 100 launches or starts per account, per Region, per rolling 24-hour period. The limit can be reached through any combination of launches (or stops and starts) within the same account and the same Region during the same rolling 24-hour period. For example:
• 100 new T2 Standard instance launches, or
• 100 existing T2 Standard instance stops and starts, or
• 50 existing T2 Standard instance stops and starts plus 50 new T2 Standard instance launches.
Note: If you are regularly exceeding the launch credit allocation limit, you might want to switch to a T2 Unlimited or T3 instance instead.
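The launch-credit rules listed above can be captured in a small two-bucket model: launch credits are spent first and never refilled, while earned credits accrue toward the 24-hour cap independently of unspent launch credits. This is an illustrative sketch (the `T2StandardCredits` class is hypothetical, not part of any AWS SDK):

```python
class T2StandardCredits:
    """Two-bucket sketch of the T2 Standard credit balance described above."""

    def __init__(self, launch, earn_rate):
        self.launch = float(launch)      # launch-credit bucket, never refilled
        self.earned = 0.0                # earned-credit bucket
        self.earn_rate = earn_rate       # credits earned per hour
        self.cap = earn_rate * 24        # 24 hours of earned credits

    def step(self, spend_rate, hours=1.0):
        # Earned credits accrue up to the cap regardless of launch credits.
        self.earned = min(self.cap, self.earned + self.earn_rate * hours)
        spend = spend_rate * hours
        from_launch = min(self.launch, spend)   # launch credits go first
        self.launch -= from_launch
        self.earned = max(0.0, self.earned - (spend - from_launch))

    @property
    def balance(self):                   # the single total CloudWatch reports
        return self.launch + self.earned

# t2.micro: 30 launch credits, earns 6 per hour; idle for 24 hours.
t2 = T2StandardCredits(launch=30, earn_rate=6)
for _ in range(24):
    t2.step(spend_rate=0)
assert t2.balance == 174.0   # 30 launch + 144 earned
```

Because the balance is reported as one number, the plateau at 174 (rather than 144) is visible only through a model like this.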
The Effects of Launch Credits on the CPU Credit Balance
If a T2 Standard instance is launched but does not consume all of its launch credits within the first 24 hours, the credit balance will consist of the remaining launch credits plus 24 hours of earned credits. For example, a t2.nano instance could potentially accrue a total of 102 credits (72 earned credits plus 30 launch credits). The instance could then spend all 102 credits in a single continuous burst, as illustrated in period 1B in the following graph.
Note: Launch credits in the credit balance are illustrated by the blue line in the graph. Remember that Amazon CloudWatch only reports total credits; you cannot see the breakdown of launch credits and earned CPU credits.
Attaining a credit balance higher than the 24-hour earned CPU credit value can only be achieved one time per instance launch, because after the launch credits are spent they are not replenished. Any subsequent CPU credit accruals are limited to the value of 24 hours of earned credits, as illustrated at the start of period 2B in the graph.

Example: Tracking CPU Credit Accrual and Usage with Launch Credits
In this section we illustrate CPU credit accrual and usage for a t2.micro instance over a 4-day period, considering the effect that launch credits have on the credit balance. This example is specifically tailored to highlight some of the complexity that can be associated with launch credits. Because of launch credits, the credit balance for periods A, B, and C in this example is above the t2.micro 24-hour earned credit value of 144.

Period A — Launch Credits + 24 Hours of Earned Credits
Immediately upon the launch of the t2.micro instance, 30 launch credits are added to the overall credit balance and the instance starts to earn credits. Because no CPU credits are spent or discarded during this period, the credit balance increases at a rate of 6 credits per hour. In addition to the 30 launch credits, after 24 hours the instance has accrued 144 earned CPU credits. The credit balance is able to increase above 144 credits because the unspent launch
credits do not affect the accumulation of earned CPU credits.
Period A (duration 24 hours): 12 AM Monday – 12 AM Tuesday
  Credit Spend Rate: 0 credits per hour (0% CPU utilization)
  Credit Earn Rate: 6 credits per hour
  Credit Discard Rate: 0 credits per hour
  Credit Balance: increases from 30 at launch to 174 credits (30 launch credits and 144 earned CPU credits)

Period B — Maximum Earned and Launch Credits
At the start of period B the credit balance is 174 credits. The overall balance consists of 30 launch credits and 144 earned credits. Because the credit balance contains the maximum number of earned CPU credits for a t2.micro instance (144 credits), any newly earned credits above this limit are discarded. This results in the credit balance plateauing at a value equal to 24 hours of earned credits (144) plus the unspent launch credits (30).
Period B (duration 6 hours): 12 AM Tuesday – 6 AM Tuesday
  Credit Spend Rate: 0 credits per hour (0% CPU utilization)
  Credit Earn Rate: 6 credits per hour
  Credit Discard Rate: 6 credits per hour
  Credit Balance: stable at 174 credits (30 launch credits and 144 earned CPU credits)

Period C — Spending Earned Credits
In period C the instance consumes CPU credits at a rate of 3 credits per hour (50% of the credit earn rate). Despite the spend rate being less than the earn rate, the overall credit balance decreases at a rate equal to the credit spend rate (3 credits per hour). This occurs because the non-replenishable launch credits are being spent first, and all freshly earned CPU credits are being discarded because the credit balance already holds the maximum number of earned CPU credits (144).
Period C (duration 10 hours): 6 AM Tuesday – 4 PM Tuesday
  Credit Spend Rate: 3 credits per hour (5% CPU utilization)
  Credit
Earn Rate: 6 credits per hour
  Credit Discard Rate: 6 credits per hour
  Credit Balance: decreases from 174 to 144 credits (0 launch credits and 144 earned CPU credits)

Period D — Balance Stable, 24 Hours of Earned Credits
In period D the instance continues to consume CPU credits at a rate of 3 credits per hour (50% of the credit earn rate), as it did in period C. The credit balance contains the maximum number of earned credits (144). Half of the newly earned CPU credits are being spent, while the other half are being discarded. The balance therefore now plateaus at 144 credits, instead of at the 174-credit level seen in period B, because there are no longer any launch credits in the credit balance.
Period D (duration 8 hours): 4 PM Tuesday – 12 AM Wednesday
  Credit Spend Rate: 3 credits per hour (5% CPU utilization)
  Credit Earn Rate: 6 credits per hour
  Credit Discard Rate: 3 credits per hour
  Credit Balance: stable at 144 credits (0 launch credits and 144 earned CPU credits)

Period E — Spending Earned Credits
In period E the instance consumes CPU credits at a rate of 12 credits per hour (200% of the credit earn rate). The credit balance decreases at a rate of 6 credits per hour, from 144 to 72 credits.
Period E (duration 12 hours): 12 AM Wednesday – 12 PM Wednesday
  Credit Spend Rate: 12 credits per hour (20% CPU utilization)
  Credit Earn Rate: 6 credits per hour
  Credit Discard Rate: 0 credits per hour
  Credit Balance: decreases from 144 to 72 credits (0 launch credits and 72 earned CPU credits)

Period F — Accruing Earned Credits
In period F, as in periods C and D, the instance consumes CPU credits at a rate of 3 per hour (50% of the credit
earn rate). The credit balance decreased during period C and was stable during period D, yet it increases in period F. Why is that?
• In period C the credit balance contained launch credits in addition to the maximum number of earned credits. The launch credits were being spent, and all of the newly earned and unspent CPU credits were being discarded.
• In period D the credit balance contained the maximum number of earned credits. Half of the newly earned CPU credits were being spent, while the other half were being discarded.
• In period F the number of earned CPU credits is under the 24-hour maximum (144). No credits are being discarded: half of the newly earned CPU credits are being spent, while the other half accrue in the credit balance. This results in the overall credit balance increasing at half the earn rate.
Period F (duration 24 hours): 12 PM Wednesday – 12 PM Thursday
  Credit Spend Rate: 3 credits per hour (5% CPU utilization)
  Credit Earn Rate: 6 credits per hour
  Credit Discard Rate: 0 credits per hour
  Credit Balance: increases from 72 to 144 credits (0 launch credits and 144 earned CPU credits)

Period G — Balance Stable, 24 Hours of Earned Credits
In period G the instance continues to consume CPU credits at a rate of 3 per hour (50% of the credit earn rate), the same as in periods C, D, and F. However, because the credit balance contains the maximum number of earned credits, any freshly earned but unspent CPU credits are discarded.
Period G (duration 12 hours): 12 PM Thursday – 12 AM Friday
  Credit Spend Rate: 3 credits per hour (5% CPU utilization)
  Credit Earn Rate: 6 credits per hour
  Credit Discard Rate: 3 credits per hour
  Credit Balance: stable at 144 credits (0 launch credits and 144 earned
CPU credits)

Comparing T2 Instance Sizes with Identical Workloads
In this section we repeat the same workload (the green line in the graphs) on different sizes of T2 Standard instances, to illustrate the effect that different CPU credit earn rates have on the CPU credit balance. All three instances (t2.nano, t2.micro, and t2.small) have a single vCPU and are allocated 30 launch credits. The instances have different CPU credit earn rates, with maximum earned CPU credit accrual limits of 72, 144, and 288 credits respectively; larger instance sizes have larger maximum credit balances. If the same workload is repeated across the three instances, the credit balance changes will differ because the different earn rates offset the same spending rate.

Scenario 1: Consuming CPU Credits at Different Rates
In this scenario the first utilization period had a vCPU utilization rate of 40% (24 credits per hour), which consumed a total of 100 CPU credits over the 250-minute duration of the period. The change in credit balance depends on the instance size:
t2.nano — credit balance decreased by approximately 91 credits
t2.micro — credit balance decreased by approximately 82 credits
t2.small — credit balance decreased by approximately 65 credits
In the second utilization period the difference in the credit balance depletion rate is more apparent. The vCPU utilization rate of 20% (12 credits per hour) is equal to the CPU credit earn rate of a t2.small instance, so its credit balance does not decrease. However, the credit balances for the smaller instances do decrease.
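The size comparison reduces to simple arithmetic: the net balance change per hour is the earn rate minus the spend rate implied by the utilization. A sketch with single-vCPU earn rates taken from the instance-size table (the `net_change_per_hour` helper is illustrative, not an AWS API):

```python
# Credit earn rates (credits per hour) from the instance-size table
EARN_RATE = {"t2.nano": 3, "t2.micro": 6, "t2.small": 12}

def net_change_per_hour(size: str, utilization: float) -> float:
    """Net credit balance change per hour for a single-vCPU T2 instance.

    `utilization` is the vCPU utilization as a fraction; one vCPU at
    100% utilization spends 60 credits per hour.
    """
    spend = utilization * 60
    return EARN_RATE[size] - spend

# 20% utilization (12 credits/hour) holds steady on a t2.small but drains
# the smaller sizes, as in the second period of Scenario 1:
assert net_change_per_hour("t2.small", 0.20) == 0.0
assert net_change_per_hour("t2.micro", 0.20) == -6.0
assert net_change_per_hour("t2.nano", 0.20) == -9.0
```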
Scenario 2: Consuming 72 Credits Every 24 Hours
t2.nano — The daily credit utilization of 72 credits drains the entire credit balance of the instance during the 24-hour period.
t2.micro — The daily credit utilization of 72 credits partially depletes the credit balance during the 24-hour period.
t2.small — The peak credit usage rate is lower than the credit earn rate of a t2.small instance, so the credit balance (the red line) remains stable.
This graph shows all three instance sizes for comparison.

Scenario 3: Consuming 76 Credits Every 24 Hours
t2.nano — The daily credit usage rate exceeds the daily CPU credit earn rate. During periods of low CPU credit utilization the balance is partially replenished, but the credit balance will eventually be depleted over time.
t2.micro — The daily credit usage rate is lower than the daily CPU credit earn rate. During periods of low credit utilization the credit balance is fully replenished.
t2.small — The peak credit usage rate is lower than the CPU credit earn rate for a t2.small instance, so the credit balance (the red line) remains stable.
This graph shows all three instance sizes for comparison.

Scenario 4: Steady and Gradual Depletion of Credit Balance
t2.nano — The 7% CPU utilization workload starts 14 hours after the instance is launched and consumes CPU credits at a rate of approximately 4 per hour. The spend rate is higher than the t2.nano earn rate of 3 credits per hour, so the credit balance gradually decreases. The credit balance is depleted approximately 72 hours after launch, at which point the maximum attainable CPU utilization is restricted to the base rate of 5% (3 credits per hour).
t2.micro — The 7% CPU utilization workload (approximately 4 credits per hour) is below the t2.micro baseline (10%, or 6 credits per hour), so the credit balance does not decrease and the workload can continue at this utilization rate.

Scenario 5: Variable CPU
Utilization Rate In this scenario the duration of the daily workload varies On Thurs day the workload increased to the point where it almost depleted the credit balance Scenario 6: Variable CPU Utilization Duration In this scenario the duration of the daily workload is slowly and gradually increasing If the workload continues to incr ease in this manner it might result in a depletion of the credit balance Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 37 Scenario 7: Consuming CPU Credits Immediately After Launch A total of 99 CPU credits are required to complete this workload —ideally at a rate of 9 credits per hour t2nano — The workload running at a rate of 9 CPU credits per hour consumes the 30 launch credits in approximately 33 hours and then begins to consume accrued earned CPU credits Approximately 5 hours after launch the credit balance is depleted and the maximum a ttainable CPU utilization is restricted to the base rate (5%) At this reduced utilization rate the workload is restricted and requires approximately 23 hours to complete t2small — The workload running at a rate of 9 CPU credits per hour has a lower spend rate than the base rate for a t2small instance of 12 credits per hour The workload can run unrestrained and requires approximately 11 hours to complete Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 38 Instances with Multiple vCPUs T2 instance sizes larger than t2small have more than 1 vCPU The individual vCPUs consume credits from the single credit balance based on their individual CPU utilization rates The CPU credit utilization rate for an instance is the aggregate of the credit utilization rate across all of the vCPUs In the following example one vCPU is consuming 45 credits per hour while the other vCPU is consuming 15 credits per hours Therefore the total credit utilization for this instance is 60 credits per hour Note: A t2medium or t2large instance with 2 vCPUs can consume up to 2 CPU 
credits in 1 minute A t2xlarge instance with 4 vCPUs can consume up to 4 CPU credits in 1 minute A t22xlarge instance with 8 vCPUs can consume up to 8 CPU credits in 1 minute An instance’s specified baseline % rate is based on a single vCPU For example a t2medium baseline rate is spe cified as 40% Which can equate to 1 x vCPU @ 40% utilization or 2 x vCPUs @ 20% utilization Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 39 Conclusion Having an in depth understanding of how the T2 Standard instance CPU credit system works will help you decide if this particular Amazon EC2 instance ty pe is the best match for your workload If so this knowledge will assist you with optimizing your workload and obtaining the best cost and performance Contributors Contributors to this document include: • Seamus Murray Amazon Web Services Further Reading For additional information see: • AWS Documentation: Burstable Performance Instances1 Document Revisions Date Description March 8 2021 Reviewed for technical accuracy March 1 2019 Second publication February 4 2019 First publication Notes 1 https://docsawsamazoncom/AWSEC2/latest/UserGuide/burstable performance instanceshtml,General,consultant,Best Practices Understanding_the_ASDs_Cloud_Computing_Security_for_Tenants_in_the_Context_of_AWS,Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS June 2017 © 201 7 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own ind ependent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations co ntractual 
commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Introduction 1
AWS Shared Responsibility approach to Managing Cloud Security 2
What does the shared responsibility model mean for the security of customer content? 3
Understanding ASD Cloud Computing Security for Tenants in the Context of AWS 4
General Risk Mitigations 4
IaaS Risk Mitigations 27
PaaS Risk Mitigations 38
SaaS Risk Mitigations 40
Further Reading 41
Document Revisions 42

Amazon Web Services – Understanding the ASD's Cloud Computing Security for Tenants in the Context of AWS Page 1

Introduction

The Australian Signals Directorate (ASD) publishes the Cloud Computing Security for Tenants paper to provide guidance on how an organisation's cyber security team, cloud architects, and business representatives can work together to perform a risk assessment and use cloud services securely. The paper highlights the responsibility that organisations (referred to as Tenants) share with cloud service providers (CSPs) to design a solution that uses security best practices. This document addresses each risk identified in the Cloud Computing Security for Tenants paper and describes the AWS services and features that you can use to mitigate those risks.

Important: You should understand and acknowledge that the risks discussed in this document cover only part of your responsibilities for securing your cloud solution. For more information about the AWS Shared Responsibility Model, see AWS Shared Responsibility Approach to Managing Cloud Security below.

AWS provides you with a wide range of security functionality to protect your data in accordance with ASD's Information Security Manual (ISM) controls, agency guidelines, and policies. We are continually iterating on the security tools we provide our customers and regularly release enhancements to existing security functionality. AWS has assessed ASD's ISM controls against the following services:

• Amazon Elastic Compute Cloud (Amazon EC2) – Amazon EC2 provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. For more information, go here.
• Amazon Simple Storage Service (S3) – Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. For more information, go here.
• Amazon Virtual Private Cloud (VPC) – Amazon VPC provides the ability for you to provision a logically isolated section of AWS where you can launch AWS resources in a virtual network that you define. For more information, go here.
• Amazon Elastic Block Store (EBS) – Amazon EBS provides highly available, highly reliable, predictable storage volumes that can be attached to a running Amazon EC2 instance and exposed as a device within the instance. For more information, go here.

Important: AWS provides many services in addition to those listed above. If you would like to use a service not listed above, you should evaluate your workloads for suitability. Contact AWS Sales and Business Development for a detailed discussion of security controls and risk acceptance considerations.

Our global whitepapers have recommendations for securing your data that are just as applicable to Australian government workloads on AWS. For a complete list of our security and compliance whitepapers, see the AWS Whitepapers website. Our AWS Compliance website contains more specific discussions of security, AWS Risk and Compliance practices, certifications, and reports. If you need answers to questions that are not covered in the above resources, you can contact your account manager directly.

AWS Shared Responsibility approach to Managing Cloud Security

When you move your IT infrastructure to AWS, you will adopt a model of shared responsibility between you and AWS (as shown in Figure 1). This shared model helps relieve your operational burden because AWS operates, manages, and controls the IT components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate.

As part of the shared model, you are responsible for managing the guest operating system (including updates and security patches to the guest operating system) and associated application software, as well as the configuration of the AWS-provided security group firewall and other security-related features. You will also generally connect to the AWS environment through services that you acquire from third parties (for example, internet service providers). As AWS does not provide these connections, they are part of your area of responsibility. You should consider the security of these connections and the security responsibilities of such third parties in relation to your systems.

Figure 1: The AWS Shared Responsibility Model

What does the shared responsibility model mean for the security of customer content?
When evaluating the security of a cloud solution, it is important for you to understand and distinguish between:

• Security measures that AWS implements and operates – "security of the cloud"
• Security measures that you implement and operate, related to the security of your content and applications that make use of AWS services – "security in the cloud"

While AWS manages the security of the cloud, security in the cloud is your customer responsibility, as you retain control of what security you choose to implement to protect your own content, platform, applications, systems, and networks – no differently than you would for applications in an on-site data centre.

Understanding ASD Cloud Computing Security for Tenants in the Context of AWS

The following sections describe the AWS compliance and AWS offerings that can help you, as the Tenant, mitigate the risks identified in the Cloud Computing Security for Tenants paper.

General Risk Mitigations

1 – General Requirement

Use a cloud service that has been assessed, certified, and accredited against the ISM at the appropriate classification level, addressing mitigations in the document Cloud Computing Security for Cloud Service Providers.

AWS Response

An independent IRAP assessor examined the controls of in-scope AWS services' people, process, and technology to ensure they address the needs of the ISM. AWS has been certified for Unclassified DLM (UD) workloads by the Australian Signals Directorate (ASD) as the Certification authority, and is an inaugural member of the ASD Certified Cloud Services List (CCSL).

2 – General Requirement

Implement security governance involving senior management directing and coordinating security-related activities, including robust change management, as well as having technically skilled staff in defined security roles.

AWS Response

AWS customers are required to maintain adequate governance over the entire IT control environment, regardless of how IT is deployed. This is true for both on-premises and cloud deployments. Leading practices include:

• Develop an understanding of required compliance objectives and requirements (from relevant sources)
• Establish a control environment that meets those objectives and requirements
• Understand the validation required based on the organization's risk tolerance
• Verify the operating effectiveness of their control environment

AWS provides options to apply various types of controls and verification methods. Strong customer compliance and governance might include the following basic approach:

1. Review information available from AWS together with other information to understand as much of the entire IT environment as possible, and then document all compliance requirements.
2. Design and implement control objectives to meet the enterprise compliance requirements.
3. Identify and document controls owned by outside parties.
4. Verify that all control objectives are met and all key controls are designed and operating effectively.

Approaching compliance governance in this manner will help you gain a better understanding of your control environment and will help you clearly delineate the verification activities that you need to perform.

You can run nearly anything on AWS that you would run on premises, including websites, applications, databases, mobile apps, email campaigns, distributed data analysis, media storage, and private networks. AWS provides services that are designed to work together so that you can build complete solutions. An often overlooked benefit of migrating workloads to AWS is the ability to achieve a higher level of security, at scale, by utilizing the many governance-enabling features offered. For the same
reasons that delivering infrastructure in the cloud has benefits over on-premises delivery, cloud-based governance offers a lower cost of entry, easier operations, and improved agility by providing more oversight, security control, and central automation. The Governance at Scale whitepaper describes how you can achieve a high level of governance of your IT resources using AWS.

3 – General Requirement

Implement and annually test an incident response plan covering data spills, electronic discovery, and how to obtain and analyse evidence, e.g. time-synchronised logs, hard disk images, memory snapshots, and metadata.

AWS Response

AWS recognizes the importance of customers implementing and testing an incident response plan. Using AWS, you can requisition compute power, storage, and other services in minutes, and have the flexibility to choose the development plan or programming model that makes the most sense for the problems you're trying to solve. You pay only for what you use, with no up-front expenses or long-term commitments, making AWS a cost-effective way to deliver applications plus conduct incident response tests and simulations in realistic environments. This presentation from the AWS re:Invent 2015 conference provides further details on incident response simulation on AWS.

The AWS platform includes a range of monitoring services that can be leveraged as part of your incident detection and response capability. Some in-scope services include the following:

• CloudWatch
• CloudWatch Logs
• CloudWatch Events
• CloudTrail
• Trusted Advisor
• Elastic Load Balancer logs
• S3 logs
• CloudFront logs
• VPC Flow Logs
• Simple Notification Service
• Lambda

4 – General Requirement

Use ASD-approved cryptographic controls to protect data in transit between the Tenant and the CSP, e.g. application layer TLS or IPsec VPN with approved algorithms, key length, and key management.

AWS Response

AWS
allows customers to use their own encryption mechanisms for nearly all the services, including S3, EBS, and EC2. IPsec tunnels to VPC are also encrypted. Customers may also use third-party encryption technologies. In addition, customers can leverage the AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). All of the AWS APIs are available via TLS-protected endpoints, which provide server authentication. AWS cryptographic processes are reviewed by independent third-party auditors for our continued compliance with SOC, PCI DSS, ISO 27001, and FedRAMP.

For Tenants leveraging the Amazon Elastic Load Balancer in their solutions, it has security features relevant to this mitigation. Elastic Load Balancing has all the advantages of an on-premises load balancer, plus several security benefits:

• Takes over the encryption and decryption work from the Amazon EC2 instances and manages it centrally on the load balancer
• Offers clients a single point of contact, and can also serve as the first line of defense against attacks on your network
• When used in an Amazon VPC, supports creation and management of security groups associated with your Elastic Load Balancing to provide additional networking and security options
• Supports end-to-end traffic encryption using TLS (previously SSL) on those networks that use secure HTTP (HTTPS) connections

When TLS is used, the TLS server certificate used to terminate client connections can be managed centrally on the load balancer, rather than on every individual instance. HTTPS/TLS uses a long-term secret key to generate a short-term session key to be used between the server and the browser to create the ciphered (encrypted) message. Amazon Elastic Load Balancing configures your load balancer with a pre-defined cipher set that is used for TLS negotiation when a connection is established between a client and your load balancer. The pre-defined cipher set provides compatibility with a broad range of clients and uses strong cryptographic algorithms. However, some customers may have requirements for allowing only specific ciphers and protocols (such as PCI, SOX, etc.) from clients to ensure that standards are met. In these cases, Amazon Elastic Load Balancing provides options for selecting different configurations for TLS protocols and ciphers. You can choose to enable or disable the ciphers depending on your specific requirements.

To help ensure the use of newer and stronger cipher suites when establishing a secure connection, you can configure the load balancer to have the final say in the cipher suite selection during the client-server negotiation. When the Server Order Preference option is selected, the load balancer will select a cipher suite based on the server's prioritization of cipher suites rather than the client's. This gives you more control over the level of security that clients use to connect to your load balancer.

For even greater communication privacy, Amazon Elastic Load Balancer allows the use of Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored anywhere. This prevents the decoding of captured data, even if the secret long-term key itself is compromised.

5 – General Requirement

Use ASD-approved cryptographic controls to protect data at rest on storage media in transit via post/courier between the Tenant and the CSP when transferring data as part of on-boarding or off-boarding.

AWS Response

Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers, including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, and secure, and can be as little as one-fifth the cost of high-speed Internet.

Snowball encrypts all data with AES 256-bit encryption. You manage your encryption keys by using the AWS Key Management Service (AWS KMS); your keys are never sent to or stored on the appliance. Further details on AWS KMS are available in this paper. In addition to using a tamper-resistant enclosure, Snowball uses an industry-standard Trusted Platform Module (TPM) with a dedicated processor designed to detect any unauthorized modifications to the hardware, firmware, or software. AWS inspects every appliance for any signs of tampering and to verify that no changes were detected by the TPM. When the data transfer job has been processed and verified, AWS performs a software erasure of the Snowball appliance that follows the National Institute of Standards and Technology (NIST) guidelines for media sanitization.

Snowball uses an innovative E Ink shipping label designed to ensure the appliance is automatically sent to the correct AWS facility, which also helps in tracking. When you have completed your data transfer job, you can track it by using Amazon SNS text messages and the console.

6 – General Requirement

Use a corporately approved and secured computer, multi-factor authentication, a strong passphrase, least access privileges, and encrypted network traffic to administer (and, if appropriate, access) the cloud service.

AWS Response

All of the AWS APIs are available via TLS-protected endpoints that provide server authentication. For more information on our region endpoints, go here. AWS requires that all API requests be signed using a cryptographic hash function. If you use any of the AWS SDKs to generate requests, the digital signature calculation is done for you; otherwise, you can have your application calculate it and include it in your REST or Query requests by following the directions in our documentation.
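As a rough sketch of what that calculation involves, the following reproduces the Signature Version 4 key-derivation chain (successive HMAC-SHA256 steps over the date, region, service, and the literal "aws4_request"). It uses the example secret key shown later in this document; the date, region, service, and string-to-sign values are placeholders, and building the canonical request and string to sign (which the AWS documentation covers) is omitted here.

```python
import hashlib
import hmac


def _hmac_sha256(key: bytes, msg: str) -> bytes:
    """One step of the SigV4 key-derivation chain."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def sigv4_signature(secret_key: str, date: str, region: str,
                    service: str, string_to_sign: str) -> str:
    """Derive the SigV4 signing key, then sign a precomputed string to sign."""
    k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    k_signing = _hmac_sha256(k_service, "aws4_request")
    return hmac.new(k_signing, string_to_sign.encode("utf-8"),
                    hashlib.sha256).hexdigest()


# Placeholder inputs; the secret key is the documentation's example key.
sig = sigv4_signature(
    "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    "20150830", "us-east-1", "iam",
    "example-string-to-sign",
)
print(sig)  # 64-character hexadecimal signature
```

In practice the SDKs perform these steps for you; the sketch only illustrates why a leaked secret key is sufficient to sign arbitrary requests.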
Not only does the signing process help protect message integrity by preventing tampering with the request while it is in transit, it also helps protect against potential replay attacks. A request must reach AWS within 15 minutes of the time stamp in the request; otherwise, AWS denies the request. The most recent version of the digital signature calculation process is Signature Version 4, which calculates the signature using the HMAC-SHA256 protocol.

AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. To get started using IAM, go to the AWS Management Console and get started with these IAM Best Practices.

You can set a password policy on your AWS account to specify complexity requirements and mandatory rotation periods for your IAM users' passwords. You can use a password policy to do these things:

• Set a minimum password length
• Require specific character types, including uppercase letters, lowercase letters, numbers, and non-alphanumeric characters (be sure to remind your users that passwords are case sensitive)
• Allow all IAM users to change their own passwords (Note: when you allow your IAM users to change their own passwords, IAM automatically allows them to view the password policy; IAM users need permission to view the account's password policy in order to create a password that complies with the policy)
• Require IAM users to change their password after a specified period of time (enable password expiration)
• Prevent IAM users from reusing previous passwords
• Force IAM users to contact an account administrator when the user has allowed his or her password to expire

AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when a user signs in to an AWS website they will be prompted for their user name and password (the first factor — what they know) as well as for an authentication code from their AWS MFA device (the second factor — what they have). Taken together, these multiple factors provide increased security for your AWS account settings and resources. You can enable MFA for your AWS account and for individual IAM users you have created under your account. MFA can also be used to control access to AWS service APIs. After you've obtained a supported hardware or virtual MFA device, AWS does not charge any additional fees for using MFA.

If you already manage user identities and MFA outside of AWS, you can use IAM identity providers instead of creating IAM users in your AWS account. With an identity provider (IdP), you can manage your user identities outside of AWS and give these external user identities permissions to use AWS resources in your account. This is useful if your organization already has its own identity system, such as a corporate user directory. It is also useful if you are creating a mobile app or web application that requires access to AWS resources. To use an IdP, you create an IAM identity provider entity to establish a trust relationship between your AWS account and the IdP. IAM supports IdPs that are compatible with OpenID Connect (OIDC) or SAML 2.0 (Security Assertion Markup Language 2.0).

The following services are relevant to enforcing the use of corporate-controlled computers:

• A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign the instance to up to five security groups. Security groups act at the instance level, not the subnet level; therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. For each security group, you add rules that
control the inbound traffic to instances, and a separate set of rules that control the outbound traffic. For example, you could restrict access to SSH and RDP ports to only your approved corporate IP ranges.
• A network access control list (ACL) is a recommended layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC. For more information about the differences between security groups and network ACLs, see Comparison of Security Groups and Network ACLs.
• Permissions let you specify access to AWS resources. Permissions are granted to IAM entities (users, groups, and roles), and by default these entities start with no permissions. In other words, IAM entities can do nothing in AWS until you grant them your desired permissions. To give entities permissions, you can attach a policy that specifies the type of access, the actions that can be performed, and the resources on which the actions can be performed. In addition, you can specify any conditions that must be set for access to be allowed or denied. To assign permissions to a user, group, role, or resource, you create a policy that lets you specify:
  o Actions – Which AWS actions you allow. For example, you might allow a user to call the Amazon S3 ListBucket action. Any actions that you don't expressly allow are denied.
  o Resources – Which AWS resources you allow the action on. For example, what Amazon S3 buckets will you allow the user to perform the ListBucket action on? Users cannot access any resources that you do not explicitly grant permissions to.
  o Effect – Whether to allow or deny access. Because access is denied by default, you typically write policies where the effect is to allow.
  o Conditions – Which conditions must be present for the policy to take effect. For example, you might allow access only to the specific S3 buckets if the user is connecting from a specific IP range or has used multi-factor authentication at login. For an example of this policy, go here.

7 – General Requirement

Protect authentication credentials, e.g. avoid exposing Application Programming Interface (API) authentication keys placed on insecure computers or in the source code of software that is accessible to unauthorised third parties.

AWS Response

When you access AWS programmatically, you use an access key to verify your identity and the identity of your applications. An access key consists of an access key ID (something like AKIAIOSFODNN7EXAMPLE) and a secret access key (something like wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY). Anyone who has your access key has the same level of access to your AWS resources that you do. Consequently, AWS goes to significant lengths to protect your access keys, and, in keeping with our shared responsibility model, you should as well. The following steps can help you protect access keys. For general background, see AWS Security Credentials.

Note: Your organization may have different security requirements and policies than those described in this topic. The suggestions provided here are intended to be general guidelines.

• Remove (or Don't Generate) a Root Account Access Key. One of the best ways to protect your account is to not have an access key for your root account. Unless you must have a root access key (which is very rare), it is best not to generate one. Instead, the recommended best practice is to create one or more AWS Identity and Access Management (IAM) users, give them the necessary permissions, and use IAM users for everyday interaction with AWS.
• Use Temporary Security Credentials (IAM Roles) Instead of Long-Term Access Keys. In many scenarios, you don't need a long-term access key that never expires (as you have with an IAM user). Instead, you can create IAM
Identity and Access Management (IAM) users give them the necessary permiss ions and use IAM users for everyday interaction with AWS • Use Temporary Security Credentials (IAM Roles) Instead of Long Term Access Keys In ma ny scenarios you don't need a long term access key that never expires (as you have with an IAM user) Instead you can create IAM Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 14 roles and generate temporary security credentials Temporary security credentials consist of an access key ID and a secret access key but they also include a security token that indicates when the credentials expire • Manage IAM User Access Keys Properly If you do need to create access keys for programmatic access to AWS create an IAM user and grant that user only the permissions he or she needs Then generate an access key for that user For details see Managing Access Keys for IAM Users in IAM User Guide Observe these precautions when using access keys: o Don't embed access keys directly into code o Use different access keys for different applications o Rotate access keys periodically o Remove unused access keys o Configure multifactor authentication for your most sensitive operations • More Resources You can also leverage AWS Trusted Advisor checks as part of your Security monitoring AWS Trusted Advisor provides best practices in four categories: • Cost Optimization • Security • Fault Tolerance • Performance Improvement The complete list of over 50 Trusted Advisor checks available with business and enterprise support plans can be used to monitor and improve the deployment of Amazon EC2 Elastic Load Balancing Amazon EBS Amazon S3 Auto Scaling AWS Identity and Access Management Amazon RDS Amazon Redshift Amaz on Route 53 CloudFront and CloudTrail You can view the overall status of your AWS resources and savings estimations on the Trusted Advisor dashboard Amazon Web Services – Understanding the ASD’s Cloud 
One of the Trusted Advisor checks is for exposed access keys. This checks popular code repositories for access keys that have been exposed to the public, and for irregular Amazon Elastic Compute Cloud (Amazon EC2) usage that could be the result of a compromised access key. An access key consists of an access key ID and the corresponding secret access key. Exposed access keys pose a security risk to your account and other users, could lead to excessive charges from unauthorized activity or abuse, and violate the AWS Customer Agreement. If your access key is exposed, take immediate action to secure your account. To additionally protect your account from excessive charges, AWS temporarily limits your ability to create some AWS resources. This does not make your account secure; it only partially limits the unauthorized usage for which you could be charged.

Note: This check does not guarantee the identification of exposed access keys or compromised EC2 instances. You are ultimately responsible for the safety and security of your access keys and AWS resources.

8 – General Requirement

Obtain and promptly analyse detailed time-synchronised logs and real-time alerts for the Tenant's cloud service accounts used to access, and especially to administer, the cloud service.

AWS Response

AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. With CloudTrail, you can get a history of AWS API calls for your account, including API calls made via the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS CloudFormation). The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.
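As a sketch of the kind of analysis this call history enables, the following filters CloudTrail-style records for API calls made from outside approved corporate IP ranges. The field names (Records, eventName, userIdentity, sourceIPAddress) follow the CloudTrail record format, but the sample records, the approved range, and the naive string-prefix IP match are illustrative only.

```python
import json
from collections import Counter

# Illustrative CloudTrail-style records; real log files contain a
# top-level {"Records": [...]} array with these (and many more) fields.
log_file = json.dumps({"Records": [
    {"eventName": "ConsoleLogin", "userIdentity": {"userName": "alice"},
     "sourceIPAddress": "203.0.113.10"},
    {"eventName": "DeleteBucket", "userIdentity": {"userName": "bob"},
     "sourceIPAddress": "198.51.100.7"},
    {"eventName": "DeleteBucket", "userIdentity": {"userName": "bob"},
     "sourceIPAddress": "198.51.100.7"},
]})


def calls_outside_ranges(raw_log, approved_prefixes):
    """Count API calls per (caller, action) whose source IP is not approved.

    Uses a naive string-prefix match for brevity; a real check should
    parse CIDR ranges, e.g. with the standard ipaddress module.
    """
    flagged = Counter()
    for rec in json.loads(raw_log)["Records"]:
        ip = rec.get("sourceIPAddress", "")
        if not ip.startswith(approved_prefixes):
            caller = rec.get("userIdentity", {}).get("userName", "unknown")
            flagged[(caller, rec.get("eventName"))] += 1
    return flagged


# Flag everything not originating from the (hypothetical) corporate range.
print(calls_outside_ranges(log_file, approved_prefixes=("203.0.113.",)))
```

The same pattern extends naturally to the other recorded fields, for example flagging console logins without MFA or calls to sensitive administrative APIs.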
To maintain the integrity of your log data, it is important to carefully manage access around the generation and storage of your log files. The ability to view or modify your log data should be restricted to authorized users. A common log-related challenge for on-premises environments is the ability to demonstrate to regulators that access to log data is restricted to authorized users. This control can be time consuming and complicated to demonstrate effectively, because most on-premises environments do not have a single logging solution or consistent logging security across all systems. With AWS CloudTrail, access to Amazon S3 log files is centrally controlled in AWS, which allows you to easily control access to your log files and help demonstrate the integrity and confidentiality of your log data. This paper provides an overview of common compliance requirements related to logging and details how AWS CloudTrail features can help satisfy these requirements.
9 – General Requirement
Obtain and promptly analyse detailed time-synchronised logs and real-time alerts generated by the cloud service used by the tenant, e.g. operating system, web server, and application logs.
AWS Response
You can execute continuous monitoring of logical controls on your own systems. You assume the responsibility and management of the guest operating system (including updates and security patches) and other associated application software, as well as the configuration of the AWS-provided security group firewall. In addition to the monitoring services that AWS provides, you can leverage most OS-level and application monitoring tools that you have used in traditional on-premises deployments. Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically
react to changes in your AWS resources. Amazon CloudWatch can monitor AWS resources as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. You can use these insights to react and keep your application running smoothly.
CloudWatch Logs lets you monitor and troubleshoot your systems and applications using your existing system, application, and custom log files. With CloudWatch Logs you can monitor your logs, in near real time, for specific phrases, values, or patterns (metrics). For example, you could set an alarm on the number of errors that occur in your system logs, or view graphs of web request latencies from your application logs. You can view the original log data to see the source of the problem if needed. Log data can be stored and accessed for as long as you need using highly durable, low-cost storage, so you don't have to worry about filling up hard drives.
You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, or other sources. You can then retrieve the associated log data from CloudWatch Logs using the Amazon CloudWatch console, the CloudWatch Logs commands in the AWS CLI, the CloudWatch Logs API, or the CloudWatch Logs SDK.
You can use CloudWatch Logs to:
• Monitor logs from Amazon EC2 instances in real time
• Monitor AWS CloudTrail logged events
• Archive log data
10 – General Requirement
Avoid providing the CSP with account credentials (or the ability to authorise access) to sensitive systems outside of the CSP's cloud, such as systems on the tenant's corporate network.
AWS Response
AWS does not request that you disclose your customer passwords in order to provide the services or support.
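The metric-filter idea behind CloudWatch Logs — count matches of a pattern in log lines and alarm when the count crosses a threshold — can be sketched in a few lines. This toy example only mirrors the logic client-side; the service evaluates filters and alarms for you, and the log lines and threshold here are invented for illustration:

```python
# Sample log lines, standing in for a stream of application logs.
log_lines = [
    "2017-03-01 12:00:01 INFO request handled in 41 ms",
    "2017-03-01 12:00:02 ERROR upstream timeout",
    "2017-03-01 12:00:03 ERROR upstream timeout",
    "2017-03-01 12:00:04 INFO request handled in 39 ms",
]

def count_matches(lines, pattern):
    """Count log lines containing the pattern (the 'metric' a filter would emit)."""
    return sum(1 for line in lines if pattern in line)

error_count = count_matches(log_lines, "ERROR")
alarm = error_count >= 2   # the threshold you would configure on the alarm
```

In the real service you would attach such a filter to a log group and let the resulting metric drive an alarm, rather than scanning lines yourself.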
AWS provides infrastructure, and you manage everything else, including the operating system, the network configuration, and the installed applications. You control your own guest operating systems, software, and applications.
When you launch an instance, you should specify the name of the key pair you plan to use to connect to the instance. If you don't specify the name of an existing key pair when you launch an instance, you won't be able to connect to the instance. When you connect to the instance, you must specify the private key that corresponds to the key pair you specified when you launched the instance. Amazon EC2 doesn't keep a copy of your private key; therefore, if you lose a private key, there is no way to recover it.
11 – General Requirement
Use multi-tenancy mechanisms provided by the CSP, e.g. to separate the tenant's web application and network traffic from other tenants, use the CSP's hypervisor virtualisation instead of web server software virtual hosting.
AWS Response
Amazon EC2's simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity both up and down as your computing requirements change. The AWS environment is a virtualized, multi-tenant environment. Customers can also select Dedicated Amazon EC2 instances, which are single-tenant. AWS has implemented security management processes, PCI controls, and other security controls designed to isolate each customer from other customers. AWS systems are designed to prevent you from accessing physical hosts or instances not assigned to you by filtering through the virtualization software. This architecture has been validated
by an independent PCI Qualified Security Assessor (QSA) and was found to be in compliance with all requirements of PCI DSS version 3.1, published in April 2015.
Note: AWS also has single-tenancy options. Dedicated Instances are Amazon EC2 instances launched within your Amazon Virtual Private Cloud (Amazon VPC) that run on hardware dedicated to a single customer. Dedicated Instances let you take full advantage of the benefits of Amazon VPC and the AWS cloud while isolating your Amazon EC2 compute instances at the hardware level.
12 – General Requirement
Perform up-to-date encrypted backups, in a format avoiding CSP lock-in, stored offline at the tenant's premises or at a second CSP, requiring multi-factor authentication to modify/delete data. Annually test the recovery process.
AWS Response
You retain control and ownership of your content, and it is your responsibility to manage your data backup plans. You can export your EC2 instance image (an EC2 instance image in AWS is referred to as an Amazon Machine Image, or AMI) and use it on premises or at another provider (subject to software licensing restrictions). For more information, see Introduction to AWS Security Processes.
AWS supports several methods for loading and retrieving data, including: the public Internet; a direct network connection with AWS Direct Connect; the AWS Import/Export service, where AWS will import data into S3; and, for backups of application data, the AWS Storage Gateway, which helps you back up your data to AWS. AWS allows you to move data as needed on and off AWS storage. The AWS Import/Export service for S3 accelerates moving large amounts of data into and out of AWS using portable storage devices for transport. AWS allows you to perform your own backups to tapes using your own tape backup service provider; however, tape backup is not a service provided by AWS. The Amazon S3 service is designed to drive the likelihood of
data loss to near zero percent, and the durability equivalent of multi-site copies of data objects is achieved through data storage redundancy. Amazon S3 provides a highly durable storage infrastructure. Objects are redundantly stored on multiple devices across multiple facilities in an Amazon S3 Region. Once stored, Amazon S3 maintains the durability of objects by quickly detecting and repairing any lost redundancy. Amazon S3 also regularly verifies the integrity of data stored using checksums. If corruption is detected, it is repaired using redundant data. Data stored in S3 is designed to provide 99.999999999% durability and 99.99% availability of objects over a given year.
AWS allows you to use your own encryption mechanisms to encrypt backups for nearly all the services, including S3, EBS, and EC2. IPsec tunnels to VPC are also encrypted. Amazon S3 also offers you Server Side Encryption as an option. You can also use third-party encryption technologies.
The AWS CloudHSM service allows you to protect your encryption keys within HSMs designed and validated to government standards for secure key management. You can securely generate, store, and manage the cryptographic keys used for data encryption such that they are accessible only by you. AWS CloudHSM helps you comply with strict key management requirements without sacrificing application performance.
AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and uses Hardware Security Modules (HSMs) to protect the security of your keys. AWS Key Management Service is integrated with several other AWS services to help you protect the data you store with these services. AWS Key Management Service is also integrated with AWS CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs.
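The durability figure quoted above can be made concrete with a little arithmetic. Assuming the design target of 99.999999999% (eleven nines) durability of objects over a year, the expected annual object loss for a store of a given size works out as follows; the object count is an arbitrary example:

```python
# Expected annual object loss implied by the S3 durability design target.
durability = 0.99999999999                 # eleven nines, per object per year
annual_loss_probability = 1 - durability   # about 1e-11

# For an illustrative store of ten million objects:
objects = 10**7
expected_losses = objects * annual_loss_probability
# About 1e-4 objects per year, i.e. roughly one object per 10,000 years.
```

This is a design target rather than a guarantee, which is why the requirement above still calls for independent, offline backups.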
13 – General Requirement
Contractually retain legal ownership of tenant data. Perform a due diligence review of the CSP's contract and financial viability as part of assessing privacy and legal risks.
AWS Response
You retain control and ownership of your data. AWS only uses your content to maintain or provide the AWS services that you have selected, or to comply with the law or a binding legal government request. AWS treats all customer content the same and has no insight as to what type of content you choose to store in AWS. AWS simply makes available the compute, storage, database, and networking services that you select. See https://aws.amazon.com/agreement/ for further information. AWS errs on the side of protecting your privacy and is vigilant in determining which law enforcement requests we must comply with. AWS does not hesitate to challenge orders from law enforcement if we think the orders lack a solid basis. Further legal information is available at https://aws.amazon.com/legal/
14 – General Requirement
Implement adequately high-bandwidth, low-latency, reliable network connectivity between the tenant (including the tenant's remote users) and the cloud service to meet the tenant's availability requirements.
AWS Response
You can choose your network path to AWS facilities, including multiple VPN endpoints in each AWS Region. In addition, AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your data center, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. Refer to the AWS Overview of Security Processes whitepaper for additional details, available at http://aws.amazon.com/security. AWS Direct
Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry-standard 802.1Q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources, such as objects stored in Amazon S3 using public IP address space, and private resources, such as Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC) using private IP space, while maintaining network separation between the public and private environments. Virtual interfaces can be reconfigured at any time to meet your changing needs.
Network latency over the Internet can vary, given that the Internet is constantly changing how data gets from point A to point B. With AWS Direct Connect, you choose the data that utilizes the dedicated connection and how that data is routed, which can provide a more consistent network experience than Internet-based connections. AWS Direct Connect makes it easy to scale your connection to meet your needs: it provides 1 Gbps and 10 Gbps connections, and you can easily provision multiple connections if you need more capacity. You can also use AWS Direct Connect instead of establishing a VPN connection over the Internet to your Amazon VPC, avoiding the need to utilize VPN hardware that frequently can't support data transfer rates above 4 Gbps.
15 – General Requirement
Use a cloud service that meets the tenant's availability requirements. Assess the Service Level Agreement penalties and the number, severity, recency, and transparency of the CSP's scheduled and unscheduled outages.
AWS Response
AWS commits to high levels of availability in its service level agreements (SLAs). For example, Amazon EC2 commits to an annual uptime percentage of at least 99.95% during the service year. Amazon S3 commits to monthly uptime
percentage of at least 99.9%. Service credits are provided in the case these availability metrics are not met. See https://aws.amazon.com/legal/service-level-agreements/
For many services, AWS can perform regular maintenance and system patching without rendering the service unavailable or requiring reboots, so AWS' own maintenance and system patching generally do not impact you. You control maintenance of the instances themselves. AWS publishes our most up-to-the-minute information on service availability on the Service Health Dashboard, and keeps a running log of all service interruptions for the past year. Refer to http://status.aws.amazon.com
You should architect your AWS usage to take advantage of multiple Regions and Availability Zones. Distributing applications across multiple Availability Zones provides the ability to remain resilient in the face of most failure modes, including natural disasters or system failures.
AWS utilizes automated monitoring systems to provide a high level of service performance and availability. Proactive monitoring is available through a variety of online tools, both for internal and external use. Systems within AWS are extensively instrumented to monitor key operational metrics. Alarms are configured to notify operations and management personnel when early warning thresholds are crossed on key operational metrics. An on-call schedule is used such that personnel are always available to respond to operational issues. This includes a pager system, so alarms are quickly and reliably communicated to operations personnel. AWS network management is regularly reviewed by independent third-party auditors as a part of AWS's ongoing compliance with SOC, PCI DSS, ISO 27001, and FedRAMP.
16 – General Requirement
Develop and annually test a disaster recovery and business continuity plan to meet the tenant's availability requirements,
e.g. where feasible for simple architectures, temporarily use cloud services from an alternative CSP.
AWS Response
You retain control and ownership of your data. AWS provides you with the flexibility to place instances and store data within multiple geographic regions, as well as across multiple Availability Zones within each region. Each Availability Zone is designed as an independent failure zone. In case of failure, automated processes move your data traffic away from the affected area. The AWS SOC reports provide further details; ISO 27001 standard Annex A, domain 15 provides additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification.
Using AWS, you can enable faster disaster recovery of your critical IT systems without incurring the infrastructure expense of a second physical site. The AWS cloud supports many popular disaster recovery (DR) architectures, from "pilot light" environments that are ready to scale up at a moment's notice to "hot standby" environments that enable rapid failover. For more information about disaster recovery on AWS, see the Disaster Recovery website and Disaster Recovery whitepaper.
AWS provides you with the capability to implement a robust continuity plan, including the utilization of frequent server instance backups, data redundancy replication, and multi-region/Availability Zone deployment architectures. You can place instances and store data within multiple geographic regions, as well as across multiple Availability Zones within each region. Each Availability Zone is designed as an independent failure zone. In case of failure, automated processes move customer data traffic away from the affected area. AWS data centers incorporate physical protection to mitigate against environmental risks. AWS' physical protection against environmental risks has been validated by an
independent auditor and has been certified as being in alignment with ISO 27002 best practices. Refer to ISO 27001 standard Annex A, domain 9, and the AWS SOC 1 Type II report for additional information.
You retain control and ownership of your content, and it is your responsibility to manage your data backup plans. You can move data as needed on and off AWS storage. The AWS Import/Export service for S3 accelerates moving large amounts of data into and out of AWS using portable storage devices for transport. VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on-premises environment. This offering allows you to leverage your existing investments in the virtual machines that you have built to meet your IT security, configuration management, and compliance requirements by bringing those virtual machines into Amazon EC2 as ready-to-use instances. You can also export imported instances back to your on-premises virtualization infrastructure, allowing you to deploy workloads across your IT infrastructure. VM Import/Export is available at no additional charge beyond standard usage charges for Amazon EC2 and Amazon S3. See https://aws.amazon.com/ec2/vm-import/ for further information.
17 – General Requirement
Manage the cost of a genuine spike in demand or denial of service via contractual spending limits, denial of service mitigation services, and judicious use of the CSP's infrastructure capacity, e.g. limits on automated scaling.
AWS Response
To help guarantee availability of AWS resources, as well as minimize billing risk for new customers, AWS maintains service limits for each account. Some service limits are raised automatically as you build a history with AWS, though most AWS services require that you request limit increases manually. For a list of the default limits for each service, as well
as how to request a service limit increase, see AWS Service Limits.
Note: Most limits are specific to a particular AWS region, so if your use case requires higher limits in multiple regions, file separate limit increase requests for each region you plan to use.
To avoid exceeding service limits while building or scaling your application, you can use the AWS Trusted Advisor Service Limits check to monitor some limits. For a list of limits that are included in the Trusted Advisor check, see Service Limits Check Questions. EC2 has a service-specific limits dashboard that can help you manage your instance, EBS, and Elastic IP limits. For more information about EC2's Limits dashboard, see Amazon EC2 Service Limits. For more information about service limits, go here.
You can also monitor your AWS costs by using CloudWatch. With CloudWatch, you can create billing alerts that notify you when your usage of your services exceeds thresholds that you define. You specify these threshold amounts when you create the billing alerts. When your usage exceeds these amounts, AWS sends you an email notification. You can also sign up to receive notifications when AWS prices change. For more information, go here.
Cost Explorer is a free tool that you can use to view graphs of your costs (also known as spend data) for up to the last 13 months, and forecast how much you are likely to spend for the next three months. You can use Cost Explorer to see patterns in how much you spend on AWS resources over time, identify areas that need further inquiry, and see trends that you can use to understand your costs. You can also specify time ranges for the data you want to see, and you can view time data by day or by month. For example, you can use Cost Explorer to see which service you use the most, which Availability Zone (AZ) most of your traffic is in, which linked account uses AWS the most, and more. For more information, go here.
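The billing-alert mechanism described above amounts to comparing an estimated month-to-date charge against thresholds you define. The sketch below models that comparison client-side with invented dollar figures; CloudWatch evaluates the real alarms for you:

```python
# Toy model of billing alerts: which configured thresholds (in dollars)
# has the estimated month-to-date charge already crossed?
def crossed_thresholds(estimated_charges, thresholds):
    """Return the alert thresholds the estimate has exceeded, ascending."""
    return sorted(t for t in thresholds if estimated_charges > t)

# Illustrative numbers only: three alerts configured, two have fired.
alerts = crossed_thresholds(estimated_charges=742.50,
                            thresholds=[100, 500, 1000])
```

Pairing such alerts with a budget (described next) gives both an early warning and a month-end spending target to track against.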
Within the Cost Explorer tool, a budget is a way to plan your costs (also known as spend data) and to track how close your costs are to exceeding your budgeted amount. Budgets use data from Cost Explorer to provide you with a quick way to see your estimated charges from AWS and to see how much your predicted usage will accrue in charges by the end of the month. Budgets also compare the estimated charges to the amount that you want to spend, and let you see how much of your budget has been spent. Budgets are updated every 24 hours. Budgets track your unblended costs and subscriptions, but do not track refunds. AWS does not use your forecasts to create a budget for you. You can create budgets for different types of cost. For example, you can create a budget to see how much you are spending on a particular service, or how often you call a particular API operation. Budgets use the same data filters as Cost Explorer. For more information, go here.
Auto Scaling helps you maintain application availability and allows you to scale your Amazon EC2 capacity up or down automatically according to conditions you define. You can use Auto Scaling to help ensure that you are running your desired number of Amazon EC2 instances. Auto Scaling can also automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance, and decrease capacity during lulls to reduce costs. Auto Scaling is well suited both to applications that have stable demand patterns and to those that experience hourly, daily, or weekly variability in usage. You can specify the maximum number of instances in each Auto Scaling group, and Auto Scaling ensures that your group never goes above this size.
Distributed denial of service (DDoS) attacks are sometimes used by malicious actors in an attempt to flood a network, system, or application with more traffic, connections, or requests than
it can handle. Not surprisingly, customers often ask us how we can help them protect their applications against these types of attacks. To help you optimize for availability, AWS provides best practices that allow you to use the scale of AWS to build a DDoS-resilient architecture.
IaaS Risk Mitigations
1 – IaaS Requirement
Securely configure, harden, and maintain VMs with host-based security controls, e.g. firewall, intrusion prevention system, logging, antivirus software, and prompt patching of all software that the tenant is responsible for.
AWS Response
You retain control of your own guest operating systems, software, and applications, and are responsible for performing vulnerability scans and patching of your own systems. Regularly patch, update, and secure the operating system and applications on your instance. For more information about updating Amazon Linux, see Managing Software on Your Linux Instance. For more information about updating your Windows instance, see Updating Your Windows Instance in the Amazon EC2 User Guide for Microsoft Windows Instances.
Amazon EC2 provides a complete firewall solution; this mandatory inbound firewall is configured in a default deny-all mode, and Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic. The traffic may be restricted by protocol and by service port, as well as by source IP address (individual IP or Classless Inter-Domain Routing (CIDR) block). AWS further encourages you to apply additional per-instance filters with host-based firewalls, such as iptables or the Windows Firewall, and VPNs. This can restrict both inbound and outbound traffic.
AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block
to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. Also, AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules. With AWS WAF you pay only for what you use. AWS WAF pricing is based on how many rules you deploy and how many web requests your web application receives. There are no upfront commitments. This paper provides AWS best practices for DDoS resiliency: https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June2015.pdf
AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. Elastic Beanstalk regularly releases updates for the Linux and Windows Server based platforms that run applications on an Elastic Beanstalk environment. A platform consists of a software component (an AMI running a specific version of an OS, tools, and Elastic Beanstalk-specific scripts) and a configuration component (the default settings applied to environments created with the platform). New platform versions provide updates to existing software components and support for new features and configuration options. With managed platform updates, you can configure your environment to automatically upgrade to the latest version of a platform during a scheduled maintenance window. Your application remains in service during the update process, with no reduction in capacity. You can configure your environment to automatically apply patch version updates, or both patch and minor version updates. Managed platform updates don't support major version updates, which may introduce changes that
are backwards incompatible.
2 – IaaS Requirement
Use a corporately approved and secured computer to administer VMs, requiring access from the tenant's IP address, encrypted traffic, and an SSH/RDP PKI key pair protected with a strong passphrase.
AWS Response
Amazon VPC offers a wide range of tools that give you more control over your AWS infrastructure. Within a VPC, you can define your own network topology by defining subnets and routing tables, and you can restrict access at the subnet level with network ACLs and at the resource level with VPC security groups. You can isolate your resources from the Internet and connect them to your own data center through a VPN. You can assign Elastic IP addresses to some instances and connect them to the public Internet through an Internet gateway, while keeping the rest of your infrastructure in private subnets. VPC makes it easier to protect your AWS resources while you keep the benefits of AWS with regard to flexibility, scalability, elasticity, performance, availability, and the pay-as-you-use pricing model.
You can add or remove rules for a security group (also referred to as authorizing or revoking inbound or outbound access). A rule applies either to inbound traffic (ingress) or outbound traffic (egress). You can grant access to a specific CIDR range, or to another security group in your VPC or in a peer VPC (requires a VPC peering connection). For example, by leveraging part of your organisation's public IP address range, you could limit inbound SSH and RDP access to be allowed only from your network (via the VPC Internet Gateway). Similarly, if a VPN or Direct Connect connection to the VPC is in place, you could limit SSH and RDP access to only a section of your organisation's private IP range.
You can connect your VPC to remote networks by using a VPN connection. The following are some of the connectivity options available to you:
• AWS Hardware VPN (VPC VPG)
• AWS Direct Connect
• Software VPN
Amazon EC2 uses public-key cryptography to encrypt and decrypt login information. Public-key cryptography uses a public key to encrypt a piece of data, such as a password; then the recipient uses the private key to decrypt the data. The public and private keys are known as a key pair. To log in to your instance, you must create a key pair, specify the name of the key pair when you launch the instance, and provide the private key when you connect to the instance. Linux instances have no password, and you use a key pair to log in using SSH. With Windows instances, you use a key pair to obtain the administrator password and then log in using RDP.
You can use Amazon EC2 to create your key pair; this will create 2048-bit SSH-2 RSA keys. For more information, see Creating Your Key Pair Using Amazon EC2. Alternatively, you could use a third-party tool and then import the public key to Amazon EC2. For more information, see Importing Your Own Key Pair to Amazon EC2. Amazon EC2 stores the public key only, and you store the private key. Anyone who possesses your private key can decrypt your login information, so it's important that you store your private keys in a secure place.
Amazon EC2 accepts the following formats:
• OpenSSH public key format (the format in ~/.ssh/authorized_keys)
• Base64 encoded DER format
• SSH public key file format as specified in RFC 4716
Amazon EC2 does not accept DSA keys. Make sure your key generator is set up to create RSA keys. Supported lengths: 1024, 2048, and 4096.
3 – IaaS Requirement
Only use VM template images provided by trusted sources to help avoid the accidental or deliberate presence of malware and backdoor user accounts. Protect the tenant's VM template images from unauthorised changes.
AWS Response
An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need. You can customize the instance that you launch from a public AMI and then save that configuration as a custom AMI for your own use. Instances that you launch from your AMI use all the customizations that you've made. You can also use custom AMI instances with AWS CloudFormation. AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.
After you create an AMI, you can keep it private so that only you can use it, or you can share it with a specified list of AWS accounts. You can also make your custom AMI public so that the community can use it. Building a safe, secure, usable AMI for public consumption is a fairly straightforward process if you follow a few simple guidelines. For information about how to create and use shared AMIs, see Shared AMIs.
You also control the updating and patching of your guest OS, including security updates. Amazon-provided Windows and Linux-based AMIs are updated regularly with the latest patches, so if you do not need to preserve data or customizations on your running Amazon AMI instances, you can simply relaunch new instances with the latest updated AMI. In addition, updates are provided for the Amazon Linux AMI via the Amazon Linux yum repositories.
VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on-premises environment. This offering allows you to leverage your existing investments in the virtual machines that you have built to meet your IT security, configuration
management and compliance requirements by bringing those virtual machines into Amazon EC2 as ready-to-use instances. You can also export imported instances back to your on-premises virtualization infrastructure, allowing you to deploy workloads across your IT infrastructure. VM Import/Export is available at no additional charge beyond standard usage charges for Amazon EC2 and Amazon S3. The Center for Internet Security, Inc. (CIS) is a 501(c)(3) nonprofit organization focused on enhancing the cyber security readiness and response of public and private sector entities, with a commitment to excellence through collaboration. CIS provides resources that help partners achieve security goals through expert guidance and cost-effective solutions. CIS provides preconfigured AMIs on the AWS Marketplace here: https://aws.amazon.com/marketplace/seller-profile/ref=dtl_pcp_sold_by?ie=UTF8&id=6b3b0dc2-c6f4-487b-8f29-9edba5f39eed. Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity.

4 – IaaS Requirement: Implement network segmentation and segregation (e.g., n-tier architecture) using host-based firewalls and the CSP's network access controls to limit inbound and outbound VM network connectivity to only required ports/protocols.

AWS Response: Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.
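The VPC address planning described above (your own IP range carved into subnets) can be sketched offline with Python's standard `ipaddress` module; the CIDR ranges and the public/private split below are illustrative assumptions, not AWS defaults:

```python
import ipaddress

# Illustrative address plan: carve a /16 VPC range into /24 subnets,
# earmarking some for public-facing web servers and some for private
# back-end systems (names and ranges are assumptions for the sketch).
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

public_subnets = subnets[:2]    # e.g. web tier, one per Availability Zone
private_subnets = subnets[2:4]  # e.g. databases / application servers

assert str(public_subnets[0]) == "10.0.0.0/24"
assert str(private_subnets[0]) == "10.0.2.0/24"
assert all(s.subnet_of(vpc) for s in subnets)  # every subnet stays inside the VPC range
```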
You can easily customize the network configuration for your Amazon Virtual Private Cloud. For example, you can create a public-facing subnet for your web servers that has access to the Internet, and place your backend systems, such as databases or application servers, in a private-facing subnet with no Internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to Amazon EC2 instances in each subnet. Additionally, you can create a Hardware Virtual Private Network (VPN) connection between your corporate datacenter and your VPC and leverage the AWS cloud as an extension of your corporate datacenter. A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign the instance to up to five security groups. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don't specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC. For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic. This section describes the basic things you need to know about security groups for your VPC and their rules. The default state is to deny all incoming traffic, and you should plan carefully what you will open when building and securing your applications. Well-informed traffic management and security design are still required on a per-instance basis. AWS further encourages you to apply additional per-instance filters with host-based firewalls such as iptables or the Windows Firewall, and VPNs. This can restrict both inbound and outbound traffic.
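The default-deny, allow-list behavior of security groups described above can be modeled in a few lines. This is an illustrative sketch of the evaluation logic only, not the actual EC2 rule engine, and the rule set shown is hypothetical:

```python
import ipaddress

def sg_allows(rules, protocol, port, source_ip):
    """Evaluate inbound security-group rules: allow-list only, default deny.

    Security groups have no explicit deny rules; traffic is allowed only
    if at least one rule matches (a toy model, not the EC2 implementation).
    """
    src = ipaddress.ip_address(source_ip)
    return any(
        r["protocol"] == protocol
        and r["from_port"] <= port <= r["to_port"]
        and src in ipaddress.ip_network(r["cidr"])
        for r in rules
    )

# Hypothetical web-server group: HTTPS from anywhere, SSH only from the VPC.
web_rules = [
    {"protocol": "tcp", "from_port": 443, "to_port": 443, "cidr": "0.0.0.0/0"},
    {"protocol": "tcp", "from_port": 22, "to_port": 22, "cidr": "10.0.0.0/16"},
]

assert sg_allows(web_rules, "tcp", 443, "203.0.113.9")     # HTTPS from the Internet
assert not sg_allows(web_rules, "tcp", 22, "203.0.113.9")  # SSH blocked by default deny
assert sg_allows(web_rules, "tcp", 22, "10.0.1.5")         # SSH from inside the VPC
```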
A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC. For more information about the differences between security groups and network ACLs, see Comparison of Security Groups and Network ACLs.

5 – IaaS Requirement: Utilise secure programming practices for software developed by the tenant.

AWS Response: It is your responsibility to use secure programming practices. AWS's development process for AWS infrastructure and services follows secure software development best practices, which include formal design reviews by the AWS Security Team, threat modeling, and completion of a risk assessment. Static code analysis tools are run as a part of the standard build process, and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts. Our security risk assessment reviews begin during the design phase, and the engagement lasts through launch to ongoing operations. This whitepaper describes how Amazon Web Services (AWS) adds value in the various phases of the software development cycle, with specific focus on development and test. For the development phase, it shows how to use AWS for managing version control; it describes project management tools, the build process, and environments hosted on AWS; and it illustrates best practices. For the test phase, it describes how to manage test environments and run various kinds of tests, including load testing, acceptance testing, fault tolerance testing, etc. AWS provides unique advantages in each of these scenarios and phases, allowing you to pick and choose the ones most appropriate for your software development
project. The intended audiences for this paper are project managers, developers, testers, systems architects, or anyone involved in software production activities. With AWS, your development and test teams can have their own resources, scaled according to their own needs. Provisioning complex environments or platforms composed of multiple instances can be done easily using AWS CloudFormation stacks or some of the other automation techniques described. In large organizations comprising multiple teams, it is a good practice to create an internal role or service responsible for centralizing and managing IT resources running on AWS. This role typically consists of:
• Promoting internal development and test practices described here
• Developing and maintaining template AMIs and template AWS CloudFormation stacks with the different tools and platforms used in your organization
• Collecting resource requests from project teams and provisioning resources on AWS according to your organization's policies, including network configuration (e.g., Amazon VPC) and security configurations (e.g., Security Groups and IAM credentials)
• Monitoring resource usage and charges using Amazon CloudWatch and allocating these to team budgets
While you can use the AWS Management Console to achieve the tasks above, you might want to develop your own internal provisioning and management portal for a tighter integration with internal processes. You can do this by using one of the AWS SDKs, which allow programmatic access to resources running on AWS.

6 – IaaS Requirement: Architect to meet availability requirements, e.g., minimal single points of failure, data replication, automated failover, multiple availability zones, geographically separate data centres, and real-time availability monitoring.

AWS Response: AWS provides you with the capability to implement a robust continuity plan, including the utilization of
frequent server instance backups, data redundancy, replication, and multi-region/Availability Zone deployment architectures. The AWS Well-Architected Framework whitepaper describes how you can assess and improve your cloud-based architectures to better understand the business impact of your design decisions. Included in the paper are the four general design principles as well as specific best practices and guidance in four conceptual areas (security, reliability, performance efficiency, and cost optimization). These four areas are defined as the pillars of the Well-Architected Framework. AWS provides you with the flexibility to place instances and store data within multiple geographic regions as well as across multiple Availability Zones within each region. You should architect your AWS usage to take advantage of multiple Regions and Availability Zones. Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. You can use these insights to react and keep your application running smoothly. The Architecting for the Cloud whitepaper is intended for solutions architects and developers who are building solutions that will be deployed on Amazon Web Services (AWS). It provides architectural patterns and advice on how to design systems that are secure, reliable, high performing, and cost-efficient. It includes a discussion on how to take advantage of attributes that are specific to the dynamic nature of cloud computing (elasticity, infrastructure automation, etc.). In addition, this whitepaper also covers general patterns, explaining how these are evolving and how they are applied in the context of cloud computing.

7 – IaaS Requirement: If high availability is required, implement clustering and load balancing, a Content Delivery Network for public web content, automated scaling with an adequate maximum scale value, and real-time availability monitoring.

AWS Response: Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances in the cloud. It enables you to achieve greater levels of fault tolerance in your applications, seamlessly providing the required amount of load balancing capacity needed to distribute application traffic. Achieve higher levels of fault tolerance for your applications by using Elastic Load Balancing to automatically route traffic across multiple instances and multiple Availability Zones. Elastic Load Balancing ensures that only healthy Amazon EC2 instances receive traffic by detecting unhealthy instances and rerouting traffic across the remaining healthy instances. If all of your EC2 instances in one Availability Zone are unhealthy, and you have set up EC2 instances in multiple Availability Zones, Elastic Load Balancing will route traffic to your healthy EC2 instances in those other zones. Auto Scaling helps you maintain application availability and allows you to scale your Amazon EC2 capacity up or down automatically according to conditions you define. You can use Auto Scaling to help ensure that you are running your desired number of Amazon EC2 instances. Auto Scaling can also automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance and
decrease capacity during lulls to reduce costs. Auto Scaling is well suited both to applications that have stable demand patterns and to those that experience hourly, daily, or weekly variability in usage. Whether you are running one Amazon EC2 instance or thousands, you can use Auto Scaling to detect impaired Amazon EC2 instances and unhealthy applications and replace the instances without your intervention. This ensures that your application is getting the compute capacity that you expect. Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. You can use Amazon Route 53 health checking and DNS failover features to enhance the availability of the applications running behind Elastic Load Balancers. Route 53 will fail away from a load balancer if there are no healthy EC2 instances registered with the load balancer or if the load balancer itself is unhealthy. Using Route 53 DNS failover, you can run applications in multiple AWS regions and designate alternate load balancers for failover across regions. In the event that your application is unresponsive, Route 53 will remove the unavailable load balancer endpoint from service and direct traffic to an alternate load balancer in another region. To get started with Route 53 failover for Elastic Load Balancing, visit the Elastic Load Balancing Developer Guide and the Amazon Route 53 Developer Guide. Amazon CloudFront is a global content delivery network (CDN) service. It integrates with other Amazon Web Services products to give developers and businesses an easy way to distribute content to end users with low latency, high data transfer speeds, and no minimum usage commitments. The service automatically responds as demand increases or decreases without any intervention from you. Amazon CloudFront also uses multiple layers of caching at each edge location and collapses simultaneous requests for the same object before contacting your origin server. These optimizations further help reduce the need to scale your origin infrastructure as your website becomes more popular.
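The Route 53 DNS failover behavior described above amounts to: hand out the primary endpoint while its health check passes, otherwise fail over to the designated secondary. A toy sketch of that policy (the hostnames are hypothetical, and real Route 53 evaluates health checks server-side):

```python
def pick_endpoint(primary, secondary, health):
    """Return the endpoint DNS should hand out: the primary while it
    passes health checks, otherwise the secondary.

    A sketch of the failover *policy* only, not the Route 53 service.
    """
    if health.get(primary, False):
        return primary
    if health.get(secondary, False):
        return secondary
    return None  # nothing healthy to route to

# Hypothetical load balancer endpoints in two regions.
health = {
    "elb-us-east-1.example.com": False,  # failing its health check
    "elb-eu-west-1.example.com": True,
}
assert pick_endpoint("elb-us-east-1.example.com",
                     "elb-eu-west-1.example.com",
                     health) == "elb-eu-west-1.example.com"
```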
Amazon CloudFront is built using Amazon's highly reliable infrastructure. The distributed nature of edge locations used by Amazon CloudFront automatically routes end users to the closest available location as required by network conditions. Origin requests from the edge locations to AWS origin servers (e.g., Amazon EC2, Amazon S3, etc.) are carried over network paths that Amazon constantly monitors and optimizes for both availability and performance. AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. Also, AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules. With AWS WAF, you pay only for what you use. AWS WAF pricing is based on how many rules you deploy and how many web requests your web application receives. There are no upfront commitments.

PaaS Risk Mitigations

1 – PaaS Requirement: Securely configure and promptly patch all software that the tenant is responsible for.

AWS Response: While AWS provides a managed service, you are responsible for setting up and managing network controls, such as firewall rules, and for managing platform-level identity and access management separately from IAM. AWS is responsible for patching systems supporting the delivery of service to customers. This is
done as required per AWS policy and in accordance with ISO 27001, NIST, and PCI requirements. AWS manages the underlying infrastructure and foundation services, the operating system, and the application platform. Elastic Beanstalk regularly releases platform updates to provide fixes, software updates, and new features. With managed platform updates, you can configure your environment to automatically upgrade to the latest version of a platform during a scheduled maintenance window. Your application remains in service during the update process, with no reduction in capacity. You can configure your environment to automatically apply patch version updates, or both patch and minor version updates. Managed platform updates don't support major version updates, which may introduce changes that are backwards incompatible. When you enable managed platform updates, you can also configure AWS Elastic Beanstalk to replace all instances in your environment during the maintenance window, even if a platform update isn't available. Replacing all instances in your environment is helpful if your application encounters bugs or memory issues when running for a long period.

2 – PaaS Requirement: Utilise secure programming practices for software developed by the tenant.

AWS Response: Covered in 5 – IaaS.

3 – PaaS Requirement: Architect to meet availability requirements, e.g., minimal single points of failure, data replication, automated failover, multiple availability zones, geographically separate data centres, and real-time availability monitoring.

AWS Response: Covered in 6 – IaaS.

4 – PaaS Requirement: If high availability is required, implement clustering and load balancing, a Content Delivery Network for public web content, automated scaling with an adequate maximum scale value, and real-time availability monitoring.

AWS Response: Covered in 7 – IaaS.
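The managed platform update behavior described above is driven by environment configuration. A sketch of the relevant `.ebextensions` option settings, assuming a weekly Sunday 02:00 UTC maintenance window (the values shown are illustrative choices, not defaults):

```yaml
# .ebextensions/managed-updates.config — illustrative values only
option_settings:
  aws:elasticbeanstalk:managedactions:
    ManagedActionsEnabled: true
    PreferredStartTime: "Sun:02:00"     # weekly window, UTC
  aws:elasticbeanstalk:managedactions:platformupdate:
    UpdateLevel: minor                  # or "patch" for patch-only updates
    InstanceRefreshEnabled: true        # replace instances even with no update
```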
SaaS Risk Mitigations

1 – SaaS Requirement: Use security controls specific to the cloud service, e.g., tokenisation to replace sensitive data with non-sensitive data, or ASD-approved encryption of data (not requiring processing), and avoid exposing the decryption key.

AWS Response: AWS provides specific SOC controls to address the threat of inappropriate access, and the public certification and compliance initiatives covered in this document address efforts to prevent inappropriate access. All certifications and third-party attestations evaluate logical access preventative and detective controls. In addition, periodic risk assessments focus on how access is controlled and monitored. AWS allows you to implement your own security architecture. For more information about server and network security, see the AWS security whitepaper. All data stored by AWS on your behalf has strong tenant isolation security and control capabilities. You retain control and ownership of your data; thus it is your responsibility to choose to encrypt the data. AWS allows you to use your own encryption mechanisms for nearly all of the AWS services, including S3, EBS, and EC2. IPSec tunnels to VPC are also encrypted. In addition, you can leverage AWS Key Management Service (KMS) to create and control encryption keys using 256-bit AES envelope encryption (refer to https://aws.amazon.com/kms/).

2 – SaaS Requirement: If high availability is required, where possible and appropriate, implement additional cloud services providing layered denial of service mitigation, where these cloud services might be provided by third-party CSPs.

AWS Response: Covered in 7 – IaaS. Additionally, the AWS Best Practices for DDoS Resiliency whitepaper provides guidance on how you can improve the resiliency of your applications running on Amazon Web Services (AWS) against Distributed Denial of Service attacks.
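The tokenisation control named in the SaaS requirement above (replacing sensitive data with non-sensitive stand-ins) reduces to a vault that maps random tokens back to the original values, so downstream systems only ever handle tokens. A toy illustration only; a production vault would add durable storage, access control, and audited detokenisation:

```python
import secrets

class TokenVault:
    """Toy tokenisation vault: swap sensitive values for random tokens.

    Illustrates the substitution idea only — not a production design.
    """
    def __init__(self):
        self._by_token = {}

    def tokenize(self, value: str) -> str:
        # The token carries no information about the original value.
        token = "tok_" + secrets.token_hex(8)
        self._by_token[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._by_token[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")
assert token != "4111-1111-1111-1111"            # processed value is non-sensitive
assert vault.detokenize(token) == "4111-1111-1111-1111"
```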
The paper provides an overview of Distributed Denial of Service attacks, techniques that can help maintain availability, and reference architectures to provide architectural guidance, with the goal of improving your resiliency.

Further Reading
For additional help, see the following sources:
• AWS Security Page: http://aws.amazon.com/security
• AWS Compliance Page: http://aws.amazon.com/compliance
• AWS IRAP Page: http://aws.amazon.com/compliance/irap/
• Overview of AWS Security Processes: http://d0.awsstatic.com/whitepapers/Security/AWS_Security_Whitepaper.pdf
• AWS Risk and Compliance Whitepaper: https://d0.awsstatic.com/whitepapers/compliance/AWS_Risk_and_Compliance_Whitepaper.pdf
• AWS Security Best Practices: https://d0.awsstatic.com/whitepapers/aws-security-best-practices.pdf
• KMS Cryptographic Details: https://d0.awsstatic.com/whitepapers/KMS-Cryptographic-Details.pdf

Document Revisions
Date: June 2017 – Description: Initial publication",General,consultant,Best Practices
Use_Amazon_Elasticsearch_Service_to_Log_and_Monitor_Almost_Everything,"This paper has been archived. For the latest version of this content, visit: https://d1.awsstatic.com/architecture-diagrams/ArchitectureDiagrams/observability-with-logs-traces-metrics-ra.pdf

Use Amazon Elasticsearch Service to Log and Monitor (Almost) Everything
First published December 2016; updated July 13, 2021

This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors.
AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. © 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
What Is Elasticsearch?
How is Elasticsearch used?
What about commercial monitoring tools?
Why use Amazon ES?
Best practices for configuring your Amazon ES domain
Elasticsearch Security and Compliance
Security
Compliance
Multi-Account Log aggregation use case
UltraWarm storage for Amazon ES
Pushing Log data from EC2 instances into Amazon ES
Pushing Amazon CloudWatch Logs into Amazon ES
Using AWS Lambda to send Logs into Amazon ES
Using Amazon Kinesis Data Firehose to load data into Amazon ES
Implement Kubernetes logging with EFK and Amazon ES
Setting up Kibana to visualize Logs
Alerting for Amazon ES
Other configuration options
Conclusion
Contributors

Abstract
Amazon Elasticsearch Service (Amazon ES) makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full-text search, application monitoring, and many more use cases. It is a fully managed service that delivers the easy-to-use APIs and real-time capabilities of Elasticsearch, along with the availability, scalability, and security required by production workloads. Amazon ES is a service designed to be useful for logging and monitoring. It is fully managed by Amazon Web Services (AWS) and offers compelling value relative to its cost of operation. This whitepaper
provides best practices for feeding log data into Elasticsearch and visualizing it with Kibana, using a serverless inbound log management approach. It shows how to use Amazon CloudWatch Logs and the unified Amazon CloudWatch Logs agent to manage inbound logs in Amazon Elasticsearch. You can use this approach instead of the more traditional ELK Stack (Elasticsearch, Logstash, Kibana) approach. It also shows you how to move log data into Amazon ES using Amazon Kinesis Data Firehose, and identifies the strengths and weaknesses of using Kinesis versus the simpler CloudWatch approach, while providing tips and techniques for easy setup and management of the solution. To get the most out of reading this whitepaper, it's helpful to be familiar with AWS Lambda functions, Amazon Simple Storage Service (Amazon S3), and AWS Identity and Access Management (IAM).

Introduction
AWS Cloud implementations differ significantly from on-premises infrastructure. New log sources, the volume of logs, and the dynamic nature of the cloud introduce new logging and monitoring challenges. AWS provides a range of services that help you to meet those challenges. For example, AWS CloudTrail captures all API calls made in an AWS account, Amazon Virtual Private Cloud (Amazon VPC) Flow Logs capture network traffic inside an Amazon VPC, and both containers and EC2 instances can come and go in an elastic fashion in response to AWS Auto Scaling events. Many of these log types have no direct analogy in the on-premises data center world. This whitepaper explains how to use Amazon Elasticsearch Service (Amazon ES) to ingest, index, analyze, and visualize logs produced by AWS services and your applications without increasing the burden of managing or monitoring these systems. Elasticsearch
and its dashboard extension, called Kibana, are popular open source tools because they are simple to use and provide a quick time to value. Additionally, the tools are fully supported by AWS Support, as well as by an active open source community. With the managed Amazon ES service, AWS reduces the effort required to set up and configure a search domain by creating and managing a multi-node Elasticsearch cluster in an automated fashion, replacing failed nodes as needed. The domain is the searchable interface for Amazon ES, and the cluster is the collection of managed compute nodes needed to power the system. AWS currently supports versions of Elasticsearch and Kibana from 1.5 to 7.10. At the date of this writing, the new 6.x and 7.x versions of Elasticsearch and Kibana offer several new features and improvements, including UltraWarm, real-time anomaly detection, index splitting, weighted average aggregation, higher indexing performance, improved cluster coordination safeguards, an option to multiplex token filters, support for field aliases, and improved workflow for inspecting the data behind a visualization. You can create new domains running Elasticsearch 7.10, and you can also easily upgrade existing 5.6 and 6.x domains with no downtime using in-place version upgrades. You can easily scale your cluster with a single API call and configure it to meet your performance requirements by selecting from a range of instance types and storage options, including solid state drive (SSD)-backed EBS volumes. Amazon ES provides
service limit increase By taking advantage of Amazon ES you can concentrate on getting value from the data that is indexed by your Elasticsearch cluster and not on managing the cluster itself You can use AWS tools settings and agents to push data into Amazon ES Then you can configure Kibana dashboards to make it easy to understand interesting correlat ions across multiple types of AWS services and application logs Examples include VPC networking logs application and system logs and AWS API calls Once the data is indexed you can access it via an extensible simple and coherent API using a simple q uery domain specific language (DSL) and piped processing language (ppl) without worrying about traditional relational database concepts such as tables columns or SQL statements As is common with full text indexing you can retrieve results based on the closeness of a match to your query This can be very useful when working with log data to understand and correlate a key problem or failure This whitepaper show s you how to provision an Amazon ES Cluster push log data from Amazon EC2 Instances into Amaz on Elasticsearch push Amazon CloudWatch Logs into Amazon Elasticsearch use AWS Lambda to send logs into Amazon Elasticsearch use Amazon Kinesis Firehose to load data into Amazon ES implement Kubernetes logging with EFK and Amazon Elasticsearch and con figure Alerting for Amazon ES What Is Elasticsearch? 
Amazon Elasticsearch Service (Amazon ES) is a managed service that makes it easy to create a domain and deploy, operate, and scale Elasticsearch clusters in the AWS Cloud. An Amazon ES domain is a service wrapper around an Elasticsearch cluster. A domain contains the engine instances (nodes) that process Amazon ES requests, the indexed data that you want to search, snapshots of the domain, access policies, and metadata. The first public release of the Elasticsearch engine was issued in early 2010, and since then the Elasticsearch project has become one of the most popular open source projects on GitHub. Based internally on Apache Lucene for indexing and search, Elasticsearch converts data that you supply, such as logs, into a JSON-like document structure, using key-value pairs to identify the strings and values that are present in the data. In Elasticsearch, a document is roughly analogous to a row in a database, and it has the following characteristics:
• Has a unique ID
• Is a collection of fields (similar to a column in a database table)
In the following example of a document, the document ID is 34171. The fields include first name, last name, and so on. Note that document types are deprecated in APIs in Elasticsearch 7.0.0 and completely removed in 8.0.0.

Figure 1 – Example of an Elasticsearch document

Elasticsearch supports a RESTful web services interface. You can use PUT, GET, POST, and DELETE commands to interface with an Elasticsearch index, which is a logical collection of documents that can be split into shards. Most users and developers use command-line tools such as cURL to test these capabilities and run simple queries, and then develop their applications in the language of their choice. The following illustration shows an Amazon ES domain that has an index with two shards, Shard A and Shard B.
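The document and index concepts above can be made concrete with a small sketch: a document is a JSON-like collection of fields, and an index is a logical collection of documents keyed by unique ID, roughly what `PUT /my-index/_doc/34171` would store server-side. Only the ID 34171 comes from the figure; the field names and values below are invented for illustration:

```python
import json

# A document mirroring the Figure 1 example shape: first-name /
# last-name fields (values invented for illustration).
document = {"first_name": "Ada", "last_name": "Lovelace"}

# A toy "index": a collection of documents keyed by unique ID.
index = {}
index["34171"] = json.loads(json.dumps(document))  # documents are JSON-like

# Retrieval by unique ID, like GET /my-index/_doc/34171
assert index["34171"]["last_name"] == "Lovelace"
assert "34171" in index
```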
Figure 2 – Elasticsearch terminology

You can think of an Amazon ES domain as a service wrapper around an Elasticsearch cluster and the logical API entry point to interface with the system. A cluster is a logical grouping of one or more nodes and indices. An index is a logical grouping of documents, each of which has a unique ID. Documents are simply groupings of fields that are organized by type. An index can be further divided into shards. The Lucene search engine in Elasticsearch executes on shards that contain a subset of all documents that are managed by a given cluster. Conventional relational database systems aren't typically designed to organize unstructured raw data that exists outside a traditional database in the same manner as Elasticsearch. Log data varies from semi-structured (such as web logs) to unstructured (such as application and system logs and related error and informational messages). Elasticsearch does not require a schema for your data and is often orders of magnitude faster than a relational database system when used to organize and search this type of data.

Figure 3 – Amazon ES architecture

Because Elasticsearch does not store data in a normalized fashion, clusters can grow to tens or thousands of servers and petabytes of data. Searches remain speedy because Elasticsearch stores documents that it creates in close proximity to the metadata that you search via the full-text index. When you have a large distributed system running on AWS, there
is business value in logging everything. Elasticsearch helps you get as close to this ideal as possible by capturing logs on almost everything and making the logs easily accessible.

How is Elasticsearch used?

Many users initially start with Elasticsearch for consumption of logs (~50% of initial use cases involve logs), then eventually broaden their usage to include other searchable data. Elasticsearch is also frequently used for marketing and clickstream analytics. Some of the best examples of analytic usage come from the online retailing world, where several major retailers use Elasticsearch. One example of how they use the data is to follow the clickstream created by their order pipeline to understand buyer behavior and make recommendations either before or after the sale.

Many log applications that target Elasticsearch also start with the use of the Logstash agent and forwarder to transform and enrich their log data (such as adding geographic information and reformatting) prior to sending it to their cluster. Elasticsearch can produce analytic value in a relatively short period of time given the performance of its indexing engine. The default index refresh rate is set at one second, but it is configurable given the size of your cluster and the rate of log ingestion. Because Elasticsearch and Kibana are open source software, it is not unusual to see enterprise customers providing Kibana web access across a large subset of desktops in departments that need to understand their customers better.

Amazon Elasticsearch Service (Amazon ES) provides support for cross-cluster search, enabling you to perform searches, aggregations, and visualizations across multiple Amazon ES domains with a single query or from a single Kibana interface. With this feature, you can
separate heterogeneous workloads into multiple domains, which provides better resource isolation and the ability to tune each domain for its specific workload, improving availability and reducing costs.

Trace Analytics is a new feature of Amazon Elasticsearch Service that enables developers and IT operators to find and fix performance problems in distributed applications, which leads to faster problem resolution times. Trace Analytics is built using OpenTelemetry, a Cloud Native Computing Foundation (CNCF) project that provides a single set of APIs, libraries, agents, and collector services to capture distributed traces and metrics, so customers can leverage Trace Analytics without having to re-instrument their applications. Trace Analytics is powered by the Open Distro for Elasticsearch project, which is open source and freely available for everyone to download and use.

What about commercial monitoring tools?

There are many popular commercial logging and monitoring tools available from AWS partners such as Splunk, Sumo Logic, Loggly, and Datadog. These software-as-a-service (SaaS) and packaged software products provide real value and typically support a high level of commercial feature polish. These packages generally require no installation, or they are software packages that install very simply, making getting started easy. You might decide that you have enough spare time to devote to setting up Amazon ES and related log agents and that the capability it provides meets your requirements.

Your decision to pick Amazon ES versus commercial software should include the cost of labor to establish and manage the service, the setup and configuration time for the AWS services that you are using, and the server and application instance logs
that you want to monitor. Kibana's analytics capabilities continue to improve but are still relatively limited when compared with commercial purpose-built monitoring software. Commercial monitoring and logging products such as the ones we mentioned typically have very robust user administration capabilities.

Why use Amazon ES?

If you use Amazon ES, you will save considerable effort establishing and configuring a cluster, as well as maintaining it over time. Amazon ES automatically finds and replaces failed nodes in a cluster, and you can create or scale up a cluster with a few clicks in the console, a simple API call, or a command line interface (CLI) command. Amazon ES also automatically configures and provisions a Kibana endpoint, which you can use to begin visualizing your data. You can create Kibana dashboards from scratch or import JSON files describing predefined dashboards and customize from there.

It is easy to provision an Amazon ES cluster. You can use the Amazon ES console to set up and configure a domain in minutes. If you prefer programmatic access, you can use the AWS CLI or the AWS SDKs. The following steps are typically what you need to do to provision an Amazon ES cluster:

• Create a domain
• Size the domain appropriately for your workload
• Control access to your domain using a domain access policy or fine-grained access control
• Index data manually or from other AWS services
• Use Kibana to search your data and create visualizations

Best practices for configuring your Amazon ES domain

When you configure your Amazon ES domain, you choose the instance type and count for data and the dedicated master nodes. Elasticsearch is a distributed service that runs on a cluster of instances or nodes. These node types have different functions and
require different sizing. Data nodes store the data in your indexes and process indexing and query requests. Dedicated master nodes don't process these requests; they maintain the cluster state and orchestrate cluster operations.

Amazon ES supports five instance classes: M, R, I, C, and T. As a best practice, use the latest generation instance type from each instance class. For the latest supported instance classes, see Supported instance types in Amazon Elasticsearch Service.

When choosing an instance type for your data nodes, bear in mind that these nodes carry all the data in your indexes (storage) and do all the processing for your requests (CPU). As a best practice for heavy production workloads, choose the R5 or I3 instance type. If your emphasis is primarily on performance, the R5 typically delivers the best performance for log analytics workloads, and often for search workloads. The I3 instances are strong contenders and may suit your workload better, so you should test both. If your emphasis is on cost, the I3 instances have better cost efficiency at scale, especially if you choose to purchase reserved instances. For an entry-level instance or a smaller workload, choose the M5s. The C5s are a specialized instance relevant for heavy query use cases, which require more CPU work than disk or network. Use the T2 or T3 instances for development or QA workloads, but not for production.

When choosing an instance type for your dedicated master nodes, keep in mind that these nodes are primarily CPU-bound, with some RAM and network demand as well. The C5 instances work best as dedicated masters for clusters of up to about 75 data nodes; above that node count, you should choose R5.

For log analytics use cases, you want to control the life cycle of data in your cluster. You can do this with a rolling index pattern. Each day you create a new index, then archive and delete the oldest index in the cluster. You define a retention period that controls how many days (indexes) of data you keep in the domain, based on your analysis needs.
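The daily rolling-index bookkeeping described above can be sketched in a few lines of Python. This is an illustrative sketch only, not Amazon ES or Index State Management code; the `logs-YYYY.MM.DD` naming pattern and the helper name `rolling_index_names` are assumptions made for this example:

```python
from datetime import date, timedelta

def rolling_index_names(today: date, retention_days: int,
                        pattern: str = "logs-%Y.%m.%d"):
    """Return (index_to_create, indexes_to_keep, index_to_expire) for a
    daily rolling index pattern with the given retention period."""
    new_index = today.strftime(pattern)
    # Indexes still inside the retention window, newest first.
    keep = [(today - timedelta(days=d)).strftime(pattern)
            for d in range(retention_days)]
    # The index that has just aged out of the retention window.
    expired = (today - timedelta(days=retention_days)).strftime(pattern)
    return new_index, keep, expired

# With a 7-day retention period, each day we create one index and expire one.
new, keep, expired = rolling_index_names(date(2021, 3, 10), retention_days=7)
```

With a pattern like this, a daily scheduled job (or an Index State Management policy) creates `new`, retains the indexes in `keep`, and archives or deletes `expired`.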
For more information, see Index State Management.

You should try to align your shard and instance counts so that your shards distribute equally across your nodes. You do this by adjusting shard counts or data node counts so that they are evenly divisible.

Elasticsearch Security and Compliance

Security

Amazon ES is a managed service: AWS is responsible for the security of the underlying infrastructure, operating system patching, and management of the Elasticsearch software, while you are responsible for setting up service-level security controls. These include areas such as management of authentication and access controls, data encryption in motion, and data encryption at rest.

Authentication and access control for Elasticsearch are implemented using a combination of SigV4 signing and AWS IAM. Integration with SigV4 will be covered in greater depth during the setup of logging services. For examples of IAM policies that can be used in securing access to Amazon Elasticsearch using resource-based policies, identity-based policies, or IP-based policies, review these policy examples.

All Amazon Elasticsearch domains are created in a dedicated VPC. This setup keeps the cluster secure and isolates inter-node network traffic. By default, traffic within this isolated VPC is unencrypted, but you can also enable node-to-node TLS encryption. This feature must be enabled at the time of Elasticsearch cluster creation; to use it for an existing cluster, you must create a new cluster and migrate your data. Node-to-node encryption requires Elasticsearch version 6.0 or later.

For enabling data encryption at rest, Amazon ES natively integrates with AWS Key Management Service (AWS KMS), making it easy to secure data within Elasticsearch
indices, automated snapshots, Elasticsearch logs, swap files, and all data in the application directory. This option, along with node-to-node encryption, must be set up during domain creation. Encryption of data at rest requires Elasticsearch 5.1 or later.

Encryption of manual snapshots and encryption of slow logs and error logs must also be configured separately. Manual snapshots can be encrypted using server-side encryption in S3; for more details, see Registering a manual snapshot repository. If published to Amazon CloudWatch, slow logs and error logs can be encrypted using the same KMS master key as the ES domain. For more information, see Encrypt log data in CloudWatch Logs using AWS KMS.

Amazon ES offers fine-grained access control (FGAC), which adds multiple capabilities to give you tighter control over your data. FGAC features include the ability to use roles to define granular permissions for indices, documents, or fields, and to extend Kibana with read-only views and secure multi-tenant support. Two forms of authentication and authorization are provided by FGAC: a built-in user database, which makes it easy to configure usernames and passwords inside of Elasticsearch, and AWS Identity and Access Management (IAM) integration, which lets you map IAM principals to permissions. FGAC is powered by Open Distro for Elasticsearch, an Apache 2.0-licensed distribution of Elasticsearch, and is available on domains running Elasticsearch 6.7 and higher.

Compliance

By choosing to use Amazon Elasticsearch Service, you can greatly reduce your compliance efforts by building compliant applications on top of existing AWS compliance certifications and attestations. Amazon Elasticsearch Service is HIPAA eligible. You can use Amazon
Elasticsearch Service to store and analyze protected health information (PHI) and build HIPAA-compliant applications. To set up, visit AWS Artifact in your HIPAA accounts and agree to the AWS Business Associate Agreement (BAA). This BAA can be set up for individual AWS accounts or for all of the accounts under the management account of your AWS Organization.

Amazon Elasticsearch Service is also in scope for AWS's Payment Card Industry Data Security Standard (PCI DSS) attestation, which allows you to store, process, or transmit cardholder data using the service. Additionally, Amazon Elasticsearch Service is in scope for the AWS ISO 9001, 27001, 27017, and 27018 certifications. PCI DSS and ISO are among the most recognized global security standards for attesting to quality and information security management in the cloud.

AWS Config is a service that continuously monitors the configuration of AWS services for compliance and can automate remediation actions using AWS Config rules. In the case of Amazon Elasticsearch Service, you should consider enabling Config rules such as:

• elasticsearch-in-vpc-only – Checks whether the Amazon Elasticsearch cluster is deployed in a VPC; the rule is NON_COMPLIANT if the ES domain is public.
• elasticsearch-encrypted-at-rest – Checks that Amazon Elasticsearch domains have been deployed with encryption at rest enabled; the rule is NON_COMPLIANT if the EncryptionAtRestOptions field is not enabled.

Amazon Elasticsearch Service offers a detailed audit log of all Elasticsearch requests. Audit Logs allows customers to record a trail of all user actions, helping meet compliance regulations, improving the overall security posture, and providing evidence for security investigations. Amazon Elasticsearch Service Audit Logs allows customers to
log all of their user activity on their Elasticsearch clusters, including keeping a history of user authentication successes and failures, logging all requests to Elasticsearch, recording modifications to indices, recording incoming search queries, and much more. Audit Logs provides a default configuration that covers a popular set of user actions to be tracked, and administrators can further configure and fine-tune the settings to meet their needs. Audit Logs is integrated with fine-grained access control, giving you the ability to log access or modification requests to sensitive documents or fields to meet any compliance requirements. Once configured, Audit Logs are continuously streamed to CloudWatch Logs and can be further analyzed there. Audit Logs settings can be changed at any time and are automatically updated. Both new and existing Amazon Elasticsearch Service domains (version 6.7+) with fine-grained access control enabled can use the Audit Logs feature.

Multi-account log aggregation use case

An important part of every large enterprise AWS deployment is a multi-account strategy that is set up using either AWS Control Tower or AWS Landing Zones. This creates a core for the centralized governance of accounts, including the aggregation of the logs from all of a customer's accounts into one centralized account, where they can be ingested into Elasticsearch to be correlated and monitored in one central location. This can include logs from services and components such as CloudTrail Logs, CloudWatch Log Groups, VPC Flow Logs, AWS Config Logs, and Amazon GuardDuty Logs.

In the case of CloudWatch Logs, these can be streamed directly to Elasticsearch from all accounts in a customer's organization using the methods described in Stream Amazon CloudWatch Logs to a
Centralized Account for Audit and Analysis.

Because Amazon ES runs in an AWS-managed VPC, and not in a VPC that you control, you must secure access to it and to the Kibana dashboards that you use with it. There are two starting points for this:

• IP address restrictions configured with EC2 security groups
• HTTP basic auth configured through an nginx proxy that sits in front of the Amazon ES endpoint

Because these first two methods are relatively weak, using nginx with SSL/TLS to provide user administration and block all other traffic should be implemented before using this approach with production data. Beyond these two basic controls, the preferred method for securing access to Kibana is to enable access using AWS Single Sign-On or your own federation service. This setup allows only users within your Microsoft Active Directory to visualize data stored in Elasticsearch. It uses a standard SAML identity federation approach, and a specific Active Directory group can be used to restrict access to an Amazon Elasticsearch domain. If you do not already have an Active Directory domain set up with your users, another option is to use Amazon Elasticsearch Service's native integration with Amazon Cognito user pools to manage access. This approach provides user-level access control for Kibana, access to ES domains, and the ability to set policies for groups of users within the Amazon Cognito user pool.

UltraWarm storage for Amazon ES

UltraWarm provides a cost-effective way to store large amounts of read-only data on Amazon Elasticsearch Service. Standard data nodes use "hot" storage, which takes the form of instance stores or Amazon EBS volumes attached to each node. Hot storage provides the fastest possible performance for indexing and searching new data. UltraWarm nodes use Amazon S3 and a sophisticated caching solution to improve performance. For indices that you are not actively writing to, query less frequently, and don't need the same performance as hot
storage, UltraWarm offers significantly lower costs per GiB of data.

In Elasticsearch, these warm indices behave just like any other index: you can query them using the same APIs or use them to create dashboards in Kibana. Because UltraWarm uses Amazon S3, it does not incur the overhead that is typical of hot storage. When calculating UltraWarm storage requirements, you consider only the size of the primary shards. The durability of data in S3 removes the need for replicas, and S3 abstracts away any operating system or service considerations. Each UltraWarm node can use 100% of its available storage for primary data.

Pushing log data from EC2 instances into Amazon ES

While many Elasticsearch users favor the "ELK" (Elasticsearch, Logstash, and Kibana) stack, a serverless approach using Amazon CloudWatch Logs has some distinct advantages. You can consolidate your log feeds, install a single agent to push application and system logs, remove the requirement to run a Logstash cluster on Amazon EC2, and avoid having any additional monitoring or administration requirements related to log management. However, before going serverless, you might want to review and consider whether you will need some of the more advanced Logstash transformation capabilities that the CloudWatch Logs agent does not support.

The following process shows how to set up the CloudWatch Logs agent on an Ubuntu EC2 instance to push logs to Amazon ES. AWS Lambda lets you run code without provisioning or managing servers. As logs come in, AWS Lambda runs code to put the log data in the right format and move it into Amazon ES using its API.

Figure 4 – CloudWatch Logs architecture
You will be prompted for the location of the application and system logs, the datestamp format, and a starting point for the log upload. Your logs will be stored in CloudWatch, and you can stream them into Amazon ES. You can perform the preceding steps for all EC2 instances that you want to connect to CloudWatch, use the EC2 Run Command to install across a fleet of instances, or build a boot script to use with auto-scaled instances.

To connect a CloudWatch stream to Amazon ES, follow the steps in the AWS documentation Streaming CloudWatch Logs data to Amazon Elasticsearch Service, using the name of a previously created Amazon ES domain to subscribe your new log group to Amazon ES. Note that there are several log formatting options that you might want to review during the connection process, and you can exclude log information that is not of interest to you. You will be prompted to create an AWS Lambda execution role, because AWS uses Lambda to integrate your CloudWatch log group with Amazon ES. You have now created an Amazon ES domain and configured one or more instances to send data to CloudWatch Logs, which can then be forwarded to Amazon ES via Lambda.

Pushing Amazon CloudWatch Logs into Amazon ES

The CloudWatch Logs → Lambda → Amazon ES integration makes it easy to send data to Elasticsearch if source data exists in CloudWatch Logs. The following figure shows the features and services that you can use to process different types of logs.

Figure 5 – Pushing CloudWatch Logs into Amazon ES

• AWS API activity logs (AWS CloudTrail): AWS CloudTrail tracks your activity in AWS and provides you with an audit trail
for API activity in your AWS account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. You should enable CloudTrail logging for all AWS Regions. CloudTrail logs can be sent to Amazon S3 or to CloudWatch Logs; for the purposes of sending logs to Amazon ES as a final destination, it is easier to send them to CloudWatch Logs.

• Network activity logs (VPC Flow Logs): VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your Amazon Virtual Private Cloud (Amazon VPC). VPC Flow Log data is stored as CloudWatch Logs.

• Application logs from AWS Lambda functions: Application logs from your Lambda code are useful for code instrumentation, profiling, and general troubleshooting. In the code for your AWS Lambda functions, any console output that typically would be sent to standard output is delivered as CloudWatch Logs. For example: console.log() statements for Node.js functions, print() statements for Python functions, and System.out.println() statements for Java functions.

Using AWS Lambda to send logs into Amazon ES

For maximum flexibility, you can use AWS Lambda to send logs directly to your Elasticsearch domain. Custom logic in your Lambda function code can then perform any desired data processing, cleanup, and normalization before sending the log data to Amazon ES. This approach is highly flexible; however, it does require technical understanding of how AWS Signature Version 4 security works. For security purposes, in order to issue any queries or updates against an Elasticsearch cluster, the request must be signed using AWS Signature
Version 4 ("SigV4 signing"). Signature Version 4 is the process used to add authentication information to AWS requests. Rather than implementing SigV4 signing on your own, we highly recommend that you adapt existing SigV4 signing code. For the CloudWatch Logs → Lambda → Amazon ES integration described earlier, the Lambda code for implementing SigV4 signing is automatically generated for you. If you inspect the code associated with the auto-generated Lambda function, you can view the SigV4 signing code that is used to authenticate against the Elasticsearch cluster, and you can copy that code as a starting point for your own Lambda functions that need to interact with the Amazon ES cluster. Another example of code implementing SigV4 signing is described in the AWS blog post How to Control Access to Your Amazon Elasticsearch Service Domain. Using the AWS SDK for your programming language of choice will also take care of the heavy lifting of SigV4 signing, making this process much easier.

Figure 6 – Overview of Lambda to Amazon ES data flow

These AWS event sources can provide data to your Lambda function code, and your Lambda function code can process and send that data to your Amazon ES cluster. For example, log files stored on S3 can be sent to Amazon ES via Lambda. Streaming data sent to an Amazon Kinesis stream can be forwarded to Amazon ES via Lambda; a Kinesis stream will scale up to handle very high log data rates without any management effort on your part, and AWS will manage the durability of the stream for you. For more information about the data provided by each of these AWS event sources, see the AWS Lambda documentation.

The S3 → Lambda → Amazon ES integration pattern is a particularly useful one. As one example, many AWS-powered websites store their web
access logs in Amazon S3. If your website uses Amazon CloudFront (for global content delivery), Amazon S3 (for static website hosting), or Elastic Load Balancing (for load balancers in front of your web servers), then you should enable the access logs for each service. There is no extra charge to enable logging, other than the cost of storage for the actual logs in Amazon S3. Once the log files are in Amazon S3, you can process them using Lambda and send them to Amazon ES, where you can analyze your website traffic using Kibana.

Using Amazon Kinesis Data Firehose to load data into Amazon ES

You can use Amazon Kinesis Data Firehose to transform your data and load it to Amazon ES. This approach requires you to install the Amazon Kinesis agent on the EC2 instances that you want to monitor; you don't need to transmit log information to CloudWatch Logs. Because Kinesis Data Firehose is a highly scalable managed service, you can transmit log data from hundreds or thousands of instances in a very large installation. You should consider Kinesis Data Firehose if you have the following requirements:

• Large-scale log monitoring installation
• Serverless approach to transforming and loading log data
• Simultaneously storing logs in an S3 bucket for compliance or archival purposes while continuously transmitting to Amazon ES

Amazon Kinesis Data Firehose is a rich and powerful real-time stream management system that is directly integrated with Amazon ES. The following illustration shows the flow of logs managed by Kinesis Data Firehose into Amazon ES.

Figure 7 – Overview of Firehose to Amazon ES data flow
Support for Apache web logs is built in to Amazon Kinesis Data Firehose. To help you evaluate Amazon Kinesis Data Firehose for log analytics using Amazon ES as a target, see the tutorial Build a Log Analytics Solution.

Implement Kubernetes logging with EFK and Amazon ES

The combination of Fluentd (unified logging), Elasticsearch (RESTful analytics engine), and Kibana (visualizations) is known as the EFK stack. Fluentd is configured as a DaemonSet, where it collects logs and forwards them to CloudWatch Logs; from there they can be filtered using a subscription filter and then sent to an ES domain for further querying and visualization. The AWS workshop Implement Logging with EFK will walk you through the setup of Kubernetes logging to the EFK stack.

Figure 8 – Setup of Kubernetes logging to EFK stack

AWS also supports Fluent Bit for streaming logs from containerized applications to AWS and partners' solutions for log retention and analytics. With the Fluent Bit plugin for AWS container images, you can route logs to Amazon CloudWatch and Amazon Kinesis Data Firehose destinations (which include Amazon Elasticsearch Service). The blog post Centralized Container Logging with Fluent Bit contains more information on the relative performance of Fluent Bit versus Fluentd and the advantages Fluent Bit offers.

Setting up Kibana to visualize logs

One advantage of Amazon ES is that Kibana is set up and ready to configure after you create your search domain. When you first start Kibana, you are prompted to configure an index pattern. Community support for Kibana has produced several types of useful preconfigured Kibana dashboards. The main GitHub repository contains dashboards to
visualize:

• Amazon Elasticsearch cluster statistics (KOPF)
• Amazon VPC Flow Logs
• AWS CloudTrail Logs
• AWS Lambda Logs

Remember the requirement to lock down access to Kibana for all users. A best practice for this would be to use a corporate LDAP or Active Directory service to manage access to Kibana.

Alerting for Amazon ES

The Amazon ES alerting feature notifies you when data from one or more Elasticsearch indices meets certain conditions. For example, you might want to receive an email if your application logs more than five HTTP 503 errors in one hour, or you might want to page a developer if no new documents have been indexed in the past 20 minutes. Alerting requires Elasticsearch 6.2 or higher.

Compared to Open Distro for Elasticsearch, the Amazon ES alerting feature has some notable differences. Amazon ES supports Amazon SNS for notifications. This integration with Amazon SNS means that, in addition to the standard destinations (Slack, custom webhooks, and Amazon Chime), the alerting feature can send emails and text messages, and even run AWS Lambda functions, using SNS topics. The alerting feature also supports fine-grained access control, so you can mix and match permissions to fit your use cases.

Other configuration options

Once you have CloudWatch Logs flowing into Amazon ES, make sure you have all of the other types of AWS logs enabled (such as CloudTrail Logs). As you add new log types, you can add or configure additional Kibana dashboards to match the inbound log pattern. In addition, you can use the Amazon ES anomaly detection feature to automatically detect anomalies in your log data in near real time by using the Random Cut Forest (RCF) machine learning algorithm. You can use Trace Analytics to help you visualize this flow of events and identify
performance problems.

Conclusion

This whitepaper explained what Elasticsearch is. It also covered how to use it, compared it with commercial monitoring tools, and explored why you would want to use Amazon Elasticsearch Service. In addition, it covered how to configure Amazon Elasticsearch Service, as well as how to push logs into it from Amazon EC2, Amazon CloudWatch, AWS Lambda, and Amazon Kinesis Data Firehose. Finally, it explained the setup of Kibana for visualization of logs and alerting for Amazon ES.

Contributors

The following individuals and organizations contributed to this document:

• Jim Tran, Principal Product Manager, AWS
• Pete Buonora, Principal Solutions Architect, AWS
• Changbin Gong, Senior Solutions Architect, AWS
• Naresh Gautam, Senior Analytics Specialist Architect, AWS",General,consultant,Best Practices Use_AWS_Config_to_Monitor_License_Compliance_on_Amazon_EC2_Dedicated_Hosts,"Archived: Use AWS Config to Monitor License Compliance on Amazon EC2 Dedicated Hosts, April 2016

This paper has been archived. For the latest technical guidance about Amazon EC2, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers/

© 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by
AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. Contents: Abstract 4; Introduction 4; Setting Up AWS Config to Track Dedicated Hosts and EC2 Instances 5; Creating a Custom Rule to Check that Launched Instances Are on a Dedicated Host 7; Addressing Other Bring Your Own License (BYOL) Compliance Requirements with AWS Config Rules 15; Conclusion 15; Contributors 16; Further Reading 16. Abstract: Amazon Elastic Compute Cloud (EC2) Dedicated Hosts can help enterprises reduce costs by allowing the use of existing server-bound licenses. Many customers can also use Dedicated Hosts to address corporate compliance and regulatory requirements. Oftentimes, customers using Dedicated Hosts want to continuously record and evaluate changes to their infrastructure to stay compliant with license terms and regulatory requirements. This paper outlines the ways in which you can leverage AWS Config and AWS Config Rules to monitor license compliance on Amazon EC2 Dedicated Hosts. Introduction: This paper discusses how you can set up AWS Config to record configuration changes to Amazon EC2 Dedicated Hosts and EC2 instances in order to ascertain your licensing compliance posture. You'll learn how to create AWS Config Rules to govern the way your server-bound licenses are used on Amazon Web Services (AWS). We'll create a sample rule that checks whether all instances in an account created from an Amazon Machine Image (AMI) called MyWindowsImage are launched onto a specific Dedicated Host. We'll also describe other checks that can be employed to monitor compliance with common licensing restrictions and to govern your Dedicated Host resources. An Amazon EC2 Dedicated Host is a physical server with EC2 instance
capacity fully dedicated for your use. You get complete visibility into the number of sockets and physical cores that support your instances on a Dedicated Host. Dedicated Hosts allow you to place your instances on a specific physical server. This level of visibility and control, in turn, allows you to use your existing per-socket, per-core, or per-virtual machine (VM) software licenses (e.g., Microsoft Windows Server) to save costs and meet compliance and regulatory requirements. To track the history of instances that are launched, stopped, or terminated on a Dedicated Host, you can use AWS Config. AWS Config pairs this information with host-level and instance-level information relevant to software licensing, such as the host ID, AMI IDs, and number of sockets and physical cores per host. You can then use this data to verify usage against your licensing metrics. You can use AWS Config Rules to choose from a set of prebuilt rules based on common AWS best practices, or define custom rules. You can set up rules that check the validity of changes made to resources tracked by AWS Config against policies and guidelines defined by you. You can set these AWS Config Rules to evaluate each change to the configuration of a resource, or you can execute them at a set frequency. You can also author your own custom rules by creating AWS Lambda functions in any supported language. Setting Up AWS Config to Track Dedicated Hosts and EC2 Instances: Open the AWS Management Console and go to the EC2 console. On the EC2 Dedicated Hosts page, notice the Edit Config Recording button at the top. The icon in red indicates that AWS Config is not currently set up to record configuration changes to Dedicated Hosts and instances. Figure 1: Edit Config Recording Button with the Red Icon on Dedicated Host Console
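If you prefer to script this setup rather than click through the console, the same recording configuration can be sketched with the AWS CLI. This is a minimal sketch, not taken from this paper: the role ARN, account ID, and bucket name below are hypothetical placeholders, and the IAM role must already exist and be assumable by AWS Config.

```shell
# Minimal sketch of enabling AWS Config recording from the CLI.
# Role ARN, account ID, and bucket name are hypothetical placeholders.

# Create a recorder that records all supported resource types,
# which includes EC2 instances and EC2 Dedicated Hosts.
aws configservice put-configuration-recorder \
    --configuration-recorder name=default,roleARN=arn:aws:iam::111122223333:role/example-config-role \
    --recording-group allSupported=true

# Deliver configuration history and snapshot files to an S3 bucket.
aws configservice put-delivery-channel \
    --delivery-channel name=default,s3BucketName=example-config-bucket

# Start recording configuration changes.
aws configservice start-configuration-recorder \
    --configuration-recorder-name default
```

The console flow described next accomplishes the same thing; the CLI form is convenient when you need to enable recording across many accounts or regions.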
Getting started with AWS Config is simple. Click the Edit Config Recording button to open the AWS Config settings page. On this page, check Record all resources supported in this region. Figure 2: Selecting Resource Types to Record on the AWS Config Settings Page. You can choose to enable recording only for Dedicated Hosts and instances by selecting these resources in Specific types. If you are setting up AWS Config for the first time, you must specify an Amazon S3 bucket into which AWS Config can deliver configuration history and snapshot files. Optionally, you can also provide an Amazon Simple Notification Service (SNS) topic to which change and compliance notifications will be delivered. Finally, you'll be asked to grant appropriate permissions to AWS Config and save the settings. For more details on setting up AWS Config using the AWS Management Console or the CLI, see the Getting Started with AWS Config documentation. After the AWS Config setup is complete, you'll notice that the icon on the EC2 console page for Dedicated Hosts has turned green. This indicates that AWS Config is recording configuration changes to all EC2 instances and Dedicated Hosts. Figure 3: The Edit Config Recording Button with Green Icon. Creating a Custom Rule to Check that Launched Instances Are on a Dedicated Host: Now that you have set up AWS Config to start recording configuration changes to Dedicated Hosts and EC2 instances, you can start writing rules to evaluate the license compliance state of all instances in the account. To get started, you will write a rule that checks whether all instances launched from the MyWindowsImage AMI are placed onto a specific Dedicated Host. For this sample, assume that MyWindowsImage is the name of an AMI you have imported and is the machine image of a Microsoft Windows Server license you own. Before creating the rule, first inspect the
instances and Dedicated Hosts on your account: Look up the EC2 Instance and EC2 Host resource types. In Figure 4, you can see one Dedicated Host and a number of instances. Figure 4: Review the Resource Inventory. Click the icon for the Dedicated Host to go to the Config Timeline to see the configuration of the Dedicated Host, including the sockets, cores, total vCPUs, and available vCPUs. You can also see all the instances that are currently running on the host. Traversing the timeline provides all historical configurations of the Dedicated Host, including the instances that were launched onto the Dedicated Host in the past. You can also look into the Config timeline of each of those instances. Figure 5: The Config Resource Configuration History Timeline. Next, you will set up the new rule in AWS Config and write the AWS Lambda function for the rule. To do this, click Add rule in the AWS Config console, and then click Create AWS Lambda function to set up the function you want to execute. Figure 6: AWS Config Rule Creation Page. On the Lambda console, select the config-rule-change-triggered blueprint to get started. Figure 7: The Lambda Select Blueprint Page. You can annotate compliance states. To do this, first add a global variable called annotation:

var aws = require('aws-sdk');
var config = new aws.ConfigService();
var annotation;

You also need to modify the evaluateCompliance function and the handler invoked by AWS Lambda. The rest of the blueprint code can be left untouched.

function evaluateCompliance(configurationItem, ruleParameters, context) {
    checkDefined(configurationItem, "configurationItem");
    checkDefined(configurationItem.configuration, "configurationItem.configuration");
    checkDefined(ruleParameters, "ruleParameters");
    if ('AWS::EC2::Instance' !== configurationItem.resourceType) {
        return 'NOT_APPLICABLE';
    }
    if (ruleParameters.imageId === configurationItem.configuration.imageId
        && ruleParameters.hostId !== configurationItem.configuration.placement.hostId) {
        annotation = "Instance " + configurationItem.configuration.instanceId
            + " launched from BYOL AMI " + configurationItem.configuration.imageId
            + " has not been placed on dedicated host " + ruleParameters.hostId;
        return 'NON_COMPLIANT';
    } else {
        return 'COMPLIANT';
    }
}

For this example function, imageId and hostId are parameters that are passed to the function by the AWS Config rule that will be created next. The imageId parameter will contain the AMI ID of MyWindowsImage. Use this to identify instances that are launched from this image. After you detect that an instance was launched from MyWindowsImage, you can then check whether the instance was launched onto the specified Dedicated Host identified by the hostId parameter. The instance is marked noncompliant if it is found to be not running on the host on which all instances launched from MyWindowsImage should be running. You can annotate the compliance state of a resource with additional information indicating why the resource was marked noncompliant. This sample elaborates the details of why the instance was marked noncompliant and assigns this text to the annotation global variable. Finally, changes are made to the handler to pass on the annotation along with the rest of the compliance information:

putEvaluationsRequest.Evaluations = [
    {
        ComplianceResourceType: configurationItem.resourceType,
        ComplianceResourceId: configurationItem.resourceId,
        ComplianceType: compliance,
        OrderingTimestamp: configurationItem.configurationItemCaptureTime,
        Annotation: annotation
    }
];

After changes are made to the AWS Lambda function, select the appropriate role and save the function. In our example, we also noted the Amazon Resource Name (ARN) of the function. After the function is created, go back to the AWS Config console and enter the ARN of the function that was just created. Figure 8: Entering the AWS Lambda Function ARN on the AWS Config Rule Creation Page. After specifying the appropriate settings for the rule, save it. The rule is evaluated once immediately after it is created, and thereafter for any changes that are made to EC2 instances. In this example, two instances were launched from MyWindowsImage, out of which only one was launched onto the specified Dedicated Host. The AWS Config rule marks the other instance noncompliant. Figure 9: Instance Marked as Noncompliant. The Compliant or Noncompliant state for each rule is also sent as a notification via the Amazon SNS topic you created when you set up AWS Config. You can configure these notifications to send an email, trigger a corrective action, or log a ticket. The Amazon SNS notification contains details about the change in compliance state, including the annotation that elaborates the reason for noncompliance:

View the Timeline for this Resource in AWS Config Management Console: https://console.aws.amazon.com/config/home?region=us-east-1#/timeline/AWS::EC2::Instance/i-a46d7125?time=2016-01-28T02:02:35.606Z

New Compliance Change Record:
{
    "awsAccountId": "434817024337",
    "configRuleName": "restrictedAMI",
    "configRuleARN": "arn:aws:config:us-east-1:434817024337:config-rule/config-rule-hz8yxz",
    "resourceType": "AWS::EC2::Instance",
    "resourceId": "i-a46d7125",
    "awsRegion": "us-east-1",
    "newEvaluationResult": {
        "evaluationResultIdentifier": {
            "evaluationResultQualifier": {
                "configRuleName": "restrictedAMI",
                "resourceType": "AWS::EC2::Instance",
                "resourceId": "i-a46d7125"
            },
            "orderingTimestamp": "2016-01-28T02:02:35.606Z"
        },
        "complianceType": "NON_COMPLIANT",
        "resultRecordedTime": "2016-01-28T02:02:41.417Z",
        "configRuleInvokedTime": "2016-01-28T02:02:40.396Z",
        "annotation": "Instance i-a46d7125 launched from BYOL AMI ami-60b6c60a has not been placed on dedicated host h-086f4a5066fb7b991",
        "resultToken": null
    },
    "oldEvaluationResult": {
        "evaluationResultIdentifier": {
            "evaluationResultQualifier": {
                "configRuleName": "restrictedAMI",
                "resourceType": "AWS::EC2::Instance",
                "resourceId": "i-a46d7125"
            },
            "orderingTimestamp": "2016-01-28T01:44:54.553Z"
        },
        "complianceType": "COMPLIANT",
        "resultRecordedTime": "2016-01-28T01:45:03.438Z",
        "configRuleInvokedTime": "2016-01-28T01:45:01.298Z",
        "annotation": null,
        "resultToken": null
    },
    "notificationCreationTime": "2016-01-28T02:02:42.317Z",
    "messageType": "ComplianceChangeNotification",
    "recordVersion": "1.0"
}

Addressing Other Bring Your Own License (BYOL) Compliance Requirements with AWS Config Rules: The AWS Config rule created in the example above checks one of the several compliance requirements you may have associated with BYOL server-bound licenses. This rule can be further extended to check other license-specific restrictions, such as the following:
• Host affinity of the instances
• Number of sockets or number of cores of the Dedicated Host onto which the instances are launched
• Duration for which an instance needs to be on a specified Dedicated Host
In addition, you can also monitor the utilization of Dedicated Hosts you own and mark them
noncompliant if their usage drops below a threshold. This can help you optimize your fleet of Dedicated Hosts. Conclusion: In this paper, you learned how you can use AWS Config in conjunction with AWS Config Rules to ascertain your license compliance posture on Amazon EC2 Dedicated Hosts. AWS Config can be used more broadly to monitor and govern all your resources. For more information, see Further Reading below. Contributors: The following individuals and organizations contributed to this document: • Chayan Biswas, Senior Product Manager, AWS Config. Further Reading: For additional help, please consult the following sources: • Documentation on what AWS Config supports: Supported Resources, Configuration Items, and Relationships • Blog post: How to Record and Govern your IAM Resource Configurations Using AWS Config • AWS Config product page: AWS Config",General,consultant,Best Practices Use_AWS_WAF_to_Mitigate_OWASPs_Top_10_Web_Application_Vulnerabilities,This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers. Use AWS WAF to Mitigate OWASP's Top 10 Web Application Vulnerabilities, July 2017. © 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved. Notices: This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any
warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. Contents: Introduction 1; Web Application Vulnerability Mitigation 2; A1 – Injection 3; A2 – Broken Authentication and Session Management 5; A3 – Cross-Site Scripting (XSS) 7; A4 – Broken Access Control 9; A5 – Security Misconfiguration 12; A6 – Sensitive Data Exposure 15; A7 – Insufficient Attack Protection 16; A8 – Cross-Site Request Forgery (CSRF) 19; A9 – Using Components with Known Vulnerabilities 21; A10 – Underprotected APIs 23; Old Top 2013 A10 – Unvalidated Redirects and Forwards 24; Companion CloudFormation Template 26; Conclusion 29; Contributors 30; Further Reading 30; Document Revisions 31. Abstract: AWS WAF is a web application firewall that helps you protect your websites and web applications against various attack vectors at the HTTP protocol level. This paper outlines how you can use the service to mitigate the application vulnerabilities that are defined in the Open Web Application Security Project (OWASP) Top 10 list of most common categories of application security flaws. It's targeted at anyone who's tasked with protecting websites or applications and maintaining their security posture and availability. Introduction: The Open Web
Application Security Project (OWASP) is an online community that creates freely available articles, methodologies, documentation, tools, and technologies in the field of web application security.1 They publish a ranking of the 10 most critical web application security flaws, known as the OWASP Top 10.2 While the current version was published in 2013, a new 2017 Release Candidate version is currently available for public review. The OWASP Top 10 represents a broad consensus of the most critical web application security flaws. It's a widely accepted methodology for evaluating web application security and building mitigation strategies for websites and web-based applications. It outlines the top 10 areas where web applications are susceptible to attacks and where common vulnerabilities are found in such workloads. For any project aimed at enhancing the security profile of websites and web-based applications, it's a great idea to understand the OWASP Top 10 and how it relates to your own workloads. This will help you implement effective mitigation strategies. AWS WAF is a web application firewall (WAF) you can use to help protect your web applications from common web exploits that can affect application availability, compromise security, or consume excessive resources.3 With AWS WAF, you can allow or block requests to your web applications by defining customizable web security rules. You can also use AWS WAF to create rules that block common attack patterns, as well as specific attack patterns targeted at your application. AWS WAF works with Amazon CloudFront,4 our global content delivery network (CDN) service, and the Application Load Balancer option for Elastic Load Balancing.5 By using these together, you can analyze incoming HTTP requests, apply a set of rules, and take actions based on the matching of those rules. AWS WAF can help you mitigate the OWASP Top 10 and other web application security vulnerabilities, because attempts to exploit them often have common
detectable patterns in the HTTP requests. You can write rules to match the patterns and block those requests from reaching your workloads. However, it's important to understand that using any web application firewall doesn't fix the underlying flaws in your web application. It just provides an additional layer of defense, which reduces the risk of their being exploited. This is especially useful in a modern development environment where software evolves quickly. Web Application Vulnerability Mitigation: In April 2017, OWASP released the new iteration of the Top 10 for public comment. The categories listed in the new proposed Top 10 are many of the same application flaw categories from the 2013 Top 10 and past versions: A1 Injection; A2 Broken Authentication and Session Management; A3 Cross-Site Scripting (XSS); A4 Broken Access Control (NEW); A5 Security Misconfiguration; A6 Sensitive Data Exposure; A7 Insufficient Attack Protection (NEW); A8 Cross-Site Request Forgery (CSRF); A9 Using Components with Known Vulnerabilities; A10 Underprotected APIs (NEW). The new A4 category consolidates the categories Insecure Direct Object References and Missing Function Level Access Controls from the 2013 Top 10. The previous A10 category, Unvalidated Redirects and Forwards, has been replaced with a new category that focuses on Application Programming Interface (API) security. In this paper, we discuss both old and new categories. You can deploy AWS WAF to effectively mitigate a representative set of attack vectors in many of the categories above. It can also be effective in other categories. However, the effectiveness depends on the specific workload that's protected and the ability to derive recognizable HTTP request patterns. Given that attacks and exploits evolve
constantly, it's highly unlikely that any one web application firewall can mitigate all possible scenarios of an attack that targets flaws in any of these categories. This paper describes recommendations for each category that you can implement easily to get started in mitigating application vulnerabilities. At the end of the paper, you can download an example AWS CloudFormation template that implements some of the generic mitigations discussed here. However, be aware that the applicability of these rules to your particular web application can vary. A1 – Injection: Injection flaws occur when an application sends untrusted data to an interpreter.6 Often the interpreter has its own domain-specific language. By using that language and inserting unsanitized data into requests to the interpreter, an attacker can alter the intent of the requests and cause unexpected actions. Perhaps the most well-known and widespread injection flaws are SQL injection flaws. These occur when input isn't properly sanitized and escaped, and the values are inserted into SQL statements directly. If the values themselves contain SQL syntax statements, the database query engine executes those as such, triggering actions that weren't originally intended, with potentially dangerous consequences. Credit: XKCD: Exploits of a Mom, published by permission. Using AWS WAF to Mitigate: SQL injection attacks are relatively easy to detect in common scenarios. They're usually detected by identifying enough SQL reserved words in the HTTP request components to signal a potentially valid SQL query. However, more complex and dangerous variants can spread the malicious query (and associated keywords) over multiple input parameters or request components, based on the
internal knowledge of how the application composes them in the backend. These can be more difficult to mitigate using a WAF alone; you might need to address them at the application level. AWS WAF has built-in capabilities to match and mitigate SQL injection attacks. You can use a SQL injection match condition to deploy rules to mitigate such attacks.7 The following table provides some common condition configurations (HTTP request component to match; relevant input transformations to apply; justification):
• QUERY_STRING; URL_DECODE, HTML_ENTITY_DECODE; the most common component to match, because query string parameters are frequently used in database lookups.
• URI; URL_DECODE, HTML_ENTITY_DECODE; if your application is using friendly, dirified, or clean URLs, then parameters might appear as part of the URL path segment, not the query string (they are later rewritten server side). For example: https://example.com/products//reviews/
• BODY; URL_DECODE, HTML_ENTITY_DECODE; a common component to match if your application accepts form input. AWS WAF only evaluates the first 8 KB of the body content.
• HEADER: Cookie; URL_DECODE, HTML_ENTITY_DECODE; a less common component to match, but if your application uses cookie-based parameters in database lookups, consider matching on this component as well.
• HEADER: Authorization; URL_DECODE, HTML_ENTITY_DECODE; a less common component to match, but if your application uses the value of this header for database validation, consider matching on this component as well.
Additionally, consider any other components of custom request headers that your application uses as parameters for database lookups. You might want to match these components in your SQL injection match condition. Other Considerations: Predictably, this detection pattern is less effective if your
workload legitimately allows users to compose and submit SQL queries in their requests. For those cases, consider narrowly scoping an exception rule that bypasses the SQL injection rule for specific URL patterns that are known to accept such input. You can do that by using a SQL injection match condition as described in the preceding table, while listing the URLs that are excluded from checking by using a string match condition:8 Rule action: BLOCK when request matches SQL Injection Match Condition and request does not match String Match Condition for excluded Uniform Resource Identifiers (URIs). You can also mitigate other types of injection vulnerabilities against other domain-specific languages, to varying degrees, using string match conditions, by matching against known keywords, assuming they're not also legitimate input values. A2 – Broken Authentication and Session Management: Flaws in the implementation of authentication and session management mechanisms for web applications can lead to exposure of unwanted data, stolen credentials or sessions, and impersonation of legitimate users.9 These flaws are difficult to mitigate using a WAF. Broadly, attackers rely on vulnerabilities in the way client-server communication is implemented. Or they target how session or authorization tokens are generated, stored, transferred, reused, timed out, or invalidated by your application in order to obtain these credentials. After they obtain credentials, attackers impersonate legitimate users and make requests to your web applications using those tokens. For example, if an attacker obtains the JWT token that authorizes communication between your web client and the RESTful API, they can impersonate that user until the token expires by launching HTTP requests with
the illicitly obtained authorization token.10 Using AWS WAF to Mitigate: Because illicit requests with stolen authorization credentials, sessions, or tokens are hard to distinguish from legitimate ones, AWS WAF takes on a reactive role. After your own application security controls are able to detect that a token was stolen, you can add that token to a blacklist AWS WAF rule. This rule blocks further requests with those signatures, either permanently or until they expire. You can also automate this reaction to reduce mitigation time; AWS WAF offers an API to interact with the service.11 For this kind of solution, you would use infrastructure-specific or application-specific monitoring and logging tools to look for patterns of compromise. Automation of AWS WAF rules is discussed in greater detail under A7 – Insufficient Attack Protection. To build a blacklist, use a string match condition. The following table provides some example patterns (HTTP request component to match; relevant input transformations to apply; relevant positional constraints; values to match against):
• QUERY_STRING and URI: avoid exposing session tokens in the URI or QUERY_STRING, because they're visible in the browser address bar or server logs and are easy to capture.
• HEADER: Cookie; URL_DECODE, HTML_ENTITY_DECODE; CONTAINS; session ID or access tokens.
• HEADER: Authorization; URL_DECODE, HTML_ENTITY_DECODE; CONTAINS; JWT token or other bearer authorization tokens.
You can use various mechanisms to help detect leaked or misused session tokens or authorization tokens. One mechanism is to keep track of client devices and the location from which a user commonly accesses your application. This gives you the ability to quickly detect if requests are made from an entirely different location or client device with the same tokens, and blacklist them
device with the same tokens and blacklist them for safety AWS WAF also supports rate based rules Rate based rules trigger and block when the rate of requests from a n IP address exceeds a customer defined threshold (request s per 5min interval ) You can combine t hese rules with other predicates (conditions) that are available in AWS WAF You can enforce rate based limits to protect your applications’ authentication or authorization URLs and endpoints against brute force attack attempts to guess credentials You can also use a string match condition to match authentica tion URI paths of the application: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 7 HTTP Request Component to Match Relevant Input Transformations to Apply Relevant Positional Constraints Values to Match Against URI URL_DECODE HTML_ENTITY_DECODE STARTS_WITH /login (or relevant application specific URLs) This condition is then used inside a rate based rule with the desired threshold for requests originating from a given IP address : Rule action: BLOCK; rate limit: 2000; rate key: IP Only requests that match the string match condition are counted When that count exceeds 2000 requests per 5minute interval the originating IP address is blocked The minimum rate limit over a 5 minute you can set is 2000 requests A3 – Cross Site Scripting (XSS) Cross site scripting (XSS) flaws occur when web applications include user provided data in webpages that is sent to the browser without proper sanitization 12 If the data isn’t proper ly validat ed or escap ed an attacker can use those vectors to embed scripts inline frames or other objects into the rendered page (reflection) These in turn can be used for a variety of malicious purposes including stealing user credentials by using key loggers in order to install system malware 
The impact of the attack is magnified if that user data persist s server side in a data store and then delivered to a large set of other users Consider the example of a common but popular blog that accept s user commen ts If user comments aren’t correctly sanitized a malicious user can embed a malicious script in the comments such as: